aid (string) | mid (string) | abstract (string) | related_work (string) | ref_abstract (dict) | title (string) | text_except_rw (string) | total_words (int64)
---|---|---|---|---|---|---|---
1907.13395 | 2964708426 | While many apps include built-in options to report bugs or request features, users still provide an increasing amount of feedback via social media, like Twitter. Compared to traditional issue trackers, the reporting process in social media is unstructured and the feedback often lacks basic context information, such as the app version or the device concerned when experiencing the issue. To make this feedback actionable to developers, support teams engage in recurring, effortful conversations with app users to clarify missing context items. This paper introduces a simple approach that accurately extracts basic context information from unstructured, informal user feedback on mobile apps, including the platform, device, app version, and system version. Evaluated against a truthset of 3014 tweets from official Twitter support accounts of the 3 popular apps Netflix, Snapchat, and Spotify, our approach achieved precisions from 81% to 99% and recalls from 86% to 98% for the different context item types. Combined with a chatbot that automatically requests missing context items from reporting users, our approach aims at auto-populating issue trackers with structured bug reports. | Research found that especially non-technical end-users are more likely to express their opinions on social networks, such as Twitter @cite_21 . Several studies have identified Twitter as an important source for crowd-based requirements engineering and software evolution @cite_43 @cite_37 @cite_4 . Similar to app reviews, tweets contain important information, such as feature requests or bug reports. In a survey with software engineering practitioners and researchers, @cite_35 underlined the need for automatic analysis techniques to, e.g., summarize, classify, and prioritize tweets. The authors highlight that a manual analysis of the tweets is unfeasible due to their quantity, unstructured nature, and varying quality. @cite_43 found that tweets provide additional requirements-related information compared to app reviews: by mining tweets, the authors extracted @math additional feature requests and bug reports. Other authors have used tweets to crowdsource app features @cite_5 , to support release decisions @cite_14 , to categorize and summarize technical information included in tweets @cite_1 , or to rank the reported issues @cite_26 . These studies reinforce the relevance of our approach. | {
"abstract": [
"Twitter messages (tweets) contain important information for software and requirements evolution, such as feature requests, bug reports and feature shortcoming descriptions. For this reason, Twitter is an important source for crowd-based requirements engineering and software evolution. However, a manual analysis of this information is unfeasible due to the large number of tweets, its unstructured nature and varying quality. Therefore, automatic analysis techniques are needed for, e.g., summarizing, classifying and prioritizing tweets. In this work we present a survey with 84 software engineering practitioners and researchers that studies the tweet attributes that are most telling of tweet priority when performing software evolution tasks. We believe that our results can be used to implement mechanisms for prioritizing user feedback with social components. Thus, it can be helpful for enhancing crowd-based requirements engineering and software evolution.",
"[Context and motivation] Research on eliciting requirements from a large number of online reviews using automated means has focused on functional aspects. Assuring the quality of an app is vital for its success. This is why user feedback concerning quality issues should be considered as well [Question problem] But to what extent do online reviews of apps address quality characteristics? And how much potential is there to extract such knowledge through automation? [Principal ideas results] By tagging online reviews, we found that users mainly write about \"usability\" and \"reliability\", but the majority of statements are on a subcharacteristic level, most notably regarding \"operability\", \"adaptability\", \"fault tolerance\", and \"interoperability\". A set of 16 language patterns regarding \"usability\" correctly identified 1,528 statements from a large dataset far more efficiently than our manual analysis of a small subset. [Contribution] We found that statements can especially be derived from online reviews about qualities by which users are directly affected, although with some ambiguity. Language patterns can identify statements about qualities with high precision, though the recall is modest at this time. Nevertheless, our results have shown that online reviews are an unused Big Data source for quality requirements.",
"Mobile application (app) stores have lowered the barriers to app market entry, leading to an accelerated and unprecedented pace of mobile software production. To survive in such a highly competitive and vibrant market, release engineering decisions should be driven by a systematic analysis of the complex interplay between the user, system, and market components of the mobile app ecosystem. To demonstrate the feasibility and value of such analysis, in this paper, we present a case study on the rise and fall of Yik Yak, one of the most popular social networking apps at its peak. In particular, we identify and analyze the design decisions that led to the downfall of Yik Yak and track rival apps' attempts to take advantage of this failure. We further perform a systematic in-depth analysis to identify the main user concerns in the domain of anonymous social networking apps and model their relations to the core features of the domain. Such a model can be utilized by app developers to devise sustainable release engineering strategies that can address urgent user concerns and maintain market viability.",
"",
"Twitter is one of the most popular social networks. Previous research found that users employ Twitter to communicate about software applications via short messages, commonly referred to as tweets, and that these tweets can be useful for requirements engineering and software evolution. However, due to their large number---in the range of thousands per day for popular applications---a manual analysis is unfeasible.In this work we present ALERTme, an approach to automatically classify, group and rank tweets about software applications. We apply machine learning techniques for automatically classifying tweets requesting improvements, topic modeling for grouping semantically related tweets and a weighted function for ranking tweets according to specific attributes, such as content category, sentiment and number of retweets. We ran our approach on 68,108 collected tweets from three software applications and compared its results against software practitioners' judgement. Our results show that ALERTme is an effective approach for filtering, summarizing and ranking tweets about software applications. ALERTme enables the exploitation of Twitter as a feedback channel for information relevant to software evolution, including end-user requirements.",
"When encountering an issue, technical users (e.g., developers) usually file the issue report to the issue tracking systems. But non-technical end-users are more likely to express their opinions on social network platforms, such as Twitter. For software systems (e.g., Firefox and Chrome) that have a high exposure to millions of non-technical end-users, it is important to monitor and solve issues observed by a large user base. The widely used micro-blogging site (i.e., Twitter) has millions of active users. Therefore, it can provide instant feedback on products to the developers. In this paper, we investigate whether social networks (i.e., Twitter) can improve the bug fixing process by analyzing the short messages posted by end-users on Twitter (i.e., tweets). We propose an approach to remove noisy tweets, and map the remaining tweets to bug reports. We conduct an empirical study to investigate the usefulness of Twitter in the bug fixing process. We choose two widely adopted browsers (i.e., Firefox and Chrome) that are also large and rapidly released software systems. We find that issue reports are not treated differently regardless whether users tweet about the issue or not, except that Firefox developers tend to label an issue as more severe if users tweet about it. The feedback from Firefox contributors confirms that the tweets are not currently leveraged in the bug fixing process, due to the challenges associated to discovering bugs through Twitter. Moreover, we observe that many issues are posted on Twitter earlier than on issue tracking systems. More specifically, at least one third of issues could have been reported to developers 8.2 days and 7.6 days earlier in Firefox and Chrome, respectively. In conclusion, tweets are useful in providing earlier acknowledgment of issues, which developers can potentially use to focus their efforts on the issues impacting a large user-base.",
"Twitter enables large populations of end-users of software to publicly share their experiences and concerns about software systems in the form of micro-blogs. Such data can be collected and classified to help software developers infer users' needs, detect bugs in their code, and plan for future releases of their systems. However, automatically capturing, classifying, and presenting useful tweets is not a trivial task. Challenges stem from the scale of the data available, its unique format, diverse nature, and high percentage of irrelevant information and spam. Motivated by these challenges, this paper reports on a three-fold study that is aimed at leveraging Twitter as a main source of software user requirements. The main objective is to enable a responsive, interactive, and adaptive data-driven requirements engineering process. Our analysis is conducted using 4,000 tweets collected from the Twitter feeds of 10 software systems sampled from a broad range of application domains. The results reveal that around 50 of collected tweets contain useful technical information. The results also show that text classifiers such as Support Vector Machines and Naive Bayes can be very effective in capturing and categorizing technically informative tweets. Additionally, the paper describes and evaluates multiple summarization strategies for generating meaningful summaries of informative software-relevant tweets.",
"The rise in popularity of mobile devices has led to a parallel growth in the size of the app store market, intriguing several research studies and commercial platforms on mining app stores. App store reviews are used to analyze different aspects of app development and evolution. However, app users’ feedback does not only exist on the app store. In fact, despite the large quantity of posts that are made daily on social media, the importance and value that these discussions provide remain mostly unused in the context of mobile app development. In this paper, we study how Twitter can provide complementary information to support mobile app development. By analyzing a total of 30,793 apps over a period of six weeks, we found strong correlations between the number of reviews and tweets for most apps. Moreover, through applying machine learning classifiers, topic modeling and subsequent crowd-sourcing, we successfully mined 22.4 additional feature requests and 12.89 additional bug reports from Twitter. We also found that 52.1 of all feature requests and bug reports were discussed on both tweets and reviews. In addition to finding common and unique information from Twitter and the app store, sentiment and content analysis were also performed for 70 randomly selected apps. From this, we found that tweets provided more critical and objective views on apps than reviews from the app store. These results show that app store review mining is indeed not enough; other information sources ultimately provide added value and information for app developers.",
"The ubiquity of mobile devices has led to unprecedented growth in not only the usage of apps, but also their capacity to meet people's needs. Smart phones take on a heightened role in emergency situations, as they may suddenly be among their owner's only possessions and resources. The 2016 wildfire in Fort McMurray, Canada, intrigued us to study the functionality of the existing apps by analyzing social media information. We investigated a method to suggest features that are useful for emergency apps. Our proposed method called MAPFEAT, combines various machine learning techniques to analyze tweets in conjunction with crowdsourcing and guides an extended search in app stores to find currently missing features in emergency apps based on the needs stated in social media. MAPFEAT is evaluated by a real-world case study of the Fort McMurray wildfire, where we analyzed 69,680 unique tweets recorded over a period from May 2nd to May 7th, 2016. We found that (i) existing wildfire apps covered a range of 28 features with not all of them being considered helpful or essential, (ii) a large range of needs articulated in tweets can be mapped to features existing in non-emergency related apps, and (iii) MAPFEAT's suggested feature set is better aligned with the needs expressed by general public. Only six of the features existing in wildfire apps is among top 40 crowdsourced features explored by MAPFEAT, with the most important one just ranked 13th. By using MAPFEAT, we proactively understand victims' needs and suggest mobile software support to the people impacted. MAPFEAT looks beyond the current functionality of apps in the same domain and extracts features using variety of crowdsourced data."
],
"cite_N": [
"@cite_35",
"@cite_37",
"@cite_14",
"@cite_4",
"@cite_26",
"@cite_21",
"@cite_1",
"@cite_43",
"@cite_5"
],
"mid": [
"2734462903",
"2760783434",
"2896305405",
"",
"2759603254",
"2767795225",
"2756686115",
"2788314565",
"2731662169"
]
} | Extracting and Analyzing Context Information in User-Support Conversations on Twitter | Modern apps include options to support users in providing relevant, complete, and correct context information when reporting bugs or feature requests. For example, Facebook attaches more than 30 context items to bug reports submitted via their apps, including the app version installed and the device in use [1]. Despite the presence of such options, an increasing number of users still report their issues via social media, such as Twitter. A possible reason might be to increase the pressure on software vendors through the public visibility of reported issues. Research has shown that mining tweets allows additional features and bugs to be extracted that are not reported in official channels such as app stores [2]. Mezouar et al. found that one third of the bugs reported in issue trackers can be discovered earlier by analyzing tweets [3]. Many app vendors are aware of these benefits and have thus created Twitter support accounts such as @Netflixhelps, @Snapchatsupport, or @SpotifyCares.
Compared to structured reports in issue trackers that usually include context items [4], [5], feedback on Twitter is primarily provided by non-technical users in a less structured way [3]. Tweets that miss basic context items, such as the concerned platform, are likely to be non-actionable to developers. Hence, several support accounts prominently highlight the importance of this information in their Twitter bio. For instance, Spotify's profile includes "for tech queries, let us know your device/operating system", while Netflix states "for tech issues, please include device & error". However, such context information is often missing from tweets, making them non-actionable to developers or requiring further clarifications. This paper introduces a simple approach that accurately extracts basic context information, including the platform, device, app version, and system version, from unstructured, informal user feedback. Evaluated against a truthset of ∼3,000 tweets, our approach achieved precisions from 81% to 99% and recalls from 86% to 98% for the different context item types.
The remainder of the paper is structured as follows: Section II describes our research setting. Then, Section III introduces our approach to extract context items and Section IV reports on the evaluation results. Section V discusses the findings and potential threats to validity. Finally, Section VI surveys related work and Section VII concludes the paper.
II. RESEARCH SETTING
We describe the overall usage setting for our context extraction approach as well as our research method and data.
A. Overall Setting of this Work
Developers organize their work using issue trackers [6]. An issue usually corresponds to a unit of work to accomplish an improvement in a software system. Issues can be of different types, such as bug reports or feature requests. When creating an issue of a specific type, issue trackers use structured templates that request specific context items to be provided by the reporter. Bug reports require, e.g., the affected app version, while feature requests require a description of the desired feature. Traditionally, reporters were technically experienced persons, such as software testers or the developers themselves.
With the emergence of app stores and social media, non-technical users also began to frequently and informally communicate with developers, in contrast to existing public issue trackers of open source projects that were tailored towards more technically experienced users. Research has shown that users include requirements-related information such as bug reports in about one third of their informal feedback [7], [8], [9]. Recent studies specifically emphasized the benefits of mining tweets [10], [11].
There are several key challenges software practitioners face when working with bug reports included in informal user feedback, e.g., provided via app stores and social media: (1) Missing Information. Compared to reports in issue trackers, feedback in app stores and social media is primarily provided by non-technical users in a less structured way [3]. Unfortunately, users often fail to provide context items needed by developers, such as the app version [4], [8], [12], [13], [14]. This is compounded by online review processes that are purposefully unguided [15] and lack quality checks, to allow many users to participate. (2) Unreproducible Issues. If user feedback that reports bugs misses relevant context information, these bugs might become hard to reproduce [12], [16]. Even if developers are able to guess the user's interactions, an issue might only occur on specific combinations of device model and system version [17]. Research found that developers fail to identify erroneous configurations even for a low number of features [18]. (3) Manual Efforts. For developers to be able to understand and reproduce reported issues, support teams engage in effortful conversations with users [19]. Within our crawled dataset including tweets from the Netflix, Snapchat, and Spotify support accounts, more than 40% (∼2.2 million) of the tweets are provided by support teams, possibly to clarify missing context items.
We aim to automatically extract basic context items from tweets. Our approach is intended to be used in combination with a feedback classification and a chatbot approach to autopopulate issue trackers with structured bug reports mined from user feedback, as shown in Figure 2. The overall setting can continuously be applied, e.g., to an app's Twitter support account. It can be separated into four phases, of which the second phase is covered by this paper, while the remaining phases are left for future work. We briefly describe each of the phases in the following:
(1) Tweet Classification Phase. In the first phase, tweets addressed to the app's support account are classified by their types of requirements-related information. Only tweets reporting bugs (i.e., issues that potentially require context items to be understandable and reproducible by developers), are passed to the next phase. Tweets including other types of information, such as praise (e.g., "This is the greatest app I've ever used."), are excluded from further analysis. These do not require context items and a chatbot requesting such information would annoy app users. (2) Context Extraction Phase. In this phase, our context extraction approach is applied to single tweets or conversations consisting of multiple tweets that report bugs. Each tweet is mined to extract the four basic context items, including the platform, the device, the app version, and the system version. For example, the tweet "The app widget has died and is now a rectangular black hole. Xperia xz3 running Android", includes the device and platform. After processing a complete conversation, the approach verifies if all four items could be extracted. (3) Context Clarification Phase. If the four basic context items could not be extracted, a chatbot requests the missing information. In case of the example above, the chatbot would request the app version and system version by replying to the tweet: "Hey, help's here! Can you let us know the app version you're running, as well as the system version installed? We'll see what we can suggest". The conversations are periodically analyzed to see if the user provided the missing context items. (4) Issue Creation Phase. Once all context items are present, they are used to create a structured bug report within the app's issue tracker. The comment section of the issue tracker remains connected with the conversation on Twitter, so that developers can directly communicate with the reporting user to ask for further clarification or inform the user once the issue is fixed.
By automatically requesting missing context items, our approach reduces the manual effort for support teams, and aids developers by addressing the aforementioned challenges to facilitate actionable bug reports.
B. Research Method and Data
In the following, we describe our research method including the data collection, truthset creation, and data analysis phase, as shown in Figure 3.
1) Data Collection Phase: In the data collection phase, we crawled tweets using the Twitter Search API [20] in January 2019. We refer to this data as crawled dataset.
For our study, we collected tweets of the official Netflix, Snapchat, and Spotify support accounts. For each account, we used the search query 'q=@account name&f=tweets-&max position=' to crawl the tweets. The query parameter q is set to a combination of the @-symbol and the account name {Netflixhelps, Snapchatsupport, SpotifyCares}. Thereby, we only consider tweets directly addressed to the support accounts (cf. Figure 1). We do not crawl tweets that solely use related hashtags (e.g., "Listen to my #spotify playlist [...]." or "Today, relaxed #netflix sunday!"). The type parameter f is set to 'tweets' to receive all tweets addressed to the support accounts in temporal order, instead of only the top tweets as per default. The pagination parameter max position is set to the identifier of the last tweet received, as the API returns a fixed amount of 20 tweets per request. For each tweet, we extracted the identifier (id), text, creation date, conversation id, reply flag, as well as the author's name and id.
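As a rough illustration of the pagination described above, the following Python sketch shows how such a crawl loop could be organized. The base URL and the JSON response shape are placeholders, since the text only specifies the query parameters q, f, and max_position; this is a sketch under those assumptions, not the actual crawler.

```python
import requests

SEARCH_URL = "https://example.invalid/search"  # placeholder; the paper uses the Twitter Search API

def crawl_support_account(account_name):
    """Collect all tweets addressed to a support account, 20 per request,
    paginating via the identifier of the last tweet received (max_position)."""
    tweets, max_position = [], ""
    while True:
        params = {"q": f"@{account_name}", "f": "tweets", "max_position": max_position}
        response = requests.get(SEARCH_URL, params=params, timeout=30)
        response.raise_for_status()
        page = response.json().get("tweets", [])   # assumed response shape
        if not page:
            break
        tweets.extend(page)                        # each item carries id, text, creation date,
                                                   # conversation id, reply flag, author name/id
        max_position = page[-1]["id"]              # continue after the last tweet received
    return tweets
```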
Each tweet can result in a conversation which possibly contains responses written by the support team, by users facing similar issues, or by the reporting user (cf. Figure 1). To extract these responses, we additionally crawl each of the collected tweets status urls, following the pattern 'https://twitter.com/user name/status/tweet id'. Table I summarizes the crawled dataset by the support accounts. The Netflix account (@Netflixhelps) [21] was created the earliest in February 2009 and exists for about 10 years. For this account, we crawled 1,643,281 tweets by 385,935 users. These tweets result in 686,488 conversations (∼2.4 tweets per conversation). The Snapchat account (@Snapchatsupport) [22] was created the latest in March 2014 and exists for about 5 years. We crawled 1,164,824 tweets by 422,643 users. These result in 612,645 conversations with about 1.9 tweets per conversation. The Spotify account (@SpotifyCares) [23] 2) Truthset Creation Phase: To be able to evaluate how well our approach extracts basic context items from tweets, we created a truthset including labelled tweets of the Netflix, Snapchat, and Spotify support accounts.
Before creating the truthset, we pre-processed the tweets of the crawled dataset by removing conversations including non-English tweets using the LangID library [24]. Then, we converted the tweet texts into lowercase, removed line breaks, double whitespaces, and mentions of support account names.
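A minimal sketch of this pre-processing, assuming the langid package [24] and treating the three support handles as a fixed list:

```python
import re
import langid

SUPPORT_HANDLES = re.compile(r"@(netflixhelps|snapchatsupport|spotifycares)", re.IGNORECASE)

def conversation_is_english(tweet_texts):
    # Keep a conversation only if every tweet in it is classified as English.
    return all(langid.classify(text)[0] == "en" for text in tweet_texts)

def preprocess(text):
    text = text.lower()                           # lowercase
    text = text.replace("\n", " ")                # remove line breaks
    text = SUPPORT_HANDLES.sub(" ", text)         # remove mentions of the support accounts
    return re.sub(r"\s{2,}", " ", text).strip()   # collapse double whitespaces
```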
To create the truthset, we use the tool doccano [25], an open-source text annotation tool that can be used for, e.g., named entity recognition or sentiment analysis tasks. It can be deployed on a local or remote machine and offers rich functionality, such as user management. Using the tool, two human annotators performed a sequence labelling task by assigning the labels 'Platform', 'Device', 'App Version', and 'System Version' to sequences within the tweets. We started from a random sample of conversations, which resulted in truthsets containing almost no context items and thus being unusable for measuring the performance of our approach. Thus, we changed the sampling strategy and searched for conversations including the keyword 'App'. However, the labelled context items often referred to platforms such as desktops or smart TVs, which we do not consider in this paper. To select tweets including relevant context items, we only consider conversations containing the words 'iOS' or 'Android' in at least one of their tweets, even though this introduces the bias of more platforms being mentioned within the truthset. From the extracted conversations, we randomly selected as many per account as needed to obtain about 1,000 user tweets. We removed tweets written by the support teams, as our approach is designed to extract context items from user feedback. Further, user tweets include more context items and are needed to determine how our approach performs on informal language, e.g., referencing the device 'iPhone 6 Plus' by the alternative spelling 'iphone6+'.
In case of disagreements between the two coders, a third annotator resolved the conflicts which resulted mainly from different sequence lengths due to including additional information, such as the device manufacturer or system architecture (e.g., '8.4.17' vs. '8.4.17arm7'). We calculated the inter-coder reliability using Cohen's Kappa on a scale of 0-1 [26]. Per tweet of the truthset, we compare if the two coders agree or disagree that it includes context items. As suggested by Landis and Koch, we consider the ranges 0.61-0.80 as 'substantial' and 0.81-1.00 as 'almost perfect' [27]. The kappa agreement among the two coders is 0.933. Table II summarizes the truthset. It consists of 1,020 conversations including 3,014 tweets, of which 1,005 are tweets from the Netflix support account, 1,004 tweets from Snapchat, and 1,005 tweets from Spotify. Of these, 1,116 (37.03%) tweets include context information. The tweets include an overall amount of 1,840 context items (∼1.65 items per tweet), of which 931 (50.60%) mention the platform, 488 (26.52%) refer to the device, 295 (16.03%) indicate the system version, and 126 (6.85%) the app version.
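The tweet-level agreement described above can be reproduced, for example, with scikit-learn's implementation of Cohen's Kappa; the binary labels per coder in the toy call below are hypothetical:

```python
from sklearn.metrics import cohen_kappa_score

def tweet_level_kappa(labels_coder_a, labels_coder_b):
    """labels_coder_*: one boolean per tweet, True if that coder marked
    at least one context item in the tweet, False otherwise."""
    return cohen_kappa_score(labels_coder_a, labels_coder_b)

# toy example with made-up labels
print(tweet_level_kappa([1, 1, 0, 0, 1, 0], [1, 1, 0, 1, 1, 0]))
```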
3) Data Analysis Phase: In the data analysis, we answer how well basic context items can be automatically extracted from tweets. Therefore, we apply our approach to the truthset including labelled tweets of the Netflix, Snapchat, and Spotify support accounts. We measure the approach performance by comparing its output to the results of the human annotators. Then, we run summative analysis on the extracted information. Considering all support accounts, the approach achieved precisions from 81% to 99% and recalls from 86% to 98% for the different context item types. To support replication, our datasets and the analyses source code as Jupyter notebooks are publicly available on our website 1 .
III. CONTEXT EXTRACTION APPROACH
We describe a simple approach that accurately extracts basic context information from unstructured, informal user feedback on mobile apps. We decided to consider the context items platform, device, app version, and system version, as we identified these four types to be frequently requested by support teams during a manual data exploration of 100 conversations. Moreover, researchers highlighted their importance for understanding and reproducing issue reports [4].
Our approach focuses on the Android and iOS platform. Both platforms cover 99.9% of the mobile operating system market [28]. The approach is designed to work with other platforms as well (e.g., desktop apps, smart TV apps), by exchanging its configuration files, i.e., the pre-defined keyword lists, without modifying the actual implementation.
We separate the description by the context item types and their strategies used for extraction.
A. Platform and Device
We crawl pre-defined keyword lists, including platform and device names, and generate word vector representations to handle informal writing frequently used in social media. Word vector similarities allow spelling mistakes and abbreviations of items included within the pre-defined lists to be determined. The lists and alternative spellings are used to create regular expressions that are applied to user feedback in order to extract context information. Figure 4 summarizes our approach.
1) Pre-Defined Keyword Lists: We crawled pre-defined lists of code names for the Android platform, as well as lists including device names for iOS and Android. These lists are maintained by app store operators or user communities and updated regularly, e.g., with the release of new devices.
For the Android platform, 15 alternative code names exist, such as 'Cupcake', which we extracted from a public list [29]. For the iOS platform, no such alternative names exist.
For iOS devices we extracted 51 names, such as 'iPhone 8 Plus' [30]. Since several users only refer to the product line, e.g., "[...] the error appears on my iPhone.", we extend the device list by the 5 product lines iPhone, iPad, iPod Touch, Apple TV, and Apple Watch, resulting in 56 iOS devices.
For Android devices the diversity is much higher. We crawled an official list from Google Play containing all 23,387 supported devices [31]. The list includes four columns, listing the retail branding (e.g., 'Samsung'), marketing name (e.g., 'Galaxy S9'), device (e.g., 'star2qlteue'), and model (e.g., 'SM-G965U1'). We pre-process the list in five steps: We create a unique list of marketing names, as these possibly occur several times due to the same device being manufactured for different markets (e.g., European or Asian). The resulting list includes 15,392 devices. Then, we remove all marketing names shorter than 5 characters, such as 'V' or 'Q7', resulting in 13,259 devices. Further, we remove marketing names that are not mentioned within the collected tweets. We removed these, as word-vector models perform better on extracting similar words when a given input is included in the training data, while extracting alternative spellings for unseen words could negatively influence the results [32]. It significantly reduced the number of devices to 1,324. This step needs to be repeated in fixed periods of time when new tweets are addressed to the support accounts. As the list of marketing names also includes common words (e.g., 'five', 'go', or 'plus'), we used the natural language processing library spaCy [33] to remove words that appear in the vocabulary of the included en_core_web_sm model, trained on the CommonCrawl dataset. Thereby, we reduced the number of devices to 1,133.
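The fully automated part of this filtering could look as follows; the CSV column name and the vocabulary check against the en_core_web_sm model are assumptions based on the description above:

```python
import pandas as pd
import spacy

nlp = spacy.load("en_core_web_sm")

def filter_android_device_names(supported_devices_csv, tweet_texts):
    devices = pd.read_csv(supported_devices_csv)
    names = (devices["Marketing Name"].dropna().astype(str)
             .str.lower().drop_duplicates())                  # unique marketing names
    names = names[names.str.len() >= 5]                       # drop names shorter than 5 characters
    corpus = " ".join(tweet_texts)
    names = names[names.apply(lambda n: n in corpus)]         # keep only names mentioned in the tweets
    # drop names that are ordinary English words, approximated via the model vocabulary
    names = names[~names.apply(lambda n: n in nlp.vocab.strings)]
    return sorted(names)
```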
Until this point the processing of the keyword lists is fully automated. We decided to manually fine-tune the Android device list by removing remaining common names not included within the vocabulary of the CommonCrawl dataset (e.g., 'horizon'), while preserving more specific names (e.g., 'galaxy s8'), resulting in 896 Android devices. This step could possibly be automated with datasets of larger vocabulary sizes.
2) Word Vector Representations: User feedback written in informal language might include alternative spellings of platform and device names, i.e., abbreviations or misspellings. For example, several users reference the Android code name 'Lollipop' as 'lolipop' or 'lollypop'.
To enable our approach to also identify these cases, we create word vector representations using the fastText library [34]. Comparing vector distances allows to automatically identify similar words that frequently appear in the same context. A subset of these similar words are alternative spellings of the platform and device names included in our lists. We decided to use fastText over simpler methods, such as the Levenshtein distance, to also identify alternative spellings that vary significantly. For example, users often reference the 'iPhone 6 Plus' as 'iphone6+', where the Levenshtein distance is 7. High edit distances would negatively impact the results by detecting, e.g., 'one' as alternative spelling to 'iPhone 4', where the edit distance is 5.
To train the fastText model, we pre-process all 5,254,969 crawled tweet texts according to the truthset (i.e., we convert the tweet texts into lowercase, remove line breaks, double whitespaces, and mentions of support account names).
Algorithm 1 lists the extraction of similar spellings for given keywords using word vector representations as pseudocode. It takes the pre-processed tweets and a keyword list (i.e., including the iOS device names or Android platform code names) as input. The algorithm can be separated into four parts: First it tokenizes the tweets and removes non-informative tokens (line 2-13), then it trains the word vector model using the tweets (line 14-17), afterwards it obtains alternative spellings for each given keyword from the word vector model (line 18-26), finally it generates a regular expression of the original keywords and their alternative spellings (line 27-28). In the following, we explain each part separately: (1) Tokenize Tweets. We begin by tokenizing each tweet. We remove non-informative tokens including punctuation and spaces using spaCy's [35] built-in functionality.
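Part (1) could be implemented with spaCy along the following lines (a sketch, not the authors' exact code):

```python
import spacy

nlp = spacy.load("en_core_web_sm")

def tokenize_tweets(preprocessed_tweets):
    """Tokenize each pre-processed tweet and drop non-informative tokens
    (punctuation and whitespace), as in part (1) of Algorithm 1."""
    corpus = []
    for doc in nlp.pipe(preprocessed_tweets):
        corpus.append([token.text for token in doc
                       if not (token.is_punct or token.is_space)])
    return corpus
```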
(2) Train Word Vector Model. For the actual training of the model, we use Gensim [36] as suggested by spaCy. We use the default configuration and set the word vector size to 300, the minimum occurrences of words to 5, the window size to 5, and perform the training in 10 epochs. Our trained model has a vocabulary size of 149,889 words. (3) Obtain Alternative Spellings. For each keyword of the given list, we query the trained model for its most similar words and treat these as candidate alternative spellings, which are fine-tuned as described below. (4) Generate Regular Expression. Per list, we combine the given keywords (e.g., devices) and their alternative spellings into a single regular expression using the 'OR' operator, e.g., 'iPhone XR|...|iPhone 7'. Later, we apply the Python functionality re.search(pattern, string) [37] to the user feedback. As users also include multiple devices, such as "[...] the error occurs on my iPhone 6 and iPad Mini.", we modify the function to return the locations of all matches within a given input. 3) Manual Fine-Tuning of Results: The proposed approach to extract context items using pre-defined keyword lists and word vector representations can be run in a completely automated manner. Whenever, e.g., new devices are released, the keyword lists are updated by the app store operators or user communities. These, as well as the updated tweets dataset, including the most recent tweets of an app support account which possibly contain alternative spellings of new device names, need to be regularly provided as input to Algorithm 1 to update the regular expression used to extract the context items. To fine-tune the results, we invested manual effort at two points.
First, when pre-processing the keyword lists to extract alternative spellings using word vectors, we manually removed device names solely consisting of common words (e.g., 'horizon') that could not be automatically removed. From the original 1,133 devices, we thereby removed 237 devices. This manual effort lasted about two hours. It needs to be repeated regularly, e.g., when new devices are added to the pre-defined keyword lists. However, in these cases the effort is significantly lower, since only the newly added device names need to be processed instead of all devices supported since the release of Google Play about 10 years ago.
Second, we decided to manually filter alternative spellings for Android code names. We found that these include words (such as 'bake') that are unrelated to the inputs in the context of software engineering (such as the Android code name 'pie'). Thereby, we removed 6 out of the 14 alternative spellings. Similarly, we processed the alternative spellings for Android and iOS devices. For Android, we removed 326 spellings out of 392 alternative spellings. For iOS, we removed 22 of the 44 alternative spellings. We assume that more Android device names were removed since the device names are very diverse and not as often included within the tweets, which causes word vector models to suggest similar words that are not as closely related, compared to the names of iOS devices. This manual step lasted under an hour and is, as for the first step, significantly faster for future updates.
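Putting parts (2)-(4) of Algorithm 1 together, a minimal sketch with Gensim's FastText implementation could look as follows; parameter names follow recent Gensim versions, and the topn cut-off as well as the manual review of the suggested spellings are assumptions:

```python
import re
from gensim.models import FastText

def build_extraction_pattern(tokenized_tweets, keywords, topn=10):
    # Train subword-aware word vectors on the tokenized tweets
    # (vector size 300, window 5, minimum word count 5, 10 epochs, as described above).
    model = FastText(sentences=tokenized_tweets, vector_size=300,
                     window=5, min_count=5, epochs=10)
    spellings = set()
    for keyword in keywords:                          # e.g. iOS device names or Android code names
        spellings.add(keyword.lower())
        if keyword.lower() in model.wv:
            for candidate, _score in model.wv.most_similar(keyword.lower(), topn=topn):
                spellings.add(candidate)              # candidate alternative spellings (fine-tuned manually)
    # Combine keywords and alternative spellings into one regular expression ('OR' operator).
    alternation = "|".join(re.escape(s) for s in sorted(spellings, key=len, reverse=True))
    return re.compile(alternation)

def find_all_matches(pattern, feedback_text):
    # Return the locations of all matches, since a tweet may mention several devices.
    return [(m.start(), m.end(), m.group()) for m in pattern.finditer(feedback_text)]
```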
B. App and System Version
To extract app and system versions from user feedback, we crawled pre-defined keyword lists and created text patterns. The keyword lists include the released app and system versions. We collected 107 system versions for iOS and 59 for Android. Concerning the app versions [38], [39], we extracted 224 iOS and 133 Android versions for Netflix, 248 iOS and 346 Android versions for Snapchat, as well as 169 iOS and 165 Android versions for Spotify.
We tokenize the user feedback with spaCy. Then, we preprocess all tokens by removing leading characters before digits, such as in 'v8.4.17'. If the leading characters equal a platform (e.g., 'iOS12'), we split the token to keep the platform. We also remove trailing characters often referring to system architectures, such as in '8.1.13arm7'. This might be a limitation that has to be adapted for other platforms. Versions for the platforms considered in our study cannot contain leading or trailing characters (e.g., 'A1.0' or '1.1a').
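A small regex-based sketch of this token normalization (the exact rules used by the authors may differ):

```python
import re

PLATFORM_PREFIXES = ("ios", "android")

def normalize_version_token(token):
    """Strip leading characters before the digits (e.g. 'v8.4.17') and trailing
    architecture suffixes (e.g. '8.1.13arm7'); keep a leading platform name if present."""
    match = re.match(r"(?P<prefix>[a-z]*)(?P<version>\d+(?:\.\d+)*)", token.lower())
    if not match:
        return None, None
    prefix, version = match.group("prefix"), match.group("version")
    platform = prefix if prefix in PLATFORM_PREFIXES else None
    return platform, version

# normalize_version_token('v8.4.17')    -> (None, '8.4.17')
# normalize_version_token('iOS12')      -> ('ios', '12')
# normalize_version_token('8.1.13arm7') -> (None, '8.1.13')
```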
By manually comparing the collected versions to those mentioned in the user feedback, we identified two challenges. First, the collected versions have intersections. For example, version 7.1.2 exists for the Netflix iOS app, as well as for both the iOS and Android operating system. Therefore, we cannot directly associate it with, e.g., the Android operating system. The intersections vary highly: the Netflix iOS app shares only 8 (3.57%) of its 224 versions with its Android app. In contrast, for Snapchat the app versions are much more similar, with an intersection of 32.66% between iOS and Android. A relatively large overlap also exists for Android system versions and versions of the Netflix iOS app (27.12%), as well as the iOS operating system (42.37%). The second challenge is users reporting more detailed app versions (e.g., '8.0.1.785') than included in the public lists. In this example, the user refers to the Snapchat Android app, where the list only includes the version '8.0.1', missing the subversion '.785'.
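The reported intersections can be computed directly from the crawled version lists, for instance:

```python
def version_overlap(versions_a, versions_b):
    """Share of versions in list A that also occur in list B, in percent."""
    a, b = set(versions_a), set(versions_b)
    return 100.0 * len(a & b) / len(a)

# e.g. version_overlap(netflix_ios_app_versions, netflix_android_app_versions) ~ 3.57
# with hypothetical lists crawled as described above
```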
We implement a version matcher to handle these challenges, shown in a simplified manner in Algorithm 2. As input, it takes a conversation consisting of multiple tweets or single tweets, as well as the version lists. Further, the previously extracted platform and device can be provided as input, if they exist. The algorithm can be separated into three steps: First, it generates a version tree (line 2-7). Then, it processes conversations or single tweets to extract included versions (line 8-15). Optionally, it resolves existing conflicts (line 16-24). In the following, we explain these steps separately: (1) Generate Version Tree. The matcher combines all given version lists into a tree whose levels correspond to the version components; each node stores for which apps and operating systems the version exists. (2) Process Conversation or Tweet. The matcher takes each token including a number and respectively its previous token as input. Figure 6 shows the matcher traversing the version tree on separate levels (L1-L4) to process the input 'version 8.0.1.785'. The subversion '.785' (L4) is not included in our crawled lists. Therefore, the closest version, i.e., '8.0.1', of the previous level (L3) is selected. This version exists for both the Snapchat iOS and Android app, as well as the iOS system. As noted previously, not all app versions are included in the pre-defined lists; however, we know that the collected list of system versions is complete. For this reason, the iOS operating system is removed as a potential match (cf. Figure 6). If multiple system versions remain, the matcher processes the previous token. If this token equals 'iOS' or 'Android', the matcher flags this respectively as iOS or Android system version. This is especially relevant for shorter versions, such as '8' or '8.0', where more potential matches exist. Since several possible matches remain, i.e., the version could refer to the iOS or Android app, it is conflicted and will be processed in the next phase.
(3) Resolve Conflicts (optional). If conflicts remain, potential version matches are assessed in their overall conversation context. Other feedback in the conversation might include additional context items, e.g., a device associated with either Android or iOS, which helps to determine which platform the version refers to. In the example conversation, the user previously wrote "The error occurs on my HTC One with Android installed.". As this feedback includes the Android platform and device, both context items are provided as input to Algorithm 2 (parameters p and d). If one of these relates to Android and none to iOS (line 18), the conflict is resolved by marking the version as an Android app version. A limitation of our approach is tweets such as "It worked with my Galaxy S5, but is not working with my new Galaxy S6". In this case, the conflict would not be resolved and both devices would be extracted. We consider this beneficial, as knowing that the error occurs on one device but not the other might help developers. However, automatically highlighting on which of the devices the reported issue does not occur requires more complex natural language processing approaches, which are not the focus of our study.
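A much simplified sketch of the matcher (Algorithm 2): it indexes all versions by their string, falls back to the closest known prefix version, and uses the previous token and the platform/device extracted elsewhere in the conversation to narrow down remaining candidates. The source labels and the omission of the system-version completeness refinement from Figure 6 are assumptions of this sketch.

```python
def build_version_index(version_lists):
    """version_lists: e.g. {'ios_system': [...], 'android_system': [...],
    'ios_app': [...], 'android_app': [...]} -> mapping from version string to sources."""
    index = {}
    for source, versions in version_lists.items():
        for version in versions:
            index.setdefault(version, set()).add(source)
    return index

def match_version(version, previous_token, index, platform=None, device_platform=None):
    parts = version.split(".")
    candidates = set()
    while parts:                                        # fall back to the closest known
        candidates = index.get(".".join(parts), set())  # version, e.g. '8.0.1.785' -> '8.0.1'
        if candidates:
            break
        parts.pop()
    if previous_token in ("ios", "android"):            # e.g. 'android 9' -> Android system version
        flagged = {c for c in candidates if c == f"{previous_token}_system"}
        if flagged:
            return flagged
    if len(candidates) <= 1:
        return candidates
    # Conflict resolution: use the platform/device found elsewhere in the conversation.
    hints = {h for h in (platform, device_platform) if h}
    if "android" in hints and "ios" not in hints:
        narrowed = {c for c in candidates if c.startswith("android")}
        if narrowed:
            return narrowed
    return candidates                                   # still conflicted
```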
IV. EVALUATION RESULTS
We evaluated the performance of our approach to extract basic context items (including the platform, device, app version, and system version) by comparing its results to the manually labelled truthset. Our truthset contains 3,014 tweets of the Netflix, Snapchat, and Spotify support accounts. Of these, 1,116 (37.03%) tweets include a total of 1,840 context items (cf. Section II). Table III summarizes the results per context item and support account. The table shows the number of corresponding items in the truthset, the number of true positives (i.e., correctly identified context items), false positives (incorrectly identified items, such as 'galaxy s8' instead of 'galaxy s8 plus'), false negatives (no item extracted although present in the tweet), and true negatives (no item detected where none is present). Based on these, the approach's precision and recall are calculated. The precision indicates how many of the extracted items are correctly identified. The recall summarizes how many of all items included in the truthset were extracted. The table further combines the results per context item type for different apps by calculating their average. Across the different types, the precision varies from 81% to 99%, and the recall from 86% to 98%.
1) Platform: The platform is most frequently provided within the truthset, with 931 (50.60%) out of 1,840 context items. Of all platform mentions, our approach extracted 910 correctly (true positives) and failed to extract the remaining 21 (false negatives). The absence of the platform was correctly detected in 2,117 tweets (true negatives). For this context type, we refrain from reporting the precision. The truthset is biased, since we sampled for conversations that include the words 'Android' or 'iOS' in one of the tweets, to increase the number of labelled context items. This ratio is not representative of the whole dataset. However, this is the least complex context item to extract, and alternative platform code names, such as 'Gingerbread', were successfully extracted. The recall for the platform is 98%. False negatives result from alternative spellings of Android code names that are not used frequently. For these, additional tweets need to be collected to train the fastText model, or its minimum word occurrence threshold has to be tuned to increase the vocabulary size.
2) Device: Users within the truthset report 488 (26.52%) context items referencing a device. Our approach identified 386 true positives, 38 false positives, 64 false negatives, and 2,544 true negatives. For this type, the approach achieved a precision of 91% and a recall of 86%.
The detected false positives include, e.g., the device 'Galaxy S8' being extracted instead of the more specific 'Galaxy S8 Plus'. 3) App Version: The truthset includes 126 items (6.85%) reporting the app version. Our approach detected 98 true positives, 23 false positives, 5 false negatives, and 2,889 true negatives. The approach achieves a precision of 81% and a recall of 95%.
The detected false positives include, e.g., the version '0.9.0.133' appearing in the tweet "version 0.6.2.64 on the phone and think its 0.9.0.133 on the desktop" from the Spotify dataset. In the tweet, the user refers to the desktop version; however, version '0.9.0' also exists for the Spotify iOS app. Only 15 tweets within the truthset were marked as conflicted. An example tweet reports a Spotify app version that exists for both iOS and Android: "i'm on 8.4.74. doesnt bother me too much... just thought i'd report it". For conflict resolution, other tweets within the conversation are analyzed, such as "just so you know... the toast keeps going out of sync with what's actually playing at the moment. using a pixel 2 on android 9.". In this tweet, the user reports both the Android platform and an Android device. The conflict is resolved by marking the version as an Android app version. All 15 conflicts were resolved, as we analyzed only completed conversations.
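As a sanity check, the precision and recall values in Table III follow directly from the reported counts, e.g. for the device item (386 true positives, 38 false positives, 64 false negatives):

```python
def precision_recall(tp, fp, fn):
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

print(precision_recall(386, 38, 64))   # -> (0.910..., 0.857...), i.e. ~91% precision, ~86% recall
```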
V. DISCUSSION
A. Implications
Many software vendors, including Netflix, Snapchat, and Spotify, have recognized the advantages of gathering, analyzing, and reacting to user feedback provided via social media. About one third of the bugs reported in issue trackers can be discovered earlier by analyzing tweets [3]. Speed is certainly a major advantage of social media-like feedback channels. Compared to reviews in app stores, the conversational nature of Twitter allows additional features and bugs to be identified. However, for the reported issues to be actionable to developers, basic context information, such as the utilized app version or device, needs to be included.
Our research shows that support teams themselves are very active, providing ∼40% of the tweets within the crawled dataset, in many cases to clarify missing context items. Spotify and other vendors also initiated local support teams, such as @SpotifyCaresSE for Swedish users, with multiple involved persons [40]. Smaller teams receiving a large amount of feedback might not be able to afford such a large investment.
This paper introduced a simple unsupervised approach that identifies the presence of and extracts context items from tweets. The results of our approach can be primarily used to filter actionable issues, i.e., conversations in which basic context items are present. When present, tweet texts and included context items can be used to auto-populate issue trackers with structured information [4], [5].
Other conversations including only part of the basic context items might be non-actionable to developers. Our approach can automatically identify these conversations, as well as the exact information that is missing. When continuously applied to tweets, the output of our approach can be used, e.g., by a chatbot to immediately request missing context items from users by responding to conversations, e.g., "Can you tell us the device you are using?". Both measures help reduce the manual efforts of support teams on social media.
Besides support teams and developers, our approach can also assist users. Users, often lacking software engineering and issue tracking knowledge, might be unaware of the importance of context information and therefore simply exclude it from their feedback. As a first step, users can be made aware of the importance of context items. While the user composes a tweet, its text can continuously be analyzed to detect whether a bug is reported, a feature is requested, or whether the user simply provides praise [9]. When reporting a bug, a message can be shown to the user that the reported issue might only be actionable to developers when it includes context information. The context items that the user already included while writing the tweet can be identified using our approach, and missing context items can even be suggested in-situ [16].
B. Limitations and Threats to Validity
The support accounts on Twitter which we selected for our study are all of popular apps that appear within the top 25 charts of the Apple App Store. To improve the generalizability of our results, further support accounts for apps of different popularity (i.e. receiving different amounts of feedback) should be considered in future studies. Also, further studies need to be carried out to determine if the type of selected apps might correlate with the amount of non-/technical users and possibly the amount of context items exchanged.
To create the truthset, we extracted only conversations including at least one of the keywords 'Android' or 'iOS'. Without this step, the number of context items in the truthset would have been too small. As these keywords are also detected as platform context, the percentage of context items reporting the platform might not be representative of the whole dataset. We also tried more general keywords, such as 'App', or no keywords at all, but the extracted tweets included far fewer context items or context information related to platforms that we do not consider in our paper, such as Windows, Mac, or Linux. Further studies need to determine how our approach performs when considering all platforms supported by an app. Nevertheless, other identifiers for the platform, such as code names for Android versions (e.g., 'Froyo'), could successfully be extracted by our approach.
The pre-defined keyword lists extracted for the platform, device, app version, and system version certainly influence our results. These need to be updated regularly for platforms and apps our approach is applied to. The results for the app and system version are negatively affected if the app versions are equal or similar to the system versions. To improve this circumstance, we consider the previous token to detect if a potential version refers to the Android or iOS system version. Further studies should not only consider the previous token but use different window sizes of tokens before and possibly after a potential version to increase the accuracy of the approach.
Finally, to improve our results we trained a fastText model on all collected tweets. We extracted similar spellings for the platform and device names of our pre-defined lists. We manually identified relevant alternative spellings from the extracted similar words, such as 'iphone6+' for the input 'iPhone 6 Plus'. This might introduce errors. Further tweets need to be collected to train the fastText model and determine if it provides more similar words to given inputs, such as device names. Then, this manual step can possibly be automated by only using a fixed threshold for the cosine distance.
VII. CONCLUSION
Despite built-in options to report issues in a structured manner, users continue to share a large amount of unstructured, informal feedback on software products via social media. This feedback contains information of relevance to development teams, such as bug reports or feature requests. Support teams engage in effortful conversations with users to clarify missing context information -for popular apps such as Spotify or Netflix in about 10 parallel conversations per hour.
We introduced a simple unsupervised approach to identify and extract basic context items from user feedback, including the affected platform, device, app version, and system version. Evaluated against a manually labelled truthset of 3014 tweets, our approach achieved precisions from 81% to 99% and recalls from 86% to 98% for the different context item types. Our approach can assist support teams in separating reported issues into actionable and non-actionable ones. Actionable issues can be used to auto-populate issue trackers with structured information. Non-actionable issues can be automatically clarified, e.g., by chatbots requesting the missing context items from users. | 6,086
1907.13498 | 2964122540 | Abstract A vast amount of valuable data is produced and is becoming available for analysis as a result of advancements in smart cyber-physical systems. The data comes from various sources, such as healthcare, smart homes, smart vehicles, and often includes private, potentially sensitive information that needs appropriate sanitization before being released for analysis. The incremental and fast nature of data generation in these systems necessitates scalable privacy-preserving mechanisms with high privacy and utility. However, privacy preservation often comes at the expense of data utility. We propose a new data perturbation algorithm, SEAL (Secure and Efficient data perturbation Algorithm utilizing Local differential privacy), based on Chebyshev interpolation and Laplacian noise, which provides a good balance between privacy and utility with high efficiency and scalability. Empirical comparisons with existing privacy-preserving algorithms show that SEAL excels in execution speed, scalability, accuracy, and attack resistance. SEAL provides flexibility in choosing the best possible privacy parameters, such as the amount of added noise, which can be tailored to the domain and dataset. | Smart cyber-physical systems (SCPS) have become an important part of the IT landscape. Often these systems include IoT devices that allow effective and easy acquisition of data in areas such as healthcare, smart cities, smart vehicles, and smart homes @cite_66 . Data mining and analysis are among the primary goals of collecting data from SCPS. The infrastructural extensions of SCPSs have contributed to the exponential growth in the number of IoT sensors, but security is often overlooked, and the devices become a source of privacy leak. The security and privacy concerns of big data and data streams are not entirely new, but require constant attention due to technological advancements of the environments and the devices used @cite_40 . Confidentiality, authentication, and authorization are just a few of the concerns @cite_44 @cite_83 @cite_32 . Many studies have raised the importance of privacy and security of SCPS due to their heavy use of personally identifiable information (PII) @cite_84 . Controlling access via authentication @cite_67 , attribute-based encryption @cite_74 , temporal and location-based access control @cite_67 and employing constraint-based protocols @cite_61 are some examples of improving privacy of SCPS. | {
"abstract": [
"Edge processing in IoT networks offers the ability to enforce privacy at the point of data collection. However, such enforcement requires extra processing in terms of data filtering and the ability to configure the device with knowledge of policy. Supporting this processing with Cloud resources can reduce the burden this extra processing places on edge processing nodes and provide a route to enable user defined policy. Research from the PaaSage project [12] on Cloud modelling language is applied to IoT networks to support IoT and Cloud integration linking the worlds of Cloud and IoT in a privacy protecting way.",
"Data are today an asset more critical than ever for all organizations we may think of. Recent advances and trends, such as sensor systems, IoT, cloud computing, and data analytics, are making possible to pervasively, efficiently, and effectively collect data. However such pervasive data collection and the lack of security for IoT devices increase data privacy concerns. In this paper, we discuss relevant concepts and approaches for data privacy in IoT, and identify research challenges that must be addressed by comprehensive solutions to data privacy.",
"This paper addresses privacy issues in managing electronic health records by a third party cloud based service. Compared to traditional authentication-authorization mechanisms, the proposed approach minimizes the leakage of identity information of involved participants through unlinkability. Furthermore, it gives the ability to health record owners for making access control decisions. This solution employs an identity management scheme that enhances consumer privacy by preventing consumer profiling based on the credentials used to satisfy the service provider policies. The paper proposes a set of mechanisms to allow authenticated unlinkable access to electronic health records, while giving the record owners ability to make access control decisions. The security evaluation for accessing data in the cloud is detailed, and the implementation of the system is evaluated in this paper.",
"Smart grid is a promising power delivery infrastructure integrated with communication and information technologies. Its bi-directional communication and electricity flow enable both utilities and customers to monitor, predict, and manage energy usage. It also advances energy and environmental sustainability through the integration of vast distributed energy resources. Deploying such a green electric system has enormous and far-reaching economic and social benefits. Nevertheless, increased interconnection and integration also introduce cyber-vulnerabilities into the grid. Failure to address these problems will hinder the modernization of the existing power system. In order to build a reliable smart grid, an overview of relevant cyber security and privacy issues is presented. Based on current literatures, several potential research fields are discussed at the end of this paper.",
"Healthcare and tourism are among the fastest growing business domains in the world. These are biggest service industries that affect whole world population and give jobs to millions of people. Recently we witness major changes in both industries, as more services are transferred to small providers, including individual entrepreneurs and SMEs. This process is driven by huge growth in demand, which cannot be fulfilled by applying traditional solutions. The new generation of digital services reshapes landscape of both industries. Some players see it as a threat, as digital services replace their traditional business models. But fighting against progress is useless, especially when you cannot fulfill growing by old means. Internet of Things (IoT) is an integral part of the Future Internet ecosystem that will have major impact on development of healthcare and e-Tourism services. IoT provides an infrastructure to uniquely identify and link physical objects to their virtual representations in Internet. As a result any physical object can have virtual reflection in the service space. This gives an opportunity to replace actions on physical objects by operations on their virtual reflections, which can be done much faster, cheaper and more comfortable for people. This provides a huge space for developing and applying new business models. In this paper we summarize research and development results of IoT studies and discuss ideas on how to apply them to business.",
"Within the last decade, Security became a major focus in the traditional IT-Industry, mainly through the interconnection of systems and especially through the connection to the Internet. This opened up a huge new attack surface, which resulted in major takedowns of legitimate services and new forms of crime and destruction. This led to the development of a multitude of new defense mechanisms and strategies, as well as the establishing of Security procedures on both, organizational and technical level. Production systems have mostly remained in isolation during these past years, with security typically focused on the perimeter. Now, with the introduction of new paradigms like Industry 4.0, this isolation is questioned heavily with Physical Production Systems (PPSs) now connected to an IT-world resulting in cyber-physical systems sharing the attack surface of traditional web based interfaces while featuring completely different goals, parameters like lifetime and safety, as well as construction. In this work, we present an outline on the major security challenges faced by cyber-physical production systems. While many of these challenges harken back to issues also present in traditional web based IT, we will thoroughly analyze the differences. Still, many new attack vectors appeared in the past, either in practical attacks like Stuxnet, or in theoretical work. These attack vectors use specific features or design elements of cyber-physical systems to their advantage and are unparalleled in traditional IT. Furthermore, many mitigation strategies prevalent in traditional IT systems are not applicable in the industrial world, e.g., patching, thus rendering traditional strategies in IT-Security unfeasible. A thorough discussion of the major challenges in CPPS-Security is thus required in order to focus research on the most important targets.",
"",
"With the ever increasing number of connected devices and the over abundance of data generated by these devices, data privacy has become a critical concern in the Internet of Things (IoT). One promising privacy-preservation approach is Attribute-Based Encryption (ABE), a public key encryption scheme that enables fine-grained access control, scalable key management and flexible data distribution. This paper presents an in-depth performance evaluation of ABE that focuses on execution time, data and network overhead, energy consumption, and CPU and memory usage. We evaluate two major types of ABE, Key-Policy Attribute-Based Encryption (KP-ABE) and Ciphertext-Policy Attribute-Based Encryption (CP-ABE), on different classes of mobile devices including a laptop and a smartphone. To the best of our knowledge, this is the first comprehensive study of ABE dedicated solely to its performance. Our results provide insights into important practical issues of ABE, including what computing resources ABE requires in heterogeneous environments, at what cost ABE offers benefits, and under what situations ABE is best suited for use in the IoT.",
"The challenge of deriving insights from the Internet of Things (IoT) has been recognized as one of the most exciting and key opportunities for both academia and industry. Advanced analysis of big data streams from sensors and devices is bound to become a key area of data mining research as the number of applications requiring such processing increases. Dealing with the evolution over time of such data streams, i.e., with concepts that drift or change completely, is one of the core issues in IoT stream mining. This tutorial is a gentle introduction to mining IoT big data streams. The first part introduces data stream learners for classification, regression, clustering, and frequent pattern mining. The second part deals with scalability issues inherent in IoT applications, and discusses how to mine data streams on distributed engines such as Spark, Flink, Storm, and Samza."
],
"cite_N": [
"@cite_61",
"@cite_67",
"@cite_32",
"@cite_84",
"@cite_44",
"@cite_40",
"@cite_83",
"@cite_74",
"@cite_66"
],
"mid": [
"2204681158",
"2583534276",
"2578818417",
"2152190235",
"1533822453",
"2770854607",
"",
"1993719651",
"2508807458"
]
} | An Efficient and Scalable Privacy Preserving Algorithm for Big Data and Data Streams | Smart cyber-physical systems (SCPS) such as smart vehicles, smart grid, smart healthcare systems, and smart homes are becoming widely popular due to massive technological advancements in the past few years. These systems often interact with the environment to collect data mainly for analysis, e.g. to allow life activities to be more intelligent, efficient, and reliable [1]. Such data often includes sensitive details, but sharing confidential information with third parties can lead to a privacy breach.
From our perspective, privacy can be considered as "Controlled Information Release" [2]. We can define a privacy breach as the release of private/confidential information to an untrusted environment.
However, sharing the data with external parties may be necessary for data analysis, such as data mining and machine learning. Smart cyber-physical systems must have the ability to share information while limiting the disclosure of private information to third parties. Privacy-preserving data sharing and privacy-preserving data mining face significant challenges because of the size of the data and the speed at which data are produced. Robust, scalable, and efficient solutions are needed to preserve the privacy of big data and data streams generated by SCPS [3,4]. Various solutions for privacy-preserving data mining (PPDM) have been proposed for data sanitization; they aim to ensure confidentiality and privacy of data during data mining [5,6,7,8].
The two main approaches of PPDM are data perturbation [9,10] and encryption [11,12]. Although encryption provides a strong notion of security, due to its high computation complexity [13] it can be impractical for PPDM of SCPS-generated big data and data streams. Data perturbation, on the other hand, applies certain modifications such as randomization and noise addition to the original data to preserve privacy [14]. These modification techniques are less complex than cryptographic mechanisms [15]. Data perturbation mechanisms such as noise addition [16] and randomization [17] provide efficient solutions towards PPDM. However, the utility of perturbed data cannot be 100% as data perturbation applies modifications to the original data, and the ability to infer knowledge from the perturbed data can result in a certain level of privacy leak as well. A privacy model [18] describes the limitations to the utility and privacy of a perturbation mechanism. Examples of such earlier privacy models include k − anonymity [19,20] and l − diversity [21]. However, it has been shown that older privacy models are defenseless against certain types of attacks, such as minimality attacks [22], composition attacks [23] and foreground knowledge [24] attacks. Differential privacy (DP) is a privacy model that provides a robust solution to these issues by rendering maximum privacy via minimizing the chance of private data leak [25,26,27,28]. Nevertheless, current DP mechanisms fail for small databases and have limitations on implementing efficient solutions for data streams and big data. When the database is small, the utility of DP mechanisms diminishes due to insufficient data being available for a reasonable estimation of statistics [29]. At the other end of the scale, when the database is very large or continuously growing like in data streams produced by SCPS, the information leak of DP mechanisms is high due to the availability of too much information [30]. Most perturbation mechanisms tend to leak information when the data is high-dimensional, which is a consequence of the dimensionality curse [31]. Moreover, the significant amount of randomization produced by certain DP algorithms results in low data utility. Existing perturbation mechanisms often ignore the connection between utility and privacy, even though improvement of one leads to deterioration of the other [32].
Furthermore, the inability to efficiently process high volumes of data and data streams makes the existing methods unsuitable for privacy-preservation in smart cyber-physical systems. New approaches which can appropriately answer the complexities in privacy preservation of SCPS generated data are needed.
The main contribution of this paper is a robust and efficient privacy-preserving algorithm for smart cyber-physical systems, which addresses the issues existing perturbation algorithms have. Our solution, SEAL (Secure and Efficient data perturbation Algorithm utilizing Local differential privacy), employs polynomial interpolation and notions of differential privacy. SEAL is a linear perturbation system based on Chebyshev polynomial interpolation, which allows it to work faster than comparable methods. We used generic datasets retrieved from the UCI data repository 1 to evaluate SEAL's efficiency, scalability, accuracy, and attack resistance. The results indicate that SEAL performs well at privacy-preserving data classification of big data and data streams. SEAL outperforms existing alternative algorithms in efficiency, accuracy, and data privacy, which makes it an excellent solution for smart system data privacy preservation.
The rest of the paper is organized as follows. Section 2 provides a summary of existing related work. The fundamentals of the proposed method are briefly discussed in Section 3. Section 4 describes the technical details of SEAL. Section 5 presents the experimental settings and provides a comparative analysis of the performance and security of SEAL. The results are discussed in Section 6, and the paper is concluded in Section 7. Detailed descriptions of the underlying concepts of SEAL are given in the Appendices.
Fundamentals
In this section, we provide some background and discuss the fundamentals used in the proposed method (SEAL). Our approach generates a privacy-preserved version of the dataset in question and allows only the generated dataset to be used in any application. We use Chebyshev interpolation based on least squares fitting to model a particular input data series, and the model formation is subjected to noise addition using the Laplacian mechanism of differential privacy. The noise-integrated model is then used to synthesize a perturbed data series which approximates the properties of the original input data series.
Chebyshev Polynomials of the First Kind
For the interpolation of the input dataset, we use Chebyshev polynomials of the first kind. These are a set of orthogonal polynomials, as given by Definition 3 (available in Appendix A) [74], that can be defined recursively.
Polynomial approximation and numerical integration are two of the areas where Chebyshev polynomials are heavily used [74]. More details on Chebyshev polynomials of the first kind can be found in Appendix A.
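For readers who want to experiment with the recurrence, the following is a minimal numpy sketch (not taken from the paper's implementation) that evaluates the first few Chebyshev polynomials of the first kind at a given point:

```python
import numpy as np

def chebyshev_first_kind(n, x):
    """Evaluate T_0..T_n at x via the recurrence T_{k+1}(x) = 2x*T_k(x) - T_{k-1}(x)."""
    x = np.asarray(x, dtype=float)
    T = [np.ones_like(x), x.copy()]          # T_0 = 1, T_1 = x
    for k in range(1, n):
        T.append(2.0 * x * T[k] - T[k - 1])  # recurrence step
    return T[: n + 1]

# T_0..T_4 evaluated at x = 0.5:
print([float(t) for t in chebyshev_first_kind(4, 0.5)])
# -> [1.0, 0.5, -0.5, -1.0, -0.5]
```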
Least Squares Fitting
Least squares fitting (LSF) is a mathematical procedure that finds the best-fitting curve to a given set of points by minimizing the sum of squares of the offsets of the points from the curve. We can use vertical least squares fitting, which proceeds by finding the sum of squares of the vertical deviations R² (refer to Equation B.1 in Appendix B) of a set of n data points [75]. To generate a linear fit f(x) = mx + b, we can minimize the expression of squared error between the estimated values and the original values (refer to Equation B.5), which leads to the linear system shown in Equation 1 (using Equations B.8 and B.9). We can solve Equation 1 to find the values of m and b that give the corresponding linear fit f(x) = mx + b for a given data series.
$$\begin{bmatrix} b \\ m \end{bmatrix} = \begin{bmatrix} n & \sum_{i=1}^{n} x_i \\ \sum_{i=1}^{n} x_i & \sum_{i=1}^{n} x_i^2 \end{bmatrix}^{-1} \begin{bmatrix} \sum_{i=1}^{n} y_i \\ \sum_{i=1}^{n} x_i y_i \end{bmatrix} \qquad (1)$$
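As a small illustration of how Equation (1) can be solved numerically (the data and function name below are illustrative only, not part of the paper):

```python
import numpy as np

def linear_fit(x, y):
    """Solve Equation (1): [b, m]^T = M^{-1} v, with M built from sums over x."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    M = np.array([[n, x.sum()],
                  [x.sum(), (x ** 2).sum()]])
    v = np.array([y.sum(), (x * y).sum()])
    b, m = np.linalg.solve(M, v)   # solve() is preferred over an explicit inverse
    return m, b

x = np.linspace(0, 1, 20)
y = 3.0 * x + 0.5
m, b = linear_fit(x, y)
print(round(m, 6), round(b, 6))   # ~3.0, ~0.5
```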
Differential Privacy
Differential Privacy (DP) is a privacy model that defines the bounds to how much information can be revealed to a third party or adversary about someone's data being present or absent in a particular database. Conventionally, ε (epsilon) and δ (delta) are used to denote the level of privacy rendered by a randomized privacy-preserving algorithm (M) over a particular database (D). Let us take two adjacent datasets of D, x and y, where y differs from x only by one person. Then M satisfies (ε, δ)-differential privacy if Equation (2) holds.
Privacy Budget and Privacy Loss (ε):
ε is called the privacy budget that provides an insight into the privacy loss of a DP algorithm. When the corresponding ε value of a particular differentially private algorithm A is increased, the amount of noise or randomization applied by A on the input data is decreased. The higher the value of ε, the higher the privacy loss.
Probability to Fail a.k.a. Probability of Error (δ): δ is the parameter that accounts for "bad events" that might result in high privacy loss; δ is the probability of the output revealing the identity of a particular individual, which can happen n × δ times where n is the number of records. To minimize the risk of privacy loss, n × δ has to be maintained at a low value. For example, the probability of a bad event is 1% when δ = 1/(100 × n).
Definition 1.
A randomized algorithm $M$ with domain $\mathbb{N}^{|X|}$ and range $R$ is $(\varepsilon, \delta)$-differentially private for $\delta \ge 0$, if for every adjacent $x, y \in \mathbb{N}^{|X|}$ and for any subset $S \subseteq R$,
$$\Pr[M(x) \in S] \le \exp(\varepsilon)\Pr[M(y) \in S] + \delta \qquad (2)$$
Global vs. Local Differential Privacy
Global differential privacy (GDP) and local differential privacy (LDP) are the two main approaches to differential privacy. In the GDP setting, there is a trusted curator who applies carefully calibrated random noise to the real values returned for a particular query. The GDP setting is also called the trusted curator model [76]. Laplace mechanism and Gaussian mechanism [57] are two of the most frequently used noise generation methods in GDP [57]. A randomized algorithm M provides ε-global differential privacy if for any two adjacent datasets x, y and S ⊆ R, Pr[M(x) ∈ S] ≤ exp(ε) Pr[M(y) ∈ S] + δ (i.e. Equation (2) holds). On the other hand, LDP eliminates the need of a trusted curator by randomizing the data before the curator can access them. Hence, LDP is also called the untrusted curator model [59]. LDP can also be used by a trusted party to randomize all records in a database at once. LDP algorithms may often produce too noisy data, as noise is applied commonly to achieve individual record privacy. LDP is considered to be a strong and rigorous notion of privacy that provides plausible deniability. Due to the above properties, LDP is deemed to be a state-of-the-art approach for privacy-preserving data collection and distribution. A randomized algorithm A provides ε-local differential privacy if Equation (3) holds [60].
$$\Pr[A(v_1) \in Q] \le \exp(\varepsilon)\Pr[A(v_2) \in Q] \qquad (3)$$
Sensitivity
Sensitivity is defined as the maximum influence that a single individual data item can have on the result of a numeric query. Consider a function $f$; the sensitivity ($\Delta f$) of $f$ can be given as in Equation (4), where $x$ and $y$ are two neighboring databases (or, in LDP, adjacent records) and $\lVert \cdot \rVert_1$ represents the L1 norm of a vector [77].
$$\Delta f = \max\{\lVert f(x) - f(y) \rVert_1\} \qquad (4)$$
Laplace Mechanism
The Laplace mechanism is considered to be one of the most generic mechanisms to achieve differential privacy [57]. Laplace noise can be added to a function output ($F(D)$) as given in Equation (5) to produce a differentially private output, whose distribution is given in Equation (6). $\Delta f$ denotes the sensitivity of the function $f$. In the local differentially private setting, the scale of the Laplacian noise is equal to $\Delta f/\varepsilon$, and the position is the current input value ($F(D)$).
$$PF(D) = F(D) + \mathrm{Lap}\!\left(\frac{\Delta f}{\varepsilon}\right) \qquad (5)$$
$$PF(D) = \frac{\varepsilon}{2\Delta f}\, e^{-\frac{\varepsilon\,\lvert x - F(D)\rvert}{\Delta f}} \qquad (6)$$
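As an illustration of the mechanism (not the authors' code), a minimal sketch that adds Laplace noise with scale Δf/ε to a normalized input; the sensitivity of 1 assumes values bounded to [0, 1]:

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_mechanism(value, sensitivity, epsilon):
    """Add Laplace noise with scale sensitivity/epsilon, as in Equation (5)."""
    scale = sensitivity / epsilon
    return value + rng.laplace(loc=0.0, scale=scale, size=np.shape(value))

data = np.array([0.2, 0.5, 0.9])      # assumed to be normalised to [0, 1]
print(laplace_mechanism(data, sensitivity=1.0, epsilon=1.0))
```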
Our Approach
The proposed method, SEAL, is designed to preserve the privacy of big data and data streams generated by systems such as smart cyber-physical systems. One of our aims was balancing privacy and utility, as they may adversely affect each other. For example, the spatial arrangement of a dataset can potentially contribute to its utility in data mining, as the results generated by the analysis mechanisms such as data classification and clustering are often influenced by the spatial arrangement of the input data. However, the spatial arrangement can be affected when privacy mechanisms apply methods like randomization. In other words, while data perturbation mechanisms improve privacy, at the same time they may reduce utility. Conversely, an increasing utility can detrimentally affect privacy. To address these difficulties, SEAL processes the data in three steps: (1) determine the sensitivity of the dataset to calibrate how much random noise is necessary to provide sufficient privacy, (2) conduct polynomial interpolation with calibrated noise to approximate a noisy function over the original data, and (3) use the approximated function to generate perturbed data. These steps guarantee that SEAL applies enough randomization to preserve privacy while preserving the spatial arrangement of the original data.
SEAL uses polynomial interpolation accompanied by noise addition, which is calibrated according to the instructions of differential privacy. We use the first four orders of the Chebyshev polynomial of the first kind in the polynomial interpolation process. Then we calibrate random Laplacian noise to apply a stochastic error to the interpolation process, in order to generate the perturbed data. Figure 1 shows the integration of SEAL in the general-purpose data flow of SCPS. As shown in the figure, the data perturbed by the SEAL layer comes directly from the SCPS. That means that the data in the storage module has already gone through SEAL's privacy preservation process and does not contain any original data.
Figure 1: Arrangement of SEAL in a smart system environment. In this setting, we assume that the original data are perturbed before reaching the storage devices. Any public or private services will have access only to the perturbed data.
Figure 2 shows the flow of SEAL, where the proposed noisy Chebyshev model (represented by a green node) is used to approximate each of the individual attributes of a particular input dataset or data stream. The approximated noisy function is used to synthesize perturbed data, which is then subjected to random tuple shuffling to reduce the vulnerability to data linkage attacks.
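A hypothetical sketch of the wiring implied by Figure 1, where the perturbation step sits between the data source and storage; all names and the stand-in perturbation below are illustrative, not part of SEAL:

```python
def ingest(sensor_batches, perturb, store):
    """Only perturbed data ever leave the perturbation layer and reach storage."""
    for batch in sensor_batches:
        store(perturb(batch))

buffer = []
ingest([[0.1, 0.4], [0.7, 0.2]],
       perturb=lambda b: [round(v + 0.01, 3) for v in b],   # stand-in for SEAL
       store=buffer.append)
print(buffer)
```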
Privacy-Preserving Polynomial Interpolation for Noisy Chebyshev Model Generation
We approximate an input data series (experimental data) by a functional expression with added randomization in order to inherit the properties of differential privacy. For approximation, our method uses the first four orders of Chebyshev polynomials of the first kind. We systematically add calibrated random Laplacian noise in the interpolation process, i.e. apply randomization to the approximation.
Then we use the approximated function to re-generate the dataset in a privacy-preserving manner. We can denote an approximated function $\hat{f}$ of degree $(m-1)$ using Equation (7), where the degree of $\varphi_k$ is $k-1$:
$$\hat{f}(x) = a_1\varphi_1(x) + a_2\varphi_2(x) + \cdots + a_m\varphi_m(x) \qquad (7)$$
For the approximation, we consider the root mean square error (RMSE) $E$ between the estimated values and the original values (refer to Equation C.14). We use the first four Chebyshev polynomials of the first kind for the approximation, which limits the number of coefficients to four (we name the coefficients $a_1$, $a_2$, $a_3$, and $a_4$). Now we can minimize $E$ (the RMSE) to obtain an estimated function $\hat{f}^*(x)$, thus seeking to minimize the squared error $M(a_1, a_2, a_3, a_4)$. For more details refer to Equation C.15.
4.1.1. Introducing privacy to the approximation process utilizing differential privacy (the determination of the sensitivity and the position of Laplacian noise)
We apply the notion of differential privacy to the private data generation process by introducing randomized Laplacian noise to the root mean square error (RMSE) minimization process. Random Laplacian noise introduces a calibrated, randomized error into the derivation of the values of $a_1$, $a_2$, $a_3$, and $a_4$ (refer to Equations C.22, C.25, C.28 and C.31). We add Laplacian noise with a sensitivity of 1, as the input dataset is normalized within the bounds of 0 and 1, which restricts the minimum output to 0 and the maximum output to 1 (refer to Equation C.20). We select the position of the Laplacian noise to be 0, as the goal is to keep the local minima of the RMSE around 0. We can factorize the noise-introduced squared error minimization equations to form a linear system, which can be denoted by
$$MA = B$$
$$A = [a_1, a_2, a_3, a_4]^T \qquad (10)$$
$$B = [b_1, b_2, b_3, b_4]^T \qquad (11)$$
where $M = [m_{ij}]$ is the $4 \times 4$ coefficient matrix whose entries are defined in Appendix C (Equations C.24–C.33).
Now we solve the corresponding linear system (formed using Equations C.35–C.37) to obtain noisy values for $a_1$, $a_2$, $a_3$, and $a_4$ in order to approximate the input data series with a noisy function. The results will differ each time we calculate the values for $a_1$, $a_2$, $a_3$, and $a_4$, as we have randomized the process of interpolation by adding randomized Laplacian noise calibrated using a user-defined ε value. The smaller the ε, the higher the privacy. It is recommended to use an ε in the interval (0, 10), which is considered to be the default range to provide a sufficient level of privacy.
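To make the construction more concrete, the following sketch assembles the four-term shifted-Chebyshev design matrix and solves the noisy normal equations. It follows the derivation in Appendix C, where the Laplacian terms end up on the right-hand side of the normal equations, but it is only an approximation of the method: the exact noise placement, the choice of x positions, the per-window processing and the tuple shuffling of the actual SEAL implementation may differ.

```python
import numpy as np

rng = np.random.default_rng(42)

def noisy_chebyshev_fit(x, y, epsilon, sensitivity=1.0):
    """Solve the 4x4 noisy normal equations; the fit effectively targets y_i - Lap_i."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    phi = np.column_stack([np.ones_like(x),                      # T0(2x-1)
                           2 * x - 1,                            # T1(2x-1)
                           8 * x**2 - 8 * x + 1,                 # T2(2x-1)
                           32 * x**3 - 48 * x**2 + 18 * x - 1])  # T3(2x-1)
    lap = rng.laplace(loc=0.0, scale=sensitivity / epsilon, size=len(x))
    a = np.linalg.solve(phi.T @ phi, phi.T @ (y - lap))
    return a, phi

def perturb_column(x, y, epsilon=1.0):
    a, phi = noisy_chebyshev_fit(x, y, epsilon)
    return phi @ a            # synthesised (perturbed) values for this attribute

x = np.linspace(0, 1, 200)                       # positions normalised to [0, 1]
y = np.clip(0.3 + 0.4 * np.sin(6 * x), 0, 1)     # a toy attribute in [0, 1]
print(np.round(perturb_column(x, y, epsilon=1.0)[:5], 3))
```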
Algorithmic Steps of SEAL for Static Data and Data Streams
Algorithm 1 presents the systematic flow of steps in randomizing the data to produce a privacy-preserving output. The algorithm accepts the input dataset (D), the privacy budget ε (defined in Equation C.21), the window size (ws) and the threshold (t) as the input parameters. The window size defines the number of data instances to be perturbed in one cycle of randomization. The window size of a data stream is essential to maintain the speed of the post-processing analysis/modification (e.g. data perturbation, classification, and clustering) done to the data stream [78]. For static datasets, the threshold is maintained at a default value of −1; t = −1 means that no specific number of perturbed windows needs to be released before the whole dataset is completed. In the case of data streams, the window size (ws) and the threshold t are useful, as ws can be maintained as a data buffer and t can be set to a certain number to let the algorithm know that it has to release every t processed windows. Maintaining t is important for data streams because data streams grow infinitely in most cases, and the algorithm makes sure that the data is released at predefined intervals.
According to conventional differential privacy, the acceptable values of ε should be within a small range, ideally in the interval (0, 9] [79]. Due to the lower sensitivity of the interpolation process, increasing ε beyond 2 may lower privacy. It is the users' responsibility to decrease or increase ε depending on the requirements. We suggest an ε of 1 to have a balance between privacy and utility.
If the user chooses an ε value less than 1, the algorithm will provide higher randomization, hence providing higher privacy and lower utility, whereas lower privacy and higher utility will be provided in the case of an ε value higher than 1. The selection of ws depends specifically on the size of the particular dataset. A comparably larger ws can be chosen for a large dataset, while ws can be smaller for a small dataset. For a static dataset, ws can range from a smaller value such as one-tenth the size of the dataset to the full size of the dataset. The minimum value of ws should not go down to a very small value (e.g. < 100) because it increases the number of perturbation cycles and introduces an extreme level of randomization to the input dataset, resulting in poor utility. For a data stream, ws is considered as the buffer size and can range from a smaller value to any number of tuples that fit in the memory of the computer. Further discussion on selecting suitable values for ε and ws is provided in Section 5.2.1.
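Before the formal listing, the following is a small sketch of the ws/t bookkeeping described above; the perturbation call is a stand-in and the function names are illustrative (the real procedure follows in Algorithm 1 below):

```python
def windowed_release(stream, ws, t, perturb_window, release):
    """Buffer records into windows of size ws; release every t perturbed windows."""
    window, pending, rep = [], [], 0
    for record in stream:
        window.append(record)
        if len(window) == ws:
            pending.append(perturb_window(window))
            window, rep = [], rep + 1
            if t != -1 and rep == t:       # release every t processed windows
                release(pending)
                pending, rep = [], 0
    if window or pending:                  # flush the remainder (static datasets, t = -1)
        if window:
            pending.append(perturb_window(window))
        release(pending)

released = []
windowed_release(range(10), ws=3, t=2,
                 perturb_window=lambda w: list(w),   # stand-in for SEAL's perturbation
                 release=released.append)
print(released)   # [[[0, 1, 2], [3, 4, 5]], [[6, 7, 8], [9]]]
```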
Algorithm 1 Steps of the perturbation algorithm: SEAL
Inputs : D ← input dataset (numeric)
ε ← scale of Laplacian noise
ws ← data buffer/window size
t ← threshold for the maximum number of windows processed before a data release (default value of t = −1)
Outputs: D_p ← perturbed dataset
1: divide D into data partitions (w_i) of size ws
2: x = [1, . . . ]
. . .
D_p = merge(D_p, w_i^p)
20: if rep == t then . . .

There are many examples of biomedical and healthcare systems which can be effectively facilitated and improved using SCPS [80]. However, biomedicine and healthcare data can contain a large amount of sensitive, personal information. SEAL provides a practical solution and can impose privacy in such scenarios to limit the potential privacy leak from such systems [80]. Figure 3 shows a use case for SEAL integration in a healthcare smart cyber-physical system.
Patients can have several sensors attached to them for recording different physical parameters. The recorded data are then transmitted to a central unit which can be any readily available digital device such as a smartphone, a personal computer, or an embedded computer. A large variety of sensors are available today, e.g. glucose monitors, blood pressure monitors [81]. In the proposed setting, we assume that the processing unit that runs SEAL, perturbs all sensitive inputs forwarded to the central unit. As shown in the figure, we assume that the central units do not receive any unperturbed sensitive information, and the data repositories will store only perturbed data, locally or in a remote data center.
Data analysts can access and use only the perturbed data to conduct their analyses. Since the data is perturbed, adversarial attacks on privacy will not be successful. Figure 3: A use case: The integration of SEAL in a healthcare smart cyber-physical system. As shown in the figure, SEAL perturbs data as soon as they leave the source (medical sensors, medical devices, etc.). In the proposed setting, SEAL assumes that there is no trusted party.
Experimental Results
In this section, we discuss the experimental setup, resources used, experiments, and their results.
The experiments were conducted using seven datasets retrieved from the UCI data repository 2 . We compare the results of SEAL against the results of rotation perturbation (RP), geometric perturbation (GP) and data condensation (DC). For the performance comparison with SEAL, we selected GP and RP when using static datasets, while DC was used with data streams. The main reason for selecting GP, RP, and DC is that they are multidimensional perturbation mechanisms that correlate with the technique used in the linear system of SEAL as given in Equation C.34. Figure 4 shows the analytical setup which was used to test the performance of SEAL. We perturbed the input data using SEAL, RP, GP, and DC.
Experimental Setup
For the experiments we used a
Perturbation methods used for comparison
Random rotation perturbation (RP), geometric data perturbation (GP), and data condensation (DC) are three types of matrix multiplicative perturbation approaches which are considered to provide high utility in classification and clustering [83]. In RP, the original data matrix is multiplied by a random rotation matrix R, which has the properties of an orthogonal matrix, i.e. R × R^T = R^T × R = I. The rotation is repeated until the algorithm converges at the desired level of privacy [9]. In GP, a random translation matrix is added to the process of perturbation in order to enhance privacy. The method combines three components: rotation perturbation, translation perturbation, and distance perturbation [10]. Due to the isometric nature of the transformations, the perturbation process preserves the distances between the tuples, resulting in high utility for classification and clustering. RP and GP can only be used for static datasets in their current setting, due to their recursive approach to deriving the optimal perturbation. DC is specifically introduced for data streams. In DC, data are divided into multiple homogeneous groups of predefined size (accepted as user input) in such a way that the difference between the records in a particular group is minimal, and a certain level of statistical information about different records is maintained. The sanitized data are generated using a uniform random distribution based on the eigenvectors which are generated using the eigendecomposition of the characteristic covariance matrices of each homogeneous group [52].
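A brief sketch of the basic rotation step used by RP (one iteration only; the convergence loop and GP's translation/distance components are omitted), using a QR factorization to obtain a random orthogonal matrix; this is an illustration, not the reference implementation of [9,10]:

```python
import numpy as np

rng = np.random.default_rng(7)

def random_rotation_perturbation(X):
    """Multiply the (records x attributes) matrix by a random orthogonal matrix Q,
    with Q @ Q.T == I, so pairwise distances between records are preserved."""
    d = X.shape[1]
    Q, _ = np.linalg.qr(rng.normal(size=(d, d)))   # QR of a Gaussian matrix -> orthogonal Q
    return X @ Q

X = rng.random((5, 3))
Xp = random_rotation_perturbation(X)
print(np.allclose(np.linalg.norm(X[0] - X[1]), np.linalg.norm(Xp[0] - Xp[1])))  # True
```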
Classification algorithms used in the experiments
Different classes of classification algorithms employ different classification strategies [84]. To investigate the performance of SEAL with diverse classification methods, we chose five different algorithms as the representative of different classes, namely: Multilayer Perceptron (MLP) [82], k-Nearest Neighbor (kNN) [82], Sequential Minimal Optimization (SMO) [85], Naive Bayes [82], and J48 [86], and tested SEAL for its utility in terms of classification accuracy. MLP uses back-propagation to classify instances [82]. kNN is a non-parametric method used for classification [82]. SMO is an implementation of John Platt's sequential minimal optimization algorithm for training a support vector classifier [85]. Naive Bayes is a fast classification algorithm based on probabilistic classifiers [82]. J48 is an implementation of the decision tree based classification algorithm [82].
Performance Evaluation of SEAL
We evaluated the performance of SEAL with regard to classification accuracy, attack resistance, time complexity, scalability, and also looked at data streams. First, we generated perturbed data using SEAL, RP, GP, and DC for the datasets: WCDS, WQDS, PBDS, LRDS, and SSDS (refer to Table 1) under the corresponding settings. The perturbed data were then used to determine classification accuracy and attack resistance for each perturbed dataset. During the classification accuracy experiments, k of k-nearest neighbor (kNN) classification algorithm was kept at 1. The aggregated results were rated using the nonparametric statistical comparison test, Friedman's rank test, which is analogous to a standard one-way repeated-measures analysis of variance [87]. We recorded the statistical significance values, and the Friedman's mean ranks (FMR) returned by the rank test. The time consumption of SEAL was evaluated using runtime complexity analysis. We ran SEAL on two large-scale datasets, HPDS and HIDS, to test its scalability. Finally, the performance of SEAL was tested on data streams by running it on the LRDS dataset, and the results were compared with those produced by DC.
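A hypothetical evaluation loop mirroring this procedure on a stand-in dataset (scikit-learn's wine data, not one of the datasets listed in Table 1); the perturbation function seal_perturb is a user-supplied placeholder and the cross-validation setup is an assumption:

```python
import numpy as np
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import MinMaxScaler

X, y = load_wine(return_X_y=True)
X = MinMaxScaler().fit_transform(X)            # SEAL assumes inputs scaled to [0, 1]

def utility(X_matrix):
    """Utility measured as kNN (k = 1) classification accuracy, as in the paper."""
    knn = KNeighborsClassifier(n_neighbors=1)
    return cross_val_score(knn, X_matrix, y, cv=5).mean()

print("original :", round(utility(X), 3))
# print("perturbed:", round(utility(seal_perturb(X, epsilon=1.0)), 3))  # seal_perturb: user-supplied
```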
Effect of randomization on the degree of privacy
One of the main features of SEAL is its ability to perturb a dataset while preserving the original shape of the data distribution. We ran SEAL on the same data series to detect the effect of randomization in two different instances of perturbation. This experiment checks and guarantees that SEAL does not publish similar perturbed data when it is applied with the same ε value to the same data on different occasions. This feature enables SEAL to prevent privacy leaks via data linkage attacks that exploit multiple data releases. As depicted in Figure 5, in two separate applications, SEAL generates two distinct randomized data series, while preserving the shape of the original data series.
The left-hand plot of Figure 5 shows the data generated under an ε of 1, whereas the right-hand plot shows the data generated under a different setting. The two plots drawn above the original data series represent two instances of perturbation conducted by SEAL on the original data series.
Dynamics of privacy budget (ε) and window size (ws)
As explained in Section 5.2.1, a smaller ε means higher randomization, which results in decreased utility. Figure 6a shows the change of classification accuracy against an increasing ε. As shown in the figure, classification accuracy increases with an increasing privacy budget (ε). Figure 6a shows a more predictable pattern of increasing utility (classification accuracy) against increasing ε. The choice of a proper ε depends on the application requirements: a case that needs higher privacy should have a smaller ε, while a larger ε will provide better utility. As it turns out, two-digit ε values provide no useful privacy. Given that SEAL tries to preserve the shape of the original data distribution, we recommend a range of 0.4 to 3 for ε to limit unanticipated privacy leaks. We showed that SEAL provides better privacy and utility than comparable methods under a privacy budget of 1. Next, we tested the effect of window size (ws) on classification accuracy and the magnitude of randomization performed by SEAL. As shown in Figure 6b, classification accuracy increases when ws increases. When ws is small, the dataset is divided into more groups than when ws is large.
When there is more than one group to be perturbed, SEAL applies randomization on each group distinctly. Since each of the groups is subjected to distinct randomization, the higher the number of groups, the larger the perturbation of the dataset. For smaller sizes of ws, SEAL will produce higher perturbation, resulting in more noise, reduced accuracy, improved privacy, and better resistance to data reconstruction attacks. Table 2 provides the classification accuracies when using the original dataset and the datasets perturbed by the three methods. During the experiments for classification accuracy, we maintained ε at 1 and ws at the total length of the dataset. For example, if the dataset contained n tuples, ws was maintained at n. After producing the classification accuracies, Friedman's rank test was conducted on the data available in Table 2 to rank the three methods: GP, RP, and SEAL.
Classification accuracy
The mean ranks produced by Friedman's rank (FR) test are presented in the last row of Table 2 10 .
The p-value suggests that the difference between the classification accuracies of RP, GP, and SEAL is statistically significant. When evaluating FMR values on classification accuracies, a higher rank means that the corresponding method tends to produce better classification results. The mean ranks indicate that SEAL provides comparatively higher classification accuracy. SEAL is capable of providing higher utility in terms of classification accuracy due to its ability to maintain the shape of the original data distribution despite the introduced randomization. Although SEAL provides better performance overall than the other two methods, we can notice that in a few cases (as shown in Table 2) SEAL has produced slightly lower classification accuracies. We assume that this is due to the effect of variable random noise applied by SEAL. However, these lower accuracies are still on par with accuracies produced by the other two methods. Table 3 shows the three methods' (RP, GP, and SEAL) resistance to three attack methods: naive snooping (NI), independent component analysis (ICA) and known I/O attack (IO) [9,83]. We used the same parameter settings of SEAL (ε = 1 and ws = number of tuples) which were used in the classification accuracy experiments for the attack resistance analysis as well. IO and ICA data reconstruction attacks try to restore the original data from the perturbed data and are more successful in attacking matrix multiplicative data perturbation. The FastICA package [88] was used to evaluate the effectiveness of ICA-based reconstruction of the perturbed data. We obtained the attack resistance values as standard deviation values of (i) the difference between the normalized original data and the perturbed data for NI, and (ii) the difference between the normalized original data and the reconstructed data for ICA and IO. During the IO attack analysis, we assume that around 10% of the original data is known to the adversary. The "min" values under each test indicate the minimum guarantee of resistance while "avg" values give an impression of the overall resistance.
10 The FR test returned a χ² value of 27.6566, a degree of freedom of 2 and a p-value of 9.8731e-07.
Attack resistance
We evaluated the data available in Table 3 using Friedman's rank test to generate the mean ranks for GP, RP, and SEAL. The mean ranks produced by Friedman's rank test are given in the last row of Table 3 11 . The p-value implies that the difference between the attack resistance values is significantly different. As for the FMR values on attack resistance, a higher rank means that the corresponding method tends to be more attack-resistant. The mean ranks suggest that SEAL provides comparatively higher security than the comparable methods against the privacy attacks. 11 The test statistics: χ 2 value of 14.6387, a degree of freedom of 2 and a p-value of 6.6261e-04.
Time complexity comparison
Both RP and GP show O(n²) time complexity to perturb one record with n attributes. The total complexity to perturb a dataset of m records is O(m × n²). However, both RP and GP run for r iterations (where r is a user input) to find the optimal perturbation instance of the dataset within the r iterations. Therefore, the overall complexity is O(m × r × n²). Under each iteration of r, the algorithms run data reconstruction using ICA and known IO attacks to find the vulnerability level of the perturbed dataset. Each attack runs another k iterations (another user input) to reconstruct k instances. Usually, k is much larger than r. For one iteration of k, IO and ICA contribute a complexity of O(n × m) [89]. Hence, the overall complexity of RP or GP in producing an optimal perturbed dataset is equal to O(m² × r × k × n³), which is a much larger computational complexity compared to the linear computational complexity of SEAL. Figure 8 shows the time consumption plots of the three methods plotted together on the same figure. As shown in the figure, the curves of SEAL lie almost on the x-axis due to its extremely low time consumption compared to the other two methods.
Scalability
We conducted the scalability analysis of SEAL on an SGI UV3000 supercomputer (a detailed specification of the supercomputer is given in Section 5.1). SEAL was tested for its scalability on two large datasets: HPDS and HIDS. The results are given in Table 4. It is apparent that SEAL is more efficient than RP, GP, and DC; in fact, RP and GP did not even converge after 100 hours (the time limit of the batch scripts were set to 100 h). Both RP and GP use recursive loops to achieve optimal perturbation, which slows down the perturbation process. Therefore, RP and GP are not suitable for perturbing big data and data streams. DC is effective in perturbing big data, but SEAL performs better by providing better efficiency and utility.
Performance on data streams
We checked the performance of SEAL on data streams with regard to (i) classification accuracy and (ii) Minimum STD(D − D_p). The latter provides evidence of the minimum guarantee of attack resistance provided under a particular instance of perturbation. As shown in Figure 9a, the classification accuracy of SEAL increases with increasing buffer size. This property is valuable for the perturbation of infinitely growing data streams generated by systems such as smart cyber-physical systems. The figure indicates that when a data stream grows infinitely, the use of smaller window sizes would negatively affect the utility of the perturbed data. When the window size is large, the utility of the perturbed data is closer to the utility of the original data stream. We can also notice that DC performs poorly in terms of classification accuracy compared to SEAL. It was previously noticed that DC works well only for tiny buffer sizes such as 5 or 10 [70]. However, according to Figure 9b, the minimum guarantee of attack resistance drops when the buffer size decreases, which restricts the use of DC with smaller buffer sizes. According to Figure 9b, however, SEAL still provides a consistent minimum guarantee of attack resistance, which allows SEAL to be used with any suitable buffer size.
Discussion
The proposed privacy-preserving mechanism (named SEAL) for big data and data streams performs data perturbation based on Chebyshev polynomial interpolation and the application of a Laplacian mechanism for noise addition. SEAL uses the first four orders of Chebyshev polynomials of the first kind for the polynomial interpolation of a particular dataset. Although Legendre polynomials would offer a better approximation of the original data during interpolation, Chebyshev polynomials are simpler to calculate and provide improved privacy; a higher interpolation error, i.e. increased deviation from the original data, would intuitively provide greater privacy than Legendre polynomials. Moreover, we intend to maintain the spatial arrangement of the original data, and this requirement is fully satisfied by Chebyshev interpolation. During the interpolation, SEAL adds calibrated noise using the Laplacian mechanism to introduce randomization, and hence privacy, to the perturbed data. The Laplacian noise allows the interpolation process to be performed with an anticipated random error in the root mean squared error minimization. We follow the conventions of differential privacy for noise addition; the introduction of noise is in accordance with the characteristic privacy budget ε. The privacy budget (ε) allows users (data curators) of SEAL to adjust the amount of noise. Smaller values of ε (usually less than 1 but greater than 0) add more noise to generate more randomization, whereas large values of ε add less noise and generate less randomization. The privacy budget is especially useful for multiple data releases, where the data curator can apply proper noise in the perturbation process in consecutive data releases. SEAL's ability to maintain the shape of the original data distribution after noise addition is a clear advantage, and enables SEAL to provide convincingly higher utility than a standard local differentially private algorithm. This characteristic may come at a price, and the privacy enforced by a standard differentially private mechanism can be a little higher than that of SEAL.
The experimental results of SEAL show that it performs well on both static data and data streams.
We evaluated SEAL in terms of classification accuracy, attack resistance, time complexity, scalability, and data stream performance. We tested each of these parameters using seven datasets, five classification algorithms, and three attack methods. SEAL outperforms the comparable methods: RP, GP, and DC in all these areas, proving that SEAL is an excellent choice for privacy preservation of data produced by SCPS and related technologies. SEAL produces high utility perturbed data in terms of classification accuracy, due to its ability to preserve the underlying characteristics such as the shape of the original data distribution. Although we apply an extensive amount of noise by using a small value, SEAL still tries to maintain the shape of the original data. The experiments show that even in extremely noisy perturbation environments, SEAL can provide higher utility compared to similar perturbation mechanisms, as shown in Section 5.1. SEAL shows excellent resistance with regard to data reconstruction attacks, proving that it offers excellent privacy. SEAL takes several steps to enhance the privacy of the perturbed data, namely (1) approximation through noisy interpolation, (2) scaling/normalization, and (3) data shuffling. These three steps help it outperform the other, similar perturbation mechanisms in terms of privacy.
In Section 5.1 we showed that SEAL has linear time complexity, O(n). This characteristic is crucial for big data and data streams. The scalability experiments confirm that SEAL processes big datasets and data streams very efficiently. As shown in Figure 9, SEAL also offers significantly better utility and attack resistance than data condensation. The amount of time spent by SEAL in processing one data record is around 0.03 to 0.09 milliseconds, which means that SEAL can perturb approximately 11110 to 33330 records per second. We note that runtime speed depends on the computing environment, such as CPU speed, memory speed, and disk IO speeds. The processing speed of SEAL in our experimental setup suits many practical examples of data streams, e.g. Sense your City (CITY) 13 and NYC Taxi cab (TAXI) 14 [90]. The results clearly demonstrate that SEAL is an efficient and reliable privacy preserving mechanism for practical big data and data stream scenarios.
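The throughput figures quoted above follow directly from the per-record latency; a quick arithmetic check:

```python
for ms_per_record in (0.03, 0.09):
    print(ms_per_record, "ms/record ->", round(1000 / ms_per_record), "records/second")
# 0.03 ms/record -> 33333 records/second; 0.09 ms/record -> 11111 records/second
```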
Conclusion
In this paper, we proposed a solution for maintaining data privacy in large-scale data publishing and analysis scenarios, which is becoming an important issue in various environments, such as smart cyber-physical systems. We proposed a novel algorithm named SEAL to perturb data to maintain data privacy. Linear time complexity (O(n)) of SEAL allows it to work efficiently with continuously growing data streams and big data. Our experiments and comparisons indicate that SEAL produces higher classification accuracy, efficiency, and scalability while preserving better privacy with higher attack resistance than similar methods. The results prove that SEAL suits the dynamic environments presented by smart cyber-physical environments very well. SEAL can be an effective privacy-preserving mechanism for smart cyber-physical systems such as vehicles, grid, healthcare systems, and homes, as it can effectively perturb continuous data streams generated by sensors monitoring an individual or group of individuals and process them on the edge/fog devices before transmission to cloud systems for further analysis.
The current configuration of SEAL does not allow distributed data perturbation, and it limits the utility only to privacy-preserving data classification. A potential future extension of SEAL can address a distributed perturbation scenario that would allow SEAL to perturb sensor outputs individually while capturing the distinct latencies introduced by the sensors. SEAL could then combine the individually perturbed data using the corresponding timestamps and latencies to produce the privacy-protected data records. Further investigation on privacy parameter tuning would allow extended utility towards other areas such as descriptive statistics.
13 Sense your City is an urban environmental monitoring project that used crowd-sourcing to deploy sensors at 7 cities across 3 continents in 2015, with about 12 sensors per city, and it generates 7000 messages/sec.
Appendix A. Chebyshev Polynomials of the First Kind
$$T_0(x) = 1 \qquad (\text{A.2})$$
$$T_1(x) = x \qquad (\text{A.3})$$
$$T_2(x) = 2x^2 - 1 \qquad (\text{A.4})$$
$$T_3(x) = 4x^3 - 3x \qquad (\text{A.5})$$
$$T_4(x) = 8x^4 - 8x^2 + 1 \qquad (\text{A.6})$$
Furthermore, we can represent any Chebyshev polynomial of the first kind using the recurrence relation given in Equation A.7, where $T_0(x) = 1$ and $T_1(x) = x$.
$$T_{n+1}(x) = 2xT_n(x) - T_{n-1}(x) \qquad (\text{A.7})$$
Appendix B. Least Square Fitting
In least squares fitting, vertical least squares fitting proceeds by finding the sum of squares of the vertical deviations $R^2$ (refer to Equation B.1) of a set of $n$ data points [75].
$$R^2 \equiv \sum_{i=1}^{n}\left[f(x_i, a_1, a_2, \ldots, a_n) - y_i\right]^2 \qquad (\text{B.1})$$
Now, we can choose to minimize the quantity given in Equation B.2, which can be considered as an average approximation error. This is also referred to as the root mean square error in approximating (x i , y i ) by a function f (x i , a 1 , a 2 , . . . , a n ).
$$E = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left[f(x_i, a_1, a_2, \ldots, a_n) - y_i\right]^2} \qquad (\text{B.2})$$
Let's assume that $f(x)$ is in a known class of functions, $C$. It can be shown that a function $\hat{f}^*$ which is most likely to equal $f$ will also minimize Equation B.3 among all functions $\hat{f}(x)$ in $C$. This is called the least squares approximation to the data $(x_i, y_i)$.
$$E = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left[\hat{f}(x_i, a_1, a_2, \ldots, a_n) - y_i\right]^2} \qquad (\text{B.3})$$
Minimizing $E$ is equivalent to minimizing $R^2$, although the minimum values will be different. In the minimization, $b$ and $m$ are allowed to vary arbitrarily.
$$R^2 = \sum_{i=1}^{n}\left[m x_i + b - y_i\right]^2$$
$$\frac{\partial R^2}{\partial b} = \sum_{i=1}^{n} 2\left[m x_i + b - y_i\right] \qquad (\text{B.8})$$
$$\frac{\partial R^2}{\partial m} = \sum_{i=1}^{n} 2\left[m x_i^2 + b x_i - x_i y_i\right] \qquad (\text{B.9})$$
Setting these derivatives to zero for $f(x) = mx + b$ gives
$$nb + \left(\sum_{i=1}^{n} x_i\right) m = \sum_{i=1}^{n} y_i, \qquad \left(\sum_{i=1}^{n} x_i\right) b + \left(\sum_{i=1}^{n} x_i^2\right) m = \sum_{i=1}^{n} x_i y_i \qquad (\text{B.10})$$
$$\begin{bmatrix} n & \sum_{i=1}^{n} x_i \\ \sum_{i=1}^{n} x_i & \sum_{i=1}^{n} x_i^2 \end{bmatrix}\begin{bmatrix} b \\ m \end{bmatrix} = \begin{bmatrix} \sum_{i=1}^{n} y_i \\ \sum_{i=1}^{n} x_i y_i \end{bmatrix} \qquad (\text{B.11})$$
So,
$$\begin{bmatrix} b \\ m \end{bmatrix} = \begin{bmatrix} n & \sum_{i=1}^{n} x_i \\ \sum_{i=1}^{n} x_i & \sum_{i=1}^{n} x_i^2 \end{bmatrix}^{-1}\begin{bmatrix} \sum_{i=1}^{n} y_i \\ \sum_{i=1}^{n} x_i y_i \end{bmatrix} \qquad (\text{B.12})$$
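A quick numerical check (not from the paper) that the closed form in Equation B.12 agrees with an off-the-shelf least-squares polynomial fit:

```python
import numpy as np

x = np.linspace(0, 2, 50)
y = 1.7 * x - 0.3 + np.random.default_rng(1).normal(scale=0.05, size=x.size)

# Closed form (B.12)
M = np.array([[x.size, x.sum()], [x.sum(), (x**2).sum()]])
b, m = np.linalg.solve(M, np.array([y.sum(), (x * y).sum()]))

# Reference: numpy's own least-squares polynomial fit of degree 1
m_ref, b_ref = np.polyfit(x, y, deg=1)
print(np.allclose([m, b], [m_ref, b_ref]))   # True
```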
Appendix C. Privacy-Preserving Polynomial Model Generation
Consider a dataset $\{(x_i, y_i)\,|\,1 \le i \le n\}$, and let
$$f(x) = a_1\varphi_1(x) + a_2\varphi_2(x) + \cdots + a_m\varphi_m(x) \qquad (\text{C.1})$$
where $a_1, a_2, \ldots, a_m$ are coefficients and $\varphi_1(x), \varphi_2(x), \ldots, \varphi_m(x)$ are Chebyshev polynomials of the first kind:
$$\varphi_1(x) = T_0(x) = 1 \qquad (\text{C.2})$$
$$\varphi_2(x) = T_1(x) = x \qquad (\text{C.3})$$
$$\varphi_n(x) = T_{n+1}(x) = 2xT_n(x) - T_{n-1}(x) \qquad (\text{C.4})$$
Assume that the data {x i } are chosen from an interval [α, β]. The Chebyshev polynomials can be modified as given in Equation C.5,
$$\varphi_k(x) = T_{k-1}\!\left(\frac{2x - \alpha - \beta}{\beta - \alpha}\right) \qquad (\text{C.5})$$
Since the input data are normalized within the bounds of 0 and 1 (i.e. $\alpha = 0$, $\beta = 1$), the modified polynomials become
$$\varphi_k(x) = T_{k-1}\!\left(\frac{2x - \alpha - \beta}{\beta - \alpha}\right) = T_{k-1}(2x - 1) \qquad (\text{C.6})$$
From Equation C.1 and Equation C.6 we have the following equations for $m = 4$:
$$\varphi_1(x) = T_0(2x-1) = 1 \qquad (\text{C.7})$$
$$\varphi_2(x) = T_1(2x-1) = 2x - 1 \qquad (\text{C.8})$$
$$\varphi_3(x) = T_2(2x-1) = 8x^2 - 8x + 1 \qquad (\text{C.9})$$
$$\varphi_4(x) = T_3(2x-1) = 32x^3 - 48x^2 + 18x - 1 \qquad (\text{C.10})$$
The approximated function $\hat{f}$ of degree $(m-1)$ can then be given by
$$\hat{f}(x) = a_1\varphi_1(x) + a_2\varphi_2(x) + a_3\varphi_3(x) + a_4\varphi_4(x) \qquad (\text{C.11})$$
$$\hat{f}(x) = a_1(1) + a_2(2x-1) + a_3(8x^2-8x+1) + a_4(32x^3-48x^2+18x-1) \qquad (\text{C.12})$$
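A small helper (illustrative only, not the paper's code) that evaluates the expanded form in Equation C.12 for given coefficients:

```python
def f_hat(x, a1, a2, a3, a4):
    """Expanded form of Equation C.12 for the shifted Chebyshev basis on [0, 1]."""
    return (a1
            + a2 * (2 * x - 1)
            + a3 * (8 * x**2 - 8 * x + 1)
            + a4 * (32 * x**3 - 48 * x**2 + 18 * x - 1))

print(f_hat(0.5, 1.0, 0.5, 0.25, 0.125))   # 1.0 + 0 + 0.25*(-1) + 0.125*0 = 0.75
```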
Let the actual input be $y_i$, where $i = 1$ to $n$. The error of the approximated input can be determined by Equation C.13.
$$e_i = \hat{f}(x_i) - y_i \qquad (\text{C.13})$$
We need to determine the values of a 1 , a 2 , a 3 , and a 4 in such a way that the errors (e i ) are small.
In order to determine the best values for a 1 , a 2 , a 3 , and a 4 , we use the root mean square error given in Equation C.14.
$$E = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left[\hat{f}(x_i) - y_i\right]^2} \qquad (\text{C.14})$$
Let's take the least squares fit of $\hat{f}(x)$, from the class of functions $C$, which minimizes $E$, as $\hat{f}^*(x)$. We can obtain $\hat{f}^*(x)$ by minimizing $E$. Thus we seek to minimize $M(a_1, a_2, a_3, a_4)$, which is given in Equation C.15.
$$M(a_1, a_2, a_3, a_4) = \sum_{i=1}^{n}\left[a_1 + a_2(2x_i-1) + a_3(8x_i^2-8x_i+1) + a_4(32x_i^3-48x_i^2+18x_i-1) - y_i\right]^2 \qquad (\text{C.15})$$
The values of $a_1$, $a_2$, $a_3$, and $a_4$ that minimize $M(a_1, a_2, a_3, a_4)$ will satisfy the expressions given in Equations C.16–C.19:
$$\frac{\partial M(a_1,a_2,a_3,a_4)}{\partial a_k} = \frac{\partial \sum_{i=1}^{n}\left[a_1 + a_2(2x_i-1) + a_3(8x_i^2-8x_i+1) + a_4(32x_i^3-48x_i^2+18x_i-1) - y_i\right]^2}{\partial a_k} = 0, \quad k = 1, 2, 3, 4 \qquad (\text{C.16–C.19})$$
Appendix C.1. Utilizing differential privacy to introduce randomization to the approximation process
To decide the amount of noise, we have to determine the sensitivity of the noise addition process.
Given that we add the noise to the approximated values of $\hat{f}(x)$, the sensitivity ($\Delta f$) can be defined using Equation C.20, which is the maximum difference between the highest and the lowest possible output values of $\hat{f}(x)$. Since the input dataset is normalized within the bounds of 0 and 1, the minimum possible input or output is 0 while the maximum possible input or output is 1. Therefore, we define the sensitivity of the noise addition process to be 1. The added Laplacian noise contributes an error to the process of finding the coefficients $a_1$, $a_2$, $a_3$, and $a_4$, which is given in Equation C.36. Since the sensitivity ($\Delta f$) of the noise addition process is 1, as defined in Equation C.20, the scale (spread) of the Laplacian noise is $1/\varepsilon$. We restrict the position ($\mu$) of the Laplacian noise at 0, as the goal is to achieve the global minima keeping the RMSE at 0 after the randomization.
$$\frac{\partial M(a_1,a_2,a_3,a_4)}{\partial a_1} = \frac{\partial \sum_{i=1}^{n}\left[a_1 + a_2(2x_i-1) + a_3(8x_i^2-8x_i+1) + a_4(32x_i^3-48x_i^2+18x_i-1) + \mathrm{Lap}_i\!\left(\frac{\Delta f}{\varepsilon}\right) - y_i\right]^2}{\partial a_1} = 0 \qquad (\text{C.22})$$
After applying the partial derivative in Equation C.22 with respect to $a_1$, we obtain Equation C.23, which leads to Equation C.24.
$$\sum_{i=1}^{n} 2\left[a_1 + a_2(2x_i-1) + a_3(8x_i^2-8x_i+1) + a_4(32x_i^3-48x_i^2+18x_i-1) + \mathrm{Lap}_i\!\left(\frac{\Delta f}{\varepsilon}\right) - y_i\right] = 0 \qquad (\text{C.23})$$
Let's use m ij to denote the coefficients, and b i to represent the constants in the right hand side of the equal symbol in the factorised Equations C.24, C.27, C.30, and C.33.
In the following, all sums run over $i = 1, \ldots, n$ and $\mathrm{Lap}_i$ denotes $\mathrm{Lap}_i\!\left(\frac{\Delta f}{\varepsilon}\right)$.
$$a_1\underbrace{\left[n\right]}_{m_{11}} + a_2\underbrace{\left[2\sum x_i - n\right]}_{m_{12}} + a_3\underbrace{\left[8\sum x_i^2 - 8\sum x_i + n\right]}_{m_{13}} + a_4\underbrace{\left[32\sum x_i^3 - 48\sum x_i^2 + 18\sum x_i - n\right]}_{m_{14}} = \sum y_i - \sum \mathrm{Lap}_i \qquad (\text{C.24})$$
$$\frac{\partial M(a_1,a_2,a_3,a_4)}{\partial a_2} = 0 \qquad (\text{C.25})$$
$$\sum 2\left[a_1 + a_2(2x_i-1) + a_3(8x_i^2-8x_i+1) + a_4(32x_i^3-48x_i^2+18x_i-1) + \mathrm{Lap}_i - y_i\right](2x_i-1) = 0 \qquad (\text{C.26})$$
$$a_1\underbrace{\left[2\sum x_i - n\right]}_{m_{21}} + a_2\underbrace{\left[4\sum x_i^2 - 4\sum x_i + n\right]}_{m_{22}} + a_3\underbrace{\left[16\sum x_i^3 - 24\sum x_i^2 + 10\sum x_i - n\right]}_{m_{23}} + a_4\underbrace{\left[64\sum x_i^4 - 128\sum x_i^3 + 84\sum x_i^2 - 20\sum x_i + n\right]}_{m_{24}} = 2\sum x_i y_i - \sum y_i - 2\sum x_i\,\mathrm{Lap}_i + \sum \mathrm{Lap}_i \qquad (\text{C.27})$$
$$\frac{\partial M(a_1,a_2,a_3,a_4)}{\partial a_3} = 0 \qquad (\text{C.28})$$
$$\sum 2\left[a_1 + a_2(2x_i-1) + a_3(8x_i^2-8x_i+1) + a_4(32x_i^3-48x_i^2+18x_i-1) + \mathrm{Lap}_i - y_i\right](8x_i^2-8x_i+1) = 0 \qquad (\text{C.29})$$
$$a_1\underbrace{\left[8\sum x_i^2 - 8\sum x_i + n\right]}_{m_{31}} + a_2\underbrace{\left[16\sum x_i^3 - 24\sum x_i^2 + 10\sum x_i - n\right]}_{m_{32}} + a_3\underbrace{\left[64\sum x_i^4 - 128\sum x_i^3 + 80\sum x_i^2 - 16\sum x_i + n\right]}_{m_{33}} + a_4\underbrace{\left[256\sum x_i^5 - 640\sum x_i^4 + 560\sum x_i^3 - 200\sum x_i^2 + 26\sum x_i - n\right]}_{m_{34}} = 8\sum x_i^2 y_i - 8\sum x_i y_i + \sum y_i - 8\sum x_i^2\,\mathrm{Lap}_i + 8\sum x_i\,\mathrm{Lap}_i - \sum \mathrm{Lap}_i \qquad (\text{C.30})$$
$$\frac{\partial M(a_1,a_2,a_3,a_4)}{\partial a_4} = 0 \qquad (\text{C.31})$$
$$\sum 2\left[a_1 + a_2(2x_i-1) + a_3(8x_i^2-8x_i+1) + a_4(32x_i^3-48x_i^2+18x_i-1) + \mathrm{Lap}_i - y_i\right](32x_i^3-48x_i^2+18x_i-1) = 0 \qquad (\text{C.32})$$
$$a_1\underbrace{\left[32\sum x_i^3 - 48\sum x_i^2 + 18\sum x_i - n\right]}_{m_{41}} + a_2\underbrace{\left[64\sum x_i^4 - 128\sum x_i^3 + 84\sum x_i^2 - 20\sum x_i + n\right]}_{m_{42}} + a_3\underbrace{\left[256\sum x_i^5 - 640\sum x_i^4 + 560\sum x_i^3 - 200\sum x_i^2 + 26\sum x_i - n\right]}_{m_{43}} + a_4\underbrace{\left[1024\sum x_i^6 - 3072\sum x_i^5 + 3456\sum x_i^4 - 1792\sum x_i^3 + 420\sum x_i^2 - 36\sum x_i + n\right]}_{m_{44}} = 32\sum x_i^3 y_i - 48\sum x_i^2 y_i + 18\sum x_i y_i - \sum y_i - 32\sum x_i^3\,\mathrm{Lap}_i + 48\sum x_i^2\,\mathrm{Lap}_i - 18\sum x_i\,\mathrm{Lap}_i + \sum \mathrm{Lap}_i \qquad (\text{C.33})$$ | 9,499
1907.13315 | 2966804802 | Person re-identification (Re-ID) has achieved great improvement with deep learning and a large amount of labelled training data. However, it remains a challenging task for adapting a model trained in a source domain of labelled data to a target domain of only unlabelled data available. In this work, we develop a self-training method with progressive augmentation framework (PAST) to promote the model performance progressively on the target dataset. Specifically, our PAST framework consists of two stages, namely, conservative stage and promoting stage. The conservative stage captures the local structure of target-domain data points with triplet-based loss functions, leading to improved feature representations. The promoting stage continuously optimizes the network by appending a changeable classification layer to the last layer of the model, enabling the use of global information about the data distribution. Importantly, we propose a new self-training strategy that progressively augments the model capability by adopting conservative and promoting stages alternately. Furthermore, to improve the reliability of selected triplet samples, we introduce a ranking-based triplet loss in the conservative stage, which is a label-free objective function based on the similarities between data pairs. Experiments demonstrate that the proposed method achieves state-of-the-art person Re-ID performance under the unsupervised cross-domain setting. Code is available at: this https URL | Among these existing works, PTGAN @cite_29 and SPGAN @cite_32 transfer source images into the target-domain style with CycleGAN and then use the translated images to train a model. However, because the identity of the generated images cannot be guaranteed, these style-transfer methods do not achieve satisfactory performance. Another line of unsupervised cross-domain person Re-ID works @cite_5 @cite_54 @cite_49 @cite_25 combines other auxiliary information as an assistant task to improve model generalization. For instance, TFusion @cite_9 integrates spatio-temporal patterns to improve Re-ID precision, while EANet @cite_36 uses pose segmentation. TJ-AIDL @cite_5 simultaneously learns an attribute-semantic and identity-discriminative feature representation space, which can be transferred to any new target domain for Re-ID tasks. Similar to the difficulty of supervised learning, these domain adaptation approaches suffer from the requirement of collecting attribute annotations. | {
"abstract": [
"Person re-identification (ReID) has achieved significant improvement under the single-domain setting. However, directly exploiting a model to new domains is always faced with huge performance drop, and adapting the model to new domains without target-domain identity labels is still challenging. In this paper, we address cross-domain ReID and make contributions for both model generalization and adaptation. First, we propose Part Aligned Pooling (PAP) that brings significant improvement for cross-domain testing. Second, we design a Part Segmentation (PS) constraint over ReID feature to enhance alignment and improve model generalization. Finally, we show that applying our PS constraint to unlabeled target domain images serves as effective domain adaptation. We conduct extensive experiments between three large datasets, Market1501, CUHK03 and DukeMTMC-reID. Our model achieves state-of-the-art performance under both source-domain and cross-domain settings. For completeness, we also demonstrate the complementarity of our model to existing domain adaptation methods. The code is available at this https URL.",
"Although the performance of person Re-Identification (ReID) has been significantly boosted, many challenging issues in real scenarios have not been fully investigated, e.g., the complex scenes and lighting variations, viewpoint and pose changes, and the large number of identities in a camera network. To facilitate the research towards conquering those issues, this paper contributes a new dataset called MSMT171 with many important features, e.g., 1) the raw videos are taken by an 15-camera network deployed in both indoor and outdoor scenes, 2) the videos cover a long period of time and present complex lighting variations, and 3) it contains currently the largest number of annotated identities, i.e., 4,101 identities and 126,441 bounding boxes. We also observe that, domain gap commonly exists between datasets, which essentially causes severe performance drop when training and testing on different datasets. This results in that available training data cannot be effectively leveraged for new testing domains. To relieve the expensive costs of annotating new training samples, we propose a Person Transfer Generative Adversarial Network (PTGAN) to bridge the domain gap. Comprehensive experiments show that the domain gap could be substantially narrowed-down by the PTGAN.",
"",
"Most of the proposed person re-identification algorithms conduct supervised training and testing on single labeled datasets with small size, so directly deploying these trained models to a large-scale real-world camera network may lead to poor performance due to underfitting. It is challenging to incrementally optimize the models by using the abundant unlabeled data collected from the target domain. To address this challenge, we propose an unsupervised incremental learning algorithm, TFusion, which is aided by the transfer learning of the pedestrians' spatio-temporal patterns in the target domain. Specifically, the algorithm firstly transfers the visual classifier trained from small labeled source dataset to the unlabeled target dataset so as to learn the pedestrians' spatial-temporal patterns. Secondly, a Bayesian fusion model is proposed to combine the learned spatio-temporal patterns with visual features to achieve a significantly improved classifier. Finally, we propose a learning-to-rank based mutual promotion procedure to incrementally optimize the classifiers based on the unlabeled data in the target domain. Comprehensive experiments based on multiple real surveillance datasets are conducted, and the results show that our algorithm gains significant improvement compared with the state-of-art cross-dataset unsupervised person re-identification algorithms.",
"Being a cross-camera retrieval task, person re-identification suffers from image style variations caused by different cameras. The art implicitly addresses this problem by learning a camera-invariant descriptor subspace. In this paper, we explicitly consider this challenge by introducing camera style (CamStyle) adaptation. CamStyle can serve as a data augmentation approach that smooths the camera style disparities. Specifically, with CycleGAN, labeled training images can be style-transferred to each camera, and, along with the original training samples, form the augmented training set. This method, while increasing data diversity against over-fitting, also incurs a considerable level of noise. In the effort to alleviate the impact of noise, the label smooth regularization (LSR) is adopted. The vanilla version of our method (without LSR) performs reasonably well on few-camera systems in which over-fitting often occurs. With LSR, we demonstrate consistent improvement in all systems regardless of the extent of over-fitting. We also report competitive accuracy compared with the state of the art. Code is available at: https: github.com zhunzhong07 CamStyle",
"",
"Most existing person re-identification (re-id) methods require supervised model learning from a separate large set of pairwise labelled training data for every single camera pair. This significantly limits their scalability and usability in real-world large scale deployments with the need for performing re-id across many camera views. To address this scalability problem, we develop a novel deep learning method for transferring the labelled information of an existing dataset to a new unseen (unlabelled) target domain for person re-id without any supervised learning in the target domain. Specifically, we introduce an Transferable Joint Attribute-Identity Deep Learning (TJ-AIDL) for simultaneously learning an attribute-semantic and identitydiscriminative feature representation space transferrable to any new (unseen) target domain for re-id tasks without the need for collecting new labelled training data from the target domain (i.e. unsupervised learning in the target domain). Extensive comparative evaluations validate the superiority of this new TJ-AIDL model for unsupervised person re-id over a wide range of state-of-the-art methods on four challenging benchmarks including VIPeR, PRID, Market-1501, and DukeMTMC-ReID.",
""
],
"cite_N": [
"@cite_36",
"@cite_29",
"@cite_54",
"@cite_9",
"@cite_32",
"@cite_49",
"@cite_5",
"@cite_25"
],
"mid": [
"2907197374",
"2963047834",
"",
"2963852441",
"2963289251",
"",
"2794651663",
""
]
} | Self-training with progressive augmentation for unsupervised cross-domain person re-identification * | Person re-identification (Re-ID) is a crucial task in surveillance and security, which aims to locate a target pedestrian across non-overlapping camera views using a probe image. With the advantages of convolutional neural networks (CNN), many person Re-ID works focus on supervised learning [12,29,37,3,46,2,4,18,28,5,24] and achieve satisfactory improvements. (* Work was done when X. Zhang was visiting The University of Adelaide. First two authors contributed to this work equally. C. Shen is the corresponding author: [email protected].) (Figure caption: Here we use Duke [43] as the source domain and Market-1501 [42] as the target domain.) Despite the great
success, they depend on large labelled datasets which are costly and sometime impossible to obtain. To tackle this problem, a few unsupervised learning methods [34,22,20] propose to take advantage of abundant unlabelled data, which are easier to collect in general. Unfortunately, due to lack of supervision information, the performance of unsupervised methods is typically weak, thus being less effective for practical usages. In contrast, unsupervised cross-domain methods [36,8,34,45,16,25,10,23,19,27] propose to use both labelled datasets (source domain) and unlabelled datasets (target domain). However, directly applying the models trained in the source domain to the target domain leads to unsatisfactory performances due to the inconsistent characteristics between the two domains, which is known as the domain shift problem [19]. In unsupervised cross-domain Re-ID, the problem becomes how to transfer the learned information of a pre-trained model from the source domain to the target domain effectively in an unsupervised manner.
Some domain transfer methods [45,16,25,10,23,19,27,22] have taken great efforts to address this challenge, where the majority are based on pseudo label estimation [10,27,23]. They extract embedding features of unlabelled target datasets from the pre-trained model and apply unsupervised clustering methods (e.g., k-means and DBSCAN [9]) to separate the data into different clusters. The samples in the same cluster are assumed to belong to the same person, which are adapted for training as in supervised learning. The drawback of these methods is that the performance highly depends on the clustering quality, reflecting on whether samples with the same identity are assigned to one cluster. In other words, performance relies on to what extent are the pseudo labels from clustering consistent with ground truth identity labels. Since the percentage of corrupted labels largely affect the model generalization on the target dataset [40], we propose a method to improve the quality of labels in a progressive way which results in considerable improvement of model generalization on the unseen target dataset.
Here we propose a new Self-Training with Progressive Augmentation framework (PAST) to: 1) restrain error amplification at early training epochs when the quality of pseudo label can be low; and 2) progressively incorporate more confidently labelled examples for self-training when the label quality is becoming better. PAST has two learning stages, i.e., conservative and promoting stage, which consider complementary data information via different learning strategies for self-training. Conservative Stage. As shown in Figure 1, the percentage of correctly labelled data is low at first due to the domain shift. In this scenario, we need to select confidently labelled examples to reduce label noise. We consider the similarity score between images as a good indicator of confidence measure. Beside the widely used clustering-based triplet loss (CTL) [15], which is sensitive to the quality of pseudo labels generated from clustering method, we propose a novel label-free loss function, ranking-based triplet loss (RTL), to better capture the characteristic of data distribution in the target domain.
Specifically, we calculate the ranking score matrix for the whole target dataset and generate triplets by selecting the positive and negative examples from the top η and (η, 2η] ranked images for each anchor. The triplets are then fed into the model and trained with the proposed RTL. In the conservative stage, we mainly consider the local structure of the data distribution, which is crucial for avoiding model collapse when the label quality is mediocre at early learning epochs. Promoting Stage. However, as the number of training triplets dramatically grows in large datasets and triplets only focus on local information, the learning process with triplet loss inevitably becomes unstable and is prone to local-optimal results, as shown by the "CTL" and "CTL+RTL" in Figure 1. To remedy this issue, we propose to use the global distribution of data points for network training at the promoting stage. That is, we treat each cluster as a class and convert the learning process into a classification problem. Softmax cross-entropy loss is used to force different categories to stay apart, encouraging inter-class separability. After the promoting stage, the model tends to be more stable, which facilitates learning discriminative features. Since the error is most likely amplified when training on images with extremely corrupted labels using the softmax cross-entropy loss, we employ this stage following the conservative learning stage and carry out the two stages alternately. With this alternate process, our proposed PAST framework can stabilize the training process and progressively improve the capability of model generalization on the target domain.
To summarize, our main contributions are as follows. 1) We present a novel self-training with progressive augmentation framework (PAST) to solve the unsupervised cross-domain person Re-ID problem. By executing the two-stage self-training process, namely, conducting the conservative and promoting stages alternately, our method considerably improves the model generalization on unlabelled target-domain datasets.
2) We propose a ranking-based triplet loss (RTL), solely relying on similarity scores of data points, to avoid selecting triplet samples using unreliable pseudo labels.
3) We take advantage of global data distribution for model training with softmax cross-entropy loss, which is beneficial for training stability and promoting the capability of model generalization.
4) Experimental results on three large-scale datasets indicate the effectiveness of our proposed method on the task of unsupervised cross-domain person Re-ID.
Our Method
For unsupervised cross-domain person Re-ID, the problem that we concentrate on is how to learn robust feature representations for unlabelled target datasets using the prior knowledge from the labelled source datasets. In this section, we present our proposed self-training with progressive augmentation framework (PAST) in detail.
Overview of Our Proposed Framework
The overall framework of our proposed self-training with progressive augmentation framework (PAST) is described in Figure 2. The framework is based on a deep neural network M trained on ImageNet [7], which contains two main components: conservative stage and promoting stage.
We first fine-tune the model M using labelled source training dataset S in a supervised manner. Then, this pre-trained model is utilized to extract features F on all training images in the target domain T , which are used as the input features of our framework. For the conservative stage, based on the ranking score matrix D R learned from the input features, we can generate a more reliable training set T U via the HDBSCAN [1] clustering method (other clustering methods can be employed here too). This updated training set T U is a subset of the whole training data T . Combining with two triplet-based loss functions, i.e., clustering-based triplet loss (CTL) and the proposed ranking-based triplet loss (RTL), local structure of the current updated training set can be captured for model optimization. After that, we can use the new model to extract features F U of the current training set T U . Next, in the promoting stage, with the new features F U from the conservative stage, we propose to employ softmax cross-entropy loss for further optimizing the network. At this stage, the global distribution of the training set is considered to improve the discrimination of feature representation. Finally, the capability of model generalization is improved gradually by training the network with the conservative stage and promoting stage alternately.
Conservative Stage
In the task of unsupervised domain adaptation, it is a natural goal to gather samples of the same identity together and push samples from different classes away from each other. Triplet loss [45,27,23] has been proved to be able to discover meaningful underlying local structure of the data distribution by generating reliable triplets of the target data. Different from the supervised setting, pseudo labels are assigned to unlabelled samples, which makes it more difficult to construct high-quality triplets. Therefore, our goal is to design a learning strategy that not only generates reliable samples but also improves the model performance.
In practice, we conduct the following procedure in the conservative stage. At the beginning, on the whole training dataset T = {x_1, x_2, ..., x_N}, we extract features F = {f(x_1), f(x_2), ..., f(x_N)} from the current model, and adopt the k-reciprocal encoding [44], which is a variation of the Jaccard distance between nearest-neighbor sets, to generate the distance matrix D as:

D = [D_J(x_1)\; D_J(x_2)\; \dots\; D_J(x_N)]^{T}, \quad D_J(x_i) = [d_J(x_i, x_1)\; d_J(x_i, x_2)\; \dots\; d_J(x_i, x_N)], \quad \forall i \in \{1, 2, \dots, N\}, \qquad (1)
where D J (x i ) represents the distance vector of one specific person x i with all training images. d J (x i , x j ) is the Jaccard distance between sample x i and x j .
According to the fact that a smaller distance reflects a higher similarity between two images, we sort every distance vector D_J(x_i) from the smallest value to the largest, yielding the ranking score matrix D_R as:

D_R = [D_R(x_1)\; D_R(x_2)\; \dots\; D_R(x_N)]^{T}, \quad D_R(x_i) = [d_J(x_i, x_1)\; d_J(x_i, x_2)\; \dots\; d_J(x_i, x_N)], \quad \forall i \in \{1, 2, \dots, N\}, \qquad (2)

where D_R(x_i) is D_J(x_i) sorted in ascending order. Given a specific sample x_i, the x_j in d_J(x_i, x_j) now denotes the j-th most similar sample to x_i.
Then, we apply a hierarchical density-based clustering algorithm (HDBSCAN) [1] on D R to split the whole training images into different clusters, which are considered as pseudo labels. After HDBSCAN, some images, not belonging to any clusters, are discarded. Thus, we use images with assigned labels as the updated training set T U for further model optimization.
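For illustration, here is a minimal Python sketch of this clustering step, assuming precomputed embedding features. The `hdbscan` package is assumed to be available; the paper's k-reciprocal/Jaccard re-ranking [44] is replaced by a plain Euclidean distance only to keep the sketch short, and the HDBSCAN settings are illustrative, not the paper's exact configuration.

```python
import numpy as np
import hdbscan
from scipy.spatial.distance import pdist, squareform

def pseudo_label(features, min_cluster_size=10):
    """Cluster target-domain features and return the kept indices, their
    pseudo labels, and the row-wise ranking matrix used later by RTL."""
    # Stand-in for the distance matrix D of Eq. (1); the paper uses
    # k-reciprocal encoding / Jaccard distances instead of Euclidean.
    D = squareform(pdist(features, metric="euclidean")).astype(np.float64)
    ranking = np.argsort(D, axis=1)          # sorted neighbour indices, cf. Eq. (2)

    clusterer = hdbscan.HDBSCAN(metric="precomputed",
                                min_cluster_size=min_cluster_size)
    labels = clusterer.fit_predict(D)

    keep = labels >= 0                        # label -1 marks images discarded from T_U
    return np.where(keep)[0], labels[keep], ranking
```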
We combine two types of triplet loss functions to update the model, i.e., the clustering-based triplet loss (CTL) and the ranking-based triplet loss (RTL), which differ in how triplets are selected as well as in how the model is optimized.
Clustering-based Triplet Loss (CTL). One loss function that we use is batch hard mining triplet loss [15], proposed to mine relations among samples within a mini-batch. We randomly sample P clusters and K instances in each cluster to compose a mini-batch with size of P K. For each anchor image x a , the corresponding hardest positive sample x p and the hardest negative sample x n within the batch are selected to form a triplet. Since the pseudo labels are from a clustering method, we rename this loss function as clustering-based triplet loss (CTL), which is formulated as,
L_{CTL} = \sum_{a=1}^{PK} \Big[ m + \|f(x_a) - f(x_p)\|_2 - \|f(x_a) - f(x_n)\|_2 \Big]_{+} = \sum_{i=1}^{P}\sum_{a=1}^{K} \Big[ m + \underbrace{\max_{p=1,\dots,K} \|f(x_{i,a}) - f(x_{i,p})\|_2}_{\text{hardest positive}} - \underbrace{\min_{\substack{j=1,\dots,P,\, j\neq i \\ n=1,\dots,K}} \|f(x_{i,a}) - f(x_{j,n})\|_2}_{\text{hardest negative}} \Big]_{+}, \qquad (3)

where x_{i,j} is a data point representing the j-th image of the i-th cluster in the batch, and f(x_{i,j}) is the feature vector of x_{i,j}.
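A minimal PyTorch sketch of this batch-hard loss on a P×K mini-batch follows; the function name is ours and the margin value is only an example.

```python
import torch

def batch_hard_triplet_loss(features, pseudo_labels, margin=0.3):
    """Batch-hard triplet loss of Eq. (3) over a P*K mini-batch.
    `features`: (P*K, d) embeddings, `pseudo_labels`: (P*K,) cluster ids,
    sampled so that each of the P clusters contributes K images."""
    dist = torch.cdist(features, features, p=2)                 # pairwise L2 distances
    same = pseudo_labels.unsqueeze(0) == pseudo_labels.unsqueeze(1)

    # Hardest positive: furthest sample sharing the anchor's pseudo label.
    d_ap = dist.masked_fill(~same, float("-inf")).max(dim=1).values
    # Hardest negative: closest sample with a different pseudo label.
    d_an = dist.masked_fill(same, float("inf")).min(dim=1).values

    # Mean over anchors of the hinge [m + d_ap - d_an]_+.
    return torch.relu(margin + d_ap - d_an).mean()
```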
Ranking-based Triplet Loss (RTL). However, it is clear that the effect of CTL highly depends on the quality of label estimation, and it is hard to decide whether the clustering result is correct or not. Therefore, we propose a Ranking-based Triplet Loss (RTL) to make full use of the ranking score matrix D_R. It is a label-free method reflecting the relation between data pairs. Specifically, given a training anchor x_a, the positive sample x_p is randomly selected from the top η nearest neighbors according to the ranking score vector D_R(x_a), and the negative sample x_n is selected from positions (η, 2η]. In addition, instead of the hard margin in CTL, we introduce a soft margin based on the relative ranking positions of x_p and x_n, which can adapt well to different scales of intra-class variation. The formula of RTL is given as
L_{RTL} = \sum_{a=1}^{PK} \Big[ \frac{|P_p - P_n|}{\eta}\, m + \|f(x_a) - f(x_p)\|_2 - \|f(x_a) - f(x_n)\|_2 \Big]_{+}, \qquad (4)
where the selected anchors in each batch are the same as in CTL, m is the same hard margin as in Eq. (3), η is the maximum ranking position for positive sample selection, and P_p and P_n are the ranking positions of x_p and x_n with respect to x_a. To summarize, we optimize the network using the combination of CTL and RTL to better capture the local-constraint information of the data distribution. Our final triplet-based loss function in the conservative stage is shown in Eq. (5):

L_C = L_{RTL} + \lambda L_{CTL}, \qquad (5)
where λ is the loss weight that trades off the influence of the two loss functions. Experiments show that this combined triplet-based loss function clearly improves the capability of model representation.
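As an illustration of the RTL sampling and soft margin of Eq. (4), here is a rough PyTorch sketch; it assumes the `ranking` matrix produced in the clustering sketch above, and the helper names, the per-anchor loop, and the choice of η are ours.

```python
import torch

def ranking_based_triplet_loss(features, ranking, anchors, eta=20, margin=0.3):
    """Sketch of RTL (Eq. 4): the positive is drawn from the anchor's top-eta
    neighbours, the negative from positions (eta, 2*eta], and the margin is
    scaled by |P_p - P_n| / eta."""
    losses = []
    for a in anchors:
        p_pos = int(torch.randint(1, eta + 1, (1,)))          # rank 0 is the anchor itself
        p_neg = int(torch.randint(eta + 1, 2 * eta + 1, (1,)))
        pos, neg = ranking[a, p_pos], ranking[a, p_neg]
        soft_margin = abs(p_pos - p_neg) / eta * margin
        d_ap = torch.norm(features[a] - features[pos], p=2)
        d_an = torch.norm(features[a] - features[neg], p=2)
        losses.append(torch.relu(soft_margin + d_ap - d_an))
    return torch.stack(losses).mean()

# Combined conservative-stage objective of Eq. (5):
# loss_C = ranking_based_triplet_loss(...) + lam * batch_hard_triplet_loss(...)
```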
Promoting Stage
Nevertheless, since triplet-based loss functions only focus on the data relations within each triplet, the model is prone to instability and likely to get stuck in a suboptimal local minimum. To alleviate this problem, we propose to apply a classification loss to further improve model generalization by taking advantage of the global information of the training samples. In the promoting stage, a fully-connected layer is added at the end of the model as a classifier layer, which is initialized according to the features of the current training set. Softmax cross-entropy loss is used as the objective function, which is formulated as:
L_P = -\sum_{i=1}^{PK} \log \frac{e^{W_{\hat{y}_i}^{T} x_i}}{\sum_{c=1}^{C} e^{W_c^{T} x_i}}, \qquad (6)
where ŷ_i is the pseudo label of the sample x_i and C is the number of clusters obtained from the HDBSCAN clustering method on the updated training set T_U. Feature-based Weight Initialization for Classifier. Due to the variation of the cluster number C, the newly added classifier layer CL has to be re-initialized every time HDBSCAN is executed. Instead of random initialization, we exploit the mean feature of each cluster as the initial parameters. Specifically, for each cluster c, we calculate the mean feature F_c by averaging all the embedding features of its elements. The parameters W of CL are then initialized as

W_c = F_c, \quad c \in \{1, 2, \dots, C\}, \qquad (7)

where W ∈ R^{d×C}, W_c is the c-th column of W, and d is the feature dimensionality. An advantage of this initialization is that we can use the previous information to avoid the fluctuation of accuracy caused by random initialization, which is useful for the convergence of model training.
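A short PyTorch sketch of this feature-based initialization and the promoting-stage loss is given below; `nn.Linear` stores W transposed, so each cluster mean is written into one row of the weight, and all function names are ours.

```python
import torch
import torch.nn.functional as F
from torch import nn

def build_classifier(features, pseudo_labels, feat_dim):
    """Promoting-stage classifier whose c-th weight vector is initialised
    with the mean feature of cluster c, as in Eq. (7)."""
    num_clusters = int(pseudo_labels.max()) + 1
    classifier = nn.Linear(feat_dim, num_clusters, bias=False)
    with torch.no_grad():
        for c in range(num_clusters):
            classifier.weight[c] = features[pseudo_labels == c].mean(dim=0)
    return classifier

def promoting_loss(classifier, batch_features, batch_pseudo_labels):
    """Softmax cross-entropy of Eq. (6); labels must be integer (long) ids."""
    logits = classifier(batch_features)
    return F.cross_entropy(logits, batch_pseudo_labels)
```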
Alternate Training
The learning process is expected to progressively improve the model's capability of generalization, which prevents the model from falling into a local optimum. In this paper, we carefully develop a simple yet effective self-training strategy which can capture both the local structure and the global information of the training images. That is, the conservative stage and the promoting stage are conducted alternately. At the beginning, the model is trained only using the local relations between data points, so that the error amplification brought by the softmax loss can be prevented. After several training steps in the conservative stage, the model representation and the quality of the clusters become more trustworthy. The model capability is then further augmented using the softmax cross-entropy loss in the promoting stage, and the updated model is used as the initial state for the next conservative stage. As the training goes on, model generalization is improved, allowing the model to learn more discriminative feature representations of the training images. The details of this two-stage alternate self-training are included in Algorithm 1. We also show one visual example of this alternate self-training process in Figure 3, which indicates that our proposed PAST framework is also useful for refining the quality of clusters. Figure 3 - The alternate self-training process of our PAST framework on one visual example (conservative stage, iterations 1 to 4). All images belong to the same person in the ground truth. Samples with the same color are assigned to the same pseudo label generated by the HDBSCAN clustering method; gray indicates a sample that does not belong to any cluster and is not used for model training. From training iteration 1 to iteration 4, more samples are selected for training, and at the same time the pseudo labels become more reliable.
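To make the alternation concrete, here is a high-level Python sketch of the loop described above. The helpers `extract_features`, `train_conservative`, and `train_promoting` are assumed placeholders, while `pseudo_label` and `build_classifier` follow the earlier sketches; this is an outline under those assumptions, not the paper's Algorithm 1 verbatim.

```python
def train_past(model, target_loader, num_iterations=4):
    """High-level sketch of the alternate self-training of PAST."""
    for it in range(num_iterations):
        # --- Conservative stage: cluster, then optimise with CTL + RTL -----
        feats = extract_features(model, target_loader)          # assumed helper
        keep, labels, ranking = pseudo_label(feats)              # updated training set T_U
        train_conservative(model, keep, labels, ranking)         # minimises Eq. (5)

        # --- Promoting stage: classification over the current clusters -----
        feats_u = extract_features(model, target_loader, subset=keep)
        classifier = build_classifier(feats_u, labels, feats_u.shape[1])
        train_promoting(model, classifier, keep, labels)         # minimises Eq. (6)
    return model
```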
Experiments
We evaluate our unsupervised self-training method on cross-domain person Re-ID tasks. Three common large-scale person Re-ID datasets are used: Market-1501 [42], DukeMTMC-Re-ID [43], and CUHK03 [17].
Market-1501 [42] contains 32,668 labelled images of 1,501 identities taken by 6 cameras, which are detected and cropped via Deformable Part Model (DPM) [11]. The dataset is split into training set with 12,936 images of 751 identities and test set with 19,732 images of 750 identities.
DukeMTMC-Re-ID [43] consists of 36,411 labelled images belonging to 1,404 identities observed by 8 camera views. Following the format of the Market-1501 dataset, it has 16,522 images of 702 identities in the training set and the remaining 19,889 images of 702 identities in the test set. Hereafter, Duke refers to this dataset.
CUHK03 [17] is composed of 14,096 images from 1,467 identities captured by 2 cameras. This dataset was constructed with both manual labelling and DPM. In this work, we experiment on the images detected using DPM. To be consistent with the protocols of Market-1501 and Duke, the new train/test evaluation protocol [44] is used: 7,365 images of 767 identities for training and the remaining 6,732 images of 700 identities for testing.
Implementation Details
Model and Preprocessing. We adopt PCB [29] as our model structure, in which ResNet-50 [14] without the last classification layer is used as the backbone model. Similar to EANet [16], we use 9 regions for feature representation. Instead of using part-aligned pooling [16], we simply use evenly divided parts as in PCB for simplification. The dimension of each embedding layer is set to 256. Following each embedding layer, we also implement the classifier layer with one fully connected layer in the promoting stage. The classifier output size changes according to the number of clusters generated by the HDBSCAN clustering process.
All input images are resized to 384×128×3. Note that we only apply random flipping as data augmentation.
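A minimal torchvision sketch of this preprocessing is shown below; the normalisation statistics are the usual ImageNet values and are an assumption on our part, not stated in the text.

```python
from torchvision import transforms

# Resize to 384x128 and use only random horizontal flipping as augmentation.
train_transform = transforms.Compose([
    transforms.Resize((384, 128)),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # assumed ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])
```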
Training Settings. We use the SGD optimizer with a momentum of 0.9 and a weight decay of 5 × 10^-4 to train the model. Unless otherwise specified, in all experiments we set the batch size to 64 and the number of iteration steps to 4. Instead of directly using the same learning rates for both the conservative and the promoting stage, we believe that individually setting specialized learning rates works better for our PAST framework. The reason is that the parameters from the conservative stage should be updated more slowly in the promoting stage to avoid the error amplification caused by the softmax cross-entropy loss. Specifically, the learning rate is initialized to 10^-4 for fine-tuned layers and 2 × 10^-4 for embedding layers in the conservative stage, while in the promoting stage the newly added classifier layers use an initial learning rate of 10^-3 and all other layers 5 × 10^-5. After 3 iterations, all learning rates are multiplied by 0.1. The margin hyper-parameter m is set to 0.3 in both Eq. (3) and Eq. (4).
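The per-group learning rates above could be set up roughly as in the following sketch; the attribute names `model.backbone` and `model.embeddings` are placeholders for the fine-tuned and embedding layers, not names from the paper's code.

```python
import torch

def conservative_optimizer(model):
    # Conservative stage: 1e-4 on fine-tuned layers, 2e-4 on embedding layers.
    return torch.optim.SGD(
        [{"params": model.backbone.parameters(),   "lr": 1e-4},
         {"params": model.embeddings.parameters(), "lr": 2e-4}],
        momentum=0.9, weight_decay=5e-4)

def promoting_optimizer(model, classifier):
    # Promoting stage: 5e-5 on all existing layers, 1e-3 on the new classifier.
    return torch.optim.SGD(
        [{"params": model.parameters(),      "lr": 5e-5},
         {"params": classifier.parameters(), "lr": 1e-3}],
        momentum=0.9, weight_decay=5e-4)

# After 3 iterations all learning rates are multiplied by 0.1, e.g. via
# torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[3], gamma=0.1).
```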
Evaluating Settings. For performance evaluation, feature vectors from embedding layers of 9 parts are normalized separately and then concatenated as the output representation. Given a query image, we calculate cosine distance with all gallery images and then sort it as final ranking result. We utilize the Cumulated Matching Characteristics (CMC) [13] and mean Average Precision (mAP) [42] as the performance evaluation measures. CMC curve shows the probability that a query appears in different size of candidate lists. As for mAP, given a single query, the Average Precision (AP) is computed from the area under its precision-recall curve. The mAP is then calculated as the mean value of AP across all queries. Note that single-shot setting is adopted similar to [29] in all experiments. Table 1 -The effectiveness of conservative stage and promoting stage in our proposed Self-training with Progressive Augmentation Framework (PAST). D→M represents that we use Duke [43] as source domain and Market-1501 [42] as target domain. * denotes that the results are produced by us. DT means Direct Transfer from PCB with 9 regions. R means applying k-reciprocal encoding method [44]. CTL represents clustering-based triplet loss [15], while RTL is our proposed rankingbased triplet loss. Our PAST framework consists of conservative stage and promoting stage that are denoted by C and P respectively.
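For illustration, a small PyTorch sketch of the feature concatenation and cosine-distance ranking used at test time follows; the function names are ours, and CMC/mAP computation is omitted for brevity.

```python
import torch
import torch.nn.functional as F

def concat_part_features(part_feats):
    """L2-normalise each of the 9 part embeddings separately, then concatenate."""
    return torch.cat([F.normalize(f, dim=1) for f in part_feats], dim=1)

def rank_gallery(query_feat, gallery_feats):
    """Return gallery indices sorted by ascending cosine distance to the query."""
    q = F.normalize(query_feat, dim=0)
    g = F.normalize(gallery_feats, dim=1)
    cos_dist = 1.0 - g @ q                  # cosine distance = 1 - cosine similarity
    return torch.argsort(cos_dist)
```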
Ablation Study
In this subsection, we aim to thoroughly analyse the effectiveness of each component in our PAST framework.
Effectiveness of the Conservative Stage. As shown in Table 1, we conduct several experiments to verify the effectiveness of the individual components CTL and RTL and the combination of these two triplet loss functions on the M→D and D→M tasks. First, with CTL only, we improve the Rank-1 accuracy by 18.49% and 12.14% compared with the results of the k-reciprocal encoding method [44] on M→D and D→M, respectively. Second, we observe that using only our proposed RTL, the Rank-1 accuracy and mAP increase by 21% and 12.64% for M→D, and by 12.91% and 5.69% on D→M. This obvious improvement shows that both CTL and RTL are useful for increasing model generalization, with CTL obtaining slightly lower performance than RTL. Then, as described in Eq. (5), we combine CTL and RTL to jointly optimize the model in our conservative stage. It is clear that we achieve better results on both M→D and D→M. Especially for D→M, we gain 2.38% and 4.42% in Rank-1 and mAP compared to only using CTL, which shows the significant benefit of our RTL. Through this conservative stage, we can learn a relatively powerful model for the target domain.
Effectiveness of the Promoting Stage. However, as illustrated in Figure 1, there are no further gains even with more training iterations when only using the triplet-based loss functions. We believe this is because, during the conservative stage, the model only sees the local structure of the data distribution brought by the triplet samples. Thus, in our PAST framework, we employ the softmax cross-entropy loss as the objective function in the promoting stage and train the model with the conservative stage alternately. Referring to Table 1 again, compared with only using the conservative stage, our PAST can further improve mAP and Rank-1 by 2.21% and 0.72% on the M→D task, and by 4.03% and 4.12% for D→M. Meanwhile, as shown in Figure 3, the quality of the clusters is also improved with our PAST framework. This shows that the promoting stage plays an important role in model generalization. Through the above experiments, the different components of our PAST have been evaluated and verified. We show that our PAST framework is not only beneficial for improving model generalization but also for refining clustering quality.
Comparison with Different Clustering Methods. We evaluate three different clustering methods, i.e., k-means, DBSCAN [9], and HDBSCAN [1], in the conservative stage. The performance of these clustering methods under different settings is reported in Table 2. For k-means, the number of cluster centroids k is set to 702 and 751 on the target data of Market-1501 and Duke, respectively, which is the same as the number of identities in the source training data. It is clear that HDBSCAN performs better than k-means and DBSCAN, whether only the conservative stage or the whole PAST framework is used. For instance, using HDBSCAN achieves 54.26% mAP and 72.35% Rank-1 for the M→D task in the PAST framework, which are 4.29% and 3.41% higher than using k-means, and 1.19% and 0.45% higher than using DBSCAN. In addition, we also observe that whatever clustering method we use, our PAST framework always outperforms only using the conservative stage. This means that, on the one hand, the HDBSCAN clustering method is more effective in our framework; on the other hand, our PAST framework indeed improves the feature representation on the target domain.
Comparison with State-of-the-art Methods
Following the evaluation setting in [16,45], we compare our proposed PAST framework with state-of-the-art unsupervised cross-domain methods in Table 3. It can be seen that, only using the conservative stage with CTL and RTL for training, the performance is already competitive with other cross-domain adaptive methods. For example, although EANet [16] proposes complex part-aligned pooling and combines pose segmentation to provide more information for adaptation, our conservative stage still outperforms it by 3.93% in Rank-1 and 4.05% in mAP when testing on M→D. Moreover, our PAST framework surpasses all previous methods by a large margin, achieving 54.26%, 79.48%, 69.88% in Rank-1 accuracy for M→D, D→M, C→M, C→D. We can also show that it is useful to alternately apply the conservative and promoting stages by comparing the last two rows in Table 3. In particular, our PAST improves Rank-1 and mAP by 4.71% and 5.21% for C→D compared with only using the conservative stage.
Parameter Analysis
Besides, we conduct additional experiments to evaluate the parameter sensitivity.
Analysis of the Loss Weight λ. λ is a hyper-parameter used to trade off the effect of the ranking-based triplet loss (RTL) and the clustering-based triplet loss (CTL). We evaluate the impact of λ, which is sampled from {0.1, 0.2, 0.5, 1.0, 2.0}, on the task of D→M. The results are shown in Figure 4 (a). We observe that the best result is obtained when λ is set to 0.5. Note that an overly large or small λ limits the performance improvement.
Analysis of the Minimum Samples S_min. In addition, we analyse how the minimum number of samples (S_min) per cluster in HDBSCAN clustering affects the Re-ID results. We test the impact of {5, 10, 15, 20} minimum samples on the performance of our PAST framework in the D→M setting. As shown in Figure 4 (b), setting S_min to 10 yields superior accuracy. Meanwhile, different values of S_min lead to large variance in the final number of pseudo identities from HDBSCAN. We believe this is because samples from the same class will be separated into several clusters when S_min is too small, while low-density classes will be abandoned if S_min is too large. This can be verified from Figure 4 (c): the number of identities from HDBSCAN with S_min = 10 is 625, which is the closest to the true value of 751 identities in the Market-1501 training set.
Conclusion
In this paper, we have presented a self-training with progressive augmentation framework (PAST) for unsupervised cross-domain person re-identification. Our PAST consists of two different stages, i.e., the conservative and the promoting stage, which are adopted alternately and offer complementary information to each other. Specifically, the conservative stage mainly captures local information with triplet-based loss functions, while the promoting stage is used for extracting global information. To alleviate the dependence on clustering quality, we also propose a novel label-free ranking-based triplet loss. With the proposed methods, model generalization gains a significant improvement, as does the capability of feature representation on the target domain. Extensive experiments show that our PAST outperforms the state-of-the-art unsupervised cross-domain algorithms by a large margin.
We plan to extend our work to other unsupervised cross-domain applications, such as face recognition and image retrieval tasks.
More Qualitative Analyses
Qualitative Analysis of the Feature Representation. To demonstrate the results intuitively, we visualize the feature embeddings calculated by our PAST framework in 2-D using t-SNE [31]. As illustrated in Figure 5, images belonging to the same identity are mostly gathered together, while those from different classes usually stay apart from each other. This implies that our PAST framework can improve the capability of model generalization, which is beneficial for learning discriminative feature representations on the target-domain dataset.
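A generic sketch of such a visualisation is given below; it uses scikit-learn's t-SNE and matplotlib, the function name is ours, and `identities` is assumed to be an array of integer identity labels.

```python
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_embeddings(features, identities):
    """Project Re-ID embeddings to 2-D with t-SNE and colour points by identity."""
    xy = TSNE(n_components=2, init="pca", random_state=0).fit_transform(features)
    plt.scatter(xy[:, 0], xy[:, 1], c=identities, s=5, cmap="tab20")
    plt.axis("off")
    plt.show()
```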
Qualitative Analysis of the Triplet Selection. In Figure 6, we visualize the triplet samples generated in the conservative stage for CTL and RTL, respectively. We summarize the main advantages of the proposed PAST method in the following.
1. The proposed PAST algorithm can significantly improve the quality of the clustering assignments during training. As shown in the first row of iterations 1 to 4, the images assigned to the same class by the proposed method tend to be more and more similar. On the other hand, the quality of the pseudo labels assigned to each image is steadily improved during training. It means that our PAST framework is beneficial for learning discriminative feature representations and can assign more reliable pseudo labels to the target images. The accurate pseudo labels can then be used in the promoting stage to further improve model generalization. 2. RTL is useful for remedying the variance caused by CTL. Referring to Figure 6 again, we can observe that the third cluster in iteration 2 is noisy and the triplets selected by CTL are not faithful. However, RTL can select correct positive samples even when the cluster is dirty. We believe the reason is that RTL only depends on the similarity ranking matrix and the top η similar images are used for generating positive samples, which is more reliable when the feature representation is not yet very discriminative. 3. RTL helps to further optimize the network, especially in the later iterations. From Figure 6, we can also see that different clusters in one mini-batch may look very different due to distinctive clothing colors, which results in extremely easy negative samples and slows down the optimization when training with CTL. In contrast, for the triplets generated by RTL, the negative images are extremely similar to the anchors and are hard to recognize correctly even for human beings. For example, in the second column of iteration 4, all images look like one person, although the images from the first two rows are the same person, while those from the third row belong to another person.
(Figure legend: true positive, false positive, false negative.) Figure 5 - Qualitative analysis of the feature representation using t-SNE [31] visualization on a subset of the Market-1501 [42] training data. According to the clustering result, we choose the Top-50 identities that contain the largest number of images. Points with the same color have the same (ground-truth) identity. The green circle marks images from the same identity that are gathered together, forming an extremely reliable cluster. Images in the orange circle are from the same identity, yet they are clustered into two different classes; we can see that, due to the camera style, images from the two classes have different appearances. In the red circle, although our algorithm may gather images from different (ground-truth) identities into the same cluster, these images usually share very similar appearances and are hard to distinguish from each other. For instance, every image in the red circle contains one person with white clothes and a black bicycle. Figure 6 - Quality of the triplet selection over training iterations. Images from different clusters are divided by yellow lines. A red line means the generated triplets are not completely correct, while a green line means the generated triplets are completely correct. The solid and dashed lines are for triplets generated from CTL and RTL, respectively. We use Duke [43] as the source domain and Market-1501 [42] as the target domain. | 5,387 |
1907.13315 | 2966804802 | Person re-identification (Re-ID) has achieved great improvement with deep learning and a large amount of labelled training data. However, it remains a challenging task for adapting a model trained in a source domain of labelled data to a target domain of only unlabelled data available. In this work, we develop a self-training method with progressive augmentation framework (PAST) to promote the model performance progressively on the target dataset. Specifically, our PAST framework consists of two stages, namely, conservative stage and promoting stage. The conservative stage captures the local structure of target-domain data points with triplet-based loss functions, leading to improved feature representations. The promoting stage continuously optimizes the network by appending a changeable classification layer to the last layer of the model, enabling the use of global information about the data distribution. Importantly, we propose a new self-training strategy that progressively augments the model capability by adopting conservative and promoting stages alternately. Furthermore, to improve the reliability of selected triplet samples, we introduce a ranking-based triplet loss in the conservative stage, which is a label-free objective function based on the similarities between data pairs. Experiments demonstrate that the proposed method achieves state-of-the-art person Re-ID performance under the unsupervised cross-domain setting. Code is available at: this https URL | Beyond the above methods, some approaches @cite_41 @cite_57 @cite_49 focus on estimating pseudo identity labels on the target domain so as to learn deep models in a supervised manner. Usually, clustering methods are applied in the feature space to generate a series of clusters, which are then used to update networks with an embedding loss (e.g., triplet loss @cite_58 or contrastive loss) @cite_45 @cite_49 or a classification loss (e.g., softmax cross-entropy loss) @cite_41. However, embedding loss functions suffer from sub-optimal results and slow convergence, while classification loss heavily depends on the quality of the pseudo labels. While the work in @cite_0 introduces a simple domain adaptation framework that also uses triplet loss and softmax cross-entropy loss jointly, it aims at solving the one-shot learning problem. | {
"abstract": [
"The superiority of deeply learned pedestrian representations has been reported in very recent literature of person re-identification (re-ID). In this article, we consider the more pragmatic issue of learning a deep feature with no or only a few labels. We propose a progressive unsupervised learning (PUL) method to transfer pretrained deep representations to unseen domains. Our method is easy to implement and can be viewed as an effective baseline for unsupervised re-ID feature learning. Specifically, PUL iterates between (1) pedestrian clustering and (2) fine-tuning of the convolutional neural network (CNN) to improve the initialization model trained on the irrelevant labeled dataset. Since the clustering results can be very noisy, we add a selection operation between the clustering and fine-tuning. At the beginning, when the model is weak, CNN is fine-tuned on a small amount of reliable examples that locate near to cluster centroids in the feature space. As the model becomes stronger, in subsequent iterations, more images are being adaptively selected as CNN training samples. Progressively, pedestrian clustering and the CNN model are improved simultaneously until algorithm convergence. This process is naturally formulated as self-paced learning. We then point out promising directions that may lead to further improvement. Extensive experiments on three large-scale re-ID datasets demonstrate that PUL outputs discriminative features that improve the re-ID accuracy. Our code has been released at https: github.com hehefan Unsupervised-Person-Re-identification-Clustering-and-Fine-tuning.",
"",
"Deep metric learning aims to learn a function mapping image pixels to embedding feature vectors that model the similarity between images. The majority of current approaches are non-parametric, learning the metric space directly through the supervision of similar (pairs) or relatively similar (triplets) sets of images. A difficult challenge for training these approaches is mining informative samples of images as the metric space is learned with only the local context present within a single mini-batch. Alternative approaches use parametric metric learning to eliminate the need for sampling through supervision of images to proxies. Although this simplifies optimization, such proxy-based approaches have lagged behind in performance. In this work, we demonstrate that a standard classification network can be transformed into a variant of proxy-based metric learning that is competitive against non-parametric approaches across a wide variety of image retrieval tasks. We address key challenges in proxy-based metric learning such as performance under extreme classification and describe techniques to stabilize and learn higher dimensional embeddings. We evaluate our approach on the CAR-196, CUB-200-2011, Stanford Online Product, and In-Shop datasets for image retrieval and clustering. Finally, we show that our softmax classification approach can learn high-dimensional binary embeddings that achieve new state-of-the-art performance on all datasets evaluated with a memory footprint that is the same or smaller than competing approaches.",
"We study the problem of unsupervised domain adaptive re-identification (re-ID) which is an active topic in computer vision but lacks a theoretical foundation. We first extend existing unsupervised domain adaptive classification theories to re-ID tasks. Concretely, we introduce some assumptions on the extracted feature space and then derive several loss functions guided by these assumptions. To optimize them, a novel self-training scheme for unsupervised domain adaptive re-ID tasks is proposed. It iteratively makes guesses for unlabeled target data based on an encoder and trains the encoder based on the guessed labels. Extensive experiments on unsupervised domain adaptive person re-ID and vehicle re-ID tasks with comparisons to the state-of-the-arts confirm the effectiveness of the proposed theories and self-training framework. Our code is available at this https URL .",
"",
"In the past few years, the field of computer vision has gone through a revolution fueled mainly by the advent of large datasets and the adoption of deep convolutional neural networks for end-to-end learning. The person re-identification subfield is no exception to this. Unfortunately, a prevailing belief in the community seems to be that the triplet loss is inferior to using surrogate losses (classification, verification) followed by a separate metric learning step. We show that, for models trained from scratch as well as pretrained ones, using a variant of the triplet loss to perform end-to-end deep metric learning outperforms most other published methods by a large margin."
],
"cite_N": [
"@cite_41",
"@cite_57",
"@cite_0",
"@cite_45",
"@cite_49",
"@cite_58"
],
"mid": [
"2963975998",
"",
"2903034054",
"2884197239",
"",
"2598634450"
]
} | Self-training with progressive augmentation for unsupervised cross-domain person re-identification * | Person re-identification (Re-ID) is a crucial task in surveillance and security, which aims to locate a target pedestrian across non-overlapping camera views using a probe image. With the advantages of convolutional neural networks (CNN), many person Re-ID works focus on supervised learning [12,29,37,3,46,2,4,18,28,5,24] and achieve satisfactory improvements. Despite the great * Work was done when X. Zhang was visiting The University of Adelaide. First two authors contributed to this work equally. C. Shen is the corresponding author: [email protected] Here we use Duke [43] as the source domain and Market-1501 [42] as the target domain.
success, they depend on large labelled datasets which are costly and sometime impossible to obtain. To tackle this problem, a few unsupervised learning methods [34,22,20] propose to take advantage of abundant unlabelled data, which are easier to collect in general. Unfortunately, due to lack of supervision information, the performance of unsupervised methods is typically weak, thus being less effective for practical usages. In contrast, unsupervised cross-domain methods [36,8,34,45,16,25,10,23,19,27] propose to use both labelled datasets (source domain) and unlabelled datasets (target domain). However, directly applying the models trained in the source domain to the target domain leads to unsatisfactory performances due to the inconsistent characteristics between the two domains, which is known as the domain shift problem [19]. In unsupervised cross-domain Re-ID, the problem becomes how to transfer the learned information of a pre-trained model from the source domain to the target domain effectively in an unsupervised manner.
Some domain transfer methods [45,16,25,10,23,19,27,22] have taken great efforts to address this challenge, where the majority are based on pseudo label estimation [10,27,23]. They extract embedding features of unlabelled target datasets from the pre-trained model and apply unsupervised clustering methods (e.g., k-means and DBSCAN [9]) to separate the data into different clusters. The samples in the same cluster are assumed to belong to the same person, which are adapted for training as in supervised learning. The drawback of these methods is that the performance highly depends on the clustering quality, reflecting on whether samples with the same identity are assigned to one cluster. In other words, performance relies on to what extent are the pseudo labels from clustering consistent with ground truth identity labels. Since the percentage of corrupted labels largely affect the model generalization on the target dataset [40], we propose a method to improve the quality of labels in a progressive way which results in considerable improvement of model generalization on the unseen target dataset.
Here we propose a new Self-Training with Progressive Augmentation framework (PAST) to: 1) restrain error amplification at early training epochs when the quality of pseudo label can be low; and 2) progressively incorporate more confidently labelled examples for self-training when the label quality is becoming better. PAST has two learning stages, i.e., conservative and promoting stage, which consider complementary data information via different learning strategies for self-training. Conservative Stage. As shown in Figure 1, the percentage of correctly labelled data is low at first due to the domain shift. In this scenario, we need to select confidently labelled examples to reduce label noise. We consider the similarity score between images as a good indicator of confidence measure. Beside the widely used clustering-based triplet loss (CTL) [15], which is sensitive to the quality of pseudo labels generated from clustering method, we propose a novel label-free loss function, ranking-based triplet loss (RTL), to better capture the characteristic of data distribution in the target domain.
Specifically, we calculate the ranking score matrix for the whole target dataset and generate triplets by selecting the positive and negative examples from the top η and (η, 2η] ranked images for each anchor. The triplets are then fed into the model and trained with the proposed RTL. In the conservative stage, we mainly consider the local structure of data distribution which is crucial for avoiding model collapse when the label quality is mediocre at early learning epochs. Promoting Stage. However, as the number of training triplets dramatically grows in large datasets and triplets only focus on local information, the learning process with triplet loss inevitably becomes instability and suffers from the local-optimal result, as shown by the "CTL" and "CTL+RTL" in Figure 1. To remedy this issue, we propose to use the global distribution of data points for network training at the promoting stage. That is, we treat each cluster as a class and convert the learning process into a classification problem. Softmax cross-entropy loss is used to force different categories staying apart for encouraging inter-class separability. After the promoting stage, the model is prone to be more stable which facilitates learning the discriminative features. Since the error is most likely amplified when training on images with extremely corrupted labels using the softmax cross-entropy loss, we employ this stage following the conservative learning stage and carry out two stages interchangeably. With this alternate process, our proposed PAST framework can stabilize the training process and progressively improve the capability of model generalization on the target domain.
To summarize, our main contributions are as follows. 1) We present a novel self-training with progressive augmentation framework (PAST) to solve the unsupervised cross-domain person Re-ID problem. By executing the twostage self-training process, namely, conducting conservative and promoting stage alternately, our method considerably improve the model generalization on unlabelled targetdomain datasets.
2) We propose a ranking-based triplet loss (RTL), solely relying on similarity scores of data points, to avoid selecting triplet samples using unreliable pseudo labels.
3) We take advantage of global data distribution for model training with softmax cross-entropy loss, which is beneficial for training stability and promoting the capability of model generalization.
4) Experimental results on three large-scale datasets indicate the effectiveness of our proposed method on the task of unsupervised cross-domain person Re-ID.
Our Method
For unsupervised cross-domain person Re-ID, the problem that we concentrate on is how to learn robust feature representations for unlabelled target datasets using the prior knowledge from the labelled source datasets. In this section, we present our proposed self-training with progressive augmentation framework (PAST) in detail.
Overview of Our Proposed Framework
The overall framework of our proposed self-training with progressive augmentation framework (PAST) is described in Figure 2. The framework is based on a deep neural network M trained on ImageNet [7], which contains two main components: conservative stage and promoting stage.
We first fine-tune the model M on the labelled source training dataset S in a supervised manner. Then, this pre-trained model is utilized to extract features F of all training images in the target domain T, which are used as the input features of our framework. For the conservative stage, based on the ranking score matrix D_R computed from the input features, we generate a more reliable training set T_U via the HDBSCAN [1] clustering method (other clustering methods can be employed here too). This updated training set T_U is a subset of the whole training data T. Combined with two triplet-based loss functions, i.e., the clustering-based triplet loss (CTL) and the proposed ranking-based triplet loss (RTL), the local structure of the current updated training set can be captured for model optimization. After that, we use the new model to extract features F_U of the current training set T_U. Next, in the promoting stage, with the new features F_U from the conservative stage, we employ softmax cross-entropy loss to further optimize the network. At this stage, the global distribution of the training set is considered to improve the discrimination of the feature representation. Finally, the capability of model generalization is improved gradually by training the network with the conservative stage and the promoting stage alternately.
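To make the overall flow concrete, the following is a minimal, hypothetical sketch of the pipeline described above; every helper function (pretrain_on_source, extract_features, build_ranking_matrix, and so on) is a placeholder standing in for the corresponding component of Figure 2, not code released by the authors.

```python
# Hypothetical outline of the PAST pipeline; the helpers are placeholders.

def past(model, source_data, target_data, num_iterations=4):
    model = pretrain_on_source(model, source_data)            # supervised warm-up on S

    for _ in range(num_iterations):
        # Conservative stage: ranking matrix -> HDBSCAN pseudo labels -> CTL + RTL.
        feats = extract_features(model, target_data)           # F
        ranking = build_ranking_matrix(feats)                   # D_R
        subset, labels = cluster_with_hdbscan(target_data, ranking)   # T_U
        model = train_conservative(model, subset, labels, ranking)

        # Promoting stage: one class per cluster, softmax cross-entropy,
        # classifier re-initialised from per-cluster mean features.
        feats_u = extract_features(model, subset)               # F_U
        classifier = init_classifier_from_means(feats_u, labels)
        model = train_promoting(model, classifier, subset, labels)

    return model
```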
Conservative Stage
In the task of unsupervised domain adaptation, it is a natural goal to gather samples of the same identity together and push samples from different classes away from each other. Triplet loss [45,27,23] has proved able to discover meaningful underlying local structure of the data distribution by generating reliable triplets of the target data. Different from the supervised setting, pseudo labels have to be assigned to unlabelled samples, which makes it more difficult to construct high-quality triplets. Therefore, our goal is to design a learning strategy that not only generates reliable samples but also improves the model performance.
In practice, we conduct the following procedure in the conservative stage. At the beginning, on the whole training dataset $T = \{x_1, x_2, \ldots, x_N\}$, we extract features $F = \{f(x_1), f(x_2), \ldots, f(x_N)\}$ from the current model, and adopt the k-reciprocal encoding [44], which is a variation of the Jaccard distance between nearest-neighbor sets, to generate the distance matrix $D$ as:
$$D = [D_J(x_1)\; D_J(x_2)\; \ldots\; D_J(x_N)]^T, \quad D_J(x_i) = [d_J(x_i, x_1)\; d_J(x_i, x_2)\; \ldots\; d_J(x_i, x_N)], \quad \forall i \in \{1, 2, \ldots, N\}, \tag{1}$$
where $D_J(x_i)$ represents the distance vector of one specific person $x_i$ with respect to all training images, and $d_J(x_i, x_j)$ is the Jaccard distance between samples $x_i$ and $x_j$.
Since a smaller distance reflects a higher similarity between two images, we sort every distance vector $D_J(x_i)$ from the smallest to the largest value, yielding the ranking score matrix $D_R$ as:
$$D_R = [D_R(x_1)\; D_R(x_2)\; \ldots\; D_R(x_N)]^T, \quad D_R(x_i) = [d_J(x_i, x_1)\; d_J(x_i, x_2)\; \ldots\; d_J(x_i, x_N)], \quad \forall i \in \{1, 2, \ldots, N\}, \tag{2}$$

where $D_R(x_i)$ is the ranked version of $D_J(x_i)$, sorted from small to large. Given a specific sample $x_i$, the sample $x_j$ in $d_J(x_i, x_j)$ of $D_R(x_i)$ represents the j-th most similar sample to $x_i$.
Then, we apply the hierarchical density-based clustering algorithm HDBSCAN [1] on $D_R$ to split the whole set of training images into different clusters, whose assignments are used as pseudo labels. After HDBSCAN, images that do not belong to any cluster are discarded. Thus, we use the images with assigned labels as the updated training set $T_U$ for further model optimization.
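As an illustration, the snippet below sketches this pseudo-label generation step. It is only a sketch: a plain Euclidean distance matrix stands in for the k-reciprocal Jaccard distance of [44], clustering is shown on the raw distance matrix rather than the re-ranked distances, and mapping the paper's minimum-sample parameter onto HDBSCAN's min_samples argument is our assumption.

```python
import numpy as np
import hdbscan                                    # pip install hdbscan
from scipy.spatial.distance import cdist

features = np.random.randn(500, 256)              # stand-in for the extracted features F
dist = cdist(features, features).astype(np.float64)   # stand-in for D of Eq. (1)

# Ranking score matrix D_R of Eq. (2): neighbour indices sorted by distance.
ranking = np.argsort(dist, axis=1)

# Cluster on the pre-computed distance matrix; label -1 marks noise points,
# which are discarded, and the remaining images form the updated set T_U.
clusterer = hdbscan.HDBSCAN(min_samples=10, metric="precomputed")
pseudo_labels = clusterer.fit_predict(dist)
keep = pseudo_labels != -1
print(f"kept {keep.sum()}/{len(keep)} images in {pseudo_labels.max() + 1} clusters")
```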
We combine two types of triplet loss functions to update the model, i.e., the clustering-based triplet loss (CTL) and the ranking-based triplet loss (RTL), which differ in the way triplets are selected as well as in the way the model is optimized.
Clustering-based Triplet Loss (CTL). One loss function that we use is the batch-hard mining triplet loss [15], proposed to mine relations among samples within a mini-batch. We randomly sample P clusters and K instances from each cluster to compose a mini-batch of size PK. For each anchor image $x_a$, the corresponding hardest positive sample $x_p$ and hardest negative sample $x_n$ within the batch are selected to form a triplet. Since the pseudo labels come from a clustering method, we refer to this loss function as the clustering-based triplet loss (CTL), which is formulated as:
$$L_{CTL} = \sum_{a=1}^{PK} \big[m + \|f(x_a) - f(x_p)\|_2 - \|f(x_a) - f(x_n)\|_2\big]_+ = \sum_{i=1}^{P}\sum_{a=1}^{K} \Big[m + \underbrace{\max_{p=1 \ldots K} \|f(x_{i,a}) - f(x_{i,p})\|_2}_{\text{hardest positive}} - \underbrace{\min_{\substack{n=1 \ldots K \\ j=1 \ldots P,\ j \neq i}} \|f(x_{i,a}) - f(x_{j,n})\|_2}_{\text{hardest negative}}\Big]_+, \tag{3}$$

where $x_{i,j}$ is a data point representing the j-th image of the i-th cluster in the batch, and $f(x_{i,j})$ is the feature vector of $x_{i,j}$.
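A compact PyTorch version of this batch-hard mining, written by us purely as an illustration (not the authors' implementation), could look as follows; it averages rather than sums over anchors, which only rescales the loss.

```python
import torch

def batch_hard_triplet_loss(embeddings, labels, margin=0.3):
    """Clustering-based triplet loss (CTL) of Eq. (3): batch-hard mining over a
    P x K batch, with pseudo labels coming from the clustering step."""
    dist = torch.cdist(embeddings, embeddings, p=2)            # pairwise L2 distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)          # same-(pseudo)label mask
    # Hardest positive: farthest sample sharing the anchor's pseudo label.
    hardest_pos = (dist * same.float()).max(dim=1).values
    # Hardest negative: closest sample with a different pseudo label.
    inf = torch.full_like(dist, float("inf"))
    hardest_neg = torch.where(same, inf, dist).min(dim=1).values
    return torch.clamp(margin + hardest_pos - hardest_neg, min=0).mean()

# toy usage with a P*K = 4*16 batch of 256-d features
feats = torch.randn(64, 256)
pseudo = torch.randint(0, 4, (64,))
loss = batch_hard_triplet_loss(feats, pseudo)
```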
Ranking-based Triplet Loss (RTL). However, the effect of CTL clearly depends on the quality of the label estimation, and it is hard to decide whether the clustering result is correct or not. Therefore, we propose a Ranking-based Triplet Loss (RTL) to make full use of the ranking score matrix $D_R$. It is a label-free method reflecting the relation between data pairs. Specifically, given a training anchor $x_a$, the positive sample $x_p$ is randomly selected from the top η nearest neighbors according to the ranking score vector $D_R(x_a)$, and the negative sample $x_n$ is taken from positions (η, 2η]. In addition, instead of the hard margin in CTL, we introduce a soft margin based on the relative ranking positions of $x_p$ and $x_n$, which adapts well to different scales of intra-class variation. The formula of RTL is:
$$L_{RTL} = \sum_{a=1}^{PK} \Big[\frac{|P_p - P_n|}{\eta}\, m + \|f(x_a) - f(x_p)\|_2 - \|f(x_a) - f(x_n)\|_2\Big]_+, \tag{4}$$
where the anchors selected in each batch are the same as for CTL, $m$ is the basic hard margin of Eq. (3), $\eta$ is the maximum ranking position for positive sample selection, and $P_p$ and $P_n$ are the ranking positions of $x_p$ and $x_n$ with respect to $x_a$. To summarize, we optimize the network using the combination of CTL and RTL to better capture the local-constraint information of the data distribution. Our final triplet-based loss function in the conservative stage is shown in Eq. (5):
$$L_C = L_{RTL} + \lambda L_{CTL}, \tag{5}$$
where λ is the loss weight that trades off the influence of the two loss functions. Experiments show that this combined triplet-based loss function clearly improves the capability of the model representation.
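For illustration, a possible implementation of RTL with its soft margin is sketched below; the value of η and the per-anchor Python loop are our own simplifications, not the paper's exact sampling code.

```python
import torch

def ranking_based_triplet_loss(embeddings, anchors, ranking, eta=6, margin=0.3):
    """Ranking-based triplet loss (RTL) of Eq. (4).  `ranking[i]` lists all
    sample indices sorted by similarity to sample i (rank 0 is i itself)."""
    losses = []
    for a in anchors:
        p_pos = int(torch.randint(1, eta + 1, (1,)))             # top-eta neighbour
        p_neg = int(torch.randint(eta + 1, 2 * eta + 1, (1,)))   # position in (eta, 2*eta]
        pos, neg = ranking[a, p_pos], ranking[a, p_neg]
        d_ap = torch.norm(embeddings[a] - embeddings[pos], p=2)
        d_an = torch.norm(embeddings[a] - embeddings[neg], p=2)
        soft_margin = abs(p_pos - p_neg) / eta * margin           # |P_p - P_n| / eta * m
        losses.append(torch.clamp(soft_margin + d_ap - d_an, min=0))
    return torch.stack(losses).mean()

feats = torch.randn(64, 256)
rank = torch.cdist(feats, feats).argsort(dim=1)                   # stand-in for D_R
loss = ranking_based_triplet_loss(feats, anchors=range(64), ranking=rank)
```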
Promoting Stage
Nevertheless, since triplet-based loss functions only focus on the data relations within each triplet, the model is prone to instability and can get stuck in a suboptimal local minimum. To alleviate this problem, we propose to apply a classification loss to further improve model generalization by taking advantage of the global information of the training samples. In the promoting stage, a fully-connected layer is added at the end of the model as a classifier layer, which is initialized according to the features of the current training set. The softmax cross-entropy loss is used as the objective function, which is formulated as:
$$L_P = -\sum_{i=1}^{PK} \log \frac{e^{W_{\hat{y}_i}^T x_i}}{\sum_{c=1}^{C} e^{W_c^T x_i}}, \tag{6}$$
where $\hat{y}_i$ is the pseudo label of the sample $x_i$, and $C$ is the number of clusters obtained from the HDBSCAN clustering method on the updated training set $T_U$.

Feature-based Weight Initialization for Classifier. Due to the variation of the cluster number $C$, the newly added classifier layer CL has to be re-initialized every time HDBSCAN is executed. Instead of random initialization, we exploit the mean feature of each cluster as the initial parameters. Specifically, for each cluster $c$, we calculate the mean feature $F_c$ by averaging the embedding features of all its elements. The parameters $W$ of CL are initialized as follows:

$$W_c = F_c, \quad c \in \{1, 2, \ldots, C\}, \tag{7}$$

where $W \in \mathbb{R}^{d \times C}$, $W_c$ is the c-th column of $W$, and $d$ is the feature dimensionality. An advantage of this initialization is that we can use the previous information to avoid the fluctuation of accuracy caused by random initialization, which is helpful for the convergence of model training.
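A small PyTorch sketch of this feature-based initialization is given below; note that nn.Linear stores its weight as a (C x d) matrix, so row c plays the role of the column $W_c$ in Eq. (7). This is our illustration, not the authors' code.

```python
import torch
import torch.nn as nn

def init_classifier_from_clusters(features, pseudo_labels, num_clusters):
    """Feature-based weight initialisation of Eq. (7): the weight row for
    cluster c is set to the mean feature F_c of that cluster (no bias)."""
    d = features.size(1)
    classifier = nn.Linear(d, num_clusters, bias=False)
    with torch.no_grad():
        for c in range(num_clusters):
            classifier.weight[c] = features[pseudo_labels == c].mean(dim=0)
    return classifier

feats = torch.randn(400, 256)                      # embeddings of T_U
labels = torch.randint(0, 20, (400,))              # pseudo labels from HDBSCAN
clf = init_classifier_from_clusters(feats, labels, num_clusters=20)
logits = clf(feats)                                # fed to cross-entropy (Eq. 6)
```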
Alternate Training
The learning process is expected to progressively improve the model's capability of generalization, which prevents the model from falling into a local optimum. In this paper, we carefully develop a simple yet effective self-training strategy which captures both the local structure and the global information of the training images. That is, the conservative stage and the promoting stage are conducted alternately. At the beginning, the model is trained using only the local relations between data points, so that the error amplification brought by the softmax loss can be prevented. After several training steps in the conservative stage, the model's representation ability and the quality of the clusters become more trustworthy. The model capability is then further augmented using the softmax cross-entropy loss in the promoting stage, and the updated model is used as the initial state for the next conservative stage. As training goes on, model generalization is improved, allowing more discriminative feature representations of the training images to be learned. The details of this two-stage alternate self-training are included in Algorithm 1. We also show one visual example of this alternate self-training process in Figure 3, which indicates that our proposed PAST framework is also useful for refining the quality of the clusters.

Figure 3 - The alternate self-training process of our PAST framework on one visual example. All images belong to the same person in truth. Samples with the same color are assigned to the same pseudo label generated by the HDBSCAN clustering method. Gray indicates a sample that does not belong to any cluster and is not used for model training. From training iteration 1 to iteration 4, more samples are selected for training, and at the same time the pseudo labels become more reliable.
Experiments
We evaluate our unsupervised self-training method on cross-domain person Re-ID tasks. Three common large-scale person Re-ID datasets are used: Market-1501 [42], DukeMTMC-Re-ID [43], and CUHK03 [17].
Market-1501 [42] contains 32,668 labelled images of 1,501 identities taken by 6 cameras, which are detected and cropped via Deformable Part Model (DPM) [11]. The dataset is split into training set with 12,936 images of 751 identities and test set with 19,732 images of 750 identities.
DukeMTMC-Re-ID [43] consists of 36,411 labelled images belonging to 1,404 identities observed by 8 camera views. Following the format of the Market-1501 dataset, it has 16,522 images of 702 identities for the training set and the remaining 19,889 images of 702 identities for the test set. Hereafter, Duke refers to this dataset.
CUHK03 [17] is composed of 14,096 images from 1,467 identities captured by 2 cameras. This dataset was constructed by both manual labelling and DPM. In this work, we experiment on the images detected using DPM. To be consistent with the protocol of Market-1501 and Duke, the new train/test evaluation protocol [44] is used: 7,365 images with 767 identities for training and the remaining 6,732 images with 700 identities for testing.
Implementation Details
Model and Preprocessing. We adopt PCB [29] as our model structure, in which ResNet-50 [14] without the last classification layer is used as the backbone. Similar to EANet [16], we use 9 regions for feature representation. Instead of part-aligned pooling [16], we use even parts as in PCB for simplicity. The dimension of each embedding layer is set to 256. Following each embedding layer, we also implement the classifier layer with one fully connected layer in the promoting stage. The classifier output changes according to the number of clusters generated by the HDBSCAN clustering process.
All input images are resized to 384×128×3. Note that we only apply random flipping as data augmentation.
Training Settings. We use the SGD optimizer with a momentum of 0.9 and a weight decay of 5×10^-4 to train the model. Unless otherwise specified, in all experiments we set the batch size to 64 and the number of iteration steps to 4. Instead of directly using the same learning rates for both the conservative and the promoting stage, we believe that individually setting specialized learning rates works better for our PAST framework. The reason is that the parameters from the conservative stage should be updated more slowly in the promoting stage to avoid the error amplification caused by the softmax cross-entropy loss. Specifically, the learning rate is initialized to 10^-4 on fine-tuned layers and 2×10^-4 on embedding layers in the conservative stage, while for the promoting stage, newly added classifier layers use an initial learning rate of 10^-3 and all other layers 5×10^-5. After 3 iterations, all learning rates are multiplied by 0.1. The margin hyper-parameter m is set to 0.3 in both Eq. (3) and Eq. (4).
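As an illustration of these stage-specific learning rates, a possible PyTorch optimizer set-up is sketched below; the attribute names (backbone, embeddings, classifier) are placeholders for the corresponding parts of the PCB-based model, not actual identifiers from the paper's code.

```python
import torch

def build_optimizer(model, stage="conservative"):
    # Per-group learning rates following the Training Settings above.
    if stage == "conservative":
        groups = [
            {"params": model.backbone.parameters(),   "lr": 1e-4},   # fine-tuned layers
            {"params": model.embeddings.parameters(), "lr": 2e-4},   # embedding layers
        ]
    else:  # promoting stage
        groups = [
            {"params": model.backbone.parameters(),   "lr": 5e-5},
            {"params": model.embeddings.parameters(), "lr": 5e-5},
            {"params": model.classifier.parameters(), "lr": 1e-3},   # new classifier layer
        ]
    return torch.optim.SGD(groups, lr=1e-4, momentum=0.9, weight_decay=5e-4)
```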
Evaluating Settings. For performance evaluation, the feature vectors from the embedding layers of the 9 parts are normalized separately and then concatenated as the output representation. Given a query image, we calculate the cosine distance to all gallery images and sort them to obtain the final ranking result. We utilize the Cumulated Matching Characteristics (CMC) [13] and the mean Average Precision (mAP) [42] as the performance evaluation measures. The CMC curve shows the probability that a query appears in candidate lists of different sizes. As for mAP, given a single query, the Average Precision (AP) is computed from the area under its precision-recall curve; the mAP is then calculated as the mean of AP across all queries. Note that the single-shot setting is adopted, similar to [29], in all experiments.

Table 1 - The effectiveness of the conservative stage and the promoting stage in our proposed Self-training with Progressive Augmentation framework (PAST). D→M represents that we use Duke [43] as the source domain and Market-1501 [42] as the target domain. * denotes results produced by us. DT means Direct Transfer from PCB with 9 regions. R means applying the k-reciprocal encoding method [44]. CTL represents the clustering-based triplet loss [15], while RTL is our proposed ranking-based triplet loss. Our PAST framework consists of the conservative stage and the promoting stage, denoted by C and P respectively.
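To make the ranking procedure concrete, the following is a small, self-contained sketch of the evaluation step with randomly generated descriptors; it only illustrates the cosine-distance ranking described in the Evaluating Settings paragraph above, and is not the authors' evaluation script.

```python
import numpy as np

def build_descriptor(part_feats):
    """L2-normalise each of the 9 part features and concatenate them."""
    parts = [p / (np.linalg.norm(p) + 1e-12) for p in part_feats]
    return np.concatenate(parts)

rng = np.random.default_rng(0)
query = build_descriptor([rng.normal(size=256) for _ in range(9)])
gallery = np.stack([build_descriptor([rng.normal(size=256) for _ in range(9)])
                    for _ in range(100)])

# Cosine distance = 1 - cosine similarity; the smallest distance ranks first.
sims = gallery @ query / (np.linalg.norm(gallery, axis=1) * np.linalg.norm(query))
order = np.argsort(1.0 - sims)
print("best-matching gallery index:", order[0])
```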
Ablation Study
In this subsection, we thoroughly analyse the effectiveness of each component in our PAST framework.
Effectiveness of the Conservative Stage. As shown in Table 1, we conduct several experiments to verify the effectiveness of the individual components CTL and RTL, and of the combination of these two triplet loss functions, on the M→D and D→M tasks. First, with CTL only, we improve the Rank-1 accuracy by 18.49% and 12.14% over the results of the k-reciprocal encoding method [44] on M→D and D→M, respectively. Second, we observe that with only our proposed RTL, the Rank-1 accuracy and mAP increase by 21% and 12.64% for M→D, and by 12.91% and 5.69% on D→M. This clear improvement shows that both CTL and RTL are useful for increasing model generalization, with CTL obtaining slightly lower performance than RTL. Then, as described in Eq. (5), we combine CTL and RTL to jointly optimize the model in our conservative stage. It is clear that we achieve better results on both M→D and D→M. Especially for D→M, we gain 2.38% and 4.42% on Rank-1 and mAP compared to using only CTL, which shows the significant benefit of our RTL. Through this conservative stage, we can learn a relatively powerful model for the target domain.
Effectiveness of the Promoting Stage. However, as illustrated in Figure 1, there are no further gains even with more training iterations when only triplet-based loss functions are used. We believe this is because, during the conservative stage, the model only sees the local structure of the data distribution brought by triplet samples. Thus, in our PAST framework, we employ the softmax cross-entropy loss as the objective function in the promoting stage and train it alternately with the conservative stage. Referring to Table 1 again, compared with using only the conservative stage, our PAST further improves mAP and Rank-1 by 2.21% and 0.72% on the M→D task, and by 4.03% and 4.12% for D→M. Meanwhile, as shown in Figure 3, the quality of the clusters is also improved with our PAST framework. This shows that the promoting stage plays an important role in model generalization. Through the above experiments, the different components of our PAST have been evaluated and verified. We show that our PAST framework is not only beneficial for improving model generalization but also for refining clustering quality.
Comparison with Different Clustering Methods. We evaluate three different clustering methods, i.e., k-means, DBSCAN [9], and HDBSCAN [1], in the conservative stage. The performance of these clustering methods under different settings is reported in Table 2. For k-means, the number of cluster centroids k is set to 702 and 751 on the target data of Market-1501 and Duke respectively, the same as the number of identities in the corresponding source training data. It is clear that HDBSCAN performs better than k-means and DBSCAN, whether only the conservative stage or the whole PAST framework is used. For instance, using HDBSCAN achieves 54.26% mAP and 72.35% Rank-1 for the M→D task in the PAST framework, which is 4.29% and 3.41% higher than k-means, and 1.19% and 0.45% higher than DBSCAN. In addition, we also observe that whatever clustering method we use, our PAST framework always outperforms using only the conservative stage. This means that, on the one hand, the HDBSCAN clustering method has a stronger effect in our framework; on the other hand, our PAST framework indeed improves the feature representation on the target domain.
Comparison with State-of-the-art Methods
Following the evaluation setting in [16,45], we compare our proposed PAST framework with state-of-the-art unsupervised cross-domain methods in Table 3. It can be seen that even using only the conservative stage with CTL and RTL for training, the performance is already competitive with other cross-domain adaptive methods. For example, although EANet [16] proposes complex part-aligned pooling and combines pose segmentation to provide more information for adaptation, our conservative stage still outperforms it by 3.93% in Rank-1 and 4.05% in mAP when testing on M→D. Moreover, our PAST framework surpasses all previous methods by a large margin on M→D, D→M, C→M, and C→D (e.g., 54.26% mAP on M→D, and 79.48% and 69.88% Rank-1 accuracy on C→M and C→D). We can also see that it is useful to alternate the conservative and promoting stages by comparing the last two rows in Table 3. In particular, our PAST improves Rank-1 and mAP by 4.71% and 5.21% for C→D compared with using only the conservative stage.
Parameter Analysis
Besides, we conduct additional experiments to evaluate the parameter sensitivity.
Analysis of the Loss Weight λ. λ is a hyper-parameter used to trade off the effect of the ranking-based triplet loss (RTL) against the clustering-based triplet loss (CTL). We evaluate the impact of λ, sampled from {0.1, 0.2, 0.5, 1.0, 2.0}, on the D→M task. The results are shown in Figure 4 (a). We observe that the best result is obtained when λ is set to 0.5; overly large or small values of λ limit the performance improvement.
Analysis of the Minimum Samples S_min. In addition, we analyse how the minimum number of samples (S_min) per cluster in HDBSCAN affects the Re-ID results. We test the impact of {5, 10, 15, 20} minimum samples on the performance of our PAST framework in the D→M setting. As shown in Figure 4 (b), setting S_min to 10 yields the best accuracy. Meanwhile, different values of S_min lead to large variance in the final number of pseudo identities produced by HDBSCAN. We believe this is because samples from the same class are split into several clusters when S_min is too small, while low-density classes are discarded when S_min is too large. This can be verified in Figure 4 (c): the number of identities obtained from HDBSCAN with a minimum of 10 samples is 625, which is the closest to the true value of 751 identities in the Market-1501 training set.
Conclusion
In this paper, we have presented a self-training with progressive augmentation framework (PAST) for unsupervised cross-domain person re-identification. Our PAST consists of two different stages, i.e., the conservative and the promoting stage, which are adopted alternately and offer complementary information to each other. Specifically, the conservative stage mainly captures local information with triplet-based loss functions, while the promoting stage is used for extracting global information. To alleviate the dependence on clustering quality, we also propose a novel label-free ranking-based triplet loss. With these proposed methods, model generalization improves significantly, as does the capability of feature representation on the target domain. Extensive experiments show that our PAST outperforms state-of-the-art unsupervised cross-domain algorithms by a large margin.
We plan to extend our work to other unsupervised cross-domain applications, such as face recognition and image retrieval tasks.
More Qualitative Analyses
Qualitative Analysis of the Feature Representation. To demonstrate the results intuitively, we visualize the feature embeddings calculated by our PAST framework in 2-D using t-SNE [31]. As illustrated in Figure 5, images belonging to the same identity are almost always gathered together, while those from different classes usually stay apart from each other. This implies that our PAST framework improves the capability of model generalization, which is beneficial for learning discriminative feature representations on the target-domain dataset.
Qualitative Analysis of the Triplet Selection. In Figure 6, we visualize the triplet samples generated in the conservative stage for CTL and RTL, respectively. We summarize the main advantages of the proposed PAST method in the following.
1. The proposed PAST algorithm can significantly improve the quality of the clustering assignments during training. As shown in the first row over iterations 1 to 4, the images assigned to the same class by the proposed method tend to become more and more similar. At the same time, the quality of the pseudo labels assigned to each image is steadily improved during training. This means that our PAST framework is beneficial for learning discriminative feature representations and can assign more reliable pseudo labels to target images. These accurate pseudo labels can then be used in the promoting stage to further improve model generalization.
2. RTL is useful for remedying the variance caused by CTL. Referring to Figure 6 again, we observe that the third cluster in iteration 2 is noisy and the triplets selected by CTL are not faithful. However, RTL can select correct positive samples even when the cluster is noisy. We believe the reason is that RTL depends only on the similarity ranking matrix and the top η most similar images are used for generating positive samples, which is more reliable when the feature representation is not yet discriminative.
3. RTL helps to further optimize the network, especially in the later iterations. From Figure 6, we can also see that different clusters in one mini-batch may look very different due to unique clothing colors, which results in extremely easy negative samples and slows down the optimization when training with CTL. In contrast, considering the triplets generated by RTL, the negative images are extremely similar to the anchors and are hard to recognize even for humans. For example, in the second column of iteration 4, all images look like one person, although the images in the first two rows show the same person while those in the third row belong to another person.
Figure 5 - Qualitative analysis of the feature representation using t-SNE [31] visualization on a subset of the Market-1501 [42] training data (legend labels: True Positive, False Positive, False Negative). According to the clustering result, we choose the top-50 identities containing the largest number of images. Points with the same color have the same (ground-truth) identity. The green circle marks images from the same identity that are gathered together, forming an extremely reliable cluster. The images in the orange circle are all from the same identity, yet they are clustered into two different classes; due to camera style, images from the two classes have different appearances. In the red circle, although our algorithm may gather images from different (ground-truth) identities into the same cluster, these images usually share very similar appearances and are hard to distinguish from each other. For instance, every image in the red circle contains a person with white clothes and a black bicycle.
Figure 6 - Quality of the triplet selection over training iterations. Images from different clusters are separated by yellow lines. A red line means the generated triplets are not completely correct, while a green line means they are completely correct. Solid and dashed lines mark triplets generated by CTL and RTL, respectively. We use Duke [43] as the source domain and Market-1501 [42] as the target domain. | 5,387
1907.12679 | 2966292672 | We introduce a metric using BERT (Bidirectional Encoder Representations from Transformers) (Devlin et al., 2019) for automatic machine translation evaluation. The experimental results on the WMT-2017 Metrics Shared Task dataset show that our metric achieves state-of-the-art performance in the segment-level metrics task for all to-English language pairs. | ReVal (https://github.com/rohitguptacs/ReVal) @cite_5 is also a metric using sentence embeddings. ReVal trains sentence embeddings from labeled data of the WMT Metrics Shared Task and semantic similarity estimation tasks, but cannot achieve sufficient performance because it uses only small data. RUSE trains only a regression model from labeled data, using sentence embeddings pre-trained on large data such as Quick Thought @cite_8 . | {
"abstract": [
"Many state-of-the-art Machine Translation (MT) evaluation metrics are complex, involve extensive external resources (e.g. for paraphrasing) and require tuning to achieve best results. We present a simple alternative approach based on dense vector spaces and recurrent neural networks (RNNs), in particular Long Short Term Memory (LSTM) networks. ForWMT-14, our new metric scores best for two out of five language pairs, and overall best and second best on all language pairs, using Spearman and Pearson correlation, respectively. We also show how training data is computed automatically from WMT ranks data.",
"In this work we propose a simple and efficient framework for learning sentence representations from unlabelled data. Drawing inspiration from the distributional hypothesis and recent work on learning sentence representations, we reformulate the problem of predicting the context in which a sentence appears as a classification problem. This allows us to efficiently learn different types of encoding functions, and we show that the model learns high-quality sentence representations. We demonstrate that our sentence representations outperform state-of-the-art unsupervised and supervised representation learning methods on several downstream NLP tasks that involve understanding sentence semantics while achieving an order of magnitude speedup in training time."
],
"cite_N": [
"@cite_5",
"@cite_8"
],
"mid": [
"2250597803",
"2963644595"
]
} | Machine Translation Evaluation with BERT Regressor | This study describes a segment-level metric for automatic machine translation evaluation (MTE). MTE metrics with a high correlation with human evaluation enable the continuous integration and deployment of a machine translation (MT) system.
In our previous study (Shimanaka et al., 2018), we proposed RUSE (Regressor Using Sentence Embeddings; https://github.com/Shi-ma/RUSE), a segment-level MTE metric using pre-trained sentence embeddings capable of capturing global information that cannot be captured by local features based on character or word N-grams. In the WMT-2018 Metrics Shared Task (Ma et al., 2018), RUSE was the best segment-level metric for all to-English language pairs. This result indicates that pre-trained sentence embeddings are an effective feature for the automatic evaluation of machine translation.
Research on applying pre-trained language representations to downstream tasks has been developing rapidly in recent years. In particular, BERT (Bidirectional Encoder Representations from Transformers) (Devlin et al., 2019) has achieved the best performance in many downstream tasks and is attracting attention. BERT is designed to pre-train using a "masked language model" (MLM) and "next sentence prediction" (NSP) on large amounts of raw text and to fine-tune for a supervised downstream task. For example, fine-tuning is performed in different ways for single-sentence classification tasks such as sentiment analysis and for sentence-pair classification tasks such as natural language inference. As a result, BERT also performs well on the task of estimating the similarity between sentence pairs, which is considered similar to automatic machine translation evaluation.
Therefore, we propose an MTE metric using BERT. The experimental results of the segment-level metrics task, conducted on the WMT17 datasets for all to-English language pairs, indicate that the proposed metric shows a higher correlation with human evaluations than RUSE and achieves the best performance. A detailed analysis clarifies that the three main points of difference from RUSE (the pre-training method, the sentence-pair encoding, and the fine-tuning of the pre-trained encoder) all contribute to the performance improvement of BERT.
Blend: the metric based on local features
Blend, which achieved the best performance in WMT-2017, is an ensemble metric that incorporates 25 lexical metrics provided by the Asiya MT evaluation toolkit, as well as four other metrics. Blend uses many features, but relies only on local information that cannot consider the whole sentence at once, such as character-based edit distances and features based on word N-grams.
RUSE: the metric based on sentence embeddings
RUSE (Shimanaka et al., 2018), which achieved the best performance in WMT-2018, is a metric using sentence embeddings pre-trained on large amounts of text. Unlike previous metrics such as Blend, RUSE has the advantage of considering the information of the whole sentence as a distributed representation. ReVal (https://github.com/rohitguptacs/ReVal) (Gupta et al., 2015) is also a metric using sentence embeddings. ReVal trains sentence embeddings from labeled data of the WMT Metrics Shared Task and semantic similarity estimation tasks, but cannot achieve sufficient performance because it uses only small data. RUSE trains only a regression model from labeled data, using sentence embeddings pre-trained on large data such as Quick Thought (Logeswaran and Lee, 2018).
As shown in Figure 1(a), RUSE encodes the MT hypothesis and the reference translation separately with a sentence encoder. Then, following InferSent (Conneau et al., 2017), features are extracted by combining the sentence embeddings of the two sentences, and the evaluation score is estimated by a regression model based on a multi-layer perceptron (MLP).
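As an illustration of this two-step design, the sketch below combines two (here randomly generated) sentence embeddings with the InferSent-style concatenation, absolute difference, and element-wise product, and fits an MLP regressor on DA-style scores; it is a simplified stand-in written by us, not the released RUSE code.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def combine(h, r):
    # InferSent-style feature scheme: [h; r; |h - r|; h * r]
    return np.concatenate([h, r, np.abs(h - r), h * r], axis=-1)

rng = np.random.default_rng(0)
hyp_emb = rng.normal(size=(5000, 300))       # stand-in for hypothesis embeddings
ref_emb = rng.normal(size=(5000, 300))       # stand-in for reference embeddings
scores = rng.normal(size=5000)               # stand-in for DA human scores

X = combine(hyp_emb, ref_emb)
reg = MLPRegressor(hidden_layer_sizes=(256,), max_iter=50).fit(X, scores)
pred = reg.predict(combine(hyp_emb[:10], ref_emb[:10]))
```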
BERT for MTE
In this study, we use BERT (Devlin et al., 2019) for MTE. Like RUSE, BERT for MTE uses pre-trained sentence embeddings and estimates the evaluation score with a regression model based on an MLP. However, as shown in Figure 1(b), in BERT for MTE both the MT hypothesis and the reference translation are encoded simultaneously by a sentence-pair encoder. Then, the sentence-pair embedding is input to the MLP-based regression model. Unlike RUSE, the pre-trained encoder is also fine-tuned together with the MLP. In the following, we explain in detail the three differences between RUSE and BERT: the pre-training method, the sentence-pair encoding, and the fine-tuning of the pre-trained encoder.
Pre-training Method
BERT is designed to pre-train using two types of unsupervised task simultaneously on large amounts of raw text.
Masked Language Model (MLM) After replacing some tokens in the raw corpus with [MASK] tokens, we estimate the original tokens by a bidirectional language model. By this unsupervised pre-training, BERT encoder learns the relation between tokens in the sentence.
Next Sentence Prediction (NSP) Some sentences in the raw corpus are randomly replaced with other sentences, and then binary classification is performed to determine whether two consecutive sentences are adjacent or not. By this unsupervised pre-training, BERT encoder learns the relationship between two consecutive sentences.
Sentence-pair Encoding
In BERT, instead of encoding each sentence independently, a sentence pair is encoded simultaneously for tasks dealing with sentence pairs, such as NSP and natural language inference. The first token of every sequence is always a special classification token ([CLS]), and the sentences are separated by a special end-of-sentence token ([SEP]) (Figure 2). Finally, the final hidden state corresponding to the [CLS] token is used as the aggregate sequence representation for classification tasks.
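For concreteness, the snippet below shows what such a sentence-pair encoding looks like with the HuggingFace transformers implementation of BERT-base (uncased); using this library (rather than the original BERT release) and the example sentences are our own choices for illustration.

```python
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

hypothesis = "the cat sat in the mat ."
reference = "the cat sat on the mat ."

# Passing two texts yields the sequence: [CLS] hypothesis [SEP] reference [SEP]
inputs = tokenizer(hypothesis, reference, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

cls_embedding = outputs.last_hidden_state[:, 0]   # final hidden state of [CLS]
print(cls_embedding.shape)                        # torch.Size([1, 768])
```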
Fine-tuning of the Pre-trained Encoder
In BERT, after obtaining a sentence embedding or a sentence-pair embedding with the encoder, it is used as the input of an MLP to solve applied tasks such as classification and regression. When training the MLP on the labeled data of the applied task, we also fine-tune the pre-trained encoder.
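A minimal PyTorch/transformers sketch of such fine-tuning for MTE as regression is given below; the single linear head, the MSE objective, and the assumed batch layout are simplifications made for illustration rather than the exact architecture used in the paper.

```python
import torch
import torch.nn as nn
from transformers import BertModel

class BertRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.head = nn.Linear(self.bert.config.hidden_size, 1)

    def forward(self, **inputs):
        cls = self.bert(**inputs).last_hidden_state[:, 0]   # [CLS] representation
        return self.head(cls).squeeze(-1)

model = BertRegressor()
optimizer = torch.optim.Adam(model.parameters(), lr=2e-5)    # lr from the grid below
criterion = nn.MSELoss()

# `batch` is assumed to be a dict of tokenised sentence pairs plus DA scores.
def train_step(batch):
    optimizer.zero_grad()
    pred = model(input_ids=batch["input_ids"],
                 attention_mask=batch["attention_mask"],
                 token_type_ids=batch["token_type_ids"])
    loss = criterion(pred, batch["score"])
    loss.backward()          # gradients flow into BERT, i.e. the encoder is fine-tuned
    optimizer.step()
    return loss.item()
```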
Experiments
We performed experiments using the WMT-2017 Metrics Shared Task dataset to verify the performance of BERT for MTE. Table 1 shows the number of instances in the WMT Metrics Shared Task datasets (segment level) for the to-English language pairs used in this study. A total of 5,360 instances from the WMT-2015 and WMT-2016 Metrics Shared Task datasets are divided randomly, with 90% used for training and 10% for development. A total of 3,920 instances (560 instances for each language pair) from the WMT-2017 Metrics Shared Task dataset is used for evaluation.
Settings
As comparison methods, we use SentBLEU, which is the baseline of the WMT Metrics Shared Task, Blend (Ma et al., 2017), which achieved the best performance in the WMT-2017 Metrics Shared Task, and RUSE (Shimanaka et al., 2018), which achieved the best performance in the WMT-2018 Metrics Shared Task. We evaluate each metric using the Pearson correlation coefficient between the metric scores and the DA human scores.
Among the trained models published by the authors, BERT BASE (uncased) is used for MTE with BERT. The hyper-parameters for fine-tuning BERT are determined through grid search over the following parameters using the development data.
• Batch size ∈ {16, 32}
• Learning rate(Adam) ∈ {5e-5, 3e-5, 2e-5}
Results
Table 2 presents the experimental results on the WMT-2017 Metrics Shared Task dataset. BERT for MTE achieved the best performance in all to-English language pairs. In Section 5, we compare RUSE and BERT in a detailed analysis.

Table 1 - The number of instances in the WMT-2015 (Stanojević et al., 2015), WMT-2016 (Bojar et al., 2016), and WMT-2017 (Bojar et al., 2017) Metrics Shared Task datasets.

            cs-en  de-en  fi-en  lv-en  ro-en  ru-en  tr-en  zh-en
WMT-2015      500    500    500      -      -    500      -      -
WMT-2016      560    560    560      -    560    560    560      -
WMT-2017      560    560    560    560      -    560    560    560
[Table 2 - Segment-level Pearson correlation with DA human scores on WMT-2017 for cs-en, de-en, fi-en, lv-en, ru-en, tr-en, zh-en, and avg.; the rows list the compared metrics, starting with SentBLEU (Bojar et al., 2017); the numeric entries were not preserved in the extraction.]
Analysis: Comparison of RUSE and BERT
In order to analyze the three main points of difference between RUSE and BERT, i.e., the pre-training method, the sentence-pair encoding, and the fine-tuning of the pre-trained encoder, we conduct experiments with the following settings.
RUSE with GloVe-BoW: The mean vector of the GloVe word embeddings (Pennington et al., 2014) (glove.840B.300d, https://nlp.stanford.edu/projects/glove; 300 dimensions) of each sentence is used as the sentence embedding in Figure 1(a).
RUSE with Quick Thought: Quick Thought (Logeswaran and Lee, 2018), pre-trained on both 45 million sentences from the BookCorpus (Zhu et al., 2015) and about 130 million sentences from the UMBC WebBase corpus (Han et al., 2013), is used as the sentence encoder in Figure 1(a).
RUSE with BERT: A concatenation of the last four hidden layers (3,072 dimensions) corresponding to the [CLS] token of BERT, which takes a single sentence as input, is used as the sentence embedding in Figure 1(a).
BERT (w/o fine-tuning): A concatenation of the last four hidden layers (3,072 dimensions) corresponding to the [CLS] token of BERT, which takes a sentence pair as the input sequence, is used as the input of the MLP regressor in Figure 1(b). In this case, the BERT encoder is not fine-tuned.
BERT: The last hidden layer (768 dimensions) corresponding to the [CLS] token of BERT, which takes a sentence pair as the input sequence, is used as the input of the MLP regressor in Figure 1(b). In this case, the BERT encoder is fine-tuned.
The hyper-parameters for RUSE and BERT (w/o fine-tuning) are determined through grid search on the development data.

Pre-training Method. The top three rows of Table 3 show the performance impact of the pre-training method of the sentence encoder. First, Quick Thought, based on sentence embeddings, consistently performs better than GloVe-BoW, based on word embeddings. Second, BERT, pre-trained with both MLM and NSP, performs better on many language pairs than Quick Thought, which is pre-trained only by NSP. In other words, the pre-training method using the Masked Language Model (MLM), which is one of the major features of BERT, is also useful for MTE.
Sentence-pair Encoding. Comparing the RUSE with BERT setting against BERT (w/o fine-tuning) shows the impact of sentence-pair encoding on MTE performance. For many language pairs, the latter, which encodes the MT hypothesis and the reference translation simultaneously, performs better than the former, which encodes them independently. Although RUSE performs feature extraction by combining the sentence embeddings of the two sentences in the same way as InferSent, this is not necessarily a feature extraction method suitable for MTE. In contrast, the sentence-pair encoding of BERT obtains embeddings that consider the relation between the sentences of a pair without explicit feature extraction. In BERT, this relation between the sentences of a pair can likely be learned well during pre-training with NSP.
Fine-tuning of the Pre-trained Encoder. The bottom two rows of Table 3 show the performance impact of fine-tuning the pre-trained encoder. For all language pairs, BERT, which fine-tunes the pre-trained encoder together with the MLP, performs much better than RUSE, which trains only the MLP. In other words, the fine-tuning of the pre-trained encoder, which is one of the major features of BERT, is also useful for machine translation evaluation.
Conclusion
In this study, we proposed a metric for automatic machine translation evaluation with BERT. Our segment-level MTE metric with BERT achieved the best performance in the segment-level metrics task on the WMT17 dataset for all to-English language pairs. In addition, an analysis based on a comparison with RUSE, our previous work, shows that the pre-training method, the sentence-pair encoding, and the fine-tuning of the pre-trained encoder each contributed to the performance improvement of BERT. | 1,879
1907.12679 | 2966292672 | We introduce a metric using BERT (Bidirectional Encoder Representations from Transformers) (Devlin et al., 2019) for automatic machine translation evaluation. The experimental results on the WMT-2017 Metrics Shared Task dataset show that our metric achieves state-of-the-art performance in the segment-level metrics task for all to-English language pairs. | As shown in Figure 1(a), RUSE encodes the MT hypothesis and the reference translation separately with a sentence encoder. Then, following InferSent @cite_12 , features are extracted by combining the sentence embeddings of the two sentences, and the evaluation score is estimated by a regression model based on a multi-layer perceptron (MLP). | {
"abstract": [
"Many modern NLP systems rely on word embeddings, previously trained in an unsupervised manner on large corpora, as base features. Efforts to obtain embeddings for larger chunks of text, such as sentences, have however not been so successful. Several attempts at learning unsupervised representations of sentences have not reached satisfactory enough performance to be widely adopted. In this paper, we show how universal sentence representations trained using the supervised data of the Stanford Natural Language Inference datasets can consistently outperform unsupervised methods like SkipThought vectors on a wide range of transfer tasks. Much like how computer vision uses ImageNet to obtain features, which can then be transferred to other tasks, our work tends to indicate the suitability of natural language inference for transfer learning to other NLP tasks. Our encoder is publicly available."
],
"cite_N": [
"@cite_12"
],
"mid": [
"2612953412"
]
} | Machine Translation Evaluation with BERT Regressor | This study describes a segment-level metric for automatic machine translation evaluation (MTE). MTE metrics with a high correlation with human evaluation enable the continuous integration and deployment of a machine translation (MT) system.
In our previous study (Shimanaka et al., 2018), we proposed RUSE (Regressor Using Sentence Embeddings; https://github.com/Shi-ma/RUSE), a segment-level MTE metric using pre-trained sentence embeddings capable of capturing global information that cannot be captured by local features based on character or word N-grams. In the WMT-2018 Metrics Shared Task (Ma et al., 2018), RUSE was the best segment-level metric for all to-English language pairs. This result indicates that pre-trained sentence embeddings are an effective feature for the automatic evaluation of machine translation.
Research on applying pre-trained language representations to downstream tasks has been developing rapidly in recent years. In particular, BERT (Bidirectional Encoder Representations from Transformers) (Devlin et al., 2019) has achieved the best performance in many downstream tasks and is attracting attention. BERT is designed to pre-train using a "masked language model" (MLM) and "next sentence prediction" (NSP) on large amounts of raw text and to fine-tune for a supervised downstream task. For example, fine-tuning is performed in different ways for single-sentence classification tasks such as sentiment analysis and for sentence-pair classification tasks such as natural language inference. As a result, BERT also performs well on the task of estimating the similarity between sentence pairs, which is considered similar to automatic machine translation evaluation.
Therefore, we propose an MTE metric using BERT. The experimental results of the segment-level metrics task, conducted on the WMT17 datasets for all to-English language pairs, indicate that the proposed metric shows a higher correlation with human evaluations than RUSE and achieves the best performance. A detailed analysis clarifies that the three main points of difference from RUSE (the pre-training method, the sentence-pair encoding, and the fine-tuning of the pre-trained encoder) all contribute to the performance improvement of BERT.
Blend: the metric based on local features
Blend, which achieved the best performance in WMT-2017, is an ensemble metric that incorporates 25 lexical metrics provided by the Asiya MT evaluation toolkit, as well as four other metrics. Blend uses many features, but relies only on local information that cannot consider the whole sentence at once, such as character-based edit distances and features based on word N-grams.
RUSE: the metric based on sentence embeddings
RUSE (Shimanaka et al., 2018), which achieved the best performance in WMT-2018, is a metric using sentence embeddings pre-trained on large amounts of text. Unlike previous metrics such as Blend, RUSE has the advantage of considering the information of the whole sentence as a distributed representation. ReVal (https://github.com/rohitguptacs/ReVal) (Gupta et al., 2015) is also a metric using sentence embeddings. ReVal trains sentence embeddings from labeled data of the WMT Metrics Shared Task and semantic similarity estimation tasks, but cannot achieve sufficient performance because it uses only small data. RUSE trains only a regression model from labeled data, using sentence embeddings pre-trained on large data such as Quick Thought (Logeswaran and Lee, 2018).
As shown in Figure 1(a), RUSE encodes the MT hypothesis and the reference translation separately with a sentence encoder. Then, following InferSent (Conneau et al., 2017), features are extracted by combining the sentence embeddings of the two sentences, and the evaluation score is estimated by a regression model based on a multi-layer perceptron (MLP).
BERT for MTE
In this study, we use BERT (Devlin et al., 2019) for MTE. Like RUSE, BERT for MTE uses pre-trained sentence embeddings and estimates the evaluation score with a regression model based on an MLP. However, as shown in Figure 1(b), in BERT for MTE both the MT hypothesis and the reference translation are encoded simultaneously by a sentence-pair encoder. Then, the sentence-pair embedding is input to the MLP-based regression model. Unlike RUSE, the pre-trained encoder is also fine-tuned together with the MLP. In the following, we explain in detail the three differences between RUSE and BERT: the pre-training method, the sentence-pair encoding, and the fine-tuning of the pre-trained encoder.
Pre-training Method
BERT is designed to pre-train using two types of unsupervised task simultaneously on large amounts of raw text.
Masked Language Model (MLM) After replacing some tokens in the raw corpus with [MASK] tokens, we estimate the original tokens by a bidirectional language model. By this unsupervised pre-training, BERT encoder learns the relation between tokens in the sentence.
Next Sentence Prediction (NSP) Some sentences in the raw corpus are randomly replaced with other sentences, and then binary classification is performed to determine whether two consecutive sentences are adjacent or not. By this unsupervised pre-training, BERT encoder learns the relationship between two consecutive sentences.
Sentence-pair Encoding
In BERT, instead of encoding each sentence independently, a sentence pair is encoded simultaneously for tasks dealing with sentence pairs, such as NSP and natural language inference. The first token of every sequence is always a special classification token ([CLS]), and the sentences are separated by a special end-of-sentence token ([SEP]) (Figure 2). Finally, the final hidden state corresponding to the [CLS] token is used as the aggregate sequence representation for classification tasks.
Fine-tuning of the Pre-trained Encoder
In BERT, after obtaining a sentence embedding or a sentence-pair embedding with the encoder, it is used as the input of an MLP to solve applied tasks such as classification and regression. When training the MLP on the labeled data of the applied task, we also fine-tune the pre-trained encoder.
Experiments
We performed experiments using the WMT-2017 Metrics Shared Task dataset to verify the performance of BERT for MTE. Table 1 shows the number of instances in the WMT Metrics Shared Task datasets (segment level) for the to-English language pairs used in this study. A total of 5,360 instances from the WMT-2015 and WMT-2016 Metrics Shared Task datasets are divided randomly, with 90% used for training and 10% for development. A total of 3,920 instances (560 instances for each language pair) from the WMT-2017 Metrics Shared Task dataset is used for evaluation.
Settings
As comparison methods, we use SentBLEU, which is the baseline of the WMT Metrics Shared Task, Blend (Ma et al., 2017), which achieved the best performance in the WMT-2017 Metrics Shared Task, and RUSE (Shimanaka et al., 2018), which achieved the best performance in the WMT-2018 Metrics Shared Task. We evaluate each metric using the Pearson correlation coefficient between the metric scores and the DA human scores.
Among the trained models published by the authors, BERT BASE (uncased) is used for MTE with BERT. The hyper-parameters for fine-tuning BERT are determined through grid search over the following parameters using the development data.
• Batch size ∈ {16, 32}
• Learning rate(Adam) ∈ {5e-5, 3e-5, 2e-5}
Results
Table 2 presents the experimental results on the WMT-2017 Metrics Shared Task dataset. BERT for MTE achieved the best performance in all to-English language pairs. In Section 5, we compare RUSE and BERT in a detailed analysis.

Table 1 - The number of instances in the WMT-2015 (Stanojević et al., 2015), WMT-2016 (Bojar et al., 2016), and WMT-2017 (Bojar et al., 2017) Metrics Shared Task datasets.

            cs-en  de-en  fi-en  lv-en  ro-en  ru-en  tr-en  zh-en
WMT-2015      500    500    500      -      -    500      -      -
WMT-2016      560    560    560      -    560    560    560      -
WMT-2017      560    560    560    560      -    560    560    560
[Table 2 - Segment-level Pearson correlation with DA human scores on WMT-2017 for cs-en, de-en, fi-en, lv-en, ru-en, tr-en, zh-en, and avg.; the rows list the compared metrics, starting with SentBLEU (Bojar et al., 2017); the numeric entries were not preserved in the extraction.]
Analysis: Comparison of RUSE and BERT
In order to analyze the three main points of difference between RUSE and BERT, i.e., the pre-training method, the sentence-pair encoding, and the fine-tuning of the pre-trained encoder, we conduct experiments with the following settings.
RUSE with GloVe-BoW: The mean vector of the GloVe word embeddings (Pennington et al., 2014) (glove.840B.300d, https://nlp.stanford.edu/projects/glove; 300 dimensions) of each sentence is used as the sentence embedding in Figure 1(a).
RUSE with Quick Thought: Quick Thought (Logeswaran and Lee, 2018), pre-trained on both 45 million sentences from the BookCorpus (Zhu et al., 2015) and about 130 million sentences from the UMBC WebBase corpus (Han et al., 2013), is used as the sentence encoder in Figure 1(a).
RUSE with BERT: A concatenation of the last four hidden layers (3,072 dimensions) corresponding to the [CLS] token of BERT, which takes a single sentence as input, is used as the sentence embedding in Figure 1(a).
BERT (w/o fine-tuning): A concatenation of the last four hidden layers (3,072 dimensions) corresponding to the [CLS] token of BERT, which takes a sentence pair as the input sequence, is used as the input of the MLP regressor in Figure 1(b). In this case, the BERT encoder is not fine-tuned.
BERT: The last hidden layer (768 dimensions) corresponding to the [CLS] token of BERT, which takes a sentence pair as the input sequence, is used as the input of the MLP regressor in Figure 1(b). In this case, the BERT encoder is fine-tuned.
The hyper-parameters for RUSE and BERT (w/o fine-tuning) are determined through grid search on the development data.

Pre-training Method. The top three rows of Table 3 show the performance impact of the pre-training method of the sentence encoder. First, Quick Thought, based on sentence embeddings, consistently performs better than GloVe-BoW, based on word embeddings. Second, BERT, pre-trained with both MLM and NSP, performs better on many language pairs than Quick Thought, which is pre-trained only by NSP. In other words, the pre-training method using the Masked Language Model (MLM), which is one of the major features of BERT, is also useful for MTE.
Sentence-pair Encoding. Comparing the RUSE with BERT setting against BERT (w/o fine-tuning) shows the impact of sentence-pair encoding on MTE performance. For many language pairs, the latter, which encodes the MT hypothesis and the reference translation simultaneously, performs better than the former, which encodes them independently. Although RUSE performs feature extraction by combining the sentence embeddings of the two sentences in the same way as InferSent, this is not necessarily a feature extraction method suitable for MTE. In contrast, the sentence-pair encoding of BERT obtains embeddings that consider the relation between the sentences of a pair without explicit feature extraction. In BERT, this relation between the sentences of a pair can likely be learned well during pre-training with NSP.
Fine-tuning of the Pre-trained Encoder. The bottom two rows of Table 3 show the performance impact of fine-tuning the pre-trained encoder. For all language pairs, BERT, which fine-tunes the pre-trained encoder together with the MLP, performs much better than RUSE, which trains only the MLP. In other words, the fine-tuning of the pre-trained encoder, which is one of the major features of BERT, is also useful for machine translation evaluation.
Conclusion
In this study, we proposed a metric for automatic machine translation evaluation with BERT. Our segment-level MTE metric with BERT achieved the best performance in the segment-level metrics task on the WMT17 dataset for all to-English language pairs. In addition, an analysis based on a comparison with RUSE, our previous work, shows that the pre-training method, the sentence-pair encoding, and the fine-tuning of the pre-trained encoder each contributed to the performance improvement of BERT. | 1,879
1907.12649 | 2965749255 | In the last few years, Header Bidding (HB) has gained popularity among web publishers and is challenging the status quo in the ad ecosystem. Contrary to the traditional waterfall standard, HB aims to give back control of the ad inventory to publishers, increase transparency, fairness and competition among advertisers, thus, resulting in higher ad-slot prices. Although promising, little is known about this new ad-tech protocol: How does it work internally and what are the different implementations of HB? What is the performance overhead, and how does it affect the user experience? Does it, indeed, provide higher revenues to publishers than the waterfall model? Who are the dominating entities in this new protocol? To respond to all these questions and shed light on this new, buzzing ad-technology, we design and implement HBDetector: a holistic HB detection mechanism that can capture HB auctions independently of the implementation followed in a website. By running HBDetector across the top 35,000 Alexa websites, we collect and analyze a dataset of 800k auctions. Our results show that: (i) 14.28 of the top Alexa websites utilize HB. (ii) Publishers tend to collaborate mostly with a relatively low number of demand partners, which are already big players in waterfall standard, (iii) HB latency can be significantly higher than waterfall, with up to 3x latency in the median cases. | User data and their economics have long been an interesting topic and attracted a considerable body of research @cite_25 @cite_18 @cite_0 @cite_9 @cite_19 @cite_2 @cite_29 @cite_39 @cite_38 @cite_6 @cite_13 . In particular, in @cite_19 , Acquisti al discuss the value of privacy after defining two concepts (i) : the monetary amount users are willing to pay to protect their privacy, and (ii) : the compensation that users are willing to accept for their privacy loss. In two user-studies @cite_0 @cite_9 authors measure how much users value their own offline and online personal data, and consequently how much they would sell them to advertisers. @cite_2 , authors propose transactional'' privacy to allow users to decide what personal information can be released and receive compensation from selling them. | {
"abstract": [
"",
"The OECD, the European Union and other public and private initiatives are claiming for the necessity of tools that create awareness among Internet users about the monetary value associated to the commercial exploitation of their online personal information. This paper presents the first tool addressing this challenge, the Facebook Data Valuation Tool (FDVT). The FDVT provides Facebook users with a personalized and real-time estimation of the revenue they generate for Facebook. Relying on the FDVT, we are able to shed light into several relevant HCI research questions that require a data valuation tool in place. The obtained results reveal that (i) there exists a deep lack of awareness among Internet users regarding the monetary value of personal information, (ii) data valuation tools such as the FDVT are useful means to reduce such knowledge gap, (iii) 1 3 of the users testing the FDVT show a substantial engagement with the tool.",
"In the context of a myriad of mobile apps which collect personally identifiable information (PII) and a prospective market place of personal data, we investigate a user-centric monetary valuation of mobile PII. During a 6-week long user study in a living lab deployment with 60 participants, we collected their daily valuations of 4 categories of mobile PII (communication, e.g. phonecalls made received, applications, e.g. time spent on different apps, location and media, e.g. photos taken) at three levels of complexity (individual data points, aggregated statistics and processed, i.e. meaningful interpretations of the data). In order to obtain honest valuations, we employ a reverse second price auction mechanism. Our findings show that the most sensitive and valued category of personal information is location. We report statistically significant associations between actual mobile usage, personal dispositions, and bidding behavior. Finally, we outline key implications for the design of mobile services and future markets of personal data.",
"",
"Online advertising drives the economy of the World Wide Web. Modern websites of any size and popularity include advertisements to monetize visits from their users. To this end, they assign an area of their web page to an advertising company (so called ad exchange) that will use it to display promotional content. By doing this, the website owner implicitly trusts that the advertising company will offer legitimate content and it will not put the site's visitors at risk of falling victims of malware campaigns and other scams. In this paper, we perform the first large-scale study of the safety of the advertisements that are encountered by the users on the Web. In particular, we analyze to what extent users are exposed to malicious content through advertisements, and investigate what are the sources of this malicious content. Additionally, we show that some ad exchanges are more prone to serving malicious advertisements than others, probably due to their deficient filtering mechanisms. The observations that we make in this paper shed light on a little studied, yet important, aspect of advertisement networks, and can help both advertisement networks and website owners in securing their web pages and in keeping their visitors safe.",
"Third-party services form an integral part of the mobile ecosystem: they allow app developers to add features such as performance analytics and social network integration, and to monetize their apps by enabling user tracking and targeted ad delivery. At present users, researchers, and regulators all have at best limited understanding of this third-party ecosystem. In this paper we seek to shrink this gap. Using data from users of our ICSI Haystack app we gain a rich view of the mobile ecosystem: we identify and characterize domains associated with mobile advertising and user tracking, thereby taking an important step towards greater transparency. We furthermore outline our steps towards a public catalog and census of analytics services, their behavior, their personal data collection processes, and their use across mobile apps.",
"Most online service providers offer free services to users and in part, these services collect and monetize personally identifiable information (PII), primarily via targeted advertisements. Against this backdrop of economic exploitation of PII, it is vital to understand the value that users put to their own PII. Although studies have tried to discover how users value their privacy, little is known about how users value their PII while browsing, or the exploitation of their PII. Extracting valuations of PII from users is non-trivial - surveys cannot be relied on as they do not gather information of the context where PII is being released, thus reducing validity of answers. In this work, we rely on refined Experience Sampling - a data collection method that probes users to valuate their PII at the time and place where it was generated in order to minimize retrospective recall and hence increase measurement validity. For obtaining an honest valuation of PII, we use a reverse second price auction. We developed a web browser plugin and had 168 users - living in Spain - install and use this plugin for 2 weeks in order to extract valuations of PII in different contexts. We found that users value items of their online browsing history for about ∈7 ( 10USD), and they give higher valuations to their offline PII, such as age and address (about 25∈ or 36USD). When it comes to PII shared in specific online services, users value information pertaining to financial transactions and social network interactions more than activities like search and shopping. No significant distinction was found between valuations of different quantities of PII (e.g. one vs. 10 search keywords), but deviation was found between types of PII (e.g. photos vs. keywords). Finally, the users' preferred goods for exchanging their PII included money and improvements in service, followed by getting more free services and targeted advertisements.",
"AbstractUnderstanding the value that individuals assign to the protection of their personal data is of great importance for business, law, and public policy. We use a field experiment informed by behavioral economics and decision research to investigate individual privacy valuations and find evidence of endowment and order effects. Individuals assigned markedly different values to the privacy of their data depending on (1) whether they were asked to consider how much money they would accept to disclose otherwise private information or how much they would pay to protect otherwise public information and (2) the order in which they considered different offers for their data. The gap between such values is large compared with that observed in comparable studies of consumer goods. The results highlight the sensitivity of privacy valuations to contextual, nonnormative factors.",
"Monetizing personal information is a key economic driver of online industry. End-users are becoming more concerned about their privacy, as evidenced by increased media attention. This paper proposes a mechanism called 'transactional' privacy that can be applied to personal information of users. Users decide what personal information about themselves is released and put on sale while receiving compensation for it. Aggregators purchase access to exploit this information when serving ads to a user. Truthfulness and efficiency, attained through an unlimited supply auction, ensure that the interests of all parties in this transaction are aligned. We demonstrate the effectiveness of transactional privacy for web-browsing using a large mobile trace from a major European capital. We integrate transactional privacy in a privacy-preserving system that curbs leakage of information. These mechanisms combine to form a market of personal information that can be managed by a trusted third party.",
"",
"Online advertising is progressively moving towards a programmatic model in which ads are matched to actual interests of individuals collected as they browse the web. Letting the huge debate around privacy aside, a very important question in this area, for which little is known, is: How much do advertisers pay to reach an individual? In this study, we develop a first of its kind methodology for computing exactly that - the price paid for a web user by the ad ecosystem - and we do that in real time. Our approach is based on tapping on the Real Time Bidding (RTB) protocol to collect cleartext and encrypted prices for winning bids paid by advertisers in order to place targeted ads. Our main technical contribution is a method for tallying winning bids even when they are encrypted. We achieve this by training a model using as ground truth prices obtained by running our own \"probe\" ad-campaigns. We design our methodology through a browser extension and a back-end server that provides it with fresh models for encrypted bids. We validate our methodology using a one year long trace of 1600 mobile users and demonstrate that it can estimate a user's advertising worth with more than 82 accuracy."
],
"cite_N": [
"@cite_38",
"@cite_18",
"@cite_9",
"@cite_29",
"@cite_6",
"@cite_39",
"@cite_0",
"@cite_19",
"@cite_2",
"@cite_13",
"@cite_25"
],
"mid": [
"",
"2611078990",
"1996060862",
"2405111563",
"2012286502",
"2527400238",
"2117093634",
"1967317786",
"2117406138",
"",
"2769136280"
]
} | 0 |
||
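The user studies cited in the record above elicit honest valuations of personal data via a reverse second-price auction. A minimal sketch of that mechanism, with made-up participant names and valuations:

```python
# Illustrative sketch of a reverse second-price auction as used in the cited user
# studies: each participant states the price at which they would sell a data item;
# the lowest ask wins but is paid the second-lowest ask. Names and numbers are made up.
def reverse_second_price(asks: dict[str, float]) -> tuple[str, float]:
    ranked = sorted(asks.items(), key=lambda kv: kv[1])   # cheapest ask first
    winner, _ = ranked[0]
    payment = ranked[1][1]                                # paid the second-lowest ask
    return winner, payment

asks = {"alice": 7.0, "bob": 10.0, "carol": 25.0}         # EUR valuations of a data item
print(reverse_second_price(asks))                         # ('alice', 10.0)
```

Because the winner's payment is determined by someone else's ask, reporting one's true valuation is a dominant strategy, which is why these studies use the mechanism to obtain honest prices.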
1907.12649 | 2965749255 | In the last few years, Header Bidding (HB) has gained popularity among web publishers and is challenging the status quo in the ad ecosystem. Contrary to the traditional waterfall standard, HB aims to give back control of the ad inventory to publishers, increase transparency, fairness, and competition among advertisers, thus resulting in higher ad-slot prices. Although promising, little is known about this new ad-tech protocol: How does it work internally and what are the different implementations of HB? What is the performance overhead, and how does it affect the user experience? Does it, indeed, provide higher revenues to publishers than the waterfall model? Who are the dominating entities in this new protocol? To respond to all these questions and shed light on this new, buzzing ad technology, we design and implement HBDetector: a holistic HB detection mechanism that can capture HB auctions independently of the implementation followed in a website. By running HBDetector across the top 35,000 Alexa websites, we collect and analyze a dataset of 800k auctions. Our results show that: (i) 14.28% of the top Alexa websites utilize HB; (ii) publishers tend to collaborate mostly with a relatively low number of demand partners, which are already big players in the waterfall standard; and (iii) HB latency can be significantly higher than waterfall, with up to 3x latency in the median case. | Bashir et al. in @cite_30 study the diffusion of user tracking caused by RTB-based programmatic ad auctions. Their results show that, under specific assumptions, no less than 52 tracking companies can observe at least 91% of an average user's browsing history. In an attempt to shed light upon Facebook's ad ecosystem, Andreou et al. in @cite_4 investigate the level of transparency provided by the mechanisms ``Why am I seeing this?'' and ``Ad Preferences Page''. The authors built a browser extension to collect Facebook ads and the information extracted from these two mechanisms, before performing their own ad campaigns that target users who installed their browser extension. They show that ad explanations are often incomplete and misleading. In @cite_23, the authors aim to enhance transparency in the ad ecosystem with regard to information sharing by developing a content-agnostic methodology to detect client- and server-side flows of information between ad exchanges, leveraging retargeted ads. Using crawled data, the authors collected 35.4k ad impressions and identified four different kinds of information sharing behavior between ad exchanges. | {
"abstract": [
"",
"Targeted advertising has been subject to many privacy complaints from both users and policy makers. Despite this attention, users still have little understanding of what data the advertising platforms have about them and why they are shown particular ads. To address such concerns, Facebook recently introduced two transparency mechanisms: a \"Why am I seeing this?\" button that provides users with an explanation of why they were shown a particular ad (ad explanations), and an Ad Preferences Page that provides users with a list of attributes Facebook has inferred about them and how (data explanations). In this paper, we investigate the level of transparency provided by these two mechanisms. We first define a number of key properties of explanations and then evaluate empirically whether Facebook's explanations satisfy them. For our experiments, we develop a browser extension that collects the ads users receive every time they browse Facebook, their respective explanations, and the attributes listed on the Ad Preferences Page; we then use controlled experiments where we create our own ad campaigns and target the users that installed our extension. Our results show that ad explanations are often incomplete and sometimes misleading while data explanations are often incomplete and vague. Taken together, our findings have significant implications for users, policy makers, and regulators as social media advertising services mature.",
"Numerous surveys have shown that Web users are concerned about the loss of privacy associated with online tracking. Alarmingly, these surveys also reveal that people are also unaware of the amount of data sharing that occurs between ad exchanges, and thus underestimate the privacy risks associated with online tracking. In reality, the modern ad ecosystem is fueled by a flow of user data between trackers and ad exchanges. Although recent work has shown that ad exchanges routinely perform cookie matching with other exchanges, these studies are based on brittle heuristics that cannot detect all forms of information sharing, especially under adversarial conditions. In this study, we develop a methodology that is able to detect client- and server-side flows of information between arbitrary ad exchanges. Our key insight is to leverage retargeted ads as a tool for identifying information flows. Intuitively, our methodology works because it relies on the semantics of how exchanges serve ads, rather than focusing on specific cookie matching mechanisms. Using crawled data on 35,448 ad impressions, we show that our methodology can successfully categorize four different kinds of information sharing behavior between ad exchanges, including cases where existing heuristic methods fail. We conclude with a discussion of how our findings and methodologies can be leveraged to give users more control over what kind of ads they see and how their information is shared between ad exchanges."
],
"cite_N": [
"@cite_30",
"@cite_4",
"@cite_23"
],
"mid": [
"2889145385",
"2793294490",
"2486891920"
]
} | 0 |
||
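The contrast between the waterfall and header-bidding setups described in the abstracts above can be sketched as follows; partner names, price floors, and bids are made up, and real header-bidding wrappers (e.g., client-side prebid libraries) additionally run asynchronously with timeouts.

```python
# Simplified contrast between the waterfall standard and header bidding (HB).
# Partner names, price floors, and bids are made up for illustration.
def demand_side_bid(name):                    # stand-in for a real bid request
    return {"dsp_a": 1.10, "dsp_b": 1.65, "dsp_c": 0.90}.get(name, 0.0)

def waterfall(partners, floors):
    # Partners are queried sequentially in a fixed priority order; the first one
    # meeting the current floor wins, even if a later partner would pay more.
    for name, floor in zip(partners, floors):
        bid = demand_side_bid(name)
        if bid >= floor:
            return name, bid
    return None, 0.0

def header_bidding(partners):
    # All demand partners are asked (conceptually in parallel, before the ad
    # server is called) and the highest bid wins, increasing competition.
    bids = {name: demand_side_bid(name) for name in partners}
    winner = max(bids, key=bids.get)
    return winner, bids[winner]

print(waterfall(["dsp_a", "dsp_b", "dsp_c"], [1.0, 1.5, 0.8]))   # ('dsp_a', 1.1)
print(header_bidding(["dsp_a", "dsp_b", "dsp_c"]))               # ('dsp_b', 1.65)
```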
1907.12352 | 2966538158 | We present ScaleTrotter, a conceptual framework for an interactive, multi-scale visualization of biological mesoscale data and, specifically, genome data. ScaleTrotter allows viewers to smoothly transition from the nucleus of a cell to the atomistic composition of the DNA, while bridging several orders of magnitude in scale. The challenges in creating an interactive visualization of genome data are fundamentally different in several ways from those in other domains like astronomy that require a multi-scale representation as well. First, genome data has intertwined scale levels---the DNA is an extremely long, connected molecule that manifests itself at all scale levels. Second, elements of the DNA do not disappear as one zooms out---instead the scale levels at which they are observed group these elements differently. Third, we have detailed information and thus geometry for the entire dataset and for all scale levels, posing a challenge for interactive visual exploration. Finally, the conceptual scale levels for genome data are close in scale space, requiring us to find ways to visually embed a smaller scale into a coarser one. We address these challenges by creating a new multi-scale visualization concept. We use a scale-dependent camera model that controls the visual embedding of the scales into their respective parents, the rendering of a subset of the scale hierarchy, and the location, size, and scope of the view. In traversing the scales, ScaleTrotter is roaming between 2D and 3D visual representations that are depicted in integrated visuals. We discuss, specifically, how this form of multi-scale visualization follows from the specific characteristics of the genome data and describe its implementation. Finally, we discuss the implications of our work to the general illustrative depiction of multi-scale data. | On a high level, our work relates to the use of abstraction in creating effective visual representations, i.e., the use of visual abstraction. Viola and Isenberg @cite_39 describe this concept as a process which removes detail when transitioning from a lower-level to a higher-level representation, yet which preserves the overall concept. While they attribute the removed detail to ``natural variation, noise, etc.'' in the investigated multi-scale representation, we actually deal with a different data scenario: DNA assemblies at different levels of scale. We thus technically do not deal with a ``concept-preserving transformation'' @cite_39, but with a process in which the underlying representational concept (or parts of it) can change. Nonetheless, their view of abstraction as an interactive process that allows viewers to relate one representation (at one scale) to another one (at a different scale) is essential to our work. | {
"abstract": [
"We explore the concept of abstraction as it is used in visualization, with the ultimate goal of understanding and formally defining it. Researchers so far have used the concept of abstraction largely by intuition without a precise meaning. This lack of specificity left questions on the characteristics of abstraction, its variants, its control, or its ultimate potential for visualization and, in particular, illustrative visualization mostly unanswered. In this paper we thus provide a first formalization of the abstraction concept and discuss how this formalization affects the application of abstraction in a variety of visualization scenarios. Based on this discussion, we derive a number of open questions still waiting to be answered, thus formulating a research agenda for the use of abstraction for the visual representation and exploration of data. This paper, therefore, is intended to provide a contribution to the discussion of the theoretical foundations of our field, rather than attempting to provide a completed and final theory."
],
"cite_N": [
"@cite_39"
],
"mid": [
"2751478023"
]
} | ScaleTrotter: Illustrative Visual Travels Across Negative Scales | The recent advances in visualization have allowed us to depict and understand many aspects of the structure and composition of the living cell. For example, cellVIEW [30] provides detailed visuals for viewers to understand the composition of a cell in an interactive exploration tool and Lindow et al. [35] created an impressive interactive illustrative depiction of RNA and DNA structures. Most such visualizations only provide a depiction of components/processes at a single scale level. Living cells, however, comprise structures that function at scales that range from the very small to the very large. The best example is DNA, which is divided and packed into visible chromosomes during mitosis and meiosis, while being read out at the scale level of base pairs. In between these scale levels, the DNA's structures are typically only known to structural biologists, while beyond the base pairs their atomic composition has implications for specific DNA properties.
The amount of information stored in the DNA is enormous. The human genome consists of roughly 3.2 Gb (giga base pairs) [1,52]. This information would fill 539,265 pages of the TVCG template, which would stack up to approx. 27 m. Yet, the whole information is contained inside the cell's nucleus with only approx. 6 µm diameter [1, page 179]. Similar to a coiled telephone cord, the DNA creates a compact structure that contains the long strand of genetic information. This organization results in several levels of perceivable structures (as shown in Fig. 1), which have been studied and visualized separately in the past. The problem thus arises of how to comprehend and explore the whole scope of this massive amount of multi-scale information. If we teach students or the general public about the relationships between the two extremes, for instance, we have to ensure that they understand how the different scales work together. Domain experts, in contrast, deal with questions such as whether correlations exist between the spatial vicinity of bases and genetic disorders. Such a correlation may manifest itself through two genetically different characteristics that are far from each other in sequence but close to each other in the DNA's 3D configuration. For experts we thus want to ensure that they can access the information at any of the scales. They should also be able to smoothly navigate the information space. The fundamental problem is thus to understand how we can enable a smooth and intuitive navigation in space and scale with seamless transitions. For this purpose we derive specific requirements of multiscale domains and data with negative scale exponents and analyze how the constraints affect their representations. Based on our analysis we introduce ScaleTrotter, an interactive multi-scale visualization of the human DNA, ranging from the level of the interphase chromosomes in the 6 µm nucleus to the level of base pairs (≈ 2 nm) resp. atoms (≈ 0.12 nm). We cover a scale range of 4-5 orders of magnitude in spatial size, and allow viewers to interactively explore as well as smoothly interpolate between the scales. We focus specifically on the visual transition between neighboring scales, so that viewers can mentally connect them and, ultimately, understand how the DNA is constructed. With our work we go beyond existing multi-scale visualizations due to the DNA's specific character. Unlike multiscale data from other fields, the DNA physically connects conceptual elements across all the scales (like the phone cord) so it never disappears from view. We also need to show detailed data everywhere and, for all stages, the scales are close together in scale space.
We base our implementation on multi-scale data from genome research about the positions of DNA building blocks, which are given at a variety of different scales. We then transition between these levels using what we call visual embedding. It maintains the context of larger-scale elements while adding details from the next-lower scale. We combine this process with scale-dependent rendering that only shows relevant amounts of data on the screen. Finally, we support interactive data exploration through scale-dependent view manipulations, interactive focus specification, and visual highlighting of the zoom focus.
In summary, our contributions are as follows. First, we analyze the unique requirements of multi-scale representations of genome data and show that they cannot be met with existing approaches. Second, we demonstrate how to achieve smooth scale transitions for genome data through visual embedding of one scale within another based on measured and simulated data. We further limit the massive data size with a scale-dependent camera model to avoid visual clutter and to facilitate interactive exploration. Third, we describe the implementation of this approach and compare our results to existing illustrations. Finally, we report on feedback from professional illustrators and domain experts. It indicates that our interactive visualization can serve as a fundamental building block for tools that target both domain experts and laypeople.
Abstraction in illustrative visualization
On a high level, our work relates to the use of abstraction in creating effective visual representations, i. e., the use of visual abstraction. Viola and Isenberg [58] describe this concept as a process, which removes detail when transitioning from a lower-level to a higher-level representation, yet which preserves the overall concept. While they attribute the removed detail to "natural variation, noise, etc." in the investigated multi-scale representation we actually deal with a different data scenario: DNA assemblies at different levels of scale. We thus technically do not deal with a "concept-preserving transformation" [58], but with a process in which the underlying representational concept (or parts of it) can change. Nonetheless, their view of abstraction as an interactive process that allows viewers to relate one representation (at one scale) to another one (at a different scale) is essential to our work.
Also important from Viola and Isenberg's discussion [58] is their concept of axes of abstraction, which are traversed in scale space. We also connect the DNA representations at different scales, facilitating a smooth transition between them. In creating this axis of abstraction, we focus primarily on changes of Viola and Isenberg's geometric axis, but without a geometric interpolation of different representations. Instead, we use visual embedding of one scale in another one.
Scale-dependent molecular and genome visualization
We investigate multi-scale representations of the DNA, which relates to work in bio-molecular visualization. Several surveys have summarized work in this field [2,28,29,39], so below we only point out selected approaches. In addition, a large body of work by professional illustrators on mesoscale cell depiction inspired us such as visualizing the human chromosome down to the detail of individual parts of the molecule [19].
In general, as one navigates through large-scale 3D scenes, the underlying subject matter is intrinsically complex and requires appropriate interaction to aid intellection [17]. The inspection of individual parts is challenging, in particular if the viewer is too far away to appreciate its visual details. Yet large, detailed datasets or procedural approaches are essential to create believable representations. To generate not only efficient but effective visualizations, we thus need to remove detail in Viola and Isenberg's [58] visual abstraction sense. This allows us to render at interactive rates as well as to see the intended structures, which would otherwise be hidden due to cluttered views. Consequently, even most single-scale small-scale representations use some type of multiscale approach and with it introduce abstraction. Generally we can distinguish three fundamental techniques: multi-scale representations by leaving out detail of a single data source, multi-scale techniques that actively represent preserved features at different scales, and multi-scale approaches that can also transit between representations of different scales. We discuss approaches for these three categories next.
Multi-scale visualization by means of leaving out detail
An example of leaving out details in a multi-scale context is Parulek et al.'s [46] continuous levels-of-detail for large molecules and, in particular, proteins. They reduced detail of far-away structures for faster rendering. They used three different conceptual distances to create increasingly coarser depictions such as those used in traditional molecular illustration. For distant parts of a molecule, in particular, they seamlessly transition to super atoms using implicit surface blending.
The cellVIEW framework [30] also employs a similar level-of-detail (LOD) principle using advanced GPU methods for proteins in the HIV. It also removes detail to depict internal structures, and procedurally generates the needed elements. In mesoscopic visualization, Lindow et al. [34] applied grid-based volume rendering to sphere raycasting to show large numbers of atoms. They bridged five orders of magnitude in length scale by exploiting the reoccurrence of molecular sub-entities. Finally, Falk et al. [13] proposed out-of-core optimizations for visualizing large-scale whole-cell simulations. Their approach extended Lindow et al.'s [34] work and provides a GPU ray marching for triangle rendering to depict pre-computed molecular surfaces.
Approaches in this category thus create a "glimpse" of multi-scale representations by removing detail and adjusting the remaining elements accordingly. We use this principle, in fact, in an extreme form to handle the multi-scale character of the chromosome data. We completely remove the detail of a large part of the dataset. If we would show all small details, an interactive rendering would be impossible and they would distract from the depicted elements. Nonetheless, this approach typically only uses a single level of data and does not incorporate different conceptual levels of scale.
Different shape representations by conceptual scale
The encoding of structures through different conceptual scales is often essential. Lindow et al. [35], for instance, described different rendering methods of nucleic acids-from 3D tertiary structures to linear 2D and graph models-with a focus on visual quality and performance. They demonstrate how the same data can be used to create both 3Dspatial representations and abstract 2D mappings of genome data. This produces three scale levels: the actual sequence, the helical form in 3D, and the spatial assembly of this form together with proteins. Waltemate et al. [59] represented the mesoscopic level with meshes or microscopic images, while showing detail through molecule assemblies. To transition between the mesoscopic and the molecular level, they used a membrane mapping to allow users to inspect and resolve areas on demand. A magnifier tool overlays the high-scale background with lower-scale details. This approach relates to our transition scheme, as we depict the higher scale as background and the lower scale as foreground. A texture-based molecule rendering has been proposed by Bajaj et al. [6]. Their method reduces the visual clutter at higher levels by incorporating a biochemically sensitive LOD hierarchy.
Tools used by domain experts also visualize different conceptual genome scales. To the best of our knowledge, the first tool to visualize the 3D human genome has been Genome3D [4]. It allows researchers to select a discrete scale level and then load data specifically for this level. The more recent GMOL tool [43] shows 3D genome data captured from Hi-C data [56]. GMOL uses a six-scale system similar to the one that we employ and we derived our data from theirs. They only support a discrete "toggling between scales" [43], while we provide a smooth scale transition. Moreover, we add further semantic scale levels at the lower end to connect base locations and their atomistic compositions.
Conceptual scale representations with smooth transition
A smooth transition between scales has previously been recognized as important. For instance, van der Zwan et al. [57] carried out structural abstraction with seamless transitions for molecules by continuously adjusting the 3D geometry of the data. Miao et al. [38] substantially extended this concept and applied it to DNA nanostructure visualization. They used ten semantic scales and defined smooth transitions between them. This process allows scientists to interact at the appropriate scale level. Later, Miao et al. [37] combined this approach with three dimensional embeddings. In addition to temporal changes of scale, Lueks et al. [36] explored a seamless and continuous spatial multiscale transition by geometry adjustment, controlled by the location in image or in object space. Finally, Kerpedjiev et al. [25] demonstrated multi-scale navigation of 2D genome maps and 1D genome tracks employing a smooth transition for the user to zoom into views.
All these approaches only transition between nearby scale levels and manipulate the depicted data geometry, which limits applicability. These methods, however, do not work in domains where a geometry transition cannot be defined. Further, they are limited in domains where massive multi-scale transitions are needed due to the large amount of geometry that is required for the detailed scale levels. We face these issues in our work and resolve them using visual embeddings instead of geometry transitions as well as a scale-dependent camera concept. Before detailing our approach, however, we first discuss general multiscale visualization techniques from other visualization domains.
General multi-scale data visualization
The vast differences in spatial scale of our world in general have fascinated people for a long time. Illustrators have created explanations of these scale differences in the form of images (e. g., [60] and [47, Fig. 1]), videos (e. g., the seminal "Powers of Ten" video [11] from 1977), and newer interactive experiences (e. g., [15]). Most illustrators use a smart composition of images blended such that the changes are (almost) unnoticeable, while some use clever perspectives to portray the differences in scale. These inspirations have prompted researchers in visualization to create similar multi-scale experiences, based on real datasets.
The classification from Sect. 2.2 for molecular and genome visualization applies here as well. Everts et al. [12], e. g., removed detail from brain fiber tracts to observe the characteristics of the data at a higher scale. Hsu et al. [22] defined various cameras for a dataset, each showing a different level of detail. They then used image masks and camera ray interpolation to create smooth spatial scale transitions that show the data's multi-scale character. Next, Glueck et al. [16]'s approach exemplifies the change of shape representations by conceptual scale by smoothly changing a multi-scale coordinate grid and position pegs to aid depth perception and multi-scale navigation of 3D scenes. They simply remove detail for scales that no longer contribute much to the visualization. In their accompanying video, interestingly, they limited the detail for each scale to only the focus point of the scale transition to maintain interactive frame rates. Another example of this category are geographic multi-scale representations such as online maps (e. g., Google or Bing maps), which contain multiple scale representations, but typically toggle between them as the user zooms in or out. Finally, virtual globes are an example for conceptual scale representations with smooth transitions. They use smooth texture transitions to show an increasing level of detail as one zooms in. Another example is Mohammed et al.'s [41] Abstractocyte tool, which depicts differently abstracted astrocytes and neurons. It allows users to smoothly transition between the cell-type abstractions using both geometry transformations and blending. We extend the latter to our visual embedding transition.
Also these approaches only cover a relatively small scale range. Even online map services cover less than approx. six orders of magnitude. Besides the field of bio-molecular and chemistry research discussed in Sect. 2.2, in fact, only astronomy deals with large scale differences. Here, structures range from celestial bodies (≥ approx. 10^2 m) to the size of the observable universe (1.3 · 10^26 m), in total 24 orders of magnitude.
To depict such data, visualization researchers have created explicit multi-scale rendering architectures. Schatz et al. [51], for example, combined the rendering of overview representations of larger structures with the detailed depiction of parts that are close to the camera or have high importance. To truly traverse the large range of scales of the universe, however, several datasets that cover different orders of size and detail magnitude have to be combined into a dedicated data rendering and exploration framework. The first such framework was introduced by Fu et al. [14,21] who used scale-independent modeling and rendering and power-scaled coordinates to produce scale-insensitive visualizations. This approach essentially treats, models, and visualizes each scale separately and then blends scales in and out as they appear or disappear. The different scales of entities in the universe can also be modeled using a ScaleGraph [26], which facilitates scale-independent rendering using scene graphs. Axelsson et al. [5] later extended this concept to the Dynamic Scene Graph, which, in the OpenSpace system [8], supports several high-detail locations and stereoscopic rendering. The Dynamic Scene Graph uses a dynamic camera node attachment to visualize scenes of varying scale and with high floating point precision.
With genome data we face similar problems concerning scaledependent data and the need to traverse a range of scales. We also face the challenge that our conceptual scales are packed much more tightly in scale space as we explain next. This leads to fundamental differences between both application domains.
MULTI-SCALE GENOME VISUALIZATION
Visualizing the nuclear human genome-from the nucleus that contains all chromosomal genetic material down to the very atoms that make up the DNA-is challenging due to the inherent organization of the DNA in tubular arrangements. DNA in its B-form is only 2 nm [3] wide, which in its fibrous form or at more detailed scales would be too thin to be perceived. This situation is even more aggravated by the dense organization of the DNA and the structural hierarchy that bridges several scales. The previously discussed methods do not deal with such a combination of structural characteristics. Below we thus discuss the challenges that arise from the properties of these biological entities and how we address them by developing our new approach that smoothly transitions between views of the genome at its various scales.
Challenges of interactive multiscale DNA visualization
Domain scientists who sequence, investigate, and generally work with genome data use a series of conceptual levels for analysis and visualization [43]: the genome scale (containing all approx. 3.2 Gb of the human genome), the chromosome scale (50-100 Mb), the loci scale (in the order of Mb), the fiber scale (in the order of Kb), the nucleosome scale (146 b), and the nucleotide scale (i. e., 1 b), in addition to the atomistic composition of the nucleotides. These seven scales cover a range of approx. 4-5 orders of magnitude in physical size. In astronomy or astrophysics, in contrast, researchers deal with a similar number of scales: approx. 7-8 conceptual scales of objects, yet over a range of some 24 orders of magnitude of physical size. A fundamental difference between multi-scale visualizations in the two domains is, therefore, the scale density of the conceptual levels that need to be depicted.
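A back-of-the-envelope sketch of this scale-density argument follows; only the nucleus, base-pair, and atom sizes come from the text, the remaining characteristic sizes are rough guesses for illustration.

```python
# Back-of-the-envelope sketch: how far apart the conceptual genome levels are in
# scale space (orders of magnitude). Sizes marked "assumed" are rough guesses.
import math

sizes_m = {
    "nucleus":    6e-6,     # from the text
    "chromosome": 2e-6,     # assumed (interphase chromosome territory)
    "locus":      3e-7,     # assumed
    "fiber":      3e-8,     # assumed
    "nucleosome": 1.1e-8,   # assumed (~11 nm)
    "base pair":  2e-9,     # from the text
    "atom":       1.2e-10,  # from the text
}

names = list(sizes_m)
total = math.log10(sizes_m[names[0]] / sizes_m[names[-1]])
steps = [math.log10(sizes_m[a] / sizes_m[b]) for a, b in zip(names, names[1:])]
print(f"total span: {total:.1f} orders of magnitude")          # roughly 4-5 orders
print(f"average step: {sum(steps) / len(steps):.2f} orders")   # well below 1, vs. ~3 per step in astronomy
```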
Multi-scale astronomy visualization [5,14,21,26] deals with positive-exponent scale-space (Fig. 2, top), where two neighboring scales are relatively far apart in scale space. For example, planets are much smaller than stars, stars are much smaller than galaxies, galaxies are much smaller than galaxy clusters, etc. On average, two scales have a distance of three or more orders of magnitude in physical space. The consequence of this high distance in scale space between neighboring conceptual levels is that, as one zooms out, elements from one scale typically all but disappear before the elements on the next conceptual level become visible. This aspect is used in creating multi-scale astronomy visualizations. For example, Axelsson et al.'s Dynamic Scene Graph [5] uses spheres of influence to control the visibility range of objects from a given subtree of the scene graph. In fact, the low scale density of the conceptual levels made the seamless animation of the astronomy/astrophysics section in the "Powers of Ten" video [11] from 1977 possible, in a time before computer graphics could be used to create such animations. Eames and Eames [11] simply and effectively blended smoothly between consecutive images that depicted the respective scales. For the cell/genome part, however, they use sudden transitions between conceptual scales without spatial continuity, and they also leave out several of the conceptual scales that scientists use today such as the chromosomes and the nucleosomes.
Fig. 2 (caption): Multi-scale visualization in astronomy vs. genomics. The size difference between celestial bodies is extremely large (e. g., sun vs. earth; the earth is almost invisible at that scale). The distance between earth and moon is also large, compared to their sizes. In the genome, we have similar relative size differences, yet molecules are densely packed as exemplified by the two base pairs in the DNA double helix.
The reason for this problem of smoothly transitioning between scales in genome visualization-i. e., in negative-exponent scale-space (Fig. 2, bottom)-is that the conceptual levels of a multi-scale visualization are much closer to each other in scale. In contrast to astronomy's positive-exponent scale-space, there is only an average scale distance of about 0.5-0.6 orders of magnitude of physical space between two conceptual scales. Elements on one conceptual scale are thus still visible when elements from the next conceptual scale begin to appear. The scales for genome visualizations are thus much denser compared to astronomy's average scale distance of three orders of magnitude.
Moreover, in the genome the building blocks are physically connected in space and across conceptual scales, except for the genome and chromosome levels. From the atoms to the chromosome scale, we have a single connected component. It is assembled in different geometric ways, depending on the conceptual scale at which we choose to observe. For example, the sequence of all nucleotides (base pairs) of the 46 chromosomes in a human cell would stretch for 2 m, with each base pair only being 2 nm wide [3], while a complete set of chromosomes fits into the 6 µm wide nucleus. Nonetheless, in all scales between the sequence of nucleotides and a chromosome we deal with the same, physically connected structure. In astronomy, instead, the physical space between elements within a conceptual scale is mostly empty and elements are physically not connected-elements are only connected by proximity (and gravity), not by visible links.
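The 2 m figure quoted above can be checked with a quick calculation, assuming the standard axial rise of B-DNA of roughly 0.34 nm per base pair (a textbook value, not stated in the text):

```python
# Quick check of the "2 m of DNA in a 6 µm nucleus" figure, assuming ~0.34 nm of
# axial rise per base pair and a diploid genome of roughly 2 x 3.2 Gb.
rise_per_bp_m = 0.34e-9
diploid_bp = 2 * 3.2e9                    # 46 chromosomes = two copies of ~3.2 Gb
total_length_m = diploid_bp * rise_per_bp_m
nucleus_diameter_m = 6e-6
print(f"stretched length: {total_length_m:.2f} m")              # ~2.18 m
print(f"length / nucleus diameter: {total_length_m / nucleus_diameter_m:.0f}x")
```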
The large inter-scale distance and physical connectedness, naturally, also create the problem of how to visualize the relationship between two conceptual scale levels. The mentioned multi-scale visualization systems from astronomy [5,14,21,26] use animation for this purpose, sometimes adding invisible and intangible elements such as orbits of celestial bodies. In general multi-scale visualization approaches, multiscale coordinate grids [16] can assist the perception of scale-level relationships. These approaches only work if the respective elements are independent of each other and can fade visually as one zooms out, for example, into the next-higher conceptual scale. The connected composition of the genome does make these approaches impossible. In the genome, in addition, we have a complete model for the details in each conceptual level, derived from data that are averages of measurements from many experiments on a single organism type. We are thus able to and need to show visual detail everywhere-as opposed to only close to a single point like planet Earth in astronomy.
Ultimately, all these points lead to two fundamental challenges for us to solve. The first (discussed in Sect. 3.2 and 3.3) is how to visually create effective transitions between conceptual scales. The transitional scales shall show the containment and relationship character of the data even in still images and seamlessly allow us to travel across the scales as we are interacting. They must deal with the continuous nature of the depicted elements, which are physically connected in space and across scales. The second challenge is a computational one. Positional information of all atoms from the entire genome would not fit into GPU memory and will prohibit interactive rendering performance. We discuss how to overcome these computational issues in Sect. 4, along with the implementation of the visual design from Sect. 3.2 and 3.3.
Visual embedding of conceptual scales
Existing multi-scale visualizations of DNA [36,38,57] or other data [41] often use geometry manipulations to transition from one scale to the next. For the full genome, however, this approach would create too much detail to be useful and would require too many elements to be rendered. Moreover, two consecutive scales may differ significantly in structure and organization. A nucleosome, e. g., consists of nucleotides in double-helix form, wrapped around a histone protein. We thus need appropriate abstracted representations for the whole set of geometry in a given scale that best depict the scale-dependent structure and still allow us to create smooth transitions between scales.
Nonetheless, the mentioned geometry-based multi-scale transformations still serve as an important inspiration to our work. They often provide intermediate representations that may not be entirely accurate, but show how one scale relates to another one, even in a still image. Viewers can appreciate the properties of both involved scale levels, such as in Miao et al.'s [38] transition between nucleotides and strands. Specifically, we take inspiration from traditional illustration where a related visual metaphor has been used before. As exemplified by Fig. 3, illustrators sometimes use an abstracted representation of a coarser scale to aid viewers with understanding the overall composition as well as the spatial location of the finer details. This embedding of one representation scale into the next is similar to combining several layers of visual information-or super-imposition [42, pp. 288 ff]. It is a common approach, for example, in creating maps. In visualization, this principle has been used in the past (e. g., [10,23,49,50]), typically applying some form of transparency to be able to perceive the different layers. Transparency, however, can easily lead to visualizations that are difficult to understand [9]. Simple outlines to indicate the coarser shape or context can also be useful [54]. In our case, even outlines easily lead to clutter due to the immense amount of detail in the genome data. Moreover, we are not interested in showing that some elements are spatially inside others, but rather that the elements are part of a higher-level structure, thus are conceptually contained.
We therefore propose visual scale embedding of the detailed scale into its coarser parent (see the illustration in Fig. 4). We render an abstracted version of the coarser scale that serves as context: we completely flatten it as shown in Fig. 4, inspired by previous multi-scale visualizations from structural biology [46]. Then we render the detailed geometry of the next-smaller scale on top of it. This concept adequately supports our goal of smooth scale transitions. A geometric representation of the coarser scale is first shown using 3D shading as long as it is still small on the screen, i. e., the camera is far away. It transitions to a flat, canvas-like representation when the camera comes closer and the detail in this scale is not enough anymore. We now add the representation of the more detailed scale on top—again using 3D shading, as shown for two scale transitions in Fig. 5. Our illustrative visualization concept combines the 2D aspect of the flattened coarser scale with the 3D detail of the finer scale. With it we make use of superimposed representations as argued by Viola and Isenberg [58], which are an alternative to spatially or temporally juxtaposed views. In our case, the increasingly abstract character of rendering of the coarser scale (as we flatten it during zooming in) relates to its increasingly contextual and conceptual nature. Our approach thus relates to semantic zooming [48] because the context layer turns into a flat surface or canvas, irrespective of the underlying 3D structure and regardless of the specific chosen view direction. This type of scale zoom does not have the character of cut-away techniques as often used in tools to explore containment in 3D data (e. g., [31,33]). Instead, it is more akin to the semantic zooming in the visualization of abstract data, which is embedded in the 2D plane (e. g., [61]).
Multi-scale visual embedding and scale-dependent view
One visual embedding step connects two consecutive semantic scales. We now concatenate several steps to assemble the whole hierarchy (Fig. 6). This is conceptually straightforward because each scale by itself is shown using 3D shading. Nonetheless, as we get to finer and finer details, we face the two major problems mentioned at the start of Sect. 3.2: visual clutter and limitations of graphics processing. Both are caused by the tight scale space packing of the semantic levels in the genome. At detailed scales, a huge number of elements are potentially visible, e. g., 3.2 Gb at the level of nucleotides. To address this issue, we adjust the camera concept to the multi-scale nature of the data.
In previous multi-scale visualization frameworks [5,14,21,26], researchers have already used scale-constrained camera navigation. For example, they apply a scale-dependent camera speed to quickly cover the huge distances at coarse levels and provide fine control for detailed levels. In addition, they used a scale-dependent physical camera size or scope such that the depicted elements would appropriately fill the distance between near and far plane, or use depth buffer remapping [14] to cover a larger depth range. In astronomy and astrophysics, however, we do not face the problem of a lot of nearby elements in detailed levels of scale due to their loose scale-space packing. After all, if we look into the night sky we do not see much more than "a few" stars from our galactic neighborhood which, in a visualization system, can easily be represented by a texture map. Axelsson et al. [5], for example, simply attach their cameras to nodes within the scale level they want to depict.
For the visualization of genome data, however, we have to introduce an active control of the scale-dependent data-hierarchy size or scope, as otherwise we would "physically see," for example, all nucleosomes or nucleotides up to the end of the nucleus. Aside from the resulting clutter, such complete genome views would also conceptually not be helpful because, due to the nature of the genome, the elements within a detailed scale largely repeat themselves. The visual goal should thus be to only show a relevant and scale-dependent subset of each hierarchy level. We thus limit the rendering scope to a subset of the hierarchy, depending on the chosen scale level and spatial focus point. The example in Fig. 7 depicts the nucleosome scale, where we only show a limited number of nucleosomes to the left and the right of the current focus point in the sequence, while the rest of the hierarchy has been blended out. We thereby extend the visual metaphor of the canvas, which we applied in the visual embedding, and use the white background of the frame buffer as a second, scale-dependent canvas, which limits the visibility of the detail. In contrast to photorealism that drives many multi-scale visualizations in astronomy, we are interested in appropriately abstracted representations through a scale-dependent removal of distant detail to support viewers in focusing on their current region of interest.
IMPLEMENTATION
Based on the conceptual design from Sect. 3 we now describe the implementation of our multi-scale genome visualization framework. We first describe the data sources and data hierarchy we use and then explain the shader-based realization of the scale transitions using a series of visual embedding steps, as well as some interaction considerations.
Data sources and data hierarchy
Researchers in genome studies have a high interest in understanding the relationships between the spatial structure at the various scale levels and the biological function of the DNA. Therefore they have created a multi-scale dataset that allows them to look at the genome in different spatial scale levels [43]. This data was derived by Nowotny et al. [43] from a model of the human genome by Asbury et al. [4], which in turn was constructed based on various data sources and observed properties, following an approach of space-filling, fractal packing [7]. As a result, Nowotny et al. [43] obtained the positions of the nucleosomes in space, and from these computed the positions of fibers, loci, and chromosomes (Fig. 8). They stored this data in their own Genome Scale System (GSS) format and also provided the positions of the nucleotides for one nucleosome (Fig. 8, bottom-right). Even with this additional data, we still have to procedurally generate further information as we visualize this data such as the orientations of the nucleosomes (based on the location of two consecutive nucleosomes) and the linker DNA strands of nucleotides connecting two consecutive nucleosomes.
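A sketch of the procedural orientation step mentioned above; the paper does not specify the exact convention, so aligning a reference axis of the instanced nucleosome geometry with the direction toward the next nucleosome is just one plausible choice.

```python
# Sketch: derive a nucleosome's orientation from its own position and that of its
# successor in the sequence, as one plausible convention (not the paper's exact one).
import numpy as np

def orientation_matrix(p_current: np.ndarray, p_next: np.ndarray) -> np.ndarray:
    forward = p_next - p_current
    forward /= np.linalg.norm(forward)
    # Build an orthonormal frame around the chain direction.
    helper = np.array([0.0, 0.0, 1.0]) if abs(forward[2]) < 0.9 else np.array([1.0, 0.0, 0.0])
    right = np.cross(helper, forward)
    right /= np.linalg.norm(right)
    up = np.cross(forward, right)
    return np.column_stack([right, up, forward])   # 3x3 rotation matrix

p0, p1 = np.array([0.0, 0.0, 0.0]), np.array([10.0, 5.0, 0.0])
R = orientation_matrix(p0, p1)   # instanced nucleosome geometry would be rotated by R
```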
This data provides positions at every scale level, without additional information about the actual sizes. Only at the nucleotide and atom scales the sizes are known. It was commonly thought that nucleosomes are tightly and homogeneously packed into 30 nm fibers, 120 nm chromonema, and 300-700 nm chromatids, but recent studies [45] disprove this organization and confirm the existence of flexible chains with diameters of 5-24 nm. Therefore, for all hierarchically organized scales coarser than the nucleosome, we do not have information about the specific shape that each data point represents. We use spheres with scale-adjusted sizes as rendering primitives as they well portray the chaining of elements according to the data-point sequence. With respect to visualizing this multi-scale phenomenon, the data hierarchy (i. e., 100 nucleosomes = 1 fiber, 100 fibers = 1 locus, approx. 100 loci = 1 chromosome) is not the same as the hierarchy of semantic scales that a viewer sees. For example, the dataset contains a level that stores the chromosome positions, but if rendered we would only see one sphere for each chromosome ( Fig. 9(b)). Such a depiction would not easily be recognized as representing a chromosome due to the lack of detail. The chromosomes by themselves only become apparent once we display them with more shape details using the data level of the loci as given in Fig. 9(c). The locations at the chromosomes data scale can instead be better used to represent the semantic level of the nucleus by rendering them as larger spheres, all with the same color and with a single outline around the entire shape as illustrated in Fig. 9(a).
In Table 1 we list the relationships between data hierarchy and semantic hierarchy for the entire set of scales we support. From the table it follows that the choice of color assignment and the subset of elements rendered on the screen support viewers in understanding the semantic level we want to portray. For example, by rendering the fiber positions colored by chromosome we facilitate the understanding of a detailed depiction of a chromosome, rather than that chromosomes consist of several loci. In an alternative depiction for domain experts, who are interested in studying the loci regions, we could instead assign the colors by loci for the fiber data level and beyond.
We added two additional scale transitions that are not realized by visual embedding, but instead by color transitions. The first of these transitions changes the colors from the previously maintained chromosome color to nucleotide colors as the nucleotide positions are rendered in their 3D shape, to illustrate that the nucleosomes themselves consist of pairs of nucleotides. The following transition then uses visual embedding as before, to transition to atoms while maintaining nucleotide colors. The last transition again changes this color assignment such that the atoms are rendered in their typical element colors, using 3D shading and without flattening them.
Realizing visual scale embedding
For our proof-of-concept implementation we build on the molecular visualization functionality provided in the Marion framework [40]. We added to this framework the capability to load the previously described GSS data. We thus load and store the highest detail of the data, the 23,958,240 nucleosome positions, as well as all positions of the coarser scales. To show more detail, we use the single nucleosome example in the data, which consists of 292 nucleotides, and then create the ≈ 24 · 10^6 instances for the semantic nucleosome scale. Here we make full use of Le Muzic et al.'s [30] technique of employing the tessellation stages on the GPU, which dynamically injects the atoms of the nucleosome. We apply a similar instancing approach for transitioning to an atomistic representation, based on the 1AOI model from the PDB. To visually represent the elements, we utilize 2D sphere impostors instead of sphere meshes [30]. Specifically, we use triangular 2D billboards (i. e., only three vertices) that always face the camera and assign the depth to each fragment that it would get if it had been a sphere.
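The triangular impostor idea can be illustrated with the following CPU-side sketch that computes the three billboard vertices; the GPU implementation generates these in the tessellation stages, and the fragment shader then discards fragments outside the sphere's silhouette and writes sphere depth. The function name and the equilateral-triangle construction are illustrative, not the framework's actual code.

```python
# Sketch: a single camera-facing triangle whose inscribed circle covers the
# sphere's silhouette (an equilateral triangle with incircle radius r has
# circumradius 2r, so the vertices sit at distance 2r from the sphere centre).
import numpy as np

def impostor_triangle(center, radius, cam_right, cam_up):
    angles = np.deg2rad([90.0, 210.0, 330.0])
    return [center + 2.0 * radius * (np.cos(a) * cam_right + np.sin(a) * cam_up)
            for a in angles]

verts = impostor_triangle(np.array([0.0, 0.0, 0.0]), 1.0,
                          np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]))
print(verts)   # three world-space vertices of the camera-facing billboard
```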
If we wanted to directly render all atoms at the finest detail scale, we would have to deal with ≈ 3.2 Gb · 70 atoms/b = 224 · 10^9 atoms. This amount of detail is not possible to render at interactive rates. With LOD optimizations, such as the creation of super-atoms for distant elements, cellVIEW could process 15 · 10^9 atoms at 60 Hz [30]. This amount of detail does not seem to be necessary in our case. Our main goal is the depiction of the scale transitions and too much detail would cause visual noise and distractions. We use the scale-dependent removal of distant detail described in Sect. 3.3. As listed in Table 1, for coarse scales we show all chromosomes. Starting with the semantic fibers scale, we only show the focus chromosome. For the semantic nucleosomes level, we only show the focus fiber and two additional fibers in both directions of the sequence. To indicate that the sequence continues, we gradually fade out the ends of the sequence of nucleosomes as shown in Fig. 7. For finer scales beyond the nucleosomes, we maintain the sequence of five fibers around the focus point, but remove the detail of the links between nucleosomes.
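A sketch of this scale-dependent scope, assuming a simple index window around the focus fiber and a parabolic fade toward the strand ends; the window half-width of two fibers follows the text, the fade function is illustrative.

```python
# Sketch of the scale-dependent rendering scope: keep a window of +/- 2 fibers
# around the focus fiber and fade out the nucleosomes near the ends of the strand.
def fiber_scope(focus_fiber: int, n_fibers: int, halfwidth: int = 2):
    lo, hi = max(0, focus_fiber - halfwidth), min(n_fibers - 1, focus_fiber + halfwidth)
    return range(lo, hi + 1)

def end_fade(position_in_window: float) -> float:
    # 1.0 in the middle of the visible strand, fading to 0.0 at both ends.
    return max(0.0, min(1.0, 4.0 * position_in_window * (1.0 - position_in_window)))

print(list(fiber_scope(focus_fiber=10, n_fibers=100)))   # [8, 9, 10, 11, 12]
print(end_fade(0.05), end_fade(0.5))                     # ~0.19  1.0
```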
To manage the different rendering scopes and color assignments, we assign IDs to the elements of a data scale and record, for each element, the IDs of its hierarchy ancestors. For example, each chromosome data element gets an ID, which in turn is known to the loci data instances. We use this ID to assign a color to the chromosomes. Because we continue rendering all chromosomes even at the fiber data level (i. e., the semantic chromosome-with-detail level), we also pass the IDs of the chromosomes to the fiber data elements. Later, the IDs of the fiber data elements are used to determine the rendering scope at the data levels of nucleotide positions and finer (more detail).
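A minimal sketch of this ID bookkeeping is shown below, assuming a hypothetical Element record and an illustrative palette; in the actual implementation these IDs live in per-instance GPU buffers and the tests run in shaders.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Element:
    """One data element together with the IDs of its hierarchy ancestors."""
    own_id: int
    chromosome_id: int              # known to loci, fibers, nucleosomes, ...
    fiber_id: Optional[int] = None  # known to nucleosomes and finer levels

CHROMOSOME_PALETTE = ["#4c72b0", "#dd8452", "#55a868", "#c44e52"]  # illustrative colors

def color_of(e: Element) -> str:
    """Assign the color by chromosome ancestor, as used up to the fiber level."""
    return CHROMOSOME_PALETTE[e.chromosome_id % len(CHROMOSOME_PALETTE)]

def in_scope(e: Element, focus_chromosome: int, focus_fiber: Optional[int]) -> bool:
    """Keep only elements whose ancestors match the current focus selection."""
    if e.chromosome_id != focus_chromosome:
        return False
    return focus_fiber is None or e.fiber_id == focus_fiber
```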
For realizing the transition in the visual scale embedding, i. e., transitioning from the coarser scale S_N to the finer scale S_N+1, we begin by alpha-blending S_N rendered with 3D detail and flattened S_N. We achieve the 3D detail with screen-space ambient occlusion (SSAO), while the flattened version does not use SSAO. Next we transition between S_N and S_N+1 by first rendering S_N and then S_N+1 on top, the latter with increasing opacity. Here we avoid visual clutter by only adding detail to elements in S_N+1 on top of those regions that belonged to their parents in S_N. The necessary information for this purpose comes from the previously mentioned IDs. We thus first render all flattened elements of S_N, before blending in detail elements from S_N+1. In the final transition of visual scale embedding, we remove the elements from S_N through alpha-blending. For the two color transitions discussed in Sect. 4.1 we simply alpha-blend between the corresponding elements of S_N and S_N+1, but with different color assignments.
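The following sketch summarizes this three-phase transition as blend weights driven by a single normalized parameter t. The smoothstep ramps and the breakpoints 0.3 and 0.7 are illustrative assumptions, not the parametrization we actually use, which is tuned per scale pair.

```python
def smoothstep(a, b, t):
    """Clamped cubic ramp: 0 for t <= a, 1 for t >= b, smooth in between."""
    u = min(max((t - a) / (b - a), 0.0), 1.0)
    return u * u * (3.0 - 2.0 * u)

def transition_weights(t):
    """Blend weights for one visual-embedding transition from S_N to S_N+1,
    driven by t in [0, 1]:
      ssao_n   -- how much 3D shading (SSAO) S_N still receives (1 = full, 0 = flat)
      alpha_n1 -- opacity of the detail elements of S_N+1 drawn on top
      alpha_n  -- opacity of the flattened S_N context before it is removed"""
    ssao_n   = 1.0 - smoothstep(0.0, 0.3, t)   # phase 1: flatten S_N
    alpha_n1 = smoothstep(0.3, 0.7, t)         # phase 2: blend in S_N+1 detail
    alpha_n  = 1.0 - smoothstep(0.7, 1.0, t)   # phase 3: fade out S_N
    return ssao_n, alpha_n1, alpha_n
```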
Interaction considerations
The rendering speeds are in the range of 15-35 fps on an Intel Core i7-8700K PC (6 cores, 3.70 GHz, 32 GB RAM, Nvidia Quadro P4000, Windows 10 x64). In addition to providing a scale-controlled traversal of the scale hierarchy toward a focus point, we thus allow users to interactively explore the data and choose their focus point themselves.
To support this interaction, we allow users to apply transformations such as rotation and panning. We also allow users to click on the data to select a new focus point, which controls the removal of elements to be rendered at specific scale transitions (as shown in Table 1). First, users can select the focus chromosome (starting at loci positions), whose position is the median point within the sequence of fiber positions for that chromosome. This choice controls which chromosome remains as we transition from the fiber to the nucleosome data scale. Next, starting at the nucleosome data scale, users can select a strand of five consecutive fiber positions, which then ensures that only this strand remains as we transition from nucleosome to nucleotide positions.
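The two selection rules can be sketched as follows; the function names and the clamping at the sequence ends are our own assumptions for this illustration.

```python
def chromosome_focus_point(fiber_positions):
    """Focus point of a chromosome: the median element of its ordered
    sequence of fiber positions (a sketch of the selection rule)."""
    return fiber_positions[len(fiber_positions) // 2]

def fiber_strand(fiber_indices, picked_index, strand_length=5):
    """Return the strand of consecutive fiber positions centered on the picked
    one; only this strand remains when transitioning to nucleotide positions."""
    half = strand_length // 2
    start = max(0, min(picked_index - half, len(fiber_indices) - strand_length))
    return fiber_indices[start:start + strand_length]
```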
To further support the interactive exploration, we also adjust the colors of the elements to be in focus next. For example, the subset of a chromosome next in focus is rendered in a slightly lighter color than the remaining elements of the same level. This approach provides a natural visual indication of the current focus point and guides the view of the users as they explore the scales.
To achieve the scale-constrained camera navigation, we measure the distance to a transition or interaction target point in the data sequence. We measure this distance as the span between the camera location and the position of the target point in its currently active scale. This distance then informs the setting of camera parameters and SSAO passes. After the user has selected a new focus point, the current distance to the camera changes, so we also adjust the global scale parameter that we use to control the scale navigation.
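A minimal sketch of this distance-to-scale mapping is given below, assuming a logarithmic mapping between two distance bounds that roughly correspond to the nucleus diameter and the base-pair width; the actual parametrization differs and also feeds the camera settings and SSAO passes.

```python
import math

def global_scale_parameter(camera_pos, target_pos, d_far=6e-6, d_near=2e-9):
    """Map the distance between camera and the current target point to a
    normalized scale parameter in [0, 1] on a logarithmic axis
    (0 = coarsest semantic scale, 1 = finest). The distance bounds are
    illustrative defaults only."""
    d = math.dist(camera_pos, target_pos)
    d = min(max(d, d_near), d_far)
    return (math.log10(d_far) - math.log10(d)) / (math.log10(d_far) - math.log10(d_near))
```

The resulting value in [0, 1] can then be used to pick the currently active scale pair and the transition parameter within it.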
DISCUSSION
Based on our design and implementation we now compare our results with existing visual examples, examine potential application domains, discuss limitations, and suggest several directions for improvement.
Comparison to traditionally created illustrations
A ground truth can only be measured to a certain degree, which makes a comparison with ScaleTrotter difficult. One reason is that no static genetic material exists in living cells. Moreover, microscopy is limited at the scale levels with which we are dealing. We thus have to rely on the data from the domain experts, with its own limitations (Sect. 5.4), as the input for creating our visualization, and we compare our results with existing illustrations in both static and animated form.
We first look at traditional static multi-scale illustrations as shown in Fig. 10; other illustrations similar to the one in Fig. 10(a) can be found in Annunziato's [3] and Ou et al.'s [45] works. In Fig. 10(a), the illustrators perform the scale transition along a 1D path, supported by the DNA's extreme length. We do not take this route as we employ the actual positions of elements from the involved datasets. This means that we could also apply our approach to biologic agents such as proteins that do not have an extremely long extent. Moreover, the static illustrations have some continuous scale transitions, e. g., the detail of the DNA molecule itself or the sizes of the nucleosomes. Some transitions in the multi-scale representation, however, are more sudden such as the transition from the DNA to nucleosomes, the transition from the nucleosomes to the condensed chromatin fiber, and the transition from that fiber to the 700 nm wide chromosome leg. Fig. 10(b) has only one such transition. The changeover happens directly between the nucleosome level and the mitotic chromosome. We show transitions between scales interactively using our visual scale embedding. The static illustrations in Fig. 10 just use the continuous nature of the DNA to evoke the same hierarchical layering of the different scales. The benefit of the spatial scale transitions in the static illustrations is that a single view can depict all scale levels, while our temporally-controlled scale transitions allow us to interactively explore any point in both the genome's spatial layout and in scale. Moreover, we also show the actual physical configuration of every scale according to the datasets that genome researchers provide, representing the current state of knowledge.
We also compare our results to animated illustrations, as exemplified by the "Powers of Ten" video [11] and a video on the composition of the genome created by Drew Berry et al. in 2003. The "Powers of Ten" video only shows the fibers of the DNA double helix curled into loops, a notion that has since been revised by the domain experts. Nonetheless, the video still shows a continuous transition in scale through blending of aligned representations from the fibers, to the nucleotides, to the atoms. It even suggests that we should continue the scale journey beyond the atoms. The second video, in contrast, shows the scale transitions starting from the DNA double helix and zooming out. The scale transitions are depicted as "physical" assembly processes, e. g., going from the double helix to nucleosomes, and from nucleosomes to fibers. Furthermore, shifts of focus or hard cuts are applied as well. The process of assembling an elongated structure through curling up can nicely illustrate the composition of the low-level genome structures, but only if no constraints on the rest of the fibrous structure exist. In our interactive illustration, we have such constraints: we can zoom out and in, and the locations of all elements are restricted by the given data. Moreover, such an assembly animation would potentially create a lot of motion due to the dense nature of the genome and, thus, visual noise that might impact the overall visualization. On the other hand, both videos convey the message that no element is static at the small scales. We do not yet show such dynamics in our visualization.
Both static and dynamic traditional visualizations depict the composition of the genome in its mitotic stage. The chromosomes only assume this stage, however, when the cell divides. Our visualization is the first that provides the user with an interactive exploration with smooth scale transitions of the genome in its interphase state, the state in which the chromosomes exist most of the time.
Feedback from illustrators and application scenarios
To discuss the creation of illustrations for laypeople with ScaleTrotter, we asked two professional illustrators who work on biological and medical visualizations for feedback. One of them has ten years of experience as a professional scientific illustrator and animator with a focus on biological and medical illustrations for science education. The other is a certified illustrator with two years of experience and a PhD in Bioengineering. We conducted a semi-structured interview (approx. 60 min) with them to get critical feedback [24,27] on our illustrative multi-scale visualization and to learn how our approach compares to the way they deal with multi-scale depictions in their daily work.
They immediately saw our ScaleTrotter approach to showing genome scale transitions as one part of a larger story to be told. They missed the additional support necessary for telling such a story, for example the contextual representation of a cell (for which we could investigate cellVIEW [30]) and, in general, audio support and narration. Although they had to judge our results in isolation from other storytelling methods, they saw the benefit of an interactive tool for creating narratives that goes beyond the possibilities of their manual approaches.
We also received a number of specific suggestions for improvement. In particular, they recommended different settings for when to make certain transitions in scale space. The illustrators also suggested adding "contrast" to those parts that will be in focus next as we zoom in, a feature we then added and describe in Sect. 4.3.
According to them, our concept of using visual scale embedding to transition between different scale representations has not yet been used in animated illustrations, yet the general concept of showing detail together with context, as illustrated in Fig. 3, is known. Instead of using visual scale embedding, they use techniques discussed in Sect. 5.1, or they employ cut-outs with rectangles or boxes to indicate the transition between scales. They see our visual scale embedding as a clear innovation: "to have a smooth transition between the scales is really cool." Moreover, they were excited about the ability to freely select a point of focus and interactively zoom into the corresponding detail. They said that our approach would bring them closer to their vision of a "molecular Maya" because it is "essential to have a scientifically correct reference." Connected to this point, we also discussed the application of ScaleTrotter in genome research. Due to their close collaborations with domain experts, they emphasized that the combination of genomic sequence data with some type of spatial information will be essential for future research. A combination of our visualization, which is based on the domain's state-of-the-art spatial data, with existing tools could allow genome scientists to better understand the function of genes and certain genetic diseases.
In summary, they are excited about the visual results and see application possibilities both in teaching and in data exploration.
Feedback from genome scientists
As a result of our conversation, the illustrators also connected us to a neurobiologist who investigates 3D genome structures at the single-cell level, e. g., by comparing cancerous with healthy cells. His group is interested in interactions between different regions of the genome. Although the spatial characteristics of the data are of key importance to them, they still use 2D tools. The scientist confirmed that a combination of their 2D representations with our interactive 3D-spatial multi-scale method would considerably help them to understand the interaction of sequentially distant but spatially close parts of the genome, processes such as gene expression, and DNA-protein interactions.
We also presented our approach to a 52-year-old expert in molecular biology with 22 years of post-PhD experience. He specializes in genetics and studies the composition, architecture, and function of SMC complexes. We conducted a semi-structured interview (approx. 60 minutes) to discuss our results. He stated that transitions between several scales are definitely useful for analyzing the 3D genome. He was satisfied with the coarser chromosome and loci representations, but had suggestions for improving the nucleosome and atomic scales. In particular, he noted the lack of proteins such as histones. He compared our visualization with existing electron microscopy images [44,45] and suggested that a more familiar filament-like representation could increase understandability. In his opinion, some scale transitions happened too early (e. g., the transition from chromosome-colored to nucleotide-colored nucleotides). We adjusted our parametrization accordingly. In addition, based on his feedback, we added an interactive scale offset control that now allows users to adjust the scale representation for a given zoom level. This offset only adapts the chosen representation according to Table 1, while leaving the size on the screen unchanged. The expert also suggested building on the current approach and extending it with more scales, which we plan to do in the future. Like the neurobiologist, the molecular biologist agrees that an integration with existing 2D examination tools has great potential to improve the workflow in a future visualization system.
Limitations
There are several limitations of our work, the first set relating to the source data. While we used actual data generated by domain experts based on the latest understanding of the genome, it is largely generated using simulations and not actual measurements (Sect. 4.1). We do not use actual sequence data at the lowest scales. Moreover, our specific dataset only contains 45 chromosomes instead of the correct number of 46. We also noticed that the dataset contains 23,958,240 nucleosome positions, yet when we multiply this count by the 146 base pairs per nucleosome we arrive at ≈ 3.5 Gb for the entire genome, even without including the linker base pairs in this calculation and despite covering only 45 chromosomes. Ultimately, better data is required. The overall nature of the visualization and the scale transitions would not be affected by such corrected data, and we believe that the data quality is already sufficient for general illustration and teaching purposes.
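A quick back-of-the-envelope check of this discrepancy, using only the numbers stated above:

```python
# Base-pair count implied by the dataset (linker base pairs ignored):
nucleosomes = 23_958_240
bp_per_nucleosome = 146
total_bp = nucleosomes * bp_per_nucleosome
print(total_bp)            # 3,497,903,040 -> about 3.5 Gb
print(total_bp / 3.2e9)    # roughly 1.09x the expected 3.2 Gb, despite only 45 chromosomes
```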
Another limitation is the huge size of the data. Loading all positions for the interactive visualization takes approx. two minutes, but we have not yet explored the feasibility of also loading actual sequence data. We could investigate loading data on-demand for future interactive applications, in particular in the context of tools for domain experts. For such applications we would also likely have to reconsider our design decision to leave out data in the detailed scales, as these may interact with the parts that we do show. We would need to develop a space-dependent look-up to identify parts from the entire genome that potentially interact with the presently shown focus sequences. Another limitation relates to the selection of detail to zoom into. At the moment, we determine the focus interactively based on the currently depicted scale level. This makes it, for example, difficult to select a chromosome deep inside the nucleus or fibers deep inside a chromosome. A combination with an abstract data representation, for example with a domain-expert sequencing tool, would address this problem.
Future work
Beyond addressing the mentioned issues, we would like to pursue a number of additional ideas in the future. A next step towards the adoption of our approach in biological or medical research is to build an analytical system on top of ScaleTrotter that allows us to query various scientifically relevant aspects. As noted in Sect. 5.2, one scenario is spatial queries that determine whether two genes are located in close spatial vicinity in case they are somehow related. Visualization systems developed in the past for analyzing gene expression can also benefit from the structural features that ScaleTrotter offers.
Extending to other subject matters, we will also have to investigate scale transitions where the scales cannot be represented with sequences of blobs. For example, can we also use linear or volumetric representations and extend our visual scale embedding to such structures? Alternatively, can we find more effective scale transitions, such as geometry-based ones (e. g., [36,38,57]), in addition to the visual embedding and the color changes we use so far? We have to avoid over-using the visual variable color, which is a scarce resource. Many elements could use color at different scales, so dynamic methods for color management will be essential.
Another direction for future research is generative methods for completing the basic skeletal genetic information on the fly. Currently we use data that are based on positions of nucleotides, while higher-level structures are constructed from these. Information about nucleotide orientations and their connectivity is missing, as well as the specific sequence, which is currently not derived from real data. ScaleTrotter does not contain the higher-level structures and protein complexes that hold the genome together and that would need to be modeled with strict scientific accuracy in mind. An algorithmic generation of such models from Hi-C data would allow biologists to adjust the model parameters according to their mental model and would give them a system for generating new hypotheses. Such a generative approach would also integrate well with the task of adding processes that involve the DNA, such as condensation, replication, and cell division.
A related fundamental question is how to visualize the dynamic characteristics of the molecular world. It would be highly useful to portray the transition between the interphase and the mitotic form of the DNA, to support visualizing the dynamic processes of reading out the DNA, and to even show the Brownian motion of the atoms.
Finally, our visualization relies on deliberate decisions about how to parameterize the scale transitions. While we used our best judgment to adjust the settings, the resulting parameterization may not be universally valid. An interactive illustration for teaching may need parameters different from those in a tool for domain experts. It would thus be helpful to derive templates that could be used in different application contexts.
CONCLUSION
ScaleTrotter constitutes one step towards understanding the mysteries of human genetics-not only for a small group of scientists, but also for larger audiences. It is driven by our desire as humans to understand "was die Welt im Innersten zusammenhält" [what "binds the world, and guides its course"] [18]. We believe that our visualization has the potential to serve as the basis of teaching material about the genome and part of the inner workings of biologic processes. It is intended both for the general public and as a foundation for future visual data exploration for genome researchers. In both cases we support, for the first time, an interactive and seamless exploration of the full range of scales-from the nucleus to the atoms of the DNA.
From our discussion it became clear that such multi-scale visualizations need to be created in a fundamentally different way compared to the excellent examples from the astronomy domain. In this paper we thus distinguish between the positive-exponent scale-space of astronomy (looking inside-out) and the negative-exponent scale-space of genome data (looking outside-in). For the latter we provide a multi-scale visualization approach based on visual scale embedding. We also discuss an example of how the controlled use of abstraction in (illustrative) visualization allows us to employ a space-efficient superimposition of visual representations, as opposed to juxtaposed views [58], which are ubiquitous in visualization today.
A remaining question is whether the tipping point between the different types of scale spaces really lies at approximately one meter (1 · 10^0 m) or whether we should use a different point in scale space such as 1 mm. The answer to this question requires further studies on how to illustrate multi-scale subject matter. An example is to generalize our approach to other biologic phenomena such as mitotic DNA or microtubules, as suggested in Sect. 5.5. If we continue our journey down the negative-exponent scale-space, we may discover a third scale-space region: models of atoms and subatomic particles seem to again comprise much empty space, similar to the situation in the positive-exponent scale-space. A bigger vision of this work thus is to completely replicate the "Powers of Ten" video, covering the 36 orders of magnitude from the size of the observable universe to sub-atomic particles, but with an interactive tool and based on current data and visualizations. | 9,576 |
1907.12352 | 2966538158 | We present ScaleTrotter, a conceptual framework for an interactive, multi-scale visualization of biological mesoscale data and, specifically, genome data. ScaleTrotter allows viewers to smoothly transition from the nucleus of a cell to the atomistic composition of the DNA, while bridging several orders of magnitude in scale. The challenges in creating an interactive visualization of genome data are fundamentally different in several ways from those in other domains like astronomy that require a multi-scale representation as well. First, genome data has intertwined scale levels---the DNA is an extremely long, connected molecule that manifests itself at all scale levels. Second, elements of the DNA do not disappear as one zooms out---instead the scale levels at which they are observed group these elements differently. Third, we have detailed information and thus geometry for the entire dataset and for all scale levels, posing a challenge for interactive visual exploration. Finally, the conceptual scale levels for genome data are close in scale space, requiring us to find ways to visually embed a smaller scale into a coarser one. We address these challenges by creating a new multi-scale visualization concept. We use a scale-dependent camera model that controls the visual embedding of the scales into their respective parents, the rendering of a subset of the scale hierarchy, and the location, size, and scope of the view. In traversing the scales, ScaleTrotter is roaming between 2D and 3D visual representations that are depicted in integrated visuals. We discuss, specifically, how this form of multi-scale visualization follows from the specific characteristics of the genome data and describe its implementation. Finally, we discuss the implications of our work to the general illustrative depiction of multi-scale data. | Also important from Viola and Isenberg's discussion @cite_39 is their concept of axes of abstraction, which are traversed in scale space. We also connect the DNA representations at different scales, facilitating a smooth transition between them. In creating this axis of abstraction, we focus primarily on changes of Viola and Isenberg's geometric axis, but without a geometric interpolation of different representations. Instead, we use visual embedding of one scale in another one. | {
"abstract": [
"We explore the concept of abstraction as it is used in visualization, with the ultimate goal of understanding and formally defining it. Researchers so far have used the concept of abstraction largely by intuition without a precise meaning. This lack of specificity left questions on the characteristics of abstraction, its variants, its control, or its ultimate potential for visualization and, in particular, illustrative visualization mostly unanswered. In this paper we thus provide a first formalization of the abstraction concept and discuss how this formalization affects the application of abstraction in a variety of visualization scenarios. Based on this discussion, we derive a number of open questions still waiting to be answered, thus formulating a research agenda for the use of abstraction for the visual representation and exploration of data. This paper, therefore, is intended to provide a contribution to the discussion of the theoretical foundations of our field, rather than attempting to provide a completed and final theory."
],
"cite_N": [
"@cite_39"
],
"mid": [
"2751478023"
]
} | ScaleTrotter: Illustrative Visual Travels Across Negative Scales | The recent advances in visualization have allowed us to depict and understand many aspects of the structure and composition of the living cell. For example, cellVIEW [30] provides detailed visuals for viewers to understand the composition of a cell in an interactive exploration tool and Lindow et al. [35] created an impressive interactive illustrative depiction of RNA and DNA structures. Most such visualizations only provide a depiction of components/processes at a single scale level. Living cells, however, comprise structures that function at scales that range from the very small to the very large. The best example is DNA, which is divided and packed into visible chromosomes during mitosis and meiosis, while being read out at the scale level of base pairs. In between these scale levels, the DNA's structures are typically only known to structural biologists, while beyond the base pairs their atomic composition has implications for specific DNA properties.
The amount of information stored in the DNA is enormous. The human genome consists of roughly 3.2 Gb (giga base pairs) [1,52]. This information would fill 539,265 pages of the TVCG template, which would stack up to approx. 27 m. Yet, the whole information is contained inside the cell's nucleus with only approx. 6 µm diameter [1, page 179]. Similar to a coiled telephone cord, the DNA creates a compact structure that contains the long strand of genetic information. This organization results in several levels of perceivable structures (as shown in Fig. 1), which have been studied and visualized separately in the past. The problem thus arises of how to comprehend and explore the whole scope of this massive amount of multi-scale information. If we teach students or the general public about the relationships between the two extremes, for instance, we have to ensure that they understand how the different scales work together. Domain experts, in contrast, deal with questions such as whether correlations exist between the spatial vicinity of bases and genetic disorders. It may manifest itself through two genetically different characteristics that are far from each other in sequence but close to each other in the DNA's 3D configuration. For experts we thus want to ensure that they can access the information at any of the scales. They should also be able to smoothly navigate the information space. The fundamental problem is thus to understand how we can enable a smooth and intuitive navigation in space and scale with seamless transitions. For this purpose we derive specific requirements of multiscale domains and data with negative scale exponents and analyze how the constraints affect their representations. Based on our analysis we introduce ScaleTrotter, an interactive multi-scale visualization of the human DNA, ranging from the level of the interphase chromosomes 1 in the 6 µm nucleus to the level of base pairs (≈ 2 nm) resp. atoms (≈ 0.12 nm). We cover a scale range of 4-5 orders of magnitude in spatial size, and allow viewers to interactively explore as well as smoothly interpolate between the scales. We focus specifically on the visual transition between neighboring scales, so that viewers can mentally connect them and, ultimately, understand how the DNA is constructed. With our work we go beyond existing multi-scale visualizations due to the DNA's specific character. Unlike multiscale data from other fields, the DNA physically connects conceptual elements across all the scales (like the phone cord) so it never disappears from view. We also need to show detailed data everywhere and, for all stages, the scales are close together in scale space.
We base our implementation on multi-scale data from genome research about the positions of DNA building blocks, which are given at a variety of different scales. We then transition between these levels using what we call visual embedding. It maintains the context of larger-scale elements while adding details from the next-lower scale. We combine this process with scale-dependent rendering that only shows relevant amounts of data on the screen. Finally, we support interactive data exploration through scale-dependent view manipulations, interactive focus specification, and visual highlighting of the zoom focus.
In summary, our contributions are as follows. First, we analyze the unique requirements of multi-scale representations of genome data and show that they cannot be met with existing approaches. Second, we demonstrate how to achieve smooth scale transitions for genome data through visual embedding of one scale within another based on measured and simulated data. We further limit the massive data size with a scale-dependent camera model to avoid visual clutter and to facilitate interactive exploration. Third, we describe the implementation of this approach and compare our results to existing illustrations. Finally, we report on feedback from professional illustrators and domain experts. It indicates that our interactive visualization can serve as a fundamental building block for tools that target both domain experts and laypeople.
Abstraction in illustrative visualization
On a high level, our work relates to the use of abstraction in creating effective visual representations, i. e., the use of visual abstraction. Viola and Isenberg [58] describe this concept as a process, which removes detail when transitioning from a lower-level to a higher-level representation, yet which preserves the overall concept. While they attribute the removed detail to "natural variation, noise, etc." in the investigated multi-scale representation we actually deal with a different data scenario: DNA assemblies at different levels of scale. We thus technically do not deal with a "concept-preserving transformation" [58], but with a process in which the underlying representational concept (or parts of it) can change. Nonetheless, their view of abstraction as an interactive process that allows viewers to relate one representation (at one scale) to another one (at a different scale) is essential to our work.
Also important from Viola and Isenberg's discussion [58] is their concept of axes of abstraction, which are traversed in scale space. We also connect the DNA representations at different scales, facilitating a smooth transition between them. In creating this axis of abstraction, we focus primarily on changes of Viola and Isenberg's geometric axis, but without a geometric interpolation of different representations. Instead, we use visual embedding of one scale in another one.
Scale-dependent molecular and genome visualization
We investigate multi-scale representations of the DNA, which relates to work in bio-molecular visualization. Several surveys have summarized work in this field [2,28,29,39], so below we only point out selected approaches. In addition, a large body of work by professional illustrators on mesoscale cell depiction inspired us such as visualizing the human chromosome down to the detail of individual parts of the molecule [19].
In general, as one navigates through large-scale 3D scenes, the underlying subject matter is intrinsically complex and requires appropriate interaction to aid intellection [17]. The inspection of individual parts is challenging, in particular if the viewer is too far away to appreciate its visual details. Yet large, detailed datasets or procedural approaches are essential to create believable representations. To generate not only efficient but effective visualizations, we thus need to remove detail in Viola and Isenberg's [58] visual abstraction sense. This allows us to render at interactive rates as well as to see the intended structures, which would otherwise be hidden due to cluttered views. Consequently, even most single-scale small-scale representations use some type of multiscale approach and with it introduce abstraction. Generally we can distinguish three fundamental techniques: multi-scale representations by leaving out detail of a single data source, multi-scale techniques that actively represent preserved features at different scales, and multi-scale approaches that can also transit between representations of different scales. We discuss approaches for these three categories next.
Multi-scale visualization by means of leaving out detail
An example of leaving out details in a multi-scale context is Parulek et al.'s [46] continuous levels-of-detail for large molecules and, in particular, proteins. They reduced detail of far-away structures for faster rendering. They used three different conceptual distances to create increasingly coarser depictions such as those used in traditional molecular illustration. For distant parts of a molecule, in particular, they seamlessly transition to super atoms using implicit surface blending.
The cellVIEW framework [30] also employs a similar level-of-detail (LOD) principle using advanced GPU methods for proteins in the HIV. It also removes detail to depict internal structures, and procedurally generates the needed elements. In mesoscopic visualization, Lindow et al. [34] applied grid-based volume rendering to sphere raycasting to show large numbers of atoms. They bridged five orders of magnitude in length scale by exploiting the reoccurrence of molecular sub-entities. Finally, Falk et al. [13] proposed out-of-core optimizations for visualizing large-scale whole-cell simulations. Their approach extended Lindow et al.'s [34] work and provides a GPU ray marching for triangle rendering to depict pre-computed molecular surfaces.
Approaches in this category thus create a "glimpse" of multi-scale representations by removing detail and adjusting the remaining elements accordingly. We use this principle, in fact, in an extreme form to handle the multi-scale character of the chromosome data: we completely remove the detail of a large part of the dataset. If we showed all small details, an interactive rendering would be impossible and the details would distract from the depicted elements. Nonetheless, this approach typically only uses a single level of data and does not incorporate different conceptual levels of scale.
Different shape representations by conceptual scale
The encoding of structures through different conceptual scales is often essential. Lindow et al. [35], for instance, described different rendering methods of nucleic acids (from 3D tertiary structures to linear 2D and graph models) with a focus on visual quality and performance. They demonstrate how the same data can be used to create both 3D-spatial representations and abstract 2D mappings of genome data. This produces three scale levels: the actual sequence, the helical form in 3D, and the spatial assembly of this form together with proteins. Waltemate et al. [59] represented the mesoscopic level with meshes or microscopic images, while showing detail through molecule assemblies. To transition between the mesoscopic and the molecular level, they used a membrane mapping to allow users to inspect and resolve areas on demand. A magnifier tool overlays the high-scale background with lower-scale details. This approach relates to our transition scheme, as we depict the higher scale as background and the lower scale as foreground. A texture-based molecule rendering has been proposed by Bajaj et al. [6]. Their method reduces the visual clutter at higher levels by incorporating a biochemically sensitive LOD hierarchy.
Tools used by domain experts also visualize different conceptual genome scales. To the best of our knowledge, the first tool to visualize the 3D human genome has been Genome3D [4]. It allows researchers to select a discrete scale level and then load data specifically for this level. The more recent GMOL tool [43] shows 3D genome data captured from Hi-C data [56]. GMOL uses a six-scale system similar to the one that we employ and we derived our data from theirs. They only support a discrete "toggling between scales" [43], while we provide a smooth scale transition. Moreover, we add further semantic scale levels at the lower end to connect base locations and their atomistic compositions.
Conceptual scale representations with smooth transition
A smooth transition between scales has previously been recognized as important. For instance, van der Zwan et al. [57] carried out structural abstraction with seamless transitions for molecules by continuously adjusting the 3D geometry of the data. Miao et al. [38] substantially extended this concept and applied it to DNA nanostructure visualization. They used ten semantic scales and defined smooth transitions between them. This process allows scientists to interact at the appropriate scale level. Later, Miao et al. [37] combined this approach with three dimensional embeddings. In addition to temporal changes of scale, Lueks et al. [36] explored a seamless and continuous spatial multiscale transition by geometry adjustment, controlled by the location in image or in object space. Finally, Kerpedjiev et al. [25] demonstrated multi-scale navigation of 2D genome maps and 1D genome tracks employing a smooth transition for the user to zoom into views.
All these approaches only transition between nearby scale levels and manipulate the depicted data geometry, which limits applicability. These methods, however, do not work in domains where a geometry transition cannot be defined. Further, they are limited in domains where massive multi-scale transitions are needed due to the large amount of geometry that is required for the detailed scale levels. We face these issues in our work and resolve them using visual embeddings instead of geometry transitions as well as a scale-dependent camera concept. Before detailing our approach, however, we first discuss general multiscale visualization techniques from other visualization domains.
General multi-scale data visualization
The vast differences in spatial scale of our world in general have fascinated people for a long time. Illustrators have created explanations of these scale differences in the form of images (e. g., [60] and [47, Fig. 1]), videos (e. g., the seminal "Powers of Ten" video [11] from 1977), and newer interactive experiences (e. g., [15]). Most illustrators use a smart composition of images blended such that the changes are (almost) unnoticeable, while some use clever perspectives to portray the differences in scale. These inspirations have prompted researchers in visualization to create similar multi-scale experiences, based on real datasets.
The classification from Sect. 2.2 for molecular and genome visualization applies here as well. Everts et al. [12], e. g., removed detail from brain fiber tracts to observe the characteristics of the data at a higher scale. Hsu et al. [22] defined various cameras for a dataset, each showing a different level of detail. They then used image masks and camera ray interpolation to create smooth spatial scale transitions that show the data's multi-scale character. Next, Glueck et al.'s [16] approach exemplifies the change of shape representations by conceptual scale by smoothly changing a multi-scale coordinate grid and position pegs to aid depth perception and multi-scale navigation of 3D scenes. They simply remove detail for scales that no longer contribute much to the visualization. In their accompanying video, interestingly, they limited the detail for each scale to only the focus point of the scale transition to maintain interactive frame rates. Another example of this category is geographic multi-scale representations such as online maps (e. g., Google or Bing maps), which contain multiple scale representations but typically toggle between them as the user zooms in or out. Finally, virtual globes are an example of conceptual scale representations with smooth transitions. They use smooth texture transitions to show an increasing level of detail as one zooms in. Another example is Mohammed et al.'s [41] Abstractocyte tool, which depicts differently abstracted astrocytes and neurons. It allows users to smoothly transition between the cell-type abstractions using both geometry transformations and blending. We extend the latter to our visual embedding transition.
Also these approaches only cover a relatively small scale range. Even online map services cover less than approx. six orders of magnitude. Besides the field of bio-molecular and chemistry research discussed in Sect. 2.2, in fact, only astronomy deals with large scale differences. Here, structures range from celestial bodies (≥ ≈ 10^2 m) to the size of the observable universe (1.3 · 10^26 m), in total 24 orders of magnitude.
To depict such data, visualization researchers have created explicit multi-scale rendering architectures. Schatz et al. [51], for example, combined the rendering of overview representations of larger structures with the detailed depiction of parts that are close to the camera or have high importance. To truly traverse the large range of scales of the universe, however, several datasets that cover different orders of size and detail magnitude have to be combined into a dedicated data rendering and exploration framework. The first such framework was introduced by Fu et al. [14,21] who used scale-independent modeling and rendering and power-scaled coordinates to produce scale-insensitive visualizations. This approach essentially treats, models, and visualizes each scale separately and then blends scales in and out as they appear or disappear. The different scales of entities in the universe can also be modeled using a ScaleGraph [26], which facilitates scale-independent rendering using scene graphs. Axelsson et al. [5] later extended this concept to the Dynamic Scene Graph, which, in the OpenSpace system [8], supports several high-detail locations and stereoscopic rendering. The Dynamic Scene Graph uses a dynamic camera node attachment to visualize scenes of varying scale and with high floating point precision.
With genome data we face similar problems concerning scale-dependent data and the need to traverse a range of scales. We also face the challenge that our conceptual scales are packed much more tightly in scale space, as we explain next. This leads to fundamental differences between both application domains.
MULTI-SCALE GENOME VISUALIZATION
Visualizing the nuclear human genome, from the nucleus that contains all chromosomal genetic material down to the very atoms that make up the DNA, is challenging due to the inherent organization of the DNA in tubular arrangements. DNA in its B-form is only 2 nm wide [3], so in its fibrous form or at more detailed scales it would be too thin to be perceived. This situation is further aggravated by the dense organization of the DNA and the structural hierarchy that bridges several scales. The previously discussed methods do not deal with such a combination of structural characteristics. Below we thus discuss the challenges that arise from the properties of these biological entities and how we address them by developing our new approach that smoothly transitions between views of the genome at its various scales.
Challenges of interactive multiscale DNA visualization
Domain scientists who sequence, investigate, and generally work with genome data use a series of conceptual levels for analysis and visualization [43]: the genome scale (containing all approx. 3.2 Gb of the human genome), the chromosome scale (50-100 Mb), the loci scale (in the order of Mb), the fiber scale (in the order of Kb), the nucleosome scale (146 b), and the nucleotide scale (i. e., 1 b), in addition to the atomistic composition of the nucleotides. These seven scales cover a range of approx. 4-5 orders of magnitude in physical size. In astronomy or astrophysics, in contrast, researchers deal with a similar number of scales: approx. 7-8 conceptual scales of objects, yet over a range of some 24 orders of magnitude of physical size. A fundamental difference between multi-scale visualizations in the two domains is, therefore, the scale density of the conceptual levels that need to be depicted.
Multi-scale astronomy visualization [5,14,21,26] deals with positive-exponent scale-space (Fig. 2, top), where two neighboring scales are relatively far apart in scale space. For example, planets are much smaller than stars, stars are much smaller than galaxies, galaxies are much smaller than galaxy clusters, etc. On average, two scales have a distance of three or more orders of magnitude in physical space. The consequence of this high distance in scale space between neighboring conceptual levels is that, as one zooms out, elements from one scale typically all but disappear before the elements on the next conceptual level become visible. This aspect is used in creating multi-scale astronomy visualizations. For example, Axelsson et al.'s Dynamic Scene Graph [5] uses spheres of influence to control the visibility range of objects from a given subtree of the scene graph. In fact, the low scale density of the conceptual levels made the seamless animation of the astronomy/astrophysics section in the "Powers of Ten" video [11] from 1977 possible, at a time before computer graphics could be used to create such animations. Eames and Eames [11] simply and effectively blended smoothly between consecutive images that depicted the respective scales. For the cell/genome part, however, they use sudden transitions between conceptual scales without spatial continuity, and they also leave out several of the conceptual scales that scientists use today, such as the chromosomes and the nucleosomes.
Fig. 2. Multi-scale visualization in astronomy vs. genomics. The size difference between celestial bodies is extremely large (e. g., sun vs. earth: the earth is almost invisible at that scale). The distance between earth and moon is also large, compared to their sizes. In the genome, we have similar relative size differences, yet molecules are densely packed, as exemplified by the two base pairs in the DNA double helix.
The reason for this problem of smoothly transitioning between scales in genome visualization, i. e., in negative-exponent scale-space (Fig. 2, bottom), is that the conceptual levels of a multi-scale visualization are much closer to each other in scale. In contrast to astronomy's positive-exponent scale-space, there is only an average scale distance of about 0.5-0.6 orders of magnitude of physical space between two conceptual scales. Elements on one conceptual scale are thus still visible when elements from the next conceptual scale begin to appear. The scales for genome visualizations are thus much denser compared to astronomy's average scale distance of three orders of magnitude.
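A rough calculation illustrates this scale density, using the nucleus diameter (≈ 6 µm) and the base-pair width (≈ 2 nm) as endpoints of the seven conceptual scales listed above; the exact per-pair distances vary, so this is only an average.

```python
import math

largest, smallest = 6e-6, 2e-9               # nucleus diameter vs. base-pair width, in m
orders = math.log10(largest / smallest)      # ~3.48 orders of magnitude overall
transitions = 7 - 1                          # seven conceptual scales -> six transitions
print(orders / transitions)                  # ~0.58 orders of magnitude per scale step
```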
Moreover, in the genome the building blocks are physically connected in space and across conceptual scales, except for the genome and chromosome levels. From the atoms to the chromosome scale, we have a single connected component. It is assembled in different geometric ways, depending on the conceptual scale at which we choose to observe. For example, the sequence of all nucleotides (base pairs) of the 46 chromosomes in a human cell would stretch for 2 m, with each base pair only being 2 nm wide [3], while a complete set of chromosomes fits into the 6 µm wide nucleus. Nonetheless, in all scales between the sequence of nucleotides and a chromosome we deal with the same, physically connected structure. In astronomy, instead, the physical space between elements within a conceptual scale is mostly empty and elements are physically not connected-elements are only connected by proximity (and gravity), not by visible links.
The large inter-scale distance and physical connectedness, naturally, also create the problem of how to visualize the relationship between two conceptual scale levels. The mentioned multi-scale visualization systems from astronomy [5,14,21,26] use animation for this purpose, sometimes adding invisible and intangible elements such as orbits of celestial bodies. In general multi-scale visualization approaches, multiscale coordinate grids [16] can assist the perception of scale-level relationships. These approaches only work if the respective elements are independent of each other and can fade visually as one zooms out, for example, into the next-higher conceptual scale. The connected composition of the genome does make these approaches impossible. In the genome, in addition, we have a complete model for the details in each conceptual level, derived from data that are averages of measurements from many experiments on a single organism type. We are thus able to and need to show visual detail everywhere-as opposed to only close to a single point like planet Earth in astronomy.
Ultimately, all these points lead to two fundamental challenges for us to solve. The first (discussed in Sect. 3.2 and 3.3) is how to visually create effective transitions between conceptual scales. The transitional scales shall show the containment and relationship character of the data even in still images and seamlessly allow us to travel across the scales as we are interacting. They must deal with the continuous nature of the depicted elements, which are physically connected in space and across scales. The second challenge is a computational one. Positional information of all atoms from the entire genome would not fit into GPU memory and will prohibit interactive rendering performance. We discuss how to overcome these computational issues in Sect. 4, along with the implementation of the visual design from Sect. 3.2 and 3.3.
Visual embedding of conceptual scales
Existing multi-scale visualizations of DNA [36,38,57] or other data [41] often use geometry manipulations to transition from one scale to the next. For the full genome, however, this approach would create too much detail to be useful and would require too many elements to be rendered. Moreover, two consecutive scales may differ significantly in structure and organization. A nucleosome, e. g., consists of nucleotides in double-helix form, wrapped around a histone protein. We thus need appropriate abstracted representations for the whole set of geometry in a given scale that best depict the scale-dependent structure and still allow us to create smooth transitions between scales.
Nonetheless, the mentioned geometry-based multi-scale transformations still serve as an important inspiration to our work. They often provide intermediate representations that may not be entirely accurate, but show how one scale relates to another one, even in a still image. Viewers can appreciate the properties of both involved scale levels, such as in Miao et al.'s [38] transition between nucleotides and strands. Specifically, we take inspiration from traditional illustration where a related visual metaphor has been used before. As exemplified by Fig. 3, illustrators sometimes use an abstracted representation of a coarser scale to aid viewers with understanding the overall composition as well as the spatial location of the finer details. This embedding of one representation scale into the next is similar to combining several layers of visual information-or super-imposition [42, pp. 288 ff]. It is a common approach, for example, in creating maps. In visualization, this principle has been used in the past (e. g., [10,23,49,50]), typically applying some form of transparency to be able to perceive the different layers. Transparency, however, can easily lead to visualizations that are difficult to understand [9]. Simple outlines to indicate the coarser shape or context can also be useful [54]. In our case, even outlines easily lead to clutter due to the immense amount of detail in the genome data. Moreover, we are not interested in showing that some elements are spatially inside others, but rather that the elements are part of a higher-level structure, thus are conceptually contained.
We therefore propose visual scale embedding of the detailed scale into its coarser parent (see the illustration in Fig. 4). We render an abstracted version of the coarser scale (cf. Fig. 10): we completely flatten the context, as shown in Fig. 4 and inspired by previous multi-scale visualizations from structural biology [46]. Then we render the detailed geometry of the next-smaller scale on top of it. This concept adequately supports our goal of smooth scale transitions. A geometric representation of the coarser scale is first shown using 3D shading as long as it is still small on the screen, i. e., the camera is far away. It transitions to a flat, canvas-like representation when the camera comes closer and the detail in this scale is not enough anymore. We now add the representation of the more detailed scale on top, again using 3D shading, as shown for two scale transitions in Fig. 5. Our illustrative visualization concept combines the 2D aspect of the flattened coarser scale with the 3D detail of the finer scale. With it we make use of superimposed representations as argued by Viola and Isenberg [58], which are an alternative to spatially or temporally juxtaposed views. In our case, the increasingly abstract character of the rendering of the coarser scale (as we flatten it during zooming in) relates to its increasingly contextual and conceptual nature. Our approach thus relates to semantic zooming [48] because the context layer turns into a flat surface or canvas, irrespective of the underlying 3D structure and regardless of the specific chosen view direction. This type of scale zoom does not have the character of cut-away techniques as often used in tools to explore containment in 3D data (e. g., [31,33]). Instead, it is more akin to the semantic zooming in the visualization of abstract data, which is embedded in the 2D plane (e. g., [61]).
Multi-scale visual embedding and scale-dependent view
One visual embedding step connects two consecutive semantic scales. We now concatenate several steps to assemble the whole hierarchy (Fig. 6). This is conceptually straightforward because each scale by itself is shown using 3D shading. Nonetheless, as we get to finer and finer details, we face the two major problems mentioned at the start of Sect. 3.2: visual clutter and limitations of graphics processing. Both are caused by the tight scale space packing of the semantic levels in the genome. At detailed scales, a huge number of elements are potentially visible, e. g., 3.2 Gb at the level of nucleotides. To address this issue, we adjust the camera concept to the multi-scale nature of the data.
In previous multi-scale visualization frameworks [5,14,21,26], researchers have already used scale-constrained camera navigation. For example, they apply a scale-dependent camera speed to quickly cover the huge distances at coarse levels and provide fine control for detailed levels. In addition, they used a scale-dependent physical camera size or scope such that the depicted elements would appropriately fill the distance between near and far plane, or use depth buffer remapping [14] to cover a larger depth range. In astronomy and astrophysics, however, we do not face the problem of a lot of nearby elements in detailed levels of scale due to their loose scale-space packing. After all, if we look into the night sky we do not see much more than "a few" stars from our galactic neighborhood which, in a visualization system, can easily be represented by a texture map. Axelsson et al. [5], for example, simply attach their cameras to nodes within the scale level they want to depict.
For the visualization of genome data, however, we have to introduce an active control of the scale-dependent data-hierarchy size or scope as we would "physically see," for example, all nucleosomes or nucleotides up to the end of the nucleus. Aside from the resulting clutter, such complete genome views would also conceptually not be helpful because, due to the nature of the genome, the elements within a detailed scale largely repeat themselves. The visual goal should thus be to only show a relevant and scale-dependent subset of each hierarchy level. We thus limit the rendering scope to a subset of the hierarchy, depending on the chosen scale level and spatial focus point. The example in Fig. 7 depicts the nucleosome scale, where we only show a limited number of nucleosomes to the left and the right of the current focus point in the sequence, while the rest of the hierarchy has been blended out. We thereby extend the visual metaphor of the canvas, which we applied in the visual embedding, and use the white background of the frame buffer as a second, scale-dependent canvas, which limits the visibility of the detail. In contrast to photorealism that drives many multi-scale visualizations in astronomy, we are interested in appropriately abstracted representations through a scale-dependent removal of distant detail to support viewers in focusing on their current region of interest.
IMPLEMENTATION
Based on the conceptual design from Sect. 3 we now describe the implementation of our multi-scale genome visualization framework. We first describe the data sources and data hierarchy we use and then explain the shader-based realization of the scale transitions using a series of visual embedding steps, as well as some interaction considerations.
Data sources and data hierarchy
Researchers in genome studies have a high interest in understanding the relationships between the spatial structure at the various scale levels and the biological function of the DNA. Therefore, they have created a multi-scale dataset that allows them to look at the genome at different spatial scale levels [43]. This data was derived by Nowotny et al. [43] from a model of the human genome by Asbury et al. [4], which in turn was constructed based on various data sources and observed properties, following an approach of space-filling, fractal packing [7]. As a result, Nowotny et al. [43] obtained the positions of the nucleotides in space, and from these computed the positions of fibers, loci, and chromosomes (Fig. 8). They stored this data in their own Genome Scale System (GSS) format and also provided the positions of the nucleotides for one nucleosome (Fig. 8, bottom-right). Even with this additional data, we still have to procedurally generate further information as we visualize this data, such as the orientations of the nucleosomes (based on the locations of two consecutive nucleosomes) and the linker DNA strands of nucleotides connecting two consecutive nucleosomes.
This data provides positions at every scale level, without additional information about the actual sizes. Only at the nucleotide and atom scales are the sizes known. It was commonly thought that nucleosomes are tightly and homogeneously packed into 30 nm fibers, 120 nm chromonema, and 300-700 nm chromatids, but recent studies [45] disprove this organization and confirm the existence of flexible chains with diameters of 5-24 nm. Therefore, for all hierarchically organized scales coarser than the nucleosome, we do not have information about the specific shape that each data point represents. We use spheres with scale-adjusted sizes as rendering primitives, as they portray the chaining of elements according to the data-point sequence well. With respect to visualizing this multi-scale phenomenon, the data hierarchy (i. e., 100 nucleosomes = 1 fiber, 100 fibers = 1 locus, approx. 100 loci = 1 chromosome) is not the same as the hierarchy of semantic scales that a viewer sees. For example, the dataset contains a level that stores the chromosome positions, but if rendered we would only see one sphere for each chromosome (Fig. 9(b)). Such a depiction would not easily be recognized as representing a chromosome due to the lack of detail. The chromosomes by themselves only become apparent once we display them with more shape details using the data level of the loci as given in Fig. 9(c). The locations at the chromosomes data scale can instead be better used to represent the semantic level of the nucleus by rendering them as larger spheres, all with the same color and with a single outline around the entire shape, as illustrated in Fig. 9(a).
In Table 1 we list the relationships between data hierarchy and semantic hierarchy for the entire set of scales we support. From the table it follows that the choice of color assignment and the subset of rendered elements on the screen supports viewers in understanding the semantic level that we want to portray. For example, by rendering the fiber positions colored by chromosome we facilitate the understanding of a detailed depiction of a chromosome, rather than the notion that chromosomes consist of several loci. In an alternative depiction for domain experts, who are interested in studying the loci regions, we could instead assign the colors by loci for the fiber data level and beyond.
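The snippet below sketches how such a mapping from semantic scales to data levels, color assignments, and rendering scopes could be encoded; the entries are illustrative stand-ins and do not reproduce the exact contents of Table 1.

```python
# Illustrative stand-in for Table 1 (example values only): each semantic scale
# names the data level it is rendered from, how elements are colored, and
# which subset of the hierarchy remains visible.
SEMANTIC_SCALES = [
    {"semantic": "nucleus",     "data_level": "chromosomes", "color_by": "uniform",    "scope": "all"},
    {"semantic": "chromosomes", "data_level": "loci",        "color_by": "chromosome", "scope": "all"},
    {"semantic": "loci",        "data_level": "fibers",      "color_by": "chromosome", "scope": "all"},
    {"semantic": "fibers",      "data_level": "nucleosomes", "color_by": "chromosome", "scope": "focus chromosome"},
    {"semantic": "nucleosomes", "data_level": "nucleotides", "color_by": "chromosome", "scope": "focus fiber +/- 2"},
    {"semantic": "nucleotides", "data_level": "atoms",       "color_by": "nucleotide", "scope": "focus fiber +/- 2"},
]

def semantic_scale(name):
    """Look up the rendering configuration for one semantic scale."""
    return next(s for s in SEMANTIC_SCALES if s["semantic"] == name)

print(semantic_scale("fibers"))
```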
We added two additional scale transitions that are not realized by visual embedding but instead by color transitions. The first of these transitions changes the colors from the previously maintained chromosome color to nucleotide colors as the nucleotide positions are rendered in their 3D shape, to illustrate that the nucleosomes themselves consist of pairs of nucleotides. The following transition then uses visual embedding as before to transition to atoms while maintaining nucleotide colors. The last transition again changes this color assignment such that the atoms are rendered in their typical element colors, using 3D shading and without flattening them.
Realizing visual scale embedding
For our proof-of-concept implementation we build on the molecular visualization functionality provided in the Marion framework [40]. We added to this framework the capability to load the previously described GSS data. We thus load and store the highest detail of the data-the 23,958,240 nucleosome positions-as well as all positions of the coarser scales. To show more detail, we use the single nucleosome example in the data, which consists of 292 nucleotides, and then create the ≈ 24 · 10^6 instances for the semantic nucleosome scale. Here we make full use of Le Muzic et al.'s [30] technique of employing the tessellation stages on the GPU, which dynamically injects the atoms of the nucleosome. We apply a similar instancing approach for transitioning to an atomistic representation, based on the 1AOI model from the PDB. To visually represent the elements, we utilize 2D sphere impostors instead of sphere meshes [30]. Specifically, we use triangular 2D billboards (i. e., only three vertices) that always face the camera and assign the depth to each fragment that it would get if it had been a sphere.
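The core of the impostor idea is that each fragment of the camera-facing billboard receives the depth it would have on an actual sphere. The Python sketch below only illustrates this per-fragment computation (assuming a view direction along -z); it is not the GPU shader code of the implementation.

```python
import math

def impostor_depth(px, py, cx, cy, radius, sphere_z):
    """For a billboard fragment at (px, py) covering a sphere centered at
    (cx, cy, sphere_z) with the given radius, return the view-space z of the
    sphere surface at that fragment, or None outside the silhouette
    (conceptual sketch of the impostor technique)."""
    dx, dy = px - cx, py - cy
    d2 = dx * dx + dy * dy
    if d2 > radius * radius:
        return None                          # discard: outside the outline
    dz = math.sqrt(radius * radius - d2)     # height of the surface above the plane
    return sphere_z + dz                     # front surface, camera looking along -z

print(impostor_depth(0.2, 0.1, 0.0, 0.0, 1.0, sphere_z=-5.0))
```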
If we wanted to directly render all atoms at the finest detail scale, we would have to deal with ≈ 3.2 Gb · 70 atoms/b = 224 · 10^9 atoms. This amount of detail is not possible to render at interactive rates. With LOD optimizations, such as the creation of super-atoms for distant elements, cellVIEW could process 15 · 10^9 atoms at 60 Hz [30]. This amount of detail does not seem to be necessary in our case. Our main goal is the depiction of the scale transitions, and too much detail would cause visual noise and distractions. We therefore use the scale-dependent removal of distant detail described in Sect. 3.3. As listed in Table 1, for coarse scales we show all chromosomes. Starting with the semantic fibers scale, we only show the focus chromosome. For the semantic nucleosomes level, we only show the focus fiber and two additional fibers in both directions of the sequence. To indicate that the sequence continues, we gradually fade out the ends of the sequence of nucleosomes as shown in Fig. 7. For finer scales beyond the nucleosomes, we maintain the sequence of five fibers around the focus point, but remove the detail of the links between nucleosomes.
To manage the different rendering scopes and color assignments, we assign IDs to elements in a data scale and record the IDs of the hierarchy ancestors of an element. For example, each chromosome data element gets an ID, which in turn is known to the loci data instances. We use this ID to assign a color to the chromosomes. Because we continue rendering all chromosomes even at the fiber data level (i. e., the semantic chromosomes-with-detail level), we also pass the IDs of the chromosomes to the fiber data elements. Later, the IDs of the fiber data elements are used to determine the rendering scope at the data levels of nucleotide positions and finer (more detail).
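The sketch below illustrates this ID-based scope and color management with a hypothetical element record; the field names and values are ours and do not reflect the actual data layout.

```python
from dataclasses import dataclass

@dataclass
class FiberElement:
    """A data element that carries the IDs of its hierarchy ancestors
    (hypothetical structure for illustration)."""
    element_id: int
    chromosome_id: int   # used for color assignment
    fiber_id: int        # used for scope decisions at finer levels

def in_scope(elem, focus_chromosome=None, focus_fibers=None):
    """Keep an element only if it descends from the selected ancestors."""
    if focus_chromosome is not None and elem.chromosome_id != focus_chromosome:
        return False
    if focus_fibers is not None and elem.fiber_id not in focus_fibers:
        return False
    return True

elements = [FiberElement(i, chromosome_id=i % 3, fiber_id=i % 10) for i in range(30)]
visible = [e.element_id for e in elements
           if in_scope(e, focus_chromosome=1, focus_fibers={1, 4, 7})]
print(visible)
```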
For realizing the transition in the visual scale embedding, i. e., transitioning from the coarser scale S_N to the finer scale S_N+1, we begin by alpha-blending S_N rendered with 3D detail and flattened S_N. We achieve the 3D detail with screen-space ambient occlusion (SSAO), while the flattened version does not use SSAO. Next we transition between S_N and S_N+1 by first rendering S_N and then S_N+1 on top, the latter with increasing opacity. Here we avoid visual clutter by only adding detail to elements in S_N+1 on top of those regions that belonged to their parents in S_N. The necessary information for this purpose comes from the previously mentioned IDs. We thus first render all flattened elements of S_N, before blending in detail elements from S_N+1. In the final transition of visual scale embedding, we remove the elements from S_N through alpha-blending. For the two color transitions discussed in Sect. 4.1 we simply alpha-blend between the corresponding elements of S_N and S_N+1, but with different color assignments.
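The following sketch shows one possible way to drive such an embedding transition with a single parameter t in [0, 1]; the phase boundaries are arbitrary example values rather than the parametrization of our implementation.

```python
def embedding_blend(t):
    """Opacity weights for one visual-embedding transition from scale S_N to
    S_N+1: first flatten S_N, then blend in S_N+1 on top, then fade S_N out.
    Returns (shaded S_N, flattened S_N, shaded S_N+1) opacities (sketch)."""
    t = max(0.0, min(1.0, t))
    if t < 1 / 3:                    # phase 1: flatten the coarser scale
        u = t * 3
        return 1.0 - u, u, 0.0
    if t < 2 / 3:                    # phase 2: blend in the finer scale on top
        u = (t - 1 / 3) * 3
        return 0.0, 1.0, u
    u = (t - 2 / 3) * 3              # phase 3: remove the coarser scale
    return 0.0, 1.0 - u, 1.0

for t in (0.0, 0.2, 0.5, 0.8, 1.0):
    print(t, tuple(round(w, 2) for w in embedding_blend(t)))
```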
Interaction considerations
The rendering speeds are in the range of 15-35 fps on an Intel Core™ PC (i7-8700K, 6 cores, 32 GB RAM, 3.70 GHz, Nvidia Quadro P4000, Windows 10 x64). In addition to providing a scale-controlled traversal of the scale hierarchy toward a focus point, we thus allow users to interactively explore the data and choose their focus point themselves.
To support this interaction, we allow users to apply transformations such as rotation and panning. We also allow users to click on the data to select a new focus point, which controls the removal of elements to be rendered at specific scale transitions (as shown in Table 1). First, users can select the focus chromosome (starting at loci positions), whose position is the median point within the sequence of fiber positions for that chromosome. This choice controls which chromosome remains as we transition from the fiber to the nucleosome data scale. Next, starting at the nucleosome data scale, users can select a strand of five consecutive fiber positions, which then ensures that only this strand remains as we transition from nucleosome to nucleotide positions.
To further support the interactive exploration, we also adjust the colors of the elements to be in focus next. For example, the subset of a chromosome next in focus is rendered in a slightly lighter color than the remaining elements of the same level. This approach provides a natural visual indication of the current focus point and guides the view of the users as they explore the scales.
To achieve the scale-constrained camera navigation, we measure the distance to a transition or interaction target point in the data sequence. We measure this distance as the span between the camera location and the position of the target level in its currently active scale. This distance then informs the setting of the camera parameters and the SSAO passes. After the user has selected a new focus point, the current distance to the camera changes, so we also adjust the global scale parameter that we use to control the scale navigation.
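As an illustration, such a global scale parameter can be derived from the camera-to-focus distance with a logarithmic mapping, as sketched below; the distance bounds are rough assumptions and not the values used in our system.

```python
import math

def scale_parameter(camera_distance, d_min=1e-10, d_max=1e-5):
    """Map the camera-to-focus distance (in meters) to a scale parameter in
    [0, 1] on a logarithmic axis: 0 near atomic distances, 1 near the
    full-nucleus view (bounds are illustrative assumptions)."""
    d = min(max(camera_distance, d_min), d_max)
    return (math.log10(d) - math.log10(d_min)) / (math.log10(d_max) - math.log10(d_min))

for d in (6e-6, 1e-7, 2e-9, 1.2e-10):   # roughly: nucleus, locus, base pair, atom
    print(f"{d:.1e} m -> {scale_parameter(d):.2f}")
```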
DISCUSSION
Based on our design and implementation we now compare our results with existing visual examples, examine potential application domains, discuss limitations, and suggest several directions for improvement.
Comparison to traditionally created illustrations
Measuring the ground truth is only possible to a certain degree, which makes a direct comparison with ScaleTrotter's results difficult. One reason is that no static genetic material exists in living cells. Moreover, microscopy is also limited at the scale levels with which we are dealing. We have to rely on the data from the domain experts, with its own limitations (Sect. 5.4), as the input for creating our visualization, and we compare the results with existing illustrations in both static and animated form.
We first look at traditional static multi-scale illustrations as shown in Fig. 10; other illustrations similar to the one in Fig. 10(a) can be found in Annunziato's [3] and Ou et al.'s [45] works. In Fig. 10(a), the illustrators perform the scale transition along a 1D path, supported by the DNA's extreme length. We do not take this route as we employ the actual positions of elements from the involved datasets. This means that we could also apply our approach to biologic agents such as proteins that do not have an extremely long extent. Moreover, the static illustrations have some continuous scale transitions, e. g., the detail of the DNA molecule itself or the sizes of the nucleosomes. Some transitions in the multi-scale representation, however, are more sudden such as the transition from the DNA to nucleosomes, the transition from the nucleosomes to the condensed chromatin fiber, and the transition from that fiber to the 700 nm wide chromosome leg. Fig. 10(b) has only one such transition. The changeover happens directly between the nucleosome level and the mitotic chromosome. We show transitions between scales interactively using our visual scale embedding. The static illustrations in Fig. 10 just use the continuous nature of the DNA to evoke the same hierarchical layering of the different scales. The benefit of the spatial scale transitions in the static illustrations is that a single view can depict all scale levels, while our temporally-controlled scale transitions allow us to interactively explore any point in both the genome's spatial layout and in scale. Moreover, we also show the actual physical configuration of every scale according to the datasets that genome researchers provide, representing the current state of knowledge.
We also compare our results to animated illustrations as exemplified by the "Powers of Ten" video [11] and a video on the composition of the genome created by Drew Berry et al. in 2003. The "Powers of Ten" video only shows the fibers of the DNA double helix curled into loops-a notion that has since been revised by the domain experts. Nonetheless, the video still shows a continuous transition in scale through blending of aligned representations from the fibers, to the nucleotides, to the atoms. It even suggests that we should continue the scale journey beyond the atoms. The second video, in contrast, shows the scale transitions starting from the DNA double helix and zooming out. The scale transitions are depicted as "physical" assembly processes, e. g., going from the double helix to nucleosomes, and from nucleosomes to fibers. Furthermore, shifts of focus or hard cuts are applied as well. The process of assembling an elongated structure through curling up can nicely illustrate the composition of the low-level genome structures, but only if no constraints on the rest of the fibrous structure exist. In our interactive illustration, such constraints do exist: we can zoom out and in, and the locations of all elements are restricted by the given data. Moreover, the construction also potentially creates a lot of motion due to the dense nature of the genome and, thus, visual noise, which might impact the overall visualization. On the other hand, both videos convey the message that no element is static at the small scales. We do not yet show such dynamics in our visualizations.
Both static and dynamic traditional visualizations depict the composition of the genome in its mitotic stage. The chromosomes only assume this stage, however, when the cell divides. Our visualization is the first to provide users with an interactive exploration, with smooth scale transitions, of the genome in its interphase state, the state in which the chromosomes exist most of the time.
Feedback from illustrators and application scenarios
To discuss the creation of illustrations for laypeople with ScaleTrotter, we asked two professional illustrators who work on biological and medical visualizations for feedback. One of them has ten years of experience as a professional scientific illustrator and animator with a focus on biological and medical illustrations for science education. The other expert is a certified illustrator with two years of experience and a PhD in Bioengineering. We conducted a semi-structured interview (approx. 60 min) with them to get critical feedback [24,27] on our illustrative multi-scale visualization and to learn how our approach compares to the way they deal with multi-scale depictions in their daily work.
They immediately considered our ScaleTrotter approach for showing genome scale transitions as part of a general story to tell. They noted, however, that the additional support necessary for telling such a story is still missing, such as a contextual representation of a cell (for which we could investigate cellVIEW [30]) and, in general, audio support and narration. Although they had to judge our results in isolation from other storytelling methods, they saw the benefits of an interactive tool for creating narratives that goes beyond the possibilities of their manual approaches.
We also got a number of specific pieces of advice for improvement. In particular, they recommended different settings for when to make certain transitions in scale space. The illustrators also suggested the addition of "contrast" for those parts that will be in focus next as we zoom in-a feature we then added and describe in Sect. 4.3.
According to them, our concept of using visual scale embedding to transition between different scalar representations has not yet been used in animated illustrations, yet the general concept of showing detail together with context as illustrated in Fig. 3 is known. Instead of using visual scale embedding, they use techniques discussed in Sect. 5.1, or they employ cut-outs with rectangles or boxes to indicate the transition between scales. Our visual scale embedding is seen by them as a clear innovation: "to have a smooth transition between the scales is really cool." Moreover, they were excited about the ability to freely select a point of focus and interactively zoom into the corresponding detail. Basically, they said that our approach would bring them closer to their vision of a "molecular Maya" because it is "essential to have a scientifically correct reference." Connected to this point we also discussed the application of ScaleTrotter in genome research. Due to their close collaborations with domain experts they emphasized that the combination of the genomics sequence data plus some type of spatial information will be essential for future research. A combination of our visualization, which is based on the domain's state-of-the-art spatial data, with existing tools could allow genome scientists to better understand the function of genes and certain genetic diseases.
In summary, they are excited about the visual results and see application possibilities both in teaching and in data exploration.
Feedback from genome scientists
As a result of our conversation with the illustrators they also connected us to a neurobiologist who investigates 3D genome structures at single cell levels, e. g., by comparing cancerous with healthy cells. His group is interested in interactions between different regions of the genome. Although the spatial characteristics of the data are of key importance to them, they still use 2D tools. The scientist confirmed that a combination of their 2D representations with our interactive 3D-spatial multi-scale method would considerably help them to understand the interaction of sequentially distant but spatially close parts of the genome, processes such as gene expression, and DNA-protein interactions.
We also presented our approach to an expert in molecular biology (52 years old, with 22 years of post-PhD experience). He specializes in genetics and studies the composition, architecture, and function of SMC complexes. We conducted a semi-structured interview (approx. 60 minutes) to discuss our results. He stated that transitions between several scales are definitely useful for analyzing the 3D genome. He was satisfied with the coarser chromosome and loci representations, but had suggestions for improving the nucleosome and atomic scales. In particular, he noted the lack of proteins such as histones. He compared our visualization with existing electron microscopy images [44,45] and suggested that a more familiar filament-like representation could increase understandability. In his opinion, some scale transitions happened too early (e. g., the transition from chromosome-colored to nucleotide-colored nucleotides). We adjusted our parametrization accordingly. In addition, based on his feedback, we added an interactive scale offset control that now allows users to adjust the scale representation for a given zoom level. This offset only adapts the chosen representation according to Table 1, while leaving the size on the screen unchanged. The expert also suggested building on the current approach and extending it with more scales, which we plan to do in the future. Similar to the neurobiologist, the molecular biologist also agrees that an integration with existing 2D examination tools has great potential to improve the workflow in a future visualization system.
Limitations
There are several limitations of our work, the first set relating to the source data. While we used actual data generated by domain experts based on the latest understanding of the genome, it is largely generated using simulations and not actual measurements (Sect. 4.1). We do not use actual sequence data at the lowest scales. Moreover, our specific dataset only contains 45 chromosomes, instead of the correct number of 46. We also noticed that the dataset contains 23,958,240 nucleosome positions, yet when we multiply this by the 146 base pairs per nucleosome we arrive at ≈ 3.5 Gb for the entire genome-without even including the linker base pairs in this calculation, and for only 45 chromosomes. Ultimately, better data is required. The overall nature of the visualization and the scale transitions would not be impacted by improved data, however, and we believe that the data quality is already sufficient for general illustration and teaching purposes.
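The arithmetic behind this mismatch is easy to reproduce (our own calculation, using only the numbers stated above):

```python
nucleosomes = 23_958_240
bp_per_nucleosome = 146                  # core particle only, linker DNA excluded
total_bp = nucleosomes * bp_per_nucleosome
print(f"{total_bp / 1e9:.2f} Gb")        # ~3.50 Gb vs. ~3.2 Gb for the human genome
```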
Another limitation is the huge size of the data. Loading all positions for the interactive visualization takes approx. two minutes, but we have not yet explored the feasibility of also loading actual sequence data. We could investigate loading data on-demand for future interactive applications, in particular in the context of tools for domain experts. For such applications we would also likely have to reconsider our design decision to leave out data in the detailed scales, as these may interact with the parts that we do show. We would need to develop a space-dependent look-up to identify parts from the entire genome that potentially interact with the presently shown focus sequences. Another limitation relates to the selection of detail to zoom into. At the moment, we determine the focus interactively based on the currently depicted scale level. This makes it, for example, difficult to select a chromosome deep inside the nucleus or fibers deep inside a chromosome. A combination with an abstract data representation-for example with a domain expert sequencing tool-would address this problem.
Future work
Beyond addressing the mentioned issues, we would like to pursue a number of additional ideas in the future. A next step towards adoption of our approach in biological or medical research is to build an analytical system on top of ScaleTrotter that allows us to query various scientifically relevant aspects. As noted in Sect. 5.2, one scenario is spatial queries that determine whether two genes are located in close spatial vicinity in case they are somehow related. Other visualization systems developed in the past for analyzing gene expression can also benefit from the structural features that ScaleTrotter offers.
Extending to other subject matters, we will also have to investigate scale transitions where the scales cannot be represented with sequences of blobs. For example, can we also use linear or volumetric representations and extend our visual scale embedding to such structures? Alternatively, can we find more effective scale transitions to use, such as geometry-based ones (e. g., [36,38,57]), in addition to the visual embedding and the color changes we use so far? We have to avoid over-using the visual variable color, which is a scarce resource. Many elements could use color at different scales, so dynamic methods for color management will be essential.
Another direction for future research is generative methods for completing the basic skeletal genetic information on the fly. Currently we use data that are based on positions of nucleotides, while higher-level structures are constructed from these. Information about nucleotide orientations and their connectivity is missing, as is the specific sequence, which is currently not derived from real data. ScaleTrotter does not contain higher-level structures and protein complexes that hold the genome together and that would need to be modeled with strict scientific accuracy in mind. An algorithmic generation of such models from Hi-C data would allow biologists to adjust the model parameters according to their mental model, and would give them a system for generating new hypotheses. Such a generative approach would also integrate well with the task of adding processes that involve the DNA, such as condensation, replication, and cell division.
A related fundamental question is how to visualize the dynamic characteristics of the molecular world. It would be highly useful to portray the transition between the interphase and the mitotic form of the DNA, to support visualizing the dynamic processes of reading out the DNA, and to even show the Brownian motion of the atoms.
Finally, our visualization relies on deliberate decisions about how to parameterize the scale transitions. While we used our best judgment to adjust the settings, the resulting parameterization may not be universally valid. An interactive illustration for teaching may need parameters different from those in a tool for domain experts. It would be helpful to derive templates that could be used in different application contexts.
CONCLUSION
ScaleTrotter constitutes one step towards understanding the mysteries of human genetics-not only for a small group of scientists, but also for larger audiences. It is driven by our desire as humans to understand "was die Welt im Innersten zusammenhält" [what "binds the world, and guides its course"] [18]. We believe that our visualization has the potential to serve as the basis of teaching material about the genome and part of the inner workings of biologic processes. It is intended both for the general public and as a foundation for future visual data exploration for genome researchers. In both cases we support, for the first time, an interactive and seamless exploration of the full range of scales-from the nucleus to the atoms of the DNA.
From our discussion it became clear that such multi-scale visualizations need to be created in a fundamentally different way compared to the excellent examples used in the astronomy domain. In this paper we thus distinguish between the positive-exponent scale-space of astronomy (looking inside-out) and the negative-exponent scale-space of genome data (looking outside-in). For the latter we provide a multiscale visualization approach based on visual scale embedding. We also discuss an example of how the controlled use of abstraction in (illustrative) visualization allows us to employ a space-efficient superimposition of visual representations. This stands in contrast to juxtaposed views [58], which are ubiquitous in visualization today.
A remaining question is whether the tipping point between the different types of scale spaces is really approximately one meter (1 · 10 0 m) or whether we should use a different point in scale space such as 1 mm. The answer to this question requires further studies on how to illustrate multi-scale subject matter. An example is to generalize our approach to other biologic phenomena such as mitotic DNA or microtubules as suggested in Sect. 5.5. If we continue our journey down the negative-exponent scale-space we may discover a third scale-space region. Models of atoms and subatomic particles seem to again comprise much empty space, similar to the situation in the positive-exponent scale-space. A bigger vision of this work thus is to completely replicate the "Powers of Ten" video-the 36 orders of magnitude from the size of the observable universe to sub-atomic particles-but with an interactive tool and based on current data and visualizations. | 9,576 |
1907.12352 | 2966538158 | We present ScaleTrotter, a conceptual framework for an interactive, multi-scale visualization of biological mesoscale data and, specifically, genome data. ScaleTrotter allows viewers to smoothly transition from the nucleus of a cell to the atomistic composition of the DNA, while bridging several orders of magnitude in scale. The challenges in creating an interactive visualization of genome data are fundamentally different in several ways from those in other domains like astronomy that require a multi-scale representation as well. First, genome data has intertwined scale levels---the DNA is an extremely long, connected molecule that manifests itself at all scale levels. Second, elements of the DNA do not disappear as one zooms out---instead the scale levels at which they are observed group these elements differently. Third, we have detailed information and thus geometry for the entire dataset and for all scale levels, posing a challenge for interactive visual exploration. Finally, the conceptual scale levels for genome data are close in scale space, requiring us to find ways to visually embed a smaller scale into a coarser one. We address these challenges by creating a new multi-scale visualization concept. We use a scale-dependent camera model that controls the visual embedding of the scales into their respective parents, the rendering of a subset of the scale hierarchy, and the location, size, and scope of the view. In traversing the scales, ScaleTrotter is roaming between 2D and 3D visual representations that are depicted in integrated visuals. We discuss, specifically, how this form of multi-scale visualization follows from the specific characteristics of the genome data and describe its implementation. Finally, we discuss the implications of our work to the general illustrative depiction of multi-scale data. | We investigate multi-scale representations of the DNA, which relates to work in bio-molecular visualization. Several surveys have summarized work in this field @cite_14 @cite_3 @cite_25 @cite_17 , so below we only point out selected approaches. In addition, a large body of work by professional illustrators on mesoscale cell depiction inspired us such as visualizing the human chromosome down to the detail of individual parts of the molecule @cite_15 . | {
"abstract": [
"Structural properties of molecules are of primary concern in many fields. This report provides a comprehensive overview on techniques that have been developed in the fields of molecular graphics and visualization with a focus on applications in structural biology. The field heavily relies on computerized geometric and visual representations of three-dimensional, complex, large, and time-varying molecular structures. The report presents a taxonomy that demonstrates which areas of molecular visualization have already been extensively investigated and where the field is currently heading. It discusses visualizations for molecular structures, strategies for efficient display regarding image quality and frame rate, covers different aspects of level of detail, and reviews visualizations illustrating the dynamic aspects of molecular simulation data. The report concludes with an outlook on promising and important research topics to enable further success in advancing the knowledge about interaction of molecular structures.",
"",
"Abstract Modeling and visualization of the cellular mesoscale, bridging the nanometer scale of molecules to the micrometer scale of cells, is being studied by an integrative approach. Data from structural biology, proteomics, and microscopy are combined to simulate the molecular structure of living cells. These cellular landscapes are used as research tools for hypothesis generation and testing, and to present visual narratives of the cellular context of molecular biology for dissemination, education, and outreach.",
"Structural properties of molecules are of primary concern in many fields. This report provides a comprehensive overview on techniques that have been developed in the fields of molecular graphics and visualization with a focus on applications in structural biology. The field heavily relies on computerized geometric and visual representations of three-dimensional, complex, large and time-varying molecular structures. The report presents a taxonomy that demonstrates which areas of molecular visualization have already been extensively investigated and where the field is currently heading. It discusses visualizations for molecular structures, strategies for efficient display regarding image quality and frame rate, covers different aspects of level of detail and reviews visualizations illustrating the dynamic aspects of molecular simulation data. The survey concludes with an outlook on promising and important research topics to foster further success in the development of tools that help to reveal molecular secrets.",
"Abstract We provide a high-level survey of multiscale molecular visualization techniques, with a focus on application-domain questions, challenges, and tasks. We provide a general introduction to molecular visualization basics and describe a number of domain-specific tasks that drive this work. These tasks, in turn, serve as the general structure of the following survey. First, we discuss methods that support the visual analysis of molecular dynamics simulations. We discuss, in particular, visual abstraction and temporal aggregation. In the second part, we survey multiscale approaches that support the design, analysis, and manipulation of DNA nanostructures and related concepts for abstraction, scale transition, scale-dependent modeling, and navigation of the resulting abstraction spaces. In the third part of the survey, we showcase approaches that support interactive exploration within large structural biology assemblies up to the size of bacterial cells. We describe fundamental rendering techniques as well as approaches for element instantiation, visibility management, visual guidance, camera control, and support of depth perception. We close the survey with a brief listing of important tools that implement many of the discussed approaches and a conclusion that provides some research challenges in the field."
],
"cite_N": [
"@cite_14",
"@cite_3",
"@cite_15",
"@cite_25",
"@cite_17"
],
"mid": [
"2267499985",
"",
"2807479273",
"2549327495",
"2890964348"
]
} | ScaleTrotter: Illustrative Visual Travels Across Negative Scales | The recent advances in visualization have allowed us to depict and understand many aspects of the structure and composition of the living cell. For example, cellVIEW [30] provides detailed visuals for viewers to understand the composition of a cell in an interactive exploration tool and Lindow et al. [35] created an impressive interactive illustrative depiction of RNA and DNA structures. Most such visualizations only provide a depiction of components/processes at a single scale level. Living cells, however, comprise structures that function at scales that range from the very small to the very large. The best example is DNA, which is divided and packed into visible chromosomes during mitosis and meiosis, while being read out at the scale level of base pairs. In between these scale levels, the DNA's structures are typically only known to structural biologists, while beyond the base pairs their atomic composition has implications for specific DNA properties.
The amount of information stored in the DNA is enormous. The human genome consists of roughly 3.2 Gb (giga base pairs) [1,52]. This information would fill 539,265 pages of the TVCG template, which would stack up to approx. 27 m. Yet, the whole information is contained inside the cell's nucleus with only approx. 6 µm diameter [1, page 179]. Similar to a coiled telephone cord, the DNA creates a compact structure that contains the long strand of genetic information. This organization results in several levels of perceivable structures (as shown in Fig. 1), which have been studied and visualized separately in the past. The problem thus arises of how to comprehend and explore the whole scope of this massive amount of multi-scale information. If we teach students or the general public about the relationships between the two extremes, for instance, we have to ensure that they understand how the different scales work together. Domain experts, in contrast, deal with questions such as whether correlations exist between the spatial vicinity of bases and genetic disorders. It may manifest itself through two genetically different characteristics that are far from each other in sequence but close to each other in the DNA's 3D configuration. For experts we thus want to ensure that they can access the information at any of the scales. They should also be able to smoothly navigate the information space. The fundamental problem is thus to understand how we can enable a smooth and intuitive navigation in space and scale with seamless transitions. For this purpose we derive specific requirements of multiscale domains and data with negative scale exponents and analyze how the constraints affect their representations. Based on our analysis we introduce ScaleTrotter, an interactive multi-scale visualization of the human DNA, ranging from the level of the interphase chromosomes 1 in the 6 µm nucleus to the level of base pairs (≈ 2 nm) resp. atoms (≈ 0.12 nm). We cover a scale range of 4-5 orders of magnitude in spatial size, and allow viewers to interactively explore as well as smoothly interpolate between the scales. We focus specifically on the visual transition between neighboring scales, so that viewers can mentally connect them and, ultimately, understand how the DNA is constructed. With our work we go beyond existing multi-scale visualizations due to the DNA's specific character. Unlike multiscale data from other fields, the DNA physically connects conceptual elements across all the scales (like the phone cord) so it never disappears from view. We also need to show detailed data everywhere and, for all stages, the scales are close together in scale space.
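These figures can be checked with simple back-of-the-envelope arithmetic; the sketch below (our own calculation) derives the implied base pairs per page and page thickness, and uses the ≈ 2 m total DNA length quoted later in the text for the compaction ratio.

```python
genome_bp = 3.2e9                     # base pairs in the human genome
pages = 539_265                       # quoted number of TVCG template pages
print(genome_bp / pages)              # ~5,900 base-pair characters per page
print(27.0 / pages * 1000)            # ~0.05 mm implied thickness per page
print(2.0 / 6e-6)                     # ~330,000x: 2 m of DNA vs. 6 um nucleus
```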
We base our implementation on multi-scale data from genome research about the positions of DNA building blocks, which are given at a variety of different scales. We then transition between these levels using what we call visual embedding. It maintains the context of larger-scale elements while adding details from the next-lower scale. We combine this process with scale-dependent rendering that only shows relevant amounts of data on the screen. Finally, we support interactive data exploration through scale-dependent view manipulations, interactive focus specification, and visual highlighting of the zoom focus.
In summary, our contributions are as follows. First, we analyze the unique requirements of multi-scale representations of genome data and show that they cannot be met with existing approaches. Second, we demonstrate how to achieve smooth scale transitions for genome data through visual embedding of one scale within another based on measured and simulated data. We further limit the massive data size with a scale-dependent camera model to avoid visual clutter and to facilitate interactive exploration. Third, we describe the implementation of this approach and compare our results to existing illustrations. Finally, we report on feedback from professional illustrators and domain experts. It indicates that our interactive visualization can serve as a fundamental building block for tools that target both domain experts and laypeople.
Abstraction in illustrative visualization
On a high level, our work relates to the use of abstraction in creating effective visual representations, i. e., the use of visual abstraction. Viola and Isenberg [58] describe this concept as a process, which removes detail when transitioning from a lower-level to a higher-level representation, yet which preserves the overall concept. While they attribute the removed detail to "natural variation, noise, etc." in the investigated multi-scale representation we actually deal with a different data scenario: DNA assemblies at different levels of scale. We thus technically do not deal with a "concept-preserving transformation" [58], but with a process in which the underlying representational concept (or parts of it) can change. Nonetheless, their view of abstraction as an interactive process that allows viewers to relate one representation (at one scale) to another one (at a different scale) is essential to our work.
Also important from Viola and Isenberg's discussion [58] is their concept of axes of abstraction, which are traversed in scale space. We also connect the DNA representations at different scales, facilitating a smooth transition between them. In creating this axis of abstraction, we focus primarily on changes of Viola and Isenberg's geometric axis, but without a geometric interpolation of different representations. Instead, we use visual embedding of one scale in another one.
Scale-dependent molecular and genome visualization
We investigate multi-scale representations of the DNA, which relates to work in bio-molecular visualization. Several surveys have summarized work in this field [2,28,29,39], so below we only point out selected approaches. In addition, a large body of work by professional illustrators on mesoscale cell depiction inspired us, such as visualizations of the human chromosome down to the detail of individual parts of the molecule [19].
In general, as one navigates through large-scale 3D scenes, the underlying subject matter is intrinsically complex and requires appropriate interaction to aid intellection [17]. The inspection of individual parts is challenging, in particular if the viewer is too far away to appreciate their visual details. Yet large, detailed datasets or procedural approaches are essential to create believable representations. To generate not only efficient but effective visualizations, we thus need to remove detail in Viola and Isenberg's [58] visual abstraction sense. This allows us to render at interactive rates as well as to see the intended structures, which would otherwise be hidden due to cluttered views. Consequently, even most representations of a single small scale use some type of multi-scale approach and with it introduce abstraction. Generally, we can distinguish three fundamental techniques: multi-scale representations created by leaving out detail of a single data source, multi-scale techniques that actively represent preserved features at different scales, and multi-scale approaches that can also transition between representations of different scales. We discuss approaches for these three categories next.
Multi-scale visualization by means of leaving out detail
An example of leaving out details in a multi-scale context is Parulek et al.'s [46] continuous levels-of-detail for large molecules and, in particular, proteins. They reduced detail of far-away structures for faster rendering. They used three different conceptual distances to create increasingly coarser depictions such as those used in traditional molecular illustration. For distant parts of a molecule, in particular, they seamlessly transition to super atoms using implicit surface blending.
The cellVIEW framework [30] also employs a similar level-of-detail (LOD) principle using advanced GPU methods for proteins in the HIV. It also removes detail to depict internal structures, and procedurally generates the needed elements. In mesoscopic visualization, Lindow et al. [34] applied grid-based volume rendering to sphere raycasting to show large numbers of atoms. They bridged five orders of magnitude in length scale by exploiting the reoccurrence of molecular sub-entities. Finally, Falk et al. [13] proposed out-of-core optimizations for visualizing large-scale whole-cell simulations. Their approach extended Lindow et al.'s [34] work and provides a GPU ray marching for triangle rendering to depict pre-computed molecular surfaces.
Approaches in this category thus create a "glimpse" of multi-scale representations by removing detail and adjusting the remaining elements accordingly. We use this principle, in fact, in an extreme form to handle the multi-scale character of the chromosome data: we completely remove the detail of a large part of the dataset. If we showed all small details, interactive rendering would be impossible and the details would distract from the depicted elements. Nonetheless, this approach typically only uses a single level of data and does not incorporate different conceptual levels of scale.
Different shape representations by conceptual scale
The encoding of structures through different conceptual scales is often essential. Lindow et al. [35], for instance, described different rendering methods of nucleic acids-from 3D tertiary structures to linear 2D and graph models-with a focus on visual quality and performance. They demonstrate how the same data can be used to create both 3Dspatial representations and abstract 2D mappings of genome data. This produces three scale levels: the actual sequence, the helical form in 3D, and the spatial assembly of this form together with proteins. Waltemate et al. [59] represented the mesoscopic level with meshes or microscopic images, while showing detail through molecule assemblies. To transition between the mesoscopic and the molecular level, they used a membrane mapping to allow users to inspect and resolve areas on demand. A magnifier tool overlays the high-scale background with lower-scale details. This approach relates to our transition scheme, as we depict the higher scale as background and the lower scale as foreground. A texture-based molecule rendering has been proposed by Bajaj et al. [6]. Their method reduces the visual clutter at higher levels by incorporating a biochemically sensitive LOD hierarchy.
Tools used by domain experts also visualize different conceptual genome scales. To the best of our knowledge, the first tool to visualize the 3D human genome has been Genome3D [4]. It allows researchers to select a discrete scale level and then load data specifically for this level. The more recent GMOL tool [43] shows 3D genome data captured from Hi-C data [56]. GMOL uses a six-scale system similar to the one that we employ and we derived our data from theirs. They only support a discrete "toggling between scales" [43], while we provide a smooth scale transition. Moreover, we add further semantic scale levels at the lower end to connect base locations and their atomistic compositions.
Conceptual scale representations with smooth transition
A smooth transition between scales has previously been recognized as important. For instance, van der Zwan et al. [57] carried out structural abstraction with seamless transitions for molecules by continuously adjusting the 3D geometry of the data. Miao et al. [38] substantially extended this concept and applied it to DNA nanostructure visualization. They used ten semantic scales and defined smooth transitions between them. This process allows scientists to interact at the appropriate scale level. Later, Miao et al. [37] combined this approach with three dimensional embeddings. In addition to temporal changes of scale, Lueks et al. [36] explored a seamless and continuous spatial multiscale transition by geometry adjustment, controlled by the location in image or in object space. Finally, Kerpedjiev et al. [25] demonstrated multi-scale navigation of 2D genome maps and 1D genome tracks employing a smooth transition for the user to zoom into views.
All these approaches only transition between nearby scale levels and manipulate the depicted data geometry, which limits applicability. These methods, however, do not work in domains where a geometry transition cannot be defined. Further, they are limited in domains where massive multi-scale transitions are needed due to the large amount of geometry that is required for the detailed scale levels. We face these issues in our work and resolve them using visual embeddings instead of geometry transitions as well as a scale-dependent camera concept. Before detailing our approach, however, we first discuss general multiscale visualization techniques from other visualization domains.
General multi-scale data visualization
The vast differences in spatial scale of our world in general have fascinated people for a long time. Illustrators have created explanations of these scale differences in the form of images (e. g., [60] and [47, Fig. 1]), videos (e. g., the seminal "Powers of Ten" video [11] from 1977), and newer interactive experiences (e. g., [15]). Most illustrators use a smart composition of images blended such that the changes are (almost) unnoticeable, while some use clever perspectives to portray the differences in scale. These inspirations have prompted researchers in visualization to create similar multi-scale experiences, based on real datasets.
The classification from Sect. 2.2 for molecular and genome visualization applies here as well. Everts et al. [12], e. g., removed detail from brain fiber tracts to observe the characteristics of the data at a higher scale. Hsu et al. [22] defined various cameras for a dataset, each showing a different level of detail. They then used image masks and camera ray interpolation to create smooth spatial scale transitions that show the data's multi-scale character. Next, Glueck et al. [16]'s approach exemplifies the change of shape representations by conceptual scale by smoothly changing a multi-scale coordinate grid and position pegs to aid depth perception and multi-scale navigation of 3D scenes. They simply remove detail for scales that no longer contribute much to the visualization. In their accompanying video, interestingly, they limited the detail for each scale to only the focus point of the scale transition to maintain interactive frame rates. Another example of this category are geographic multi-scale representations such as online maps (e. g., Google or Bing maps), which contain multiple scale representations, but typically toggle between them as the user zooms in or out. Finally, virtual globes are an example for conceptual scale representations with smooth transitions. They use smooth texture transitions to show an increasing level of detail as one zooms in. Another example is Mohammed et al.'s [41] Abstractocyte tool, which depicts differently abstracted astrocytes and neurons. It allows users to smoothly transition between the cell-type abstractions using both geometry transformations and blending. We extend the latter to our visual embedding transition.
These approaches, too, only cover a relatively small scale range. Even online map services cover less than approx. six orders of magnitude. Besides the field of bio-molecular and chemistry research discussed in Sect. 2.2, in fact, only astronomy deals with large scale differences. Here, structures range from celestial bodies (≥ ≈ 10^2 m) to the size of the observable universe (1.3 · 10^26 m), in total 24 orders of magnitude.
To depict such data, visualization researchers have created explicit multi-scale rendering architectures. Schatz et al. [51], for example, combined the rendering of overview representations of larger structures with the detailed depiction of parts that are close to the camera or have high importance. To truly traverse the large range of scales of the universe, however, several datasets that cover different orders of size and detail magnitude have to be combined into a dedicated data rendering and exploration framework. The first such framework was introduced by Fu et al. [14,21] who used scale-independent modeling and rendering and power-scaled coordinates to produce scale-insensitive visualizations. This approach essentially treats, models, and visualizes each scale separately and then blends scales in and out as they appear or disappear. The different scales of entities in the universe can also be modeled using a ScaleGraph [26], which facilitates scale-independent rendering using scene graphs. Axelsson et al. [5] later extended this concept to the Dynamic Scene Graph, which, in the OpenSpace system [8], supports several high-detail locations and stereoscopic rendering. The Dynamic Scene Graph uses a dynamic camera node attachment to visualize scenes of varying scale and with high floating point precision.
With genome data we face similar problems concerning scaledependent data and the need to traverse a range of scales. We also face the challenge that our conceptual scales are packed much more tightly in scale space as we explain next. This leads to fundamental differences between both application domains.
MULTI-SCALE GENOME VISUALIZATION
Visualizing the nuclear human genome-from the nucleus that contains all chromosomal genetic material down to the very atoms that make up the DNA-is challenging due to the inherent organization of the DNA in tubular arrangements. DNA in its B-form is only 2 nm [3] wide, which in its fibrous form or at more detailed scales would be too thin to be perceived. This situation is even more aggravated by the dense organization of the DNA and the structural hierarchy that bridges several scales. The previously discussed methods do not deal with such a combination of structural characteristics. Below we thus discuss the challenges that arise from the properties of these biological entities and how we address them by developing our new approach that smoothly transitions between views of the genome at its various scales.
Challenges of interactive multiscale DNA visualization
Domain scientists who sequence, investigate, and generally work with genome data use a series of conceptual levels for analysis and visualization [43]: the genome scale (containing all approx. 3.2 Gb of the human genome), the chromosome scale (50-100 Mb), the loci scale (in the order of Mb), the fiber scale (in the order of Kb), the nucleosome scale (146 b), and the nucleotide scale (i. e., 1 b), in addition to the atomistic composition of the nucleotides. These seven scales cover a range of approx. 4-5 orders of magnitude in physical size. In astronomy or astrophysics, in contrast, researchers deal with a similar number of scales: approx. 7-8 conceptual scales of objects, yet over a range of some 24 orders of magnitude of physical size. A fundamental difference between multi-scale visualizations in the two domains is, therefore, the scale density of the conceptual levels that need to be depicted.
Multi-scale astronomy visualization [5,14,21,26] deals with positive-exponent scale-space (Fig. 2, top), where two neighboring scales are relatively far apart in scale space. For example, planets are much smaller than stars, stars are much smaller than galaxies, galaxies are much smaller than galaxy clusters, etc. On average, two scales have a distance of three or more orders of magnitude in physical space. The consequence of this high distance in scale space between neighboring conceptual levels is that, as one zooms out, elements from one scale typically all but disappear before the elements on the next conceptual level become visible. This aspect is used in creating multi-scale astronomy visualizations. For example, Axelsson et al.'s Dynamic Scene Graph [5] uses spheres of influence to control the visibility range of objects from a given subtree of the scene graph. In fact, the low scale density of the conceptual levels made the seamless animation of the astronomy/astrophysics section in the "Powers of Ten" video [11] from 1977 possible-in a time before computer graphics could be used to create such animations. Eames and Eames [11] simply and effectively blended smoothly between consecutive images that depicted the respective scales. For the cell/genome part, however, they use sudden transitions between conceptual scales without spatial continuity, and they also leave out several of the conceptual scales that scientists use today, such as the chromosomes and the nucleosomes.
Fig. 2. Multi-scale visualization in astronomy vs. genomics: the size difference between celestial bodies is extremely large (e. g., sun vs. earth-the earth is almost invisible at that scale), and the distance between earth and moon is also large compared to their sizes; in the genome, we have similar relative size differences, yet molecules are densely packed, as exemplified by the two base pairs in the DNA double helix.
The reason for this problem of smoothly transitioning between scales in genome visualization-i. e., in negative-exponent scale-space (Fig. 2, bottom)-is that the conceptual levels of a multi-scale visualization are much closer to each other in scale. In contrast to astronomy's positive-exponent scale-space, there is only an average scale distance of about 0.5-0.6 orders of magnitude of physical space between two conceptual scales. Elements on one conceptual scale are thus still visible when elements from the next conceptual scale begin to appear. The scales for genome visualizations are thus much denser compared to astronomy's average scale distance of three orders of magnitude.
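The notion of scale density can be quantified directly: the sketch below computes the average gap, in orders of magnitude, between consecutive conceptual scales. The characteristic sizes are rough ballpark figures chosen for illustration only, so the exact numbers differ from the averages quoted above, but the qualitative contrast between the two domains remains.

```python
import math

def scale_density(sizes_m):
    """Average gap (in orders of magnitude) between consecutive conceptual
    scales, given rough characteristic sizes in meters."""
    logs = sorted(math.log10(s) for s in sizes_m)
    return sum(b - a for a, b in zip(logs, logs[1:])) / (len(logs) - 1)

genome = [6e-6, 1.5e-6, 2e-7, 3e-8, 1e-8, 2e-9, 1.2e-10]   # nucleus ... atom
cosmos = [1e7, 1e9, 1e13, 1e17, 1e21, 1e23, 1.3e26]        # planet ... universe
print(round(scale_density(genome), 2))   # < 1 order of magnitude per step
print(round(scale_density(cosmos), 2))   # ~3 orders of magnitude per step
```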
Moreover, in the genome the building blocks are physically connected in space and across conceptual scales, except for the genome and chromosome levels. From the atoms to the chromosome scale, we have a single connected component. It is assembled in different geometric ways, depending on the conceptual scale at which we choose to observe. For example, the sequence of all nucleotides (base pairs) of the 46 chromosomes in a human cell would stretch for 2 m, with each base pair only being 2 nm wide [3], while a complete set of chromosomes fits into the 6 µm wide nucleus. Nonetheless, in all scales between the sequence of nucleotides and a chromosome we deal with the same, physically connected structure. In astronomy, instead, the physical space between elements within a conceptual scale is mostly empty and elements are physically not connected-elements are only connected by proximity (and gravity), not by visible links.
The large inter-scale distance and physical connectedness, naturally, also create the problem of how to visualize the relationship between two conceptual scale levels. The mentioned multi-scale visualization systems from astronomy [5,14,21,26] use animation for this purpose, sometimes adding invisible and intangible elements such as orbits of celestial bodies. In general multi-scale visualization approaches, multiscale coordinate grids [16] can assist the perception of scale-level relationships. These approaches only work if the respective elements are independent of each other and can fade visually as one zooms out, for example, into the next-higher conceptual scale. The connected composition of the genome does make these approaches impossible. In the genome, in addition, we have a complete model for the details in each conceptual level, derived from data that are averages of measurements from many experiments on a single organism type. We are thus able to and need to show visual detail everywhere-as opposed to only close to a single point like planet Earth in astronomy.
Ultimately, all these points lead to two fundamental challenges for us to solve. The first (discussed in Sect. 3.2 and 3.3) is how to visually create effective transitions between conceptual scales. The transitional scales shall show the containment and relationship character of the data even in still images and seamlessly allow us to travel across the scales as we are interacting. They must deal with the continuous nature of the depicted elements, which are physically connected in space and across scales. The second challenge is a computational one. Positional information of all atoms from the entire genome would not fit into GPU memory and would prohibit interactive rendering performance. We discuss how to overcome these computational issues in Sect. 4, along with the implementation of the visual design from Sect. 3.2 and 3.3.
Visual embedding of conceptual scales
Existing multi-scale visualizations of DNA [36,38,57] or other data [41] often use geometry manipulations to transition from one scale to the next. For the full genome, however, this approach would create too much detail to be useful and would require too many elements to be rendered. Moreover, two consecutive scales may differ significantly in structure and organization. A nucleosome, e. g., consists of nucleotides in double-helix form, wrapped around a histone protein. We thus need appropriate abstracted representations for the whole set of geometry in a given scale that best depict the scale-dependent structure and still allow us to create smooth transitions between scales.
Nonetheless, the mentioned geometry-based multi-scale transformations still serve as an important inspiration to our work. They often provide intermediate representations that may not be entirely accurate, but show how one scale relates to another one, even in a still image. Viewers can appreciate the properties of both involved scale levels, such as in Miao et al.'s [38] transition between nucleotides and strands. Specifically, we take inspiration from traditional illustration where a related visual metaphor has been used before. As exemplified by Fig. 3, illustrators sometimes use an abstracted representation of a coarser scale to aid viewers with understanding the overall composition as well as the spatial location of the finer details. This embedding of one representation scale into the next is similar to combining several layers of visual information-or super-imposition [42, pp. 288 ff]. It is a common approach, for example, in creating maps. In visualization, this principle has been used in the past (e. g., [10,23,49,50]), typically applying some form of transparency to be able to perceive the different layers. Transparency, however, can easily lead to visualizations that are difficult to understand [9]. Simple outlines to indicate the coarser shape or context can also be useful [54]. In our case, even outlines easily lead to clutter due to the immense amount of detail in the genome data. Moreover, we are not interested in showing that some elements are spatially inside others, but rather that the elements are part of a higher-level structure, thus are conceptually contained.
We therefore propose visual scale embedding of the detailed scale into its coarser parent (see the illustration in Fig. 4). We render an abstracted representation of the coarser scale as context: we completely flatten it as shown in Fig. 4, inspired by previous multi-scale visualizations from structural biology [46]. Then we render the detailed geometry of the next-smaller scale on top of it. This concept adequately supports our goal of smooth scale transitions. A geometric representation of the coarser scale is first shown using 3D shading as long as it is still small on the screen, i. e., the camera is far away. It transitions to a flat, canvas-like representation when the camera comes closer and the detail in this scale is not enough anymore. We now add the representation of the more detailed scale on top-again using 3D shading, as shown for two scale transitions in Fig. 5. Our illustrative visualization concept combines the 2D aspect of the flattened coarser scale with the 3D detail of the finer scale. With it we make use of superimposed representations as argued by Viola and Isenberg [58], which are an alternative to spatially or temporally juxtaposed views. In our case, the increasingly abstract character of rendering of the coarser scale (as we flatten it during zooming in) relates to its increasingly contextual and conceptual nature. Our approach thus relates to semantic zooming [48] because the context layer turns into a flat surface or canvas, irrespective of the underlying 3D structure and regardless of the specific chosen view direction. This type of scale zoom does not have the character of cut-away techniques as often used in tools to explore containment in 3D data (e. g., [31,33]). Instead, it is more akin to the semantic zooming in the visualization of abstract data, which is embedded in the 2D plane (e. g., [61]).
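As a sketch of how such a transition can be driven, the snippet below derives two blend factors from the coarser scale's projected screen size: one that flattens the parent from 3D shading to a canvas-like appearance, and one that fades in the 3D detail of the child scale on top. This is an illustrative reconstruction rather than the actual implementation; the pixel thresholds and the use of projected size as the control variable are assumptions.

    def smoothstep(edge0, edge1, x):
        """Hermite interpolation clamped to [0, 1], as commonly used in shaders."""
        t = min(max((x - edge0) / (edge1 - edge0), 0.0), 1.0)
        return t * t * (3.0 - 2.0 * t)

    def embedding_blend_factors(projected_size_px,
                                flatten_start_px=200.0,   # assumed thresholds
                                flatten_end_px=600.0,
                                child_start_px=500.0,
                                child_end_px=900.0):
        """Returns (flatten, child_opacity) in [0, 1] for a parent scale covering
        'projected_size_px' pixels on screen: the parent turns into a flat canvas
        as it grows, and the finer scale is blended in on top shortly after."""
        flatten = smoothstep(flatten_start_px, flatten_end_px, projected_size_px)
        child_opacity = smoothstep(child_start_px, child_end_px, projected_size_px)
        return flatten, child_opacity

    print(embedding_blend_factors(100.0))   # far away: fully 3D parent, no child detail
    print(embedding_blend_factors(1000.0))  # close up: flat parent canvas, full child detail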
Multi-scale visual embedding and scale-dependent view
One visual embedding step connects two consecutive semantic scales. We now concatenate several steps to assemble the whole hierarchy (Fig. 6). This is conceptually straightforward because each scale by itself is shown using 3D shading. Nonetheless, as we get to finer and finer details, we face the two major problems mentioned at the start of Sect. 3.2: visual clutter and limitations of graphics processing. Both are caused by the tight scale space packing of the semantic levels in the genome. At detailed scales, a huge number of elements are potentially visible, e. g., 3.2 Gb at the level of nucleotides. To address this issue, we adjust the camera concept to the multi-scale nature of the data.
In previous multi-scale visualization frameworks [5,14,21,26], researchers have already used scale-constrained camera navigation. For example, they apply a scale-dependent camera speed to quickly cover the huge distances at coarse levels and provide fine control for detailed levels. In addition, they used a scale-dependent physical camera size or scope such that the depicted elements would appropriately fill the distance between near and far plane, or use depth buffer remapping [14] to cover a larger depth range. In astronomy and astrophysics, however, we do not face the problem of a lot of nearby elements in detailed levels of scale due to their loose scale-space packing. After all, if we look into the night sky we do not see much more than "a few" stars from our galactic neighborhood which, in a visualization system, can easily be represented by a texture map. Axelsson et al. [5], for example, simply attach their cameras to nodes within the scale level they want to depict.
For the visualization of genome data, however, we have to introduce an active control of the scale-dependent data-hierarchy size or scope, because otherwise we would "physically see," for example, all nucleosomes or nucleotides up to the end of the nucleus. Aside from the resulting clutter, such complete genome views would also conceptually not be helpful because, due to the nature of the genome, the elements within a detailed scale largely repeat themselves. The visual goal should thus be to only show a relevant and scale-dependent subset of each hierarchy level. We thus limit the rendering scope to a subset of the hierarchy, depending on the chosen scale level and spatial focus point. The example in Fig. 7 depicts the nucleosome scale, where we only show a limited number of nucleosomes to the left and the right of the current focus point in the sequence, while the rest of the hierarchy has been blended out. We thereby extend the visual metaphor of the canvas, which we applied in the visual embedding, and use the white background of the frame buffer as a second, scale-dependent canvas, which limits the visibility of the detail. In contrast to photorealism that drives many multi-scale visualizations in astronomy, we are interested in appropriately abstracted representations through a scale-dependent removal of distant detail to support viewers in focusing on their current region of interest.
IMPLEMENTATION
Based on the conceptual design from Sect. 3 we now describe the implementation of our multi-scale genome visualization framework. We first describe the data we use and then explain the shader-based realization of the scale transitions using a series of visual embedding steps as well as some interaction considerations.
Data sources and data hierarchy
Researchers in genome studies have a high interest in understanding the relationships between the spatial structure at the various scale levels and the biological function of the DNA. Therefore they have created a multi-scale dataset that allows them to look at the genome at different spatial scale levels [43]. This data was derived by Nowotny et al. [43] from a model of the human genome by Asbury et al. [4], which in turn was constructed based on various data sources and observed properties, following the space-filling, fractal packing approach of [7]. As a result, Nowotny et al. [43] obtained the positions of the nucleosomes in space, and from these computed the positions of fibers, loci, and chromosomes (Fig. 8). They stored this data in their own Genome Scale System (GSS) format and also provided the positions of the nucleotides for one nucleosome (Fig. 8, bottom-right). Even with this additional data, we still have to procedurally generate further information as we visualize this data, such as the orientations of the nucleosomes (based on the location of two consecutive nucleosomes) and the linker DNA strands of nucleotides connecting two consecutive nucleosomes.
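The orientation step mentioned above can be sketched as follows. This is a minimal illustration under the assumption that a nucleosome's local frame is aligned with the direction toward its successor in the sequence; the actual procedural generation may differ in detail.

    import numpy as np

    def nucleosome_frame(p_current, p_next, up_hint=np.array([0.0, 1.0, 0.0])):
        """Builds an orthonormal frame for a nucleosome from its own position
        and the position of the next nucleosome in the sequence."""
        forward = p_next - p_current
        forward = forward / np.linalg.norm(forward)
        # Make the up hint usable even if it is nearly parallel to 'forward'.
        if abs(np.dot(forward, up_hint)) > 0.99:
            up_hint = np.array([1.0, 0.0, 0.0])
        right = np.cross(up_hint, forward)
        right = right / np.linalg.norm(right)
        up = np.cross(forward, right)
        return np.column_stack((right, up, forward))  # 3x3 rotation matrix

    frame = nucleosome_frame(np.array([0.0, 0.0, 0.0]), np.array([10.0, 2.0, 0.0]))
    print(frame)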
This data provides positions at every scale level, without additional information about the actual sizes. Only at the nucleotide and atom scales are the sizes known. It was commonly thought that nucleosomes are tightly and homogeneously packed into 30 nm fibers, 120 nm chromonema, and 300-700 nm chromatids, but recent studies [45] disprove this organization and confirm the existence of flexible chains with diameters of 5-24 nm. Therefore, for all hierarchically organized scales coarser than the nucleosome, we do not have information about the specific shape that each data point represents. We use spheres with scale-adjusted sizes as rendering primitives because they portray the chaining of elements according to the data-point sequence well. With respect to visualizing this multi-scale phenomenon, the data hierarchy (i. e., 100 nucleosomes = 1 fiber, 100 fibers = 1 locus, approx. 100 loci = 1 chromosome) is not the same as the hierarchy of semantic scales that a viewer sees. For example, the dataset contains a level that stores the chromosome positions, but if rendered we would only see one sphere for each chromosome (Fig. 9(b)). Such a depiction would not easily be recognized as representing a chromosome due to the lack of detail. The chromosomes by themselves only become apparent once we display them with more shape details using the data level of the loci as given in Fig. 9(c). The locations at the chromosomes data scale can instead be better used to represent the semantic level of the nucleus by rendering them as larger spheres, all with the same color and with a single outline around the entire shape as illustrated in Fig. 9(a).
In Table 1 we list the relationships between data hierarchy and semantic hierarchy for the entire set of scales we support. From the table it follows that the choice of color assignment and the subset of rendered elements on the screen supports viewers in understanding the semantic level that we want to portray. For example, by rendering the fiber positions colored by chromosome we facilitate the understanding of a detailed depiction of a chromosome, rather than that chromosomes consist of several loci. In an alternative depiction for domain experts, who are interested in studying the loci regions, we could instead assign the colors by loci for the fiber data level and beyond.
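Such a mapping can be captured in a small lookup structure. The sketch below is a hypothetical reconstruction based only on the examples mentioned in the surrounding text (chromosome positions standing in for the nucleus, fiber positions colored by chromosome, restricted scope at finer levels); it is not a transcription of the paper's Table 1, and the keys, color choices, and scope strings are assumptions.

    # Hypothetical mapping from a data level to the semantic scale it portrays,
    # with an assumed color assignment and rendering scope for each level.
    SEMANTIC_MAPPING = {
        "chromosome_positions": {"portrays": "nucleus",
                                 "color_by": "uniform", "scope": "all chromosomes"},
        "loci_positions":       {"portrays": "chromosomes",
                                 "color_by": "chromosome", "scope": "all chromosomes"},
        "fiber_positions":      {"portrays": "chromosome with detail",
                                 "color_by": "chromosome", "scope": "all chromosomes"},
        "nucleosome_positions": {"portrays": "fibers / nucleosomes",
                                 "color_by": "chromosome", "scope": "focus fiber +/- 2"},
        "nucleotide_positions": {"portrays": "nucleosomes in detail",
                                 "color_by": "nucleotide", "scope": "focus strand of 5 fibers"},
        "atom_positions":       {"portrays": "atomistic composition",
                                 "color_by": "element", "scope": "focus strand of 5 fibers"},
    }

    print(SEMANTIC_MAPPING["fiber_positions"])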
We added two additional scale transitions that are not realized by visual embedding, but instead by color transitions. The first of these transitions changes the colors from the previously maintained chromosome color to nucleotide colors as the nucleotide positions are rendered in their 3D shape, to illustrate that the nucleosomes themselves consist of pairs of nucleotides. The following transition then uses visual embedding as before, to transition to atoms while maintaining nucleotide colors. The last transition again changes this color assignment such that the atoms are rendered in their typical element colors, using 3D shading and without flattening them.
Realizing visual scale embedding
For our proof-of-concept implementation we build on the molecular visualization functionality provided in the Marion framework [40]. We added to this framework the capability to load the previously described GSS data. We thus load and store the highest detail of the data-the 23,958,240 nucleosome positions-as well as all positions of the coarser scales. To show more detail, we use the single nucleosome example in the data, which consists of 292 nucleotides, and then create the ≈ 24 · 10^6 instances for the semantic nucleosome scale. Here we make full use of Le Muzic et al.'s [30] technique of employing the tessellation stages on the GPU, which dynamically injects the atoms of the nucleosome. We apply a similar instancing approach for transitioning to an atomistic representation, based on the 1AOI model from the PDB. To visually represent the elements, we utilize 2D sphere impostors instead of sphere meshes [30]. Specifically, we use triangular 2D billboards (i. e., only three vertices) that always face the camera and assign the depth to each fragment that it would get if it had been a sphere.
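The per-fragment depth assignment for such impostors follows a standard construction. The CPU-side sketch below illustrates the idea only; it is not the actual shader code, and it assumes a view space in which the camera looks down the negative z axis.

    import math

    def impostor_view_depth(offset_x, offset_y, center_view_z, radius):
        """View-space z of a camera-facing sphere-impostor fragment, or None if
        the fragment lies outside the sphere's silhouette (it would be discarded)."""
        d2 = offset_x * offset_x + offset_y * offset_y
        r2 = radius * radius
        if d2 > r2:
            return None  # discard: outside the circular silhouette
        # Move from the billboard plane to the front-facing sphere surface: the
        # surface is closer to the camera than the center by sqrt(r^2 - d^2).
        return center_view_z + math.sqrt(r2 - d2)

    print(impostor_view_depth(0.0, 0.0, -10.0, 1.0))  # -9.0: front pole of the sphere
    print(impostor_view_depth(1.5, 0.0, -10.0, 1.0))  # None: outside the silhouette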
If we wanted to directly render all atoms at the finest detail scale, we would have to deal with ≈ 3.2 Gb · 70 atoms/b = 224 · 10^9 atoms. This amount of detail is not possible to render at interactive rates. With LOD optimizations, such as the creation of super-atoms for distant elements, cellVIEW could process 15 · 10^9 atoms at 60 Hz [30]. This amount of detail does not seem to be necessary in our case. Our main goal is the depiction of the scale transitions and too much detail would cause visual noise and distractions. We use the scale-dependent removal of distant detail described in Sect. 3.3. As listed in Table 1, for coarse scales we show all chromosomes. Starting with the semantic fibers scale, we only show the focus chromosome. For the semantic nucleosomes level, we only show the focus fiber and two additional fibers in both directions of the sequence. To indicate that the sequence continues, we gradually fade out the ends of the sequence of nucleosomes as shown in Fig. 7. For finer scales beyond the nucleosomes, we maintain the sequence of five fibers around the focus point, but remove the detail of the links between nucleosomes.
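The scope selection can be sketched as a simple filter over element records. This is an illustrative reconstruction with assumed field names and level labels; in the real system the decision is made on the GPU using the IDs described in the next paragraph.

    def in_rendering_scope(element, semantic_level, focus_chromosome, focus_fiber):
        """Decides whether an element (a dict with 'chromosome_id' and 'fiber_id')
        is drawn at the given semantic level. Field names and level labels are
        assumptions that mirror the scoping described in the text."""
        if semantic_level in ("nucleus", "chromosomes", "chromosomes_detail"):
            return True                                   # all chromosomes visible
        if semantic_level == "fibers":
            return element["chromosome_id"] == focus_chromosome
        # Nucleosome level and finer: focus fiber plus two fibers on either side.
        return (element["chromosome_id"] == focus_chromosome
                and abs(element["fiber_id"] - focus_fiber) <= 2)

    element = {"chromosome_id": 7, "fiber_id": 1204}
    print(in_rendering_scope(element, "nucleosomes", focus_chromosome=7, focus_fiber=1203))  # True
    print(in_rendering_scope(element, "nucleosomes", focus_chromosome=3, focus_fiber=1203))  # False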
To manage the different rendering scopes and color assignments, we assign IDs to elements in a data scale and record the IDs of the hierarchy ancestors of an element. For example, each chromosome data element gets an ID, which in turn is known to the loci data instances. We use this ID to assign a color to the chromosomes. Because we continue rendering all chromosomes even at the fiber data level (i. e., the semantic chromosome-with-detail level), we also pass the IDs of the chromosomes to the fiber data elements. Later, the IDs of the fiber data elements are used to determine the rendering scope in the data levels of the nucleotide positions and finer (more detailed) levels.
For realizing the transition in the visual scale embedding, i. e., transitioning from the coarser scale S_N to the finer scale S_N+1, we begin by alpha-blending S_N rendered with 3D detail and flattened S_N. We achieve the 3D detail with screen-space ambient occlusion (SSAO), while the flattened version does not use SSAO. Next we transition between S_N and S_N+1 by first rendering S_N and then S_N+1 on top, the latter with increasing opacity. Here we avoid visual clutter by only adding detail to elements in S_N+1 on top of those regions that belonged to their parents in S_N. The necessary information for this purpose comes from the previously mentioned IDs. We thus first render all flattened elements of S_N, before blending in detail elements from S_N+1. In the final transition of visual scale embedding, we remove the elements from S_N through alpha-blending. For the two color transitions discussed in Sect. 4.1 we simply alpha-blend between the corresponding elements of S_N and S_N+1, but with different color assignments.
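A compact way to express this sequence of blending stages is as a function of a single transition parameter t in [0, 1]. This is a hedged sketch: the stage boundaries are assumptions, and in the actual renderer the blending happens per render pass on the GPU.

    def embedding_transition_weights(t):
        """Splits a scale transition t in [0, 1] into the three blending stages
        described above: (1) flatten the parent S_N by fading out its SSAO-based
        3D shading, (2) fade in the child S_N+1 on top of its parent regions,
        (3) fade out the flattened parent. Returns a dict of weights/opacities."""
        def ramp(a, b):  # linear ramp of t from a..b, clamped to [0, 1]
            return min(max((t - a) / (b - a), 0.0), 1.0)

        return {
            "parent_3d_shading": 1.0 - ramp(0.0, 0.33),  # SSAO weight of S_N
            "child_opacity":     ramp(0.33, 0.66),       # S_N+1 drawn over parent regions
            "parent_opacity":    1.0 - ramp(0.66, 1.0),  # flattened S_N fades away
        }

    for t in (0.0, 0.5, 1.0):
        print(t, embedding_transition_weights(t))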
Interaction considerations
The rendering speeds are in the range of 15-35 fps on a PC with an Intel Core i7-8700K CPU (6 cores, 3.70 GHz), 32 GB RAM, an NVIDIA Quadro P4000 GPU, and Windows 10 x64. In addition to providing a scale-controlled traversal of the scale hierarchy toward a focus point, we thus allow users to interactively explore the data and choose their focus point themselves.
To support this interaction, we allow users to apply transformations such as rotation and panning. We also allow users to click on the data to select a new focus point, which controls the removal of elements to be rendered at specific scale transitions (as shown in Table 1). First, users can select the focus chromosome (starting at loci positions), whose position is the median point within the sequence of fiber positions for that chromosome. This choice controls which chromosome remains as we transition from the fiber to the nucleosome data scale. Next, starting at the nucleosome data scale, users can select a strand of five consecutive fiber positions, which then ensures that only this strand remains as we transition from nucleosome to nucleotide positions.
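The focus-point selection can be illustrated as follows. The sketch assumes a simple data layout (a sequence-ordered list of fiber positions per chromosome) and mirrors the two rules described above: the chromosome focus is the median point within its fiber sequence, and a fiber selection expands to a strand of five consecutive fibers.

    def chromosome_focus_position(fiber_positions):
        """Median point within the ordered sequence of fiber positions of one
        chromosome; used as that chromosome's focus position."""
        return fiber_positions[len(fiber_positions) // 2]

    def fiber_strand_around(fiber_index, num_fibers, half_width=2):
        """Indices of the strand of five consecutive fibers centered on the selection,
        clamped to the valid range of the sequence."""
        start = max(0, min(fiber_index - half_width, num_fibers - (2 * half_width + 1)))
        return list(range(start, start + 2 * half_width + 1))

    positions = [(float(i), 0.0, 0.0) for i in range(11)]     # toy fiber positions
    print(chromosome_focus_position(positions))               # (5.0, 0.0, 0.0)
    print(fiber_strand_around(fiber_index=1, num_fibers=11))  # [0, 1, 2, 3, 4]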
To further support the interactive exploration, we also adjust the colors of the elements to be in focus next. For example, the subset of a chromosome next in focus is rendered in a slightly lighter color than the remaining elements of the same level. This approach provides a natural visual indication of the current focus point and guides the view of the users as they explore the scales.
To achieve the scale-constrained camera navigation, we measure the distance to a transition or interaction target point in the data sequence. We measure this distance as the span between the camera location and the position of the target level in its currently active scale. This distance then informs the setting of camera parameters and SSAO passes. After the user has selected a new focus point, the current distance to the camera will change, so we also adjust the global scale parameter that we use to control the scale navigation.
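The distance-dependent parameter setting can be sketched like this. The proportionality constants are illustrative assumptions; the real system derives camera speed, clipping planes, and SSAO settings from the measured distance to the target in its active scale.

    def scale_dependent_camera_params(distance_to_target):
        """Derives navigation and rendering parameters from the distance between
        the camera and the current target position in its active scale. All
        constants are assumptions for illustration."""
        return {
            # Move faster when far away, slower when close (classic multi-scale control).
            "camera_speed": 0.5 * distance_to_target,
            # Keep the depth range tight around the content that is currently visible.
            "near_plane": 0.001 * distance_to_target,
            "far_plane": 100.0 * distance_to_target,
            # The ambient-occlusion radius shrinks together with the observed scale.
            "ssao_radius": 0.05 * distance_to_target,
        }

    print(scale_dependent_camera_params(6e-6))   # nucleus scale
    print(scale_dependent_camera_params(2e-9))   # nucleotide scale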
DISCUSSION
Based on our design and implementation we now compare our results with existing visual examples, examine potential application domains, discuss limitations, and suggest several directions for improvement.
Comparison to traditionally created illustrations
Measuring the ground truth is only possible to a certain degree, which makes the comparison to ScaleTrotter difficult. One reason is that no static genetic material exists in living cells. Moreover, microscopy is also limited at the scale levels with which we are dealing. We have to rely on the data from the domain experts with its own limitations (Sect. 5.4) as the input for creating our visualization and compare the results with existing illustrations in both static and animated form.
We first look at traditional static multi-scale illustrations as shown in Fig. 10; other illustrations similar to the one in Fig. 10(a) can be found in Annunziato's [3] and Ou et al.'s [45] works. In Fig. 10(a), the illustrators perform the scale transition along a 1D path, supported by the DNA's extreme length. We do not take this route as we employ the actual positions of elements from the involved datasets. This means that we could also apply our approach to biologic agents such as proteins that do not have an extremely long extent. Moreover, the static illustrations have some continuous scale transitions, e. g., the detail of the DNA molecule itself or the sizes of the nucleosomes. Some transitions in the multi-scale representation, however, are more sudden such as the transition from the DNA to nucleosomes, the transition from the nucleosomes to the condensed chromatin fiber, and the transition from that fiber to the 700 nm wide chromosome leg. Fig. 10(b) has only one such transition. The changeover happens directly between the nucleosome level and the mitotic chromosome. We show transitions between scales interactively using our visual scale embedding. The static illustrations in Fig. 10 just use the continuous nature of the DNA to evoke the same hierarchical layering of the different scales. The benefit of the spatial scale transitions in the static illustrations is that a single view can depict all scale levels, while our temporally-controlled scale transitions allow us to interactively explore any point in both the genome's spatial layout and in scale. Moreover, we also show the actual physical configuration of every scale according to the datasets that genome researchers provide, representing the current state of knowledge.
We also compare our results to animated illustrations as exemplified by the "Powers of Ten" video [11] and a video on the composition of the genome created by Drew Berry et al. in 2003. The "Powers of Ten" video only shows the fibers of the DNA double helix curled into loops-a notion that has since been revised by the domain experts. Nonetheless, the video still shows a continuous transition in scale through blending of aligned representations from the fibers, to the nucleotides, to the atoms. It even suggests that we should continue the scale journey beyond the atoms. The second video, in contrast, shows the scale transitions starting from the DNA double helix and zooming out. The scale transitions are depicted as "physical" assembly processes, e. g., going from the double helix to nucleosomes, and from nucleosomes to fibers. Furthermore, shifts of focus or hard cuts are applied as well. The process of assembling an elongated structure through curling up can nicely illustrate the composition of the low-level genome structures, but only if no constraints on the rest of the fibrous structure exist. In our interactive illustration, we have such constraints because we can zoom out and in and because the locations of all elements are restricted by the given data. Moreover, the construction also potentially creates a lot of motion due to the dense nature of the genome and, thus, visual noise, which might impact the overall visualization. On the other hand, both videos convey the message that no element is static at the small scales. We do not yet show such motion in our visualizations.
Both static and dynamic traditional visualizations depict the composition of the genome in its mitotic stage. The chromosomes only assume this stage, however, when the cell divides. Our visualization is the first that provides the user with an interactive exploration with smooth scale transitions of the genome in its interphase state, the state in which the chromosomes exist most of the time.
Feedback from illustrators and application scenarios
To discuss the creation of illustrations for laypeople with ScaleTrotter, we asked two professional illustrators who work on biological and medical visualizations for feedback. One of them has ten years of experience as a professional scientific illustrator and animator with a focus on biological and medical illustrations for science education. The other expert is a certified illustrator with two years of experience plus a PhD in Bioengineering. We conducted a semi-structured interview (approx. 60 min) with them to get critical feedback [24,27] on our illustrative multi-scale visualization and to learn how our approach compares to the way they deal with multi-scale depictions in their daily work.
They immediately considered our ScaleTrotter approach for showing genome scale transitions as part of a general story to tell. They noted, however, that additional support for storytelling is still missing, such as the contextual representation of a cell (for which we could investigate cellVIEW [30]) and, in general, audio support and narration. Although they had to judge our results in isolation from other storytelling methods, they saw the benefits of an interactive tool for creating narratives that goes beyond the possibilities of their manual approaches.
We also got a number of specific pieces of advice for improvement. In particular, they recommended different settings for when to make certain transitions in scale space. The illustrators also suggested the addition of "contrast" for those parts that will be in focus next as we zoom in-a feature we then added and describe in Sect. 4.3.
According to them, our concept of using visual scale embedding to transition between different scale representations has not yet been used in animated illustrations, yet the general concept of showing detail together with context as illustrated in Fig. 3 is known. Instead of using visual scale embedding, they use techniques discussed in Sect. 5.1, or they employ cut-outs with rectangles or boxes to indicate the transition between scales. Our visual scale embedding is seen by them as a clear innovation: "to have a smooth transition between the scales is really cool." Moreover, they were excited about the ability to freely select a point of focus and interactively zoom into the corresponding detail. Basically, they said that our approach would bring them closer to their vision of a "molecular Maya" because it is "essential to have a scientifically correct reference." Connected to this point we also discussed the application of ScaleTrotter in genome research. Due to their close collaborations with domain experts they emphasized that the combination of genomic sequence data with some type of spatial information will be essential for future research. A combination of our visualization, which is based on the domain's state-of-the-art spatial data, with existing tools could allow genome scientists to better understand the function of genes and certain genetic diseases.
In summary, they are excited about the visual results and see application possibilities both in teaching and in data exploration.
Feedback from genome scientists
As a result of our conversation with the illustrators they also connected us to a neurobiologist who investigates 3D genome structures at single cell levels, e. g., by comparing cancerous with healthy cells. His group is interested in interactions between different regions of the genome. Although the spatial characteristics of the data are of key importance to them, they still use 2D tools. The scientist confirmed that a combination of their 2D representations with our interactive 3D-spatial multi-scale method would considerably help them to understand the interaction of sequentially distant but spatially close parts of the genome, processes such as gene expression, and DNA-protein interactions.
We also presented our approach to an expert in molecular biology (52 years old, with 22 years of post-PhD experience). He specializes in genetics and studies the composition, architecture, and function of SMC complexes. We conducted a semi-structured interview (approx. 60 minutes) to discuss our results. He stated that transitions between several scales are definitely useful for analyzing the 3D genome. He was satisfied with the coarser chromosome and loci representations, but had suggestions for improving the nucleosome and atomic scales. In particular, he noted the lack of proteins such as histones. He compared our visualization with existing electron microscopy images [44,45], and suggested that a more familiar filament-like representation could increase understandability. In his opinion, some scale transitions happened too early (e. g., the transition from chromosome-colored to nucleotide-colored nucleotides). We adjusted our parametrization accordingly. In addition, based on his feedback, we added an interactive scale offset control that now allows users to adjust the scale representation for a given zoom level. This offset only adapts the chosen representation according to Table 1, while leaving the size on the screen unchanged. The expert also suggested building on the current approach and extending it with more scales, which we plan to do in the future. Similar to the neurobiologist, the molecular biologist also agreed that an integration with existing 2D examination tools has a great potential to improve the workflow in a future visualization system.
Limitations
There are several limitations of our work, the first set relating to the source data. While we used actual data generated by domain experts based on the latest understanding of the genome, it is largely generated using simulations and not actual measurements (Sect. 4.1). We do not use actual sequence data at the lowest scales. Moreover, our specific dataset only contains 45 chromosomes, instead of the correct number of 46. We also noticed that the dataset contains 23,958,240 nucleosome positions, yet when we multiply this number by the 146 base pairs per nucleosome we arrive at ≈ 3.5 Gb for the entire genome-not even including the linker base pairs in this calculation and for only 45 chromosomes. Ultimately, better data is required. The overall nature of the visualization and the scale transitions would not be impacted by corrected data, and we believe that the data quality is already sufficient for general illustration and teaching purposes.
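The count mentioned above can be verified with a one-line computation (base-pair figures as stated in the text; linker base pairs between nucleosomes are ignored, as in the original estimate).

    nucleosome_positions = 23_958_240
    base_pairs_per_nucleosome = 146          # core particle only, no linker DNA

    total_bp = nucleosome_positions * base_pairs_per_nucleosome
    print(f"{total_bp:,} bp ~= {total_bp / 1e9:.2f} Gb")   # 3,497,903,040 bp ~= 3.50 Gb
    # Compared to the ~3.2 Gb of the human genome, the dataset thus over-counts
    # even before linker base pairs are considered.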
Another limitation is the huge size of the data. Loading all positions for the interactive visualization takes approx. two minutes, but we have not yet explored the feasibility of also loading actual sequence data. We could investigate loading data on-demand for future interactive applications, in particular in the context of tools for domain experts. For such applications we would also likely have to reconsider our design decision to leave out data in the detailed scales, as these may interact with the parts that we do show. We would need to develop a space-dependent look-up to identify parts from the entire genome that potentially interact with the presently shown focus sequences. Another limitation relates to the selection of detail to zoom into. At the moment, we determine the focus interactively based on the currently depicted scale level. This makes it, for example, difficult to select a chromosome deep inside the nucleus or fibers deep inside a chromosome. A combination with an abstract data representation-for example with a domain expert sequencing tool-would address this problem.
Future work
Beyond addressing the mentioned issues, we would like to pursue a number of additional ideas in the future. A next step towards adoption of our approach in biological or medical research is to build an analytical system on top of ScaleTrotter that allows us to query various scientifically relevant aspects. As noted in Sect. 5.2, one scenario is spatial queries that determine whether two genes that are somehow related are located in close spatial vicinity. Other visualization systems developed in the past for analyzing gene expressions can benefit from the structural features that ScaleTrotter offers.
Extending to other subject matters, we will also have to investigate scale transitions where the scales cannot be represented with sequences of blobs. For example, can we also use linear or volumetric representations and extend our visual scale embedding to such structures? Alternatively, can we find more effective scale transitions to use, such as geometry-based ones (e. g., [36,38,57]), in addition to the visual embedding and the color changes we use so far? We have to avoid over-using the visual variable color, which is a scarce resource. Many elements could use color at different scales, so dynamic methods for color management will be essential.
Another direction for future research is generative methods for completing the basic skeletal genetic information on the fly. Currently we use data that are based on positions of nucleotides, while higher-level structures are constructed from these. Information about nucleotide orientations and their connectivity is missing, as well as the specific sequence, which is currently not derived from real data. ScaleTrotter does not contain higher-level structures and protein complexes that hold the genome together and which would need to be modeled with a strict scientific accuracy in mind. An algorithmic generation of such models from Hi-C data would allow biologists to adjust the model parameters according to their mental model, and would give them a system for generating new hypotheses. Such a generative approach would also integrate well with the task of adding processes that involve the DNA, such as condensation, replication, and cell division.
A related fundamental question is how to visualize the dynamic characteristics of the molecular world. It would be highly useful to portray the transition between the interphase and the mitotic form of the DNA, to support visualizing the dynamic processes of reading out the DNA, and to even show the Brownian motion of the atoms.
Finally, our visualization relies on dedicated decisions of how to parameterize the scale transitions. While we used our best judgment to adjust the settings, the resulting parameterization may not be universally valid. An interactive illustration for teaching may need parameters different from those in a tool for domain experts. It would be helpful to derive templates that could be used in different application contexts.
CONCLUSION
ScaleTrotter constitutes one step towards understanding the mysteries of human genetics-not only for a small group of scientists, but also for larger audiences. It is driven by our desire as humans to understand "was die Welt im Innersten zusammenhält" [what "binds the world, and guides its course"] [18]. We believe that our visualization has the potential to serve as the basis of teaching material about the genome and part of the inner workings of biologic processes. It is intended both for the general public and as a foundation for future visual data exploration for genome researchers. In both cases we support, for the first time, an interactive and seamless exploration of the full range of scales-from the nucleus to the atoms of the DNA.
From our discussion it became clear that such multi-scale visualizations need to be created in a fundamentally different way compared to those excellent examples used in the astronomy domain. In this paper we thus distinguish between the positive-exponent scale-space of astronomy (looking inside-out) and the negative-exponent scale-space of genome data (looking outside-in). For the latter we provide a multi-scale visualization approach based on visual scale embedding. We also discuss an example of how the controlled use of abstraction in (illustrative) visualization allows us to employ a space-efficient superimposition of visual representations. This is opposed to juxtaposed views [58], which are ubiquitous in visualization today.
A remaining question is whether the tipping point between the different types of scale spaces is really approximately one meter (1 · 10 0 m) or whether we should use a different point in scale space such as 1 mm. The answer to this question requires further studies on how to illustrate multi-scale subject matter. An example is to generalize our approach to other biologic phenomena such as mitotic DNA or microtubules as suggested in Sect. 5.5. If we continue our journey down the negative-exponent scale-space we may discover a third scale-space region. Models of atoms and subatomic particles seem to again comprise much empty space, similar to the situation in the positive-exponent scale-space. A bigger vision of this work thus is to completely replicate the "Powers of Ten" video-the 36 orders of magnitude from the size of the observable universe to sub-atomic particles-but with an interactive tool and based on current data and visualizations. | 9,576 |
1907.12352 | 2966538158 | We present ScaleTrotter, a conceptual framework for an interactive, multi-scale visualization of biological mesoscale data and, specifically, genome data. ScaleTrotter allows viewers to smoothly transition from the nucleus of a cell to the atomistic composition of the DNA, while bridging several orders of magnitude in scale. The challenges in creating an interactive visualization of genome data are fundamentally different in several ways from those in other domains like astronomy that require a multi-scale representation as well. First, genome data has intertwined scale levels---the DNA is an extremely long, connected molecule that manifests itself at all scale levels. Second, elements of the DNA do not disappear as one zooms out---instead the scale levels at which they are observed group these elements differently. Third, we have detailed information and thus geometry for the entire dataset and for all scale levels, posing a challenge for interactive visual exploration. Finally, the conceptual scale levels for genome data are close in scale space, requiring us to find ways to visually embed a smaller scale into a coarser one. We address these challenges by creating a new multi-scale visualization concept. We use a scale-dependent camera model that controls the visual embedding of the scales into their respective parents, the rendering of a subset of the scale hierarchy, and the location, size, and scope of the view. In traversing the scales, ScaleTrotter is roaming between 2D and 3D visual representations that are depicted in integrated visuals. We discuss, specifically, how this form of multi-scale visualization follows from the specific characteristics of the genome data and describe its implementation. Finally, we discuss the implications of our work to the general illustrative depiction of multi-scale data. | In general, as one navigates through large-scale 3D scenes, the underlying subject matter is intrinsically complex and requires appropriate interaction to aid intellection @cite_8 . The inspection of individual parts is challenging, in particular if the viewer is too far away to appreciate its visual details. Yet large, detailed datasets or procedural approaches are essential to create believable representations. To generate not only efficient but visualizations, we thus need to remove detail in Viola and Isenberg's @cite_39 visual abstraction sense. This allows us to render at interactive rates as well as to see the intended structures, which would otherwise be hidden due to cluttered views. Consequently, even most single-scale small-scale representations use some type of multi-scale approach and with it introduce abstraction. Generally we can distinguish three fundamental techniques: multi-scale representations by leaving out detail of a single data source, multi-scale techniques that actively represent preserved features at different scales, and multi-scale approaches that can also transit between representations of different scales. We discuss approaches for these three categories next. | {
"abstract": [
"We explore the concept of abstraction as it is used in visualization, with the ultimate goal of understanding and formally defining it. Researchers so far have used the concept of abstraction largely by intuition without a precise meaning. This lack of specificity left questions on the characteristics of abstraction, its variants, its control, or its ultimate potential for visualization and, in particular, illustrative visualization mostly unanswered. In this paper we thus provide a first formalization of the abstraction concept and discuss how this formalization affects the application of abstraction in a variety of visualization scenarios. Based on this discussion, we derive a number of open questions still waiting to be answered, thus formulating a research agenda for the use of abstraction for the visual representation and exploration of data. This paper, therefore, is intended to provide a contribution to the discussion of the theoretical foundations of our field, rather than attempting to provide a completed and final theory.",
"Virtual three-dimensional (3-D) environments have become pervasive tools in a number of professional and recreational tasks. However, interacting with these environments can be challenging for users, especially as these environments increase in complexity and scale. In this paper, we argue that the design of 3-D interaction techniques is an ill-defined problem. This claim is elucidated through the context of data-rich and geometrically complex multiscale virtual 3-D environments, where unexpected factors can encumber intellection and navigation. We develop an abstract model to guide our discussion, which illustrates the cyclic relationship of understanding and navigating; a relationship that supports the iterative refinement of a consistent mental representation of the virtual environment. Finally, we highlight strategies to support the design of interactions in multiscale virtual environments, and propose general categories of research focus."
],
"cite_N": [
"@cite_39",
"@cite_8"
],
"mid": [
"2751478023",
"2040747612"
]
} | ScaleTrotter: Illustrative Visual Travels Across Negative Scales | The recent advances in visualization have allowed us to depict and understand many aspects of the structure and composition of the living cell. For example, cellVIEW [30] provides detailed visuals for viewers to understand the composition of a cell in an interactive exploration tool and Lindow et al. [35] created an impressive interactive illustrative depiction of RNA and DNA structures. Most such visualizations only provide a depiction of components/processes at a single scale level. Living cells, however, comprise structures that function at scales that range from the very small to the very large. The best example is DNA, which is divided and packed into visible chromosomes during mitosis and meiosis, while being read out at the scale level of base pairs. In between these scale levels, the DNA's structures are typically only known to structural biologists, while beyond the base pairs their atomic composition has implications for specific DNA properties.
The amount of information stored in the DNA is enormous. The human genome consists of roughly 3.2 Gb (giga base pairs) [1,52]. This information would fill 539,265 pages of the TVCG template, which would stack up to approx. 27 m. Yet, all of this information is contained inside the cell's nucleus, which has a diameter of only approx. 6 µm [1, page 179]. Similar to a coiled telephone cord, the DNA creates a compact structure that contains the long strand of genetic information. This organization results in several levels of perceivable structures (as shown in Fig. 1), which have been studied and visualized separately in the past. The problem thus arises of how to comprehend and explore the whole scope of this massive amount of multi-scale information. If we teach students or the general public about the relationships between the two extremes, for instance, we have to ensure that they understand how the different scales work together. Domain experts, in contrast, deal with questions such as whether correlations exist between the spatial vicinity of bases and genetic disorders. Such a correlation may manifest itself through two genetically different characteristics that are far from each other in sequence but close to each other in the DNA's 3D configuration. For experts we thus want to ensure that they can access the information at any of the scales. They should also be able to smoothly navigate the information space. The fundamental problem is thus to understand how we can enable a smooth and intuitive navigation in space and scale with seamless transitions. For this purpose we derive specific requirements of multi-scale domains and data with negative scale exponents and analyze how the constraints affect their representations. Based on our analysis we introduce ScaleTrotter, an interactive multi-scale visualization of the human DNA, ranging from the level of the interphase chromosomes in the 6 µm nucleus to the level of base pairs (≈ 2 nm) and atoms (≈ 0.12 nm). We cover a scale range of 4-5 orders of magnitude in spatial size, and allow viewers to interactively explore as well as smoothly interpolate between the scales. We focus specifically on the visual transition between neighboring scales, so that viewers can mentally connect them and, ultimately, understand how the DNA is constructed. With our work we go beyond existing multi-scale visualizations due to the DNA's specific character. Unlike multi-scale data from other fields, the DNA physically connects conceptual elements across all the scales (like the phone cord), so it never disappears from view. We also need to show detailed data everywhere and, for all stages, the scales are close together in scale space.
We base our implementation on multi-scale data from genome research about the positions of DNA building blocks, which are given at a variety of different scales. We then transition between these levels using what we call visual embedding. It maintains the context of larger-scale elements while adding details from the next-lower scale. We combine this process with scale-dependent rendering that only shows relevant amounts of data on the screen. Finally, we support interactive data exploration through scale-dependent view manipulations, interactive focus specification, and visual highlighting of the zoom focus.
In summary, our contributions are as follows. First, we analyze the unique requirements of multi-scale representations of genome data and show that they cannot be met with existing approaches. Second, we demonstrate how to achieve smooth scale transitions for genome data through visual embedding of one scale within another based on measured and simulated data. We further limit the massive data size with a scale-dependent camera model to avoid visual clutter and to facilitate interactive exploration. Third, we describe the implementation of this approach and compare our results to existing illustrations. Finally, we report on feedback from professional illustrators and domain experts. It indicates that our interactive visualization can serve as a fundamental building block for tools that target both domain experts and laypeople.
Abstraction in illustrative visualization
On a high level, our work relates to the use of abstraction in creating effective visual representations, i. e., the use of visual abstraction. Viola and Isenberg [58] describe this concept as a process, which removes detail when transitioning from a lower-level to a higher-level representation, yet which preserves the overall concept. While they attribute the removed detail to "natural variation, noise, etc." in the investigated multi-scale representation we actually deal with a different data scenario: DNA assemblies at different levels of scale. We thus technically do not deal with a "concept-preserving transformation" [58], but with a process in which the underlying representational concept (or parts of it) can change. Nonetheless, their view of abstraction as an interactive process that allows viewers to relate one representation (at one scale) to another one (at a different scale) is essential to our work.
Also important from Viola and Isenberg's discussion [58] is their concept of axes of abstraction, which are traversed in scale space. We also connect the DNA representations at different scales, facilitating a smooth transition between them. In creating this axis of abstraction, we focus primarily on changes of Viola and Isenberg's geometric axis, but without a geometric interpolation of different representations. Instead, we use visual embedding of one scale in another one.
Scale-dependent molecular and genome visualization
We investigate multi-scale representations of the DNA, which relates to work in bio-molecular visualization. Several surveys have summarized work in this field [2,28,29,39], so below we only point out selected approaches. In addition, a large body of work by professional illustrators on mesoscale cell depiction inspired us, such as depictions of the human chromosome down to the detail of individual parts of the molecule [19].
In general, as one navigates through large-scale 3D scenes, the underlying subject matter is intrinsically complex and requires appropriate interaction to aid intellection [17]. The inspection of individual parts is challenging, in particular if the viewer is too far away to appreciate its visual details. Yet large, detailed datasets or procedural approaches are essential to create believable representations. To generate not only efficient but effective visualizations, we thus need to remove detail in Viola and Isenberg's [58] visual abstraction sense. This allows us to render at interactive rates as well as to see the intended structures, which would otherwise be hidden due to cluttered views. Consequently, even most single-scale small-scale representations use some type of multiscale approach and with it introduce abstraction. Generally we can distinguish three fundamental techniques: multi-scale representations by leaving out detail of a single data source, multi-scale techniques that actively represent preserved features at different scales, and multi-scale approaches that can also transit between representations of different scales. We discuss approaches for these three categories next.
Multi-scale visualization by means of leaving out detail
An example of leaving out details in a multi-scale context is Parulek et al.'s [46] continuous levels-of-detail for large molecules and, in particular, proteins. They reduced detail of far-away structures for faster rendering. They used three different conceptual distances to create increasingly coarser depictions such as those used in traditional molecular illustration. For distant parts of a molecule, in particular, they seamlessly transition to super atoms using implicit surface blending.
The cellVIEW framework [30] also employs a similar level-of-detail (LOD) principle using advanced GPU methods for proteins in the HIV. It also removes detail to depict internal structures, and procedurally generates the needed elements. In mesoscopic visualization, Lindow et al. [34] applied grid-based volume rendering to sphere raycasting to show large numbers of atoms. They bridged five orders of magnitude in length scale by exploiting the reoccurrence of molecular sub-entities. Finally, Falk et al. [13] proposed out-of-core optimizations for visualizing large-scale whole-cell simulations. Their approach extended Lindow et al.'s [34] work and provides a GPU ray marching for triangle rendering to depict pre-computed molecular surfaces.
Approaches in this category thus create a "glimpse" of multi-scale representations by removing detail and adjusting the remaining elements accordingly. We use this principle, in fact, in an extreme form to handle the multi-scale character of the chromosome data. We completely remove the detail of a large part of the dataset. If we showed all small details, an interactive rendering would be impossible and they would distract from the depicted elements. Nonetheless, this approach typically only uses a single level of data and does not incorporate different conceptual levels of scale.
Different shape representations by conceptual scale
The encoding of structures through different conceptual scales is often essential. Lindow et al. [35], for instance, described different rendering methods of nucleic acids-from 3D tertiary structures to linear 2D and graph models-with a focus on visual quality and performance. They demonstrate how the same data can be used to create both 3D-spatial representations and abstract 2D mappings of genome data. This produces three scale levels: the actual sequence, the helical form in 3D, and the spatial assembly of this form together with proteins. Waltemate et al. [59] represented the mesoscopic level with meshes or microscopic images, while showing detail through molecule assemblies. To transition between the mesoscopic and the molecular level, they used a membrane mapping to allow users to inspect and resolve areas on demand. A magnifier tool overlays the high-scale background with lower-scale details. This approach relates to our transition scheme, as we depict the higher scale as background and the lower scale as foreground. A texture-based molecule rendering has been proposed by Bajaj et al. [6]. Their method reduces the visual clutter at higher levels by incorporating a biochemically sensitive LOD hierarchy.
Tools used by domain experts also visualize different conceptual genome scales. To the best of our knowledge, the first tool to visualize the 3D human genome was Genome3D [4]. It allows researchers to select a discrete scale level and then load data specifically for this level. The more recent GMOL tool [43] shows 3D genome data captured from Hi-C data [56]. GMOL uses a six-scale system similar to the one that we employ, and we derived our data from theirs. They only support a discrete "toggling between scales" [43], while we provide a smooth scale transition. Moreover, we add further semantic scale levels at the lower end to connect base locations and their atomistic compositions.
Conceptual scale representations with smooth transition
A smooth transition between scales has previously been recognized as important. For instance, van der Zwan et al. [57] carried out structural abstraction with seamless transitions for molecules by continuously adjusting the 3D geometry of the data. Miao et al. [38] substantially extended this concept and applied it to DNA nanostructure visualization. They used ten semantic scales and defined smooth transitions between them. This process allows scientists to interact at the appropriate scale level. Later, Miao et al. [37] combined this approach with three dimensional embeddings. In addition to temporal changes of scale, Lueks et al. [36] explored a seamless and continuous spatial multiscale transition by geometry adjustment, controlled by the location in image or in object space. Finally, Kerpedjiev et al. [25] demonstrated multi-scale navigation of 2D genome maps and 1D genome tracks employing a smooth transition for the user to zoom into views.
All these approaches only transition between nearby scale levels and manipulate the depicted data geometry, which limits applicability. These methods, however, do not work in domains where a geometry transition cannot be defined. Further, they are limited in domains where massive multi-scale transitions are needed due to the large amount of geometry that is required for the detailed scale levels. We face these issues in our work and resolve them using visual embeddings instead of geometry transitions as well as a scale-dependent camera concept. Before detailing our approach, however, we first discuss general multiscale visualization techniques from other visualization domains.
General multi-scale data visualization
The vast differences in spatial scale of our world in general have fascinated people for a long time. Illustrators have created explanations of these scale differences in the form of images (e. g., [60] and [47, Fig. 1]), videos (e. g., the seminal "Powers of Ten" video [11] from 1977), and newer interactive experiences (e. g., [15]). Most illustrators use a smart composition of images blended such that the changes are (almost) unnoticeable, while some use clever perspectives to portray the differences in scale. These inspirations have prompted researchers in visualization to create similar multi-scale experiences, based on real datasets.
The classification from Sect. 2.2 for molecular and genome visualization applies here as well. Everts et al. [12], e. g., removed detail from brain fiber tracts to observe the characteristics of the data at a higher scale. Hsu et al. [22] defined various cameras for a dataset, each showing a different level of detail. They then used image masks and camera ray interpolation to create smooth spatial scale transitions that show the data's multi-scale character. Next, Glueck et al.'s [16] approach exemplifies changing shape representations by conceptual scale: they smoothly change a multi-scale coordinate grid and position pegs to aid depth perception and multi-scale navigation of 3D scenes. They simply remove detail for scales that no longer contribute much to the visualization. In their accompanying video, interestingly, they limited the detail for each scale to only the focus point of the scale transition to maintain interactive frame rates. Another example of this category is geographic multi-scale representations such as online maps (e. g., Google or Bing maps), which contain multiple scale representations, but typically toggle between them as the user zooms in or out. Virtual globes, in contrast, are an example of conceptual scale representations with smooth transitions: they use smooth texture transitions to show an increasing level of detail as one zooms in. Another example is Mohammed et al.'s [41] Abstractocyte tool, which depicts differently abstracted astrocytes and neurons. It allows users to smoothly transition between the cell-type abstractions using both geometry transformations and blending. We extend the latter to our visual embedding transition.
These approaches, too, only cover a relatively small scale range. Even online map services cover less than approx. six orders of magnitude. Besides the field of bio-molecular and chemistry research discussed in Sect. 2.2, in fact, only astronomy deals with large scale differences. Here, structures range from celestial bodies (≥ ≈ 10^2 m) to the size of the observable universe (1.3 · 10^26 m), in total 24 orders of magnitude.
To depict such data, visualization researchers have created explicit multi-scale rendering architectures. Schatz et al. [51], for example, combined the rendering of overview representations of larger structures with the detailed depiction of parts that are close to the camera or have high importance. To truly traverse the large range of scales of the universe, however, several datasets that cover different orders of size and detail magnitude have to be combined into a dedicated data rendering and exploration framework. The first such framework was introduced by Fu et al. [14,21] who used scale-independent modeling and rendering and power-scaled coordinates to produce scale-insensitive visualizations. This approach essentially treats, models, and visualizes each scale separately and then blends scales in and out as they appear or disappear. The different scales of entities in the universe can also be modeled using a ScaleGraph [26], which facilitates scale-independent rendering using scene graphs. Axelsson et al. [5] later extended this concept to the Dynamic Scene Graph, which, in the OpenSpace system [8], supports several high-detail locations and stereoscopic rendering. The Dynamic Scene Graph uses a dynamic camera node attachment to visualize scenes of varying scale and with high floating point precision.
With genome data we face similar problems concerning scale-dependent data and the need to traverse a range of scales. We also face the challenge that our conceptual scales are packed much more tightly in scale space, as we explain next. This leads to fundamental differences between both application domains.
MULTI-SCALE GENOME VISUALIZATION
Visualizing the nuclear human genome-from the nucleus that contains all chromosomal genetic material down to the very atoms that make up the DNA-is challenging due to the inherent organization of the DNA in tubular arrangements. DNA in its B-form is only 2 nm wide [3]; in its fibrous form or at more detailed scales it would thus be too thin to be perceived. This situation is further aggravated by the dense organization of the DNA and the structural hierarchy that bridges several scales. The previously discussed methods do not deal with such a combination of structural characteristics. Below we thus discuss the challenges that arise from the properties of these biological entities and how we address them by developing our new approach that smoothly transitions between views of the genome at its various scales.
Challenges of interactive multiscale DNA visualization
Domain scientists who sequence, investigate, and generally work with genome data use a series of conceptual levels for analysis and visualization [43]: the genome scale (containing all approx. 3.2 Gb of the human genome), the chromosome scale (50-100 Mb), the loci scale (in the order of Mb), the fiber scale (in the order of Kb), the nucleosome scale (146 b), and the nucleotide scale (i. e., 1 b), in addition to the atomistic composition of the nucleotides. These seven scales cover a range of approx. 4-5 orders of magnitude in physical size. In astronomy or astrophysics, in contrast, researchers deal with a similar number of scales: approx. 7-8 conceptual scales of objects, yet over a range of some 24 orders of magnitude of physical size. A fundamental difference between multi-scale visualizations in the two domains is, therefore, the scale density of the conceptual levels that need to be depicted.
Multi-scale astronomy visualization [5,14,21,26] deals with positive-exponent scale-space (Fig. 2, top), where two neighboring scales are relatively far apart in scale space. For example, planets are much smaller than stars, stars are much smaller than galaxies, galaxies are much smaller than galaxy clusters, etc. On average, two scales have a distance of three or more orders of magnitude in physical space. The consequence of this high distance in scale space between neighboring conceptual levels is that, as one zooms out, elements from one scale typically all but disappear before the elements on the next conceptual level become visible. This aspect is used in creating multi-scale astronomy visualizations. For example, Axelsson et al.'s Dynamic Scene Graph [5] uses spheres of influence to control the visibility range of objects from a given subtree of the scene graph. In fact, the low scale density of the conceptual levels made the seamless animation of the astronomy/astrophysics section in the "Powers of Ten" video [11] from 1977 possible-in a time before computer graphics could be used to create such animations. Eames and Eames [11] simply and effectively blended smoothly between consecutive images that depicted the respective scales. For the cell/genome part, however, they use sudden transitions between conceptual scales without spatial continuity, and they also leave out several of the conceptual scales that scientists use today, such as the chromosomes and the nucleosomes.
Fig. 2. Multi-scale visualization in astronomy vs. genomics. The size difference between celestial bodies is extremely large (e. g., sun vs. earth-the earth is almost invisible at that scale). The distance between earth and moon is also large, compared to their sizes. In the genome, we have similar relative size differences, yet molecules are densely packed as exemplified by the two base pairs in the DNA double helix.
The reason why such a smooth transition between scales is difficult in genome visualization-i. e., in negative-exponent scale-space (Fig. 2, bottom)-is that the conceptual levels of a multi-scale visualization are much closer to each other in scale. In contrast to astronomy's positive-exponent scale-space, there is only an average scale distance of about 0.5-0.6 orders of magnitude of physical space between two conceptual scales. Elements on one conceptual scale are thus still visible when elements from the next conceptual scale begin to appear. The scales for genome visualizations are therefore much denser compared to astronomy's average scale distance of three orders of magnitude.
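To make this scale-density argument concrete, the following back-of-envelope Python sketch divides each domain's total range, as quoted above, by the number of steps between its conceptual scales; the level counts and ranges are the approximate figures from the text, not exact measurements.

    import math

    # Approximate figures quoted above (not exact measurements).
    astronomy_range = math.log10(1.3e26) - 2      # ~24 orders of magnitude
    astronomy_levels = 8                          # approx. 7-8 conceptual scales
    genome_range = 4.5                            # approx. 4-5 orders of magnitude
    genome_levels = 7                             # genome scale down to atoms

    print(astronomy_range / (astronomy_levels - 1))   # roughly 3+ orders per step
    print(genome_range / (genome_levels - 1))         # well below 1 order per step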
Moreover, in the genome the building blocks are physically connected in space and across conceptual scales, except for the genome and chromosome levels. From the atoms to the chromosome scale, we have a single connected component. It is assembled in different geometric ways, depending on the conceptual scale at which we choose to observe. For example, the sequence of all nucleotides (base pairs) of the 46 chromosomes in a human cell would stretch for 2 m, with each base pair only being 2 nm wide [3], while a complete set of chromosomes fits into the 6 µm wide nucleus. Nonetheless, in all scales between the sequence of nucleotides and a chromosome we deal with the same, physically connected structure. In astronomy, instead, the physical space between elements within a conceptual scale is mostly empty and elements are physically not connected-elements are only connected by proximity (and gravity), not by visible links.
The large inter-scale distance and physical connectedness, naturally, also create the problem of how to visualize the relationship between two conceptual scale levels. The mentioned multi-scale visualization systems from astronomy [5,14,21,26] use animation for this purpose, sometimes adding invisible and intangible elements such as orbits of celestial bodies. In general multi-scale visualization approaches, multiscale coordinate grids [16] can assist the perception of scale-level relationships. These approaches only work if the respective elements are independent of each other and can fade visually as one zooms out, for example, into the next-higher conceptual scale. The connected composition of the genome does make these approaches impossible. In the genome, in addition, we have a complete model for the details in each conceptual level, derived from data that are averages of measurements from many experiments on a single organism type. We are thus able to and need to show visual detail everywhere-as opposed to only close to a single point like planet Earth in astronomy.
Ultimately, all these points lead to two fundamental challenges for us to solve. The first (discussed in Sect. 3.2 and 3.3) is how to visually create effective transitions between conceptual scales. The transitional scales shall show the containment and relationship character of the data even in still images and seamlessly allow us to travel across the scales as we interact. They must deal with the continuous nature of the depicted elements, which are physically connected in space and across scales. The second challenge is a computational one. Positional information of all atoms from the entire genome would not fit into GPU memory and would prohibit interactive rendering performance. We discuss how to overcome these computational issues in Sect. 4, along with the implementation of the visual design from Sect. 3.2 and 3.3.
Visual embedding of conceptual scales
Existing multi-scale visualizations of DNA [36,38,57] or other data [41] often use geometry manipulations to transition from one scale to the next. For the full genome, however, this approach would create too much detail to be useful and would require too many elements to be rendered. Moreover, two consecutive scales may differ significantly in structure and organization. A nucleosome, e. g., consists of nucleotides in double-helix form, wrapped around a histone protein. We thus need appropriate abstracted representations for the whole set of geometry in a given scale that best depict the scale-dependent structure and still allow us to create smooth transitions between scales.
Nonetheless, the mentioned geometry-based multi-scale transformations still serve as an important inspiration to our work. They often provide intermediate representations that may not be entirely accurate, but show how one scale relates to another one, even in a still image. Viewers can appreciate the properties of both involved scale levels, such as in Miao et al.'s [38] transition between nucleotides and strands. Specifically, we take inspiration from traditional illustration where a related visual metaphor has been used before. As exemplified by Fig. 3, illustrators sometimes use an abstracted representation of a coarser scale to aid viewers with understanding the overall composition as well as the spatial location of the finer details. This embedding of one representation scale into the next is similar to combining several layers of visual information-or super-imposition [42, pp. 288 ff]. It is a common approach, for example, in creating maps. In visualization, this principle has been used in the past (e. g., [10,23,49,50]), typically applying some form of transparency to be able to perceive the different layers. Transparency, however, can easily lead to visualizations that are difficult to understand [9]. Simple outlines to indicate the coarser shape or context can also be useful [54]. In our case, even outlines easily lead to clutter due to the immense amount of detail in the genome data. Moreover, we are not interested in showing that some elements are spatially inside others, but rather that the elements are part of a higher-level structure, thus are conceptually contained.
We therefore propose visual scale embedding of the detailed scale into its coarser parent (see the illustration in Fig. 4). We render the coarser scale as a completely flattened context, as shown in Fig. 4 and inspired by previous multi-scale visualizations from structural biology [46]. Then we render the detailed geometry of the next-smaller scale on top of it. This concept adequately supports our goal of smooth scale transitions. A geometric representation of the coarser scale is first shown using 3D shading as long as it is still small on the screen, i. e., the camera is far away. It transitions to a flat, canvas-like representation when the camera comes closer and the detail in this scale is not enough anymore. We now add the representation of the more detailed scale on top-again using 3D shading, as shown for two scale transitions in Fig. 5. Our illustrative visualization concept combines the 2D aspect of the flattened coarser scale with the 3D detail of the finer scale. With it we make use of superimposed representations as argued by Viola and Isenberg [58], which are an alternative to spatially or temporally juxtaposed views. In our case, the increasingly abstract character of the rendering of the coarser scale (as we flatten it during zooming in) relates to its increasingly contextual and conceptual nature. Our approach thus relates to semantic zooming [48] because the context layer turns into a flat surface or canvas, irrespective of the underlying 3D structure and regardless of the specific chosen view direction. This type of scale zoom does not have the character of cut-away techniques as often used in tools to explore containment in 3D data (e. g., [31,33]). Instead, it is more akin to the semantic zooming in the visualization of abstract data, which is embedded in the 2D plane (e. g., [61]).
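As an illustration of this behavior, the following Python sketch maps how much of the screen the coarser scale covers to two rendering weights: its remaining 3D shading and the opacity of the finer scale drawn on top. The function name and threshold values are hypothetical and only mirror the described behavior, not the parameters actually used.

    def embedding_weights(screen_coverage, flatten_start=0.3, detail_start=0.6):
        """Map the coarser scale's screen coverage (0..1) to rendering weights."""
        def ramp(x, lo, hi):
            return min(max((x - lo) / (hi - lo), 0.0), 1.0)

        # While the coarser scale is small it keeps full 3D shading; it is
        # flattened into a canvas as it grows on screen, and only then does
        # the finer scale fade in on top of it.
        shading_coarse = 1.0 - ramp(screen_coverage, flatten_start, detail_start)
        opacity_fine = ramp(screen_coverage, detail_start, 1.0)
        return shading_coarse, opacity_fine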
Multi-scale visual embedding and scale-dependent view
One visual embedding step connects two consecutive semantic scales. We now concatenate several steps to assemble the whole hierarchy (Fig. 6). This is conceptually straightforward because each scale by itself is shown using 3D shading. Nonetheless, as we get to finer and finer details, we face the two major problems mentioned at the start of Sect. 3.2: visual clutter and limitations of graphics processing. Both are caused by the tight scale space packing of the semantic levels in the genome. At detailed scales, a huge number of elements are potentially visible, e. g., 3.2 Gb at the level of nucleotides. To address this issue, we adjust the camera concept to the multi-scale nature of the data.
In previous multi-scale visualization frameworks [5,14,21,26], researchers have already used scale-constrained camera navigation. For example, they apply a scale-dependent camera speed to quickly cover the huge distances at coarse levels and provide fine control for detailed levels. In addition, they used a scale-dependent physical camera size or scope such that the depicted elements would appropriately fill the distance between near and far plane, or use depth buffer remapping [14] to cover a larger depth range. In astronomy and astrophysics, however, we do not face the problem of a lot of nearby elements in detailed levels of scale due to their loose scale-space packing. After all, if we look into the night sky we do not see much more than "a few" stars from our galactic neighborhood which, in a visualization system, can easily be represented by a texture map. Axelsson et al. [5], for example, simply attach their cameras to nodes within the scale level they want to depict.
For the visualization of genome data, however, we have to introduce an active control of the scale-dependent data-hierarchy size or scope because otherwise we would "physically see," for example, all nucleosomes or nucleotides up to the end of the nucleus. Aside from the resulting clutter, such complete genome views would also conceptually not be helpful because, due to the nature of the genome, the elements within a detailed scale largely repeat themselves. The visual goal should thus be to only show a relevant and scale-dependent subset of each hierarchy level. We therefore limit the rendering scope to a subset of the hierarchy, depending on the chosen scale level and spatial focus point. The example in Fig. 7 depicts the nucleosome scale, where we only show a limited number of nucleosomes to the left and the right of the current focus point in the sequence, while the rest of the hierarchy has been blended out. We thereby extend the visual metaphor of the canvas, which we applied in the visual embedding, and use the white background of the frame buffer as a second, scale-dependent canvas, which limits the visibility of the detail. In contrast to the photorealism that drives many multi-scale visualizations in astronomy, we are interested in appropriately abstracted representations through a scale-dependent removal of distant detail to support viewers in focusing on their current region of interest.
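A minimal Python sketch of such a scale-dependent rendering scope: only a window of elements around the current focus index stays visible, and the ends of that window fade out toward the white canvas. The window and fade sizes are made-up placeholders, not the values used in the system.

    def visible_window(focus_index, num_elements, half_width=200, fade=20):
        """Indices around the focus point plus an alpha that fades out toward
        both window ends (window and fade sizes are placeholders)."""
        first = max(0, focus_index - half_width)
        last = min(num_elements, focus_index + half_width + 1)
        window = []
        for i in range(first, last):
            dist = abs(i - focus_index)
            alpha = (1.0 if dist <= half_width - fade
                     else max(0.0, (half_width - dist) / fade))
            window.append((i, alpha))
        return window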
IMPLEMENTATION
Based on the conceptual design from Sect. 3 we now describe the implementation of our multi-scale genome visualization framework. We first describe the data we use and then explain the shader-based realization of the scale transitions using a series of visual embedding steps, as well as some interaction considerations.
Data sources and data hierarchy
Researchers in genome studies have a high interest in understanding the relationships between the spatial structure at the various scale levels and the biological function of the DNA. They have therefore created a multi-scale dataset that allows them to look at the genome in different spatial scale levels [43]. This data was derived by Nowotny et al. [43] from a model of the human genome by Asbury et al. [4], which in turn was constructed based on various data sources and observed properties, following an approach of space-filling, fractal packing [7]. As a result, Nowotny et al. [43] obtained the positions of the nucleotides in space, and from these computed the positions of fibers, loci, and chromosomes (Fig. 8). They stored this data in their own Genome Scale System (GSS) format and also provided the positions of the nucleotides for one nucleosome (Fig. 8, bottom-right). Even with this additional data, we still have to procedurally generate further information as we visualize this data, such as the orientations of the nucleosomes (based on the location of two consecutive nucleosomes) and the linker DNA strands of nucleotides connecting two consecutive nucleosomes.
This data provides positions at every scale level, without additional information about the actual sizes. Only at the nucleotide and atom scales are the sizes known. It was commonly thought that nucleosomes are tightly and homogeneously packed into 30 nm fibers, 120 nm chromonema, and 300-700 nm chromatids, but recent studies [45] disprove this organization and confirm the existence of flexible chains with diameters of 5-24 nm. Therefore, for all hierarchically organized scales coarser than the nucleosome, we do not have information about the specific shape that each data point represents. We use spheres with scale-adjusted sizes as rendering primitives, as they portray well how the elements chain up along the data-point sequence. With respect to visualizing this multi-scale phenomenon, the data hierarchy (i. e., 100 nucleosomes = 1 fiber, 100 fibers = 1 locus, approx. 100 loci = 1 chromosome) is not the same as the hierarchy of semantic scales that a viewer sees. For example, the dataset contains a level that stores the chromosome positions, but if rendered we would only see one sphere for each chromosome (Fig. 9(b)). Such a depiction would not easily be recognized as representing a chromosome due to the lack of detail. The chromosomes by themselves only become apparent once we display them with more shape details using the data level of the loci as given in Fig. 9(c). The locations at the chromosomes data scale can instead be better used to represent the semantic level of the nucleus by rendering them as larger spheres, all with the same color and with a single outline around the entire shape as illustrated in Fig. 9(a).
In Table 1 we list the relationships between data hierarchy and semantic hierarchy for the entire set of scales we support. From the table it follows that the choice of color assignment and the subset of rendered elements on the screen support viewers in understanding the semantic level that we want to portray. For example, by rendering the fiber positions colored by chromosome we convey a detailed depiction of a chromosome, rather than the notion that chromosomes consist of several loci. In an alternative depiction for domain experts, who are interested in studying the loci regions, we could instead assign the colors by loci for the fiber data level and beyond.
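Table 1 itself is not reproduced here, but its role can be sketched in Python as a simple lookup from semantic level to the data level that is rendered and the attribute that drives the coloring. The concrete entries below are illustrative assumptions that follow the relationships described in the text; the real table may differ.

    # Hypothetical encoding of the Table 1 relationships described above.
    SEMANTIC_TO_DATA = {
        # semantic level          : (data level rendered,   colored by)
        "nucleus"                 : ("chromosome positions", "single color"),
        "chromosomes"             : ("loci positions",       "chromosome"),
        "chromosomes with detail" : ("fiber positions",      "chromosome"),
        "nucleosomes"             : ("nucleosome positions", "chromosome"),
        "nucleotides"             : ("nucleotide positions", "nucleotide"),
        "atoms"                   : ("atom positions",       "chemical element"),
    }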
We added two additional scale transitions that are not realized by visual embedding, but instead by color transitions. The first of these transitions changes the colors from the previously maintained chromosome color to nucleotide colors as the nucleotide positions are rendered in their 3D shape, to illustrate that the nucleosomes themselves consist of pairs of nucleotides. The following transition then uses visual embedding as before, to transition to atoms while maintaining nucleotide colors. The last transition again changes this color assignment such that the atoms are rendered in their typical element colors, using 3D shading and without flattening them.
Realizing visual scale embedding
For our proof-of-concept implementation we build on the molecular visualization functionality provided in the Marion framework [40]. We added to this framework the capability to load the previously described GSS data. We thus load and store the highest detail of the data, the 23,958,240 nucleosome positions, as well as all positions of the coarser scales. To show more detail, we use the single nucleosome example in the data, which consists of 292 nucleotides, and then create the ≈ 24 · 10^6 instances for the semantic nucleosome scale. Here we make full use of Le Muzic et al.'s [30] technique of employing the tessellation stages on the GPU, which dynamically injects the atoms of the nucleosome. We apply a similar instancing approach for transitioning to an atomistic representation, based on the 1AOI model from the PDB. To visually represent the elements, we utilize 2D sphere impostors instead of sphere meshes [30]. Specifically, we use triangular 2D billboards (i. e., only three vertices) that always face the camera and assign to each fragment the depth that it would get if it had been a sphere.
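The per-fragment depth assignment of these impostors can be sketched as follows. This is a CPU-side Python illustration of the math that would normally run in a fragment shader, using an orthographic simplification and hypothetical variable conventions; it is not the shader code of the system.

    import math

    def impostor_depth(dx, dy, radius, center_depth):
        """Depth a billboard fragment would get if it were part of a sphere.

        dx, dy: the fragment's eye-space offsets from the sphere center;
        center_depth: eye-space depth of the sphere center. Returns None for
        fragments outside the sphere silhouette ('discard' in a shader).
        """
        r2 = dx * dx + dy * dy
        if r2 > radius * radius:
            return None
        # Front intersection of the view ray with the sphere (orthographic
        # simplification; a perspective shader solves a small quadratic).
        return center_depth - math.sqrt(radius * radius - r2)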
If we wanted to directly render all atoms at the finest detail scale, we would have to deal with ≈ 3.2 Gb · 70 atoms/b = 224 · 10^9 atoms. This amount of detail is not possible to render at interactive rates. With LOD optimizations, such as the creation of super-atoms for distant elements, cellVIEW could process 15 · 10^9 atoms at 60 Hz [30]. This amount of detail does not seem to be necessary in our case. Our main goal is the depiction of the scale transitions and too much detail would cause visual noise and distractions. We use the scale-dependent removal of distant detail described in Sect. 3.3. As listed in Table 1, for coarse scales we show all chromosomes. Starting with the semantic fibers scale, we only show the focus chromosome. For the semantic nucleosomes level, we only show the focus fiber and two additional fibers in both directions of the sequence. To indicate that the sequence continues, we gradually fade out the ends of the sequence of nucleosomes as shown in Fig. 7. For finer scales beyond the nucleosomes, we maintain the sequence of five fibers around the focus point, but remove the detail of the links between nucleosomes.
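The quoted numbers can be checked with a few lines of Python, which also show why even cellVIEW-style LOD rendering would be stretched by a fully atomistic genome:

    base_pairs = 3.2e9        # approx. size of the human genome
    atoms_per_bp = 70         # approx. atoms per base pair, as used above
    total_atoms = base_pairs * atoms_per_bp
    print(f"{total_atoms:.3g}")      # ~2.24e+11, i.e., 224 * 10^9 atoms
    print(total_atoms / 15e9)        # ~15x what cellVIEW processes at 60 Hz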
To manage the different rendering scopes and color assignments, we assign IDs to elements in a data scale and record the IDs of the hierarchy ancestors of an element. For example, each chromosome data element gets an ID, which in turn is known to the loci data instances. We use this ID to assign a color to the chromosomes. Because we continue rendering all chromosomes even at the fiber data level (i. e., the semantic chromosome-with-detail level), we also pass the IDs of the chromosomes to the fiber data elements. Later, the IDs of the fiber data elements are used to determine the rendering scope in the data levels of nucleotide positions and finer (i. e., more detailed) levels.
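A possible record layout for this ID bookkeeping, sketched in Python; the field names and ID scheme are hypothetical, but the idea matches the description: each element stores its own ID plus the IDs of its ancestors, so that color (chromosome ID) and rendering scope (parent ID) can be resolved per element later on.

    def make_children(parent, count, level):
        """Create child records that inherit their ancestors' IDs."""
        return [{
            "level": level,
            "id": (parent["id"], i),          # unique within the hierarchy
            "parent_id": parent["id"],
            # propagate the chromosome ID down to all finer levels
            "chromosome_id": parent.get("chromosome_id", parent["id"]),
        } for i in range(count)]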
For realizing the transition in the visual scale embedding, i. e., transitioning from the coarser scale S_N to the finer scale S_N+1, we begin by alpha-blending S_N rendered with 3D detail and flattened S_N. We achieve the 3D detail with screen-space ambient occlusion (SSAO), while the flattened version does not use SSAO. Next we transition between S_N and S_N+1 by first rendering S_N and then S_N+1 on top, the latter with increasing opacity. Here we avoid visual clutter by only adding detail to elements in S_N+1 on top of those regions that belonged to their parents in S_N. The necessary information for this purpose comes from the previously mentioned IDs. We thus first render all flattened elements of S_N, before blending in detail elements from S_N+1. In the final transition of visual scale embedding, we remove the elements from S_N through alpha-blending. For the two color transitions discussed in Sect. 4.1 we simply alpha-blend between the corresponding elements of S_N and S_N+1, but with different color assignments.
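The three blending phases described above can be summarized in a Python sketch. The phase boundaries of the transition parameter t are assumptions, and the actual implementation performs these steps in shader passes rather than on the CPU.

    def clamp01(x):
        return min(max(x, 0.0), 1.0)

    def transition(t, coarse_elements, fine_elements, parent_footprint_ids):
        """t in [0, 1]: flatten S_N, blend S_N+1 in over its parents, fade S_N out."""
        flatten = clamp01(t / 0.33)                       # phase 1: remove 3D cues
        fine_alpha = clamp01((t - 0.33) / 0.33)           # phase 2: add detail on top
        coarse_alpha = 1.0 - clamp01((t - 0.66) / 0.34)   # phase 3: remove S_N

        draw_list = [(e, {"ssao": 1.0 - flatten, "alpha": coarse_alpha})
                     for e in coarse_elements]
        draw_list += [(e, {"ssao": 1.0, "alpha": fine_alpha})
                      for e in fine_elements
                      if e["parent_id"] in parent_footprint_ids]  # clutter control
        return draw_list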
Interaction considerations
The rendering speeds are in the range of 15-35 fps on an Intel Core™ PC (i7-8700K, 6 cores, 32 GB RAM, 3.70 GHz, nVidia Quadro P4000, Windows 10 x64). In addition to providing a scale-controlled traversal of the scale hierarchy toward a focus point, we thus allow users to interactively explore the data and choose their focus point themselves.
To support this interaction, we allow users to apply transformations such as rotation and panning. We also allow users to click on the data to select a new focus point, which controls the removal of elements to be rendered at specific scale transitions (as shown in Table 1). First, users can select the focus chromosome (starting at loci positions), whose position is the median point within the sequence of fiber positions for that chromosome. This choice controls which chromosome remains as we transition from the fiber to the nucleosome data scale. Next, starting at the nucleosome data scale, users can select a strand of five consecutive fiber positions, which then ensures that only this strand remains as we transition from nucleosome to nucleotide positions.
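A sketch of these two selection steps in Python, shown below with a hypothetical data layout: the focus chromosome is represented by the middle element of its fiber-position sequence, and a strand of five consecutive fiber positions is picked around a clicked index.

    def chromosome_focus(fiber_positions):
        """Median point within the sequence of fiber positions of a chromosome."""
        return fiber_positions[len(fiber_positions) // 2]

    def focus_strand(fiber_positions, picked_index, length=5):
        """Five consecutive fiber positions around the user's pick."""
        start = max(0, min(picked_index - length // 2,
                           len(fiber_positions) - length))
        return fiber_positions[start:start + length]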
To further support the interactive exploration, we also adjust the colors of the elements to be in focus next. For example, the subset of a chromosome next in focus is rendered in a slightly lighter color than the remaining elements of the same level. This approach provides a natural visual indication of the current focus point and guides the view of the users as they explore the scales.
To achieve the scale-constrained camera navigation, we measure the distance to a transition or interaction target point in the data sequence. We measure this distance as the span between the camera location and the position of the target level in its currently active scale. This distance then informs the setting of camera parameters and SSAO passes. After the user has selected a new focus point, the current distance to the camera will change, so we also adjust the global scale parameter that we use to control the scale navigation.
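One possible way to derive such scale-dependent parameters from the measured distance, sketched in Python: the logarithmic mapping and the distance-proportional camera speed are assumptions that mimic the behavior described here and in the cited multi-scale frameworks, not the exact formulas used.

    import math

    def scale_navigation_parameters(camera_pos, target_pos, base_speed=1.0):
        d = math.dist(camera_pos, target_pos)      # span to the target level
        global_scale = math.log10(max(d, 1e-12))   # drives which scales are shown
        camera_speed = base_speed * d              # slow the camera down near the target
        return global_scale, camera_speed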
DISCUSSION
Based on our design and implementation we now compare our results with existing visual examples, examine potential application domains, discuss limitations, and suggest several directions for improvement.
Comparison to traditionally created illustrations
Measuring the ground truth is only possible to a certain degree, which makes a direct comparison with ScaleTrotter difficult. One reason is that no static genetic material exists in living cells. Moreover, microscopy is also limited at the scale levels with which we are dealing. We thus have to rely on the data from the domain experts, with its own limitations (Sect. 5.4), as the input for creating our visualization, and we compare the results with existing illustrations in both static and animated form.
We first look at traditional static multi-scale illustrations as shown in Fig. 10; other illustrations similar to the one in Fig. 10(a) can be found in Annunziato's [3] and Ou et al.'s [45] works. In Fig. 10(a), the illustrators perform the scale transition along a 1D path, supported by the DNA's extreme length. We do not take this route as we employ the actual positions of elements from the involved datasets. This means that we could also apply our approach to biologic agents such as proteins that do not have an extremely long extent. Moreover, the static illustrations have some continuous scale transitions, e. g., the detail of the DNA molecule itself or the sizes of the nucleosomes. Some transitions in the multi-scale representation, however, are more sudden such as the transition from the DNA to nucleosomes, the transition from the nucleosomes to the condensed chromatin fiber, and the transition from that fiber to the 700 nm wide chromosome leg. Fig. 10(b) has only one such transition. The changeover happens directly between the nucleosome level and the mitotic chromosome. We show transitions between scales interactively using our visual scale embedding. The static illustrations in Fig. 10 just use the continuous nature of the DNA to evoke the same hierarchical layering of the different scales. The benefit of the spatial scale transitions in the static illustrations is that a single view can depict all scale levels, while our temporally-controlled scale transitions allow us to interactively explore any point in both the genome's spatial layout and in scale. Moreover, we also show the actual physical configuration of every scale according to the datasets that genome researchers provide, representing the current state of knowledge.
We also compare our results to animated illustrations as exemplified by the "Powers of Ten" video [11] and a video treating the composition of the genome, created by Drew Berry et al. in 2003. The "Powers of Ten" video only shows the fibers of the DNA double helix curled into loops-a notion that has since been revised by the domain experts. Nonetheless, the video still shows a continuous transition in scale through blending of aligned representations from the fibers, to the nucleotides, to the atoms. It even suggests that we should continue the scale journey beyond the atoms. The second video, in contrast, shows the scale transitions starting from the DNA double helix and zooming out. The scale transitions are depicted as "physical" assembly processes, e. g., going from the double helix to nucleosomes, and from nucleosomes to fibers. Furthermore, shifts of focus or hard cuts are applied as well. The process of assembling an elongated structure through curling up can nicely illustrate the composition of the low-level genome structures, but only if no constraints on the rest of the fibrous structure exist. In our interactive illustration we do have such constraints: users can zoom out and in, and the locations of all elements are prescribed by the given data. Moreover, the construction also potentially creates a lot of motion due to the dense nature of the genome and, thus, visual noise, which might impact the overall visualization. On the other hand, both videos convey the message that no element is static at the small scales. We do not yet show this functionality in our visualizations.
Both static and dynamic traditional visualizations depict the composition of the genome in its mitotic stage. The chromosomes only assume this stage, however, when the cell divides. Our visualization is the first that provides the user with an interactive exploration with smooth scale transitions of the genome in its interphase state, the state in which the chromosomes exist most of the time.
Feedback from illustrators and application scenarios
To discuss the creation of illustrations for laypeople with ScaleTrotter, we asked two professional illustrators who work on biological and medical visualizations for feedback. One of them has ten years of experience as a professional scientific illustrator and animator, with a focus on biological and medical illustrations for science education. The other expert is a certified illustrator with two years of experience plus a PhD in bioengineering. We conducted a semi-structured interview (approx. 60 min) with them to get critical feedback [24,27] on our illustrative multi-scale visualization and to learn how our approach compares to the way they deal with multi-scale depictions in their daily work.
They immediately considered our ScaleTrotter approach for showing genome scale transitions as part of a general story to tell. They noted, however, the lack of additional support for telling such a story, such as a contextual representation of a cell (for which we could investigate cellVIEW [30]) and, in general, audio support and narration. Although they had to judge our results in isolation from other storytelling methods, they saw the benefits of an interactive tool for creating narratives that goes beyond the possibilities of their manual approaches.
We also got a number of specific pieces of advice for improvement. In particular, they recommended different settings for when to make certain transitions in scale space. The illustrators also suggested the addition of "contrast" for those parts that will be in focus next as we zoom in-a feature we then added and describe in Sect. 4.3.
According to them, our concept of using visual scale embedding to transition between different scalar representations has not yet been used in animated illustrations, yet the general concept of showing detail together with context as illustrated in Fig. 3 is known. Instead of using visual scale embedding, they use techniques discussed in Sect. 5.1, or they employ cut-outs with rectangles or boxes to indicate the transition between scales. Our visual scale embedding is seen by them as a clear innovation: "to have a smooth transition between the scales is really cool." Moreover, they were excited about the ability to freely select a point of focus and interactively zoom into the corresponding detail. Basically, they said that our approach would bring them closer to their vision of a "molecular Maya" because it is "essential to have a scientifically correct reference." Connected to this point we also discussed the application of ScaleTrotter in genome research. Due to their close collaborations with domain experts they emphasized that the combination of the genomics sequence data plus some type of spatial information will be essential for future research. A combination of our visualization, which is based on the domain's state-of-the-art spatial data, with existing tools could allow genome scientists to better understand the function of genes and certain genetic diseases.
In summary, they are excited about the visual results and see application possibilities both in teaching and in data exploration.
Feedback from genome scientists
As a result of our conversation with the illustrators they also connected us to a neurobiologist who investigates 3D genome structures at single cell levels, e. g., by comparing cancerous with healthy cells. His group is interested in interactions between different regions of the genome. Although the spatial characteristics of the data are of key importance to them, they still use 2D tools. The scientist confirmed that a combination of their 2D representations with our interactive 3D-spatial multi-scale method would considerably help them to understand the interaction of sequentially distant but spatially close parts of the genome, processes such as gene expression, and DNA-protein interactions.
We also presented our approach to a 52-year-old expert in molecular biology with 22 years of post-PhD experience. He specializes in genetics and studies the composition, architecture, and function of SMC complexes. We conducted a semi-structured interview (approx. 60 minutes) to discuss our results. He stated that transitions between several scales are definitely useful for analyzing the 3D genome. He was satisfied with the coarser chromosomes and loci representations, but had suggestions for improving the nucleosome and atomic scales. In particular, he noted the lack of proteins such as histones. He compared our visualization with existing electron microscopy images [44,45], and suggested that a more familiar filament-like representation could increase understandability. In his opinion, some scale transitions happened too early (e. g., the transition from chromosome-colored to nucleotide-colored nucleotides). We adjusted our parametrization accordingly. In addition, based on his feedback, we added an interactive scale offset control that now allows users to adjust the scale representation for a given zoom level. This offset only adapts the chosen representation according to Table 1, while leaving the size on the screen unchanged. The expert also suggested building on the current approach and extending it with more scales, which we plan to do in the future. Similar to the neurobiologist, the molecular biologist also agrees that an integration with existing 2D examination tools has a great potential to improve the workflow in a future visualization system.
Limitations
There are several limitations of our work, the first set relating to the source data. While we used actual data generated by domain experts based on the latest understanding of the genome, it is largely generated using simulations and not actual measurements (Sect. 4.1). We do not use actual sequence data at the lowest scales. Moreover, our specific dataset only contains 45 chromosomes, instead of the correct number of 46. We also noticed that the dataset contains 23,958,240 nucleosome positions, yet when we multiply this by the 146 base pairs per nucleosome we arrive at ≈ 3.5 Gb for the entire genome-not even including the linker base pairs in this calculation and for only 45 chromosomes. Ultimately, better data are required. The overall nature of the visualization and the scale transitions would not be impacted by such corrected data, and we believe that the data quality is already sufficient for general illustration and teaching purposes.
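The inconsistency mentioned here can be reproduced with a two-line Python check, using only the numbers stated above:

    nucleosomes = 23_958_240
    print(nucleosomes * 146 / 1e9)   # ~3.5 Gb, although ~3.2 Gb would be expected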
Another limitation is the huge size of the data. Loading all positions for the interactive visualization takes approx. two minutes, but we have not yet explored the feasibility of also loading actual sequence data. We could investigate loading data on-demand for future interactive applications, in particular in the context of tools for domain experts. For such applications we would also likely have to reconsider our design decision to leave out data in the detailed scales, as these may interact with the parts that we do show. We would need to develop a space-dependent look-up to identify parts from the entire genome that potentially interact with the presently shown focus sequences. Another limitation relates to the selection of detail to zoom into. At the moment, we determine the focus interactively based on the currently depicted scale level. This makes it, for example, difficult to select a chromosome deep inside the nucleus or fibers deep inside a chromosome. A combination with an abstract data representation-for example with a domain expert sequencing tool-would address this problem.
Future work
Beyond addressing the mentioned issues, we would like to pursue a number of additional ideas in the future. A next step towards adoption of our approach in biological or medical research is to build an analytical system on top of ScaleTrotter that allows us to query various scientifically relevant aspects. As noted in Sect. 5.2, one scenario is spatial queries to determine whether two genes are located in close spatial vicinity in case they are somehow related. Other visualization systems developed in the past for analyzing gene expressions can benefit from the structural features that ScaleTrotter offers.
Extending to other subject matters, we will also have to investigate scale transitions where the scales cannot be represented with sequences of blobs. For example, can we also use linear or volumetric representations and extend our visual space embedding to such structures? Alternatively, can we find more effective scale transitions to use such as geometry-based ones (e. g., [36,38,57]), in addition to the visual embedding and the color changes we use so far? We have to avoid over-using the visual variable color which is a scarce resource. Many elements could use color at different scales, so dynamic methods for color management will be essential.
Another direction for future research is generative methods for completing the basic skeletal genetic information on the fly. Currently we use data that are based on positions of nucleotides, while higher-level structures are constructed from these. Information about nucleotide orientations and their connectivity is missing, as well as the specific sequence, which is currently not derived from real data. ScaleTrotter does not contain higher-level structures and protein complexes that hold the genome together and which would need to be modeled with a strict scientific accuracy in mind. An algorithmic generation of such models from Hi-C data would allow biologists to adjust the model parameters according to their mental model, and would give them a system for generating new hypotheses. Such a generative approach would also integrate well with the task of adding processes that involve the DNA, such as condensation, replication, and cell division.
A related fundamental question is how to visualize the dynamic characteristics of the molecular world. It would be highly useful to portray the transition between the interphase and the mitotic form of the DNA, to support visualizing the dynamic processes of reading out the DNA, and to even show the Brownian motion of the atoms.
Finally, our visualization relies on dedicated decisions of how to parameterize the scale transitions. While we used our best judgment to adjust the settings, the resulting parameterization may not be universally valid. An interactive illustration for teaching may need parameters different from those in a tool for domain experts. It would be helpful to derive templates that could be used in different application contexts.
CONCLUSION
ScaleTrotter constitutes one step towards understanding the mysteries of human genetics-not only for a small group of scientists, but also for larger audiences. It is driven by our desire as humans to understand "was die Welt im Innersten zusammenhält" [what "binds the world, and guides its course"] [18]. We believe that our visualization has the potential to serve as the basis of teaching material about the genome and part of the inner workings of biologic processes. It is intended both for the general public and as a foundation for future visual data exploration for genome researchers. In both cases we support, for the first time, an interactive and seamless exploration of the full range of scales-from the nucleus to the atoms of the DNA.
From our discussion it became clear that such multi-scale visualizations need to be created in a fundamentally different way as compared to those excellent examples used in the astronomy domain. In this paper we thus distinguish between the positive-exponent scale-space of astronomy (looking inside-out) and the negative-exponent scale-space of genome data (looking outside-in). For the latter we provide a multiscale visualization approach based on visual scale embedding. We also discuss an example on how the controlled use of abstraction in (illustrative) visualization allows us to employ a space-efficient superimposition of visual representations. This is opposed to juxtaposed views [58], which are ubiquitous in visualization today.
A remaining question is whether the tipping point between the different types of scale spaces is really approximately one meter (1 · 10 0 m) or whether we should use a different point in scale space such as 1 mm. The answer to this question requires further studies on how to illustrate multi-scale subject matter. An example is to generalize our approach to other biologic phenomena such as mitotic DNA or microtubules as suggested in Sect. 5.5. If we continue our journey down the negative-exponent scale-space we may discover a third scale-space region. Models of atoms and subatomic particles seem to again comprise much empty space, similar to the situation in the positive-exponent scale-space. A bigger vision of this work thus is to completely replicate the "Powers of Ten" video-the 36 orders of magnitude from the size of the observable universe to sub-atomic particles-but with an interactive tool and based on current data and visualizations. | 9,576 |
1907.12006 | 2964608056 | Online updating a tracking model to adapt to object appearance variations is challenging. For SGD-based model optimization, using a large learning rate may help to converge the model faster but has the risk of letting the loss wander wildly. Thus traditional optimization methods usually choose a relatively small learning rate and iterate for more steps to converge the model, which is time-consuming. In this paper, we propose to offline train a recurrent neural optimizer to predict an adaptive learning rate for model updating in a meta-learning setting, which can converge the model in a few gradient steps. This substantially improves the convergence speed of updating the tracking model, while achieving better performance. Moreover, we also propose a simple yet effective training trick called Random Filter Scaling to prevent overfitting, which boosts the performance greatly. Finally, we extensively evaluate our tracker, ROAM, on the OTB, VOT, GOT-10K, TrackingNet and LaSOT benchmarks and our method performs favorably against state-of-the-art algorithms. | Learning to learn or meta-learning has a long history @cite_46 @cite_17 @cite_42 . With the recent successes of applying meta-learning to few-shot classification @cite_55 @cite_53 and reinforcement learning @cite_15 @cite_28 , it has regained attention. The pioneering work @cite_37 designs an off-line learned optimizer using gradient descent and shows promising performance compared with traditional optimization methods. However, it does not generalize well for large numbers of descent steps. To mitigate this problem, @cite_27 proposes several training techniques, including parameter scaling and combination with convex functions, to coordinate the learning process of the optimizer. @cite_26 also addresses this issue by designing a hierarchical RNN architecture with dynamically adapted input and output scaling. In contrast to other works that output an increment for each parameter update, which is prone to overfitting due to different gradient scales, we instead associate an adaptive learning rate produced by a recurrent neural network with the computed gradient for fast convergence of the model update. | {
"abstract": [
"The move from hand-designed features to learned features in machine learning has been wildly successful. In spite of this, optimization algorithms are still designed by hand. In this paper we show how the design of an optimization algorithm can be cast as a learning problem, allowing the algorithm to learn to exploit structure in the problems of interest in an automatic way. Our learned algorithms, implemented by LSTMs, outperform generic, hand-designed competitors on the tasks for which they are trained, and also generalize well to new tasks with similar structure. We demonstrate this on a number of tasks, including simple convex problems, training neural networks, and styling images with neural art.",
"Learning to learn has emerged as an important direction for achieving artificial intelligence. Two of the primary barriers to its adoption are an inability to scale to larger problems and a limited ability to generalize to new tasks. We introduce a learned gradient descent optimizer that generalizes well to new tasks, and which has significantly reduced memory and computation overhead. We achieve this by introducing a novel hierarchical RNN architecture, with minimal per-parameter overhead, augmented with additional architectural features that mirror the known structure of optimization tasks. We also develop a meta-training ensemble of small, diverse optimization tasks capturing common properties of loss landscapes. The optimizer learns to outperform RMSProp ADAM on problems in this corpus. More importantly, it performs comparably or better when applied to small convolutional neural networks, despite seeing no neural networks in its meta-training set. Finally, it generalizes to train Inception V3 and ResNet V2 architectures on the ImageNet dataset for thousands of steps, optimization problems that are of a vastly different scale than those it was trained on. We release an open source implementation of the meta-training algorithm.",
"We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning. The goal of meta-learning is to train a model on a variety of learning tasks, such that it can solve new learning tasks using only a small number of training samples. In our approach, the parameters of the model are explicitly trained such that a small number of gradient steps with a small amount of training data from a new task will produce good generalization performance on that task. In effect, our method trains the model to be easy to fine-tune. We demonstrate that this approach leads to state-of-the-art performance on two few-shot image classification benchmarks, produces good results on few-shot regression, and accelerates fine-tuning for policy gradient reinforcement learning with neural network policies.",
"",
"",
"Though deep neural networks have shown great success in the large data domain, they generally perform poorly on few-shot learning tasks, where a model has to quickly generalize after seeing very few examples from each class. The general belief is that gradient-based optimization in high capacity models requires many iterative steps over many examples to perform well. Here, we propose an LSTM-based meta-learner model to learn the exact optimization algorithm used to train another learner neural network in the few-shot regime. The parametrization of our model allows it to learn appropriate parameter updates specifically for the scenario where a set amount of updates will be made, while also learning a general initialization of the learner network that allows for quick convergence of training. We demonstrate that this meta-learning model is competitive with deep metric-learning techniques for few-shot learning.",
"",
"",
"We develop a met alearning approach for learning hierarchically structured poli- cies, improving sample efficiency on unseen tasks through the use of shared primitives—policies that are executed for large numbers of timesteps. Specifi- cally, a set of primitives are shared within a distribution of tasks, and are switched between by task-specific policies. We provide a concrete metric for measuring the strength of such hierarchies, leading to an optimization problem for quickly reaching high reward on unseen tasks. We then present an algorithm to solve this problem end-to-end through the use of any off-the-shelf reinforcement learning method, by repeatedly sampling new tasks and resetting task-specific policies. We successfully discover meaningful motor primitives for the directional movement of four-legged robots, solely by interacting with distributions of mazes. We also demonstrate the transferability of primitives to solve long-timescale sparse-reward obstacle courses, and we enable 3D humanoid robots to robustly walk and crawl with the same policy.",
""
],
"cite_N": [
"@cite_37",
"@cite_26",
"@cite_15",
"@cite_28",
"@cite_53",
"@cite_55",
"@cite_42",
"@cite_27",
"@cite_46",
"@cite_17"
],
"mid": [
"2963775850",
"2951634833",
"2604763608",
"",
"",
"2753160622",
"",
"2963451996",
"2963161674",
""
]
} | 0 |
||
1907.11840 | 2966209849 | Exploring deep convolutional neural networks of high efficiency and low memory usage is essential for a wide variety of machine learning tasks. Most existing approaches accelerate deep models by manipulating parameters or filters without data, e.g., pruning and decomposition. In contrast, we study this problem from a different perspective by respecting the difference between data. An instance-wise feature pruning is developed by identifying informative features for different instances. Specifically, by investigating a feature decay regularization, we expect intermediate feature maps of each instance in deep neural networks to be sparse while preserving the overall network performance. During online inference, subtle features of input images extracted by intermediate layers of a well-trained neural network can be eliminated to accelerate the subsequent calculations. We further take the coefficient of variation as a measure to select the layers that are appropriate for acceleration. Extensive experiments conducted on benchmark datasets and networks demonstrate the effectiveness of the proposed method. | In order to excavate the complexity of each instance, several works have been proposed that assign different parts of the designed network to different input data dynamically. For example, @cite_28 @cite_9 @cite_21 utilized attention and gate layers to evaluate each channel and discard those with subtle importance during the inference phase. @cite_2 @cite_20 @cite_7 utilized a gate cell to discard some layers in pre-trained deep neural networks for efficient inference. @cite_31 @cite_4 @cite_22 @cite_5 @cite_24 further proposed the branch selection operation to allow the learned neural networks to change themselves according to different input data. @cite_30 @cite_3 @cite_27 applied the dynamic strategy on the activations of feature maps in neural networks. | {
"abstract": [
"We introduce the Dynamic Capacity Network (DCN), a neural network that can adaptively assign its capacity across different portions of the input data. This is achieved by combining modules of two types: low-capacity subnetworks and high-capacity sub-networks. The low-capacity sub-networks are applied across most of the input, but also provide a guide to select a few portions of the input on which to apply the high-capacity sub-networks. The selection is made using a novel gradient-based attention mechanism, that efficiently identifies input regions for which the DCN's output is most sensitive and to which we should devote more capacity. We focus our empirical evaluation on the Cluttered MNIST and SVHN image datasets. Our findings indicate that DCNs are able to drastically reduce the number of computations, compared to traditional convolutional neural networks, while maintaining similar or even better performance.",
"",
"We propose and systematically evaluate three strategies for training dynamically-routed artificial neural networks: graphs of learned transformations through which different input signals may take different paths. Though some approaches have advantages over others, the resulting networks are often qualitatively similar. We find that, in dynamically-routed networks trained to classify images, layers and branches become specialized to process distinct categories of images. Additionally, given a fixed computational budget, dynamically-routed networks tend to perform better than comparable statically-routed networks.",
"Do convolutional networks really need a fixed feed-forward structure? What if, after identifying the high-level concept of an image, a network could move directly to a layer that can distinguish fine-grained differences? Currently, a network would first need to execute sometimes hundreds of intermediate layers that specialize in unrelated aspects. Ideally, the more a network already knows about an image, the better it should be at deciding which layer to compute next. In this work, we propose convolutional networks with adaptive inference graphs (ConvNet-AIG) that adaptively define their network topology conditioned on the input image. Following a high-level structure similar to residual networks (ResNets), ConvNet-AIG decides for each input image on the fly which layers are needed. In experiments on ImageNet we show that ConvNet-AIG learns distinct inference graphs for different categories. Both ConvNet-AIG with 50 and 101 layers outperform their ResNet counterpart, while using (20 ) and (33 ) less computations respectively. By grouping parameters into layers for related classes and only executing relevant layers, ConvNet-AIG improves both efficiency and overall classification quality. Lastly, we also study the effect of adaptive inference graphs on the susceptibility towards adversarial examples. We observe that ConvNet-AIG shows a higher robustness than ResNets, complementing other known defense mechanisms.",
"Making deep convolutional neural networks more accurate typically comes at the cost of increased computational and memory resources. In this paper, we reduce this cost by exploiting the fact that the importance of features computed by convolutional layers is highly input-dependent, and propose feature boosting and suppression (FBS), a new method to predictively amplify salient convolutional channels and skip unimportant ones at run-time. FBS introduces small auxiliary connections to existing convolutional layers. In contrast to channel pruning methods which permanently remove channels, it preserves the full network structures and accelerates convolution by dynamically skipping unimportant input and output channels. FBS-augmented networks are trained with conventional stochastic gradient descent, making it readily available for many state-of-the-art CNNs. We compare FBS to a range of existing channel pruning and dynamic execution schemes and demonstrate large improvements on ImageNet classification. Experiments show that FBS can respectively provide @math and @math savings in compute on VGG-16 and ResNet-18, both with less than @math top-5 accuracy loss.",
"In this paper, we propose a Runtime Neural Pruning (RNP) framework which prunes the deep neural network dynamically at the runtime. Unlike existing neural pruning methods which produce a fixed pruned model for deployment, our method preserves the full ability of the original network and conducts pruning according to the input image and current feature maps adaptively. The pruning is performed in a bottom-up, layer-by-layer manner, which we model as a Markov decision process and use reinforcement learning for training. The agent judges the importance of each convolutional kernel and conducts channel-wise pruning conditioned on different samples, where the network is pruned more when the image is easier for the task. Since the ability of network is fully preserved, the balance point is easily adjustable according to the available resources. Our method can be applied to off-the-shelf network structures and reach a better tradeoff between speed and accuracy, especially with a large pruning rate.",
"Employing deep neural networks to obtain state-of-the-art performance on computer vision tasks can consume billions of floating point operations and several Joules of energy per evaluation. Network pruning, which statically removes unnecessary features and weights, has emerged as a promising way to reduce this computation cost. In this paper, we propose channel gating, a dynamic, fine-grained, training-based computation-cost-reduction scheme. Channel gating works by identifying the regions in the features which contribute less to the classification result and turning off a subset of the channels for computing the pixels within these uninteresting regions. Unlike static network pruning, the channel gating optimizes computations exploiting characteristics specific to each input at run-time. We show experimentally that applying channel gating in state-of-the-art networks can achieve 66 and 60 reduction in FLOPs with 0.22 and 0.29 accuracy loss on the CIFAR-10 and CIFAR-100 datasets, respectively.",
"In this paper, we present a novel and general network structure towards accelerating the inference process of convolutional neural networks, which is more complicated in network structure yet with less inference complexity. The core idea is to equip each original convolutional layer with another low-cost collaborative layer (LCCL), and the element-wise multiplication of the ReLU outputs of these two parallel layers produces the layer-wise output. The combined layer is potentially more discriminative than the original convolutional layer, and its inference is faster for two reasons: 1) the zero cells of the LCCL feature maps will remain zero after element-wise multiplication, and thus it is safe to skip the calculation of the corresponding high-cost convolution in the original convolutional layer, 2) LCCL is very fast if it is implemented as a 1*1 convolution or only a single filter shared by all channels. Extensive experiments on the CIFAR-10, CIFAR-100 and ILSCRC-2012 benchmarks show that our proposed network structure can accelerate the inference process by 32 on average with negligible performance drop.",
"",
"Conventional deep convolutional neural networks (CNNs) apply convolution operators uniformly in space across all feature maps for hundreds of layers - this incurs a high computational cost for real-time applications. For many problems such as object detection and semantic segmentation, we are able to obtain a low-cost computation mask, either from a priori problem knowledge, or from a low-resolution segmentation network. We show that such computation masks can be used to reduce computation in the high-resolution main network. Variants of sparse activation CNNs have previously been explored on small-scale tasks and showed no degradation in terms of object classification accuracy, but often measured gains in terms of theoretical FLOPs without realizing a practical speedup when compared to highly optimized dense convolution implementations. In this work, we leverage the sparsity structure of computation masks and propose a novel tiling-based sparse convolution algorithm. We verified the effectiveness of our sparse CNN on LiDAR-based 3D object detection, and we report significant wall-clock speed-ups compared to dense convolution without noticeable loss of accuracy.",
"While deeper convolutional networks are needed to achieve maximum accuracy in visual perception tasks, for many inputs shallower networks are sufficient. We exploit this observation by learning to skip convolutional layers on a per-input basis. We introduce SkipNet, a modified residual network, that uses a gating network to selectively skip convolutional blocks based on the activations of the previous layer. We formulate the dynamic skipping problem in the context of sequential decision making and propose a hybrid learning algorithm that combines supervised learning and reinforcement learning to address the challenges of non-differentiable skipping decisions. We show SkipNet reduces computation by (30-90 ) while preserving the accuracy of the original model on four benchmark datasets and outperforms the state-of-the-art dynamic networks and static compression methods. We also qualitatively evaluate the gating policy to reveal a relationship between image scale and saliency and the number of layers skipped.",
"There is growing interest in improving the design of deep network architectures to be both accurate and low cost. This paper explores semantic specialization as a mechanism for improving the computational efficiency (accuracy-per-unit-cost) of inference in the context of image classification. Specifically, we propose a network architecture template called HydraNet, which enables state-of-the-art architectures for image classification to be transformed into dynamic architectures which exploit conditional execution for efficient inference. HydraNets are wide networks containing distinct components specialized to compute features for visually similar classes, but they retain efficiency by dynamically selecting only a small number of components to evaluate for any one input image. This design is made possible by a soft gating mechanism that encourages component specialization during training and accurately performs component selection during inference. We evaluate the HydraNet approach on both the CIFAR-100 and ImageNet classification tasks. On CIFAR, applying the HydraNet template to the ResNet and DenseNet family of models reduces inference cost by 2-4A— while retaining the accuracy of the baseline architectures. On ImageNet, applying the HydraNet template improves accuracy up to 2.5 when compared to an efficient baseline architecture with similar inference cost.",
"Deep neural networks are state of the art methods for many learning tasks due to their ability to extract increasingly better features at each network layer. However, the improved performance of additional layers in a deep network comes at the cost of added latency and energy usage in feedforward inference. As networks continue to get deeper and larger, these costs become more prohibitive for real-time and energy-sensitive applications. To address this issue, we present BranchyNet, a novel deep network architecture that is augmented with additional side branch classifiers. The architecture allows prediction results for a large portion of test samples to exit the network early via these branches when samples can already be inferred with high confidence. BranchyNet exploits the observation that features learned at an early layer of a network may often be sufficient for the classification of many data points. For more difficult samples, which are expected less frequently, BranchyNet will use further or all network layers to provide the best likelihood of correct prediction. We study the BranchyNet architecture using several well-known networks (LeNet, AlexNet, ResNet) and datasets (MNIST, CIFAR10) and show that it can both improve accuracy and significantly reduce the inference time of the network.",
"This paper proposes a deep learning architecture based on Residual Network that dynamically adjusts the number of executed layers for the regions of the image. This architecture is end-to-end trainable, deterministic and problem-agnostic. It is therefore applicable without any modifications to a wide range of computer vision problems such as image classification, object detection and image segmentation. We present experimental results showing that this model improves the computational efficiency of Residual Networks on the challenging ImageNet classification and COCO object detection datasets. Additionally, we evaluate the computation time maps on the visual saliency dataset cat2000 and find that they correlate surprisingly well with human eye fixation positions."
],
"cite_N": [
"@cite_30",
"@cite_4",
"@cite_22",
"@cite_7",
"@cite_28",
"@cite_9",
"@cite_21",
"@cite_3",
"@cite_24",
"@cite_27",
"@cite_2",
"@cite_5",
"@cite_31",
"@cite_20"
],
"mid": [
"2963518064",
"2962935523",
"2604231779",
"2884751099",
"2896006880",
"2752037867",
"2806990599",
"2604998962",
"2964062240",
"2963896595",
"2963393494",
"2798722023",
"2962677625",
"2562731582"
]
} | 0 |
||
1907.11857 | 2964959430 | With the emergence of diverse data collection techniques, objects in real applications can be represented by multi-modal features. Moreover, objects may have multiple semantic meanings. The Multi-modal Multi-label [1] (MMML) problem has therefore become a universal phenomenon. The quality of data collected from different channels is inconsistent, and some channels may not benefit prediction. In real life, not all the modalities are needed for prediction. As a result, we propose a novel instance-oriented Multi-modal Classifier Chains (MCC) algorithm for the MMML problem, which can make a confident prediction with only partial modalities. MCC extracts different modalities for different instances in the testing phase. Extensive experiments are performed on one real-world herbs dataset and two public datasets to validate our proposed algorithm; the results reveal that it may be better to extract many instead of all of the modalities at hand. | In this section, we briefly present state-of-the-art methods in the multi-modal and multi-label @cite_3 fields. Modality extraction in multi-modal learning is closely related to feature extraction @cite_17 . Therefore, we briefly review related work on these two aspects. | {
"abstract": [
"Multi-label learning studies the problem where each example is represented by a single instance while associated with a set of labels simultaneously. During the past decade, significant amount of progresses have been made towards this emerging machine learning paradigm. This paper aims to provide a timely review on this area with emphasis on state-of-the-art multi-label learning algorithms. Firstly, fundamentals on multi-label learning including formal definition and evaluation metrics are given. Secondly and primarily, eight representative multi-label learning algorithms are scrutinized under common notations with relevant analyses and discussions. Thirdly, several related learning settings are briefly summarized. As a conclusion, online resources and open research problems on multi-label learning are outlined for reference purposes.",
"Feature selection techniques have become an apparent need in many bioinformatics applications. In addition to the large pool of techniques that have already been developed in the machine learning and data mining fields, specific applications in bioinformatics have led to a wealth of newly proposed techniques. In this article, we make the interested reader aware of the possibilities of feature selection, providing a basic taxonomy of feature selection techniques, and discussing their use, variety and potential in a number of both common as well as upcoming bioinformatics applications. Contact: [email protected] Supplementary information: http: bioinformatics.psb.ugent.be supplementary_data yvsae fsreview"
],
"cite_N": [
"@cite_3",
"@cite_17"
],
"mid": [
"2114315281",
"2119387367"
]
} | MANY COULD BE BETTER THAN ALL: A NOVEL INSTANCE-ORIENTED ALGORITHM FOR MULTI-MODAL MULTI-LABEL PROBLEM | In many natural scenarios, objects can be complex, described by multi-modal features and carrying multiple semantic meanings simultaneously.
For one thing, data is collected from diverse channels and exhibits heterogeneous properties: each of these domains presents a different view of the same object, and each modality can have its own representation space and semantic meaning. Such data are known as multi-modal data. In a multi-modal setting, different modalities come with different extraction costs. Previous research, e.g., on dimensionality reduction methods, generally assumes that all the multi-modal features of test instances have already been extracted, without considering the extraction cost. In practical applications, however, no multi-modal features are prepared beforehand, so modality extraction has to be performed first in the testing phase. Given the complexity of multi-modal data collection nowadays, the heavy computational burden of feature extraction for different modalities has become the dominant factor that hurts efficiency.
For another, real-world objects might have multiple semantic meanings. To account for the multiple semantic meanings that one real-world object might have, one direct solution is to assign a set of proper labels to the object to explicitly express its semantics. In multi-label learning, each object is therefore associated with a set of labels instead of a single label. In previous research, the classifier chains algorithm is a high-order approach that considers the relationships among labels, but it is affected by the ordering in which the labels are predicted.
To address all the above challenges, this paper introduces a novel algorithm called Multi-modal Classifier Chains (MCC), inspired by Long Short-Term Memory (LSTM) [2] [3]. Information about previously selected modalities can be regarded as being stored in the memory cell. The deep-learning framework simultaneously generates the next modality of features to extract and conducts the classification from the raw input signals in a data-driven way, which avoids some biases from feature engineering and reduces the mismatch between feature extraction and the classifier. The main contributions are:
• We propose a novel MCC algorithm that considers not only the interrelations among different modalities but also the relationships among different labels.
• The MCC algorithm utilizes multi-modal information under a budget, which shows that MCC can make a confident prediction with a lower average modality extraction cost.
The remainder of this paper is organized as follows. Section 2 introduces related work. Section 3 presents the proposed MCC model. In section 4, empirical evaluations are given to show the superiority of MCC. Finally, section 5 presents conclusion and future work.
METHODOLOGY
This section first summarizes some formal symbols and definitions used throughout this paper, and then introduces the formulation of the proposed MCC model. An overview of our MCC algorithm is shown in Fig.1
Notation
In the following, a bold character denotes a vector (e.g., $X$). The task of this paper is to learn a function $h: \mathcal{X} \rightarrow 2^{\mathcal{Y}}$ from a training dataset with $N$ data samples $D = \{(X_i, Y_i)\}_{i=1}^{N}$. The $i$-th instance $(X_i, Y_i)$ contains a feature vector $X_i \in \mathcal{X}$ and a label vector $Y_i \in \mathcal{Y}$. $X_i = [X_i^1, X_i^2, \ldots, X_i^P] \in \mathbb{R}^{d_1 + d_2 + \cdots + d_P}$ is a combination of all modalities, where $d_m$ is the dimensionality of the features in the $m$-th modality. $Y_i = [y_i^1, y_i^2, \ldots, y_i^L] \in \{-1, 1\}^L$ denotes the label vector of $X_i$. $P$ is the number of modalities and $L$ is the number of labels.
Moreover, we define $c = \{c_1, c_2, \ldots, c_P\}$ to represent the extraction costs of the $P$ modalities. The modality extraction sequence of $X_i$ is denoted as $S_i = \{S_i^1, S_i^2, \ldots, S_i^m\}$, $m \in \{1, 2, \ldots, P\}$, $m \leq P$, where $S_i^m \in \{1, 2, \ldots, P\}$ represents the $m$-th modality of features to extract for $X_i$ and satisfies the condition $\forall m, n \, (m \neq n) \in \{1, 2, \ldots, P\}: S_i^m \neq S_i^n$. It is noteworthy that different instances not only correspond to different extraction sequences but may also have extraction sequences of different lengths. Furthermore, we define some notation used in the testing phase. Suppose there is a testing dataset with $M$ data samples $T = \{(X_i, Y_i)\}_{i=1}^{M}$. We denote the predicted labels of $T$ as $Z = \{Z_i\}_{i=1}^{M}$, in which $Z_i = (z_i^1, z_i^2, \ldots, z_i^L)$ represents all predicted labels of $X_i$ in $T$, and $Z^j = (z_1^j, z_2^j, \ldots, z_M^j)^T$ represents the $j$-th predicted label over the whole testing dataset.
MCC algorithm
On one hand, MMML is related to multi-label learning, and here we extend Classifier Chains to deal with it. On the other hand, each binary classification problem in Classifier Chains can be transformed into a multi-modal problem, and this procedure aims at making a confident prediction with a lower average modality extraction cost.
Classifier Chains
Considering the correlations among labels, we extend Classifier Chains to deal with the Multi-modal Multi-label problem. The Classifier Chains algorithm transforms the multi-label learning problem into a chain of binary classification problems, where each subsequent binary classifier in the chain is built upon the predictions of the preceding ones [4], thereby taking the full correlation among labels into account. The greatest challenge for CC is how to form the chain order $\tau$. In this paper, we propose a heuristic Gini-index-based Classifier Chains algorithm to specify $\tau$.
First of all, we split the multi-label dataset into several single-label datasets, i.e., for the $j$-th label in $\{y^1, y^2, \ldots, y^L\}$, we rebuild the dataset $D_j = \{(X_i, y_i^j)\}_{i=1}^{N}$ as the $j$-th single-label dataset. Secondly, we calculate the Gini index [17] of each rebuilt single-label dataset $D_j$, $(j = 1, 2, \ldots, L)$:
$\mathrm{Gini}(D_j) = \sum_{k=1}^{|\mathcal{Y}|} \sum_{k' \neq k} p_k p_{k'} = 1 - \sum_{k=1}^{|\mathcal{Y}|} p_k^2$   (1)
where $p_k$ represents the probability of a randomly chosen sample belonging to the $k$-th class, so that $\sum_{k' \neq k} p_k p_{k'}$ is the probability of randomly choosing two samples with different labels, and $|\mathcal{Y}|$ represents the number of classes in $D_j$.
We then obtain the predicted label chain $\tau = \{\tau_i\}_{i=1}^{L}$, composed of the indexes of $\{\mathrm{Gini}(D_i)\}_{i=1}^{L}$ sorted in descending order. For the $L$ class labels $\{y^1, y^2, \ldots, y^L\}$, we split the label set one by one according to $\tau$ and then train $L$ binary classifiers.
For the $j$-th label $y^{\tau_j}$, $(j = 1, 2, \ldots, L)$ in the ordered list $\tau$, a corresponding binary training dataset is constructed by appending the set of labels preceding $y_i^{\tau_j}$ to each instance $X_i$:
$D_{\tau_j} = \{([X_i, xd_i^{\tau_j}], y_i^{\tau_j})\}_{i=1}^{N}$   (2)
where $xd_i^{\tau_j} = (y_i^{\tau_1}, \ldots, y_i^{\tau_{j-1}})$ represents the assignment of the labels preceding $y^{\tau_j}$ on $X_i$ (specifically $xd_i^{\tau_1} = \emptyset$). Meanwhile, a corresponding binary testing dataset is constructed by appending to each instance its relevance to the labels preceding $y^{\tau_j}$:
$T_{\tau_j} = \{([X_i, xt_i^{\tau_j}], y_i^{\tau_j})\}_{i=1}^{M}$   (3)
where $xt_i^{\tau_j} = (z_i^{\tau_1}, \ldots, z_i^{\tau_{j-1}})$ represents the binary assignment of the labels preceding $z_i^{\tau_j}$ on $X_i$ (specifically $xt_i^{\tau_1} = \emptyset$) and $[X_i, xt_i^{\tau_j}]$ represents the concatenation of the vectors $X_i$ and $xt_i^{\tau_j}$. We denote by $c_l$ the extraction cost of $xt_i^{\tau_j}$, which is the same as the extraction cost of $xd_i^{\tau_j}$. If $j > 1$, each instance in $T_{\tau_j}$ is composed of $P + 1$ modalities of features and one label $y_i^{\tau_j}$. After that, we propose an efficient Multi-modal Classifier Chains (MCC) algorithm, which will be introduced in the following paragraph. By passing the training dataset $D_{\tau_j}$ together with the extraction costs $c$ as parameters to MCC, we obtain $Z^{\tau_j}$. The final predicted labels of $T$ are the concatenation of the $Z^{\tau_j}$, $(j = 1, 2, \ldots, L)$, i.e., $Z = (Z^{\tau_1}, Z^{\tau_2}, \ldots, Z^{\tau_L})$.
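To make the chain construction above concrete, the following is a minimal NumPy sketch of the Gini-based ordering (Eq. 1) and the chain-style feature augmentation (Eqs. 2-3). The function names and the 0/1 label encoding are our own illustrative assumptions, not taken from the paper.
```python
import numpy as np

def gini(labels):
    # Gini index of a single-label dataset (Eq. 1): 1 - sum_k p_k^2
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def chain_order(Y):
    # Order the labels by the Gini index of their single-label datasets, descending
    scores = np.array([gini(Y[:, j]) for j in range(Y.shape[1])])
    return np.argsort(-scores)

def chain_augment(X, preceding_labels):
    # Append the labels preceding the current chain position as an extra "modality"
    if preceding_labels is None or preceding_labels.size == 0:
        return X
    return np.hstack([X, preceding_labels])
```
During training, preceding_labels would hold the true labels of the earlier chain positions (Eq. 2); during testing, it would hold the predictions already produced along the chain (Eq. 3).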
Multi-modal Classifier Chains
In order to induce a binary classifier $f^l: \mathcal{X} \rightarrow \{-1, 1\}$ with a lower average modality extraction cost and better performance, we design the Multi-modal Classifier Chains (MCC) algorithm, which is inspired by LSTM. MCC extracts modalities of features one by one until it is able to make a confident prediction. The MCC algorithm extracts modality sequences of different lengths for different instances, whereas previous feature extraction methods extract all modalities of features and use the same features for all instances.
MCC adopts an LSTM network to convert the variable $X_i \in \mathcal{X}$ into a set of hidden representations $H_i^t = [h_i^1, h_i^2, \ldots, h_i^t]$, $h_i^t \in \mathbb{R}^h$. Here, $\tilde{X}_i^{S_i^t} = [\tilde{X}_i^1, \ldots, \tilde{X}_i^m, \ldots, \tilde{X}_i^P]$ is an adaptation of $X_i$. In the $t$-th step, the modality to be extracted is denoted as $S_i^t$: if $m = S_i^t$, then $\tilde{X}_i^m = X_i^{S_i^t}$, and $\tilde{X}_i^m = 0$ otherwise. For example, if $S_i^t = 3$, then $\tilde{X}_i^{S_i^t} = [0, 0, X_i^3, \ldots, 0]$. Similar to a peephole LSTM, MCC has three gates as well as two states: the forget gate layer, input gate layer, cell state layer, output gate layer, and hidden state layer, computed as follows:
$f_t = \sigma([W_{fc}, W_{fh}, W_{fx}][C_{t-1}, h_{t-1}, \tilde{X}_t]^T + b_f)$
$i_t = \sigma([W_{ic}, W_{ih}, W_{ix}][C_{t-1}, h_{t-1}, \tilde{X}_t]^T + b_i)$
$C_t = f_t \cdot C_{t-1} + i_t \cdot \tanh([W_{ch}, W_{cx}][h_{t-1}, \tilde{X}_t]^T + b_C)$
$o_t = \sigma([W_{oc}, W_{oh}, W_{ox}][C_t, h_{t-1}, \tilde{X}_t]^T + b_o)$
$h_t = o_t \cdot \tanh(C_t)$
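As a rough illustration of the gate equations above, here is a minimal NumPy sketch of one MCC cell update. The stacked matrices W_f, W_i, W_c, W_o stand for the concatenated blocks [W_fc, W_fh, W_fx] and so on; the parameter dictionary and its key names are our own assumptions, not the paper's implementation.
```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mcc_cell_step(x_masked, h_prev, c_prev, params):
    # x_masked: adapted input X~_t with only the newly extracted modality non-zero
    z_all = np.concatenate([c_prev, h_prev, x_masked])        # [C_{t-1}, h_{t-1}, X~_t]
    f = sigmoid(params["W_f"] @ z_all + params["b_f"])         # forget gate
    i = sigmoid(params["W_i"] @ z_all + params["b_i"])         # input gate
    g = np.tanh(params["W_c"] @ np.concatenate([h_prev, x_masked]) + params["b_c"])
    c = f * c_prev + i * g                                     # new cell state C_t
    o = sigmoid(params["W_o"] @ np.concatenate([c, h_prev, x_masked]) + params["b_o"])
    h = o * np.tanh(c)                                         # new hidden state h_t
    return h, c
```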
Different from a standard LSTM, MCC adds two fully connected layers to predict the current label and the next modality to be extracted. For one thing, there is a full connection between the hidden layer and the label prediction layer, with weight vector $\hat{W}_l$. For another, there is a full connection between the hidden layer and the modality prediction layer, with weight vector $\hat{W}_m$. The corresponding bias vectors are denoted as $b_l$ and $b_m$, respectively.
• Label prediction layer: this layer predicts the current label according to a nonlinear softmax function $f_j^l(\cdot)$:
$f_j^l(H_i^t) = \sigma(H_i^t \hat{W}_l + b_l)$   (4)
• Modality prediction layer: this layer scores the candidate modalities according to a linear function $f_j^m(\cdot)$ and selects the maximum as the next modality to be extracted:
$f_j^m(H_i^t) = H_i^t \hat{W}_m + b_m$   (5)
We use $FL = [f_1^l, f_2^l, \ldots, f_L^l]$ and $FM = [f_1^m, f_2^m, \ldots, f_L^m]$ to denote the label prediction function set and the modality prediction function set, respectively.
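A small sketch of the two prediction heads (Eqs. 4-5) on top of the hidden state. For the binary label of the current chain position we use a sigmoid as the σ of Eq. 4, which is an assumption on our part; the argument names are illustrative.
```python
import numpy as np

def predict_heads(h_t, W_l, b_l, W_m, b_m):
    # Eq. 4: score of the current (binary) label, squashed by a sigmoid
    label_prob = 1.0 / (1.0 + np.exp(-(h_t @ W_l + b_l)))
    # Eq. 5: one linear score per candidate modality; the argmax gives the next modality
    modality_scores = h_t @ W_m + b_m
    return label_prob, modality_scores
```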
Next, we design a loss function composed of a loss term and a regularization term to obtain good solutions faster. First of all, the loss of instance $\tilde{X}_i$ with modality $S_i^t$ is defined as
$L_i^t = L_l(f_j^l(H_i^t), y_i) + L_m(f_j^m(H_i^t), \tilde{X}_i^t)$   (6)
Here we adopt the log loss for the label prediction loss $L_l$ and the hinge loss for the modality prediction loss $L_m$, where the modality prediction target is measured by the distances to the K Nearest Neighbors [18]. Meanwhile, we add a ridge-regression (L2-norm) term to the overall loss function:
$\Omega_i^t = \lVert \hat{W}_m \rVert^2 + \lVert \hat{W}_l \rVert^2 + \lVert c \cdot f_j^m(H_i^t) \rVert$   (7)
where $\lVert \cdot \rVert$ represents the L2 norm and $c$ represents the extraction cost of each modality. The loss term is the sum of the losses over all instances at the $t$-th step. The overall loss function is
$L_t = \sum_{i=1}^{N} (L_i^t + \lambda \cdot \Omega_i^t)$   (8)
where $\lambda = 0.1$ is the trade-off between the loss and the regularization. In order to optimize the aforementioned loss function $L_t$, we adopt AdaDelta [19], a per-dimension learning rate method for gradient descent. Here, we denote all the parameters in Eq. 8 as $W = [\hat{W}_m, \hat{W}_l, \lambda]$.
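The following is an illustrative NumPy version of the per-instance loss (Eq. 6) with the regularizer of Eq. 7. The paper defines the modality target via KNN distances; here we simplify it to a single target modality index, so the hinge term and all argument names are our own assumptions.
```python
import numpy as np

def instance_loss(label_prob, y_true, modality_scores, target_modality,
                  W_l, W_m, cost, lam=0.1):
    eps = 1e-12
    # log loss for the label head (y_true encoded as 0/1 here)
    loss_label = -(y_true * np.log(label_prob + eps)
                   + (1 - y_true) * np.log(1.0 - label_prob + eps))
    # hinge loss for the modality head on the (simplified) target modality
    loss_modality = max(0.0, 1.0 - modality_scores[target_modality])
    # cost-aware L2 regularizer (Eq. 7)
    omega = (np.sum(W_l ** 2) + np.sum(W_m ** 2)
             + np.linalg.norm(cost * modality_scores))
    return loss_label + loss_modality + lam * omega
```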
At the $t$-th step, we start by computing the gradient $g_t = \partial L_t / \partial W_t$ and accumulating a decaying average of the squared gradients:
$E[g^2]_t = \rho E[g^2]_{t-1} + (1 - \rho) g_t^2$   (9)
where $\rho$ is a decay constant and $\rho = 0.95$. The resulting parameter update is then
$\Delta W_t = - \frac{\sqrt{E[(\Delta W)^2]_{t-1} + \epsilon}}{\sqrt{E[g^2]_t + \epsilon}} \, g_t$   (10)
where $\epsilon$ is a constant and $\epsilon = 10^{-8}$.
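A compact sketch of the AdaDelta update, following Zeiler's formulation of Eqs. 9-10 together with the squared-update accumulation given as Eq. 11 just below; the class and attribute names are ours, not the paper's code.
```python
import numpy as np

class AdaDelta:
    def __init__(self, shape, rho=0.95, eps=1e-8):
        self.rho, self.eps = rho, eps
        self.Eg2 = np.zeros(shape)    # running E[g^2]
        self.Edw2 = np.zeros(shape)   # running E[(delta W)^2]

    def step(self, grad):
        # Eq. 9: decaying average of squared gradients
        self.Eg2 = self.rho * self.Eg2 + (1.0 - self.rho) * grad ** 2
        # Eq. 10: per-dimension update
        delta = -np.sqrt(self.Edw2 + self.eps) / np.sqrt(self.Eg2 + self.eps) * grad
        # Eq. 11: decaying average of squared updates
        self.Edw2 = self.rho * self.Edw2 + (1.0 - self.rho) * delta ** 2
        return delta  # the caller adds this to the parameters
```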
And then we accumulate the squared updates:
$E[\Delta W^2]_t = \rho E[\Delta W^2]_{t-1} + (1 - \rho) \Delta W_t^2$   (11)
Algorithm 1 The pseudo-code of the MCC algorithm
Input:
$D = \{(X_i, Y_i)\}_{i=1}^{N}$: training dataset; $c = \{c_i\}_{i=1}^{P}$: extraction costs of the $P$ modalities;
Output:
$FL$: set of label prediction functions; $FM$: set of modality prediction functions
1: Calculate the predicted label chain $\tau = \{\tau_i\}_{i=1}^{L}$ with Eq. 1
2: for $j$ in $\tau$ do
3:   Construct $D_{\tau_j}$ with Eq. 2
4:   while cnt < $N_{iter}$, cnt++ do
     ...
     Calculate $L_t$ with Eq. 8
15:  Compute the gradient $g_t = \partial L_t / \partial W_t$
16:  Accumulate ...
     ...
     Update $f_j^l$ and $f_j^m$ as in Eq. 4 and Eq. 5
24: end for
25: return $FL$, $FM$;
The pseudo-code of MCC is summarized in Algorithm 1. $N_b$ denotes the batch size of the training phase, $N_{iter}$ the maximum number of iterations, $C_{th}$ the threshold on the cost, and $A_{th}$ the threshold on the accuracy of the predicted label. $\hat{c}_i^t$ denotes the sum of the extraction costs so far and $a_i^t$ the accuracy of the current predicted label.
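To show how the pieces above fit together at test time, here is an illustrative inference loop for a single instance and a single chain position, reusing mcc_cell_step and predict_heads from the earlier sketches. The stopping rule, the starting modality, and all parameter names are assumptions; the paper's Algorithm 1 additionally uses the accuracy threshold A_th during training, which is omitted here.
```python
import numpy as np
# assumes mcc_cell_step and predict_heads from the sketches above are in scope

def mcc_infer_instance(modalities, costs, params, cost_threshold, max_steps=10):
    # modalities: list of P feature vectors; a modality is only "paid for" once extracted
    dims = [m.shape[0] for m in modalities]
    h = np.zeros(params["hidden_size"])
    c = np.zeros(params["hidden_size"])
    spent, extracted = 0.0, set()
    label_prob, next_mod = 0.5, 0                    # start from modality 0 (assumption)
    for _ in range(max_steps):
        if next_mod in extracted or spent + costs[next_mod] > cost_threshold:
            break
        extracted.add(next_mod)
        spent += costs[next_mod]
        x_masked = np.concatenate([modalities[m] if m == next_mod else np.zeros(d)
                                   for m, d in enumerate(dims)])
        h, c = mcc_cell_step(x_masked, h, c, params)
        label_prob, scores = predict_heads(h, params["W_l"], params["b_l"],
                                           params["W_m"], params["b_m"])
        next_mod = int(np.argmax(scores))
    return (1 if label_prob > 0.5 else -1), spent
```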
EXPERIMENT
Dataset Description
We manually collect one real-world Herbs dataset and adapt two publicly available datasets including Emotions [20] and Scene [6]. As for Herbs, there are 5 modalities with explicit modal partitions: channel tropism, symptom, function, dosage and flavor. As for Emotions and Scene, we divide the features into different modalities according to information entropy gain. The details are summarized in Table 1.
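The paper does not spell out how the Emotions and Scene features are divided by information entropy gain, so the following is only a plausible sketch: rank the feature columns by mutual information with a reference label column and split the ranking into equally sized groups. The helper name, the choice of reference label, and the equal-size split are all assumptions.
```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def split_into_modalities(X, y_ref, n_modalities):
    # y_ref: a single reference label column (e.g., the first label)
    gain = mutual_info_classif(X, y_ref)
    ranked = np.argsort(-gain)                       # most informative features first
    groups = np.array_split(ranked, n_modalities)
    return [X[:, idx] for idx in groups]             # per-modality feature blocks
```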
Experimental Settings
All the experiments are run on a machine with a 3.2 GHz Intel Core i7 processor and 64 GB main memory. We compare MCC with four multi-label algorithms, BR, CC, ECC, and MLKNN [21], and one state-of-the-art multi-modal algorithm, DMP [15]. For the multi-label learners, all modalities of a dataset are concatenated together as a single-modal input. For the multi-modal method, we treat each label independently.
F-measure is one of the most popular metrics for evaluation of binary classification [22]. To have a fair comparison, we employ three widely adopted standard metrics, i.e., Micro-average, Hamming-Loss, Subset-Accuracy [4]. In addition, we use Cost-average to measure the average modality extraction cost. For the sake of convenience in the regularization function computation, extraction cost of each modality is set to 1.0 in the experiment. Furthermore, we set the cost of new modality (predicted labels) to 0.1 to demonstrate its superiority compared with DMP.
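For reference, minimal NumPy versions of the three reported metrics are sketched below, assuming the label matrices are encoded as 0/1 (the paper uses {-1, 1}; mapping between the two is straightforward). The function names are ours.
```python
import numpy as np

def hamming_loss(Y_true, Y_pred):
    # fraction of label entries that disagree
    return float(np.mean(Y_true != Y_pred))

def subset_accuracy(Y_true, Y_pred):
    # fraction of instances whose full label vector is predicted exactly
    return float(np.mean(np.all(Y_true == Y_pred, axis=1)))

def micro_f1(Y_true, Y_pred):
    tp = np.sum((Y_true == 1) & (Y_pred == 1))
    fp = np.sum((Y_true == 0) & (Y_pred == 1))
    fn = np.sum((Y_true == 1) & (Y_pred == 0))
    return 2.0 * tp / (2.0 * tp + fp + fn) if (2 * tp + fp + fn) > 0 else 0.0
```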
Experimental Results
For all these algorithms, we report the best results of the optimal parameters in terms of classification performance. Meanwhile, we perform 10-fold cross validation (CV) and take the average value of the results in the end.
For one thing, Table 2 shows the experimental results of our proposed MCC algorithm as well as the other five comparison algorithms; MCC outperforms the other five algorithms on all metrics. For another, as shown in Table 3, MCC uses a lower average modality extraction cost than DMP, while the other four multi-label algorithms use all the modalities.
CONCLUSION
Complex objects, e.g., articles and images, can always be represented with multi-modal and multi-label information. However, the quality of the modalities extracted from various channels is inconsistent, and using data from all modalities is not a wise decision. In this paper, we propose a novel Multi-modal Classifier Chains (MCC) algorithm to improve supplement categorization prediction for the MMML problem. Experiments on one real-world dataset and two public datasets validate the effectiveness of our algorithm. MCC makes good use of the modalities and can make a confident prediction with many instead of all of them. Consequently, MCC reduces the modality extraction cost, but it has the limitation of being more time-consuming than other algorithms. In future work, improving extraction parallelism is an interesting direction. | 2,960
1907.11857 | 2964959430 | With the emergence of diverse data collection techniques, objects in real applications can be represented by multi-modal features. Moreover, objects may have multiple semantic meanings. The Multi-modal Multi-label [1] (MMML) problem has therefore become a universal phenomenon. The quality of data collected from different channels is inconsistent, and some channels may not benefit prediction. In real life, not all the modalities are needed for prediction. As a result, we propose a novel instance-oriented Multi-modal Classifier Chains (MCC) algorithm for the MMML problem, which can make a confident prediction with only partial modalities. MCC extracts different modalities for different instances in the testing phase. Extensive experiments are performed on one real-world herbs dataset and two public datasets to validate our proposed algorithm; the results reveal that it may be better to extract many instead of all of the modalities at hand. | Multi-label learning is a fundamental problem in machine learning with a wide range of applications. In multi-label learning, each instance is associated with multiple interdependent labels. The Binary Relevance (BR) @cite_11 algorithm is the simplest and most efficient solution among multi-label algorithms. However, the effectiveness of the resulting approaches might be suboptimal due to the ignorance of label correlations. To tackle this problem, Classifier Chains (CC) @cite_6 was proposed as a high-order approach to consider correlations between labels. Obviously, the performance of CC is seriously affected by the training order of the labels. To account for the effect of the ordering, Ensembles of Classifier Chains (ECC) @cite_6 is an ensemble framework over CC, which can be built with @math random permutations instead of inducing a single classifier chain. Entropy Chain Classifier (ETCC) @cite_1 extends CC by calculating the contribution between two labels using information entropy theory, while Latent Dirichlet Allocation Multi-Label (LDAML) @cite_14 exploits global correlations among labels. LDAML mainly addresses the problem that a large portion of instances carry only a single label in some special multi-label datasets. Due to the high dimensionality of the data, dimensionality reduction @cite_13 or feature extraction should also be taken into consideration. | {
"abstract": [
"Parkinson's disease is a debilitating and chronic disease of the nervous system. Traditional Chinese Medicine (TCM) is a new way for diagnosing Parkinson, and the data of Chinese Medicine for diagnosing Parkinson is a multi-label data set. Considering that the symptoms as the labels in Parkinson data set always have correlations with each other, we can facilitate the multi-label learning process by exploiting label correlations. Current multi-label classification methods mainly try to exploit the correlations from label pairwise or label chain. In this paper, we propose a simple and efficient framework for multi-label classification called Latent Dirichlet Allocation Multi-Label (LDAML), which aims at leaning the global correlations by using the topic model on the class labels. Briefly, we try to obtain the abstract “topics” on the label set by topic model, which can exploit the global correlations among the labels. Extensive experiments clearly validate that the proposed approach is a general and effective framework which can improve most of the multi-label algorithms' performance. Based on the framework, we achieve satisfying experimental results on TCM Parkinson data set which can provide a reference and help for the development of this field.",
"Parkinson disease is a chronic, degenerative disease of the central nervous system, which commonly occurs in the elderly. Until now, no treatment has shown efficacy. Traditional Chinese Medicine is a new way for Parkinson, and the data of Chinese Medicine for Parkinson is a multi-label dataset. Classifier Chains(CC) is a popular multi-label classification algorithm, this algorithm considers the relativity between labels, and contains the high efficiency of Binary classification algorithm at the same time. But CC algorithm does not indicate how to obtain the predicted order chain actually, while more emphasizes the randomness or artificially specified. In this paper, we try to apply Multi-label classification technology to build a model of Chinese Medicine for Parkinson, which we hope to improve this field. We propose a new algorithm ETCC based on CC model. This algorithm can optimize the order chain on global perspective and have a better result than the algorithm CC.",
"The widely known binary relevance method for multi-label classification, which considers each label as an independent binary problem, has often been overlooked in the literature due to the perceived inadequacy of not directly modelling label correlations. Most current methods invest considerable complexity to model interdependencies between labels. This paper shows that binary relevance-based methods have much to offer, and that high predictive performance can be obtained without impeding scalability to large datasets. We exemplify this with a novel classifier chains method that can model label correlations while maintaining acceptable computational complexity. We extend this approach further in an ensemble framework. An extensive empirical evaluation covers a broad range of multi-label datasets with a variety of evaluation metrics. The results illustrate the competitiveness of the chaining method against related and state-of-the-art methods, both in terms of predictive performance and time complexity.",
"Many areas of science depend on exploratory data analysis and visualization. The need to analyze large amounts of multivariate data raises the fundamental problem of dimensionality reduction: how to discover compact representations of high-dimensional data. Here, we introduce locally linear embedding (LLE), an unsupervised learning algorithm that computes low-dimensional, neighborhood-preserving embeddings of high-dimensional inputs. Unlike clustering methods for local dimensionality reduction, LLE maps its inputs into a single global coordinate system of lower dimensionality, and its optimizations do not involve local minima. By exploiting the local symmetries of linear reconstructions, LLE is able to learn the global structure of nonlinear manifolds, such as those generated by images of faces or documents of text. How do we judge similarity? Our mental representations of the world are formed by processing large numbers of sensory in",
"In classic pattern recognition problems, classes are mutually exclusive by definition. Classification errors occur when the classes overlap in the feature space. We examine a different situation, occurring when the classes are, by definition, not mutually exclusive. Such problems arise in semantic scene and document classification and in medical diagnosis. We present a framework to handle such problems and apply it to the problem of semantic scene classification, where a natural scene may contain multiple objects such that the scene can be described by multiple class labels (e.g., a field scene with a mountain in the background). Such a problem poses challenges to the classic pattern recognition paradigm and demands a different treatment. We discuss approaches for training and testing in this scenario and introduce new metrics for evaluating individual examples, class recall and precision, and overall accuracy. Experiments show that our methods are suitable for scene classification; furthermore, our work appears to generalize to other classification problems of the same nature."
],
"cite_N": [
"@cite_14",
"@cite_1",
"@cite_6",
"@cite_13",
"@cite_11"
],
"mid": [
"2774730889",
"2205156083",
"1999954155",
"2053186076",
"2156935079"
]
} | MANY COULD BE BETTER THAN ALL: A NOVEL INSTANCE-ORIENTED ALGORITHM FOR MULTI-MODAL MULTI-LABEL PROBLEM | In many natural scenarios, objects can be complex, described by multi-modal features and carrying multiple semantic meanings simultaneously.
For one thing, data is collected from diverse channels and exhibits heterogeneous properties: each of these domains presents a different view of the same object, and each modality can have its own representation space and semantic meaning. Such data are known as multi-modal data. In a multi-modal setting, different modalities come with different extraction costs. Previous research, e.g., on dimensionality reduction methods, generally assumes that all the multi-modal features of test instances have already been extracted, without considering the extraction cost. In practical applications, however, no multi-modal features are prepared beforehand, so modality extraction has to be performed first in the testing phase. Given the complexity of multi-modal data collection nowadays, the heavy computational burden of feature extraction for different modalities has become the dominant factor that hurts efficiency.
For another, real-world objects might have multiple semantic meanings. To account for the multiple semantic meanings that one real-world object might have, one direct solution is to assign a set of proper labels to the object to explicitly express its semantics. In multi-label learning, each object is associated with a set of labels instead of a single label. Previous researches, i.e., classifier chains algorithm is a high-order approach considering the relationship among labels, but it is affected by the ordering specified by predicted labels.
To address all the above challenges, this paper introduces a novel algorithm called Multi-modal Classifier Chains (MCC) inspired by Long Short-Term Memory (LSTM) [2] [3]. Information of previous selected modalities can be considered as storing in memory cell. The deep-learning framework simultaneously generates next modality of features and conducts the classification according to the input raw signals in a data-driven way, which could avoid some biases from feature engineering and reduce the mismatch between feature extraction and classifier. The main contributions are:
• We propose a novel MCC algorithm considering not only interrelation among different modalities, but also relationship among different labels.
• The MCC algorithm utilizes multi-modal information under a budget, which shows that MCC can make a confident prediction with a lower average modality extraction cost.
The remainder of this paper is organized as follows. Section 2 introduces related work. Section 3 presents the proposed MCC model. In section 4, empirical evaluations are given to show the superiority of MCC. Finally, section 5 presents conclusion and future work.
METHODOLOGY
This section first summarizes some formal symbols and definitions used throughout this paper, and then introduces the formulation of the proposed MCC model. An overview of our MCC algorithm is shown in Fig.1
Notation
In the following, a bold character denotes a vector (e.g., $X$). The task of this paper is to learn a function $h: \mathcal{X} \rightarrow 2^{\mathcal{Y}}$ from a training dataset with $N$ data samples $D = \{(X_i, Y_i)\}_{i=1}^{N}$. The $i$-th instance $(X_i, Y_i)$ contains a feature vector $X_i \in \mathcal{X}$ and a label vector $Y_i \in \mathcal{Y}$. $X_i = [X_i^1, X_i^2, \ldots, X_i^P] \in \mathbb{R}^{d_1 + d_2 + \cdots + d_P}$ is a combination of all modalities, where $d_m$ is the dimensionality of the features in the $m$-th modality. $Y_i = [y_i^1, y_i^2, \ldots, y_i^L] \in \{-1, 1\}^L$ denotes the label vector of $X_i$. $P$ is the number of modalities and $L$ is the number of labels.
Moreover, we define $c = \{c_1, c_2, \ldots, c_P\}$ to represent the extraction costs of the $P$ modalities. The modality extraction sequence of $X_i$ is denoted as $S_i = \{S_i^1, S_i^2, \ldots, S_i^m\}$, $m \in \{1, 2, \ldots, P\}$, $m \leq P$, where $S_i^m \in \{1, 2, \ldots, P\}$ represents the $m$-th modality of features to extract for $X_i$ and satisfies the condition $\forall m, n \, (m \neq n) \in \{1, 2, \ldots, P\}: S_i^m \neq S_i^n$. It is noteworthy that different instances not only correspond to different extraction sequences but may also have extraction sequences of different lengths. Furthermore, we define some notation used in the testing phase. Suppose there is a testing dataset with $M$ data samples $T = \{(X_i, Y_i)\}_{i=1}^{M}$. We denote the predicted labels of $T$ as $Z = \{Z_i\}_{i=1}^{M}$, in which $Z_i = (z_i^1, z_i^2, \ldots, z_i^L)$ represents all predicted labels of $X_i$ in $T$, and $Z^j = (z_1^j, z_2^j, \ldots, z_M^j)^T$ represents the $j$-th predicted label over the whole testing dataset.
MCC algorithm
On one hand, MMML is related to multi-label learning, and here we extend Classifier Chains to deal with it. On the other hand, each binary classification problem in Classifier Chains can be transformed into a multi-modal problem, and this procedure aims at making a confident prediction with a lower average modality extraction cost.
Classifier Chains
Considering correlation among labels, we extend Classifier Chains to deal with Multi-modal and Multi-label problem. Classifier Chains algorithm transforms the multi-label learning problem into a chain of binary classification problems, where subsequent binary classifiers in the chain is built upon the predictions of preceding ones [4], thus to consider the full relativity of the label hereby. The greatest challenge to CC is how to form a recurrence relation chain τ . In this paper, we propose a heuristic Gini index based Classifier Chains algorithm to specify τ .
First of all, we split the multi-label dataset into several single-label datasets, i.e., for the $j$-th label in $\{y^1, y^2, \ldots, y^L\}$, we rebuild the dataset $D_j = \{(X_i, y_i^j)\}_{i=1}^{N}$ as the $j$-th single-label dataset. Secondly, we calculate the Gini index [17] of each rebuilt single-label dataset $D_j$, $(j = 1, 2, \ldots, L)$:
$\mathrm{Gini}(D_j) = \sum_{k=1}^{|\mathcal{Y}|} \sum_{k' \neq k} p_k p_{k'} = 1 - \sum_{k=1}^{|\mathcal{Y}|} p_k^2$   (1)
where $p_k$ represents the probability of a randomly chosen sample belonging to the $k$-th class, so that $\sum_{k' \neq k} p_k p_{k'}$ is the probability of randomly choosing two samples with different labels, and $|\mathcal{Y}|$ represents the number of classes in $D_j$.
We then obtain the predicted label chain $\tau = \{\tau_i\}_{i=1}^{L}$, composed of the indexes of $\{\mathrm{Gini}(D_i)\}_{i=1}^{L}$ sorted in descending order. For the $L$ class labels $\{y^1, y^2, \ldots, y^L\}$, we split the label set one by one according to $\tau$ and then train $L$ binary classifiers.
For the $j$-th label $y^{\tau_j}$, $(j = 1, 2, \ldots, L)$ in the ordered list $\tau$, a corresponding binary training dataset is constructed by appending the set of labels preceding $y_i^{\tau_j}$ to each instance $X_i$:
$D_{\tau_j} = \{([X_i, xd_i^{\tau_j}], y_i^{\tau_j})\}_{i=1}^{N}$   (2)
where $xd_i^{\tau_j} = (y_i^{\tau_1}, \ldots, y_i^{\tau_{j-1}})$ represents the assignment of the labels preceding $y^{\tau_j}$ on $X_i$ (specifically $xd_i^{\tau_1} = \emptyset$). Meanwhile, a corresponding binary testing dataset is constructed by appending to each instance its relevance to the labels preceding $y^{\tau_j}$:
$T_{\tau_j} = \{([X_i, xt_i^{\tau_j}], y_i^{\tau_j})\}_{i=1}^{M}$   (3)
where $xt_i^{\tau_j} = (z_i^{\tau_1}, \ldots, z_i^{\tau_{j-1}})$ represents the binary assignment of the labels preceding $z_i^{\tau_j}$ on $X_i$ (specifically $xt_i^{\tau_1} = \emptyset$) and $[X_i, xt_i^{\tau_j}]$ represents the concatenation of the vectors $X_i$ and $xt_i^{\tau_j}$. We denote by $c_l$ the extraction cost of $xt_i^{\tau_j}$, which is the same as the extraction cost of $xd_i^{\tau_j}$. If $j > 1$, each instance in $T_{\tau_j}$ is composed of $P + 1$ modalities of features and one label $y_i^{\tau_j}$. After that, we propose an efficient Multi-modal Classifier Chains (MCC) algorithm, which will be introduced in the following paragraph. By passing the training dataset $D_{\tau_j}$ together with the extraction costs $c$ as parameters to MCC, we obtain $Z^{\tau_j}$. The final predicted labels of $T$ are the concatenation of the $Z^{\tau_j}$, $(j = 1, 2, \ldots, L)$, i.e., $Z = (Z^{\tau_1}, Z^{\tau_2}, \ldots, Z^{\tau_L})$.
Multi-modal Classifier Chains
In order to induce a binary classifier f l : X × {−1, 1} with less average modality extraction cost and better performance in MCC, we design Multi-modal Classifier Chains (MCC) algorithm which is inspired by LSTM. MCC extracts modalities of features one by one until it's able to make a confident prediction. MCC algorithm extracts different modalities sequence with different length for difference instances, while previous feature extraction method extract all modalities of features and use the same features for all instances.
MCC adopts an LSTM network to convert the variable $X_i \in \mathcal{X}$ into a set of hidden representations $H_i^t = [h_i^1, h_i^2, \ldots, h_i^t]$, $h_i^t \in \mathbb{R}^h$. Here, $\tilde{X}_i^{S_i^t} = [\tilde{X}_i^1, \ldots, \tilde{X}_i^m, \ldots, \tilde{X}_i^P]$ is an adaptation of $X_i$. In the $t$-th step, the modality to be extracted is denoted as $S_i^t$: if $m = S_i^t$, then $\tilde{X}_i^m = X_i^{S_i^t}$, and $\tilde{X}_i^m = 0$ otherwise. For example, if $S_i^t = 3$, then $\tilde{X}_i^{S_i^t} = [0, 0, X_i^3, \ldots, 0]$. Similar to a peephole LSTM, MCC has three gates as well as two states: the forget gate layer, input gate layer, cell state layer, output gate layer, and hidden state layer, computed as follows:
$f_t = \sigma([W_{fc}, W_{fh}, W_{fx}][C_{t-1}, h_{t-1}, \tilde{X}_t]^T + b_f)$
$i_t = \sigma([W_{ic}, W_{ih}, W_{ix}][C_{t-1}, h_{t-1}, \tilde{X}_t]^T + b_i)$
$C_t = f_t \cdot C_{t-1} + i_t \cdot \tanh([W_{ch}, W_{cx}][h_{t-1}, \tilde{X}_t]^T + b_C)$
$o_t = \sigma([W_{oc}, W_{oh}, W_{ox}][C_t, h_{t-1}, \tilde{X}_t]^T + b_o)$
$h_t = o_t \cdot \tanh(C_t)$
Different from a standard LSTM, MCC adds two fully connected layers to predict the current label and the next modality to be extracted. For one thing, there is a full connection between the hidden layer and the label prediction layer, with weight vector $\hat{W}_l$. For another, there is a full connection between the hidden layer and the modality prediction layer, with weight vector $\hat{W}_m$. The corresponding bias vectors are denoted as $b_l$ and $b_m$, respectively.
• Label prediction layer: this layer predicts the current label according to a nonlinear softmax function $f_j^l(\cdot)$:
$f_j^l(H_i^t) = \sigma(H_i^t \hat{W}_l + b_l)$   (4)
• Modality prediction layer: this layer scores the candidate modalities according to a linear function $f_j^m(\cdot)$ and selects the maximum as the next modality to be extracted:
$f_j^m(H_i^t) = H_i^t \hat{W}_m + b_m$   (5)
We use $FL = [f_1^l, f_2^l, \ldots, f_L^l]$ and $FM = [f_1^m, f_2^m, \ldots, f_L^m]$ to denote the label prediction function set and the modality prediction function set, respectively.
Next, we design a loss function composed of a loss term and a regularization term to obtain good solutions faster. First of all, the loss of instance $\tilde{X}_i$ with modality $S_i^t$ is defined as
$L_i^t = L_l(f_j^l(H_i^t), y_i) + L_m(f_j^m(H_i^t), \tilde{X}_i^t)$   (6)
Here we adopt the log loss for the label prediction loss $L_l$ and the hinge loss for the modality prediction loss $L_m$, where the modality prediction target is measured by the distances to the K Nearest Neighbors [18]. Meanwhile, we add a ridge-regression (L2-norm) term to the overall loss function:
$\Omega_i^t = \lVert \hat{W}_m \rVert^2 + \lVert \hat{W}_l \rVert^2 + \lVert c \cdot f_j^m(H_i^t) \rVert$   (7)
where $\lVert \cdot \rVert$ represents the L2 norm and $c$ represents the extraction cost of each modality. The loss term is the sum of the losses over all instances at the $t$-th step. The overall loss function is
$L_t = \sum_{i=1}^{N} (L_i^t + \lambda \cdot \Omega_i^t)$   (8)
where $\lambda = 0.1$ is the trade-off between the loss and the regularization. In order to optimize the aforementioned loss function $L_t$, we adopt AdaDelta [19], a per-dimension learning rate method for gradient descent. Here, we denote all the parameters in Eq. 8 as $W = [\hat{W}_m, \hat{W}_l, \lambda]$.
At the $t$-th step, we start by computing the gradient $g_t = \partial L_t / \partial W_t$ and accumulating a decaying average of the squared gradients:
$E[g^2]_t = \rho E[g^2]_{t-1} + (1 - \rho) g_t^2$   (9)
where $\rho$ is a decay constant and $\rho = 0.95$. The resulting parameter update is then
$\Delta W_t = - \frac{\sqrt{E[(\Delta W)^2]_{t-1} + \epsilon}}{\sqrt{E[g^2]_t + \epsilon}} \, g_t$   (10)
where $\epsilon$ is a constant and $\epsilon = 10^{-8}$.
And then we accumulate the squared updates:
$E[\Delta W^2]_t = \rho E[\Delta W^2]_{t-1} + (1 - \rho) \Delta W_t^2$   (11)
Algorithm 1 The pseudo-code of the MCC algorithm
Input:
$D = \{(X_i, Y_i)\}_{i=1}^{N}$: training dataset; $c = \{c_i\}_{i=1}^{P}$: extraction costs of the $P$ modalities;
Output:
$FL$: set of label prediction functions; $FM$: set of modality prediction functions
1: Calculate the predicted label chain $\tau = \{\tau_i\}_{i=1}^{L}$ with Eq. 1
2: for $j$ in $\tau$ do
3:   Construct $D_{\tau_j}$ with Eq. 2
4:   while cnt < $N_{iter}$, cnt++ do
     ...
     Calculate $L_t$ with Eq. 8
15:  Compute the gradient $g_t = \partial L_t / \partial W_t$
16:  Accumulate ...
     ...
     Update $f_j^l$ and $f_j^m$ as in Eq. 4 and Eq. 5
24: end for
25: return $FL$, $FM$;
The pseudo-code of MCC is summarized in Algorithm 1. $N_b$ denotes the batch size of the training phase, $N_{iter}$ the maximum number of iterations, $C_{th}$ the threshold on the cost, and $A_{th}$ the threshold on the accuracy of the predicted label. $\hat{c}_i^t$ denotes the sum of the extraction costs so far and $a_i^t$ the accuracy of the current predicted label.
EXPERIMENT
Dataset Description
We manually collect one real-world Herbs dataset and adapt two publicly available datasets including Emotions [20] and Scene [6]. As for Herbs, there are 5 modalities with explicit modal partitions: channel tropism, symptom, function, dosage and flavor. As for Emotions and Scene, we divide the features into different modalities according to information entropy gain. The details are summarized in Table 1.
Experimental Settings
All the experiments are run on a machine with a 3.2 GHz Intel Core i7 processor and 64 GB main memory. We compare MCC with four multi-label algorithms, BR, CC, ECC, and MLKNN [21], and one state-of-the-art multi-modal algorithm, DMP [15]. For the multi-label learners, all modalities of a dataset are concatenated together as a single-modal input. For the multi-modal method, we treat each label independently.
F-measure is one of the most popular metrics for evaluation of binary classification [22]. To have a fair comparison, we employ three widely adopted standard metrics, i.e., Micro-average, Hamming-Loss, Subset-Accuracy [4]. In addition, we use Cost-average to measure the average modality extraction cost. For the sake of convenience in the regularization function computation, extraction cost of each modality is set to 1.0 in the experiment. Furthermore, we set the cost of new modality (predicted labels) to 0.1 to demonstrate its superiority compared with DMP.
Experimental Results
For all these algorithms, we report the best results of the optimal parameters in terms of classification performance. Meanwhile, we perform 10-fold cross validation (CV) and take the average value of the results in the end.
For one thing, table 2 shows the experimental results of our proposed MCC algorithm as well as other five comparing algorithms. It is obvious that MCC outperforms the other five algorithms on all metrics. For another, as shown in table 3, MCC uses less average modality extraction cost than DMP, while other four multi-label algorithms use all the modalities.
CONCLUSION
Complex objects, e.g., articles and images, can always be represented with multi-modal and multi-label information. However, the quality of the modalities extracted from various channels is inconsistent, and using data from all modalities is not a wise decision. In this paper, we propose a novel Multi-modal Classifier Chains (MCC) algorithm to improve supplement categorization prediction for the MMML problem. Experiments on one real-world dataset and two public datasets validate the effectiveness of our algorithm. MCC makes good use of the modalities and can make a confident prediction with many instead of all of them. Consequently, MCC reduces the modality extraction cost, but it has the limitation of being more time-consuming than other algorithms. In future work, improving extraction parallelism is an interesting direction. | 2,960
1907.11857 | 2964959430 | With the emergence of diverse data collection techniques, objects in real applications can be represented by multi-modal features. Moreover, objects may have multiple semantic meanings. The Multi-modal Multi-label [1] (MMML) problem has therefore become a universal phenomenon. The quality of data collected from different channels is inconsistent, and some channels may not benefit prediction. In real life, not all the modalities are needed for prediction. As a result, we propose a novel instance-oriented Multi-modal Classifier Chains (MCC) algorithm for the MMML problem, which can make a confident prediction with only partial modalities. MCC extracts different modalities for different instances in the testing phase. Extensive experiments are performed on one real-world herbs dataset and two public datasets to validate our proposed algorithm; the results reveal that it may be better to extract many instead of all of the modalities at hand. | In this paper, taking both multi-label learning and feature extraction into consideration, we propose the MCC model with an end-to-end approach @cite_7 for the MMML problem, inspired by adaptive decision methods. Different from previous feature selection or dimensionality reduction methods, MCC extracts different modalities for different instances and different labels. Consequently, when presented with an unseen instance, we extract the most informative and cost-effective modalities for it. The empirical study shows the efficiency and effectiveness of MCC, which achieves better classification performance while extracting fewer modalities on average. | {
"abstract": [
"Traditional Chinese Medicine (TCM) is an influential form of medical treatment in China and surrounding areas. In this paper, we propose a TCM prescription generation task that aims to automatically generate a herbal medicine prescription based on textual symptom descriptions. Sequence-to-sequence (seq2seq) model has been successful in dealing with conditional sequence generation tasks like dialogue generation. We explore a potential end-to-end solution to the TCM prescription generation task using seq2seq models. However, experiments show that directly applying seq2seq model leads to unfruitful results due to the severe repetition problem. To solve the problem, we propose a novel architecture for the decoder with masking and coverage mechanism. The experimental results demonstrate that the proposed method is effective in diversifying the outputs, which significantly improves the F1 score by nearly 10 points (8.34 on test set 1 and 10.23 on test set 2)."
],
"cite_N": [
"@cite_7"
],
"mid": [
"2786150157"
]
} | MANY COULD BE BETTER THAN ALL: A NOVEL INSTANCE-ORIENTED ALGORITHM FOR MULTI-MODAL MULTI-LABEL PROBLEM | In many natural scenarios, objects can be complex, described by multi-modal features and carrying multiple semantic meanings simultaneously.
For one thing, data is collected from diverse channels and exhibits heterogeneous properties: each of these domains presents a different view of the same object, and each modality can have its own representation space and semantic meaning. Such data are known as multi-modal data. In a multi-modal setting, different modalities come with different extraction costs. Previous research, e.g., on dimensionality reduction methods, generally assumes that all the multi-modal features of test instances have already been extracted, without considering the extraction cost. In practical applications, however, no multi-modal features are prepared beforehand, so modality extraction has to be performed first in the testing phase. Given the complexity of multi-modal data collection nowadays, the heavy computational burden of feature extraction for different modalities has become the dominant factor that hurts efficiency.
For another, real-world objects might have multiple semantic meanings. To account for the multiple semantic meanings that one real-world object might have, one direct solution is to assign a set of proper labels to the object to explicitly express its semantics. In multi-label learning, each object is associated with a set of labels instead of a single label. Among previous approaches, the classifier chains algorithm is a high-order approach that considers the relationships among labels, but it is affected by the ordering of the predicted labels.
To address all the above challenges, this paper introduces a novel algorithm called Multi-modal Classifier Chains (MCC) inspired by Long Short-Term Memory (LSTM) [2] [3]. Information about previously selected modalities can be regarded as being stored in the memory cell. The deep-learning framework simultaneously generates the next modality of features to extract and conducts the classification according to the raw input signals in a data-driven way, which avoids some biases from feature engineering and reduces the mismatch between feature extraction and the classifier. The main contributions are:
• We propose a novel MCC algorithm that considers not only the interrelations among different modalities but also the relationships among different labels.
• The MCC algorithm utilizes multi-modal information under a budget, showing that MCC can make a confident prediction with lower average modality extraction cost.
The remainder of this paper is organized as follows. Section 2 introduces related work. Section 3 presents the proposed MCC model. In section 4, empirical evaluations are given to show the superiority of MCC. Finally, section 5 presents conclusion and future work.
METHODOLOGY
This section first summarizes some formal symbols and definitions used throughout this paper, and then introduces the formulation of the proposed MCC model. An overview of our MCC algorithm is shown in Fig.1
Notation
In the following, a bold character denotes a vector (e.g., $\mathbf{X}$). The task of this paper is to learn a function $h: \mathcal{X} \rightarrow 2^{\mathcal{Y}}$ from a training dataset with $N$ data samples $D = \{(\mathbf{X}_i, \mathbf{Y}_i)\}_{i=1}^{N}$. The $i$-th instance $(\mathbf{X}_i, \mathbf{Y}_i)$ contains a feature vector $\mathbf{X}_i \in \mathcal{X}$ and a label vector $\mathbf{Y}_i \in \mathcal{Y}$. $\mathbf{X}_i = [\mathbf{X}_i^1, \mathbf{X}_i^2, \ldots, \mathbf{X}_i^P] \in \mathbb{R}^{d_1 + d_2 + \cdots + d_P}$ is a combination of all modalities and $d_m$ is the dimensionality of the features in the $m$-th modality. $\mathbf{Y}_i = [y_i^1, y_i^2, \ldots, y_i^L] \in \{-1, 1\}^L$ denotes the label vector of $\mathbf{X}_i$. $P$ is the number of modalities and $L$ is the number of labels.
Moreover, we define $c = \{c_1, c_2, \ldots, c_P\}$ to represent the extraction costs of the $P$ modalities. The modality extraction sequence of $\mathbf{X}_i$ is denoted as $S_i = \{S_i^1, S_i^2, \ldots, S_i^m\}$, $m \in \{1, 2, \ldots, P\}$, $m \le P$, where $S_i^m \in \{1, 2, \ldots, P\}$ represents the $m$-th modality of features to extract for $\mathbf{X}_i$ and satisfies the following condition: $\forall m, n \ (m \neq n) \in \{1, 2, \ldots, P\}$, $S_i^m \neq S_i^n$. It is noteworthy that different instances not only correspond to different extraction sequences but may also have extraction sequences of different lengths. Furthermore, we define some notations used for the testing phase. Suppose there is a testing dataset with $M$ data samples $T = \{(\mathbf{X}_i, \mathbf{Y}_i)\}_{i=1}^{M}$. We denote the predicted labels of $T$ as $Z = \{Z_i\}_{i=1}^{M}$, in which $Z_i = (z_i^1, z_i^2, \ldots, z_i^L)$ represents all predicted labels of $\mathbf{X}_i$ in $T$ and $Z^j = (z_1^j, z_2^j, \ldots, z_M^j)^T$ represents the $j$-th predicted label over the whole testing dataset.
MCC algorithm
On one hand, MMML is related to multi-label learning, and here we extend Classifier Chains to deal with it. On the other hand, each binary classification problem in Classifier Chains can be transformed into a multi-modal problem, and this procedure aims at making a confident prediction with lower average modality extraction cost.
Classifier Chains
Considering the correlation among labels, we extend Classifier Chains to deal with the Multi-modal and Multi-label problem. The Classifier Chains algorithm transforms the multi-label learning problem into a chain of binary classification problems, where subsequent binary classifiers in the chain are built upon the predictions of the preceding ones [4], thereby considering the full correlation among labels. The greatest challenge for CC is how to form the recurrence relation chain $\tau$. In this paper, we propose a heuristic Gini-index-based Classifier Chains algorithm to specify $\tau$.
First of all, we split the multi-label dataset into several single-label datasets, i.e., for the $j$-th label in $\{y^1, y^2, \ldots, y^L\}$, we rebuild $D_j = \{(\mathbf{X}_i, y_i^j)\}_{i=1}^{N}$ as the $j$-th single-label dataset. Secondly, we calculate the Gini index [17] of each rebuilt single-label dataset $D_j$, $(j = 1, 2, \ldots, L)$:
$$\mathrm{Gini}(D_j) = \sum_{k=1}^{|\mathcal{Y}|} \sum_{k' \neq k} p_k p_{k'} = 1 - \sum_{k=1}^{|\mathcal{Y}|} p_k^2 \quad (1)$$
where $p_k$ is the proportion of samples in $D_j$ with the $k$-th label value, so that $\sum_{k} p_k^2$ is the probability of randomly choosing two samples with the same label, $\sum_{k}\sum_{k' \neq k} p_k p_{k'}$ is the probability of randomly choosing two samples with different labels, and $|\mathcal{Y}|$ is the number of label values in $D_j$.
We then obtain the predicted label chain $\tau = \{\tau_i\}_{i=1}^{L}$, composed of the indexes of $\{\mathrm{Gini}(D_i)\}_{i=1}^{L}$ sorted in descending order. For the $L$ class labels $\{y^1, y^2, \ldots, y^L\}$, we split the label set one by one according to $\tau$ and then train $L$ binary classifiers.
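A minimal sketch of this chain-ordering step is given below, assuming the binary labels are stored as a NumPy array with one column per label; the function and variable names are illustrative and not from the original implementation.
```python
import numpy as np

def gini_index(labels):
    """Gini index of a single-label dataset: 1 - sum_k p_k^2,
    where p_k is the proportion of samples taking the k-th label value."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def label_chain_order(Y):
    """Order the L labels by the Gini index of their single-label
    datasets D_j, sorted in descending order (Eq. (1))."""
    ginis = np.array([gini_index(Y[:, j]) for j in range(Y.shape[1])])
    return np.argsort(-ginis)  # chain tau_1, ..., tau_L

# toy example: 6 samples, 3 labels taking values in {-1, +1}
Y = np.array([[ 1, -1,  1],
              [ 1,  1, -1],
              [-1,  1,  1],
              [ 1, -1, -1],
              [-1, -1,  1],
              [ 1,  1,  1]])
print("predicted label chain tau:", label_chain_order(Y))
```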
For the $j$-th label $y^{\tau_j}$, $(j = 1, 2, \ldots, L)$ in the ordered list $\tau$, a corresponding binary training dataset is constructed by appending the set of labels preceding $y_i^{\tau_j}$ to each instance $\mathbf{X}_i$:
$$D_{\tau_j} = \{([\mathbf{X}_i, \mathbf{xd}_i^{\tau_j}], y_i^{\tau_j})\}_{i=1}^{N} \quad (2)$$
where $\mathbf{xd}_i^{\tau_j} = (y_i^{\tau_1}, \ldots, y_i^{\tau_{j-1}})$ represents the binary assignment of those labels preceding $y_i^{\tau_j}$ on $\mathbf{X}_i$. Meanwhile, a corresponding binary testing dataset is constructed by appending each instance with its relevance to those labels preceding $y^{\tau_j}$:
$$T_{\tau_j} = \{([\mathbf{X}_i, \mathbf{xt}_i^{\tau_j}], y_i^{\tau_j})\}_{i=1}^{M} \quad (3)$$
where $\mathbf{xt}_i^{\tau_j} = (z_i^{\tau_1}, \ldots, z_i^{\tau_{j-1}})$ represents the binary assignment of those labels preceding $z_i^{\tau_j}$ on $\mathbf{X}_i$ (specifically, $\mathbf{xt}_i^{\tau_1} = \emptyset$) and $[\mathbf{X}_i, \mathbf{xt}_i^{\tau_j}]$ denotes concatenating the vectors $\mathbf{X}_i$ and $\mathbf{xt}_i^{\tau_j}$. We denote by $c_l$ the extraction cost of $\mathbf{xt}_i^{\tau_j}$, which is the same as the extraction cost of $\mathbf{xd}_i^{\tau_j}$. If $j > 1$, each instance in $T_{\tau_j}$ is composed of $P + 1$ modalities of features and one label $y_i^{\tau_j}$. After that, we apply an efficient Multi-modal Classifier Chains (MCC) algorithm, which will be introduced in the following paragraph. By passing the combination of the training dataset $D_{\tau_j}$ and the extraction costs $c$ as parameters to MCC, we obtain $Z^{\tau_j}$. The final predicted label set of $T$ is the concatenation of the $Z^{\tau_j}$, $(j = 1, 2, \ldots, L)$, i.e., $Z = (Z^{\tau_1}, Z^{\tau_2}, \ldots, Z^{\tau_L})$.
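To make the chained construction concrete, here is a small sketch of Eqs. (2) and (3), where a generic scikit-learn binary classifier stands in for the per-label MCC sub-model; it only illustrates how preceding labels are appended as an extra, cheap modality.
```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_predict_chain(X_train, Y_train, X_test, tau):
    """Train one binary classifier per label along the chain tau, appending
    ground-truth preceding labels at training time (Eq. (2)) and predicted
    preceding labels at test time (Eq. (3))."""
    Z = np.zeros((X_test.shape[0], Y_train.shape[1]), dtype=int)
    prev_tr = np.empty((X_train.shape[0], 0))
    prev_te = np.empty((X_test.shape[0], 0))
    for j in tau:
        clf = LogisticRegression(max_iter=1000)
        clf.fit(np.hstack([X_train, prev_tr]), Y_train[:, j])
        Z[:, j] = clf.predict(np.hstack([X_test, prev_te]))
        # the true/predicted label becomes an extra low-cost modality downstream
        prev_tr = np.hstack([prev_tr, Y_train[:, [j]]])
        prev_te = np.hstack([prev_te, Z[:, [j]]])
    return Z
```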
Multi-modal Classifier Chains
In order to induce a binary classifier $f_l: \mathcal{X} \rightarrow \{-1, 1\}$ with lower average modality extraction cost and better performance, we design the Multi-modal Classifier Chains (MCC) algorithm, which is inspired by LSTM. MCC extracts modalities of features one by one until it is able to make a confident prediction. The MCC algorithm extracts modality sequences of different lengths for different instances, while previous feature extraction methods extract all modalities of features and use the same features for all instances.
MCC adopts an LSTM network to convert the variable $\mathbf{X}_i \in \mathcal{X}$ into a set of hidden representations $H_i^t = [\mathbf{h}_i^1, \mathbf{h}_i^2, \ldots, \mathbf{h}_i^t]$, $\mathbf{h}_i^t \in \mathbb{R}^h$. Here, $\tilde{\mathbf{X}}_i = [\tilde{\mathbf{X}}_i^1, \ldots, \tilde{\mathbf{X}}_i^m, \ldots, \tilde{\mathbf{X}}_i^P]$ is an adaptation of $\mathbf{X}_i$. In the $t$-th step, the modality to be extracted is denoted as $S_i^t$. If $m = S_i^t$, then $\tilde{\mathbf{X}}_i^m = \mathbf{X}_i^{S_i^t}$, and $\mathbf{0}$ otherwise. For example, if $S_i^t = 3$, then $\tilde{\mathbf{X}}_i = [\mathbf{0}, \mathbf{0}, \mathbf{X}_i^3, \ldots, \mathbf{0}]$. Similar to a peephole LSTM, MCC has three gates as well as two states: the forget gate layer, input gate layer, cell state layer, output gate layer, and hidden state layer, listed as follows:
$$\begin{aligned} \mathbf{f}_t &= \sigma([\mathbf{W}_{fc}, \mathbf{W}_{fh}, \mathbf{W}_{fx}][\mathbf{C}_{t-1}, \mathbf{h}_{t-1}, \tilde{\mathbf{X}}_t]^T + \mathbf{b}_f) \\ \mathbf{i}_t &= \sigma([\mathbf{W}_{ic}, \mathbf{W}_{ih}, \mathbf{W}_{ix}][\mathbf{C}_{t-1}, \mathbf{h}_{t-1}, \tilde{\mathbf{X}}_t]^T + \mathbf{b}_i) \\ \mathbf{C}_t &= \mathbf{f}_t \cdot \mathbf{C}_{t-1} + \mathbf{i}_t \cdot \tanh([\mathbf{W}_{ch}, \mathbf{W}_{cx}][\mathbf{h}_{t-1}, \tilde{\mathbf{X}}_t]^T + \mathbf{b}_C) \\ \mathbf{o}_t &= \sigma([\mathbf{W}_{oc}, \mathbf{W}_{oh}, \mathbf{W}_{ox}][\mathbf{C}_t, \mathbf{h}_{t-1}, \tilde{\mathbf{X}}_t]^T + \mathbf{b}_o) \\ \mathbf{h}_t &= \mathbf{o}_t \cdot \tanh(\mathbf{C}_t) \end{aligned}$$
Different from a standard LSTM, MCC adds two fully connected layers to predict the current label and the next modality to be extracted. For one thing, there is a full connection between the hidden layer and the label prediction layer, with weight matrix $\hat{\mathbf{W}}_l$. For another, there is a full connection between the hidden layer and the modality prediction layer, with weight matrix $\hat{\mathbf{W}}_m$. Moreover, the bias vectors are denoted as $\mathbf{b}_l$ and $\mathbf{b}_m$, respectively.
• Label prediction layer: this layer predicts the label according to a nonlinear softmax function $f_j^l(\cdot)$:
$$f_j^l(H_i^t) = \sigma(H_i^t \hat{\mathbf{W}}_l + \mathbf{b}_l) \quad (4)$$
• Modality prediction layer: this layer scores the next modality according to a linear function $f_j^m(\cdot)$ and selects the maximum as the next modality to be extracted:
$$f_j^m(H_i^t) = H_i^t \hat{\mathbf{W}}_m + \mathbf{b}_m \quad (5)$$
We use $FL = [f_1^l, f_2^l, \ldots, f_L^l]$ and $FM = [f_1^m, f_2^m, \ldots, f_L^m]$ to denote the label prediction function set and the modality prediction function set, respectively.
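The following PyTorch-style sketch illustrates one MCC step, covering the masked modality input, the peephole-like gates, and the two prediction heads of Eqs. (4) and (5); the layer names, dimensions, and the use of separate nn.Linear blocks are assumptions made for illustration, not the authors' implementation.
```python
import torch
import torch.nn as nn

class MCCCell(nn.Module):
    """One recurrent MCC step: peephole-style gates over [C, h, x_masked],
    plus a label head (Eq. (4)) and a modality head (Eq. (5))."""
    def __init__(self, feat_dim, hidden_dim, num_modalities):
        super().__init__()
        in_dim = 2 * hidden_dim + feat_dim                 # [C, h, x] concatenated
        self.f_gate = nn.Linear(in_dim, hidden_dim)        # forget gate
        self.i_gate = nn.Linear(in_dim, hidden_dim)        # input gate
        self.c_cand = nn.Linear(hidden_dim + feat_dim, hidden_dim)  # cell candidate
        self.o_gate = nn.Linear(in_dim, hidden_dim)        # output gate
        self.label_head = nn.Linear(hidden_dim, 1)                   # Eq. (4)
        self.modality_head = nn.Linear(hidden_dim, num_modalities)   # Eq. (5)

    def forward(self, x_masked, h_prev, c_prev):
        # x_masked keeps only the currently extracted modality, zeros elsewhere
        z = torch.cat([c_prev, h_prev, x_masked], dim=-1)
        f = torch.sigmoid(self.f_gate(z))
        i = torch.sigmoid(self.i_gate(z))
        c = f * c_prev + i * torch.tanh(self.c_cand(torch.cat([h_prev, x_masked], dim=-1)))
        o = torch.sigmoid(self.o_gate(torch.cat([c, h_prev, x_masked], dim=-1)))
        h = o * torch.tanh(c)
        label_prob = torch.sigmoid(self.label_head(h))     # confidence for the current label
        next_modality = self.modality_head(h)              # scores; argmax picks what to extract next
        return h, c, label_prob, next_modality
```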
Next, we design a loss function composed of a loss term and a regularization term for producing optimal and faster results. First, we define the loss of instance $\tilde{\mathbf{X}}_i$ with modality $S_i^t$ as
$$\mathcal{L}_i^t = \mathcal{L}_l(f_j^l(H_i^t), \mathbf{y}_i) + \mathcal{L}_m(f_j^m(H_i^t), \tilde{\mathbf{X}}_i^t) \quad (6)$$
Here we adopt the log loss for the label prediction loss $\mathcal{L}_l$ and the hinge loss for the modality prediction loss $\mathcal{L}_m$, where the modality prediction is measured by distances to the K Nearest Neighbors [18]. Meanwhile, we add a ridge regression (L2 norm) term to the overall loss function:
$$\Omega_i^t = \|\hat{\mathbf{W}}_m\|^2 + \|\hat{\mathbf{W}}_l\|^2 + \|c \cdot f_j^m(H_i^t)\| \quad (7)$$
where $\|\cdot\|$ denotes the L2 norm and $c$ is the extraction cost of each modality. The loss term is the sum of the losses over all instances at the $t$-th step. The overall loss function is
$$\mathcal{L}_t = \sum_{i=1}^{N} (\mathcal{L}_i^t + \lambda \cdot \Omega_i^t) \quad (8)$$
where $\lambda = 0.1$ is the trade-off between the loss and the regularization. In order to optimize the aforementioned loss function $\mathcal{L}_t$, we adopt AdaDelta [19], a per-dimension learning rate method for gradient descent. Here, we denote all the parameters in Eq. (8) as $W = [\hat{\mathbf{W}}_m, \hat{\mathbf{W}}_l, \lambda]$.
At the $t$-th step, we start by computing the gradient $g_t = \frac{\partial \mathcal{L}_t}{\partial W_t}$ and accumulating a decaying average of the squared gradients:
$$E[g^2]_t = \rho E[g^2]_{t-1} + (1 - \rho) g_t^2 \quad (9)$$
where $\rho$ is a decay constant and $\rho = 0.95$. The resulting parameter update is then
$$\Delta W_t = -\frac{\sqrt{E[\Delta W^2]_{t-1} + \epsilon}}{\sqrt{E[g^2]_t + \epsilon}}\, g_t \quad (10)$$
where $\epsilon$ is a constant and $\epsilon = 10^{-8}$.
Algorithm 1 The pseudo-code of the MCC algorithm
Input: $D = \{(\mathbf{X}_i, \mathbf{Y}_i)\}_{i=1}^{N}$: training dataset; $c = \{c_i\}_{i=1}^{P}$: extraction costs of the $P$ modalities
Output: $FL$: set of label prediction functions; $FM$: set of modality prediction functions
1: Calculate the predicted label chain $\tau = \{\tau_i\}_{i=1}^{L}$ with Eq. (1)
2: for $j$ in $\tau$ do
3: Construct $D_{\tau_j}$ with Eq. (2)
4: while $cnt < N_{iter}$, $cnt$++ do
Calculate $\mathcal{L}_t$ with Eq. (8)
15: Compute the gradient $g_t = \frac{\partial \mathcal{L}_t}{\partial W_t}$
16: Accumulate the updates
Update $f_j^l$ and $f_j^m$ as in Eq. (4) and Eq. (5)
24: end for
25: return $FL$, $FM$
We then accumulate the squared updates:
$$E[\Delta W^2]_t = \rho E[\Delta W^2]_{t-1} + (1 - \rho) \Delta W_t^2 \quad (11)$$
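The AdaDelta rule of Eqs. (9)-(11) can be written as the short NumPy sketch below; this is a generic implementation of the published update, not the authors' code, and the toy quadratic loss in the usage example is only for demonstration.
```python
import numpy as np

def adadelta_step(w, grad, state, rho=0.95, eps=1e-8):
    """One AdaDelta update: decaying averages of squared gradients (Eq. (9))
    and squared updates (Eq. (11)) give a per-dimension step size (Eq. (10))."""
    state["Eg2"] = rho * state["Eg2"] + (1 - rho) * grad ** 2
    delta = -np.sqrt(state["Edw2"] + eps) / np.sqrt(state["Eg2"] + eps) * grad
    state["Edw2"] = rho * state["Edw2"] + (1 - rho) * delta ** 2
    return w + delta, state

# usage on a toy quadratic loss L(w) = 0.5 * ||w||^2, whose gradient is w
w = np.array([1.0, -2.0])
state = {"Eg2": np.zeros_like(w), "Edw2": np.zeros_like(w)}
for _ in range(200):
    w, state = adadelta_step(w, grad=w, state=state)
print(w)  # slowly moves toward the minimizer at the origin
```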
The pseudo-code of MCC is summarized in Algorithm 1. $N_b$ denotes the batch size of the training phase, $N_{iter}$ the maximum number of iterations, $C_{th}$ the cost threshold, and $A_{th}$ the threshold on the accuracy of the predicted label. $\hat{c}_i^t$ denotes the accumulated extraction cost and $a_i^t$ denotes the accuracy of the currently predicted label.
EXPERIMENT
Dataset Description
We manually collect one real-world Herbs dataset and adapt two publicly available datasets including Emotions [20] and Scene [6]. As for Herbs, there are 5 modalities with explicit modal partitions: channel tropism, symptom, function, dosage and flavor. As for Emotions and Scene, we divide the features into different modalities according to information entropy gain. The details are summarized in Table 1.
Experimental Settings
All the experiments are run on a machine with a 3.2 GHz Intel Core i7 processor and 64 GB of main memory. We compare MCC with four multi-label algorithms, BR, CC, ECC, and MLKNN [21], and one state-of-the-art multi-modal algorithm, DMP [15]. For the multi-label learners, all modalities of a dataset are concatenated together as a single-modal input. For the multi-modal method, we treat each label independently.
F-measure is one of the most popular metrics for evaluation of binary classification [22]. To have a fair comparison, we employ three widely adopted standard metrics, i.e., Micro-average, Hamming-Loss, Subset-Accuracy [4]. In addition, we use Cost-average to measure the average modality extraction cost. For the sake of convenience in the regularization function computation, extraction cost of each modality is set to 1.0 in the experiment. Furthermore, we set the cost of new modality (predicted labels) to 0.1 to demonstrate its superiority compared with DMP.
Experimental Results
For all these algorithms, we report the best results of the optimal parameters in terms of classification performance. Meanwhile, we perform 10-fold cross validation (CV) and take the average value of the results in the end.
For one thing, Table 2 shows the experimental results of our proposed MCC algorithm as well as the other five comparison algorithms. It is clear that MCC outperforms the other five algorithms on all metrics. For another, as shown in Table 3, MCC uses a lower average modality extraction cost than DMP, while the other four multi-label algorithms use all the modalities.
CONCLUSION
Complex objects, e.g., articles, images, etc., can often be represented with multi-modal and multi-label information. However, the quality of the modalities extracted from various channels is inconsistent, so using data from all modalities is not always a wise decision. In this paper, we propose a novel Multi-modal Classifier Chains (MCC) algorithm to improve supplement categorization prediction for the MMML problem. Experiments on one real-world dataset and two public datasets validate the effectiveness of our algorithm. MCC makes good use of the modalities and can make a confident prediction with many instead of all modalities. Consequently, MCC reduces the modality extraction cost, but it is more time-consuming than the other algorithms. In future work, improving extraction parallelism is an interesting direction. | 2,960
1907.11458 | 2966260985 | Video surveillance can be significantly enhanced by using both top-view data, e.g., those from drone-mounted cameras in the air, and horizontal-view data, e.g., those from wearable cameras on the ground. Collaborative analysis of different-view data can facilitate various kinds of applications, such as human tracking, person identification, and human activity recognition. However, for such collaborative analysis, the first step is to associate people, referred to as subjects in this paper, across these two views. This is a very challenging problem due to large human-appearance difference between top and horizontal views. In this paper, we present a new approach to address this problem by exploring and matching the subjects' spatial distributions between the two views. More specifically, on the top-view image, we model and match subjects' relative positions to the horizontal-view camera in both views and define a matching cost to decide the actual location of horizontal-view camera and its view angle in the top-view image. We collect a new dataset consisting of top-view and horizontal-view image pairs for performance evaluation and the experimental results show the effectiveness of the proposed method. | Our work can be regarded as a problem of associating first-person and third-person cameras, which has been studied by many researchers. For example, @cite_2 identify a first-person camera wearer in a third-person video by incorporating spatial and temporal information from the videos of both cameras. In @cite_15 , information from first- and third-person cameras, together with laser range data, is fused to improve depth perception and 3D reconstruction. @cite_9 predict gaze behavior in social scenes using both first- and third-person cameras. In @cite_7 , first- and third-person cameras are synchronized, followed by associating subjects between their videos. In @cite_8 , a first-person video is combined to multiple third-person videos for more reliable action recognition. The third-person cameras in these methods usually bear horizontal views or views with certain slope angle. Differently, in this paper the third-person camera is mounted on a drone and produces top-view images, making cross-view appearance matching a very difficult problem. | {
"abstract": [
"In a world of pervasive cameras, public spaces are often captured from multiple perspectives by cameras of different types, both fixed and mobile. An important problem is to organize these heterogeneous collections of videos by finding connections between them, such as identifying correspondences between the people appearing in the videos and the people holding or wearing the cameras. In this paper, we wish to solve two specific problems: (1) given two or more synchronized third-person videos of a scene, produce a pixel-level segmentation of each visible person and identify corresponding people across different views (i.e., determine who in camera A corresponds with whom in camera B), and (2) given one or more synchronized third-person videos as well as a first-person video taken by a mobile or wearable camera, segment and identify the camera wearer in the third-person videos. Unlike previous work which requires ground truth bounding boxes to estimate the correspondences, we perform person segmentation and identification jointly. We find that solving these two problems simultaneously is mutually beneficial, because better fine-grained segmentation allows us to better perform matching across views, and information from multiple views helps us perform more accurate segmentation. We evaluate our approach on two challenging datasets of interacting people captured from multiple wearable cameras, and show that our proposed method performs significantly better than the state-of-the-art on both person segmentation and identification.",
"In this paper, we study the problem of recognizing human actions in the presence of a single egocentric camera and multiple static cameras. Some actions are better presented in static cameras, where the whole body of an actor and the context of actions are visible. Some other actions are better recognized in egocentric cameras, where subtle movements of hands and complex object interactions are visible. In this paper, we introduce a model that can benefit from the best of both worlds by learning to predict the importance of each camera in recognizing actions in each frame. By joint discriminative learning of latent camera importance variables and action classifiers, our model achieves successful results in the challenging CMU-MMAC dataset. Our experimental results show significant gain in learning to use the cameras according to their predicted importance. The learned latent variables provide a level of understanding of a scene that enables automatic cinematography by smoothly switching between cameras in order to maximize the amount of relevant information in each frame.",
"We present a method to predict primary gaze behavior in a social scene. Inspired by the study of electric fields, we posit \"social charges\"-latent quantities that drive the primary gaze behavior of members of a social group. These charges induce a gradient field that defines the relationship between the social charges and the primary gaze direction of members in the scene. This field model is used to predict primary gaze behavior at any location or time in the scene. We present an algorithm to estimate the time-varying behavior of these charges from the primary gaze behavior of measured observers in the scene. We validate the model by evaluating its predictive precision via cross-validation in a variety of social scenes.",
"We consider scenarios in which we wish to perform joint scene understanding, object tracking, activity recognition, and other tasks in scenarios in which multiple people are wearing body-worn cameras while a third-person static camera also captures the scene. To do this, we need to establish person-level correspondences across first-and third-person videos, which is challenging because the camera wearer is not visible from his her own egocentric video, preventing the use of direct feature matching. In this paper, we propose a new semi-Siamese Convolutional Neural Network architecture to address this novel challenge. We formulate the problem as learning a joint embedding space for first-and third-person videos that considers both spatial-and motion-domain cues. A new triplet loss function is designed to minimize the distance between correct first-and third-person matches while maximizing the distance between incorrect ones. This end-to-end approach performs significantly better than several baselines, in part by learning the first-and third-person features optimized for matching jointly with the distance measure itself.",
"The user interface is the central element of a telepresence robotic system and its visualization modalities greatly affect the operator's situation awareness, and thus its performance. Depending on the task at hand and the operator's preferences, going from ego- and exocentric viewpoints and improving the depth representation can provide better perspectives of the operation environment. Our system, which combines a 3D reconstruction of the environment using laser range finder readings with two video projection methods, allows the operator to easily switch from ego- to exocentric viewpoints. This paper presents the interface developed and demonstrates its capabilities by having 13 operators teleoperate a mobile robot in a navigation task."
],
"cite_N": [
"@cite_7",
"@cite_8",
"@cite_9",
"@cite_2",
"@cite_15"
],
"mid": [
"2963924762",
"2253138976",
"2024274943",
"2610060393",
"1980296915"
]
} | Multiple Human Association between Top and Horizontal Views by Matching Subjects' Spatial Distributions | The advancement of moving-camera technologies provides a new perspective for video surveillance. Unmanned aerial vehicles (UAVs), such as drones in the air, can provide top views of a group of subjects on the ground. Wearable cameras, such as Google Glass and GoPro, mounted over the head of a wearer (one of the subjects on the ground), can provide horizontal views of the same group of subjects. As shown in Fig. 1, the data collected from these two views complement each other well: top-view images contain no mutual occlusions and exhibit a global picture and the relative positions of the subjects, while horizontal-view images can capture the detailed appearance, action, and behavior of subjects of interest at a much closer distance. Clearly, their collaborative analysis can significantly improve video-surveillance capabilities such as human tracking, human detection, and activity recognition.
Figure 1. An illustration of the top-view (left) and horizontal-view (right) images. The former is taken by a camera mounted to a drone in the air and the latter is taken by a GoPro worn by a wearer who walked on the ground. The proposed method identifies on the top-view image the location and view angle of the camera (indicated by the red box) that produces the horizontal-view image, and associates subjects, indicated by identical color boxes, across these two videos.
The first step for such a collaborative analysis is to accurately associate the subjects across these two views, i.e., we need to identify any person present in both views and identify his location in both views, as shown in Fig. 1. In general, this can be treated as a person re-identification (re-id) problem: for each subject in one view, re-identify him in the other view. However, this is a very challenging person re-id problem because the same subject may show totally different appearances in the top and horizontal views. Moreover, the top view of a subject contains very limited features, showing only the top of the head and shoulders, which makes it very difficult to distinguish different subjects from their top views, as shown in Fig. 1.
Prior works [1,2,3] tried to alleviate the challenge of this problem by assuming 1) the view direction of the top-view camera in the air has a certain slope such that the subjects' bodies, and even part of the background, are still partially visible in the top views and can be used for feature matching to the horizontal views, and 2) the view angle of the horizontal-view camera on the ground is consistent with the moving direction of the camera wearer and can be easily estimated by computing optical flow in the top-view videos, which can be used to identify the on-the-ground camera wearer in the top-view video. These two assumptions, however, limit their applicability in practice, e.g., the horizontal-view camera wearer may turn his head (and therefore the head-mounted camera) when he walks, leading to inconsistency between his moving direction and the wearable-camera view direction.
In this paper, we develop a new approach to associate subjects across top and horizontal views without the above two assumptions. Our main idea is to explore the spatial distribution of the subjects for cross-view subject association. From the horizontal-view image, we detect all the subjects and estimate their depths and spatial distribution using the sizes and locations of the detected subjects, respectively. On the corresponding top-view image, we traverse each detected subject and each possible direction to localize the horizontal-view camera (wearer), as well as its view angle. For each traversed location and direction, we estimate the spatial distribution of all the visible subjects. We finally define a matching cost between the subjects' spatial distributions in the top and horizontal views to decide the horizontal-view camera location and view angle, with which we can associate the subjects across the two views. In the experiments, we collect a new dataset consisting of image pairs from top and horizontal views for performance evaluation. Experimental results verify that the proposed method can effectively associate multiple subjects across top and horizontal views.
The main contributions of this paper are: 1) We propose to use the spatial distribution of multiple subjects for associating subjects across top and horizontal views, instead of using subject appearance and motion as in prior works. 2) We develop geometry-based algorithms to model and match the subjects' spatial distributions across top and horizontal views. 3) We collect a new dataset of top-view and horizontal-view images for evaluating the proposed cross-view subject association.
The remainder of this paper is organized as follows. Section 2 reviews the related work. Section 3 elaborates on the proposed method and Section 4 reports the experimental results, followed by a brief conclusion in Section 5.
Proposed Method
In this section, we first give an overview of the proposed method and then elaborate on the main steps.
Overview
Given a top-view image and a horizontal-view image that are taken by the respective cameras at the same time, we detect all persons (referred to as subjects in this paper) on both images by a person detector [15]. Let $T = \{O_i^{top}\}_{i=1}^{N}$ and $H = \{O_j^{hor}\}_{j=1}^{M}$ denote the sets of subjects detected in the top-view and horizontal-view images, with $O_i^{top}$ and $O_j^{hor}$ being the $i$-th and $j$-th detected subject, respectively. The goal of cross-view subject association is to identify all the matched subjects between $T$ and $H$ that indicate the same persons.
In this paper, we address this problem by exploring the spatial distributions of the detected subjects in both views.
More specifically, for each detected subject $O_i^{top}$ in the top view, we infer a vector $V_i^{top} = (x_i^{top}, y_i^{top})$ that reflects its relative position to the horizontal-view camera (wearer) on the ground. Then for each detected subject $O_j^{hor}$ in the horizontal view, we also infer a vector $V_j^{hor} = (x_j^{hor}, y_j^{hor})$ to reflect its relative position to the horizontal-view camera on the ground. We associate the subjects detected in the two views by seeking matchings between the two vector sets $V^{top}(T, \theta, O) = \{V_i^{top}\}_{i=1}^{N}$ and $V^{hor}(H) = \{V_j^{hor}\}_{j=1}^{M}$, where $O$ and $\theta$ are the location and view angle of the horizontal-view camera (wearer) in the top-view image, and they are not known a priori. Finally, we define a matching cost function $\phi$ to measure the dissimilarity between the two vector sets and optimize this function to find the matching subjects between the two views, as well as the camera location $O$ and the camera view angle $\theta$. In the following, we elaborate on each step of the proposed method.
Vector Representation
In this section, we discuss how to derive $V^{top}$ and $V^{hor}$. On the top-view image, we first assume that the horizontal-view camera location $O$ and its view angle $\theta$ are given. This way, we can compute its field of view in the top-view image and all the detected subjects' relative positions to the horizontal-view camera on the ground. The horizontal-view image is egocentric, and we can compute the detected subjects' relative positions to the camera based on the subjects' sizes and positions in the horizontal-view image.
Top-View Vector Representation
As shown in Fig. 2(a), in the top-view image we can easily compute the left and right boundaries of the field of view of the horizontal-view camera, denoted by $L$ and $R$, respectively, based on the given camera location $O$ and its view angle $\theta$. For a subject at $P$ in the field of view, we estimate its relative position to the horizontal-view camera by using two geometric parameters $\hat{x}$ and $\hat{y}$, where $\hat{x}$ is the (signed) distance to the horizontal-view camera along the (camera) right direction $V$, as shown in Fig. 2(a), and $\hat{y}$ is the depth. Based on the pinhole camera model, we can calculate them by
$$\hat{x} = f \cot\langle \overrightarrow{OP}, V\rangle, \quad \hat{y} = |\overrightarrow{OP}| \cdot \sin\langle \overrightarrow{OP}, V\rangle \quad (1)$$
where $\langle \cdot, \cdot \rangle$ indicates the angle between two directions and $f$ is the focal length of the horizontal-view camera.
Next we consider the range of $\hat{x}$. From Fig. 2(a), we can get
$$\hat{x}_{min} = f \cot\langle \vec{L}, V\rangle = f \cot\left(\frac{\pi+\alpha}{2}\right), \quad \hat{x}_{max} = f \cot\langle \vec{R}, V\rangle = f \cot\left(\frac{\pi-\alpha}{2}\right) \quad (2)$$
where $\alpha \in [0, \pi]$ is the given field-of-view angle of the horizontal-view camera, as indicated in Fig. 2(a). From Eq. (2), we have $\hat{x}_{max} = -\hat{x}_{min} > 0$. To enable the matching to the vector representation from the horizontal view, we further normalize the value range of $\hat{x}$ to $[-1, 1]$, i.e.,
$$x^{top} = \frac{\hat{x}}{f \cot\left(\frac{\pi-\alpha}{2}\right)}, \quad y^{top} = \hat{y} \quad (3)$$
With this normalization, we actually do not need the actual value of f in the proposed method.
Let $O_k^{top}$, $k \in K \subset \{1, 2, \cdots, N\}$ be the subset of detected subjects in the field of view in the top-view image. We can find the vector representation for all of them and sort them by their $x^{top}$ values in ascending order. We then stack them together as
$$\mathbf{V}^{top} = (\mathbf{x}^{top}, \mathbf{y}^{top}) \in \mathbb{R}^{|K| \times 2} \quad (4)$$
where $|K|$ is the size of $K$, and $\mathbf{x}^{top}$ and $\mathbf{y}^{top}$ are the column vectors of all the $x^{top}$ and $y^{top}$ values of the subjects in the field of view, respectively.
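A small geometric sketch of Eqs. (1)-(4) is given below; the choice of 2D coordinates, the convention for the camera right direction, and the variable names are assumptions made for illustration, and the focal length f cancels out after the normalization of Eq. (3).
```python
import numpy as np

def top_view_vectors(P, O, view_dir, alpha):
    """Relative positions of top-view subjects P (K x 2) w.r.t. a candidate
    horizontal-view camera at O looking along the unit vector view_dir,
    with field-of-view angle alpha in radians; Eqs. (1), (3), (4)."""
    right = np.array([view_dir[1], -view_dir[0]])     # assumed right-direction convention
    rel = np.asarray(P, dtype=float) - np.asarray(O, dtype=float)
    along_right = rel @ right                         # signed offset along V
    depth = rel @ view_dir                            # y_hat = |OP| sin<OP, V> for subjects in front
    x_top = (along_right / np.maximum(depth, 1e-9)) / np.tan(alpha / 2.0)  # f cot<OP, V>, normalized
    keep = (depth > 0) & (np.abs(x_top) <= 1.0)       # subjects inside the field of view
    order = np.argsort(x_top[keep])
    return np.stack([x_top[keep][order], depth[keep][order]], axis=1)
```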
Horizontal-View Vector Representation
For each subject in the horizontal-view image, we also compute a vector representation consistent with the top-view vector representation, i.e., the x-value reflects the distance to the horizontal-view camera along the right direction and the y-value reflects the depth to the horizontal-view camera. As shown in Fig. 2(b), in the horizontal-view image, let $Q = (\tilde{x}, \tilde{y})$ and $h$ be the location and height of a detected subject, respectively. If we take the top-left corner of the image as the origin of the coordinates, then $\tilde{x} - \frac{W}{2}$, with $W$ being the width of the horizontal-view image, is the subject's distance to the horizontal-view camera along the right direction. To facilitate the matching to the top-view vectors, we normalize its value range to $[-1, 1]$ by
$$x^{hor} = \frac{\tilde{x} - \frac{W}{2}}{\frac{W}{2}}, \quad y^{hor} = \frac{1}{h} \quad (5)$$
where we simply take the inverse of the subject height as its depth to the horizontal-view camera. For all $M$ detected subjects in the horizontal-view image, we can find their vector representations and sort them by their $x^{hor}$ values in ascending order. We then stack them together as
$$\mathbf{V}^{hor} = (\mathbf{x}^{hor}, \mathbf{y}^{hor}) \in \mathbb{R}^{M \times 2} \quad (6)$$
where $\mathbf{x}^{hor}$ and $\mathbf{y}^{hor}$ are the column vectors of all the $x^{hor}$ and $y^{hor}$ values of the $M$ subjects detected in the horizontal-view image, respectively.
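Eqs. (5) and (6) reduce to a few lines, assuming detections are given as horizontal box centers and box heights in pixels:
```python
import numpy as np

def horizontal_view_vectors(centers_x, heights, W):
    """Eqs. (5)-(6): normalized horizontal offset and inverse-height depth
    for the M subjects detected in a horizontal-view image of width W."""
    x_hor = (np.asarray(centers_x, dtype=float) - W / 2.0) / (W / 2.0)
    y_hor = 1.0 / np.asarray(heights, dtype=float)
    order = np.argsort(x_hor)
    return np.stack([x_hor[order], y_hor[order]], axis=1)
```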
Vector Matching
In this section we associate the subjects across the two views by matching the vectors between the two vector sets $\mathbf{V}^{top}$ and $\mathbf{V}^{hor}$. Since the x values of both vector sets have been normalized to the range $[-1, 1]$, they can be directly compared. However, the y values of these two vector sets are not comparable, although both of them reflect the depth to the horizontal-view camera: the $y^{top}$ values are in terms of the number of pixels in the top-view image while the $y^{hor}$ values are in terms of the number of pixels in the horizontal-view image. It is non-trivial to normalize them to the same scale given their errors in reflecting the true depth: the depth estimated from $y^{hor}$ is very rough since it is sensitive to subject detection errors and to height differences among subjects.
We first find reliable subset matchings between $\mathbf{x}^{top}$ and $\mathbf{x}^{hor}$ and use them to estimate the scale difference between their corresponding y values. More specifically, we find a scaling factor $\mu$ to scale the $y^{top}$ values to make them comparable to the $y^{hor}$ values. For this purpose, we use a RANSAC-like strategy [6]: for each element $x^{top}$ in $\mathbf{V}^{top}$, we find the nearest $x^{hor}$ in $\mathbf{V}^{hor}$. If $|x^{top} - x^{hor}|$ is less than a very small threshold, we consider $x^{top}$ and $x^{hor}$ a matched pair and take the ratio of their corresponding y values; the average of this ratio over all the matched pairs is taken as the scaling factor $\mu$.
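The scale estimation above could look like the following sketch; the threshold value and the direction of the ratio (y_hor over y_top, so that mu * y_top becomes comparable to y_hor) are assumptions made to keep the example concrete.
```python
import numpy as np

def estimate_scale(V_top, V_hor, x_thresh=0.05):
    """RANSAC-like estimate of mu: average the ratio y_hor / y_top over
    pairs whose normalized x coordinates nearly coincide."""
    ratios = []
    for x_t, y_t in V_top:
        j = int(np.argmin(np.abs(V_hor[:, 0] - x_t)))   # nearest x_hor
        if abs(V_hor[j, 0] - x_t) < x_thresh and y_t > 0:
            ratios.append(V_hor[j, 1] / y_t)
    return float(np.mean(ratios)) if ratios else 1.0    # fall back to 1 if no reliable pair
```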
With the scaling factor $\mu$, we match $\mathbf{V}^{top}$ and $\mathbf{V}^{hor}$ using dynamic programming (DP) [17]. Specifically, we define a dissimilarity matrix $D$ of dimension $|K| \times M$, where $D_{ij}$ is the dissimilarity between $V_i^{top}$ and $V_j^{hor}$, defined by
$$D_{ij} = \lambda |x_i^{top} - x_j^{hor}| + |\mu y_i^{top} - y_j^{hor}| \quad (7)$$
where $\lambda > 0$ is a balance factor. Given that $\mathbf{x}^{top}$ and $\mathbf{x}^{hor}$ are both ascending sequences, we use a dynamic programming algorithm to search for a monotonic path in $D$ from $D_{1,1}$ to $D_{|K|,M}$ to build the matching between $\mathbf{V}^{top}$ and $\mathbf{V}^{hor}$ with the minimum total dissimilarity. If a vector $V^{top}$ matches multiple vectors in $\mathbf{V}^{hor}$, we only keep the one with the smallest dissimilarity given by Eq. (7). After that, we check whether a vector $V^{hor}$ matches multiple vectors in $\mathbf{V}^{top}$ and keep the one with the smallest dissimilarity. These two-step operations guarantee that the resulting matching is one-on-one, and we denote by $\gamma$ the number of final matched pairs. Denote the resulting matched vector subsets by $\mathbf{V}^{top*} = (\mathbf{x}^{top*}, \mathbf{y}^{top*})$ and $\mathbf{V}^{hor*} = (\mathbf{x}^{hor*}, \mathbf{y}^{hor*})$, both of dimension $\gamma \times 2$. We define the matching cost between $\mathbf{V}^{top}$ and $\mathbf{V}^{hor}$ as
$$\phi(\mathbf{V}^{top}, \mathbf{V}^{hor}) = \frac{1}{\gamma} \rho^{\frac{L}{\gamma}} \left( \lambda \|\mathbf{x}^{top*} - \mathbf{x}^{hor*}\|_1 + \|\mu \mathbf{y}^{top*} - \mathbf{y}^{hor*}\|_1 \right) \quad (8)$$
where $\rho > 1$ is a pre-specified factor and $L = \max(|K|, M)$. In this matching cost, the term $\rho^{\frac{L}{\gamma}}$ encourages the inclusion of more vector pairs in the final matching, which is important when we use this matching cost to search for the optimal camera location $O$ and view angle $\theta$, as discussed next.
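The sketch below puts Eqs. (7) and (8) together with a standard monotonic DP alignment and the two-step one-to-one pruning; the default parameter values follow the experimental settings quoted later (lambda = 0.015, rho = 25), and the rest is a generic illustration rather than the authors' implementation.
```python
import numpy as np

def matching_cost(V_top, V_hor, lam=0.015, mu=1.0, rho=25.0):
    """Dissimilarity matrix (Eq. (7)), monotonic DP alignment, two-step
    one-to-one pruning, and the matching cost of Eq. (8)."""
    K, M = len(V_top), len(V_hor)
    D = (lam * np.abs(V_top[:, None, 0] - V_hor[None, :, 0])
         + np.abs(mu * V_top[:, None, 1] - V_hor[None, :, 1]))
    # accumulate a monotonic path cost from D[0, 0] to D[K-1, M-1]
    acc = np.full((K, M), np.inf)
    acc[0, 0] = D[0, 0]
    for i in range(K):
        for j in range(M):
            if i == 0 and j == 0:
                continue
            cands = []
            if i > 0:
                cands.append(acc[i - 1, j])
            if j > 0:
                cands.append(acc[i, j - 1])
            if i > 0 and j > 0:
                cands.append(acc[i - 1, j - 1])
            acc[i, j] = D[i, j] + min(cands)
    # backtrack the minimum-cost monotonic path
    i, j, path = K - 1, M - 1, [(K - 1, M - 1)]
    while (i, j) != (0, 0):
        steps = []
        if i > 0 and j > 0:
            steps.append((acc[i - 1, j - 1], i - 1, j - 1))
        if i > 0:
            steps.append((acc[i - 1, j], i - 1, j))
        if j > 0:
            steps.append((acc[i, j - 1], i, j - 1))
        _, i, j = min(steps)
        path.append((i, j))
    # keep the cheapest pair per top-view subject, then per horizontal-view subject
    by_top, by_hor = {}, {}
    for i_, j_ in path:
        if i_ not in by_top or D[i_, j_] < D[i_, by_top[i_]]:
            by_top[i_] = j_
    for i_, j_ in by_top.items():
        if j_ not in by_hor or D[i_, j_] < D[by_hor[j_], j_]:
            by_hor[j_] = i_
    pairs = [(i_, j_) for j_, i_ in by_hor.items()]
    gamma, L = len(pairs), max(K, M)
    total = sum(lam * abs(V_top[i_, 0] - V_hor[j_, 0])
                + abs(mu * V_top[i_, 1] - V_hor[j_, 1]) for i_, j_ in pairs)
    return rho ** (L / gamma) * total / gamma
```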
Detecting Horizontal-View Camera and View Angle
In calculating the matching cost of Eq. (8), we need to know the horizontal-view camera location $O$ and its view angle $\theta$ to compute the vectors $\mathbf{V}^{top}$. In practice, we do not know $O$ and $\theta$ a priori. As mentioned earlier, we exhaustively try all possible values of $O$ and $\theta$ and then select the ones that lead to the minimum matching cost $\phi$.
The matching with this minimum cost provides the final cross-view subject association. For the view angle $\theta$, we sample its range $[0, 2\pi)$ uniformly with an interval of $\Delta\theta$, and in the experiments we report results using different sampling intervals. For the horizontal-view camera location $O$, we simply try every subject detected in the top-view image as the camera (wearer) location.
An occlusion in the horizontal-view image indicates that two subjects and the horizontal-view camera are collinear, as shown by $P_1$ and $P_2$ in Fig. 3(a). In this case, the subject with the larger depth, i.e., $P_2$, is not visible in the horizontal view, and we simply ignore this occluded subject in the vector representation $\mathbf{V}^{top}$. In practice, we set a tolerance threshold $\beta = 2^{\circ}$, and if $\langle \overrightarrow{OP_1}, \overrightarrow{OP_2} \rangle < \beta$, we ignore the one with the larger depth. The entire cross-view subject association algorithm is summarized in Algorithm 1; its core steps are:
Estimate the scaling factor $\mu$ as discussed in Section 3.3;
6: Calculate $D$ by Eq. (7) using $\mu$ and $\lambda$;
7: Calculate $\mathbf{V}^{top*}$ and $\mathbf{V}^{hor*}$ based on $D$ by the DP algorithm;
8: Calculate $\phi$ by Eq. (8);
9: Find $\theta$ with the minimum $\phi$.
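Combining the helper functions sketched above (top_view_vectors, horizontal_view_vectors, estimate_scale, and matching_cost), the exhaustive search over candidate camera wearers and view angles at the heart of Algorithm 1 could look as follows; the sampling step and the way the wearer is excluded from his own field of view are assumptions.
```python
import numpy as np

def locate_camera(P_top, V_hor, alpha, d_theta_deg=1.0):
    """Try every detected top-view subject as the camera wearer and every
    sampled view angle; return the hypothesis with the minimum matching cost."""
    best_cost, best_k, best_theta = np.inf, None, None
    for k, O in enumerate(P_top):
        others = np.delete(P_top, k, axis=0)          # the wearer does not see himself
        for theta in np.deg2rad(np.arange(0.0, 360.0, d_theta_deg)):
            view_dir = np.array([np.cos(theta), np.sin(theta)])
            V_top = top_view_vectors(others, O, view_dir, alpha)
            if len(V_top) == 0:
                continue
            mu = estimate_scale(V_top, V_hor)
            cost = matching_cost(V_top, V_hor, mu=mu)
            if cost < best_cost:
                best_cost, best_k, best_theta = cost, k, theta
    return best_cost, best_k, best_theta
```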
Experiment
In this section, we first describe the dataset used for performance evaluation and then introduce our experimental results.
Test Dataset
We did not find a publicly available dataset with corresponding top-view and horizontal-view images/videos and ground-truth labeling of the cross-view subject association. Therefore, we collect a new dataset for performance evaluation. Specifically, we use a GoPro HERO7 camera (mounted over the wearer's head) to take horizontal-view videos and a DJI "yu" Mavic 2 drone to take top-view videos. Both cameras were set to the same frame rate of 30 fps. We manually synchronize these videos such that corresponding frames are taken at the same time. We then temporally sample the two videos uniformly to construct frame (image) pairs for our dataset. Videos are taken at three different sites with different backgrounds, and the sampling interval is set to 100 frames to ensure the variety of the collected images. Finally, we obtain 220 image pairs from top and horizontal views, and for both views the image resolution is 2,688 × 1,512. We label the same persons across the two videos on all 220 image pairs. Note that this manual labeling is quite labor-intensive given the difficulty of identifying persons in the top-view images (see Fig. 1 for an example).
For evaluating the proposed method more comprehensively, we examine all 220 image pairs and consider the following five attributes: Occ: horizontal-view images containing partially or fully occluded subjects; Hor mov: the horizontal-view images sampled from videos when the camera-wearer moves and rotates his head. Hor rot: the horizontal-view images sampled from videos when the camera-wearer rotates his head. Hor sta: the horizontalview images sampled from videos when the camera-wearer stays static. TV var: the top-view images sampled from videos when the drone moves up, down and/or change camera-view direction. Table 1 shows the number of image pairs with these five attributes, respectively. Note that some image pairs show multiple attributes listed above. For each pair of images, we analyze two more properties. One is the number of subjects in an image, which reflects the level of crowdedness. The other is the proportion between the number of shared subjects in two views and the total number of subjects in an image. Both of them can be computed against either the top-view image or the horizontal-view image and their histograms on all 220 image pairs are shown in Fig. 4.
In this paper, we use two metrics for performance evaluation. 1) The accuracy in identifying the horizontal-view camera wearer in the top-view image, and 2) the precision and recall of cross-view subject association. We do not include the camera-view angle θ for evaluation because it is difficult to annotate its ground truth.
Experiment Setup
We implement the proposed method in Matlab and run on a desktop computer with an Intel Core i7 3.4GHz CPU. We use the general YOLO [15] detector to detect subjects in the form of bounding boxes in both top-view and horizontal-view images 1 . The pre-specified parameters ρ and λ are set to 25 and 0.015 respectively. We will further discuss the influence of these parameters in Section 4.4.
We did not find available methods with code that can directly handle our top- and horizontal-view subject association. One related work is [3] for cross-view matching. However, we could not include it directly in the comparison because 1) its code is not publicly available, and 2) it computes optical flow for θ and therefore cannot handle a pair of static images in our dataset. In fact, the method in [3] assumes a certain slope view angle of the top-view camera and uses appearance matching for cross-view association. This is similar to the appearance-matching-based person re-id methods.
In this paper, we chose a recent person re-id method [19] for comparison. We take each subject detected in the horizontal-view image as a query and search for it in the set of subjects detected in the top-view image. We tried two versions of this re-id method: one is retrained from scratch using 1,000 sample subjects collected by ourselves (with no overlap with the subjects in our test dataset), and the other is fine-tuned from the version provided in [19] on these 1,000 sample subjects.
Results
We apply the proposed method to all 220 pairs of images in our dataset. We detect the horizontal-view camera wearer on the top-view image as described in 3.4 and the detection accuracy is 84.1%. We also use the Cumulative Matching Characteristic (CMC) curve to evaluate the matching accuracy, as shown in Fig. 5(a), where the horizontal and vertical axes are the CMC rank and the matching accuracy respectively.
For a pair of images, we use the precision and recall scores to evaluate the cross-view subject association. As shown in Table 2, the average precision and recall scores of our method are 79.6% and 77.0%, respectively. In this table, 'Ours w O' indicates the use of our method given the ground-truth camera location O. We can also see from this table that the re-id method, either retrained or fine-tuned, produces very poor results, which confirms the difficulty of using appearance features for the proposed cross-view subject association.
We also calculate the proportion of all the image pairs with a precision or recall score of 1 (Prec.@1 and Reca.@1); they reach 60.0% and 50.9%, respectively. The distributions of these two scores on all 220 image pairs are shown in Fig. 5(b). Figure 5. (a) The CMC curve for horizontal-view camera detection. (b) Precision and recall scores in association, where the horizontal axis denotes a precision or recall score x, and the vertical axis denotes the proportion of image pairs with a corresponding precision or recall score greater than x. In Table 3, we report the evaluation results on the different subsets with the respective attributes. We can see that the proposed method is not sensitive to the motion of either the top-view or the horizontal-view camera, which is highly desirable for moving-camera applications.
Ablation Studies
Step Length for θ. We study the influence of the value ∆θ, the step length for searching the optimal camera view angle θ in the range [0, 2π). We set the value of ∆θ to 1°, 5°, and 10°, respectively, and the association results are shown in Table 4. As expected, ∆θ = 1° leads to the highest performance, although a larger step length, such as ∆θ = 5°, also produces acceptable results.
Vector representation. Next we compare the association results using different vector representation methods, as shown in Table 5. The first row denotes representing the subjects in the two views by the one-dimensional vectors $\mathbf{x}^{top}$ and $\mathbf{x}^{hor}$, respectively. The second row denotes representing the subjects in the two views by the one-dimensional vectors $\mathbf{y}^{top}$ and $\mathbf{y}^{hor}$, respectively, which are simply normalized to the range [0, 1] to make them comparable. The third row denotes combining the one-dimensional vectors of the first and second rows to represent each view, which differs from our proposed method (the fourth row of Table 5) only in the normalization of $\mathbf{y}^{top}$ and $\mathbf{y}^{hor}$: our proposed method uses a RANSAC strategy. By comparing the results in the third and fourth rows, we can see that using the RANSAC strategy to estimate the scaling factor µ does improve the final association performance. The results in the first and second rows show that using only one dimension of the proposed vector representation cannot achieve performance as good as the proposed method that combines both dimensions. We can also see that $\mathbf{x}^{top}$ and $\mathbf{x}^{hor}$ provide more accurate information than $\mathbf{y}^{top}$ and $\mathbf{y}^{hor}$ when used for cross-view subject association. Parameters selection. There are two free parameters ρ and λ in Eq. (8). We select different values for them to see their influence on the final association performance. Table 6 reports the results obtained by varying one of these two parameters while fixing the other. We can see that the final association precision and recall scores are not very sensitive to the selected values of these two parameters. Detection method. In order to analyze the influence of subject detection accuracy on the proposed cross-view association, we tried the use of different subject detections. As shown in Table 7, in the first row, we use manually annotated bounding boxes of each subject in both views for the proposed association. In the second and third rows, we use manually annotated subjects on the top-view images and horizontal-view images, respectively, while using automatically detected subjects [15] on the other-view images. In the fourth row, we first automatically detect subjects in both views, and then only keep those that show an IoU > 0.5 (Intersection over Union) against a manually annotated subject, in terms of their bounding boxes. We can see that the use of manually annotated subjects produces much better cross-view subject association. This indicates that further efforts to improve subject detection will benefit the association.
Discussion
Number of associated subjects. We investigate the correlation between the association performance and the number of associated subjects. Figure 6(a) shows the average association performance on the image pairs with different numbers of associated subjects. We can see that the association results get worse when the number of associated subjects is too high or too low. When there are too many associated subjects, the crowded subjects in the horizontal view may prevent accurate detection of the subjects. When there are too few subjects, the constructed vector representation is not sufficiently discriminative to locate the camera location O and the camera-view angle θ. Figure 6(b) shows the average association performance on the image pairs with different proportions of associated subjects. More specifically, the performance at x along the horizontal axis is the average precision/recall score on all the image pairs whose proportion of associated subjects (relative to the total number of subjects in the top-view image) is less than x. This confirms that the association is more reliable on images with a higher such proportion.
Occlusion. Occlusions are very common, as shown in Table 1. Table 8 shows the association results on the entire dataset and the subset of data with occlusions, by using the proposed method with and without the step of identifying and ignoring occluded subjects. We can see that our simple strategy for handling occlusion can significantly improve the association performance on the image pairs with occlusions. Sample results on image pairs with occlusions are shown in the top row of Fig. 7, where associated subjects bear same number labels. We can see that occlusions occur more often when 1) the subjects are crowded, and 2) one subject is very close to the horizontal-view camera.
Proportion of shared subjects. It is a common situation that many subjects in two views are not the same persons. In this case, the shared subjects may only count for a small proportion in both top-and horizontal-views. Two examples are shown in the second row of Fig. 7. In the left, we show a case where many subjects in the top view are not in the field of view of the horizontal-view camera. In the right, we show a case where many subjects in the horizontal view are too far from the horizontal-view camera and not covered by the top-view camera. We can see that the proposed method can handle these two cases very well, by exploring the spatial distribution of the shared subjects.
Failure case. Finally, we show two failure cases in Fig. 8: one is caused by errors in subject detection (blue boxes) and the other is caused by the close distance between multiple subjects, e.g., subjects 3, 4, and 5, in either the top or the horizontal view, which leads to erroneous occlusion detection and incorrect vector representations.
Conclusion
In this paper, we developed a new method to associate multiple subjects across top-view and horizontal-view images by modeling and matching the subjects' spatial distributions. We constructed a vector representation for all the detected subjects in the horizontal-view image and another vector representation for all the detected subjects in the top-view image that are located in the field of view of the horizontal-view camera. These two vector representations are then matched for cross-view subject association. We proposed a new matching cost function with which we can further optimize for the location and view angle of the horizontal-view camera in the top-view image. We collected a new dataset, together with manually labeled ground-truth cross-view subject associations, and the experimental results on this dataset are very promising. | 4,761
1907.11458 | 2966260985 | Video surveillance can be significantly enhanced by using both top-view data, e.g., those from drone-mounted cameras in the air, and horizontal-view data, e.g., those from wearable cameras on the ground. Collaborative analysis of different-view data can facilitate various kinds of applications, such as human tracking, person identification, and human activity recognition. However, for such collaborative analysis, the first step is to associate people, referred to as subjects in this paper, across these two views. This is a very challenging problem due to large human-appearance difference between top and horizontal views. In this paper, we present a new approach to address this problem by exploring and matching the subjects' spatial distributions between the two views. More specifically, on the top-view image, we model and match subjects' relative positions to the horizontal-view camera in both views and define a matching cost to decide the actual location of horizontal-view camera and its view angle in the top-view image. We collect a new dataset consisting of top-view and horizontal-view image pairs for performance evaluation and the experimental results show the effectiveness of the proposed method. | As mentioned above, cross-view subject association can be treated as a person re-id problem, which has been widely studied in recent years. Most existing re-id methods can be grouped into two classes: similarity learning and representation learning. The former focuses on learning the similarity metric, e.g., the invariant feature learning based models @cite_11 @cite_16 @cite_18 , classical metric learning models @cite_23 @cite_6 @cite_0 , and deep metric learning models @cite_17 @cite_4 . The latter focuses on feature learning, including low-level visual features such as color, shape, and texture @cite_14 @cite_22 , and more recent CNN deep features @cite_13 @cite_12 . These methods assume that all the data are taken from horizontal views, with similar or different horizontal view angles, and almost all of these methods are based on appearance matching. In this paper, we attempt to re-identify subjects across top and horizontal views, where appearance matching is not an appropriate choice. | {
"abstract": [
"Color naming, which relates colors with color names, can help people with a semantic analysis of images in many computer vision applications. In this paper, we propose a novel salient color names based color descriptor (SCNCD) to describe colors. SCNCD utilizes salient color names to guarantee that a higher probability will be assigned to the color name which is nearer to the color. Based on SCNCD, color distributions over color names in different color spaces are then obtained and fused to generate a feature representation. Moreover, the effect of background information is employed and analyzed for person re-identification. With a simple metric learning method, the proposed approach outperforms the state-of-the-art performance (without user’s feedback optimization) on two challenging datasets (VIPeR and PRID 450S). More importantly, the proposed feature can be obtained very fast if we compute SCNCD of each color in advance.",
"Viewpoint invariant pedestrian recognition is an important yet under-addressed problem in computer vision. This is likely due to the difficulty in matching two objects with unknown viewpoint and pose. This paper presents a method of performing viewpoint invariant pedestrian recognition using an efficiently and intelligently designed object representation, the ensemble of localized features (ELF). Instead of designing a specific feature by hand to solve the problem, we define a feature space using our intuition about the problem and let a machine learning algorithm find the best representation. We show how both an object class specific representation and a discriminative recognition model can be learned using the AdaBoost algorithm. This approach allows many different kinds of simple features to be combined into a single similarity function. The method is evaluated using a viewpoint invariant pedestrian recognition dataset and the results are shown to be superior to all previous benchmarks for both recognition and reacquisition of pedestrians.",
"",
"",
"Human re-identification is to match a pair of humans appearing in different cameras with non-overlapping views. However, in order to achieve this task, we need to overcome several challenges such as variations in lighting, viewpoint, pose and colour. In this paper, we propose a new approach for person re-identification in multi-camera networks by using a hierarchical structure with a Siamese Convolution Neural Network (SCNN). A set of human pairs is projected into the same feature subspace through a nonlinear transformation that is learned by using a convolution neural network. The learning process minimizes the loss function, which ensures that the similarity distance between positive pairs is less than lower threshold and the similarity distance between negative pairs is higher than upper threshold. Our experiment is achieved by using a small scale of dataset due to the computation time. Viewpoint Invariant Pedestrian Recognition (VIPeR) dataset is used in our experiment, since it is widely employed in this field. Initial results suggest that the proposed SCNN structure has good performance in people re-identification.",
"",
"",
"In this paper, we raise important issues on scalability and the required degree of supervision of existing Mahalanobis metric learning methods. Often rather tedious optimization procedures are applied that become computationally intractable on a large scale. Further, if one considers the constantly growing amount of data it is often infeasible to specify fully supervised labels for all data points. Instead, it is easier to specify labels in form of equivalence constraints. We introduce a simple though effective strategy to learn a distance metric from equivalence constraints, based on a statistical inference perspective. In contrast to existing methods we do not rely on complex optimization problems requiring computationally expensive iterations. Hence, our method is orders of magnitudes faster than comparable methods. Results on a variety of challenging benchmarks with rather diverse nature demonstrate the power of our method. These include faces in unconstrained environments, matching before unseen object instances and person re-identification across spatially disjoint cameras. In the latter two benchmarks we clearly outperform the state-of-the-art.",
"",
"This paper presents a novel large-scale dataset and comprehensive baselines for end-to-end pedestrian detection and person recognition in raw video frames. Our baselines address three issues: the performance of various combinations of detectors and recognizers, mechanisms for pedestrian detection to help improve overall re-identification (re-ID) accuracy and assessing the effectiveness of different detectors for re-ID. We make three distinct contributions. First, a new dataset, PRW, is introduced to evaluate Person Re-identification in the Wild, using videos acquired through six synchronized cameras. It contains 932 identities and 11,816 frames in which pedestrians are annotated with their bounding box positions and identities. Extensive benchmarking results are presented on this dataset. Second, we show that pedestrian detection aids re-ID through two simple yet effective improvements: a cascaded fine-tuning strategy that trains a detection model first and then the classification model, and a Confidence Weighted Similarity (CWS) metric that incorporates detection scores into similarity measurement. Third, we derive insights in evaluating detector performance for the particular scenario of accurate person re-ID.",
"",
"Person re-identification is an important technique towards automatic search of a person's presence in a surveillance video. Two fundamental problems are critical for person re-identification, feature representation and metric learning. An effective feature representation should be robust to illumination and viewpoint changes, and a discriminant metric should be learned to match various person images. In this paper, we propose an effective feature representation called Local Maximal Occurrence (LOMO), and a subspace and metric learning method called Cross-view Quadratic Discriminant Analysis (XQDA). The LOMO feature analyzes the horizontal occurrence of local features, and maximizes the occurrence to make a stable representation against viewpoint changes. Besides, to handle illumination variations, we apply the Retinex transform and a scale invariant texture operator. To learn a discriminant metric, we propose to learn a discriminant low dimensional subspace by cross-view quadratic discriminant analysis, and simultaneously, a QDA metric is learned on the derived subspace. We also present a practical computation method for XQDA, as well as its regularization. Experiments on four challenging person re-identification databases, VIPeR, QMUL GRID, CUHK Campus, and CUHK03, show that the proposed method improves the state-of-the-art rank-1 identification rates by 2.2 , 4.88 , 28.91 , and 31.55 on the four databases, respectively."
],
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_4",
"@cite_22",
"@cite_17",
"@cite_6",
"@cite_0",
"@cite_23",
"@cite_16",
"@cite_13",
"@cite_12",
"@cite_11"
],
"mid": [
"46454230",
"1518138188",
"",
"",
"2250055846",
"",
"",
"2068042582",
"",
"2963901085",
"",
"1949591461"
]
} | Multiple Human Association between Top and Horizontal Views by Matching Subjects' Spatial Distributions | The advancement of moving-camera technologies provides a new perspective for video surveillance. Unmanned aerial vehicles (UAVs), such as drones in the air, can provide top views of a group of subjects on the ground. Wearable cameras, such as Google Glass and GoPro, mounted over the head of a wearer (one of the subjects on the ground), can provide horizontal views of the same group of subjects. As shown in Fig. 1, the data collected from these two views complement each other well: top-view images contain no mutual occlusions and exhibit a global picture and the relative positions of the subjects, while horizontal-view images can capture the detailed appearance, action, and behavior of subjects of interest at a much closer distance. Clearly, their collaborative analysis can significantly improve video-surveillance capabilities such as human tracking, human detection, and activity recognition.
Figure 1. An illustration of the top-view (left) and horizontal-view (right) images. The former is taken by a camera mounted to a drone in the air and the latter is taken by a GoPro worn by a wearer who walked on the ground. The proposed method identifies on the top-view image the location and view angle of the camera (indicated by the red box) that produces the horizontal-view image, and associates subjects, indicated by identical color boxes, across these two videos.
The first step for such a collaborative analysis is to accurately associate the subjects across these two views, i.e., we need to identify any person present in both views and identify his location in both views, as shown in Fig. 1. In general, this can be treated as a person re-identification (re-id) problem: for each subject in one view, re-identify him in the other view. However, this is a very challenging person re-id problem because the same subject may show a totally different appearance in the top and horizontal views, not to mention that the top view of a subject contains very limited features, showing only the top of the head and shoulders, which makes it very difficult to distinguish different subjects from their top views, as shown in Fig. 1.
Prior works [1,2,3] tried to alleviate the challenge of this problem by assuming 1) the view direction of the top-view camera in the air has a certain slope such that the subjects' bodies, and even part of the background, are still partially visible in top views and can be used for feature matching to the horizontal views, and 2) the view angle of the horizontal-view camera on the ground is consistent with the moving direction of the camera wearer and can be easily estimated by computing optical flow in the top-view videos. This can be used to identify the on-the-ground camera wearer in the top-view video. These two assumptions, however, limit their applicability in practice, e.g., the horizontal-view camera wearer may turn his head (and therefore the head-mounted camera) when he walks, leading to inconsistency between his moving direction and the wearable-camera view direction.
In this paper, we develop a new approach to associate subjects across top and horizontal views without the above two assumptions. Our main idea is to explore the spatial distribution of the subjects for cross-view subject association. From the horizontal-view image, we detect all the subjects and estimate their depths and spatial distribution using the sizes and locations of the detected subjects, respectively. On the corresponding top-view image, we traverse each detected subject and each possible direction to localize the horizontal-view camera (wearer), as well as its view angle. For each traversed location and direction, we estimate the spatial distribution of all the visible subjects. We finally define a matching cost between the subjects' spatial distributions in the top and horizontal views to decide the horizontal-view camera location and view angle, with which we can associate the subjects across the two views. In the experiments, we collect a new dataset consisting of image pairs from top and horizontal views for performance evaluation. Experimental results verify that the proposed method can effectively associate multiple subjects across top and horizontal views.
The main contributions of this paper are: 1) We propose to use the spatial distribution of multiple subjects for associating subjects across top and horizontal views, instead of the subject appearance and motion used in prior works. 2) We develop geometry-based algorithms to model and match the subjects' spatial distributions across top and horizontal views. 3) We collect a new dataset of top-view and horizontal-view images for evaluating the proposed cross-view subject association.
The remainder of this paper is organized as follows. Section 2 reviews the related work. Section 3 elaborates on the proposed method and Section 4 reports the experimental results, followed by a brief conclusion in Section 5.
Proposed Method
In this section, we first give an overview of the proposed method and then elaborate on the main steps.
Overview
Given a top-view image and a horizontal-view image that are taken by the respective cameras at the same time, we detect all persons (referred to as subjects in this paper) on both images by a person detector [15]. Let $\mathcal{T} = \{O^{top}_i\}_{i=1}^{N}$ and $\mathcal{H} = \{O^{hor}_j\}_{j=1}^{M}$ denote the sets of detected subjects in the top view and the horizontal view, respectively, with $O^{top}_i$ being the $i$-th and $O^{hor}_j$ the $j$-th detected subject. The goal of cross-view subject association is to identify all the matched subjects between $\mathcal{T}$ and $\mathcal{H}$ that indicate the same persons.
In this paper, we address this problem by exploring the spatial distributions of the detected subjects in both views.
More specifically, from each detected subject $O^{top}_i$ in the top view, we infer a vector $V^{top}_i = (x^{top}_i, y^{top}_i)$ that reflects its relative position to the horizontal-view camera (wearer) on the ground. Then for each detected subject $O^{hor}_j$ in the horizontal view, we also infer a vector $V^{hor}_j = (x^{hor}_j, y^{hor}_j)$ to reflect its relative position to the horizontal-view camera on the ground. We associate the subjects detected in the two views by seeking matchings between the two vector sets
$V^{top}(\mathcal{T}, \theta, O) = \{V^{top}_i\}_{i=1}^{N}$ and $V^{hor}(\mathcal{H}) = \{V^{hor}_j\}_{j=1}^{M}$,
where $O$ and $\theta$ are the location and view angle of the horizontal-view camera (wearer) in the top-view image, and they are unknown a priori. Finally, we define a matching cost function $\phi$ to measure the dissimilarity between the two vector sets and optimize this function for finding the matching subjects between the two views, as well as the camera location $O$ and camera view angle $\theta$. In the following, we elaborate on each step of the proposed method.
Vector Representation
In this section, we discuss how to derive $V^{top}$ and $V^{hor}$. On the top-view image, we first assume that the horizontal-view camera location $O$ and its view angle $\theta$ are given. This way, we can compute its field of view in the top-view image and all the detected subjects' relative positions to the horizontal-view camera on the ground. The horizontal-view image is egocentric, and we can compute the detected subjects' relative positions to the camera based on the subjects' sizes and positions in the horizontal-view image.
Top-View Vector Representation
As shown in Fig. 2(a), in the top-view image we can easily compute the left and right boundaries of the field of view of the horizontal-view camera, denoted by $L$ and $R$, respectively, based on the given camera location $O$ and its view angle $\theta$. For a subject at $P$ in the field of view, we estimate its relative position to the horizontal-view camera by using two geometric parameters $\hat{x}$ and $\hat{y}$, where $\hat{x}$ is the (signed) distance to the horizontal-view camera along the (camera) right direction $V$, as shown in Fig. 2(a), and $\hat{y}$ is the depth. Based on the pinhole camera model, we can calculate them by
$\hat{x} = f \cot\langle \overrightarrow{OP}, V \rangle, \quad \hat{y} = |\overrightarrow{OP}| \cdot \sin\langle \overrightarrow{OP}, V \rangle,$ (1)
where $\langle \cdot, \cdot \rangle$ indicates the angle between two directions and $f$ is the focal length of the horizontal-view camera.
Next we consider the range of $\hat{x}$. From Fig. 2(a), we can get
$\hat{x}_{\min} = f \cot\langle L, V \rangle = f \cot\left(\frac{\pi+\alpha}{2}\right), \quad \hat{x}_{\max} = f \cot\langle R, V \rangle = f \cot\left(\frac{\pi-\alpha}{2}\right),$ (2)
where $\alpha \in [0, \pi]$ is the given field-of-view angle of the horizontal-view camera, as indicated in Fig. 2(a). From Eq. (2), we have $\hat{x}_{\max} = -\hat{x}_{\min} > 0$.
To enable the matching to the vector representation from the horizontal view, we further normalize the value range of $\hat{x}$ to $[-1, 1]$, i.e.,
$x^{top} = \frac{\hat{x}}{f \cot\left(\frac{\pi-\alpha}{2}\right)}, \quad y^{top} = \hat{y}.$ (3)
With this normalization, we actually do not need the actual value of f in the proposed method.
Let $O^{top}_k, k \in K \subset \{1, 2, \cdots, N\}$ be the subset of detected subjects in the field of view in the top-view image. We can find the vector representation for all of them and sort them in terms of their $x^{top}$ values in ascending order. We then stack them together as
$V^{top} = (\mathbf{x}^{top}, \mathbf{y}^{top}) \in \mathbb{R}^{|K| \times 2},$ (4)
where $|K|$ is the size of $K$, and $\mathbf{x}^{top}$ and $\mathbf{y}^{top}$ are the column vectors of all the $x^{top}$ and $y^{top}$ values of the subjects in the field of view, respectively.
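To make the geometry above concrete, here is a minimal NumPy sketch of how the top-view vectors of Eqs. (1)-(4) could be computed. The function name, the angle convention for $\theta$, and the choice $f = 1$ (which cancels in the normalization of Eq. (3)) are illustrative assumptions, not the paper's Matlab implementation.

```python
import numpy as np

def top_view_vectors(P, O, theta, alpha):
    """Normalized top-view representation (Eqs. (1)-(4)) for subjects at top-view
    positions P (K x 2), given a hypothesized horizontal-view camera at O with
    viewing-direction angle theta and field-of-view angle alpha (both in radians)."""
    right = np.array([np.cos(theta - np.pi / 2),      # camera right direction V
                      np.sin(theta - np.pi / 2)])     # (assumed angle convention)
    OP = P - O                                        # vectors from camera to subjects
    dist = np.linalg.norm(OP, axis=1)
    cos_a = OP @ right / np.maximum(dist, 1e-9)       # cosine of the angle <OP, V>
    ang = np.arccos(np.clip(cos_a, -1.0, 1.0))
    in_fov = (ang > (np.pi - alpha) / 2) & (ang < (np.pi + alpha) / 2)
    x_hat = 1.0 / np.tan(ang[in_fov])                 # Eq. (1) with f = 1
    y_top = dist[in_fov] * np.sin(ang[in_fov])        # depth along the viewing direction
    x_top = x_hat * np.tan((np.pi - alpha) / 2)       # Eq. (3): divide by f*cot((pi-alpha)/2)
    order = np.argsort(x_top)                         # sort by x_top, Eq. (4)
    V_top = np.stack([x_top[order], y_top[order]], axis=1)
    return V_top, np.where(in_fov)[0][order]          # vectors and original subject indices
```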
Horizontal-View Vector Representation
For each subject in the horizontal-view image, we also compute a vector representation to make it consistent with the top-view vector representation, i.e., the x-value reflects the distance to the horizontal-view camera along the right direction and the y-value reflects the depth to the horizontal-view camera. As shown in Fig. 2(b), in the horizontal-view image, let $Q = (\tilde{x}, \tilde{y})$ and $h$ be the location and height of a detected subject, respectively. If we take the top-left corner of the image as the origin of the coordinates, $\tilde{x} - \frac{W}{2}$, with $W$ being the width of the horizontal-view image, is actually the subject's distance to the horizontal-view camera along the right direction. To facilitate the matching to the top-view vectors, we normalize its value range to $[-1, 1]$ by
$x^{hor} = \frac{\tilde{x} - \frac{W}{2}}{\frac{W}{2}}, \quad y^{hor} = \frac{1}{h},$ (5)
where we simply take the inverse of the subject height as its depth to the horizontal-view camera. For all $M$ detected subjects in the horizontal-view image, we can find their vector representations and sort them in terms of their $x^{hor}$ values in ascending order. We then stack them together as
$V^{hor} = (\mathbf{x}^{hor}, \mathbf{y}^{hor}) \in \mathbb{R}^{M \times 2},$ (6)
where $\mathbf{x}^{hor}$ and $\mathbf{y}^{hor}$ are the column vectors of all the $x^{hor}$ and $y^{hor}$ values of the $M$ subjects detected in the horizontal-view image, respectively.
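A corresponding sketch for the horizontal-view side (Eqs. (5)-(6)); the bounding-box format (center x, center y, width, height) is an assumption for illustration.

```python
import numpy as np

def horizontal_view_vectors(boxes, W):
    """Normalized horizontal-view representation (Eqs. (5)-(6)) from detected
    bounding boxes given as rows of (x_center, y_center, width, height) in pixels,
    with W the width of the horizontal-view image."""
    x_c, h = boxes[:, 0], boxes[:, 3]
    x_hor = (x_c - W / 2.0) / (W / 2.0)    # signed offset from the image center, in [-1, 1]
    y_hor = 1.0 / h                        # inverse box height as a rough depth proxy
    order = np.argsort(x_hor)              # sort by x_hor, Eq. (6)
    return np.stack([x_hor[order], y_hor[order]], axis=1), order
```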
Vector Matching
In this section, we associate the subjects across the two views by matching the vectors between the two vector sets $V^{top}$ and $V^{hor}$. Since the x values of both vector sets have been normalized to the range $[-1, 1]$, they can be directly compared. However, the y values in these two vector sets are not comparable, although both of them reflect the depth to the horizontal-view camera: $y^{top}$ values are in terms of the number of pixels in the top-view image, while $y^{hor}$ values are in terms of the number of pixels in the horizontal-view image. It is non-trivial to normalize them to the same scale given their errors in reflecting the true depth: the depth estimated from $y^{hor}$ is very rough since it is very sensitive to subject detection errors and height differences among subjects.
We first find reliable subset matchings between $\mathbf{x}^{top}$ and $\mathbf{x}^{hor}$ and use them to estimate the scale difference between their corresponding y values. More specifically, we find a scaling factor $\mu$ to scale the $y^{top}$ values to make them comparable to the $y^{hor}$ values. For this purpose, we use a RANSAC-like strategy [6]: for each element $x^{top}$ in $V^{top}$, we find the nearest $x^{hor}$ in $V^{hor}$. If $|x^{top} - x^{hor}|$ is less than a very small threshold value, we consider $x^{top}$ and $x^{hor}$ a matched pair and take the ratio of their corresponding y values; the average of this ratio over all the matched pairs is taken as the scaling factor $\mu$.
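The scale estimation described above could look like the following sketch; the matching threshold eps is an assumed value, and returning 1.0 when no reliable pair is found is our own fallback choice rather than the paper's.

```python
import numpy as np

def estimate_scale(V_top, V_hor, eps=0.05):
    """RANSAC-like estimate of the scaling factor mu such that mu * y_top ~ y_hor,
    using only pairs whose x values already agree closely."""
    ratios = []
    for x_t, y_t in V_top:
        j = int(np.argmin(np.abs(V_hor[:, 0] - x_t)))   # nearest x_hor for this x_top
        if abs(V_hor[j, 0] - x_t) < eps and y_t > 0:
            ratios.append(V_hor[j, 1] / y_t)            # ratio of corresponding y values
    return float(np.mean(ratios)) if ratios else 1.0    # assumed fallback when no pair matches
```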
With the scaling factor $\mu$, we match $V^{top}$ and $V^{hor}$ using dynamic programming (DP) [17]. Specifically, we define a dissimilarity matrix $D$ of dimension $|K| \times M$, where $D_{ij}$ is the dissimilarity between $V^{top}_i$ and $V^{hor}_j$ and is defined by
$D_{ij} = \lambda |x^{top}_i - x^{hor}_j| + |\mu y^{top}_i - y^{hor}_j|,$ (7)
where $\lambda > 0$ is a balance factor. Given that $\mathbf{x}^{top}$ and $\mathbf{x}^{hor}$ are both ascending sequences, we use a dynamic programming algorithm to search for a monotonic path in $D$ from $D_{1,1}$ to $D_{|K|,M}$ to build the matching between $V^{top}$ and $V^{hor}$ with minimum total dissimilarity. If a vector in $V^{top}$ matches multiple vectors in $V^{hor}$, we only keep the one with the smallest dissimilarity given in Eq. (7). After that, we check whether a vector in $V^{hor}$ matches multiple vectors in $V^{top}$ and keep the one with the smallest dissimilarity. These two-step operations guarantee that the resulting matching is one-to-one, and we denote by $\gamma$ the number of final matched pairs. Denote the resulting matched vector subsets by $V^{top*} = (\mathbf{x}^{top*}, \mathbf{y}^{top*})$ and $V^{hor*} = (\mathbf{x}^{hor*}, \mathbf{y}^{hor*})$, both of dimension $\gamma \times 2$. We define a matching cost between $V^{top}$ and $V^{hor}$ as
$\phi(V^{top}, V^{hor}) = \frac{1}{\gamma} \rho^{\frac{L}{\gamma}} \left( \lambda \|\mathbf{x}^{top*} - \mathbf{x}^{hor*}\|_1 + \|\mu \mathbf{y}^{top*} - \mathbf{y}^{hor*}\|_1 \right),$ (8)
where $\rho > 1$ is a pre-specified factor and $L = \max(|K|, M)$. In this matching cost, the term $\rho^{\frac{L}{\gamma}}$ encourages the inclusion of more vector pairs into the final matching, which is important when we use this matching cost to search for the optimal camera location $O$ and view angle $\theta$, as discussed next.
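The sketch below is one possible reading of the monotonic DP alignment and the cost of Eq. (8); it follows the description above but is not the authors' code, and the default values of lam and rho are the ones reported later in the experiment setup.

```python
import numpy as np

def match_and_cost(V_top, V_hor, mu, lam=0.015, rho=25.0):
    """DTW-style monotonic alignment under the dissimilarity of Eq. (7), pruned to
    one-to-one matches, followed by the matching cost of Eq. (8)."""
    K, M = len(V_top), len(V_hor)
    if K == 0 or M == 0:
        return [], np.inf
    # Pairwise dissimilarity matrix, Eq. (7)
    D = (lam * np.abs(V_top[:, 0:1] - V_hor[:, 0][None, :])
         + np.abs(mu * V_top[:, 1:2] - V_hor[:, 1][None, :]))
    # Minimum-cost monotonic path from D[0, 0] to D[K-1, M-1]
    C = np.full((K, M), np.inf)
    C[0, 0] = D[0, 0]
    for i in range(K):
        for j in range(M):
            if i == 0 and j == 0:
                continue
            prev = min(C[i - 1, j] if i > 0 else np.inf,
                       C[i, j - 1] if j > 0 else np.inf,
                       C[i - 1, j - 1] if i > 0 and j > 0 else np.inf)
            C[i, j] = D[i, j] + prev
    # Backtrack the path
    i, j, path = K - 1, M - 1, []
    while True:
        path.append((i, j))
        if i == 0 and j == 0:
            break
        cands = [(C[i - 1, j - 1], i - 1, j - 1)] if i > 0 and j > 0 else []
        if i > 0:
            cands.append((C[i - 1, j], i - 1, j))
        if j > 0:
            cands.append((C[i, j - 1], i, j - 1))
        _, i, j = min(cands)
    # Two-step pruning so that the final matching is one-to-one
    best_i, best_j = {}, {}
    for i, j in path:
        if i not in best_i or D[i, j] < D[i, best_i[i]]:
            best_i[i] = j
    for i, j in best_i.items():
        if j not in best_j or D[i, j] < D[best_j[j], j]:
            best_j[j] = i
    pairs = [(i, j) for j, i in best_j.items()]
    gamma, L = len(pairs), max(K, M)
    l1 = sum(lam * abs(V_top[i, 0] - V_hor[j, 0])
             + abs(mu * V_top[i, 1] - V_hor[j, 1]) for i, j in pairs)
    phi = (1.0 / gamma) * rho ** (L / gamma) * l1      # Eq. (8)
    return pairs, phi
```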
Detecting Horizontal-View Camera and View Angle
In calculating the matching cost of Eq. (8), we need to know the horizontal-view camera location $O$ and its view angle $\theta$ to compute the vectors $V^{top}$. In practice, we do not know $O$ and $\theta$ a priori. As mentioned earlier, we exhaustively try all possible values for $O$ and $\theta$ and then select the ones that lead to the minimum matching cost $\phi$.
The matching with this minimum cost provides the final cross-view subject association. For the view angle $\theta$, we sample its range $[0, 2\pi)$ uniformly with an interval of $\Delta\theta$, and in the experiments we report results using different sampling intervals. For the horizontal-view camera location $O$, we simply try every subject detected in the top-view image as the camera (wearer) location.
An occlusion in the horizontal-view image indicates that two subjects and the horizontal-view camera are collinear, as shown by $P_1$ and $P_2$ in Fig. 3(a). In this case, the subject with larger depth, i.e., $P_2$, is not visible in the horizontal view and we simply ignore this occluded subject in the vector representation $V^{top}$. In practice, we set a tolerance threshold $\beta = 2°$, and if $\langle \overrightarrow{OP_1}, \overrightarrow{OP_2} \rangle < \beta$, we ignore the one with larger depth. The entire cross-view subject association algorithm is summarized in Algorithm 1. Algorithm 1 (excerpt): estimate the scaling $\mu$ as discussed in Section 3.3; calculate $D$ by Eq. (7) using $\mu$ and $\lambda$; calculate the matching between $V^{top}$ and $V^{hor}$ based on $D$ by the DP algorithm; calculate $\phi$ by Eq. (8); find $\theta$ with the minimum $\phi$.
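Putting the pieces together, the search over camera hypotheses could be sketched as follows. It reuses the helper sketches above, takes the 1° angle step mentioned in the text as the default, and omits the occlusion pruning with tolerance β for brevity; it is a simplified illustration, not Algorithm 1 itself.

```python
import numpy as np

def associate(P_top, boxes_hor, W, alpha, dtheta_deg=1.0):
    """Exhaustively try every detected top-view subject as the camera (wearer)
    location and every sampled view angle, keeping the hypothesis with minimum phi."""
    best_phi, best_hyp = np.inf, None
    thetas = np.deg2rad(np.arange(0.0, 360.0, dtheta_deg))
    V_hor, _ = horizontal_view_vectors(boxes_hor, W)
    for cam_idx in range(len(P_top)):                  # every subject as candidate wearer
        others = np.delete(P_top, cam_idx, axis=0)
        for theta in thetas:
            V_top, vis = top_view_vectors(others, P_top[cam_idx], theta, alpha)
            if len(V_top) == 0:
                continue
            mu = estimate_scale(V_top, V_hor)
            pairs, phi = match_and_cost(V_top, V_hor, mu)
            if phi < best_phi:
                best_phi, best_hyp = phi, (cam_idx, theta, pairs, vis)
    return best_hyp   # camera index, view angle, matched pairs, visible-subject indices
```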
Experiment
In this section, we first describe the dataset used for performance evaluation and then introduce our experimental results.
Test Dataset
We did not find a publicly available dataset with corresponding top-view and horizontal-view images/videos and ground-truth labeling of the cross-view subject association. Therefore, we collect a new dataset for performance evaluation. Specifically, we use a GoPro HERO7 camera (mounted over the wearer's head) to take horizontal-view videos and a DJI "yu" Mavic 2 drone to take top-view videos. Both cameras were set to the same frame rate of 30 fps. We manually synchronize these videos such that corresponding frames between them are taken at the same time. We then temporally sample these two videos uniformly to construct frame (image) pairs for our dataset. Videos are taken at three different sites with different backgrounds, and the sampling interval is set to 100 frames to ensure the variety of the collected images. Finally, we obtain 220 image pairs from top and horizontal views, and for both views the image resolution is 2,688 × 1,512. We label the same persons across the two views on all 220 image pairs. Note that this manual labeling is quite labor intensive given the difficulty in identifying persons in the top-view images (see Fig. 1 for an example).
For evaluating the proposed method more comprehensively, we examine all 220 image pairs and consider the following five attributes: Occ: horizontal-view images containing partially or fully occluded subjects; Hor mov: the horizontal-view images sampled from videos when the camera-wearer moves and rotates his head; Hor rot: the horizontal-view images sampled from videos when the camera-wearer rotates his head; Hor sta: the horizontal-view images sampled from videos when the camera-wearer stays static; TV var: the top-view images sampled from videos when the drone moves up, down and/or changes the camera-view direction. Table 1 shows the number of image pairs with these five attributes, respectively. Note that some image pairs show multiple attributes listed above. For each pair of images, we analyze two more properties. One is the number of subjects in an image, which reflects the level of crowdedness. The other is the proportion between the number of shared subjects in the two views and the total number of subjects in an image. Both of them can be computed against either the top-view image or the horizontal-view image, and their histograms on all 220 image pairs are shown in Fig. 4.
In this paper, we use two metrics for performance evaluation: 1) the accuracy in identifying the horizontal-view camera wearer in the top-view image, and 2) the precision and recall of the cross-view subject association. We do not include the camera-view angle $\theta$ in the evaluation because it is difficult to annotate its ground truth.
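For reference, the per-image association precision and recall used later can be computed as in this small sketch; representing an association as a (top-view id, horizontal-view id) tuple is an assumption for illustration.

```python
def association_precision_recall(predicted_pairs, gt_pairs):
    """Precision and recall of cross-view subject association for one image pair,
    where each pair is a (top-view id, horizontal-view id) tuple."""
    predicted, gt = set(predicted_pairs), set(gt_pairs)
    tp = len(predicted & gt)                          # correctly associated subjects
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gt) if gt else 0.0
    return precision, recall
```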
Experiment Setup
We implement the proposed method in Matlab and run it on a desktop computer with an Intel Core i7 3.4GHz CPU. We use the general YOLO [15] detector to detect subjects in the form of bounding boxes in both top-view and horizontal-view images. The pre-specified parameters $\rho$ and $\lambda$ are set to 25 and 0.015, respectively. We will further discuss the influence of these parameters in Section 4.4.
We did not find available methods with code that can directly handle our top- and horizontal-view subject association. One related work is [3] for cross-view matching. However, we could not include it directly in the comparison because 1) its code is not publicly available, and 2) it computes optical flow for $\theta$ and therefore cannot handle a pair of static images in our dataset. Actually, the method in [3] assumes a certain slope of the view angle of the top-view camera and uses appearance matching for cross-view association. This is similar to the appearance-matching-based person re-id methods.
In this paper, we chose a recent person re-id method [19] for comparison. We take each subject detected in the horizontal-view image as a query and search for it in the set of subjects detected in the top-view image. We tried two versions of this re-id method: one is retrained from scratch using 1,000 sample subjects collected by ourselves (no overlap with the subjects in our test dataset) and the other is fine-tuned from the version provided in [19] using these 1,000 sample subjects.
Results
We apply the proposed method to all 220 pairs of images in our dataset. We detect the horizontal-view camera wearer on the top-view image as described in Section 3.4, and the detection accuracy is 84.1%. We also use the Cumulative Matching Characteristic (CMC) curve to evaluate the matching accuracy, as shown in Fig. 5(a), where the horizontal and vertical axes are the CMC rank and the matching accuracy, respectively.
For a pair of images, we use the precision and recall scores to evaluate the cross-view subject association. As shown in Table 2, the average precision and recall scores of our method are 79.6% and 77.0%, respectively. In this table, 'Ours w O' indicates our method given the ground-truth camera location $O$. We can find in this table that the re-id method, either retrained or fine-tuned, produces very poor results, which confirms the difficulty of using appearance features for the proposed cross-view subject association.
We also calculate the proportion of all the image pairs with a precision or recall score of 1 (Prec.@1 and Reca.@1). They reach 60.0% and 50.9%, respectively. The distributions of these two scores on all 220 image pairs are shown in Fig. 5(b). Figure 5. (a) The CMC curve for horizontal-view camera detection. (b) Precision and recall scores in association, where the horizontal axis denotes a precision or recall score x, and the vertical axis denotes the proportion of image pairs with a corresponding precision or recall score greater than x. In Table 3, we report the evaluation results on different subsets with the respective attributes. We can see that the proposed method is not sensitive to the motion of either the top-view or the horizontal-view camera, which is highly desirable for moving-camera applications.
Ablation Studies
Step Length for θ. We study the influence of the value $\Delta\theta$, the step length for searching the optimal camera view angle $\theta$ in the range $[0, 2\pi)$. We set the value of $\Delta\theta$ to 1°, 5°, and 10°, respectively, and the association results are shown in Table 4. As expected, $\Delta\theta$ = 1° leads to the highest performance, although a larger step length, such as $\Delta\theta$ = 5°, also produces acceptable results.
Vector representation. Next we compare the association results using different vector representation methods, as shown in Table 5. The first row denotes that we represent the subjects in the two views by the one-dimensional vectors $\mathbf{x}^{top}$ and $\mathbf{x}^{hor}$, respectively. The second row denotes that we represent the subjects in the two views by the one-dimensional vectors $\mathbf{y}^{top}$ and $\mathbf{y}^{hor}$, respectively, which are simply normalized to the range $[0, 1]$ to make them comparable. The third row denotes that we combine the one-dimensional vectors from the first and second rows to represent each view, which differs from our proposed method (the fourth row of Table 5) only in the normalization of $\mathbf{y}^{top}$ and $\mathbf{y}^{hor}$: our proposed method uses a RANSAC strategy. By comparing the results in the third and fourth rows, we can see that the use of the RANSAC strategy for estimating the scaling factor $\mu$ does improve the final association performance. The results in the first and second rows show that using only one dimension of the proposed vector representation cannot achieve performance as good as the proposed method that combines both dimensions. We can also see that $\mathbf{x}^{top}$ and $\mathbf{x}^{hor}$ provide more accurate information than $\mathbf{y}^{top}$ and $\mathbf{y}^{hor}$ when used for cross-view subject association.
Parameter selection. There are two free parameters $\rho$ and $\lambda$ in Eq. (8). We select different values for them and observe their influence on the final association performance. Table 6 reports the results of varying one of these two parameters while fixing the other. We can see that the final association precision and recall scores are not very sensitive to the selected values of these two parameters.
Detection method. In order to analyze the influence of the subject detection accuracy on the proposed cross-view association, we tried the use of different subject detections. As shown in Table 7, in the first row, we use manually annotated bounding boxes of each subject in both views for the proposed association. In the second and third rows, we use manually annotated subjects on the top-view images and horizontal-view images, respectively, while using automatically detected subjects [15] on the other-view images. In the fourth row, we automatically detect subjects in both views first, and then only keep those that show an IoU > 0.5 (Intersection over Union) against a manually annotated subject, in terms of their bounding boxes. We can see that the use of manually annotated subjects produces much better cross-view subject association. This indicates that further efforts on improving subject detection will benefit the association.
Discussion
Number of associated subjects. We investigate the correlation between the association performance and the number of associated subjects. Figure 6(a) shows the average association performance on the image pairs with different numbers of associated subjects. We can see that the association results get worse when the number of associated subjects is too high or too low. When there are too many associated subjects, the crowded subjects in the horizontal view may prevent the accurate detection of subjects. When there are too few subjects, the constructed vector representation is not sufficiently discriminative to locate the camera location $O$ and camera-view angle $\theta$. Figure 6(b) shows the average association performance on the image pairs with different proportions of associated subjects. More specifically, the performance at $x$ along the horizontal axis is the average precision/recall score on all the image pairs whose proportion of associated subjects (to the total number of subjects in the top-view image) is less than $x$. This confirms that on the images with a higher such proportion, the association can be more reliable.
Occlusion. Occlusions are very common, as shown in Table 1. Table 8 shows the association results on the entire dataset and on the subset of data with occlusions, using the proposed method with and without the step of identifying and ignoring occluded subjects. We can see that our simple strategy for handling occlusion can significantly improve the association performance on the image pairs with occlusions. Sample results on image pairs with occlusions are shown in the top row of Fig. 7, where associated subjects bear the same number labels. We can see that occlusions occur more often when 1) the subjects are crowded, and 2) one subject is very close to the horizontal-view camera.
Proportion of shared subjects. It is a common situation that many subjects in the two views are not the same persons. In this case, the shared subjects may account for only a small proportion in both the top and horizontal views. Two examples are shown in the second row of Fig. 7. On the left, we show a case where many subjects in the top view are not in the field of view of the horizontal-view camera. On the right, we show a case where many subjects in the horizontal view are too far from the horizontal-view camera and are not covered by the top-view camera. We can see that the proposed method can handle these two cases very well by exploring the spatial distribution of the shared subjects.
Failure case. Finally, we show two failure cases in Fig. 8: one is caused by errors in subject detection (blue boxes) and the other is caused by the close distance between multiple subjects, e.g., subjects 3, 4 and 5, in either the top or the horizontal view, which leads to erroneous occlusion detection and incorrect vector representations.
Conclusion
In this paper, we developed a new method to associate multiple subjects across top-view and horizontal-view images by modeling and matching the subjects' spatial distributions. We constructed a vector representation for all the detected subjects in the horizontal-view image and another vector representation for all the detected subjects in the top-view image that are located in the field of view of the horizontal-view camera. These two vector representations are then matched for cross-view subject association. We proposed a new matching cost function with which we can further optimize the location and view angle of the horizontal-view camera in the top-view image. We collected a new dataset, together with manually labeled ground-truth cross-view subject associations, and the experimental results on this dataset are very promising.
1907.11397 | 2966209912 | Zero-shot learning (ZSL) aims to recognize unseen objects (test classes) given some other seen objects (training classes), by sharing information of attributes between different objects. Attributes are artificially annotated for objects and are treated equally in recent ZSL tasks. However, some inferior attributes with poor predictability or poor discriminability may have negative impact on the ZSL system performance. This paper first derives a generalization error bound for ZSL tasks. Our theoretical analysis verifies that selecting key attributes set can improve the generalization performance of the original ZSL model which uses all the attributes. Unfortunately, previous attribute selection methods are conducted based on the seen data, their selected attributes have poor generalization capability to the unseen data, which is unavailable in training stage for ZSL tasks. Inspired by learning from pseudo relevance feedback, this paper introduces the out-of-the-box data, which is pseudo data generated by an attribute-guided generative model, to mimic the unseen data. After that, we present an iterative attribute selection (IAS) strategy which iteratively selects key attributes based on the out-of-the-box data. Since the distribution of the generated out-of-the-box data is similar to the test data, the key attributes selected by IAS can be effectively generalized to test data. Extensive experiments demonstrate that IAS can significantly improve existing attribute-based ZSL methods and achieve state-of-the-art performance. | ZSL can recognize new objects using attributes as the intermediate semantic representation. Some researchers adopt the probability-prediction strategy to transfer information. @cite_12 proposed a popular baseline, i.e. direct attribute prediction (DAP). DAP learns probabilistic attribute classifiers using the seen data and infers the label of the unseen data by combining the results of pre-trained classifiers. Most recent works adopt the label-embedding strategy that directly learns a mapping function from the input features space to the semantic embedding space. One line of works is to learn linear compatibility functions. For example, @cite_0 presented an attribute label embedding (ALE) model which learns a compatibility function combined with ranking loss. Romera- @cite_16 proposed an approach that models the relationships among features, attributes and classes as a two linear layers network. Another direction is to learn nonlinear compatibility functions. @cite_30 presented a nonlinear embedding model that augments bilinear compatibility model by incorporating latent variables. @cite_15 proposed a first general kronecker product kernel-based learning model for ZSL tasks. In addition to the classification task, @cite_38 proposed an attribute network for zero-shot hashing retrieval task. | {
"abstract": [
"We present a novel latent embedding model for learning a compatibility function between image and class embeddings, in the context of zero-shot classification. The proposed method augments the state-of-the-art bilinear compatibility model by incorporating latent variables. Instead of learning a single bilinear map, it learns a collection of maps with the selection, of which map to use, being a latent variable for the current image-class pair. We train the model with a ranking based objective function which penalizes incorrect rankings of the true class for a given image. We empirically demonstrate that our model improves the state-of-the-art for various class embeddings consistently on three challenging publicly available datasets for the zero-shot setting. Moreover, our method leads to visually highly interpretable results with clear clusters of different fine-grained object properties that correspond to different latent variable maps.",
"Zero-shot hashing (ZSH) aims at learning a hashing model that is trained only by instances from seen categories but can generate well to those of unseen categories. Typically, it is achieved by utilizing a semantic embedding space to transfer knowledge from seen domain to unseen domain. Existing efforts mainly focus on single-modal retrieval task, especially image-based image retrieval (IBIR). However, as a highlighted research topic in the field of hashing, cross-modal retrieval is more common in real-world applications. To address the cross-modal ZSH (CMZSH) retrieval task, we propose a novel attribute-guided network (AgNet), which can perform not only IBIR but also text-based image retrieval (TBIR). In particular, AgNet aligns different modal data into a semantically rich attribute space, which bridges the gap caused by modality heterogeneity and zero-shot setting. We also design an effective strategy that exploits the attribute to guide the generation of hash codes for image and text within the same network. Extensive experimental results on three benchmark data sets (AwA, SUN, and ImageNet) demonstrate the superiority of AgNet on both cross-modal and single-modal zero-shot image retrieval tasks.",
"Attributes act as intermediate representations that enable parameter sharing between classes, a must when training data is scarce. We propose to view attribute-based image classification as a label-embedding problem: each class is embedded in the space of attribute vectors. We introduce a function that measures the compatibility between an image and a label embedding. The parameters of this function are learned on a training set of labeled samples to ensure that, given an image, the correct classes rank higher than the incorrect ones. Results on the Animals With Attributes and Caltech-UCSD-Birds datasets show that the proposed framework outperforms the standard Direct Attribute Prediction baseline in a zero-shot learning scenario. Label embedding enjoys a built-in ability to leverage alternative sources of information instead of or in addition to attributes, such as, e.g., class hierarchies or textual descriptions. Moreover, label embedding encompasses the whole range of learning settings from zero-shot learning to regular learning with a large number of labeled examples.",
"Kronecker product kernel provides the standard approach in the kernel methods’ literature for learning from graph data, where edges are labeled and both start and end vertices have their own feature representations. The methods allow generalization to such new edges, whose start and end vertices do not appear in the training data, a setting known as zero-shot or zero-data learning. Such a setting occurs in numerous applications, including drug-target interaction prediction, collaborative filtering, and information retrieval. Efficient training algorithms based on the so-called vec trick that makes use of the special structure of the Kronecker product are known for the case where the training data are a complete bipartite graph. In this paper, we generalize these results to noncomplete training graphs. This allows us to derive a general framework for training Kronecker product kernel methods, as specific examples we implement Kronecker ridge regression and support vector machine algorithms. Experimental results demonstrate that the proposed approach leads to accurate models, while allowing order of magnitude improvements in training and prediction time.",
"Zero-shot learning consists in learning how to recognise new concepts by just having a description of them. Many sophisticated approaches have been proposed to address the challenges this problem comprises. In this paper we describe a zero-shot learning approach that can be implemented in just one line of code, yet it is able to outperform state of the art approaches on standard datasets. The approach is based on a more general framework which models the relationships between features, attributes, and classes as a two linear layers network, where the weights of the top layer are not learned but are given by the environment. We further provide a learning bound on the generalisation error of this kind of approaches, by casting them as domain adaptation methods. In experiments carried out on three standard real datasets, we found that our approach is able to perform significantly better than the state of art on all of them, obtaining a ratio of improvement up to 17 .",
"We study the problem of object recognition for categories for which we have no training examples, a task also called zero--data or zero-shot learning. This situation has hardly been studied in computer vision research, even though it occurs frequently; the world contains tens of thousands of different object classes, and image collections have been formed and suitably annotated for only a few of them. To tackle the problem, we introduce attribute-based classification: Objects are identified based on a high-level description that is phrased in terms of semantic attributes, such as the object's color or shape. Because the identification of each such property transcends the specific learning task at hand, the attribute classifiers can be prelearned independently, for example, from existing image data sets unrelated to the current task. Afterward, new classes can be detected based on their attribute representation, without the need for a new training phase. In this paper, we also introduce a new data set, Animals with Attributes, of over 30,000 images of 50 animal classes, annotated with 85 semantic attributes. Extensive experiments on this and two more data sets show that attribute-based classification indeed is able to categorize images without access to any training images of the target classes."
],
"cite_N": [
"@cite_30",
"@cite_38",
"@cite_0",
"@cite_15",
"@cite_16",
"@cite_12"
],
"mid": [
"2334493732",
"2963340196",
"2171061940",
"2964038500",
"652269744",
"2128532956"
]
} | Improving Generalization via Attribute Selection on Out-of-the-box Data | With the rapid development of machine learning technologies, especially the rise of deep neural networks, visual object recognition has made tremendous progress in recent years (Zheng et al., 2018; Shen et al., 2018). These recognition systems even outperform humans when provided with a massive amount of labeled data. However, it is expensive to collect sufficient labeled samples for all natural objects, especially for new concepts and the many more fine-grained subordinate categories. Therefore, how to achieve an acceptable recognition performance for objects with limited or even no training samples is a challenging but practical problem (Palatucci et al., 2009). Inspired by the human cognition system that can identify new objects when provided with a description in advance (Murphy, 2004), zero-shot learning (ZSL) has been proposed to recognize unseen objects with no training samples (Cheng et al., 2017; Ji et al., 2019). Since no labeled samples are given for the target classes, we need to collect some source classes with sufficient labeled samples and find the connection between the target classes and the source classes.
As a kind of semantic representation, attributes are widely used to transfer knowledge from the seen classes (source) to the unseen classes (target) . Attributes play a key role in sharing information between classes and govern the performance of zero-shot classification. In previous ZSL works, all the attributes are assumed to be effective and treated equally. However, as pointed out in Guo et al. (2018), different attributes have different properties, such as the distributive entropy and the predictability. The attributes with poor predictability or poor discriminability may have negative impacts on the ZSL system performance. The poor predictability means that the attributes are hard to be correctly recognized from the feature space, and the poor discriminability means that the attributes are weak in distinguishing different objects. Hence, it is obvious that not all the attributes are necessary and effective for zero-shot classification.
Based on these observations, selecting the key attributes, instead of using all the attributes, is significant and necessary for constructing ZSL models. Guo et al. (2018) proposed the zero-shot learning with attribute selection (ZSLAS) model, which selects attributes by measuring the distributive entropy and the predictability of attributes based on the training data. ZSLAS can improve the performance of attribute-based ZSL methods, but it suffers from a generalization drawback. Since the training classes and the test classes are disjoint in ZSL tasks, the training data is bounded by the box cut by attributes (illustrated in Figure 1). Therefore, the attributes selected based on the training data have poor generalization capability to the unseen test data.
To address the drawback, this paper derives a generalization error bound for the ZSL problem. Since attributes for the ZSL task are literally like the codewords in the error correcting output code (ECOC) model (Dietterich et al., 1994), we analyze the bound from the perspective of ECOC. Our analyses reveal that the key attributes need to be selected based on data which is out of the box (i.e., outside the distribution of the training classes). Considering that test data is unavailable during the training stage for ZSL tasks, inspired by learning from pseudo relevance feedback (Miao et al., 2016), we introduce the out-of-the-box 1 data to mimic the unseen test classes. The out-of-the-box data is generated by an attribute-guided generative model using the same attribute representation as the test classes. Therefore, the out-of-the-box data has a similar distribution to the test data.
Figure 1: Illustration of out-of-the-box data. The distance between the out-of-the-box data and the test data (green solid arrow) is much less than the distance between the training data and the test data (blue dashed arrow).
Guided by the performance of ZSL model on the out-of-the-box data, we propose a novel iterative attribute selection (IAS) model to select the key attributes in an iterative manner. Figure 2 illustrates the procedures of the proposed ZSL with iterative attribute selection (ZSLIAS). Unlike the previous ZSLAS that uses training data to select attributes at once, our IAS first generates out-of-the-box data to mimic the unseen classes, and subsequently iteratively selects key attributes based on the generated out-of-the-box data. During the test stage, selected attributes are employed as a more efficient semantic representation to improve the original ZSL model. By adopting the proposed IAS, the improved attribute embedding space is more discriminative for the test data, and hence improves the performance of the original ZSL model.
The main contributions of this paper are summarized as follows:
• We present a generalization error analysis for ZSL problem. Our theoretical analyses prove that selecting the subset of key attributes can improve the generalization performance of the original ZSL model which utilizes all the attributes.
• Based on our theoretical findings, we propose a novel iterative attribute selection strategy to select key attributes for ZSL tasks.
1 The out-of-the-box data is generated based on the training data and the attribute representation without extra information, which follows the standard zero-shot learning setting.
Figure 2: The pipeline of the ZSLIAS framework. In the training stage, we first generate the out-of-the-box data by a tailor-made generative model (i.e. AVAE), and then iteratively select attributes based on the out-of-the-box data. In the test stage, the selected attributes are exploited to build the ZSL model for unseen object categorization.
• Since test data is unseen during the training stage for ZSL tasks, we introduce the out-of-the-box data to mimic test data for attribute selection. Such data generated by a designed generative model has a similar distribution to the test data. Therefore, attributes selected based on the out-of-the-box data can be effectively generalized to the unseen test data.
• Extensive experiments demonstrate that IAS can effectively improve attribute-based ZSL models and achieve state-of-the-art performance.
The rest of the paper is organized as follows. Section 2 reviews related works. Section 3 gives the preliminary and motivation. Section 4 presents the theoretical analyses on generalization bound for attribute selection. Section 5 proposes the iterative attribute selection model. Experimental results are reported in Section 6. Conclusion is drawn in Section 7.
Zero-shot Learning
ZSL can recognize new objects using attributes as the intermediate semantic representation. Some researchers adopt the probability-prediction strategy to transfer information. Lampert et al. (2013) proposed a popular baseline, i.e. direct attribute prediction (DAP). DAP learns probabilistic attribute classifiers using the seen data and infers the label of the unseen data by combining the results of pre-trained classifiers.
Most recent works adopt the label-embedding strategy that directly learns a mapping function from the input feature space to the semantic embedding space. One line of works is to learn linear compatibility functions. For example, Akata et al. (2015) presented an attribute label embedding (ALE) model which learns a compatibility function combined with a ranking loss. Romera-Paredes et al. (2015) proposed an approach that models the relationships among features, attributes and classes as a two-linear-layer network. Another direction is to learn nonlinear compatibility functions. Xian et al. (2016) presented a nonlinear embedding model that augments the bilinear compatibility model by incorporating latent variables. Airola et al. (2017) proposed the first general Kronecker product kernel-based learning model for ZSL tasks. In addition to the classification task, Ji et al. (2019) proposed an attribute network for the zero-shot hashing retrieval task.
Attribute Selection
Attributes, as a kind of popular semantic representation of visual objects, can be the appearance, a part, or a property of objects (Farhadi et al., 2009). For example, the object elephant has the attributes big and long nose, while the object zebra has the attribute striped. Attributes are widely used to transfer information to recognize new objects in ZSL tasks (Xu et al., 2019). As shown in Figure 1, using attributes as the semantic representation, data from different categories lies in different boxes bounded by the attributes. Since the attribute representations of the seen classes and the unseen classes are different, the boxes with respect to the seen data and the unseen data are disjoint.
In previous ZSL works, all the attributes are assumed to be effective and treated equally. However, as pointed out in Guo et al. (2018), not all the attributes are effective for recognizing new objects. Therefore, we should select the key attributes to improve the semantic representation. Liu et al. (2014) proposed a novel greedy algorithm which selects attributes based on their discriminating power and reliability. Guo et al. (2018) proposed to select attributes by measuring the distributive entropy and the predictability of attributes based on the training data. In short, previous attribute selection models are conducted based on the training data, which makes the selected attributes have poor generalization capability to the unseen test data. In contrast, our IAS iteratively selects attributes based on the out-of-the-box data, which has a similar distribution to the test data, and thus the key attributes selected by our model can be more effectively generalized to the unseen test data.
Attribute-guided Generative Models
Deep generative models aim to estimate the joint distribution $p(x, y)$ of samples and labels, by learning the class prior probability $p(y)$ and the class-conditional density $p(x|y)$ separately. The generative model can be extended to a conditional generative model if the generator is conditioned on some extra information, such as attributes in the proposed method. Odena et al. (2017) introduced a conditional version of generative adversarial nets, i.e. CGAN, which can be constructed by simply feeding in the data label. CGAN conditions both the generator and the discriminator and can generate samples conditioned on class labels. Conditional Variational Autoencoder (CVAE) (Sohn et al., 2015), as an extension of the Variational Autoencoder, is a deep conditional generative model for structured output prediction using Gaussian latent variables. We modify CVAE with the attribute representation to generate out-of-the-box data for the attribute selection.
Preliminary and Motivation
ZSL Task Formulation
We consider zero-shot learning as a task that recognizes unseen classes which have no labeled samples available. Given a training set $D_s = \{(x_n, y_n), n = 1, \dots, N_s\}$, the task of traditional ZSL is to learn a mapping $f: \mathcal{X} \to \mathcal{Y}$ from the image feature space to the label embedding space by minimizing the following regularized empirical risk:
$L(y, f(x; W)) = \frac{1}{N_s} \sum_{n=1}^{N_s} l(y_n, f(x_n; W)) + \Omega(W),$ (1)
where $l(\cdot)$ is the loss function, which can be the square loss $\frac{1}{2}(f(x) - y)^2$, the logistic loss $\log(1 + \exp(-y f(x)))$, or the hinge loss $\max(0, 1 - y f(x))$. $W$ is the parameter of the mapping $f$, and $\Omega(\cdot)$ is the regularization term.
The mapping function $f$ is defined as follows:
$f(x; W) = \arg\max_{y \in \mathcal{Y}} F(x, y; W),$ (2)
where the function $F: \mathcal{X} \times \mathcal{Y} \to \mathbb{R}$ is the bilinear compatibility function that associates image features and label embeddings, defined as follows:
$F(x, y; W) = \theta(x)^T W \phi(y),$ (3)
where $\theta(x)$ is the image feature and $\phi(y)$ is the label embedding (i.e. attribute representation). We summarize some frequently used notations in Table 1.
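As a concrete illustration of Eqs. (2)-(3), the compatibility-based prediction can be written in a few lines of NumPy; the dimensions and random data below are placeholders only, not values used in the paper.

```python
import numpy as np

def predict_labels(X, A, W):
    """Bilinear compatibility prediction (Eqs. (2)-(3)): X holds image features
    theta(x) as rows, A holds class attribute embeddings phi(y) as rows, and W is
    the learned compatibility matrix. Returns the index of the best-scoring class."""
    scores = X @ W @ A.T          # F(x, y; W) = theta(x)^T W phi(y) for every class
    return np.argmax(scores, axis=1)

# toy usage with random numbers (dimensions are illustrative only)
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 2048))    # 5 test images, 2048-d features
A = rng.normal(size=(10, 85))     # 10 unseen classes, 85 attributes
W = rng.normal(size=(2048, 85))   # compatibility matrix learned on seen classes
print(predict_labels(X, A, W))
```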
Interpretation of ZSL Task
In traditional ZSL models, all the attributes are assumed to be effective and treated equally. However, previous works have pointed out that not all the attributes are useful and significant for zero-shot classification (Jiang et al., 2017). To the best of our knowledge, there is no theoretical analysis for the generalization performance of ZSL tasks, let alone for selecting informative attributes for unseen classes. To fill in this gap, we first derive the generalization error bound for ZSL models. The intuition of our theoretical analysis is to simply treat the attributes as a kind of error correcting output codes; then the prediction in ZSL tasks can be deemed as assigning the class label whose pre-defined ECOC codeword is closest to the predicted codeword (Rocha et al., 2014). Based on this novel interpretation, we derive a theoretical generalization error bound of the ZSL model as shown in Section 4. From the generalization bound analyses, we find that the discriminating power of attributes governs the performance of the ZSL model.
Deficiency of ZSLAS
Some attribute selection works have been proposed in recent years. Guo et al. (2018) proposed the ZSLAS model that selects attributes based on the distributive entropy and the predictability of attributes using training data. Simultaneously considering the ZSL model loss function and attribute properties in a joint optimization framework, they selected attributes by minimizing the following loss function:
$L(y, f(x; s, W)) = \frac{1}{N_s} \sum_{n=1}^{N_s} \left\{ l_{ZSL}(y_n, f(x_n; s, W)) + \alpha\, l_p(\theta(x_n), \phi(y_n); s) - \beta\, l_v(\theta(x_n), \mu; s) \right\},$ (4)
where $s$ is the weight vector of the attributes, which will be further used for attribute selection, $\theta(\cdot)$ is the attribute classifier, $\phi(y_n)$ is the attribute representation, and $\mu$ is an auxiliary parameter. $l_{ZSL}$ is the model-based loss function for ZSL, i.e. $l(\cdot)$ as defined in Eq. (1). $l_p$ is the attribute prediction loss, which can be defined based on specific ZSL models, and $l_v$ is the loss of variance, which measures the distributive entropy of attributes (Guo et al., 2018). After obtaining the weight vector $s$ by optimizing Eq. (4), attributes can be selected according to $s$ and then used to construct the ZSL model. According to our theoretical analyses in Section 4, ZSLAS can improve the original ZSL model to some extent (Guo et al., 2018). However, ZSLAS suffers from the drawback that the attributes are selected based on the training data. Since the training and test classes are disjoint in ZSL tasks, it is difficult to measure the quality and contribution of attributes regarding discriminating the unseen test classes. Thus, the attributes selected by ZSLAS have poor generalization capability to the test data due to the domain shift problem.
Definition of Out-of-the-box
Since previous attribute selection models are conducted based on the bounded in-the-box data, the selected attributes have poor generalization capability to the test data. However, the test data is unavailable during the training stage. Inspired by learning from pseudo relevance feedback (Miao et al., 2016), we introduce pseudo data, which is outside the box of the training data, to mimic the test classes and guide the attribute selection. Considering that the training data is bounded in the box by attributes, we generate the out-of-the-box data using an attribute-guided generative model. Since the out-of-the-box data is generated based on the same attribute representation as the test classes, the box of the generated data will overlap with the box of the test data. Consequently, the key attributes selected by the proposed IAS model based on the out-of-the-box data can be effectively generalized to the unseen test data.
Generalization Bound Analysis
In this section, we first derive the generalization error bound of the original ZSL model and then analyze how the bound changes after attribute selection. In previous works, some generalization error bounds have been presented for the ZSL task. Romera-Paredes et al. (2015) transformed the ZSL problem into a domain adaptation problem and then analyzed the risk bounds for domain adaptation. Stock et al. (2018) considered the ZSL problem as a specific setting of pairwise learning and analyzed the bound using the kernel ridge regression model. However, these bound analyses are not suitable for the ZSL model due to their assumptions. In this work, we derive the generalization bound from the perspective of the ECOC model, which is more similar to the ZSL problem.
Generalization Error Bound of ZSL
Zero-shot classification is an effective way to recognize new objects which have no training samples available. The basic framework of a ZSL model is to use the attribute representation as the bridge to transfer knowledge from seen objects to unseen objects. To simplify the analysis, we consider ZSL as a multi-class classification problem. Therefore, the ZSL task can be addressed via an ensemble method which combines many binary attribute classifiers. Specifically, we pre-train a binary classifier for each attribute separately in the training stage. To classify a new sample, all the attribute classifiers are evaluated to obtain an attribute codeword (a vector in which each element represents the output of an attribute classifier). Then we compare the predicted codeword to the attribute representations of all the test classes to retrieve the label of the test sample.
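Under this ECOC view, the prediction step can be sketched as follows; thresholding real-valued classifier outputs at 0 is an assumption about the classifier scores, not a detail given in the text.

```python
import numpy as np

def ecoc_zsl_predict(attr_scores, A_unseen):
    """Decode attribute-classifier outputs as an ECOC codeword and assign the
    unseen class whose attribute signature has the smallest Hamming distance.
    attr_scores: (N, N_a) real-valued outputs of the N_a binary attribute
    classifiers; A_unseen: (C, N_a) binary attribute matrix of the test classes."""
    codewords = (attr_scores > 0).astype(int)                 # threshold each attribute
    # Hamming distance between every predicted codeword and every class signature
    dists = (codewords[:, None, :] != A_unseen[None, :, :]).sum(axis=2)
    return dists.argmin(axis=1)
```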
To analyze the generalization error bound of ZSL, we first define some distances in the attribute space, and then present a proposition of the error correcting ability of attributes.
Definition 1 (Generalized Attribute Distance). Given the attribute matrix $A$ for associating labels and attributes, let $a_i$, $a_j$ denote the attribute representations of labels $y_i$ and $y_j$ in matrix $A$ with length $N_a$, respectively. Then the generalized attribute distance between $a_i$ and $a_j$ can be defined as
$d(a_i, a_j) = \sum_{m=1}^{N_a} \Delta(a_i^{(m)}, a_j^{(m)}),$ (5)
where $N_a$ is the number of attributes, $a_i^{(m)}$ is the $m$-th element in the attribute representation $a_i$ of the label $y_i$, and $\Delta(a_i^{(m)}, a_j^{(m)})$ is equal to 1 if $a_i^{(m)} \neq a_j^{(m)}$, and 0 otherwise.
We further define the minimum distance between any two attribute representations in the attribute space.
Definition 2 (Minimum Attribute Distance). The minimum attribute distance τ of matrix A is the minimum distance between any two distinct attribute representations a_i and a_j:

\tau = \min_{i \neq j} d(a_i, a_j), \qquad (6)

where i and j range over all pairs of distinct class labels.
Given the definition of distance in the attribute space, we can prove the following proposition.
Proposition 1 (Error Correcting Ability ). Given the label-attribute correlation matrix A and a vector of predicted attribute representation f (x) for an unseen test sample x with known label y. If x is incorrectly classified, then the distance between the predicted attribute representation f (x) and the correct attribute representation a y is greater than half of the minimum attribute distance τ , i.e.
d(f(x), a_y) \geq \frac{\tau}{2}. \qquad (7)
Proof. Suppose that the predicted attribute representation for test sample x with correct attribute representation a y is f (x), and the sample x is incorrectly classified to the mismatched attribute representation a r , where r ∈ Y u \ {y}. Then the distance between f (x) and a y is greater than the distance between f (x) and a r , i.e.,
d(f(x), a_y) \geq d(f(x), a_r). \qquad (8)
Here, the distance between attribute representations can be expanded as an element-wise summation based on Eq. (5):

\sum_{m=1}^{N_a} \Delta\big(f^{(m)}(x), a_y^{(m)}\big) \geq \sum_{m=1}^{N_a} \Delta\big(f^{(m)}(x), a_r^{(m)}\big). \qquad (9)
Then, we have:
d(f(x), a_y) = \sum_{m=1}^{N_a} \Delta\big(f^{(m)}(x), a_y^{(m)}\big)
= \frac{1}{2} \sum_{m=1}^{N_a} \Big[ \Delta\big(f^{(m)}(x), a_y^{(m)}\big) + \Delta\big(f^{(m)}(x), a_y^{(m)}\big) \Big]
\overset{(i)}{\geq} \frac{1}{2} \sum_{m=1}^{N_a} \Big[ \Delta\big(f^{(m)}(x), a_y^{(m)}\big) + \Delta\big(f^{(m)}(x), a_r^{(m)}\big) \Big]
\overset{(ii)}{\geq} \frac{1}{2} \sum_{m=1}^{N_a} \Delta\big(a_y^{(m)}, a_r^{(m)}\big) = \frac{1}{2} d(a_y, a_r)
\overset{(iii)}{\geq} \frac{\tau}{2}, \qquad (10)
where (i) follows Eq. (9), (ii) is based on the triangle inequality of distance metric and (iii) follows Eq. (6).
From Proposition 1, we can find that, the predicted attribute representation is not required to be exactly the same as the ground truth for each unseen test sample. As long as the distance is less than τ /2, ZSL models can correct the error committed by some attribute classifiers and make an accurate prediction.
Based on the Proposition of error correcting ability of attributes, we can derive the theorem of generalization error bound for ZSL.
Theorem 1 (Generalization Error Bound of ZSL). Given N_a attribute classifiers f^{(1)}, f^{(2)}, ..., f^{(N_a)} trained on the training set D_s with label-attribute matrix A, the generalization error rate of the attribute-based ZSL model is upper bounded by

\frac{2 N_a \bar{B}}{\tau}, \qquad (11)

where \bar{B} = \frac{1}{N_a} \sum_{m=1}^{N_a} B_m and B_m is the upper bound of the prediction loss of the m-th attribute classifier f^{(m)}.
Proof. According to Proposition 1, for any incorrectly classified test sample x with label y, the distance between the predicted attribute representation f (x) and the true attribute representation a y is greater than τ /2, i.e.,
d(f(x), a_y) = \sum_{m=1}^{N_a} \Delta\big(f^{(m)}(x), a_y^{(m)}\big) \geq \frac{\tau}{2}. \qquad (12)
Let k be the number of incorrectly classified images in the unseen test dataset D_u = {(x_i, y_i), i = 1, ..., N_u}. Then we obtain:

k \cdot \frac{\tau}{2} \leq \sum_{i=1}^{N_u} \sum_{m=1}^{N_a} \Delta\big(f^{(m)}(x_i), a_{y_i}^{(m)}\big) \leq \sum_{i=1}^{N_u} \sum_{m=1}^{N_a} B_m = N_u N_a \bar{B}, \qquad (13)

where \bar{B} = \frac{1}{N_a} \sum_{m=1}^{N_a} B_m and B_m is the upper bound of the attribute prediction loss. Hence, the generalization error rate k/N_u is bounded by 2 N_a \bar{B} / \tau.
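As a quick sanity check of the bound, the following worked numbers (chosen for illustration only, not taken from the paper's experiments) show how the three quantities interact:

```latex
% Hypothetical values: N_a = 85 attributes (as in AwA), an average per-attribute
% prediction loss \bar{B} = 0.02, and a minimum attribute distance \tau = 20.
\frac{2 N_a \bar{B}}{\tau} = \frac{2 \times 85 \times 0.02}{20} = 0.17
% i.e. at most roughly 17% of the unseen samples can be misclassified.
% Halving \bar{B} (better attribute classifiers) halves the bound, while a larger
% \tau (more separated class signatures) also tightens it.
```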
Remark 1 (The generalization error bound is positively correlated with the average attribute prediction loss). From Theorem 1, we can see that the generalization error bound of the attribute-based ZSL model depends on the number of attributes N_a, the minimum attribute distance τ, and the average prediction loss \bar{B} over all the attribute classifiers. According to Definitions 1 and 2, the minimum attribute distance τ is positively correlated with the number of attributes N_a. Therefore, the generalization error bound is mainly affected by the average prediction loss \bar{B}. Intuitively, inferior attributes with poor predictability cause a greater prediction loss \bar{B}; consequently, these attributes have a negative effect on the ZSL performance and increase the generalization error rate.
Improvement of Generalization after Attribute Selection
The previous section has shown that the generalization error bound of the ZSL model is governed by the average prediction loss \bar{B}. In this section, we prove that attribute selection can reduce the average prediction loss \bar{B}, and consequently reduce the generalization error bound of ZSL, from the perspective of a PAC-style analysis (Valiant, 1984).
Lemma 1 (PAC bound of ZSL (Palatucci et al., 2009)). Given N_a attribute classifiers, to obtain, with probability (1 − δ), attribute classifiers that make at most k_a incorrect attribute predictions, the PAC bound D of the attribute-based ZSL model satisfies:

D \propto \frac{N_a}{k_a} \Big[ 4 \log(2/\delta) + 8 (d + 1) \log(13 N_a / k_a) \Big], \qquad (14)
where d is the dimension of the image features.
Remark 2 (The average attribute prediction loss is positively correlated with the PAC bound). Here, k_a/N_a is the tolerable prediction error rate of the attribute classifiers. According to the definition of the average attribute prediction loss \bar{B}, a ZSL model with a smaller \bar{B} can tolerate a greater k_a/N_a, i.e. a smaller N_a/k_a. From Lemma 1, the PAC bound D is monotonically increasing with respect to N_a/k_a. Hence, the PAC bound D decreases when N_a/k_a decreases, so a smaller average prediction loss \bar{B} corresponds to a smaller PAC bound D.
Lemma 2 (Test Error Bound (Vapnik, 2013)). Suppose that the PAC bound of the attribute-based ZSL model is D. Then, with probability 1 − η, the test error is bounded by

e_{ts} \leq e_{tr} + \sqrt{ \frac{1}{N_s} \Big[ D \Big( \log\frac{2 N_s}{D} + 1 \Big) - \log\frac{\eta}{4} \Big] }, \qquad (15)
where N_s is the size of the training set, 0 ≤ η ≤ 1, and e_{ts} and e_{tr} are the test error and the training error, respectively.
Remark 3 (The test error bound is positively correlated with the PAC bound). The complexity term on the right-hand side of Eq. (15) is monotonically increasing in D (for D < 2N_s), so a smaller test error bound corresponds to a smaller PAC bound D.
Proposition 2 (Generalization Improvement of IAS). The generalization error bound of ZSLIAS is smaller than that of the original ZSL model which uses all the attributes.
Proof. In attribute selection, the key attributes are selected by minimizing the loss function in Eq. (1) on the out-of-the-box data. Since the generated out-of-the-box data has a similar distribution to the test data, the test error of ZSL decreases after attribute selection, i.e. ZSLIAS has a smaller test error bound than the original ZSL model. Therefore, ZSLIAS has a smaller PAC bound by Remark 3. By Remark 2, the average prediction loss \bar{B} decreases after attribute selection. As a consequence, by Remark 1, the generalization error bound of ZSLIAS is smaller than that of the original ZSL model.
From Proposition 2, we can observe that the generalization error of the ZSL model decreases after adopting the proposed IAS. In other words, ZSLIAS has a smaller classification error rate than the original ZSL method when generalizing to the unseen test data.
IAS with Out-of-the-box Data
Motivated by the generalization bound analyses, we select the key attributes based on the out-of-the-box data. In this section, we first present the proposed iterative attribute selection model. Then, we introduce the attribute-guided generative model designed to generate the out-of-the-box data. The complexity analysis of IAS is given at last.
Iterative Attribute Selection Model
Inspired by the idea of iterative machine teaching (Liu et al., 2017), we propose a novel iterative attribute selection model that iteratively selects attributes based on the generated out-of-the-box data. Firstly, we generate the out-of-the-box data to mimic test classes by an attribute-based generative model. Then, the key attributes are selected in an iterative manner based on the out-of-the-box data. After obtaining the selected attributes, we can consider them as a more efficient semantic representation to improve the original ZSL model.
Given the generated out-of-the-box data D_g = {(x_n, y_n), n = 1, ..., N_g}, we can combine the empirical risk in Eq. (1) with the attribute selection model. The loss function is then rewritten as follows:

L(y, f(x; s, W)) = \frac{1}{N_g} \sum_{n=1}^{N_g} l\big(y_n, f(x_n; s, W)\big) + \Omega(W), \qquad (16)
where s ∈ {0, 1}^{N_a} is the indicator vector for attribute selection, in which s_i = 1 if the i-th attribute is selected and 0 otherwise, and N_a is the total number of attributes. Correspondingly, the mapping function f in Eq. (2) and the compatibility function F in Eq. (3) can be rewritten as follows:

f(x; s, W) = \arg\max_{y \in Y} F(x, y; s, W), \qquad (17)

F(x, y; s, W) = \theta(x)^T W \big(s \circ \varphi(y)\big), \qquad (18)

where ∘ is the element-wise (Hadamard) product operator and s is the selection vector defined in Eq. (16).
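A compact sketch of the masked compatibility score in Eqs. (17)–(18) is shown below; it is illustrative only, the array shapes and names are assumptions, and W would normally be learned rather than drawn at random.

```python
import numpy as np

def masked_compatibility(theta_x, W, phi_y, s):
    """F(x, y; s, W) = theta(x)^T W (s * phi(y)) with a binary selection mask s."""
    return float(theta_x @ W @ (s * phi_y))

def predict(theta_x, W, Phi, s):
    """Eq. (17): pick the class whose masked attribute embedding scores highest.
    Phi has shape (num_classes, num_attributes)."""
    scores = [masked_compatibility(theta_x, W, phi_y, s) for phi_y in Phi]
    return int(np.argmax(scores))

rng = np.random.default_rng(0)
theta_x = rng.normal(size=2048)            # image feature theta(x)
W = rng.normal(size=(2048, 85)) * 0.01     # compatibility matrix
Phi = rng.random((10, 85))                 # attribute representations of 10 unseen classes
s = (rng.random(85) < 0.25).astype(float)  # keep roughly a quarter of the attributes
print(predict(theta_x, W, Phi, s))
```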
To solve the optimization problem in Eq. (16), we need to specify the choice of the loss function l(·). The loss in Eq. (16) for a single sample (x_n, y_n) is expressed as follows (Xian et al., 2018):

l(y_n, f(x_n; s, W)) = \sum_{y \in Y_g} r_{ny} \big[ \Delta(y_n, y) + F(x_n, y; s, W) - F(x_n, y_n; s, W) \big]_+
= \sum_{y \in Y_g} r_{ny} \big[ \Delta(y_n, y) + \theta(x_n)^T W (s \circ \varphi(y)) - \theta(x_n)^T W (s \circ \varphi(y_n)) \big]_+, \qquad (19)

where Y_g is the label set of the generated out-of-the-box data, which is the same as Y_u, \Delta(y_n, y) = 0 if y_n = y and 1 otherwise, and r_{ny} ∈ [0, 1] is the weight defined by the specific ZSL method.
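The weighted ranking hinge in Eq. (19) can be prototyped directly. The sketch below is a plain NumPy illustration under the assumption that r_ny is uniform, which is one simple choice rather than the weighting prescribed by a particular ZSL method; averaging this quantity over D_g and adding Ω(W) recovers Eq. (16).

```python
import numpy as np

def single_sample_loss(theta_x, W, Phi, s, y_true, r=None):
    """Eq. (19): weighted sum of hinge terms over all generated-class labels."""
    num_classes = Phi.shape[0]
    if r is None:
        r = np.full(num_classes, 1.0 / num_classes)   # uniform weights as a placeholder
    f_true = theta_x @ W @ (s * Phi[y_true])
    loss = 0.0
    for y in range(num_classes):
        margin = 0.0 if y == y_true else 1.0          # Delta(y_n, y)
        score_y = theta_x @ W @ (s * Phi[y])
        loss += r[y] * max(0.0, margin + score_y - f_true)  # [.]_+ hinge
    return loss
```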
Since the size of the optimal attribute subset (i.e. the l_0-norm of s) is unknown in advance, finding the optimal s is an NP-complete problem (Garey et al., 1974). Therefore, inspired by the idea of iterative machine teaching (Liu et al., 2017), we adopt a greedy algorithm (Cormen et al., 2009) to optimize the loss function in an iterative manner. Eq. (16) is updated during each iteration as follows:
L^{t+1} = \frac{1}{N_g} \sum_{n=1}^{N_g} l^{t+1}\big(y_n, f(x_n; s^{t+1}, W^{t+1})\big) + \Omega(W^{t+1}),
\quad \text{s.t.} \;\; \sum_{s_i \in s^{t+1}} s_i = t + 1, \;\; \sum_{s_j \in (s^{t+1} - s^t)} s_j = 1. \qquad (20)
The constraints on s ensure that exactly one element of s^t is updated (from 0 to 1) during each iteration, i.e. only one attribute is selected at a time; s^0 is the all-zero vector. Correspondingly, the loss in Eq. (20) for a single sample (x_n, y_n) is updated during each iteration as follows:
l^{t+1} = \sum_{y \in Y_g} r_{ny} \big[ \Delta(y_n, y) + \theta(x_n)^T W^{t+1} (s^{t+1} \circ \varphi(y)) - \theta(x_n)^T W^{t+1} (s^{t+1} \circ \varphi(y_n)) \big]_+. \qquad (21)
Here l^{t+1} is subject to the same constraints as Eq. (20).
To minimize the loss function in Eq. (20), we alternately optimize W^{t+1} and s^{t+1}, updating one variable while fixing the other. In each iteration, we first optimize W^{t+1} via gradient descent (Burges et al., 2005). The gradient of Eq. (20) is calculated as follows:
\frac{\partial L^{t+1}}{\partial W^{t+1}} = \frac{1}{N_g} \sum_{n=1}^{N_g} \frac{\partial l^{t+1}}{\partial W^{t+1}} + \frac{1}{2} \alpha W^{t+1}, \qquad (22)

where

\frac{\partial l^{t+1}}{\partial W^{t+1}} = \sum_{y \in Y_g} r_{ny} \, \theta(x_n)^T \big(s^t \circ (\varphi(y) - \varphi(y_n))\big), \qquad (23)

and α is the regularization parameter. After updating W^{t+1}, we traverse all the elements of s^t that are equal to 0 and tentatively turn each of them into 1. Then s^{t+1} is set to the candidate that achieves the minimal loss of Eq. (20):

s^{t+1} = \arg\min_{s^{t+1}} \frac{1}{N_g} \sum_{n=1}^{N_g} l^{t+1}\big(y_n, f(x_n; s^{t+1}, W^{t+1})\big) + \Omega(W^{t+1}). \qquad (24)
When iterations end and s is obtained, we can easily get the subset of key attributes by selecting the attributes corresponding to the elements equal to 1 in the selection vector s.
The procedure of the proposed IAS model is given in Algorithm 1.
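A minimal sketch of this greedy loop is given below. It is an illustration only: the gradient update is reduced to a single fixed-step descent call, convergence checking is simplified, and the helper names loss_fn and grad_fn are assumptions rather than functions defined in the paper.

```python
import numpy as np

def iterative_attribute_selection(loss_fn, grad_fn, W0, num_attributes,
                                  lr=0.01, eps=1e-4, max_selected=None):
    """Greedily grow a binary selection vector s, one attribute per iteration.

    loss_fn(s, W) -> scalar loss of Eq. (20) on the out-of-the-box data.
    grad_fn(s, W) -> gradient of that loss w.r.t. W (Eq. (22)).
    """
    s = np.zeros(num_attributes)
    W = W0.copy()
    prev_loss = np.inf
    max_selected = max_selected or num_attributes
    for _ in range(max_selected):
        W -= lr * grad_fn(s, W)                     # update W with s fixed
        best_j, best_loss = None, np.inf
        for j in np.flatnonzero(s == 0):            # try adding each unused attribute
            s[j] = 1
            candidate = loss_fn(s, W)
            if candidate < best_loss:
                best_j, best_loss = j, candidate
            s[j] = 0
        s[best_j] = 1                               # keep the best single addition
        if abs(prev_loss - best_loss) <= eps:       # stop when the loss plateaus
            break
        prev_loss = best_loss
    return s, W
```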
Generation of Out-of-the-box Data
In order to select discriminative attributes for the test classes, attribute selection should ideally be performed on the test data. Since the training data and the test data are located in different boxes bounded by the attributes, we adopt an attribute-based generative model (Bucher et al., 2017) to generate out-of-the-box data that mimics the test classes. Compared to ZSLAS, the key attributes selected by IAS based on the out-of-the-box data can be generalized to the test data more effectively. The conditional variational autoencoder (CVAE) (Sohn et al., 2015) is a conditional generative model in which the latent codes and generated data are both conditioned on some extra information. In this work, we propose the attribute-based variational autoencoder (AVAE), a special version of the CVAE conditioned on tailor-made attributes, to generate the out-of-the-box data.
VAE (Kingma et al., 2013) is a directed graphical model with certain types of latent variables. The generative process of VAE is as follows: a set of latent codes z is generated from the prior distribution p(z), and the data x is generated by the generative distribution p(x|z) conditioned on z : z ∼ p(z), x ∼ p(x|z). The empirical objective of VAE is expressed as follows (Sohn et al., 2015):
L_{VAE}(x) = -KL\big(q(z \mid x) \,\|\, p(z)\big) + \frac{1}{L} \sum_{l=1}^{L} \log p\big(x \mid z^{(l)}\big), \qquad (25)
Algorithm 1 Iterative Attribute Selection Model
Input: The generated out-of-the-box data D_g; original attribute set A; iteration stop threshold ε.
Output: Subset of selected attributes S.
1: Initialization: s^0 = 0, randomize W^0;
2: for t = 0 to N_a − 1 do
3:   Compute the loss L^t = \frac{1}{N_g} \sum_{n=1}^{N_g} l^t(y_n, f(x_n; s^t, W^t)) + \Omega(W^t) (Eq. (20));
4:   Compute the gradient \frac{\partial L^t}{\partial W^t} = \frac{1}{N_g} \sum_{n=1}^{N_g} \frac{\partial l^t}{\partial W^t} + \frac{1}{2} \alpha W^t (Eq. (22));
5:   Update W^{t+1} by gradient descent;
6:   For each attribute j with s^t_j = 0, evaluate the loss of Eq. (20) with s_j temporarily set to 1;
7:   Set s^{t+1} to the candidate achieving the minimal loss (Eq. (24));
8:   Compute L^{t+1} with s^{t+1} and W^{t+1};
9:   if |L^{t+1} − L^t| ≤ ε then
10:     Break;
11:   end if
12: end for
13: Obtain the subset of selected attributes: S = s ∘ A.
where z^{(l)} = g(x, ε^{(l)}) with ε^{(l)} ∼ N(0, I). q(z|x) is the recognition distribution, which is reparameterized with a deterministic and differentiable function g(·, ·) (Sohn et al., 2015). KL denotes the Kullback-Leibler divergence (Kullback, 1987) between the involved distributions, and L is the number of samples.
Combining with the condition, i.e. the attribute representation of labels, the empirical objective of the AVAE is defined as follows:
L_{AVAE}(x, \varphi(y)) = -KL\big(q(z \mid x, \varphi(y)) \,\|\, p(z \mid \varphi(y))\big) + \frac{1}{L} \sum_{l=1}^{L} \log p\big(x \mid \varphi(y), z^{(l)}\big), \qquad (26)

where z^{(l)} = g(x, \varphi(y), ε^{(l)}) and \varphi(y) is the attribute representation of label y.
In the encoding stage, for each training data point x^{(i)}, we estimate Q(z) = q(z^{(i)} | x^{(i)}, \varphi(y^{(i)})) using the encoder. In the decoding stage, after feeding in the concatenation of a latent code z̃ sampled from Q(z) and the attribute representation \varphi(y_u), the decoder generates a new sample x_g with the same attribute representation \varphi(y_u) as the unseen class.
The procedure of AVAE is illustrated in Figure 3. At training time, the attribute representation (of training classes) whose image is being fed in is provided to the encoder and decoder. To generate an image of a particular attribute representation (of test classes), we can just feed this attribute vector along with a random point in the latent space sampled from a standard normal distribution. The system no longer relies on the latent space to encode what object you are dealing with. Instead, the latent space encodes attribute information. Since the attribute representations of test classes are fed into the decoder at generating stage, the generated out-of-the-box data D g has a similar distribution to the test data.
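The following PyTorch sketch illustrates one plausible AVAE layout in the spirit of this description: an encoder and decoder that both receive the attribute vector, trained with a reconstruction term plus a KL term. The layer sizes, the standard normal N(0, I) prior (the objective above conditions the prior on the attributes), and all names are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AVAE(nn.Module):
    """A minimal attribute-conditioned VAE: encoder and decoder both see phi(y)."""

    def __init__(self, feat_dim=2048, attr_dim=85, latent_dim=64, hidden=512):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(feat_dim + attr_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent_dim)
        self.logvar = nn.Linear(hidden, latent_dim)
        self.dec = nn.Sequential(
            nn.Linear(latent_dim + attr_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, feat_dim),
        )

    def forward(self, x, attr):
        h = self.enc(torch.cat([x, attr], dim=1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        x_rec = self.dec(torch.cat([z, attr], dim=1))
        return x_rec, mu, logvar

    @torch.no_grad()
    def generate(self, attr):
        """Sample out-of-the-box features for unseen-class attribute vectors."""
        z = torch.randn(attr.size(0), self.mu.out_features)
        return self.dec(torch.cat([z, attr], dim=1))

def avae_loss(x, x_rec, mu, logvar):
    rec = F.mse_loss(x_rec, x, reduction="mean")
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl
```

At generation time, only the unseen-class attribute vectors are fed to generate(), which is what makes the produced features "out of the box" of the training classes.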
Complexity Analysis
Experiments
To evaluate the performance of the proposed iterative attribute selection model, extensive experiments are conducted on four standard datasets with ZSL setting. In this section, we first compare the proposed approach with the state-of-the-art, and then give detailed analyses.
Experimental Settings
Dataset
We conduct experiments on four standard ZSL datasets: (1) Animals with Attributes (AwA) (Lampert et al., 2013), (2) attribute-Pascal-Yahoo (aPY) (Farhadi et al., 2009), (3) Caltech-UCSD Birds-200-2011 (CUB) (Wah et al., 2011), and (4) the SUN Attribute Database (SUN) (Patterson et al., 2012). The overall statistics of these datasets are summarized in Table 2.
Table 2: Statistics of the four ZSL datasets. SS and PS denote the standard split and the proposed split, respectively.

| Dataset | #Attributes | Classes (#Total / #Training / #Test) | Images SS (#Training / #Test) | Images PS (#Training / #Test) |
| --- | --- | --- | --- | --- |
| AwA | 85 | 50 / 40 / 10 | 24295 / 6180 | 19832 / 5685 |
| aPY | 64 | 32 / 20 / 12 | 12695 / 2644 | 5932 / 7924 |
| CUB | 312 | 200 / 150 / 50 | 8855 / 2933 | 7057 / 2967 |
| SUN | 102 | 717 / 645 / 72 | 12900 / 1440 | 10320 / 1440 |
Dataset Split
Zero-shot learning assumes that training classes and test classes are disjoint. Actually, ImageNet, the dataset exploited to extract image features via deep neural networks, may include some test classes. Therefore, Xian et al. (2018) proposed a new dataset split (PS) ensuring that none of the test classes appears in the dataset used to train the extractor model. In this paper, we evaluate the proposed model using both splits, i.e., the original standard split (SS) and the proposed split (PS).
Image Feature
Deep neural network feature is extracted for the experiments. Image features are extracted from the entire images for AwA, CUB and SUN datasets, and from bounding boxes mentioned in Farhadi et al. (2009) for aPY dataset, respectively. The original ResNet-101 (He et al., 2016) pre-trained on ImageNet with 1K classes is used to calculate 2048-dimensional top-layer pooling units as image features.
Attribute Representation
Attributes are used as the semantic representation to transfer information from training classes to test classes. We use 85, 64, 312 and 102-dimensional continuous value attributes for AwA, aPY, CUB and SUN datasets, respectively.
Evaluation protocol
Unified dataset splits, shown in Table 2, are used for all the compared methods to obtain fair comparison results. Since the datasets are not well balanced with respect to the number of images per class (Xian et al., 2018), we use the mean class accuracy, i.e. the per-class averaged top-1 accuracy, as the assessment criterion. The mean class accuracy is calculated as follows:

acc = \frac{1}{L} \sum_{y \in Y_u} \frac{\#\text{correct predictions in } y}{\#\text{samples in } y},
where L is the number of test classes, Y u is the set comprised of all the test labels.
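A direct implementation of this metric is short; the sketch below (illustrative, with assumed argument names) averages per-class top-1 accuracies so that rare classes weigh as much as frequent ones:

```python
import numpy as np

def mean_class_accuracy(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Per-class averaged top-1 accuracy over the unseen test labels."""
    accs = []
    for c in np.unique(y_true):
        mask = (y_true == c)
        accs.append(np.mean(y_pred[mask] == c))
    return float(np.mean(accs))

# Toy check: class 0 is predicted perfectly, class 1 only half of the time.
y_true = np.array([0, 0, 1, 1])
y_pred = np.array([0, 0, 1, 0])
print(mean_class_accuracy(y_true, y_pred))  # -> 0.75
```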
Comparison with the State-of-the-Art
To evaluate the effectiveness of the proposed iterative attribute selection model, we modify several recent ZSL baselines with the proposed IAS and compare them with the state-of-the-art.
We modify seven representative ZSL baselines to evaluate the IAS model, including three popular ZSL baselines (i.e. DAP (Lampert et al., 2013), LatEm (Xian et al., 2016) and SAE (Kodirov et al., 2017)) and four latest ZSL baselines (i.e. MFMR , GANZrl (Tong et al., 2018), fVG (Xian et al., 2019) and LLAE (Li et al., 2019)).
The improvement achieved on these ZSL baselines is summarized in Table 3. It can be observed that IAS can significantly improve the performance of attribute-based ZSL methods. Specifically, the mean accuracies of these ZSL methods on four datasets (i.e. AwA, aPY, CUB and SUN) are increased by 11.09%, 15.97%, 9.10%, 5.11%, respectively (10.29% on average) after using IAS. For DAP on AwA and aPY datasets, LatEm on AwA dataset, IAS can improve their accuracy by greater than 20%, which demonstrates that IAS can significantly improve the performance of ZSL models. Interestingly, SAE performs badly on aPY and CUB datasets, while the accuracy rises to an acceptable level (from 8.33% to 38.53%, and from 24.65% to 42.85%, respectively) by using IAS. Even though the performance of state-of-the-art baselines is pretty well, IAS can still improve them to some extent (5.48%, 3.24%, 2.80% and 3.64% on average for MFMR, GANZrl, fVG and LLAE respectively). These results demonstrate that the proposed iterative attribute selection model makes sense and can effectively improve existing attribute-based ZSL methods. This also proves the necessity and effectiveness of attribute selection for ZSL tasks.
As a similar work to ours, ZSLAS selects attributes based on the distributive entropy and the predictability of attributes. Thus, we compare the improvement of IAS and ZSLAS on DAP and LatEm, respectively. In Table 3, it can be observed that ZSLAS can improve existing ZSL methods, while IAS improves them to a greater extent (2.15% vs 10.61% on average). Compared to ZSLAS, the advantages of ZSLIAS can be interpreted in two aspects. Firstly, ZSLIAS selects attributes in an iterative manner, hence it can select a better subset of key attributes than ZSLAS, which selects all attributes at once. Secondly, ZSLAS is conducted on the training data, while ZSLIAS is conducted on the out-of-the-box data, which has a similar distribution to the test data. Therefore, the attributes selected by ZSLIAS are more applicable and discriminative for the test data. Experimental results demonstrate the significant superiority of the proposed IAS model over previous attribute selection models.
Detailed Analysis
In order to further understand the promising performance, we analyze the following experimental results in detail.
Evaluation on the Out-of-the-box Data
In the first experiment, we evaluate the out-of-the-box data generated by the tailor-made attribute-based deep generative model. Figure 4 shows the distributions of the out-of-the-box data and the real test data sampled from the AwA dataset, visualized using t-SNE. Note that the out-of-the-box data in Figure 4(b) is generated only from the attribute representations of the unseen classes, without any extra information from test images. It can be observed that the generated out-of-the-box data captures a distribution similar to the real test data, which guarantees that the selected attributes can be effectively generalized to the test data.
We also quantitatively evaluate the out-of-the-box data by calculating various distances between three distributions, i.e. the generated out-of-the-box data (X_g), the unseen test data (X_u) and the seen training data (X_s), in pairs. Table 4 shows the distribution distances measured by the Wasserstein distance (Vallender, 1974), KL divergence (Kullback, 1987), Hellinger distance (Beran, 1977) and Bhattacharyya distance (Kailath, 1967), respectively. It is obvious that the distance between X_g and X_u is much smaller than the distance between X_u and X_s, which means that the generated out-of-the-box data has a distribution closer to the unseen test data than the seen data has. Therefore, attributes selected based on the out-of-the-box data are more discriminative for the test data than attributes selected based on the training data. We illustrate some generated images of unseen classes (i.e. panda and seal) and annotate them with the corresponding attribute representations, as shown in Figure 5. Numbers in black indicate the attribute representations of the labels of real test images. Numbers in red and green are the correct and the incorrect attribute values of the generated images, respectively. We can see that the generated images have attribute representations similar to the test images. Therefore, the tailor-made attribute-based deep generative model can generate out-of-the-box data that captures a distribution similar to the unseen data.
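How exactly these distances are computed between high-dimensional feature sets is not spelled out here; one common, simple proxy (an assumption for illustration, not the paper's procedure) is to compare normalized histograms of a 1-D projection of the features, as sketched below with SciPy:

```python
import numpy as np
from scipy.stats import wasserstein_distance, entropy

def distribution_distances(Xa: np.ndarray, Xb: np.ndarray, bins: int = 50):
    """Crude 1-D proxy: project features onto their first principal direction,
    histogram the projections, and compare the two histograms."""
    stacked = np.vstack([Xa, Xb])
    direction = np.linalg.svd(stacked - stacked.mean(0), full_matrices=False)[2][0]
    pa, pb = Xa @ direction, Xb @ direction
    lo, hi = min(pa.min(), pb.min()), max(pa.max(), pb.max())
    ha, _ = np.histogram(pa, bins=bins, range=(lo, hi), density=True)
    hb, _ = np.histogram(pb, bins=bins, range=(lo, hi), density=True)
    ha, hb = ha + 1e-12, hb + 1e-12          # avoid division by zero / log(0)
    ha, hb = ha / ha.sum(), hb / hb.sum()
    return {
        "wasserstein": wasserstein_distance(pa, pb),
        "kl": entropy(ha, hb),
        "hellinger": np.sqrt(0.5 * np.sum((np.sqrt(ha) - np.sqrt(hb)) ** 2)),
        "bhattacharyya": -np.log(np.sum(np.sqrt(ha * hb))),
    }
```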
Effectiveness of IAS
In the second experiment, we compare the performance of three ZSL methods (i.e. DAP, LatEm and SAE) after using IAS on four datasets, respectively. The accuracies with respect to the number of selected attributes are shown in Figure 6. On AwA, aPY and SUN datasets, we can see that the performance of these three ZSL methods increases sharply when the number of selected attributes grows from 0 to about 20%, and then reaches the peak. These results suggest that only about a quarter of attributes are the key attributes which are necessary and effective to classify test objects. In Figure 6(b) and 6(f), there is an interesting result that SAE performs badly on aPY dataset with both SS and PS (the accuracy is less than 10%), while the performance is acceptable after using IAS (the accuracy is about 40%). These results demonstrate the effectiveness and robustness of IAS for ZSL tasks. Furthermore, we modify DAP by using all the attributes (#84), using the selected attributes (#20) and using the remaining attributes (#64) after attribute selection, respectively. The resulting confusion matrices of these three variants evaluated on AwA dataset with proposed split setting are illustrated in Figure 7. The numbers in the diagonal area (yellow patches) of confusion matrices indicate the classification accuracy per class. It is obvious that IAS can significantly improve DAP performance on most of the test classes, and the accuracies on some classes nearly doubled after using IAS, such as horse, seal, and giraffe. Even though some objects are hard to be recognized by DAP, like dolphin (the accuracy of DAP is 1.6%), we can get an acceptable performance after using IAS (the accuracy of DAPIAS is 72.7%). The original DAP only performs better than IAS with regard to the object blue whale, this is because in the original DAP, most of the marine creatures (such as blue whale, walrus and dolphin) are classified as the blue whale, which increases the classification accuracy while also increasing the false positive rate. More importantly, the confusion matrix of DAPIAS contains less noise (i.e. smaller numbers in the side regions (white patches) of confusion matrices apart from the diagonal area) than DAP, which suggests that DAPIAS has less prediction uncertainties. In other words, adopting IAS can improve the robustness of attribute-based ZSL methods.
In Figure 7, the accuracy obtained with the selected attributes (71.88% on average) is significantly higher than the accuracy obtained with all the attributes (46.23% on average), and the accuracy obtained with the remaining attributes (31.32% on average) is very poor. These results suggest that the selected attributes are the key attributes for discriminating the test data, while the discarded attributes contribute little and can even have a negative impact on the ZSL system. Therefore, it is obvious that not all the attributes are effective for ZSL tasks, and we should select the key attributes to improve performance.
Interpretability of Selected Attributes
In the third experiment, we present the visualization results of attribute selection. We find that the ZSL methods obtain their best performance when selecting about 20% of the attributes, as shown in Figure 6. Therefore, we illustrate the top 20% key attributes selected by DAP, LatEm and SAE on the four datasets in Figure 8. The three rows in each figure correspond to DAP, LatEm and SAE from top to bottom, and yellow bars indicate the attributes selected by the corresponding method. We can see that the attribute subsets selected by different ZSL methods are highly coincident for the same dataset, which demonstrates that the selected attributes are the key attributes for discriminating the test data. Specifically, we enumerate the key attributes selected by the three ZSL methods on the AwA dataset in Table 5. Attributes in boldface are simultaneously selected by all three ZSL methods, and attributes in italics are selected by any two of the three methods. It can be observed that 13 attributes (65%) are selected by all three ZSL methods. These three attribute subsets selected by diverse ZSL models are very similar, which is further evidence that IAS is reasonable and useful for zero-shot classification.
Conclusion
We present a novel and effective iterative attribute selection model to improve existing attribute-based ZSL methods. In most of the previous ZSL works, all the attributes are assumed to be effective and treated equally. However, we notice that attributes have different predictability and discriminability for diverse objects. Motivated by this observation, we propose to select the key attributes to build ZSL model. Since training classes and test classes are disjoint in ZSL tasks, we introduce the out-of-the-box data to mimic test data to guide the progress of attribute selection. The out-of-the-box data generated by a tailor-made attribute-based deep generative model has a similar distribution to the test data. Hence, the attributes selected by IAS based on the out-of-the-box data can be effectively generalized to the test data. To evaluate the effectiveness of IAS, we conduct extensive experiments on four standard ZSL datasets. Experimental results demonstrate that IAS can effectively select the key attributes for ZSL tasks and significantly improve state-of-the-art ZSL methods.
In this work, we select the same attributes for all the unseen test classes. Obviously, this is not the global optimal solution to select attributes for diverse categories. In the future, we will consider a tailor-made attribute selection model that can select the special subset of key attributes for each test class. | 7,799 |
1907.11397 | 2966209912 | Zero-shot learning (ZSL) aims to recognize unseen objects (test classes) given some other seen objects (training classes), by sharing information of attributes between different objects. Attributes are artificially annotated for objects and are treated equally in recent ZSL tasks. However, some inferior attributes with poor predictability or poor discriminability may have a negative impact on the ZSL system performance. This paper first derives a generalization error bound for ZSL tasks. Our theoretical analysis verifies that selecting the subset of key attributes can improve the generalization performance of the original ZSL model which uses all the attributes. Unfortunately, previous attribute selection methods are conducted based on the seen data, so their selected attributes have poor generalization capability to the unseen data, which is unavailable in the training stage of ZSL tasks. Inspired by learning from pseudo relevance feedback, this paper introduces the out-of-the-box data, which is pseudo data generated by an attribute-guided generative model, to mimic the unseen data. After that, we present an iterative attribute selection (IAS) strategy which iteratively selects key attributes based on the out-of-the-box data. Since the distribution of the generated out-of-the-box data is similar to the test data, the key attributes selected by IAS can be effectively generalized to test data. Extensive experiments demonstrate that IAS can significantly improve existing attribute-based ZSL methods and achieve state-of-the-art performance. | Attributes, as a popular semantic representation of visual objects, can be the appearance, a part or a property of objects @cite_31 . For example, the object elephant has the attributes big and long nose, and the object zebra has the attribute striped. Attributes are widely used to transfer information to recognize new objects in ZSL tasks @cite_12 @cite_0 . Using attributes as the semantic representation, data of different categories is located in different boxes bounded by the attributes, as shown in Fig. 1. Since the attribute representations of the seen classes and the unseen classes are different, the boxes with respect to the seen data and the unseen data are disjoint. | {
"abstract": [
"Attributes act as intermediate representations that enable parameter sharing between classes, a must when training data is scarce. We propose to view attribute-based image classification as a label-embedding problem: each class is embedded in the space of attribute vectors. We introduce a function that measures the compatibility between an image and a label embedding. The parameters of this function are learned on a training set of labeled samples to ensure that, given an image, the correct classes rank higher than the incorrect ones. Results on the Animals With Attributes and Caltech-UCSD-Birds datasets show that the proposed framework outperforms the standard Direct Attribute Prediction baseline in a zero-shot learning scenario. Label embedding enjoys a built-in ability to leverage alternative sources of information instead of or in addition to attributes, such as, e.g., class hierarchies or textual descriptions. Moreover, label embedding encompasses the whole range of learning settings from zero-shot learning to regular learning with a large number of labeled examples.",
"We propose to shift the goal of recognition from naming to describing. Doing so allows us not only to name familiar objects, but also: to report unusual aspects of a familiar object (“spotty dog”, not just “dog”); to say something about unfamiliar objects (“hairy and four-legged”, not just “unknown”); and to learn how to recognize new objects with few or no visual examples. Rather than focusing on identity assignment, we make inferring attributes the core problem of recognition. These attributes can be semantic (“spotty”) or discriminative (“dogs have it but sheep do not”). Learning attributes presents a major new challenge: generalization across object categories, not just across instances within a category. In this paper, we also introduce a novel feature selection method for learning attributes that generalize well across categories. We support our claims by thorough evaluation that provides insights into the limitations of the standard recognition paradigm of naming and demonstrates the new abilities provided by our attribute-based framework.",
"We study the problem of object recognition for categories for which we have no training examples, a task also called zero--data or zero-shot learning. This situation has hardly been studied in computer vision research, even though it occurs frequently; the world contains tens of thousands of different object classes, and image collections have been formed and suitably annotated for only a few of them. To tackle the problem, we introduce attribute-based classification: Objects are identified based on a high-level description that is phrased in terms of semantic attributes, such as the object's color or shape. Because the identification of each such property transcends the specific learning task at hand, the attribute classifiers can be prelearned independently, for example, from existing image data sets unrelated to the current task. Afterward, new classes can be detected based on their attribute representation, without the need for a new training phase. In this paper, we also introduce a new data set, Animals with Attributes, of over 30,000 images of 50 animal classes, annotated with 85 semantic attributes. Extensive experiments on this and two more data sets show that attribute-based classification indeed is able to categorize images without access to any training images of the target classes."
],
"cite_N": [
"@cite_0",
"@cite_31",
"@cite_12"
],
"mid": [
"2171061940",
"2098411764",
"2128532956"
]
} | Improving Generalization via Attribute Selection on Out-of-the-box Data | With the rapid development of machine learning technologies, especially the rise of deep neural network, visual object recognition has made tremendous progress in recent years (Zheng et al., 2018;Shen et al., 2018). These recognition systems even outperform humans when provided with a massive amount of labeled data. However, it is expensive to collect sufficient labeled samples for all natural objects, especially for the new concepts and many more fine-grained subordinate categories . Therefore, how to achieve an acceptable recognition performance for objects with limited or even no training samples is a challenging but practical problem (Palatucci et al., 2009). Inspired by human cognition system that can identify new objects when provided with a description in advance (Murphy, 2004), zero-shot learning (ZSL) has been proposed to recognize unseen objects with no training samples (Cheng et al., 2017;Ji et al., 2019). Since labeled sample is not given for the target classes, we need to collect some source classes with sufficient labeled samples and find the connection between the target classes and the source classes.
As a kind of semantic representation, attributes are widely used to transfer knowledge from the seen classes (source) to the unseen classes (target) . Attributes play a key role in sharing information between classes and govern the performance of zero-shot classification. In previous ZSL works, all the attributes are assumed to be effective and treated equally. However, as pointed out in Guo et al. (2018), different attributes have different properties, such as the distributive entropy and the predictability. The attributes with poor predictability or poor discriminability may have negative impacts on the ZSL system performance. The poor predictability means that the attributes are hard to be correctly recognized from the feature space, and the poor discriminability means that the attributes are weak in distinguishing different objects. Hence, it is obvious that not all the attributes are necessary and effective for zero-shot classification.
Based on these observations, selecting the key attributes, instead of using all the attributes, is significant and necessary for constructing ZSL models. Guo et al. (2018) proposed the zero-shot learning with attribute selection (ZSLAS) model, which selects attributes by measuring the distributive entropy and the predictability of attributes based on the training data. ZSLAS can improve the performance of attribute-based ZSL methods, while it suffers from the drawback of generalization. Since the training classes and the test classes are disjoint in ZSL tasks, the training data is bounded by the box cut by attributes (illustrated in Figure 1). Therefore, the attributes selected based on the training data have poor generalization capability to the unseen test data.
To address the drawback, this paper derives a generalization error bound for the ZSL problem. Since attributes in ZSL tasks literally act like the codewords in the error correcting output code (ECOC) model (Dietterich et al., 1994), we analyze the bound from the perspective of ECOC. Our analyses reveal that the key attributes need to be selected based on data which is out of the box (i.e. outside the distribution of the training classes). Considering that test data is unavailable during the training stage for ZSL tasks, inspired by learning from pseudo relevance feedback (Miao et al., 2016), we introduce the out-of-the-box 1 data to mimic the unseen test classes. The out-of-the-box data is generated by an attribute-guided generative model using the same attribute representation as the test classes. Therefore, the out-of-the-box data has a similar distribution to the test data.
Figure 1: Illustration of out-of-the-box data. The distance between the out-of-the-box data and the test data (green solid arrow) is much less than the distance between the training data and the test data (blue dashed arrow). The attribute space (attribute #1, #2, #3) boxes the training data (in-the-box), the generated data (out-of-the-box) and the test data, with example classes such as walrus, bat, seal and tiger.
Guided by the performance of ZSL model on the out-of-the-box data, we propose a novel iterative attribute selection (IAS) model to select the key attributes in an iterative manner. Figure 2 illustrates the procedures of the proposed ZSL with iterative attribute selection (ZSLIAS). Unlike the previous ZSLAS that uses training data to select attributes at once, our IAS first generates out-of-the-box data to mimic the unseen classes, and subsequently iteratively selects key attributes based on the generated out-of-the-box data. During the test stage, selected attributes are employed as a more efficient semantic representation to improve the original ZSL model. By adopting the proposed IAS, the improved attribute embedding space is more discriminative for the test data, and hence improves the performance of the original ZSL model.
The main contributions of this paper are summarized as follows:
• We present a generalization error analysis for ZSL problem. Our theoretical analyses prove that selecting the subset of key attributes can improve the generalization performance of the original ZSL model which utilizes all the attributes.
• Based on our theoretical findings, we propose a novel iterative attribute selection strategy to select key attributes for ZSL tasks.
1 The out-of-the-box data is generated based on the training data and the attribute representation without extra information, which follows the standard zero-shot learning setting.
Figure 2: The pipeline of the ZSLIAS framework. In the training stage, we first generate the out-of-the-box data with a tailor-made generative model (i.e. AVAE), and then iteratively select attributes based on the out-of-the-box data. In the test stage, the selected attributes are exploited to build the ZSL model for unseen object categorization.
• Since test data is unseen during the training stage for ZSL tasks, we introduce the out-of-the-box data to mimic test data for attribute selection. Such data generated by a designed generative model has a similar distribution to the test data. Therefore, attributes selected based on the out-of-the-box data can be effectively generalized to the unseen test data.
• Extensive experiments demonstrate that IAS can effectively improve the attributebased ZSL model and achieve state-of-the-art performance.
The rest of the paper is organized as follows. Section 2 reviews related works. Section 3 gives the preliminary and motivation. Section 4 presents the theoretical analyses on generalization bound for attribute selection. Section 5 proposes the iterative attribute selection model. Experimental results are reported in Section 6. Conclusion is drawn in Section 7.
Zero-shot Learning
ZSL can recognize new objects using attributes as the intermediate semantic representation. Some researchers adopt the probability-prediction strategy to transfer information. Lampert et al. (2013) proposed a popular baseline, i.e. direct attribute prediction (DAP). DAP learns probabilistic attribute classifiers using the seen data and infers the label of the unseen data by combining the results of pre-trained classifiers.
Most recent works adopt the label-embedding strategy that directly learns a mapping function from the input features space to the semantic embedding space. One line of works is to learn linear compatibility functions. For example, Akata et al. (2015) presented an attribute label embedding (ALE) model which learns a compatibility function combined with ranking loss. Romera-Paredes et al. (2015) proposed an approach that models the relationships among features, attributes and classes as a two linear layers network. Another direction is to learn nonlinear compatibility functions. Xian et al. (2016) presented a nonlinear embedding model that augments bilinear compatibility model by incorporating latent variables. Airola et al. (2017) proposed a first general Kronecker product kernel-based learning model for ZSL tasks. In addition to the classification task, Ji et al. (2019) proposed an attribute network for zero-shot hashing retrieval task.
Attribute Selection
Attributes, as a kind of popular semantic representation of visual objects, can be the appearance, a part or a property of objects (Farhadi et al., 2009). For example, object elephant has the attribute big and long nose, object zebra has the attribute striped. Attributes are widely used to transfer information to recognize new objects in ZSL tasks Xu et al., 2019). As shown in Figure 1, using attributes as the semantic representation, data of different categories locates in different boxes bounded by the attributes. Since the attribute representation of the seen classes and the unseen class are different, the boxes with respect to the seen data and the unseen data are disjoint.
In previous ZSL works, all the attributes are assumed to be effective and treated equally. However, as pointed out in Guo et al. (2018), not all the attributes are effective for recognizing new objects. Therefore, we should select the key attributes to improve the semantic presentation. Liu et al. (2014) proposed a novel greedy algorithm which selects attributes based on their discriminating power and reliability. Guo et al. (2018) proposed to select attributes by measuring the distributive entropy and the predictability of attributes based on the training data. In short, previous attribute selection models are conducted based on the training data, which makes the selected attributes have poor generalization capability to the unseen test data. While our IAS iteratively selects attributes based on the out-of-the-box data which has a similar distribution to the test data, and thus the key attributes selected by our model can be more effectively generalized to the unseen test data.
Attribute-guided Generative Models
Deep generative models aim to estimate the joint distribution p(y; x) of samples and labels, by learning the class prior probability p(y) and the class-conditional density p(x|y) separately. The generative model can be extended to a conditional generative model if the generator is conditioned on some extra information, such as attributes in the proposed method. Odena et al. (2017) introduced a conditional version of generative adversarial nets, i.e. CGAN, which can be constructed by simply feeding the data label. CGAN is conditioned on both the generator and discriminator and can generate samples conditioned on class labels. Conditional Variational Autoencoder (CVAE) (Sohn et al., 2015), as an extension of Variational Autoencoder, is a deep conditional generative model for structured output prediction using Gaussian latent variables. We modify CVAE with the attribute representation to generate out-of-the-box data for the attribute selection.
Preliminary and Motivation
ZSL Task Formulation
We consider zero-shot learning as a task that recognizes unseen classes which have no labeled samples available. Given a training set D s = {(x n , y n ) , n = 1, ..., N s }, the task of traditional ZSL is to learn a mapping f : X → Y from the image feature space to the label embedding space, by minimizing the following regularized empirical risk:
L(y, f(x; W)) = \frac{1}{N_s} \sum_{n=1}^{N_s} l\big(y_n, f(x_n; W)\big) + \Omega(W), \qquad (1)
where l(·) is the loss function, which can be the square loss \frac{1}{2}(f(x) - y)^2, the logistic loss \log(1 + \exp(-y f(x))), or the hinge loss \max(0, 1 - y f(x)); W is the parameter of the mapping f, and Ω(·) is the regularization term.
The mapping function f is defined as follows:
f(x; W) = \arg\max_{y \in Y} F(x, y; W), \qquad (2)
where the function F : X × Y → R is the bilinear compatibility function to associate image features and label embeddings defined as follows:
F(x, y; W) = \theta(x)^T W \varphi(y), \qquad (3)
where θ (x) is the image features, ϕ (y) is the label embedding (i.e. attribute representation). We summarize some frequently used notations in Table 1.
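As a concrete illustration of Eqs. (2)–(3), the sketch below scores every candidate class with the bilinear form and returns the arg max; the shapes and the random W are assumptions, since in practice W is learned by minimizing Eq. (1).

```python
import numpy as np

def bilinear_score(theta_x: np.ndarray, W: np.ndarray, phi_y: np.ndarray) -> float:
    """Eq. (3): F(x, y; W) = theta(x)^T W phi(y)."""
    return float(theta_x @ W @ phi_y)

def zsl_predict(theta_x: np.ndarray, W: np.ndarray, Phi: np.ndarray) -> int:
    """Eq. (2): return the index of the class with the highest compatibility.
    Phi stacks the attribute representations phi(y) row by row."""
    scores = Phi @ (W.T @ theta_x)     # equivalent to scoring each row separately
    return int(np.argmax(scores))

rng = np.random.default_rng(0)
theta_x = rng.normal(size=2048)        # ResNet-101-style image feature
W = rng.normal(size=(2048, 85)) * 0.01 # bilinear compatibility matrix
Phi = rng.random((10, 85))             # 10 candidate classes, 85 attributes each
print(zsl_predict(theta_x, W, Phi))
```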
Interpretation of ZSL Task
In traditional ZSL models, all the attributes are assumed to be effective and treated equally. However, in previous works, some researchers have pointed out that not all the attributes are useful and significant for zero-shot classification (Jiang et al., 2017). To the best of our knowledge, there is no theoretical analysis of the generalization performance of ZSL tasks, let alone of selecting informative attributes for unseen classes. To fill this gap, we first derive the generalization error bound for ZSL models. The intuition of our theoretical analysis is to treat the attributes as a kind of error correcting output code (ECOC): prediction in ZSL can then be deemed as assigning the class label whose pre-defined codeword is closest to the predicted codeword (Rocha et al., 2014).
Deficiency of ZSLAS
Some attribute selection works have been proposed in recent years. Guo et al. (2018) proposed the ZSLAS model that selects attributes based on the distributive entropy and the predictability of attributes using training data. Simultaneously considering the ZSL model loss function and attribute properties in a joint optimization framework, they selected attributes by minimizing the following loss function:
L(y, f(x; s, W)) = \frac{1}{N_s} \sum_{n=1}^{N_s} \Big\{ l_{ZSL}\big(y_n, f(x_n; s, W)\big) + \alpha \, l_p\big(\theta(x_n), \varphi(y_n); s\big) - \beta \, l_v\big(\theta(x_n), \mu; s\big) \Big\}, \qquad (4)
where s is the weight vector of the attributes which will be further used for attribute selection. θ(·) is the attribute classifier, ϕ(y n ) is the attribute representation, µ is an auxiliary parameter. l ZSL is the model based loss function for ZSL, i.e. l(·) as defined in Eq. (1). l p is the attribute prediction loss which can be defined based on specific ZSL models and l v is the loss of variance which measures the distributive entropy of attributes (Guo et al., 2018). After getting the weight vector s by optimizing Eq. (4), attributes can be selected according to s and then be used to construct ZSL model. From our theoretical analyses in Section 4, ZSLAS can improve the original ZSL model to some extent (Guo et al., 2018). However, ZSLAS suffers from a drawback that the attributes are selected based on the training data. Since the training and test classes are disjoint in ZSL tasks, it is difficult to measure the quality and contribution of attributes regarding discriminating the unseen test classes. Thus, the selected attributes by ZSLAS have poor generalization capability to the test data due to the domain shift problem.
Definition of Out-of-the-box
Since previous attribute selection models are conducted based on the bounded in-thebox data, the selected attributes have poor generalization capability to the test data. However, the test data is unavailable during the training stage. Inspired by learning from pseudo relevance feedback (Miao et al., 2016), we introduce the pseudo data, which is outside the box of the training data, to mimic test classes to guide the attribute selection. Considering that the training data is bounded in the box by attributes, we generate the out-of-the-box data using an attribute-guided generative model. Since the out-of-thebox data is generated based on the same attribute representation as test classes, the box of the generated data will overlap with the box of the test data. And consequently, the key attributes selected by the proposed IAS model based on the out-of-the-box data can be effectively generalized to the unseen test data.
Generalization Bound Analysis
In this section, we first derive the generalization error bound of the original ZSL model and then analyze the bound changes after attribute selection. In previous works, some generalization error bounds have been presented for the ZSL task. Romera-Paredes et al. (2015) transformed ZSL problem to the domain adaptation problem and then analyzed the risk bounds for domain adaptation. Stock et al. (2018) considered ZSL problem as a specific setting of pairwise learning and analyzed the bound by the kernel ridge regression model. However, these bound analysis are not suitable for ZSL model due to their assumptions. In this work, we derive the generalization bound from the perspective of ECOC model, which is more similar to the ZSL problem.
Generalization Error Bound of ZSL
Zero-shot classification is an effective way to recognize new objects which have no training samples available. The basic framework of ZSL model is using attribute representation as the bridge to transfer knowledge from seen objects to unseen objects. To simplify the analysis, we consider ZSL as a multi-class classification problem. Therefore, ZSL task can be addressed via an ensemble method which combines many binary attribute classifiers. Specifically, we pre-trained a binary classifier for each attribute separately in the training stage. To classify a new sample, all the attribute classifiers are evaluated to obtain an attribute codeword (a vector in which each element represents the output of an attribute classifier). Then we compare the predicted codeword to the attribute representations of all the test classes to retrieve the label of the test sample.
To analyze the generalization error bound of ZSL, we first define some distances in the attribute space, and then present a proposition of the error correcting ability of attributes.
Definition 1 (Generalized Attribute Distance). Given the attribute matrix A associating labels with attributes, let a_i and a_j denote the attribute representations of labels y_i and y_j in A, each of length N_a. The generalized attribute distance between a_i and a_j is defined as

d(a_i, a_j) = \sum_{m=1}^{N_a} \Delta\big(a_i^{(m)}, a_j^{(m)}\big), \qquad (5)

where N_a is the number of attributes, a_i^{(m)} is the m-th element of the attribute representation a_i of label y_i, and \Delta(a_i^{(m)}, a_j^{(m)}) equals 1 if a_i^{(m)} \neq a_j^{(m)} and 0 otherwise.
We further define the minimum distance between any two attribute representations in the attribute space.
Definition 2 (Minimum Attribute Distance). The minimum attribute distance τ of matrix A is the minimum distance between any two distinct attribute representations a_i and a_j:

\tau = \min_{i \neq j} d(a_i, a_j), \qquad (6)

where i and j range over all pairs of distinct class labels.
Given the definition of distance in the attribute space, we can prove the following proposition.
Proposition 1 (Error Correcting Ability ). Given the label-attribute correlation matrix A and a vector of predicted attribute representation f (x) for an unseen test sample x with known label y. If x is incorrectly classified, then the distance between the predicted attribute representation f (x) and the correct attribute representation a y is greater than half of the minimum attribute distance τ , i.e.
d(f(x), a_y) \geq \frac{\tau}{2}. \qquad (7)
Proof. Suppose that the predicted attribute representation for test sample x with correct attribute representation a y is f (x), and the sample x is incorrectly classified to the mismatched attribute representation a r , where r ∈ Y u \ {y}. Then the distance between f (x) and a y is greater than the distance between f (x) and a r , i.e.,
d(f(x), a_y) \geq d(f(x), a_r). \qquad (8)
Here, the distance between attribute representations can be expanded as an element-wise summation based on Eq. (5):

\sum_{m=1}^{N_a} \Delta\big(f^{(m)}(x), a_y^{(m)}\big) \geq \sum_{m=1}^{N_a} \Delta\big(f^{(m)}(x), a_r^{(m)}\big). \qquad (9)
Then, we have:
d(f(x), a_y) = \sum_{m=1}^{N_a} \Delta\big(f^{(m)}(x), a_y^{(m)}\big)
= \frac{1}{2} \sum_{m=1}^{N_a} \Big[ \Delta\big(f^{(m)}(x), a_y^{(m)}\big) + \Delta\big(f^{(m)}(x), a_y^{(m)}\big) \Big]
\overset{(i)}{\geq} \frac{1}{2} \sum_{m=1}^{N_a} \Big[ \Delta\big(f^{(m)}(x), a_y^{(m)}\big) + \Delta\big(f^{(m)}(x), a_r^{(m)}\big) \Big]
\overset{(ii)}{\geq} \frac{1}{2} \sum_{m=1}^{N_a} \Delta\big(a_y^{(m)}, a_r^{(m)}\big) = \frac{1}{2} d(a_y, a_r)
\overset{(iii)}{\geq} \frac{\tau}{2}, \qquad (10)
where (i) follows Eq. (9), (ii) is based on the triangle inequality of distance metric and (iii) follows Eq. (6).
From Proposition 1, we can find that, the predicted attribute representation is not required to be exactly the same as the ground truth for each unseen test sample. As long as the distance is less than τ /2, ZSL models can correct the error committed by some attribute classifiers and make an accurate prediction.
Based on the Proposition of error correcting ability of attributes, we can derive the theorem of generalization error bound for ZSL.
Theorem 1 (Generalization Error Bound of ZSL). Given N_a attribute classifiers f^{(1)}, f^{(2)}, ..., f^{(N_a)} trained on the training set D_s with label-attribute matrix A, the generalization error rate of the attribute-based ZSL model is upper bounded by

\frac{2 N_a \bar{B}}{\tau}, \qquad (11)

where \bar{B} = \frac{1}{N_a} \sum_{m=1}^{N_a} B_m and B_m is the upper bound of the prediction loss of the m-th attribute classifier f^{(m)}.
Proof. According to Proposition 1, for any incorrectly classified test sample x with label y, the distance between the predicted attribute representation f (x) and the true attribute representation a y is greater than τ /2, i.e.,
d(f(x), a_y) = \sum_{m=1}^{N_a} \Delta\big(f^{(m)}(x), a_y^{(m)}\big) \geq \frac{\tau}{2}. \qquad (12)
Let k be the number of incorrectly classified images in the unseen test dataset D_u = {(x_i, y_i), i = 1, ..., N_u}. Then we obtain:

k \cdot \frac{\tau}{2} \leq \sum_{i=1}^{N_u} \sum_{m=1}^{N_a} \Delta\big(f^{(m)}(x_i), a_{y_i}^{(m)}\big) \leq \sum_{i=1}^{N_u} \sum_{m=1}^{N_a} B_m = N_u N_a \bar{B}, \qquad (13)

where \bar{B} = \frac{1}{N_a} \sum_{m=1}^{N_a} B_m and B_m is the upper bound of the attribute prediction loss. Hence, the generalization error rate k/N_u is bounded by 2 N_a \bar{B} / \tau.
Remark 1 (Generalization error bound is positively correlated to the average attribute prediction loss). From Theorem 1, we can find that the generalization error bound of the attribute-based ZSL model depends on the number of attributes N a , minimum attribute distance τ and average prediction lossB for all the attribute classifiers. According to the Definition 1 and 2, the minimum attribute distance τ is positively correlated to the number of attributes N a . Therefore, the generalization error bound is mainly affected by the average prediction lossB. Intuitively, the inferior attributes with poor predictability cause greater prediction lossB, and consequently, these attributes will have negative effect on the ZSL performance and increase the generalization error rate.
Improvement of Generalization after Attribute Selection
It has been proven that the generalization error bound of ZSL model is affected by the average prediction lossB in the previous section. In this section, we will prove that attribute selection can reduce the average prediction lossB, and consequently reduce the generalization error bound of ZSL from the perspective of PAC-style (Valiant, 1984) analysis.
Lemma 1 (PAC bound of ZSL (Palatucci et al., 2009)). Given N a attribute classifiers, to obtain an attribute classifier with (1 − δ) probability that has at most k a incorrect predicted attributes, the PAC bound D of the attribute-based ZSL model is:
D \propto \frac{N_a}{k_a} \left[ 4\log(2/\delta) + 8(d+1)\log(13 N_a / k_a) \right], \qquad (14)
where d is the dimension of the image features.
Remark 2 (The average attribute prediction loss is positively correlated to the PAC bound). Here, k_a/N_a is the tolerable prediction error rate of the attribute classifiers. According to the definition of the average attribute prediction loss \bar{B}, it is obvious that a ZSL model with a smaller \bar{B} could tolerate a greater k_a/N_a. From Lemma 1, we can find that the PAC bound D is monotonically increasing with respect to N_a/k_a. Hence, the PAC bound D decreases when N_a/k_a decreases, and consequently the average prediction loss \bar{B} decreases.
Lemma 2 (Test Error Bound (Vapnik, 2013)). Suppose that the PAC bound of the attribute-based ZSL model is D. The probability of the test error distancing from an upper bound is given by:
p\left( e_{ts} \le e_{tr} + \sqrt{ \frac{1}{N_s} \left[ D \left( \log\frac{2N_s}{D} + 1 \right) - \log\frac{\eta}{4} \right] } \right) = 1 - \eta, \qquad (15)
where N_s is the size of the training set, 0 \le \eta \le 1, and e_{ts}, e_{tr} are the test error and the training error, respectively.
Remark 3 (A smaller test error bound implies a smaller PAC bound). The upper bound on the test error in Lemma 2 is monotonically increasing with respect to the PAC bound D; hence, a model with a smaller test error bound also has a smaller PAC bound D.
Proposition 2 (Improvement after attribute selection). The generalization error bound of ZSLIAS is smaller than that of the original ZSL model.
Proof. In attribute selection, the key attributes are selected by minimizing the loss function in Eq.
(1) on the out-of-the-box data. Since the generated out-of-the-box data has a similar distribution to the test data, the test error of ZSL will decrease after attribute selection, i.e. ZSLIAS has a smaller test error bound than the original ZSL model. Therefore, we can infer that ZSLIAS has a smaller PAC bound based on Remark 3. According to Remark 2, we can infer that the average prediction errorB decreases after attribute selection. As a consequence, the generalization error bound of ZSLIAS is smaller than the original ZSL model based on Remark 1.
From Proposition 2, we can observe that the generalization error of the ZSL model decreases after adopting the proposed IAS. In other words, ZSLIAS has a smaller classification error rate compared to the original ZSL method when generalizing to the unseen test data.
IAS with Out-of-the-box Data
Motivated by the generalization bound analyses, we select the key attributes based on the out-of-the-box data. In this section, we first present the proposed iterative attribute selection model. Then, we introduce the attribute-guided generative model designed to generate the out-of-the-box data. The complexity analysis of IAS is given at last.
Iterative Attribute Selection Model
Inspired by the idea of iterative machine teaching (Liu et al., 2017), we propose a novel iterative attribute selection model that iteratively selects attributes based on the generated out-of-the-box data. Firstly, we generate the out-of-the-box data to mimic test classes by an attribute-based generative model. Then, the key attributes are selected in an iterative manner based on the out-of-the-box data. After obtaining the selected attributes, we can consider them as a more efficient semantic representation to improve the original ZSL model.
Suppose given the generated out-of-the-box data D g = {(x n , y n ), n = 1, ..., N g }, we can combine the empirical risk in Eq. (1) with the attribute selection model. Then the loss function is rewritten as follows:
L(y, f(x; s, W)) = \frac{1}{N_g} \sum_{n=1}^{N_g} l(y_n, f(x_n; s, W)) + \Omega(W), \qquad (16)
where s \in \{0, 1\}^{N_a} is the indicator vector for the attribute selection, in which s_i = 1 if the i-th attribute is selected and 0 otherwise, and N_a is the number of all the attributes. Correspondingly, the mapping function f in Eq. (2) and the compatibility function F in Eq. (3) can be rewritten as follows:
f(x; s, W) = \arg\max_{y \in Y} F(x, y; s, W), \qquad (17)
F(x, y; s, W) = \theta(x)^T W (s \circ \varphi(y)), \qquad (18)
where \circ is the element-wise product operator (Hadamard product) and s is the selection vector defined in Eq. (16).
To solve the optimization problem in Eq. (16), we need to specify the choice of the loss function l(\cdot). The loss function in Eq. (16) for a single sample (x_n, y_n) is expressed as follows (Xian et al., 2018):
l(y_n, f(x_n; s, W)) = \sum_{y \in Y_g} r_{ny} \left[ \Delta(y_n, y) + F(x_n, y; s, W) - F(x_n, y_n; s, W) \right]_+
= \sum_{y \in Y_g} r_{ny} \left[ \Delta(y_n, y) + \theta(x_n)^T W (s \circ \varphi(y)) - \theta(x_n)^T W (s \circ \varphi(y_n)) \right]_+ , \qquad (19)
where Y_g is the label set of the generated out-of-the-box data, which is the same as Y_u, \Delta(y_n, y) = 0 if y_n = y and 1 otherwise, and r_{ny} \in [0, 1] is a weight defined by the specific ZSL method.
Since the dimension of the optimal attribute subset (i.e., the \ell_0-norm of s) is unknown in advance, finding the optimal s is an NP-complete (Garey et al., 1974) problem. Therefore, inspired by the idea of iterative machine teaching (Liu et al., 2017), we adopt a greedy algorithm (Cormen et al., 2009) to optimize the loss function in an iterative manner. Eq. (16) is updated during each iteration as follows:
L^{t+1} = \frac{1}{N_g} \sum_{n=1}^{N_g} l^{t+1}(y_n, f(x_n; s^{t+1}, W^{t+1})) + \Omega(W^{t+1}),
\quad \text{s.t.} \;\; \sum_{s_i \in s^{t+1}} s_i = t + 1, \;\; \sum_{s_j \in (s^{t+1} - s^t)} s_j = 1. \qquad (20)
The constraints on s ensure that s t updates one element (from 0 updates to 1) during each iteration, which indicates that only one attribute is selected each time. s 0 is the initial vector of all 0's. Correspondingly, the loss function in Eq. (20) for single sample (x n , y n ) gets updated during each iteration as follows:
l^{t+1} = \sum_{y \in Y_g} r_{ny} \left[ \Delta(y_n, y) + \theta(x_n)^T W^{t+1} (s^{t+1} \circ \varphi(y)) - \theta(x_n)^T W^{t+1} (s^{t+1} \circ \varphi(y_n)) \right]_+ . \qquad (21)
Here l^{t+1} is subject to the same constraints as Eq. (20).
To minimize the loss function in Eq. (20), we alternately optimize W^{t+1} and s^{t+1}, optimizing one variable while fixing the other. In each iteration, we first optimize W^{t+1} via the gradient descent algorithm (Burges et al., 2005). The gradient of Eq. (20) is calculated as follows:
\frac{\partial L^{t+1}}{\partial W^{t+1}} = \frac{1}{N_g} \sum_{n=1}^{N_g} \frac{\partial l^{t+1}}{\partial W^{t+1}} + \frac{1}{2} \alpha W^{t+1}, \qquad (22)
where
\frac{\partial l^{t+1}}{\partial W^{t+1}} = \sum_{y \in Y_g} r_{ny} \, \theta(x_n)^T \left( s^t \circ (\varphi(y) - \varphi(y_n)) \right), \qquad (23)
where \alpha is the regularization parameter. After updating W^{t+1}, we traverse all the elements of s^t that equal 0 and set each of them to 1 in turn. Then s^{t+1} is chosen as the candidate that achieves the minimal loss in Eq. (20):
s^{t+1} = \arg\min_{s^{t+1}} \frac{1}{N_g} \sum_{n=1}^{N_g} l^{t+1}(y_n, f(x_n; s^{t+1}, W^{t+1})) + \Omega(W^{t+1}). \qquad (24)
When iterations end and s is obtained, we can easily get the subset of key attributes by selecting the attributes corresponding to the elements equal to 1 in the selection vector s.
The procedure of the proposed IAS model is given in Algorithm 1.
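The following is a schematic sketch of the greedy selection loop described above. It is an illustrative simplification rather than the authors' implementation: the function names, the single plain gradient step per iteration and the dense NumPy data structures are all assumptions.

```python
# Schematic sketch of the greedy IAS loop (illustrative only).
import numpy as np

def compatibility(theta_x, W, s, phi_y):
    """F(x, y; s, W) = theta(x)^T W (s * phi(y)), cf. Eq. (18)."""
    return theta_x @ W @ (s * phi_y)

def sample_loss_and_grad(theta_x, y_n, W, s, Phi):
    """Hinge-style loss of Eq. (19) for one generated sample and its gradient w.r.t. W."""
    loss, grad = 0.0, np.zeros_like(W)
    f_yn = compatibility(theta_x, W, s, Phi[y_n])
    for y in range(Phi.shape[0]):
        margin = float(y != y_n) + compatibility(theta_x, W, s, Phi[y]) - f_yn
        if margin > 0:
            loss += margin
            grad += np.outer(theta_x, s * (Phi[y] - Phi[y_n]))
    return loss, grad

def iterative_attribute_selection(X, Y, Phi, n_select, lr=1e-3, alpha=1e-2):
    """Greedy loop: one gradient step on W, then add the single best attribute."""
    d, n_attr = X.shape[1], Phi.shape[1]
    W = 1e-3 * np.random.randn(d, n_attr)
    s = np.zeros(n_attr)
    for _ in range(n_select):
        # gradient step on W with the current mask (stand-in for Eqs. (22)-(23))
        grads = [sample_loss_and_grad(x, y, W, s, Phi)[1] for x, y in zip(X, Y)]
        W -= lr * (sum(grads) / len(X) + 0.5 * alpha * W)
        # try each unselected attribute and keep the one with the smallest loss (cf. Eq. (24))
        def total_loss(mask):
            return sum(sample_loss_and_grad(x, y, W, mask, Phi)[0] for x, y in zip(X, Y))
        candidates = np.flatnonzero(s == 0)
        best_j = min(candidates, key=lambda j: total_loss(s + np.eye(n_attr)[j]))
        s[best_j] = 1.0
    return s, W
```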
Generation of Out-of-the-box Data
In order to select attributes that are discriminative for the test classes, attribute selection should ideally be performed on the test data. Since the training data and the test data are located in different boxes bounded by the attributes, we adopt an attribute-based generative model (Bucher et al., 2017) to generate out-of-the-box data that mimics the test classes. Compared to ZSLAS, the key attributes selected by IAS based on the out-of-the-box data can be more effectively generalized to the test data. The conditional variational autoencoder (CVAE) (Sohn et al., 2015) is a conditional generative model in which the latent codes and the generated data are both conditioned on some extra information. In this work, we propose the attribute-based variational autoencoder (AVAE), a special version of CVAE conditioned on tailor-made attributes, to generate the out-of-the-box data.
VAE (Kingma et al., 2013) is a directed graphical model with certain types of latent variables. The generative process of VAE is as follows: a set of latent codes z is generated from the prior distribution p(z), and the data x is generated by the generative distribution p(x|z) conditioned on z : z ∼ p(z), x ∼ p(x|z). The empirical objective of VAE is expressed as follows (Sohn et al., 2015):
L_{VAE}(x) = -KL\!\left( q(z|x) \,\|\, p(z) \right) + \frac{1}{L} \sum_{l=1}^{L} \log p(x \,|\, z^{(l)}), \qquad (25)
Algorithm 1 Iterative Attribute Selection Model
Input: the generated out-of-the-box data D_g; the original attribute set A; the iteration stop threshold \epsilon.
Output: the subset of selected attributes S.
1: Initialization: s^0 = 0, randomize W^0;
2: for t = 0 to N_a - 1 do
3:    Compute the loss L^t = \frac{1}{N_g} \sum_{n=1}^{N_g} l^t(y_n, f(x_n; s^t, W^t)) + \Omega(W^t) (Eq. (20));
4:    Compute the gradient \frac{\partial L^t}{\partial W^t} = \frac{1}{N_g} \sum_{n=1}^{N_g} \frac{\partial l^t}{\partial W^t} + \frac{1}{2} \alpha W^t (Eq. (22)) and update W^{t+1} by gradient descent;
5:    Update s^{t+1} by selecting the attribute that minimizes the loss (Eq. (24));
6:    if |L^{t+1} - L^t| \le \epsilon then
7:       Break;
8:    end if
9: end for
10: Obtain the subset of selected attributes: S = s \circ A.
where z^{(l)} = g(x, \epsilon^{(l)}) and \epsilon^{(l)} \sim N(0, I). q(z|x) is the recognition distribution, which is reparameterized with a deterministic and differentiable function g(\cdot, \cdot) (Sohn et al., 2015). KL denotes the Kullback-Leibler divergence (Kullback, 1987) between the incorporated distributions, and L is the number of samples.
Combining with the condition, i.e. the attribute representation of labels, the empirical objective of the AVAE is defined as follows:
L_{AVAE}(x, \varphi(y)) = -KL\!\left( q(z|x, \varphi(y)) \,\|\, p(z|\varphi(y)) \right) + \frac{1}{L} \sum_{l=1}^{L} \log p(x \,|\, \varphi(y), z^{(l)}), \qquad (26)
where z^{(l)} = g(x, \varphi(y), \epsilon^{(l)}) and \varphi(y) is the attribute representation of label y.
In the encoding stage, for each training data point x^{(i)}, we estimate q(z^{(i)} | x^{(i)}, \varphi(y^{(i)})) = Q(z) using the encoder. In the decoding stage, after inputting the concatenation of the z sampled from Q(z) and the attribute representation \varphi(y_u), the decoder generates a new sample x_g with the same attribute representation as the unseen class \varphi(y_u).
The procedure of AVAE is illustrated in Figure 3. At training time, the attribute representation (of the training classes) whose image is being fed in is provided to the encoder and the decoder. To generate an image with a particular attribute representation (of the test classes), we simply feed this attribute vector along with a random point in the latent space sampled from a standard normal distribution. The system no longer relies on the latent space to encode which object is being represented; instead, the latent space encodes attribute information. Since the attribute representations of the test classes are fed into the decoder at the generating stage, the generated out-of-the-box data D_g has a similar distribution to the test data.
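The following is a minimal, hedged sketch of such an attribute-conditioned VAE in PyTorch. The layer sizes, the MSE reconstruction term and the helper names (`AVAE`, `generate_out_of_box`) are illustrative assumptions rather than the paper's exact architecture.

```python
# Minimal sketch of an attribute-conditioned VAE (illustrative assumptions throughout).
import torch
import torch.nn as nn
import torch.nn.functional as F

class AVAE(nn.Module):
    def __init__(self, feat_dim=2048, attr_dim=85, z_dim=64, hid=512):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(feat_dim + attr_dim, hid), nn.ReLU())
        self.mu, self.logvar = nn.Linear(hid, z_dim), nn.Linear(hid, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim + attr_dim, hid), nn.ReLU(),
                                 nn.Linear(hid, feat_dim))

    def forward(self, x, a):                       # a: attribute vector phi(y)
        h = self.enc(torch.cat([x, a], dim=1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization
        return self.dec(torch.cat([z, a], dim=1)), mu, logvar

def avae_loss(x, x_rec, mu, logvar):
    """Negative of the Eq. (26)-style objective with a Gaussian (MSE) likelihood."""
    rec = F.mse_loss(x_rec, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl

def generate_out_of_box(model, attr_unseen, n_per_class=100, z_dim=64):
    """Sample z ~ N(0, I), concatenate with unseen-class attributes, and decode."""
    with torch.no_grad():
        a = attr_unseen.repeat_interleave(n_per_class, dim=0)
        z = torch.randn(a.size(0), z_dim)
        return model.dec(torch.cat([z, a], dim=1))
```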
Complexity Analysis
Experiments
To evaluate the performance of the proposed iterative attribute selection model, extensive experiments are conducted on four standard datasets with ZSL setting. In this section, we first compare the proposed approach with the state-of-the-art, and then give detailed analyses.
Experimental Settings
Dataset
We conduct experiments on four standard ZSL datasets: (1) Animal with Attribute (AwA) (Lampert et al., 2013), (2) attribute-Pascal-Yahoo (aPY) (Farhadi et al., 2009), (3) Caltech-UCSD Bird 200-2011 (CUB) (Wah et al., 2011), and (4) SUN Attribute Database (SUN) (Patterson et al., 2012). The overall statistic information of these datasets is summarized in Table 2.
Dataset | #Attributes | Classes (Total / Train / Test) | Images, SS (Train / Test) | Images, PS (Train / Test)
AwA | 85 | 50 / 40 / 10 | 24295 / 6180 | 19832 / 5685
aPY | 64 | 32 / 20 / 12 | 12695 / 2644 | 5932 / 7924
CUB | 312 | 200 / 150 / 50 | 8855 / 2933 | 7057 / 2967
SUN | 102 | 717 / 645 / 72 | 12900 / 1440 | 10320 / 1440
Dataset Split
Zero-shot learning assumes that training classes and test classes are disjoint. Actually, ImageNet, the dataset exploited to extract image features via deep neural networks, may include some test classes. Therefore, Xian et al. (2018) proposed a new dataset split (PS) ensuring that none of the test classes appears in the dataset used to train the extractor model. In this paper, we evaluate the proposed model using both splits, i.e., the original standard split (SS) and the proposed split (PS).
Image Feature
Deep neural network feature is extracted for the experiments. Image features are extracted from the entire images for AwA, CUB and SUN datasets, and from bounding boxes mentioned in Farhadi et al. (2009) for aPY dataset, respectively. The original ResNet-101 (He et al., 2016) pre-trained on ImageNet with 1K classes is used to calculate 2048-dimensional top-layer pooling units as image features.
Attribute Representation
Attributes are used as the semantic representation to transfer information from training classes to test classes. We use 85, 64, 312 and 102-dimensional continuous value attributes for AwA, aPY, CUB and SUN datasets, respectively.
Evaluation protocol
Unified dataset splits shown in Table 2 are used for all the compared methods to get fair comparison results. Since the dataset is not well balanced with respect to the number of images per class (Xian et al., 2018), we use the mean class accuracy, i.e. per-class averaged top-1 accuracy, as the criterion of assessment. Mean class accuracy is calculated as follows:
acc = \frac{1}{L} \sum_{y \in Y_u} \frac{\#\,\text{correct predictions in } y}{\#\,\text{samples in } y},
where L is the number of test classes, Y u is the set comprised of all the test labels.
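For concreteness, a small helper that computes this per-class averaged top-1 accuracy might look as follows (an illustrative sketch, not the authors' evaluation script):

```python
# Per-class averaged top-1 accuracy (mean class accuracy).
import numpy as np

def mean_class_accuracy(y_true, y_pred):
    classes = np.unique(y_true)
    per_class = [np.mean(y_pred[y_true == c] == c) for c in classes]
    return float(np.mean(per_class))

# toy example with two unbalanced test classes
y_true = np.array([0, 0, 0, 1])
y_pred = np.array([0, 0, 1, 1])
print(mean_class_accuracy(y_true, y_pred))   # (2/3 + 1/1) / 2 ≈ 0.83
```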
Comparison with the State-of-the-Art
To evaluate the efficiency of the proposed iterative attribute selection model, we modify several of the latest ZSL baselines with the proposed IAS and compare them with the state-of-the-art.
We modify seven representative ZSL baselines to evaluate the IAS model, including three popular ZSL baselines (i.e. DAP (Lampert et al., 2013), LatEm (Xian et al., 2016) and SAE (Kodirov et al., 2017)) and four latest ZSL baselines (i.e. MFMR , GANZrl (Tong et al., 2018), fVG (Xian et al., 2019) and LLAE (Li et al., 2019)).
The improvement achieved on these ZSL baselines is summarized in Table 3. It can be observed that IAS significantly improves the performance of attribute-based ZSL methods. Specifically, the mean accuracies of these ZSL methods on the four datasets (i.e. AwA, aPY, CUB and SUN) increase by 11.09%, 15.97%, 9.10% and 5.11%, respectively (10.29% on average) after using IAS. For DAP on the AwA and aPY datasets and LatEm on the AwA dataset, IAS improves the accuracy by more than 20%, which demonstrates that IAS can significantly improve the performance of ZSL models. Interestingly, SAE performs badly on the aPY and CUB datasets, while its accuracy rises to an acceptable level (from 8.33% to 38.53%, and from 24.65% to 42.85%, respectively) after using IAS. Even though the performance of the state-of-the-art baselines is already strong, IAS can still improve them to some extent (by 5.48%, 3.24%, 2.80% and 3.64% on average for MFMR, GANZrl, fVG and LLAE, respectively). These results demonstrate that the proposed iterative attribute selection model is sound and can effectively improve existing attribute-based ZSL methods. This also proves the necessity and effectiveness of attribute selection for ZSL tasks.
As a similar work to ours, ZSLAS selects attributes based on the distributive entropy and the predictability of attributes. Thus, we compare the improvement of IAS and ZSLAS on DAP and LatEm, respectively. In Table 3, it can be observed that ZSLAS can improve existing ZSL methods, while IAS improves them to a greater extent (2.15% vs 10.61% on average). Compared to ZSLAS, the advantages of ZSLIAS can be interpreted in two aspects. Firstly, ZSLIAS selects attributes in an iterative manner; hence it can select a better subset of key attributes than ZSLAS, which selects attributes all at once. Secondly, ZSLAS is conducted based on the training data, while ZSLIAS is conducted based on the out-of-the-box data, which has a similar distribution to the test data. Therefore, the attributes selected by ZSLIAS are more applicable and discriminative for the test data. Experimental results demonstrate the significant superiority of the proposed IAS model over previous attribute selection models.
Detailed Analysis
In order to further understand the promising performance, we analyze the following experimental results in detail.
Evaluation on the Out-of-the-box Data
In the first experiment, we evaluate the out-of-the-box data generated by the tailor-made attribute-based deep generative model. Figure 4 shows the distribution of the out-of-the-box data and the real test data sampled from the AwA dataset using t-SNE. Note that the out-of-the-box data in Figure 4(b) is generated only from the attribute representation of the unseen classes, without extra information from any test images. It can be observed that the generated out-of-the-box data captures a distribution similar to that of the real test data, which guarantees that the selected attributes can be effectively generalized to the test data.
We also quantitatively evaluate the out-of-the-box data by calculating various distances between three distributions, i.e. the generated out-of-the-box data (X_g), the unseen test data (X_u) and the seen training data (X_s), in pairs. Table 4 shows the distribution distances measured by the Wasserstein distance (Vallender, 1974), KL divergence (Kullback, 1987), Hellinger distance (Beran, 1977) and Bhattacharyya distance (Kailath, 1967), respectively. It is obvious that the distance between X_g and X_u is much smaller than the distance between X_u and X_s, which means that the generated out-of-the-box data has a distribution much closer to the unseen test data than the seen data does. Therefore, attributes selected based on the out-of-the-box data are more discriminative for the test data than attributes selected based on the training data. We illustrate some generated images of unseen classes (i.e. panda and seal) and annotate them with the corresponding attribute representations, as shown in Figure 5. Numbers in black indicate the attribute representations of the labels of real test images. Numbers in red and green are the correct and the incorrect attribute values of the generated images, respectively. We can see that the generated images have similar attribute representations to the test images. Therefore, the tailor-made attribute-based deep generative model can generate out-of-the-box data that captures a distribution similar to the unseen data.
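For reference, one hedged way to compute such distances between two feature sets is sketched below; it assumes the features have been projected to one dimension and binned into discrete distributions, which may differ from the authors' exact measurement procedure.

```python
# Illustrative distance computations between two 1-D feature projections.
import numpy as np
from scipy.stats import wasserstein_distance
from scipy.special import rel_entr

def hist(x, bins):
    p, _ = np.histogram(x, bins=bins)
    p = p.astype(float) + 1e-12          # smoothing to avoid division by zero
    return p / p.sum()

def distribution_distances(a, b, n_bins=50):
    bins = np.histogram_bin_edges(np.concatenate([a, b]), bins=n_bins)
    p, q = hist(a, bins), hist(b, bins)
    return {
        "wasserstein": wasserstein_distance(a, b),
        "kl": float(np.sum(rel_entr(p, q))),
        "hellinger": float(np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))),
        "bhattacharyya": float(-np.log(np.sum(np.sqrt(p * q)))),
    }
```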
Effectiveness of IAS
In the second experiment, we compare the performance of three ZSL methods (i.e. DAP, LatEm and SAE) after using IAS on the four datasets, respectively. The accuracies with respect to the number of selected attributes are shown in Figure 6. On the AwA, aPY and SUN datasets, we can see that the performance of these three ZSL methods increases sharply when the number of selected attributes grows from 0 to about 20%, and then reaches its peak. These results suggest that only about a quarter of the attributes are key attributes, which are necessary and effective for classifying test objects. In Figure 6(b) and 6(f), there is an interesting result: SAE performs badly on the aPY dataset with both SS and PS (the accuracy is less than 10%), while the performance becomes acceptable after using IAS (the accuracy is about 40%). These results demonstrate the effectiveness and robustness of IAS for ZSL tasks. Furthermore, we modify DAP by using all the attributes (#84), using the selected attributes (#20) and using the remaining attributes (#64) after attribute selection, respectively. The resulting confusion matrices of these three variants evaluated on the AwA dataset with the proposed split setting are illustrated in Figure 7. The numbers in the diagonal area (yellow patches) of the confusion matrices indicate the classification accuracy per class. It is obvious that IAS can significantly improve DAP performance on most of the test classes, and the accuracies on some classes, such as horse, seal and giraffe, nearly double after using IAS. Even though some objects are hard to recognize with DAP, such as dolphin (the accuracy of DAP is 1.6%), we can obtain an acceptable performance after using IAS (the accuracy of DAPIAS is 72.7%). The original DAP only performs better than IAS with regard to the object blue whale; this is because, in the original DAP, most of the marine creatures (such as blue whale, walrus and dolphin) are classified as blue whale, which increases the classification accuracy while also increasing the false positive rate. More importantly, the confusion matrix of DAPIAS contains less noise (i.e. smaller numbers in the off-diagonal regions (white patches) of the confusion matrix) than that of DAP, which suggests that DAPIAS has fewer prediction uncertainties. In other words, adopting IAS can improve the robustness of attribute-based ZSL methods.
In Figure 7, the accuracy of using the selected attributes (71.88% on average) is significantly improved compared to the accuracy of using all the attributes (46.23% on average), and the accuracy of using the remaining attributes (31.32% on average) is extremely poor. These results suggest that the selected attributes are the key attributes for discriminating the test data, whereas the discarded attributes are of little use and can even have a negative impact on the ZSL system. Therefore, it is obvious that not all the attributes are effective for ZSL tasks, and we should select the key attributes to improve performance.
Interpretability of Selected Attributes
In the third experiment, we present the visualization results of attribute selection. We find that the ZSL methods obtain the best performance when selecting about 20% of the attributes, as shown in Figure 6. Therefore, we illustrate the top 20% key attributes selected by DAP, LatEm and SAE on the four datasets in Figure 8. The three rows in each figure are DAP, LatEm and SAE from top to bottom, and yellow bars indicate the attributes selected by the corresponding methods. We can see that the attribute subsets selected by different ZSL methods are highly coincident for the same dataset, which demonstrates that the selected attributes are the key attributes for discriminating the test data. Specifically, we enumerate the key attributes selected by the three ZSL methods on the AwA dataset in Table 5. Attributes in boldface are simultaneously selected by all three ZSL methods, and attributes in italics are selected by any two of the three methods. It can be observed that 13 attributes (65%) are selected by all three ZSL methods. These three attribute subsets selected by diverse ZSL models are very similar, which is further evidence that IAS is reasonable and useful for zero-shot classification.
Conclusion
We present a novel and effective iterative attribute selection model to improve existing attribute-based ZSL methods. In most of the previous ZSL works, all the attributes are assumed to be effective and treated equally. However, we notice that attributes have different predictability and discriminability for diverse objects. Motivated by this observation, we propose to select the key attributes to build ZSL model. Since training classes and test classes are disjoint in ZSL tasks, we introduce the out-of-the-box data to mimic test data to guide the progress of attribute selection. The out-of-the-box data generated by a tailor-made attribute-based deep generative model has a similar distribution to the test data. Hence, the attributes selected by IAS based on the out-of-the-box data can be effectively generalized to the test data. To evaluate the effectiveness of IAS, we conduct extensive experiments on four standard ZSL datasets. Experimental results demonstrate that IAS can effectively select the key attributes for ZSL tasks and significantly improve state-of-the-art ZSL methods.
In this work, we select the same attributes for all the unseen test classes. Obviously, this is not the global optimal solution to select attributes for diverse categories. In the future, we will consider a tailor-made attribute selection model that can select the special subset of key attributes for each test class. | 7,799 |
1907.11397 | 2966209912 | Zero-shot learning (ZSL) aims to recognize unseen objects (test classes) given some other seen objects (training classes), by sharing information of attributes between different objects. Attributes are artificially annotated for objects and are treated equally in recent ZSL tasks. However, some inferior attributes with poor predictability or poor discriminability may have negative impact on the ZSL system performance. This paper first derives a generalization error bound for ZSL tasks. Our theoretical analysis verifies that selecting key attributes set can improve the generalization performance of the original ZSL model which uses all the attributes. Unfortunately, previous attribute selection methods are conducted based on the seen data, their selected attributes have poor generalization capability to the unseen data, which is unavailable in training stage for ZSL tasks. Inspired by learning from pseudo relevance feedback, this paper introduces the out-of-the-box data, which is pseudo data generated by an attribute-guided generative model, to mimic the unseen data. After that, we present an iterative attribute selection (IAS) strategy which iteratively selects key attributes based on the out-of-the-box data. Since the distribution of the generated out-of-the-box data is similar to the test data, the key attributes selected by IAS can be effectively generalized to test data. Extensive experiments demonstrate that IAS can significantly improve existing attribute-based ZSL methods and achieve state-of-the-art performance. | Deep generative models aim to estimate the joint distribution @math of samples and labels, by learning the class prior probability @math and the class-conditional density @math separately. Generative model can be extended to a conditional generative model if the generator is conditioned on some extra information, such as attributes in the proposed method. Mirza and Osindero @cite_27 introduced a conditional version of generative adversarial nets, i.e. CGAN, which can be constructed by simply feeding the data label. CGAN is conditioned on both the generator and discriminator and can generate samples conditioned on class labels. Conditional Variational Autoencoder (CVAE) @cite_14 , as an extension of Variational Autoencoder, is a deep conditional generative model for structured output prediction using Gaussian latent variables. We modify CVAE with the attribute representation to generate out-of-the-box data for the attribute selection. | {
"abstract": [
"Synthesizing high resolution photorealistic images has been a long-standing challenge in machine learning. In this paper we introduce new methods for the improved training of generative adversarial networks (GANs) for image synthesis. We construct a variant of GANs employing label conditioning that results in 128x128 resolution image samples exhibiting global coherence. We expand on previous work for image quality assessment to provide two new analyses for assessing the discriminability and diversity of samples from class-conditional image synthesis models. These analyses demonstrate that high resolution samples provide class information not present in low resolution samples. Across 1000 ImageNet classes, 128x128 samples are more than twice as discriminable as artificially resized 32x32 samples. In addition, 84.7 of the classes have samples exhibiting diversity comparable to real ImageNet data.",
"Supervised deep learning has been successfully applied to many recognition problems. Although it can approximate a complex many-to-one function well when a large amount of training data is provided, it is still challenging to model complex structured output representations that effectively perform probabilistic inference and make diverse predictions. In this work, we develop a deep conditional generative model for structured output prediction using Gaussian latent variables. The model is trained efficiently in the framework of stochastic gradient variational Bayes, and allows for fast prediction using stochastic feed-forward inference. In addition, we provide novel strategies to build robust structured prediction algorithms, such as input noise-injection and multi-scale prediction objective at training. In experiments, we demonstrate the effectiveness of our proposed algorithm in comparison to the deterministic deep neural network counterparts in generating diverse but realistic structured output predictions using stochastic inference. Furthermore, the proposed training methods are complimentary, which leads to strong pixel-level object segmentation and semantic labeling performance on Caltech-UCSD Birds 200 and the subset of Labeled Faces in the Wild dataset."
],
"cite_N": [
"@cite_27",
"@cite_14"
],
"mid": [
"2950776302",
"2188365844"
]
} | Improving Generalization via Attribute Selection on Out-of-the-box Data | With the rapid development of machine learning technologies, especially the rise of deep neural network, visual object recognition has made tremendous progress in recent years (Zheng et al., 2018;Shen et al., 2018). These recognition systems even outperform humans when provided with a massive amount of labeled data. However, it is expensive to collect sufficient labeled samples for all natural objects, especially for the new concepts and many more fine-grained subordinate categories . Therefore, how to achieve an acceptable recognition performance for objects with limited or even no training samples is a challenging but practical problem (Palatucci et al., 2009). Inspired by human cognition system that can identify new objects when provided with a description in advance (Murphy, 2004), zero-shot learning (ZSL) has been proposed to recognize unseen objects with no training samples (Cheng et al., 2017;Ji et al., 2019). Since labeled sample is not given for the target classes, we need to collect some source classes with sufficient labeled samples and find the connection between the target classes and the source classes.
As a kind of semantic representation, attributes are widely used to transfer knowledge from the seen classes (source) to the unseen classes (target) . Attributes play a key role in sharing information between classes and govern the performance of zero-shot classification. In previous ZSL works, all the attributes are assumed to be effective and treated equally. However, as pointed out in Guo et al. (2018), different attributes have different properties, such as the distributive entropy and the predictability. The attributes with poor predictability or poor discriminability may have negative impacts on the ZSL system performance. The poor predictability means that the attributes are hard to be correctly recognized from the feature space, and the poor discriminability means that the attributes are weak in distinguishing different objects. Hence, it is obvious that not all the attributes are necessary and effective for zero-shot classification.
Based on these observations, selecting the key attributes, instead of using all the attributes, is significant and necessary for constructing ZSL models. Guo et al. (2018) proposed the zero-shot learning with attribute selection (ZSLAS) model, which selects attributes by measuring the distributive entropy and the predictability of attributes based on the training data. ZSLAS can improve the performance of attribute-based ZSL methods, while it suffers from the drawback of generalization. Since the training classes and the test classes are disjoint in ZSL tasks, the training data is bounded by the box cut by attributes (illustrated in Figure 1). Therefore, the attributes selected based on the training data have poor generalization capability to the unseen test data.
To address this drawback, this paper derives a generalization error bound for the ZSL problem. Since the attributes in a ZSL task act much like the codewords in the error correcting output code (ECOC) model (Dietterich et al., 1994), we analyze the bound from the perspective of ECOC. Our analyses reveal that the key attributes need to be selected based on data that lies out of the box (i.e., outside the distribution of the training classes). Considering that test data is unavailable during the training stage of ZSL tasks, and inspired by learning from pseudo relevance feedback (Miao et al., 2016), we introduce the out-of-the-box data (see footnote 1) to mimic the unseen test classes. The out-of-the-box data is generated by an attribute-guided generative model using the same attribute representation as the test classes. Therefore, the out-of-the-box data has a similar distribution to the test data.
[Figure 1: Illustration of out-of-the-box data. Training data (in-the-box), test data and generated data (out-of-the-box) occupy different boxes in the attribute space; the distance between the out-of-the-box data and the test data (green solid arrow) is much less than the distance between the training data and the test data (blue dashed arrow).]
Guided by the performance of ZSL model on the out-of-the-box data, we propose a novel iterative attribute selection (IAS) model to select the key attributes in an iterative manner. Figure 2 illustrates the procedures of the proposed ZSL with iterative attribute selection (ZSLIAS). Unlike the previous ZSLAS that uses training data to select attributes at once, our IAS first generates out-of-the-box data to mimic the unseen classes, and subsequently iteratively selects key attributes based on the generated out-of-the-box data. During the test stage, selected attributes are employed as a more efficient semantic representation to improve the original ZSL model. By adopting the proposed IAS, the improved attribute embedding space is more discriminative for the test data, and hence improves the performance of the original ZSL model.
The main contributions of this paper are summarized as follows:
• We present a generalization error analysis for ZSL problem. Our theoretical analyses prove that selecting the subset of key attributes can improve the generalization performance of the original ZSL model which utilizes all the attributes.
• Based on our theoretical findings, we propose a novel iterative attribute selection strategy to select key attributes for ZSL tasks.
[Footnote 1: The out-of-the-box data is generated based on the training data and the attribute representation without extra information, which follows the standard zero-shot learning setting.]
[Figure 2: The pipeline of the ZSLIAS framework. In the training stage, we first generate the out-of-the-box data by a tailor-made generative model (i.e. AVAE), and then iteratively select attributes based on the out-of-the-box data (iterative attribute selection). In the test stage, the selected attributes are exploited to build the ZSL model for unseen object categorization.]
• Since test data is unseen during the training stage for ZSL tasks, we introduce the out-of-the-box data to mimic test data for attribute selection. Such data generated by a designed generative model has a similar distribution to the test data. Therefore, attributes selected based on the out-of-the-box data can be effectively generalized to the unseen test data.
• Extensive experiments demonstrate that IAS can effectively improve the attribute-based ZSL model and achieve state-of-the-art performance.
The rest of the paper is organized as follows. Section 2 reviews related works. Section 3 gives the preliminary and motivation. Section 4 presents the theoretical analyses on generalization bound for attribute selection. Section 5 proposes the iterative attribute selection model. Experimental results are reported in Section 6. Conclusion is drawn in Section 7.
Zero-shot Learning
ZSL can recognize new objects using attributes as the intermediate semantic representation. Some researchers adopt the probability-prediction strategy to transfer information. Lampert et al. (2013) proposed a popular baseline, i.e. direct attribute prediction (DAP). DAP learns probabilistic attribute classifiers using the seen data and infers the label of the unseen data by combining the results of pre-trained classifiers.
Most recent works adopt the label-embedding strategy that directly learns a mapping function from the input features space to the semantic embedding space. One line of works is to learn linear compatibility functions. For example, Akata et al. (2015) presented an attribute label embedding (ALE) model which learns a compatibility function combined with ranking loss. Romera-Paredes et al. (2015) proposed an approach that models the relationships among features, attributes and classes as a two linear layers network. Another direction is to learn nonlinear compatibility functions. Xian et al. (2016) presented a nonlinear embedding model that augments bilinear compatibility model by incorporating latent variables. Airola et al. (2017) proposed a first general Kronecker product kernel-based learning model for ZSL tasks. In addition to the classification task, Ji et al. (2019) proposed an attribute network for zero-shot hashing retrieval task.
Attribute Selection
Attributes, as a kind of popular semantic representation of visual objects, can describe the appearance, a part or a property of objects (Farhadi et al., 2009). For example, the object elephant has the attributes big and long nose, while the object zebra has the attribute striped. Attributes are widely used to transfer information to recognize new objects in ZSL tasks (Xu et al., 2019). As shown in Figure 1, using attributes as the semantic representation, data of different categories is located in different boxes bounded by the attributes. Since the attribute representations of the seen classes and the unseen classes are different, the boxes with respect to the seen data and the unseen data are disjoint.
In previous ZSL works, all the attributes are assumed to be effective and treated equally. However, as pointed out in Guo et al. (2018), not all the attributes are effective for recognizing new objects. Therefore, we should select the key attributes to improve the semantic representation. Liu et al. (2014) proposed a novel greedy algorithm which selects attributes based on their discriminating power and reliability. Guo et al. (2018) proposed to select attributes by measuring the distributive entropy and the predictability of attributes based on the training data. In short, previous attribute selection models are conducted based on the training data, which makes the selected attributes generalize poorly to the unseen test data. In contrast, our IAS iteratively selects attributes based on the out-of-the-box data, which has a similar distribution to the test data, and thus the key attributes selected by our model can be more effectively generalized to the unseen test data.
Attribute-guided Generative Models
Deep generative models aim to estimate the joint distribution p(y; x) of samples and labels, by learning the class prior probability p(y) and the class-conditional density p(x|y) separately. The generative model can be extended to a conditional generative model if the generator is conditioned on some extra information, such as attributes in the proposed method. Odena et al. (2017) introduced a conditional version of generative adversarial nets, i.e. CGAN, which can be constructed by simply feeding the data label. CGAN is conditioned on both the generator and discriminator and can generate samples conditioned on class labels. Conditional Variational Autoencoder (CVAE) (Sohn et al., 2015), as an extension of Variational Autoencoder, is a deep conditional generative model for structured output prediction using Gaussian latent variables. We modify CVAE with the attribute representation to generate out-of-the-box data for the attribute selection.
Preliminary and Motivation
ZSL Task Formulation
We consider zero-shot learning as a task that recognizes unseen classes which have no labeled samples available. Given a training set D s = {(x n , y n ) , n = 1, ..., N s }, the task of traditional ZSL is to learn a mapping f : X → Y from the image feature space to the label embedding space, by minimizing the following regularized empirical risk:
L(y, f(x; W)) = \frac{1}{N_s} \sum_{n=1}^{N_s} l(y_n, f(x_n; W)) + \Omega(W), \qquad (1)
where l(\cdot) is the loss function, which can be the square loss \frac{1}{2}(f(x) - y)^2, the logistic loss \log(1 + \exp(-y f(x))) or the hinge loss \max(0, 1 - y f(x)), W is the parameter of the mapping f, and \Omega(\cdot) is the regularization term.
The mapping function f is defined as follows:
f(x; W) = \arg\max_{y \in Y} F(x, y; W), \qquad (2)
where the function F : X × Y → R is the bilinear compatibility function to associate image features and label embeddings defined as follows:
F(x, y; W) = \theta(x)^T W \varphi(y), \qquad (3)
where \theta(x) denotes the image features and \varphi(y) is the label embedding (i.e., the attribute representation). We summarize some frequently used notations in Table 1.
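As an illustrative sketch of Eqs. (2)-(3) (the shapes, random matrices and feature dimension below are assumptions for demonstration only, not the paper's configuration):

```python
# Bilinear compatibility scoring and argmax prediction over candidate classes.
import numpy as np

def predict(theta_x, W, Phi):
    """Return argmax_y theta(x)^T W phi(y) over the rows of the label-attribute matrix Phi."""
    scores = theta_x @ W @ Phi.T        # one compatibility score per candidate class
    return int(np.argmax(scores))

d, n_attr, n_cls = 2048, 85, 10
rng = np.random.default_rng(0)
W = rng.normal(size=(d, n_attr))        # learned projection from features to attribute space
Phi = rng.integers(0, 2, size=(n_cls, n_attr)).astype(float)   # phi(y) for each class
x_feat = rng.normal(size=d)             # theta(x), e.g. pooled CNN features
print(predict(x_feat, W, Phi))
```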
Interpretation of ZSL Task
In traditional ZSL models, all the attributes are assumed to be effective and treated equally, although previous works have pointed out that not all the attributes are useful and significant for zero-shot classification (Jiang et al., 2017). To the best of our knowledge, there is no theoretical analysis of the generalization performance of ZSL tasks, let alone of selecting informative attributes for unseen classes. To fill this gap, we first derive the generalization error bound for ZSL models. The intuition of our theoretical analysis is to treat the attributes as a kind of error correcting output codes; the prediction in a ZSL task can then be deemed as assigning the class label whose pre-defined codeword is closest to the predicted codeword, as in the ECOC decoding problem (Rocha et al., 2014). Based on this novel interpretation, we derive a theoretical generalization error bound of the ZSL model, as shown in Section 4. From the generalization bound analyses, we find that the discriminating power of the attributes governs the performance of the ZSL model.
Deficiency of ZSLAS
Some attribute selection works have been proposed in recent years. Guo et al. (2018) proposed the ZSLAS model that selects attributes based on the distributive entropy and the predictability of attributes using training data. Simultaneously considering the ZSL model loss function and attribute properties in a joint optimization framework, they selected attributes by minimizing the following loss function:
L(y, f(x; s, W)) = \frac{1}{N_s} \sum_{n=1}^{N_s} \left\{ l_{ZSL}(y_n, f(x_n; s, W)) + \alpha\, l_p(\theta(x_n), \varphi(y_n); s) - \beta\, l_v(\theta(x_n), \mu; s) \right\}, \qquad (4)
where s is the weight vector of the attributes which will be further used for attribute selection. θ(·) is the attribute classifier, ϕ(y n ) is the attribute representation, µ is an auxiliary parameter. l ZSL is the model based loss function for ZSL, i.e. l(·) as defined in Eq. (1). l p is the attribute prediction loss which can be defined based on specific ZSL models and l v is the loss of variance which measures the distributive entropy of attributes (Guo et al., 2018). After getting the weight vector s by optimizing Eq. (4), attributes can be selected according to s and then be used to construct ZSL model. From our theoretical analyses in Section 4, ZSLAS can improve the original ZSL model to some extent (Guo et al., 2018). However, ZSLAS suffers from a drawback that the attributes are selected based on the training data. Since the training and test classes are disjoint in ZSL tasks, it is difficult to measure the quality and contribution of attributes regarding discriminating the unseen test classes. Thus, the selected attributes by ZSLAS have poor generalization capability to the test data due to the domain shift problem.
Definition of Out-of-the-box
Since previous attribute selection models are conducted based on the bounded in-the-box data, the selected attributes have poor generalization capability to the test data. However, the test data is unavailable during the training stage. Inspired by learning from pseudo relevance feedback (Miao et al., 2016), we introduce pseudo data, which lies outside the box of the training data, to mimic the test classes and guide the attribute selection. Considering that the training data is bounded in the box by the attributes, we generate the out-of-the-box data using an attribute-guided generative model. Since the out-of-the-box data is generated based on the same attribute representation as the test classes, the box of the generated data will overlap with the box of the test data. Consequently, the key attributes selected by the proposed IAS model based on the out-of-the-box data can be effectively generalized to the unseen test data.
Generalization Bound Analysis
In this section, we first derive the generalization error bound of the original ZSL model and then analyze how the bound changes after attribute selection. In previous works, some generalization error bounds have been presented for the ZSL task. Romera-Paredes et al. (2015) transformed the ZSL problem into a domain adaptation problem and then analyzed the risk bounds for domain adaptation. Stock et al. (2018) considered the ZSL problem as a specific setting of pairwise learning and analyzed the bound via a kernel ridge regression model. However, these bound analyses are not suitable for the ZSL model due to their assumptions. In this work, we derive the generalization bound from the perspective of the ECOC model, which is more similar to the ZSL problem.
Generalization Error Bound of ZSL
Zero-shot classification is an effective way to recognize new objects which have no training samples available. The basic framework of ZSL model is using attribute representation as the bridge to transfer knowledge from seen objects to unseen objects. To simplify the analysis, we consider ZSL as a multi-class classification problem. Therefore, ZSL task can be addressed via an ensemble method which combines many binary attribute classifiers. Specifically, we pre-trained a binary classifier for each attribute separately in the training stage. To classify a new sample, all the attribute classifiers are evaluated to obtain an attribute codeword (a vector in which each element represents the output of an attribute classifier). Then we compare the predicted codeword to the attribute representations of all the test classes to retrieve the label of the test sample.
To analyze the generalization error bound of ZSL, we first define some distances in the attribute space, and then present a proposition of the error correcting ability of attributes.
Definition 1 (Generalized Attribute Distance). Given the attribute matrix A for associating labels and attributes, let a i , a j denote the attribute representation of label y i and y j in matrix A with length N a , respectively. Then the generalized attribute distance between a i and a j can be defined as
d(a_i, a_j) = \sum_{m=1}^{N_a} \Delta(a_i^{(m)}, a_j^{(m)}), \qquad (5)
where N_a is the number of attributes and a_i^{(m)} is the m-th element in the attribute representation a_i of the label y_i. \Delta(a_i^{(m)}, a_j^{(m)}) is equal to 1 if a_i^{(m)} \ne a_j^{(m)}, and 0 otherwise.
We further define the minimum distance between any two attribute representations in the attribute space.
Definition 2 (Minimum Attribute Distance). The minimum attribute distance τ of matrix A is the minimum distance between any two attribute representations a i and a j as follows:
\tau = \min_{i \ne j} d(a_i, a_j), \quad \forall\, 1 \le i, j \le N_a. \qquad (6)
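A small sketch of Definitions 1 and 2 on a toy binary label-attribute matrix (the matrix and the 0/1 per-element distance are illustrative assumptions):

```python
# Generalized attribute distance (Eq. (5)) and minimum attribute distance (Eq. (6)).
import numpy as np

def attribute_distance(a_i, a_j):
    return int(np.sum(a_i != a_j))                 # elementwise 0/1 disagreement, summed

def minimum_attribute_distance(A):
    n = len(A)
    return min(attribute_distance(A[i], A[j])
               for i in range(n) for j in range(n) if i != j)

A = np.array([[1, 0, 1, 1], [0, 1, 1, 0], [1, 1, 0, 1]])   # toy label-attribute matrix
print(minimum_attribute_distance(A))               # 2 for this toy matrix
```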
Given the definition of distance in the attribute space, we can prove the following proposition.
Proposition 1 (Error Correcting Ability ). Given the label-attribute correlation matrix A and a vector of predicted attribute representation f (x) for an unseen test sample x with known label y. If x is incorrectly classified, then the distance between the predicted attribute representation f (x) and the correct attribute representation a y is greater than half of the minimum attribute distance τ , i.e.
d(f(x), a_y) \ge \frac{\tau}{2}. \qquad (7)
Proof. Suppose that the predicted attribute representation for test sample x with correct attribute representation a y is f (x), and the sample x is incorrectly classified to the mismatched attribute representation a r , where r ∈ Y u \ {y}. Then the distance between f (x) and a y is greater than the distance between f (x) and a r , i.e.,
d(f(x), a_y) \ge d(f(x), a_r). \qquad (8)
Here, the distance between attribute representation can be expanded as the elementwise summation based on Eq. (5) as follows:
\sum_{m=1}^{N_a} \Delta(f^{(m)}(x), a_y^{(m)}) \ge \sum_{m=1}^{N_a} \Delta(f^{(m)}(x), a_r^{(m)}). \qquad (9)
Then, we have:
d(f(x), a_y) = \sum_{m=1}^{N_a} \Delta(f^{(m)}(x), a_y^{(m)})
= \frac{1}{2} \sum_{m=1}^{N_a} \left[ \Delta(f^{(m)}(x), a_y^{(m)}) + \Delta(f^{(m)}(x), a_y^{(m)}) \right]
\overset{(i)}{\ge} \frac{1}{2} \sum_{m=1}^{N_a} \left[ \Delta(f^{(m)}(x), a_y^{(m)}) + \Delta(f^{(m)}(x), a_r^{(m)}) \right]
\overset{(ii)}{\ge} \frac{1}{2} \sum_{m=1}^{N_a} \Delta(a_y^{(m)}, a_r^{(m)}) = \frac{1}{2} d(a_y, a_r)
\overset{(iii)}{\ge} \frac{\tau}{2}, \qquad (10)
where (i) follows Eq. (9), (ii) is based on the triangle inequality of distance metric and (iii) follows Eq. (6).
From Proposition 1, we can find that the predicted attribute representation is not required to be exactly the same as the ground truth for each unseen test sample. As long as the distance is less than τ/2, ZSL models can correct the errors committed by some attribute classifiers and still make an accurate prediction.
Based on the Proposition of error correcting ability of attributes, we can derive the theorem of generalization error bound for ZSL.
Theorem 1 (Generalization Error Bound of ZSL). Given N a attribute classifiers, f (1) , f (2) , ..., f (Na) , trained on training set D s with label-attribute matrix A, the generalization error rate for the attribute-based ZSL model is upper bounded by
\frac{2 N_a \bar{B}}{\tau}, \qquad (11)
where \bar{B} = \frac{1}{N_a} \sum_{m=1}^{N_a} B_m and B_m is the upper bound of the prediction loss for the m-th attribute classifier f^{(m)}.
Proof. According to Proposition 1, for any incorrectly classified test sample x with label y, the distance between the predicted attribute representation f (x) and the true attribute representation a y is greater than τ /2, i.e.,
d(f(x), a_y) = \sum_{m=1}^{N_a} \Delta(f^{(m)}(x), a_y^{(m)}) \ge \frac{\tau}{2}. \qquad (12)
Let k be the number of incorrect image classifications for unseen test dataset D u = {(x i , y i ), i = 1, ..., N u }, we can obtain:
\frac{k\tau}{2} \le \sum_{i=1}^{N_u} \sum_{m=1}^{N_a} \Delta(f^{(m)}(x_i), a_{y_i}^{(m)}) \le \sum_{i=1}^{N_u} \sum_{m=1}^{N_a} B_m = N_u N_a \bar{B}, \qquad (13)
where \bar{B} = \frac{1}{N_a} \sum_{m=1}^{N_a} B_m and B_m is the upper bound of the attribute prediction loss. Hence, the generalization error rate k/N_u is bounded by 2 N_a \bar{B} / \tau.
Remark 1 (The generalization error bound is positively correlated to the average attribute prediction loss). From Theorem 1, the generalization error bound of the attribute-based ZSL model depends on the number of attributes N_a, the minimum attribute distance \tau and the average prediction loss \bar{B} of all the attribute classifiers. According to Definitions 1 and 2, the minimum attribute distance \tau is positively correlated to the number of attributes N_a. Therefore, the generalization error bound is mainly governed by the average prediction loss \bar{B}. Intuitively, inferior attributes with poor predictability cause a greater prediction loss \bar{B}; consequently, these attributes have a negative effect on ZSL performance and increase the generalization error rate.
Improvement of Generalization after Attribute Selection
It has been proven in the previous section that the generalization error bound of the ZSL model is affected by the average prediction loss \bar{B}. In this section, we prove that attribute selection can reduce the average prediction loss \bar{B} and, consequently, the generalization error bound of ZSL, from the perspective of a PAC-style (Valiant, 1984) analysis.
Lemma 1 (PAC bound of ZSL (Palatucci et al., 2009)). Given N a attribute classifiers, to obtain an attribute classifier with (1 − δ) probability that has at most k a incorrect predicted attributes, the PAC bound D of the attribute-based ZSL model is:
D \propto \frac{N_a}{k_a} \left[ 4\log(2/\delta) + 8(d+1)\log(13 N_a / k_a) \right], \qquad (14)
where d is the dimension of the image features.
Remark 2 (The average attribute prediction loss is positively correlated to the PAC bound). Here, k_a/N_a is the tolerable prediction error rate of the attribute classifiers. According to the definition of the average attribute prediction loss \bar{B}, it is obvious that a ZSL model with a smaller \bar{B} could tolerate a greater k_a/N_a. From Lemma 1, we can find that the PAC bound D is monotonically increasing with respect to N_a/k_a. Hence, the PAC bound D decreases when N_a/k_a decreases, and consequently the average prediction loss \bar{B} decreases.
Lemma 2 (Test Error Bound (Vapnik, 2013)). Suppose that the PAC bound of the attribute-based ZSL model is D. The probability of the test error distancing from an upper bound is given by:
p\left( e_{ts} \le e_{tr} + \sqrt{ \frac{1}{N_s} \left[ D \left( \log\frac{2N_s}{D} + 1 \right) - \log\frac{\eta}{4} \right] } \right) = 1 - \eta, \qquad (15)
where N_s is the size of the training set, 0 \le \eta \le 1, and e_{ts}, e_{tr} are the test error and the training error, respectively.
Remark 3 (A smaller test error bound implies a smaller PAC bound). The upper bound on the test error in Lemma 2 is monotonically increasing with respect to the PAC bound D; hence, a model with a smaller test error bound also has a smaller PAC bound D.
Proposition 2 (Improvement after attribute selection). The generalization error bound of ZSLIAS is smaller than that of the original ZSL model.
Proof. In attribute selection, the key attributes are selected by minimizing the loss function in Eq.
(1) on the out-of-the-box data. Since the generated out-of-the-box data has a similar distribution to the test data, the test error of ZSL will decrease after attribute selection, i.e. ZSLIAS has a smaller test error bound than the original ZSL model. Therefore, we can infer that ZSLIAS has a smaller PAC bound based on Remark 3. According to Remark 2, we can infer that the average prediction errorB decreases after attribute selection. As a consequence, the generalization error bound of ZSLIAS is smaller than the original ZSL model based on Remark 1.
From Proposition 2, we can observe that the generalization error of the ZSL model decreases after adopting the proposed IAS. In other words, ZSLIAS has a smaller classification error rate compared to the original ZSL method when generalizing to the unseen test data.
IAS with Out-of-the-box Data
Motivated by the generalization bound analyses, we select the key attributes based on the out-of-the-box data. In this section, we first present the proposed iterative attribute selection model. Then, we introduce the attribute-guided generative model designed to generate the out-of-the-box data. The complexity analysis of IAS is given at last.
Iterative Attribute Selection Model
Inspired by the idea of iterative machine teaching (Liu et al., 2017), we propose a novel iterative attribute selection model that iteratively selects attributes based on the generated out-of-the-box data. Firstly, we generate the out-of-the-box data to mimic test classes by an attribute-based generative model. Then, the key attributes are selected in an iterative manner based on the out-of-the-box data. After obtaining the selected attributes, we can consider them as a more efficient semantic representation to improve the original ZSL model.
Suppose given the generated out-of-the-box data D g = {(x n , y n ), n = 1, ..., N g }, we can combine the empirical risk in Eq. (1) with the attribute selection model. Then the loss function is rewritten as follows:
L(y, f(x; s, W)) = \frac{1}{N_g} \sum_{n=1}^{N_g} l(y_n, f(x_n; s, W)) + \Omega(W), \qquad (16)
where s \in \{0, 1\}^{N_a} is the indicator vector for the attribute selection, in which s_i = 1 if the i-th attribute is selected and 0 otherwise, and N_a is the number of all the attributes. Correspondingly, the mapping function f in Eq. (2) and the compatibility function F in Eq. (3) can be rewritten as follows:
f(x; s, W) = \arg\max_{y \in Y} F(x, y; s, W), \qquad (17)
F(x, y; s, W) = \theta(x)^T W (s \circ \varphi(y)), \qquad (18)
where \circ is the element-wise product operator (Hadamard product) and s is the selection vector defined in Eq. (16).
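To illustrate how the selection vector s enters Eq. (18), the following toy sketch scores classes with and without an attribute mask; all quantities are made up for illustration and do not come from the paper.

```python
# Masked bilinear compatibility: only the selected attribute dimensions contribute.
import numpy as np

def predict_with_mask(theta_x, W, Phi, s):
    scores = theta_x @ W @ (s[None, :] * Phi).T   # F(x, y; s, W) = theta(x)^T W (s * phi(y))
    return int(np.argmax(scores))

rng = np.random.default_rng(1)
W = rng.normal(size=(16, 6))
Phi = rng.integers(0, 2, size=(4, 6)).astype(float)
x = rng.normal(size=16)
s_all = np.ones(6)                                # original ZSL: every attribute is used
s_sel = np.array([1, 0, 1, 1, 0, 0], dtype=float) # IAS: only the selected key attributes
print(predict_with_mask(x, W, Phi, s_all), predict_with_mask(x, W, Phi, s_sel))
```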
To solve the optimization problem in Eq. (16), we need to specify the choice of the loss function l(\cdot). The loss function in Eq. (16) for a single sample (x_n, y_n) is expressed as follows (Xian et al., 2018):
l(y_n, f(x_n; s, W)) = \sum_{y \in Y_g} r_{ny} \left[ \Delta(y_n, y) + F(x_n, y; s, W) - F(x_n, y_n; s, W) \right]_+
= \sum_{y \in Y_g} r_{ny} \left[ \Delta(y_n, y) + \theta(x_n)^T W (s \circ \varphi(y)) - \theta(x_n)^T W (s \circ \varphi(y_n)) \right]_+ , \qquad (19)
where Y_g is the label set of the generated out-of-the-box data, which is the same as Y_u, \Delta(y_n, y) = 0 if y_n = y and 1 otherwise, and r_{ny} \in [0, 1] is a weight defined by the specific ZSL method.
Since the dimension of the optimal attribute subset (i.e., the \ell_0-norm of s) is unknown in advance, finding the optimal s is an NP-complete (Garey et al., 1974) problem. Therefore, inspired by the idea of iterative machine teaching (Liu et al., 2017), we adopt a greedy algorithm (Cormen et al., 2009) to optimize the loss function in an iterative manner. Eq. (16) is updated during each iteration as follows:
L^{t+1} = \frac{1}{N_g} \sum_{n=1}^{N_g} l^{t+1}(y_n, f(x_n; s^{t+1}, W^{t+1})) + \Omega(W^{t+1}),
\quad \text{s.t.} \;\; \sum_{s_i \in s^{t+1}} s_i = t + 1, \;\; \sum_{s_j \in (s^{t+1} - s^t)} s_j = 1. \qquad (20)
The constraints on s ensure that s t updates one element (from 0 updates to 1) during each iteration, which indicates that only one attribute is selected each time. s 0 is the initial vector of all 0's. Correspondingly, the loss function in Eq. (20) for single sample (x n , y n ) gets updated during each iteration as follows:
l^{t+1} = \sum_{y \in Y_g} r_{ny} \left[ \Delta(y_n, y) + \theta(x_n)^T W^{t+1} (s^{t+1} \circ \varphi(y)) - \theta(x_n)^T W^{t+1} (s^{t+1} \circ \varphi(y_n)) \right]_+ . \qquad (21)
Here l^{t+1} is subject to the same constraints as Eq. (20).
To minimize the loss function in Eq. (20), we alternately optimize W^{t+1} and s^{t+1}, optimizing one variable while fixing the other. In each iteration, we first optimize W^{t+1} via the gradient descent algorithm (Burges et al., 2005). The gradient of Eq. (20) is calculated as follows:
\frac{\partial L^{t+1}}{\partial W^{t+1}} = \frac{1}{N_g} \sum_{n=1}^{N_g} \frac{\partial l^{t+1}}{\partial W^{t+1}} + \frac{1}{2} \alpha W^{t+1}, \qquad (22)
where
\frac{\partial l^{t+1}}{\partial W^{t+1}} = \sum_{y \in Y_g} r_{ny} \, \theta(x_n)^T \left( s^t \circ (\varphi(y) - \varphi(y_n)) \right), \qquad (23)
where \alpha is the regularization parameter. After updating W^{t+1}, we traverse all the elements of s^t that equal 0 and set each of them to 1 in turn. Then s^{t+1} is chosen as the candidate that achieves the minimal loss in Eq. (20):
$$s^{t+1} = \arg\min_{s^{t+1}} \frac{1}{N_g}\sum_{n=1}^{N_g} l^{t+1}\big(y_n, f(x_n; s^{t+1}, W^{t+1})\big) + \Omega(W^{t+1}). \qquad (24)$$
When the iterations terminate and $s$ is obtained, the subset of key attributes is simply the set of attributes whose corresponding elements in $s$ are equal to 1.
The procedure of the proposed IAS model is given in Algorithm 1.
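As a complement to Algorithm 1, the following is a hedged Python sketch of the alternating greedy loop in Eqs. (20)–(24): one gradient step on $W$, followed by flipping the single zero entry of $s$ that most reduces the loss. The synthetic loss and gradient functions are placeholders for the empirical risk of Eq. (20) and its gradient in Eq. (22), not the authors' code.

```python
import numpy as np

def ias_greedy(loss_fn, grad_fn, N_a, W0, lr=0.01, eps=1e-4):
    """Greedy iterative attribute selection (a sketch of Algorithm 1).

    loss_fn(s, W) -> scalar empirical loss, standing in for Eq. (20).
    grad_fn(s, W) -> gradient of that loss w.r.t. W, standing in for Eq. (22).
    """
    s, W = np.zeros(N_a), W0.copy()
    prev_loss = np.inf
    for t in range(N_a):
        W = W - lr * grad_fn(s, W)                    # gradient step on W
        best_loss, best_i = np.inf, None
        for i in np.flatnonzero(s == 0):              # try every currently unselected attribute
            s[i] = 1.0
            cur = loss_fn(s, W)
            if cur < best_loss:
                best_loss, best_i = cur, i
            s[i] = 0.0
        s[best_i] = 1.0                               # keep the best single flip (Eq. (24))
        if abs(prev_loss - best_loss) <= eps:         # stopping criterion |L^{t+1} - L^t| <= eps
            break
        prev_loss = best_loss
    return s, W

# Toy usage with a synthetic quadratic loss; a real run would plug in Eqs. (20) and (22).
rng = np.random.default_rng(0)
target = (rng.random(10) > 0.5).astype(float)
loss = lambda s, W: float(np.sum((s - target) ** 2) + 0.5 * np.sum(W ** 2))
grad = lambda s, W: W
s, _ = ias_greedy(loss, grad, N_a=10, W0=rng.normal(size=(4, 10)))
print("selected attribute indices:", np.flatnonzero(s))
```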
Generation of Out-of-the-box Data
In order to select attributes that are discriminative for the test classes, attribute selection should ideally be performed on the test data. Since the training data and the test data lie in different boxes bounded by the attributes, we adopt an attribute-based generative model (Bucher et al., 2017) to generate out-of-the-box data that mimics the test classes. Compared to ZSLAS, the key attributes selected by IAS based on the out-of-the-box data generalize more effectively to the test data. The conditional variational autoencoder (CVAE) (Sohn et al., 2015) is a conditional generative model in which both the latent codes and the generated data are conditioned on some extra information. In this work, we propose the attribute-based variational autoencoder (AVAE), a variant of CVAE conditioned on tailor-made attributes, to generate the out-of-the-box data.
VAE (Kingma et al., 2013) is a directed graphical model with certain types of latent variables. The generative process of VAE is as follows: a set of latent codes z is generated from the prior distribution p(z), and the data x is generated by the generative distribution p(x|z) conditioned on z : z ∼ p(z), x ∼ p(x|z). The empirical objective of VAE is expressed as follows (Sohn et al., 2015):
$$L_{\mathrm{VAE}}(x) = -\mathrm{KL}\big(q(z|x)\,\|\,p(z)\big) + \frac{1}{L}\sum_{l=1}^{L} \log p(x|z^{(l)}), \qquad (25)$$
Algorithm 1 Iterative Attribute Selection Model
Input: the generated out-of-the-box data $D_g$; the original attribute set $A$; the iteration stop threshold $\varepsilon$.
Output: the subset of selected attributes $S$.
 1: Initialization: $s^{0} = \mathbf{0}$, randomize $W^{0}$;
 2: for $t = 0$ to $N_a - 1$ do
 3:     $L^{t} = \frac{1}{N_g}\sum_{n=1}^{N_g} l^{t}(y_n, f(x_n; s^{t}, W^{t})) + \Omega(W^{t})$   (Eq. (20))
 4:     $\frac{\partial L^{t}}{\partial W^{t}} = \frac{1}{N_g}\sum_{n=1}^{N_g} \frac{\partial l^{t}}{\partial W^{t}} + \frac{1}{2}\alpha W^{t}$   (Eq. (22))
        ...
 9:     if $|L^{t+1} - L^{t}| \leq \varepsilon$ then
10:         Break;
11:     end if
12: end for
13: Obtain the subset of selected attributes: $S = s \circ A$.
where $z^{(l)} = g(x, \epsilon^{(l)})$ and $\epsilon^{(l)} \sim \mathcal{N}(0, I)$. $q(z|x)$ is the recognition distribution, which is reparameterized with a deterministic and differentiable function $g(\cdot, \cdot)$ (Sohn et al., 2015). KL denotes the Kullback-Leibler divergence (Kullback, 1987) between the two distributions, and $L$ is the number of samples.
Combining this objective with the condition, i.e., the attribute representation of the labels, the empirical objective of AVAE is defined as follows:
$$L_{\mathrm{AVAE}}(x, \varphi(y)) = -\mathrm{KL}\big(q(z|x, \varphi(y))\,\|\,p(z|\varphi(y))\big) + \frac{1}{L}\sum_{l=1}^{L} \log p(x|\varphi(y), z^{(l)}), \qquad (26)$$
where $z^{(l)} = g(x, \varphi(y), \epsilon^{(l)})$ and $\varphi(y)$ is the attribute representation of label $y$.
In the encoding stage, for each training data point $x^{(i)}$, we estimate $q(z^{(i)}|x^{(i)}, \varphi(y^{(i)})) = Q(z)$ using the encoder. In the decoding stage, given the concatenation of a $z$ sampled from $Q(z)$ and the attribute representation $\varphi(y_u)$, the decoder generates a new sample $x_g$ with the same attribute representation as the unseen class $\varphi(y_u)$.
The procedure of AVAE is illustrated in Figure 3. At training time, the attribute representation (of the training classes) corresponding to the input image is provided to both the encoder and the decoder. To generate an image with a particular attribute representation (of the test classes), we simply feed this attribute vector along with a random point in the latent space sampled from a standard normal distribution. The system therefore no longer relies on the latent space to encode which object is being generated; that information is instead provided by the attribute representation. Since the attribute representations of the test classes are fed into the decoder at generation time, the generated out-of-the-box data $D_g$ has a similar distribution to the test data.
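To illustrate the mechanism, below is a minimal PyTorch sketch of an attribute-conditioned VAE in the spirit of Eq. (26). It operates on image feature vectors rather than raw images, uses a standard normal prior instead of $p(z|\varphi(y))$, and its layer sizes are arbitrary; these are simplifying assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AVAE(nn.Module):
    def __init__(self, feat_dim=2048, attr_dim=85, latent_dim=64, hidden=512):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(feat_dim + attr_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent_dim)
        self.logvar = nn.Linear(hidden, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim + attr_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, feat_dim))

    def forward(self, x, attr):
        h = self.enc(torch.cat([x, attr], dim=1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization z = g(x, phi(y), eps)
        return self.dec(torch.cat([z, attr], dim=1)), mu, logvar

def avae_loss(x, x_rec, mu, logvar):
    """Negative of the objective in Eq. (26): reconstruction term + KL(q(z|x, phi(y)) || N(0, I))."""
    rec = F.mse_loss(x_rec, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl

model = AVAE()
x, attr = torch.randn(8, 2048), torch.rand(8, 85)        # toy batch of features and attribute vectors
x_rec, mu, logvar = model(x, attr)
print(avae_loss(x, x_rec, mu, logvar).item())

# Generating out-of-the-box samples for an unseen class: feed z ~ N(0, I) with its attributes.
with torch.no_grad():
    z = torch.randn(8, 64)
    unseen_attr = torch.rand(8, 85)                       # placeholder for phi(y_u)
    x_g = model.dec(torch.cat([z, unseen_attr], dim=1))
```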
Complexity Analysis
Experiments
To evaluate the performance of the proposed iterative attribute selection model, extensive experiments are conducted on four standard datasets under the ZSL setting. In this section, we first compare the proposed approach with the state of the art and then provide detailed analyses.
Experimental Settings
Dataset
We conduct experiments on four standard ZSL datasets: (1) Animals with Attributes (AwA) (Lampert et al., 2013), (2) attribute-Pascal-Yahoo (aPY) (Farhadi et al., 2009), (3) Caltech-UCSD Birds-200-2011 (CUB) (Wah et al., 2011), and (4) the SUN Attribute Database (SUN) (Patterson et al., 2012). The overall statistics of these datasets are summarized in Table 2.
Table 2. Dataset statistics (SS: standard split, PS: proposed split).
Dataset  #Attributes  Classes: Total/Train/Test  Images SS: Train/Test  Images PS: Train/Test
AwA      85           50 / 40 / 10               24295 / 6180           19832 / 5685
aPY      64           32 / 20 / 12               12695 / 2644           5932 / 7924
CUB      312          200 / 150 / 50             8855 / 2933            7057 / 2967
SUN      102          717 / 645 / 72             12900 / 1440           10320 / 1440
Dataset Split
Zero-shot learning assumes that training classes and test classes are disjoint. However, ImageNet, the dataset used to extract image features via deep neural networks, may include some of the test classes. Therefore, Xian et al. (2018) proposed a new dataset split (PS) ensuring that none of the test classes appears in the dataset used to train the feature extractor. In this paper, we evaluate the proposed model using both splits, i.e., the original standard split (SS) and the proposed split (PS).
Image Feature
Deep neural network features are extracted for the experiments. Image features are extracted from the entire images for the AwA, CUB, and SUN datasets, and from the bounding boxes described in Farhadi et al. (2009) for the aPY dataset. The original ResNet-101 (He et al., 2016), pre-trained on ImageNet with 1K classes, is used to compute 2048-dimensional top-layer pooling units as image features.
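A small sketch of this feature extraction step is shown below, using an ImageNet-pretrained ResNet-101 from torchvision with the classifier head replaced so that the 2048-dimensional pooled features are returned; the image path and the preprocessing values are the usual torchvision defaults and are only illustrative.

```python
import torch
from torchvision import models, transforms
from PIL import Image

resnet = models.resnet101(pretrained=True)
resnet.fc = torch.nn.Identity()            # keep the 2048-d top-layer pooling units
resnet.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

img = Image.open("example_image.jpg").convert("RGB")   # placeholder path
with torch.no_grad():
    feat = resnet(preprocess(img).unsqueeze(0))        # shape: (1, 2048)
print(feat.shape)
```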
Attribute Representation
Attributes are used as the semantic representation to transfer information from training classes to test classes. We use 85-, 64-, 312-, and 102-dimensional continuous-valued attributes for the AwA, aPY, CUB, and SUN datasets, respectively.
Evaluation Protocol
Unified dataset splits, as shown in Table 2, are used for all compared methods to ensure a fair comparison. Since the datasets are not well balanced with respect to the number of images per class (Xian et al., 2018), we use the mean class accuracy, i.e., per-class averaged top-1 accuracy, as the evaluation criterion. Mean class accuracy is calculated as follows:
$$acc = \frac{1}{L}\sum_{y \in Y_u} \frac{\#\,\text{correct predictions in } y}{\#\,\text{samples in } y},$$
where $L$ is the number of test classes and $Y_u$ is the set of all test labels.
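A short NumPy sketch of this per-class averaged top-1 accuracy is given below; the label arrays are toy placeholders.

```python
import numpy as np

def mean_class_accuracy(y_true, y_pred, test_labels):
    """Per-class averaged top-1 accuracy over the test label set Y_u."""
    per_class = []
    for y in test_labels:
        mask = (y_true == y)
        if mask.any():
            per_class.append(np.mean(y_pred[mask] == y))   # accuracy within class y
    return float(np.mean(per_class))                        # average over the L test classes

y_true = np.array([0, 0, 1, 1, 1, 2])
y_pred = np.array([0, 1, 1, 1, 0, 2])
print(mean_class_accuracy(y_true, y_pred, test_labels=[0, 1, 2]))  # (1/2 + 2/3 + 1) / 3
```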
Comparison with the State-of-the-Art
To evaluate the effectiveness of the proposed iterative attribute selection model, we modify several recent ZSL baselines with the proposed IAS and compare them with the state of the art.
We modify seven representative ZSL baselines to evaluate the IAS model, including three popular ZSL baselines (i.e., DAP (Lampert et al., 2013), LatEm (Xian et al., 2016), and SAE (Kodirov et al., 2017)) and four recent ZSL baselines (i.e., MFMR, GANZrl (Tong et al., 2018), fVG (Xian et al., 2019), and LLAE (Li et al., 2019)).
The improvement achieved on these ZSL baselines is summarized in Table 3. It can be observed that IAS significantly improves the performance of attribute-based ZSL methods. Specifically, the mean accuracies of these ZSL methods on the four datasets (i.e., AwA, aPY, CUB, and SUN) increase by 11.09%, 15.97%, 9.10%, and 5.11%, respectively (10.29% on average) after using IAS. For DAP on the AwA and aPY datasets and LatEm on the AwA dataset, IAS improves accuracy by more than 20%, which demonstrates that IAS can significantly improve the performance of ZSL models. Interestingly, SAE performs badly on the aPY and CUB datasets, yet its accuracy rises to an acceptable level (from 8.33% to 38.53%, and from 24.65% to 42.85%, respectively) when IAS is used. Even though the state-of-the-art baselines already perform well, IAS still improves them to some extent (5.48%, 3.24%, 2.80%, and 3.64% on average for MFMR, GANZrl, fVG, and LLAE, respectively). These results demonstrate that the proposed iterative attribute selection model is well founded and can effectively improve existing attribute-based ZSL methods. This also confirms the necessity and effectiveness of attribute selection for ZSL tasks.
As the work most similar to ours, ZSLAS selects attributes based on the distributive entropy and the predictability of attributes. We therefore compare the improvement of IAS and ZSLAS on DAP and LatEm, respectively. In Table 3, it can be observed that ZSLAS improves existing ZSL methods, while IAS improves them by a larger margin (2.15% vs. 10.61% on average). Compared to ZSLAS, the advantages of ZSLIAS can be interpreted in two aspects. First, ZSLIAS selects attributes in an iterative manner, so it can find a better subset of key attributes than ZSLAS, which selects attributes all at once. Second, ZSLAS operates on the training data, whereas ZSLIAS operates on the out-of-the-box data, which has a similar distribution to the test data. Therefore, the attributes selected by ZSLIAS are more applicable and discriminative for the test data. Experimental results demonstrate the significant superiority of the proposed IAS model over previous attribute selection models.
Detailed Analysis
In order to further understand the promising performance, we analyze the following experimental results in detail.
Evaluation on the Out-of-the-box Data
In the first experiment, we evaluate the out-of-the-box data generated by the tailor-made attribute-based deep generative model. Figure 4 shows the distribution of the out-of-the-box data and the real test data sampled from the AwA dataset, visualized using t-SNE. Note that the out-of-the-box data in Figure 4(b) is generated solely from the attribute representations of the unseen classes, without any information from the test images. It can be observed that the generated out-of-the-box data captures a distribution similar to that of the real test data, which ensures that the selected attributes can be effectively generalized to the test data.
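This kind of t-SNE comparison can be reproduced along the following lines with scikit-learn; the random feature matrices stand in for the generated and real test features and are purely illustrative.

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
X_g = rng.normal(0.0, 1.0, size=(300, 2048))   # placeholder for generated out-of-the-box features
X_u = rng.normal(0.2, 1.0, size=(300, 2048))   # placeholder for real unseen test features

emb = TSNE(n_components=2, init="pca", random_state=0).fit_transform(np.vstack([X_g, X_u]))
plt.scatter(emb[:300, 0], emb[:300, 1], s=5, label="out-of-the-box (generated)")
plt.scatter(emb[300:, 0], emb[300:, 1], s=5, label="real test data")
plt.legend()
plt.savefig("tsne_comparison.png")
```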
We also quantitatively evaluate the out-of-the-box data by calculating various distances between three distributions, i.e., the generated out-of-the-box data ($X_g$), the unseen test data ($X_u$), and the seen training data ($X_s$), in pairs. Table 4 shows the distribution distances measured by the Wasserstein distance (Vallender, 1974), KL divergence (Kullback, 1987), Hellinger distance (Beran, 1977), and Bhattacharyya distance (Kailath, 1967), respectively. The distance between $X_g$ and $X_u$ is much smaller than the distance between $X_u$ and $X_s$, which means that the generated out-of-the-box data has a distribution much closer to the unseen test data than the seen data does. Therefore, attributes selected based on the out-of-the-box data are more discriminative for the test data than attributes selected based on the training data. We also illustrate some generated images of unseen classes (i.e., panda and seal) and annotate them with the corresponding attribute representations, as shown in Figure 5. Numbers in black indicate the attribute representations of the labels of real test images. Numbers in red and green are the correct and the incorrect attribute values of the generated images, respectively. We can see that the generated images have attribute representations similar to those of the test images. Therefore, the tailor-made attribute-based deep generative model can generate out-of-the-box data that captures a distribution similar to the unseen data.
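The distance measures in Table 4 can be estimated, for example, from histograms of a one-dimensional projection of the two feature sets, as in the sketch below; the paper does not specify its exact estimators, so the binning and the toy samples are assumptions.

```python
import numpy as np
from scipy.stats import wasserstein_distance, entropy

def distribution_distances(a, b, bins=50):
    """a, b: 1-D samples (e.g., one projected feature dimension of X_g and X_u)."""
    lo, hi = min(a.min(), b.min()), max(a.max(), b.max())
    p, _ = np.histogram(a, bins=bins, range=(lo, hi))
    q, _ = np.histogram(b, bins=bins, range=(lo, hi))
    p = p / p.sum() + 1e-12
    q = q / q.sum() + 1e-12
    bc = np.sum(np.sqrt(p * q))                      # Bhattacharyya coefficient
    return {
        "wasserstein": wasserstein_distance(a, b),
        "kl": entropy(p, q),                         # KL(p || q)
        "hellinger": np.sqrt(max(1.0 - bc, 0.0)),
        "bhattacharyya": -np.log(bc),
    }

rng = np.random.default_rng(0)
print(distribution_distances(rng.normal(0.0, 1.0, 2000), rng.normal(0.5, 1.0, 2000)))
```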
Effectiveness of IAS
In the second experiment, we compare the performance of three ZSL methods (i.e., DAP, LatEm, and SAE) after using IAS on the four datasets. The accuracies with respect to the number of selected attributes are shown in Figure 6. On the AwA, aPY, and SUN datasets, the performance of these three ZSL methods increases sharply as the number of selected attributes grows from 0 to about 20%, and then reaches its peak. These results suggest that only about a quarter of the attributes are key attributes, which are necessary and effective for classifying the test objects. In Figures 6(b) and 6(f), there is an interesting result: SAE performs badly on the aPY dataset with both SS and PS (the accuracy is less than 10%), while the performance becomes acceptable after using IAS (the accuracy is about 40%). These results demonstrate the effectiveness and robustness of IAS for ZSL tasks. Furthermore, we modify DAP by using all the attributes (#84), the selected attributes (#20), and the remaining attributes (#64) after attribute selection, respectively. The resulting confusion matrices of these three variants evaluated on the AwA dataset with the proposed split are illustrated in Figure 7. The numbers in the diagonal area (yellow patches) of the confusion matrices indicate the per-class classification accuracy. IAS significantly improves DAP on most of the test classes, and the accuracies on some classes, such as horse, seal, and giraffe, nearly double after using IAS. Even for objects that are hard for DAP to recognize, such as dolphin (where the accuracy of DAP is 1.6%), we obtain acceptable performance after using IAS (the accuracy of DAPIAS is 72.7%). The original DAP outperforms IAS only on the blue whale class; this is because the original DAP classifies most marine creatures (such as blue whale, walrus, and dolphin) as blue whale, which increases the classification accuracy for that class while also increasing the false positive rate. More importantly, the confusion matrix of DAPIAS contains less noise (i.e., smaller numbers in the off-diagonal regions (white patches) of the confusion matrix) than that of DAP, which suggests that DAPIAS has less prediction uncertainty. In other words, adopting IAS improves the robustness of attribute-based ZSL methods.
In Figure 7, the accuracy when using the selected attributes (71.88% on average) is significantly higher than when using all the attributes (46.23% on average), while the accuracy when using the remaining attributes (31.32% on average) is very poor. These results suggest that the selected attributes are the key attributes for discriminating the test data. The remaining attributes are of little use and can even have a negative impact on the ZSL system. Therefore, not all attributes are effective for ZSL tasks, and the key attributes should be selected to improve performance.
Interpretability of Selected Attributes
In the third experiment, we present the visualization results of attribute selection. As shown in Figure 6, the ZSL methods obtain their best performance when about 20% of the attributes are selected. Therefore, we illustrate the top 20% key attributes selected by DAP, LatEm, and SAE on the four datasets in Figure 8. The three rows in each subfigure correspond to DAP, LatEm, and SAE from top to bottom, and yellow bars indicate the attributes selected by the corresponding method. We can see that the attribute subsets selected by different ZSL methods are highly coincident for the same dataset, which demonstrates that the selected attributes are the key attributes for discriminating the test data. Specifically, we enumerate the key attributes selected by the three ZSL methods on the AwA dataset in Table 5. Attributes in boldface are selected by all three ZSL methods, and attributes in italics are selected by any two of the three methods. It can be observed that 13 attributes (65%) are selected by all three ZSL methods. These three attribute subsets selected by diverse ZSL models are very similar, which is further evidence that IAS is reasonable and useful for zero-shot classification.
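The overlap statistic reported above can be computed directly from the binary selection vectors, as in the brief sketch below; the random selections are placeholders for the vectors produced by DAP, LatEm, and SAE.

```python
import numpy as np

def overlap_count(selections):
    """Number of attributes selected by every method, given a list of 0/1 selection vectors."""
    stacked = np.stack(selections)
    return int(np.all(stacked == 1, axis=0).sum())

rng = np.random.default_rng(0)
selections = []
for _ in range(3):                                   # three ZSL methods, each selecting ~20% of 85 attributes
    s = np.zeros(85)
    s[rng.choice(85, size=17, replace=False)] = 1.0
    selections.append(s)
print(overlap_count(selections), "attributes are selected by all three methods")
```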
Conclusion
We present a novel and effective iterative attribute selection model to improve existing attribute-based ZSL methods. In most of the previous ZSL works, all the attributes are assumed to be effective and treated equally. However, we notice that attributes have different predictability and discriminability for diverse objects. Motivated by this observation, we propose to select the key attributes to build ZSL model. Since training classes and test classes are disjoint in ZSL tasks, we introduce the out-of-the-box data to mimic test data to guide the progress of attribute selection. The out-of-the-box data generated by a tailor-made attribute-based deep generative model has a similar distribution to the test data. Hence, the attributes selected by IAS based on the out-of-the-box data can be effectively generalized to the test data. To evaluate the effectiveness of IAS, we conduct extensive experiments on four standard ZSL datasets. Experimental results demonstrate that IAS can effectively select the key attributes for ZSL tasks and significantly improve state-of-the-art ZSL methods.
In this work, we select the same attributes for all the unseen test classes. Obviously, this is not the globally optimal way to select attributes for diverse categories. In the future, we will consider a tailor-made attribute selection model that selects a specialized subset of key attributes for each test class. | 7,799
1907.11474 | 2966558078 | Semantic segmentation for lightweight urban scene parsing is a very challenging task, because both accuracy and efficiency (e.g., execution speed, memory footprint, and computation complexity) should all be taken into account. However, most previous works pay too much attention to one-sided perspective, either accuracy or speed, and ignore others, which poses a great limitation to actual demands of intelligent devices. To tackle this dilemma, we propose a new lightweight architecture named Context-Integrated and Feature-Refined Network (CIFReNet). The core components of our architecture are the Long-skip Refinement Module (LRM) and the Multi-scale Contexts Integration Module (MCIM). With low additional computation cost, LRM is designed to ease the propagation of spatial information and boost the quality of feature refinement. Meanwhile, MCIM consists of three cascaded Dense Semantic Pyramid (DSP) blocks with a global constraint. It makes full use of sub-regions close to the target and enlarges the field of view in an economical yet powerful way. Comprehensive experiments have demonstrated that our proposed method reaches a reasonable trade-off among overall properties on Cityscapes and Camvid dataset. Specifically, with only 7.1 GFLOPs, CIFReNet that contains less than 1.9 M parameters obtains a competitive result of 70.9 MIoU on Cityscapes test set and 64.5 on Camvid test set at a real-time speed of 32.3 FPS, which is more cost-efficient than other state-of-the-art methods. | Some recent works based on Fully Convolution Networks (FCNs) @cite_4 have achieved promising results on public benchmarks @cite_8 , @cite_24 . We then review the latest deep-learning-based methods from lightweight-oriented and accuracy-oriented aspects for scene parsing tasks. | {
"abstract": [
"",
"Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, improve on the previous best result in semantic segmentation. Our key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional networks achieve improved segmentation of PASCAL VOC (30 relative improvement to 67.2 mean IU on 2012), NYUDv2, SIFT Flow, and PASCAL-Context, while inference takes one tenth of a second for a typical image.",
"Visual understanding of complex urban street scenes is an enabling factor for a wide range of applications. Object detection has benefited enormously from large-scale datasets, especially in the context of deep learning. For semantic urban scene understanding, however, no current dataset adequately captures the complexity of real-world urban scenes. To address this, we introduce Cityscapes, a benchmark suite and large-scale dataset to train and test approaches for pixel-level and instance-level semantic labeling. Cityscapes is comprised of a large, diverse set of stereo video sequences recorded in streets from 50 different cities. 5000 of these images have high quality pixel-level annotations, 20 000 additional images have coarse annotations to enable methods that leverage large volumes of weakly-labeled data. Crucially, our effort exceeds previous attempts in terms of dataset size, annotation richness, scene variability, and complexity. Our accompanying empirical study provides an in-depth analysis of the dataset characteristics, as well as a performance evaluation of several state-of-the-art approaches based on our benchmark."
],
"cite_N": [
"@cite_24",
"@cite_4",
"@cite_8"
],
"mid": [
"",
"2395611524",
"2340897893"
]
} | 0 |
||
1907.10202 | 2962894112 | We address the challenging problem of generating facial attributes using a single image in an unconstrained pose. In contrast to prior works that largely consider generation on 2D near-frontal images, we propose a GAN-based framework to generate attributes directly on a dense 3D representation given by UV texture and position maps, resulting in photorealistic, geometrically-consistent and identity-preserving outputs. Starting from a self-occluded UV texture map obtained by applying an off-the-shelf 3D reconstruction method, we propose two novel components. First, a texture completion generative adversarial network (TC-GAN) completes the partial UV texture map. Second, a 3D attribute generation GAN (3DA-GAN) synthesizes the target attribute while obtaining an appearance consistent with 3D face geometry and preserving identity. Extensive experiments on CelebA, LFW and IJB-A show that our method achieves consistently better attribute generation accuracy than prior methods, a higher degree of qualitative photorealism and preserves face identity information. | Early works @cite_27 @cite_2 apply a 3D Morphable Model and search for dense point correspondence to complete the invisible face region. @cite_8 proposes a high fidelity pose and expression normalization approach based on 3DMM. @cite_40 formulate the frontalization as a low rank optimization problem. @cite_12 formulate the frontalization as a recurrent object rotation problem. @cite_29 propose a concatenate network structure to rotate faces with image-level reconstruction constraint. @cite_18 proposes using the identity perception feature to reconstruct normalized faces. Recently, GAN-based generative models @cite_31 @cite_30 @cite_15 @cite_25 @cite_41 @cite_3 have achieved high visual quality and preserve identity with large extent. Our method aligns in the GAN-based methods but works on 3D UV position and texture other than the 2D images. | {
"abstract": [
"Despite recent advances in face recognition using deep learning, severe accuracy drops are observed for large pose variations in unconstrained environments. Learning pose-invariant features is one solution, but needs expensively labeled large-scale data and carefully designed feature learning algorithms. In this work, we focus on frontalizing faces in the wild under various head poses, including extreme profile view's. We propose a novel deep 3D Morphable Model (3DMM) conditioned Face Frontalization Generative Adversarial Network (GAN), termed as FF-GAN, to generate neutral head pose face images. Our framework differs from both traditional GANs and 3DMM based modeling. Incorporating 3DMM into the GAN structure provides shape and appearance priors for fast convergence with less training data, while also supporting end-to-end training. The 3DMM-conditioned GAN employs not only the discriminator and generator loss but also a new masked symmetry loss to retain visual quality under occlusions, besides an identity loss to recover high frequency information. Experiments on face recognition, landmark localization and 3D reconstruction consistently show the advantage of our frontalization method on faces in the wild datasets. 1",
"We present a method for synthesizing a frontal, neutral-expression image of a person's face given an input face photograph. This is achieved by learning to generate facial landmarks and textures from features extracted from a facial-recognition network. Unlike previous approaches, our encoding feature vector is largely invariant to lighting, pose, and facial expression. Exploiting this invariance, we train our decoder network using only frontal, neutral-expression photographs. Since these photographs are well aligned, we can decompose them into a sparse set of landmark points and aligned texture maps. The decoder then predicts landmarks and textures independently and combines them using a differentiable image warping operation. The resulting images can be used for a number of applications, such as analyzing facial attributes, exposure and white balance adjustment, or creating a 3-D avatar.",
"Pose and expression normalization is a crucial step to recover the canonical view of faces under arbitrary conditions, so as to improve the face recognition performance. An ideal normalization method is desired to be automatic, database independent and high-fidelity, where the face appearance should be preserved with little artifact and information loss. However, most normalization methods fail to satisfy one or more of the goals. In this paper, we propose a High-fidelity Pose and Expression Normalization (HPEN) method with 3D Morphable Model (3DMM) which can automatically generate a natural face image in frontal pose and neutral expression. Specifically, we firstly make a landmark marching assumption to describe the non-correspondence between 2D and 3D landmarks caused by pose variations and propose a pose adaptive 3DMM fitting algorithm. Secondly, we mesh the whole image into a 3D object and eliminate the pose and expression variations using an identity preserving 3D transformation. Finally, we propose an inpainting method based on Possion Editing to fill the invisible region caused by self occlusion. Extensive experiments on Multi-PIE and LFW demonstrate that the proposed method significantly improves face recognition performance and outperforms state-of-the-art methods in both constrained and unconstrained environments.",
"Photorealistic frontal view synthesis from a single face image has a wide range of applications in the field of face recognition. Although data-driven deep learning methods have been proposed to address this problem by seeking solutions from ample face data, this problem is still challenging because it is intrinsically ill-posed. This paper proposes a Two-Pathway Generative Adversarial Network (TP-GAN) for photorealistic frontal view synthesis by simultaneously perceiving global structures and local details. Four landmark located patch networks are proposed to attend to local textures in addition to the commonly used global encoderdecoder network. Except for the novel architecture, we make this ill-posed problem well constrained by introducing a combination of adversarial loss, symmetry loss and identity preserving loss. The combined loss function leverages both frontal face distribution and pre-trained discriminative deep face models to guide an identity preserving inference of frontal views from profiles. Different from previous deep learning methods that mainly rely on intermediate features for recognition, our method directly leverages the synthesized identity preserving image for downstream tasks like face recognition and attribution estimation. Experimental results demonstrate that our method not only presents compelling perceptual results but also outperforms state-of-theart results on large pose face recognition.",
"We propose a framework based on Generative Adversarial Networks to disentangle the identity and attributes of faces, such that we can conveniently recombine different identities and attributes for identity preserving face synthesis in open domains. Previous identity preserving face synthesis processes are largely confined to synthesizing faces with known identities that are already in the training dataset. To synthesize a face with identity outside the training dataset, our framework requires one input image of that subject to produce an identity vector, and any other input face image to extract an attribute vector capturing, e.g., pose, emotion, illumination, and even the background. We then recombine the identity vector and the attribute vector to synthesize a new face of the subject with the extracted attribute. Our proposed framework does not need to annotate the attributes of faces in any way. It is trained with an asymmetric loss function to better preserve the identity and stabilize the training process. It can also effectively leverage large amounts of unlabeled training face images to further improve the fidelity of the synthesized faces for subjects that are not presented in the labeled training face dataset. Our experiments demonstrate the efficacy of the proposed framework. We also present its usage in a much broader set of applications including face frontalization, face attribute morphing, and face adversarial example detection.",
"Face recognition under viewpoint and illumination changes is a difficult problem, so many researchers have tried to solve this problem by producing the pose- and illumination- invariant feature. [26] changed all arbitrary pose and illumination images to the frontal view image to use for the invariant feature. In this scheme, preserving identity while rotating pose image is a crucial issue. This paper proposes a new deep architecture based on a novel type of multitask learning, which can achieve superior performance in rotating to a target-pose face image from an arbitrary pose and illumination image while preserving identity. The target pose can be controlled by the user's intention. This novel type of multi-task model significantly improves identity preservation over the single task model. By using all the synthesized controlled pose images, called Controlled Pose Image (CPI), for the pose-illumination-invariant feature and voting among the multiple face recognition results, we clearly outperform the state-of-the-art algorithms by more than 4 6 on the MultiPIE dataset.",
"Recently proposed robust 3D face alignment methods establish either dense or sparse correspondence between a 3D face model and a 2D facial image. The use of these methods presents new challenges as well as opportunities for facial texture analysis. In particular, by sampling the image using the fitted model, a facial UV can be created. Unfortunately, due to self-occlusion, such a UV map is always incomplete. In this paper, we propose a framework for training Deep Convolutional Neural Network (DCNN) to complete the facial UV map extracted from in-the-wild images. To this end, we first gather complete UV maps by fitting a 3D Morphable Model (3DMM) to various multiview image and video datasets, as well as leveraging on a new 3D dataset with over 3,000 identities. Second, we devise a meticulously designed architecture that combines local and global adversarial DCNNs to learn an identity-preserving facial UV completion model. We demonstrate that by attaching the completed UV to the fitted mesh and generating instances of arbitrary poses, we can increase pose variations for training deep face recognition verification models, and minimise pose discrepancy during testing, which lead to better performance. Experiments on both controlled and in-the-wild UV datasets prove the effectiveness of our adversarial UV completion model. We achieve state-of-the-art verification accuracy, 94.05 , under the CFP frontal-profile protocol only by combining pose augmentation during training and pose discrepancy reduction during testing. We will release the first in-the-wild UV dataset (we refer as WildUV) that comprises of complete facial UV maps from 1,892 identities for research purposes.",
"“Frontalization” is the process of synthesizing frontal facing views of faces appearing in single unconstrained photos. Recent reports have suggested that this process may substantially boost the performance of face recognition systems. This, by transforming the challenging problem of recognizing faces viewed from unconstrained viewpoints to the easier problem of recognizing faces in constrained, forward facing poses. Previous frontalization methods did this by attempting to approximate 3D facial shapes for each query image. We observe that 3D face shape estimation from unconstrained photos may be a harder problem than frontalization and can potentially introduce facial misalignments. Instead, we explore the simpler approach of using a single, unmodified, 3D surface as an approximation to the shape of all input faces. We show that this leads to a straightforward, efficient and easy to implement method for frontalization. More importantly, it produces aesthetic new frontal views and is surprisingly effective when used for face recognition and gender estimation.",
"Recently, it has been shown that excellent results can be achieved in both facial landmark localization and pose-invariant face recognition. These breakthroughs are attributed to the efforts of the community to manually annotate facial images in many different poses and to collect 3D facial data. In this paper, we propose a novel method for joint frontal view reconstruction and landmark localization using a small set of frontal images only. By observing that the frontal facial image is the one having the minimum rank of all different poses, an appropriate model which is able to jointly recover the frontalized version of the face as well as the facial landmarks is devised. To this end, a suitable optimization problem, involving the minimization of the nuclear norm and the matrix l1 norm is solved. The proposed method is assessed in frontal face reconstruction, face landmark localization, pose-invariant face recognition, and face verification in unconstrained conditions. The relevant experiments have been conducted on 8 databases. The experimental results demonstrate the effectiveness of the proposed method in comparison to the state-of-the-art methods for the target problems.",
"In this paper, we propose a new and effective frontalization algorithm for frontal rendering of unconstrained face images, and experiment it for face recognition. Initially, a 3DMM is fit to the image, and an interpolating function maps each pixel inside the face region on the image to the 3D model's. Thus, we can render a frontal view without introducing artifacts in the final image thanks to the exact correspondence between each pixel and the 3D coordinate of the model. The 3D model is then back projected onto the frontalized image allowing us to localize image patches where to extract the feature descriptors, and thus enhancing the alignment between the same descriptor over different images. Our solution outperforms other frontalization techniques in terms of face verification. Results comparable to state-of-the-art on two challenging benchmark datasets are also reported, supporting our claim of effectiveness of the proposed face image representation.",
"The large pose discrepancy between two face images is one of the key challenges in face recognition. Conventional approaches for pose-invariant face recognition either perform face frontalization on, or learn a pose-invariant representation from, a non-frontal face image. We argue that it is more desirable to perform both tasks jointly to allow them to leverage each other. To this end, this paper proposes Disentangled Representation learning-Generative Adversarial Network (DR-GAN) with three distinct novelties. First, the encoder-decoder structure of the generator allows DR-GAN to learn a generative and discriminative representation, in addition to image synthesis. Second, this representation is explicitly disentangled from other face variations such as pose, through the pose code provided to the decoder and pose estimation in the discriminator. Third, DR-GAN can take one or multiple images as the input, and generate one unified representation along with an arbitrary number of synthetic images. Quantitative and qualitative evaluation on both controlled and in-the-wild databases demonstrate the superiority of DR-GAN over the state of the art.",
"Face synthesis has achieved advanced development by using generative adversarial networks (GANs). Existing methods typically formulate GAN as a two-player game, where a discriminator distinguishes face images from the real and synthesized domains, while a generator reduces its discriminativeness by synthesizing a face of photorealistic quality. Their competition converges when the discriminator is unable to differentiate these two domains. Unlike two-player GANs, this work generates identity-preserving faces by proposing FaceID-GAN, which treats a classifier of face identity as the third player, competing with the generator by distinguishing the identities of the real and synthesized faces (see Fig.1). A stationary point is reached when the generator produces faces that have high quality as well as preserve identity. Instead of simply modeling the identity classifier as an additional discriminator, FaceID-GAN is formulated by satisfying information symmetry, which ensures that the real and synthesized images are projected into the same feature space. In other words, the identity classifier is used to extract identity features from both input (real) and output (synthesized) face images of the generator, substantially alleviating training difficulty of GAN. Extensive experiments show that FaceID-GAN is able to generate faces of arbitrary viewpoint while preserve identity, outperforming recent advanced approaches.",
"An important problem for both graphics and vision is to synthesize novel views of a 3D object from a single image. This is particularly challenging due to the partial observability inherent in projecting a 3D object onto the image space, and the ill-posedness of inferring object shape and pose. However, we can train a neural network to address the problem if we restrict our attention to specific object categories (in our case faces and chairs) for which we can gather ample training data. In this paper, we propose a novel recurrent convolutional encoder-decoder network that is trained end-to-end on the task of rendering rotated objects starting from a single image. The recurrent structure allows our model to capture long-term dependencies along a sequence of transformations. We demonstrate the quality of its predictions for human faces on the Multi-PIE dataset and for a dataset of 3D chair models, and also show its ability to disentangle latent factors of variation (e.g., identity and pose) without using full supervision."
],
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_8",
"@cite_15",
"@cite_41",
"@cite_29",
"@cite_3",
"@cite_27",
"@cite_40",
"@cite_2",
"@cite_31",
"@cite_25",
"@cite_12"
],
"mid": [
"2963100452",
"2610460387",
"1935685005",
"2964337551",
"2964339532",
"1955369839",
"2772024431",
"1916406603",
"2217448953",
"2607839780",
"2737047298",
"2799209711",
"2951475414"
]
} | 0 |
||
1907.10107 | 2962860923 | Lifelong learning is challenging for deep neural networks due to their susceptibility to catastrophic forgetting. Catastrophic forgetting occurs when a trained network is not able to maintain its ability to accomplish previously learned tasks when it is trained to perform new tasks. We study the problem of lifelong learning for generative models, extending a trained network to new conditional generation tasks without forgetting previous tasks, while assuming access to the training data for the current task only. In contrast to state-of-the-art memory replay based approaches which are limited to label-conditioned image generation tasks, a more generic framework for continual learning of generative models under different conditional image generation settings is proposed in this paper. Lifelong GAN employs knowledge distillation to transfer learned knowledge from previous networks to the new network. This makes it possible to perform image-conditioned generation tasks in a lifelong learning setting. We validate Lifelong GAN for both image-conditioned and label-conditioned generation tasks, and provide qualitative and quantitative results to show the generality and effectiveness of our method. | Recent image-conditioned models have shown promising results for numerous image-to-image translation tasks such as maps @math satellite images, sketches @math photos, labels @math images @cite_4 @cite_27 @cite_29 , future frame prediction @cite_23 , superresolution @cite_26 , and inpainting @cite_19 . Moreover, images can be stylized by disentangling the style and the content @cite_20 @cite_30 or by encoding styles into a stylebank (set of convolution filters) @cite_1 . Models @cite_34 @cite_12 for rendering a person's appearance onto a given pose have shown to be effective for person re-identification. Label-conditioned models @cite_37 @cite_21 have also been explored for generating images for specific categories. | {
"abstract": [
"This paper introduces a deep-learning approach to photographic style transfer that handles a large variety of image content while faithfully transferring the reference style. Our approach builds upon the recent work on painterly transfer that separates style from the content of an image by considering different layers of a neural network. However, as is, this approach is not suitable for photorealistic style transfer. Even when both the input and reference images are photographs, the output still exhibits distortions reminiscent of a painting. Our contribution is to constrain the transformation from the input to the output to be locally affine in colorspace, and to express this constraint as a custom fully differentiable energy term. We show that this approach successfully suppresses distortion and yields satisfying photorealistic style transfers in a broad variety of scenarios, including transfer of the time of day, weather, season, and artistic edits.",
"This paper describes InfoGAN, an information-theoretic extension to the Generative Adversarial Network that is able to learn disentangled representations in a completely unsupervised manner. InfoGAN is a generative adversarial network that also maximizes the mutual information between a small subset of the latent variables and the observation. We derive a lower bound of the mutual information objective that can be optimized efficiently. Specifically, InfoGAN successfully disentangles writing styles from digit shapes on the MNIST dataset, pose from lighting of 3D rendered images, and background digits from the central digit on the SVHN dataset. It also discovers visual concepts that include hair styles, presence absence of eyeglasses, and emotions on the CelebA face dataset. Experiments show that InfoGAN learns interpretable representations that are competitive with representations learned by existing supervised methods. For an up-to-date version of this paper, please see https: arxiv.org abs 1606.03657.",
"Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method.",
"We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Moreover, since the release of the pix2pix software associated with this paper, hundreds of twitter users have posted their own artistic experiments using our system. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without handengineering our loss functions either.",
"Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples. Our goal is to learn a mapping G : X → Y such that the distribution of images from G(X) is indistinguishable from the distribution Y using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping F : Y → X and introduce a cycle consistency loss to push F(G(X)) ≈ X (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach.",
"Generative Adversarial Nets (GANs) have shown promise in image generation and semi-supervised learning (SSL). However, existing GANs in SSL have two problems: (1) the generator and the discriminator (i.e. the classifier) may not be optimal at the same time; and (2) the generator cannot control the semantics of the generated samples. The problems essentially arise from the two-player formulation, where a single discriminator shares incompatible roles of identifying fake samples and predicting labels and it only estimates the data without considering the labels. To address the problems, we present triple generative adversarial net (Triple-GAN), which consists of three players---a generator, a discriminator and a classifier. The generator and the classifier characterize the conditional distributions between images and labels, and the discriminator solely focuses on identifying fake image-label pairs. We design compatible utilities to ensure that the distributions characterized by the classifier and the generator both converge to the data distribution. Our results on various datasets demonstrate that Triple-GAN as a unified model can simultaneously (1) achieve the state-of-the-art classification results among deep generative models, and (2) disentangle the classes and styles of the input and transfer smoothly in the data space via interpolation in the latent space class-conditionally.",
"We propose StyleBank, which is composed of multiple convolution filter banks and each filter bank explicitly represents one style, for neural image style transfer. To transfer an image to a specific style, the corresponding filter bank is operated on top of the intermediate feature embedding produced by a single auto-encoder. The StyleBank and the auto-encoder are jointly learnt, where the learning is conducted in such a way that the auto-encoder does not encode any style information thanks to the flexibility introduced by the explicit filter bank representation. It also enables us to conduct incremental learning to add a new image style by learning a new filter bank while holding the auto-encoder fixed. The explicit style representation along with the flexible network design enables us to fuse styles at not only the image level, but also the region level. Our method is the first style transfer network that links back to traditional texton mapping methods, and hence provides new understanding on neural style transfer. Our method is easy to train, runs in real-time, and produces results that qualitatively better or at least comparable to existing methods.",
"Semantic image inpainting is a challenging task where large missing regions have to be filled based on the available visual data. Existing methods which extract information from only a single image generally produce unsatisfactory results due to the lack of high level context. In this paper, we propose a novel method for semantic image inpainting, which generates the missing content by conditioning on the available data. Given a trained generative model, we search for the closest encoding of the corrupted image in the latent image manifold using our context and prior losses. This encoding is then passed through the generative model to infer the missing content. In our method, inference is possible irrespective of how the missing content is structured, while the state-of-the-art learning based method requires specific information about the holes in the training phase. Experiments on three datasets show that our method successfully predicts information in large missing regions and achieves pixel-level photorealism, significantly outperforming the state-of-the-art methods.",
"",
"We propose a hierarchical approach for making long-term predictions of future frames. To avoid inherent compounding errors in recursive pixel-level prediction, we propose to first estimate high-level structure in the input frames, then predict how that structure evolves in the future, and finally by observing a single frame from the past and the predicted high-level structure, we construct the future frames without having to observe any of the pixel-level predictions. Long-term video prediction is difficult to perform by recurrently observing the predicted frames because the small errors in pixel space exponentially amplify as predictions are made deeper into the future. Our approach prevents pixel-level error propagation from happening by removing the need to observe the predicted frames. Our model is built with a combination of LSTM and analogy-based encoder-decoder convolutional neural networks, which independently predict the video structure and generate the future frames, respectively. In experiments, our model is evaluated on the Human 3.6M and Penn Action datasets on the task of long-term pixel-level video prediction of humans performing actions and demonstrate significantly better results than the state-of-the-art.",
"",
"This paper proposes the novel Pose Guided Person Generation Network (PG @math ) that allows to synthesize person images in arbitrary poses, based on an image of that person and a novel pose. Our generation framework PG @math utilizes the pose information explicitly and consists of two key stages: pose integration and image refinement. In the first stage the condition image and the target pose are fed into a U-Net-like network to generate an initial but coarse image of the person with the target pose. The second stage then refines the initial and blurry result by training a U-Net-like generator in an adversarial way. Extensive experimental results on both 128 @math 64 re-identification images and 256 @math 256 fashion photos show that our model generates high-quality person images with convincing details.",
"We consider image transformation problems, where an input image is transformed into an output image. Recent methods for such problems typically train feed-forward convolutional neural networks using a per-pixel loss between the output and ground-truth images. Parallel work has shown that high-quality images can be generated by defining and optimizing perceptual loss functions based on high-level features extracted from pretrained networks. We combine the benefits of both approaches, and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks. We show results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by in real-time. Compared to the optimization-based method, our network gives similar qualitative results but is three orders of magnitude faster. We also experiment with single-image super-resolution, where replacing a per-pixel loss with a perceptual loss gives visually pleasing results."
],
"cite_N": [
"@cite_30",
"@cite_37",
"@cite_26",
"@cite_4",
"@cite_29",
"@cite_21",
"@cite_1",
"@cite_19",
"@cite_27",
"@cite_23",
"@cite_34",
"@cite_12",
"@cite_20"
],
"mid": [
"2604721644",
"2963226019",
"2523714292",
"2963073614",
"2962793481",
"2964218010",
"2604737827",
"2963917315",
"",
"2963253230",
"2894236793",
"2962819541",
"2331128040"
]
} | Lifelong GAN: Continual Learning for Conditional Image Generation | Learning is a lifelong process for humans. We acquire knowledge throughout our lives so that we become more efficient and versatile facing new tasks. The accumulation of knowledge in turn accelerates our acquisition of new skills. In contrast to human learning, lifelong learning remains an open challenge for modern deep learning systems. It is well known that deep neural networks are susceptible to a phenomenon known as catastrophic forgetting [18]. Catastrophic forgetting occurs when a trained neural network is not able to maintain its ability to accomplish previously learned tasks when it is adapted to perform new tasks.
Consider the example in Figure 1. A generative model is first trained on the task edges → shoes. Given a new task segmentations → facades, a new model is initialized from the previous one and fine-tuned for the new task. After training, the model forgets about the previous task and cannot generate shoe images given edge images as inputs. One way to address this would be to combine the training data for the current task with the training data for all previous tasks and then train the model using the joint data. Unfortunately, this approach is not scalable in general: as new tasks are added, the storage requirements and training time of the joint data grow without bound. In addition, the models for previous tasks may be trained using private or privileged data which is not accessible during the training of the current task. The challenge in lifelong learning is therefore to extend the model to accomplish the current task, without forgetting how to accomplish previous tasks in scenarios where we are restricted to the training data for only the current task. In this work, we work under the assumption that we only have access to a model trained on previous tasks without access to the previous data.
Recent efforts [24,3,7] have demonstrated how discriminative models could be incrementally learnt for a sequence of tasks. Despite the success of these efforts, lifelong learning in generative settings remains an open problem. Parameter regularization [23,13] has been adapted from discriminative models to generative models, but poor performance is observed [28]. The state-of-the-art continual learning generative frameworks [23,28] are built on memory replay which treats generated data from previous tasks as part of the training examples in the new tasks. Although memory replay has been shown to alleviate the catastrophic forgetting problem by taking advantage of the generative setting, its applicability is limited to label-conditioned generation tasks. In particular, replay based methods cannot be extended to image-conditioned generation. The reason lies in that no conditional image can be accessed to generate replay training pairs for previous tasks. Therefore, a more generic continual learning framework that can enable various conditional generation tasks is valuable.
In this paper, we introduce a generic continual learning framework Lifelong GAN that can be applied to both image-conditioned and label-conditioned image generation. We employ knowledge distillation [9] to address catastrophic forgetting for conditional generative continual learning tasks. Given a new task, Lifelong GAN learns to perform this task, and to keep the memory of previous tasks, information is extracted from a previously trained network and distilled to the new network during training by encouraging the two networks to produce similar output values or visual patterns. To the best of our knowledge, we are the first to utilize the principle of knowledge distillation for continual learning generative frameworks.
To summarize, our contributions are as follows. First, we propose a generic framework for continual learning of conditional image generation models. Second, we validate the effectiveness of our approach for two different types of conditional inputs: (1) image-conditioned generation, and (2) label-conditioned generation, and provide qualitative and quantitative results to illustrate the capability of our GAN framework to learn new generation tasks without the catastrophic forgetting of previous tasks. Third, we illustrate the generality of our framework by performing continual learning across diverse data domains.
Related Work
Conditional GANs. Image generation has achieved great success since the introduction of GANs [8]. There also has been rapid progress in the field of conditional image generation [19]. Conditional image generation tasks can be typically categorized as image-conditioned image generation and label-conditioned image generation.
Recent image-conditioned models have shown promising results for numerous image-to-image translation tasks such as maps → satellite images, sketches → photos, labels → images [10,35,34], future frame prediction [26], super-resolution [15], and inpainting [30]. Moreover, images can be stylized by disentangling the style and the content [11,16] or by encoding styles into a stylebank (set of convolution filters) [4]. Models [32,17] for rendering a person's appearance onto a given pose have been shown to be effective for person re-identification. Label-conditioned models [5,6] have also been explored for generating images for specific categories.
Knowledge Distillation. Proposed by Hinton et al. [9], knowledge distillation is designed for transferring knowledge from a teacher classifier to a student classifier. The teacher classifier normally would have more privileged information [25] compared with the student classifier. The privileged information includes two aspects. The first aspect is referred to as the learning power, namely the size of the neural networks. A student classifier could have a more compact network structure compared with the teacher classifier, and by distilling knowledge from the teacher classifier to student classifier, the student classifier would have similar or even better classification performance than the teacher network. Relevant applications include network compression [21] and network training acceleration [27]. The second aspect is the learning resources, namely the amount of input data. The teacher classifier could have more learning resources and see more data that the student cannot see. Compared with the first aspect, this aspect is relatively unexplored and is the focus of our work.
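As a concrete illustration of this teacher-student setup, a minimal TensorFlow sketch of Hinton-style distillation for classifiers is given below; the `teacher` and `student` models, the temperature `T`, and the mixing weight `alpha` are illustrative assumptions rather than details taken from the paper.

```python
import tensorflow as tf

def distillation_loss(teacher, student, images, labels, T=4.0, alpha=0.9):
    """Hinton-style knowledge distillation: match the student to the
    teacher's temperature-softened outputs, plus a standard
    cross-entropy term on the ground-truth labels."""
    t_logits = tf.stop_gradient(teacher(images, training=False))
    s_logits = student(images, training=True)

    # Soft-target term: cross-entropy against the softened teacher
    # distribution (equivalent to KL up to a constant), scaled by T^2
    # so its gradient magnitude stays comparable to the hard-label term.
    soft_targets = tf.nn.softmax(t_logits / T)
    log_probs = tf.nn.log_softmax(s_logits / T)
    kd_term = -tf.reduce_mean(tf.reduce_sum(soft_targets * log_probs, axis=-1)) * T * T

    # Hard-label term: usual cross-entropy on the true labels.
    ce_term = tf.reduce_mean(
        tf.keras.losses.sparse_categorical_crossentropy(
            labels, s_logits, from_logits=True))
    return alpha * kd_term + (1.0 - alpha) * ce_term
```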
Continual Learning. Many techniques have been recently proposed for solving continuous learning problems in computer vision [24,3] and robotics [7] in both discriminative and generative settings.
For discriminative settings, Shmelkov et al. [24] employ a distillation loss that measures the discrepancy between the output of the old and new network for distilling knowledge learnt by the old network. In addition, Castro et al. [3] propose to use a few exemplar images from previous tasks and perform knowledge distillation using new features from previous classification layers followed by a modified activation layer. For generative settings, continual learning has been primarily achieved using memory replay based methods. Replay was first proposed by Seff et al. [23], where the images for previous tasks are generated and combined together with the data for the new task to form a joint dataset, and a new model is trained on the joint dataset. A similar idea is also adopted by Wu et al. [28] for label-conditioned image generation. Approaches based on elastic weight consolidation [13] have also been explored for the task of label-conditioned image generation [28], but they have limited capability to remember previous categories and generate high quality images.
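For label-conditioned generation, the replay mechanism described above can be sketched roughly as follows; the two-input `old_generator`, the class count, and the dataset construction are illustrative assumptions rather than the exact procedure of the cited works.

```python
import numpy as np
import tensorflow as tf

def build_replay_dataset(old_generator, previous_labels, current_images,
                         current_labels, n_per_label=1000,
                         latent_dim=64, num_classes=10):
    """Hybrid training set for memory replay: real data of the current task
    plus images replayed from a generator trained on the previous tasks."""
    replay_images, replay_labels = [], []
    for y in previous_labels:
        z = tf.random.normal([n_per_label, latent_dim])
        y_onehot = tf.one_hot([y] * n_per_label, depth=num_classes)
        # Generated "memories" stand in for the inaccessible old training data.
        fake = old_generator([z, y_onehot], training=False)
        replay_images.append(fake.numpy())
        replay_labels.extend([y] * n_per_label)
    images = np.concatenate([current_images] + replay_images, axis=0)
    labels = np.concatenate([current_labels, np.array(replay_labels)], axis=0)
    return tf.data.Dataset.from_tensor_slices((images, labels)).shuffle(50_000)
```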
In this paper, we introduce knowledge distillation within continual generative model learning, which has not been explored before. Our approach can be applied to both imageconditioned generation, for which the replay mechanism is not applicable, and label-conditioned image generation.
Approach
Our proposed Lifelong GAN addresses catastrophic forgetting using knowledge distillation and, in contrast to replay based methods, can be applied to continually learn both label-conditioned and image-conditioned generation tasks. In this paper, we build our model on the state-of-the-art BicycleGAN [35] model. Our overall approach for continual learning for a generative model is illustrated in Figure 2. Given data from the current task, Lifelong GAN learns to perform this task, and to keep the memory of previous tasks, knowledge distillation is adopted to distill information from a previously trained network to the current network by encouraging the two networks to produce similar output values or patterns given the same input. To avoid "conflicts" that arise when having two desired outputs (current training goal and outputs from previous model) given the same input, we generate auxiliary data for distillation from the current data via two operations, Montage and Swap.
Lifelong GAN with Knowledge Distillation
To perform continual learning of conditional generation tasks, the proposed Lifelong GAN is built on top of Bicycle GAN with the adoption of knowledge distillation. We first introduce the problem formulation, followed by a detailed description of our model, then discuss our strategy to tackle the conflicting objectives in training.
Problem Formulation. During training of the $t$-th task, we are given a dataset of $N_t$ paired instances
$S_t = \{(A_{i,t}, B_{i,t}) \,|\, A_{i,t} \in \mathcal{A}_t,\ B_{i,t} \in \mathcal{B}_t\}_{i=1}^{N_t}$,
where $\mathcal{A}_t$ and $\mathcal{B}_t$ denote the domain of conditional images and ground-truth images respectively. For simplicity, we use the notations $A_t$, $B_t$ for an instance from the respective domain. The goal is to train a model $M_t$ which can generate images of the current task, $B_t \leftarrow (A_t, z)$, without forgetting how to generate images of previous tasks, $B_i \leftarrow (A_i, z)$, $i = 1, 2, \ldots, (t-1)$.
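The sequential setting can be summarized by the loop below; `clone_fn` and `train_one_task` are placeholders for model cloning and the per-task training procedure, so this is a schematic sketch rather than the authors' implementation.

```python
def lifelong_training(model_init, tasks, train_one_task, clone_fn):
    """Train a sequence of conditional generation tasks S_1, ..., S_T.
    At step t only the current task's data and the weights of the
    previously trained model M_{t-1} are available."""
    model = model_init
    model_prev = None
    for t, task_data in enumerate(tasks, start=1):
        if t > 1:
            # Freeze a copy of M_{t-1}; it acts as the distillation teacher.
            model_prev = clone_fn(model)
        # M_t is initialized from the running `model` (i.e. M_{t-1}) and
        # trained on S_t only, distilling against model_prev when it exists.
        model = train_one_task(model, model_prev, task_data)
    return model
```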
Figure 2: Overview of Lifelong GAN. Given training data for the $t$-th task, model $M_t$ is trained to learn this current task. To avoid forgetting previous tasks, knowledge distillation is adopted to distill information from model $M_{t-1}$ to model $M_t$ by encouraging the two networks to produce similar output values or patterns given the auxiliary data as inputs.
Let M t be the t th model trained, and M t−1 be the (t − 1) th model trained. Both M t−1 and M t contain two cycles (cVAE-GAN and cLR-GAN) as described in Section 3.1. Inspired by continual learning methods for discriminative models, we prevent the current model M t from forgetting the knowledge learned by the previous model M t−1 by inputting the data of the current task S t to both M t and M t−1 , and distilling the knowledge from M t−1 to M t by encouraging the outputs of M t−1 and M t to be similar. We describe the process of knowledge distillation for both cycles as follows.
cVAE-GAN. Recall from Section 3.1 that cVAE-GAN has two outputs: the encoded latent code z and the reconstructed ground truth image B. Given ground truth image B t , the encoders E t and E t−1 are encouraged to encode it in the same way and produce the same output; given encoded latent code z and conditional image A t , the generators G t and G t−1 are encouraged to reconstruct the ground truth images in the same way. Therefore, we define the loss for the cVAE-GAN cycle with knowledge distillation as:
$\mathcal{L}^{t}_{\text{cVAE-DL}} = \mathcal{L}^{t}_{\text{cVAE-GAN}} + \beta\, \mathbb{E}_{A_t, B_t \sim p(A_t, B_t)} \big[ \| E_t(B_t) - E_{t-1}(B_t) \|_1 + \| G_t(A_t, E_t(B_t)) - G_{t-1}(A_t, E_{t-1}(B_t)) \|_1 \big] \quad (4)$
where β is the loss weight for knowledge distillation.
cLR-GAN. Recall from Section 3.1 that cLR-GAN also has two outputs: the generated image B and the reconstructed latent code z. Given the latent code z and conditional image A t , the generators G t and G t−1 are encouraged to generate images in the same way; given the generated image B t , the encoders E t and E t−1 are encouraged to encode the generated images in the same way. Therefore, we define the loss for the cLR-GAN cycle as:
$\mathcal{L}^{t}_{\text{cLR-DL}} = \mathcal{L}^{t}_{\text{cLR-GAN}} + \beta\, \mathbb{E}_{A_t \sim p(A_t),\, z \sim p(z)} \big[ \| G_t(A_t, z) - G_{t-1}(A_t, z) \|_1 + \| E_t(G_t(A_t, z)) - E_{t-1}(G_{t-1}(A_t, z)) \|_1 \big] \quad (5)$
The distillation losses can be defined in several ways, e.g. the $L_2$ loss [2,24], KL divergence [9] or cross-entropy [9,3]. In our approach, we use $L_1$ instead of $L_2$ to avoid blurriness in the generated images.
Lifelong GAN is proposed to adopt knowledge distillation in both cycles, hence the overall loss function is:
$\mathcal{L}^{t}_{\text{Lifelong-GAN}} = \mathcal{L}^{t}_{\text{cVAE-DL}} + \mathcal{L}^{t}_{\text{cLR-DL}} \quad (6)$
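A minimal sketch of the distillation terms in Eqs. (4)-(6) is given below, assuming Keras-style encoder/generator callables for the current model (E_t, G_t) and the frozen previous model (E_prev, G_prev); the encoder is abbreviated as returning a single latent code, and in practice these terms are evaluated on the auxiliary data of Section 3.3 (Eqs. 7-8) rather than the raw current batch.

```python
import tensorflow as tf

def l1(a, b):
    return tf.reduce_mean(tf.abs(a - b))

def distillation_terms(E_t, G_t, E_prev, G_prev, A, B, z, beta=5.0):
    """Knowledge-distillation parts of Eqs. (4) and (5): encourage the new
    encoder/generator to reproduce the frozen previous model's outputs."""
    # cVAE-GAN cycle: encode the ground-truth image, then reconstruct it.
    z_enc_t = E_t(B)
    z_enc_prev = tf.stop_gradient(E_prev(B))
    recon_t = G_t([A, z_enc_t])
    recon_prev = tf.stop_gradient(G_prev([A, z_enc_prev]))
    cvae_distill = l1(z_enc_t, z_enc_prev) + l1(recon_t, recon_prev)

    # cLR-GAN cycle: generate from a sampled latent code, then re-encode.
    fake_t = G_t([A, z])
    fake_prev = tf.stop_gradient(G_prev([A, z]))
    clr_distill = l1(fake_t, fake_prev) + l1(E_t(fake_t),
                                             tf.stop_gradient(E_prev(fake_prev)))
    return beta * cvae_distill, beta * clr_distill
```

The adversarial and reconstruction terms of the underlying BicycleGAN objective would be added on top of these distillation terms to obtain the full losses.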
Conflict Removal with Auxiliary Data. Note that Equation 4 contains conflicting objectives. The first term encourages the model to reconstruct the inputs of the current task, while the third term encourages the model to generate the same images as the outputs of the old model. In addition, the first term encourages the model to encode the input images to normal distributions, while the second term encourages the model to encode the input images to a distribution learned from the old model. Similar conflicting objectives exist in Equation 5. To sum up, the conflicts appear when the model is required to produce two different outputs, namely mimicking the performance of the old model and accomplishing the new goal, given the same inputs.
To address these conflicting objectives, we propose to use auxiliary data for distilling knowledge from the old model M t−1 to model M t . The use of auxiliary data for distillation removes these conflicts. It is important that new auxiliary data should be used for each task, otherwise the network could potentially implicitly encode them when learning previous tasks. We describe approaches for doing so without requiring external data sources in Sec. 3.3.
The auxiliary data $S^{aux}_t = \{(A^{aux}_{i,t}, B^{aux}_{i,t}) \,|\, A^{aux}_{i,t} \in \mathcal{A}^{aux}_t,\ B^{aux}_{i,t} \in \mathcal{B}^{aux}_t\}_{i=1}^{N^{aux}_t}$ consist of $N^{aux}_t$ auxiliary paired instances. The losses $\mathcal{L}^{t}_{\text{cVAE-DL}}$ and $\mathcal{L}^{t}_{\text{cLR-DL}}$ are re-written as:
$\mathcal{L}^{t}_{\text{cVAE-DL}} = \mathcal{L}^{t}_{\text{cVAE-GAN}} + \beta\, \mathbb{E}_{A^{aux}_t, B^{aux}_t \sim p(A^{aux}_t, B^{aux}_t)} \big[ \| E_t(B^{aux}_t) - E_{t-1}(B^{aux}_t) \|_1 + \| G_t(A^{aux}_t, E_t(B^{aux}_t)) - G_{t-1}(A^{aux}_t, E_{t-1}(B^{aux}_t)) \|_1 \big] \quad (7)$
$\mathcal{L}^{t}_{\text{cLR-DL}} = \mathcal{L}^{t}_{\text{cLR-GAN}} + \beta\, \mathbb{E}_{A^{aux}_t \sim p(A^{aux}_t),\, z \sim p(z)} \big[ \| G_t(A^{aux}_t, z) - G_{t-1}(A^{aux}_t, z) \|_1 + \| E_t(G_t(A^{aux}_t, z)) - E_{t-1}(G_{t-1}(A^{aux}_t, z)) \|_1 \big] \quad (8)$
where β is the loss weight for knowledge distillation. Lifelong GAN can be used for continual learning of both image-conditioned and label-conditioned generation tasks. The auxiliary images for knowledge distillation for both settings can be generated using the Montage and Swap operations described in Section 3.3. For label-conditioned generation, we can simply use the categorical codes from previous tasks.
Auxiliary Data Generation
We now discuss the generation of auxiliary data. Recall from Section 3.2 that we use auxiliary data to address the conflicting objectives in Equations 4 and 5.
The auxiliary images do not require labels, and can in principle be sourced from online image repositories. However, this solution may not be scalable as it requires a new set of auxiliary images to be collected when learning each new task. A more desirable alternative may be to generate auxiliary data by using the current data in a way that avoids the over-fitting problem. We propose two operations for generating auxiliary data from the current task data:
1. Montage: Randomly sample small image patches from current input images and montage them together to produce auxiliary images for distillation.
2. Swap: Swap the conditional image A t and the ground truth image B t for distillation. Namely the encoder receives the conditional image A t and encodes it to a latent code z, and the generator is conditioned on the ground truth image B t .
Both operations are used in image-conditioned generation; in label-conditioned generation, since there is no conditional image, only the montage operation is applicable. Other alternatives may be possible. Essentially, the auxiliary data generation needs to provide out-of-task samples that can be used to preserve the knowledge learned by the old model. The knowledge is preserved using the distillation losses, which encourage the old and new models to produce similar responses on the out-of-task samples.
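The two operations could be implemented roughly as follows; the patch size, grid layout, and the assumption of 3-channel images are illustrative choices and not the exact settings used in the paper.

```python
import tensorflow as tf

def montage(images, patch=16, grid=8):
    """Tile randomly cropped patches from a batch of current-task images
    into a (grid*patch) x (grid*patch) auxiliary image."""
    rows = []
    for _ in range(grid):
        row = []
        for _ in range(grid):
            idx = tf.random.uniform([], minval=0,
                                    maxval=tf.shape(images)[0], dtype=tf.int32)
            row.append(tf.image.random_crop(images[idx], size=[patch, patch, 3]))
        rows.append(tf.concat(row, axis=1))  # patches side by side
    return tf.concat(rows, axis=0)           # rows stacked vertically

def swap(cond_images, gt_images):
    """Swap roles: the ground-truth image becomes the generator's condition
    and the conditional image is what the encoder sees."""
    return gt_images, cond_images
```

With patch=16 and grid=8 the montage happens to match the 128×128 resolution used in the larger experiments, but any size compatible with the networks would serve the same purpose of producing out-of-task inputs.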
Experiments
We evaluate Lifelong GAN for two settings: (1) image-conditioned image generation, and (2) label-conditioned image generation. We are the first to explore continual learning for image-conditioned image generation; no existing approaches are applicable for comparison. Additionally, we compare our model with the memory replay based approach, which is the state of the art for label-conditioned image generation.

Training Details. All the sequential digit generation models are trained on images of size 64×64 and all other models are trained on images of size 128×128. We use the TensorFlow [1] framework with the Adam optimizer [12] and a learning rate of 0.0001. We set the parameters $\lambda_{\text{latent}} = 0.5$, $\lambda_{\text{KL}} = 0.01$, and $\beta = 5.0$ for all experiments. The weights of the generator and encoder in cVAE-GAN and cLR-GAN are shared. Extra training iterations on the generator and encoder using only the distillation loss are used for models trained on images of size 128×128 to better remember previous tasks.

Baseline Models. We compare Lifelong GAN to the following baseline models: (a) Memory Replay (MR): images generated by a generator trained on previous tasks are combined with the training images for the current task to form a hybrid training set. (b) Sequential Fine-tuning (SFT): the model is fine-tuned in a sequential manner, with parameters initialized from the model trained/fine-tuned on the previous task. (c) Joint Learning (JL): the model is trained utilizing data from all tasks.
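The optimization settings listed under Training Details above could be wired up as in the sketch below; only the stated hyperparameters are taken from the text, and the optimizer objects are an assumed TensorFlow 2 setup.

```python
import tensorflow as tf

# Hyperparameters as stated in the training details above.
LAMBDA_LATENT = 0.5    # weight of the latent reconstruction loss
LAMBDA_KL     = 0.01   # weight of the KL term in cVAE-GAN
BETA          = 5.0    # weight of the knowledge-distillation terms
LEARNING_RATE = 1e-4   # Adam learning rate
IMAGE_SIZE    = 64     # 64 for the digit experiments, 128 otherwise

generator_opt     = tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE)
encoder_opt       = tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE)
discriminator_opt = tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE)
```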
Note that for image-conditioned image generation, we only compare with the joint learning and sequential fine-tuning methods, as memory replay based approaches are not applicable without any ground-truth conditional input.

Quantitative Metrics. We use different metrics to evaluate different aspects of the generation. In this work, we use Acc, r-Acc and LPIPS to validate the quality of the generated data. Acc is the accuracy of a classifier network trained on real images and evaluated on generated images (higher indicates better generation quality). r-Acc is the accuracy of a classifier network trained on generated images and evaluated on real images (higher indicates better generation quality). LPIPS [33] is used to quantitatively evaluate the diversity, as in BicycleGAN [35]. Higher LPIPS indicates higher diversity; furthermore, LPIPS closer to that of real images indicates more realistic generation.
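Acc and r-Acc can be computed as sketched below, assuming a `make_classifier` factory that returns a compiled Keras classifier with an accuracy metric; the classifier architecture and training schedule are not specified in this excerpt.

```python
def accuracy_metrics(make_classifier, real_x, real_y, fake_x, fake_y, epochs=10):
    """Acc: classifier trained on real images, evaluated on generated ones.
    r-Acc: classifier trained on generated images, evaluated on real ones.
    Assumes make_classifier() returns a compiled Keras model whose second
    evaluate() output is accuracy."""
    clf_real = make_classifier()
    clf_real.fit(real_x, real_y, epochs=epochs, verbose=0)
    acc = clf_real.evaluate(fake_x, fake_y, verbose=0)[1]

    clf_fake = make_classifier()
    clf_fake.fit(fake_x, fake_y, epochs=epochs, verbose=0)
    r_acc = clf_fake.evaluate(real_x, real_y, verbose=0)[1]
    return acc, r_acc
```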
Image-conditioned Image Generation
Digit Generation. We divide the digits in MNIST [14] into 3 groups: {0,1,2}, {3,4,5}, and {6,7,8,9}. The digits in each group are dyed with a signature color as shown in Figure 3. Given a dyed image, the task is to generate a foreground segmentation mask for the digit (i.e., generate a foreground segmentation given a dyed image as condition). The three groups give us three tasks for sequential learning. Generated images from the last task for all approaches are shown in Figure 3. We can see that sequential fine-tuning suffers from catastrophic forgetting (it is unable to segment digits 0-5 from the previous tasks), while our approach can learn to generate segmentation masks for the current task without forgetting the previous tasks.

Image-to-image Translation. We also apply Lifelong GAN to more challenging domains and datasets with large variation for higher resolution images. The first task is image-to-image translation of edges → shoe photos [31,29]. The second task is image-to-image translation of segmentations → facades [22]. The goal of this experiment is to learn the task of segmentations → facades without forgetting the task edges → shoe photos. We sample ~20,000 image pairs for the first task and use all images for the second task. Generated images for all approaches are shown in Figure 4. For both Lifelong GAN and sequential fine-tuning, the model of Task 2 is initialized from the same model trained on Task 1. We show the generation results of each task for Lifelong GAN. For sequential fine-tuning, we show the generation results of the last task. It is clear that the sequentially fine-tuned model completely forgets the previous task and can only generate incoherent facade-like patterns. In contrast, Lifelong GAN learns the current generative task while remembering the previous task. It is also observed that Lifelong GAN is capable of maintaining the diversity of generated images of the previous task.
Label-conditioned Image Generation
Digit Generation. We divide the MNIST [14] digits into 4 groups, {0,1,2}, {3,4}, {5,6,7} and {8,9}, resulting in four tasks for sequential learning. Each task is to generate binary MNIST digits given labels (one-hot encoded labels) as conditional inputs.
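Label conditioning here amounts to feeding a one-hot code alongside a sampled latent vector; a minimal sketch is shown below, where the concatenation point and the latent dimensionality are assumptions.

```python
import tensorflow as tf

def label_conditioned_input(labels, num_classes, latent_dim=64):
    """Generator input for label-conditioned generation: a sampled latent
    code concatenated with the one-hot label of the requested class."""
    z = tf.random.normal([tf.shape(labels)[0], latent_dim])
    y = tf.one_hot(labels, depth=num_classes)
    return tf.concat([z, tf.cast(y, z.dtype)], axis=-1)
```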
Visual results for all methods are shown in Figure 5, where we also include outputs of the generator after each task for our approach and memory replay. Sequential fine-tuning results in catastrophic forgetting, as shown by this baseline's inability to generate digits from any previous tasks; when given a previous label, it will either generate something similar to the current task or simply unrecognizable patterns. Meanwhile, both our approach and memory replay are visually similar to joint training results, indicating that both are able to address the forgetting issue in this task. Quantitatively, our method achieves comparable classification accuracy to memory replay, and outperforms memory replay in terms of reverse classification accuracy.

Figure 4: Comparison among different approaches for continual learning of image-to-image translation tasks. Given the same model trained for the task edges → shoes, we train Lifelong GAN and the sequential fine-tuning model on the task segmentations → facades. Sequential fine-tuning suffers from severe catastrophic forgetting. In contrast, Lifelong GAN can learn the current task while remembering the old task. We demonstrate some intermediate results during different tasks of continual learning for our distillation based approach and memory replay. Sequential fine-tuning suffers from severe forgetting issues while other methods give visually similar results compared to the joint learning results.

Flower Generation. We also demonstrate Lifelong GAN on a more challenging dataset, which contains higher resolution images from five categories of the Flower dataset [20]. The experiment consists of a sequence of five tasks in the order of sunflower, daisy, iris, daffodil, pansy. Each task involves learning a new category.
Generated images for all approaches are shown in Figure 6. We show the generation results of each task for both Lifelong GAN and memory replay to better analyze these two methods. For sequential fine-tuning, we show the generation results of the last task, which is enough to show that the model suffers from catastrophic forgetting. Figure 6 gives useful insights into the comparison between Lifelong GAN and memory replay. Both methods can learn to generate images for new tasks while remembering previous ones. However, memory replay is more sensitive to generation artifacts appearing in the intermediate tasks of sequential learning. While training Task 3 (category iris), both Lifelong GAN and memory replay show some artifacts in the generated images. For memory replay, the artifacts are reinforced during the training of later tasks and gradually spread over all categories. In contrast, Lifelong GAN is more robust to the artifacts and later tasks are much less sensitive to intermediate tasks. Lifelong GAN treats previous tasks and current tasks separately, trying to learn the distribution of new tasks while mimicking the distribution of the old tasks. Table 2 shows the quantitative results. Lifelong GAN outperforms memory replay by 10% in terms of classification accuracy and 25% in terms of reverse classification accuracy. We also observed visually and quantitatively that memory replay tends to lose diversity during the sequential learning, and generates images with little diversity for the final task.

Figure 6: Comparison among different approaches for continual learning of flower image generation tasks. Given the same model trained for the category sunflower, we train Lifelong GAN, memory replay and the sequential fine-tuning model on the remaining tasks. Sequential fine-tuning suffers from severe catastrophic forgetting, while both Lifelong GAN and memory replay can learn to perform the current task while remembering the old tasks. Lifelong GAN is more robust to artifacts in the generated images of the middle tasks, while memory replay is much more sensitive and all later tasks are severely impacted by these artifacts.
Conclusion
We study the problem of lifelong learning for generative networks and propose a distillation based continual learning framework that enables a single network to be extended to new tasks without forgetting previous tasks, using supervision only for the current task. Unlike previous methods that adopt memory replay to generate images from previous tasks as training data, we employ knowledge distillation to transfer learned knowledge from previous networks to the new network. Our generic framework enables a broader range of generation tasks, including image-to-image translation, which is not possible using memory replay based methods. We validate Lifelong GAN for both image-conditioned and label-conditioned generation tasks, and both qualitative and quantitative results illustrate the generality and effectiveness of our method. | 4,135
1907.10107 | 2962860923 | Lifelong learning is challenging for deep neural networks due to their susceptibility to catastrophic forgetting. Catastrophic forgetting occurs when a trained network is not able to maintain its ability to accomplish previously learned tasks when it is trained to perform new tasks. We study the problem of lifelong learning for generative models, extending a trained network to new conditional generation tasks without forgetting previous tasks, while assuming access to the training data for the current task only. In contrast to state-of-the-art memory replay based approaches which are limited to label-conditioned image generation tasks, a more generic framework for continual learning of generative models under different conditional image generation settings is proposed in this paper. Lifelong GAN employs knowledge distillation to transfer learned knowledge from previous networks to the new network. This makes it possible to perform image-conditioned generation tasks in a lifelong learning setting. We validate Lifelong GAN for both image-conditioned and label-conditioned generation tasks, and provide qualitative and quantitative results to show the generality and effectiveness of our method. | Proposed by @cite_8 , knowledge distillation is designed for transferring knowledge from a teacher classifier to a student classifier. The teacher classifier normally would have more privileged information @cite_6 compared with the student classifier. The privileged information includes two aspects. The first aspect is referred to as the learning power, namely the size of the neural networks. A student classifier could have a more compact network structure compared with the teacher classifier, and by distilling knowledge from the teacher classifier to student classifier, the student classifier would have similar or even better classification performance than the teacher network. Relevant applications include network compression @cite_32 and network training acceleration @cite_7 . The second aspect is the learning resources, namely the amount of input data. The teacher classifier could have more learning resources and see more data that the student cannot see. Compared with the first aspect, this aspect is relatively unexplored and is the focus of our work. | {
"abstract": [
"Knowledge distillation (KD) aims to train a lightweight classifier suitable to provide accurate inference with constrained resources in multi-label learning. Instead of directly consuming feature-label pairs, the classifier is trained by a teacher, i.e., a high-capacity model whose training may be resource-hungry. The accuracy of the classifier trained this way is usually suboptimal because it is difficult to learn the true data distribution from the teacher. An alternative method is to adversarially train the classifier against a discriminator in a two-player game akin to generative adversarial networks (GAN), which can ensure the classifier to learn the true data distribution at the equilibrium of this game. However, it may take excessively long time for such a two-player game to reach equilibrium due to high-variance gradient updates. To address these limitations, we propose a three-player game named KDGAN consisting of a classifier, a teacher, and a discriminator. The classifier and the teacher learn from each other via distillation losses and are adversarially trained against the discriminator via adversarial losses. By simultaneously optimizing the distillation and adversarial losses, the classifier will learn the true data distribution at the equilibrium. We approximate the discrete distribution learned by the classifier (or the teacher) with a concrete distribution. From the concrete distribution, we generate continuous samples to obtain low-variance gradient updates, which speed up the training. Extensive experiments using real datasets confirm the superiority of KDGAN in both accuracy and training speed.",
"Deep neural networks (DNNs) continue to make significant advances, solving tasks from image classification to translation or reinforcement learning. One aspect of the field receiving considerable attention is efficiently executing deep models in resource-constrained environments, such as mobile or embedded devices. This paper focuses on this problem, and proposes two new compression methods, which jointly leverage weight quantization and distillation of larger teacher networks into smaller student networks. The first method we propose is called quantized distillation and leverages distillation during the training process, by incorporating distillation loss, expressed with respect to the teacher, into the training of a student network whose weights are quantized to a limited set of levels. The second method, differentiable quantization, optimizes the location of quantization points through stochastic gradient descent, to better fit the behavior of the teacher model. We validate both methods through experiments on convolutional and recurrent architectures. We show that quantized shallow students can reach similar accuracy levels to full-precision teacher models, while providing order of magnitude compression, and inference speedup that is linear in the depth reduction. In sum, our results enable DNNs for resource-constrained environments to leverage architecture and accuracy advances developed on more powerful devices.",
"This paper describes a new paradigm of machine learning, in which Intelligent Teacher is involved. During training stage, Intelligent Teacher provides Student with information that contains, along with classification of each example, additional privileged information (for example, explanation) of this example. The paper describes two mechanisms that can be used for significantly accelerating the speed of Student's learning using privileged information: (1) correction of Student's concepts of similarity between examples, and (2) direct Teacher-Student knowledge transfer.",
"A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel."
],
"cite_N": [
"@cite_7",
"@cite_32",
"@cite_6",
"@cite_8"
],
"mid": [
"2903396356",
"2964203871",
"2173379916",
"1821462560"
]
} | Lifelong GAN: Continual Learning for Conditional Image Generation | Learning is a lifelong process for humans. We acquire knowledge throughout our lives so that we become more efficient and versatile facing new tasks. The accumulation of knowledge in turn accelerates our acquisition of new skills. In contrast to human learning, lifelong learning remains an open challenge for modern deep learning systems. It is well known that deep neural networks are susceptible to a phenomenon known as catastrophic forgetting [18]. Catastrophic forgetting occurs when a trained neural network is not able to maintain its ability to accomplish previously learned tasks when it is adapted to perform new tasks.
Consider the example in Figure 1. A generative model is first trained on the task edges → shoes. Given a new task segmentations → facades, a new model is initialized from the previous one and fine-tuned for the new task. After training, the model forgets about the previous task and cannot generate shoe images given edge images as inputs. One way to address this would be to combine the training data for the current task with the training data for all previous tasks and then train the model using the joint data. Unfortunately, this approach is not scalable in general: as new tasks are added, the storage requirements and training time of the joint data grow without bound. In addition, the models for previous tasks may be trained using private or privileged data which is not accessible during the training of the current task. The challenge in lifelong learning is therefore to extend the model to accomplish the current task, without forgetting how to accomplish previous tasks in scenarios where we are restricted to the training data for only the current task. In this work, we work under the assumption that we only have access to a model trained on previous tasks without access to the previous data.
Recent efforts [24,3,7] have demonstrated how discriminative models could be incrementally learnt for a sequence of tasks. Despite the success of these efforts, lifelong learning in generative settings remains an open problem. Parameter regularization [23,13] has been adapted from discriminative models to generative models, but poor performance is observed [28]. The state-of-the-art continual learning generative frameworks [23,28] are built on memory replay which treats generated data from previous tasks as part of the training examples in the new tasks. Although memory replay has been shown to alleviate the catastrophic forgetting problem by taking advantage of the generative setting, its applicability is limited to label-conditioned generation tasks. In particular, replay based methods cannot be extended to image-conditioned generation. The reason lies in that no conditional image can be accessed to generate replay training pairs for previous tasks. Therefore, a more generic continual learning framework that can enable various conditional generation tasks is valuable.
In this paper, we introduce a generic continual learning framework Lifelong GAN that can be applied to both image-conditioned and label-conditioned image generation. We employ knowledge distillation [9] to address catastrophic forgetting for conditional generative continual learning tasks. Given a new task, Lifelong GAN learns to perform this task, and to keep the memory of previous tasks, information is extracted from a previously trained network and distilled to the new network during training by encouraging the two networks to produce similar output values or visual patterns. To the best of our knowledge, we are the first to utilize the principle of knowledge distillation for continual learning generative frameworks.
To summarize, our contributions are as follows. First, we propose a generic framework for continual learning of conditional image generation models. Second, we validate the effectiveness of our approach for two different types of conditional inputs: (1) image-conditioned generation, and (2) label-conditioned generation, and provide qualitative and quantitative results to illustrate the capability of our GAN framework to learn new generation tasks without the catastrophic forgetting of previous tasks. Third, we illustrate the generality of our framework by performing continual learning across diverse data domains.
Related Work
Conditional GANs. Image generation has achieved great success since the introduction of GANs [8]. There also has been rapid progress in the field of conditional image generation [19]. Conditional image generation tasks can be typically categorized as image-conditioned image generation and label-conditioned image generation.
Recent image-conditioned models have shown promising results for numerous image-to-image translation tasks such as maps → satellite images, sketches → photos, labels → images [10,35,34], future frame prediction [26], super-resolution [15], and inpainting [30]. Moreover, images can be stylized by disentangling the style and the content [11,16] or by encoding styles into a stylebank (set of convolution filters) [4]. Models [32,17] for rendering a person's appearance onto a given pose have been shown to be effective for person re-identification. Label-conditioned models [5,6] have also been explored for generating images for specific categories.
Knowledge Distillation. Proposed by Hinton et al. [9], knowledge distillation is designed for transferring knowledge from a teacher classifier to a student classifier. The teacher classifier normally would have more privileged information [25] compared with the student classifier. The privileged information includes two aspects. The first aspect is referred to as the learning power, namely the size of the neural networks. A student classifier could have a more compact network structure compared with the teacher classifier, and by distilling knowledge from the teacher classifier to student classifier, the student classifier would have similar or even better classification performance than the teacher network. Relevant applications include network compression [21] and network training acceleration [27]. The second aspect is the learning resources, namely the amount of input data. The teacher classifier could have more learning resources and see more data that the student cannot see. Compared with the first aspect, this aspect is relatively unexplored and is the focus of our work.
Continual Learning. Many techniques have been recently proposed for solving continuous learning problems in computer vision [24,3] and robotics [7] in both discriminative and generative settings.
For discriminative settings, Shmelkov et al. [24] employ a distillation loss that measures the discrepancy between the output of the old and new network for distilling knowledge learnt by the old network. In addition, Castro et al. [3] propose to use a few exemplar images from previous tasks and perform knowledge distillation using new features from previous classification layers followed by a modified activation layer. For generative settings, continual learning has been primarily achieved using memory replay based methods. Replay was first proposed by Seff et al. [23], where the images for previous tasks are generated and combined together with the data for the new task to form a joint dataset, and a new model is trained on the joint dataset. A similar idea is also adopted by Wu et al. [28] for label-conditioned image generation. Approaches based on elastic weight consolidation [13] have also been explored for the task of label-conditioned image generation [28], but they have limited capability to remember previous categories and generate high quality images.
In this paper, we introduce knowledge distillation within continual generative model learning, which has not been explored before. Our approach can be applied to both imageconditioned generation, for which the replay mechanism is not applicable, and label-conditioned image generation.
Approach
Our proposed Lifelong GAN addresses catastrophic forgetting using knowledge distillation and, in contrast to replay based methods, can be applied to continually learn both label-conditioned and image-conditioned generation tasks. In this paper, we build our model on the state-of-the-art BicycleGAN [35] model. Our overall approach for continual learning for a generative model is illustrated in Figure 2. Given data from the current task, Lifelong GAN learns to perform this task, and to keep the memory of previous tasks, knowledge distillation is adopted to distill information from a previously trained network to the current network by encouraging the two networks to produce similar output values or patterns given the same input. To avoid "conflicts" that arise when having two desired outputs (current training goal and outputs from previous model) given the same input, we generate auxiliary data for distillation from the current data via two operations, Montage and Swap.
Lifelong GAN with Knowledge Distillation
To perform continual learning of conditional generation tasks, the proposed Lifelong GAN is built on top of Bicycle GAN with the adoption of knowledge distillation. We first introduce the problem formulation, followed by a detailed description of our model, then discuss our strategy to tackle the conflicting objectives in training.
Problem Formulation. During training of the $t$-th task, we are given a dataset of $N_t$ paired instances
$S_t = \{(A_{i,t}, B_{i,t}) \,|\, A_{i,t} \in \mathcal{A}_t,\ B_{i,t} \in \mathcal{B}_t\}_{i=1}^{N_t}$,
where $\mathcal{A}_t$ and $\mathcal{B}_t$ denote the domain of conditional images and ground-truth images respectively. For simplicity, we use the notations $A_t$, $B_t$ for an instance from the respective domain. The goal is to train a model $M_t$ which can generate images of the current task, $B_t \leftarrow (A_t, z)$, without forgetting how to generate images of previous tasks, $B_i \leftarrow (A_i, z)$, $i = 1, 2, \ldots, (t-1)$.
Figure 2: Overview of Lifelong GAN. Given training data for the $t$-th task, model $M_t$ is trained to learn this current task. To avoid forgetting previous tasks, knowledge distillation is adopted to distill information from model $M_{t-1}$ to model $M_t$ by encouraging the two networks to produce similar output values or patterns given the auxiliary data as inputs.
Let M t be the t th model trained, and M t−1 be the (t − 1) th model trained. Both M t−1 and M t contain two cycles (cVAE-GAN and cLR-GAN) as described in Section 3.1. Inspired by continual learning methods for discriminative models, we prevent the current model M t from forgetting the knowledge learned by the previous model M t−1 by inputting the data of the current task S t to both M t and M t−1 , and distilling the knowledge from M t−1 to M t by encouraging the outputs of M t−1 and M t to be similar. We describe the process of knowledge distillation for both cycles as follows.
cVAE-GAN. Recall from Section 3.1 that cVAE-GAN has two outputs: the encoded latent code z and the reconstructed ground truth image B. Given ground truth image B t , the encoders E t and E t−1 are encouraged to encode it in the same way and produce the same output; given encoded latent code z and conditional image A t , the generators G t and G t−1 are encouraged to reconstruct the ground truth images in the same way. Therefore, we define the loss for the cVAE-GAN cycle with knowledge distillation as:
$\mathcal{L}^{t}_{\text{cVAE-DL}} = \mathcal{L}^{t}_{\text{cVAE-GAN}} + \beta\, \mathbb{E}_{A_t, B_t \sim p(A_t, B_t)} \big[ \| E_t(B_t) - E_{t-1}(B_t) \|_1 + \| G_t(A_t, E_t(B_t)) - G_{t-1}(A_t, E_{t-1}(B_t)) \|_1 \big] \quad (4)$
where β is the loss weight for knowledge distillation.
cLR-GAN. Recall from Section 3.1 that cLR-GAN also has two outputs: the generated image B and the reconstructed latent code z. Given the latent code z and conditional image A t , the generators G t and G t−1 are encouraged to generate images in the same way; given the generated image B t , the encoders E t and E t−1 are encouraged to encode the generated images in the same way. Therefore, we define the loss for the cLR-GAN cycle as:
$\mathcal{L}^{t}_{\text{cLR-DL}} = \mathcal{L}^{t}_{\text{cLR-GAN}} + \beta\, \mathbb{E}_{A_t \sim p(A_t),\, z \sim p(z)} \big[ \| G_t(A_t, z) - G_{t-1}(A_t, z) \|_1 + \| E_t(G_t(A_t, z)) - E_{t-1}(G_{t-1}(A_t, z)) \|_1 \big] \quad (5)$
The distillation losses can be defined in several ways, e.g. the $L_2$ loss [2,24], KL divergence [9] or cross-entropy [9,3]. In our approach, we use $L_1$ instead of $L_2$ to avoid blurriness in the generated images.
Lifelong GAN is proposed to adopt knowledge distillation in both cycles, hence the overall loss function is:
$\mathcal{L}^{t}_{\text{Lifelong-GAN}} = \mathcal{L}^{t}_{\text{cVAE-DL}} + \mathcal{L}^{t}_{\text{cLR-DL}} \quad (6)$
Conflict Removal with Auxiliary Data. Note that Equation 4 contains conflicting objectives. The first term encourages the model to reconstruct the inputs of the current task, while the third term encourages the model to generate the same images as the outputs of the old model. In addition, the first term encourages the model to encode the input images to normal distributions, while the second term encourages the model to encode the input images to a distribution learned from the old model. Similar conflicting objectives exist in Equation 5. To sum up, the conflicts appear when the model is required to produce two different outputs, namely mimicking the performance of the old model and accomplishing the new goal, given the same inputs.
To address these conflicting objectives, we propose to use auxiliary data for distilling knowledge from the old model M t−1 to model M t . The use of auxiliary data for distillation removes these conflicts. It is important that new auxiliary data should be used for each task, otherwise the network could potentially implicitly encode them when learning previous tasks. We describe approaches for doing so without requiring external data sources in Sec. 3.3.
The auxiliary data $S^{aux}_t = \{(A^{aux}_{i,t}, B^{aux}_{i,t}) \,|\, A^{aux}_{i,t} \in \mathcal{A}^{aux}_t,\ B^{aux}_{i,t} \in \mathcal{B}^{aux}_t\}_{i=1}^{N^{aux}_t}$ consist of $N^{aux}_t$ auxiliary paired instances. The losses $\mathcal{L}^{t}_{\text{cVAE-DL}}$ and $\mathcal{L}^{t}_{\text{cLR-DL}}$ are re-written as:
$\mathcal{L}^{t}_{\text{cVAE-DL}} = \mathcal{L}^{t}_{\text{cVAE-GAN}} + \beta\, \mathbb{E}_{A^{aux}_t, B^{aux}_t \sim p(A^{aux}_t, B^{aux}_t)} \big[ \| E_t(B^{aux}_t) - E_{t-1}(B^{aux}_t) \|_1 + \| G_t(A^{aux}_t, E_t(B^{aux}_t)) - G_{t-1}(A^{aux}_t, E_{t-1}(B^{aux}_t)) \|_1 \big] \quad (7)$
$\mathcal{L}^{t}_{\text{cLR-DL}} = \mathcal{L}^{t}_{\text{cLR-GAN}} + \beta\, \mathbb{E}_{A^{aux}_t \sim p(A^{aux}_t),\, z \sim p(z)} \big[ \| G_t(A^{aux}_t, z) - G_{t-1}(A^{aux}_t, z) \|_1 + \| E_t(G_t(A^{aux}_t, z)) - E_{t-1}(G_{t-1}(A^{aux}_t, z)) \|_1 \big] \quad (8)$
where β is the loss weight for knowledge distillation. Lifelong GAN can be used for continual learning of both image-conditioned and label-conditioned generation tasks. The auxiliary images for knowledge distillation for both settings can be generated using the Montage and Swap operations described in Section 3.3. For label-conditioned generation, we can simply use the categorical codes from previous tasks.
Auxiliary Data Generation
We now discuss the generation of auxiliary data. Recall from Section 3.2 that we use auxiliary data to address the conflicting objectives in Equations 4 and 5.
The auxiliary images do not require labels, and can in principle be sourced from online image repositories. However, this solution may not be scalable as it requires a new set of auxiliary images to be collected when learning each new task. A more desirable alternative may be to generate auxiliary data by using the current data in a way that avoids the over-fitting problem. We propose two operations for generating auxiliary data from the current task data:
1. Montage: Randomly sample small image patches from current input images and montage them together to produce auxiliary images for distillation.
2. Swap: Swap the conditional image A t and the ground truth image B t for distillation. Namely the encoder receives the conditional image A t and encodes it to a latent code z, and the generator is conditioned on the ground truth image B t .
Both operations are used in image-conditioned generation; in label-conditioned generation, since there is no conditional image, only the montage operation is applicable. Other alternatives may be possible. Essentially, the auxiliary data generation needs to provide out-of-task samples that can be used to preserve the knowledge learned by the old model. The knowledge is preserved using the distillation losses, which encourage the old and new models to produce similar responses on the out-of-task samples.
Experiments
We evaluate Lifelong GAN for two settings: (1) image-conditioned image generation, and (2) label-conditioned image generation. We are the first to explore continual learning for image-conditioned image generation; no existing approaches are applicable for comparison. Additionally, we compare our model with the memory replay based approach, which is the state of the art for label-conditioned image generation.

Training Details. All the sequential digit generation models are trained on images of size 64×64 and all other models are trained on images of size 128×128. We use the TensorFlow [1] framework with the Adam optimizer [12] and a learning rate of 0.0001. We set the parameters $\lambda_{\text{latent}} = 0.5$, $\lambda_{\text{KL}} = 0.01$, and $\beta = 5.0$ for all experiments. The weights of the generator and encoder in cVAE-GAN and cLR-GAN are shared. Extra training iterations on the generator and encoder using only the distillation loss are used for models trained on images of size 128×128 to better remember previous tasks.

Baseline Models. We compare Lifelong GAN to the following baseline models: (a) Memory Replay (MR): images generated by a generator trained on previous tasks are combined with the training images for the current task to form a hybrid training set. (b) Sequential Fine-tuning (SFT): the model is fine-tuned in a sequential manner, with parameters initialized from the model trained/fine-tuned on the previous task. (c) Joint Learning (JL): the model is trained utilizing data from all tasks.
Note that for image-conditioned image generation, we only compare with the joint learning and sequential fine-tuning methods, as memory replay based approaches are not applicable without any ground-truth conditional input.

Quantitative Metrics. We use different metrics to evaluate different aspects of the generation. In this work, we use Acc, r-Acc and LPIPS to validate the quality of the generated data. Acc is the accuracy of a classifier network trained on real images and evaluated on generated images (higher indicates better generation quality). r-Acc is the accuracy of a classifier network trained on generated images and evaluated on real images (higher indicates better generation quality). LPIPS [33] is used to quantitatively evaluate the diversity, as in BicycleGAN [35]. Higher LPIPS indicates higher diversity; furthermore, LPIPS closer to that of real images indicates more realistic generation.
Image-conditioned Image Generation
Digit Generation. We divide the digits in MNIST [14] into 3 groups: {0,1,2}, {3,4,5}, and {6,7,8,9}. The digits in each group are dyed with a signature color as shown in Figure 3. Given a dyed image, the task is to generate a foreground segmentation mask for the digit (i.e., generate a foreground segmentation given a dyed image as condition). The three groups give us three tasks for sequential learning. Generated images from the last task for all approaches are shown in Figure 3. We can see that sequential fine-tuning suffers from catastrophic forgetting (it is unable to segment digits 0-5 from the previous tasks), while our approach can learn to generate segmentation masks for the current task without forgetting the previous tasks.

Image-to-image Translation. We also apply Lifelong GAN to more challenging domains and datasets with large variation for higher resolution images. The first task is image-to-image translation of edges → shoe photos [31,29]. The second task is image-to-image translation of segmentations → facades [22]. The goal of this experiment is to learn the task of segmentations → facades without forgetting the task edges → shoe photos. We sample ~20,000 image pairs for the first task and use all images for the second task. Generated images for all approaches are shown in Figure 4. For both Lifelong GAN and sequential fine-tuning, the model of Task 2 is initialized from the same model trained on Task 1. We show the generation results of each task for Lifelong GAN. For sequential fine-tuning, we show the generation results of the last task. It is clear that the sequentially fine-tuned model completely forgets the previous task and can only generate incoherent facade-like patterns. In contrast, Lifelong GAN learns the current generative task while remembering the previous task. It is also observed that Lifelong GAN is capable of maintaining the diversity of generated images of the previous task.
Label-conditioned Image Generation
Digit Generation. We divide the MNIST [14] digits into 4 groups, {0,1,2}, {3,4}, {5,6,7} and {8,9}, resulting in four tasks for sequential learning. Each task is to generate binary MNIST digits given labels (one-hot encoded labels) as conditional inputs.
Visual results for all methods are shown in Figure 5, where we also include outputs of the generator after each task for our approach and memory replay. Sequential fine-tuning results in catastrophic forgetting, as shown by this baseline's inability to generate digits from any previous tasks; when given a previous label, it will either generate something similar to the current task or simply unrecognizable patterns. Meanwhile, both our approach and memory replay are visually similar to joint training results, indicating that both are able to address the forgetting issue in this task. Quantitatively, our method achieves comparable classification accuracy to memory replay, and outperforms memory replay in terms of reverse classification accuracy.

Figure 4: Comparison among different approaches for continual learning of image-to-image translation tasks. Given the same model trained for the task edges → shoes, we train Lifelong GAN and the sequential fine-tuning model on the task segmentations → facades. Sequential fine-tuning suffers from severe catastrophic forgetting. In contrast, Lifelong GAN can learn the current task while remembering the old task. We demonstrate some intermediate results during different tasks of continual learning for our distillation based approach and memory replay. Sequential fine-tuning suffers from severe forgetting issues while other methods give visually similar results compared to the joint learning results.

Flower Generation. We also demonstrate Lifelong GAN on a more challenging dataset, which contains higher resolution images from five categories of the Flower dataset [20]. The experiment consists of a sequence of five tasks in the order of sunflower, daisy, iris, daffodil, pansy. Each task involves learning a new category.
Generated images for all approaches are shown in Figure 6. We show the generation results of each task for both Lifelong GAN and memory replay to better analyze these two methods. For sequential fine-tuning, we show the generation results of the last task, which is enough to show that the model suffers from catastrophic forgetting. Figure 6 gives useful insights into the comparison between Lifelong GAN and memory replay. Both methods can learn to generate images for new tasks while remembering previous ones. However, memory replay is more sensitive to generation artifacts appearing in the intermediate tasks of sequential learning. While training Task 3 (category iris), both Lifelong GAN and memory replay show some artifacts in the generated images. For memory replay, the artifacts are reinforced during the training of later tasks and gradually spread over all categories. In contrast, Lifelong GAN is more robust to the artifacts and later tasks are much less sensitive to intermediate tasks. Lifelong GAN treats previous tasks and current tasks separately, trying to learn the distribution of new tasks while mimicking the distribution of the old tasks. Table 2 shows the quantitative results. Lifelong GAN outperforms memory replay by 10% in terms of classification accuracy and 25% in terms of reverse classification accuracy. We also observed visually and quantitatively that memory replay tends to lose diversity during the sequential learning, and generates images with little diversity for the final task.

Figure 6: Comparison among different approaches for continual learning of flower image generation tasks. Given the same model trained for the category sunflower, we train Lifelong GAN, memory replay and the sequential fine-tuning model on the remaining tasks. Sequential fine-tuning suffers from severe catastrophic forgetting, while both Lifelong GAN and memory replay can learn to perform the current task while remembering the old tasks. Lifelong GAN is more robust to artifacts in the generated images of the middle tasks, while memory replay is much more sensitive and all later tasks are severely impacted by these artifacts.
Conclusion
We study the problem of lifelong learning for generative networks and propose a distillation based continual learning framework that enables a single network to be extended to new tasks without forgetting previous tasks, using supervision only for the current task. Unlike previous methods that adopt memory replay to generate images from previous tasks as training data, we employ knowledge distillation to transfer learned knowledge from previous networks to the new network. Our generic framework enables a broader range of generation tasks, including image-to-image translation, which is not possible using memory replay based methods. We validate Lifelong GAN for both image-conditioned and label-conditioned generation tasks, and both qualitative and quantitative results illustrate the generality and effectiveness of our method. | 4,135
1907.10107 | 2962860923 | Lifelong learning is challenging for deep neural networks due to their susceptibility to catastrophic forgetting. Catastrophic forgetting occurs when a trained network is not able to maintain its ability to accomplish previously learned tasks when it is trained to perform new tasks. We study the problem of lifelong learning for generative models, extending a trained network to new conditional generation tasks without forgetting previous tasks, while assuming access to the training data for the current task only. In contrast to state-of-the-art memory replay based approaches which are limited to label-conditioned image generation tasks, a more generic framework for continual learning of generative models under different conditional image generation settings is proposed in this paper. Lifelong GAN employs knowledge distillation to transfer learned knowledge from previous networks to the new network. This makes it possible to perform image-conditioned generation tasks in a lifelong learning setting. We validate Lifelong GAN for both image-conditioned and label-conditioned generation tasks, and provide qualitative and quantitative results to show the generality and effectiveness of our method. | Many techniques have been recently proposed for solving continuous learning problems in computer vision @cite_5 @cite_35 and robotics @cite_10 in both discriminative and generative settings. | {
"abstract": [
"Despite their success for object detection, convolutional neural networks are ill-equipped for incremental learning, i.e., adapting the original model trained on a set of classes to additionally detect objects of new classes, in the absence of the initial training data. They suffer from “catastrophic forgetting”–an abrupt degradation of performance on the original set of classes, when the training objective is adapted to the new classes. We present a method to address this issue, and learn object detectors incrementally, when neither the original training data nor annotations for the original classes in the new training set are available. The core of our proposed solution is a loss function to balance the interplay between predictions on the new classes and a new distillation loss which minimizes the discrepancy between responses for old classes from the original and the updated networks. This incremental learning can be performed multiple times, for a new set of classes in each step, with a moderate drop in performance compared to the baseline network trained on the ensemble of data. We present object detection results on the PASCAL VOC 2007 and COCO datasets, along with a detailed empirical analysis of the approach.",
"This paper is about long-term navigation in environments whose appearance changes over time - suddenly or gradually. We describe, implement and validate an approach which allows us to incrementally learn a model whose complexity varies naturally in accordance with variation of scene appearance. It allows us to leverage the state of the art in pose estimation to build over many runs, a world model of sufficient richness to allow simple localisation despite a large variation in conditions. As our robot repeatedly traverses its workspace, it accumulates distinct visual experiences that in concert, implicitly represent the scene variation - each experience captures a visual mode. When operating in a previously visited area, we continually try to localise in these previous experiences while simultaneously running an independent vision based pose estimation system. Failure to localise in a sufficient number of prior experiences indicates an insufficient model of the workspace and instigates the laying down of the live image sequence as a new distinct experience. In this way, over time we can capture the typical time varying appearance of an environment and the number of experiences required tends to a constant. Although we focus on vision as a primary sensor throughout, the ideas we present here are equally applicable to other sensor modalities. We demonstrate our approach working on a road vehicle operating over a three month period at different times of day, in different weather and lighting conditions. In all, we process over 136,000 frames captured from 37km of driving.",
"Although deep learning approaches have stood out in recent years due to their state-of-the-art results, they continue to suffer from catastrophic forgetting, a dramatic decrease in overall performance when training with new classes added incrementally. This is due to current neural network architectures requiring the entire dataset, consisting of all the samples from the old as well as the new classes, to update the model—a requirement that becomes easily unsustainable as the number of classes grows. We address this issue with our approach to learn deep neural networks incrementally, using new data and only a small exemplar set corresponding to samples from the old classes. This is based on a loss composed of a distillation measure to retain the knowledge acquired from the old classes, and a cross-entropy loss to learn the new classes. Our incremental training is achieved while keeping the entire framework end-to-end, i.e., learning the data representation and the classifier jointly, unlike recent methods with no such guarantees. We evaluate our method extensively on the CIFAR-100 and ImageNet (ILSVRC 2012) image classification datasets, and show state-of-the-art performance."
],
"cite_N": [
"@cite_5",
"@cite_10",
"@cite_35"
],
"mid": [
"2962966271",
"2041232406",
"2884282566"
]
} | Lifelong GAN: Continual Learning for Conditional Image Generation | Learning is a lifelong process for humans. We acquire knowledge throughout our lives so that we become more efficient and versatile facing new tasks. The accumulation of knowledge in turn accelerates our acquisition of new skills. In contrast to human learning, lifelong learning remains an open challenge for modern deep learning systems. It is well known that deep neural networks are susceptible to a phenomenon known as catastrophic forgetting [18]. Catastrophic forgetting occurs when a trained neural network is not able to maintain its ability to accomplish previously learned tasks when it is adapted to perform new tasks.
Consider the example in Figure 1. A generative model is first trained on the task edges → shoes. Given a new task segmentations → facades, a new model is initialized from the previous one and fine-tuned for the new task. After training, the model forgets about the previous task and cannot generate shoe images given edge images as inputs. One way to address this would be to combine the training data for the current task with the training data for all previous tasks and then train the model using the joint data. Unfortunately, this approach is not scalable in general: as new tasks are added, the storage requirements and training time of the joint data grow without bound. In addition, the models for previous tasks may be trained using private or privileged data which is not accessible during the training of the current task. The challenge in lifelong learning is therefore to extend the model to accomplish the current task, without forgetting how to accomplish previous tasks in scenarios where we are restricted to the training data for only the current task. In this work, we work under the assumption that we only have access to a model trained on previous tasks without access to the previous data.
Recent efforts [24,3,7] have demonstrated how discriminative models could be incrementally learnt for a sequence of tasks. Despite the success of these efforts, lifelong learning in generative settings remains an open problem. Parameter regularization [23,13] has been adapted from discriminative models to generative models, but poor performance is observed [28]. The state-of-the-art continual learning generative frameworks [23,28] are built on memory replay, which treats generated data from previous tasks as part of the training examples in the new tasks. Although memory replay has been shown to alleviate the catastrophic forgetting problem by taking advantage of the generative setting, its applicability is limited to label-conditioned generation tasks. In particular, replay based methods cannot be extended to image-conditioned generation, because no conditional images are available from which to generate replay training pairs for previous tasks. Therefore, a more generic continual learning framework that can enable various conditional generation tasks is valuable.
In this paper, we introduce a generic continual learning framework Lifelong GAN that can be applied to both image-conditioned and label-conditioned image generation. We employ knowledge distillation [9] to address catastrophic forgetting for conditional generative continual learning tasks. Given a new task, Lifelong GAN learns to perform this task, and to keep the memory of previous tasks, information is extracted from a previously trained network and distilled to the new network during training by encouraging the two networks to produce similar output values or visual patterns. To the best of our knowledge, we are the first to utilize the principle of knowledge distillation for continual learning generative frameworks.
To summarize, our contributions are as follows. First, we propose a generic framework for continual learning of conditional image generation models. Second, we validate the effectiveness of our approach for two different types of conditional inputs: (1) image-conditioned generation, and (2) label-conditioned generation, and provide qualitative and quantitative results to illustrate the capability of our GAN framework to learn new generation tasks without the catastrophic forgetting of previous tasks. Third, we illustrate the generality of our framework by performing continual learning across diverse data domains.
Related Work
Conditional GANs. Image generation has achieved great success since the introduction of GANs [8]. There also has been rapid progress in the field of conditional image generation [19]. Conditional image generation tasks can be typically categorized as image-conditioned image generation and label-conditioned image generation.
Recent image-conditioned models have shown promising results for numerous image-to-image translation tasks such as maps → satellite images, sketches → photos, labels → images [10,35,34], future frame prediction [26], super-resolution [15], and inpainting [30]. Moreover, images can be stylized by disentangling the style and the content [11,16] or by encoding styles into a stylebank (set of convolution filters) [4]. Models [32,17] for rendering a person's appearance onto a given pose have been shown to be effective for person re-identification. Label-conditioned models [5,6] have also been explored for generating images for specific categories.
Knowledge Distillation. Proposed by Hinton et al. [9], knowledge distillation is designed for transferring knowledge from a teacher classifier to a student classifier. The teacher classifier normally has more privileged information [25] than the student classifier. This privileged information covers two aspects. The first aspect is the learning power, namely the size of the neural networks: a student classifier can have a more compact network structure than the teacher classifier, and by distilling knowledge from the teacher to the student, the student classifier can reach classification performance similar to, or even better than, that of the teacher network. Relevant applications include network compression [21] and network training acceleration [27]. The second aspect is the learning resources, namely the amount of input data: the teacher classifier may have more learning resources and see data that the student cannot. Compared with the first aspect, this aspect is relatively unexplored and is the focus of our work.
Continual Learning. Many techniques have been recently proposed for solving continuous learning problems in computer vision [24,3] and robotics [7] in both discriminative and generative settings.
For discriminative settings, Shmelkov et al. [24] employ a distillation loss that measures the discrepancy between the output of the old and new network for distilling knowledge learnt by the old network. In addition, Castro et al. [3] propose to use a few exemplar images from previous tasks and perform knowledge distillation using new features from previous classification layers followed by a modified activation layer. For generative settings, continual learning has been primarily achieved using memory replay based methods. Replay was first proposed by Seff et al. [23], where the images for previous tasks are generated and combined together with the data for the new task to form a joint dataset, and a new model is trained on the joint dataset. A similar idea is also adopted by Wu et al. [28] for label-conditioned image generation. Approaches based on elastic weight consolidation [13] have also been explored for the task of label-conditioned image generation [28], but they have limited capability to remember previous categories and generate high quality images.
In this paper, we introduce knowledge distillation within continual generative model learning, which has not been explored before. Our approach can be applied to both image-conditioned generation, for which the replay mechanism is not applicable, and label-conditioned image generation.
Approach
Our proposed Lifelong GAN addresses catastrophic forgetting using knowledge distillation and, in contrast to replay based methods, can be applied to continually learn both label-conditioned and image-conditioned generation tasks. In this paper, we build our model on the state-of-the-art BicycleGAN [35]. Our overall approach for continual learning for a generative model is illustrated in Figure 2. Given data from the current task, Lifelong GAN learns to perform this task, and to keep the memory of previous tasks, knowledge distillation is adopted to distill information from a previously trained network to the current network by encouraging the two networks to produce similar output values or patterns given the same input. To avoid "conflicts" that arise when having two desired outputs (current training goal and outputs from previous model) given the same input, we generate auxiliary data for distillation from the current data via two operations, Montage and Swap.
Lifelong GAN with Knowledge Distillation
To perform continual learning of conditional generation tasks, the proposed Lifelong GAN is built on top of BicycleGAN with the adoption of knowledge distillation. We first introduce the problem formulation, followed by a detailed description of our model, then discuss our strategy to tackle the conflicting objectives in training.
Problem Formulation. During training of the $t$-th task, we are given a dataset of $N_t$ paired instances
$S_t = \{(A_{i,t}, B_{i,t}) \mid A_{i,t} \in \mathcal{A}_t, B_{i,t} \in \mathcal{B}_t\}_{i=1}^{N_t}$
where $\mathcal{A}_t$ and $\mathcal{B}_t$ denote the domain of conditional images and ground truth images respectively. For simplicity, we use the notations $A_t$, $B_t$ for an instance from the respective domain. The goal is to train a model $M_t$ which can generate images of current task $B_t \leftarrow (A_t, z)$, without forgetting how to generate images of previous tasks $B_i \leftarrow (A_i, z)$, $i = 1, 2, \dots, (t-1)$.
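Concretely, each task's training set can be viewed as a list of (conditional image, ground truth image) pairs, and the lifelong setting exposes only one such set at a time. A schematic toy example follows (the shapes and random data are assumptions, not the paper's data):

```python
import numpy as np

def make_task_dataset(num_pairs=4, size=64):
    """A toy stand-in for S_t: paired (A, B) images for one task."""
    rng = np.random.default_rng(0)
    return [(rng.random((size, size, 3)), rng.random((size, size, 3)))
            for _ in range(num_pairs)]

# The lifelong setting: only task_sequence[t] is available while training task t.
task_sequence = [make_task_dataset() for _ in range(3)]
```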
Figure 2: Overview of Lifelong GAN. Given training data for the $t$-th task, model $M_t$ is trained to learn this current task. To avoid forgetting previous tasks, knowledge distillation is adopted to distill information from model $M_{t-1}$ to model $M_t$ by encouraging the two networks to produce similar output values or patterns given the auxiliary data as inputs.
Let $M_t$ be the $t$-th model trained, and $M_{t-1}$ be the $(t-1)$-th model trained. Both $M_{t-1}$ and $M_t$ contain two cycles (cVAE-GAN and cLR-GAN) as described in Section 3.1. Inspired by continual learning methods for discriminative models, we prevent the current model $M_t$ from forgetting the knowledge learned by the previous model $M_{t-1}$ by inputting the data of the current task $S_t$ to both $M_t$ and $M_{t-1}$, and distilling the knowledge from $M_{t-1}$ to $M_t$ by encouraging the outputs of $M_{t-1}$ and $M_t$ to be similar. We describe the process of knowledge distillation for both cycles as follows.
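To make the dual-network setup concrete, here is a minimal sketch (hypothetical PyTorch-style code, not the authors' TensorFlow implementation; the encoder and generator objects are assumed to be standard `torch.nn.Module`s) of freezing the previous model $M_{t-1}$ and penalizing the $L_1$ discrepancy between its outputs and those of the current model $M_t$ on the same inputs:

```python
import copy
import torch.nn.functional as F

def snapshot_previous_model(encoder, generator):
    """Freeze copies of the networks trained up to task t-1 to act as teachers."""
    enc_prev, gen_prev = copy.deepcopy(encoder), copy.deepcopy(generator)
    for net in (enc_prev, gen_prev):
        for p in net.parameters():
            p.requires_grad_(False)
        net.eval()
    return enc_prev, gen_prev

def l1_distillation(student_out, teacher_out):
    """L1 discrepancy between the student (M_t) and frozen teacher (M_{t-1}) outputs."""
    return F.l1_loss(student_out, teacher_out.detach())
```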
cVAE-GAN. Recall from Section 3.1 that cVAE-GAN has two outputs: the encoded latent code $z$ and the reconstructed ground truth image $B$. Given ground truth image $B_t$, the encoders $E_t$ and $E_{t-1}$ are encouraged to encode it in the same way and produce the same output; given encoded latent code $z$ and conditional image $A_t$, the generators $G_t$ and $G_{t-1}$ are encouraged to reconstruct the ground truth images in the same way. Therefore, we define the loss for the cVAE-GAN cycle with knowledge distillation as:
$$\mathcal{L}^t_{\text{cVAE-DL}} = \mathcal{L}^t_{\text{cVAE-GAN}} + \beta\,\mathbb{E}_{A_t,B_t \sim p(A_t,B_t)}\big[\|E_t(B_t) - E_{t-1}(B_t)\|_1 + \|G_t(A_t, E_t(B_t)) - G_{t-1}(A_t, E_{t-1}(B_t))\|_1\big], \qquad (4)$$
where $\beta$ is the loss weight for knowledge distillation.
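As an illustration, the distillation term inside Equation (4) could be computed as follows (a hedged sketch; `E_t`, `G_t`, `E_prev`, `G_prev` are assumed callables following the argument conventions used in the text, and the adversarial/reconstruction part $\mathcal{L}^t_{\text{cVAE-GAN}}$ is left abstract):

```python
import torch.nn.functional as F

def cvae_distillation_term(E_t, G_t, E_prev, G_prev, A_t, B_t):
    """Distillation part of Eq. (4): match encoder outputs and reconstructions
    of the current model against the frozen previous model on the same batch."""
    z_t, z_prev = E_t(B_t), E_prev(B_t)
    rec_t = G_t(A_t, z_t)
    rec_prev = G_prev(A_t, z_prev)
    return F.l1_loss(z_t, z_prev.detach()) + F.l1_loss(rec_t, rec_prev.detach())

# total cVAE cycle loss for task t (beta = 5.0 as stated later in the text):
# loss = cvae_gan_loss + 5.0 * cvae_distillation_term(E_t, G_t, E_prev, G_prev, A_t, B_t)
```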
cLR-GAN. Recall from Section 3.1 that cLR-GAN also has two outputs: the generated image $B$ and the reconstructed latent code $z$. Given the latent code $z$ and conditional image $A_t$, the generators $G_t$ and $G_{t-1}$ are encouraged to generate images in the same way; given the generated image $B_t$, the encoders $E_t$ and $E_{t-1}$ are encouraged to encode the generated images in the same way. Therefore, we define the loss for the cLR-GAN cycle as:
$$\mathcal{L}^t_{\text{cLR-DL}} = \mathcal{L}^t_{\text{cLR-GAN}} + \beta\,\mathbb{E}_{A_t \sim p(A_t),\, z \sim p(z)}\big[\|G_t(A_t, z) - G_{t-1}(A_t, z)\|_1 + \|E_t(G_t(A_t, z)) - E_{t-1}(G_{t-1}(A_t, z))\|_1\big]. \qquad (5)$$
The distillation losses can be defined in several ways, e.g. the $L_2$ loss [2,24], KL divergence [9] or cross-entropy [9,3]. In our approach, we use $L_1$ instead of $L_2$ to avoid blurriness in the generated images.
Lifelong GAN adopts knowledge distillation in both cycles; hence, the overall loss function is:
$$\mathcal{L}^t_{\text{Lifelong-GAN}} = \mathcal{L}^t_{\text{cVAE-DL}} + \mathcal{L}^t_{\text{cLR-DL}}. \qquad (6)$$
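A single optimization step over the combined objective of Equation (6) might then look as follows (an illustrative sketch under the assumption that `cvae_dl_loss` and `clr_dl_loss` compute Equations (4) and (5); the discriminator updates of the underlying BicycleGAN cycles are omitted, and the latent dimension of 8 is an assumption):

```python
import torch

def train_step(E_t, G_t, E_prev, G_prev, A_t, B_t, cvae_dl_loss, clr_dl_loss, optimizer):
    """One generator/encoder update on the overall Lifelong GAN objective (Eq. 6)."""
    # sampled latent code for the cLR cycle; dimension 8 is an assumption
    z = torch.randn(A_t.size(0), 8, device=A_t.device)
    loss = cvae_dl_loss(E_t, G_t, E_prev, G_prev, A_t, B_t) \
         + clr_dl_loss(E_t, G_t, E_prev, G_prev, A_t, z)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# optimizer = torch.optim.Adam(list(E_t.parameters()) + list(G_t.parameters()), lr=1e-4)
```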
Conflict Removal with Auxiliary Data. Note that Equation 4 contains conflicting objectives. The first term encourages the model to reconstruct the inputs of the current task, while the third term encourages the model to generate the same images as the outputs of the old model. In addition, the first term encourages the model to encode the input images to normal distributions, while the second term encourages the model to encode the input images to a distribution learned from the old model. Similar conflicting objectives exist in Equation 5. To sum up, the conflicts appear when the model is required to produce two different outputs, namely mimicking the performance of the old model and accomplishing the new goal, given the same inputs.
To address these conflicting objectives, we propose to use auxiliary data for distilling knowledge from the old model $M_{t-1}$ to model $M_t$. The use of auxiliary data for distillation removes these conflicts. It is important that new auxiliary data should be used for each task, otherwise the network could potentially implicitly encode them when learning previous tasks. We describe approaches for doing so without requiring external data sources in Sec. 3.3.
The auxiliary data $S^{aux}_t = \{(A^{aux}_{i,t}, B^{aux}_{i,t}) \mid A^{aux}_{i,t} \in \mathcal{A}^{aux}_t, B^{aux}_{i,t} \in \mathcal{B}^{aux}_t\}_{i=1}^{N^{aux}_t}$ consist of $N^{aux}_t$ paired instances. The losses $\mathcal{L}^t_{\text{cVAE-DL}}$ and $\mathcal{L}^t_{\text{cLR-DL}}$ are re-written as:
$$\mathcal{L}^t_{\text{cVAE-DL}} = \mathcal{L}^t_{\text{cVAE-GAN}} + \beta\,\mathbb{E}_{A^{aux}_t, B^{aux}_t \sim p(A^{aux}_t, B^{aux}_t)}\big[\|E_t(B^{aux}_t) - E_{t-1}(B^{aux}_t)\|_1 + \|G_t(A^{aux}_t, E_t(B^{aux}_t)) - G_{t-1}(A^{aux}_t, E_{t-1}(B^{aux}_t))\|_1\big], \qquad (7)$$
$$\mathcal{L}^t_{\text{cLR-DL}} = \mathcal{L}^t_{\text{cLR-GAN}} + \beta\,\mathbb{E}_{A^{aux}_t \sim p(A^{aux}_t),\, z \sim p(z)}\big[\|G_t(A^{aux}_t, z) - G_{t-1}(A^{aux}_t, z)\|_1 + \|E_t(G_t(A^{aux}_t, z)) - E_{t-1}(G_{t-1}(A^{aux}_t, z))\|_1\big], \qquad (8)$$
where $\beta$ is the loss weight for knowledge distillation. Lifelong GAN can be used for continual learning of both image-conditioned and label-conditioned generation tasks. The auxiliary images for knowledge distillation for both settings can be generated using the Montage and Swap operations described in Section 3.3. For label-conditioned generation, we can simply use the categorical codes from previous tasks.
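The only change from Equations (4)-(5) to Equations (7)-(8) is which batch the distillation terms see: the GAN/reconstruction terms still use the current-task batch, while the distillation terms use the auxiliary batch. A hypothetical sketch (all function names are placeholders, not the authors' code):

```python
def lifelong_losses(E_t, G_t, E_prev, G_prev, batch, aux_batch, z, beta=5.0,
                    cvae_gan_loss=None, clr_gan_loss=None,
                    cvae_distill=None, clr_distill=None):
    """Eqs. (7)-(8): task losses on the current batch, distillation on auxiliary data."""
    A_t, B_t = batch            # current-task pairs
    A_aux, B_aux = aux_batch    # auxiliary pairs (e.g., montage/swap of current data)
    task_loss = cvae_gan_loss(E_t, G_t, A_t, B_t) + clr_gan_loss(E_t, G_t, A_t, z)
    distill_loss = cvae_distill(E_t, G_t, E_prev, G_prev, A_aux, B_aux) \
                 + clr_distill(E_t, G_t, E_prev, G_prev, A_aux, z)
    return task_loss + beta * distill_loss
```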
Auxiliary Data Generation
We now discuss the generation of auxiliary data. Recall from Section 3.2 that we use auxiliary data to address the conflicting objectives in Equations 4 and 5.
The auxiliary images do not require labels, and can in principle be sourced from online image repositories. However, this solution may not be scalable as it requires a new set of auxiliary images to be collected when learning each new task. A more desirable alternative may be to generate auxiliary data by using the current data in a way that avoids the over-fitting problem. We propose two operations for generating auxiliary data from the current task data:
1. Montage: Randomly sample small image patches from current input images and montage them together to produce auxiliary images for distillation.
2. Swap: Swap the conditional image $A_t$ and the ground truth image $B_t$ for distillation. Namely, the encoder receives the conditional image $A_t$ and encodes it to a latent code $z$, and the generator is conditioned on the ground truth image $B_t$.
Both operations are used in image-conditioned generation; in label-conditioned generation, since there is no conditional image, only the montage operation is applicable. Other alternatives may be possible. Essentially, the auxiliary data generation needs to provide out-of-task samples that can be used to preserve the knowledge learned by the old model. The knowledge is preserved using the distillation losses, which encourage the old and new models to produce similar responses on the out-of-task samples.
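A minimal sketch of the two operations on NumPy image batches is given below (illustrative only; the patch size, grid layout, and the assumption that image sides are divisible by the grid are not the paper's exact settings):

```python
import numpy as np

def montage(images, grid=4):
    """Montage: tile randomly sampled patches from a batch of images (N, H, W, C)
    into new auxiliary images of the same spatial size (assumes H, W divisible by grid)."""
    n, h, w, c = images.shape
    ph, pw = h // grid, w // grid
    out = images.copy()
    for i in range(n):
        for gy in range(grid):
            for gx in range(grid):
                src = images[np.random.randint(n)]
                y = np.random.randint(h - ph + 1)
                x = np.random.randint(w - pw + 1)
                out[i, gy*ph:(gy+1)*ph, gx*pw:(gx+1)*pw] = src[y:y+ph, x:x+pw]
    return out

def swap(cond_images, gt_images):
    """Swap: exchange the roles of the pair, i.e. (new condition, new encoder input)
    = (ground truth images, conditional images)."""
    return gt_images, cond_images
```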
Experiments
We evaluate Lifelong GAN for two settings: (1) image-conditioned image generation, and (2) label-conditioned image generation. We are the first to explore continual learning for image-conditioned image generation; no existing approaches are applicable for comparison. Additionally, we compare our model with the memory replay based approach, which is the state of the art for label-conditioned image generation. Training Details. All the sequential digit generation models are trained on images of size 64 × 64 and all other models are trained on images of size 128 × 128. We use the TensorFlow [1] framework with the Adam optimizer [12] and a learning rate of 0.0001. We set the parameters $\lambda_{latent} = 0.5$, $\lambda_{KL} = 0.01$, and $\beta = 5.0$ for all experiments. The weights of the generator and encoder in cVAE-GAN and cLR-GAN are shared. Extra training iterations on the generator and encoder using only the distillation loss are used for models trained on images of size 128 × 128 to better remember previous tasks. Baseline Models. We compare Lifelong GAN to the following baseline models: (a) Memory Replay (MR): Images generated by a generator trained on previous tasks are combined with the training images for the current task to form a hybrid training set. (b) Sequential Fine-tuning (SFT): The model is fine-tuned in a sequential manner, with parameters initialized from the model trained/fine-tuned on the previous task. (c) Joint Learning (JL): The model is trained utilizing data from all tasks.
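The stated hyperparameters can be collected into a small configuration for reference (only the values explicitly listed above come from the text; batch size and latent dimension are assumptions):

```python
# Values stated in the text; batch_size and latent_dim are assumptions.
CONFIG = {
    "image_size_digits": 64,
    "image_size_other": 128,
    "learning_rate": 1e-4,
    "lambda_latent": 0.5,
    "lambda_kl": 0.01,
    "beta_distill": 5.0,
    "optimizer": "adam",
    "batch_size": 8,      # assumption
    "latent_dim": 8,      # assumption (BicycleGAN-style latent code)
}
```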
Note that for image-conditioned image generation, we only compare with joint learning and sequential fine-tuning methods, as memory replay based approaches are not applicable without any ground-truth conditional input. Quantitative Metrics. We use different metrics to evaluate different aspects of the generation. In this work, we use Acc, r-Acc and LPIPS to validate the quality of the generated data. Acc is the accuracy of a classifier network trained on real images and evaluated on generated images (higher indicates better generation quality). r-Acc is the accuracy of a classifier network trained on generated images and evaluated on real images (higher indicates better generation quality). LPIPS [33] is used to quantitatively evaluate diversity, as in BicycleGAN [35]. Higher LPIPS indicates higher diversity, and LPIPS values closer to those of real images indicate more realistic generation.
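A sketch of the two classification-based metrics (illustrative; `train_classifier`, the classifier interface, and the datasets are assumed placeholders):

```python
import numpy as np

def accuracy(classifier, images, labels):
    """Fraction of images assigned their conditioning label by the classifier."""
    preds = classifier(images)              # assumed to return (N, num_classes) scores
    return float(np.mean(np.argmax(preds, axis=1) == labels))

def acc_and_racc(train_classifier, real, real_labels, fake, fake_labels):
    """Acc: classifier trained on real data, evaluated on generated data.
    r-Acc: classifier trained on generated data, evaluated on real data."""
    clf_real = train_classifier(real, real_labels)
    clf_fake = train_classifier(fake, fake_labels)
    acc = accuracy(clf_real, fake, fake_labels)
    r_acc = accuracy(clf_fake, real, real_labels)
    return acc, r_acc
```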
Image-conditioned Image Generation
Digit Generation. We divide the digits in MNIST [14] into 3 groups: {0,1,2}, {3,4,5}, and {6,7,8,9}. The digits in each group are dyed with a signature color as shown in Figure 3. Given a dyed image, the task is to generate a foreground segmentation mask for the digit (i.e. generate a foreground segmentation given a dyed image as condition). The three groups give us three tasks for sequential learning. Generated images from the last task for all approaches are shown in Figure 3. We can see that sequential fine-tuning suffers from catastrophic forgetting (it is unable to segment digits 0-5 from the previous tasks), while our approach can learn to generate segmentation masks for the current task without forgetting the previous tasks. Image-to-image Translation. We also apply Lifelong GAN to more challenging domains and datasets with large variation for higher resolution images. The first task is image-to-image translation of edges → shoe photos [31,29]. The second task is image-to-image translation of segmentations → facades [22]. The goal of this experiment is to learn the task of semantic segmentations → facades without forgetting the task edges → shoe photos. We sample approximately 20,000 image pairs for the first task and use all images for the second task. Generated images for all approaches are shown in Figure 4. For both Lifelong GAN and sequential fine-tuning, the model for Task 2 is initialized from the same model trained on Task 1. We show the generation results of each task for Lifelong GAN. For sequential fine-tuning, we show the generation results of the last task. It is clear that the sequentially fine-tuned model completely forgets the previous task and can only generate incoherent facade-like patterns. In contrast, Lifelong GAN learns the current generative task while remembering the previous task. It is also observed that Lifelong GAN is capable of maintaining the diversity of generated images of the previous task.
Label-conditioned Image Generation
Digit Generation. We divide the MNIST [14] digits into 4 groups, {0,1,2}, {3,4}, {5,6,7} and {8,9}, resulting in four tasks for sequential learning. Each task is to generate binary MNIST digits given labels (one-hot encoded labels) as conditional inputs.
Visual results for all methods are shown in Figure 5, where we also include outputs of the generator after each task for our approach and memory replay. Sequential fine-tuning results in catastrophic forgetting, as shown by this baseline's inability to generate digits from any previous tasks; when given a previous label, it will either generate something similar to the current task or simply unrecognizable patterns. Meanwhile, both our approach and memory replay are visually similar to joint training results, indicating that both are able to address the forgetting issue in this task. [Figure 4: Comparison among different approaches for continual learning of image-to-image translation tasks. Given the same model trained for the task edges → shoes, we train Lifelong GAN and a sequential fine-tuning model on the task segmentations → facades. Sequential fine-tuning suffers from severe catastrophic forgetting. In contrast, Lifelong GAN can learn the current task while remembering the old task. We demonstrate some intermediate results during different tasks of continual learning for our distillation based approach and memory replay. Sequential fine-tuning suffers from severe forgetting issues while other methods give visually similar results compared to the joint learning results.] Quantitatively, our method achieves comparable classification accuracy to memory replay, and outperforms memory replay in terms of reverse classification accuracy. Flower Generation. We also demonstrate Lifelong GAN on a more challenging dataset, which contains higher resolution images from five categories of the Flower dataset [20]. The experiment consists of a sequence of five tasks in the order of sunflower, daisy, iris, daffodil, pansy. Each task involves learning a new category.
Generated images for all approaches are shown in Figure 6. We show the generation results of each task for both Lifelong GAN and memory replay to better analyze these two methods. For sequential fine-tuning, we show the generation results of the last task, which is enough to show that the model suffers from catastrophic forgetting. Figure 6 gives useful insights into the comparison between Lifelong GAN and memory replay. Both methods can learn to generate images for new tasks while remembering previous ones. However, memory replay is more sensitive to generation artifacts appearing in the intermediate tasks of sequential learning. [Figure 6: Comparison among different approaches for continual learning of flower image generation tasks. Given the same model trained for category sunflower, we train Lifelong GAN, memory replay and sequential fine-tuning models for the other tasks. Sequential fine-tuning suffers from severe catastrophic forgetting, while both Lifelong GAN and memory replay can learn to perform the current task while remembering the old tasks. Lifelong GAN is more robust to artifacts in the generated images of the middle tasks, while memory replay is much more sensitive and all later tasks are severely impacted by these artifacts.]
While training Task 3 (category iris), both Lifelong GAN and memory replay show some artifacts in the generated images. For memory replay, the artifacts are reinforced during the training of later tasks and gradually spread over all categories. In contrast, Lifelong GAN is more robust to the artifacts and later tasks are much less sensitive to intermediate tasks. Lifelong GAN treats previous tasks and current tasks separately, trying to learn the distribution of new tasks while mimicking the distribution of the old tasks. Table 2 shows the quantitative results. Lifelong GAN outperforms memory replay by 10% in terms of classification accuracy and 25% in terms of reverse classification accuracy. We also observed visually and quantitatively that memory replay tends to lose diversity during the sequential learning, and generates images with little diversity for the final task.
Conclusion
We study the problem of lifelong learning for generative networks and propose a distillation-based continual learning framework that enables a single network to be extended to new tasks without forgetting previous tasks, using supervision for the current task only. Unlike previous methods that adopt memory replay to generate images from previous tasks as training data, we employ knowledge distillation to transfer learned knowledge from previous networks to the new network. Our generic framework enables a broader range of generation tasks, including image-to-image translation, which is not possible with memory replay based methods. We validate Lifelong GAN for both image-conditioned and label-conditioned generation tasks, and both qualitative and quantitative results illustrate the generality and effectiveness of our method. | 4,135
1907.10107 | 2962860923 | Lifelong learning is challenging for deep neural networks due to their susceptibility to catastrophic forgetting. Catastrophic forgetting occurs when a trained network is not able to maintain its ability to accomplish previously learned tasks when it is trained to perform new tasks. We study the problem of lifelong learning for generative models, extending a trained network to new conditional generation tasks without forgetting previous tasks, while assuming access to the training data for the current task only. In contrast to state-of-the-art memory replay based approaches which are limited to label-conditioned image generation tasks, a more generic framework for continual learning of generative models under different conditional image generation settings is proposed in this paper. Lifelong GAN employs knowledge distillation to transfer learned knowledge from previous networks to the new network. This makes it possible to perform image-conditioned generation tasks in a lifelong learning setting. We validate Lifelong GAN for both image-conditioned and label-conditioned generation tasks, and provide qualitative and quantitative results to show the generality and effectiveness of our method. | For discriminative settings, Shmelkov et al. @cite_5 employ a distillation loss that measures the discrepancy between the output of the old and new network for distilling knowledge learnt by the old network. In addition, Castro et al. @cite_35 propose to use a few exemplar images from previous tasks and perform knowledge distillation using new features from previous classification layers followed by a modified activation layer. For generative settings, continual learning has been primarily achieved using memory replay based methods. Replay was first proposed by @cite_17 , where the images for previous tasks are generated and combined together with the data for the new task to form a joint dataset, and a new model is trained on the joint dataset. A similar idea is also adopted by @cite_22 for label-conditioned image generation. Approaches based on elastic weight consolidation @cite_3 have also been explored for the task of label-conditioned image generation @cite_22 , but they have limited capability to remember previous categories and generate high quality images. | {
"abstract": [
"Although deep learning approaches have stood out in recent years due to their state-of-the-art results, they continue to suffer from catastrophic forgetting, a dramatic decrease in overall performance when training with new classes added incrementally. This is due to current neural network architectures requiring the entire dataset, consisting of all the samples from the old as well as the new classes, to update the model—a requirement that becomes easily unsustainable as the number of classes grows. We address this issue with our approach to learn deep neural networks incrementally, using new data and only a small exemplar set corresponding to samples from the old classes. This is based on a loss composed of a distillation measure to retain the knowledge acquired from the old classes, and a cross-entropy loss to learn the new classes. Our incremental training is achieved while keeping the entire framework end-to-end, i.e., learning the data representation and the classifier jointly, unlike recent methods with no such guarantees. We evaluate our method extensively on the CIFAR-100 and ImageNet (ILSVRC 2012) image classification datasets, and show state-of-the-art performance.",
"Previous works on sequential learning address the problem of forgetting in discriminative models. In this paper we consider the case of generative models. In particular, we investigate generative adversarial networks (GANs) in the task of learning new categories in a sequential fashion. We first show that sequential fine tuning renders the network unable to properly generate images from previous categories (i.e. forgetting). Addressing this problem, we propose Memory Replay GANs (MeRGANs), a conditional GAN framework that integrates a memory replay generator. We study two methods to prevent forgetting by leveraging these replays, namely joint training with replay and replay alignment. Qualitative and quantitative experimental results in MNIST, SVHN and LSUN datasets show that our memory replay approach can generate competitive images while significantly mitigating the forgetting of previous categories.",
"Comunicacio presentada a: 35th International Conference on Machine Learning, celebrat a Stockholmsmassan, Suecia, del 10 al 15 de juliol del 2018.",
"Despite their success for object detection, convolutional neural networks are ill-equipped for incremental learning, i.e., adapting the original model trained on a set of classes to additionally detect objects of new classes, in the absence of the initial training data. They suffer from “catastrophic forgetting”–an abrupt degradation of performance on the original set of classes, when the training objective is adapted to the new classes. We present a method to address this issue, and learn object detectors incrementally, when neither the original training data nor annotations for the original classes in the new training set are available. The core of our proposed solution is a loss function to balance the interplay between predictions on the new classes and a new distillation loss which minimizes the discrepancy between responses for old classes from the original and the updated networks. This incremental learning can be performed multiple times, for a new set of classes in each step, with a moderate drop in performance compared to the baseline network trained on the ensemble of data. We present object detection results on the PASCAL VOC 2007 and COCO datasets, along with a detailed empirical analysis of the approach.",
"Developments in deep generative models have allowed for tractable learning of high-dimensional data distributions. While the employed learning procedures typically assume that training data is drawn i.i.d. from the distribution of interest, it may be desirable to model distinct distributions which are observed sequentially, such as when different classes are encountered over time. Although conditional variations of deep generative models permit multiple distributions to be modeled by a single network in a disentangled fashion, they are susceptible to catastrophic forgetting when the distributions are encountered sequentially. In this paper, we adapt recent work in reducing catastrophic forgetting to the task of training generative adversarial networks on a sequence of distinct distributions, enabling continual generative modeling."
],
"cite_N": [
"@cite_35",
"@cite_22",
"@cite_3",
"@cite_5",
"@cite_17"
],
"mid": [
"2884282566",
"2892234010",
"2963813679",
"2962966271",
"2619508703"
]
} | Lifelong GAN: Continual Learning for Conditional Image Generation | Learning is a lifelong process for humans. We acquire knowledge throughout our lives so that we become more efficient and versatile facing new tasks. The accumulation of knowledge in turn accelerates our acquisition of new skills. In contrast to human learning, lifelong learning remains an open challenge for modern deep learning systems. It is well known that deep neural networks are susceptible to a phenomenon known as catastrophic forgetting [18]. Catastrophic forgetting occurs when a trained neural network is not able to maintain its ability to accomplish previously learned tasks when it is adapted to perform new tasks.
Consider the example in Figure 1. A generative model is first trained on the task edges → shoes. Given a new task segmentations → facades, a new model is initialized from the previous one and fine-tuned for the new task. After training, the model forgets about the previous task and cannot generate shoe images given edge images as inputs. One way to address this would be to combine the training data for the current task with the training data for all previous tasks and then train the model using the joint data. Unfortunately, this approach is not scalable in general: as new tasks are added, the storage requirements and training time of the joint data grow without bound. In addition, the models for previous tasks may be trained using private or privileged data which is not accessible during the training of the current task. The challenge in lifelong learning is therefore to extend the model to accomplish the current task, without forgetting how to accomplish previous tasks in scenarios where we are restricted to the training data for only the current task. In this work, we work under the assumption that we only have access to a model trained on previous tasks without access to the previous data.
Recent efforts [24,3,7] have demonstrated how discriminative models could be incrementally learnt for a sequence of tasks. Despite the success of these efforts, lifelong learning in generative settings remains an open problem. Parameter regularization [23,13] has been adapted from discriminative models to generative models, but poor performance is observed [28]. The state-of-the-art continual learning generative frameworks [23,28] are built on memory replay, which treats generated data from previous tasks as part of the training examples in the new tasks. Although memory replay has been shown to alleviate the catastrophic forgetting problem by taking advantage of the generative setting, its applicability is limited to label-conditioned generation tasks. In particular, replay based methods cannot be extended to image-conditioned generation, because no conditional images are available from which to generate replay training pairs for previous tasks. Therefore, a more generic continual learning framework that can enable various conditional generation tasks is valuable.
In this paper, we introduce a generic continual learning framework Lifelong GAN that can be applied to both image-conditioned and label-conditioned image generation. We employ knowledge distillation [9] to address catastrophic forgetting for conditional generative continual learning tasks. Given a new task, Lifelong GAN learns to perform this task, and to keep the memory of previous tasks, information is extracted from a previously trained network and distilled to the new network during training by encouraging the two networks to produce similar output values or visual patterns. To the best of our knowledge, we are the first to utilize the principle of knowledge distillation for continual learning generative frameworks.
To summarize, our contributions are as follows. First, we propose a generic framework for continual learning of conditional image generation models. Second, we validate the effectiveness of our approach for two different types of conditional inputs: (1) image-conditioned generation, and (2) label-conditioned generation, and provide qualitative and quantitative results to illustrate the capability of our GAN framework to learn new generation tasks without the catastrophic forgetting of previous tasks. Third, we illustrate the generality of our framework by performing continual learning across diverse data domains.
Related Work
Conditional GANs. Image generation has achieved great success since the introduction of GANs [8]. There also has been rapid progress in the field of conditional image generation [19]. Conditional image generation tasks can be typically categorized as image-conditioned image generation and label-conditioned image generation.
Recent image-conditioned models have shown promising results for numerous image-to-image translation tasks such as maps → satellite images, sketches → photos, labels → images [10,35,34], future frame prediction [26], super-resolution [15], and inpainting [30]. Moreover, images can be stylized by disentangling the style and the content [11,16] or by encoding styles into a stylebank (set of convolution filters) [4]. Models [32,17] for rendering a person's appearance onto a given pose have been shown to be effective for person re-identification. Label-conditioned models [5,6] have also been explored for generating images for specific categories.
Knowledge Distillation. Proposed by Hinton et al. [9], knowledge distillation is designed for transferring knowledge from a teacher classifier to a student classifier. The teacher classifier normally has more privileged information [25] than the student classifier. This privileged information covers two aspects. The first aspect is the learning power, namely the size of the neural networks: a student classifier can have a more compact network structure than the teacher classifier, and by distilling knowledge from the teacher to the student, the student classifier can reach classification performance similar to, or even better than, that of the teacher network. Relevant applications include network compression [21] and network training acceleration [27]. The second aspect is the learning resources, namely the amount of input data: the teacher classifier may have more learning resources and see data that the student cannot. Compared with the first aspect, this aspect is relatively unexplored and is the focus of our work.
Continual Learning. Many techniques have been recently proposed for solving continuous learning problems in computer vision [24,3] and robotics [7] in both discriminative and generative settings.
For discriminative settings, Shmelkov et al. [24] employ a distillation loss that measures the discrepancy between the output of the old and new network for distilling knowledge learnt by the old network. In addition, Castro et al. [3] propose to use a few exemplar images from previous tasks and perform knowledge distillation using new features from previous classification layers followed by a modified activation layer. For generative settings, continual learning has been primarily achieved using memory replay based methods. Replay was first proposed by Seff et al. [23], where the images for previous tasks are generated and combined together with the data for the new task to form a joint dataset, and a new model is trained on the joint dataset. A similar idea is also adopted by Wu et al. [28] for label-conditioned image generation. Approaches based on elastic weight consolidation [13] have also been explored for the task of label-conditioned image generation [28], but they have limited capability to remember previous categories and generate high quality images.
In this paper, we introduce knowledge distillation within continual generative model learning, which has not been explored before. Our approach can be applied to both image-conditioned generation, for which the replay mechanism is not applicable, and label-conditioned image generation.
Approach
Our proposed Lifelong GAN addresses catastrophic forgetting using knowledge distillation and, in contrast to replay based methods, can be applied to continually learn both label-conditioned and image-conditioned generation tasks. In this paper, we build our model on the state-of-the-art BicycleGAN [35]. Our overall approach for continual learning for a generative model is illustrated in Figure 2. Given data from the current task, Lifelong GAN learns to perform this task, and to keep the memory of previous tasks, knowledge distillation is adopted to distill information from a previously trained network to the current network by encouraging the two networks to produce similar output values or patterns given the same input. To avoid "conflicts" that arise when having two desired outputs (current training goal and outputs from previous model) given the same input, we generate auxiliary data for distillation from the current data via two operations, Montage and Swap.
Lifelong GAN with Knowledge Distillation
To perform continual learning of conditional generation tasks, the proposed Lifelong GAN is built on top of BicycleGAN with the adoption of knowledge distillation. We first introduce the problem formulation, followed by a detailed description of our model, then discuss our strategy to tackle the conflicting objectives in training.
Problem Formulation. During training of the $t$-th task, we are given a dataset of $N_t$ paired instances
$S_t = \{(A_{i,t}, B_{i,t}) \mid A_{i,t} \in \mathcal{A}_t, B_{i,t} \in \mathcal{B}_t\}_{i=1}^{N_t}$
where $\mathcal{A}_t$ and $\mathcal{B}_t$ denote the domain of conditional images and ground truth images respectively. For simplicity, we use the notations $A_t$, $B_t$ for an instance from the respective domain. The goal is to train a model $M_t$ which can generate images of current task $B_t \leftarrow (A_t, z)$, without forgetting how to generate images of previous tasks $B_i \leftarrow (A_i, z)$, $i = 1, 2, \dots, (t-1)$.
Figure 2: Overview of Lifelong GAN. Given training data for the $t$-th task, model $M_t$ is trained to learn this current task. To avoid forgetting previous tasks, knowledge distillation is adopted to distill information from model $M_{t-1}$ to model $M_t$ by encouraging the two networks to produce similar output values or patterns given the auxiliary data as inputs.
Let $M_t$ be the $t$-th model trained, and $M_{t-1}$ be the $(t-1)$-th model trained. Both $M_{t-1}$ and $M_t$ contain two cycles (cVAE-GAN and cLR-GAN) as described in Section 3.1. Inspired by continual learning methods for discriminative models, we prevent the current model $M_t$ from forgetting the knowledge learned by the previous model $M_{t-1}$ by inputting the data of the current task $S_t$ to both $M_t$ and $M_{t-1}$, and distilling the knowledge from $M_{t-1}$ to $M_t$ by encouraging the outputs of $M_{t-1}$ and $M_t$ to be similar. We describe the process of knowledge distillation for both cycles as follows.
cVAE-GAN. Recall from Section 3.1 that cVAE-GAN has two outputs: the encoded latent code $z$ and the reconstructed ground truth image $B$. Given ground truth image $B_t$, the encoders $E_t$ and $E_{t-1}$ are encouraged to encode it in the same way and produce the same output; given encoded latent code $z$ and conditional image $A_t$, the generators $G_t$ and $G_{t-1}$ are encouraged to reconstruct the ground truth images in the same way. Therefore, we define the loss for the cVAE-GAN cycle with knowledge distillation as:
$$\mathcal{L}^t_{\text{cVAE-DL}} = \mathcal{L}^t_{\text{cVAE-GAN}} + \beta\,\mathbb{E}_{A_t,B_t \sim p(A_t,B_t)}\big[\|E_t(B_t) - E_{t-1}(B_t)\|_1 + \|G_t(A_t, E_t(B_t)) - G_{t-1}(A_t, E_{t-1}(B_t))\|_1\big], \qquad (4)$$
where $\beta$ is the loss weight for knowledge distillation.
cLR-GAN. Recall from Section 3.1 that cLR-GAN also has two outputs: the generated image $B$ and the reconstructed latent code $z$. Given the latent code $z$ and conditional image $A_t$, the generators $G_t$ and $G_{t-1}$ are encouraged to generate images in the same way; given the generated image $B_t$, the encoders $E_t$ and $E_{t-1}$ are encouraged to encode the generated images in the same way. Therefore, we define the loss for the cLR-GAN cycle as:
$$\mathcal{L}^t_{\text{cLR-DL}} = \mathcal{L}^t_{\text{cLR-GAN}} + \beta\,\mathbb{E}_{A_t \sim p(A_t),\, z \sim p(z)}\big[\|G_t(A_t, z) - G_{t-1}(A_t, z)\|_1 + \|E_t(G_t(A_t, z)) - E_{t-1}(G_{t-1}(A_t, z))\|_1\big]. \qquad (5)$$
The distillation losses can be defined in several ways, e.g. the $L_2$ loss [2,24], KL divergence [9] or cross-entropy [9,3]. In our approach, we use $L_1$ instead of $L_2$ to avoid blurriness in the generated images.
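The two distance choices discussed above can be written compactly as follows (plain NumPy, for illustration only; the remark about $L_2$ reflects the common observation that squared-error objectives tend to average plausible outputs and blur images):

```python
import numpy as np

def l1_distill(student, teacher):
    """Mean absolute difference between student and teacher outputs (used here)."""
    return np.mean(np.abs(student - teacher))

def l2_distill(student, teacher):
    """Mean squared difference; tends to produce blurrier matches than L1."""
    return np.mean((student - teacher) ** 2)
```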
Lifelong GAN adopts knowledge distillation in both cycles; hence, the overall loss function is:
$$\mathcal{L}^t_{\text{Lifelong-GAN}} = \mathcal{L}^t_{\text{cVAE-DL}} + \mathcal{L}^t_{\text{cLR-DL}}. \qquad (6)$$
Conflict Removal with Auxiliary Data. Note that Equation 4 contains conflicting objectives. The first term encourages the model to reconstruct the inputs of the current task, while the third term encourages the model to generate the same images as the outputs of the old model. In addition, the first term encourages the model to encode the input images to normal distributions, while the second term encourages the model to encode the input images to a distribution learned from the old model. Similar conflicting objectives exist in Equation 5. To sum up, the conflicts appear when the model is required to produce two different outputs, namely mimicking the performance of the old model and accomplishing the new goal, given the same inputs.
To address these conflicting objectives, we propose to use auxiliary data for distilling knowledge from the old model $M_{t-1}$ to model $M_t$. The use of auxiliary data for distillation removes these conflicts. It is important that new auxiliary data should be used for each task, otherwise the network could potentially implicitly encode them when learning previous tasks. We describe approaches for doing so without requiring external data sources in Sec. 3.3.
The auxiliary data $S^{aux}_t = \{(A^{aux}_{i,t}, B^{aux}_{i,t}) \mid A^{aux}_{i,t} \in \mathcal{A}^{aux}_t, B^{aux}_{i,t} \in \mathcal{B}^{aux}_t\}_{i=1}^{N^{aux}_t}$ consist of $N^{aux}_t$ paired instances. The losses $\mathcal{L}^t_{\text{cVAE-DL}}$ and $\mathcal{L}^t_{\text{cLR-DL}}$ are re-written as:
$$\mathcal{L}^t_{\text{cVAE-DL}} = \mathcal{L}^t_{\text{cVAE-GAN}} + \beta\,\mathbb{E}_{A^{aux}_t, B^{aux}_t \sim p(A^{aux}_t, B^{aux}_t)}\big[\|E_t(B^{aux}_t) - E_{t-1}(B^{aux}_t)\|_1 + \|G_t(A^{aux}_t, E_t(B^{aux}_t)) - G_{t-1}(A^{aux}_t, E_{t-1}(B^{aux}_t))\|_1\big], \qquad (7)$$
$$\mathcal{L}^t_{\text{cLR-DL}} = \mathcal{L}^t_{\text{cLR-GAN}} + \beta\,\mathbb{E}_{A^{aux}_t \sim p(A^{aux}_t),\, z \sim p(z)}\big[\|G_t(A^{aux}_t, z) - G_{t-1}(A^{aux}_t, z)\|_1 + \|E_t(G_t(A^{aux}_t, z)) - E_{t-1}(G_{t-1}(A^{aux}_t, z))\|_1\big], \qquad (8)$$
where $\beta$ is the loss weight for knowledge distillation. Lifelong GAN can be used for continual learning of both image-conditioned and label-conditioned generation tasks. The auxiliary images for knowledge distillation for both settings can be generated using the Montage and Swap operations described in Section 3.3. For label-conditioned generation, we can simply use the categorical codes from previous tasks.
Auxiliary Data Generation
We now discuss the generation of auxiliary data. Recall from Section 3.2 that we use auxiliary data to address the conflicting objectives in Equations 4 and 5.
The auxiliary images do not require labels, and can in principle be sourced from online image repositories. However, this solution may not be scalable as it requires a new set of auxiliary images to be collected when learning each new task. A more desirable alternative may be to generate auxiliary data by using the current data in a way that avoids the over-fitting problem. We propose two operations for generating auxiliary data from the current task data:
1. Montage: Randomly sample small image patches from current input images and montage them together to produce auxiliary images for distillation.
2. Swap: Swap the conditional image $A_t$ and the ground truth image $B_t$ for distillation. Namely, the encoder receives the conditional image $A_t$ and encodes it to a latent code $z$, and the generator is conditioned on the ground truth image $B_t$.
Both operations are used in image-conditioned generation; in label-conditioned generation, since there is no conditional image, only the montage operation is applicable. Other alternatives may be possible. Essentially, the auxiliary data generation needs to provide out-of-task samples that can be used to preserve the knowledge learned by the old model. The knowledge is preserved using the distillation losses, which encourage the old and new models to produce similar responses on the out-of-task samples.
Experiments
We evaluate Lifelong GAN for two settings: (1) image-conditioned image generation, and (2) label-conditioned image generation. We are the first to explore continual learning for image-conditioned image generation; no existing approaches are applicable for comparison. Additionally, we compare our model with the memory replay based approach, which is the state of the art for label-conditioned image generation. Training Details. All the sequential digit generation models are trained on images of size 64 × 64 and all other models are trained on images of size 128 × 128. We use the TensorFlow [1] framework with the Adam optimizer [12] and a learning rate of 0.0001. We set the parameters $\lambda_{latent} = 0.5$, $\lambda_{KL} = 0.01$, and $\beta = 5.0$ for all experiments. The weights of the generator and encoder in cVAE-GAN and cLR-GAN are shared. Extra training iterations on the generator and encoder using only the distillation loss are used for models trained on images of size 128 × 128 to better remember previous tasks. Baseline Models. We compare Lifelong GAN to the following baseline models: (a) Memory Replay (MR): Images generated by a generator trained on previous tasks are combined with the training images for the current task to form a hybrid training set. (b) Sequential Fine-tuning (SFT): The model is fine-tuned in a sequential manner, with parameters initialized from the model trained/fine-tuned on the previous task. (c) Joint Learning (JL): The model is trained utilizing data from all tasks.
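Putting the protocol together, sequential training can be sketched as an outer loop over tasks that snapshots the current networks as the frozen teacher before each new task (illustrative Python; `make_models` and `train_task` are assumed placeholders, and the teacher should be frozen and put in evaluation mode inside `train_task`):

```python
import copy

def lifelong_training(task_datasets, make_models, train_task):
    """Sequentially train on tasks 1..T; from task 2 on, distill from a frozen
    copy of the networks trained on the previous tasks."""
    E, G = make_models()
    teacher = None                       # no distillation for the first task
    for t, dataset in enumerate(task_datasets, start=1):
        train_task(E, G, teacher, dataset)
        teacher = copy.deepcopy((E, G))  # snapshot M_t to act as M_{t-1} for task t+1
    return E, G
```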
Note that for image-conditioned image generation, we only compare with joint learning and sequential fine-tuning methods, as memory replay based approaches are not applicable without any ground-truth conditional input. Quantitative Metrics. We use different metrics to evaluate different aspects of the generation. In this work, we use Acc, r-Acc and LPIPS to validate the quality of the generated data. Acc is the accuracy of a classifier network trained on real images and evaluated on generated images (higher indicates better generation quality). r-Acc is the accuracy of a classifier network trained on generated images and evaluated on real images (higher indicates better generation quality). LPIPS [33] is used to quantitatively evaluate diversity, as in BicycleGAN [35]. Higher LPIPS indicates higher diversity, and LPIPS values closer to those of real images indicate more realistic generation.
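Diversity can be estimated by averaging LPIPS distances over pairs of generated images; a sketch assuming the third-party `lpips` package (an assumption about tooling, not the authors' evaluation code):

```python
import itertools
import torch
import lpips  # pip install lpips

def lpips_diversity(images, max_pairs=100):
    """Mean LPIPS distance over pairs of generated images.
    `images` is a float tensor of shape (N, 3, H, W) scaled to [-1, 1]."""
    metric = lpips.LPIPS(net="alex")
    pairs = list(itertools.combinations(range(images.size(0)), 2))[:max_pairs]
    with torch.no_grad():
        dists = [metric(images[i:i+1], images[j:j+1]).item() for i, j in pairs]
    return sum(dists) / len(dists)
```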
Image-conditioned Image Generation
Digit Generation. We divide the digits in MNIST [14] into 3 groups: {0,1,2}, {3,4,5}, and {6,7,8,9}. The digits in each group are dyed with a signature color as shown in Figure 3. Given a dyed image, the task is to generate a foreground segmentation mask for the digit (i.e. generate a foreground segmentation given a dyed image as condition). The three groups give us three tasks for sequential learning. Generated images from the last task for all approaches are shown in Figure 3. We can see that sequential fine-tuning suffers from catastrophic forgetting (it is unable to segment digits 0-5 from the previous tasks), while our approach can learn to generate segmentation masks for the current task without forgetting the previous tasks. Image-to-image Translation. We also apply Lifelong GAN to more challenging domains and datasets with large variation for higher resolution images. The first task is image-to-image translation of edges → shoe photos [31,29]. The second task is image-to-image translation of segmentations → facades [22]. The goal of this experiment is to learn the task of semantic segmentations → facades without forgetting the task edges → shoe photos. We sample approximately 20,000 image pairs for the first task and use all images for the second task. Generated images for all approaches are shown in Figure 4. For both Lifelong GAN and sequential fine-tuning, the model for Task 2 is initialized from the same model trained on Task 1. We show the generation results of each task for Lifelong GAN. For sequential fine-tuning, we show the generation results of the last task. It is clear that the sequentially fine-tuned model completely forgets the previous task and can only generate incoherent facade-like patterns. In contrast, Lifelong GAN learns the current generative task while remembering the previous task. It is also observed that Lifelong GAN is capable of maintaining the diversity of generated images of the previous task.
Label-conditioned Image Generation
Digit Generation. We divide the MNIST [14] digits into 4 groups, {0,1,2}, {3,4}, {5,6,7} and {8,9}, resulting in four tasks for sequential learning. Each task is to generate binary MNIST digits given labels (one-hot encoded labels) as conditional inputs.
Visual results for all methods are shown in Figure 5, where we also include outputs of the generator after each task for our approach and memory replay. Sequential fine-tuning results in catastrophic forgetting, as shown by this baseline's inability to generate digits from any previous tasks; when given a previous label, it will either generate something similar to the current task or simply unrecognizable patterns. Meanwhile, both our approach and memory replay are visually similar to joint training results, indicating that both are able to address the forgetting issue in this task. Quantitatively, our method achieves comparable classification accuracy to memory replay, and outperforms memory replay in terms of reverse classification accuracy.
Figure 4: Comparison among different approaches for continual learning of image-to-image translation tasks. Given the same model trained for the task edges → shoes, we train the Lifelong GAN and sequential fine-tuning models on the task segmentations → facades. Sequential fine-tuning suffers from severe catastrophic forgetting. In contrast, Lifelong GAN can learn the current task while remembering the old task.
Figure 5: We demonstrate some intermediate results during different tasks of continual learning for our distillation based approach and memory replay. Sequential fine-tuning suffers from severe forgetting issues while the other methods give visually similar results compared to the joint learning results.
Flower Generation. We also demonstrate Lifelong GAN on a more challenging dataset, which contains higher resolution images from five categories of the Flower dataset [20]. The experiment consists of a sequence of five tasks in the order of sunflower, daisy, iris, daffodil, pansy. Each task involves learning a new category.
Generated images for all approaches are shown in Figure 6. We show the generation results of each task for both Lifelong GAN and memory replay to better analyze these two methods. For sequential fine-tuning, we show the generation results of the last task, which is enough to show that the model suffers from catastrophic forgetting. Figure 6 gives useful insights into the comparison between Lifelong GAN and memory replay. Both methods can learn to generate images for new tasks while remembering previous ones. However, memory replay is more sensitive to generation artifacts appearing in the intermediate tasks of sequential learning. While training Task 3 (category iris), both Lifelong GAN and memory replay show some artifacts in the generated images. For memory replay, the artifacts are reinforced during the training of later tasks and gradually spread over all categories. In contrast, Lifelong GAN is more robust to the artifacts and later tasks are much less sensitive to intermediate tasks. Lifelong GAN treats previous tasks and current tasks separately, trying to learn the distribution of new tasks while mimicking the distribution of the old tasks. Table 2 shows the quantitative results. Lifelong GAN outperforms memory replay by 10% in terms of classification accuracy and 25% in terms of reverse classification accuracy. We also observed visually and quantitatively that memory replay tends to lose diversity during the sequential learning, and generates images with little diversity for the final task.
Figure 6: Comparison among different approaches for continual learning of flower image generation tasks. Given the same model trained for the category sunflower, we train Lifelong GAN, memory replay and sequential fine-tuning models for the other tasks. Sequential fine-tuning suffers from severe catastrophic forgetting, while both Lifelong GAN and memory replay can learn to perform the current task while remembering the old tasks. Lifelong GAN is more robust to artifacts in the generated images of the middle tasks, while memory replay is much more sensitive and all later tasks are severely impacted by these artifacts.
Conclusion
We study the problem of lifelong learning for generative networks and propose a distillation based continual learning framework that enables a single network to be extended to new tasks without forgetting previous tasks, requiring supervision only for the current task. Unlike previous methods that adopt memory replay to generate images from previous tasks as training data, we employ knowledge distillation to transfer learned knowledge from previous networks to the new network. Our generic framework enables a broader range of generation tasks including image-to-image translation, which is not possible using memory replay based methods. We validate Lifelong GAN for both image-conditioned and label-conditioned generation tasks, and both qualitative and quantitative results illustrate the generality and effectiveness of our method. | 4,135
1907.10156 | 2963636228 | Most object detection algorithms can be categorized into two classes: two-stage detectors and one-stage detectors. For two-stage detectors, a region proposal phase can filter massive background candidates in the first stage and it makes the classification task more balanced in the second stage. Recently, one-stage detectors have attracted much attention due to their simple yet effective architecture. Different from two-stage detectors, one-stage detectors have to identify foreground objects from all candidates in a single stage. This architecture is efficient but can suffer from the imbalance issue with respect to two aspects: the imbalance between classes and that in the distribution of background, where only a few candidates are hard to identify. In this work, we propose to address the challenge by developing the distributional ranking (DR) loss. First, we convert the classification problem to a ranking problem to alleviate the class-imbalance problem. Then, we propose to rank the distribution of foreground candidates above that of background ones in the constrained worst-case scenario. This strategy not only handles the imbalance in background candidates but also improves the efficiency of the ranking algorithm. Besides the classification task, we also improve the regression loss by gradually approaching the @math loss as suggested in interior-point methods. To evaluate the proposed losses, we replace the corresponding losses in RetinaNet, which reports the state-of-the-art performance as a one-stage detector. With ResNet-101 as the backbone, our method can improve the mAP on the COCO data set from @math to @math by only changing the loss functions, which verifies the effectiveness of the proposed losses. | Detection is a fundamental task in computer vision. In conventional methods, hand-crafted features, e.g., HOG @cite_3 and SIFT @cite_7 , are used for detection either with a sliding-window strategy that holds a dense set of candidates, e.g., DPM @cite_2 , or with a region proposal method that keeps a sparse set of candidates, e.g., Selective Search @cite_8 . Recently, since deep neural networks have shown dominant performance in classification tasks @cite_11 , the features obtained from neural networks are leveraged for detection tasks. | {
"abstract": [
"This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.",
"This paper addresses the problem of generating possible object locations for use in object recognition. We introduce selective search which combines the strength of both an exhaustive search and segmentation. Like segmentation, we use the image structure to guide our sampling process. Like exhaustive search, we aim to capture all possible object locations. Instead of a single technique to generate possible object locations, we diversify our search and use a variety of complementary image partitionings to deal with as many image conditions as possible. Our selective search results in a small set of data-driven, class-independent, high quality locations, yielding 99 recall and a Mean Average Best Overlap of 0.879 at 10,097 locations. The reduced number of locations compared to an exhaustive search enables the use of stronger machine learning techniques and stronger appearance models for object recognition. In this paper we show that our selective search enables the use of the powerful Bag-of-Words model for recognition. The selective search software is made publicly available (Software: http: disi.unitn.it uijlings SelectiveSearch.html ).",
"We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.",
"This paper describes a discriminatively trained, multiscale, deformable part model for object detection. Our system achieves a two-fold improvement in average precision over the best performance in the 2006 PASCAL person detection challenge. It also outperforms the best results in the 2007 challenge in ten out of twenty categories. The system relies heavily on deformable parts. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL challenge. Our system also relies heavily on new methods for discriminative training. We combine a margin-sensitive approach for data mining hard negative examples with a formalism we call latent SVM. A latent SVM, like a hidden CRF, leads to a non-convex training problem. However, a latent SVM is semi-convex and the training problem becomes convex once latent information is specified for the positive examples. We believe that our training methods will eventually make possible the effective use of more latent information such as hierarchical (grammar) models and models involving latent three dimensional pose.",
"We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5 and 17.0 which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overriding in the fully-connected layers we employed a recently-developed regularization method called \"dropout\" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3 , compared to 26.2 achieved by the second-best entry."
],
"cite_N": [
"@cite_7",
"@cite_8",
"@cite_3",
"@cite_2",
"@cite_11"
],
"mid": [
"2151103935",
"2088049833",
"2161969291",
"2120419212",
"2163605009"
]
} | DR Loss: Improving Object Detection by Distributional Ranking | The performance of object detection has been improved dramatically with the development of deep neural networks in the past few years. Most detection algorithms fall into two categories: two-stage detectors [3,11,12,14] and one-stage detectors [6,15,17,20]. For the two-stage scheme, the procedure of the algorithms can be divided into two parts. In the first stage, a region proposal method will filter most of the background candidate bounding boxes and keep only a small set of candidates. In the following stage, these candidates are classified as foreground classes or background and the bounding box is further refined by optimizing a regression loss. Two-stage detectors demonstrate superior performance on real-world data sets while the efficiency can be an issue in practice, especially for devices with limited computing resources, e.g., smart phones, cameras, etc. Therefore, one-stage detectors are developed for efficient detection. Different from two-stage detectors, one-stage algorithms consist of a single phase and have to identify foreground objects from all candidates directly. The structure of a one-stage detector is straightforward and efficient. However, a one-stage detector may suffer from the imbalance problem that can reside in the following two aspects. First, the numbers of candidates between classes are imbalanced. Without a region proposal phase, the number of background candidates can easily overwhelm that of foreground ones. Second, the distribution of background candidates is imbalanced. Most of them can be easily separated from foreground objects while only a few of them are hard to classify.
To alleviate the imbalance problem, SSD [17] adopts hard negative mining, which keeps a small set of background candidates with the highest loss. By eliminating simple background candidates, the strategy balances the number of candidates between classes and the distribution of background simultaneously. However, some important classification information from background can be lost, and thus the detection performance can degrade. RetinaNet [15] proposes to keep all background candidates but assign different weights for loss functions. The weighted cross entropy loss is called focal loss. It makes the algorithm focus on the hard candidates while reserving the information from all candidates. This strategy improves the performance of one-stage detectors significantly. Despite the success of focal loss, it re-weights classification losses in a heuristic way and can be insufficient to address the class-imbalance problem. Besides, the design of focal loss is data independent and lacks the exploration of the data distribution, which is essential to balance the distribution of background candidates.
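For reference, a minimal NumPy sketch of the re-weighting idea behind the focal loss described above (binary case) is given below; the focusing parameter gamma and the balancing factor alpha are the usual focal-loss hyperparameters, shown here with illustrative default values rather than values taken from this paper.

import numpy as np

def cross_entropy(p, y, eps=1e-12):
    # Standard binary cross entropy on predicted foreground probabilities p with labels y in {0, 1}.
    return -(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

def focal_loss(p, y, gamma=2.0, alpha=0.25, eps=1e-12):
    # The modulating factor (1 - p_t)^gamma shrinks the loss of well-classified (easy) candidates,
    # so training focuses on the hard ones while all candidates are kept.
    p_t = np.where(y == 1, p, 1 - p)
    alpha_t = np.where(y == 1, alpha, 1 - alpha)
    return -alpha_t * (1 - p_t) ** gamma * np.log(p_t + eps)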
In this work, we propose a data dependent ranking loss to handle the imbalance challenge. First, to alleviate the effect of the class-imbalance problem, we convert the classification problem to a ranking problem, which optimizes ranks of pairs. Since each pair consists of a foreground candidate and a background candidate, it is well balanced. Moreover, considering the imbalance in background candidates, we introduce the distributional ranking (DR) loss to rank the constrained distribution of foreground above that of background candidates. By re-weighting the candidates to derive the distribution corresponding to the worst-case loss, the loss can focus on the decision boundary between foreground and background distributions. Besides, we rank the expectation of distributions in lieu of original examples, which reduces the number of pairs in ranking and improves the efficiency. Compared with the re-weighting strategy in focal loss, that for DR loss is data dependent and can balance the distribution of background better. Fig. 1 illustrates the proposed DR loss. Besides the classification task, the regression is also important for detection to refine the bounding boxes of objects. The smoothed L 1 loss is prevalently adopted to approximate the L 1 loss in detection algorithms. We propose to improve the regression loss by gradually approaching the L 1 loss for better approximation, where the similar trick is also applied in interior-point methods [1].
We conduct the experiments on the COCO [16] data set to demonstrate the proposed losses. Since RetinaNet reports the state-of-the-art performance among one-stage detectors, we replace the corresponding losses in RetinaNet with our proposed losses while the other components are retained. For fair comparison, we implement our algorithm in Detectron (https://github.com/facebookresearch/Detectron), which is the official codebase of RetinaNet. With ResNet-101 [12] as the backbone, optimizing our loss functions can boost the mAP of RetinaNet from 39.1% to 41.1%, which confirms the effectiveness of the proposed losses.
The rest of this paper is organized as follows. Section 2 reviews the related work in object detection. Section 3 describes the details of the proposed DR loss and regression loss. Section 4 compares the proposed losses to others on the COCO detection task. Finally, Section 5 concludes this work with future directions.
DR Loss
Given a set of candidate bounding boxes from an image, a detector has to identify the foreground objects from background ones with a classification model. Let θ denote a classifier and it can be learned by optimizing the problem
\min_θ \sum_i^N \sum_{j,k} ℓ(p_{i,j,k})    (1)
where N is the total number of images. In this work, we employ the sigmoid function to predict the probability for each example. p_{i,j,k} is determined by θ and indicates the estimated probability that the j-th candidate in the i-th image is from the k-th class. ℓ(·) is the loss function. In most detectors, the classifier is learned by optimizing the cross entropy loss. For the binary classification problem, it can be written as
ℓ_{CE}(p) = \begin{cases} -\log(p) & y = 1 \\ -\log(1-p) & y = 0 \end{cases}
where y ∈ {0, 1} is the label.
The objective in Eqn. 1 is conventional for object detection and it suffers from the class-imbalance problem. This can be demonstrated by rewriting the problem in the equivalent form
\min_θ \sum_i^N \Big( \sum_{j_+}^{n_+} ℓ(p_{i,j_+}) + \sum_{j_-}^{n_-} ℓ(p_{i,j_-}) \Big)    (2)
where j_+ and j_- denote the positive (i.e., foreground) and negative (i.e., background) examples, respectively. n_+ and n_- are the corresponding numbers of examples. When n_- ≫ n_+, the accumulated loss from the latter term will dominate. This issue comes from the fact that the losses for positive and negative examples are separated, so the contribution of positive examples is overwhelmed by that of negative examples. A heuristic way to handle the problem is to emphasize positive examples by increasing the weights of the corresponding losses. In this work, we aim to address the problem in a more fundamental way.
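A tiny numerical example of the issue in Eqn. 2: even when every background candidate is already classified almost perfectly, the accumulated loss of the negatives can still dwarf the foreground term. The candidate counts and scores below are made up for illustration only.

import numpy as np

n_pos, n_neg = 100, 100_000          # hypothetical candidate counts in one image
p_pos = np.full(n_pos, 0.3)          # foreground probabilities of a poorly fitted model
p_neg_correct = np.full(n_neg, 0.99) # probability assigned to the correct background class

loss_pos = -np.log(p_pos).sum()          # about 120
loss_neg = -np.log(p_neg_correct).sum()  # about 1005: the easy negatives still dominate the sum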
Ranking
To alleviate the challenge from class-imbalance, we optimize the rank between positive and negative examples. Given a pair of positive and negative examples, an ideal ranking model can rank the positive example above the negative one with a large margin
∀ i, j_+, j_-:  p_{i,j_+} - p_{i,j_-} ≥ γ    (3)
where γ is a non-negative margin. Compared with the objective in Eqn. 1, the ranking model optimizes the relationship between individual positive and negative examples, which is well balanced.
The objective of ranking can be written as
\min_θ \sum_i^N \sum_{j_+}^{n_+} \sum_{j_-}^{n_-} ℓ(p_{i,j_-} - p_{i,j_+} + γ)    (4)
where ℓ(·) can be the hinge loss as
ℓ_{hinge}(z) = [z]_+ = \begin{cases} z & z > 0 \\ 0 & \text{o.w.} \end{cases}
The objective can be interpreted as
\frac{1}{n_+ n_-} \sum_{j_+}^{n_+} \sum_{j_-}^{n_-} ℓ(p_{i,j_-} - p_{i,j_+} + γ) = E_{j_+, j_-}[ℓ(p_{i,j_-} - p_{i,j_+} + γ)]    (5)
It demonstrates that the objective measures the expected ranking loss by uniformly sampling a pair of positive and negative examples. The ranking loss addresses the class-imbalance issue by comparing the rank of each positive example to negative examples. However, it ignores a phenomenon in object detection, where the distribution of negative examples is also imbalanced. Besides, the ranking loss introduces a new challenge, that is, the vast number of pairs. We tackle them in the following subsections.
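The pairwise objective of Eqn. 4 can be sketched as follows; the broadcast over all n_+ × n_- pairs is exactly the efficiency problem mentioned above. The hinge loss is used as ℓ(·), and the margin follows the value γ = 0.5 adopted later in the paper.

import numpy as np

def pairwise_ranking_loss(p_pos, p_neg, gamma=0.5):
    # Hinge loss over every (positive, negative) pair: O(n_+ * n_-) terms per image.
    diffs = p_neg[None, :] - p_pos[:, None] + gamma   # shape (n_+, n_-)
    losses = np.maximum(diffs, 0.0)
    return losses.mean()   # the mean corresponds to the expectation form of Eqn. 5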
Distributional Ranking
As indicated in Eqn. 5, the ranking loss in Eqn. 4 punishes a mis-ranking for a uniformly sampled pair. In detection, most of negative examples can be easily ranked well, that is, a randomly sampled pair will not incur the ranking loss with high probability. Therefore, we propose to optimize the ranking boundary to avoid the trivial solution
\min_θ \sum_i^N ℓ(\max_{j_-} p_{i,j_-} - \min_{j_+} p_{i,j_+} + γ)    (6)
If we can rank the positive example with the lowest score above the negative one with the highest confidence, the whole set of candidates is perfectly ranked. Compared with the conventional ranking loss, this worst-case loss is much more efficient since it reduces the number of pairs from n_+ n_- to 1. Moreover, it clearly eliminates the class-imbalance issue since only a single pair of positive and negative examples is required for each image. However, this formulation is very sensitive to outliers, which can lead to a degraded detection model. To improve the robustness, we first introduce distributions over the positive and negative examples and obtain the expectations
P_{i,+} = \sum_{j_+}^{n_+} q_{i,j_+} p_{i,j_+};  P_{i,-} = \sum_{j_-}^{n_-} q_{i,j_-} p_{i,j_-}
where q_{i,+} ∈ ∆ and q_{i,-} ∈ ∆ denote the distributions over positive and negative examples, respectively. P_{i,+} and P_{i,-} represent the expected ranking scores under the corresponding distributions. ∆ is the simplex ∆ = {q : \sum_j q_j = 1, ∀j, q_j ≥ 0}. When q_{i,+} and q_{i,-} are the uniform distribution, P_{i,+} and P_{i,-} give the expectations under the original distribution.
By deriving the distribution corresponding to the worst-case loss from the original distribution
\hat{P}_{i,+} = \min_{q_{i,+} ∈ ∆} \sum_{j_+}^{n_+} q_{i,j_+} p_{i,j_+};  \hat{P}_{i,-} = \max_{q_{i,-} ∈ ∆} \sum_{j_-}^{n_-} q_{i,j_-} p_{i,j_-}
we can rewrite the problem in Eqn. 6 in the equivalent form
\min_θ \sum_i^N ℓ(\hat{P}_{i,-} - \hat{P}_{i,+} + γ)
which can be considered as ranking the distributions between positive and negative examples in the worst case. It is obvious that the original formulation is not robust due to the fact that the domain of the generated distribution is unconstrained. Consequently, it will concentrate on a single example while ignoring the original distribution. Hence, we improve the robustness of the ranking loss by regularizing the freedom of the derived distribution as
\hat{P}_{i,-} = \max_{q_{i,-} ∈ ∆, Ω(q_{i,-}) ≥ ε_-} \sum_{j_-}^{n_-} q_{i,j_-} p_{i,j_-};  -\hat{P}_{i,+} = \max_{q_{i,+} ∈ ∆, Ω(q_{i,+}) ≥ ε_+} \sum_{j_+}^{n_+} q_{i,j_+} (-p_{i,j_+})
where Ω(·) is a regularizer for the diversity of the distribution that prevents the trivial one-hot solution. It can be different forms of entropy, e.g., Rényi entropy, Shannon entropy, etc. ε_- and ε_+ are constants that control the freedom of the distributions.
To obtain the constrained distribution, we investigate the subproblem
\max_{q_{i,-} ∈ ∆} \sum_{j_-} q_{i,j_-} p_{i,j_-}  s.t.  Ω(q_{i,-}) ≥ ε_-
According to duality theory [1], given ε_-, we can find the parameter λ_- to obtain the optimal q_{i,-} by solving the problem
\max_{q_{i,-} ∈ ∆} \sum_{j_-} q_{i,j_-} p_{i,j_-} + λ_- Ω(q_{i,-})
We observe that the former term is linear in q_{i,-}. Hence, if Ω(·) is strongly concave in q_{i,-}, the problem can be solved efficiently by first-order algorithms [1].
Considering the efficiency, we adopt the Shannon entropy as the regularizer in this work and we can have the closed-form solution as follows.
Proposition 1. For the problem
\max_{q_{i,-} ∈ ∆} \sum_{j_-} q_{i,j_-} p_{i,j_-} + λ_- H(q_{i,-})
we have the closed-form solution
q_{i,j_-} = \frac{1}{Z_-} \exp(\frac{p_{i,j_-}}{λ_-});  Z_- = \sum_{j_-} \exp(\frac{p_{i,j_-}}{λ_-})
Proof. It follows directly from the K.K.T. conditions [1].
Proposition 2. For the problem
\max_{q_{i,+} ∈ ∆} \sum_{j_+} q_{i,j_+} (-p_{i,j_+}) + λ_+ H(q_{i,+})
we have the closed-form solution
q_{i,j_+} = \frac{1}{Z_+} \exp(\frac{-p_{i,j_+}}{λ_+});  Z_+ = \sum_{j_+} \exp(\frac{-p_{i,j_+}}{λ_+})
Remark 1. These propositions show that the harder the example, the larger its weight. Besides, the weight is data dependent and is affected by the data distribution. Fig. 2 illustrates the drifting of the distribution with the proposed strategy: the derived distribution approaches the distribution corresponding to the worst-case loss as λ decreases.
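The closed-form solutions of Propositions 1 and 2 are softmax distributions with temperature λ. A quick numerical sanity check for Proposition 1 is sketched below with illustrative scores: the softmax weights attain a higher entropy-regularized objective than random points on the simplex.

import numpy as np

rng = np.random.default_rng(0)
p = rng.uniform(0.0, 1.0, size=50)   # hypothetical background scores p_{i,j-}
lam = 0.1                            # plays the role of lambda_-

def objective(q):
    # sum_j q_j p_j + lambda * H(q), with Shannon entropy H(q) = -sum_j q_j log q_j
    return np.dot(q, p) - lam * np.sum(q * np.log(q + 1e-12))

q_star = np.exp(p / lam)
q_star /= q_star.sum()                                  # closed form from Proposition 1
random_qs = rng.dirichlet(np.ones_like(p), size=100)    # random feasible distributions
assert all(objective(q_star) >= objective(q) for q in random_qs)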
With the closed-form solutions of the distributions, the expectations can be computed as
\hat{P}_{i,-} = \sum_{j_-}^{n_-} q_{i,j_-} p_{i,j_-} = \sum_{j_-}^{n_-} \frac{1}{Z_-} \exp(\frac{p_{i,j_-}}{λ_-}) p_{i,j_-}    (7)
\hat{P}_{i,+} = \sum_{j_+}^{n_+} q_{i,j_+} p_{i,j_+} = \sum_{j_+}^{n_+} \frac{1}{Z_+} \exp(\frac{-p_{i,j_+}}{λ_+}) p_{i,j_+}
Finally, smoothness is crucial for the convergence of non-convex optimization [7]. So we use a smoothed approximation instead of the original hinge loss as the loss function [25]
ℓ_{smooth}(z) = \frac{1}{L} \log(1 + \exp(Lz))    (8)
where L controls the smoothness of the function. The larger L is, the closer the approximation is to the hinge loss. Fig. 3 compares the hinge loss to its smoothed version in Eqn. 8. Incorporating all of these components, our distributional ranking loss can be defined as
\min_θ L_{DR}(θ) = \sum_i^N ℓ_{smooth}(\hat{P}_{i,-} - \hat{P}_{i,+} + γ)    (9)
where \hat{P}_{i,-} and \hat{P}_{i,+} are given in Eqn. 7 and ℓ_{smooth}(·) is in Eqn. 8. Compared with the conventional ranking loss, we rank the expectations of two distributions. This shrinks the number of pairs to 1, which leads to efficient optimization.
The objective in Eqn. 9 looks complicated but its gradient is easy to compute. The detailed calculation of the gradient can be found in the appendix.
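A minimal NumPy sketch of the forward computation of the DR loss for one image is given below. It follows Eqns. 7-9 and uses the hyperparameters reported later in the paper (γ = 0.5, L = 6, λ_+ = 1/log h and λ_- = 0.1/log h with h = 4); in an actual detector the same computation would be expressed with autograd tensors so that the gradient derived in the appendix is obtained automatically.

import numpy as np

def dr_loss(p_pos, p_neg, gamma=0.5, L=6.0, h=4.0):
    # p_pos, p_neg: sigmoid scores of the foreground / background candidates of a single image.
    lam_pos, lam_neg = 1.0 / np.log(h), 0.1 / np.log(h)

    # Entropy-regularized worst-case distributions (Propositions 1 and 2): softmax weights.
    w_neg = np.exp((p_neg - p_neg.max()) / lam_neg)
    w_neg /= w_neg.sum()
    w_pos = np.exp((p_pos.min() - p_pos) / lam_pos)
    w_pos /= w_pos.sum()

    # Expected scores under the derived distributions (Eqn. 7).
    P_neg = np.dot(w_neg, p_neg)
    P_pos = np.dot(w_pos, p_pos)

    # Smoothed hinge on the ranking margin (Eqns. 8 and 9), computed stably with logaddexp.
    z = P_neg - P_pos + gamma
    return np.logaddexp(0.0, L * z) / L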
If we optimize the DR loss by the standard stochastic gradient descent (SGD) with mini-batch as
θ_{t+1} = θ_t - η \frac{1}{m} \sum_{s=1}^{m} ∇ℓ_t^s
we can show that it converges as stated in the following theorem; the detailed proof is deferred to the appendix.
Theorem 1. Let θ_t denote the model obtained from the t-th iteration of the SGD optimizer with mini-batch size m. When \frac{\sqrt{2mL(θ_0)}}{δ\sqrt{LT}} ≤ \frac{1}{L}, if we assume the variance of the gradient is bounded as ∀s, \|∇ℓ_t^s - ∇L_t\|_F ≤ δ, and set the learning rate as η = \frac{\sqrt{2mL(θ_0)}}{δ\sqrt{LT}}, we have
\frac{1}{T} \sum_t \|∇L(θ_t)\|_F^2 ≤ 2δ \sqrt{\frac{2L}{mT} L(θ_0)}
Remark 2. Theorem 1 implies that the learning rate depends on the mini-batch size and the number of iterations as η = O(\sqrt{m/T}) and that the convergence rate is O(1/\sqrt{mT}). Let η_0, m_0 and T_0 denote an initial setting for training. If we increase the mini-batch size to m = αm_0 and shrink the number of iterations to T = T_0/α where α > 1, the convergence rate remains the same. However, the learning rate has to be increased as η = O(\sqrt{m/T}) = αη_0 when η ≤ 1/L, which is consistent with the observation in [10].
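The scaling behaviour in Remark 2 can be checked directly from the learning-rate formula of Theorem 1; the constants L(θ_0), δ and the smoothness parameter below are arbitrary placeholders, while the initial batch size and iteration count echo the training setting used later (16 and 90k).

import numpy as np

def theorem_lr(m, T, loss0=1.0, delta=1.0, smooth_L=6.0):
    # eta = sqrt(2 m L(theta_0)) / (delta * sqrt(L T)) from Theorem 1.
    return np.sqrt(2.0 * m * loss0) / (delta * np.sqrt(smooth_L * T))

alpha, m0, T0 = 4, 16, 90_000
eta0 = theorem_lr(m0, T0)
eta_scaled = theorem_lr(alpha * m0, T0 / alpha)
assert np.isclose(eta_scaled, alpha * eta0)  # the learning rate grows linearly with alpha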
Remark 3. Theorem 1 also indicates that the convergence rate depends on O(\sqrt{L}). Therefore, L trades off the approximation error against the convergence rate. When L is large, the smoothed loss simulates the hinge loss better while the convergence can become slower.
Recover Classification from Ranking
In detection, we have to identify foreground from background. Therefore, the results from ranking have to be converted to classification decisions. A straightforward way is to set a threshold on the ranking score. However, the scores from different pairs can be inconsistent for classification. For example, given two pairs as
p_- = 0.1, p_+ = 0.4;  p_- = 0.6, p_+ = 0.9
we observe that both of them are perfectly ranked, but it is hard to set a single threshold that separates positive examples from negative ones simultaneously. To make the ranking result meaningful for classification, we enforce a large margin in the constraint of Eqn. 3 as γ = 0.5. Therefore, the constraint becomes
∀ i, j_+, j_-:  p_{i,j_+} - p_{i,j_-} ≥ 0.5
Since probabilities lie in [0, 1], this implies
∀ i, j_+:  p_{i,j_+} ≥ 0.5;  ∀ i, j_-:  p_{i,j_-} ≤ 0.5
which recovers the standard criterion for classification.
Bounding Box Regression
Besides classification, regression is also important for detection to refine the bounding box. Most detectors apply the smoothed L1 loss to optimize the bounding box
ℓ_{reg}(x) = \begin{cases} 0.5x^2/β & |x| ≤ β \\ |x| - 0.5β & |x| > β \end{cases}    (10)
It smoothes the L1 loss with an L2 loss in the interval [-β, β] and guarantees that the whole loss function is smooth. This is reasonable since smoothness is important for convergence, as indicated in Theorem 1. However, it may result in slow optimization within the L2 interval. Inspired by the interior-point method [1], which gradually approximates the non-smooth domain by increasing the weight of the corresponding barrier function at different stages, we obtain β from a decreasing function to reduce the gap between the L1 and L2 losses. As suggested in the interior-point method, the current objective should be solved to optimum before changing the weight of the barrier function. We decay the value of β in a stepwise manner. Specifically, we compute β at the t-th iteration as
β_t = β_0 - α(t % K)
where α is a constant and K denotes the width of a step. Combining the regression loss, the objective of training the detector becomes
\min_θ \sum_i^N τ ℓ_{smooth}(\hat{P}_{i,-} - \hat{P}_{i,+} + γ) + ℓ_{reg}(v_i; β_t)
where τ balances the weights between the classification and regression losses.
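The regression loss and a stepwise schedule for β can be sketched as follows. The smoothed L1 follows Eqn. 10; the schedule keeps β constant within each K-iteration step and then lowers it, moving from the fixed value 0.11 used in RetinaNet toward the final value 0.01, with K = 10k as in the ablation study. The per-step decrement alpha and the floor-division reading of the stepwise rule are assumptions for illustration.

import numpy as np

def smooth_l1(x, beta):
    # Smoothed L1 loss of Eqn. 10: quadratic inside [-beta, beta], linear outside.
    a = np.abs(x)
    return np.where(a <= beta, 0.5 * a ** 2 / beta, a - 0.5 * beta)

def beta_schedule(t, beta0=0.11, beta_min=0.01, alpha=0.0125, K=10_000):
    # Stepwise decay of beta over training iterations t; alpha is a hypothetical decrement.
    return max(beta_min, beta0 - alpha * (t // K))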
Experiments
Implementation Details
We evaluate the proposed losses on the COCO 2017 data set [16], which contains about 118k images for training, 5k images for validation, and 40k images for testing. To focus on the comparison of loss functions, we employ the structure of RetinaNet [15] as the backbone and only substitute the corresponding loss functions. For fair comparison, we make the necessary modifications in the official codebase of RetinaNet, which is released in Detectron. Besides, we train the model with the same setting as RetinaNet. Specifically, the model is learned with SGD on 8 GPUs and the mini-batch size is set to 16, where each GPU holds 2 images at each iteration. Most experiments are trained for 90k iterations and this schedule is denoted as "1×". The initial learning rate is 0.01 and is decayed by a factor of 10 after 60k and then 80k iterations. For anchor density, we apply the same setting as in [15], where each location has 3 scales and 3 aspect ratios. The standard COCO evaluation criterion is used to compare the performance of different methods.
Since RetinaNet lacks the optimization of the relationship between the positive and negative distributions, it has to initialize the output probability of the classifier at 0.01 to fit the distribution of the background. In contrast, we initialize the probability of the sigmoid function at 0.5, which is more reasonable for a binary classification scenario without any prior knowledge. This also verifies that the proposed DR loss can handle class imbalance better.
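The two initial output probabilities discussed above correspond to different initial biases of the sigmoid classifier. The relation is plain sigmoid algebra and is sketched below; it is not code from either implementation.

import numpy as np

def bias_for_prior(pi):
    # Choose the classifier bias b such that sigmoid(b) = pi when the weights start near zero.
    return float(np.log(pi / (1.0 - pi)))

print(bias_for_prior(0.01))  # about -4.6: prior biased towards background, as in RetinaNet
print(bias_for_prior(0.5))   # 0.0: the neutral initialization used with DR loss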
In Eqn. 7, we compute the constrained distributions over positive and negative examples with λ_+ and λ_-, respectively. To reduce the number of parameters, we fix the ratio between λ_+ and λ_- as 1 : 0.1 and tune the scale as
λ_+ = 1/\log(h);  λ_- = 0.1/\log(h)
It is easy to show that this strategy is equivalent to fixing λ_+ and λ_- as 1 and 0.1, and changing the base of the entropy regularizer to
H(q) = -\sum_j q_j \log_h q_j
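The equivalence stated above can be verified numerically: λ = c/log(h) with the natural-log entropy produces exactly the same weights as λ = c with the base-h entropy, since both merely rescale the temperature of the resulting softmax. The scores below are illustrative.

import numpy as np

rng = np.random.default_rng(1)
p, h, c = rng.uniform(size=20), 4.0, 0.1       # c plays the role of the fixed lambda_- = 0.1

q_natural = np.exp(p * np.log(h) / c)          # lambda = c / log(h), natural-log entropy
q_natural /= q_natural.sum()
q_base_h = h ** (p / c)                        # lambda = c, base-h entropy regularizer
q_base_h /= q_base_h.sum()
assert np.allclose(q_natural, q_base_h)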
Note that RetinaNet applies Feature Pyramid Network (FPN) [14] to obtain multiple scale features. To compute DR loss in one image, we collect candidates from multiple pyramid levels and obtain a single distribution for foreground and background, respectively.
Effect of Parameters
First, we conduct ablation experiments to evaluate the effect of several parameters on the validation set. All experiments in this subsection are implemented with a single image scale of 800 for training and testing. ResNet-101 is applied as the backbone for comparison. Only horizontal flipping is adopted as the data augmentation in this subsection. Effect of L: L controls the smoothness of the loss function in Eqn. 8. We compare the model with different L in Table 1. Note that since L also changes the function value, we adjust the weight of the classification loss τ accordingly. The base of the entropy regularizer is fixed at h = 4. We observe that the loss function is quite stable across different smoothness values. Besides, a larger L results in a smaller function value, as shown in Fig. 3, which suggests increasing the weight of the classification loss τ to balance the losses. We keep L = 6 and τ = 5 in the remaining experiments. Effect of h: Next, we evaluate the effect of h. h changes the scale of λ_- and λ_+ in the standard entropy regularizer. As illustrated in Fig. 2, a large h will push the generated distribution to the extreme case while a small h will keep the derived distribution close to the original distribution. We vary h over a range and summarize the results in Table 2. The performance is not sensitive to h within a reasonable range, and we fix it to 4 in the following experiments. Effect of β: Finally, we examine different strategies for changing β in the smoothed L1 loss. In the implementation of RetinaNet, β is fixed to 0.11. We compare three strategies that decay β to 0.01, which are illustrated in Fig. 4. The results are shown in Table 3. First, it is evident that all strategies with a decayed β improve over the detector with a fixed β. Then, the stepwise decay with K = 10k outperforms linear decay, which verifies that the objective should be optimized sufficiently before moving to the next decay step. We adopt the stepwise decay in the following subsections.
Effect of DR Loss:
To illustrate the effect of DR loss, we collect the confidence scores of examples from all images in the validation set and compare the empirical probability densities in Fig. 6. We include the cross entropy loss and the focal loss in the comparison. The model with the cross entropy loss is trained by ourselves, while the model with the focal loss is downloaded directly from the official model zoo with the same configuration as for DR loss. First, we observe that most examples have extremely low confidence with the cross entropy loss. This is because the number of negative examples overwhelms that of positive ones, so the model classifies most examples as negative to obtain a small loss, as demonstrated in Eqn. 2. Second, the focal loss is better than the cross entropy loss in that it drifts the distribution of the foreground. However, the expectation of the foreground distribution is still close to that of the background, and it has to adopt a small threshold of 0.05 to identify positive examples from negative ones. Compared to the cross entropy and focal losses, DR loss optimizes the foreground distribution significantly. By optimizing the ranking loss with a large margin, the expectation of the foreground examples is larger than 0.5 while that of the background is smaller than 0.1. This confirms that DR loss can address the imbalance between classes well. Consequently, DR loss allows us to set a large threshold for classification. We set the threshold as 0.
Performance with Different Scales
With the parameters suggested by the ablation studies, we train the model with different scales and backbones to show the robustness of the proposed losses. We adopt ResNet-50 and ResNet-101 as backbones in the comparison. Training applies only horizontal flipping as the data augmentation. Table 4 compares the performance at different scales to that of RetinaNet. We let "Dr.Retina" denote the RetinaNet trained with the proposed DR loss and the decaying strategy for β in the regression loss.
Comparison with State-of-the-Art
Finally, we compare Dr.Retina to state-of-the-art two-stage and one-stage detectors on the COCO test set. We follow the setting in [15] to increase the number of training iterations to 2×, i.e., 180k iterations, and apply scale jitter in [640, 800] as additional data augmentation for training. Note that we still use a single image scale and a single crop for testing, as above.
Conclusion
In this work, we propose the distributional ranking loss to address the imbalance challenge in one-stage object detection. It first converts the original classification problem to a ranking problem, which balances the classes of foreground and background. Furthermore, we propose to rank the expectations of the derived distributions in lieu of the original examples to focus on the hard examples, which balances the distribution of the background. Besides, we improve the regression loss by developing a strategy to optimize the L1 loss better. Experiments on COCO verify the effectiveness of the proposed losses. Since the RPN in two-stage detectors also suffers from the imbalance issue, applying DR loss to it is a direction for future work.
A. Gradient of DR Loss
We define the DR loss as in Eqn. 9. It looks complicated but its gradient is easy to compute. Here we give the detailed form of the gradient. For p_{i,j_-}, we have
\frac{∂ℓ}{∂p_{i,j_-}} = \frac{1}{1+\exp(-Lz)} \frac{∂z}{∂p_{i,j_-}} = \frac{q_{i,j_-}}{1+\exp(-Lz)} \Big(1 + \frac{p_{i,j_-}}{λ_-} - \frac{1}{λ_-} \sum_{j_-} q_{i,j_-} p_{i,j_-}\Big)
where z = \hat{P}_- - \hat{P}_+ + γ.
B. Proof of Theorem 1
If we assume that the variance is bounded as ∀s, \|∇ℓ_t^s - ∇L_t\|_F ≤ δ, then we have
E[L(θ_{t+1})] ≤ E[L(θ_t) - η\|∇L_t\|_F^2 + \frac{Lη^2}{2} \|\frac{1}{m}\sum_{s=1}^{m} ∇ℓ_t^s - ∇L_t + ∇L_t\|_F^2] ≤ E[L(θ_t) - η\|∇L_t\|_F^2 + \frac{Lη^2}{2} (\frac{δ^2}{m} + \|∇L_t\|_F^2)]
Therefore, by assuming η ≤ 1/L and summing over t from 1 to T, we have
\sum_t \|∇L(θ_t)\|_F^2 ≤ \frac{2L(θ_0)}{η} + \frac{LηTδ^2}{m}
We finish the proof by letting
η = \frac{\sqrt{2mL(θ_0)}}{δ\sqrt{LT}}
C. Experiments
Effect of DR Loss: We illustrate the empirical PDF of foreground and background from DR loss in Fig. 6. Fig. 6 (a) shows the original density of foreground and background.
To make the results more explicit, we decay the density of background by a factor of 10 and demonstrate the result in Fig. 6 (b). It is obvious that DR loss can separate the foreground and background with a large margin in the imbalance scenario. | 4,501 |
1907.10156 | 2963636228 | Most object detection algorithms can be categorized into two classes: two-stage detectors and one-stage detectors. For two-stage detectors, a region proposal phase can filter massive background candidates in the first stage and it makes the classification task more balanced in the second stage. Recently, one-stage detectors have attracted much attention due to their simple yet effective architecture. Different from two-stage detectors, one-stage detectors have to identify foreground objects from all candidates in a single stage. This architecture is efficient but can suffer from the imbalance issue with respect to two aspects: the imbalance between classes and that in the distribution of background, where only a few candidates are hard to identify. In this work, we propose to address the challenge by developing the distributional ranking (DR) loss. First, we convert the classification problem to a ranking problem to alleviate the class-imbalance problem. Then, we propose to rank the distribution of foreground candidates above that of background ones in the constrained worst-case scenario. This strategy not only handles the imbalance in background candidates but also improves the efficiency of the ranking algorithm. Besides the classification task, we also improve the regression loss by gradually approaching the @math loss as suggested in interior-point methods. To evaluate the proposed losses, we replace the corresponding losses in RetinaNet, which reports the state-of-the-art performance as a one-stage detector. With ResNet-101 as the backbone, our method can improve the mAP on the COCO data set from @math to @math by only changing the loss functions, which verifies the effectiveness of the proposed losses. | One-stage detectors are also developed for efficiency @cite_21 @cite_15 @cite_22 . Since there is no region proposal phase to sample background candidates, one-stage detectors can suffer from the imbalance issue both between classes and in the background distribution. To alleviate the challenge, SSD @cite_21 adopts hard example mining, which only keeps the hard background candidates for training. Recently, RetinaNet @cite_13 is proposed to address the problem with the focal loss. Unlike SSD, it keeps all background candidates but re-weights them such that hard examples are assigned large weights. The focal loss improves the performance of detection significantly, but the imbalance problem in detection is still not explored sufficiently. In this work, we develop the distributional ranking loss that ranks the distributions of foreground and background. It can alleviate the imbalance issue and capture the data distribution better with a data-dependent mechanism. | {
"abstract": [
"We present YOLO, a new approach to object detection. Prior work on object detection repurposes classifiers to perform detection. Instead, we frame object detection as a regression problem to spatially separated bounding boxes and associated class probabilities. A single neural network predicts bounding boxes and class probabilities directly from full images in one evaluation. Since the whole detection pipeline is a single network, it can be optimized end-to-end directly on detection performance. Our unified architecture is extremely fast. Our base YOLO model processes images in real-time at 45 frames per second. A smaller version of the network, Fast YOLO, processes an astounding 155 frames per second while still achieving double the mAP of other real-time detectors. Compared to state-of-the-art detection systems, YOLO makes more localization errors but is less likely to predict false positives on background. Finally, YOLO learns very general representations of objects. It outperforms other detection methods, including DPM and R-CNN, when generalizing from natural images to other domains like artwork.",
"We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For (300 300 ) input, SSD achieves 74.3 mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for (512 512 ) input, SSD achieves 76.9 mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https: github.com weiliu89 caffe tree ssd.",
"The highest accuracy object detectors to date are based on a two-stage approach popularized by R-CNN, where a classifier is applied to a sparse set of candidate object locations. In contrast, one-stage detectors that are applied over a regular, dense sampling of possible object locations have the potential to be faster and simpler, but have trailed the accuracy of two-stage detectors thus far. In this paper, we investigate why this is the case. We discover that the extreme foreground-background class imbalance encountered during training of dense detectors is the central cause. We propose to address this class imbalance by reshaping the standard cross entropy loss such that it down-weights the loss assigned to well-classified examples. Our novel Focal Loss focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training. To evaluate the effectiveness of our loss, we design and train a simple dense detector we call RetinaNet. Our results show that when trained with the focal loss, RetinaNet is able to match the speed of previous one-stage detectors while surpassing the accuracy of all existing state-of-the-art two-stage detectors.",
"Abstract: We present an integrated framework for using Convolutional Networks for classification, localization and detection. We show how a multiscale and sliding window approach can be efficiently implemented within a ConvNet. We also introduce a novel deep learning approach to localization by learning to predict object boundaries. Bounding boxes are then accumulated rather than suppressed in order to increase detection confidence. We show that different tasks can be learned simultaneously using a single shared network. This integrated framework is the winner of the localization task of the ImageNet Large Scale Visual Recognition Challenge 2013 (ILSVRC2013) and obtained very competitive results for the detection and classifications tasks. In post-competition work, we establish a new state of the art for the detection task. Finally, we release a feature extractor from our best model called OverFeat."
],
"cite_N": [
"@cite_15",
"@cite_21",
"@cite_13",
"@cite_22"
],
"mid": [
"2963037989",
"2193145675",
"2963351448",
"2963542991"
]
} | DR Loss: Improving Object Detection by Distributional Ranking | The performance of object detection has been improved dramatically with the development of deep neural networks in the past few years. Most detection algorithms fall into two categories: two-stage detectors [3,11,12,14] and one-stage detectors [6,15,17,20]. For the two-stage scheme, the procedure of the algorithms can be divided into two parts. In the first stage, a region proposal method will filter most of the background candidate bounding boxes and keep only a small set of candidates. In the following stage, these candidates are classified as foreground classes or background and the bounding box is further refined by optimizing a regression loss. Two-stage detectors demonstrate superior performance on real-world data sets while the efficiency can be an issue in practice, especially for devices with limited computing resources, e.g., smart phones, cameras, etc. Therefore, one-stage detectors are developed for efficient detection. Different from two-stage detectors, one-stage algorithms consist of a single phase and have to identify foreground objects from all candidates directly. The structure of a one-stage detector is straightforward and efficient. However, a one-stage detector may suffer from the imbalance problem that can reside in the following two aspects. First, the numbers of candidates between classes are imbalanced. Without a region proposal phase, the number of background candidates can easily overwhelm that of foreground ones. Second, the distribution of background candidates is imbalanced. Most of them can be easily separated from foreground objects while only a few of them are hard to classify.
To alleviate the imbalance problem, SSD [17] adopts hard negative mining, which keeps a small set of background candidates with the highest loss. By eliminating simple background candidates, the strategy balances the number of candidates between classes and the distribution of background simultaneously. However, some important classification information from background can be lost, and thus the detection performance can degrade. RetinaNet [15] proposes to keep all background candidates but assign different weights for loss functions. The weighted cross entropy loss is called focal loss. It makes the algorithm focus on the hard candidates while reserving the information from all candidates. This strategy improves the performance of one-stage detectors significantly. Despite the success of focal loss, it re-weights classification losses in a heuristic way and can be insufficient to address the class-imbalance problem. Besides, the design of focal loss is data independent and lacks the exploration of the data distribution, which is essential to balance the distribution of background candidates.
In this work, we propose a data dependent ranking loss to handle the imbalance challenge. First, to alleviate the effect of the class-imbalance problem, we convert the classification problem to a ranking problem, which optimizes ranks of pairs. Since each pair consists of a foreground candidate and a background candidate, it is well balanced. Moreover, considering the imbalance in background candidates, we introduce the distributional ranking (DR) loss to rank the constrained distribution of foreground above that of background candidates. By re-weighting the candidates to derive the distribution corresponding to the worst-case loss, the loss can focus on the decision boundary between foreground and background distributions. Besides, we rank the expectation of distributions in lieu of original examples, which reduces the number of pairs in ranking and improves the efficiency. Compared with the re-weighting strategy in focal loss, that for DR loss is data dependent and can balance the distribution of background better. Fig. 1 illustrates the proposed DR loss. Besides the classification task, the regression is also important for detection to refine the bounding boxes of objects. The smoothed L 1 loss is prevalently adopted to approximate the L 1 loss in detection algorithms. We propose to improve the regression loss by gradually approaching the L 1 loss for better approximation, where the similar trick is also applied in interior-point methods [1].
We conduct the experiments on the COCO [16] data set to demonstrate the proposed losses. Since RetinaNet reports the state-of-the-art performance among one-stage detectors, we replace the corresponding losses in RetinaNet with our proposed losses while the other components are retained. For fair comparison, we implement our algorithm in Detectron (https://github.com/facebookresearch/Detectron), which is the official codebase of RetinaNet. With ResNet-101 [12] as the backbone, optimizing our loss functions can boost the mAP of RetinaNet from 39.1% to 41.1%, which confirms the effectiveness of the proposed losses.
The rest of this paper is organized as follows. Section 2 reviews the related work in object detection. Section 3 describes the details of the proposed DR loss and regression loss. Section 4 compares the proposed losses to others on the COCO detection task. Finally, Section 5 concludes this work with future directions.
DR Loss
Given a set of candidate bounding boxes from an image, a detector has to identify the foreground objects from background ones with a classification model. Let θ denote a classifier and it can be learned by optimizing the problem
\min_θ \sum_i^N \sum_{j,k} ℓ(p_{i,j,k})    (1)
where N is the total number of images. In this work, we employ the sigmoid function to predict the probability for each example. p_{i,j,k} is determined by θ and indicates the estimated probability that the j-th candidate in the i-th image is from the k-th class. ℓ(·) is the loss function. In most detectors, the classifier is learned by optimizing the cross entropy loss. For the binary classification problem, it can be written as
ℓ_{CE}(p) = \begin{cases} -\log(p) & y = 1 \\ -\log(1-p) & y = 0 \end{cases}
where y ∈ {0, 1} is the label.
The objective in Eqn. 1 is conventional for object detection and it suffers from the class-imbalance problem. This can be demonstrated by rewriting the problem in the equivalent form
\min_θ \sum_i^N \Big( \sum_{j_+}^{n_+} ℓ(p_{i,j_+}) + \sum_{j_-}^{n_-} ℓ(p_{i,j_-}) \Big)    (2)
where j_+ and j_- denote the positive (i.e., foreground) and negative (i.e., background) examples, respectively. n_+ and n_- are the corresponding numbers of examples. When n_- ≫ n_+, the accumulated loss from the latter term will dominate. This issue comes from the fact that the losses for positive and negative examples are separated, so the contribution of positive examples is overwhelmed by that of negative examples. A heuristic way to handle the problem is to emphasize positive examples by increasing the weights of the corresponding losses. In this work, we aim to address the problem in a more fundamental way.
Ranking
To alleviate the challenge from class-imbalance, we optimize the rank between positive and negative examples. Given a pair of positive and negative examples, an ideal ranking model can rank the positive example above the negative one with a large margin
∀ i, j_+, j_-:  p_{i,j_+} - p_{i,j_-} ≥ γ    (3)
where γ is a non-negative margin. Compared with the objective in Eqn. 1, the ranking model optimizes the relationship between individual positive and negative examples, which is well balanced.
The objective of ranking can be written as
\min_θ \sum_i^N \sum_{j_+}^{n_+} \sum_{j_-}^{n_-} ℓ(p_{i,j_-} - p_{i,j_+} + γ)    (4)
where ℓ(·) can be the hinge loss as
ℓ_{hinge}(z) = [z]_+ = \begin{cases} z & z > 0 \\ 0 & \text{o.w.} \end{cases}
The objective can be interpreted as
\frac{1}{n_+ n_-} \sum_{j_+}^{n_+} \sum_{j_-}^{n_-} ℓ(p_{i,j_-} - p_{i,j_+} + γ) = E_{j_+, j_-}[ℓ(p_{i,j_-} - p_{i,j_+} + γ)]    (5)
It demonstrates that the objective measures the expected ranking loss by uniformly sampling a pair of positive and negative examples. The ranking loss addresses the class-imbalance issue by comparing the rank of each positive example to negative examples. However, it ignores a phenomenon in object detection, where the distribution of negative examples is also imbalanced. Besides, the ranking loss introduces a new challenge, that is, the vast number of pairs. We tackle them in the following subsections.
Distributional Ranking
As indicated in Eqn. 5, the ranking loss in Eqn. 4 punishes a mis-ranking for a uniformly sampled pair. In detection, most of negative examples can be easily ranked well, that is, a randomly sampled pair will not incur the ranking loss with high probability. Therefore, we propose to optimize the ranking boundary to avoid the trivial solution
\min_θ \sum_i^N ℓ(\max_{j_-} p_{i,j_-} - \min_{j_+} p_{i,j_+} + γ)    (6)
If we can rank the positive example with the lowest score above the negative one with the highest confidence, the whole set of candidates is perfectly ranked. Compared with the conventional ranking loss, this worst-case loss is much more efficient since it reduces the number of pairs from n_+ n_- to 1. Moreover, it clearly eliminates the class-imbalance issue since only a single pair of positive and negative examples is required for each image. However, this formulation is very sensitive to outliers, which can lead to a degraded detection model. To improve the robustness, we first introduce distributions over the positive and negative examples and obtain the expectations
P_{i,+} = \sum_{j_+}^{n_+} q_{i,j_+} p_{i,j_+};  P_{i,-} = \sum_{j_-}^{n_-} q_{i,j_-} p_{i,j_-}
where q_{i,+} ∈ ∆ and q_{i,-} ∈ ∆ denote the distributions over positive and negative examples, respectively. P_{i,+} and P_{i,-} represent the expected ranking scores under the corresponding distributions. ∆ is the simplex ∆ = {q : \sum_j q_j = 1, ∀j, q_j ≥ 0}. When q_{i,+} and q_{i,-} are the uniform distribution, P_{i,+} and P_{i,-} give the expectations under the original distribution.
By deriving the distribution corresponding to the worst-case loss from the original distribution
\hat{P}_{i,+} = \min_{q_{i,+} ∈ ∆} \sum_{j_+}^{n_+} q_{i,j_+} p_{i,j_+};  \hat{P}_{i,-} = \max_{q_{i,-} ∈ ∆} \sum_{j_-}^{n_-} q_{i,j_-} p_{i,j_-}
we can rewrite the problem in Eqn. 6 in the equivalent form
\min_θ \sum_i^N ℓ(\hat{P}_{i,-} - \hat{P}_{i,+} + γ)
which can be considered as ranking the distributions between positive and negative examples in the worst case. It is obvious that the original formulation is not robust due to the fact that the domain of the generated distribution is unconstrained. Consequently, it will concentrate on a single example while ignoring the original distribution. Hence, we improve the robustness of the ranking loss by regularizing the freedom of the derived distribution as
\hat{P}_{i,-} = \max_{q_{i,-} ∈ ∆, Ω(q_{i,-}) ≥ ε_-} \sum_{j_-}^{n_-} q_{i,j_-} p_{i,j_-};  -\hat{P}_{i,+} = \max_{q_{i,+} ∈ ∆, Ω(q_{i,+}) ≥ ε_+} \sum_{j_+}^{n_+} q_{i,j_+} (-p_{i,j_+})
where Ω(·) is a regularizer for the diversity of the distribution that prevents the trivial one-hot solution. It can be different forms of entropy, e.g., Rényi entropy, Shannon entropy, etc. ε_- and ε_+ are constants that control the freedom of the distributions.
To obtain the constrained distribution, we investigate the subproblem
max qi,−∈∆ j− q i,j− p i,j− s.t. Ω(q i− ) ≥ −
According to the dual theory [1], given − , we can find the parameter λ − to obtain the optimal q i,− by solving the problem
max qi,−∈∆ j− q i,j− p i,j− + λ − Ω(q i,− )
We observe that the former term is linear in $q_{i,-}$. Hence, if $\Omega(\cdot)$ is strongly concave in $q_{i,-}$, the problem can be solved efficiently by first-order algorithms [1].
Considering the efficiency, we adopt the Shannon entropy as the regularizer in this work and we can have the closed-form solution as follows.
Proposition 1. For the problem $\max_{q_{i,-}\in\Delta} \sum_{j_-} q_{i,j_-} p_{i,j_-} + \lambda_- H(q_{i,-})$
we have the closed-form solution as
$q_{i,j_-} = \frac{1}{Z_-}\exp\left(\frac{p_{i,j_-}}{\lambda_-}\right); \quad Z_- = \sum_{j_-}\exp\left(\frac{p_{i,j_-}}{\lambda_-}\right)$
Proof. It follows directly from the K.K.T. conditions [1].
Proposition 2. For the problem $\max_{q_{i,+}\in\Delta} \sum_{j_+} q_{i,j_+}(-p_{i,j_+}) + \lambda_+ H(q_{i,+})$
we have the closed-form solution as
$q_{i,j_+} = \frac{1}{Z_+}\exp\left(\frac{-p_{i,j_+}}{\lambda_+}\right); \quad Z_+ = \sum_{j_+}\exp\left(\frac{-p_{i,j_+}}{\lambda_+}\right)$
Remark 1 These Propositions show that the harder the example, the larger its weight. Besides, the weight is data-dependent and is affected by the data distribution. Fig. 2 illustrates the drifting of the distribution with the proposed strategy. The derived distribution approaches the distribution corresponding to the worst-case loss as λ decreases.
With the closed-form solutions of the distributions, the expectations of the distributions can be computed as
$\hat{P}_{i,-} = \sum_{j_-}^{n_-} q_{i,j_-} p_{i,j_-} = \sum_{j_-}^{n_-} \frac{1}{Z_-}\exp\left(\frac{p_{i,j_-}}{\lambda_-}\right) p_{i,j_-} \quad (7)$
$\hat{P}_{i,+} = \sum_{j_+}^{n_+} q_{i,j_+} p_{i,j_+} = \sum_{j_+}^{n_+} \frac{1}{Z_+}\exp\left(\frac{-p_{i,j_+}}{\lambda_+}\right) p_{i,j_+}$
Finally, smoothness is crucial for the convergence of non-convex optimization [7]. So we use a smoothed approximation instead of the original hinge loss as the loss function [25]
$\ell_{smooth}(z) = \frac{1}{L}\log(1 + \exp(Lz)) \quad (8)$
where L controls the smoothness of the function. The larger L is, the closer the approximation is to the hinge loss. Fig. 3 compares the hinge loss to its smoothed version in Eqn. 8. Incorporating all of these components, our distributional ranking loss can be defined as
$\min_\theta L_{DR}(\theta) = \sum_i^N \ell_{smooth}\left(\hat{P}_{i,-} - \hat{P}_{i,+} + \gamma\right) \quad (9)$
where $\hat{P}_{i,-}$ and $\hat{P}_{i,+}$ are given in Eqn. 7 and $\ell_{smooth}(\cdot)$ is defined in Eqn. 8. Compared with the conventional ranking loss, we rank the expectations of the two distributions. This shrinks the number of pairs to 1, which leads to efficient optimization.
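For illustration, the loss in Eqn. 9 can be written compactly with a softmax. The following is a rough PyTorch sketch, not the authors' Detectron implementation: `pos_scores` and `neg_scores` are assumed to be the sigmoid outputs for the foreground and background candidates of a single image, and the default parameters follow the values reported later in the experiments (h = 4, γ = 0.5, L = 6, λ ratio 1 : 0.1).

```python
import math
import torch
import torch.nn.functional as F

def dr_loss(pos_scores, neg_scores, h=4.0, gamma=0.5, L=6.0):
    """Sketch of the DR loss for one image (Eqns. 7-9); parameter defaults are assumptions."""
    lam_pos = 1.0 / math.log(h)   # lambda_+ = 1/log(h)
    lam_neg = 0.1 / math.log(h)   # lambda_- = 0.1/log(h)
    q_neg = F.softmax(neg_scores / lam_neg, dim=0)    # Prop. 1: harder negatives get larger weight
    q_pos = F.softmax(-pos_scores / lam_pos, dim=0)   # Prop. 2: harder positives get larger weight
    P_neg = (q_neg * neg_scores).sum()                # Eqn. 7: expectations under derived distributions
    P_pos = (q_pos * pos_scores).sum()
    z = P_neg - P_pos + gamma
    return F.softplus(L * z) / L                      # Eqns. 8-9: smoothed hinge on the margin
```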
The objective in Eqn. 9 looks complicated but its gradient is easy to compute. The detailed calculation of the gradient can be found in the appendix.
If we optimize the DR loss by the standard stochastic gradient descent (SGD) with mini-batch as
$\theta_{t+1} = \theta_t - \eta \frac{1}{m}\sum_{s=1}^{m} \nabla_t^s$
we can show that it converges as stated in the following theorem; the detailed proof is deferred to the appendix.
Theorem 1. Let $\theta_t$ denote the model obtained at the t-th iteration of SGD with mini-batch size m. Assume the variance of the gradient is bounded as $\forall s,\ \|\nabla_t^s - \nabla L_t\|_F \leq \delta$ and set the learning rate as $\eta = \frac{\sqrt{2mL(\theta_0)}}{\delta\sqrt{LT}}$. When $\frac{\sqrt{2mL(\theta_0)}}{\delta\sqrt{LT}} \leq \frac{1}{L}$, we have
$\frac{1}{T}\sum_t \|\nabla L(\theta_t)\|_F^2 \leq 2\delta\sqrt{\frac{2L}{mT}L(\theta_0)}$
Remark 2 Theorem 1 implies that the learning rate depends on the mini-batch size and the number of iterations as $\eta = O(\sqrt{m/T})$ and that the convergence rate is $O(1/\sqrt{mT})$. We let $\eta_0$, $m_0$ and $T_0$ denote an initial setting for training. If we increase the mini-batch size as $m = \alpha m_0$ and shrink the number of iterations as $T = T_0/\alpha$ where $\alpha > 1$, the convergence rate remains the same. However, the learning rate has to be increased as $\eta = O(\sqrt{m/T}) = \alpha\eta_0$ when $\eta \leq \frac{1}{L}$, which is consistent with the observation in [10].
Remark 3 Theorem 1 also indicates that the convergence rate depends on $O(\sqrt{L})$. Therefore, L trades off between the approximation error and the convergence rate. When L is large, the smoothed loss simulates the hinge loss better, while the convergence can become slow.
Recover Classification from Ranking
In detection, we have to identify foreground from background. Therefore, the results from ranking have to be converted to classification. A straightforward way is to set a threshold on the ranking score. However, the scores from different pairs can be inconsistent for classification. For example, given two pairs as
$p_- = 0.1,\ p_+ = 0.4; \qquad p_- = 0.6,\ p_+ = 0.9$
we observe that both of them are perfectly ranked, but it is hard to set a single threshold that separates positive examples from negative ones in both pairs simultaneously. To make the ranking result meaningful for classification, we enforce a large margin in the constraint of Eqn. 3 as γ = 0.5. Therefore, the constraint becomes
$\forall i, j_+, j_- \quad p_{i,j_+} - p_{i,j_-} \geq 0.5$
Due to the non-negativity and boundedness of probabilities, this implies $\forall i, j_+\ p_{i,j_+} > 0.5$ and $\forall i, j_-\ p_{i,j_-} \leq 0.5$, which recovers the standard criterion for classification.
Bounding Box Regression
Besides classification, regression is also important for detection to refine the bounding box. Most detectors apply the smoothed $L_1$ loss to optimize the bounding box
$\ell_{reg}(x) = \begin{cases} 0.5x^2/\beta & |x| \leq \beta \\ |x| - 0.5\beta & |x| > \beta \end{cases} \quad (10)$
It smoothes the $L_1$ loss with an $L_2$ loss in the interval $[-\beta, \beta]$ and guarantees that the whole loss function is smooth. This is reasonable since smoothness is important for convergence, as indicated in Theorem 1. However, it may result in slow optimization within the $L_2$ interval. Inspired by the interior-point method [1], which gradually approximates the non-smooth domain by increasing the weight of the corresponding barrier function at different stages, we obtain β from a decreasing function to reduce the gap between the $L_1$ and $L_2$ losses. As suggested by the interior-point method, the current objective should be solved to optimality before changing the weight of the barrier function. We decay the value of β in a stepwise manner. Specifically, we compute β at the t-th iteration as
$\beta_t = \beta_0 - \alpha\,\lfloor t/K \rfloor$
where α is a constant and K denotes the width of a step. Combining the regression loss, the objective of training the detector becomes
$\min \sum_i^N \tau\,\ell_{smooth}\left(\hat{P}_{i,-} - \hat{P}_{i,+} + \gamma\right) + \ell_{reg}(v_i; \beta_t)$
where τ is to balance the weights between classification and regression.
Experiments
Implementation Details
We evaluate the proposed losses on the COCO 2017 data set [16], which contains about 118k images for training, 5k images for validation, and 40k images for testing. To focus on the comparison of loss functions, we employ the structure of RetinaNet [15] as the backbone and only substitute the corresponding loss functions. For a fair comparison, we make the adequate modifications in the official codebase of RetinaNet, which is released in Detectron. Besides, we train the model with the same setting as RetinaNet. Specifically, the model is learned with SGD on 8 GPUs and the mini-batch size is set to 16, where each GPU holds 2 images at each iteration. Most experiments are trained with 90k iterations and this schedule length is denoted as "1×". The initial learning rate is 0.01 and it is decayed by a factor of 10 after 60k iterations and again after 80k iterations. For the anchor density, we apply the same setting as in [15], where each location has 3 scales and 3 aspect ratios. The standard COCO evaluation criterion is used to compare the performance of different methods.
Since RetinaNet lacks the optimization of the relationship between positive and negative distributions, it has to initialize the output probability of the classifier at 0.01 to fit the distribution of background. In contrast, we initialize the probability of the sigmoid function at 0.5, which is more reasonable for binary classification scenario without any prior knowledge. It also verifies that the proposed DR loss can handle class-imbalance better.
In Eqn. 7, we compute the constrained distributions over positive and negative examples with $\lambda_+$ and $\lambda_-$, respectively. To reduce the number of parameters, we fix the ratio between $\lambda_+$ and $\lambda_-$ as 1 : 0.1 and tune the scale as
$\lambda_+ = 1/\log(h); \quad \lambda_- = 0.1/\log(h)$
It is easy to show that this strategy is equivalent to fixing $\lambda_+$ and $\lambda_-$ as 1 and 0.1, and changing the base in the definition of the entropy regularizer as
$H(q) = -\sum_j q_j \log_h q_j$
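The equivalence follows from a simple change of base (assuming the entropy used in Propositions 1 and 2 is defined with the natural logarithm):

```latex
H_h(q) = -\sum_j q_j \log_h q_j
       = \frac{1}{\log(h)}\Big(-\sum_j q_j \log q_j\Big)
\;\Rightarrow\;
\frac{1}{\log(h)}\, H(q) = 1\cdot H_h(q), \qquad
\frac{0.1}{\log(h)}\, H(q) = 0.1\cdot H_h(q).
```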
Note that RetinaNet applies Feature Pyramid Network (FPN) [14] to obtain multiple scale features. To compute DR loss in one image, we collect candidates from multiple pyramid levels and obtain a single distribution for foreground and background, respectively.
Effect of Parameters
First, we conduct ablation experiments to evaluate the effect of the various parameters on the validation set. All experiments in this subsection use a single image scale of 800 for training and testing. ResNet-101 is applied as the backbone for comparison. Only horizontal flipping is adopted as data augmentation in this subsection. Effect of L: L controls the smoothness of the loss function in Eqn. 8. We compare models with different L in Table 1. Since L also changes the function value, we adjust the weight of the classification loss τ accordingly. The base of the entropy regularizer is fixed as h = 4. We observe that the loss function is quite stable across different smoothness values. Besides, a larger L results in a smaller function value, as shown in Fig. 3, which suggests increasing the weight of the classification loss τ to balance the losses. We keep L = 6 and τ = 5 in the remaining experiments. Effect of h: Next, we evaluate the effect of h. h changes the scale of $\lambda_-$ and $\lambda_+$ in the standard entropy regularizer. As illustrated in Fig. 2, a large h pushes the generated distribution towards the extreme case, while a small h keeps the derived distribution close to the original distribution. We vary h over a range of values and summarize the results in Table 2. The performance is evidently not sensitive to h within a reasonable range, and we fix it to 4 in the following experiments. Effect of β: Finally, we compare different strategies for changing β in the smoothed $L_1$ loss. In the implementation of RetinaNet, β is fixed to 0.11. We compare three strategies for decaying β to 0.01, which are illustrated in Fig. 4. The results are shown in Table 3. First, it is evident that all strategies with decayed β improve the performance of the detector over a fixed β. Moreover, the stepwise decay with K = 10k outperforms the linear decay, which verifies that the objective should be optimized sufficiently before moving to the next decay step. We adopt the stepwise decay in the next subsections.
Effect of DR Loss:
To illustrate the effect of the DR loss, we collect the confidence scores of examples from all images in the validation set and compare the empirical probability densities in Fig. 6. We include the cross-entropy loss and the focal loss in the comparison. The model with the cross-entropy loss is trained by ourselves, while the model with the focal loss is downloaded directly from the official model zoo with the same configuration as for the DR loss. First, we observe that most examples have extremely low confidence with the cross-entropy loss. This is because the number of negative examples overwhelms that of positive ones, so the model classifies most examples as negative to obtain a small loss, as demonstrated in Eqn. 2. Second, the focal loss is better than the cross-entropy loss, as it shifts the distribution of the foreground. However, the expectation of the foreground distribution is still close to that of the background, and a small threshold of 0.05 has to be adopted to identify positive examples from negative ones. Compared to the cross-entropy and focal losses, the DR loss optimizes the foreground distribution significantly. By optimizing the ranking loss with a large margin, the expectation of the foreground examples is larger than 0.5, while that of the background is smaller than 0.1. This confirms that the DR loss can address the imbalance between classes well. Consequently, the DR loss allows us to set a large threshold for classification. We set the threshold as 0.
Performance with Different Scales
With the parameters suggested by the ablation studies, we train the model with different scales and backbones to show the robustness of the proposed losses. We adopt ResNet-50 and ResNet-101 as backbones in the comparison. Training applies only horizontal flipping as data augmentation. Table 4 compares the performance with different scales to that of RetinaNet. We let "Dr.Retina" denote the RetinaNet trained with the proposed DR loss and the decaying strategy for β.
Comparison with State-of-the-Art
Finally, we compare Dr.Retina to the state-of-the-art two-stage and one-stage detectors on the COCO test set. We follow the setting in [15] and increase the number of training iterations to 2×, i.e., 180k iterations, and apply scale jitter in [640, 800] as additional data augmentation for training. Note that we still use a single image scale and a single crop for testing, as above.
Conclusion
In this work, we propose the distributional ranking loss to address the imbalance challenge in one-stage object detection. It first converts the original classification problem to a ranking problem, which balances the foreground and background classes. Furthermore, we propose to rank the expectations of the derived distributions in lieu of the original examples to focus on the hard examples, which balances the distribution of the background. Besides, we improve the regression loss by developing a strategy to optimize the $L_1$ loss better. Experiments on COCO verify the effectiveness of the proposed losses. Since the RPN in two-stage detectors also suffers from the imbalance issue, applying the DR loss there is left as future work.
A. Gradient of DR Loss
We define the DR loss as in Eqn. 9. It looks complicated, but its gradient is easy to compute. Here we give the detailed gradient form. For $p_{i,j_-}$, we have
$\frac{\partial \ell_{smooth}}{\partial p_{i,j_-}} = \frac{1}{1+\exp(-Lz)}\frac{\partial z}{\partial p_{i,j_-}} = \frac{q_{i,j_-}}{1+\exp(-Lz)}\left(1 + \frac{p_{i,j_-}}{\lambda_-} - \frac{1}{\lambda_-}\sum_{j_-} q_{i,j_-} p_{i,j_-}\right)$
where $z = \hat{P}_- - \hat{P}_+ + \gamma$.
B. Proof of Theorem 1
If we assume that the variance is bounded as $\forall s,\ \|\nabla_t^s - \nabla L_t\|_F \leq \delta$, then we have
$E[L(\theta_{t+1})] \leq E\left[L(\theta_t) - \eta\|\nabla L_t\|_F^2 + \frac{L\eta^2}{2}\left\|\frac{1}{m}\sum_{s=1}^m \nabla_t^s - \nabla L_t + \nabla L_t\right\|_F^2\right] \leq E\left[L(\theta_t) - \eta\|\nabla L_t\|_F^2 + \frac{L\eta^2}{2}\left(\frac{\delta^2}{m} + \|\nabla L_t\|_F^2\right)\right]$
Therefore, by assuming $\eta \leq \frac{1}{L}$ and summing over t from 1 to T, we have
$\sum_t \|\nabla L(\theta_t)\|_F^2 \leq \frac{2L(\theta_0)}{\eta} + L\eta T\frac{\delta^2}{m}$
We finish the proof by letting
$\eta = \frac{\sqrt{2mL(\theta_0)}}{\delta\sqrt{LT}}$
C. Experiments
Effect of DR Loss: We illustrate the empirical PDFs of foreground and background obtained with the DR loss in Fig. 6. Fig. 6 (a) shows the original densities of foreground and background.
To make the results more explicit, we scale the density of the background down by a factor of 10 and show the result in Fig. 6 (b). It is obvious that the DR loss can separate foreground and background with a large margin in this imbalanced scenario.
1907.08520 | 2964096688 | Communication presented at the 22nd International Conference on Digital Audio Effects (DAFx-19), held from 2 to 6 September 2019 in Birmingham, United Kingdom. | Within the context of NSynth @cite_10 , a new high-quality dataset of one shot instrumental notes was presented, largely surpassing the size of the previous datasets, containing @math musical notes with unique pitch, timbre and envelope. The sounds were collected from @math instruments from commercial sample libraries and are annotated based on their source (acoustic, electronic or synthetic), instrument family and sonic qualities. The instrument families used in the annotation are bass, brass, flute, guitar, keyboard, mallet, organ, reed, string, synth lead and vocal. The dataset is available online at https://magenta.tensorflow.org/datasets/nsynth and provides a good basis for training and evaluating one shot instrumental sound classifiers. This dataset is already split into training, validation and test sets, where the instruments present in the training set do not overlap with the ones present in the validation and test sets. However, to the best of our knowledge, no methods for instrument classification have so far been evaluated on this dataset.
"abstract": [
"Generative models in vision have seen rapid progress due to algorithmic improvements and the availability of high-quality image datasets. In this paper, we offer contributions in both these areas to enable similar progress in audio modeling. First, we detail a powerful new WaveNet-style autoencoder model that conditions an autoregressive decoder on temporal codes learned from the raw audio waveform. Second, we introduce NSynth, a large-scale and high-quality dataset of musical notes that is an order of magnitude larger than comparable public datasets. Using NSynth, we demonstrate improved qualitative and quantitative performance of the WaveNet autoencoder over a well-tuned spectral autoencoder baseline. Finally, we show that the model learns a manifold of embeddings that allows for morphing between instruments, meaningfully interpolating in timbre to create new types of sounds that are realistic and expressive."
],
"cite_N": [
"@cite_10"
],
"mid": [
"2951535099"
]
} | DATA AUGMENTATION FOR INSTRUMENT CLASSIFICATION ROBUST TO AUDIO EFFECTS | The repurposing of audio material, also known as sampling, has been a key component in Electronic Music Production (EMP) since its early days and became a practice which had a major influence in a large variety of musical genres. The availability of software such as Digital Audio Workstations, together with the audio sharing possibilities offered with the internet and cloud storage technologies, led to a variety of online audio sharing or sample library platforms. In order to allow for easier sample navigation, commercial databases such as sounds.com 1 or Loopcloud 2 rely on expert annotation to classify and characterise the content they provide. In the case of collaborative databases such as Freesound [1] the navigation and characterisation of the sounds is based on unrestricted textual descriptions and tags of the sounds provided by users. This leads to a search based on noisy labels which different members use to characterise the same type of sounds.
Automatically classifying one-shot instrumental sounds in unstructured large audio databases provides an intuitive way of navigating them and a better characterisation of the sounds they contain.
For databases where the annotation of the sounds is done manually, it can be a way to simplify the job of the annotator, by providing suggested annotations or, if the system is reliable enough, only presenting sounds with low classification confidence.
The automatic classification of one-shot instrumental sounds remains an open research topic in music information retrieval (MIR). While research in this field has mostly been performed on clean and unprocessed sounds, the sounds provided by EMP databases may also contain "production-ready" sounds, with audio effects applied to them. Therefore, in order for this automatic classification to be reliable for EMP sample databases, it has to be robust to the types of audio effects applied to these instruments. In our study, we evaluate the robustness of a state-of-the-art automatic classification method on sounds with audio effects, and analyse how data augmentation can be used to improve classification accuracy.
METHODOLOGY
In our study we will conduct two experiments. First, we will try to understand how augmenting a dataset with specific effects can improve instrument classification and secondly, we will see if this augmentation can improve the robustness of a model to the selected effect.
To investigate this, we process the training, validation and test sets of the NSynth [14] dataset with audio effects. A state-of-the-art deep learning architecture for instrument classification [9] is then trained with the original training set, as well as with the original training set appended with each of the augmented datasets for each effect. We use the model trained with the original training set as a baseline and compare how the models trained with the augmented versions perform on the original test set and on the augmented versions of it for each effect. The code for the experiments and evaluation is available in a public GitHub repository 5 .
Data Augmentation and Pre-Processing
The audio effects for the augmentation were applied directly to the audio files present in the training, validation and test splits of the NSynth dataset [14]. For the augmentation procedure, we used a pitch-shifting effect present in the LibROSA 6 library and audio effects in the form of VST audio plugins. For the augmentation which used audio plugins, the effects were applied directly to the audio signals using the Mrs. Watson 7 command-line audio plugin host. This command-line tool was designed for automating audio processing tasks and allows loading an input sound file, processing it using a VST audio effect and saving the processed sound. In order to maintain the transparency and reproducibility of this study, only VST plugins which are freely distributed online were selected. The parameters used in the augmentation procedure were the ones set in the factory default preset for each audio plugin, except for those whose default preset did not alter the sound significantly.
The audio effects used were the following:
• Heavy distortion: A Bitcrusher audio effect which produces distortion through the reduction of the sampling rate and the bit depth of the input sound was used in the training set. The VST plugin used for augmenting the training set was the TAL-Bitcrusher 8 . For the test and validation set, we used Camel Audio's CamelCrusher 9 plugin which provides distortion using tube overdrive emulation combined with a compressor.
• Saturation: For this effect, tube saturation and amplifier simulation plugins were used. The audio effect creates harmonics in the signal, replicating the saturation effect of a valve- or vacuum-tube amplifier [18]. For this augmentation we focused on a subtle saturation which did not create noticeable distortion. The plugin used in the training set was the TAL-Tube 8 , while for the validation and test set Shattered Glass Audio's Ace 10 replica of a 1950s all-tube amplifier was used.
• Reverb: To create a reverberation effect, the TAL-Reverb-4 plugin 11 was used in the test set. This effect replicates the artificial reverb obtained in a plate reverb unit. For the validation and test set we used OrilRiver 12 algorithmic reverb, which models the reverb provided by room acoustics. The default preset for this plugin mimics the reverb present in a small room.
• Echo: A delay effect with a long decay and a large delay time (more than 50 ms) [18] was used to create an echo effect. We used the TAL-Dub-2 13 VST plugin in the training set and soundhack's ++delay 14 in the validation and test set. For this last plugin, we adapted the factory default preset, changing the delay time to 181.7 ms and the feedback parameter to 50%, so that the echo effect was more noticeable.
• Flanger: For this delay effect, the input audio is summed with a delayed version of it, creating a comb filter effect. The time of the delay is short (less than 15 ms) and is varied with a low frequency oscillator [18,19]. Flanger effects can also have a feedback parameter, where the output of the delay line is routed back to its input. For the training set, the VST plugin used was the TAL-Flanger 8 , while for the test and validation sets we used Blue Cat's Flanger 15 , which mimics a vintage flanger effect.
• Chorus: The chorus effect simulates the timing and pitch variations present when several individual sounds with similar pitch and timbre play in unison [19]. The implementation of this effect is similar to the flanger. The chorus uses longer delay times (around 30 ms), a larger number of voices (more than one) and normally does not contain the feedback parameter [18,19]. The VST effect used in the training set was the TAL-Chorus-LX 16 which tries to emulate the chorus module present in the Juno 60 synthesizer. For the test and validation sets, we used Blue Cat's Chorus 17 , which replicates a single voice vintage chorus effect.
• Pitch shifting: For this effect, the LibROSA Python package for music and audio analysis was used. This library contains a function which pitch-shifts the input audio. As the dataset used contains recordings of the instruments for every note in the chromatic scale in successive octaves, our approach focused on pitch-shifting in steps smaller than one semitone, similar to what can occur in a detuned instrument. The bins_per_octave parameter of the pitch-shifting function was set to 72 = 12 × 6, while the n_steps parameter was set to a random value between 1 and 5 for each sound. Neither 0 nor 6 were selected as possible values, as they would amount to not altering the sound or pitch-shifting it by exactly one semitone. The intention of the random assignment of n_steps is to ensure that the size of this augmented dataset is equal to the size of the datasets for the other effects. A minimal sketch of this step is shown after this list.
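The pitch-shifting augmentation can be sketched roughly as follows; the file handling, the 16 kHz sampling rate and the use of soundfile for writing are assumptions, not the exact augmentation script.

```python
import random
import librosa
import soundfile as sf

def detune(in_path, out_path, sr=16000):
    """Random sub-semitone pitch shift: 1-5 steps out of 72 bins per octave."""
    y, sr = librosa.load(in_path, sr=sr)
    n_steps = random.randint(1, 5)  # never 0 or 6, i.e. never exactly 0 or 1 semitone
    y_shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=n_steps,
                                            bins_per_octave=72)
    sf.write(out_path, y_shifted, sr)
```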
The audio resulting from this augmentation step can be longer than the original unprocessed audio. In order to keep all examples with the same length, the processed audio files were trimmed, ensuring all audio samples had a fixed duration of 4 s, similar to the sounds presented in the NSynth dataset [14].
The next step in the data processing pipeline is representing each sound as a log-scaled mel-spectrogram. First, a 1024-point short-time Fourier transform (STFT) is calculated on the signal, with a 75% overlap. The magnitude of the STFT result is converted to a mel-spectrogram with 80 components, covering a frequency range from 40 Hz to 7600 Hz. Finally, the logarithm of the mel-spectrogram is calculated, resulting in an 80 × 247 log-scaled mel-spectrogram for the 4 s sounds sampled at 16 kHz present in the NSynth dataset [14].
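A possible implementation of this front-end with LibROSA is sketched below; the hop length of 256 samples and `center=False` are assumptions chosen so that 4 s at 16 kHz yields the 80 × 247 shape, and the small constant only avoids log(0).

```python
import numpy as np
import librosa

def log_mel(y, sr=16000):
    """80-band log-scaled mel-spectrogram from a 1024-point STFT with 75% overlap."""
    S = librosa.feature.melspectrogram(y=y, sr=sr,
                                       n_fft=1024, hop_length=256,  # 75% overlap
                                       center=False,
                                       n_mels=80, fmin=40, fmax=7600,
                                       power=1.0)                   # magnitude, not power
    return np.log(S + 1e-7)   # shape (80, 247) for 4 s at 16 kHz
```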
Convolutional Neural Network
The CNN architecture we chose to use in our experiment is the single-layer architecture proposed by Pons et al. [9] for the musical instrument classification experiment, which has an implementation available online 18 . This architecture uses vertical convolution filters in order to better model the timbral characteristics present in the spectrogram, achieving close to state-of-the-art results [10] with a much smaller model (23 times fewer trainable parameters) and, consequently, a lower training time.
We chose the single-layer architecture presented in this study and adapted it to take an input of size 80 × 247. This architecture contains a single but wide convolutional layer with filters of various sizes, to capture the timbral characteristics of the input:
• 128 filters of size 5 × 1 and 8 × 1;
• 64 filters of size 5 × 3 and 80 × 3;
• 32 filters of size 5 × 5 and 80 × 5.
Batch normalisation [20] is used after the convolutional layer and the activation function used is Exponential Linear Unit [21]. Max pooling is applied in the channel dimension for learning pitch invariant representations. Finally, 50% dropout is applied to the output layer, which is a densely connected 11-way layer, with the softmax activation function. A graph of the model can be seen in Figure 1. For more information on this architecture and its properties see [9].
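The following Keras sketch illustrates the overall structure; padding, the pooling stage and the branch concatenation are simplified assumptions, so the reference implementation linked by the authors should be consulted for the exact layer configuration.

```python
from tensorflow.keras import layers, Model

def build_model(input_shape=(80, 247, 1), n_classes=11):
    """Rough sketch of the single wide convolutional layer with mixed filter shapes."""
    x_in = layers.Input(shape=input_shape)
    branches = []
    for n_filters, size in [(128, (5, 1)), (128, (8, 1)),
                            (64, (5, 3)), (64, (80, 3)),
                            (32, (5, 5)), (32, (80, 5))]:
        b = layers.Conv2D(n_filters, size, padding='same')(x_in)
        b = layers.BatchNormalization()(b)
        b = layers.Activation('elu')(b)
        branches.append(b)
    x = layers.Concatenate()(branches)   # one wide layer built from parallel filter shapes
    x = layers.GlobalMaxPooling2D()(x)   # simplified pooling stage (assumption)
    x = layers.Dropout(0.5)(x)
    out = layers.Dense(n_classes, activation='softmax')(x)
    return Model(x_in, out)
```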
Evaluation
The training of the models used the Adam optimiser [22], with a learning rate of 0.001. In the original paper [9], the authors used Stochastic Gradient Descent (SGD) with a learning rate reduction every 5 epochs, which was shown to provide good accuracy on the IRMAS dataset. However, we chose Adam as the optimiser because it does not need as much tuning as SGD. Furthermore, using a variable learning rate dependent on the number of epochs could benefit the larger training datasets, as is the case for the augmented ones. A batch size of 50 examples was used, as it was the largest batch size able to fit the memory of the available 18 GPUs. The loss function employed for the training was the categorical cross-entropy, as used in [9], which can be calculated as shown in Equation (1)
$loss = -\frac{1}{N}\sum_{i=1}^{N} \log p_{model}[y_i \in C_{y_i}] \quad (1)$
To compare the models trained with the different datasets, we used categorical accuracy as evaluation metric, described in Equation (2). A prediction is considered correct if the index of the output node with highest value is the same as the correct label.
Categorical Accuracy = Correct predictions / N (2)
All the models were trained until the categorical accuracy did not improve in the validation set after 10 epochs and the model which provided the best value for the validation set was evaluated in the test set.
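In Keras, this training procedure can be expressed roughly as follows; `build_model` refers to the sketch above, and `x_train`, `y_train`, `x_val`, `y_val` are placeholders for the pre-computed spectrograms and one-hot labels.

```python
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.optimizers import Adam

model = build_model()
model.compile(optimizer=Adam(learning_rate=0.001),
              loss='categorical_crossentropy',      # Equation (1)
              metrics=['categorical_accuracy'])     # Equation (2)
early_stop = EarlyStopping(monitor='val_categorical_accuracy', mode='max',
                           patience=10, restore_best_weights=True)
model.fit(x_train, y_train, batch_size=50, epochs=1000,
          validation_data=(x_val, y_val), callbacks=[early_stop])
```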
RESULTS
Two experiments were conducted in our study. We first evaluated how augmenting the training set of NSynth [14] by applying audio effects to the sounds can improve the automatic classification of the instruments in the unmodified test set. In the second experiment, we evaluated how robust a state-of-the-art model for instrument classification is when classifying sounds to which these audio effects have been applied. The results of the first experiment are presented in Table 1, where the classification accuracy of the models trained with the different training sets is compared. In [16], the authors state that the superior performance obtained was due to an augmentation procedure coupled with an increase in the model capacity. Experiments with higher-capacity models will be performed to understand if the size of the model used is limiting its performance on learning from the augmented dataset.
In Table 2, we present the accuracy values obtained when evaluating the trained models on test sets processed with effects. The first thing we verify is that the classification accuracy greatly decreases for almost all effects when compared to the classification of unprocessed sounds. The model seems to be more robust to the flanger and to the pitch-shifting effect, where the difference between the accuracy on the unprocessed test set and on the processed one is smaller than 4%. The effects which caused the biggest drops in accuracy (> 20%) were the heavy distortion, the saturation, the echo and the reverb. When evaluating whether training with the augmented datasets increased the robustness of the model, we see that this is only true for the chorus and distortion effects. While for the heavy distortion effect the accuracy when training with the augmented set improves by a significant value (≈ 4%), the differences in accuracy between training with the augmented and the unprocessed sets are small. Further experiments will be performed to understand the poor generalisation of the model. Besides experimenting with a higher-capacity model, as previously stated, work will be conducted on further augmenting the datasets. Although the effects applied were the same in the training, validation and test sets, the implementations used in the training set were different. This leads to a different timbre between the sets, which the architecture might not be able to generalise to. In future experiments, we will further augment the dataset using a number of different settings for each effect, as well as different combinations of the applied effects.
CONCLUSIONS
In this paper we evaluated how a state of the art algorithm for automatic instrument classification performs when classifying the NSynth dataset and how augmenting this dataset with audio effects commonly used in electronic music production influences its accuracy on both the original and processed versions of the audio. | 2,358 |
1907.08451 | 2963562418 | Fast, non-destructive and on-site quality control tools, mainly high sensitive imaging techniques, are important to assess the reliability of photovoltaic plants. To minimize the risk of further damages and electrical yield losses, electroluminescence (EL) imaging is used to detect local defects in an early stage, which might cause future electric losses. For an automated defect recognition on EL measurements, a robust detection and rectification of modules, as well as an optional segmentation into cells is required. This paper introduces a method to detect solar modules and crossing points between solar cells in EL images. We only require 1-D image statistics for the detection, resulting in an approach that is computationally efficient. In addition, the method is able to detect the modules under perspective distortion and in scenarios, where multiple modules are visible in the image. We compare our method to the state of the art and show that it is superior in presence of perspective distortion while the performance on images, where the module is roughly coplanar to the detector, is similar to the reference method. Finally, we show that we greatly improve in terms of computational time in comparison to the reference method. | The detection of solar modules in an EL image is an object detection task. Traditionally, feature-based methods have been applied to solve the task of object detection. Especially, Haar wavelets have proven to be successful @cite_0 . For an efficient computation, Viola and Jones @cite_12 made use of integral images, previously known as summed area tables @cite_9 . Integral images are also an essential part of our method.
"abstract": [
"This paper presents a general trainable framework for object detection in static images of cluttered scenes. The detection technique we develop is based on a wavelet representation of an object class derived from a statistical analysis of the class instances. By learning an object class in terms of a subset of an overcomplete dictionary of wavelet basis functions, we derive a compact representation of an object class which is used as an input to a support vector machine classifier. This representation overcomes both the problem of in-class variability and provides a low false detection rate in unconstrained environments. We demonstrate the capabilities of the technique in two domains whose inherent information content differs significantly. The first system is face detection and the second is the domain of people which, in contrast to faces, vary greatly in color, texture, and patterns. Unlike previous approaches, this system learns from examples and does not rely on any a priori (hand-crafted) models or motion-based segmentation. The paper also presents a motion-based extension to enhance the performance of the detection algorithm over video sequences. The results presented here suggest that this architecture may well be quite general.",
"Texture-map computations can be made tractable through use of precalculated tables which allow computational costs independent of the texture density. The first example of this technique, the “mip” map, uses a set of tables containing successively lower-resolution representations filtered down from the discrete texture function. An alternative method using a single table of values representing the integral over the texture function rather than the function itself may yield superior results at similar cost. The necessary algorithms to support the new technique are explained. Finally, the cost and performance of the new technique is compared to previous techniques.",
"This paper describes a machine learning approach for visual object detection which is capable of processing images extremely rapidly and achieving high detection rates. This work is distinguished by three key contributions. The first is the introduction of a new image representation called the \"integral image\" which allows the features used by our detector to be computed very quickly. The second is a learning algorithm, based on AdaBoost, which selects a small number of critical visual features from a larger set and yields extremely efficient classifiers. The third contribution is a method for combining increasingly more complex classifiers in a \"cascade\" which allows background regions of the image to be quickly discarded while spending more computation on promising object-like regions. The cascade can be viewed as an object specific focus-of-attention mechanism which unlike previous approaches provides statistical guarantees that discarded regions are unlikely to contain the object of interest. In the domain of face detection the system yields detection rates comparable to the best previous systems. Used in real-time applications, the detector runs at 15 frames per second without resorting to image differencing or skin color detection."
],
"cite_N": [
"@cite_0",
"@cite_9",
"@cite_12"
],
"mid": [
"2115763357",
"2143425433",
"2164598857"
]
} | Fast and robust detection of solar modules in electroluminescence images | Over the last decade, photovoltaic (PV) energy has become an important factor in emission-free energy production. In 2016 for example, about 40 GW of PV capacity was installed in Germany, which amounts to nearly one fifth of the total installed electric capacity [3]. Not only in Germany, renewable electricity production has become a considerable business. It is expected that by 2023 about one third of worldwide electricity will come from renewable sources [14]. To ensure high performance of the installed modules, regular inspection by imaging and non-imaging methods is required. For on-site inspection, imaging methods are very useful to find out which modules are defective after signs of decreasing electricity generation have been detected. Typically, on-site inspection of solar modules is performed by infrared (IR) or electroluminescence (EL) imaging. This work focusses on EL imaging. However, it could be adapted to other modalities as well.
A solar module (see Fig. 1) consists of a varying number of solar cells that are placed onto a regular grid. Since cells on a module share a similar structure and cracks are usually spread out only within each cell, it is a natural algorithmic choice to perform detailed inspection on a per cell basis. To this end, an automatic detection of the module and crossing points between cells is required.
Our main contributions are as follows: We propose a method for the detection of solar modules and the crossing points between solar cells in the image. It works irrespective of the module's pose and position. Our method is based on 1-D image statistics, leading to a very fast approach. In addition, we show how this can be extended to situations, where multiple modules are visible in the image. Finally, we compare our method to the state of the art and show that the detection performance is comparable, while the computational time is lowered by a factor of 40.
The remainder of this work is organized as follows: In Sec. 2, we summarize the state of the art in object detection and specifically in the detection of solar modules. In Sections 3 and 4, we introduce our method, which is eventually compared against the state of the art in Sec. 5.
Detection of the module
This work is intended for EL images of solar modules in different constellations. As shown in Fig. 6, modules might be imaged from different viewpoints. In addition, there might be more than one module visible in the image. In this work, we focus on cases where one module is fully visible and others might be partially visible, since this commonly happens when EL images of modules mounted next to each other are captured in the field. However, this method can be easily adapted to robustly handle different situations. The only assumption we make is that the number of cells per row and per column is known.
The detection of the module in the image and the localization of crossing points between solar cells is performed in two steps. First, the module is roughly located to obtain an initial guess of a rigid transformation between model and image coordinates. We describe the procedure in Sec. 3.1 and Sec. 3.2. Then, the resulting transform is used to predict coarse locations of crossing points. These locations are then refined as described in Sec. 4.
Detection of a single module
We locate the module by considering 1-D image statistics obtained by summing the image in the x and y directions. This is related, but not equal, to the concept known as integral images [1,13]. Let I denote an EL image of a solar module. Throughout this work, we assume that images are column major, i. e., I[x, y], where x ∈ [1, w] and y ∈ [1, h], denotes a single pixel in column y and row x. Then, the integration over rows is given by
$I_{\Sigma x}[y] = \sum_{x=1}^{w} I[x, y] \quad (1)$
The sum over columns $I_{\Sigma y}$ is defined similarly. Fig. 2 visualizes the statistics obtained by this summation (blue lines). Since the module is clearly separated from the background by the mean intensity, the location of the module in the image can be easily obtained from $I_{\Sigma x}$ and $I_{\Sigma y}$. However, we are less interested in the absolute values of the mean intensities than in their change. Therefore, we consider the gradients $\nabla_\sigma I_{\Sigma x}$ and $\nabla_\sigma I_{\Sigma y}$, where σ denotes a Gaussian smoothing to suppress high frequencies.
Since we are only interested in low-frequency changes, we heuristically set σ = 0.01 · max(w, h). As shown in Fig. 2, a left edge of a module is characterized by a maximum in $\nabla_\sigma I_{\Sigma x}$ or $\nabla_\sigma I_{\Sigma y}$. Similarly, a right edge corresponds to a minimum. In addition, the skewness of the module with respect to the image's y axis corresponds to the width of the minimum and maximum peaks in $\nabla_\sigma I_{\Sigma x}$, whereas the skewness of the module with respect to the x axis corresponds to the peak widths in $\nabla_\sigma I_{\Sigma y}$.
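These statistics can be computed in a few lines of NumPy/SciPy; the exact smoothing and gradient operators used in the paper's implementation are not specified, so `gaussian_filter1d` combined with `np.gradient` is an assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def smoothed_profile_gradients(img, rel_sigma=0.01):
    """Row/column sums (Eqn. 1) and their Gaussian-smoothed gradients."""
    sigma = rel_sigma * max(img.shape)
    sum_over_rows = img.sum(axis=0).astype(float)   # 1-D profile along x
    sum_over_cols = img.sum(axis=1).astype(float)   # 1-D profile along y
    grad_x = np.gradient(gaussian_filter1d(sum_over_rows, sigma))
    grad_y = np.gradient(gaussian_filter1d(sum_over_cols, sigma))
    return grad_x, grad_y
```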
Formally, let $x_1$ and $x_2$ denote the locations of the maximum and minimum of $\nabla_\sigma I_{\Sigma x}$, and $y_1$ and $y_2$ denote the locations of the maximum and minimum of $\nabla_\sigma I_{\Sigma y}$, respectively. Further, let $x_{1-}$ and $x_{1+}$ denote the pair of points where the peak corresponding to $x_1$ vanishes. We define two bounding boxes for the module (see Fig. 2) as follows: The outer bounding box is given by
$B_1 = [b_{1,1}, b_{1,2}, b_{1,3}, b_{1,4}] = [(x_{1-}, y_{2+}), (x_{2+}, y_{2+}), (x_{2+}, y_{1-}), (x_{1-}, y_{1-})] \quad (2)$
while the inner bounding box is given by
$B_2 = [b_{2,1}, b_{2,2}, b_{2,3}, b_{2,4}] = [(x_{1+}, y_{2-}), (x_{2-}, y_{2-}), (x_{2-}, y_{1+}), (x_{1+}, y_{1+})] \quad (3)$
With these bounding boxes, we obtain a first estimate of the module position. However, it is unclear whether $b_{1,1}$ or $b_{2,1}$ corresponds to the upper left corner of the module. The same holds for $b_{1,2}$ versus $b_{2,2}$ and so on. This information is lost by the summation over the image. However, we can easily determine the exact pose of the module. To this end, we consider the sum over the sub-regions between the bounding boxes, cf. Fig. 2. This way, we can identify the four corners $\{b_1, \ldots, b_4\}$ of the module and obtain a rough estimate of the module position and pose. To simplify the detection of crossing points, we assume that the longer side of a non-square module always corresponds to the edges $(b_1, b_2)$ and $(b_3, b_4)$.
Detection of multiple modules
In many on-site applications, multiple modules will be visible in an EL image (see Fig. 3). In these cases, the detection of a single maximum and minimum along each axis will not suffice. To account for this, we need to define when a point in $\nabla_\sigma I_{\Sigma k}$, $k \in \{x, y\}$, is considered a maximum/minimum. We compute the standard deviation $\sigma_k$ of $\nabla_\sigma I_{\Sigma k}$ and consider every point a maximum where $2\sigma_k < \nabla_\sigma I_{\Sigma k}$ and every point a minimum where $-2\sigma_k > \nabla_\sigma I_{\Sigma k}$. Then, we apply non-maximum/minimum suppression to obtain a single detection per maximum and minimum. As a result, we obtain a sequence of extrema per axis.
Ideally, every minimum is directly followed by a maximum. However, due to false positives this is not always the case.
In this work, we focus on the case where only one module is fully visible, whereas the others are partially occluded. Since we know that a module in the image corresponds to a maximum followed by a minimum, we can easily identify false positives. We group all maxima and minima that occur sequentially and only keep the one that corresponds to the largest or smallest value in $\nabla_\sigma I_{\Sigma k}$. Still, we might have multiple pairs of a maximum followed by a minimum. We choose the one where the distance between minimum and maximum is maximal. This is a very simple strategy that does not allow detecting more than one module. However, an extension to multiple modules is straightforward.
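A simple sketch of this extrema selection is given below; the suppression window `min_dist` is an assumption, since the paper does not state how close extrema are merged.

```python
import numpy as np

def candidate_extrema(grad, k=2.0, min_dist=20):
    """Threshold a smoothed gradient profile at +/- k*std and suppress nearby duplicates."""
    def suppress(indices, values):
        kept = []
        for i in indices:
            if kept and i - kept[-1] < min_dist:
                if abs(values[i]) > abs(values[kept[-1]]):
                    kept[-1] = i          # keep the stronger extremum within the window
            else:
                kept.append(i)
        return kept

    s = k * grad.std()
    maxima = suppress(np.where(grad > s)[0], grad)
    minima = suppress(np.where(grad < -s)[0], grad)
    return maxima, minima
```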
Detection of cell crossing points
For the detection of cell crossing points, we assert that the module consists of N columns of cells and M rows, where a typical module configuration is N = 10 and M = 6. However, our approach is not limited to that configuration. Without loss of generality, we assume that N ≥ M. With this information, we can define a simple model of the module. It consists of the corners and cell crossings on a regular grid, where the cell size is 1. By definition, the origin of the model coordinate system resides in the upper left corner with the y axis pointing downwards. Here, we assume that the longer side of a non-square module always corresponds to the edges $(b_1, b_2)$ and $(b_3, b_4)$, and that N ≥ M. Note that this does not limit the approach regarding the orientation of the module since, for example, $(b_1, b_2)$ can define a horizontal or vertical line in the image. Hence, every point in the model is given by
$m_{i,j} = (i-1,\ j-1) \quad i \leq N,\ j \leq M \quad (4)$
We aim to estimate a transform that converts model coordinates $m_{i,j}$ into image coordinates $x_{i,j}$, which is done by using a homography matrix $H_0$ that encodes the relation between the model and image plane. With the four correspondences between the module edges in the model and image plane, we estimate $H_0$ using the direct linear transform (DLT) [7]. Using $H_0$, we obtain an initial guess of the position of each crossing point by
$\tilde{x}_{i,j} \approx H_0 \tilde{m}_{i,j} \quad (5)$
where the model point $m = (x, y)$ in cartesian coordinates is converted to its homogeneous representation $\tilde{m} = (x, y, 1)$. Now, we aim to refine this initial guess by a local search. To this end, we extract a rectified image patch of the local neighborhood around each initial guess (Sec. 4.1). Using the resulting image patches, we apply the detection of cell crossing points (Sec. 4.2). Finally, we detect outliers and re-estimate $H_0$ to minimize the reprojection error between the detected cell crossing points and the corresponding model points (Sec. 4.3). (Caption of Fig. 4: different cell crossings, Fig. 4b and 4c, as well as the corners of a module, cf. Fig. 4a, lead to different responses in the 1-D statistics; the accumulated intensities are shown in blue and their gradient in orange.)
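With OpenCV, the initial estimate and the coarse crossing-point predictions can be sketched as follows; the corner ordering is assumed to match the model definition, and this is not necessarily how the paper's implementation performs the DLT.

```python
import numpy as np
import cv2

def initial_guess(corners_img, n_cols=10, n_rows=6):
    """Estimate H0 from the four detected module corners and predict all cell crossings."""
    model_corners = np.float32([[0, 0], [n_cols, 0], [n_cols, n_rows], [0, n_rows]])
    H0 = cv2.getPerspectiveTransform(model_corners, np.float32(corners_img))
    grid = np.float32([[i, j] for j in range(n_rows + 1) for i in range(n_cols + 1)])
    predicted = cv2.perspectiveTransform(grid.reshape(-1, 1, 2), H0).reshape(-1, 2)
    return H0, predicted
```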
Extraction of rectified image patches
For the local search, we consider only a small region around the initial guess. By means of the homography $H_0$, we have some prior knowledge about the position and pose of the module in the image. We take this into account by warping a region that corresponds to the size of approximately one cell. To this end, we create a regular grid of pixel coordinates. The size of the grid depends on the approximate size of a cell in the image, which is obtained by
$\hat{r}_{i,j} = \frac{\hat{x}_{i,j} - \hat{x}_{i+1,j+1}}{2} \quad (6)$
where the approximation $\tilde{x}$ is given by Equ. (5) and the conversion from homogeneous coordinates $\tilde{x} = (x_1, x_2, x_3)$ to inhomogeneous coordinates is $\hat{x} = \left(\frac{x_1}{x_3}, \frac{x_2}{x_3}\right)$. Note that the approximation $\hat{r}_{i,j}$ is only valid in the vicinity of $\hat{x}_{i,j}$. The warping is then performed by mapping model coordinates into image coordinates using $H_0$, followed by sampling the image using bilinear interpolation. As a result, a rectified patch image $I_{i,j}$ is obtained that is coarsely centered at the true cell crossing point, see Fig. 4.
Cell crossing points detection
The detection step for cell crossing points is very similar to the module detection step but with local image patches. It is carried out for every model point $m_{i,j}$ and image patch $I_{i,j}$ to find an estimate $x_{i,j}$ of the (unknown) true image location of $m_{i,j}$. To simplify notation, we drop the index throughout this section. We compute 1-D image statistics from I to obtain $I_{\Sigma x}$ and $I_{\Sigma y}$, as well as $\nabla_\sigma I_{\Sigma x}$ and $\nabla_\sigma I_{\Sigma y}$, as described in Sec. 3.1. The smoothing factor σ is set relative to the image size in the same way as for the module detection.
We find that there are different types of cell crossings with differing intensity profiles, see Fig. 4. Another challenge is that busbars are hard to distinguish from the ridges (the separating regions between cells), see for example Fig. 4c. Therefore, we cannot consider a single minimum/maximum. We proceed similarly to the approach for the detection of multiple modules (cf. Sec. 3.2). We apply thresholding and non-maximum/non-minimum suppression on $\nabla_\sigma I_{\Sigma x}$ and $\nabla_\sigma I_{\Sigma y}$ to obtain a sequence of maxima and minima along each axis. The threshold is set to $1.5 \cdot \sigma_k$, where $\sigma_k$ is the standard deviation of $\nabla_\sigma I_{\Sigma k}$.
From the location of m in the model grid, we know the type of the target cell crossing. We distinguish between ridges and edges of the module. A cell crossing might consist of both. For example a crossing between two cells on the left border of the module, see Fig. 4b, consists of an edge on the x axis and a ridge on the y axis.
Detection of ridges A ridge is characterized by a minimum in $\nabla_\sigma I_{\Sigma k}$ followed by a maximum. As noted earlier, ridges are hard to distinguish from busbars. Luckily, solar cells are usually built symmetrically. Hence, given that image patches are roughly rectified and that the initial guess of the crossing point is not close to the border of the image patch, it is likely that we observe an even number of busbars. As a consequence, we simply take all minima that are directly followed by a maximum, order them by their position and take the middle one. We expect to have an odd number of such sequences (an even number of busbars and the actual ridge we are interested in). In case this heuristic is not applicable, because we found an even number of such sequences, we simply drop this point. The correct position on the respective axis corresponds to the turning point of $\nabla_\sigma I_{\Sigma k}$.
Detection of edges For edges, we distinguish between left/top edges and bottom/right edges of the module. Left/top edges are characterized by a maximum, whereas bottom/right edges correspond to a minimum in $\nabla_\sigma I_{\Sigma k}$. In case of multiple extrema, we use a heuristic to choose the correct one. We assume that our initial guess is not far off. Therefore, we choose the maximum or minimum that is closest to the center of the patch.
Outlier detection
We chose to apply a fast method to detect the crossing points by considering 1-D image statistics only. As a result, the detected crossing points contain a significant number of outliers. In addition, every detected crossing point exhibits some measurement error. Therefore, we need to identify outliers and find a robust estimate of H that minimizes the overall error. Since H has 8 degrees of freedom, only four point correspondences $(m_{i,j}, x_{i,j})$ are required to obtain a unique solution. On the other hand, a typical module with 10 rows and 6 columns has 77 crossing points. Hence, even if the detection of crossing points fails in a significant number of cases, the number of point correspondences is typically much larger than 4. Therefore, this problem is well suited to be solved by Random Sample Consensus (RANSAC) [4]. We apply RANSAC to find those point correspondences that give the most consistent model. At every iteration t,
we randomly sample four point correspondences and estimate $H_t$ using the DLT. For the determination of the consensus set, we treat a point as an outlier if the detected point $x_{i,j}$ and the estimated point $H_t \tilde{m}_{i,j}$ differ by more than 5% of the cell size.
The error of the model H t is given by the following least-squares formulation
$e_t = \frac{1}{NM}\sum_{i,j} \|\hat{x}_{i,j} - x_{i,j}\|_2^2 \quad (7)$
where $\hat{x}$ is the current estimate given by the model H in cartesian coordinates. Finally, we estimate H using all point correspondences from the consensus set to minimize $e_t$.
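In practice, such a RANSAC loop is available off the shelf; the following OpenCV sketch is one possible realization, where the reprojection threshold is expressed in pixels as 5% of the estimated cell size `cell_px` (an assumption about how the threshold is converted to image units).

```python
import numpy as np
import cv2

def refine_homography(model_pts, detected_pts, cell_px):
    """Robustly re-estimate H from all detected crossing points with RANSAC."""
    H, inlier_mask = cv2.findHomography(np.float32(model_pts), np.float32(detected_pts),
                                        method=cv2.RANSAC,
                                        ransacReprojThreshold=0.05 * cell_px)
    return H, inlier_mask.ravel().astype(bool)
```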
Experimental results
We conduct a series of experiments to show that our approach is robust w.r.t. the position and pose of the module in the image as well as to various degrees of distortion of the modules. In Sec. 5.1, we introduce the dataset that we use throughout our experiments. In Sec. 5.2, we quantitatively compare the results of our approach with the reference method [2]. In addition, we show that our method robustly handles cases where multiple modules are visible in the image or the module is perspectively distorted. Finally, in Sec. 5.3, we compare the computation time of our approach to the state of the art.
Dataset
Deitsch et al. [2] propose a joint detection and segmentation approach for solar modules. In their evaluation, they use two datasets. They report their computational performance on a dataset that consists of 44 modules. We will refer to this dataset as DataA and use it only for the performance evaluation, to obtain results that are easy to compare. In addition, they use a dataset that consists of 8 modules to evaluate their segmentation. We will refer to this data as DataB, see Fig. 6b. The data is publicly available, which allows for a direct comparison of the two methods. However, since we do not apply a pixelwise segmentation, we could not use the segmentation masks they also provided. To this end, we manually added polygonal annotations, where each corner of the polygon corresponds to one of the corners of the module. To assess the performance in different settings, we add two additional datasets. One of them consists of 10 images with multiple modules visible. We deem this setting important, since in on-site applications, it is difficult to measure only a single module. We will refer to this as DataC. An example is shown in Fig. 6c. The other consists of 9 images, where the module has been gradually rotated around the y-axis with a step size of 10 • starting at 0 • . We will refer to this as DataD, see Fig. 6a. We manually added polygonal annotations to DataC and DataD, too. For the EL imaging procedure of DataC and DataD, two different silicon detector CCD cameras with an optical long pass filter have been used. For the different PV module tilting angles (DataD), a Sensovation "coolSamba HR-320" was used, while for the outdoor PV string measurements a Greateyes "GE BI 2048 2048" was employed (DataC).
Detection results
We are interested in the number of modules that are detected correctly and how accurate the detection is. To assess the detection accuracy, we calculate the intersection over union (IoU) between ground truth polygon and detection. Additionally, we report the recall at different IoU-thresholds. Fig. 5 summarizes the detection results. We see that our method outperforms the reference method on the test dataset provided by Deitsch et al. [2] (DataB) by a small margin. However, the results of the reference method are a little bit more accurate. This can be explained by the fact that they consider lens distortion, while our method only estimates a projective transformation between model and image coordinates. The experiments on DataD assess the robustness of both methods with respect to rotations of the module. We clearly see that our method is considerably robust against rotations, while the reference method requires that the modules are roughly rectified. Finally, we determine the performance of our method, when multiple modules are visible in the image (DataC). The reference method does not support this scenario. It turns out that our method gives very good results when an image shows multiple modules.
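The polygon IoU used here can be computed, for example, with Shapely; this is only a sketch, and the evaluation script of the paper may differ.

```python
from shapely.geometry import Polygon

def polygon_iou(quad_a, quad_b):
    """IoU between a ground-truth and a detected module polygon, given as corner lists."""
    a, b = Polygon(quad_a), Polygon(quad_b)
    inter = a.intersection(b).area
    union = a.area + b.area - inter
    return inter / union if union > 0 else 0.0
```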
In Fig. 6, we visually show the module crossing points estimated using our method. For the rotated modules (DataD), it turns out that the detection fails for 70 • and 80 • rotation. However, for 60 • and less, we consistently achieve good results (see Fig. 6b). Finally, Fig. 6c reveals that the method also works on varying types of modules and in presence of severe degradation.
Computation time
We determine the computational performance of our method on a workstation equipped with an Intel Xeon E5-1630 CPU running at 3.7 GHz. The method is implemented in Python3 using NumPy and only uses a single thread. We use the same 44 module images that Deitsch et al. [2] have used for their performance evaluation to obtain results that can be compared easily. On average, the 44 images are processed in 15 s, resulting in approximately 340 ms per module. This includes the initialization time of the interpreter and the time for loading the images. The average raw processing time of a single image is about 190 ms.
Deitsch et al. [2] report an overall processing time of 6 min for the 44 images using a multi-threaded implementation. Therefore, a single image amounts to 13.5 s on average. Hence, our method is about 40 times faster than the reference method. On the other hand, the reference method does not only detect the cell crossing points but also performs segmentation of the active cell area. In addition, they account for lens distortion as well. This partially justifies the performance difference.
Conclusion
In this work, we have presented a new approach to detect solar modules in EL images. It is based on 1-D image statistics and relates to object detection methods based on integral images. To this end, it can be implemented efficiently and we are confident that real-time processing of images is feasible. The experiments show that our method is superior in the presence of perspective distortion, while performing similarly well to the state of the art on non-distorted EL images. Additionally, we show that it is able to deal with scenarios where multiple modules are present in the image.
In future work, the method could be extended to account for complex scenarios where perspective distortion is strong. In these situations, the stability could be improved by a prior rectification of the module, e.g., using the Hough transform to detect the orientation of the module. Since point correspondences between the module and a virtual model of the latter are established, the proposed method could also be extended to calibrate the parameters of a camera model. This would make it possible to take lens distortion into account and to extract undistorted cell images. | 3,914
1907.08451 | 2963562418 | Fast, non-destructive and on-site quality control tools, mainly highly sensitive imaging techniques, are important to assess the reliability of photovoltaic plants. To minimize the risk of further damage and electrical yield losses, electroluminescence (EL) imaging is used to detect local defects, which might cause future electric losses, at an early stage. For an automated defect recognition on EL measurements, a robust detection and rectification of modules, as well as an optional segmentation into cells, is required. This paper introduces a method to detect solar modules and crossing points between solar cells in EL images. We only require 1-D image statistics for the detection, resulting in an approach that is computationally efficient. In addition, the method is able to detect the modules under perspective distortion and in scenarios where multiple modules are visible in the image. We compare our method to the state of the art and show that it is superior in the presence of perspective distortion, while the performance on images where the module is roughly coplanar to the detector is similar to the reference method. Finally, we show that we greatly improve in terms of computational time in comparison to the reference method. | In recent years, convolutional neural networks (CNNs) have achieved superior performance in many computer vision tasks. For example, single-stage detectors like YOLO @cite_6 yield good detection performance at a tolerable computational cost. Multi-stage object detectors, such as R-CNN @cite_11, achieve even better results but come with an increased computational cost. In contrast to CNN-based approaches, the proposed method does not require any training data and is computationally very efficient. | {
"abstract": [
"We present YOLO, a new approach to object detection. Prior work on object detection repurposes classifiers to perform detection. Instead, we frame object detection as a regression problem to spatially separated bounding boxes and associated class probabilities. A single neural network predicts bounding boxes and class probabilities directly from full images in one evaluation. Since the whole detection pipeline is a single network, it can be optimized end-to-end directly on detection performance. Our unified architecture is extremely fast. Our base YOLO model processes images in real-time at 45 frames per second. A smaller version of the network, Fast YOLO, processes an astounding 155 frames per second while still achieving double the mAP of other real-time detectors. Compared to state-of-the-art detection systems, YOLO makes more localization errors but is less likely to predict false positives on background. Finally, YOLO learns very general representations of objects. It outperforms other detection methods, including DPM and R-CNN, when generalizing from natural images to other domains like artwork.",
"Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30 relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3 . Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn."
],
"cite_N": [
"@cite_6",
"@cite_11"
],
"mid": [
"2963037989",
"2102605133"
]
} | Fast and robust detection of solar modules in electroluminescence images | Over the last decade, photovoltaic (PV) energy has become an important factor in emission-free energy production. In 2016, for example, about 40 GW of PV capacity was installed in Germany, which amounts to nearly one fifth of the total installed electric capacity [3]. Not only in Germany has renewable electricity production been transformed into a considerable business. It is expected that by 2023 about one third of worldwide electricity will come from renewable sources [14]. To ensure high performance of the installed modules, regular inspection by imaging and non-imaging methods is required. For on-site inspection, imaging methods are very useful to find out which modules are defective after signs of decreasing electricity generation have been detected. Typically, on-site inspection of solar modules is performed by infrared (IR) or electroluminescence (EL) imaging. This work focuses on EL imaging. However, it could be adapted to other modalities as well.
A solar module (see Fig. 1) consists of a varying number of solar cells that are placed onto a regular grid. Since cells on a module share a similar structure and cracks are usually spread out only within each cell, it is a natural algorithmic choice to perform detailed inspection on a per cell basis. To this end, an automatic detection of the module and crossing points between cells is required.
Our main contributions are as follows: We propose a method for the detection of solar modules and the crossing points between solar cells in the image. It works irrespective of the module's pose and position. Our method is based on 1-D image statistics, leading to a very fast approach. In addition, we show how this can be extended to situations, where multiple modules are visible in the image. Finally, we compare our method to the state of the art and show that the detection performance is comparable, while the computational time is lowered by a factor of 40.
The remainder of this work is organized as follows: In Sec. 2, we summarize the state of the art in object detection and specifically in the detection of solar modules. In Sections 3 and 4, we introduce our method, which is eventually compared against the state of the art in Sec. 5.
Detection of the module
This work is supposed to be used for EL images of solar modules in different constellations. As shown in Fig. 6, modules might be imaged from different viewpoints. In addition, there might be more than one module visible in the image. In this work, we focus on cases, where one module is fully visible and others might be partially viewed, since this commonly happens, when EL images of modules mounted next to each other are captured in the field. However, this method can be easily adapted to robustly handle different situations. The only assumption we make is that the number of cells in a row and per column is known.
The detection of the module in the image and the localization of crossing points between solar cells is performed in two steps. First, the module is roughly located to obtain an initial guess of a rigid transformation between model and image coordinates. We describe the procedure in Sec. 3.1 and Sec. 3.2. Then, the resulting transform is used to predict coarse locations of crossing points. These locations are then refined as described in Sec. 4.
Detection of a single module
We locate the module by considering 1-D image statistics obtained by summing the image in the x and y directions. This is related, but not equal, to the concept known as integral images [1,13]. Let I denote an EL image of a solar module. Throughout this work, we assume that images are column major, i.e., I[x, y], where x ∈ [1, w] and y ∈ [1, h], denotes a single pixel in column x and row y. Then, the integration over rows is given by
$I_{\Sigma x}[y] = \sum_{x=1}^{w} I[x, y]$ (1)
The sum over columns I Σy is defined similarly. Fig. 2 visualizes the statistics obtained by this summation (blue lines). Since the module is clearly separated from the background by the mean intensity, the location of the module in the image can be easily obtained from I Σx and I Σy . However, we are less interested in the absolute values of the mean intensities than in their change. Therefore, we consider the gradients ∇ σ I Σx and ∇ σ I Σy , where σ denotes a Gaussian smoothing to suppress high frequencies.
Since we are only interested in low frequent changes, we heuristically set σ = 0.01 · max(w, h). As shown in Fig. 2, a left edge of a module is characterized by a maximum in ∇ σ I Σx or ∇ σ I Σy . Similarly, a right edge corresponds to a minimum. In addition, the skewness of the module with respect to the image's y axis corresponds to the width of the minimum and maximum peak in ∇ σ I Σx , whereas the skewness of the module with respect to the x axis corresponds to the peak-widths in ∇ σ I Σy .
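A minimal sketch of these 1-D statistics and their smoothed gradients is given below; it assumes SciPy for the Gaussian smoothing and uses the heuristic σ = 0.01 · max(w, h) stated above. The NumPy axis convention (img[y, x]) and the function name are our own assumptions.

```python
# Sketch: 1-D image statistics and their Gaussian-smoothed gradients.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def profile_gradients(img):
    """img: 2-D NumPy array indexed as img[y, x]. Returns the smoothed
    gradients of the row-wise and column-wise intensity sums."""
    sigma = 0.01 * max(img.shape)
    sum_over_x = img.sum(axis=1)   # one value per row, cf. Eq. (1)
    sum_over_y = img.sum(axis=0)   # one value per column
    grad_x = np.gradient(gaussian_filter1d(sum_over_x, sigma))
    grad_y = np.gradient(gaussian_filter1d(sum_over_y, sigma))
    return grad_x, grad_y
```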
Formally, let x 1 and x 2 denote the location of the maximum and minimum on ∇ σ I Σx , and y 1 and y 2 denote the location of the maximum and minimum on ∇ σ I Σy , respectively. Further, let x 1− and x 1+ denote the pair of points where the peak corresponding to x 1 vanishes. We define two bounding boxes for the module (see Fig. 2) as follows: The outer bounding box is given by
$B_1 = [b_{1,1}, b_{1,2}, b_{1,3}, b_{1,4}] = [(x_{1-}, y_{2+}), (x_{2+}, y_{2+}), (x_{2+}, y_{1-}), (x_{1-}, y_{1-})]$ (2)
while the inner bounding box is given by
$B_2 = [b_{2,1}, b_{2,2}, b_{2,3}, b_{2,4}] = [(x_{1+}, y_{2-}), (x_{2-}, y_{2-}), (x_{2-}, y_{1+}), (x_{1+}, y_{1+})]$ (3)
With these bounding boxes, we obtain a first estimate of the module position. However, it is unclear if b_{1,1} or b_{2,1} corresponds to the left upper corner of the module. The same holds for b_{1,2} versus b_{2,2} and so on. This information is lost by the summation over the image. However, we can easily determine the exact pose of the module. To this end, we consider the sum over the sub-regions between the bounding boxes, cf. Fig. 2. This way, we can identify the four corners {b_1, . . . , b_4} of the module and obtain a rough estimate of the module position and pose. To simplify the detection of crossing points, we assume that the longer side of a non-square module always corresponds to the edges (b_1, b_2) and (b_3, b_4).
Detection of multiple modules
In many on-site applications, multiple modules will be visible in an EL image (see Fig. 3). In these cases, the detection of a single maximum and minimum along each axis will not suffice. To account for this, we need to define, when a point in ∇ σ I Σk , k ∈ {x, y}, will be considered a maximum/minimum. We compute the standard deviation σ k of ∇ σ I Σk and consider every point a maximum, where 2σ k < ∇ σ I Σk and every point a minimum, where −2σ k > ∇ σ I Σk . Then, we apply non maximum/minimum suppression to obtain a single detection per maximum and minimum. As a result, we obtain a sequence of extrema per axis.
Ideally, every minimum is directly followed by a maximum. However, due to false positives this is not always the case.
In this work, we focus on the case, where only one module is fully visible, whereas the others are partially occluded. Since we know that a module in the image corresponds to a maximum followed by a minimum, we can easily identify false positives. We group all maxima and minima that occur sequentially and only keep the one that corresponds to the largest or smallest value in ∇ σ I Σk . Still, we might have multiple pairs of maxima followed by a minimum. We choose the one where the distance between minimum and maximum is maximal. This is a very simple strategy that does not allow to detect more than one module. However, an extension to multiple modules is straightforward.
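The selection strategy described above could be sketched as follows; the thresholding at ±2σ and the simple neighbour-based extremum test follow the text, while the grouping of consecutive extrema is omitted and the helper name is ours, so this is a simplification rather than the exact implementation.

```python
# Sketch: pick the most plausible module interval from a smoothed 1-D gradient
# when several modules are (partially) visible.
import numpy as np

def select_module_interval(grad):
    std = grad.std()
    maxima = [i for i in range(1, len(grad) - 1)
              if grad[i] > 2 * std and grad[i] >= grad[i - 1] and grad[i] >= grad[i + 1]]
    minima = [i for i in range(1, len(grad) - 1)
              if grad[i] < -2 * std and grad[i] <= grad[i - 1] and grad[i] <= grad[i + 1]]
    # a fully visible module corresponds to a maximum (left edge) followed by a
    # minimum (right edge); keep the pair with the largest extent
    pairs = [(a, b) for a in maxima for b in minima if b > a]
    return max(pairs, key=lambda p: p[1] - p[0], default=None)
```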
Detection of cell crossing points
For the detection of cell crossing points, we assert that the module consists of N columns of cells and M rows, where a typical module configuration is N = 10 and M = 6. However, our approach is not limited to that configuration. Without loss of generality, we assume that N ≥ M. With this information, we can define a simple model of the module. It consists of the corners and cell crossings on a regular grid, where the cell size is 1. By definition, the origin of the model coordinate system resides in the upper left corner with the y axis pointing downwards. Hence, every point in the model is given by
$m_{i,j} = (i - 1, j - 1), \quad i \le N,\ j \le M$ (4)
Here, we assume that the longer side of a non-square module always corresponds to edges (b_1, b_2) and (b_3, b_4), and that N ≥ M. Note that this does not limit the approach regarding the orientation of the module since, for example, (b_1, b_2) can define a horizontal or vertical line in the image.
We aim to estimate a transform that converts model coordinates m i,j into image coordinates x i,j , which is done by using a homography matrix H 0 that encodes the relation between the model and image planes. With the four correspondences between the module corners in the model and image planes, we estimate H 0 using the direct linear transform (DLT) [7]. Using H 0 , we obtain an initial guess of the position of each crossing point by
$\tilde{x}_{i,j} \approx H_0 \tilde{m}_{i,j}$ (5)
where the model point m = (x, y) in Cartesian coordinates is converted to its homogeneous representation by m̃ = (x, y, 1). Now, we aim to refine this initial guess by a local search. To this end, we extract a rectified image patch of the local neighborhood around each initial guess (Sec. 4.1). Using the resulting image patches, we apply the detection of cell crossing points (Sec. 4.2). Finally, we detect outliers and re-estimate H 0 to minimize the reprojection error between detected cell crossing points and the corresponding model points (Sec. 4.3).
(Caption of Fig. 4: different crossing types (Fig. 4b and 4c) as well as the corners of a module (cf. Fig. 4a) lead to different responses in the 1-D statistics. We show the accumulated intensities in blue and their gradient in orange.)
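For illustration, a basic (unnormalized) DLT estimate of H_0 from the four corner correspondences could look as follows; in practice a library routine such as OpenCV's cv2.findHomography could be used instead. The code is a sketch with our own function names and expects NumPy arrays of shape (N, 2).

```python
# Sketch: homography estimation via the DLT and projection of model points.
import numpy as np

def dlt_homography(model_pts, image_pts):
    """model_pts, image_pts: (N, 2) arrays of corresponding points, N >= 4."""
    A = []
    for (X, Y), (x, y) in zip(model_pts, image_pts):
        A.append([-X, -Y, -1, 0, 0, 0, x * X, x * Y, x])
        A.append([0, 0, 0, -X, -Y, -1, y * X, y * Y, y])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)       # null-space vector = homography up to scale
    return H / H[2, 2]

def project(H, pts):
    """Apply H to (N, 2) model points and return Cartesian image points."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    proj = pts_h @ H.T
    return proj[:, :2] / proj[:, 2:3]
```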
Extraction of rectified image patches
For the local search, we consider only a small region around the initial guess. By means of the homography H 0 , we have some prior knowledge about the position and pose of the module in the image. We take this into account by warping a region that corresponds to the size of approximately one cell. To this end, we create a regular grid of pixel coordinates. The size of the grid depends on the approximate size of a cell in the image, which is obtained by
$\hat{r}_{i,j} = \frac{\tilde{x}_{i,j} - \tilde{x}_{i+1,j+1}}{2}$ (6)
where the approximation x̃ is given by Eq. (5) and the conversion from homogeneous coordinates $\tilde{x} = (x_1, x_2, x_3)$ to inhomogeneous coordinates is $x = \left(\frac{x_1}{x_3}, \frac{x_2}{x_3}\right)$. Note that the approximation r̂ i,j is only valid in the vicinity of x̃ i,j . The warping is then performed by mapping model coordinates into image coordinates using H 0 followed by sampling the image using bilinear interpolation. As a result, a rectified patch image I i,j is obtained that is coarsely centered at the true cell crossing point, see Fig. 4.
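A possible way to realize this warping with SciPy's bilinear sampling is sketched below; the patch resolution and half-size are illustrative parameters of ours, not values from the paper.

```python
# Sketch: extract a rectified patch of roughly one cell around a model-space
# point by mapping a regular model grid through H and sampling bilinearly.
import numpy as np
from scipy.ndimage import map_coordinates

def extract_patch(img, H, center_model_xy, half_size=0.5, resolution=64):
    cx, cy = center_model_xy
    u = np.linspace(cx - half_size, cx + half_size, resolution)
    v = np.linspace(cy - half_size, cy + half_size, resolution)
    U, V = np.meshgrid(u, v)
    pts = np.stack([U.ravel(), V.ravel(), np.ones(U.size)])
    proj = H @ pts
    x = proj[0] / proj[2]
    y = proj[1] / proj[2]
    # map_coordinates expects (row, col) = (y, x) ordering; order=1 is bilinear
    patch = map_coordinates(img, [y, x], order=1, mode='nearest')
    return patch.reshape(resolution, resolution)
```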
Cell crossing points detection
The detection step for cell crossing points is very similar to the module detection step but with local image patches. It is carried out for every model point m i,j and image patch I i,j to find an estimate x i,j to the (unknown) true image location of m i,j . To simplify notation, we drop the index throughout this section. We compute 1-D image statistics from I to obtain I Σx and I Σy , as well as ∇ σ I Σx and ∇ σ I Σy , as described in Sec. 3.1. The smoothing factor σ is set relative to the image size in the same way as for the module detection.
We find that there are different types of cell crossings that have differing intensity profiles, see Fig. 4. Another challenge is that busbars are hard to distinguish from the ridges (the separating regions between cells), see for example Fig. 4c. Therefore, we cannot consider a single minimum/maximum. We proceed similarly to the approach for the detection of multiple modules (cf. Sec. 3.2). We apply thresholding and non-maximum/non-minimum suppression on ∇ σ I Σx and ∇ σ I Σy to obtain a sequence of maxima and minima along each axis. The threshold is set to 1.5 · σ k , where σ k is the standard deviation of ∇ σ I Σk .
From the location of m in the model grid, we know the type of the target cell crossing. We distinguish between ridges and edges of the module. A cell crossing might consist of both. For example a crossing between two cells on the left border of the module, see Fig. 4b, consists of an edge on the x axis and a ridge on the y axis.
Detection of ridges A ridge is characterized by a minimum in ∇ σ I Σk followed by a maximum. As noted earlier, ridges are hard to distinguish from busbars. Luckily, solar cells are usually built symmetrically. Hence, given that image patches are roughly rectified and that the initial guess to the crossing point is not close to the border of the image patch, it is likely that we observe an even number of busbars. As a consequence, we simply use all minima that are directly followed by a maximum, order them by their position and take the middle. We expect to have an odd number of such sequences (an even number of busbars and the actual ridge we are interested in). In case this heuristic is not applicable, because we found an even number of such sequences, we simply drop this point. The correct position on the respective axis corresponds to the turning point of ∇ σ I Σk .
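The busbar/ridge disambiguation described above can be summarized in a few lines; the data layout of the extrema sequence and the function name are assumptions of ours.

```python
# Sketch of the ridge heuristic: collect every minimum directly followed by a
# maximum; if an odd number of such pairs is found, the middle one is taken as
# the ridge (the remaining, even number of pairs is assumed to be busbars).
def ridge_position(extrema):
    """extrema: list of (index, 'min' or 'max') tuples sorted by index."""
    pairs = [extrema[i][0] for i in range(len(extrema) - 1)
             if extrema[i][1] == 'min' and extrema[i + 1][1] == 'max']
    if len(pairs) % 2 == 0:      # heuristic not applicable -> drop this point
        return None
    return pairs[len(pairs) // 2]
```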
Detection of edges For edges, we distinguish between left/top edges and bottom/right edges of the module. Left/top edges are characterized by a maximum, whereas bottom/right edges correspond to a minimum in ∇ σ I Σk . In case of multiple extrema, we make a heuristic to choose the correct one. We assume that our initial guess is not far off. Therefore, we choose the maximum or minimum that is closest to the center of the patch.
Outlier detection
We chose to apply a fast method to detect the crossing points by considering 1-D image statistics only. As a result, the detected crossing points contain a significant number of outliers. In addition, every detected crossing point exhibits some measurement error. Therefore, we need to identify outliers and find a robust estimate to H that minimizes the overall error. Since H has 8 degrees of freedom, only four point correspondences (m i,j ,x i,j ) are required to obtain a unique solution. On the other hand, a typical module with 10 rows and 6 columns has 77 crossing points. Hence, even if the detection of crossing points failed in a significant number of cases, the number of point correspondences is typically much larger than 4. Therefore, this problem is well suited to be solved by Random Sample Consensus (RANSAC) [4]. We apply RANSAC to find those point correspondences that give the most consistent model. At every iteration t,
we randomly sample four point correspondences and estimate H t using the DLT. For the determination of the consensus set, we treat a point as an outlier if the detected point x i,j and the estimated point H t m̃ i,j differ by more than 5 % of the cell size.
The error of the model H t is given by the following least-squares formulation
$e_t = \frac{1}{N M} \sum_{i,j} \left\| \hat{x}_{i,j} - x_{i,j} \right\|_2^2$ (7)
where x̂ is the current estimate given by the model H in Cartesian coordinates. Finally, we estimate H using all point correspondences from the consensus set to minimize e t .
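A compact sketch of this RANSAC loop is given below. It reuses the dlt_homography and project helpers from the DLT sketch above; the iteration count is an arbitrary choice of ours, and OpenCV's cv2.findHomography with the cv2.RANSAC flag would offer equivalent functionality.

```python
# Sketch: RANSAC over the detected crossing points with an inlier threshold of
# 5 % of the cell size in the image (cell_size_px); names are ours.
import numpy as np

def ransac_homography(model_pts, img_pts, cell_size_px, iters=500):
    thresh = 0.05 * cell_size_px
    best_inliers = np.zeros(len(model_pts), dtype=bool)
    for _ in range(iters):
        idx = np.random.choice(len(model_pts), 4, replace=False)
        H = dlt_homography(model_pts[idx], img_pts[idx])
        err = np.linalg.norm(project(H, model_pts) - img_pts, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # final estimate from all correspondences in the consensus set
    H_final = dlt_homography(model_pts[best_inliers], img_pts[best_inliers])
    return H_final, best_inliers
```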
Experimental results
We conduct a series of experiments to show that our approach is robust w. r. t. to the position and pose of the module in the image as well as to various degrees of distortion of the modules. In Sec. 5.1, we introduce the dataset that we use throughout our experiments. In Sec. 5.2, we quantitatively compare the results of our approach with our reference method [2]. In addition, we show that our method robustly handles cases, where multiple modules are visible in the image or the module is perspectively distorted. Finally, in Sec. 5.3, we compare the computation time of our approach to the state of the art.
Dataset
Deitsch et al. [2] propose a joint detection and segmentation approach for solar modules. In their evaluation, they use two datasets. They report their computational performance on a dataset that consists of 44 modules. We will refer to this dataset as DataA and use it only for the performance evaluation, to obtain results that are easy to compare. In addition, they use a dataset that consists of 8 modules to evaluate their segmentation. We will refer to this data as DataB, see Fig. 6b. The data is publicly available, which allows for a direct comparison of the two methods. However, since we do not apply a pixelwise segmentation, we could not use the segmentation masks they also provided. Instead, we manually added polygonal annotations, where each corner of the polygon corresponds to one of the corners of the module. To assess the performance in different settings, we add two additional datasets. One of them consists of 10 images with multiple modules visible. We deem this setting important, since in on-site applications it is difficult to measure only a single module. We will refer to this as DataC. An example is shown in Fig. 6c. The other consists of 9 images, where the module has been gradually rotated around the y-axis with a step size of 10° starting at 0°. We will refer to this as DataD, see Fig. 6a. We manually added polygonal annotations to DataC and DataD, too. For the EL imaging procedure of DataC and DataD, two different silicon detector CCD cameras with an optical long-pass filter have been used. For the different PV module tilting angles (DataD), a Sensovation "coolSamba HR-320" was used, while for the outdoor PV string measurements a Greateyes "GE BI 2048 2048" was employed (DataC).
Detection results
We are interested in the number of modules that are detected correctly and how accurate the detection is. To assess the detection accuracy, we calculate the intersection over union (IoU) between ground truth polygon and detection. Additionally, we report the recall at different IoU-thresholds. Fig. 5 summarizes the detection results. We see that our method outperforms the reference method on the test dataset provided by Deitsch et al. [2] (DataB) by a small margin. However, the results of the reference method are a little bit more accurate. This can be explained by the fact that they consider lens distortion, while our method only estimates a projective transformation between model and image coordinates. The experiments on DataD assess the robustness of both methods with respect to rotations of the module. We clearly see that our method is considerably robust against rotations, while the reference method requires that the modules are roughly rectified. Finally, we determine the performance of our method, when multiple modules are visible in the image (DataC). The reference method does not support this scenario. It turns out that our method gives very good results when an image shows multiple modules.
In Fig. 6, we visually show the module crossing points estimated using our method. For the rotated modules (DataD), it turns out that the detection fails for 70° and 80° rotation. However, for 60° and less, we consistently achieve good results (see Fig. 6b). Finally, Fig. 6c reveals that the method also works on varying types of modules and in the presence of severe degradation.
Computation time
We determine the computational performance of our method on a workstation equipped with an Intel Xeon E5-1630 CPU running at 3.7 GHz. The method is implemented in Python3 using NumPy and only uses a single thread. We use the same 44 module images that Deitsch et al. [2] have used for their performance evaluation to obtain results that can be compared easily. On average, the 44 images are processed in 15 s, resulting in approximately 340 ms per module. This includes the initialization time of the interpreter and the time for loading the images. The average raw processing time of a single image is about 190 ms.
Deitsch et al. [2] report an overall processing time of 6 min for the 44 images using a multi-threaded implementation. Therefore, a single image amounts to 13.5 s on average. Hence, our method is about 40 times faster than the reference method. On the other hand, the reference method not only detects the cell crossing points but also performs segmentation of the active cell area. In addition, it accounts for lens distortion. This partially justifies the performance difference.
Conclusion
In this work, we have presented a new approach to detect solar modules in EL images. It is based on 1-D image statistics and relates to object detection methods based on integral images. As a result, it can be implemented efficiently, and we are confident that real-time processing of images is feasible. The experiments show that our method is superior in the presence of perspective distortion while performing similarly to the state of the art on non-distorted EL images. Additionally, we show that it is able to deal with scenarios where multiple modules are present in the image.
In future work, the method could be extended to account for complex scenarios where perspective distortion is strong. In these situations, the stability could be improved by a prior rectification of the module, e.g., using the Hough transform to detect the orientation of the module. Since point correspondences between the module and a virtual model of the latter are established, the proposed method could also be extended to calibrate the parameters of a camera model. This would make it possible to take lens distortion into account and to extract undistorted cell images. | 3,914
1907.08451 | 2963562418 | Fast, non-destructive and on-site quality control tools, mainly highly sensitive imaging techniques, are important to assess the reliability of photovoltaic plants. To minimize the risk of further damage and electrical yield losses, electroluminescence (EL) imaging is used to detect local defects, which might cause future electric losses, at an early stage. For an automated defect recognition on EL measurements, a robust detection and rectification of modules, as well as an optional segmentation into cells, is required. This paper introduces a method to detect solar modules and crossing points between solar cells in EL images. We only require 1-D image statistics for the detection, resulting in an approach that is computationally efficient. In addition, the method is able to detect the modules under perspective distortion and in scenarios where multiple modules are visible in the image. We compare our method to the state of the art and show that it is superior in the presence of perspective distortion, while the performance on images where the module is roughly coplanar to the detector is similar to the reference method. Finally, we show that we greatly improve in terms of computational time in comparison to the reference method. | There is little prior work on the automated detection of solar modules. Vetter et al. @cite_3 proposed an object detection pipeline that consists of several stacked filters followed by a Hough transform to detect solar modules in noisy infrared thermography measurements. Recently, Deitsch et al. @cite_1 proposed a processing pipeline for solar modules that jointly detects the modules in an EL image, estimates the configuration (the number of rows and columns of cells), estimates the lens distortion, and performs segmentation into rectified cell images. Their approach consists of a preprocessing step, where a multiscale vesselness filter @cite_8 is used to extract ridges (separating lines between cells) and busbars. Then, parabolic curves are fitted onto the result to obtain a parametric model of the module. Finally, the distortion is estimated and module corners are extracted. Since this is, to the best of our knowledge, the only method that automatically detects solar modules and cell crossing points in EL images, we use it as the reference method to assess the performance of our approach. | {
"abstract": [
"High resolution electroluminescence (EL) images captured in the infrared spectrum allow to visually and non-destructively inspect the quality of photovoltaic (PV) modules. Currently, however, such a visual inspection requires trained experts to discern different kind of defects, which is time-consuming and expensive. In this work, we make an important step towards improving the current state-of-the-art in solar module inspection. We propose a robust automated segmentation method to extract individual solar cells from EL images of PV modules. Automated segmentation of cells is a key step in automating the visual inspection workflow. It also enables controlled studies on large amounts of data to understanding the effects of module degradation over time - a process not yet fully understood. The proposed method infers in several steps a high level solar module representation from low-level edge features. An important step in the algorithm is to formulate the segmentation problem in terms of lens calibration by exploiting the plumbline constraint. We evaluate our method on a dataset of various solar modules types containing a total of 408 solar cells with various defects. Our method robustly solves this task with a median weighted Jaccard index of 96.09 and an @math score of 97.23 .",
"Abstract Local electric defects may result in considerable performance losses in solar cells. Infrared thermography is an essential tool to detect these defects on photovoltaic modules. Accordingly, IR-thermography is frequently used in R&D labs of PV manufactures and, furthermore, outdoors in order to identify faulty modules in PV-power plants. Massive amount of data is acquired which needs to be analyzed. An automatized method for detecting solar modules in IR-images would enable a faster and automatized analysis of the data. However, IR-images tend to suffer from rather large noise, which makes an automatized segmentation challenging. The aim of this study was to establish a reliable segmentation algorithm for R&D labs. We propose an algorithm, which detects a solar cell or module within an IR-image with large noise. We tested the algorithm on images of 10 PV-samples characterized by highly sensitive dark lock-in thermography (DLIT). The algorithm proved to be very reliable in detecting correctly the solar module. In our study, we focused on thin film solar cells, however, a transfer of the algorithm to other cell types is straight forward.",
"The multiscale second order local structure of an image (Hessian) is examined with the purpose of developing a vessel enhancement filter. A vesselness measure is obtained on the basis of all eigenvalues of the Hessian. This measure is tested on two dimensional DSA and three dimensional aortoiliac and cerebral MRA data. Its clinical utility is shown by the simultaneous noise and background suppression and vessel enhancement in maximum intensity projections and volumetric displays."
],
"cite_N": [
"@cite_1",
"@cite_3",
"@cite_8"
],
"mid": [
"2808042620",
"2312404918",
"2129534965"
]
} | Fast and robust detection of solar modules in electroluminescence images | Over the last decade, photovoltaic (PV) energy has become an important factor in emission-free energy production. In 2016, for example, about 40 GW of PV capacity was installed in Germany, which amounts to nearly one fifth of the total installed electric capacity [3]. Not only in Germany has renewable electricity production been transformed into a considerable business. It is expected that by 2023 about one third of worldwide electricity will come from renewable sources [14]. To ensure high performance of the installed modules, regular inspection by imaging and non-imaging methods is required. For on-site inspection, imaging methods are very useful to find out which modules are defective after signs of decreasing electricity generation have been detected. Typically, on-site inspection of solar modules is performed by infrared (IR) or electroluminescence (EL) imaging. This work focuses on EL imaging. However, it could be adapted to other modalities as well.
A solar module (see Fig. 1) consists of a varying number of solar cells that are placed onto a regular grid. Since cells on a module share a similar structure and cracks are usually spread out only within each cell, it is a natural algorithmic choice to perform detailed inspection on a per cell basis. To this end, an automatic detection of the module and crossing points between cells is required.
Our main contributions are as follows: We propose a method for the detection of solar modules and the crossing points between solar cells in the image. It works irrespective of the module's pose and position. Our method is based on 1-D image statistics, leading to a very fast approach. In addition, we show how this can be extended to situations, where multiple modules are visible in the image. Finally, we compare our method to the state of the art and show that the detection performance is comparable, while the computational time is lowered by a factor of 40.
The remainder of this work is organized as follows: In Sec. 2, we summarize the state of the art in object detection and specifically in the detection of solar modules. In Sections 3 and 4, we introduce our method, which is eventually compared against the state of the art in Sec. 5.
Detection of the module
This work is supposed to be used for EL images of solar modules in different constellations. As shown in Fig. 6, modules might be imaged from different viewpoints. In addition, there might be more than one module visible in the image. In this work, we focus on cases, where one module is fully visible and others might be partially viewed, since this commonly happens, when EL images of modules mounted next to each other are captured in the field. However, this method can be easily adapted to robustly handle different situations. The only assumption we make is that the number of cells in a row and per column is known.
The detection of the module in the image and the localization of crossing points between solar cells is performed in two steps. First, the module is roughly located to obtain an initial guess of a rigid transformation between model and image coordinates. We describe the procedure in Sec. 3.1 and Sec. 3.2. Then, the resulting transform is used to predict coarse locations of crossing points. These locations are then refined as described in Sec. 4.
Detection of a single module
We locate the module by considering 1-D image statistics obtained by summing the image in the x and y directions. This is related, but not equal, to the concept known as integral images [1,13]. Let I denote an EL image of a solar module. Throughout this work, we assume that images are column major, i.e., I[x, y], where x ∈ [1, w] and y ∈ [1, h], denotes a single pixel in column x and row y. Then, the integration over rows is given by
$I_{\Sigma x}[y] = \sum_{x=1}^{w} I[x, y]$ (1)
The sum over columns I Σy is defined similarly. Fig. 2 visualizes the statistics obtained by this summation (blue lines). Since the module is clearly separated from the background by the mean intensity, the location of the module in the image can be easily obtained from I Σx and I Σy . However, we are less interested in the absolute values of the mean intensities than in their change. Therefore, we consider the gradients ∇ σ I Σx and ∇ σ I Σy , where σ denotes a Gaussian smoothing to suppress high frequencies.
Since we are only interested in low frequent changes, we heuristically set σ = 0.01 · max(w, h). As shown in Fig. 2, a left edge of a module is characterized by a maximum in ∇ σ I Σx or ∇ σ I Σy . Similarly, a right edge corresponds to a minimum. In addition, the skewness of the module with respect to the image's y axis corresponds to the width of the minimum and maximum peak in ∇ σ I Σx , whereas the skewness of the module with respect to the x axis corresponds to the peak-widths in ∇ σ I Σy .
Formally, let x 1 and x 2 denote the location of the maximum and minimum on ∇ σ I Σx , and y 1 and y 2 denote the location of the maximum and minimum on ∇ σ I Σy , respectively. Further, let x 1− and x 1+ denote the pair of points where the peak corresponding to x 1 vanishes. We define two bounding boxes for the module (see Fig. 2) as follows: The outer bounding box is given by
$B_1 = [b_{1,1}, b_{1,2}, b_{1,3}, b_{1,4}] = [(x_{1-}, y_{2+}), (x_{2+}, y_{2+}), (x_{2+}, y_{1-}), (x_{1-}, y_{1-})]$ (2)
while the inner bounding box is given by
$B_2 = [b_{2,1}, b_{2,2}, b_{2,3}, b_{2,4}] = [(x_{1+}, y_{2-}), (x_{2-}, y_{2-}), (x_{2-}, y_{1+}), (x_{1+}, y_{1+})]$ (3)
With these bounding boxes, we obtain a first estimate of the module position. However, it is unclear if b_{1,1} or b_{2,1} corresponds to the left upper corner of the module. The same holds for b_{1,2} versus b_{2,2} and so on. This information is lost by the summation over the image. However, we can easily determine the exact pose of the module. To this end, we consider the sum over the sub-regions between the bounding boxes, cf. Fig. 2. This way, we can identify the four corners {b_1, . . . , b_4} of the module and obtain a rough estimate of the module position and pose. To simplify the detection of crossing points, we assume that the longer side of a non-square module always corresponds to the edges (b_1, b_2) and (b_3, b_4).
Detection of multiple modules
In many on-site applications, multiple modules will be visible in an EL image (see Fig. 3). In these cases, the detection of a single maximum and minimum along each axis will not suffice. To account for this, we need to define, when a point in ∇ σ I Σk , k ∈ {x, y}, will be considered a maximum/minimum. We compute the standard deviation σ k of ∇ σ I Σk and consider every point a maximum, where 2σ k < ∇ σ I Σk and every point a minimum, where −2σ k > ∇ σ I Σk . Then, we apply non maximum/minimum suppression to obtain a single detection per maximum and minimum. As a result, we obtain a sequence of extrema per axis.
Ideally, every minimum is directly followed by a maximum. However, due to false positives this is not always the case.
In this work, we focus on the case, where only one module is fully visible, whereas the others are partially occluded. Since we know that a module in the image corresponds to a maximum followed by a minimum, we can easily identify false positives. We group all maxima and minima that occur sequentially and only keep the one that corresponds to the largest or smallest value in ∇ σ I Σk . Still, we might have multiple pairs of maxima followed by a minimum. We choose the one where the distance between minimum and maximum is maximal. This is a very simple strategy that does not allow to detect more than one module. However, an extension to multiple modules is straightforward.
Detection of cell crossing points
For the detection of cell crossing points, we assert that the module consists of N columns of cells and M rows, where a typical module configuration is N = 10 and M = 6. However, our approach is not limited to that configuration. Without loss of generality, we assume that N ≥ M. With this information, we can define a simple model of the module. It consists of the corners and cell crossings on a regular grid, where the cell size is 1. By definition, the origin of the model coordinate system resides in the upper left corner with the y axis pointing downwards. Hence, every point in the model is given by
$m_{i,j} = (i - 1, j - 1), \quad i \le N,\ j \le M$ (4)
Here, we assume that the longer side of a non-square module always corresponds to edges (b_1, b_2) and (b_3, b_4), and that N ≥ M. Note that this does not limit the approach regarding the orientation of the module since, for example, (b_1, b_2) can define a horizontal or vertical line in the image.
We aim to estimate a transform that converts model coordinates m i,j into image coordinates x i,j , which is done by using a homography matrix H 0 that encodes the relation between the model and image planes. With the four correspondences between the module corners in the model and image planes, we estimate H 0 using the direct linear transform (DLT) [7]. Using H 0 , we obtain an initial guess of the position of each crossing point by
$\tilde{x}_{i,j} \approx H_0 \tilde{m}_{i,j}$ (5)
where the model point m = (x, y) in Cartesian coordinates is converted to its homogeneous representation by m̃ = (x, y, 1). Now, we aim to refine this initial guess by a local search. To this end, we extract a rectified image patch of the local neighborhood around each initial guess (Sec. 4.1). Using the resulting image patches, we apply the detection of cell crossing points (Sec. 4.2). Finally, we detect outliers and re-estimate H 0 to minimize the reprojection error between detected cell crossing points and the corresponding model points (Sec. 4.3).
(Caption of Fig. 4: different crossing types (Fig. 4b and 4c) as well as the corners of a module (cf. Fig. 4a) lead to different responses in the 1-D statistics. We show the accumulated intensities in blue and their gradient in orange.)
Extraction of rectified image patches
For the local search, we consider only a small region around the initial guess. By means of the homography H 0 , we have some prior knowledge about the position and pose of the module in the image. We take this into account by warping a region that corresponds to the size of approximately one cell. To this end, we create a regular grid of pixel coordinates. The size of the grid depends on the approximate size of a cell in the image, which is obtained by
$\hat{r}_{i,j} = \frac{\tilde{x}_{i,j} - \tilde{x}_{i+1,j+1}}{2}$ (6)
where the approximation x̃ is given by Eq. (5) and the conversion from homogeneous coordinates $\tilde{x} = (x_1, x_2, x_3)$ to inhomogeneous coordinates is $x = \left(\frac{x_1}{x_3}, \frac{x_2}{x_3}\right)$. Note that the approximation r̂ i,j is only valid in the vicinity of x̃ i,j . The warping is then performed by mapping model coordinates into image coordinates using H 0 followed by sampling the image using bilinear interpolation. As a result, a rectified patch image I i,j is obtained that is coarsely centered at the true cell crossing point, see Fig. 4.
Cell crossing points detection
The detection step for cell crossing points is very similar to the module detection step but with local image patches. It is carried out for every model point m i,j and image patch I i,j to find an estimate x i,j to the (unknown) true image location of m i,j . To simplify notation, we drop the index throughout this section. We compute 1-D image statistics from I to obtain I Σx and I Σy , as well as ∇ σ I Σx and ∇ σ I Σy , as described in Sec. 3.1. The smoothing factor σ is set relative to the image size in the same way as for the module detection.
We find that there are different types of cell crossings that have differing intensity profiles, see Fig. 4. Another challenge is that busbars are hard to distinguish from the ridges (the separating regions between cells), see for example Fig. 4c. Therefore, we cannot consider a single minimum/maximum. We proceed similarly to the approach for the detection of multiple modules (cf. Sec. 3.2). We apply thresholding and non-maximum/non-minimum suppression on ∇ σ I Σx and ∇ σ I Σy to obtain a sequence of maxima and minima along each axis. The threshold is set to 1.5 · σ k , where σ k is the standard deviation of ∇ σ I Σk .
From the location of m in the model grid, we know the type of the target cell crossing. We distinguish between ridges and edges of the module. A cell crossing might consist of both. For example a crossing between two cells on the left border of the module, see Fig. 4b, consists of an edge on the x axis and a ridge on the y axis.
Detection of ridges A ridge is characterized by a minimum in ∇ σ I Σk followed by a maximum. As noted earlier, ridges are hard to distinguish from busbars. Luckily, solar cells are usually built symmetrically. Hence, given that image patches are roughly rectified and that the initial guess to the crossing point is not close to the border of the image patch, it is likely that we observe an even number of busbars. As a consequence, we simply use all minima that are directly followed by a maximum, order them by their position and take the middle. We expect to have an odd number of such sequences (an even number of busbars and the actual ridge we are interested in). In case this heuristic is not applicable, because we found an even number of such sequences, we simply drop this point. The correct position on the respective axis corresponds to the turning point of ∇ σ I Σk .
Detection of edges For edges, we distinguish between left/top edges and bottom/right edges of the module. Left/top edges are characterized by a maximum, whereas bottom/right edges correspond to a minimum in ∇ σ I Σk . In case of multiple extrema, we make a heuristic to choose the correct one. We assume that our initial guess is not far off. Therefore, we choose the maximum or minimum that is closest to the center of the patch.
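This closest-to-centre rule can be written down directly; the representation of the candidate extrema and the function name are assumptions of ours.

```python
# Sketch: pick the extremum closest to the patch centre (left/top edges are
# maxima of the smoothed gradient, bottom/right edges are minima).
def edge_position(candidate_indices, patch_size):
    if not candidate_indices:
        return None                      # no edge response found
    centre = patch_size / 2.0
    return min(candidate_indices, key=lambda i: abs(i - centre))
```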
Outlier detection
We chose to apply a fast method to detect the crossing points by considering 1-D image statistics only. As a result, the detected crossing points contain a significant number of outliers. In addition, every detected crossing point exhibits some measurement error. Therefore, we need to identify outliers and find a robust estimate to H that minimizes the overall error. Since H has 8 degrees of freedom, only four point correspondences (m i,j ,x i,j ) are required to obtain a unique solution. On the other hand, a typical module with 10 rows and 6 columns has 77 crossing points. Hence, even if the detection of crossing points failed in a significant number of cases, the number of point correspondences is typically much larger than 4. Therefore, this problem is well suited to be solved by Random Sample Consensus (RANSAC) [4]. We apply RANSAC to find those point correspondences that give the most consistent model. At every iteration t,
we randomly sample four point correspondences and estimate H t using the DLT. For the determination of the consensus set, we treat a point as an outlier if the detected point x i,j and the estimated point H t m̃ i,j differ by more than 5 % of the cell size.
The error of the model H t is given by the following least-squares formulation
$e_t = \frac{1}{N M} \sum_{i,j} \left\| \hat{x}_{i,j} - x_{i,j} \right\|_2^2$ (7)
where x̂ is the current estimate given by the model H in Cartesian coordinates. Finally, we estimate H using all point correspondences from the consensus set to minimize e t .
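The reprojection error of Eq. (7) can be computed directly from the homography and the detected points, for example as in the following self-contained sketch; the function name is ours.

```python
# Sketch: mean squared reprojection error between detected crossing points and
# their positions predicted by the homography H (cf. Eq. (7)).
import numpy as np

def reprojection_error(H, model_pts, detected_pts):
    pts_h = np.hstack([model_pts, np.ones((len(model_pts), 1))])
    proj = pts_h @ H.T
    est = proj[:, :2] / proj[:, 2:3]                  # Cartesian image coordinates
    return float(np.mean(np.sum((est - detected_pts) ** 2, axis=1)))
```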
Experimental results
We conduct a series of experiments to show that our approach is robust w. r. t. to the position and pose of the module in the image as well as to various degrees of distortion of the modules. In Sec. 5.1, we introduce the dataset that we use throughout our experiments. In Sec. 5.2, we quantitatively compare the results of our approach with our reference method [2]. In addition, we show that our method robustly handles cases, where multiple modules are visible in the image or the module is perspectively distorted. Finally, in Sec. 5.3, we compare the computation time of our approach to the state of the art.
Dataset
Deitsch et al. [2] propose a joint detection and segmentation approach for solar modules. In their evaluation, they use two datasets. They report their computational performance on a dataset that consists of 44 modules. We will refer to this dataset as DataA and use it only for the performance evaluation, to obtain results that are easy to compare. In addition, they use a dataset that consists of 8 modules to evaluate their segmentation. We will refer to this data as DataB, see Fig. 6b. The data is publicly available, which allows for a direct comparison of the two methods. However, since we do not apply a pixelwise segmentation, we could not use the segmentation masks they also provided. Instead, we manually added polygonal annotations, where each corner of the polygon corresponds to one of the corners of the module. To assess the performance in different settings, we add two additional datasets. One of them consists of 10 images with multiple modules visible. We deem this setting important, since in on-site applications it is difficult to measure only a single module. We will refer to this as DataC. An example is shown in Fig. 6c. The other consists of 9 images, where the module has been gradually rotated around the y-axis with a step size of 10° starting at 0°. We will refer to this as DataD, see Fig. 6a. We manually added polygonal annotations to DataC and DataD, too. For the EL imaging procedure of DataC and DataD, two different silicon detector CCD cameras with an optical long-pass filter have been used. For the different PV module tilting angles (DataD), a Sensovation "coolSamba HR-320" was used, while for the outdoor PV string measurements a Greateyes "GE BI 2048 2048" was employed (DataC).
Detection results
We are interested in the number of modules that are detected correctly and how accurate the detection is. To assess the detection accuracy, we calculate the intersection over union (IoU) between ground truth polygon and detection. Additionally, we report the recall at different IoU-thresholds. Fig. 5 summarizes the detection results. We see that our method outperforms the reference method on the test dataset provided by Deitsch et al. [2] (DataB) by a small margin. However, the results of the reference method are a little bit more accurate. This can be explained by the fact that they consider lens distortion, while our method only estimates a projective transformation between model and image coordinates. The experiments on DataD assess the robustness of both methods with respect to rotations of the module. We clearly see that our method is considerably robust against rotations, while the reference method requires that the modules are roughly rectified. Finally, we determine the performance of our method, when multiple modules are visible in the image (DataC). The reference method does not support this scenario. It turns out that our method gives very good results when an image shows multiple modules.
In Fig. 6, we visually show the module crossing points estimated using our method. For the rotated modules (DataD), it turns out that the detection fails for 70° and 80° rotation. However, for 60° and less, we consistently achieve good results (see Fig. 6b). Finally, Fig. 6c reveals that the method also works on varying types of modules and in the presence of severe degradation.
Computation time
We determine the computational performance of our method on a workstation equipped with an Intel Xeon E5-1630 CPU running at 3.7 GHz. The method is implemented in Python3 using NumPy and only uses a single thread. We use the same 44 module images that Deitsch et al. [2] have used for their performance evaluation to obtain results that can be compared easily. On average, the 44 images are processed in 15 s, resulting in approximately 340 ms per module. This includes the initialization time of the interpreter and the time for loading the images. The average raw processing time of a single image is about 190 ms.
Deitsch et al. [2] report an overall processing time of 6 min for the 44 images using a multi-threaded implementation. Therefore, a single image amounts to 13.5 s on average. Hence, our method is about 40 times faster than the reference method. On the other hand, the reference method not only detects the cell crossing points but also performs segmentation of the active cell area. In addition, it accounts for lens distortion. This partially justifies the performance difference.
Conclusion
In this work, we have presented a new approach to detect solar modules in EL images. It is based on 1-D image statistics and relates to object detection methods based on integral images. As a result, it can be implemented efficiently, and we are confident that real-time processing of images is feasible. The experiments show that our method is superior in the presence of perspective distortion while performing similarly to the state of the art on non-distorted EL images. Additionally, we show that it is able to deal with scenarios where multiple modules are present in the image.
In future work, the method could be extended to account for complex scenarios where perspective distortion is strong. In these situations, the stability could be improved by a prior rectification of the module, e.g., using the Hough transform to detect the orientation of the module. Since point correspondences between the module and a virtual model of the latter are established, the proposed method could also be extended to calibrate the parameters of a camera model. This would make it possible to take lens distortion into account and to extract undistorted cell images. | 3,914
1907.08195 | 2963385316 | Existing techniques for dynamic scene reconstruction from multiple wide-baseline cameras primarily focus on reconstruction in controlled environments, with fixed calibrated cameras and strong prior constraints. This paper introduces a general approach to obtain a 4D representation of complex dynamic scenes from multi-view wide-baseline static or moving cameras without prior knowledge of the scene structure, appearance, or illumination. Contributions of the work are: An automatic method for initial coarse reconstruction to initialize joint estimation; Sparse-to-dense temporal correspondence integrated with joint multi-view segmentation and reconstruction to introduce temporal coherence; and a general robust approach for joint segmentation refinement and dense reconstruction of dynamic scenes by introducing shape constraint. Comparison with state-of-the-art approaches on a variety of complex indoor and outdoor scenes, demonstrates improved accuracy in both multi-view segmentation and dense reconstruction. This paper demonstrates unsupervised reconstruction of complete temporally coherent 4D scene models with improved non-rigid object segmentation and shape reconstruction and its application to free-viewpoint rendering and virtual reality. | A recent work proposed reconstruction of dynamic fluids @cite_51 for static cameras. Another work used RGB-D cameras to obtain reconstruction of non-rigid surfaces @cite_52 . Pioneering research in general dynamic scene reconstruction from multiple handheld wide-baseline cameras @cite_31 @cite_34 exploited prior reconstruction of the background scene to allow dynamic foreground segmentation and reconstruction. Recent work @cite_11 estimates shape of dynamic objects from handheld cameras exploiting GANs. However these approaches either work for static indoor scenes or exploit strong prior assumptions such as silhouette information, known background or scene structure. Also all these approaches give per frame reconstruction leading to temporally incoherent geometries. Our aim is to perform temporally coherent dense reconstruction of unknown dynamic non-rigid scenes automatically without strong priors or limitations on scene structure. | {
"abstract": [
"We introduce a geometry-driven approach for real-time 3D reconstruction of deforming surfaces from a single RGB-D stream without any templates or shape priors. To this end, we tackle the problem of non-rigid registration by level set evolution without explicit correspondence search. Given a pair of signed distance fields (SDFs) representing the shapes of interest, we estimate a dense deformation field that aligns them. It is defined as a displacement vector field of the same resolution as the SDFs and is determined iteratively via variational minimization. To ensure it generates plausible shapes, we propose a novel regularizer that imposes local rigidity by requiring the deformation to be a smooth and approximately Killing vector field, i.e. generating nearly isometric motions. Moreover, we enforce that the level set property of unity gradient magnitude is preserved over iterations. As a result, KillingFusion reliably reconstructs objects that are undergoing topological changes and fast inter-frame motion. In addition to incrementally building a model from scratch, our system can also deform complete surfaces. We demonstrate these capabilities on several public datasets and introduce our own sequences that permit both qualitative and quantitative comparison to related approaches.",
"We present an algorithm designed for navigating around a performance that was filmed as a \"casual\" multi-view video collection: real-world footage captured on hand held cameras by a few audience members. The objective is to easily navigate in 3D, generating a video-based rendering (VBR) of a performance filmed with widely separated cameras. Casually filmed events are especially challenging because they yield footage with complicated backgrounds and camera motion. Such challenging conditions preclude the use of most algorithms that depend on correlation-based stereo or 3D shape-from-silhouettes. Our algorithm builds on the concepts developed for the exploration of photo-collections of empty scenes. Interactive performer-specific view-interpolation is now possible through innovations in interactive rendering and offline-matting relating to i) modeling the foreground subject as video-sprites on billboards, ii) modeling the background geometry with adaptive view-dependent textures, and iii) view interpolation that follows a performer. The billboards are embedded in a simple but realistic reconstruction of the environment. The reconstructed environment provides very effective visual cues for spatial navigation as the user transitions between viewpoints. The prototype is tested on footage from several challenging events, and demonstrates the editorial utility of the whole system and the particular value of our new inter-billboard optimization.",
"Dynamic scene modeling is a challenging problem in computer vision. Many techniques have been developed in the past to address such a problem but most of them focus on achieving accurate reconstructions in controlled environments, where the background and the lighting are known and the cameras are fixed and calibrated. Recent approaches have relaxed these requirements by applying these techniques to outdoor scenarios. The problem however becomes even harder when the cameras are allowed to move during the recording since no background color model can be easily inferred. In this paper we propose a new approach to model dynamic scenes captured in outdoor environments with moving cameras. A probabilistic framework is proposed to deal with such a scenario and to provide a volumetric reconstruction of all the dynamic elements of the scene. The proposed algorithm was tested on a publicly available dataset filmed outdoors with six moving cameras. A quantitative evaluation of the method was also performed on synthetic data. The obtained results demonstrated the effectiveness of the approach considering the complexity of the problem.",
"3D Reconstruction of dynamic fluid surfaces is an open and challenging problem in computer vision. Unlike previous approaches that reconstruct each surface point independently and often return noisy depth maps, we propose a novel global optimization-based approach that recovers both depths and normals of all 3D points simultaneously. Using the traditional refraction stereo setup, we capture the wavy appearance of a pre-generated random pattern, and then estimate the correspondences between the captured images and the known background by tracking the pattern. Assuming that the light is refracted only once through the fluid interface, we minimize an objective function that incorporates both the cross-view normal consistency constraint and the single-view normal consistency constraints. The key idea is that the normals required for light refraction based on Snells law from one view should agree with not only the ones from the second view, but also the ones estimated from local 3D geometry. Moreover, an effective reconstruction error metric is designed for estimating the refractive index of the fluid. We report experimental results on both synthetic and real data demonstrating that the proposed approach is accurate and shows superiority over the conventional stereo-based method.",
"Reflectance and shape are two important components in visually perceiving the real world. Inferring the reflectance and shape of an object through cameras is a fundamental research topic in the field of computer vision. While three-dimensional shape recovery is pervasive with varieties of approaches and practical applications, reflectance recovery has only emerged recently. Reflectance recovery is a challenging task that is usually conducted in controlled environments, such as a laboratory environment with a special apparatus. However, it is desirable that the reflectance be recovered in the field with a handy camera so that reflectance can be jointly recovered with the shape. To that end, we present a solution that simultaneously recovers the reflectance and shape (i.e., dense depth and normal maps) of an object under natural illumination with commercially available handy cameras. We employ a light field camera to capture one light field image of the object, and a 360-degree camera to capture the illumination. The proposed method provides positive results in both simulation and real-world experiments."
],
"cite_N": [
"@cite_52",
"@cite_31",
"@cite_34",
"@cite_51",
"@cite_11"
],
"mid": [
"2736384647",
"2146614659",
"1797946653",
"2738427683",
"2894205709"
]
} | Temporally coherent general dynamic scene reconstruction | Fig. 1 Temporally consistent scene reconstruction for the Odzemok dataset, color-coded to show the scene object segmentation obtained. Temporally coherent 4D reconstruction of dynamic scenes is required for effects in film and broadcast production and for content production in virtual reality. The ultimate goal of modelling dynamic scenes from multiple cameras is automatic understanding of real-world scenes from distributed camera networks, for applications in robotics and other autonomous systems. Existing methods have applied multiple view dynamic scene reconstruction techniques in controlled environments with known background or chroma-key studio [23,20,56,60]. Other multiple view stereo techniques require a relatively dense static camera network resulting in a large number of cameras [19]. Extensions to more general outdoor scenes [5,32,60] use prior reconstruction of the static geometry from images of the empty environment. However, these methods either require accurate segmentation of dynamic foreground objects, or prior knowledge of the scene structure and background, or are limited to static cameras and controlled environments. Scenes are reconstructed semi-automatically, requiring manual intervention for segmentation/rotoscoping, and result in temporally incoherent per-frame mesh geometries. Temporally coherent geometry with known surface correspondence across the sequence is essential for real-world applications and compact representation.
Our paper addresses the limitations of existing approaches by introducing a methodology for unsupervised temporally coherent dynamic scene reconstruction from multiple wide-baseline static or moving camera views without prior knowledge of the scene structure or background appearance. This temporally coherent dynamic scene reconstruction is demonstrated to work in applications for immersive content production such as free-viewpoint video (FVV) and virtual reality (VR). This work combines two previously published papers in general dynamic reconstruction [42] and temporally coherent reconstruction [43] into a single framework and demonstrates the application of this novel unsupervised joint segmentation and reconstruction to immersive content production (FVV and VR, Section 5).
The input is a sparse set of synchronised videos from multiple moving cameras of an unknown dynamic scene without prior scene segmentation or camera calibration. Our first contribution is automatic initialisation of camera calibration and sparse scene reconstruction from sparse feature correspondence using sparse feature detection and matching between pairs of frames. An initial coarse reconstruction and segmentation of all scene objects is obtained from sparse features matched across multiple views. This eliminates the requirement for prior knowledge of the background scene appearance or structure. Our second contribution is a sparse-to-dense reconstruction and segmentation approach to introduce temporal coherence for every frame. We exploit temporal coherence of the scene to overcome visual ambiguities inherent in single frame reconstruction and multiple view segmentation methods for general scenes. Temporal coherence refers to the correspondence between the 3D surfaces of all objects observed over time. Our third contribution is a spatio-temporal alignment to estimate dense surface correspondence for 4D reconstruction. A geodesic star convexity shape constraint is introduced for the shape segmentation to improve the quality of segmentation for non-rigid objects with complex appearance. The proposed approach overcomes the limitations of existing methods allowing an unsupervised temporally coherent 4D reconstruction of complete models for general dynamic scenes.
The scene is automatically decomposed into a set of spatio-temporally coherent objects as shown in Figure 1 where the resulting 4D scene reconstruction has temporally coherent labels and surface correspondence for each object. This can be used for free-viewpoint video rendering and imported to a game engine for VR experience production. The contributions explained above can be summarized as follows:
- Unsupervised temporally coherent dense reconstruction and segmentation of general complex dynamic scenes from multiple wide-baseline views.
- Automatic initialization of dynamic object segmentation and reconstruction from sparse features.
- A framework for space-time sparse-to-dense segmentation, reconstruction and temporal correspondence.
- Robust spatio-temporal refinement of dense reconstruction and segmentation integrating error tolerant photo-consistency and edge information using geodesic star convexity.
- Robust and computationally efficient reconstruction of dynamic scenes by exploiting temporal coherence.
- Real-world applications of 4D reconstruction to free-viewpoint video rendering and virtual reality.
This paper is structured as follows: First related work is reviewed. The methodology for general dynamic scene reconstruction is then introduced. Finally a thorough qualitative and quantitative evaluation and comparison to the state-of-the-art on challenging datasets is presented.
Related Work
Temporally coherent reconstruction is a challenging task for general dynamic scenes due to a number of factors such as motion blur, articulated, non-rigid and large motion of multiple people, resolution differences between camera views, occlusions, wide-baselines, errors in calibration and cluttered dynamic backgrounds. Segmentation of dynamic objects from such scenes is difficult because of foreground and background complexity and the likelihood of overlapping background and foreground color distributions. Reconstruction is also challenging due to limited visual cues and relatively large errors affecting both calibration and extraction of a globally consistent solution. This section reviews previous work on dynamic scene reconstruction and segmentation.
Dynamic Scene Reconstruction
Dense dynamic shape reconstruction is a fundamental problem and heavily studied area in the field of computer vision. Recovering accurate 3D models of a dynamically evolving, non-rigid scene observed by multiple synchronised cameras is a challenging task. Research on multiple view dense dynamic reconstruction has primarily focused on indoor scenes with controlled illumination and static backgrounds, extending methods for multiple view reconstruction of static scenes [53] to sequences [62]. Deep learning based approaches have been introduced to estimate the shape of dynamic objects from minimal camera views in constrained environments [29,68] and for rigid objects [58]. In the last decade, focus has shifted to more challenging outdoor scenes captured with both static and moving cameras. Reconstruction of non-rigid dynamic objects in uncontrolled natural environments is challenging due to the scene complexity, illumination changes, shadows, occlusion and dynamic backgrounds with clutter such as trees or people. Methods have been proposed for multi-view reconstruction [65,39,37] requiring a large number of closely spaced cameras for surface estimation of dynamic shape. Practical applications require relatively sparse moving cameras to acquire coverage over large areas such as outdoor scenes. A number of approaches for multi-view reconstruction of outdoor scenes require initial silhouette segmentation [67,32,22,23] to allow visual-hull reconstruction. Most of these approaches to general dynamic scene reconstruction fail in the case of complex (cluttered) scenes captured with moving cameras.
A recent work proposed reconstruction of dynamic fluids [50] for static cameras. Another work used RGB-D cameras to obtain reconstruction of non-rigid surfaces [55]. Pioneering research in general dynamic scene reconstruction from multiple handheld wide-baseline cameras [5,60] exploited prior reconstruction of the background scene to allow dynamic foreground segmentation and reconstruction. Recent work [46] estimates the shape of dynamic objects from handheld cameras exploiting GANs. However, these approaches either work for static/indoor scenes or exploit strong prior assumptions such as silhouette information, known background or scene structure. Also, all these approaches give per-frame reconstructions, leading to temporally incoherent geometries. Our aim is to perform temporally coherent dense reconstruction of unknown dynamic non-rigid scenes automatically without strong priors or limitations on scene structure.
Joint Segmentation and Reconstruction
Many of the existing multi-view reconstruction approaches rely on a two-stage sequential pipeline where foreground or background segmentation is initially performed independently with respect to each camera, and then used as input to obtain a visual hull for multi-view reconstruction. The problem with this approach is that the errors introduced at the segmentation stage cannot be recovered and are propagated to the reconstruction stage, reducing the final reconstruction quality. Segmentation from multiple wide-baseline views has been proposed by exploiting appearance similarity [17,38,70]. These approaches assume static backgrounds and different colour distributions for the foreground and background [52,17], which limits applicability for general scenes.
Joint segmentation and reconstruction methods incorporate estimation of segmentation or matting with reconstruction to provide a combined solution. Joint refinement avoids the propagation of errors between the two stages thereby making the solution more robust. Also, cues from segmentation and reconstruction can be combined efficiently to achieve more accurate results. The first multi-view joint estimation system was proposed by Szeliski et al. [59] which used iterative gradient descent to perform an energy minimization. A number of approaches were introduced for joint formulation in static scenes and one recent work used training data to classify the segments [69]. The focus shifted to joint segmentation and reconstruction for rigid objects in indoor and outdoor environments. These approaches used a variety of techniques such as patch-based refinement [54,48] and fixating cameras on the object of interest [11] for reconstructing rigid objects in the scene. However, these are either limited to static scenes [69,26] or process each frame independently thereby failing to enforce temporal consistency [11,23].
Joint reconstruction and segmentation on monocular video was proposed in [36,3,12] achieving semantic segmentation of scene limited to rigid objects in street scenes. Practical application of joint estimation requires these approaches to work on non-rigid objects such as humans with clothing. A multi-layer joint segmentation and reconstruction approach was proposed for multiple view video of sports and indoor scenes [23]. The algorithm used known background images of the scene without the dynamic foreground objects to obtain an initial segmentation. Visual-hull based reconstruction was performed with known prior foreground/background using a background image plate with fixed and calibrated cameras. This visual hull was used as a prior and was optimized by a combination of photo-consistency, silhouette, color and sparse feature information in an energy minimization framework to improve the segmentation and reconstruction quality. Although structurally similar to our approach, it requires the scene to be captured by fixed calibrated cameras and a priori known fixed background plate as a prior to estimate the initial visual hull by background subtraction. The proposed approach overcomes these limitations allowing moving cameras and unknown scene backgrounds.
An approach based on optical flow and graph cuts was shown to work well for non-rigid objects in indoor settings but requires known background segmentation to obtain silhouettes and is computationally expensive [24]. Practical application of temporally coherent joint estimation requires approaches that work on non-rigid objects for general scenes in uncontrolled environments. A quantitative evaluation of techniques for multi-view reconstruction was presented in [53]. These methods are able to produce high quality results, but rely on good initializations and strong prior assumptions with known and controlled (static) scene backgrounds.
The proposed method exploits the advantages of joint segmentation and reconstruction and addresses the limitations of existing methods by introducing a novel approach to reconstruct general dynamic scenes automatically from wide-baseline cameras with no prior. To overcome the limitations of existing methods, the proposed approach automatically initialises the foreground object segmentation from wide-baseline correspondence without prior knowledge of the scene. This is followed by a joint spatio-temporal reconstruction and segmentation of general scenes. Temporal correspondence is exploited to overcome visual ambiguities giving improved reconstruction together with temporal coherence of surface correspondence to obtain 4D scene models.
Temporal coherent 4D Reconstruction
Temporally coherent 4D reconstruction refers to aligning the 3D surfaces of non-rigid objects over time for a dynamic sequence. This is achieved by estimating point-to-point correspondences for the 3D surfaces to obtain 4D temporally coherent reconstruction. 4D models allow efficient representations for practical applications in film, broadcast and immersive content production such as virtual, augmented and mixed reality. The majority of existing approaches for reconstruction of dynamic scenes from multi-view videos process each time frame independently due to the difficulty of simultaneously estimating temporal correspondence for non-rigid objects. Independent per-frame reconstruction can result in errors due to the inherent visual ambiguity caused by occlusion and similar object appearance for general scenes. Recent research has shown that exploiting temporal information can improve reconstruction accuracy as well as achieving temporal coherence [43].
3D scene flow estimates frame to frame correspondence whereas 4D temporal coherence estimates correspondence across the complete sequence to obtain a single surface model. Methods to estimate 3D scene flow have been reported in the literature [41] for autonomous vehicles. However this approach is limited to narrow baseline cameras. Other scene flow approaches are dependent on 2D optical flow [66,6] and they require an accurate estimate for most of the pixels which fails in the case of large motion. However, 3D scene flow methods align two frames independently and do not produce temporally coherent 4D models.
Research investigating spatio-temporal reconstruction across multiple frames was proposed by [20,37,24] exploiting the temporal information from the previous frames using optical flow. An approach for recovering space-time consistent depth maps from multiple video sequences captured by stationary, synchronized and calibrated cameras for depth based free viewpoint video rendering was proposed by [39]. However, these methods require accurate initialisation, fixed and calibrated cameras and are limited to simple scenes. Other approaches to temporally coherent reconstruction [4] either require a large number of closely spaced cameras or bi-layer segmentation [72,30] as a constraint for reconstruction. Recent approaches for spatio-temporal reconstruction of multi-view data are limited to indoor studio data [47].
The framework proposed in this paper addresses limitations of existing approaches and gives 4D temporally coherent reconstruction for general dynamic indoor or outdoor scenes with large non-rigid motions, repetitive texture, uncontrolled illumination, and large capture volume. The scenes are captured with sparse static/moving cameras. The proposed approach gives 4D models of complete scenes with both static and dynamic objects for real-world applications (FVV and VR) with no prior knowledge of scene structure.
Multi-view Video Segmentation
In the field of image segmentation, approaches have been proposed to provide temporally consistent monocular video segmentation [21,49,45,71]. Hierarchical segmentation based on graphs was proposed in [21], and directed acyclic graphs were used to propose an object followed by segmentation [71]. Optical flow is used to identify and consistently segment objects [45,49]. Recently a number of approaches have been proposed for multi-view foreground object segmentation by exploiting appearance similarity spatially across views [16,35,38,70]. An approach for space-time multi-view segmentation was proposed by [17]. However, multi-view approaches assume a static background and different colour distributions for the foreground and background, which limits applicability for general scenes and non-rigid objects.
To address this issue we introduce a novel method for spatio-temporal multi-view segmentation of dynamic scenes using shape constraints. Single image segmentation techniques using shape constraints provide good results for complex scene segmentation [25] (convex and concave shapes), but require manual interaction. The proposed approach performs automatic multi-view video segmentation by initializing the foreground object model using spatio-temporal information from wide-baseline feature correspondence followed by a multi-layer optimization framework. Geodesic star convexity, previously used in single view segmentation [25], is applied to constrain the segmentation in each view. Our multi-view formulation naturally enforces coherent segmentation between views and also resolves ambiguities such as the similarity of background and foreground in isolated views.
Summary and Motivation
Image-based temporally coherent 4D dynamic scene reconstruction without a prior model or constraints on the scene structure is a key problem in computer vision. Existing dense reconstruction algorithms need some strong initial prior and constraints for the solution to converge such as background, structure, and segmentation, which limits their application for automatic reconstruction of general scenes. Current approaches are also commonly limited to independent per-frame reconstruction and do not exploit temporal information or produce a coherent model with known correspondence.
The approach proposed in this paper aims to overcome the limitations of existing approaches to enable robust temporally coherent wide-baseline multiple view reconstruction of general dynamic scenes without prior assumptions on scene appearance, structure or segmentation of the moving objects. Static and dynamic objects in the scene are identified for simultaneous segmentation and reconstruction using geometry and appearance cues in a sparse-to-dense optimization framework. Temporal coherence is introduced to improve the quality of the reconstruction and geodesic star convexity is used to improve the quality of segmentation. The static and dynamic elements are fused automatically in both the temporal and spatial domain to obtain the final 4D scene reconstruction.
This paper presents a unified framework, novel in combining multiple view joint reconstruction and segmentation with temporal coherence to improve per-frame reconstruction performance, building the initial work presented in [43,42] into a single framework. In particular the approach gives a 4D surface model with full correspondence over time. A comprehensive experimental evaluation with comparison to the state-of-the-art in segmentation, reconstruction and 4D modelling is also presented, extending previous work. Application of the resulting 4D models to free-viewpoint video rendering and content production for immersive virtual reality experiences is also presented.
Methodology
This work is motivated by the limitations of existing multiple view reconstruction methods which either work independently at each frame, resulting in errors due to visual ambiguity [19,23], or require restrictive assumptions on scene complexity and structure and often assume prior camera calibration and foreground segmentation [60,24]. We address these issues by initializing the joint reconstruction and segmentation algorithm automatically, introducing temporal coherence in the reconstruction and geodesic star convexity in segmentation to reduce ambiguity and ensure consistent non-rigid structure initialization at successive frames. The proposed approach is demonstrated to achieve improved reconstruction and segmentation performance over state-of-the-art approaches and produce temporally coherent 4D models of complex dynamic scenes.
Overview
An overview of the proposed framework for temporally coherent multi-view reconstruction is presented in Figure 2 and consists of the following stages:
Multi-view video: The scenes are captured using multiple video cameras (static/moving) separated by wide-baseline (> 15°). The cameras can be synchronized during the capture using a time-code generator or later using the audio information. Camera extrinsic calibration and scene structure are assumed to be unknown.
Sparse reconstruction: The intrinsics are assumed to be known. Segmentation based feature detection (SFD) [44] is used to obtain a relatively large number of sparse features suitable for wide-baseline matching which are distributed throughout the scene including on dynamic objects such as people. SFD features are matched between views using a SIFT descriptor giving camera extrinsics and a sparse 3D point-cloud for each time instant for the entire sequence [27].
Initial scene segmentation and reconstruction - Section 3.2: Automatic initialisation is performed without prior knowledge of the scene structure or appearance to obtain an initial approximation for each object. The sparse point cloud is clustered in 3D [51] with each cluster representing a unique foreground object. Object segmentation increases efficiency and improves robustness of 4D models. This reconstruction is refined using the framework explained in Section 3.4 to obtain segmentation and dense reconstruction of each object.
Sparse-to-dense temporal reconstruction with temporal coherence - Section 3.3: Temporal coherence is introduced in the framework to initialize the coarse reconstruction and obtain frame-to-frame dense correspondences for dynamic objects. Dynamic object regions are detected at each time instant by sparse temporal correspondence of SFD features at successive frames. Sparse temporal feature correspondence allows propagation of the dense reconstruction for each dynamic object to obtain an initial approximation.
Joint object-based sparse-to-dense temporally coherent refinement of shape and segmentation - Section 3.4: The initial estimate is refined for each object per-view in the scene through joint optimisation of shape and segmentation using a robust cost function combining matching, color, contrast and smoothness information for wide-baseline matching with a geodesic star convexity constraint. A single 3D model for each dynamic object is obtained by fusion of the view-dependent depth maps using Poisson surface reconstruction [31]. Surface orientation is estimated based on neighbouring pixels.
Applications - Section 5: The 4D representation from the proposed joint segmentation and reconstruction framework has a number of applications in media production, including free-viewpoint video (FVV) rendering and virtual reality (VR).
The process above is repeated for the entire sequence for all objects in the first frame and for dynamic objects at each time-instant. The proposed approach enables automatic reconstruction of all objects in the scene as a 4D mesh sequence. Subsequent sections present the novel contributions of this work in initialisation and refinement to obtain a dense temporally coherent reconstruction. The approach is demonstrated to outperform previous approaches to dynamic scene reconstruction and does not require prior knowledge of the scene.
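The per-frame processing loop described above can be summarised in code. The sketch below only illustrates the control flow under stated assumptions: the stage functions (feature detection and matching, clustering, temporal propagation, joint refinement) are injected as callables and are not the authors' implementation; all names and signatures are hypothetical placeholders.

```python
"""Illustrative orchestration of the per-frame pipeline described above."""
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List, Optional


@dataclass
class SceneState:
    """Per-frame result: one mesh per object label (illustrative container)."""
    frame_index: int
    object_meshes: Dict[int, Any] = field(default_factory=dict)


def reconstruct_sequence(
    frames: List[Any],
    detect_and_match: Callable[[Any], Any],            # sparse SFD features + matches
    cluster_objects: Callable[[Any], Dict[int, Any]],  # label -> coarse initialisation
    propagate_temporal: Callable[[SceneState, Any], Dict[int, Any]],  # dynamic objects only
    joint_refine: Callable[[Any, Any], Any],           # joint segmentation + reconstruction
) -> List[SceneState]:
    states: List[SceneState] = []
    prev: Optional[SceneState] = None
    for t, frame in enumerate(frames):
        sparse = detect_and_match(frame)
        if prev is None:
            # First frame: initialise and reconstruct every object in the scene.
            coarse = cluster_objects(sparse)
        else:
            # Later frames: only dynamic or newly appearing objects are updated.
            coarse = propagate_temporal(prev, sparse)
        state = SceneState(frame_index=t)
        for label, init in coarse.items():
            state.object_meshes[label] = joint_refine(frame, init)
        if prev is not None:
            # Static objects are carried over unchanged for efficiency.
            for label, mesh in prev.object_meshes.items():
                state.object_meshes.setdefault(label, mesh)
        states.append(state)
        prev = state
    return states
```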
Initial Scene Segmentation and Reconstruction
For general dynamic scene reconstruction, we need to reconstruct and segment the objects in the scene. This requires an initial coarse approximation for initialisation of a subsequent refinement step to optimise the segmentation and reconstruction with respect to each camera view. We introduce an approach based on sparse point cloud clustering; an overview is shown in Figure 3. Initialisation gives a complete coarse segmentation and reconstruction of each object in the first frame of the sequence for subsequent refinement. The dense reconstructions of the foreground objects and background are combined to obtain a full scene reconstruction at the first time instant. A rough geometric proxy of the background is created. For consecutive time instants, dynamic objects and newly appearing objects are identified and only these objects are reconstructed and segmented. The reconstruction of static objects is retained, which reduces computational complexity. The optic flow and cluster information for each dynamic object ensure that the same labels are retained for the entire sequence.
Sparse Point-cloud Clustering
The sparse representation of the scene is processed using point neighbourhood statistics to filter outlier data [51]. We segment the objects in the sparse scene reconstruction; this allows only moving objects to be reconstructed at each frame for efficiency, and also allows object shape similarity to be propagated across frames to increase robustness of reconstruction.
We use a data clustering approach based on 3D grid subdivision of the space using an octree data structure in Euclidean space to segment objects at each frame. In a more general sense, nearest neighbour information is used to cluster, which is essentially similar to a flood-fill algorithm. We choose this clustering method because of its computational efficiency and robustness. The approach allows segmentation of objects in the scene and is demonstrated to work well for cluttered and general outdoor scenes as shown in Section 4.
Objects with insufficient detected features are reconstructed as part of the scene background. Appearing, disappearing and reappearing objects are handled by sparse dynamic feature tracking, explained in Section 3.3. Clustering results are shown in Figure 3. This is followed by a sparse-to-dense coarse object based approach to segment and reconstruct general dynamic scenes.
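As a concrete illustration of the Euclidean clustering idea (grouping nearby sparse 3D points into per-object clusters by flood-filling a radius-neighbour graph), the following minimal sketch uses a kd-tree rather than the octree mentioned above; the radius and minimum cluster size are illustrative values, not the paper's parameters.

```python
import numpy as np
from scipy.spatial import cKDTree


def euclidean_clusters(points: np.ndarray, radius: float, min_size: int = 20):
    """Group 3D points into clusters by flood-filling the radius-neighbour graph.

    points: (N, 3) array of sparse 3D feature positions.
    radius: two points closer than this end up in the same cluster.
    Returns a list of index arrays, one per cluster (small clusters dropped).
    """
    tree = cKDTree(points)
    unvisited = np.ones(len(points), dtype=bool)
    clusters = []
    for seed in range(len(points)):
        if not unvisited[seed]:
            continue
        # breadth-first flood fill over radius neighbours
        frontier, members = [seed], [seed]
        unvisited[seed] = False
        while frontier:
            idx = frontier.pop()
            for nb in tree.query_ball_point(points[idx], r=radius):
                if unvisited[nb]:
                    unvisited[nb] = False
                    frontier.append(nb)
                    members.append(nb)
        if len(members) >= min_size:
            clusters.append(np.array(members))
    return clusters


# toy usage: two well-separated blobs are recovered as two clusters
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0, 0.1, (100, 3)), rng.normal(5, 0.1, (100, 3))])
print([len(c) for c in euclidean_clusters(pts, radius=0.5)])
```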
Coarse Object Reconstruction
The process to obtain the coarse reconstruction for the first frame of the sequence is shown in Figure 4. The sparse representation of each element is back-projected on the rectified image pair for each view. Delaunay triangulation [18] is performed on the set of back-projected points for each cluster on one image and is propagated to the second image using the sparse matched features. Triangles with edge length greater than the median length of edges of all triangles are removed. For each remaining triangle pair, a direct linear transform is used to estimate the affine homography. Displacement at each pixel within the triangle pair is estimated by interpolation to get an initial dense disparity map for each cluster in the 2D image pair, labelled as R_I and depicted in red in Figure 4. The initial coarse reconstruction for the observed objects in the scene is used to define the depth hypotheses at each pixel for the optimization.
The region R_I does not ensure complete coverage of the object, so we extrapolate this region to obtain a region R_O (shown in yellow) in 2D by 5% of the average distance between the boundary points (R_I) and the centroid of the object. To allow for errors in the initial approximate depth from sparse features we add volume in front of and behind the projected surface by an error tolerance, along the optical ray of the camera. This ensures that the object boundaries lie within the extrapolated initial coarse estimate; the depth at each pixel of the combined regions may not be accurate. The tolerance for extrapolation varies depending on whether a pixel belongs to R_I or R_O, as the propagated pixels of the extrapolated region (R_O) may have a higher level of error than the points from the sparse representation (R_I), requiring a comparatively higher tolerance. The calculation of the threshold depends on the capture volume of the dataset and is set to 1% of the capture volume for R_O and half that value for R_I. This volume in 3D corresponds to our initial coarse reconstruction of each object and enables us to remove the dependency of the existing approaches on a background plate and visual hull estimates. This process of cluster identification and initial coarse object reconstruction is performed for multiple objects in general environments. Initial object segmentation using point cloud clustering and coarse segmentation is insensitive to parameters. Throughout this work the same parameters are used for all datasets. The result of this process is a coarse initial object segmentation and reconstruction for each object.
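A minimal sketch of the sparse-to-dense interpolation step is given below. For brevity it uses SciPy's Delaunay-based piecewise-linear interpolation of per-feature disparities instead of the per-triangle affine homographies estimated by direct linear transform in the paper; the image size and feature values are synthetic.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator


def densify_disparity(sparse_xy: np.ndarray, sparse_disp: np.ndarray, shape):
    """Interpolate sparse feature disparities to a dense map over the image.

    sparse_xy  : (N, 2) pixel coordinates of matched features in the reference view.
    sparse_disp: (N,) disparity of each feature (offset to the second view).
    shape      : (H, W) of the reference image.
    Pixels outside the convex hull of the features are left as NaN (unknown).
    """
    interp = LinearNDInterpolator(sparse_xy, sparse_disp)  # Delaunay-based, piecewise linear
    H, W = shape
    xs, ys = np.meshgrid(np.arange(W), np.arange(H))
    return interp(np.column_stack([xs.ravel(), ys.ravel()])).reshape(H, W)


# toy usage with synthetic features on a 120x160 image
rng = np.random.default_rng(1)
xy = rng.uniform([0, 0], [160, 120], size=(50, 2))
disp = 0.05 * xy[:, 0] + 2.0           # synthetic plane-like disparity
dense = densify_disparity(xy, disp, (120, 160))
print(np.nanmin(dense), np.nanmax(dense))
```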
Sparse-to-dense temporal reconstruction with temporal coherence
Once the static scene reconstruction is obtained for the first frame, we perform temporally coherent reconstruction for dynamic objects at successive time instants instead of whole scene reconstruction for computational efficiency and to avoid redundancy. The initial coarse reconstruction for each dynamic region is refined in the subsequent optimization step with respect to each camera view. Dynamic scene objects are identified from the temporal correspondence of sparse feature points. Sparse correspondence is used to propagate an initial model of the moving object for refinement. Figure 5 presents the sparse reconstruction and temporal correspondence. New objects are identified per frame from the clustered sparse reconstruction and are labelled as dynamic objects. Sparse temporal dynamic feature tracking: Numerous approaches have been proposed to track moving objects in 2D using either features or optical flow. However these methods may fail in the case of occlusion, movement parallel to the view direction, large motions and moving cameras. To overcome these limitations we match the sparse 3D feature points obtained using SFD [44] from multiple wide-baseline views at each time instant. The use of sparse 3D features is robust to large non-rigid motion, occlusions and camera movement. SFD detects sparse features which are stable across wide-baseline views and consecutive time instants for a moving camera and dynamic scene. Sparse 3D feature matches between consecutive time instants are back-projected to each view. These features are matched temporally using SIFT descriptor to identify the moving points. Robust matching is achieved by enforcing multiple view consistency for the temporal feature correspondence in each view as illustrated in Figure 6. Each match must satisfy the constraint:
\| H_{t,v}(p) + u_{t,r}(p + H_{t,v}(p)) - u_{t,v}(p) - H_{t,r}(p + u_{t,v}(p)) \| < \epsilon \quad (1)
where p is the feature image point in view v at frame t, H_{t,v}(p) is the disparity at frame t between views v and r, and u_{t,v}(p) is the temporal correspondence from frame t to t + 1 for view v. The multi-view consistency check ensures that correspondences between any two views remain temporally consistent for successive frames. Matches in the 2D domain are sensitive to camera movement and occlusion, hence we map the set of refined matches into 3D to make the system robust to camera motion. The Frobenius norm is applied on the 3D point gradients in all directions [71] to obtain the 'net' motion at each sparse point. The 'net' motion between pairs of 3D points for consecutive time instants is ranked, and the top and bottom 5 percentile values are removed. Median filtering is then applied to identify the dynamic features. Figure 7 shows an example with moving cameras for Juggler [5].
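A minimal sketch of the loop-closure test of Equation (1) for a single feature is shown below, assuming the spatial disparity and temporal flow fields are available as callables returning 2D offsets; these callables and the pixel tolerance are placeholders rather than the paper's actual data structures.

```python
import numpy as np


def temporally_consistent(p, H_tv, H_tr, u_tv, u_tr, eps=1.5):
    """Loop-closure test of Eq. (1) for one feature point p (pixel coords).

    H_tv, H_tr, u_tv, u_tr are callables mapping a 2D pixel position to a 2D
    offset (spatial disparity between views, or temporal flow between frames).
    They stand in for the dense fields used in the paper; eps is an
    illustrative tolerance in pixels.
    """
    p = np.asarray(p, dtype=float)
    residual = (H_tv(p) + u_tr(p + H_tv(p))
                - u_tv(p) - H_tr(p + u_tv(p)))
    return np.linalg.norm(residual) < eps


# toy usage: constant fields, so the loop closes exactly and the test passes
const = lambda offset: (lambda q: np.asarray(offset, dtype=float))
print(temporally_consistent([10, 20],
                            H_tv=const([5, 0]), H_tr=const([5, 0]),
                            u_tv=const([1, 2]), u_tr=const([1, 2])))
```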
Sparse-to-dense model reconstruction: Dynamic 3D feature points are used to initialize the segmentation and reconstruction of the initial model. This avoids the assumption of static backgrounds and prior scene segmentation commonly used to initialise multiple view reconstruction with a coarse visual-hull approximation [23]. Temporal coherence also provides a more accurate initialisation to overcome visual ambiguities at individual frames. Figure 8 illustrates the use of temporal coherence for reconstruction initialisation and refinement. Dynamic feature correspondence is used to identify the mesh for each dynamic object. This mesh is back-projected on each view to obtain the region of interest. Lucas-Kanade optical flow [8] is performed on the projected mask for each view in the temporal domain using the dynamic feature correspondences over time as initialization. Dense multi-view wide-baseline correspondences from the previous frame are propagated to the current frame using the information from the flow vectors to obtain dense multi-view matches in the current frame. The matches are triangulated in 3D to obtain a refined 3D dense model of the dynamic object for the current frame. For dynamic scenes, a new object may enter the scene or a new part may appear as the object moves. To allow the introduction of new objects and object parts we also use information from the cluster of sparse points for each dynamic object. The cluster corresponding to the dynamic features is identified and static points are removed. This ensures that the set of new points contains not only the dynamic features but also the unprocessed points which represent new parts of the object. These points are added to the refined sparse model of the dynamic object. To handle new objects we detect new clusters at each time instant and consider them as dynamic regions. The sparse-to-dense initial coarse reconstruction improves the quality of segmentation and reconstruction after the refinement. Examples of the improvement in segmentation and reconstruction for the Odzemok [1] and Juggler [5] datasets are shown in Figure 9. As observed, the limbs of the people are retained by using information from the previous frames in both cases.
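The temporal propagation step can be illustrated with pyramidal Lucas-Kanade tracking as sketched below, assuming OpenCV is available. The window and pyramid parameters are illustrative, and unlike the paper the flow here is run from scratch on synthetic frames rather than initialised from the temporal SFD matches.

```python
import numpy as np
import cv2


def propagate_points(prev_gray, next_gray, points):
    """Track 2D points from the previous frame into the current frame.

    prev_gray, next_gray: single-channel uint8 images.
    points: (N, 2) float32 pixel positions in the previous frame.
    Returns the tracked positions and a boolean mask of points tracked
    successfully.
    """
    p0 = points.reshape(-1, 1, 2).astype(np.float32)
    p1, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, p0, None,
        winSize=(21, 21), maxLevel=3,
        criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))
    ok = status.ravel() == 1
    return p1.reshape(-1, 2), ok


# toy usage: a bright square shifted by a few pixels between frames
prev = np.zeros((80, 80), np.uint8); prev[20:40, 20:40] = 255
nxt = np.zeros((80, 80), np.uint8); nxt[22:42, 23:43] = 255
pts = np.array([[20.0, 20.0], [39.0, 39.0]], np.float32)
tracked, ok = propagate_points(prev, nxt, pts)
print(tracked[ok])
```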
Joint object-based sparse-to-dense temporally coherent refinement of shape and segmentation
The initial reconstruction and segmentation from dense temporal feature correspondence is refined using a joint optimization framework. A novel shape constraint is introduced based on geodesic star convexity, which has previously been shown to give improved performance in interactive image segmentation for structures with fine details (for example a person's fingers or hair) [25]. Shape is a powerful cue for object recognition and segmentation. Shape models represented as distance transforms from a template have been used for category specific segmentation [33]. Some works have introduced generic connectivity constraints for segmentation, showing that obtaining a globally optimal solution under the connectivity constraint is NP-hard [64]. Veksler et al. used a shape constraint in a segmentation framework by enforcing a star convexity prior on the segmentation, and globally optimal solutions are achieved subject to this constraint [63]. The star convexity constraint ensures connectivity to seed points, and is a stronger assumption than plain connectivity. An example of a star-convex object is shown in Figure 10 along with a failure case for a non-rigid articulated object. To handle more complex objects the idea of geodesic forests with multiple star centres was introduced to obtain a globally optimal solution for interactive 2D object segmentation [25]. The main focus was to introduce shape constraints in interactive segmentation by means of a geodesic star convexity prior. The notion of connectivity was extended from Euclidean to geodesic so that paths can bend and adapt to image data as opposed to straight Euclidean rays, thus extending visibility and reducing the number of star centers required.
The geodesic star-convexity is integrated as a constraint on the energy minimisation for joint multi-view reconstruction and segmentation [23]. In this work the shape constraint is automatically initialised for each view from the initial segmentation. The shape constraint is based on the geodesic distance with the foreground object initialisation (seeds) as star centres to which the object shape is restricted. The union formed by multiple object seeds forms a geodesic forest. This allows complex shapes to be segmented. To automatically initialize the segmentation, we use the sparse temporal feature correspondences as star centers (seeds) to build a geodesic forest. The region outside the initial coarse reconstruction of all dynamic objects is initialized as the background seed for segmentation as shown in Figure 12. The shape of the dynamic object is restricted by this geodesic distance constraint that depends on the image gradient. Comparison with existing methods for multi-view segmentation demonstrates improvements in recovery of fine detail structure as illustrated in Figure 12.
Fig. 10 (a) Representation of star convexity: The left object depicts an example of a star-convex object, with a star center marked. The object on the right with a plausible star center shows deviations from star-convexity in the fine details. (b) Multiple star semantics for joint refinement: single star center based segmentation is depicted on the left and multiple stars on the right.
Once we have a set of dense 3D points for each dynamic object, Poisson surface reconstruction is performed on the set of sparse points to obtain an initial coarse model of each dynamic region R, which is subsequently refined using the optimization framework (Section 3.4.1).
Optimization on initial coarse object reconstruction based on geodesic star convexity
The depth of the initial coarse reconstruction estimate is refined per view for each dynamic object at a per pixel level. View-dependent optimisation of depth is performed with respect to each camera which is robust to errors in camera calibration and initialisation. Calibration inaccuracies produce inconsistencies limiting the applicability of global reconstruction techniques which simultaneously consider all views; view-dependent techniques are more tolerant to such inaccuracies because they only use a subset of the views for reconstruction of depth from each camera view.
Our goal is to assign an accurate depth value from a set of depth values D = \{d_1, ..., d_{|D|-1}, U\} and a layer label from a set of label values L = \{l_1, ..., l_{|L|}\} to each pixel p in the region R of each dynamic object. Each d_i is obtained by sampling the optical ray from the camera and U is an unknown depth value to handle occlusions. This is achieved by optimisation of a joint cost function [23] for label (segmentation) and depth (reconstruction):
E(l, d) = \lambda_{data} E_{data}(d) + \lambda_{contrast} E_{contrast}(l) + \lambda_{smooth} E_{smooth}(l, d) + \lambda_{color} E_{color}(l) \quad (2)
where d is the depth at each pixel and l is the layer label for multiple objects; the cost function terms are defined in Section 3.4.2. The equation consists of four terms: the data term scores photo-consistency, the smoothness term avoids sudden peaks in depth and maintains consistency, and the color and contrast terms identify the object boundaries. Data and smoothness terms are commonly used to solve reconstruction problems [7] and the color and contrast terms are used for segmentation [34]. This is solved subject to a geodesic star-convexity constraint on the labels l. A label l is star convex with respect to center c if every point p \in l is visible to the star center c via l in the image x, which can be expressed as an energy cost:
E^{\star}(l \mid x, c) = \sum_{p \in R} \sum_{q \in \Gamma_{c,p}} E_{p,q}(l_p, l_q) \quad (3)
\forall q \in \Gamma_{c,p}: \quad E_{p,q}(l_p, l_q) = \begin{cases} \infty & \text{if } l_p \neq l_q \\ 0 & \text{otherwise} \end{cases} \quad (4)
where \forall p \in R: p \in l \Leftrightarrow l_p = 1, and \Gamma_{c,p} is the geodesic path joining p to the star center c, given by:
\Gamma_{c,p} = \arg\min_{\Gamma \in P_{c,p}} L(\Gamma) \quad (5)
where P_{c,p} denotes the set of all discrete paths between c and p and L(\Gamma) is the length of the discrete geodesic path as defined in [25]. In the case of image segmentation the gradients in the underlying image provide the information to compute the discrete paths between each pixel and the star centers, and L(\Gamma) is defined below:
L(\Gamma) = \sum_{i=1}^{N_D - 1} \sqrt{(1 - \delta_g)\, j(\Gamma_i, \Gamma_{i+1})^2 + \delta_g\, \nabla I(\Gamma_i)^2} \quad (6)
where \Gamma is an arbitrary parametrized discrete path with N_D pixels given by \Gamma_1, \Gamma_2, \cdots, \Gamma_{N_D}, j(\Gamma_i, \Gamma_{i+1}) is the Euclidean distance between successive pixels, and the quantity \nabla I(\Gamma_i)^2 is a finite difference approximation of the image gradient between the points \Gamma_i, \Gamma_{i+1}. The parameter \delta_g weights the Euclidean distance against the image gradient term. Using the above definition, one can define the geodesic distance as in Equation 5.
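A minimal sketch of the discrete geodesic distance of Equations (5)-(6) is given below: Dijkstra's algorithm is run over the 4-connected pixel graph with edge weights that blend the (unit) Euclidean step with the squared intensity difference, as an analogue of the path length above. The blending weight δ_g and the synthetic image are illustrative.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import dijkstra


def geodesic_distance_map(image, seeds, delta_g=0.7):
    """Geodesic distance of every pixel to its nearest seed (star centre).

    Each 4-neighbour edge (p, q) is weighted by
    sqrt((1 - delta_g) * 1 + delta_g * (I(p) - I(q))**2); the Euclidean step
    between 4-neighbours is 1, so this mirrors the path-length definition.
    """
    H, W = image.shape
    img = image.astype(float)
    idx = np.arange(H * W).reshape(H, W)

    def edge_list(src_idx, dst_idx, src_val, dst_val):
        w = np.sqrt((1.0 - delta_g) + delta_g * (src_val - dst_val) ** 2)
        return src_idx.ravel(), dst_idx.ravel(), w.ravel()

    r1, c1, w1 = edge_list(idx[:, :-1], idx[:, 1:], img[:, :-1], img[:, 1:])
    r2, c2, w2 = edge_list(idx[:-1, :], idx[1:, :], img[:-1, :], img[1:, :])
    graph = coo_matrix((np.concatenate([w1, w2]),
                        (np.concatenate([r1, r2]), np.concatenate([c1, c2]))),
                       shape=(H * W, H * W))
    seed_idx = [idx[r, c] for r, c in seeds]
    dist = dijkstra(graph, directed=False, indices=seed_idx)  # (n_seeds, H*W)
    return dist.min(axis=0).reshape(H, W)


# toy usage: flat image with a bright vertical stripe acting as a strong edge
img = np.zeros((40, 40))
img[:, 20] = 5.0
dmap = geodesic_distance_map(img, seeds=[(20, 5)])
print(dmap[20, 4], dmap[20, 35])   # crossing the stripe is much more expensive
```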
An extension of single star-convexity is to use multiple stars to define a more general class of shapes. Introduction of multiple star centers reduces the path lengths and increases the visibility of small parts of objects like small limbs, as shown in Figure 10. Hence Equation 3 is extended to multiple stars. A label l is star convex with respect to center c_i if every point p \in l is visible to a star center c_i in the set C = \{c_1, ..., c_{N_T}\} via l in the image x, where N_T is the number of star centers [25]. This is expressed as an energy cost:
E^{\star}(l \mid x, C) = \sum_{p \in R} \sum_{q \in \Gamma_{c,p}} E_{p,q}(l_p, l_q) \quad (7)
In our case all the correct temporal sparse feature correspondences are used as star centers, hence the segmentation will include all the points which are visible to these sparse features via geodesic distances in the region R, thereby employing the shape constraint. Since the star centers are selected automatically, the method is unsupervised. Comparison of segmentation constraint with geodesic multi-star convexity against no constraints and Euclidean multi-star convexity constraint is shown in Figure 11. The figure demonstrates the usefulness of the proposed approach with an improvement in segmentation quality on non-rigid complex objects. The energy in the Equation 2 is minimized as follows:
\min_{(l,d)} E(l, d) \;\; \text{s.t.} \;\; l \in S^{\star}(C) \quad \Leftrightarrow \quad \min_{(l,d)} E(l, d) + E^{\star}(l \mid x, C) \quad (8)
where S^{\star}(C) is the set of all shapes which lie within the geodesic distances with respect to the centers in C. Optimization of Equation 8, subject to each pixel p in the region R being at a geodesic distance \Gamma_{c,p} from the star centers in the set C, is performed using the α-expansion algorithm for a pixel p by iterating through the set of labels in L × D [10]. Graph-cut is used to obtain a local optimum [9]. The improvements in the results using geodesic star convexity in the framework are shown in Figure 12 and using temporal coherence in Figure 9. Figure 13 shows improvements using the geodesic shape constraint, temporal coherence and the combined proposed approach for the Dance2 [2] dataset.
Fig. 12 Geodesic star convexity: A region R with star centers C connected with geodesic distance \Gamma_{c,p}. Segmentation results with and without geodesic star convexity based optimization are shown on the right for the Juggler dataset.
Energy cost function for joint segmentation and reconstruction
For completeness, in this section we define each of the terms in Equation 2. These are based on previous terms used for joint optimisation over depth for each pixel introduced in [42], with modification of the color matching term to improve robustness and extension to multiple labels.
Matching term: The data term for matching between views is specified as a measure of photo-consistency (Figure 14) as follows:
E_{data}(d) = \sum_{p \in P} e_{data}(p, d_p), \qquad e_{data}(p, d_p) = \begin{cases} M(p, q) = \sum_{i \in O_k} m(p, q) & \text{if } d_p \neq U \\ M_U & \text{if } d_p = U \end{cases} \quad (9)
where P is the 4-connected neighbourhood of pixel p, M_U is the fixed cost of labelling a pixel unknown and q denotes the projection of the hypothesised point P in an auxiliary camera, where P is a 3D point along the optical ray passing through pixel p located at a distance d_p from the reference camera. O_k is the set of the k most photo-consistent pairs. For textured scenes Normalized Cross Correlation (NCC) over a squared window is a common choice [53]. The NCC values range from -1 to 1 and are mapped to non-negative values by using the function 1 - NCC.
A maximum likelihood measure [40] is used in this function for confidence value calculation between the center pixel p and the other pixels q and is based on the survey on confidence measures for stereo [28]. The measure is defined as:
m(p, q) = \frac{\exp\left(-\frac{c_{min}}{2\sigma_i^2}\right)}{\sum_{(p,q) \in N} \exp\left(-\frac{1 - NCC(p,q)}{2\sigma_i^2}\right)} \quad (10)
where \sigma_i^2 is the noise variance for each auxiliary camera i; this parameter was fixed to 0.3. N denotes the set of interacting pixels in P. c_{min} is the minimum cost for a pixel obtained by evaluating the function (1 - NCC(\cdot,\cdot)) on a 15 × 15 window. Contrast term: Segmentation boundaries in images tend to align with contours of high contrast and it is desirable to represent this as a constraint in stereo matching. A consistent interpretation of segmentation-prior and contrast-likelihood is used from [34]. We use a modified version of this interpretation in our formulation to preserve edges by using bilateral filtering [61] instead of Gaussian filtering. The contrast term is as follows:
E_{contrast}(l) = \sum_{(p,q) \in N} e_{contrast}(p, q, l_p, l_q) \quad (11)
e_{contrast}(p, q, l_p, l_q) = \begin{cases} 0 & \text{if } l_p = l_q \\ \frac{1}{1 + \epsilon}\left(\epsilon + \exp(-C(p, q))\right) & \text{otherwise} \end{cases} \quad (12)
where \|\cdot\| is the L_2 norm and \epsilon = 1. The simplest choice for C(p, q) would be the squared Euclidean color distance between intensities at pixels p and q, as used in [23]. We propose a term for better segmentation as
C(p, q) = \frac{\|B(p) - B(q)\|^2}{2\sigma_{pq}^2 d_{pq}^2}
where B(\cdot) represents the bilateral filter, d_{pq} is the Euclidean distance between p and q, and \sigma_{pq} = \|B(p) - B(q)\|^2 / d_{pq}^2. This term enables removal of regions with low photo-consistency scores and weak edges, thereby helping to estimate the object boundaries.
Smoothness term: This term is inspired by [23] and ensures the depth labels vary smoothly within the object, reducing noise and peaks in the reconstructed surface. This is useful when the photo-consistency score is low and insufficient to assign depth to a pixel (Figure 14). It is defined as:
E_{smooth}(l, d) = \sum_{(p,q) \in N} e_{smooth}(l_p, d_p, l_q, d_q) \quad (13)
e_{smooth}(l_p, d_p, l_q, d_q) = \begin{cases} \min(|d_p - d_q|, d_{max}) & \text{if } l_p = l_q \text{ and } d_p, d_q \neq U \\ 0 & \text{if } l_p = l_q \text{ and } d_p, d_q = U \\ d_{max} & \text{otherwise} \end{cases} \quad (14)
d_{max} is set to 50 times the size of the depth sampling step for all datasets.
Color term: This term is computed using the negative log likelihood [9] of the color models learned from the foreground and background markers. The star centers obtained from the sparse 3D features are foreground markers and for background markers we consider the region outside the projected initial coarse reconstruction for each view. The color models use GMMs with 5 components each for Foreground/Background mixed with uniform color models [14] as the markers are sparse.
E_{color}(l) = \sum_{p \in P} -\log P(I_p \mid l_p) \quad (15)
where P(I_p \mid l_p = l_i) denotes the probability of pixel p in the reference image belonging to layer l_i.
Fig. 15 Comparison of segmentation on benchmark static datasets using geodesic star-convexity.
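A minimal sketch of the colour term of Equation (15) is shown below: foreground and background GMMs are fitted to marker colours with scikit-learn and the per-pixel negative log-likelihood is evaluated. The 5-component choice follows the text, but the uniform-mixture component and the marker sampling are simplified and the data are synthetic.

```python
import numpy as np
from sklearn.mixture import GaussianMixture


def color_energy(pixels_rgb, fg_samples, bg_samples, n_components=5):
    """Negative log-likelihood colour term (Eq. 15) for each pixel and layer.

    pixels_rgb : (N, 3) colours of the pixels to be labelled.
    fg_samples : (Mf, 3) colours at the foreground markers (star centres).
    bg_samples : (Mb, 3) colours at the background markers.
    Returns (E_fg, E_bg), each of shape (N,).
    """
    fg = GaussianMixture(n_components=n_components, covariance_type="full",
                         random_state=0).fit(fg_samples)
    bg = GaussianMixture(n_components=n_components, covariance_type="full",
                         random_state=0).fit(bg_samples)
    # score_samples returns log p(I_p | l_p); the energy is its negative
    return -fg.score_samples(pixels_rgb), -bg.score_samples(pixels_rgb)


# toy usage: reddish foreground vs bluish background marker colours
rng = np.random.default_rng(0)
fg_s = rng.normal([200, 50, 50], 10, (300, 3))
bg_s = rng.normal([50, 50, 200], 10, (300, 3))
test = np.array([[205.0, 45.0, 55.0], [45.0, 55.0, 205.0]])
E_fg, E_bg = color_energy(test, fg_s, bg_s)
print(E_fg < E_bg)   # first pixel prefers foreground, second prefers background
```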
Results and Performance Evaluation
The proposed system is tested on publicly available multi-view research datasets of indoor and outdoor scenes; dataset details are given in Table 1. The parameters used for all the datasets are defined in Table 2. More information is available on the accompanying website.
Multi-view segmentation evaluation
Segmentation is evaluated against the state-of-the-art methods for multi-view segmentation, Kowdle [35] and Djelouah [16] for static scenes, and against the joint segmentation and reconstruction methods Mustafa [42] (per frame) and Guillemaut [24] (using temporal information) for both static and dynamic scenes. For static multi-view data the segmentation is initialised as detailed in Section 3.1, followed by refinement using the constrained optimisation of Section 3.4.1. For dynamic scenes the full pipeline with temporal coherence is used as detailed in Section 3. Ground-truth is obtained by manually labelling the foreground for the Office, Dance1 and Odzemok datasets; for the other datasets ground-truth is available online. We initialize all approaches with the same proposed initial coarse reconstruction for fair comparison.
To evaluate the segmentation we measure completeness as the ratio of intersection to union with ground-truth [35]. Comparisons are shown in Table 3 and Figures 15 and 16 for static benchmark datasets. Comparisons for dynamic scene segmentation are shown in Table 4 and Figures 17 and 18. Results for multi-view segmentation of static scenes are more accurate than Djelouah, Mustafa, and Guillemaut, and comparable to Kowdle, with improved segmentation of some detail such as the back of the chair.
For dynamic scenes the geodesic star convexity based optimization together with temporal consistency gives improved segmentation of fine detail such as the legs of the table in the Office dataset and the limbs of the person in the Juggler, Magician and Dance2 datasets in Figures 17 and 18. This overcomes limitations of previous multi-view per-frame segmentation.
Reconstruction evaluation
Reconstruction results obtained using the proposed method are compared against Mustafa [42], Guillemaut [24], and Furukawa [19] for dynamic sequences. Furukawa [19] is a per-frame multi-view wide-baseline stereo approach which ranks highly on the Middlebury benchmark [53] but does not refine the segmentation.
The depth maps obtained using the proposed approach are compared against Mustafa and Guillemaut in Figure 19. The depth maps obtained using the proposed approach are smoother, with lower reconstruction noise, than those of the state-of-the-art methods. Figures 20 and 21 present qualitative and quantitative comparison of our method with the state-of-the-art approaches.
Comparison of reconstructions demonstrates that the proposed method gives consistently more complete and accurate models. The colour maps highlight the quantitative differences in reconstruction. As far as we are aware no ground-truth data exist for dynamic scene reconstruction from real multi-view video. In Figure 21 we present a comparison with the reference mesh available with the Dance2 dataset reconstructed using a visual-hull approach. This comparison demonstrates improved reconstruction of fine detail with the proposed technique.
In contrast to all previous approaches the proposed method gives temporally coherent 4D model reconstructions with dense surface correspondence over time. The introduction of temporal coherence constrains the reconstruction in regions which are ambiguous on a particular frame such as the right leg of the juggler in Figure 20 resulting in more complete shape. Figure 22 shows three complete scene reconstructions with 4D models of multiple objects. The Juggler and Magician sequences are reconstructed from moving handheld cameras. Computational Complexity: Computation times for the proposed approach vs other methods are presented in Table 5. The proposed approach to reconstruct temporally coherent 4D models is comparable in computation time to per-frame multiple view reconstruction and gives a ∼50% reduction in computation cost compared to previous joint segmentation and reconstruction approaches using a known background. This efficiency is achieved through improved per-frame initialisation based on temporal propagation and the introduction of the geodesic star constraint in joint optimisation. Further results can be found in the supplementary material. Temporal coherence: A frame-to-frame alignment is obtained using the proposed approach as shown in Figure 23 for Dance1 and Juggle dataset. The meshes of the dynamic object in Frame 1 and Frame 9 are color coded in both the datasets and the color is propagated to the next frame using the dense temporal coherence information. The color in different parts of the object is retained to the next frame as seen from the figure. The proposed approach obtains sequential temporal alignment which drifts with large movement in the object, hence successive frames are shown in the figure.
Limitations: As with previous dynamic scene reconstruction methods the proposed approach has a number of limitations: persistent ambiguities in appearance between objects will degrade the improvement achieved with temporal coherence; scenes with a large number of inter-occluding dynamic objects will degrade performance; the approach requires sufficient wide-baseline views to cover the scene.
Applications to immersive content production
The 4D meshes generated from the proposed approach can be used for applications in immersive content production such as FVV rendering and VR. This section demonstrates the results of these applications.
Free-viewpoint rendering
In FVV, the virtual viewpoint is controlled interactively by the user. The appearance of the reconstruction is sampled and interpolated directly from the captured camera images using cameras located close to the virtual viewpoint [57].
The proposed joint segmentation and reconstruction framework generates per-view silhouettes and a temporally coherent 4D reconstruction at each time instant of the input video sequence. This representation of the dynamic sequence is used for FVV rendering. To create FVV, a view-dependent surface texture is computed based on the user-selected virtual view. This virtual view is obtained by combining the information from camera views in close proximity to the virtual viewpoint [57]. FVV rendering gives the user the freedom to interactively choose a novel viewpoint in space to observe the dynamic scene and reproduces fine-scale temporal surface details, such as the movement of hair and clothing wrinkles, that may not be modelled geometrically. An example of a reconstructed scene and the camera configuration is shown in Figure 24.
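The view-dependent texturing described above can be illustrated with a minimal sketch: for a surface point, pick the capture cameras whose viewing directions are closest to the virtual viewing direction and blend their colour samples with inverse-angle weights. This is a generic view-dependent blending strategy rather than the authors' exact renderer; the camera positions, the choice of k = 2 cameras and the weighting scheme are assumptions for illustration.

```python
import numpy as np

def view_dependent_weights(virtual_pos, camera_positions, surface_point, k=2):
    """Blend weights for the k capture cameras whose viewing directions are
    closest to the virtual viewing direction at a surface point."""
    virtual_dir = surface_point - virtual_pos
    virtual_dir /= np.linalg.norm(virtual_dir)
    cam_dirs = surface_point - camera_positions              # (N, 3)
    cam_dirs /= np.linalg.norm(cam_dirs, axis=1, keepdims=True)
    angles = np.arccos(np.clip(cam_dirs @ virtual_dir, -1.0, 1.0))
    nearest = np.argsort(angles)[:k]                          # k closest cameras
    w = 1.0 / (angles[nearest] + 1e-6)                        # inverse-angle weighting
    return nearest, w / w.sum()

# Example: blend the colours sampled from the two nearest cameras (toy values).
cams = np.array([[0.0, 0.0, 3.0], [2.0, 0.0, 2.5], [-2.0, 0.5, 2.5]])
idx, w = view_dependent_weights(np.array([0.5, 0.2, 2.8]), cams, np.zeros(3))
colour = w @ np.array([[200, 180, 170], [190, 175, 168], [185, 170, 160]])[idx]
```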
A qualitative evaluation of images synthesised using FVV is shown in Figures 25 and 26. These demonstrate reconstruction results rendered from novel viewpoints using the proposed method against Mustafa [43] and Guillemaut [23] on publicly available datasets. This is particularly important for wide-baseline camera configurations, where the technique can be used to synthesize intermediate viewpoints at which it may not be practical or economical to physically locate real cameras.
Virtual reality rendering
There is a growing demand for photo-realistic content in the creation of immersive VR experiences. The 4D temporally coherent reconstructions of dynamic scenes obtained using the proposed approach enable the creation of photo-realistic digital assets that can be incorporated into VR environments using game engines such as Unity and Unreal Engine, as shown in Figure 27 for a single frame of four datasets and for a series of frames of the Dance1 dataset.
In order to efficiently render the reconstructions in a game engine for applications in VR, a UV texture atlas is extracted using the 4D meshes from the proposed approach as a geometric proxy. The UV texture atlas at each frame is applied to the models at render time in Unity for viewing in a VR headset. A UV texture atlas is constructed by projectively texturing and blending multiple view frames onto a 2D unwrapped UV texture atlas, see Figure 28. This is performed once for each static object and at each time instant for dynamic objects, allowing efficient storage and real-time playback of static and dynamic textured reconstructions within a VR headset.
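A minimal sketch of this projective texturing step is given below: each atlas texel stores the 3D surface point of the unwrapped mesh, is projected into every camera image, and the visible samples are blended. The weighting by surface orientation and the omission of an explicit occlusion test are simplifying assumptions, and the function and argument names are hypothetical rather than the authors' implementation.

```python
import numpy as np

def bake_uv_atlas(texel_xyz, texel_normal, views, atlas_hw):
    """Projectively texture a UV atlas. texel_xyz/texel_normal are (H, W, 3)
    maps giving, for every texel, the 3D surface point and normal of the
    unwrapped mesh; `views` is a list of (P, image) pairs with P a 3x4
    projection matrix. Each texel colour is a blend of the camera samples,
    weighted by how front-facing the surface is to each camera."""
    h, w = atlas_hw
    atlas = np.zeros((h, w, 3), dtype=np.float64)
    weight = np.zeros((h, w, 1), dtype=np.float64)
    for P, image in views:
        cam_center = -np.linalg.inv(P[:, :3]) @ P[:, 3]
        homog = np.concatenate([texel_xyz, np.ones_like(texel_xyz[..., :1])], axis=-1)
        proj = homog @ P.T                                   # project texels into the view
        z = np.where(np.abs(proj[..., 2:3]) < 1e-9, 1e-9, proj[..., 2:3])
        uv = proj[..., :2] / z
        inside = ((uv[..., 0] >= 0) & (uv[..., 0] < image.shape[1] - 1) &
                  (uv[..., 1] >= 0) & (uv[..., 1] < image.shape[0] - 1) &
                  (proj[..., 2] > 0))
        view_dir = cam_center - texel_xyz
        view_dir /= np.linalg.norm(view_dir, axis=-1, keepdims=True) + 1e-9
        w_cam = np.clip(np.sum(texel_normal * view_dir, axis=-1), 0, None) * inside
        ui = np.clip(uv[..., 0], 0, image.shape[1] - 1).astype(int)
        vi = np.clip(uv[..., 1], 0, image.shape[0] - 1).astype(int)
        sample = image[vi, ui]                               # nearest-neighbour sampling
        atlas += w_cam[..., None] * sample
        weight += w_cam[..., None]
    return atlas / np.maximum(weight, 1e-9)
```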
Conclusion
This paper introduced a novel technique to automatically segment and reconstruct dynamic scenes captured from multiple moving cameras in general dynamic uncontrolled environments without any prior on background appearance or structure. The proposed automatic initialization was used to identify and initialize the segmentation and reconstruction of multiple objects. A framework was presented for temporally coherent 4D model reconstruction of dynamic scenes from a set of wide-baseline moving cameras. The approach gives a complete model of all static and dynamic non-rigid objects in the scene. Temporal coherence for dynamic objects addresses limitations of previous per-frame reconstruction, giving improved reconstruction and segmentation together with dense temporal surface correspondence for dynamic objects. A sparse-to-dense approach is introduced to establish temporal correspondence for non-rigid objects using robust sparse feature matching to initialise dense optical flow, providing an initial segmentation and reconstruction. Joint refinement of object reconstruction and segmentation is then performed using a multiple view optimisation with a novel geodesic star convexity constraint that gives improved shape estimation and is computationally efficient. Comparison against state-of-the-art techniques for multiple view segmentation and reconstruction demonstrates significant improvement in performance for complex scenes. The approach enables reconstruction of 4D models for complex scenes which has not been demonstrated previously. | 8,667
1907.08195 | 2963385316 | Existing techniques for dynamic scene reconstruction from multiple wide-baseline cameras primarily focus on reconstruction in controlled environments, with fixed calibrated cameras and strong prior constraints. This paper introduces a general approach to obtain a 4D representation of complex dynamic scenes from multi-view wide-baseline static or moving cameras without prior knowledge of the scene structure, appearance, or illumination. Contributions of the work are: An automatic method for initial coarse reconstruction to initialize joint estimation; Sparse-to-dense temporal correspondence integrated with joint multi-view segmentation and reconstruction to introduce temporal coherence; and a general robust approach for joint segmentation refinement and dense reconstruction of dynamic scenes by introducing shape constraint. Comparison with state-of-the-art approaches on a variety of complex indoor and outdoor scenes, demonstrates improved accuracy in both multi-view segmentation and dense reconstruction. This paper demonstrates unsupervised reconstruction of complete temporally coherent 4D scene models with improved non-rigid object segmentation and shape reconstruction and its application to free-viewpoint rendering and virtual reality. | Many of the existing multi-view reconstruction approaches rely on a two-stage sequential pipeline where foreground or background segmentation is initially performed independently with respect to each camera, and then used as input to obtain visual hull for multi-view reconstruction. The problem with this approach is that the errors introduced at the segmentation stage cannot be recovered and are propagated to the reconstruction stage reducing the final reconstruction quality. Segmentation from multiple wide-baseline views has been proposed by exploiting appearance similarity @cite_2 @cite_44 @cite_20 . These approaches assume static backgrounds and different colour distributions for the foreground and background @cite_16 @cite_2 which limits applicability for general scenes. | {
"abstract": [
"In this paper, we present a method for extracting consistent foreground regions when multiple views of a scene are available. We propose a framework that automatically identifies such regions in images under the assumption that, in each image, background and foreground regions present different color properties. To achieve this task, monocular color information is not sufficient and we exploit the spatial consistency constraint that several image projections of the same space region must satisfy. Combining the monocular color consistency constraint with multiview spatial constraints allows us to automatically and simultaneously segment the foreground and background regions in multiview images. In contrast to standard background subtraction methods, the proposed approach does not require a priori knowledge of the background nor user interaction. Experimental results under realistic scenarios demonstrate the effectiveness of the method for multiple camera set ups.",
"This paper introduces a statistical inference framework to temporally propagate trimap labels from sparsely defined key frames to estimate trimaps for the entire video sequence. Trimap is a fundamental requirement for digital image and video matting approaches. Statistical inference is coupled with Bayesian statistics to allow robust trimap labelling in the presence of shadows, illumination variation and overlap between the foreground and background appearance. Results demonstrate that trimaps are sufficiently accurate to allow high quality video matting using existing natural image matting algorithms. Quantitative evaluation against ground-truth demonstrates that the approach achieves accurate matte estimation with less amount of user interaction compared to the state-of-the-art techniques.",
"This invention relates to a novel energy absorbing isolation device which will absorb and dissipate a major portion of the energy associated with vehicle collisions. The present invention comprises a cylindrical tube, housing a plurality of Belleville spring washers which are compressed on impact by the wide portion of a movable shaft having a relatively wide portion and a relatively narrow portion. The relatively narrow portion of the shaft advances axially into the cylindrical tube as the Belleville washers are compressed. The energy of impact is absorbed and dissipated by compression of the Belleville washers and by interactions between the washers, the inside surface of the cylindrical tube, and the narrow portion of the shaft.",
"Multiple view segmentation consists in segmenting objects simultaneously in several views. A key issue in that respect and compared to monocular settings is to ensure propagation of segmentation information between views while minimizing complexity and computational cost. In this work, we first investigate the idea that examining measurements at the projections of a sparse set of 3D points is sufficient to achieve this goal. The proposed algorithm softly assigns each of these 3D samples to the scene background if it projects on the background region in at least one view, or to the foreground if it projects on foreground region in all views. Second, we show how other modalities such as depth may be seamlessly integrated in the model and benefit the segmentation. The paper exposes a detailed set of experiments used to validate the algorithm, showing results comparable with the state of art, with reduced computational complexity. We also discuss the use of different modalities for specific situations, such as dealing with a low number of viewpoints or a scene with color ambiguities between foreground and background."
],
"cite_N": [
"@cite_44",
"@cite_16",
"@cite_20",
"@cite_2"
],
"mid": [
"2163046003",
"2020045133",
"1511535428",
"2070926764"
]
} | Temporally coherent general dynamic scene reconstruction | Fig. 1 Temporally consistent scene reconstruction for the Odzemok dataset, colour-coded to show the scene object segmentation obtained. Reconstruction of general dynamic scenes has applications in visual effects for film and broadcast production and in content production for virtual reality. The ultimate goal of modelling dynamic scenes from multiple cameras is automatic understanding of real-world scenes from distributed camera networks, for applications in robotics and other autonomous systems. Existing methods have applied multiple view dynamic scene reconstruction techniques in controlled environments with a known background or chroma-key studio [23,20,56,60]. Other multiple view stereo techniques require a relatively dense static camera network resulting in a large number of cameras [19]. Extensions to more general outdoor scenes [5,32,60] use prior reconstruction of the static geometry from images of the empty environment. However these methods either require accurate segmentation of dynamic foreground objects, or prior knowledge of the scene structure and background, or are limited to static cameras and controlled environments. Scenes are reconstructed semi-automatically, requiring manual intervention for segmentation/rotoscoping, and result in temporally incoherent per-frame mesh geometries. Temporally coherent geometry with known surface correspondence across the sequence is essential for real-world applications and compact representation.
Our paper addresses the limitations of existing approaches by introducing a methodology for unsupervised temporally coherent dynamic scene reconstruction from multiple wide-baseline static or moving camera views without prior knowledge of the scene structure or background appearance. This temporally coherent dynamic scene reconstruction is demonstrated to work in applications for immersive content production such as free-viewpoint video (FVV) and virtual reality (VR). This work combines two previously published papers on general dynamic reconstruction [42] and temporally coherent reconstruction [43] into a single framework and demonstrates the application of this novel unsupervised joint segmentation and reconstruction to immersive content production, FVV and VR (Section 5).
The input is a sparse set of synchronised videos from multiple moving cameras of an unknown dynamic scene without prior scene segmentation or camera calibration. Our first contribution is automatic initialisation of camera calibration and sparse scene reconstruction from sparse feature correspondence using sparse feature detection and matching between pairs of frames. An initial coarse reconstruction and segmentation of all scene objects is obtained from sparse features matched across multiple views. This eliminates the requirement for prior knowledge of the background scene appearance or structure. Our second contribution is a sparse-to-dense reconstruction and segmentation approach to introduce temporal coherence at every frame. We exploit temporal coherence of the scene to overcome visual ambiguities inherent in single-frame reconstruction and multiple view segmentation methods for general scenes. Temporal coherence refers to the correspondence between the 3D surfaces of all objects observed over time. Our third contribution is spatio-temporal alignment to estimate dense surface correspondence for 4D reconstruction. A geodesic star convexity shape constraint is introduced for the shape segmentation to improve the quality of segmentation for non-rigid objects with complex appearance. The proposed approach overcomes the limitations of existing methods, allowing an unsupervised temporally coherent 4D reconstruction of complete models for general dynamic scenes.
The scene is automatically decomposed into a set of spatio-temporally coherent objects as shown in Figure 1, where the resulting 4D scene reconstruction has temporally coherent labels and surface correspondence for each object. This can be used for free-viewpoint video rendering and imported to a game engine for VR experience production. The contributions explained above can be summarized as follows:
- Unsupervised temporally coherent dense reconstruction and segmentation of general complex dynamic scenes from multiple wide-baseline views.
- Automatic initialization of dynamic object segmentation and reconstruction from sparse features.
- A framework for space-time sparse-to-dense segmentation, reconstruction and temporal correspondence.
- Robust spatio-temporal refinement of dense reconstruction and segmentation integrating error-tolerant photo-consistency and edge information using geodesic star convexity.
- Robust and computationally efficient reconstruction of dynamic scenes by exploiting temporal coherence.
- Real-world applications of 4D reconstruction to free-viewpoint video rendering and virtual reality.
This paper is structured as follows: first, related work is reviewed; the methodology for general dynamic scene reconstruction is then introduced; finally, a thorough qualitative and quantitative evaluation and comparison to the state-of-the-art on challenging datasets is presented.
Related Work
Temporally coherent reconstruction is a challenging task for general dynamic scenes due to a number of factors such as motion blur, articulated, non-rigid and large motion of multiple people, resolution differences between camera views, occlusions, wide-baselines, errors in calibration and cluttered dynamic backgrounds. Segmentation of dynamic objects from such scenes is difficult because of foreground and background complexity and the likelihood of overlapping background and foreground color distributions. Reconstruction is also challenging due to limited visual cues and relatively large errors affecting both calibration and extraction of a globally consistent solution. This section reviews previous work on dynamic scene reconstruction and segmentation.
Dynamic Scene Reconstruction
Dense dynamic shape reconstruction is a fundamental problem and heavily studied area in the field of computer vision. Recovering accurate 3D models of a dynamically evolving, non-rigid scene observed by multiple synchronised cameras is a challenging task. Research on multiple view dense dynamic reconstruction has primarily focused on indoor scenes with controlled illumination and static backgrounds, extending methods for multiple view reconstruction of static scenes [53] to sequences [62]. Deep learning based approaches have been introduced to estimate the shape of dynamic objects from minimal camera views in constrained environments [29,68] and for rigid objects [58]. In the last decade, focus has shifted to more challenging outdoor scenes captured with both static and moving cameras. Reconstruction of non-rigid dynamic objects in uncontrolled natural environments is challenging due to the scene complexity, illumination changes, shadows, occlusion and dynamic backgrounds with clutter such as trees or people. Methods have been proposed for multi-view reconstruction [65,39,37] requiring a large number of closely spaced cameras for surface estimation of dynamic shape. Practical applications require relatively sparse moving cameras to acquire coverage over large areas such as outdoor scenes. A number of approaches for multi-view reconstruction of outdoor scenes require initial silhouette segmentation [67,32,22,23] to allow visual-hull reconstruction. Most of these approaches to general dynamic scene reconstruction fail in the case of complex (cluttered) scenes captured with moving cameras.
A recent work proposed reconstruction of dynamic fluids [50] for static cameras. Another work used RGB-D cameras to obtain reconstruction of non-rigid surfaces [55]. Pioneering research in general dynamic scene reconstruction from multiple handheld wide-baseline cameras [5,60] exploited prior reconstruction of the background scene to allow dynamic foreground segmentation and reconstruction. Recent work [46] estimates the shape of dynamic objects from handheld cameras by exploiting GANs. However these approaches either work for static/indoor scenes or exploit strong prior assumptions such as silhouette information, known background or scene structure. Also, all these approaches give per-frame reconstructions leading to temporally incoherent geometries. Our aim is to perform temporally coherent dense reconstruction of unknown dynamic non-rigid scenes automatically without strong priors or limitations on scene structure.
Joint Segmentation and Reconstruction
Many of the existing multi-view reconstruction approaches rely on a two-stage sequential pipeline where foreground or background segmentation is initially performed independently with respect to each camera, and then used as input to obtain a visual hull for multi-view reconstruction. The problem with this approach is that the errors introduced at the segmentation stage cannot be recovered and are propagated to the reconstruction stage, reducing the final reconstruction quality. Segmentation from multiple wide-baseline views has been proposed by exploiting appearance similarity [17,38,70]. These approaches assume static backgrounds and different colour distributions for the foreground and background [52,17], which limits applicability for general scenes.
Joint segmentation and reconstruction methods incorporate estimation of segmentation or matting with reconstruction to provide a combined solution. Joint refinement avoids the propagation of errors between the two stages thereby making the solution more robust. Also, cues from segmentation and reconstruction can be combined efficiently to achieve more accurate results. The first multi-view joint estimation system was proposed by Szeliski et al. [59] which used iterative gradient descent to perform an energy minimization. A number of approaches were introduced for joint formulation in static scenes and one recent work used training data to classify the segments [69]. The focus shifted to joint segmentation and reconstruction for rigid objects in indoor and outdoor environments. These approaches used a variety of techniques such as patch-based refinement [54,48] and fixating cameras on the object of interest [11] for reconstructing rigid objects in the scene. However, these are either limited to static scenes [69,26] or process each frame independently thereby failing to enforce temporal consistency [11,23].
Joint reconstruction and segmentation on monocular video was proposed in [36,3,12] achieving semantic segmentation of scene limited to rigid objects in street scenes. Practical application of joint estimation requires these approaches to work on non-rigid objects such as humans with clothing. A multi-layer joint segmentation and reconstruction approach was proposed for multiple view video of sports and indoor scenes [23]. The algorithm used known background images of the scene without the dynamic foreground objects to obtain an initial segmentation. Visual-hull based reconstruction was performed with known prior foreground/background using a background image plate with fixed and calibrated cameras. This visual hull was used as a prior and was optimized by a combination of photo-consistency, silhouette, color and sparse feature information in an energy minimization framework to improve the segmentation and reconstruction quality. Although structurally similar to our approach, it requires the scene to be captured by fixed calibrated cameras and a priori known fixed background plate as a prior to estimate the initial visual hull by background subtraction. The proposed approach overcomes these limitations allowing moving cameras and unknown scene backgrounds.
An approach based on optical flow and graph cuts was shown to work well for non-rigid objects in indoor settings but requires known background segmentation to obtain silhouettes and is computationally expensive [24]. Practical application of temporally coherent joint estimation requires approaches that work on non-rigid objects for general scenes in uncontrolled environments. A quantitative evaluation of techniques for multi-view reconstruction was presented in [53]. These methods are able to produce high quality results, but rely on good initializations and strong prior assumptions with known and controlled (static) scene backgrounds.
The proposed method exploits the advantages of joint segmentation and reconstruction and addresses the limitations of existing methods by introducing a novel approach to reconstruct general dynamic scenes automatically from wide-baseline cameras with no prior. To overcome the limitations of existing methods, the proposed approach automatically initialises the foreground object segmentation from wide-baseline correspondence without prior knowledge of the scene. This is followed by a joint spatio-temporal reconstruction and segmentation of general scenes. Temporal correspondence is exploited to overcome visual ambiguities giving improved reconstruction together with temporal coherence of surface correspondence to obtain 4D scene models.
Temporally coherent 4D Reconstruction
Temporally coherent 4D reconstruction refers to aligning the 3D surfaces of non-rigid objects over time for a dynamic sequence. This is achieved by estimating point-to-point correspondences for the 3D surfaces to obtain 4D temporally coherent reconstruction. 4D models allow the creation of efficient representations for practical applications in film, broadcast and immersive content production such as virtual, augmented and mixed reality. The majority of existing approaches for reconstruction of dynamic scenes from multi-view videos process each time frame independently due to the difficulty of simultaneously estimating temporal correspondence for non-rigid objects. Independent per-frame reconstruction can result in errors due to the inherent visual ambiguity caused by occlusion and similar object appearance for general scenes. Recent research has shown that exploiting temporal information can improve reconstruction accuracy as well as achieve temporal coherence [43].
3D scene flow estimates frame-to-frame correspondence whereas 4D temporal coherence estimates correspondence across the complete sequence to obtain a single surface model. Methods to estimate 3D scene flow have been reported in the literature [41] for autonomous vehicles; however this approach is limited to narrow-baseline cameras. Other scene flow approaches are dependent on 2D optical flow [66,6] and require an accurate estimate for most of the pixels, which fails in the case of large motion. In addition, 3D scene flow methods align two frames independently and do not produce temporally coherent 4D models.
Research investigating spatio-temporal reconstruction across multiple frames was proposed by [20,37,24] exploiting the temporal information from previous frames using optical flow. An approach for recovering space-time consistent depth maps from multiple video sequences captured by stationary, synchronized and calibrated cameras for depth-based free-viewpoint video rendering was proposed by [39]. However these methods require accurate initialisation, fixed and calibrated cameras, and are limited to simple scenes. Other approaches to temporally coherent reconstruction [4] either require a large number of closely spaced cameras or bi-layer segmentation [72,30] as a constraint for reconstruction. Recent approaches for spatio-temporal reconstruction of multi-view data are limited to indoor studio data [47].
The framework proposed in this paper addresses limitations of existing approaches and gives 4D temporally coherent reconstruction for general dynamic indoor or outdoor scenes with large non-rigid motions, repetitive texture, uncontrolled illumination, and large capture volume. The scenes are captured with sparse static/moving cameras. The proposed approach gives 4D models of complete scenes with both static and dynamic objects for real-world applications (FVV and VR) with no prior knowledge of scene structure.
Multi-view Video Segmentation
In the field of image segmentation, approaches have been proposed to provide temporally consistent monocular video segmentation [21,49,45,71]. Hierarchical segmentation based on graphs was proposed in [21], and directed acyclic graphs were used to propose objects followed by segmentation [71]. Optical flow is used to identify and consistently segment objects [45,49]. Recently a number of approaches have been proposed for multi-view foreground object segmentation by exploiting appearance similarity spatially across views [16,35,38,70]. An approach for space-time multi-view segmentation was proposed by [17]. However, multi-view approaches assume a static background and different colour distributions for the foreground and background, which limits applicability for general scenes and non-rigid objects.
To address this issue we introduce a novel method for spatio-temporal multi-view segmentation of dynamic scenes using shape constraints. Single-image segmentation techniques using shape constraints provide good results for complex scene segmentation [25] (convex and concave shapes), but require manual interaction. The proposed approach performs automatic multi-view video segmentation by initializing the foreground object model using spatio-temporal information from wide-baseline feature correspondence followed by a multi-layer optimization framework. Geodesic star convexity, previously used in single-view segmentation [25], is applied to constrain the segmentation in each view. Our multi-view formulation naturally enforces coherent segmentation between views and also resolves ambiguities such as the similarity of background and foreground in isolated views.
Summary and Motivation
Image-based temporally coherent 4D dynamic scene reconstruction without a prior model or constraints on the scene structure is a key problem in computer vision. Existing dense reconstruction algorithms need some strong initial prior and constraints for the solution to converge such as background, structure, and segmentation, which limits their application for automatic reconstruction of general scenes. Current approaches are also commonly limited to independent per-frame reconstruction and do not exploit temporal information or produce a coherent model with known correspondence.
The approach proposed in this paper aims to overcome the limitations of existing approaches to enable robust temporally coherent wide-baseline multiple view reconstruction of general dynamic scenes without prior assumptions on scene appearance, structure or segmentation of the moving objects. Static and dynamic objects in the scene are identified for simultaneous segmentation and reconstruction using geometry and appearance cues in a sparse-to-dense optimization framework. Temporal coherence is introduced to improve the quality of the reconstruction and geodesic star convexity is used to improve the quality of segmentation. The static and dynamic elements are fused automatically in both the temporal and spatial domain to obtain the final 4D scene reconstruction.
This paper presents a unified framework, novel in combining multiple view joint reconstruction and segmentation with temporal coherence to improve per-frame reconstruction performance, and produces a single framework from the initial work presented in [43,42]. In particular the approach gives a 4D surface model with full correspondence over time. A comprehensive experimental evaluation with comparison to the state-of-the-art in segmentation, reconstruction and 4D modelling is also presented, extending previous work. Application of the resulting 4D models to free-viewpoint video rendering and content production for immersive virtual reality experiences is also presented.
Methodology
This work is motivated by the limitations of existing multiple view reconstruction methods which either work independently at each frame, resulting in errors due to visual ambiguity [19,23], or require restrictive assumptions on scene complexity and structure and often assume prior camera calibration and foreground segmentation [60,24]. We address these issues by initializing the joint reconstruction and segmentation algorithm automatically, introducing temporal coherence in the reconstruction and geodesic star convexity in segmentation to reduce ambiguity and ensure consistent non-rigid structure initialization at successive frames. The proposed approach is demonstrated to achieve improved reconstruction and segmentation performance over state-of-the-art approaches and produce temporally coherent 4D models of complex dynamic scenes.
Overview
An overview of the proposed framework for temporally coherent multi-view reconstruction is presented in Figure 2 and consists of the following stages:
Multi-view video: The scenes are captured using multiple video cameras (static/moving) separated by wide baselines (> 15°). The cameras can be synchronized during capture using a time-code generator or later using the audio information. Camera extrinsic calibration and scene structure are assumed to be unknown.
Sparse reconstruction: The intrinsics are assumed to be known. Segmentation-based feature detection (SFD) [44] is used to obtain a relatively large number of sparse features suitable for wide-baseline matching which are distributed throughout the scene, including on dynamic objects such as people. SFD features are matched between views using a SIFT descriptor, giving camera extrinsics and a sparse 3D point cloud at each time instant for the entire sequence [27].
Initial scene segmentation and reconstruction - Section 3.2: Automatic initialisation is performed without prior knowledge of the scene structure or appearance to obtain an initial approximation for each object. The sparse point cloud is clustered in 3D [51] with each cluster representing a unique foreground object. Object segmentation increases efficiency and improves robustness of 4D models. This reconstruction is refined using the framework explained in Section 3.4 to obtain segmentation and dense reconstruction of each object.
Sparse-to-dense temporal reconstruction with temporal coherence - Section 3.3: Temporal coherence is introduced in the framework to initialize the coarse reconstruction and obtain frame-to-frame dense correspondences for each dynamic object. Dynamic object regions are detected at each time instant from sparse temporal correspondence of SFD features at successive frames. Sparse temporal feature correspondence allows propagation of the dense reconstruction for each dynamic object to obtain an initial approximation.
Joint object-based sparse-to-dense temporally coherent refinement of shape and segmentation - Section 3.4: The initial estimate is refined for each object per view through joint optimisation of shape and segmentation using a robust cost function combining matching, color, contrast and smoothness information for wide-baseline matching with a geodesic star convexity constraint. A single 3D model for each dynamic object is obtained by fusion of the view-dependent depth maps using Poisson surface reconstruction [31]. Surface orientation is estimated based on neighbouring pixels.
Applications - Section 5: The 4D representation from the proposed joint segmentation and reconstruction framework has a number of applications in media production, including free-viewpoint video (FVV) rendering and virtual reality (VR).
The process above is repeated for the entire sequence for all objects in the first frame and for dynamic objects at each time-instant. The proposed approach enables automatic reconstruction of all objects in the scene as a 4D mesh sequence. Subsequent sections present the novel contributions of this work in initialisation and refinement to obtain a dense temporally coherent reconstruction. The approach is demonstrated to outperform previous approaches to dynamic scene reconstruction and does not require prior knowledge of the scene.
Initial Scene Segmentation and Reconstruction
For general dynamic scene reconstruction, we need to reconstruct and segment the objects in the scene. This requires an initial coarse approximation for initialisation of a subsequent refinement step to optimise the segmentation and reconstruction with respect to each camera view. We introduce an approach based on sparse point cloud clustering, an overview is shown in Figure 3. Initialisation gives a complete coarse segmentation and reconstruction of each object in the first frame of the sequence for subsequent refinement. The dense reconstruction of the foreground objects and background are combined to obtain a full scene reconstruction at the first time instant. A rough geometric proxy of the background is created using the method. For consecutive time instants dynamic objects and newly appeared objects are identified and only these objects are reconstructed and segmented. The reconstruction of static objects is retained which reduces computational complexity. The optic flow and cluster information for each dynamic object ensures that we retain same labels for the entire sequence.
Sparse Point-cloud Clustering
The sparse representation of the scene is processed to remove outliers using point neighbourhood statistics [51]. We segment the objects in the sparse scene reconstruction; this allows only moving objects to be reconstructed at each frame for efficiency, and it also allows object shape similarity to be propagated across frames to increase the robustness of reconstruction.
We use a data clustering approach based on 3D grid subdivision of the space using an octree data structure in Euclidean space to segment objects at each frame. In a more general sense, nearest-neighbour information is used to cluster, which is essentially similar to a flood-fill algorithm. We choose this clustering method because of its computational efficiency and robustness. The approach allows segmentation of objects in the scene and is demonstrated to work well for cluttered and general outdoor scenes, as shown in Section 4.
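A minimal sketch of this nearest-neighbour flood-fill clustering is given below. It uses a KD-tree rather than the octree mentioned above, and the radius and minimum cluster size are assumed values chosen for illustration, not the parameters used in the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def euclidean_clusters(points, radius=0.1, min_size=50):
    """Flood-fill clustering of a 3D point cloud: points closer than `radius`
    are placed in the same cluster; clusters smaller than `min_size` are
    marked as background/outliers (label -2)."""
    tree = cKDTree(points)
    labels = -np.ones(len(points), dtype=int)
    current = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        stack, members = [seed], []
        labels[seed] = current
        while stack:
            idx = stack.pop()
            members.append(idx)
            for nb in tree.query_ball_point(points[idx], radius):
                if labels[nb] == -1:
                    labels[nb] = current
                    stack.append(nb)
        if len(members) < min_size:
            labels[np.array(members)] = -2   # too small: treat as background
        else:
            current += 1
    return labels

# points = np.loadtxt("sparse_cloud.xyz")   # hypothetical input file
# labels = euclidean_clusters(points, radius=0.05)
```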
Objects with insufficient detected features are reconstructed as part of the scene background. Appearing, disappearing and reappearing objects are handled by sparse dynamic feature tracking, explained in Section 3.3. Clustering results are shown in Figure 3. This is followed by a sparse-to-dense coarse object based approach to segment and reconstruct general dynamic scenes.
Coarse Object Reconstruction
The process to obtain the coarse reconstruction for the first frame of the sequence is shown in Figure 4. The sparse representation of each element is back-projected on the rectified image pair for each view. Delaunay triangulation [18] is performed on the set of back-projected points for each cluster in one image and is propagated to the second image using the sparse matched features. Triangles with edge length greater than the median edge length of all triangles are removed. For each remaining triangle pair a direct linear transform is used to estimate the affine homography. The displacement at each pixel within the triangle pair is estimated by interpolation to get an initial dense disparity map for each cluster in the 2D image pair, labelled as R_I and depicted in red in Figure 4. The initial coarse reconstruction for the observed objects in the scene is used to define the depth hypotheses at each pixel for the optimization.
The region R_I does not ensure complete coverage of the object, so we extrapolate this region in 2D by 5% of the average distance between the boundary points of R_I and the centroid of the object to obtain a region R_O (shown in yellow). To allow for errors in the initial approximate depth from sparse features we add volume in front of and behind the projected surface by an error tolerance along the optical ray of the camera. This ensures that the object boundaries lie within the extrapolated initial coarse estimate, although the depth at each pixel in the combined regions may not be accurate. The tolerance for extrapolation differs depending on whether a pixel belongs to R_I or R_O, as the propagated pixels of the extrapolated region (R_O) may have a higher level of error than the points from the sparse representation (R_I), requiring a comparatively higher tolerance. The threshold depends on the capture volume of the dataset and is set to 1% of the capture volume for R_O and half that value for R_I. This volume in 3D corresponds to our initial coarse reconstruction of each object and enables us to remove the dependency of existing approaches on a background plate and visual-hull estimates. This process of cluster identification and initial coarse object reconstruction is performed for multiple objects in general environments. Initial object segmentation using point cloud clustering and coarse segmentation is insensitive to parameters. Throughout this work the same parameters are used for all datasets. The result of this process is a coarse initial object segmentation and reconstruction for each object.
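The per-triangle interpolation of displacement described above can be sketched as follows: triangulate the sparse matches in one image and, for each pixel inside a triangle, interpolate the vertex displacements barycentrically, which is equivalent to applying the affine map defined by the three correspondences. This is a simplified illustration (horizontal disparity only, no triangle-length filtering) with assumed inputs, not the authors' implementation.

```python
import numpy as np
from scipy.spatial import Delaunay

def dense_disparity_from_sparse(left_pts, right_pts, image_shape):
    """Interpolate a dense disparity map from sparse matches.
    left_pts/right_pts: (N, 2) matched (x, y) points in the two images.
    image_shape: (H, W) of the left image."""
    disp = np.full(image_shape, np.nan)
    tri = Delaunay(left_pts)
    displacement = right_pts - left_pts                       # per-vertex 2D displacement
    ys, xs = np.mgrid[0:image_shape[0], 0:image_shape[1]]
    pix = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
    simplex = tri.find_simplex(pix)
    valid = simplex >= 0
    pix_valid = pix[valid]
    # Barycentric coordinates of each pixel inside its containing triangle.
    T = tri.transform[simplex[valid]]
    bary2 = np.einsum('nij,nj->ni', T[:, :2], pix_valid - T[:, 2])
    bary = np.column_stack([bary2, 1 - bary2.sum(axis=1)])
    verts = tri.simplices[simplex[valid]]
    dx = np.einsum('ni,ni->n', bary, displacement[verts][..., 0])
    disp[pix_valid[:, 1].astype(int), pix_valid[:, 0].astype(int)] = dx
    return disp                                               # horizontal disparity per pixel
```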
Sparse-to-dense temporal reconstruction with temporal coherence
Once the static scene reconstruction is obtained for the first frame, we perform temporally coherent reconstruction for dynamic objects at successive time instants instead of whole scene reconstruction for computational efficiency and to avoid redundancy. The initial coarse reconstruction for each dynamic region is refined in the subsequent optimization step with respect to each camera view. Dynamic scene objects are identified from the temporal correspondence of sparse feature points. Sparse correspondence is used to propagate an initial model of the moving object for refinement. Figure 5 presents the sparse reconstruction and temporal correspondence. New objects are identified per frame from the clustered sparse reconstruction and are labelled as dynamic objects. Sparse temporal dynamic feature tracking: Numerous approaches have been proposed to track moving objects in 2D using either features or optical flow. However these methods may fail in the case of occlusion, movement parallel to the view direction, large motions and moving cameras. To overcome these limitations we match the sparse 3D feature points obtained using SFD [44] from multiple wide-baseline views at each time instant. The use of sparse 3D features is robust to large non-rigid motion, occlusions and camera movement. SFD detects sparse features which are stable across wide-baseline views and consecutive time instants for a moving camera and dynamic scene. Sparse 3D feature matches between consecutive time instants are back-projected to each view. These features are matched temporally using SIFT descriptor to identify the moving points. Robust matching is achieved by enforcing multiple view consistency for the temporal feature correspondence in each view as illustrated in Figure 6. Each match must satisfy the constraint:
‖H_{t,v}(p) + u_{t,r}(p + H_{t,v}(p)) − u_{t,v}(p) − H_{t+1,v}(p + u_{t,v}(p))‖ < ε   (1)
where p is the feature image point in view v at frame t, H_{t,v}(p) is the disparity at frame t from view v to view r, u_{t,v}(p) is the temporal correspondence from frame t to t + 1 for view v, and ε is a small matching tolerance. The multi-view consistency check ensures that correspondences between any two views remain temporally consistent for successive frames. Matches in the 2D domain are sensitive to camera movement and occlusion, hence we map the set of refined matches into 3D to make the system robust to camera motion. The Frobenius norm is applied to the 3D point gradients in all directions [71] to obtain the 'net' motion at each sparse point. The 'net' motions between pairs of 3D points for consecutive time instants are ranked, and the top and bottom 5 percentile values are removed. Median filtering is then applied to identify the dynamic features. Figure 7 shows an example with moving cameras for Juggler [5].
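A small sketch of this consistency check (Eq. 1) is shown below. The disparity and flow fields are passed as look-up functions, the closing disparity is taken at frame t + 1 as implied by the loop closure, and the pixel tolerance eps is an assumed value; the names are illustrative only.

```python
import numpy as np

def temporally_consistent(p, H_t_v, u_t_v, u_t_r, H_t1_v, eps=2.0):
    """Multi-view consistency check of Eq. (1): transferring a feature p from
    view v to view r at frame t and then flowing it to frame t+1 must agree
    (within eps pixels) with flowing it in view v first and transferring at
    frame t+1. H_* and u_* are callables mapping a 2D point to a 2D offset
    (disparity or optical flow)."""
    p = np.asarray(p, dtype=float)
    lhs = H_t_v(p) + u_t_r(p + H_t_v(p))      # route: disparity at t, then flow in view r
    rhs = u_t_v(p) + H_t1_v(p + u_t_v(p))     # route: flow in view v, then disparity at t+1
    return np.linalg.norm(lhs - rhs) < eps
```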
Sparse-to-dense model reconstruction: Dynamic 3D feature points are used to initialize the segmentation and reconstruction of the initial model. This avoids the assumption of static backgrounds and prior scene segmentation commonly used to initialise multiple view reconstruction with a coarse visual-hull approximation [23]. Temporal coherence also provides a more accurate initialisation to overcome visual ambiguities at individual frames. Figure 8 illustrates the use of temporal coherence for reconstruction initialisation and refinement. Dynamic feature correspondence is used to identify the mesh for each dynamic object. This mesh is back-projected onto each view to obtain the region of interest. Lucas-Kanade optical flow [8] is performed on the projected mask for each view in the temporal domain using the dynamic feature correspondences over time as initialization. Dense multi-view wide-baseline correspondences from the previous frame are propagated to the current frame using the information from the flow vectors to obtain dense multi-view matches in the current frame. The matches are triangulated in 3D to obtain a refined 3D dense model of the dynamic object for the current frame. For dynamic scenes, a new object may enter the scene or a new part may appear as the object moves. To allow the introduction of new objects and object parts we also use information from the cluster of sparse points for each dynamic object. The cluster corresponding to the dynamic features is identified and static points are removed. This ensures that the set of new points contains not only the dynamic features but also the unprocessed points which represent new parts of the object. These points are added to the refined sparse model of the dynamic object. To handle new objects we detect new clusters at each time instant and consider them as dynamic regions. The sparse-to-dense initial coarse reconstruction improves the quality of segmentation and reconstruction after the refinement. Examples of the improvement in segmentation and reconstruction for the Odzemok [1] and Juggler [5] datasets are shown in Figure 9. As observed, the limbs of the people are retained by using information from the previous frames in both cases.
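The flow-based propagation of previous-frame correspondences can be illustrated with pyramidal Lucas-Kanade tracking, as sketched below. The forward-backward consistency check and the window/pyramid settings are assumptions added for robustness in this sketch; they are not stated in the paper.

```python
import numpy as np
import cv2

def propagate_matches(prev_gray, curr_gray, prev_pts):
    """Propagate correspondences from the previous frame to the current frame
    with pyramidal Lucas-Kanade flow, keeping only points that survive a
    forward-backward consistency check. prev_pts: (N, 2) float array."""
    p0 = prev_pts.astype(np.float32).reshape(-1, 1, 2)
    p1, st, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, p0, None,
                                         winSize=(21, 21), maxLevel=3)
    p0_back, st_back, _ = cv2.calcOpticalFlowPyrLK(curr_gray, prev_gray, p1, None,
                                                   winSize=(21, 21), maxLevel=3)
    fb_err = np.linalg.norm(p0 - p0_back, axis=2).ravel()
    ok = (st.ravel() == 1) & (st_back.ravel() == 1) & (fb_err < 1.0)
    return prev_pts[ok], p1.reshape(-1, 2)[ok]   # matched points in previous/current frame
```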
Joint object-based sparse-to-dense temporally coherent refinement of shape and segmentation
The initial reconstruction and segmentation from dense temporal feature correspondence is refined using a joint optimization framework. A novel shape constraint is introduced based on geodesic star convexity, which has previously been shown to give improved performance in interactive image segmentation for structures with fine details (for example a person's fingers or hair) [25]. Shape is a powerful cue for object recognition and segmentation. Shape models represented as distance transforms from a template have been used for category-specific segmentation [33]. Some works have introduced generic connectivity constraints for segmentation, showing that obtaining a globally optimal solution under the connectivity constraint is NP-hard [64]. Veksler et al. have used a shape constraint in a segmentation framework by enforcing a star convexity prior on the segmentation, and globally optimal solutions are achieved subject to this constraint [63]. The star convexity constraint ensures connectivity to seed points, and is a stronger assumption than plain connectivity. An example of a star-convex object is shown in Figure 10 along with a failure case for a non-rigid articulated object. To handle more complex objects the idea of geodesic forests with multiple star centres was introduced to obtain a globally optimal solution for interactive 2D object segmentation [25]. The main focus was to introduce shape constraints in interactive segmentation by means of a geodesic star convexity prior. The notion of connectivity was extended from Euclidean to geodesic so that paths can bend and adapt to image data as opposed to straight Euclidean rays, thus extending visibility and reducing the number of star centers required.
Fig. 10 (a) Representation of star convexity: the left object is an example of a star-convex object, with a star center marked; the object on the right with a plausible star center shows deviations from star-convexity in the fine details. (b) Multiple star semantics for joint refinement: single star center based segmentation is depicted on the left and multiple stars on the right.
The geodesic star-convexity is integrated as a constraint on the energy minimisation for joint multi-view reconstruction and segmentation [23]. In this work the shape constraint is automatically initialised for each view from the initial segmentation. The shape constraint is based on the geodesic distance with foreground object initialisations (seeds) as star centres to which the object shape is restricted. The union formed by multiple object seeds forms a geodesic forest. This allows complex shapes to be segmented. In this work, to automatically initialize the segmentation we use the sparse temporal feature correspondences as star centers (seeds) to build a geodesic forest automatically. The region outside the initial coarse reconstruction of all dynamic objects is initialized as the background seed for segmentation, as shown in Figure 12. The shape of the dynamic object is restricted by this geodesic distance constraint, which depends on the image gradient. Comparison with existing methods for multi-view segmentation demonstrates improvements in recovery of fine detail structure, as illustrated in Figure 12.
Once we have a set of dense 3D points for each dynamic object, Poisson surface reconstruction is performed on the set of sparse points to obtain an initial coarse model of each dynamic region R, which is subsequently refined using the optimization framework (Section 3.4.1).
Optimization on initial coarse object reconstruction based on geodesic star convexity
The depth of the initial coarse reconstruction estimate is refined per view for each dynamic object at a per pixel level. View-dependent optimisation of depth is performed with respect to each camera which is robust to errors in camera calibration and initialisation. Calibration inaccuracies produce inconsistencies limiting the applicability of global reconstruction techniques which simultaneously consider all views; view-dependent techniques are more tolerant to such inaccuracies because they only use a subset of the views for reconstruction of depth from each camera view.
Our goal is to assign an accurate depth value from a set of depth values D = {d_1, ..., d_{|D|−1}, U} and assign a layer label from a set of label values L = {l_1, ..., l_{|L|}} to each pixel p for the region R of each dynamic object. Each d_i is obtained by sampling the optical ray from the camera and U is an unknown depth value to handle occlusions. This is achieved by optimisation of a joint cost function [23] for label (segmentation) and depth (reconstruction):
E(l, d) = λ_{data} E_{data}(d) + λ_{contrast} E_{contrast}(l) + λ_{smooth} E_{smooth}(l, d) + λ_{color} E_{color}(l)   (2)
where d is the depth at each pixel, l is the layer label for multiple objects and the cost function terms are defined in Section 3.4.2. The equation consists of four terms: the data term is for the photo-consistency scores, the smoothness term is to avoid sudden peaks in depth and maintain consistency, and the color and contrast terms are to identify the object boundaries. Data and smoothness terms are commonly used to solve reconstruction problems [7] and the color and contrast terms are used for segmentation [34]. This is solved subject to a geodesic star-convexity constraint on the labels l. A label l is star convex with respect to center c if every point p ∈ l is visible to the star center c via l in the image x, which can be expressed as an energy cost:
E^*(l | x, c) = Σ_{p∈R} Σ_{q∈Γ_{c,p}} E_{p,q}(l_p, l_q)   (3)
∀q ∈ Γ_{c,p}: E_{p,q}(l_p, l_q) = ∞ if l_p ≠ l_q, and 0 otherwise   (4)
where ∀p ∈ R : p ∈ l ⇔ l_p = 1, and Γ_{c,p} is the geodesic path joining p to the star center c, given by:
Γ_{c,p} = argmin_{Γ ∈ P_{c,p}} L(Γ)   (5)
where P_{c,p} denotes the set of all discrete paths between c and p and L(Γ) is the length of the discrete geodesic path as defined in [25]. In the case of image segmentation the gradients in the underlying image provide the information to compute the discrete paths between each pixel and the star centers, and L(Γ) is defined below:
L(Γ) = Σ_{i=1}^{N_D − 1} √( (1 − δ_g) j(Γ_i, Γ_{i+1})² + δ_g ‖∇I(Γ_i)‖² )   (6)
where Γ is an arbitrary parametrized discrete path with N_D pixels given by Γ_1, Γ_2, · · ·, Γ_{N_D}, j(Γ_i, Γ_{i+1}) is the Euclidean distance between successive pixels, and the quantity ‖∇I(Γ_i)‖² is a finite difference approximation of the image gradient between the points Γ_i, Γ_{i+1}. The parameter δ_g weights the Euclidean distance against the geodesic length. Using the above definition, the geodesic distance is defined as in Equation 5.
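The geodesic path length of Eq. (6) induces a distance transform that can be computed with Dijkstra's algorithm on the pixel grid, as sketched below. The 8-connected neighbourhood and the value of delta_g are assumptions for illustration; the paper does not specify these details here.

```python
import heapq
import numpy as np

def geodesic_distance(image_gray, star_centers, delta_g=0.7):
    """Geodesic distance transform following Eq. (6): the cost of stepping
    between neighbouring pixels mixes Euclidean length and the local intensity
    difference, weighted by delta_g. Dijkstra from all star centers gives,
    for every pixel, the length of the shortest geodesic path to the forest."""
    h, w = image_gray.shape
    dist = np.full((h, w), np.inf)
    heap = []
    for (y, x) in star_centers:
        dist[y, x] = 0.0
        heapq.heappush(heap, (0.0, y, x))
    steps = [(-1, 0), (1, 0), (0, -1), (0, 1), (-1, -1), (-1, 1), (1, -1), (1, 1)]
    while heap:
        d, y, x = heapq.heappop(heap)
        if d > dist[y, x]:
            continue
        for dy, dx in steps:
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                spatial = np.hypot(dy, dx)
                grad = abs(float(image_gray[ny, nx]) - float(image_gray[y, x]))
                cost = np.sqrt((1 - delta_g) * spatial ** 2 + delta_g * grad ** 2)
                if d + cost < dist[ny, nx]:
                    dist[ny, nx] = d + cost
                    heapq.heappush(heap, (d + cost, ny, nx))
    return dist
```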
An extension of single star-convexity is to use multiple stars to define a more general class of shapes. Introduction of multiple star centers reduces the path lengths and increases the visibility of small parts of objects, like small limbs, as shown in Figure 10. Hence Equation 3 is extended to multiple stars. A label l is star convex with respect to center c_i if every point p ∈ l is visible to a star center c_i in the set C = {c_1, ..., c_{N_T}} via l in the image x, where N_T is the number of star centers [25]. This is expressed as an energy cost:
E^*(l | x, C) = Σ_{p∈R} Σ_{q∈Γ_{c,p}} E_{p,q}(l_p, l_q)   (7)
In our case all the correct temporal sparse feature correspondences are used as star centers, hence the segmentation will include all the points which are visible to these sparse features via geodesic distances in the region R, thereby employing the shape constraint. Since the star centers are selected automatically, the method is unsupervised. A comparison of the segmentation constraint with geodesic multi-star convexity against no constraint and a Euclidean multi-star convexity constraint is shown in Figure 11. The figure demonstrates the usefulness of the proposed approach with an improvement in segmentation quality on non-rigid complex objects. The energy in Equation 2 is minimized as follows:
min_{(l,d)} E(l, d) s.t. l ∈ S^*(C) ⇔ min_{(l,d)} E(l, d) + E^*(l | x, C)   (8)
where S^*(C) is the set of all shapes which lie within the geodesic distances with respect to the centers in C. Optimization of Equation 8, subject to each pixel p in the region R being at a geodesic distance Γ_{c,p} from the star centers in the set C, is performed using the α-expansion algorithm for a pixel p by iterating through the set of labels in L × D [10]. Graph-cut is used to obtain a local optimum [9].
Fig. 12 Geodesic star convexity: a region R with star centers C connected with geodesic distance Γ_{c,p}. Segmentation results with and without geodesic star convexity based optimization are shown on the right for the Juggler dataset.
The improvements in the results using geodesic star convexity in the framework are shown in Figure 12, and using temporal coherence in Figure 9. Figure 13 shows the improvements using the geodesic shape constraint, temporal coherence and the combined proposed approach for the Dance2 [2] dataset.
Energy cost function for joint segmentation and reconstruction
For completeness, in this section we define each of the terms in Equation 2. These are based on terms previously used for joint optimisation over depth for each pixel introduced in [42], with modification of the color matching term to improve robustness and extension to multiple labels.
Matching term: The data term for matching between views is specified as a measure of photo-consistency (Figure 14) as follows:
E_{data}(d) = Σ_{p∈P} e_{data}(p, d_p), with e_{data}(p, d_p) = M(p, q) = Σ_{i∈O_k} m(p, q) if d_p ≠ U, and M_U if d_p = U   (9)
where P is the 4-connected neighbourhood of pixel p, M_U is the fixed cost of labelling a pixel unknown and q denotes the projection of the hypothesised point P in an auxiliary camera, where P is a 3D point along the optical ray passing through pixel p located at a distance d_p from the reference camera. O_k is the set of the k most photo-consistent pairs. For textured scenes Normalized Cross Correlation (NCC) over a square window is a common choice [53]. The NCC values range from -1 to 1 and are mapped to non-negative values using the function 1 − NCC.
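The photo-consistency cost 1 − NCC used in the matching term can be computed as in the short sketch below; the window contents are supplied by the caller (e.g. a 15 × 15 patch, as used later for c_min) and the fallback value for textureless windows is an assumption.

```python
import numpy as np

def ncc_cost(patch_ref, patch_aux):
    """Photo-consistency cost 1 - NCC of two equally sized windows, so the
    cost lies in [0, 2] with 0 meaning a perfect match."""
    a = patch_ref.astype(np.float64).ravel()
    b = patch_aux.astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    if denom < 1e-12:                     # textureless window: treat as uninformative
        return 1.0
    return 1.0 - float(a @ b) / denom
```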
A maximum likelihood measure [40] is used in this function for confidence value calculation between the center pixel p and the other pixels q and is based on the survey on confidence measures for stereo [28]. The measure is defined as:
m(p, q) = exp(−c_{min} / 2σ_i²) / Σ_{(p,q)∈N} exp(−(1 − NCC(p, q)) / 2σ_i²)   (10)
where σ_i² is the noise variance for each auxiliary camera i; this parameter was fixed to 0.3. N denotes the set of interacting pixels in P. c_{min} is the minimum cost for a pixel, obtained by evaluating the function (1 − NCC(·, ·)) on a 15 × 15 window.
Contrast term: Segmentation boundaries in images tend to align with contours of high contrast and it is desirable to represent this as a constraint in stereo matching. A consistent interpretation of segmentation-prior and contrast-likelihood is used from [34]. We use a modified version of this interpretation in our formulation to preserve edges by using bilateral filtering [61] instead of Gaussian filtering. The contrast term is as follows:
E_{contrast}(l) = Σ_{(p,q)∈N} e_{contrast}(p, q, l_p, l_q)   (11)
e_{contrast}(p, q, l_p, l_q) = 0 if l_p = l_q, and (ε + exp(−C(p, q))) / (1 + ε) otherwise   (12)
where ‖·‖ is the L2 norm and ε = 1. The simplest choice for C(p, q) would be the squared Euclidean color distance between the intensities at pixels p and q, as used in [23]. We propose a term for better segmentation: C(p, q) = ‖B(p) − B(q)‖² / (2σ_{pq}² d_{pq}²), where B(·) represents the bilateral filter, d_{pq} is the Euclidean distance between p and q, and σ_{pq} = ⟨‖B(p) − B(q)‖² / d_{pq}²⟩. This term helps to remove regions with low photo-consistency scores and weak edges, and thereby helps in estimating the object boundaries.
Smoothness term: This term is inspired by [23] and it ensures the depth labels vary smoothly within the object reducing noise and peaks in the reconstructed surface. This is useful when the photo-consistency score is low and insufficient to assign depth to a pixel ( Figure 14). It is defined as:
E_{smooth}(l,d) = \sum_{(p,q) \in N} e_{smooth}(l_p, d_p, l_q, d_q) \qquad (13)
e_{smooth}(l_p, d_p, l_q, d_q) = \begin{cases} \min(|d_p - d_q|, d_{max}), & \text{if } l_p = l_q \text{ and } d_p, d_q \neq U \\ 0, & \text{if } l_p = l_q \text{ and } d_p = d_q = U \\ d_{max}, & \text{otherwise} \end{cases} \qquad (14)
d max is set to 50 times the size of the depth sampling step for all datasets.
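A direct translation of Equations 13–14 is shown below, with the unknown label U represented as NaN and an illustrative depth sampling step:

```python
import numpy as np

UNKNOWN = np.nan  # stands for the unknown depth label U

def e_smooth(l_p, d_p, l_q, d_q, d_max):
    """Pairwise smoothness cost of Eq. 14 for neighbouring pixels p and q."""
    if l_p == l_q:
        if np.isnan(d_p) and np.isnan(d_q):          # both unknown within one object
            return 0.0
        if not np.isnan(d_p) and not np.isnan(d_q):  # both known: truncated linear
            return min(abs(d_p - d_q), d_max)
    return d_max                                     # label change or mixed known/unknown

# d_max is 50 depth-sampling steps in the paper; the step size here is illustrative.
step = 0.01
d_max = 50 * step
print(e_smooth(1, 0.30, 1, 0.31, d_max))   # small jump inside an object -> ~0.01
print(e_smooth(1, 0.30, 2, 0.31, d_max))   # across a label boundary -> d_max
```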
Color term: This term is computed using the negative log likelihood [9] of the color models learned from the foreground and background markers. The star centers obtained from the sparse 3D features are the foreground markers and, for the background markers, we consider the region outside the projected initial coarse reconstruction for each view. The color models use GMMs with 5 components each for the foreground and background, mixed with uniform color models [14], since the markers are sparse.
E_{color}(l) = \sum_{p \in P} -\log P(I_p | l_p) \qquad (15)
where P(I_p | l_p = l_i) denotes the probability of pixel p in the reference image belonging to layer l_i.
Fig. 15 Comparison of segmentation on benchmark static datasets using geodesic star-convexity.
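A hedged sketch of this colour term using scikit-learn Gaussian mixtures is given below; the 5-component GMMs and the mixture with a uniform colour model follow the text, while the mixing weight, the synthetic marker samples and the helper names are assumptions:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_color_model(pixels, n_components=5):
    """Fit a 5-component GMM to RGB samples taken at the foreground (star centre)
    or background markers."""
    return GaussianMixture(n_components=n_components, covariance_type="full").fit(pixels)

def color_cost(gmm, pixels, uniform_weight=0.1):
    """-log P(I_p | l_p) of Eq. 15, mixing the GMM with a uniform colour model
    over [0, 255]^3 because the markers are sparse (mixing weight is illustrative)."""
    gmm_like = np.exp(gmm.score_samples(pixels))            # per-pixel likelihood
    uniform_like = 1.0 / (256.0 ** 3)
    like = (1.0 - uniform_weight) * gmm_like + uniform_weight * uniform_like
    return -np.log(like + 1e-300)

# Toy usage: reddish foreground vs. greenish background marker samples.
rng = np.random.default_rng(0)
fg = rng.normal([200, 40, 40], 15, size=(300, 3))
bg = rng.normal([40, 180, 40], 15, size=(300, 3))
fg_gmm, bg_gmm = fit_color_model(fg), fit_color_model(bg)
test = np.array([[190.0, 50.0, 45.0]])
print(color_cost(fg_gmm, test), color_cost(bg_gmm, test))   # lower cost under the fg model
```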
Results and Performance Evaluation
The proposed system is tested on publicly available multi-view research datasets of indoor and outdoor scenes; details of the datasets are given in Table 1. The parameters used for all the datasets are defined in Table 2. More information is available on the website 1.
Multi-view segmentation evaluation
Segmentation is evaluated against the state-of-the-art methods for multi-view segmentation, Kowdle [35] and Djelouah [16], for static scenes, and against the joint segmentation and reconstruction methods Mustafa [42] (per frame) and Guillemaut [24] (using temporal information) for both static and dynamic scenes. For static multi-view data the segmentation is initialised as detailed in Section 3.1, followed by refinement using the constrained optimisation of Section 3.4.1. For dynamic scenes the full pipeline with temporal coherence is used, as detailed in Section 3. Ground-truth is obtained by manually labelling the foreground for the Office, Dance1 and Odzemok datasets; for the other datasets ground-truth is available online. We initialize all approaches with the same proposed initial coarse reconstruction for a fair comparison.
To evaluate the segmentation we measure completeness as the ratio of intersection to union with ground-truth [35]. Comparisons are shown in Table 3 and Figures 15 and 16 for static benchmark datasets. Comparisons for dynamic scene segmentation are shown in Table 4 and Figures 17 and 18. Results for multi-view segmentation of static scenes are more accurate than Djelouah, Mustafa, and Guillemaut, and comparable to Kowdle, with improved segmentation of some detail such as the back of the chair.
For dynamic scenes, the geodesic star convexity based optimization together with temporal consistency gives improved segmentation of fine detail such as the legs of the table in the Office dataset and the limbs of the person in the Juggler, Magician and Dance2 datasets, as shown in Figures 17 and 18. This overcomes limitations of previous multi-view per-frame segmentation.
Reconstruction evaluation
Reconstruction results obtained using the proposed method are compared against Mustafa [42], Guillemaut [24], and Furukawa [19] for dynamic sequences. Furukawa [19] is a per-frame multi-view wide-baseline stereo approach which ranks highly on the Middlebury benchmark [53] but does not refine the segmentation.
The depth maps obtained using the proposed approach are compared against Mustafa and Guillemaut in Figure 19. They are smoother, with lower reconstruction noise, than those of the state-of-the-art methods. Figures 20 and 21 present qualitative and quantitative comparisons of our method with the state-of-the-art approaches.
Comparison of reconstructions demonstrates that the proposed method gives consistently more complete and accurate models. The colour maps highlight the quantitative differences in reconstruction. As far as we are aware, no ground-truth data exist for dynamic scene reconstruction from real multi-view video. In Figure 21 we present a comparison with the reference mesh available with the Dance2 dataset, reconstructed using a visual-hull approach. This comparison demonstrates improved reconstruction of fine detail with the proposed technique.
In contrast to all previous approaches the proposed method gives temporally coherent 4D model reconstructions with dense surface correspondence over time. The introduction of temporal coherence constrains the reconstruction in regions which are ambiguous in a particular frame, such as the right leg of the juggler in Figure 20, resulting in more complete shape. Figure 22 shows three complete scene reconstructions with 4D models of multiple objects. The Juggler and Magician sequences are reconstructed from moving handheld cameras.
Computational Complexity: Computation times for the proposed approach vs other methods are presented in Table 5. The proposed approach to reconstruct temporally coherent 4D models is comparable in computation time to per-frame multiple view reconstruction and gives a ∼50% reduction in computation cost compared to previous joint segmentation and reconstruction approaches using a known background. This efficiency is achieved through improved per-frame initialisation based on temporal propagation and the introduction of the geodesic star constraint in joint optimisation. Further results can be found in the supplementary material.
Temporal coherence: A frame-to-frame alignment is obtained using the proposed approach as shown in Figure 23 for the Dance1 and Juggler datasets. The meshes of the dynamic object in Frame 1 and Frame 9 are color coded in both datasets and the color is propagated to the next frame using the dense temporal coherence information. The color in the different parts of the object is retained in the next frame, as seen in the figure. The proposed approach obtains a sequential temporal alignment which drifts with large movement of the object, hence successive frames are shown in the figure.
Limitations: As with previous dynamic scene reconstruction methods, the proposed approach has a number of limitations: persistent ambiguities in appearance between objects will degrade the improvement achieved with temporal coherence; scenes with a large number of inter-occluding dynamic objects will degrade performance; and the approach requires sufficient wide-baseline views to cover the scene.
Applications to immersive content production
The 4D meshes generated from the proposed approach can be used for applications in immersive content production such as FVV rendering and VR. This section demonstrates the results of these applications.
Free-viewpoint rendering
In FVV, the virtual viewpoint is controlled interactively by the user. The appearance of the reconstruction is sampled and interpolated directly from the captured camera images using cameras located close to the virtual viewpoint [57].
The proposed joint segmentation and reconstruction framework generates per-view silhouettes and a temporally coherent 4D reconstruction at each time instant of the input video sequence. This representation of the dynamic sequence is used for FVV rendering. To create FVV, a view-dependent surface texture is computed based on the user-selected virtual view. This virtual view is obtained by combining the information from camera views in close proximity to the virtual viewpoint [57]. FVV rendering gives the user the freedom to interactively choose a novel viewpoint in space from which to observe the dynamic scene, and reproduces fine-scale temporal surface details, such as the movement of hair and clothing wrinkles, that may not be modelled geometrically. An example of a reconstructed scene and the camera configuration is shown in Figure 24.
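As a simple illustration of how cameras in close proximity to the virtual viewpoint could be weighted for view-dependent texturing, the sketch below selects the k angularly nearest real cameras and normalises inverse-angle weights; this conveys only the weighting idea, not the renderer of [57], and the function names, fall-off and toy configuration are assumptions:

```python
import numpy as np

def view_blend_weights(virtual_dir, camera_dirs, k=2):
    """Blend the k real cameras whose viewing directions are closest (in angle)
    to the virtual viewpoint direction; closer cameras receive larger weights."""
    v = virtual_dir / np.linalg.norm(virtual_dir)
    C = camera_dirs / np.linalg.norm(camera_dirs, axis=1, keepdims=True)
    ang = np.arccos(np.clip(C @ v, -1.0, 1.0))        # angle to each real camera
    nearest = np.argsort(ang)[:k]
    w = 1.0 / (ang[nearest] + 1e-6)                   # inverse-angle fall-off
    return nearest, w / w.sum()

# Toy usage: a virtual view lying between two of four surrounding cameras.
cams = np.array([[1, 0, 0], [0, 1, 0], [-1, 0, 0], [0, -1, 0]], float)
idx, w = view_blend_weights(np.array([0.8, 0.6, 0.0]), cams)
print(idx, w)   # cameras 0 and 1 selected, camera 0 weighted slightly higher
```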
A qualitative evaluation of images synthesised using FVV is shown in Figures 25 and 26. These compare reconstruction results rendered from novel viewpoints using the proposed method against Mustafa [43] and Guillemaut [23] on publicly available datasets. This is particularly important for wide-baseline camera configurations, where this technique can be used to synthesize intermediate viewpoints where it may not be practical or economical to physically locate real cameras.
Virtual reality rendering
There is a growing demand for photo-realistic content in the creation of immersive VR experiences. The 4D temporally coherent reconstructions of dynamic scenes obtained using the proposed approach enable the creation of photo-realistic digital assets that can be incorporated into VR environments using game engines such as Unity and Unreal Engine, as shown in Figure 27 for a single frame of four datasets and for a series of frames of the Dance1 dataset.
In order to efficiently render the reconstructions in a game engine for applications in VR, a UV texture atlas is extracted using the 4D meshes from the proposed approach as a geometric proxy. The UV texture atlas at each frame is applied to the model at render time in Unity for viewing in a VR headset. A UV texture atlas is constructed by projectively texturing and blending multiple view frames onto a 2D unwrapped UV texture atlas, see Figure 28. This is performed once for each static object and at each time instance for dynamic objects, allowing efficient storage and real-time playback of static and dynamic textured reconstructions within a VR headset.
Conclusion
This paper introduced a novel technique to automatically segment and reconstruct dynamic scenes captured from multiple moving cameras in general dynamic uncontrolled environments without any prior on background appearance or structure. The proposed automatic initialization was used to identify and initialize the segmentation and reconstruction of multiple objects. A framework was presented for temporally coherent 4D model reconstruction of dynamic scenes from a set of wide-baseline moving cameras. The approach gives a complete model of all static and dynamic non-rigid objects in the scene. Temporal coherence for dynamic objects addresses limitations of previous per-frame reconstruction, giving improved reconstruction and segmentation together with dense temporal surface correspondence for dynamic objects. A sparse-to-dense approach is introduced to establish temporal correspondence for non-rigid objects using robust sparse feature matching to initialise dense optical flow, providing an initial segmentation and reconstruction. Joint refinement of object reconstruction and segmentation is then performed using a multiple view optimisation with a novel geodesic star convexity constraint that gives improved shape estimation and is computationally efficient. Comparison against state-of-the-art techniques for multiple view segmentation and reconstruction demonstrates significant improvement in performance for complex scenes. The approach enables reconstruction of 4D models for complex scenes which has not been demonstrated previously. | 8,667
1907.08195 | 2963385316 | Existing techniques for dynamic scene reconstruction from multiple wide-baseline cameras primarily focus on reconstruction in controlled environments, with fixed calibrated cameras and strong prior constraints. This paper introduces a general approach to obtain a 4D representation of complex dynamic scenes from multi-view wide-baseline static or moving cameras without prior knowledge of the scene structure, appearance, or illumination. Contributions of the work are: An automatic method for initial coarse reconstruction to initialize joint estimation; Sparse-to-dense temporal correspondence integrated with joint multi-view segmentation and reconstruction to introduce temporal coherence; and a general robust approach for joint segmentation refinement and dense reconstruction of dynamic scenes by introducing shape constraint. Comparison with state-of-the-art approaches on a variety of complex indoor and outdoor scenes, demonstrates improved accuracy in both multi-view segmentation and dense reconstruction. This paper demonstrates unsupervised reconstruction of complete temporally coherent 4D scene models with improved non-rigid object segmentation and shape reconstruction and its application to free-viewpoint rendering and virtual reality. | Joint segmentation and reconstruction methods incorporate estimation of segmentation or matting with reconstruction to provide a combined solution. Joint refinement avoids the propagation of errors between the two stages thereby making the solution more robust. Also, cues from segmentation and reconstruction can be combined efficiently to achieve more accurate results. The first multi-view joint estimation system was proposed by @cite_13 which used iterative gradient descent to perform an energy minimization. A number of approaches were introduced for joint formulation in static scenes and one recent work used training data to classify the segments @cite_40 . The focus shifted to joint segmentation and reconstruction for rigid objects in indoor and outdoor environments. These approaches used a variety of techniques such as patch-based refinement @cite_38 @cite_50 and fixating cameras on the object of interest @cite_60 for reconstructing rigid objects in the scene. However, these are either limited to static scenes @cite_40 @cite_1 or process each frame independently thereby failing to enforce temporal consistency @cite_60 @cite_30 . | {
"abstract": [
"Current state-of-the-art image-based scene reconstruction techniques are capable of generating high-fidelity 3D models when used under controlled capture conditions. However, they are often inadequate when used in more challenging environments such as sports scenes with moving cameras. Algorithms must be able to cope with relatively large calibration and segmentation errors as well as input images separated by a wide-baseline and possibly captured at different resolutions. In this paper, we propose a technique which, under these challenging conditions, is able to efficiently compute a high-quality scene representation via graph-cut optimisation of an energy function combining multiple image cues with strong priors. Robustness is achieved by jointly optimising scene segmentation and multiple view reconstruction in a view-dependent manner with respect to each input camera. Joint optimisation prevents propagation of errors from segmentation to reconstruction as is often the case with sequential approaches. View-dependent processing increases tolerance to errors in through-the-lens calibration compared to global approaches. We evaluate our technique in the case of challenging outdoor sports scenes captured with manually operated broadcast cameras as well as several indoor scenes with natural background. A comprehensive experimental evaluation including qualitative and quantitative results demonstrates the accuracy of the technique for high quality segmentation and reconstruction and its suitability for free-viewpoint video under these difficult conditions.",
"In this paper, we present a new framework for three-dimensional (3D) reconstruction of multiple rigid objects from dynamic scenes. Conventional 3D reconstruction from multiple views is applicable to static scenes, in which the configuration of objects is fixed while the images are taken. In our framework, we aim to reconstruct the 3D models of multiple objects in a more general setting where the configuration of the objects varies among views. We solve this problem by object-centered decomposition of the dynamic scenes using unsupervised co-recognition approach. Unlike conventional motion segmentation algorithms that require small motion assumption between consecutive views, co-recognition method provides reliable accurate correspondences of a same object among unordered and wide-baseline views. In order to segment each object region, we benefit from the 3D sparse points obtained from the structure-from-motion. These points are reliable and serve as automatic seed points for a seeded-segmentation algorithm. Experiments on various real challenging image sequences demonstrate the effectiveness of our approach, especially in the presence of abrupt independent motions of objects.",
"We propose an algorithm for automatically obtaining a segmentation of a rigid object in a sequence of images that are calibrated for camera pose and intrinsic parameters. Until recently, the best segmentation results have been obtained by interactive methods that require manual labelling of image regions. Our method requires no user input but instead relies on the camera fixating on the object of interest during the sequence. We begin by learning a model of the object's colour, from the image pixels around the fixation points. We then extract image edges and combine these with the object colour information in a volumetric binary MRF model. The globally optimal segmentation of 3D space is obtained by a graph-cut optimisation. From this segmentation an improved colour model is extracted and the whole process is iterated until convergence. Our first finding is that the fixation constraint, which requires that the object of interest is more or less central in the image, is enough to determine what to segment and initialise an automatic segmentation process. Second, we find that by performing a single segmentation in 3D, we implicitly exploit a 3D rigidity constraint, expressed as silhouette coherency, which significantly improves silhouette quality over independent 2D segmentations. We demonstrate the validity of our approach by providing segmentation results on real sequences.",
"",
"Both image segmentation and dense 3D modeling from images represent an intrinsically ill-posed problem. Strong regularizers are therefore required to constrain the solutions from being 'too noisy'. Unfortunately, these priors generally yield overly smooth reconstructions and or segmentations in certain regions whereas they fail in other areas to constrain the solution sufficiently. In this paper we argue that image segmentation and dense 3D reconstruction contribute valuable information to each other's task. As a consequence, we propose a rigorous mathematical framework to formulate and solve a joint segmentation and dense reconstruction problem. Image segmentations provide geometric cues about which surface orientations are more likely to appear at a certain location in space whereas a dense 3D reconstruction yields a suitable regularization for the segmentation problem by lifting the labeling from 2D images to 3D space. We show how appearance-based cues and 3D surface orientation priors can be learned from training data and subsequently used for class-specific regularization. Experimental results on several real data sets highlight the advantages of our joint formulation.",
"When trying to extract 3D scene information and camera motion from an image sequence alone, it is often necessary to cope with independently moving objects. Recent research has unveiled some of the mathematical foundations of the problem, but a general and practical algorithm, which can handle long, realistic sequences, is still missing. In this paper, we identify the necessary parts of such an algorithm, highlight both unexplored theoretical issues and practical challenges, and propose solutions. Theoretical issues include proper handling of different situations, in which the number of independent motions changes: objects can enter the scene, objects previously moving together can split and follow independent trajectories, or independently moving objects can merge into one common motion. We derive model scoring criteria to handle these changes in the number of segments. A further theoretical issue is the resolution of the relative scale ambiguity between such changes. Practical issues include robust 3D reconstruction of freely moving foreground objects, which often have few and short feature tracks. The proposed framework simultaneously tracks features, groups them into rigidly moving segments, and reconstructs all segments in 3D. Such an online approach, as opposed to batch processing techniques, which first track features, and then perform segmentation and reconstruction, is vital in order to handle small foreground objects.",
"This paper formulates and solves a new variant of the stereo correspondence problem: simultaneously recovering the disparities, true colors, and opacities of visible surface elements. This problem arises in newer applications of stereo reconstruction, such as view interpolation and the layering of real imagery with synthetic graphics for special effects and virtual studio applications. While this problem is intrinsically more difficult than traditional stereo correspondence, where only the disparities are being recovered, it provides a principled way of dealing with commonly occurring problems such as occlusions and the handling of mixed (foreground background) pixels near depth discontinuities. It also provides a novel means for separating foreground and background objects (matting), without the use of a special blue screen. We formulate the problem as the recovery of colors and opacities in a generalized 3-D (x, y, d) disparity space, and solve the problem using a combination of initial evidence aggregation followed by iterative energy minimization."
],
"cite_N": [
"@cite_30",
"@cite_38",
"@cite_60",
"@cite_1",
"@cite_40",
"@cite_50",
"@cite_13"
],
"mid": [
"2126373711",
"2065311267",
"2120715970",
"",
"2150134683",
"2116706028",
"1906648922"
]
} | Temporally coherent general dynamic scene reconstruction | Fig. 1 Temporally consistent scene reconstruction for the Odzemok dataset, color-coded to show the scene object segmentation obtained.
effects in film and broadcast production and for content production in virtual reality. The ultimate goal of modelling dynamic scenes from multiple cameras is automatic understanding of real-world scenes from distributed camera networks, for applications in robotics and other autonomous systems. Existing methods have applied multiple view dynamic scene reconstruction techniques in controlled environments with a known background or chroma-key studio [23,20,56,60]. Other multiple view stereo techniques require a relatively dense static camera network resulting in a large number of cameras [19]. Extensions to more general outdoor scenes [5,32,60] use a prior reconstruction of the static geometry from images of the empty environment. However these methods either require accurate segmentation of dynamic foreground objects, or prior knowledge of the scene structure and background, or are limited to static cameras and controlled environments. Scenes are reconstructed semi-automatically, requiring manual intervention for segmentation/rotoscoping, and result in temporally incoherent per-frame mesh geometries. Temporally coherent geometry with known surface correspondence across the sequence is essential for real-world applications and a compact representation.
Our paper addresses the limitations of existing approaches by introducing a methodology for unsupervised temporally coherent dynamic scene reconstruction from multiple wide-baseline static or moving camera views without prior knowledge of the scene structure or background appearance. This temporally coherent dynamic scene reconstruction is demonstrated to work in applications for immersive content production such as free-viewpoint video (FVV) and virtual reality (VR). This work combines two previously published papers in general dynamic reconstruction [42] and temporally coherent reconstruction [43] into a single framework and demonstrates the application of this novel unsupervised joint segmentation and reconstruction to immersive content production, FVV and VR (Section 5).
The input is a sparse set of synchronised videos from multiple moving cameras of an unknown dynamic scene without prior scene segmentation or camera calibration. Our first contribution is automatic initialisation of camera calibration and sparse scene reconstruction from sparse feature detection and matching between pairs of frames. An initial coarse reconstruction and segmentation of all scene objects is obtained from sparse features matched across multiple views. This eliminates the requirement for prior knowledge of the background scene appearance or structure. Our second contribution is a sparse-to-dense reconstruction and segmentation approach that introduces temporal coherence at every frame. We exploit temporal coherence of the scene to overcome visual ambiguities inherent in single-frame reconstruction and multiple view segmentation methods for general scenes. Temporal coherence refers to the correspondence between the 3D surfaces of all objects observed over time. Our third contribution is spatio-temporal alignment to estimate dense surface correspondence for 4D reconstruction. A geodesic star convexity shape constraint is introduced for the shape segmentation to improve the quality of segmentation for non-rigid objects with complex appearance. The proposed approach overcomes the limitations of existing methods, allowing an unsupervised temporally coherent 4D reconstruction of complete models for general dynamic scenes.
The scene is automatically decomposed into a set of spatio-temporally coherent objects as shown in Figure 1, where the resulting 4D scene reconstruction has temporally coherent labels and surface correspondence for each object. This can be used for free-viewpoint video rendering and imported into a game engine for VR experience production. The contributions explained above can be summarized as follows:
- Unsupervised temporally coherent dense reconstruction and segmentation of general complex dynamic scenes from multiple wide-baseline views.
- Automatic initialization of dynamic object segmentation and reconstruction from sparse features.
- A framework for space-time sparse-to-dense segmentation, reconstruction and temporal correspondence.
- Robust spatio-temporal refinement of dense reconstruction and segmentation integrating error-tolerant photo-consistency and edge information using geodesic star convexity.
- Robust and computationally efficient reconstruction of dynamic scenes by exploiting temporal coherence.
- Real-world applications of 4D reconstruction to free-viewpoint video rendering and virtual reality.
This paper is structured as follows: First related work is reviewed. The methodology for general dynamic scene reconstruction is then introduced. Finally a thorough qualitative and quantitative evaluation and comparison to the state-of-the-art on challenging datasets is presented.
Related Work
Temporally coherent reconstruction is a challenging task for general dynamic scenes due to a number of factors such as motion blur, articulated, non-rigid and large motion of multiple people, resolution differences between camera views, occlusions, wide-baselines, errors in calibration and cluttered dynamic backgrounds. Segmentation of dynamic objects from such scenes is difficult because of foreground and background complexity and the likelihood of overlapping background and foreground color distributions. Reconstruction is also challenging due to limited visual cues and relatively large errors affecting both calibration and extraction of a globally consistent solution. This section reviews previous work on dynamic scene reconstruction and segmentation.
Dynamic Scene Reconstruction
Dense dynamic shape reconstruction is a fundamental problem and heavily studied area in the field of computer vision. Recovering accurate 3D models of a dynamically evolving, non-rigid scene observed by multiple synchronised cameras is a challenging task. Research on multiple view dense dynamic reconstruction has primarily focused on indoor scenes with controlled illumination and static backgrounds, extending methods for multiple view reconstruction of static scenes [53] to sequences [62]. Deep learning based approaches have been introduced to estimate the shape of dynamic objects from minimal camera views in constrained environments [29,68] and for rigid objects [58]. In the last decade, focus has shifted to more challenging outdoor scenes captured with both static and moving cameras. Reconstruction of non-rigid dynamic objects in uncontrolled natural environments is challenging due to the scene complexity, illumination changes, shadows, occlusion and dynamic backgrounds with clutter such as trees or people. Methods have been proposed for multi-view reconstruction [65,39,37] requiring a large number of closely spaced cameras for surface estimation of dynamic shape. Practical applications require relatively sparse moving cameras to acquire coverage over large areas such as outdoors. A number of approaches for multi-view reconstruction of outdoor scenes require an initial silhouette segmentation [67,32,22,23] to allow visual-hull reconstruction. Most of these approaches to general dynamic scene reconstruction fail in the case of complex (cluttered) scenes captured with moving cameras.
A recent work proposed reconstruction of dynamic fluids [50] for static cameras. Another work used RGB-D cameras to obtain reconstruction of non-rigid surfaces [55]. Pioneering research in general dynamic scene reconstruction from multiple handheld wide-baseline cameras [5,60] exploited prior reconstruction of the background scene to allow dynamic foreground segmentation and reconstruction. Recent work [46] estimates the shape of dynamic objects from handheld cameras by exploiting GANs. However, these approaches either work for static/indoor scenes or exploit strong prior assumptions such as silhouette information, known background or scene structure. Also, all these approaches give per-frame reconstructions leading to temporally incoherent geometries. Our aim is to perform temporally coherent dense reconstruction of unknown dynamic non-rigid scenes automatically without strong priors or limitations on scene structure.
Joint Segmentation and Reconstruction
Many of the existing multi-view reconstruction approaches rely on a two-stage sequential pipeline where foreground or background segmentation is initially performed independently with respect to each camera, and then used as input to obtain a visual hull for multi-view reconstruction. The problem with this approach is that the errors introduced at the segmentation stage cannot be recovered and are propagated to the reconstruction stage, reducing the final reconstruction quality. Segmentation from multiple wide-baseline views has been proposed by exploiting appearance similarity [17,38,70]. These approaches assume static backgrounds and different colour distributions for the foreground and background [52,17], which limits applicability for general scenes.
Joint segmentation and reconstruction methods incorporate estimation of segmentation or matting with reconstruction to provide a combined solution. Joint refinement avoids the propagation of errors between the two stages thereby making the solution more robust. Also, cues from segmentation and reconstruction can be combined efficiently to achieve more accurate results. The first multi-view joint estimation system was proposed by Szeliski et al. [59] which used iterative gradient descent to perform an energy minimization. A number of approaches were introduced for joint formulation in static scenes and one recent work used training data to classify the segments [69]. The focus shifted to joint segmentation and reconstruction for rigid objects in indoor and outdoor environments. These approaches used a variety of techniques such as patch-based refinement [54,48] and fixating cameras on the object of interest [11] for reconstructing rigid objects in the scene. However, these are either limited to static scenes [69,26] or process each frame independently thereby failing to enforce temporal consistency [11,23].
Joint reconstruction and segmentation on monocular video was proposed in [36,3,12], achieving semantic segmentation of scenes limited to rigid objects in street scenes. Practical application of joint estimation requires these approaches to work on non-rigid objects such as humans with clothing. A multi-layer joint segmentation and reconstruction approach was proposed for multiple view video of sports and indoor scenes [23]. The algorithm used known background images of the scene without the dynamic foreground objects to obtain an initial segmentation. Visual-hull based reconstruction was performed with a known prior foreground/background using a background image plate with fixed and calibrated cameras. This visual hull was used as a prior and was optimized by a combination of photo-consistency, silhouette, color and sparse feature information in an energy minimization framework to improve the segmentation and reconstruction quality. Although structurally similar to our approach, it requires the scene to be captured by fixed calibrated cameras and an a priori known fixed background plate to estimate the initial visual hull by background subtraction. The proposed approach overcomes these limitations, allowing moving cameras and unknown scene backgrounds.
An approach based on optical flow and graph cuts was shown to work well for non-rigid objects in indoor settings but requires known background segmentation to obtain silhouettes and is computationally expensive [24]. Practical application of temporally coherent joint estimation requires approaches that work on non-rigid objects for general scenes in uncontrolled environments. A quantitative evaluation of techniques for multi-view reconstruction was presented in [53]. These methods are able to produce high quality results, but rely on good initializations and strong prior assumptions with known and controlled (static) scene backgrounds.
The proposed method exploits the advantages of joint segmentation and reconstruction and addresses the limitations of existing methods by introducing a novel approach to reconstruct general dynamic scenes automatically from wide-baseline cameras with no prior. To overcome the limitations of existing methods, the proposed approach automatically initialises the foreground object segmentation from wide-baseline correspondence without prior knowledge of the scene. This is followed by a joint spatio-temporal reconstruction and segmentation of general scenes. Temporal correspondence is exploited to overcome visual ambiguities giving improved reconstruction together with temporal coherence of surface correspondence to obtain 4D scene models.
Temporal coherent 4D Reconstruction
Temporally coherent 4D reconstruction refers to aligning the 3D surfaces of non-rigid objects over time for a dynamic sequence. This is achieved by estimating point-to-point correspondences for the 3D surfaces to obtain a 4D temporally coherent reconstruction. 4D models allow an efficient representation for practical applications in film, broadcast and immersive content production such as virtual, augmented and mixed reality. The majority of existing approaches for reconstruction of dynamic scenes from multi-view videos process each time frame independently due to the difficulty of simultaneously estimating temporal correspondence for non-rigid objects. Independent per-frame reconstruction can result in errors due to the inherent visual ambiguity caused by occlusion and similar object appearance for general scenes. Recent research has shown that exploiting temporal information can improve reconstruction accuracy as well as achieving temporal coherence [43].
3D scene flow estimates frame-to-frame correspondence whereas 4D temporal coherence estimates correspondence across the complete sequence to obtain a single surface model. Methods to estimate 3D scene flow have been reported in the literature [41] for autonomous vehicles; however this approach is limited to narrow-baseline cameras. Other scene flow approaches are dependent on 2D optical flow [66,6] and require an accurate estimate for most of the pixels, which fails in the case of large motion. Moreover, 3D scene flow methods align two frames independently and do not produce temporally coherent 4D models.
Research investigating spatio-temporal reconstruction across multiple frames was proposed by [20,37,24], exploiting the temporal information from previous frames using optical flow. An approach for recovering space-time consistent depth maps from multiple video sequences captured by stationary, synchronized and calibrated cameras for depth-based free-viewpoint video rendering was proposed by [39]. However these methods require accurate initialisation, fixed and calibrated cameras, and are limited to simple scenes. Other approaches to temporally coherent reconstruction [4] either require a large number of closely spaced cameras or bi-layer segmentation [72,30] as a constraint for reconstruction. Recent approaches for spatio-temporal reconstruction of multi-view data work on indoor studio data [47].
The framework proposed in this paper addresses limitations of existing approaches and gives 4D temporally coherent reconstruction for general dynamic indoor or outdoor scenes with large non-rigid motions, repetitive texture, uncontrolled illumination, and large capture volume. The scenes are captured with sparse static/moving cameras. The proposed approach gives 4D models of complete scenes with both static and dynamic objects for real-world applications (FVV and VR) with no prior knowledge of scene structure.
Multi-view Video Segmentation
In the field of image segmentation, approaches have been proposed to provide temporally consistent monocular video segmentation [21,49,45,71]. Hierarchical segmentation based on graphs was proposed in [21], and directed acyclic graphs were used to propose an object followed by segmentation [71]. Optical flow is used to identify and consistently segment objects [45,49]. Recently a number of approaches have been proposed for multi-view foreground object segmentation by exploiting appearance similarity spatially across views [16,35,38,70]. An approach for space-time multi-view segmentation was proposed by [17]. However, multi-view approaches assume a static background and different colour distributions for the foreground and background, which limits applicability for general scenes and non-rigid objects.
To address this issue we introduce a novel method for spatio-temporal multi-view segmentation of dynamic scenes using shape constraints. Single image segmentation techniques using shape constraints provide good results for complex scene segmentation [25] (convex and concave shapes), but require manual interaction. The proposed approach performs automatic multi-view video segmentation by initializing the foreground object model using spatio-temporal information from wide-baseline feature correspondence, followed by a multi-layer optimization framework. Geodesic star convexity, previously used in single view segmentation [25], is applied to constrain the segmentation in each view. Our multi-view formulation naturally enforces coherent segmentation between views and also resolves ambiguities such as the similarity of background and foreground in isolated views.
Summary and Motivation
Image-based temporally coherent 4D dynamic scene reconstruction without a prior model or constraints on the scene structure is a key problem in computer vision. Existing dense reconstruction algorithms need strong initial priors and constraints for the solution to converge, such as background, structure, and segmentation, which limits their application to automatic reconstruction of general scenes. Current approaches are also commonly limited to independent per-frame reconstruction and do not exploit temporal information or produce a coherent model with known correspondence.
The approach proposed in this paper aims to overcome the limitations of existing approaches to enable robust temporally coherent wide-baseline multiple view reconstruction of general dynamic scenes without prior assumptions on scene appearance, structure or segmentation of the moving objects. Static and dynamic objects in the scene are identified for simultaneous segmentation and reconstruction using geometry and appearance cues in a sparse-to-dense optimization framework. Temporal coherence is introduced to improve the quality of the reconstruction and geodesic star convexity is used to improve the quality of segmentation. The static and dynamic elements are fused automatically in both the temporal and spatial domain to obtain the final 4D scene reconstruction.
This paper presents a unified framework, novel in combining multiple view joint reconstruction and segmentation with temporal coherence, to improve per-frame reconstruction performance; it integrates the initial work presented in [43,42] into a single framework. In particular the approach gives a 4D surface model with full correspondence over time. A comprehensive experimental evaluation with comparison to the state-of-the-art in segmentation, reconstruction and 4D modelling is also presented, extending previous work. Application of the resulting 4D models to free-viewpoint video rendering and content production for immersive virtual reality experiences is also presented.
Methodology
This work is motivated by the limitations of existing multiple view reconstruction methods which either work independently at each frame resulting in errors due to visual ambiguity [19,23], or require restrictive assumptions on scene complexity and structure and often assume prior camera calibration and foreground segmentation [60,24]. We address these issues by initializing the joint reconstruction and segmentation algorithm automatically, introducing temporal coherence in the reconstruction and geodesic star convexity in segmentation to reduce ambiguity and ensure consistent non-rigid structure initialization at successive frames. The proposed approach is demonstrated to achieve improved reconstruction and segmentation performance over state-ofthe-art approaches and produce temporally coherent 4D models of complex dynamic scenes.
Overview
An overview of the proposed framework for temporally coherent multi-view reconstruction is presented in Figure 2 and consists of the following stages:
Multi-view video: The scenes are captured using multiple video cameras (static/moving) separated by wide baselines (> 15°). The cameras can be synchronized during the capture using a time-code generator or later using the audio information. Camera extrinsic calibration and scene structure are assumed to be unknown.
Sparse reconstruction: The intrinsics are assumed to be known. Segmentation-based feature detection (SFD) [44] is used to obtain a relatively large number of sparse features suitable for wide-baseline matching which are distributed throughout the scene, including on dynamic objects such as people. SFD features are matched between views using a SIFT descriptor, giving camera extrinsics and a sparse 3D point-cloud for each time instant of the entire sequence [27].
Initial scene segmentation and reconstruction - Section 3.2: Automatic initialisation is performed without prior knowledge of the scene structure or appearance to obtain an initial approximation for each object. The sparse point cloud is clustered in 3D [51] with each cluster representing a unique foreground object. Object segmentation increases efficiency and improves the robustness of the 4D models. This reconstruction is refined using the framework explained in Section 3.4 to obtain the segmentation and dense reconstruction of each object.
Sparse-to-dense temporal reconstruction with temporal coherence - Section 3.3: Temporal coherence is introduced in the framework to initialize the coarse reconstruction and obtain frame-to-frame dense correspondences for each dynamic object. Dynamic object regions are detected at each time instant by sparse temporal correspondence of SFD features at successive frames. Sparse temporal feature correspondence allows propagation of the dense reconstruction for each dynamic object to obtain an initial approximation.
Joint object-based sparse-to-dense temporally coherent refinement of shape and segmentation - Section 3.4: The initial estimate is refined for each object per view through joint optimisation of shape and segmentation using a robust cost function combining matching, color, contrast and smoothness information for wide-baseline matching with a geodesic star convexity constraint. A single 3D model for each dynamic object is obtained by fusion of the view-dependent depth maps using Poisson surface reconstruction [31]. Surface orientation is estimated based on neighbouring pixels.
Applications - Section 5: The 4D representation from the proposed joint segmentation and reconstruction framework has a number of applications in media production, including free-viewpoint video (FVV) rendering and virtual reality (VR).
The process above is repeated for the entire sequence for all objects in the first frame and for dynamic objects at each time-instant. The proposed approach enables automatic reconstruction of all objects in the scene as a 4D mesh sequence. Subsequent sections present the novel contributions of this work in initialisation and refinement to obtain a dense temporally coherent reconstruction. The approach is demonstrated to outperform previous approaches to dynamic scene reconstruction and does not require prior knowledge of the scene.
Initial Scene Segmentation and Reconstruction
For general dynamic scene reconstruction, we need to reconstruct and segment the objects in the scene. This requires an initial coarse approximation for initialisation of a subsequent refinement step to optimise the segmentation and reconstruction with respect to each camera view. We introduce an approach based on sparse point cloud clustering; an overview is shown in Figure 3. Initialisation gives a complete coarse segmentation and reconstruction of each object in the first frame of the sequence for subsequent refinement. The dense reconstructions of the foreground objects and the background are combined to obtain a full scene reconstruction at the first time instant. A rough geometric proxy of the background is also created. For consecutive time instants, dynamic objects and newly appeared objects are identified and only these objects are reconstructed and segmented. The reconstruction of static objects is retained, which reduces computational complexity. The optic flow and cluster information for each dynamic object ensure that we retain the same labels for the entire sequence.
Sparse Point-cloud Clustering
The sparse representation of the scene is processed to remove outliers using point neighbourhood statistics [51]. We segment the objects in the sparse scene reconstruction; this allows only moving objects to be reconstructed at each frame for efficiency, and also allows object shape similarity to be propagated across frames to increase the robustness of the reconstruction.
We use a data clustering approach based on a 3D grid subdivision of the space using an octree data structure in Euclidean space to segment objects at each frame. In a more general sense, nearest-neighbour information is used for clustering, which is essentially similar to a flood-fill algorithm. We choose this data clustering because of its computational efficiency and robustness. The approach allows segmentation of objects in the scene and is demonstrated to work well for cluttered and general outdoor scenes, as shown in Section 4.
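A minimal flood-fill clustering with a SciPy KD-tree is sketched below to convey the same nearest-neighbour grouping idea; the paper's implementation uses octree-based clustering [51], and the radius and minimum cluster size here are illustrative assumptions:

```python
import numpy as np
from scipy.spatial import cKDTree

def euclidean_clusters(points, radius=0.2, min_size=30):
    """Flood-fill clustering of a sparse 3D point cloud: points closer than
    `radius` are grouped into the same object; clusters with too few points are
    marked as background/outliers (analogous to the octree clustering in the paper)."""
    tree = cKDTree(points)
    labels = np.full(len(points), -1, dtype=int)   # -1 = unvisited
    current = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        stack, members = [seed], [seed]
        labels[seed] = current
        while stack:
            i = stack.pop()
            for j in tree.query_ball_point(points[i], r=radius):
                if labels[j] == -1:
                    labels[j] = current
                    stack.append(j)
                    members.append(j)
        if len(members) < min_size:                # too few features: background/outlier
            labels[np.array(members)] = -2
        else:
            current += 1
    return labels

# Toy usage: two well-separated blobs -> two clusters.
rng = np.random.default_rng(1)
cloud = np.vstack([rng.normal(0, 0.05, (200, 3)), rng.normal(2, 0.05, (200, 3))])
print(np.unique(euclidean_clusters(cloud, radius=0.2)))   # [0 1]
```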
Objects with insufficient detected features are reconstructed as part of the scene background. Appearing, disappearing and reappearing objects are handled by sparse dynamic feature tracking, explained in Section 3.3. Clustering results are shown in Figure 3. This is followed by a sparse-to-dense coarse object based approach to segment and reconstruct general dynamic scenes.
Coarse Object Reconstruction
The process to obtain the coarse reconstruction for the first frame of the sequence is shown in Figure 4. The sparse representation of each element is back-projected onto the rectified image pair for each view. Delaunay triangulation [18] is performed on the set of back-projected points for each cluster in one image and is propagated to the second image using the sparse matched features. Triangles with an edge length greater than the median edge length of all triangles are removed. For each remaining triangle pair a direct linear transform is used to estimate the affine homography. The displacement at each pixel within the triangle pair is estimated by interpolation to get an initial dense disparity map for each cluster in the 2D image pair, labelled as R_I and depicted in red in Figure 4. The initial coarse reconstruction for the observed objects in the scene is used to define the depth hypotheses at each pixel for the optimization.
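A sketch of this per-triangle propagation is given below, assuming SciPy's Delaunay triangulation and OpenCV's affine estimation; the long-edge filtering and the later R_O extrapolation are omitted, and the matched points are synthetic:

```python
import numpy as np
import cv2
from scipy.spatial import Delaunay

def coarse_disparity(pts_ref, pts_aux, image_shape):
    """Initial dense disparity inside the region R_I: Delaunay-triangulate the
    sparse matches in the reference view, map each triangle to the auxiliary
    view with an affine transform, and interpolate the displacement per pixel."""
    h, w = image_shape
    disp = np.full((h, w, 2), np.nan, np.float32)      # NaN = outside R_I
    tri = Delaunay(pts_ref)
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float64)
    simplex = tri.find_simplex(pix)                    # triangle id per pixel (-1 = none)
    for t, simplex_pts in enumerate(tri.simplices):
        src = pts_ref[simplex_pts].astype(np.float32)
        dst = pts_aux[simplex_pts].astype(np.float32)
        A = cv2.getAffineTransform(src, dst)           # 2x3 affine for this triangle
        inside = pix[simplex == t]
        if len(inside) == 0:
            continue
        mapped = inside @ A[:, :2].T + A[:, 2]
        disp.reshape(-1, 2)[simplex == t] = (mapped - inside).astype(np.float32)
    return disp

# Toy usage: matches related by a pure horizontal shift of 4 pixels.
pts_ref = np.array([[10, 10], [50, 12], [30, 50], [55, 55]], np.float32)
pts_aux = pts_ref + np.array([4, 0], np.float32)
d = coarse_disparity(pts_ref, pts_aux, (64, 64))
print(np.nanmean(d[..., 0]), np.nanmean(d[..., 1]))    # ~4.0, ~0.0
```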
The region R_I does not ensure complete coverage of the object, so we extrapolate this region in 2D by 5% of the average distance between the boundary points of R_I and the centroid of the object to obtain a region R_O (shown in yellow). To allow for errors in the initial approximate depth from sparse features, we add volume in front of and behind the projected surface by an error tolerance along the optical ray of the camera. This ensures that the object boundaries lie within the extrapolated initial coarse estimate; the depth at each pixel of the combined regions may not be accurate. The tolerance for extrapolation differs depending on whether a pixel belongs to R_I or R_O, as the propagated pixels of the extrapolated region (R_O) may have a higher level of error than the points from the sparse representation (R_I), requiring a comparatively higher tolerance. The threshold depends on the capture volume of the dataset and is set to 1% of the capture volume for R_O and half that value for R_I. This volume in 3D corresponds to our initial coarse reconstruction of each object and enables us to remove the dependency of existing approaches on a background plate or visual hull estimates. This process of cluster identification and initial coarse object reconstruction is performed for multiple objects in general environments. Initial object segmentation using point cloud clustering and coarse segmentation is insensitive to parameters; throughout this work the same parameters are used for all datasets. The result of this process is a coarse initial object segmentation and reconstruction for each object.
Sparse-to-dense temporal reconstruction with temporal coherence
Once the static scene reconstruction is obtained for the first frame, we perform temporally coherent reconstruction for dynamic objects at successive time instants, instead of whole-scene reconstruction, for computational efficiency and to avoid redundancy. The initial coarse reconstruction for each dynamic region is refined in the subsequent optimization step with respect to each camera view. Dynamic scene objects are identified from the temporal correspondence of sparse feature points. Sparse correspondence is used to propagate an initial model of the moving object for refinement. Figure 5 presents the sparse reconstruction and temporal correspondence. New objects are identified per frame from the clustered sparse reconstruction and are labelled as dynamic objects.
Sparse temporal dynamic feature tracking: Numerous approaches have been proposed to track moving objects in 2D using either features or optical flow. However these methods may fail in the case of occlusion, movement parallel to the view direction, large motions and moving cameras. To overcome these limitations we match the sparse 3D feature points obtained using SFD [44] from multiple wide-baseline views at each time instant. The use of sparse 3D features is robust to large non-rigid motion, occlusions and camera movement. SFD detects sparse features which are stable across wide-baseline views and consecutive time instants for a moving camera and dynamic scene. Sparse 3D feature matches between consecutive time instants are back-projected to each view. These features are matched temporally using a SIFT descriptor to identify the moving points. Robust matching is achieved by enforcing multiple view consistency for the temporal feature correspondence in each view, as illustrated in Figure 6. Each match must satisfy the constraint:
\|H_{t,v}(p) + u_{t,r}(p + H_{t,v}(p)) - u_{t,v}(p) - H_{t,r}(p + u_{t,v}(p))\| < \epsilon \qquad (1)
where p is the feature image point in view v at frame t, H_{t,v}(p) is the disparity at frame t between views v and r, u_{t,v}(p) is the temporal correspondence from frame t to t+1 for view v, and ε is the matching tolerance. The multi-view consistency check ensures that correspondences between any two views remain temporally consistent for successive frames. Matches in the 2D domain are sensitive to camera movement and occlusion, hence we map the set of refined matches into 3D to make the system robust to camera motion. The Frobenius norm is applied to the 3D point gradients in all directions [71] to obtain the 'net' motion at each sparse point. The 'net' motions between pairs of 3D points for consecutive time instants are ranked, and the top and bottom 5 percentile values are removed. Median filtering is then applied to identify the dynamic features. Figure 7 shows an example with moving cameras for the Juggler dataset [5].
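A direct sketch of the consistency test of Equation 1 is shown below, assuming the disparities H and flows u are available as dense 2D displacement maps and that ε is a pixel tolerance; the nearest-neighbour lookup and the toy static-scene example are illustrative:

```python
import numpy as np

def consistent(p, H_tv, H_tr, u_tv, u_tr, eps=1.0):
    """Multi-view temporal consistency of Eq. 1 for a feature at pixel p in view v.
    H_tv, H_tr: view-v -> view-r disparity maps at frames t and t+1 (2D displacement).
    u_tv, u_tr: frame-t -> frame-t+1 flow in views v and r (2D displacement).
    All maps are (H, W, 2) pixel displacements; eps is the matching tolerance."""
    y, x = p
    def at(field, pt):                       # nearest-neighbour lookup, clipped to the image
        yy = int(np.clip(round(pt[0]), 0, field.shape[0] - 1))
        xx = int(np.clip(round(pt[1]), 0, field.shape[1] - 1))
        return field[yy, xx]
    h_v = at(H_tv, (y, x))
    flow_v = at(u_tv, (y, x))
    residual = (h_v
                + at(u_tr, (y + h_v[0], x + h_v[1]))
                - flow_v
                - at(H_tr, (y + flow_v[0], x + flow_v[1])))
    return np.linalg.norm(residual) < eps

# Toy usage: a static scene (zero flow) with a constant 5-pixel horizontal disparity
# is trivially consistent.
H, W = 32, 32
H_tv = H_tr = np.tile(np.array([0.0, 5.0]), (H, W, 1))
u_tv = u_tr = np.zeros((H, W, 2))
print(consistent((10, 10), H_tv, H_tr, u_tv, u_tr))   # True
```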
Sparse-to-dense model reconstruction: Dynamic 3D feature points are used to initialize the segmentation and reconstruction of the initial model. This avoids the assumption of static backgrounds and prior scene segmentation commonly used to initialise multiple view reconstruction with a coarse visual-hull approximation [23]. Temporal coherence also provides a more accurate initialisation to overcome visual ambiguities at individual frames. Figure 8 illustrates the use of temporal coherence for reconstruction initialisation and refinement. Dynamic feature correspondence is used to identify the mesh for each dynamic object. This mesh is back-projected onto each view to obtain the region of interest. Lucas-Kanade optical flow [8] is performed on the projected mask for each view in the temporal domain using the dynamic feature correspondences over time as initialization. Dense multi-view wide-baseline correspondences from the previous frame are propagated to the current frame using the information from the flow vectors to obtain dense multi-view matches in the current frame. The matches are triangulated in 3D to obtain a refined 3D dense model of the dynamic object for the current frame. For dynamic scenes, a new object may enter the scene or a new part may appear as the object moves. To allow the introduction of new objects and object parts we also use information from the cluster of sparse points for each dynamic object. The cluster corresponding to the dynamic features is identified and static points are removed. This ensures that the set of new points contains not only the dynamic features but also the unprocessed points which represent new parts of the object. These points are added to the refined sparse model of the dynamic object. To handle new objects we detect new clusters at each time instant and consider them as dynamic regions. The sparse-to-dense initial coarse reconstruction improves the quality of the segmentation and reconstruction after the refinement. Examples of the improvement in segmentation and reconstruction for the Odzemok [1] and Juggler [5] datasets are shown in Figure 9. As observed, the limbs of the people are retained by using information from the previous frames in both cases.
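The propagation step can be sketched with OpenCV's pyramidal Lucas-Kanade tracker as below; the synthetic frames, seed points and tracker parameters are illustrative assumptions, and in the pipeline the seeds are the projected dense correspondences inside the dynamic-object mask:

```python
import numpy as np
import cv2

def propagate_points(prev_gray, curr_gray, prev_pts):
    """Track 2D points from the previous to the current frame with pyramidal
    Lucas-Kanade optical flow; only successfully tracked points are returned."""
    pts = prev_pts.astype(np.float32).reshape(-1, 1, 2)
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, pts, None,
        winSize=(21, 21), maxLevel=3)
    ok = status.ravel() == 1
    return prev_pts[ok], next_pts.reshape(-1, 2)[ok]

# Toy usage: a textured frame translated by (3, 2) pixels between time instants.
rng = np.random.default_rng(2)
frame0 = (rng.random((120, 160)) * 255).astype(np.uint8)
M = np.float32([[1, 0, 3], [0, 1, 2]])
frame1 = cv2.warpAffine(frame0, M, (160, 120))
seeds = np.array([[60.0, 50.0], [80.0, 70.0], [100.0, 40.0]])   # (x, y) seed points
src, dst = propagate_points(frame0, frame1, seeds)
print(np.round(dst - src, 1))   # roughly [[3. 2.] ...]
```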
Joint object-based sparse-to-dense temporally coherent refinement of shape and segmentation
The initial reconstruction and segmentation from dense temporal feature correspondence is refined using a joint optimization framework. A novel shape constraint is introduced based on geodesic star convexity, which has previously been shown to give improved performance in interactive image segmentation for structures with fine details (for example a person's fingers or hair) [25]. Shape is a powerful cue for object recognition and segmentation. Shape models represented as distance transforms from a template have been used for category-specific segmentation [33]. Some works have introduced generic connectivity constraints for segmentation, showing that obtaining globally optimal solutions under the connectivity constraint is NP-hard [64]. Veksler et al. used a shape constraint in a segmentation framework by enforcing a star convexity prior on the segmentation, and globally optimal solutions are achieved subject to this constraint [63]. The star convexity constraint ensures connectivity to seed points, and is a stronger assumption than plain connectivity. An example of a star-convex object is shown in Figure 10, along with a failure case for a non-rigid articulated object. To handle more complex objects, the idea of geodesic forests with multiple star centres was introduced to obtain a globally optimal solution for interactive 2D object segmentation [25]. The main focus was to introduce shape constraints in interactive segmentation by means of a geodesic star convexity prior. The notion of connectivity was extended from Euclidean to geodesic so that paths can bend and adapt to image data, as opposed to straight Euclidean rays, thus extending visibility and reducing the number of star centers required.
The geodesic star-convexity is integrated as a constraint on the energy minimisation for joint multi-view reconstruction and segmentation [23]. In this work the shape constraint is automatically initialised for each view from the initial segmentation. The shape constraint is based on the geodesic distance with the foreground object initialisation (seeds) as star centres to which the object shape is restricted. The union formed by multiple object seeds forms a geodesic forest. This allows complex shapes to be segmented. To automatically initialize the segmentation we use the sparse temporal feature correspondences as star centers (seeds) to build a geodesic forest. The region outside the initial coarse reconstruction of all dynamic objects is initialized as the background seed for segmentation, as shown in Figure 12. The shape of the dynamic object is restricted by this geodesic distance constraint, which depends on the image gradient. Comparison with existing methods for multi-view segmentation demonstrates improvements in recovery of fine detail structure, as illustrated in Figure 12.
Fig. 10 (a) Representation of star convexity: the left object depicts an example of a star-convex object, with a star center marked; the object on the right with a plausible star center shows deviations from star-convexity in the fine details. (b) Multiple star semantics for joint refinement: single star center based segmentation is depicted on the left and multiple stars on the right.
Once a set of dense 3D points is obtained for each dynamic object, Poisson surface reconstruction is performed on this point set to obtain an initial coarse model of each dynamic region R, which is subsequently refined using the optimization framework (Section 3.4.1).
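The following sketch shows how such a coarse per-object mesh could be obtained with an off-the-shelf screened Poisson reconstruction; Open3D is assumed here as a stand-in for the authors' implementation, and the octree depth and density-trimming quantile are illustrative parameters.

```python
import numpy as np
import open3d as o3d

def coarse_mesh_from_points(points, k_normals=30, depth=8):
    """Fit a coarse surface to one object's 3D point set using screened
    Poisson reconstruction (illustrative of the initialisation step)."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(np.asarray(points, dtype=np.float64))
    # estimate surface orientation from neighbouring points, as in the text
    pcd.estimate_normals(
        search_param=o3d.geometry.KDTreeSearchParamKNN(knn=k_normals))
    pcd.orient_normals_consistent_tangent_plane(k_normals)
    mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=depth)
    # trim low-density vertices that extrapolate far from the input points
    densities = np.asarray(densities)
    keep = densities > np.quantile(densities, 0.05)
    mesh.remove_vertices_by_mask(~keep)
    return mesh
```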
Optimization on initial coarse object reconstruction based on geodesic star convexity
The depth of the initial coarse reconstruction estimate is refined per view for each dynamic object at a per pixel level. View-dependent optimisation of depth is performed with respect to each camera which is robust to errors in camera calibration and initialisation. Calibration inaccuracies produce inconsistencies limiting the applicability of global reconstruction techniques which simultaneously consider all views; view-dependent techniques are more tolerant to such inaccuracies because they only use a subset of the views for reconstruction of depth from each camera view.
Our goal is to assign an accurate depth value from a set of depth values $\mathcal{D} = \{d_1, \ldots, d_{|\mathcal{D}|-1}, \mathcal{U}\}$ and a layer label from a set of label values $\mathcal{L} = \{l_1, \ldots, l_{|\mathcal{L}|}\}$ to each pixel $p$ in the region $R$ of each dynamic object. Each $d_i$ is obtained by sampling the optical ray from the camera, and $\mathcal{U}$ is an unknown depth value used to handle occlusions. This is achieved by optimisation of a joint cost function [23] for label (segmentation) and depth (reconstruction):
E(l, d) = \lambda_{data} E_{data}(d) + \lambda_{contrast} E_{contrast}(l) + \lambda_{smooth} E_{smooth}(l, d) + \lambda_{color} E_{color}(l) \quad (2)
where $d$ is the depth at each pixel, $l$ is the layer label for multiple objects, and the cost function terms are defined in Section 3.4.2. The equation consists of four terms: the data term scores photo-consistency, the smoothness term avoids sudden peaks in depth and maintains consistency, and the color and contrast terms identify the object boundaries. Data and smoothness terms are commonly used to solve reconstruction problems [7], and the color and contrast terms are used for segmentation [34]. This is solved subject to a geodesic star-convexity constraint on the labels $l$. A label $l$ is star convex with respect to a center $c$ if every point $p \in l$ is visible to the star center $c$ via $l$ in the image $x$, which can be expressed as an energy cost:
E^{\star}(l \mid x, c) = \sum_{p \in R} \; \sum_{q \in \Gamma_{c,p}} E_{p,q}(l_p, l_q) \quad (3)

\forall q \in \Gamma_{c,p}: \quad E_{p,q}(l_p, l_q) = \begin{cases} \infty & \text{if } l_p \neq l_q \\ 0 & \text{otherwise} \end{cases} \quad (4)
where $\forall p \in R: p \in l \Leftrightarrow l_p = 1$, and $\Gamma_{c,p}$ is the geodesic path joining $p$ to the star center $c$, given by:
\Gamma_{c,p} = \arg\min_{\Gamma \in \mathcal{P}_{c,p}} L(\Gamma) \quad (5)
where $\mathcal{P}_{c,p}$ denotes the set of all discrete paths between $c$ and $p$, and $L(\Gamma)$ is the length of the discrete geodesic path as defined in [25]. In the case of image segmentation the gradients in the underlying image provide the information to compute the discrete paths between each pixel and the star centers, and $L(\Gamma)$ is defined as:
L(\Gamma) = \sum_{i=1}^{N_D - 1} \sqrt{(1 - \delta_g)\, j(\Gamma_i, \Gamma_{i+1})^2 + \delta_g\, \|\nabla I(\Gamma_i)\|^2} \quad (6)
where $\Gamma$ is an arbitrary parametrized discrete path with $N_D$ pixels given by $\Gamma_1, \Gamma_2, \cdots, \Gamma_{N_D}$, $j(\Gamma_i, \Gamma_{i+1})$ is the Euclidean distance between successive pixels, and the quantity $\|\nabla I(\Gamma_i)\|^2$ is a finite-difference approximation of the image gradient between the points $\Gamma_i$ and $\Gamma_{i+1}$. The parameter $\delta_g$ weights the Euclidean distance against the image-gradient (geodesic) term. Using the above definition, the geodesic distance is defined as in Equation 5.
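A hedged sketch of how the geodesic distance of Equation 6 can be approximated on a discrete pixel grid is shown below: Dijkstra's algorithm expands from the star centers using the per-step cost of Equation 6. The 8-connected neighbourhood and the value of delta_g are illustrative assumptions.

```python
import heapq
import numpy as np

def geodesic_distance_map(gray, centers, delta_g=0.7):
    """Approximate geodesic distance (cf. Equation 6) from a set of star
    centers to every pixel via Dijkstra on an 8-connected pixel grid.
    gray: float image in [0, 1]; centers: list of (row, col) seeds."""
    h, w = gray.shape
    gy, gx = np.gradient(gray)            # finite-difference image gradient
    grad2 = gx ** 2 + gy ** 2
    dist = np.full((h, w), np.inf)
    heap = []
    for (r, c) in centers:
        dist[r, c] = 0.0
        heapq.heappush(heap, (0.0, r, c))
    moves = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    while heap:
        d, r, c = heapq.heappop(heap)
        if d > dist[r, c]:
            continue
        for dr, dc in moves:
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                eucl2 = float(dr * dr + dc * dc)     # squared Euclidean step
                step = np.sqrt((1.0 - delta_g) * eucl2 + delta_g * grad2[nr, nc])
                nd = d + step
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    heapq.heappush(heap, (nd, nr, nc))
    return dist
```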
An extension of single star-convexity is to use multiple stars to define a more general class of shapes. Introducing multiple star centers reduces the path lengths and increases the visibility of small parts of objects, such as limbs, as shown in Figure 10. Hence Equation 3 is extended to multiple stars: a label $l$ is star convex with respect to a center $c_i$ if every point $p \in l$ is visible to a star center $c_i$ in the set $\mathcal{C} = \{c_1, \ldots, c_{N_T}\}$ via $l$ in the image $x$, where $N_T$ is the number of star centers [25]. This is expressed as an energy cost:
E^{\star}(l \mid x, \mathcal{C}) = \sum_{p \in R} \; \sum_{q \in \Gamma_{c,p}} E_{p,q}(l_p, l_q) \quad (7)
In our case all the correct temporal sparse feature correspondences are used as star centers, hence the segmentation will include all the points which are visible to these sparse features via geodesic distances in the region $R$, thereby enforcing the shape constraint. Since the star centers are selected automatically, the method is unsupervised. A comparison of the geodesic multi-star convexity constraint against no constraint and a Euclidean multi-star convexity constraint is shown in Figure 11. The figure demonstrates the usefulness of the proposed approach with an improvement in segmentation quality on non-rigid complex objects. The energy in Equation 2 is minimized as follows:
\min_{(l,d)\ \text{s.t.}\ l \in \mathcal{S}^{\star}(\mathcal{C})} E(l, d) \;\Leftrightarrow\; \min_{(l,d)} E(l, d) + E^{\star}(l \mid x, \mathcal{C}) \quad (8)
where $\mathcal{S}^{\star}(\mathcal{C})$ is the set of all shapes which lie within the geodesic distances with respect to the centers in $\mathcal{C}$. Optimization of Equation 8, subject to each pixel $p$ in the region $R$ being at a geodesic distance $\Gamma_{c,p}$ from the star centers in the set $\mathcal{C}$, is performed using the α-expansion algorithm for a pixel $p$ by iterating through the set of labels in $\mathcal{L} \times \mathcal{D}$ [10]. Graph-cut is used to obtain a local optimum [9]. The improvement in the results from using geodesic star convexity in the framework is shown in Figure 12, and from using temporal coherence in Figure 9. Figure 13 shows improvements using the geodesic shape constraint, temporal coherence, and the combined proposed approach for the Dance2 [2] dataset.
Fig. 12 Geodesic star convexity: a region R with star centers C connected with geodesic distance Γ_{c,p}. Segmentation results with and without geodesic star convexity based optimization are shown on the right for the Juggler dataset.
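To make the optimisation step concrete, the simplified sketch below solves a single binary foreground/background labelling with a graph cut, combining unary colour costs, pairwise contrast weights, and an effectively infinite penalty where the geodesic star-convexity constraint is violated. It assumes the third-party PyMaxflow library and is a reduced illustration, not the full α-expansion over the joint label-and-depth space used in the paper.

```python
import numpy as np
import maxflow  # PyMaxflow (assumed third-party dependency)

def binary_segmentation(unary_fg, unary_bg, contrast, star_violation):
    """Simplified binary foreground/background graph cut.
    unary_fg/unary_bg : HxW costs of assigning a pixel to fg/bg (e.g. Eq. 15).
    contrast          : HxW pairwise weights (e.g. derived from Eq. 12).
    star_violation    : HxW bool, True where the shape prior forbids foreground."""
    g = maxflow.Graph[float]()
    nodes = g.add_grid_nodes(unary_fg.shape)
    # pairwise contrast/smoothness edges on the pixel grid
    g.add_grid_edges(nodes, weights=contrast, symmetric=True)
    # hard constraint: a very large foreground cost enforces the shape prior
    fg_cost = np.where(star_violation, 1e9, unary_fg)
    # source->node capacity = cost of background, node->sink = cost of foreground
    g.add_grid_tedges(nodes, unary_bg, fg_cost)
    g.maxflow()
    return ~g.get_grid_segments(nodes)   # True where the pixel is labelled foreground
```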
Energy cost function for joint segmentation and reconstruction
For completeness, in this section we define each of the terms in Equation 2. These are based on the terms previously used for joint optimisation over depth for each pixel introduced in [42], with a modification of the color matching term to improve robustness and an extension to multiple labels.
Matching term: The data term for matching between views is specified as a measure of photo-consistency (Figure 14) as follows:
E_{data}(d) = \sum_{p \in \mathcal{P}} e_{data}(p, d_p), \qquad e_{data}(p, d_p) = \begin{cases} M(p, q) = \sum_{i \in \mathcal{O}_k} m(p, q) & \text{if } d_p \neq \mathcal{U} \\ M_{\mathcal{U}} & \text{if } d_p = \mathcal{U} \end{cases} \quad (9)
where $\mathcal{P}$ is the 4-connected neighbourhood of pixel $p$, $M_{\mathcal{U}}$ is the fixed cost of labelling a pixel unknown, and $q$ denotes the projection of the hypothesised point $P$ in an auxiliary camera, where $P$ is a 3D point along the optical ray passing through pixel $p$ located at a distance $d_p$ from the reference camera. $\mathcal{O}_k$ is the set of the $k$ most photo-consistent pairs. For textured scenes, Normalized Cross Correlation (NCC) over a squared window is a common choice [53]. The NCC values range from -1 to 1 and are mapped to non-negative values using the function $1 - NCC$.
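A minimal sketch of the NCC-based photo-consistency score is given below; the patch extraction, window size, and the choice of k are assumptions for illustration.

```python
import numpy as np

def ncc(patch_a, patch_b, eps=1e-8):
    """Normalised cross-correlation of two equal-sized grayscale patches."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    return float((a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + eps))

def photo_consistency_cost(ref_patch, aux_patches, k=3):
    """Data cost for one depth hypothesis at a pixel: sum of 1 - NCC over the
    k most photo-consistent auxiliary views (cf. Equation 9)."""
    costs = sorted(1.0 - ncc(ref_patch, q) for q in aux_patches)
    return float(sum(costs[:k]))
```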
A maximum likelihood measure [40] is used in this function for confidence value calculation between the center pixel p and the other pixels q and is based on the survey on confidence measures for stereo [28]. The measure is defined as:
m(p, q) = \frac{\exp\left(-\frac{c_{min}}{2\sigma_i^2}\right)}{\sum_{(p,q) \in \mathcal{N}} \exp\left(-\frac{1 - NCC(p,q)}{2\sigma_i^2}\right)} \quad (10)
where $\sigma_i^2$ is the noise variance for each auxiliary camera $i$; this parameter was fixed to 0.3. $\mathcal{N}$ denotes the set of interacting pixels in $\mathcal{P}$. $c_{min}$ is the minimum cost for a pixel, obtained by evaluating the function $1 - NCC(\cdot, \cdot)$ on a 15 × 15 window.
Contrast term: Segmentation boundaries in images tend to align with contours of high contrast, and it is desirable to represent this as a constraint in stereo matching. A consistent interpretation of segmentation-prior and contrast-likelihood is used from [34]. We use a modified version of this interpretation in our formulation to preserve edges by using bilateral filtering [61] instead of Gaussian filtering. The contrast term is as follows:
E_{contrast}(l) = \sum_{(p,q) \in \mathcal{N}} e_{contrast}(p, q, l_p, l_q) \quad (11)
e_{contrast}(p, q, l_p, l_q) = \begin{cases} 0 & \text{if } l_p = l_q \\ \frac{1}{1 + \epsilon}\left(\epsilon + \exp\left(-C(p, q)\right)\right) & \text{otherwise} \end{cases} \quad (12)

where $\|\cdot\|$ is the $L_2$ norm and $\epsilon = 1$. The simplest choice for $C(p, q)$ would be the squared Euclidean color distance between the intensities at pixels $p$ and $q$, as used in [23]. We propose a term for better segmentation:

C(p, q) = \frac{\|B(p) - B(q)\|^2}{2 \sigma_{pq}^2 d_{pq}^2}, \qquad \sigma_{pq} = \frac{\|B(p) - B(q)\|^2}{d_{pq}^2}

where $B(\cdot)$ represents the bilateral filter and $d_{pq}$ is the Euclidean distance between $p$ and $q$. This term helps to remove regions with low photo-consistency scores and weak edges, and thereby helps in estimating the object boundaries.
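The sketch below computes contrast weights for horizontally adjacent pixels from a bilaterally filtered image, in the spirit of Equations 11-12; the image-wide mean used for normalisation and the bilateral filter parameters are simplifying assumptions rather than the exact per-pair sigma of the text.

```python
import numpy as np
import cv2

def contrast_weights(img_bgr, eps=1.0, d=9, sigma_color=75, sigma_space=75):
    """Pairwise contrast weights for horizontally adjacent pixels using an
    edge-preserving bilateral filter (a sketch of the contrast term)."""
    B = cv2.bilateralFilter(img_bgr, d, sigma_color, sigma_space).astype(np.float64)
    diff2 = ((B[:, 1:] - B[:, :-1]) ** 2).sum(axis=2)   # ||B(p)-B(q)||^2 with d_pq = 1
    sigma2 = diff2.mean() + 1e-12                        # image-wide normalisation (assumption)
    C = diff2 / (2.0 * sigma2)
    return (eps + np.exp(-C)) / (1.0 + eps)              # cost applied when labels differ
```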
Smoothness term: This term is inspired by [23] and ensures that the depth labels vary smoothly within the object, reducing noise and peaks in the reconstructed surface. This is useful when the photo-consistency score is low and insufficient to assign a depth to a pixel (Figure 14). It is defined as:
E_{smooth}(l, d) = \sum_{(p,q) \in \mathcal{N}} e_{smooth}(l_p, d_p, l_q, d_q) \quad (13)

e_{smooth}(l_p, d_p, l_q, d_q) = \begin{cases} \min(|d_p - d_q|,\ d_{max}) & \text{if } l_p = l_q \text{ and } d_p, d_q \neq \mathcal{U} \\ 0 & \text{if } l_p = l_q \text{ and } d_p = d_q = \mathcal{U} \\ d_{max} & \text{otherwise} \end{cases} \quad (14)
$d_{max}$ is set to 50 times the size of the depth sampling step for all datasets.
Color term: This term is computed using the negative log-likelihood [9] of the color models learned from the foreground and background markers. The star centers obtained from the sparse 3D features are the foreground markers, and for the background markers we consider the region outside the projected initial coarse reconstruction in each view. The color models use GMMs with 5 components each for foreground/background, mixed with uniform color models [14] because the markers are sparse.
E_{color}(l) = -\sum_{p \in \mathcal{P}} \log P(I_p \mid l_p) \quad (15)
where $P(I_p \mid l_p = l_i)$ denotes the probability of pixel $p$ in the reference image belonging to layer $l_i$.
Fig. 15 Comparison of segmentation on benchmark static datasets using geodesic star-convexity.
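A sketch of the colour term is shown below using scikit-learn GMMs as a stand-in; the number of components follows the text (5), while the uniform-mixture weight is an illustrative assumption.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def color_term(image_rgb, fg_pixels, bg_pixels, n_components=5, uniform_weight=0.1):
    """Per-pixel negative log-likelihood under foreground/background colour
    GMMs (cf. Equation 15), mixed with a uniform colour model because the
    seed markers are sparse. fg_pixels/bg_pixels: (N, 3) RGB samples."""
    fg = GaussianMixture(n_components).fit(fg_pixels.reshape(-1, 3))
    bg = GaussianMixture(n_components).fit(bg_pixels.reshape(-1, 3))
    pixels = image_rgb.reshape(-1, 3).astype(np.float64)
    uniform = 1.0 / (256.0 ** 3)                      # uniform density over the RGB cube
    p_fg = (1 - uniform_weight) * np.exp(fg.score_samples(pixels)) + uniform_weight * uniform
    p_bg = (1 - uniform_weight) * np.exp(bg.score_samples(pixels)) + uniform_weight * uniform
    h, w = image_rgb.shape[:2]
    return -np.log(p_fg).reshape(h, w), -np.log(p_bg).reshape(h, w)
```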
Results and Performance Evaluation
The proposed system is tested on publicly available multi-view research datasets of indoor and outdoor scenes; details of the datasets are given in Table 1. The parameters used for all the datasets are defined in Table 2. More information is available on the project website.
Multi-view segmentation evaluation
Segmentation is evaluated against the state-of-the-art methods for multi-view segmentation, Kowdle [35] and Djelouah [16] for static scenes, and the joint segmentation and reconstruction methods Mustafa [42] (per frame) and Guillemaut [24] (using temporal information) for both static and dynamic scenes. For static multi-view data the segmentation is initialised as detailed in Section 3.1, followed by refinement using the constrained optimisation (Section 3.4.1). For dynamic scenes the full pipeline with temporal coherence is used, as detailed in Section 3. Ground-truth is obtained by manually labelling the foreground for the Office, Dance1 and Odzemok datasets; for the other datasets ground-truth is available online. For fair comparison we initialize all approaches with the same proposed initial coarse reconstruction.
To evaluate the segmentation we measure completeness as the ratio of intersection to union with the ground-truth [35]. Comparisons are shown in Table 3 and Figures 15 and 16 for the static benchmark datasets. Comparisons for dynamic scene segmentation are shown in Table 4 and Figures 17 and 18. Results for multi-view segmentation of static scenes are more accurate than Djelouah, Mustafa, and Guillemaut, and comparable to Kowdle, with improved segmentation of some detail such as the back of the chair.
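The completeness measure reduces to an intersection-over-union computation on binary masks, as in the short sketch below.

```python
import numpy as np

def completeness(segmentation, ground_truth):
    """Segmentation completeness: intersection over union with the ground truth."""
    seg = segmentation.astype(bool)
    gt = ground_truth.astype(bool)
    union = np.logical_or(seg, gt).sum()
    return np.logical_and(seg, gt).sum() / union if union else 1.0
```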
For dynamic scenes the geodesic star convexity based optimization together with temporal consistency gives improved segmentation of fine detail, such as the legs of the table in the Office dataset and the limbs of the person in the Juggler, Magician and Dance2 datasets in Figures 17 and 18. This overcomes limitations of previous multi-view per-frame segmentation.
Reconstruction evaluation
Reconstruction results obtained using the proposed method are compared against Mustafa [42], Guillemaut [24], and Furukawa [19] for dynamic sequences. Furukawa [19] is a per-frame multi-view wide-baseline stereo approach which ranks highly on the Middlebury benchmark [53] but does not refine the segmentation.
The depth maps obtained using the proposed approach are compared against Mustafa and Guillemaut in Figure 19. The depth maps obtained using the proposed approach are smoother, with lower reconstruction noise than the state-of-the-art methods. Figures 20 and 21 present qualitative and quantitative comparisons of our method with the state-of-the-art approaches.
Comparison of reconstructions demonstrates that the proposed method gives consistently more complete and accurate models. The colour maps highlight the quantitative differences in reconstruction. As far as we are aware no ground-truth data exist for dynamic scene reconstruction from real multi-view video. In Figure 21 we present a comparison with the reference mesh available with the Dance2 dataset reconstructed using a visual-hull approach. This comparison demonstrates improved reconstruction of fine detail with the proposed technique.
In contrast to all previous approaches the proposed method gives temporally coherent 4D model reconstructions with dense surface correspondence over time. The introduction of temporal coherence constrains the reconstruction in regions which are ambiguous in a particular frame, such as the right leg of the juggler in Figure 20, resulting in more complete shape. Figure 22 shows three complete scene reconstructions with 4D models of multiple objects. The Juggler and Magician sequences are reconstructed from moving handheld cameras.
Computational Complexity: Computation times for the proposed approach vs other methods are presented in Table 5. The proposed approach to reconstruct temporally coherent 4D models is comparable in computation time to per-frame multiple view reconstruction and gives a ∼50% reduction in computation cost compared to previous joint segmentation and reconstruction approaches using a known background. This efficiency is achieved through improved per-frame initialisation based on temporal propagation and the introduction of the geodesic star constraint in joint optimisation. Further results can be found in the supplementary material.
Temporal coherence: A frame-to-frame alignment is obtained using the proposed approach, as shown in Figure 23 for the Dance1 and Juggler datasets. The meshes of the dynamic object in Frame 1 and Frame 9 are color coded in both datasets and the color is propagated to the next frame using the dense temporal coherence information. The color in different parts of the object is retained in the next frame, as seen in the figure. The proposed approach obtains sequential temporal alignment which drifts with large movement of the object, hence successive frames are shown in the figure.
Limitations: As with previous dynamic scene reconstruction methods the proposed approach has a number of limitations: persistent ambiguities in appearance between objects will degrade the improvement achieved with temporal coherence; scenes with a large number of inter-occluding dynamic objects will degrade performance; the approach requires sufficient wide-baseline views to cover the scene.
Applications to immersive content production
The 4D meshes generated from the proposed approach can be used for applications in immersive content production such as FVV rendering and VR. This section demonstrates the results of these applications.
Free-viewpoint rendering
In FVV, the virtual viewpoint is controlled interactively by the user. The appearance of the reconstruction is sampled and interpolated directly from the captured camera images using cameras located close to the virtual viewpoint [57].
The proposed joint segmentation and reconstruction framework generates per-view silhouettes and a temporally coherent 4D reconstruction at each time instant of the input video sequence. This representation of the dynamic sequence is used for FVV rendering. To create FVV, a view-dependent surface texture is computed based on the user-selected virtual view. This virtual view is obtained by combining the information from camera views in close proximity to the virtual viewpoint [57]. FVV rendering gives the user the freedom to interactively choose a novel viewpoint in space to observe the dynamic scene and reproduces fine-scale temporal surface details, such as the movement of hair and clothing wrinkles, that may not be modelled geometrically. An example of a reconstructed scene and the camera configuration is shown in Figure 24.
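A minimal sketch of view-dependent blending for FVV is shown below: the k cameras whose viewing directions are closest to the virtual viewpoint are selected and their contributions weighted by angular proximity. The value of k and the weighting exponent are illustrative assumptions, not the scheme of [57].

```python
import numpy as np

def view_dependent_weights(virtual_dir, camera_dirs, k=3, power=8.0):
    """Blending weights for the k real cameras closest in viewing direction to
    the virtual viewpoint. Directions are unit 3D vectors from the surface
    towards each camera."""
    cams = np.asarray(camera_dirs, dtype=np.float64)
    v = np.asarray(virtual_dir, dtype=np.float64)
    v = v / np.linalg.norm(v)
    cos = cams @ v                              # angular similarity per camera
    nearest = np.argsort(-cos)[:k]              # the k most aligned cameras
    w = np.clip(cos[nearest], 0.0, 1.0) ** power
    w = w / (w.sum() + 1e-12)
    return nearest, w
```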
A qualitative evaluation of images synthesised using FVV is shown in Figures 25 and 26. These show reconstruction results rendered from novel viewpoints for the proposed method against Mustafa [43] and Guillemaut [23] on publicly available datasets. This is particularly important for wide-baseline camera configurations, where the technique can be used to synthesize intermediate viewpoints for which it may not be practical or economical to physically locate real cameras.
Virtual reality rendering
There is a growing demand for photo-realistic content in the creation of immersive VR experiences. The 4D temporally coherent reconstructions of dynamic scenes obtained using the proposed approach enable the creation of photo-realistic digital assets that can be incorporated into VR environments using game engines such as Unity and Unreal Engine, as shown in Figure 27 for a single frame of four datasets and for a series of frames of the Dance1 dataset.
In order to efficiently render the reconstructions in a game engine for applications in VR, a UV texture atlas is extracted using the 4D meshes from the proposed approach as a geometric proxy. The UV texture atlas at each frame is applied to the models at render time in Unity for viewing in a VR headset. A UV texture atlas is constructed by projectively texturing and blending multiple view frames onto a 2D unwrapped UV texture atlas, see Figure 28. This is performed once for each static object and at each time instance for dynamic objects, allowing efficient storage and real-time playback of static and dynamic textured reconstructions within a VR headset.
Conclusion
This paper introduced a novel technique to automatically segment and reconstruct dynamic scenes captured from multiple moving cameras in general dynamic uncontrolled environments without any prior on background appearance or structure. The proposed automatic initialization was used to identify and initialize the segmentation and reconstruction of multiple objects. A framework was presented for temporally coherent 4D model reconstruction of dynamic scenes from a set of wide-baseline moving cameras. The approach gives a complete model of all static and dynamic non-rigid objects in the scene. Temporal coherence for dynamic objects addresses limitations of previous per-frame reconstruction, giving improved reconstruction and segmentation together with dense temporal surface correspondence for dynamic objects. A sparse-to-dense approach is introduced to establish temporal correspondence for non-rigid objects using robust sparse feature matching to initialise dense optical flow, providing an initial segmentation and reconstruction. Joint refinement of object reconstruction and segmentation is then performed using a multiple view optimisation with a novel geodesic star convexity constraint that gives improved shape estimation and is computationally efficient. Comparison against state-of-the-art techniques for multiple view segmentation and reconstruction demonstrates significant improvement in performance for complex scenes. The approach enables reconstruction of 4D models for complex scenes which has not been demonstrated previously. | 8,667
1907.08195 | 2963385316 | Existing techniques for dynamic scene reconstruction from multiple wide-baseline cameras primarily focus on reconstruction in controlled environments, with fixed calibrated cameras and strong prior constraints. This paper introduces a general approach to obtain a 4D representation of complex dynamic scenes from multi-view wide-baseline static or moving cameras without prior knowledge of the scene structure, appearance, or illumination. Contributions of the work are: An automatic method for initial coarse reconstruction to initialize joint estimation; Sparse-to-dense temporal correspondence integrated with joint multi-view segmentation and reconstruction to introduce temporal coherence; and a general robust approach for joint segmentation refinement and dense reconstruction of dynamic scenes by introducing shape constraint. Comparison with state-of-the-art approaches on a variety of complex indoor and outdoor scenes, demonstrates improved accuracy in both multi-view segmentation and dense reconstruction. This paper demonstrates unsupervised reconstruction of complete temporally coherent 4D scene models with improved non-rigid object segmentation and shape reconstruction and its application to free-viewpoint rendering and virtual reality. | An approach based on optical flow and graph cuts was shown to work well for non-rigid objects in indoor settings but requires known background segmentation to obtain silhouettes and is computationally expensive @cite_43 . Practical application of temporally coherent joint estimation requires approaches that work on non-rigid objects for general scenes in uncontrolled environments. A quantitative evaluation of techniques for multi-view reconstruction was presented in @cite_62 . These methods are able to produce high quality results, but rely on good initializations and strong prior assumptions with known and controlled (static) scene backgrounds. | {
"abstract": [
"Video-based segmentation and reconstruction techniques are predominantly extensions of techniques developed for the image domain treating each frame independently. These approaches ignore the temporal information contained in input videos which can lead to incoherent results. We propose a framework for joint segmentation and reconstruction which explicitly enforces temporal consistency by formulating the problem as an energy minimisation generalised to groups of frames. The main idea is to use optical flow in combination with a confidence measure to impose robust temporal smoothness constraints. Optimisation is performed using recent advances in the field of graph-cuts combined with practical considerations to reduce run-time and memory consumption. Experimental results with real sequences containing rapid motion demonstrate that the method is able to improve spatio-temporal coherence both in terms of segmentation and reconstruction without introducing any degradation in regions where optical flow fails due to fast motion.",
"This paper presents a quantitative comparison of several multi-view stereo reconstruction algorithms. Until now, the lack of suitable calibrated multi-view image datasets with known ground truth (3D shape models) has prevented such direct comparisons. In this paper, we first survey multi-view stereo algorithms and compare them qualitatively using a taxonomy that differentiates their key properties. We then describe our process for acquiring and calibrating multiview image datasets with high-accuracy ground truth and introduce our evaluation methodology. Finally, we present the results of our quantitative comparison of state-of-the-art multi-view stereo reconstruction algorithms on six benchmark datasets. The datasets, evaluation details, and instructions for submitting new models are available online at http: vision.middlebury.edu mview."
],
"cite_N": [
"@cite_43",
"@cite_62"
],
"mid": [
"2084250169",
"2160014001"
]
} | Temporally coherent general dynamic scene reconstruction | Fig. 1 Temporally consistent scene reconstruction for Odzemok dataset, color-coded to show the scene object segmentation obtained.
effects in film and broadcast production and for content production in virtual reality. The ultimate goal of modelling dynamic scenes from multiple cameras is automatic understanding of real-world scenes from distributed camera networks, for applications in robotics and other autonomous systems. Existing methods have applied multiple view dynamic scene reconstruction techniques in controlled environments with a known background or chroma-key studio [23,20,56,60]. Other multiple view stereo techniques require a relatively dense static camera network, resulting in a large number of cameras [19]. Extensions to more general outdoor scenes [5,32,60] use prior reconstruction of the static geometry from images of the empty environment. However these methods either require accurate segmentation of dynamic foreground objects, or prior knowledge of the scene structure and background, or are limited to static cameras and controlled environments. Scenes are reconstructed semi-automatically, requiring manual intervention for segmentation/rotoscoping, and the results are temporally incoherent per-frame mesh geometries. Temporally coherent geometry with known surface correspondence across the sequence is essential for real-world applications and compact representation.
Our paper addresses the limitations of existing approaches by introducing a methodology for unsupervised temporally coherent dynamic scene reconstruction from multiple wide-baseline static or moving camera views, without prior knowledge of the scene structure or background appearance. This temporally coherent dynamic scene reconstruction is demonstrated in applications for immersive content production such as free-viewpoint video (FVV) and virtual reality (VR). This work combines two previously published papers, on general dynamic reconstruction [42] and temporally coherent reconstruction [43], into a single framework and demonstrates the application of this novel unsupervised joint segmentation and reconstruction to immersive content production (FVV and VR, Section 5).
The input is a sparse set of synchronised videos from multiple moving cameras of an unknown dynamic scene, without prior scene segmentation or camera calibration. Our first contribution is automatic initialisation of camera calibration and sparse scene reconstruction from sparse feature correspondence, using sparse feature detection and matching between pairs of frames. An initial coarse reconstruction and segmentation of all scene objects is obtained from sparse features matched across multiple views. This eliminates the requirement for prior knowledge of the background scene appearance or structure. Our second contribution is a sparse-to-dense reconstruction and segmentation approach that introduces temporal coherence at every frame. We exploit temporal coherence of the scene to overcome visual ambiguities inherent in single-frame reconstruction and multiple view segmentation methods for general scenes. Temporal coherence refers to the correspondence between the 3D surfaces of all objects observed over time. Our third contribution is spatio-temporal alignment to estimate dense surface correspondence for 4D reconstruction. A geodesic star convexity shape constraint is introduced for the shape segmentation to improve the quality of segmentation for non-rigid objects with complex appearance. The proposed approach overcomes the limitations of existing methods, allowing unsupervised temporally coherent 4D reconstruction of complete models for general dynamic scenes.
The scene is automatically decomposed into a set of spatio-temporally coherent objects, as shown in Figure 1, where the resulting 4D scene reconstruction has temporally coherent labels and surface correspondence for each object. This can be used for free-viewpoint video rendering and imported into a game engine for VR experience production. The contributions explained above can be summarized as follows:
- Unsupervised temporally coherent dense reconstruction and segmentation of general complex dynamic scenes from multiple wide-baseline views.
- Automatic initialization of dynamic object segmentation and reconstruction from sparse features.
- A framework for space-time sparse-to-dense segmentation, reconstruction and temporal correspondence.
- Robust spatio-temporal refinement of dense reconstruction and segmentation integrating error-tolerant photo-consistency and edge information using geodesic star convexity.
- Robust and computationally efficient reconstruction of dynamic scenes by exploiting temporal coherence.
- Real-world applications of 4D reconstruction to free-viewpoint video rendering and virtual reality.
This paper is structured as follows: first, related work is reviewed; the methodology for general dynamic scene reconstruction is then introduced; finally, a thorough qualitative and quantitative evaluation and comparison to the state-of-the-art on challenging datasets is presented.
Related Work
Temporally coherent reconstruction is a challenging task for general dynamic scenes due to a number of factors such as motion blur, articulated, non-rigid and large motion of multiple people, resolution differences between camera views, occlusions, wide-baselines, errors in calibration and cluttered dynamic backgrounds. Segmentation of dynamic objects from such scenes is difficult because of foreground and background complexity and the likelihood of overlapping background and foreground color distributions. Reconstruction is also challenging due to limited visual cues and relatively large errors affecting both calibration and extraction of a globally consistent solution. This section reviews previous work on dynamic scene reconstruction and segmentation.
Dynamic Scene Reconstruction
Dense dynamic shape reconstruction is a fundamental problem and heavily studied area in the field of computer vision. Recovering accurate 3D models of a dynamically evolving, non-rigid scene observed by multiple synchronised cameras is a challenging task. Research on multiple view dense dynamic reconstruction has primarily focused on indoor scenes with controlled illumination and static backgrounds, extending methods for multiple view reconstruction of static scenes [53] to sequences [62]. Deep learning based approaches have been introduced to estimate the shape of dynamic objects from minimal camera views in constrained environments [29,68] and for rigid objects [58]. In the last decade, focus has shifted to more challenging outdoor scenes captured with both static and moving cameras. Reconstruction of non-rigid dynamic objects in uncontrolled natural environments is challenging due to the scene complexity, illumination changes, shadows, occlusion and dynamic backgrounds with clutter such as trees or people. Methods have been proposed for multi-view reconstruction [65,39,37] requiring a large number of closely spaced cameras for surface estimation of dynamic shape. Practical applications require relatively sparse moving cameras to acquire coverage over large areas such as outdoor scenes. A number of approaches for multi-view reconstruction of outdoor scenes require initial silhouette segmentation [67,32,22,23] to allow visual-hull reconstruction. Most of these approaches to general dynamic scene reconstruction fail in the case of complex (cluttered) scenes captured with moving cameras.
A recent work proposed reconstruction of dynamic fluids [50] for static cameras. Another work used RGB-D cameras to obtain reconstruction of non-rigid surfaces [55]. Pioneering research in general dynamic scene reconstruction from multiple handheld wide-baseline cameras [5,60] exploited prior reconstruction of the background scene to allow dynamic foreground segmentation and reconstruction. Recent work [46] estimates the shape of dynamic objects from handheld cameras exploiting GANs. However these approaches either work for static/indoor scenes or exploit strong prior assumptions such as silhouette information, known background or scene structure. In addition, all these approaches give per-frame reconstructions leading to temporally incoherent geometries. Our aim is to perform temporally coherent dense reconstruction of unknown dynamic non-rigid scenes automatically, without strong priors or limitations on scene structure.
Joint Segmentation and Reconstruction
Many of the existing multi-view reconstruction approaches rely on a two-stage sequential pipeline where foreground or background segmentation is initially performed independently with respect to each camera, and then used as input to obtain a visual hull for multi-view reconstruction. The problem with this approach is that the errors introduced at the segmentation stage cannot be recovered and are propagated to the reconstruction stage, reducing the final reconstruction quality. Segmentation from multiple wide-baseline views has been proposed by exploiting appearance similarity [17,38,70]. These approaches assume static backgrounds and different colour distributions for the foreground and background [52,17], which limits applicability for general scenes.
Joint segmentation and reconstruction methods incorporate estimation of segmentation or matting with reconstruction to provide a combined solution. Joint refinement avoids the propagation of errors between the two stages thereby making the solution more robust. Also, cues from segmentation and reconstruction can be combined efficiently to achieve more accurate results. The first multi-view joint estimation system was proposed by Szeliski et al. [59] which used iterative gradient descent to perform an energy minimization. A number of approaches were introduced for joint formulation in static scenes and one recent work used training data to classify the segments [69]. The focus shifted to joint segmentation and reconstruction for rigid objects in indoor and outdoor environments. These approaches used a variety of techniques such as patch-based refinement [54,48] and fixating cameras on the object of interest [11] for reconstructing rigid objects in the scene. However, these are either limited to static scenes [69,26] or process each frame independently thereby failing to enforce temporal consistency [11,23].
Joint reconstruction and segmentation on monocular video was proposed in [36,3,12], achieving semantic segmentation of scenes limited to rigid objects in street scenes. Practical application of joint estimation requires these approaches to work on non-rigid objects such as humans with clothing. A multi-layer joint segmentation and reconstruction approach was proposed for multiple view video of sports and indoor scenes [23]. The algorithm used known background images of the scene without the dynamic foreground objects to obtain an initial segmentation. Visual-hull based reconstruction was performed with a known prior foreground/background using a background image plate with fixed and calibrated cameras. This visual hull was used as a prior and was optimized by a combination of photo-consistency, silhouette, color and sparse feature information in an energy minimization framework to improve the segmentation and reconstruction quality. Although structurally similar to our approach, it requires the scene to be captured by fixed calibrated cameras and an a priori known fixed background plate to estimate the initial visual hull by background subtraction. The proposed approach overcomes these limitations, allowing moving cameras and unknown scene backgrounds.
An approach based on optical flow and graph cuts was shown to work well for non-rigid objects in indoor settings but requires known background segmentation to obtain silhouettes and is computationally expensive [24]. Practical application of temporally coherent joint estimation requires approaches that work on non-rigid objects for general scenes in uncontrolled environments. A quantitative evaluation of techniques for multi-view reconstruction was presented in [53]. These methods are able to produce high quality results, but rely on good initializations and strong prior assumptions with known and controlled (static) scene backgrounds.
The proposed method exploits the advantages of joint segmentation and reconstruction and addresses the limitations of existing methods by introducing a novel approach to reconstruct general dynamic scenes automatically from wide-baseline cameras with no prior. To overcome the limitations of existing methods, the proposed approach automatically initialises the foreground object segmentation from wide-baseline correspondence without prior knowledge of the scene. This is followed by a joint spatio-temporal reconstruction and segmentation of general scenes. Temporal correspondence is exploited to overcome visual ambiguities giving improved reconstruction together with temporal coherence of surface correspondence to obtain 4D scene models.
Temporally Coherent 4D Reconstruction
Temporally coherent 4D reconstruction refers to aligning the 3D surfaces of non-rigid objects over time for a dynamic sequence. This is achieved by estimating point-to-point correspondences for the 3D surfaces to obtain a 4D temporally coherent reconstruction. 4D models allow efficient representations for practical applications in film, broadcast and immersive content production such as virtual, augmented and mixed reality. The majority of existing approaches for reconstruction of dynamic scenes from multi-view videos process each time frame independently due to the difficulty of simultaneously estimating temporal correspondence for non-rigid objects. Independent per-frame reconstruction can result in errors due to the inherent visual ambiguity caused by occlusion and similar object appearance for general scenes. Recent research has shown that exploiting temporal information can improve reconstruction accuracy as well as achieving temporal coherence [43].
3D scene flow estimates frame-to-frame correspondence, whereas 4D temporal coherence estimates correspondence across the complete sequence to obtain a single surface model. Methods to estimate 3D scene flow have been reported in the literature [41] for autonomous vehicles; however this approach is limited to narrow-baseline cameras. Other scene flow approaches depend on 2D optical flow [66,6] and require an accurate estimate for most of the pixels, which fails in the case of large motion. Moreover, 3D scene flow methods align two frames independently and do not produce temporally coherent 4D models.
Research investigating spatio-temporal reconstruction across multiple frames was proposed in [20,37,24], exploiting the temporal information from previous frames using optical flow. An approach for recovering space-time consistent depth maps from multiple video sequences captured by stationary, synchronized and calibrated cameras for depth-based free-viewpoint video rendering was proposed by [39]. However these methods require accurate initialisation and fixed, calibrated cameras, and are limited to simple scenes. Other approaches to temporally coherent reconstruction [4] either require a large number of closely spaced cameras or bi-layer segmentation [72,30] as a constraint for reconstruction. Recent approaches for spatio-temporal reconstruction of multi-view data work on indoor studio data [47].
The framework proposed in this paper addresses limitations of existing approaches and gives 4D temporally coherent reconstruction for general dynamic indoor or outdoor scenes with large non-rigid motions, repetitive texture, uncontrolled illumination, and large capture volume. The scenes are captured with sparse static/moving cameras. The proposed approach gives 4D models of complete scenes with both static and dynamic objects for real-world applications (FVV and VR) with no prior knowledge of scene structure.
Multi-view Video Segmentation
In the field of image segmentation, approaches have been proposed to provide temporally consistent monocular video segmentation [21,49,45,71]. Hierarchical segmentation based on graphs was proposed in [21], and directed acyclic graphs were used for object proposal followed by segmentation [71]. Optical flow is used to identify and consistently segment objects [45,49]. Recently a number of approaches have been proposed for multi-view foreground object segmentation by exploiting appearance similarity spatially across views [16,35,38,70]. An approach for space-time multi-view segmentation was proposed by [17]. However, multi-view approaches assume a static background and different colour distributions for the foreground and background, which limits applicability for general scenes and non-rigid objects.
To address this issue we introduce a novel method for spatio-temporal multi-view segmentation of dynamic scenes using shape constraints. Single-image segmentation techniques using shape constraints provide good results for complex scene segmentation [25] (convex and concave shapes), but require manual interaction. The proposed approach performs automatic multi-view video segmentation by initializing the foreground object model using spatio-temporal information from wide-baseline feature correspondence, followed by a multi-layer optimization framework. Geodesic star convexity, previously used in single-view segmentation [25], is applied to constrain the segmentation in each view. Our multi-view formulation naturally enforces coherent segmentation between views and also resolves ambiguities such as the similarity of background and foreground in isolated views.
Summary and Motivation
Image-based temporally coherent 4D dynamic scene reconstruction without a prior model or constraints on the scene structure is a key problem in computer vision. Existing dense reconstruction algorithms need some strong initial prior and constraints for the solution to converge such as background, structure, and segmentation, which limits their application for automatic reconstruction of general scenes. Current approaches are also commonly limited to independent per-frame reconstruction and do not exploit temporal information or produce a coherent model with known correspondence.
The approach proposed in this paper aims to overcome the limitations of existing approaches to enable robust temporally coherent wide-baseline multiple view reconstruction of general dynamic scenes without prior assumptions on scene appearance, structure or segmentation of the moving objects. Static and dynamic objects in the scene are identified for simultaneous segmentation and reconstruction using geometry and appearance cues in a sparse-to-dense optimization framework. Temporal coherence is introduced to improve the quality of the reconstruction and geodesic star convexity is used to improve the quality of segmentation. The static and dynamic elements are fused automatically in both the temporal and spatial domain to obtain the final 4D scene reconstruction.
This paper presents a unified framework, novel in combining multiple view joint reconstruction and segmentation with temporal coherence to improve per-frame reconstruction performance, and produces a single framework from the initial work presented in [43,42]. In particular the approach gives a 4D surface model with full correspondence over time. A comprehensive experimental evaluation with comparison to the state-of-the-art in segmentation, reconstruction and 4D modelling is also presented, extending previous work. Application of the resulting 4D models to free-viewpoint video rendering and content production for immersive virtual reality experiences is also presented.
Methodology
This work is motivated by the limitations of existing multiple view reconstruction methods, which either work independently at each frame, resulting in errors due to visual ambiguity [19,23], or require restrictive assumptions on scene complexity and structure and often assume prior camera calibration and foreground segmentation [60,24]. We address these issues by initializing the joint reconstruction and segmentation algorithm automatically, introducing temporal coherence in the reconstruction and geodesic star convexity in the segmentation to reduce ambiguity and ensure consistent non-rigid structure initialization at successive frames. The proposed approach is demonstrated to achieve improved reconstruction and segmentation performance over state-of-the-art approaches and produce temporally coherent 4D models of complex dynamic scenes.
Overview
An overview of the proposed framework for temporally coherent multi-view reconstruction is presented in Figure 2 and consists of the following stages:
Multi-view video: The scenes are captured using multiple video cameras (static/moving) separated by wide baselines (> 15°). The cameras can be synchronized during capture using a time-code generator or later using the audio information. Camera extrinsic calibration and scene structure are assumed to be unknown.
Sparse reconstruction: The intrinsics are assumed to be known. Segmentation-based feature detection (SFD) [44] is used to obtain a relatively large number of sparse features suitable for wide-baseline matching, distributed throughout the scene including on dynamic objects such as people. SFD features are matched between views using a SIFT descriptor, giving camera extrinsics and a sparse 3D point cloud for each time instant of the entire sequence [27].
Initial scene segmentation and reconstruction - Section 3.2: Automatic initialisation is performed without prior knowledge of the scene structure or appearance to obtain an initial approximation for each object. The sparse point cloud is clustered in 3D [51], with each cluster representing a unique foreground object. Object segmentation increases efficiency and improves the robustness of the 4D models. This reconstruction is refined using the framework explained in Section 3.4 to obtain segmentation and dense reconstruction of each object.
Sparse-to-dense temporal reconstruction with temporal coherence - Section 3.3: Temporal coherence is introduced in the framework to initialize the coarse reconstruction and obtain frame-to-frame dense correspondences for each dynamic object. Dynamic object regions are detected at each time instant by sparse temporal correspondence of SFD features at successive frames. Sparse temporal feature correspondence allows propagation of the dense reconstruction for each dynamic object to obtain an initial approximation.
Joint object-based sparse-to-dense temporally coherent refinement of shape and segmentation - Section 3.4: The initial estimate is refined for each object per view through joint optimisation of shape and segmentation, using a robust cost function combining matching, color, contrast and smoothness information for wide-baseline matching with a geodesic star convexity constraint. A single 3D model for each dynamic object is obtained by fusion of the view-dependent depth maps using Poisson surface reconstruction [31]. Surface orientation is estimated based on neighbouring pixels.
Applications - Section 5: The 4D representation from the proposed joint segmentation and reconstruction framework has a number of applications in media production, including free-viewpoint video (FVV) rendering and virtual reality (VR).
The process above is repeated for the entire sequence for all objects in the first frame and for dynamic objects at each time-instant. The proposed approach enables automatic reconstruction of all objects in the scene as a 4D mesh sequence. Subsequent sections present the novel contributions of this work in initialisation and refinement to obtain a dense temporally coherent reconstruction. The approach is demonstrated to outperform previous approaches to dynamic scene reconstruction and does not require prior knowledge of the scene.
Initial Scene Segmentation and Reconstruction
For general dynamic scene reconstruction we need to reconstruct and segment the objects in the scene. This requires an initial coarse approximation to initialise a subsequent refinement step that optimises the segmentation and reconstruction with respect to each camera view. We introduce an approach based on sparse point cloud clustering; an overview is shown in Figure 3. Initialisation gives a complete coarse segmentation and reconstruction of each object in the first frame of the sequence for subsequent refinement. The dense reconstructions of the foreground objects and the background are combined to obtain a full scene reconstruction at the first time instant. A rough geometric proxy of the background is also created. For consecutive time instants, dynamic objects and newly appeared objects are identified and only these objects are reconstructed and segmented. The reconstruction of static objects is retained, which reduces computational complexity. The optic flow and cluster information for each dynamic object ensure that the same labels are retained for the entire sequence.
Sparse Point-cloud Clustering
The sparse representation of the scene is processed to remove outliers using point neighbourhood statistics [51]. We then segment the objects in the sparse scene reconstruction; this allows only moving objects to be reconstructed at each frame for efficiency, and also allows object shape similarity to be propagated across frames to increase the robustness of reconstruction.
We use a data clustering approach based on 3D grid subdivision of the space using an octree data structure in Euclidean space to segment objects at each frame. In a more general sense, nearest-neighbour information is used to cluster, which is essentially similar to a flood-fill algorithm. We choose this clustering method because of its computational efficiency and robustness. The approach allows segmentation of the objects in the scene and is demonstrated to work well for cluttered and general outdoor scenes, as shown in Section 4.
Objects with too few detected features are reconstructed as part of the scene background. Appearing, disappearing and reappearing objects are handled by sparse dynamic feature tracking, explained in Section 3.3. Clustering results are shown in Figure 3. This is followed by a sparse-to-dense coarse object-based approach to segment and reconstruct general dynamic scenes.
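The sketch below illustrates the outlier removal and clustering stage using Open3D; density-based clustering (DBSCAN) is used here as a stand-in for the octree/nearest-neighbour scheme described above, and the eps and min_points values are illustrative.

```python
import numpy as np
import open3d as o3d

def cluster_sparse_points(points, nb_neighbors=20, std_ratio=2.0,
                          eps=0.05, min_points=50):
    """Remove outliers using point-neighbourhood statistics and group the
    remaining sparse points into per-object clusters."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(np.asarray(points, dtype=np.float64))
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=nb_neighbors,
                                            std_ratio=std_ratio)
    labels = np.array(pcd.cluster_dbscan(eps=eps, min_points=min_points))
    if labels.size == 0:
        return []
    pts = np.asarray(pcd.points)
    # one 3D point set per candidate object; label -1 marks unclustered points
    return [pts[labels == k] for k in range(labels.max() + 1)]
```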
Coarse Object Reconstruction
The process to obtain the coarse reconstruction for the first frame of the sequence is shown in Figure 4. The sparse representation of each element is back-projected on the rectified image pair for each view. Delaunay triangulation [18] is performed on the set of back-projected points for each cluster in one image and is propagated to the second image using the sparse matched features. Triangles with an edge length greater than the median edge length of all triangles are removed. For each remaining triangle pair a direct linear transform is used to estimate the affine homography. The displacement at each pixel within the triangle pair is estimated by interpolation to get an initial dense disparity map for each cluster in the 2D image pair, labelled as $R_I$ and depicted in red in Figure 4. The initial coarse reconstruction of the observed objects in the scene is used to define the depth hypotheses at each pixel for the optimization.
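A hedged sketch of this per-cluster disparity initialisation is given below, assuming a rectified image pair so that the displacement is horizontal; SciPy's Delaunay triangulation and OpenCV's affine estimation stand in for the steps described above, and the function and variable names are illustrative.

```python
import numpy as np
import cv2
from scipy.spatial import Delaunay

def coarse_disparity(shape, pts_left, pts_right):
    """Initial dense disparity for one cluster: triangulate the left-image
    features and interpolate the sparse displacements inside each triangle
    with a per-triangle affine transform."""
    h, w = shape
    disparity = np.full((h, w), np.nan, dtype=np.float64)
    tri = Delaunay(pts_left)
    # drop overly long triangles (edge length > median edge length)
    verts = pts_left[tri.simplices]
    edges = np.linalg.norm(verts - np.roll(verts, 1, axis=1), axis=2)
    keep = edges.max(axis=1) <= np.median(edges)
    for simplex in tri.simplices[keep]:
        src = pts_left[simplex].astype(np.float32)
        dst = pts_right[simplex].astype(np.float32)
        A = cv2.getAffineTransform(src, dst)           # per-triangle affine map
        mask = np.zeros((h, w), np.uint8)
        cv2.fillConvexPoly(mask, np.round(src).astype(np.int32), 1)
        ys, xs = np.nonzero(mask)
        mapped_x = A[0, 0] * xs + A[0, 1] * ys + A[0, 2]
        disparity[ys, xs] = mapped_x - xs              # horizontal displacement
    return disparity
```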
The region $R_I$ does not ensure complete coverage of the object, so we extrapolate this region in 2D to obtain a region $R_O$ (shown in yellow) by 5% of the average distance between the boundary points of $R_I$ and the centroid of the object. To allow for errors in the initial approximate depth from sparse features, we add volume in front of and behind the projected surface by an error tolerance along the optical ray of the camera. This ensures that the object boundaries lie within the extrapolated initial coarse estimate, although the depth at each pixel of the combined regions may not be accurate. The tolerance for extrapolation differs depending on whether a pixel belongs to $R_I$ or $R_O$, as the propagated pixels of the extrapolated region ($R_O$) may have a higher level of error than the points from the sparse representation ($R_I$) and therefore require a comparatively higher tolerance. The threshold depends on the capture volume of the dataset and is set to 1% of the capture volume for $R_O$ and half that value for $R_I$. This volume in 3D corresponds to our initial coarse reconstruction of each object and enables us to remove the dependency of existing approaches on a background plate and visual-hull estimates. This process of cluster identification and initial coarse object reconstruction is performed for multiple objects in general environments. Initial object segmentation using point cloud clustering and coarse segmentation is insensitive to parameters; throughout this work the same parameters are used for all datasets. The result of this process is a coarse initial object segmentation and reconstruction for each object.
Sparse-to-dense temporal reconstruction with temporal coherence
Once the static scene reconstruction is obtained for the first frame, we perform temporally coherent reconstruction for dynamic objects at successive time instants instead of whole-scene reconstruction, for computational efficiency and to avoid redundancy. The initial coarse reconstruction for each dynamic region is refined in the subsequent optimization step with respect to each camera view. Dynamic scene objects are identified from the temporal correspondence of sparse feature points. Sparse correspondence is used to propagate an initial model of the moving object for refinement. Figure 5 presents the sparse reconstruction and temporal correspondence. New objects are identified per frame from the clustered sparse reconstruction and are labelled as dynamic objects.
Sparse temporal dynamic feature tracking: Numerous approaches have been proposed to track moving objects in 2D using either features or optical flow. However these methods may fail in the case of occlusion, movement parallel to the view direction, large motions and moving cameras. To overcome these limitations we match the sparse 3D feature points obtained using SFD [44] from multiple wide-baseline views at each time instant. The use of sparse 3D features is robust to large non-rigid motion, occlusions and camera movement. SFD detects sparse features which are stable across wide-baseline views and consecutive time instants for a moving camera and dynamic scene. Sparse 3D feature matches between consecutive time instants are back-projected to each view. These features are matched temporally using a SIFT descriptor to identify the moving points. Robust matching is achieved by enforcing multiple view consistency for the temporal feature correspondence in each view, as illustrated in Figure 6. Each match must satisfy the constraint:
\left\| H_{t,v}(p) + u_{t,r}\left(p + H_{t,v}(p)\right) - u_{t,v}(p) - H_{t,r}\left(p + u_{t,v}(p)\right) \right\| < \epsilon \quad (1)
where $p$ is the feature image point in view $v$ at frame $t$, $H_{t,v}(p)$ is the disparity at frame $t$ between views $v$ and $r$, and $u_{t,v}(p)$ is the temporal correspondence from frame $t$ to $t+1$ for view $v$. The multi-view consistency check ensures that correspondences between any two views remain temporally consistent for successive frames. Matches in the 2D domain are sensitive to camera movement and occlusion, hence we map the set of refined matches into 3D to make the system robust to camera motion. The Frobenius norm is applied to the 3D point gradients in all directions [71] to obtain the 'net' motion at each sparse point. The 'net' motion between pairs of 3D points at consecutive time instants is ranked, and the top and bottom 5 percentile values are removed. Median filtering is then applied to identify the dynamic features. Figure 7 shows an example with moving cameras for the Juggler dataset [5].
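The consistency constraint of Equation 1 can be checked per feature as in the sketch below, where the disparity and flow fields are passed as callables and the threshold eps is an illustrative assumption.

```python
import numpy as np

def temporally_consistent(p, H_t_v, H_t_r, u_t_v, u_t_r, eps=2.0):
    """Multi-view temporal consistency check of Equation 1 for one feature.
    H_t_v(p): disparity between views v and r at frame t, as a function of a 2D point;
    u_t_v(p): 2D temporal flow in view v from frame t to t+1."""
    p = np.asarray(p, dtype=np.float64)
    residual = (H_t_v(p) + u_t_r(p + H_t_v(p))
                - u_t_v(p) - H_t_r(p + u_t_v(p)))
    return np.linalg.norm(residual) < eps
```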
Sparse-to-dense model reconstruction: Dynamic 3D feature points are used to initialize the segmentation and reconstruction of the initial model. This avoids the assumption of static backgrounds and prior scene segmentation commonly used to initialise multiple view reconstruction with a coarse visual-hull approximation [23]. Temporal coherence also provides a more accurate initialisation to overcome visual ambiguities at individual frames. Figure 8 illustrates the use of temporal coherence for reconstruction initialisation and refinement. Dynamic feature correspondence is used to identify the mesh for each dynamic object. This mesh is back-projected on each view to obtain the region of interest. Lucas-Kanade optical flow [8] is performed on the projected mask for each view in the temporal domain using the dynamic feature correspondences over time as initialization. Dense multi-view wide-baseline correspondences from the previous frame are propagated to the current frame using the information from the flow vectors to obtain dense multi-view matches in the current frame. The matches are triangulated in 3D to obtain a refined 3D dense model of the dynamic object for the current frame. For dynamic scenes, a new object may enter the scene or a new part may appear as the object moves. To allow the introduction of new objects and object parts we also use information from the cluster of sparse points for each dynamic object. The cluster corresponding to the dynamic features is identified and static points are removed. This ensures that the set of new points contains not only the dynamic features but also the unprocessed points which represent new parts of the object. These points are added to the refined sparse model of the dynamic object. To handle new objects we detect new clusters at each time instant and consider them as dynamic regions. The sparse-to-dense initial coarse reconstruction improves the quality of segmentation and reconstruction after the refinement. Examples of the improvement in segmentation and reconstruction for the Odzemok [1] and Juggler [5] datasets are shown in Figure 9. As observed, the limbs of the people are retained by using information from the previous frames in both cases.
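The propagation of dense correspondences with optical flow and their triangulation can be sketched as follows with OpenCV, for a single reference/auxiliary view pair. The projection matrices, point arrays and image inputs are placeholders, and status/visibility checks beyond the flow status flags are omitted; this is a simplified illustration of the step, not the full multi-view implementation.

```python
import cv2
import numpy as np

def propagate_and_triangulate(ref_prev, ref_cur, aux_prev, aux_cur,
                              pts_ref_prev, pts_aux_prev, P_ref, P_aux):
    """Propagate previous-frame dense multi-view matches to the current frame
    in each view with Lucas-Kanade flow, then triangulate the propagated pairs.

    ref_prev, ref_cur, aux_prev, aux_cur : 8-bit grayscale images at t and t+1
    pts_ref_prev, pts_aux_prev           : (N, 1, 2) float32 matched points at t
    P_ref, P_aux                         : (3, 4) projection matrices
    """
    # Advect the dense matches from t to t+1 independently in each view.
    pts_ref_cur, s1, _ = cv2.calcOpticalFlowPyrLK(ref_prev, ref_cur, pts_ref_prev, None)
    pts_aux_cur, s2, _ = cv2.calcOpticalFlowPyrLK(aux_prev, aux_cur, pts_aux_prev, None)
    ok = (s1.ravel() == 1) & (s2.ravel() == 1)

    # Triangulate the propagated multi-view matches into a refined dense model.
    X_h = cv2.triangulatePoints(P_ref, P_aux,
                                pts_ref_cur[ok].reshape(-1, 2).T,
                                pts_aux_cur[ok].reshape(-1, 2).T)
    X = (X_h[:3] / X_h[3]).T          # homogeneous -> Euclidean, (M, 3)
    return X, ok
```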
Joint object-based sparse-to-dense temporally coherent refinement of shape and segmentation
The initial reconstruction and segmentation from dense temporal feature correspondence is refined using a joint optimization framework. A novel shape constraint is introduced based on geodesic star convexity, which has previously been shown to give improved performance in interactive image segmentation for structures with fine details (for example a person's fingers or hair) [25]. Shape is a powerful cue for object recognition and segmentation. Shape models represented as distance transforms from a template have been used for category-specific segmentation [33]. Some works have introduced generic connectivity constraints for segmentation, showing that obtaining a globally optimal solution under the connectivity constraint is NP-hard [64]. Veksler et al. used a shape constraint in a segmentation framework by enforcing a star convexity prior on the segmentation, and globally optimal solutions are achieved subject to this constraint [63]. The star convexity constraint ensures connectivity to seed points, and is a stronger assumption than plain connectivity. An example of a star-convex object is shown in Figure 10 along with a failure case for a non-rigid articulated object. To handle more complex objects the idea of geodesic forests with multiple star centres was introduced to obtain a globally optimal solution for interactive 2D object segmentation [25]. The main focus was to introduce shape constraints in interactive segmentation by means of a geodesic star convexity prior. The notion of connectivity was extended from Euclidean to geodesic so that paths can bend and adapt to image data, as opposed to straight Euclidean rays, thus extending visibility and reducing the number of star centers required.
The geodesic star-convexity is integrated as a constraint on the energy minimisation for joint multi-view reconstruction and segmentation [23]. In this work the shape constraint is automatically initialised for each view from the initial segmentation. The shape constraint is based on the geodesic distance with the foreground object initialisation (seeds) as star centres to which the object shape is restricted. The union formed by multiple object seeds forms a geodesic forest. This allows complex shapes to be segmented. To automatically initialize the segmentation we use the sparse temporal feature correspondences as star centers (seeds) to build the geodesic forest. The region outside the initial coarse reconstruction of all dynamic objects is initialized as the background seed for segmentation, as shown in Figure 12. The shape of the dynamic object is restricted by this geodesic distance constraint, which depends on the image gradient. Comparison with existing methods for multi-view segmentation demonstrates improvements in recovery of fine detail structure, as illustrated in Figure 12.
Fig. 10 (a) Representation of star convexity: the left object depicts an example of a star-convex object, with a star center marked. The object on the right with a plausible star center shows deviations from star-convexity in the fine details. (b) Multiple star semantics for joint refinement: single star center based segmentation is depicted on the left and multiple stars on the right.
Once we have a set of dense 3D points for each dynamic object, Poisson surface reconstruction is performed on the set of sparse points to obtain an initial coarse model of each dynamic region R, which is subsequently refined using the optimization framework (Section 3.4.1).
Optimization on initial coarse object reconstruction based on geodesic star convexity
The depth of the initial coarse reconstruction estimate is refined per view for each dynamic object at a per pixel level. View-dependent optimisation of depth is performed with respect to each camera which is robust to errors in camera calibration and initialisation. Calibration inaccuracies produce inconsistencies limiting the applicability of global reconstruction techniques which simultaneously consider all views; view-dependent techniques are more tolerant to such inaccuracies because they only use a subset of the views for reconstruction of depth from each camera view.
Our goal is to assign an accurate depth value from a set of depth values D = \{d_1, \ldots, d_{|D|-1}, U\} and a layer label from a set of label values L = \{l_1, \ldots, l_{|L|}\} to each pixel p in the region R of each dynamic object. Each d_i is obtained by sampling the optical ray from the camera, and U is an unknown depth value to handle occlusions. This is achieved by optimisation of a joint cost function [23] for label (segmentation) and depth (reconstruction):
E(l, d) = \lambda_{data} E_{data}(d) + \lambda_{contrast} E_{contrast}(l) + \lambda_{smooth} E_{smooth}(l, d) + \lambda_{color} E_{color}(l) \quad (2)
where d is the depth at each pixel and l is the layer label for multiple objects; the cost function terms are defined in Section 3.4.2. The equation consists of four terms: the data term scores photo-consistency, the smoothness term avoids sudden peaks in depth and maintains consistency, and the color and contrast terms identify the object boundaries. Data and smoothness terms are commonly used to solve reconstruction problems [7] and the color and contrast terms are used for segmentation [34]. This is solved subject to a geodesic star-convexity constraint on the labels l. A label l is star convex with respect to a center c if every point p \in l is visible to the star center c via l in the image x, which can be expressed as an energy cost:
E^{\star}(l|x, c) = \sum_{p \in R} \sum_{q \in \Gamma_{c,p}} E_{p,q}(l_p, l_q) \quad (3)

\forall q \in \Gamma_{c,p}: \quad E_{p,q}(l_p, l_q) = \begin{cases} \infty & \text{if } l_p \neq l_q \\ 0 & \text{otherwise} \end{cases} \quad (4)
where \forall p \in R: p \in l \Leftrightarrow l_p = 1, and \Gamma_{c,p} is the geodesic path joining p to the star center c, given by:
\Gamma_{c,p} = \arg\min_{\Gamma \in \mathcal{P}_{c,p}} L(\Gamma) \quad (5)

where \mathcal{P}_{c,p} denotes the set of all discrete paths between c and p and L(\Gamma) is the length of the discrete geodesic path as defined in [25]. In the case of image segmentation the gradients in the underlying image provide the information to compute the discrete paths between each pixel and the star centers, and L(\Gamma) is defined as:

L(\Gamma) = \sum_{i=1}^{N_D - 1} \sqrt{(1 - \delta_g)\, j(\Gamma_i, \Gamma_{i+1})^2 + \delta_g\, \|\nabla I(\Gamma_i)\|^2} \quad (6)
where \Gamma is an arbitrary parametrized discrete path with N_D pixels given by \Gamma_1, \Gamma_2, \ldots, \Gamma_{N_D}, j(\Gamma_i, \Gamma_{i+1}) is the Euclidean distance between successive pixels, and the quantity \|\nabla I(\Gamma_i)\|^2 is a finite difference approximation of the image gradient between the points \Gamma_i, \Gamma_{i+1}. The parameter \delta_g weights the Euclidean distance against the image-gradient term. Using the above definition, the geodesic distance is defined as in Equation 5.
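A minimal sketch of computing the geodesic distance of Equations 5 and 6 on the pixel grid is given below, using Dijkstra's algorithm with per-step weight sqrt((1 - δ_g)·j² + δ_g·‖∇I‖²). The 8-connected neighbourhood and the simple finite-difference gradient are implementation choices assumed here, not prescribed by the text.

```python
import heapq
import numpy as np

def geodesic_distance(image, star_centers, delta_g=0.5):
    """Geodesic distance from every pixel to its nearest star center.

    image        : (H, W) grayscale image, float
    star_centers : list of (row, col) seed pixels
    delta_g      : weight between Euclidean length and image-gradient term
    """
    H, W = image.shape
    dist = np.full((H, W), np.inf)
    heap = []
    for r, c in star_centers:
        dist[r, c] = 0.0
        heapq.heappush(heap, (0.0, r, c))

    # 8-connected neighbourhood offsets.
    nbrs = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

    while heap:
        d, r, c = heapq.heappop(heap)
        if d > dist[r, c]:
            continue
        for dr, dc in nbrs:
            rr, cc = r + dr, c + dc
            if 0 <= rr < H and 0 <= cc < W:
                euclid2 = dr * dr + dc * dc                   # j(.,.)^2
                grad2 = (image[rr, cc] - image[r, c]) ** 2    # finite-difference gradient^2
                step = np.sqrt((1.0 - delta_g) * euclid2 + delta_g * grad2)
                if d + step < dist[rr, cc]:
                    dist[rr, cc] = d + step
                    heapq.heappush(heap, (d + step, rr, cc))
    return dist
```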
An extension of single star-convexity is to use multiple stars to define a more general class of shapes. The introduction of multiple star centers reduces the path lengths and increases the visibility of small parts of objects such as limbs, as shown in Figure 10. Hence Equation 3 is extended to multiple stars. A label l is star convex with respect to a center c_i if every point p \in l is visible to a star center c_i in the set C = \{c_1, \ldots, c_{N_T}\} via l in the image x, where N_T is the number of star centers [25]. This is expressed as an energy cost:
E^{\star}(l|x, C) = \sum_{p \in R} \sum_{q \in \Gamma_{c,p}} E_{p,q}(l_p, l_q) \quad (7)
In our case all the correct temporal sparse feature correspondences are used as star centers, hence the segmentation will include all the points which are visible to these sparse features via geodesic distances in the region R, thereby employing the shape constraint. Since the star centers are selected automatically, the method is unsupervised. Comparison of segmentation constraint with geodesic multi-star convexity against no constraints and Euclidean multi-star convexity constraint is shown in Figure 11. The figure demonstrates the usefulness of the proposed approach with an improvement in segmentation quality on non-rigid complex objects. The energy in the Equation 2 is minimized as follows:
\min_{(l,d)} E(l, d) \;\; \text{s.t.} \;\; l \in S^{\star}(C) \;\; \Leftrightarrow \;\; \min_{(l,d)} E(l, d) + E^{\star}(l|x, C) \quad (8)
where S^{\star}(C) is the set of all shapes which lie within the geodesic distances with respect to the centers in C. Optimization of Equation 8, subject to each pixel p in the region R being at a geodesic distance \Gamma_{c,p} from the star centers in the set C, is performed using the \alpha-expansion algorithm for a pixel p by iterating through the set of labels in L \times D [10]. Graph-cut is used to obtain a local optimum [9]. The improvements obtained by using geodesic star convexity in the framework are shown in Figure 12, and by using temporal coherence in Figure 9. Figure 13 shows the improvements from the geodesic shape constraint, temporal coherence, and the combined proposed approach for the Dance2 [2] dataset.
Fig. 12 Geodesic star convexity: a region R with star centers C connected with geodesic distance \Gamma_{c,p}. Segmentation results with and without geodesic star convexity based optimization are shown on the right for the Juggler dataset.
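The sketch below illustrates one way the hard shape constraint of Equations 4 and 7 can be wired into a binary graph-cut: an effectively infinite pairwise cost is placed between each pixel and its predecessor on the geodesic path towards the nearest star center, so any cut separating them is prohibitively expensive. It uses the PyMaxflow library, assumes precomputed unary costs and per-pixel path predecessors, and is a simplified two-label illustration rather than the full α-expansion over L × D described above; the source/sink label convention is also an illustrative choice.

```python
import numpy as np
import maxflow  # PyMaxflow

def segment_with_star_convexity(unary_fg, unary_bg, path_predecessor):
    """Binary segmentation with a hard star-convexity-style constraint.

    unary_fg, unary_bg : (H, W) per-pixel costs for foreground / background
    path_predecessor   : (H, W, 2) row/col of the next pixel towards the
                         nearest star center (-1 where undefined)
    """
    H, W = unary_fg.shape
    g = maxflow.Graph[float]()
    node_ids = g.add_grid_nodes((H, W))

    # Unary (terminal) costs for the two labels.
    g.add_grid_tedges(node_ids, unary_fg, unary_bg)

    INF = 1e9  # effectively infinite pairwise cost enforcing the constraint
    for r in range(H):
        for c in range(W):
            pr, pc = path_predecessor[r, c]
            if pr >= 0:
                # Forbid label changes along the geodesic path to the center.
                g.add_edge(node_ids[r, c], node_ids[pr, pc], INF, INF)

    g.maxflow()
    return g.get_grid_segments(node_ids)  # boolean label map
```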
Energy cost function for joint segmentation and reconstruction
For completeness, in this section we define each of the terms in Equation 2. These are based on the terms previously used for joint optimisation over depth for each pixel introduced in [42], with a modification of the color matching term to improve robustness and an extension to multiple labels.
Matching term: The data term for matching between views is specified as a measure of photo-consistency (Figure 14) as follows:
E_{data}(d) = \sum_{p \in P} e_{data}(p, d_p), \qquad e_{data}(p, d_p) = \begin{cases} M(p, q) = \sum_{i \in O_k} m(p, q) & \text{if } d_p \neq U \\ M_U & \text{if } d_p = U \end{cases} \quad (9)
where P is the 4-connected neighbourhood of pixel p, M_U is the fixed cost of labelling a pixel unknown, and q denotes the projection of the hypothesised point P in an auxiliary camera, where P is a 3D point along the optical ray passing through pixel p located at a distance d_p from the reference camera. O_k is the set of the k most photo-consistent pairs. For textured scenes Normalized Cross Correlation (NCC) over a squared window is a common choice [53]. The NCC values range from -1 to 1 and are mapped to non-negative values by using the function 1 - NCC.
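A sketch of the NCC-based matching cost for a single pixel and depth hypothesis is shown below; the 15 x 15 window follows the value quoted further on, while the windowing helper and border handling are illustrative assumptions.

```python
import numpy as np

def ncc_cost(ref_img, aux_img, p_ref, p_aux, half=7):
    """Photo-consistency cost 1 - NCC over a (2*half+1)^2 window.

    ref_img, aux_img : grayscale images of the reference and auxiliary views
    p_ref, p_aux     : (row, col) of pixel p and of its projection q for the
                       hypothesised depth d_p
    Returns a cost in [0, 2]; lower means more photo-consistent.
    """
    def window(img, center):
        r, c = center
        return img[r - half:r + half + 1, c - half:c + half + 1].astype(np.float64)

    a, b = window(ref_img, p_ref), window(aux_img, p_aux)
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12
    ncc = (a * b).sum() / denom          # NCC in [-1, 1]
    return 1.0 - ncc                     # mapped to a non-negative cost
```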
A maximum likelihood measure [40] is used in this function for confidence value calculation between the center pixel p and the other pixels q and is based on the survey on confidence measures for stereo [28]. The measure is defined as:
m(p, q) = \frac{\exp\left(-\frac{c_{min}}{2\sigma_i^2}\right)}{\sum_{(p,q) \in N} \exp\left(-\frac{1 - NCC(p,q)}{2\sigma_i^2}\right)} \quad (10)
where \sigma_i^2 is the noise variance for each auxiliary camera i; this parameter was fixed to 0.3. N denotes the set of interacting pixels in P. c_{min} is the minimum cost for a pixel obtained by evaluating the function (1 - NCC(\cdot,\cdot)) on a 15 \times 15 window.
Contrast term: Segmentation boundaries in images tend to align with contours of high contrast and it is desirable to represent this as a constraint in stereo matching. A consistent interpretation of segmentation-prior and contrast-likelihood is used from [34]. We use a modified version of this interpretation in our formulation to preserve edges by using bilateral filtering [61] instead of Gaussian filtering. The contrast term is as follows:
E_{contrast}(l) = \sum_{(p,q) \in N} e_{contrast}(p, q, l_p, l_q) \quad (11)

e_{contrast}(p, q, l_p, l_q) = \begin{cases} 0 & \text{if } l_p = l_q \\ \frac{1}{1+\epsilon}\left(\epsilon + \exp(-C(p, q))\right) & \text{otherwise} \end{cases} \quad (12)

where \|\cdot\| is the L_2 norm and \epsilon = 1. The simplest choice for C(p, q) would be the squared Euclidean color distance between the intensities at pixels p and q, as used in [23]. We propose a term for better segmentation:

C(p, q) = \frac{\|B(p) - B(q)\|^2}{2\sigma_{pq}^2 d_{pq}^2}

where B(\cdot) represents the bilateral-filtered image, d_{pq} is the Euclidean distance between p and q, and \sigma_{pq} = \left\langle \|B(p) - B(q)\|^2 / d_{pq}^2 \right\rangle, with \langle\cdot\rangle denoting the mean over the image. This term helps remove regions with low photo-consistency scores and weak edges, and thereby helps in estimating the object boundaries.
Smoothness term: This term is inspired by [23] and it ensures the depth labels vary smoothly within the object reducing noise and peaks in the reconstructed surface. This is useful when the photo-consistency score is low and insufficient to assign depth to a pixel ( Figure 14). It is defined as:
E_{smooth}(l, d) = \sum_{(p,q) \in N} e_{smooth}(l_p, d_p, l_q, d_q) \quad (13)

e_{smooth}(l_p, d_p, l_q, d_q) = \begin{cases} \min(|d_p - d_q|, d_{max}) & \text{if } l_p = l_q \text{ and } d_p, d_q \neq U \\ 0 & \text{if } l_p = l_q \text{ and } d_p = d_q = U \\ d_{max} & \text{otherwise} \end{cases} \quad (14)
d_{max} is set to 50 times the size of the depth sampling step for all datasets.
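The truncated-linear smoothness penalty of Equation 14 is simple to express directly; the sketch below uses None to stand for the unknown depth label U.

```python
def e_smooth(l_p, d_p, l_q, d_q, d_max):
    """Smoothness cost between neighbouring pixels (Equation 14).
    A depth value of None plays the role of the unknown label U."""
    if l_p == l_q:
        if d_p is not None and d_q is not None:
            return min(abs(d_p - d_q), d_max)   # truncated linear penalty
        if d_p is None and d_q is None:
            return 0.0
    return d_max
```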
Color term: This term is computed using the negative log-likelihood [9] of the color models learned from the foreground and background markers. The star centers obtained from the sparse 3D features are the foreground markers, and for the background markers we consider the region outside the projected initial coarse reconstruction in each view. The color models use GMMs with 5 components each for foreground and background, mixed with uniform color models [14] because the markers are sparse.
E_{color}(l) = \sum_{p \in P} -\log P(I_p | l_p) \quad (15)
where P(I_p | l_p = l_i) denotes the probability that pixel p in the reference image belongs to layer l_i.
Fig. 15 Comparison of segmentation on benchmark static datasets using geodesic star-convexity.
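A sketch of the colour term using scikit-learn's GaussianMixture is given below; the uniform-colour mixing weight is an assumed parameter, and the foreground/background sample arrays would come from the star centers and the region outside the projected coarse reconstruction, as described above.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def color_neg_log_likelihood(fg_samples, bg_samples, pixels, uniform_weight=0.1):
    """Negative log-likelihood colour costs for foreground and background.

    fg_samples, bg_samples : (N, 3) RGB samples from the markers
    pixels                 : (M, 3) RGB values of the pixels to score
    """
    fg_gmm = GaussianMixture(n_components=5).fit(fg_samples)
    bg_gmm = GaussianMixture(n_components=5).fit(bg_samples)

    uniform_ll = np.log(1.0 / 255.0 ** 3)     # uniform colour model over RGB
    def nll(gmm):
        # Mixture of the GMM with a uniform colour model, then -log likelihood.
        ll = np.logaddexp(np.log(1 - uniform_weight) + gmm.score_samples(pixels),
                          np.log(uniform_weight) + uniform_ll)
        return -ll

    return nll(fg_gmm), nll(bg_gmm)           # per-pixel E_color contributions
```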
Results and Performance Evaluation
The proposed system is tested on publicly available multi-view research datasets of indoor and outdoor scenes; details of the datasets are given in Table 1. The parameters used for all the datasets are defined in Table 2. More information is available on the website 1 .
Multi-view segmentation evaluation
Segmentation is evaluated against the state-of-the-art methods for multi-view segmentation, Kowdle [35] and Djelouah [16], for static scenes, and against the joint segmentation and reconstruction methods Mustafa [42] (per frame) and Guillemaut [24] (using temporal information) for both static and dynamic scenes. For static multi-view data the segmentation is initialised as detailed in Section 3.1, followed by refinement using the constrained optimisation of Section 3.4.1. For dynamic scenes the full pipeline with temporal coherence is used as detailed in Section 3. Ground-truth is obtained by manually labelling the foreground for the Office, Dance1 and Odzemok datasets; for the other datasets ground-truth is available online. We initialize all approaches with the same proposed initial coarse reconstruction for fair comparison.
To evaluate the segmentation we measure completeness as the ratio of intersection to union with the ground-truth [35]. Comparisons are shown in Table 3 and Figures 15 and 16 for the static benchmark datasets. Comparisons for dynamic scene segmentation are shown in Table 4 and Figures 17 and 18. Results for multi-view segmentation of static scenes are more accurate than Djelouah, Mustafa, and Guillemaut, and comparable to Kowdle, with improved segmentation of some detail such as the back of the chair.
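The completeness measure used here is the standard intersection-over-union on binary masks; a minimal sketch:

```python
import numpy as np

def completeness(pred_mask, gt_mask):
    """Ratio of intersection to union between a predicted and a ground-truth
    binary segmentation mask."""
    inter = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    return inter / union if union > 0 else 1.0
```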
For dynamic scenes the geodesic star convexity based optimization together with temporal consistency gives improved segmentation of fine detail such as the legs of the table in the Office dataset and the limbs of the person in the Juggler, Magician and Dance2 datasets in Figures 17 and 18. This overcomes limitations of previous multi-view per-frame segmentation.
Reconstruction evaluation
Reconstruction results obtained using the proposed method are compared against Mustafa [42], Guillemaut [24], and Furukawa [19] for dynamic sequences. Furukawa [19] is a per-frame multi-view wide-baseline stereo approach which ranks highly on the middlebury benchmark [53] but does not refine the segmentation.
The depth maps obtained using the proposed approach are compared against Mustafa and Guillemaut in Figure 19. The depth maps obtained using the proposed approach are smoother, with lower reconstruction noise compared to the state-of-the-art methods. Figures 20 and 21 present qualitative and quantitative comparisons of our method with the state-of-the-art approaches.
Comparison of reconstructions demonstrates that the proposed method gives consistently more complete and accurate models. The colour maps highlight the quantitative differences in reconstruction. As far as we are aware no ground-truth data exist for dynamic scene reconstruction from real multi-view video. In Figure 21 we present a comparison with the reference mesh available with the Dance2 dataset reconstructed using a visual-hull approach. This comparison demonstrates improved reconstruction of fine detail with the proposed technique.
In contrast to all previous approaches the proposed method gives temporally coherent 4D model reconstructions with dense surface correspondence over time. The introduction of temporal coherence constrains the reconstruction in regions which are ambiguous at a particular frame, such as the right leg of the juggler in Figure 20, resulting in more complete shape. Figure 22 shows three complete scene reconstructions with 4D models of multiple objects. The Juggler and Magician sequences are reconstructed from moving handheld cameras.
Computational complexity: Computation times for the proposed approach vs other methods are presented in Table 5. The proposed approach to reconstruct temporally coherent 4D models is comparable in computation time to per-frame multiple view reconstruction and gives a ~50% reduction in computation cost compared to previous joint segmentation and reconstruction approaches using a known background. This efficiency is achieved through improved per-frame initialisation based on temporal propagation and the introduction of the geodesic star constraint in the joint optimisation. Further results can be found in the supplementary material.
Temporal coherence: A frame-to-frame alignment is obtained using the proposed approach, as shown in Figure 23 for the Dance1 and Juggler datasets. The meshes of the dynamic object at Frame 1 and Frame 9 are color-coded in both datasets and the color is propagated to the next frame using the dense temporal coherence information. As seen from the figure, the color in the different parts of the object is retained in the next frame. The proposed approach obtains sequential temporal alignment which drifts with large movement of the object, hence successive frames are shown in the figure.
Limitations: As with previous dynamic scene reconstruction methods the proposed approach has a number of limitations: persistent ambiguities in appearance between objects will degrade the improvement achieved with temporal coherence; scenes with a large number of inter-occluding dynamic objects will degrade performance; the approach requires sufficient wide-baseline views to cover the scene.
Applications to immersive content production
The 4D meshes generated from the proposed approach can be used for applications in immersive content production such as FVV rendering and VR. This section demonstrates the results of these applications.
Free-viewpoint rendering
In FVV, the virtual viewpoint is controlled interactively by the user. The appearance of the reconstruction is sampled and interpolated directly from the captured camera images using cameras located close to the virtual viewpoint [57].
The proposed joint segmentation and reconstruction framework generates per-view silhouettes and a temporally coherent 4D reconstruction at each time instant of the input video sequence. This representation of the dynamic sequence is used for FVV rendering. To create FVV, a view-dependent surface texture is computed based on the user-selected virtual view. This virtual view is obtained by combining the information from camera views in close proximity to the virtual viewpoint [57]. FVV rendering gives the user the freedom to interactively choose a novel viewpoint in space to observe the dynamic scene and reproduces fine-scale temporal surface details, such as the movement of hair and clothing wrinkles, that may not be modelled geometrically. An example of a reconstructed scene and the camera configuration is shown in Figure 24.
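One common way to realise such view-dependent texturing is to weight the nearby capture cameras by the angle between their viewing directions and the virtual viewpoint. The sketch below computes such blending weights; it is an illustrative choice, not the specific blending scheme of [57].

```python
import numpy as np

def view_blend_weights(virtual_dir, camera_dirs, k=3):
    """Blending weights for the k capture cameras closest in viewing angle.

    virtual_dir : (3,) unit viewing direction of the virtual camera
    camera_dirs : (N, 3) unit viewing directions of the capture cameras
    Returns (indices, weights) with weights summing to one.
    """
    cosines = camera_dirs @ virtual_dir            # cosine of angle to each camera
    nearest = np.argsort(-cosines)[:k]             # k closest cameras
    w = np.clip(cosines[nearest], 0.0, None)
    w = w / (w.sum() + 1e-12)
    return nearest, w
```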
A qualitative evaluation of images synthesised using FVV is shown in Figure 25 and 26. These demonstrate reconstruction results rendered from novel viewpoints from the proposed method against Mustafa [43] and Guillemaut [23] on publicly available datasets. This is particularly important for wide-baseline camera configurations where this technique can be used to synthesize intermediate viewpoints where it may not be practical or economical to physically locate real cameras.
Virtual reality rendering
There is a growing demand for photo-realistic content in the creation of immersive VR experiences. The 4D temporally coherent reconstructions of the dynamic scenes obtained using the proposed approach enables the creation of photo-realistic digital assets that can be incorporated into VR environments using game engines such as Unity and Unreal Engine, as shown in Figure 27 for single frame of four datasets and for a series of frames for Dance1 dataset.
In order to efficiently render the reconstructions in a game engine for applications in VR, a UV texture atlas is extracted using the 4D meshes from the proposed approach as a geometric proxy. The UV texture atlas at each frame is applied to the model at render time in Unity for viewing in a VR headset. A UV texture atlas is constructed by projectively texturing and blending multiple view frames onto a 2D unwrapped UV texture atlas, see Figure 28. This is performed once for each static object and at each time instant for dynamic objects, allowing efficient storage and real-time playback of static and dynamic textured reconstructions within a VR headset.
Conclusion
This paper introduced a novel technique to automatically segment and reconstruct dynamic scenes captured from multiple moving cameras in general dynamic uncontrolled environments without any prior on background appearance or structure. The proposed automatic initialization was used to identify and initialize the segmentation and reconstruction of multiple objects. A framework was presented for temporally coherent 4D model reconstruction of dynamic scenes from a set of wide-baseline moving cameras. The approach gives a complete model of all static and dynamic non-rigid objects in the scene. Temporal coherence for dynamic objects addresses limitations of previous per-frame reconstruction, giving improved reconstruction and segmentation together with dense temporal surface correspondence for dynamic objects. A sparse-to-dense approach is introduced to establish temporal correspondence for non-rigid objects using robust sparse feature matching to initialise dense optical flow, providing an initial segmentation and reconstruction. Joint refinement of object reconstruction and segmentation is then performed using a multiple view optimisation with a novel geodesic star convexity constraint that gives improved shape estimation and is computationally efficient. Comparison against state-of-the-art techniques for multiple view segmentation and reconstruction demonstrates significant improvement in performance for complex scenes. The approach enables reconstruction of 4D models for complex scenes which has not been demonstrated previously. | 8,667
1907.08195 | 2963385316 | Existing techniques for dynamic scene reconstruction from multiple wide-baseline cameras primarily focus on reconstruction in controlled environments, with fixed calibrated cameras and strong prior constraints. This paper introduces a general approach to obtain a 4D representation of complex dynamic scenes from multi-view wide-baseline static or moving cameras without prior knowledge of the scene structure, appearance, or illumination. Contributions of the work are: An automatic method for initial coarse reconstruction to initialize joint estimation; Sparse-to-dense temporal correspondence integrated with joint multi-view segmentation and reconstruction to introduce temporal coherence; and a general robust approach for joint segmentation refinement and dense reconstruction of dynamic scenes by introducing shape constraint. Comparison with state-of-the-art approaches on a variety of complex indoor and outdoor scenes, demonstrates improved accuracy in both multi-view segmentation and dense reconstruction. This paper demonstrates unsupervised reconstruction of complete temporally coherent 4D scene models with improved non-rigid object segmentation and shape reconstruction and its application to free-viewpoint rendering and virtual reality. | Temporally coherent 4D reconstruction refers to aligning the 3D surfaces of non-rigid objects over time for a dynamic sequence. This is achieved by estimating point-to-point correspondences for the 3D surfaces to obtain 4D temporally coherent reconstruction. 4D models allows to create efficient representation for practical applications in film, broadcast and immersive content production such as virtual, augmented and mixed reality. The majority of existing approaches for reconstruction of dynamic scenes from multi-view videos process each time frame independently due to the difficulty of simultaneously estimating temporal correspondence for non-rigid objects. Independent per-frame reconstruction can result in errors due to the inherent visual ambiguity caused by occlusion and similar object appearance for general scenes. Recent research has shown that exploiting temporal information can improve reconstruction accuracy as well as achieving temporal coherence @cite_7 . | {
"abstract": [
"This paper presents an approach for reconstruction of 4D temporally coherent models of complex dynamic scenes. No prior knowledge is required of scene structure or camera calibration allowing reconstruction from multiple moving cameras. Sparse-to-dense temporal correspondence is integrated with joint multi-view segmentation and reconstruction to obtain a complete 4D representation of static and dynamic objects. Temporal coherence is exploited to overcome visual ambiguities resulting in improved reconstruction of complex scenes. Robust joint segmentation and reconstruction of dynamic objects is achieved by introducing a geodesic star convexity constraint. Comparative evaluation is performed on a variety of unstructured indoor and outdoor dynamic scenes with hand-held cameras and multiple people. This demonstrates reconstruction of complete temporally coherent 4D scene models with improved nonrigid object segmentation and shape reconstruction."
],
"cite_N": [
"@cite_7"
],
"mid": [
"2295686522"
]
} | Temporally coherent general dynamic scene reconstruction | Fig. 1 Temporally consistent scene reconstruction for the Odzemok dataset, color-coded to show the scene object segmentation obtained. effects in film and broadcast production and for content production in virtual reality. The ultimate goal of modelling dynamic scenes from multiple cameras is automatic understanding of real-world scenes from distributed camera networks, for applications in robotics and other autonomous systems. Existing methods have applied multiple view dynamic scene reconstruction techniques in controlled environments with known background or chroma-key studios [23,20,56,60]. Other multiple view stereo techniques require a relatively dense static camera network resulting in a large number of cameras [19]. Extensions to more general outdoor scenes [5,32,60] use prior reconstruction of the static geometry from images of the empty environment. However these methods either require accurate segmentation of dynamic foreground objects, or prior knowledge of the scene structure and background, or are limited to static cameras and controlled environments. Scenes are reconstructed semi-automatically, requiring manual intervention for segmentation/rotoscoping, and result in temporally incoherent per-frame mesh geometries. Temporally coherent geometry with known surface correspondence across the sequence is essential for real-world applications and compact representation.
Our paper addresses the limitations of existing approaches by introducing a methodology for unsupervised temporally coherent dynamic scene reconstruction from multiple wide-baseline static or moving camera views without prior knowledge of the scene structure or background appearance. This temporally coherent dynamic scene reconstruction is demonstrated to work in applications for immersive content production such as free-viewpoint video (FVV) and virtual reality (VR). This work combines two previously published papers in general dynamic reconstruction [42] and temporally coherent reconstruction [43] into a single framework and demonstrates application of this novel unsupervised joint segmentation and reconstruction in immersive content production FVV and VR (Section 5).
The input is a sparse set of synchronised videos from multiple moving cameras of an unknown dynamic scene without prior scene segmentation or camera calibration. Our first contribution is automatic initialisation of camera calibration and sparse scene reconstruction from sparse feature correspondence, using sparse feature detection and matching between pairs of frames. An initial coarse reconstruction and segmentation of all scene objects is obtained from sparse features matched across multiple views. This eliminates the requirement for prior knowledge of the background scene appearance or structure. Our second contribution is a sparse-to-dense reconstruction and segmentation approach to introduce temporal coherence at every frame. We exploit temporal coherence of the scene to overcome visual ambiguities inherent in single-frame reconstruction and multiple view segmentation methods for general scenes. Temporal coherence refers to the correspondence between the 3D surfaces of all objects observed over time. Our third contribution is spatio-temporal alignment to estimate dense surface correspondence for 4D reconstruction. A geodesic star convexity shape constraint is introduced for the shape segmentation to improve the quality of segmentation for non-rigid objects with complex appearance. The proposed approach overcomes the limitations of existing methods, allowing an unsupervised temporally coherent 4D reconstruction of complete models for general dynamic scenes.
The scene is automatically decomposed into a set of spatio-temporally coherent objects as shown in Figure 1, where the resulting 4D scene reconstruction has temporally coherent labels and surface correspondence for each object. This can be used for free-viewpoint video rendering and imported into a game engine for VR experience production. The contributions explained above can be summarized as follows:
- Unsupervised temporally coherent dense reconstruction and segmentation of general complex dynamic scenes from multiple wide-baseline views.
- Automatic initialization of dynamic object segmentation and reconstruction from sparse features.
- A framework for space-time sparse-to-dense segmentation, reconstruction and temporal correspondence.
- Robust spatio-temporal refinement of dense reconstruction and segmentation integrating error tolerant photo-consistency and edge information using geodesic star convexity.
- Robust and computationally efficient reconstruction of dynamic scenes by exploiting temporal coherence.
- Real-world applications of 4D reconstruction to free-viewpoint video rendering and virtual reality.
This paper is structured as follows: first, related work is reviewed. The methodology for general dynamic scene reconstruction is then introduced. Finally a thorough qualitative and quantitative evaluation and comparison to the state-of-the-art on challenging datasets is presented.
Related Work
Temporally coherent reconstruction is a challenging task for general dynamic scenes due to a number of factors such as motion blur, articulated, non-rigid and large motion of multiple people, resolution differences between camera views, occlusions, wide-baselines, errors in calibration and cluttered dynamic backgrounds. Segmentation of dynamic objects from such scenes is difficult because of foreground and background complexity and the likelihood of overlapping background and foreground color distributions. Reconstruction is also challenging due to limited visual cues and relatively large errors affecting both calibration and extraction of a globally consistent solution. This section reviews previous work on dynamic scene reconstruction and segmentation.
Dynamic Scene Reconstruction
Dense dynamic shape reconstruction is a fundamental problem and heavily studied area in the field of computer vision. Recovering accurate 3D models of a dynamically evolving, non-rigid scene observed by multiple synchronised cameras is a challenging task. Research on multiple view dense dynamic reconstruction has primarily focused on indoor scenes with controlled illumination and static backgrounds, extending methods for multiple view reconstruction of static scenes [53] to sequences [62]. Deep learning based approaches have been introduced to estimate the shape of dynamic objects from minimal camera views in constrained environments [29,68] and for rigid objects [58]. In the last decade, focus has shifted to more challenging outdoor scenes captured with both static and moving cameras. Reconstruction of non-rigid dynamic objects in uncontrolled natural environments is challenging due to the scene complexity, illumination changes, shadows, occlusion and dynamic backgrounds with clutter such as trees or people. Methods have been proposed for multi-view reconstruction [65,39,37] requiring a large number of closely spaced cameras for surface estimation of dynamic shape. Practical applications require relatively sparse moving cameras to acquire coverage over large areas such as outdoor scenes. A number of approaches for multi-view reconstruction of outdoor scenes require initial silhouette segmentation [67,32,22,23] to allow visual-hull reconstruction. Most of these approaches to general dynamic scene reconstruction fail in the case of complex (cluttered) scenes captured with moving cameras.
A recent work proposed reconstruction of dynamic fluids [50] for static cameras. Another work used RGB-D cameras to obtain reconstruction of non-rigid surfaces [55]. Pioneering research in general dynamic scene reconstruction from multiple handheld wide-baseline cameras [5,60] exploited prior reconstruction of the background scene to allow dynamic foreground segmentation and reconstruction. Recent work [46] estimates shape of dynamic objects from handheld cameras exploiting GANs. However these approaches either work for static/indoor scenes or exploit strong prior assumptions such as silhouette information, known background or scene structure. Also all these approaches give per frame reconstruction leading to temporally incoherent geometries. Our aim is to perform temporally coherent dense reconstruction of unknown dynamic non-rigid scenes automatically without strong priors or limitations on scene structure.
Joint Segmentation and Reconstruction
Many of the existing multi-view reconstruction approaches rely on a two-stage sequential pipeline where foreground or background segmentation is initially performed independently with respect to each camera, and then used as input to obtain a visual hull for multi-view reconstruction. The problem with this approach is that the errors introduced at the segmentation stage cannot be recovered and are propagated to the reconstruction stage, reducing the final reconstruction quality. Segmentation from multiple wide-baseline views has been proposed by exploiting appearance similarity [17,38,70]. These approaches assume static backgrounds and different colour distributions for the foreground and background [52,17], which limits applicability for general scenes.
Joint segmentation and reconstruction methods incorporate estimation of segmentation or matting with reconstruction to provide a combined solution. Joint refinement avoids the propagation of errors between the two stages thereby making the solution more robust. Also, cues from segmentation and reconstruction can be combined efficiently to achieve more accurate results. The first multi-view joint estimation system was proposed by Szeliski et al. [59] which used iterative gradient descent to perform an energy minimization. A number of approaches were introduced for joint formulation in static scenes and one recent work used training data to classify the segments [69]. The focus shifted to joint segmentation and reconstruction for rigid objects in indoor and outdoor environments. These approaches used a variety of techniques such as patch-based refinement [54,48] and fixating cameras on the object of interest [11] for reconstructing rigid objects in the scene. However, these are either limited to static scenes [69,26] or process each frame independently thereby failing to enforce temporal consistency [11,23].
Joint reconstruction and segmentation on monocular video was proposed in [36,3,12] achieving semantic segmentation of scene limited to rigid objects in street scenes. Practical application of joint estimation requires these approaches to work on non-rigid objects such as humans with clothing. A multi-layer joint segmentation and reconstruction approach was proposed for multiple view video of sports and indoor scenes [23]. The algorithm used known background images of the scene without the dynamic foreground objects to obtain an initial segmentation. Visual-hull based reconstruction was performed with known prior foreground/background using a background image plate with fixed and calibrated cameras. This visual hull was used as a prior and was optimized by a combination of photo-consistency, silhouette, color and sparse feature information in an energy minimization framework to improve the segmentation and reconstruction quality. Although structurally similar to our approach, it requires the scene to be captured by fixed calibrated cameras and a priori known fixed background plate as a prior to estimate the initial visual hull by background subtraction. The proposed approach overcomes these limitations allowing moving cameras and unknown scene backgrounds.
An approach based on optical flow and graph cuts was shown to work well for non-rigid objects in indoor settings but requires known background segmentation to obtain silhouettes and is computationally expensive [24]. Practical application of temporally coherent joint estimation requires approaches that work on non-rigid objects for general scenes in uncontrolled environments. A quantitative evaluation of techniques for multi-view reconstruction was presented in [53]. These methods are able to produce high quality results, but rely on good initializations and strong prior assumptions with known and controlled (static) scene backgrounds.
The proposed method exploits the advantages of joint segmentation and reconstruction and addresses the limitations of existing methods by introducing a novel approach to reconstruct general dynamic scenes automatically from wide-baseline cameras with no prior. To overcome the limitations of existing methods, the proposed approach automatically initialises the foreground object segmentation from wide-baseline correspondence without prior knowledge of the scene. This is followed by a joint spatio-temporal reconstruction and segmentation of general scenes. Temporal correspondence is exploited to overcome visual ambiguities giving improved reconstruction together with temporal coherence of surface correspondence to obtain 4D scene models.
Temporal coherent 4D Reconstruction
Temporally coherent 4D reconstruction refers to aligning the 3D surfaces of non-rigid objects over time for a dynamic sequence. This is achieved by estimating point-to-point correspondences for the 3D surfaces to obtain a 4D temporally coherent reconstruction. 4D models allow an efficient representation for practical applications in film, broadcast and immersive content production such as virtual, augmented and mixed reality. The majority of existing approaches for reconstruction of dynamic scenes from multi-view videos process each time frame independently due to the difficulty of simultaneously estimating temporal correspondence for non-rigid objects. Independent per-frame reconstruction can result in errors due to the inherent visual ambiguity caused by occlusion and similar object appearance for general scenes. Recent research has shown that exploiting temporal information can improve reconstruction accuracy as well as achieving temporal coherence [43].
3D scene flow estimates frame to frame correspondence whereas 4D temporal coherence estimates correspondence across the complete sequence to obtain a single surface model. Methods to estimate 3D scene flow have been reported in the literature [41] for autonomous vehicles. However this approach is limited to narrow baseline cameras. Other scene flow approaches are dependent on 2D optical flow [66,6] and they require an accurate estimate for most of the pixels which fails in the case of large motion. However, 3D scene flow methods align two frames independently and do not produce temporally coherent 4D models.
Research investigating spatio-temporal reconstruction across multiple frames was proposed by [20,37,24], exploiting the temporal information from previous frames using optical flow. An approach for recovering space-time consistent depth maps from multiple video sequences captured by stationary, synchronized and calibrated cameras for depth-based free-viewpoint video rendering was proposed by [39]. However these methods require accurate initialisation, fixed and calibrated cameras, and are limited to simple scenes. Other approaches to temporally coherent reconstruction [4] either require a large number of closely spaced cameras or bi-layer segmentation [72,30] as a constraint for reconstruction. Recent approaches for spatio-temporal reconstruction of multi-view data are limited to indoor studio data [47].
The framework proposed in this paper addresses limitations of existing approaches and gives 4D temporally coherent reconstruction for general dynamic indoor or outdoor scenes with large non-rigid motions, repetitive texture, uncontrolled illumination, and large capture volume. The scenes are captured with sparse static/moving cameras. The proposed approach gives 4D models of complete scenes with both static and dynamic objects for real-world applications (FVV and VR) with no prior knowledge of scene structure.
Multi-view Video Segmentation
In the field of image segmentation, approaches have been proposed to provide temporally consistent monocular video segmentation [21,49,45,71]. Hierarchical segmentation based on graphs was proposed in [21], directed acyclic graph were used to propose an object followed by segmentation [71]. Optical flow is used to identify and consistently segment objects [45,49]. Recently a number of approaches have been proposed for multi-view foreground object segmentation by exploiting appearance similarity spatially across views [16,35,38,70]. An approach for space-time multi-view segmentation was proposed by [17]. However, multi-view approaches assume a static background and different colour distributions for the foreground and background which limits applicability for general scenes and non-rigid objects.
To address this issue we introduce a novel method for spatio-temporal multi-view segmentation of dynamic scenes using shape constraints. Single image segmentation techniques using shape constraints provide good results for complex scene segmentation [25] (convex and concave shapes), but require manual interaction. The proposed approach performs automatic multi-view video segmentation by initializing the foreground object model using spatio-temporal information from wide-baseline feature correspondence, followed by a multi-layer optimization framework. Geodesic star convexity, previously used in single view segmentation [25], is applied to constrain the segmentation in each view. Our multi-view formulation naturally enforces coherent segmentation between views and also resolves ambiguities such as the similarity of background and foreground in isolated views.
Summary and Motivation
Image-based temporally coherent 4D dynamic scene reconstruction without a prior model or constraints on the scene structure is a key problem in computer vision. Existing dense reconstruction algorithms need some strong initial prior and constraints for the solution to converge such as background, structure, and segmentation, which limits their application for automatic reconstruction of general scenes. Current approaches are also commonly limited to independent per-frame reconstruction and do not exploit temporal information or produce a coherent model with known correspondence.
The approach proposed in this paper aims to overcome the limitations of existing approaches to enable robust temporally coherent wide-baseline multiple view reconstruction of general dynamic scenes without prior assumptions on scene appearance, structure or segmentation of the moving objects. Static and dynamic objects in the scene are identified for simultaneous segmentation and reconstruction using geometry and appearance cues in a sparse-to-dense optimization framework. Temporal coherence is introduced to improve the quality of the reconstruction and geodesic star convexity is used to improve the quality of segmentation. The static and dynamic elements are fused automatically in both the temporal and spatial domain to obtain the final 4D scene reconstruction.
This paper presents a unified framework, novel in combining multiple view joint reconstruction and segmentation with temporal coherence to improve per-frame reconstruction performance, and produces a single framework from the initial work presented in [43,42]. In particular the approach gives a 4D surface model with full correspondence over time. A comprehensive experimental evaluation with comparison to the state-of-the-art in segmentation, reconstruction and 4D modelling is also presented, extending previous work. Application of the resulting 4D models to free-viewpoint video rendering and content production for immersive virtual reality experiences is also presented.
Methodology
This work is motivated by the limitations of existing multiple view reconstruction methods, which either work independently at each frame, resulting in errors due to visual ambiguity [19,23], or require restrictive assumptions on scene complexity and structure and often assume prior camera calibration and foreground segmentation [60,24]. We address these issues by initializing the joint reconstruction and segmentation algorithm automatically, introducing temporal coherence in the reconstruction and geodesic star convexity in the segmentation to reduce ambiguity and ensure consistent non-rigid structure initialization at successive frames. The proposed approach is demonstrated to achieve improved reconstruction and segmentation performance over state-of-the-art approaches and produce temporally coherent 4D models of complex dynamic scenes.
Overview
An overview of the proposed framework for temporally coherent multi-view reconstruction is presented in Figure 2 and consists of the following stages:
Multi-view video: The scenes are captured using multiple video cameras (static/moving) separated by wide baselines (> 15°). The cameras can be synchronized during capture using a time-code generator or later using the audio information. Camera extrinsic calibration and scene structure are assumed to be unknown.
Sparse reconstruction: The intrinsics are assumed to be known. Segmentation-based feature detection (SFD) [44] is used to obtain a relatively large number of sparse features suitable for wide-baseline matching which are distributed throughout the scene, including on dynamic objects such as people. SFD features are matched between views using a SIFT descriptor, giving camera extrinsics and a sparse 3D point cloud for each time instant of the entire sequence [27].
Initial scene segmentation and reconstruction - Section 3.2: Automatic initialisation is performed without prior knowledge of the scene structure or appearance to obtain an initial approximation for each object. The sparse point cloud is clustered in 3D [51] with each cluster representing a unique foreground object. Object segmentation increases efficiency and improves robustness of the 4D models. This reconstruction is refined using the framework explained in Section 3.4 to obtain segmentation and dense reconstruction of each object.
Sparse-to-dense temporal reconstruction with temporal coherence - Section 3.3: Temporal coherence is introduced in the framework to initialize the coarse reconstruction and obtain frame-to-frame dense correspondences for each dynamic object. Dynamic object regions are detected at each time instant by sparse temporal correspondence of SFD features at successive frames. Sparse temporal feature correspondence allows propagation of the dense reconstruction for each dynamic object to obtain an initial approximation.
Joint object-based sparse-to-dense temporally coherent refinement of shape and segmentation - Section 3.4: The initial estimate is refined for each object per-view through joint optimisation of shape and segmentation using a robust cost function combining matching, color, contrast and smoothness information for wide-baseline matching with a geodesic star convexity constraint. A single 3D model for each dynamic object is obtained by fusion of the view-dependent depth maps using Poisson surface reconstruction [31]. Surface orientation is estimated based on neighbouring pixels.
Applications - Section 5: The 4D representation from the proposed joint segmentation and reconstruction framework has a number of applications in media production, including free-viewpoint video (FVV) rendering and virtual reality (VR).
The process above is repeated for the entire sequence for all objects in the first frame and for dynamic objects at each time-instant. The proposed approach enables automatic reconstruction of all objects in the scene as a 4D mesh sequence. Subsequent sections present the novel contributions of this work in initialisation and refinement to obtain a dense temporally coherent reconstruction. The approach is demonstrated to outperform previous approaches to dynamic scene reconstruction and does not require prior knowledge of the scene.
Initial Scene Segmentation and Reconstruction
For general dynamic scene reconstruction, we need to reconstruct and segment the objects in the scene. This requires an initial coarse approximation for initialisation of a subsequent refinement step to optimise the segmentation and reconstruction with respect to each camera view. We introduce an approach based on sparse point cloud clustering, an overview is shown in Figure 3. Initialisation gives a complete coarse segmentation and reconstruction of each object in the first frame of the sequence for subsequent refinement. The dense reconstruction of the foreground objects and background are combined to obtain a full scene reconstruction at the first time instant. A rough geometric proxy of the background is created using the method. For consecutive time instants dynamic objects and newly appeared objects are identified and only these objects are reconstructed and segmented. The reconstruction of static objects is retained which reduces computational complexity. The optic flow and cluster information for each dynamic object ensures that we retain same labels for the entire sequence.
Sparse Point-cloud Clustering
The sparse representation of the scene is processed to remove outliers using point neighbourhood statistics to filter outlier data [51]. We segment the objects in the sparse scene reconstruction; this allows only moving objects to be reconstructed at each frame for efficiency, and also allows object shape similarity to be propagated across frames to increase robustness of the reconstruction.
We use a data clustering approach based on 3D grid subdivision of the space using an octree data structure in Euclidean space to segment objects at each frame. In a more general sense, nearest-neighbour information is used to cluster, which is essentially similar to a flood-fill algorithm; a sketch is given below. We choose this clustering because of its computational efficiency and robustness. The approach allows segmentation of objects in the scene and is demonstrated to work well for cluttered and general outdoor scenes, as shown in Section 4.
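A minimal sketch of nearest-neighbour flood-fill clustering on the sparse point cloud is given below, using a k-d tree in place of the octree for brevity; the distance radius and minimum cluster size are assumed parameters, not values taken from the text.

```python
import numpy as np
from scipy.spatial import cKDTree

def euclidean_clusters(points, radius=0.1, min_size=30):
    """Flood-fill clustering of a 3D point cloud by spatial proximity.

    points   : (N, 3) sparse 3D points
    radius   : neighbourhood radius used to connect points into one cluster
    min_size : clusters smaller than this are discarded as background/outliers
    """
    tree = cKDTree(points)
    labels = np.full(len(points), -1, dtype=int)
    current = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        stack = [seed]
        labels[seed] = current
        while stack:                                  # flood fill over neighbours
            idx = stack.pop()
            for nb in tree.query_ball_point(points[idx], r=radius):
                if labels[nb] == -1:
                    labels[nb] = current
                    stack.append(nb)
        current += 1

    clusters = [np.where(labels == c)[0] for c in range(current)]
    return [c for c in clusters if len(c) >= min_size]
```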
Objects with insufficient detected features are reconstructed as part of the scene background. Appearing, disappearing and reappearing objects are handled by sparse dynamic feature tracking, explained in Section 3.3. Clustering results are shown in Figure 3. This is followed by a sparse-to-dense coarse object based approach to segment and reconstruct general dynamic scenes.
Coarse Object Reconstruction
The process to obtain the coarse reconstruction for the first frame of the sequence is shown in Figure 4. The sparse representation of each element is back-projected on the rectified image pair for each view. Delaunay triangulation [18] is performed on the set of back projected points for each cluster on one image and is propagated to the second image using the sparse matched features. Triangles with edge length greater than the median length of edges of all triangles are removed. For each remaining triangle pair direct linear transform is used to estimate the affine homography. Displacement at each pixel within the triangle pair is estimated by interpolation to get an initial dense disparity map for each cluster in the 2D image pair labelled as R I depicted in red in Figure 4. The initial coarse reconstruction for the observed objects in the scene is used to define the depth hypotheses at each pixel for the optimization.
The region R_I does not ensure complete coverage of the object, so we extrapolate it in 2D to obtain a region R_O (shown in yellow) by 5% of the average distance between the boundary points of R_I and the centroid of the object. To allow for errors in the initial approximate depth from sparse features, we add volume in front of and behind the projected surface along the optical ray of the camera, within an error tolerance. This ensures that the object boundaries lie within the extrapolated initial coarse estimate, although the depth at each pixel of the combined regions may not yet be accurate. The tolerance differs depending on whether a pixel belongs to R_I or R_O: pixels propagated into the extrapolated region R_O may have larger errors than the points from the sparse representation R_I and therefore require a comparatively higher tolerance. The threshold depends on the capture volume of the dataset and is set to 1% of the capture volume for R_O and half that value for R_I. This volume in 3D corresponds to our initial coarse reconstruction of each object and removes the dependency of existing approaches on a background plate or visual-hull estimate. This process of cluster identification and initial coarse object reconstruction is performed for multiple objects in general environments. Initial object segmentation using point-cloud clustering and coarse segmentation is insensitive to parameters. Throughout this work the same parameters are used for all datasets. The result of this process is a coarse initial object segmentation and reconstruction for each object.
Sparse-to-dense temporal reconstruction with temporal coherence
Once the static scene reconstruction is obtained for the first frame, we perform temporally coherent reconstruction for dynamic objects at successive time instants instead of whole scene reconstruction, for computational efficiency and to avoid redundancy. The initial coarse reconstruction for each dynamic region is refined in the subsequent optimization step with respect to each camera view. Dynamic scene objects are identified from the temporal correspondence of sparse feature points. Sparse correspondence is used to propagate an initial model of the moving object for refinement. Figure 5 presents the sparse reconstruction and temporal correspondence. New objects are identified per frame from the clustered sparse reconstruction and are labelled as dynamic objects.
Sparse temporal dynamic feature tracking: Numerous approaches have been proposed to track moving objects in 2D using either features or optical flow. However, these methods may fail in the case of occlusion, movement parallel to the view direction, large motions and moving cameras. To overcome these limitations, we match the sparse 3D feature points obtained using SFD [44] from multiple wide-baseline views at each time instant. The use of sparse 3D features is robust to large non-rigid motion, occlusions and camera movement. SFD detects sparse features which are stable across wide-baseline views and consecutive time instants for a moving camera and dynamic scene. Sparse 3D feature matches between consecutive time instants are back-projected to each view. These features are matched temporally using a SIFT descriptor to identify the moving points. Robust matching is achieved by enforcing multiple view consistency for the temporal feature correspondence in each view, as illustrated in Figure 6. Each match must satisfy the constraint:
$\| H_{t,v}(p) + u_{t,r}(p + H_{t,v}(p)) - u_{t,v}(p) - H_{t,r}(p + u_{t,v}(p)) \| < \epsilon \quad (1)$
where p is the feature image point in view v at frame t, $H_{t,v}(p)$ is the disparity at frame t between views v and r, $u_{t,v}(p)$ is the temporal correspondence from frame t to t+1 for view v, and $\epsilon$ is a small consistency threshold. The multi-view consistency check ensures that correspondences between any two views remain temporally consistent for successive frames. Matches in the 2D domain are sensitive to camera movement and occlusion, hence we map the set of refined matches into 3D to make the system robust to camera motion. The Frobenius norm is applied to the 3D point gradients in all directions [71] to obtain the 'net' motion at each sparse point. The 'net' motions between pairs of 3D points at consecutive time instants are ranked, and the top and bottom 5 percentile values are removed. Median filtering is then applied to identify the dynamic features. Figure 7 shows an example with moving cameras for the Juggler dataset [5].
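A simplified sketch of the dynamic-feature selection is shown below. It uses the Euclidean norm of the frame-to-frame 3D displacement in place of the Frobenius norm on point gradients, and the final thresholding rule is an assumption; only the percentile trimming and median filtering follow the description above.

```python
import numpy as np
from scipy.signal import medfilt

def dynamic_feature_mask(pts_t, pts_t1, low_pct=5, high_pct=95, kernel=5):
    """Classify matched sparse 3D features as dynamic from their 'net' motion.

    pts_t, pts_t1: (N, 3) matched 3D feature positions at frames t and t+1.
    Returns a boolean mask of features considered dynamic.
    """
    motion = np.linalg.norm(pts_t1 - pts_t, axis=1)   # simplified 'net' motion
    lo, hi = np.percentile(motion, [low_pct, high_pct])
    valid = (motion >= lo) & (motion <= hi)           # drop top/bottom percentiles
    smoothed = medfilt(motion, kernel_size=kernel)    # median filtering
    threshold = np.median(smoothed[valid])            # assumed decision rule
    return valid & (motion > threshold)
```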
Sparse-to-dense model reconstruction: Dynamic 3D feature points are used to initialize the segmentation and reconstruction of the initial model. This avoids the assumption of static backgrounds and prior scene segmentation commonly used to initialise multiple view reconstruction with a coarse visual-hull approximation [23]. Temporal coherence also provides a more accurate initialisation to overcome visual ambiguities at individual frames. Figure 8 illustrates the use of temporal coherence for reconstruction initialisation and refinement. Dynamic feature correspondence is used to identify the mesh for each dynamic object. This mesh is back-projected onto each view to obtain the region of interest. Lucas-Kanade optical flow [8] is performed on the projected mask for each view in the temporal domain, using the dynamic feature correspondences over time as initialization. Dense multi-view wide-baseline correspondences from the previous frame are propagated to the current frame using the flow vectors to obtain dense multi-view matches in the current frame. The matches are triangulated in 3D to obtain a refined dense 3D model of the dynamic object for the current frame. For dynamic scenes, a new object may enter the scene or a new part may appear as the object moves. To allow the introduction of new objects and object parts, we also use information from the cluster of sparse points for each dynamic object. The cluster corresponding to the dynamic features is identified and static points are removed. This ensures that the set of new points contains not only the dynamic features but also the unprocessed points which represent new parts of the object. These points are added to the refined sparse model of the dynamic object. To handle new objects, we detect new clusters at each time instant and consider them as dynamic regions. The sparse-to-dense initial coarse reconstruction improves the quality of segmentation and reconstruction after the refinement. Examples of the improvement in segmentation and reconstruction for the Odzemok [1] and Juggler [5] datasets are shown in Figure 9. As observed, the limbs of the people are retained by using information from the previous frames in both cases.
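The temporal propagation of dense correspondences could look roughly like the sketch below, which uses OpenCV's pyramidal Lucas-Kanade implementation; the window size, pyramid depth, and the way the projected object mask is applied are illustrative choices rather than the paper's exact settings.

```python
import numpy as np
import cv2

def propagate_matches(prev_gray, curr_gray, prev_pts, mask=None):
    """Propagate 2D correspondences from frame t to t+1 with pyramidal LK flow,
    optionally restricted to the back-projected object mask in the current frame."""
    pts = prev_pts.astype(np.float32).reshape(-1, 1, 2)
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, pts, None, winSize=(21, 21), maxLevel=3)
    next_pts = next_pts.reshape(-1, 2)
    ok = status.ravel() == 1
    if mask is not None:
        xy = np.round(next_pts).astype(int)
        in_img = (xy[:, 0] >= 0) & (xy[:, 0] < mask.shape[1]) & \
                 (xy[:, 1] >= 0) & (xy[:, 1] < mask.shape[0])
        ok &= in_img
        ok[in_img] &= mask[xy[in_img, 1], xy[in_img, 0]] > 0
    return next_pts, ok
```

The surviving matches would then be triangulated across views to refine the dense 3D model of the dynamic object, as described above.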
Joint object-based sparse-to-dense temporally coherent refinement of shape and segmentation
The initial reconstruction and segmentation from dense temporal feature correspondence are refined using a joint optimization framework. A novel shape constraint is introduced based on geodesic star convexity, which has previously been shown to give improved performance in interactive image segmentation for structures with fine detail (for example, a person's fingers or hair) [25]. Shape is a powerful cue for object recognition and segmentation. Shape models represented as distance transforms from a template have been used for category-specific segmentation [33]. Other works have introduced generic connectivity constraints for segmentation, showing that obtaining a globally optimal solution under the connectivity constraint is NP-hard [64]. Veksler [63] used a shape constraint in a segmentation framework by enforcing a star-convexity prior on the segmentation, achieving globally optimal solutions subject to this constraint. The star-convexity constraint ensures connectivity to seed points, and is a stronger assumption than plain connectivity. An example of a star-convex object is shown in Figure 10, along with a failure case for a non-rigid articulated object. To handle more complex objects, the idea of geodesic forests with multiple star centres was introduced to obtain a globally optimal solution for interactive 2D object segmentation [25]. The main focus was to introduce shape constraints in interactive segmentation by means of a geodesic star-convexity prior. The notion of connectivity was extended from Euclidean to geodesic so that paths can bend and adapt to image data, as opposed to straight Euclidean rays, thus extending visibility and reducing the number of star centres required.
Fig. 10: (a) Representation of star convexity: the left object is an example of a star-convex object, with a star centre marked; the object on the right, with a plausible star centre, shows deviations from star-convexity in the fine details. (b) Multiple-star semantics for joint refinement: single-star-centre segmentation is depicted on the left and multiple stars on the right.
The geodesic star-convexity is integrated as a constraint on the energy minimisation for joint multi-view reconstruction and segmentation [23]. In this work the shape constraint is automatically initialised for each view from the initial segmentation. The shape constraint is based on the geodesic distance to foreground object initialisations (seeds) acting as star centres, to which the object shape is restricted. The union formed by multiple object seeds forms a geodesic forest, which allows complex shapes to be segmented. To automatically initialize the segmentation, we use the sparse temporal feature correspondences as star centres (seeds) to build the geodesic forest. The region outside the initial coarse reconstruction of all dynamic objects is initialized as the background seed for segmentation, as shown in Figure 12. The shape of the dynamic object is restricted by this geodesic distance constraint, which depends on the image gradient. Comparison with existing methods for multi-view segmentation demonstrates improvements in the recovery of fine detail structure, as illustrated in Figure 12.
Once we have a set of dense 3D points for each dynamic object, Poisson surface reconstruction is performed on this point set to obtain an initial coarse model of each dynamic region R, which is subsequently refined using the optimization framework (Section 3.4.1).
Optimization on initial coarse object reconstruction based on geodesic star convexity
The depth of the initial coarse reconstruction estimate is refined per view for each dynamic object at a per-pixel level. View-dependent optimisation of depth is performed with respect to each camera, which is robust to errors in camera calibration and initialisation. Calibration inaccuracies produce inconsistencies limiting the applicability of global reconstruction techniques which simultaneously consider all views; view-dependent techniques are more tolerant of such inaccuracies because they only use a subset of the views for reconstruction of depth from each camera view.
Our goal is to assign to each pixel p in the region R of each dynamic object an accurate depth value from a set of depth values $\mathcal{D} = \{d_1, \dots, d_{|\mathcal{D}|-1}, U\}$ and a layer label from a set of label values $\mathcal{L} = \{l_1, \dots, l_{|\mathcal{L}|}\}$. Each $d_i$ is obtained by sampling the optical ray from the camera, and U is an unknown depth value used to handle occlusions. This is achieved by optimisation of a joint cost function [23] for label (segmentation) and depth (reconstruction):
$E(l, d) = \lambda_{data} E_{data}(d) + \lambda_{contrast} E_{contrast}(l) + \lambda_{smooth} E_{smooth}(l, d) + \lambda_{color} E_{color}(l) \quad (2)$
where d is the depth at each pixel, l is the layer label for multiple objects, and the cost function terms are defined in Section 3.4.2. The equation consists of four terms: the data term measures photo-consistency, the smoothness term avoids sudden peaks in depth and maintains consistency, and the color and contrast terms identify the object boundaries. Data and smoothness terms are commonly used to solve reconstruction problems [7], while the color and contrast terms are used for segmentation [34]. This is solved subject to a geodesic star-convexity constraint on the labels l. A label l is star convex with respect to a centre c if every point $p \in l$ is visible to the star centre c via l in the image x, which can be expressed as an energy cost:
$E^{\star}(l \mid x, c) = \sum_{p \in R} \sum_{q \in \Gamma_{c,p}} E_{p,q}(l_p, l_q) \quad (3)$
$\forall q \in \Gamma_{c,p}: \quad E_{p,q}(l_p, l_q) = \begin{cases} \infty & \text{if } l_p = 1 \text{ and } l_q = 0 \\ 0 & \text{otherwise} \end{cases} \quad (4)$
i.e., an infinite cost is incurred if p is labelled foreground while a point q on its geodesic path to the star centre is labelled background,
where $\forall p \in R: p \in l \Leftrightarrow l_p = 1$, and $\Gamma_{c,p}$ is the geodesic path joining p to the star centre c, given by:
$\Gamma_{c,p} = \arg\min_{\Gamma \in \mathcal{P}_{c,p}} L(\Gamma) \quad (5)$
where $\mathcal{P}_{c,p}$ denotes the set of all discrete paths between c and p, and $L(\Gamma)$ is the length of the discrete geodesic path as defined in [25]. In the case of image segmentation, the gradients in the underlying image provide the information used to compute the discrete paths between each pixel and the star centres, and $L(\Gamma)$ is defined below:
$L(\Gamma) = \sum_{i=1}^{N_D - 1} \sqrt{(1 - \delta_g)\, j(\Gamma_i, \Gamma_{i+1})^2 + \delta_g\, \|\nabla I(\Gamma_i)\|^2} \quad (6)$
where $\Gamma$ is an arbitrary parametrized discrete path with $N_D$ pixels given by $\Gamma_1, \Gamma_2, \cdots, \Gamma_{N_D}$, $j(\Gamma_i, \Gamma_{i+1})$ is the Euclidean distance between successive pixels, and $\|\nabla I(\Gamma_i)\|^2$ is a finite-difference approximation of the image gradient between the points $\Gamma_i, \Gamma_{i+1}$. The parameter $\delta_g$ weights the Euclidean distance against the image-gradient term. Using the above definition, the geodesic distance is given by Equation 5.
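For intuition, a geodesic forest distance of this form can be approximated by a shortest-path search over the pixel grid, as in the sketch below; the use of Dijkstra's algorithm on a 4-connected graph and the default value of δ_g are illustrative assumptions, not the implementation used in the paper.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import dijkstra

def geodesic_forest_distance(image_gray, seeds, delta_g=0.7):
    """Geodesic distance from a set of star centres over a 4-connected pixel grid.

    image_gray: (H, W) float image; seeds: list of (row, col) star centres.
    delta_g weights the image-gradient term against the Euclidean term (Eq. 6).
    """
    h, w = image_gray.shape
    idx = np.arange(h * w).reshape(h, w)
    rows, cols, weights = [], [], []
    for dr, dc in ((0, 1), (1, 0)):                  # right and down neighbours
        a = idx[:h - dr, :w - dc].ravel()
        b = idx[dr:, dc:].ravel()
        grad = image_gray[dr:, dc:].ravel() - image_gray[:h - dr, :w - dc].ravel()
        wgt = np.sqrt((1 - delta_g) * 1.0 + delta_g * grad ** 2)  # per-edge length
        rows.append(a); cols.append(b); weights.append(wgt)
    graph = coo_matrix((np.concatenate(weights),
                        (np.concatenate(rows), np.concatenate(cols))),
                       shape=(h * w, h * w))
    seed_idx = [r * w + c for r, c in seeds]
    dist = dijkstra(graph, directed=False, indices=seed_idx)
    return dist.min(axis=0).reshape(h, w)            # minimum over all star centres
```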
An extension of single star-convexity is to use multiple stars to define a more general class of shapes. Introducing multiple star centres reduces the path lengths and increases the visibility of small parts of objects, such as limbs, as shown in Figure 10. Hence Equation 3 is extended to multiple stars. A label l is star convex with respect to a centre $c_i$ if every point $p \in l$ is visible to a star centre $c_i$ in the set $\mathcal{C} = \{c_1, \dots, c_{N_T}\}$ via l in the image x, where $N_T$ is the number of star centres [25]. This is expressed as an energy cost:
$E^{\star}(l \mid x, \mathcal{C}) = \sum_{p \in R} \sum_{q \in \Gamma_{c,p}} E_{p,q}(l_p, l_q) \quad (7)$
In our case all the correct temporal sparse feature correspondences are used as star centres; hence the segmentation includes all points in the region R which are visible to these sparse features via geodesic paths, thereby enforcing the shape constraint. Since the star centres are selected automatically, the method is unsupervised. A comparison of the geodesic multi-star convexity constraint against no constraint and a Euclidean multi-star convexity constraint is shown in Figure 11. The figure demonstrates the usefulness of the proposed approach, with an improvement in segmentation quality on non-rigid, complex objects. The energy in Equation 2 is minimized as follows:
$\min_{(l, d)} E(l, d) \ \text{ s.t. } \ l \in S^{\star}(\mathcal{C}) \;\Leftrightarrow\; \min_{(l, d)} E(l, d) + E^{\star}(l \mid x, \mathcal{C}) \quad (8)$
where $S^{\star}(\mathcal{C})$ is the set of all shapes which lie within the geodesic distances to the centres in $\mathcal{C}$. Optimization of Equation 8, subject to each pixel p in the region R being connected to the star centres in $\mathcal{C}$ along the geodesic paths $\Gamma_{c,p}$, is performed using the α-expansion algorithm, iterating for each pixel p through the set of labels in $\mathcal{L} \times \mathcal{D}$ [10]. Graph-cut is used to obtain a local optimum [9].
Fig. 12: Geodesic star convexity: a region R with star centres C connected by geodesic paths Γ_{c,p}. Segmentation results with and without the geodesic star-convexity based optimization are shown on the right for the Juggler dataset.
The improvement in the results from using geodesic star convexity in the framework is shown in Figure 12, and the improvement from temporal coherence is shown in Figure 9. Figure 13 shows the improvements using the geodesic shape constraint, temporal coherence, and the combined proposed approach for the Dance2 [2] dataset.
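To make the role of the unary and pairwise costs concrete, the sketch below shows a heavily simplified binary (foreground/background) min-cut over a pixel grid using the PyMaxflow library. The actual method instead runs α-expansion over the joint label-and-depth space L × D, so this is only an illustration of how such energies are minimised by graph cuts; the function and parameter names are illustrative.

```python
import numpy as np
import maxflow  # PyMaxflow

def binary_graphcut(unary_fg, unary_bg, contrast, lam=1.0):
    """Simplified two-label graph cut over an image grid.

    unary_fg / unary_bg: (H, W) costs of labelling each pixel foreground/background
    (e.g. colour negative log-likelihoods combined with the geodesic shape term).
    contrast: (H, W) pairwise strength, e.g. exp(-C(p, q)) from the contrast term.
    """
    g = maxflow.Graph[float]()
    nodes = g.add_grid_nodes(unary_fg.shape)
    # 4-connected pairwise (contrast/smoothness) edges.
    g.add_grid_edges(nodes, weights=lam * contrast, symmetric=True)
    # Terminal edges carry the unary costs.
    g.add_grid_tedges(nodes, unary_fg, unary_bg)
    g.maxflow()
    # True entries lie on one side of the minimum cut; interpret the two sides
    # according to how the terminal capacities were assigned above.
    return g.get_grid_segments(nodes)
```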
Energy cost function for joint segmentation and reconstruction
For completeness, in this section we define each of the terms in Equation 2. These are based on the terms previously used for joint optimisation over depth for each pixel introduced in [42], with a modification of the color matching term to improve robustness and an extension to multiple labels.
Matching term: The data term for matching between views is specified as a measure of photo-consistency (Figure 14) as follows:
$E_{data}(d) = \sum_{p \in \mathcal{P}} e_{data}(p, d_p), \qquad e_{data}(p, d_p) = \begin{cases} M(p, q) = \sum_{i \in O_k} m(p, q) & \text{if } d_p \neq U \\ M_U & \text{if } d_p = U \end{cases} \quad (9)$
where $\mathcal{P}$ is the 4-connected neighbourhood of pixel p, $M_U$ is the fixed cost of labelling a pixel unknown, and q denotes the projection of the hypothesised point P in an auxiliary camera, where P is a 3D point along the optical ray passing through pixel p located at a distance $d_p$ from the reference camera. $O_k$ is the set of the k most photo-consistent pairs. For textured scenes, Normalized Cross Correlation (NCC) over a square window is a common choice [53]. The NCC values range from -1 to 1 and are mapped to non-negative matching costs using the function $1 - NCC$.
A maximum-likelihood measure [40] is used in this function to compute a confidence value between the centre pixel p and the other pixels q, and is based on the survey of confidence measures for stereo [28]. The measure is defined as:
$m(p, q) = \frac{\exp\left(-\frac{c_{min}}{2\sigma_i^2}\right)}{\sum_{(p,q) \in N} \exp\left(-\frac{1 - NCC(p, q)}{2\sigma_i^2}\right)} \quad (10)$
where $\sigma_i^2$ is the noise variance for each auxiliary camera i; this parameter was fixed to 0.3. N denotes the set of interacting pixels in $\mathcal{P}$, and $c_{min}$ is the minimum cost for a pixel obtained by evaluating the function $1 - NCC(\cdot, \cdot)$ over a 15 × 15 window.
Contrast term: Segmentation boundaries in images tend to align with contours of high contrast, and it is desirable to represent this as a constraint in stereo matching. A consistent interpretation of segmentation prior and contrast likelihood is adopted from [34]. We use a modified version of this interpretation in our formulation that preserves edges by using bilateral filtering [61] instead of Gaussian filtering. The contrast term is as follows:
$E_{contrast}(l) = \sum_{(p,q) \in N} e_{contrast}(p, q, l_p, l_q) \quad (11)$
$e_{contrast}(p, q, l_p, l_q) = \begin{cases} 0 & \text{if } l_p = l_q \\ \frac{1}{1 + \epsilon}\left(\epsilon + \exp(-C(p, q))\right) & \text{otherwise} \end{cases} \quad (12)$
where $\|\cdot\|$ is the $L_2$ norm and $\epsilon = 1$. The simplest choice for $C(p, q)$ would be the squared Euclidean colour distance between the intensities at pixels p and q, as used in [23]. We propose a term that gives better segmentation:
$C(p, q) = \frac{\|B(p) - B(q)\|^2}{2\sigma_{pq}^2\, d_{pq}^2}$
where $B(\cdot)$ represents the bilateral filter, $d_{pq}$ is the Euclidean distance between p and q, and
$\sigma_{pq} = \left\langle \frac{\|B(p) - B(q)\|^2}{d_{pq}^2} \right\rangle$,
the mean taken over interacting pixel pairs. This term helps to remove regions with low photo-consistency scores and weak edges, and thereby helps in estimating the object boundaries.
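A rough sketch of this modified contrast term for horizontal and vertical neighbour pairs is shown below; the bilateral filter parameters are illustrative, and σ_pq is approximated by the mean squared colour difference, which is an assumption about how the normalisation is computed.

```python
import numpy as np
import cv2

def contrast_weights(image_bgr, eps=1.0):
    """Per-edge contrast values for 4-connected neighbours using a
    bilateral-filtered image B, following Eq. 12 (d_pq = 1 for adjacent pixels)."""
    B = cv2.bilateralFilter(image_bgr, d=9, sigmaColor=75, sigmaSpace=75).astype(np.float32)

    def edge_term(diff):
        dist2 = np.sum(diff ** 2, axis=-1)        # ||B(p) - B(q)||^2
        sigma2 = max(float(dist2.mean()), 1e-6)   # assumed estimate of sigma_pq^2
        C = dist2 / (2.0 * sigma2)
        return (eps + np.exp(-C)) / (1.0 + eps)   # cost applied when l_p != l_q

    horizontal = edge_term(B[:, 1:] - B[:, :-1])
    vertical = edge_term(B[1:, :] - B[:-1, :])
    return horizontal, vertical
```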
Smoothness term: This term is inspired by [23]; it ensures that the depth labels vary smoothly within the object, reducing noise and peaks in the reconstructed surface. This is useful when the photo-consistency score is low and insufficient to assign a depth to a pixel (Figure 14). It is defined as:
$E_{smooth}(l, d) = \sum_{(p,q) \in N} e_{smooth}(l_p, d_p, l_q, d_q) \quad (13)$
$e_{smooth}(l_p, d_p, l_q, d_q) = \begin{cases} \min(|d_p - d_q|, d_{max}) & \text{if } l_p = l_q \text{ and } d_p, d_q \neq U \\ 0 & \text{if } l_p = l_q \text{ and } (d_p = U \text{ or } d_q = U) \\ d_{max} & \text{otherwise} \end{cases} \quad (14)$
$d_{max}$ is set to 50 times the size of the depth sampling step for all datasets.
Color term: This term is computed using the negative log-likelihood [9] of the color models learned from the foreground and background markers. The star centres obtained from the sparse 3D features are used as foreground markers, and for background markers we consider the region outside the projected initial coarse reconstruction in each view. Because the markers are sparse, the color models use GMMs with 5 components each for foreground and background, mixed with uniform color models [14].
$E_{color}(l) = \sum_{p \in \mathcal{P}} -\log P(I_p \mid l_p) \quad (15)$
where $P(I_p \mid l_p = l_i)$ denotes the probability of pixel p in the reference image belonging to layer $l_i$.
Fig. 15: Comparison of segmentation on benchmark static datasets using geodesic star-convexity.
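The colour term could be computed along the lines of the sketch below, which fits one GMM per label with scikit-learn and mixes in a uniform colour model; the number of samples required, the mixing weight, and the function names are assumptions for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def color_unaries(image_rgb, fg_samples, bg_samples, n_components=5, u_weight=0.1):
    """Per-pixel negative log-likelihood costs from foreground/background colour models.

    fg_samples, bg_samples: (N, 3) colour samples taken at the star centres and in
    the region outside the projected coarse reconstruction, respectively.
    """
    flat = image_rgb.reshape(-1, 3).astype(np.float64)
    uniform = 1.0 / (256.0 ** 3)                          # uniform density over RGB
    costs = []
    for samples in (fg_samples, bg_samples):
        gmm = GaussianMixture(n_components=n_components, covariance_type='full')
        gmm.fit(samples.astype(np.float64))               # needs >= n_components samples
        p = np.exp(gmm.score_samples(flat))               # per-pixel likelihood
        p = (1.0 - u_weight) * p + u_weight * uniform     # mix with a uniform model
        costs.append(-np.log(p + 1e-12).reshape(image_rgb.shape[:2]))
    return costs[0], costs[1]                             # E_color for fg and bg labels
```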
Results and Performance Evaluation
The proposed system is tested on publicly available multi-view research datasets of indoor and outdoor scenes; the datasets are detailed in Table 1. The parameters used for all the datasets are defined in Table 2. More information is available on the website 1.
Multi-view segmentation evaluation
Segmentation is evaluated against the state-of-the-art multi-view segmentation methods of Kowdle [35] and Djelouah [16] for static scenes, and against the joint segmentation and reconstruction methods of Mustafa [42] (per frame) and Guillemaut [24] (using temporal information) for both static and dynamic scenes. For static multi-view data the segmentation is initialised as detailed in Section 3.1, followed by refinement using the constrained optimisation (Section 3.4.1). For dynamic scenes the full pipeline with temporal coherence is used, as detailed in Section 3. Ground-truth is obtained by manually labelling the foreground for the Office, Dance1 and Odzemok datasets; for the other datasets ground-truth is available online. We initialize all approaches with the same proposed initial coarse reconstruction for a fair comparison.
To evaluate the segmentation we measure completeness as the ratio of intersection to union with the ground-truth [35]. Comparisons for the static benchmark datasets are shown in Table 3 and Figures 15 and 16. Comparisons for dynamic scene segmentation are shown in Table 4 and Figures 17 and 18. Results for multi-view segmentation of static scenes are more accurate than Djelouah, Mustafa, and Guillemaut, and comparable to Kowdle, with improved segmentation of some detail such as the back of the chair.
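The completeness measure used here is the standard intersection-over-union; a minimal sketch for binary masks:

```python
import numpy as np

def completeness(pred_mask, gt_mask):
    """Segmentation completeness: intersection over union with the ground truth."""
    intersection = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    return intersection / union if union > 0 else 1.0
```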
For dynamic scenes, the geodesic star convexity based optimization together with temporal consistency gives improved segmentation of fine detail, such as the legs of the table in the Office dataset and the limbs of the person in the Juggler, Magician and Dance2 datasets, shown in Figures 17 and 18. This overcomes limitations of previous multi-view per-frame segmentation.
Reconstruction evaluation
Reconstruction results obtained using the proposed method are compared against Mustafa [42], Guillemaut [24], and Furukawa [19] for dynamic sequences. Furukawa [19] is a per-frame multi-view wide-baseline stereo approach which ranks highly on the Middlebury benchmark [53] but does not refine the segmentation.
The depth maps obtained using the proposed approach are compared against Mustafa and Guillemaut in Figure 19. The depth maps obtained using the proposed approach are smoother, with lower reconstruction noise than the state-of-the-art methods. Figures 20 and 21 present qualitative and quantitative comparisons of our method with the state-of-the-art approaches.
Comparison of reconstructions demonstrates that the proposed method gives consistently more complete and accurate models. The colour maps highlight the quantitative differences in reconstruction. As far as we are aware, no ground-truth data exist for dynamic scene reconstruction from real multi-view video. In Figure 21 we present a comparison with the reference mesh available with the Dance2 dataset, reconstructed using a visual-hull approach. This comparison demonstrates improved reconstruction of fine detail with the proposed technique.
In contrast to all previous approaches, the proposed method gives temporally coherent 4D model reconstructions with dense surface correspondence over time. The introduction of temporal coherence constrains the reconstruction in regions which are ambiguous in a particular frame, such as the right leg of the juggler in Figure 20, resulting in a more complete shape. Figure 22 shows three complete scene reconstructions with 4D models of multiple objects. The Juggler and Magician sequences are reconstructed from moving handheld cameras.
Computational complexity: Computation times for the proposed approach versus other methods are presented in Table 5. The proposed approach to reconstruct temporally coherent 4D models is comparable in computation time to per-frame multiple view reconstruction and gives a ∼50% reduction in computation cost compared to previous joint segmentation and reconstruction approaches using a known background. This efficiency is achieved through improved per-frame initialisation based on temporal propagation and the introduction of the geodesic star constraint in the joint optimisation. Further results can be found in the supplementary material.
Temporal coherence: A frame-to-frame alignment is obtained using the proposed approach, as shown in Figure 23 for the Dance1 and Juggler datasets. The meshes of the dynamic object at Frame 1 and Frame 9 are color-coded in both datasets, and the color is propagated to the next frame using the dense temporal coherence information. The color in different parts of the object is retained in the next frame, as seen in the figure. The proposed approach obtains sequential temporal alignment which drifts with large movement of the object; hence successive frames are shown in the figure.
Limitations: As with previous dynamic scene reconstruction methods the proposed approach has a number of limitations: persistent ambiguities in appearance between objects will degrade the improvement achieved with temporal coherence; scenes with a large number of inter-occluding dynamic objects will degrade performance; the approach requires sufficient wide-baseline views to cover the scene.
Applications to immersive content production
The 4D meshes generated from the proposed approach can be used for applications in immersive content production such as FVV rendering and VR. This section demonstrates the results of these applications.
Free-viewpoint rendering
In FVV, the virtual viewpoint is controlled interactively by the user. The appearance of the reconstruction is sampled and interpolated directly from the captured camera images using cameras located close to the virtual viewpoint [57].
The proposed joint segmentation and reconstruction framework generates per-view silhouettes and a temporally coherent 4D reconstruction at each time instant of the input video sequence. This representation of the dynamic sequence is used for FVV rendering. To create FVV, a view-dependent surface texture is computed based on the user-selected virtual view. This virtual view is obtained by combining the information from camera views in close proximity to the virtual viewpoint [57]. FVV rendering gives the user the freedom to interactively choose a novel viewpoint in space from which to observe the dynamic scene, and reproduces fine-scale temporal surface details, such as the movement of hair and clothing wrinkles, that may not be modelled geometrically. An example of a reconstructed scene and the camera configuration is shown in Figure 24.
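A simple form of such view-dependent texturing is sketched below for a single surface point: the k real cameras whose viewing directions are closest to the virtual viewpoint are blended with angular weights. The weighting scheme and k are illustrative choices, not necessarily those of [57].

```python
import numpy as np

def view_dependent_blend(cam_dirs, colors, virtual_dir, k=3):
    """Blend per-camera colour samples of one surface point for a virtual view.

    cam_dirs: (C, 3) unit viewing directions of the real cameras at the point.
    colors: (C, 3) colour of the point sampled from each camera image.
    virtual_dir: (3,) unit viewing direction of the virtual camera.
    """
    cos = cam_dirs @ virtual_dir                 # angular proximity to the virtual view
    nearest = np.argsort(-cos)[:k]               # k closest cameras
    w = np.clip(cos[nearest], 0.0, None)
    w = w / (w.sum() + 1e-12)
    return (w[:, None] * colors[nearest]).sum(axis=0)
```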
A qualitative evaluation of images synthesised using FVV is shown in Figures 25 and 26. These compare reconstruction results rendered from novel viewpoints for the proposed method against Mustafa [43] and Guillemaut [23] on publicly available datasets. This is particularly important for wide-baseline camera configurations, where the technique can be used to synthesize intermediate viewpoints at which it may not be practical or economical to physically locate real cameras.
Virtual reality rendering
There is a growing demand for photo-realistic content in the creation of immersive VR experiences. The 4D temporally coherent reconstructions of dynamic scenes obtained using the proposed approach enable the creation of photo-realistic digital assets that can be incorporated into VR environments using game engines such as Unity and Unreal Engine, as shown in Figure 27 for a single frame of four datasets and for a series of frames of the Dance1 dataset.
In order to efficiently render the reconstructions in a game engine for applications in VR, a UV texture atlas is extracted using the 4D meshes from the proposed approach as a geometric proxy. The UV texture atlas at each frame is applied to the models at render time in Unity for viewing in a VR headset. A UV texture atlas is constructed by projectively texturing and blending multiple view frames onto a 2D unwrapped UV texture atlas; see Figure 28. This is performed once for each static object and at each time instant for dynamic objects, allowing efficient storage and real-time playback of static and dynamic textured reconstructions within a VR headset.
Conclusion
This paper introduced a novel technique to automatically segment and reconstruct dynamic scenes captured from multiple moving cameras in general, uncontrolled dynamic environments without any prior on background appearance or structure. The proposed automatic initialization was used to identify and initialize the segmentation and reconstruction of multiple objects. A framework was presented for temporally coherent 4D model reconstruction of dynamic scenes from a set of wide-baseline moving cameras. The approach gives a complete model of all static and dynamic non-rigid objects in the scene. Temporal coherence for dynamic objects addresses limitations of previous per-frame reconstruction, giving improved reconstruction and segmentation together with dense temporal surface correspondence for dynamic objects. A sparse-to-dense approach is introduced to establish temporal correspondence for non-rigid objects using robust sparse feature matching to initialise dense optical flow, providing an initial segmentation and reconstruction. Joint refinement of object reconstruction and segmentation is then performed using a multiple view optimisation with a novel geodesic star convexity constraint that gives improved shape estimation and is computationally efficient. Comparison against state-of-the-art techniques for multiple view segmentation and reconstruction demonstrates significant improvement in performance for complex scenes. The approach enables reconstruction of 4D models for complex scenes which has not been demonstrated previously. | 8,667
1907.08195 | 2963385316 | Existing techniques for dynamic scene reconstruction from multiple wide-baseline cameras primarily focus on reconstruction in controlled environments, with fixed calibrated cameras and strong prior constraints. This paper introduces a general approach to obtain a 4D representation of complex dynamic scenes from multi-view wide-baseline static or moving cameras without prior knowledge of the scene structure, appearance, or illumination. Contributions of the work are: An automatic method for initial coarse reconstruction to initialize joint estimation; Sparse-to-dense temporal correspondence integrated with joint multi-view segmentation and reconstruction to introduce temporal coherence; and a general robust approach for joint segmentation refinement and dense reconstruction of dynamic scenes by introducing shape constraint. Comparison with state-of-the-art approaches on a variety of complex indoor and outdoor scenes, demonstrates improved accuracy in both multi-view segmentation and dense reconstruction. This paper demonstrates unsupervised reconstruction of complete temporally coherent 4D scene models with improved non-rigid object segmentation and shape reconstruction and its application to free-viewpoint rendering and virtual reality. | 3D scene flow estimates frame to frame correspondence whereas 4D temporal coherence estimates correspondence across the complete sequence to obtain a single surface model. Methods to estimate 3D scene flow have been reported in the literature @cite_0 for autonomous vehicles. However this approach is limited to narrow baseline cameras. Other scene flow approaches are dependent on 2D optical flow @cite_64 @cite_46 and they require an accurate estimate for most of the pixels which fails in the case of large motion. However, 3D scene flow methods align two frames independently and do not produce temporally coherent 4D models. | {
"abstract": [
"This paper proposes a novel model and dataset for 3D scene flow estimation with an application to autonomous driving. Taking advantage of the fact that outdoor scenes often decompose into a small number of independently moving objects, we represent each element in the scene by its rigid motion parameters and each superpixel by a 3D plane as well as an index to the corresponding object. This minimal representation increases robustness and leads to a discrete-continuous CRF where the data term decomposes into pairwise potentials between superpixels and objects. Moreover, our model intrinsically segments the scene into its constituting dynamic components. We demonstrate the performance of our model on existing benchmarks as well as a novel realistic dataset with scene flow ground truth. We obtain this dataset by annotating 400 dynamic scenes from the KITTI raw data collection using detailed 3D CAD models for all vehicles in motion. Our experiments also reveal novel challenges which cannot be handled by existing methods.",
"We present a novel method for recovering the 3D structure and scene flow from calibrated multi-view sequences. We propose a 3D point cloud parametrization of the 3D structure and scene flow that allows us to directly estimate the desired unknowns. A unified global energy functional is proposed to incorporate the information from the available sequences and simultaneously recover both depth and scene flow. The functional enforces multi-view geometric consistency and imposes brightness constancy and piece-wise smoothness assumptions directly on the 3D unknowns. It inherently handles the challenges of discontinuities, occlusions, and large displacements. The main contribution of this work is the fusion of a 3D representation and an advanced variational framework that directly uses the available multi-view information. The minimization of the functional is successfully obtained despite the non-convex optimization problem. The proposed method was tested on real and synthetic data.",
"Building upon recent developments in optical flow and stereo matching estimation, we propose a variational framework for the estimation of stereoscopic scene flow, i.e., the motion of points in the three-dimensional world from stereo image sequences. The proposed algorithm takes into account image pairs from two consecutive times and computes both depth and a 3D motion vector associated with each point in the image. In contrast to previous works, we partially decouple the depth estimation from the motion estimation, which has many practical advantages. The variational formulation is quite flexible and can handle both sparse or dense disparity maps. The proposed method is very efficient; with the depth map being computed on an FPGA, and the scene flow computed on the GPU, the proposed algorithm runs at frame rates of 20 frames per second on QVGA images (320×240 pixels). Furthermore, we present solutions to two important problems in scene flow estimation: violations of intensity consistency between input images, and the uncertainty measures for the scene flow result."
],
"cite_N": [
"@cite_0",
"@cite_46",
"@cite_64"
],
"mid": [
"1921093919",
"1968545482",
"2024336175"
]
} | Temporally coherent general dynamic scene reconstruction | Fig. 1: Temporally consistent scene reconstruction for the Odzemok dataset, color-coded to show the scene object segmentation obtained.
Temporally coherent reconstruction of dynamic scenes supports visual effects in film and broadcast production and content production in virtual reality. The ultimate goal of modelling dynamic scenes from multiple cameras is automatic understanding of real-world scenes from distributed camera networks, for applications in robotics and other autonomous systems. Existing methods have applied multiple view dynamic scene reconstruction techniques in controlled environments with a known background or chroma-key studio [23,20,56,60]. Other multiple view stereo techniques require a relatively dense static camera network, resulting in a large number of cameras [19]. Extensions to more general outdoor scenes [5,32,60] use prior reconstruction of the static geometry from images of the empty environment. However, these methods either require accurate segmentation of dynamic foreground objects, or prior knowledge of the scene structure and background, or are limited to static cameras and controlled environments. Scenes are reconstructed semi-automatically, requiring manual intervention for segmentation/rotoscoping, and result in temporally incoherent per-frame mesh geometries. Temporally coherent geometry with known surface correspondence across the sequence is essential for real-world applications and compact representation.
Our paper addresses the limitations of existing approaches by introducing a methodology for unsupervised temporally coherent dynamic scene reconstruction from multiple wide-baseline static or moving camera views without prior knowledge of the scene structure or background appearance. This temporally coherent dynamic scene reconstruction is demonstrated to work in applications for immersive content production such as free-viewpoint video (FVV) and virtual reality (VR). This work combines two previously published papers on general dynamic reconstruction [42] and temporally coherent reconstruction [43] into a single framework and demonstrates the application of this novel unsupervised joint segmentation and reconstruction to immersive content production (FVV and VR, Section 5).
The input is a sparse set of synchronised videos of an unknown dynamic scene from multiple moving cameras, without prior scene segmentation or camera calibration. Our first contribution is automatic initialisation of camera calibration and sparse scene reconstruction from sparse feature correspondence, using sparse feature detection and matching between pairs of frames. An initial coarse reconstruction and segmentation of all scene objects is obtained from sparse features matched across multiple views. This eliminates the requirement for prior knowledge of the background scene appearance or structure. Our second contribution is a sparse-to-dense reconstruction and segmentation approach to introduce temporal coherence at every frame. We exploit the temporal coherence of the scene to overcome visual ambiguities inherent in single-frame reconstruction and multiple view segmentation methods for general scenes. Temporal coherence refers to the correspondence between the 3D surfaces of all objects observed over time. Our third contribution is spatio-temporal alignment to estimate dense surface correspondence for 4D reconstruction. A geodesic star convexity shape constraint is introduced for the segmentation to improve its quality for non-rigid objects with complex appearance. The proposed approach overcomes the limitations of existing methods, allowing unsupervised temporally coherent 4D reconstruction of complete models for general dynamic scenes.
The scene is automatically decomposed into a set of spatio-temporally coherent objects, as shown in Figure 1, where the resulting 4D scene reconstruction has temporally coherent labels and surface correspondence for each object. This can be used for free-viewpoint video rendering and imported into a game engine for VR experience production. The contributions explained above can be summarized as follows:
- Unsupervised temporally coherent dense reconstruction and segmentation of general complex dynamic scenes from multiple wide-baseline views.
- Automatic initialization of dynamic object segmentation and reconstruction from sparse features.
- A framework for space-time sparse-to-dense segmentation, reconstruction and temporal correspondence.
- Robust spatio-temporal refinement of dense reconstruction and segmentation, integrating error-tolerant photo-consistency and edge information using geodesic star convexity.
- Robust and computationally efficient reconstruction of dynamic scenes by exploiting temporal coherence.
- Real-world applications of 4D reconstruction to free-viewpoint video rendering and virtual reality.
This paper is structured as follows: first, related work is reviewed; the methodology for general dynamic scene reconstruction is then introduced; finally, a thorough qualitative and quantitative evaluation and comparison to the state-of-the-art on challenging datasets is presented.
Related Work
Temporally coherent reconstruction is a challenging task for general dynamic scenes due to a number of factors, such as motion blur; articulated, non-rigid and large motion of multiple people; resolution differences between camera views; occlusions; wide baselines; errors in calibration; and cluttered dynamic backgrounds. Segmentation of dynamic objects from such scenes is difficult because of foreground and background complexity and the likelihood of overlapping background and foreground color distributions. Reconstruction is also challenging due to limited visual cues and relatively large errors affecting both calibration and extraction of a globally consistent solution. This section reviews previous work on dynamic scene reconstruction and segmentation.
Dynamic Scene Reconstruction
Dense dynamic shape reconstruction is a fundamental problem and a heavily studied area in the field of computer vision. Recovering accurate 3D models of a dynamically evolving, non-rigid scene observed by multiple synchronised cameras is a challenging task. Research on multiple view dense dynamic reconstruction has primarily focused on indoor scenes with controlled illumination and static backgrounds, extending methods for multiple view reconstruction of static scenes [53] to sequences [62]. Deep learning based approaches have been introduced to estimate the shape of dynamic objects from minimal camera views in constrained environments [29,68] and for rigid objects [58]. In the last decade, focus has shifted to more challenging outdoor scenes captured with both static and moving cameras. Reconstruction of non-rigid dynamic objects in uncontrolled natural environments is challenging due to scene complexity, illumination changes, shadows, occlusion and dynamic backgrounds with clutter such as trees or people. Methods have been proposed for multi-view reconstruction [65,39,37] requiring a large number of closely spaced cameras for surface estimation of dynamic shape. Practical applications require relatively sparse moving cameras to acquire coverage over large areas such as outdoor environments. A number of approaches for multi-view reconstruction of outdoor scenes require an initial silhouette segmentation [67,32,22,23] to allow visual-hull reconstruction. Most of these approaches to general dynamic scene reconstruction fail in the case of complex (cluttered) scenes captured with moving cameras.
A recent work proposed reconstruction of dynamic fluids [50] for static cameras. Another work used RGB-D cameras to obtain reconstructions of non-rigid surfaces [55]. Pioneering research in general dynamic scene reconstruction from multiple handheld wide-baseline cameras [5,60] exploited prior reconstruction of the background scene to allow dynamic foreground segmentation and reconstruction. Recent work [46] estimates the shape of dynamic objects from handheld cameras by exploiting GANs. However, these approaches either work only for static/indoor scenes or exploit strong prior assumptions such as silhouette information, a known background or scene structure. Also, all of these approaches give per-frame reconstructions, leading to temporally incoherent geometries. Our aim is to perform temporally coherent dense reconstruction of unknown dynamic non-rigid scenes automatically, without strong priors or limitations on scene structure.
Joint Segmentation and Reconstruction
Many existing multi-view reconstruction approaches rely on a two-stage sequential pipeline where foreground/background segmentation is first performed independently with respect to each camera, and then used as input to obtain a visual hull for multi-view reconstruction. The problem with this approach is that errors introduced at the segmentation stage cannot be recovered and are propagated to the reconstruction stage, reducing the final reconstruction quality. Segmentation from multiple wide-baseline views has been proposed by exploiting appearance similarity [17,38,70]. These approaches assume static backgrounds and different colour distributions for the foreground and background [52,17], which limits their applicability for general scenes.
Joint segmentation and reconstruction methods incorporate estimation of segmentation or matting with reconstruction to provide a combined solution. Joint refinement avoids the propagation of errors between the two stages, thereby making the solution more robust. In addition, cues from segmentation and reconstruction can be combined efficiently to achieve more accurate results. The first multi-view joint estimation system was proposed by Szeliski et al. [59], which used iterative gradient descent to perform an energy minimization. A number of approaches were introduced for the joint formulation in static scenes, and one recent work used training data to classify the segments [69]. The focus then shifted to joint segmentation and reconstruction for rigid objects in indoor and outdoor environments. These approaches used a variety of techniques such as patch-based refinement [54,48] and fixating cameras on the object of interest [11] for reconstructing rigid objects in the scene. However, these are either limited to static scenes [69,26] or process each frame independently, thereby failing to enforce temporal consistency [11,23].
Joint reconstruction and segmentation on monocular video was proposed in [36,3,12], achieving semantic segmentation of the scene but limited to rigid objects in street scenes. Practical application of joint estimation requires these approaches to work on non-rigid objects such as humans with clothing. A multi-layer joint segmentation and reconstruction approach was proposed for multiple view video of sports and indoor scenes [23]. The algorithm used known background images of the scene without the dynamic foreground objects to obtain an initial segmentation. Visual-hull based reconstruction was performed with a known foreground/background prior, using a background image plate with fixed and calibrated cameras. This visual hull was used as a prior and was optimized by a combination of photo-consistency, silhouette, color and sparse feature information in an energy minimization framework to improve the segmentation and reconstruction quality. Although structurally similar to our approach, it requires the scene to be captured by fixed calibrated cameras and an a priori known fixed background plate to estimate the initial visual hull by background subtraction. The proposed approach overcomes these limitations, allowing moving cameras and unknown scene backgrounds.
An approach based on optical flow and graph cuts was shown to work well for non-rigid objects in indoor settings but requires known background segmentation to obtain silhouettes and is computationally expensive [24]. Practical application of temporally coherent joint estimation requires approaches that work on non-rigid objects for general scenes in uncontrolled environments. A quantitative evaluation of techniques for multi-view reconstruction was presented in [53]. These methods are able to produce high quality results, but rely on good initializations and strong prior assumptions with known and controlled (static) scene backgrounds.
The proposed method exploits the advantages of joint segmentation and reconstruction and addresses the limitations of existing methods by introducing a novel approach to reconstruct general dynamic scenes automatically from wide-baseline cameras with no prior. The proposed approach automatically initialises the foreground object segmentation from wide-baseline correspondence without prior knowledge of the scene. This is followed by joint spatio-temporal reconstruction and segmentation of general scenes. Temporal correspondence is exploited to overcome visual ambiguities, giving improved reconstruction together with temporally coherent surface correspondence to obtain 4D scene models.
Temporal coherent 4D Reconstruction
Temporally coherent 4D reconstruction refers to aligning the 3D surfaces of non-rigid objects over time for a dynamic sequence. This is achieved by estimating point-to-point correspondences between the 3D surfaces to obtain a 4D temporally coherent reconstruction. 4D models allow efficient representations for practical applications in film, broadcast and immersive content production such as virtual, augmented and mixed reality. The majority of existing approaches for reconstruction of dynamic scenes from multi-view videos process each time frame independently due to the difficulty of simultaneously estimating temporal correspondence for non-rigid objects. Independent per-frame reconstruction can result in errors due to the inherent visual ambiguity caused by occlusion and similar object appearance in general scenes. Recent research has shown that exploiting temporal information can improve reconstruction accuracy as well as achieve temporal coherence [43].
3D scene flow estimates frame-to-frame correspondence, whereas 4D temporal coherence estimates correspondence across the complete sequence to obtain a single surface model. Methods to estimate 3D scene flow have been reported in the literature [41] for autonomous vehicles; however, this approach is limited to narrow-baseline cameras. Other scene flow approaches depend on 2D optical flow [66,6] and require an accurate estimate for most of the pixels, which fails in the case of large motion. Moreover, 3D scene flow methods align two frames independently and do not produce temporally coherent 4D models.
Research investigating spatio-temporal reconstruction across multiple frames was proposed in [20,37,24], exploiting temporal information from previous frames using optical flow. An approach for recovering space-time consistent depth maps from multiple video sequences captured by stationary, synchronized and calibrated cameras for depth-based free-viewpoint video rendering was proposed in [39]. However, these methods require accurate initialisation and fixed, calibrated cameras, and are limited to simple scenes. Other approaches to temporally coherent reconstruction [4] either require a large number of closely spaced cameras or bi-layer segmentation [72,30] as a constraint for reconstruction. Recent approaches for spatio-temporal reconstruction of multi-view data are limited to indoor studio data [47].
The framework proposed in this paper addresses limitations of existing approaches and gives 4D temporally coherent reconstruction for general dynamic indoor or outdoor scenes with large non-rigid motions, repetitive texture, uncontrolled illumination, and large capture volume. The scenes are captured with sparse static/moving cameras. The proposed approach gives 4D models of complete scenes with both static and dynamic objects for real-world applications (FVV and VR) with no prior knowledge of scene structure.
Multi-view Video Segmentation
In the field of image segmentation, approaches have been proposed to provide temporally consistent monocular video segmentation [21,49,45,71]. Hierarchical segmentation based on graphs was proposed in [21], and directed acyclic graphs were used to propose objects followed by segmentation [71]. Optical flow has been used to identify and consistently segment objects [45,49]. Recently, a number of approaches have been proposed for multi-view foreground object segmentation by exploiting appearance similarity spatially across views [16,35,38,70]. An approach for space-time multi-view segmentation was proposed in [17]. However, multi-view approaches assume a static background and different colour distributions for the foreground and background, which limits their applicability for general scenes and non-rigid objects.
To address this issue, we introduce a novel method for spatio-temporal multi-view segmentation of dynamic scenes using shape constraints. Single-image segmentation techniques using shape constraints provide good results for complex scene segmentation [25] (convex and concave shapes), but require manual interaction. The proposed approach performs automatic multi-view video segmentation by initializing the foreground object model using spatio-temporal information from wide-baseline feature correspondence, followed by a multi-layer optimization framework. Geodesic star convexity, previously used in single-view segmentation [25], is applied to constrain the segmentation in each view. Our multi-view formulation naturally enforces coherent segmentation between views and also resolves ambiguities such as the similarity of background and foreground in isolated views.
Summary and Motivation
Image-based temporally coherent 4D dynamic scene reconstruction without a prior model or constraints on the scene structure is a key problem in computer vision. Existing dense reconstruction algorithms need strong initial priors and constraints for the solution to converge, such as background, structure, and segmentation, which limits their application to automatic reconstruction of general scenes. Current approaches are also commonly limited to independent per-frame reconstruction and do not exploit temporal information or produce a coherent model with known correspondence.
The approach proposed in this paper aims to overcome the limitations of existing approaches to enable robust temporally coherent wide-baseline multiple view reconstruction of general dynamic scenes without prior assumptions on scene appearance, structure or segmentation of the moving objects. Static and dynamic objects in the scene are identified for simultaneous segmentation and reconstruction using geometry and appearance cues in a sparse-to-dense optimization framework. Temporal coherence is introduced to improve the quality of the reconstruction and geodesic star convexity is used to improve the quality of segmentation. The static and dynamic elements are fused automatically in both the temporal and spatial domain to obtain the final 4D scene reconstruction.
This paper presents a unified framework, novel in combining multiple view joint reconstruction and segmentation with temporal coherence to improve per-frame reconstruction performance, built as a single framework from the initial work presented in [43,42]. In particular, the approach gives a 4D surface model with full correspondence over time. A comprehensive experimental evaluation with comparison to the state-of-the-art in segmentation, reconstruction and 4D modelling is also presented, extending previous work. Application of the resulting 4D models to free-viewpoint video rendering and content production for immersive virtual reality experiences is also presented.
Methodology
This work is motivated by the limitations of existing multiple view reconstruction methods, which either work independently at each frame, resulting in errors due to visual ambiguity [19,23], or require restrictive assumptions on scene complexity and structure and often assume prior camera calibration and foreground segmentation [60,24]. We address these issues by initializing the joint reconstruction and segmentation algorithm automatically, introducing temporal coherence in the reconstruction and geodesic star convexity in the segmentation to reduce ambiguity and ensure consistent non-rigid structure initialization at successive frames. The proposed approach is demonstrated to achieve improved reconstruction and segmentation performance over state-of-the-art approaches and to produce temporally coherent 4D models of complex dynamic scenes.
Overview
An overview of the proposed framework for temporally coherent multi-view reconstruction is presented in Figure 2 and consists of the following stages:
Multi-view video: The scenes are captured using multiple video cameras (static/moving) separated by a wide baseline (> 15°). The cameras can be synchronized during capture using a time-code generator, or later using the audio information. Camera extrinsic calibration and scene structure are assumed to be unknown.
Sparse reconstruction: The intrinsics are assumed to be known. Segmentation-based feature detection (SFD) [44] is used to obtain a relatively large number of sparse features suitable for wide-baseline matching, distributed throughout the scene including on dynamic objects such as people. SFD features are matched between views using a SIFT descriptor, giving the camera extrinsics and a sparse 3D point cloud for each time instant of the entire sequence [27].
Initial scene segmentation and reconstruction - Section 3.2: Automatic initialisation is performed without prior knowledge of the scene structure or appearance to obtain an initial approximation for each object. The sparse point cloud is clustered in 3D [51], with each cluster representing a unique foreground object. Object segmentation increases efficiency and improves the robustness of the 4D models. This reconstruction is refined using the framework explained in Section 3.4 to obtain a segmentation and dense reconstruction of each object.
Sparse-to-dense temporal reconstruction with temporal coherence - Section 3.3: Temporal coherence is introduced in the framework to initialize the coarse reconstruction and obtain frame-to-frame dense correspondences for each dynamic object. Dynamic object regions are detected at each time instant by sparse temporal correspondence of SFD features at successive frames. Sparse temporal feature correspondence allows propagation of the dense reconstruction for each dynamic object to obtain an initial approximation.
Joint object-based sparse-to-dense temporally coherent refinement of shape and segmentation - Section 3.4: The initial estimate is refined for each object per view through joint optimisation of shape and segmentation, using a robust cost function combining matching, color, contrast and smoothness information for wide-baseline matching with a geodesic star convexity constraint. A single 3D model for each dynamic object is obtained by fusion of the view-dependent depth maps using Poisson surface reconstruction [31]. Surface orientation is estimated based on neighbouring pixels.
Applications - Section 5: The 4D representation from the proposed joint segmentation and reconstruction framework has a number of applications in media production, including free-viewpoint video (FVV) rendering and virtual reality (VR).
The process above is repeated for the entire sequence for all objects in the first frame and for dynamic objects at each time-instant. The proposed approach enables automatic reconstruction of all objects in the scene as a 4D mesh sequence. Subsequent sections present the novel contributions of this work in initialisation and refinement to obtain a dense temporally coherent reconstruction. The approach is demonstrated to outperform previous approaches to dynamic scene reconstruction and does not require prior knowledge of the scene.
Initial Scene Segmentation and Reconstruction
For general dynamic scene reconstruction, we need to reconstruct and segment the objects in the scene. This requires an initial coarse approximation to initialise a subsequent refinement step that optimises the segmentation and reconstruction with respect to each camera view. We introduce an approach based on sparse point cloud clustering; an overview is shown in Figure 3. Initialisation gives a complete coarse segmentation and reconstruction of each object in the first frame of the sequence for subsequent refinement. The dense reconstructions of the foreground objects and the background are combined to obtain a full scene reconstruction at the first time instant. A rough geometric proxy of the background is created using the method. For consecutive time instants, dynamic objects and newly appearing objects are identified and only these objects are reconstructed and segmented. The reconstruction of static objects is retained, which reduces computational complexity. The optical flow and cluster information for each dynamic object ensure that the same labels are retained for the entire sequence.
Sparse Point-cloud Clustering
The sparse representation of the scene is first processed to remove outliers using point neighbourhood statistics [51]. We then segment the objects in the sparse scene reconstruction; this allows only moving objects to be reconstructed at each frame for efficiency, and it also allows object shape similarity to be propagated across frames to increase the robustness of the reconstruction.
We use a data clustering approach based on 3D grid subdivision of the space with an octree data structure in Euclidean space to segment objects at each frame. In a more general sense, nearest-neighbour information is used to cluster the points, which is essentially a flood-fill algorithm. We choose this clustering because of its computational efficiency and robustness. The approach allows segmentation of the objects in the scene and is demonstrated to work well for cluttered and general outdoor scenes, as shown in Section 4.
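To make the clustering step concrete, the following is a minimal sketch (not the implementation used here) of Euclidean clustering of a sparse point cloud by flood-filling a fixed-radius neighbourhood graph with SciPy's cKDTree; the clustering radius and minimum cluster size are illustrative assumptions rather than values from this work.

import numpy as np
from scipy.spatial import cKDTree

def euclidean_clusters(points, radius=0.05, min_size=50):
    # points: (N, 3) array of sparse 3D feature locations.
    # Flood fill over the fixed-radius neighbourhood graph, which
    # approximates octree-based Euclidean cluster extraction.
    tree = cKDTree(points)
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        queue, cluster = [seed], [seed]
        while queue:
            idx = queue.pop()
            for n in tree.query_ball_point(points[idx], r=radius):
                if n in unvisited:
                    unvisited.remove(n)
                    queue.append(n)
                    cluster.append(n)
        if len(cluster) >= min_size:  # small clusters are treated as background
            clusters.append(np.asarray(cluster))
    return clusters

Clusters rejected by the minimum-size test correspond to objects with insufficient detected features, which, as described below, are reconstructed as part of the background.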
Objects with insufficient detected features are reconstructed as part of the scene background. Appearing, disappearing and reappearing objects are handled by sparse dynamic feature tracking, explained in Section 3.3. Clustering results are shown in Figure 3. This is followed by a sparse-to-dense coarse object based approach to segment and reconstruct general dynamic scenes.
Coarse Object Reconstruction
The process to obtain the coarse reconstruction for the first frame of the sequence is shown in Figure 4. The sparse representation of each element is back-projected onto the rectified image pair for each view. Delaunay triangulation [18] is performed on the set of back-projected points for each cluster in one image and propagated to the second image using the sparse matched features. Triangles with an edge length greater than the median edge length of all triangles are removed. For each remaining triangle pair, a direct linear transform is used to estimate the affine homography. The displacement at each pixel within the triangle pair is estimated by interpolation, giving an initial dense disparity map for each cluster in the 2D image pair, labelled R_I and depicted in red in Figure 4. The initial coarse reconstruction of the observed objects in the scene is used to define the depth hypotheses at each pixel for the optimization.
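The per-triangle affine interpolation of sparse disparities can be sketched with SciPy's piecewise-linear interpolator over a Delaunay triangulation, which applies an affine map inside each triangle; the triangle filtering by edge length described above is omitted for brevity, and the function name and interface are illustrative assumptions.

import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import LinearNDInterpolator

def densify_disparity(pts_ref, pts_aux, image_shape):
    # pts_ref, pts_aux: (N, 2) matched feature positions (x, y) in the
    # rectified reference and auxiliary images of one cluster.
    disp = pts_aux - pts_ref                      # per-feature 2D disparity
    tri = Delaunay(pts_ref)                       # triangulation in the reference view
    interp = LinearNDInterpolator(tri, disp)      # affine interpolation inside each triangle
    h, w = image_shape
    ys, xs = np.mgrid[0:h, 0:w]
    grid = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float64)
    dense = interp(grid).reshape(h, w, 2)         # NaN outside the cluster's convex hull
    return dense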
The region R_I does not ensure complete coverage of the object, so we extrapolate this region in 2D to obtain a region R_O (shown in yellow) by 5% of the average distance between the boundary points of R_I and the centroid of the object. To allow for errors in the initial approximate depth from sparse features, we add volume in front of and behind the projected surface by an error tolerance along the optical ray of the camera. This ensures that the object boundaries lie within the extrapolated initial coarse estimate, although the depth at each pixel of the combined regions may not be accurate. The tolerance for extrapolation varies depending on whether a pixel belongs to R_I or R_O, as the propagated pixels of the extrapolated region R_O may have a higher level of error than the points from the sparse representation R_I, and therefore require a comparatively higher tolerance. The threshold depends on the capture volume of the dataset and is set to 1% of the capture volume for R_O and half that value for R_I. This volume in 3D corresponds to our initial coarse reconstruction of each object and enables us to remove the dependency of existing approaches on a background plate or visual hull estimate. This process of cluster identification and initial coarse object reconstruction is performed for multiple objects in general environments. Initial object segmentation using point cloud clustering and coarse segmentation is insensitive to parameters; the same parameters are used for all datasets throughout this work. The result of this process is a coarse initial object segmentation and reconstruction for each object.
Sparse-to-dense temporal reconstruction with temporal coherence
Once the static scene reconstruction is obtained for the first frame, we perform temporally coherent reconstruction only for dynamic objects at successive time instants, instead of whole scene reconstruction, for computational efficiency and to avoid redundancy. The initial coarse reconstruction of each dynamic region is refined in the subsequent optimization step with respect to each camera view. Dynamic scene objects are identified from the temporal correspondence of sparse feature points. Sparse correspondence is used to propagate an initial model of each moving object for refinement. Figure 5 presents the sparse reconstruction and temporal correspondence. New objects are identified per frame from the clustered sparse reconstruction and are labelled as dynamic objects.

Sparse temporal dynamic feature tracking: Numerous approaches have been proposed to track moving objects in 2D using either features or optical flow. However, these methods may fail in the case of occlusion, movement parallel to the view direction, large motions and moving cameras. To overcome these limitations we match the sparse 3D feature points obtained using SFD [44] from multiple wide-baseline views at each time instant. The use of sparse 3D features is robust to large non-rigid motion, occlusions and camera movement. SFD detects sparse features which are stable across wide-baseline views and consecutive time instants for a moving camera and dynamic scene. Sparse 3D feature matches between consecutive time instants are back-projected to each view. These features are matched temporally using a SIFT descriptor to identify the moving points. Robust matching is achieved by enforcing multiple-view consistency for the temporal feature correspondence in each view, as illustrated in Figure 6. Each match must satisfy the constraint:
\| H_{t,v}(p) + u_{t,r}(p + H_{t,v}(p)) - u_{t,v}(p) - H_{t,r}(p + u_{t,v}(p)) \| < \epsilon    (1)
where p is the feature image point in view v at frame t, H_{t,v}(p) is the disparity at frame t between views v and r, u_{t,v}(p) is the temporal correspondence from frame t to t + 1 for view v, and \epsilon is a fixed consistency threshold. The multi-view consistency check ensures that correspondences between any two views remain temporally consistent for successive frames. Matches in the 2D domain are sensitive to camera movement and occlusion, hence we map the set of refined matches into 3D to make the system robust to camera motion. The Frobenius norm is applied to the 3D point gradients in all directions [71] to obtain the 'net' motion at each sparse point. The 'net' motions between pairs of 3D points at consecutive time instants are ranked, and the top and bottom 5 percentile values are removed. Median filtering is then applied to identify the dynamic features. Figure 7 shows an example with moving cameras for the Juggler dataset [5].
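A simplified sketch of the dynamic-point selection is given below: per-point 'net' 3D motion magnitudes are trimmed at the 5th and 95th percentiles and median filtered before thresholding. The motion threshold, the filter width, and applying the median filter over a 1D ordering of the points are illustrative simplifications, not details taken from this work.

import numpy as np
from scipy.signal import medfilt

def detect_dynamic_points(X_t, X_t1, motion_thresh=0.01):
    # X_t, X_t1: (N, 3) positions of the same sparse 3D features at
    # consecutive time instants (after the multi-view consistency check).
    motion = np.linalg.norm(X_t1 - X_t, axis=1)       # net motion per point
    lo, hi = np.percentile(motion, [5, 95])
    keep = (motion >= lo) & (motion <= hi)            # trim outlier motions
    smoothed = medfilt(motion[keep], kernel_size=5)   # suppress isolated spikes
    dynamic = np.zeros(len(X_t), dtype=bool)
    dynamic[np.where(keep)[0]] = smoothed > motion_thresh
    return dynamic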
Sparse-to-dense model reconstruction: Dynamic 3D feature points are used to initialize the segmentation and reconstruction of the initial model. This avoids the assumption of static backgrounds and prior scene segmentation commonly used to initialise multiple view reconstruction with a coarse visual-hull approximation [23]. Temporal coherence also provides a more accurate initialisation to overcome visual ambiguities at individual frames. Figure 8 illustrates the use of temporal coherence for reconstruction initialisation and refinement. Dynamic feature correspondence is used to identify the mesh of each dynamic object. This mesh is back-projected on each view to obtain the region of interest. Lucas-Kanade optical flow [8] is performed on the projected mask for each view in the temporal domain, using the dynamic feature correspondences over time as initialization. Dense multi-view wide-baseline correspondences from the previous frame are propagated to the current frame using the flow vectors to obtain dense multi-view matches in the current frame. The matches are triangulated in 3D to obtain a refined dense 3D model of the dynamic object for the current frame. For dynamic scenes, a new object may enter the scene or a new part may appear as an object moves. To allow the introduction of new objects and object parts, we also use information from the cluster of sparse points for each dynamic object. The cluster corresponding to the dynamic features is identified and static points are removed. This ensures that the set of new points contains not only the dynamic features but also the unprocessed points which represent new parts of the object. These points are added to the refined sparse model of the dynamic object. To handle new objects, we detect new clusters at each time instant and consider them as dynamic regions. The sparse-to-dense initial coarse reconstruction improves the quality of segmentation and reconstruction after refinement. Examples of the improvement in segmentation and reconstruction for the Odzemok [1] and Juggler [5] datasets are shown in Figure 9. As observed, the limbs of the people are retained in both cases by using information from the previous frames.
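The temporal propagation step can be sketched with OpenCV's pyramidal Lucas-Kanade tracker. The window size and pyramid depth below are illustrative defaults, and the function is only a per-view sketch of propagating 2D correspondences from frame t to t+1 before re-triangulation; it is not the implementation used here.

import numpy as np
import cv2

def propagate_correspondences(prev_gray, curr_gray, prev_pts):
    # prev_pts: (N, 2) float pixel positions in the previous frame of one view,
    # initialised from the dynamic feature correspondences.
    pts = prev_pts.astype(np.float32).reshape(-1, 1, 2)
    next_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, pts, None, winSize=(21, 21), maxLevel=3)
    ok = status.ravel() == 1                      # keep successfully tracked points
    return pts[ok].reshape(-1, 2), next_pts[ok].reshape(-1, 2)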
Joint object-based sparse-to-dense temporally coherent refinement of shape and segmentation
The initial reconstruction and segmentation from dense temporal feature correspondence is refined using a joint optimization framework. A novel shape constraint is introduced based on geodesic star convexity, which has previously been shown to give improved performance in interactive image segmentation for structures with fine detail (for example a person's fingers or hair) [25]. Shape is a powerful cue for object recognition and segmentation. Shape models represented as distance transforms from a template have been used for category-specific segmentation [33]. Some works have introduced generic connectivity constraints for segmentation, showing that obtaining a globally optimal solution under the connectivity constraint is NP-hard [64]. Veksler et al. used a shape constraint in a segmentation framework by enforcing a star convexity prior on the segmentation, with globally optimal solutions achieved subject to this constraint [63]. The star convexity constraint ensures connectivity to seed points, and is a stronger assumption than plain connectivity. An example of a star-convex object is shown in Figure 10, along with a failure case for a non-rigid articulated object. To handle more complex objects, the idea of geodesic forests with multiple star centres was introduced to obtain a globally optimal solution for interactive 2D object segmentation [25]. The main focus was to introduce shape constraints in interactive segmentation by means of a geodesic star convexity prior. The notion of connectivity was extended from Euclidean to geodesic so that paths can bend and adapt to the image data, as opposed to straight Euclidean rays, thus extending visibility and reducing the number of star centers required.
(Fig. 10: (a) Representation of star convexity: the left object depicts an example of a star-convex object, with a star center marked; the object on the right, with a plausible star center, shows deviations from star-convexity in the fine details. (b) Multiple star semantics for joint refinement: single star center based segmentation is depicted on the left and multiple stars on the right.)

The geodesic star-convexity is integrated as a constraint on the energy minimisation for joint multi-view reconstruction and segmentation [23]. In this work the shape constraint is automatically initialised for each view from the initial segmentation. The shape constraint is based on the geodesic distance with the foreground object initialisation (seeds) as star centres to which the object shape is restricted. The union formed by multiple object seeds forms a geodesic forest. This allows complex shapes to be segmented. To automatically initialize the segmentation, we use the sparse temporal feature correspondences as star centers (seeds) to build the geodesic forest. The region outside the initial coarse reconstruction of all dynamic objects is initialized as the background seed for segmentation, as shown in Figure 12. The shape of the dynamic object is restricted by this geodesic distance constraint, which depends on the image gradient. Comparison with existing methods for multi-view segmentation demonstrates improvements in the recovery of fine detail structure, as illustrated in Figure 12.
Once we have a set of 3D points for each dynamic object, Poisson surface reconstruction is performed on these points to obtain an initial coarse model of each dynamic region R, which is subsequently refined using the optimization framework (Section 3.4.1).
Optimization on initial coarse object reconstruction based on geodesic star convexity
The depth of the initial coarse reconstruction estimate is refined per view for each dynamic object at a per pixel level. View-dependent optimisation of depth is performed with respect to each camera which is robust to errors in camera calibration and initialisation. Calibration inaccuracies produce inconsistencies limiting the applicability of global reconstruction techniques which simultaneously consider all views; view-dependent techniques are more tolerant to such inaccuracies because they only use a subset of the views for reconstruction of depth from each camera view.
Our goal is to assign an accurate depth value from a set of depth values D = \{d_1, ..., d_{|D|-1}, U\} and a layer label from a set of label values L = \{l_1, ..., l_{|L|}\} to each pixel p of the region R of each dynamic object. Each d_i is obtained by sampling the optical ray from the camera, and U is an unknown depth value to handle occlusions. This is achieved by optimisation of a joint cost function [23] for label (segmentation) and depth (reconstruction):
E(l, d) = \lambda_{data} E_{data}(d) + \lambda_{contrast} E_{contrast}(l) + \lambda_{smooth} E_{smooth}(l, d) + \lambda_{color} E_{color}(l)    (2)
where, d is the depth at each pixel, l is the layer label for multiple objects and the cost function terms are defined in section 3.4.2. The equation consists of four terms: the data term is for the photo-consistency scores, the smoothness term is to avoid sudden peaks in depth and maintain the consistency and the color and contrast terms are to identify the object boundaries. Data and smoothness terms are common to solve reconstruction problems [7] and the color and contrast terms are used for segmentation [34]. This is solved subject to a geodesic star-convexity constraint on the labels l. A label l is star convex with respect to center c, if every point p ∈ l is visible to a star center c via l in the image x which can be expressed as an energy cost:
E^{\star}(l|x, c) = \sum_{p \in R} \sum_{q \in \Gamma_{c,p}} E_{p,q}(l_p, l_q)    (3)

\forall q \in \Gamma_{c,p}: \quad E_{p,q}(l_p, l_q) = \begin{cases} \infty & \text{if } l_p \neq l_q \\ 0 & \text{otherwise} \end{cases}    (4)

where \forall p \in R: p \in l \Leftrightarrow l_p = 1, and \Gamma_{c,p} is the geodesic path joining p to the star center c, given by:

\Gamma_{c,p} = \arg\min_{\Gamma \in P_{c,p}} L(\Gamma)    (5)
where P_{c,p} denotes the set of all discrete paths between c and p, and L(\Gamma) is the length of the discrete geodesic path as defined in [25]. In the case of image segmentation, the gradients of the underlying image provide the information to compute the discrete paths between each pixel and the star centers, and L(\Gamma) is defined as:
L(\Gamma) = \sum_{i=1}^{N_D - 1} \sqrt{(1 - \delta_g)\, j(\Gamma_i, \Gamma_{i+1})^2 + \delta_g\, \|\nabla I(\Gamma_i)\|^2}    (6)
where \Gamma is an arbitrary parametrized discrete path with N_D pixels given by \Gamma_1, \Gamma_2, \dots, \Gamma_{N_D}, j(\Gamma_i, \Gamma_{i+1}) is the Euclidean distance between successive pixels, and \|\nabla I(\Gamma_i)\|^2 is a finite-difference approximation of the image gradient between the points \Gamma_i and \Gamma_{i+1}. The parameter \delta_g weights the Euclidean distance against the image-gradient term. Using the above definition, the geodesic distance is obtained as in Equation 5.
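A geodesic distance map of the kind used by the star-convexity constraint can be sketched as a shortest-path computation over the 4-connected pixel grid, with edge costs blending the Euclidean step and the local image gradient as in Equation 6. The weighting delta_g below is an arbitrary illustrative value, and SciPy's Dijkstra routine stands in for whatever distance transform is actually used in this work.

import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import dijkstra

def geodesic_distance_map(gray, seeds, delta_g=0.7):
    # gray: (H, W) grayscale image; seeds: list of (row, col) star centres.
    gray = gray.astype(np.float64)
    h, w = gray.shape
    idx = np.arange(h * w).reshape(h, w)
    rows, cols, costs = [], [], []
    for dy, dx in [(0, 1), (1, 0)]:               # horizontal and vertical grid edges
        a = idx[:h - dy, :w - dx].ravel()
        b = idx[dy:, dx:].ravel()
        grad = np.abs(gray[:h - dy, :w - dx] - gray[dy:, dx:]).ravel()
        cost = np.sqrt((1 - delta_g) * 1.0 + delta_g * grad ** 2)
        rows.append(a); cols.append(b); costs.append(cost)
    graph = coo_matrix((np.concatenate(costs),
                        (np.concatenate(rows), np.concatenate(cols))),
                       shape=(h * w, h * w))
    seed_idx = [int(idx[r, c]) for r, c in seeds]
    dist = dijkstra(graph, directed=False, indices=seed_idx, min_only=True)
    return dist.reshape(h, w)                     # distance to the nearest star centre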
An extension of single star-convexity is to use multiple stars to define a more general class of shapes. The introduction of multiple star centers reduces the path lengths and increases the visibility of small parts of objects, such as limbs, as shown in Figure 10. Hence Equation 3 is extended to multiple stars. A label l is star convex with respect to a center c_i if every point p \in l is visible to a star center c_i in the set C = \{c_1, ..., c_{N_T}\} via l in the image x, where N_T is the number of star centers [25]. This is expressed as an energy cost:
E^{\star}(l|x, C) = \sum_{p \in R} \sum_{q \in \Gamma_{c,p}} E_{p,q}(l_p, l_q)    (7)
In our case all the correct temporal sparse feature correspondences are used as star centers; hence the segmentation includes all the points that are visible to these sparse features via geodesic distances in the region R, thereby enforcing the shape constraint. Since the star centers are selected automatically, the method is unsupervised. A comparison of the segmentation with the geodesic multi-star convexity constraint against no constraint and a Euclidean multi-star convexity constraint is shown in Figure 11. The figure demonstrates the usefulness of the proposed approach, with an improvement in segmentation quality on complex non-rigid objects. The energy in Equation 2 is minimized as follows:
\min_{(l,d)} E(l, d) \;\; \text{s.t.} \;\; l \in S^{\star}(C) \quad \Leftrightarrow \quad \min_{(l,d)} E(l, d) + E^{\star}(l|x, C)    (8)
where S^{\star}(C) is the set of all shapes which lie within the geodesic distances with respect to the centers in C. Optimization of Equation 8, subject to each pixel p in the region R being at a geodesic distance \Gamma_{c,p} from the star centers in the set C, is performed using the \alpha-expansion algorithm for a pixel p by iterating through the set of labels in L \times D [10]. Graph-cut is used to obtain a local optimum [9]. The improvement obtained by using geodesic star convexity in the framework is shown in Figure 12, and the improvement from temporal coherence is shown in Figure 9. Figure 13 shows the improvements using the geodesic shape constraint, temporal coherence, and the combined proposed approach for the Dance2 dataset [2].

(Fig. 12: Geodesic star convexity: a region R with star centers C connected with geodesic distance \Gamma_{c,p}. Segmentation results with and without geodesic star convexity based optimization are shown on the right for the Juggler dataset.)
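To make the structure of the constrained energy concrete, the sketch below minimises a unary-plus-Potts energy with iterated conditional modes (ICM). This is only a weak local stand-in for the \alpha-expansion graph-cut optimisation used above, and the geodesic star-convexity constraint would enter as infinite costs on labelings that violate it; all names and parameters here are illustrative.

import numpy as np

def icm_minimise(unary, adjacency, pairwise_weight=1.0, n_iters=10):
    # unary: (n_sites, n_labels) data costs; adjacency: dict site -> neighbour ids.
    n_sites, n_labels = unary.shape
    labels = unary.argmin(axis=1)                 # greedy initial labeling
    for _ in range(n_iters):
        for p in range(n_sites):
            costs = unary[p].copy()
            for q in adjacency.get(p, []):        # Potts penalty against neighbours
                costs += pairwise_weight * (np.arange(n_labels) != labels[q])
            labels[p] = int(np.argmin(costs))
    return labels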
Energy cost function for joint segmentation and reconstruction
For completeness in this section we define each of the terms in Equation 2, these are based on previous terms used for joint optimisation over depth for each pixel introduced in [42], with modification of the color matching term to improve robustness and extension to multiple labels.
Matching term: The data term for matching between views is specified as a measure of photo-consistency (Figure 14) as follows:
E_{data}(d) = \sum_{p \in P} e_{data}(p, d_p), \qquad e_{data}(p, d_p) = \begin{cases} M(p, q) = \sum_{i \in O_k} m(p, q), & \text{if } d_p \neq U \\ M_U, & \text{if } d_p = U \end{cases}    (9)
where P is the 4-connected neighbourhood of pixel p, M_U is the fixed cost of labelling a pixel unknown, and q denotes the projection of the hypothesised point P in an auxiliary camera, where P is a 3D point along the optical ray passing through pixel p located at a distance d_p from the reference camera. O_k is the set of k most photo-consistent pairs. For textured scenes, Normalized Cross Correlation (NCC) over a square window is a common choice [53]. The NCC values range from -1 to 1 and are mapped to non-negative matching costs using the function 1 − NCC.
A maximum likelihood measure [40] is used in this function for confidence value calculation between the center pixel p and the other pixels q and is based on the survey on confidence measures for stereo [28]. The measure is defined as:
m(p, q) = \frac{\exp\left(-\frac{c_{min}}{2\sigma_i^2}\right)}{\sum_{(p,q) \in N} \exp\left(-\frac{1 - NCC(p, q)}{2\sigma_i^2}\right)}    (10)
where \sigma_i^2 is the noise variance for each auxiliary camera i; this parameter is fixed to 0.3. N denotes the set of interacting pixels in P, and c_{min} is the minimum cost for a pixel, obtained by evaluating the function 1 − NCC(·,·) over a 15 × 15 window.
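A minimal sketch of the matching term's building blocks is given below: normalised cross-correlation between two patches, to be mapped to a cost 1 − NCC, and the confidence measure of Equation 10 evaluated over a set of per-hypothesis costs. The default sigma follows the value quoted above; the small epsilon guard and the interface are otherwise illustrative assumptions.

import numpy as np

def ncc(patch_a, patch_b, eps=1e-8):
    # Normalised cross-correlation of two equally sized patches, in [-1, 1].
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + eps
    return float((a * b).sum() / denom)

def confidence(costs, sigma=0.3):
    # costs: array of (1 - NCC) matching costs over the depth hypotheses of a pixel.
    c_min = costs.min()
    return float(np.exp(-c_min / (2 * sigma ** 2)) /
                 np.exp(-costs / (2 * sigma ** 2)).sum())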
Contrast term: Segmentation boundaries in images tend to align with contours of high contrast, and it is desirable to represent this as a constraint in stereo matching. A consistent interpretation of segmentation-prior and contrast-likelihood is used from [34]. We use a modified version of this interpretation in our formulation to preserve edges, using bilateral filtering [61] instead of Gaussian filtering. The contrast term is:

E_{contrast}(l) = \sum_{(p,q) \in N} e_{contrast}(p, q, l_p, l_q)    (11)
e_{contrast}(p, q, l_p, l_q) = \begin{cases} 0, & \text{if } l_p = l_q \\ \frac{1}{1 + \epsilon}\left(\epsilon + \exp(-C(p, q))\right), & \text{otherwise} \end{cases}    (12)

where \|\cdot\| is the L_2 norm and \epsilon = 1. The simplest choice for C(p, q) would be the squared Euclidean color distance between the intensities at pixels p and q, as used in [23]. We propose a term for better segmentation:

C(p, q) = \frac{\|B(p) - B(q)\|^2}{2\sigma_{pq}^2\, d_{pq}^2}

where B(\cdot) represents the bilateral filter, d_{pq} is the Euclidean distance between p and q, and \sigma_{pq} = \frac{\|B(p) - B(q)\|^2}{d_{pq}^2}.
This term removes regions with low photo-consistency scores and weak edges, and thereby helps in estimating the object boundaries.
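The following is a rough per-pair sketch of the bilateral-filter based contrast cost; the filter parameters are arbitrary, the normalisation by sigma_pq is folded into the distance term for brevity, and in practice the filtered image would be computed once rather than per pixel pair.

import numpy as np
import cv2

def contrast_cost(image_bgr, p, q, eps=1.0):
    # p, q: (row, col) coordinates of two neighbouring pixels.
    filtered = cv2.bilateralFilter(image_bgr, d=9, sigmaColor=75, sigmaSpace=75)
    bp = filtered[p].astype(np.float64)
    bq = filtered[q].astype(np.float64)
    d2 = float((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)
    c = np.sum((bp - bq) ** 2) / (2.0 * d2)
    return (1.0 / (1.0 + eps)) * (eps + np.exp(-c))  # cost paid only when labels differ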
Smoothness term: This term is inspired by [23] and it ensures the depth labels vary smoothly within the object reducing noise and peaks in the reconstructed surface. This is useful when the photo-consistency score is low and insufficient to assign depth to a pixel ( Figure 14). It is defined as:
E_{smooth}(l, d) = \sum_{(p,q) \in N} e_{smooth}(l_p, d_p, l_q, d_q)    (13)

e_{smooth}(l_p, d_p, l_q, d_q) = \begin{cases} \min(|d_p - d_q|, d_{max}), & \text{if } l_p = l_q \text{ and } d_p, d_q \neq U \\ 0, & \text{if } l_p = l_q \text{ and } d_p = d_q = U \\ d_{max}, & \text{otherwise} \end{cases}    (14)
d_{max} is set to 50 times the size of the depth sampling step for all datasets.
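A direct transcription of the smoothness cost in Equation 14 is sketched below, with the unknown-depth label U represented by NaN; the encoding of U is an implementation choice, not something specified above.

import numpy as np

def smoothness_cost(d_p, d_q, same_label, d_max):
    # NaN encodes the unknown depth label U.
    p_known, q_known = not np.isnan(d_p), not np.isnan(d_q)
    if same_label and p_known and q_known:
        return min(abs(d_p - d_q), d_max)         # truncated depth difference
    if same_label and not p_known and not q_known:
        return 0.0                                # both unknown: no penalty
    return d_max                                  # label change or known/unknown mix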
Color term: This term is computed using the negative log-likelihood [9] of the color models learned from the foreground and background markers. The star centers obtained from the sparse 3D features are used as foreground markers, and for the background markers we consider the region outside the projected initial coarse reconstruction in each view. Because the markers are sparse, the color models use GMMs with 5 components each for foreground and background, mixed with uniform color models [14].
E_{color}(l) = \sum_{p \in P} -\log P(I_p | l_p)    (15)
where P(I_p | l_p = l_i) denotes the probability of pixel p in the reference image belonging to layer l_i.

(Fig. 15: Comparison of segmentation on benchmark static datasets using geodesic star-convexity.)
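The colour term can be sketched with scikit-learn's Gaussian mixture model: one GMM each for foreground and background learned from the sparse markers, evaluated as a per-pixel negative log-likelihood as in Equation 15. The uniform-colour mixing mentioned above is omitted, and the interface is illustrative rather than the implementation used here.

import numpy as np
from sklearn.mixture import GaussianMixture

def colour_energy(image_rgb, fg_pixels, bg_pixels, n_components=5):
    # fg_pixels, bg_pixels: (N, 3) colours sampled at foreground/background markers.
    fg = GaussianMixture(n_components).fit(fg_pixels.astype(np.float64))
    bg = GaussianMixture(n_components).fit(bg_pixels.astype(np.float64))
    pix = image_rgb.reshape(-1, 3).astype(np.float64)
    e_fg = -fg.score_samples(pix).reshape(image_rgb.shape[:2])  # -log P(I_p | fg)
    e_bg = -bg.score_samples(pix).reshape(image_rgb.shape[:2])  # -log P(I_p | bg)
    return e_fg, e_bg   # added to the unary costs of the corresponding labels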
Results and Performance Evaluation
The proposed system is tested on publicly available multi-view research datasets of indoor and outdoor scenes; details of the datasets are given in Table 1. The parameters used for all the datasets are defined in Table 2. More information is available on the accompanying website.
Multi-view segmentation evaluation
Segmentation is evaluated against the state-of-the-art methods for multi-view segmentation, Kowdle [35] and Djelouah [16], for static scenes, and against the joint segmentation and reconstruction methods Mustafa [42] (per frame) and Guillemaut [24] (using temporal information) for both static and dynamic scenes. For static multi-view data the segmentation is initialised as detailed in Section 3.1, followed by refinement using the constrained optimisation of Section 3.4.1. For dynamic scenes the full pipeline with temporal coherence is used, as detailed in Section 3. Ground-truth is obtained by manually labelling the foreground for the Office, Dance1 and Odzemok datasets; for the other datasets ground-truth is available online. We initialize all approaches with the same proposed initial coarse reconstruction for a fair comparison.
To evaluate the segmentation we measure completeness as the ratio of intersection to union with the ground-truth [35]. Comparisons are shown in Table 3 and Figures 15 and 16 for the static benchmark datasets, and in Table 4 and Figures 17 and 18 for the dynamic scene segmentations. Results for multi-view segmentation of static scenes are more accurate than Djelouah, Mustafa, and Guillemaut, and comparable to Kowdle, with improved segmentation of some details such as the back of the chair.
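For reference, the completeness measure reduces to a standard intersection-over-union between binary masks, as in the short sketch below.

import numpy as np

def completeness(pred_mask, gt_mask):
    # pred_mask, gt_mask: boolean foreground masks of the same shape.
    inter = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    return float(inter) / float(union) if union else 1.0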
For dynamic scenes the geodesic star convexity based optimization together with temporal consistency gives improved segmentation of fine detail such as the legs of the table in the Office dataset and the limbs of the person in the Juggler, Magician and Dance2 datasets in Figures 17 and 18. This overcomes the limitations of previous multi-view per-frame segmentation.
Reconstruction evaluation
Reconstruction results obtained using the proposed method are compared against Mustafa [42], Guillemaut [24], and Furukawa [19] for dynamic sequences. Furukawa [19] is a per-frame multi-view wide-baseline stereo approach which ranks highly on the Middlebury benchmark [53] but does not refine the segmentation.
The depth maps obtained using the proposed approach are compared against Mustafa and Guillemaut in Figure 19. The depth maps obtained with the proposed approach are smoother, with lower reconstruction noise than the state-of-the-art methods. Figures 20 and 21 present qualitative and quantitative comparisons of our method with the state-of-the-art approaches.
Comparison of reconstructions demonstrates that the proposed method gives consistently more complete and accurate models. The colour maps highlight the quantitative differences in reconstruction. As far as we are aware no ground-truth data exist for dynamic scene reconstruction from real multi-view video. In Figure 21 we present a comparison with the reference mesh available with the Dance2 dataset reconstructed using a visual-hull approach. This comparison demonstrates improved reconstruction of fine detail with the proposed technique.
In contrast to all previous approaches, the proposed method gives temporally coherent 4D model reconstructions with dense surface correspondence over time. The introduction of temporal coherence constrains the reconstruction in regions which are ambiguous in a particular frame, such as the right leg of the juggler in Figure 20, resulting in more complete shape. Figure 22 shows three complete scene reconstructions with 4D models of multiple objects. The Juggler and Magician sequences are reconstructed from moving handheld cameras.

Computational complexity: Computation times for the proposed approach versus other methods are presented in Table 5. The proposed approach to reconstruct temporally coherent 4D models is comparable in computation time to per-frame multiple view reconstruction and gives a ∼50% reduction in computation cost compared to previous joint segmentation and reconstruction approaches using a known background. This efficiency is achieved through improved per-frame initialisation based on temporal propagation and the introduction of the geodesic star constraint in the joint optimisation. Further results can be found in the supplementary material.

Temporal coherence: A frame-to-frame alignment is obtained using the proposed approach, as shown in Figure 23 for the Dance1 and Juggler datasets. The meshes of the dynamic object in Frame 1 and Frame 9 are color coded in both datasets and the color is propagated to the next frame using the dense temporal coherence information. As seen from the figure, the color in the different parts of the object is retained in the next frame. The proposed approach obtains sequential temporal alignment which drifts with large movement of the object, hence successive frames are shown in the figure.
Limitations: As with previous dynamic scene reconstruction methods the proposed approach has a number of limitations: persistent ambiguities in appearance between objects will degrade the improvement achieved with temporal coherence; scenes with a large number of inter-occluding dynamic objects will degrade performance; the approach requires sufficient wide-baseline views to cover the scene.
Applications to immersive content production
The 4D meshes generated from the proposed approach can be used for applications in immersive content production such as FVV rendering and VR. This section demonstrates the results of these applications.
Free-viewpoint rendering
In FVV, the virtual viewpoint is controlled interactively by the user. The appearance of the reconstruction is sampled and interpolated directly from the captured camera images using cameras located close to the virtual viewpoint [57].
The proposed joint segmentation and reconstruction framework generates per-view silhouettes and a temporally coherent 4D reconstruction at each time instant of the input video sequence. This representation of the dynamic sequence is used for FVV rendering. To create FVV, a view-dependent surface texture is computed based on the user-selected virtual view. This virtual view is obtained by combining the information from camera views in close proximity to the virtual viewpoint [57]. FVV rendering gives the user the freedom to interactively choose a novel viewpoint in space from which to observe the dynamic scene, and reproduces fine-scale temporal surface details, such as the movement of hair and clothing wrinkles, that may not be modelled geometrically. An example of a reconstructed scene and the camera configuration is shown in Figure 24.
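A common way to realise this kind of view-dependent blending, sketched below under the assumption that each camera is summarised by a viewing direction, is to weight the k cameras closest in angle to the virtual viewpoint; the choice of k and the falloff exponent are illustrative and not taken from this work.

import numpy as np

def view_dependent_weights(virtual_dir, camera_dirs, k=4, falloff=8.0):
    # virtual_dir: (3,) viewing direction of the virtual camera;
    # camera_dirs: (M, 3) viewing directions of the capture cameras.
    v = virtual_dir / np.linalg.norm(virtual_dir)
    c = camera_dirs / np.linalg.norm(camera_dirs, axis=1, keepdims=True)
    cos_sim = np.clip(c @ v, 0.0, 1.0)            # ignore back-facing cameras
    nearest = np.argsort(-cos_sim)[:k]            # k closest cameras in angle
    w = np.zeros(len(camera_dirs))
    w[nearest] = cos_sim[nearest] ** falloff
    total = w.sum()
    return w / total if total > 0 else w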
A qualitative evaluation of images synthesised using FVV is shown in Figures 25 and 26. These show reconstruction results rendered from novel viewpoints, comparing the proposed method against Mustafa [43] and Guillemaut [23] on publicly available datasets. This is particularly important for wide-baseline camera configurations, where the technique can be used to synthesize intermediate viewpoints for which it may not be practical or economical to physically locate real cameras.
Virtual reality rendering
There is a growing demand for photo-realistic content in the creation of immersive VR experiences. The 4D temporally coherent reconstructions of dynamic scenes obtained using the proposed approach enable the creation of photo-realistic digital assets that can be incorporated into VR environments using game engines such as Unity and Unreal Engine, as shown in Figure 27 for a single frame of four datasets and for a series of frames of the Dance1 dataset.
In order to efficiently render the reconstructions in a game engine for applications in VR, a UV texture atlas is extracted using the 4D meshes from the proposed approach as a geometric proxy. The UV texture atlas at each frame is applied to the models at render time in Unity for viewing in a VR headset. A UV texture atlas is constructed by projectively texturing and blending multiple view frames onto a 2D unwrapped UV texture atlas, see Figure 28. This is performed once for each static object and at each time instant for dynamic objects, allowing efficient storage and real-time playback of static and dynamic textured reconstructions within a VR headset.
Conclusion
This paper introduced a novel technique to automatically segment and reconstruct dynamic scenes captured from multiple moving cameras in general dynamic uncontrolled environments without any prior on background appearance or structure. The proposed automatic initialization was used to identify and initialize the segmentation and reconstruction of multiple objects. A framework for temporally coherent 4D model reconstruction of dynamic scenes from a set of wide-baseline moving cameras. The approach gives a complete model of all static and dynamic non-rigid objects in the scene. Temporal coherence for dynamic objects addresses limitations of previous per-frame reconstruction giving improved reconstruction and segmentation together with dense temporal surface correspondence for dynamic objects. A sparse-to-dense approach is introduced to establish temporal correspondence for non-rigid objects using robust sparse feature matching to initialise dense optical flow providing an initial segmentation and reconstruction. Joint refinement of object reconstruction and segmentation is then performed using a multiple view optimisation with a novel geodesic star convexity constraint that gives improved shape estimation and is computationally efficient. Comparison against state-ofthe-art techniques for multiple view segmentation and reconstruction demonstrates significant improvement in performance for complex scenes. The approach enables reconstruction of 4D models for complex scenes which has not been demonstrated previously. | 8,667 |
1907.08195 | 2963385316 | Existing techniques for dynamic scene reconstruction from multiple wide-baseline cameras primarily focus on reconstruction in controlled environments, with fixed calibrated cameras and strong prior constraints. This paper introduces a general approach to obtain a 4D representation of complex dynamic scenes from multi-view wide-baseline static or moving cameras without prior knowledge of the scene structure, appearance, or illumination. Contributions of the work are: An automatic method for initial coarse reconstruction to initialize joint estimation; Sparse-to-dense temporal correspondence integrated with joint multi-view segmentation and reconstruction to introduce temporal coherence; and a general robust approach for joint segmentation refinement and dense reconstruction of dynamic scenes by introducing shape constraint. Comparison with state-of-the-art approaches on a variety of complex indoor and outdoor scenes, demonstrates improved accuracy in both multi-view segmentation and dense reconstruction. This paper demonstrates unsupervised reconstruction of complete temporally coherent 4D scene models with improved non-rigid object segmentation and shape reconstruction and its application to free-viewpoint rendering and virtual reality. | Research investigating spatio-temporal reconstruction across multiple frames was proposed by @cite_28 @cite_15 @cite_43 exploiting the temporal information from the previous frames using optical flow. An approach for recovering space-time consistent depth maps from multiple video sequences captured by stationary, synchronized and calibrated cameras for depth based free viewpoint video rendering was proposed by @cite_67 . However these methods require accurate initialisation, fixed and calibrated cameras and are limited to simple scenes. Other approaches to temporally coherent reconstruction @cite_65 either requires a large number of closely spaced cameras or bi-layer segmentation @cite_27 @cite_55 as a constraint for reconstruction. Recent approaches for spatio-temporal reconstruction of multi-view data either work on indoor studio data @cite_35 . | {
"abstract": [
"This paper introduces connectivity preserving constraints into spatio-temporal multi-view reconstruction. We efficiently model connectivity constraints by precomputing a geodesic shortest path tree on the occupancy likelihood. Connectivity of the final occupancy labeling is ensured with a set of linear constraints on the labeling function. In order to generalize the connectivity constraints from objects with genus 0 to an arbitrary genus, we detect loops by analyzing the visual hull of the scene. A modification of the constraints ensures connectivity in the presence of loops. The proposed efficient implementation adds little runtime and memory overhead to the reconstruction method. Several experiments show significant improvement over state-of-the-art methods and validate the practical use of this approach in scenes with fine structured details.",
"In this paper, we present a new approach for recovering spacetime-consistent depth maps from multiple video sequences captured by stationary, synchronized and calibrated cameras for depth based free viewpoint video rendering. Our two-pass approach is generalized from the recently proposed region-tree based binocular stereo matching method. In each pass, to enforce temporal consistency between successive depth maps, the traditional region-tree is extended into a temporal one by including connections to “temporal neighbor regions” in previous video frames, which are identified using estimated optical flow information. For enforcing spatial consistency, multi-view geometric constraints are used to identify inconsistencies between depth maps among different views which are captured in an inconsistency map for each view. Iterative optimizations are performed to progressively correct inconsistencies through inconsistency maps based depth hypotheses pruning and visibility reasoning. Furthermore, the background depth and color information is generated from the results of the first pass and is used in the second pass to enforce sequence-wise temporal consistency and to aid in identifying and correcting spatial inconsistencies. The extensive experimental evaluations have shown that our proposed approach is very effective in producing spatially and temporally consistent depth maps.",
"We model the dynamic geometry of a time-varying scene as a 3D isosurface in space-time. The intersection of the isosurface with planes of constant time yields the geometry at a single time instant. An optimal fit of our model to multiple video sequences is defined as the minimum of an energy functional. This functional is given by an integral over the entire hypersurface, which is designed to optimize photo-consistency. A PDE-based evolution derived from the Euler-Lagrange equation maximizes consistency with all of the given video data simultaneously. The result is a 3D model of the scene which varies smoothly over time. The geometry reconstructed by this scheme is significantly better than results obtained by space-carving approaches that do not enforce temporal coherence.",
"Accurate dense 3D reconstruction of dynamic scenes from natural images is still very challenging. Most previous methods rely on a large number of fixed cameras to obtain good results. Some of these methods further require separation of static and dynamic points, which are usually restricted to scenes with known background. We propose a novel dense depth estimation method which can automatically recover accurate and consistent depth maps from the synchronized video sequences taken by a few handheld cameras. Unlike fixed camera arrays, our data capturing setup is much more flexible and easier to use. Our algorithm simultaneously solves bilayer segmentation and depth estimation in a unified energy minimization framework, which combines different spatio-temporal constraints for effective depth optimization and segmentation of static and dynamic points. A variety of examples demonstrate the effectiveness of the proposed framework.",
"Modern large displacement optical flow algorithms usually use an initialization by either sparse descriptor matching techniques or dense approximate nearest neighbor fields. While the latter have the advantage of being dense, they have the major disadvantage of being very outlier prone as they are not designed to find the optical flow, but the visually most similar correspondence. In this paper we present a dense correspondence field approach that is much less outlier prone and thus much better suited for optical flow estimation than approximate nearest neighbor fields. Our approach is conceptually novel as it does not require explicit regularization, smoothing (like median filtering) or a new data term, but solely our novel purely data based search strategy that finds most inliers (even for small objects), while it effectively avoids finding outliers. Moreover, we present novel enhancements for outlier filtering. We show that our approach is better suited for large displacement optical flow estimation than state-of-the-art descriptor matching techniques. We do so by initializing EpicFlow (so far the best method on MPI-Sintel) with our Flow Fields instead of their originally used state-of-the-art descriptor matching technique. We significantly outperform the original EpicFlow on MPI-Sintel, KITTI and Middlebury.",
"Video-based segmentation and reconstruction techniques are predominantly extensions of techniques developed for the image domain treating each frame independently. These approaches ignore the temporal information contained in input videos which can lead to incoherent results. We propose a framework for joint segmentation and reconstruction which explicitly enforces temporal consistency by formulating the problem as an energy minimisation generalised to groups of frames. The main idea is to use optical flow in combination with a confidence measure to impose robust temporal smoothness constraints. Optimisation is performed using recent advances in the field of graph-cuts combined with practical considerations to reduce run-time and memory consumption. Experimental results with real sequences containing rapid motion demonstrate that the method is able to improve spatio-temporal coherence both in terms of segmentation and reconstruction without introducing any degradation in regions where optical flow fails due to fast motion.",
"Extracting high-quality dynamic foreground layers from a video sequence is a challenging problem due to the coupling of color, motion, and occlusion. Many approaches assume that the background scene is static or undergoes the planar perspective transformation. In this paper, we relax these restrictions and present a comprehensive system for accurately computing object motion, layer, and depth information. A novel algorithm that combines different clues to extract the foreground layer is proposed, where a voting-like scheme robust to outliers is employed in optimization. The system is capable of handling difficult examples in which the background is nonplanar and the camera freely moves during video capturing. Our work finds several applications, such as high-quality view interpolation and video editing.",
"We present an approach for 3D reconstruction from multiple video streams taken by static, synchronized and calibrated cameras that is capable of enforcing temporal consistency on the reconstruction of successive frames. Our goal is to improve the quality of the reconstruction by finding corresponding pixels in subsequent frames of the same camera using optical flow, and also to at least maintain the quality of the single time-frame reconstruction when these correspondences are wrong or cannot be found. This allows us to process scenes with fast motion, occlusions and self- occlusions where optical flow fails for large numbers of pixels. To this end, we modify the belief propagation algorithm to operate on a 3D graph that includes both spatial and temporal neighbors and to be able to discard messages from outlying neighbors. We also propose methods for introducing a bias and for suppressing noise typically observed in uniform regions. The bias encapsulates information about the background and aids in achieving a temporally consistent reconstruction and in the mitigation of occlusion related errors. We present results on publicly available real video sequences. We also present quantitative comparisons with results obtained by other researchers."
],
"cite_N": [
"@cite_35",
"@cite_67",
"@cite_28",
"@cite_55",
"@cite_65",
"@cite_43",
"@cite_27",
"@cite_15"
],
"mid": [
"174865904",
"2548381730",
"2101744775",
"76770379",
"2963317244",
"2084250169",
"2171825998",
"2101092098"
]
} | Temporally coherent general dynamic scene reconstruction | (Fig. 1: Temporally consistent scene reconstruction for the Odzemok dataset, color-coded to show the scene object segmentation obtained.)

Temporally coherent 4D models of dynamic scenes support effects in film and broadcast production and content production in virtual reality. The ultimate goal of modelling dynamic scenes from multiple cameras is automatic understanding of real-world scenes from distributed camera networks, for applications in robotics and other autonomous systems. Existing methods have applied multiple view dynamic scene reconstruction techniques in controlled environments with a known background or chroma-key studio [23,20,56,60]. Other multiple view stereo techniques require a relatively dense static camera network, resulting in a large number of cameras [19]. Extensions to more general outdoor scenes [5,32,60] use a prior reconstruction of the static geometry from images of the empty environment. However these methods either require accurate segmentation of dynamic foreground objects, or prior knowledge of the scene structure and background, or are limited to static cameras and controlled environments. Scenes are reconstructed semi-automatically, requiring manual intervention for segmentation/rotoscoping, and result in temporally incoherent per-frame mesh geometries. Temporally coherent geometry with known surface correspondence across the sequence is essential for real-world applications and compact representation.
Our paper addresses the limitations of existing approaches by introducing a methodology for unsupervised temporally coherent dynamic scene reconstruction from multiple wide-baseline static or moving camera views, without prior knowledge of the scene structure or background appearance. This temporally coherent dynamic scene reconstruction is demonstrated in applications for immersive content production such as free-viewpoint video (FVV) and virtual reality (VR). This work combines two previously published papers on general dynamic reconstruction [42] and temporally coherent reconstruction [43] into a single framework, and demonstrates the application of this novel unsupervised joint segmentation and reconstruction to immersive content production (FVV and VR, Section 5).
The input is a sparse set of synchronised videos of an unknown dynamic scene from multiple moving cameras, without prior scene segmentation or camera calibration. Our first contribution is automatic initialisation of camera calibration and sparse scene reconstruction from sparse feature correspondence, using sparse feature detection and matching between pairs of frames. An initial coarse reconstruction and segmentation of all scene objects is obtained from sparse features matched across multiple views. This eliminates the requirement for prior knowledge of the background scene appearance or structure. Our second contribution is a sparse-to-dense reconstruction and segmentation approach that introduces temporal coherence at every frame. We exploit the temporal coherence of the scene to overcome the visual ambiguities inherent in single-frame reconstruction and multiple-view segmentation methods for general scenes. Temporal coherence refers to the correspondence between the 3D surfaces of all objects observed over time. Our third contribution is spatio-temporal alignment to estimate dense surface correspondence for 4D reconstruction. A geodesic star convexity shape constraint is introduced for the shape segmentation to improve the segmentation quality for non-rigid objects with complex appearance. The proposed approach overcomes the limitations of existing methods, allowing unsupervised temporally coherent 4D reconstruction of complete models for general dynamic scenes.
The scene is automatically decomposed into a set of spatio-temporally coherent objects, as shown in Figure 1, where the resulting 4D scene reconstruction has temporally coherent labels and surface correspondence for each object. This can be used for free-viewpoint video rendering and imported into a game engine for VR experience production. The contributions explained above can be summarized as follows:
- Unsupervised temporally coherent dense reconstruction and segmentation of general complex dynamic scenes from multiple wide-baseline views.
- Automatic initialization of dynamic object segmentation and reconstruction from sparse features.
- A framework for space-time sparse-to-dense segmentation, reconstruction and temporal correspondence.
- Robust spatio-temporal refinement of dense reconstruction and segmentation integrating error-tolerant photo-consistency and edge information using geodesic star convexity.
- Robust and computationally efficient reconstruction of dynamic scenes by exploiting temporal coherence.
- Real-world applications of 4D reconstruction to free-viewpoint video rendering and virtual reality.
This paper is structured as follows: first, related work is reviewed; the methodology for general dynamic scene reconstruction is then introduced; finally, a thorough qualitative and quantitative evaluation and comparison to the state-of-the-art on challenging datasets is presented.
Related Work
Temporally coherent reconstruction is a challenging task for general dynamic scenes due to a number of factors such as motion blur, articulated, non-rigid and large motion of multiple people, resolution differences between camera views, occlusions, wide-baselines, errors in calibration and cluttered dynamic backgrounds. Segmentation of dynamic objects from such scenes is difficult because of foreground and background complexity and the likelihood of overlapping background and foreground color distributions. Reconstruction is also challenging due to limited visual cues and relatively large errors affecting both calibration and extraction of a globally consistent solution. This section reviews previous work on dynamic scene reconstruction and segmentation.
Dynamic Scene Reconstruction
Dense dynamic shape reconstruction is a fundamental problem and a heavily studied area in the field of computer vision. Recovering accurate 3D models of a dynamically evolving, non-rigid scene observed by multiple synchronised cameras is a challenging task. Research on multiple view dense dynamic reconstruction has primarily focused on indoor scenes with controlled illumination and static backgrounds, extending methods for multiple view reconstruction of static scenes [53] to sequences [62]. Deep learning based approaches have been introduced to estimate the shape of dynamic objects from minimal camera views in constrained environments [29,68] and for rigid objects [58]. In the last decade, focus has shifted to more challenging outdoor scenes captured with both static and moving cameras. Reconstruction of non-rigid dynamic objects in uncontrolled natural environments is challenging due to scene complexity, illumination changes, shadows, occlusion and dynamic backgrounds with clutter such as trees or people. Methods have been proposed for multi-view reconstruction [65,39,37] requiring a large number of closely spaced cameras for surface estimation of dynamic shape. Practical applications require relatively sparse moving cameras to acquire coverage over large areas such as outdoor scenes. A number of approaches for multi-view reconstruction of outdoor scenes require an initial silhouette segmentation [67,32,22,23] to allow visual-hull reconstruction. Most of these approaches to general dynamic scene reconstruction fail in the case of complex (cluttered) scenes captured with moving cameras.
A recent work proposed reconstruction of dynamic fluids [50] for static cameras. Another work used RGB-D cameras to obtain reconstruction of non-rigid surfaces [55]. Pioneering research in general dynamic scene reconstruction from multiple handheld wide-baseline cameras [5,60] exploited prior reconstruction of the background scene to allow dynamic foreground segmentation and reconstruction. Recent work [46] estimates shape of dynamic objects from handheld cameras exploiting GANs. However these approaches either work for static/indoor scenes or exploit strong prior assumptions such as silhouette information, known background or scene structure. Also all these approaches give per frame reconstruction leading to temporally incoherent geometries. Our aim is to perform temporally coherent dense reconstruction of unknown dynamic non-rigid scenes automatically without strong priors or limitations on scene structure.
Joint Segmentation and Reconstruction
Many of the existing multi-view reconstruction approaches rely on a two-stage sequential pipeline where foreground or background segmentation is initially performed independently with respect to each camera, and then used as input to obtain a visual hull for multi-view reconstruction. The problem with this approach is that the errors introduced at the segmentation stage cannot be recovered and are propagated to the reconstruction stage, reducing the final reconstruction quality. Segmentation from multiple wide-baseline views has been proposed by exploiting appearance similarity [17,38,70]. These approaches assume static backgrounds and different colour distributions for the foreground and background [52,17], which limits applicability for general scenes.
Joint segmentation and reconstruction methods incorporate estimation of segmentation or matting with reconstruction to provide a combined solution. Joint refinement avoids the propagation of errors between the two stages thereby making the solution more robust. Also, cues from segmentation and reconstruction can be combined efficiently to achieve more accurate results. The first multi-view joint estimation system was proposed by Szeliski et al. [59] which used iterative gradient descent to perform an energy minimization. A number of approaches were introduced for joint formulation in static scenes and one recent work used training data to classify the segments [69]. The focus shifted to joint segmentation and reconstruction for rigid objects in indoor and outdoor environments. These approaches used a variety of techniques such as patch-based refinement [54,48] and fixating cameras on the object of interest [11] for reconstructing rigid objects in the scene. However, these are either limited to static scenes [69,26] or process each frame independently thereby failing to enforce temporal consistency [11,23].
Joint reconstruction and segmentation on monocular video was proposed in [36,3,12], achieving semantic segmentation limited to rigid objects in street scenes. Practical application of joint estimation requires these approaches to work on non-rigid objects such as humans with clothing. A multi-layer joint segmentation and reconstruction approach was proposed for multiple view video of sports and indoor scenes [23]. The algorithm used known background images of the scene without the dynamic foreground objects to obtain an initial segmentation. Visual-hull based reconstruction was performed with a known prior foreground/background using a background image plate with fixed and calibrated cameras. This visual hull was used as a prior and was optimized by a combination of photo-consistency, silhouette, color and sparse feature information in an energy minimization framework to improve the segmentation and reconstruction quality. Although structurally similar to our approach, it requires the scene to be captured by fixed calibrated cameras and an a priori known fixed background plate to estimate the initial visual hull by background subtraction. The proposed approach overcomes these limitations, allowing moving cameras and unknown scene backgrounds.
An approach based on optical flow and graph cuts was shown to work well for non-rigid objects in indoor settings but requires known background segmentation to obtain silhouettes and is computationally expensive [24]. Practical application of temporally coherent joint estimation requires approaches that work on non-rigid objects for general scenes in uncontrolled environments. A quantitative evaluation of techniques for multi-view reconstruction was presented in [53]. These methods are able to produce high quality results, but rely on good initializations and strong prior assumptions with known and controlled (static) scene backgrounds.
The proposed method exploits the advantages of joint segmentation and reconstruction and addresses the limitations of existing methods by introducing a novel approach to reconstruct general dynamic scenes automatically from wide-baseline cameras with no prior. To overcome the limitations of existing methods, the proposed approach automatically initialises the foreground object segmentation from wide-baseline correspondence without prior knowledge of the scene. This is followed by a joint spatio-temporal reconstruction and segmentation of general scenes. Temporal correspondence is exploited to overcome visual ambiguities giving improved reconstruction together with temporal coherence of surface correspondence to obtain 4D scene models.
Temporally Coherent 4D Reconstruction
Temporally coherent 4D reconstruction refers to aligning the 3D surfaces of non-rigid objects over time for a dynamic sequence. This is achieved by estimating point-to-point correspondences for the 3D surfaces to obtain a 4D temporally coherent reconstruction. 4D models enable efficient representations for practical applications in film, broadcast and immersive content production such as virtual, augmented and mixed reality. The majority of existing approaches for reconstruction of dynamic scenes from multi-view videos process each time frame independently due to the difficulty of simultaneously estimating temporal correspondence for non-rigid objects. Independent per-frame reconstruction can result in errors due to the inherent visual ambiguity caused by occlusion and similar object appearance in general scenes. Recent research has shown that exploiting temporal information can improve reconstruction accuracy as well as achieve temporal coherence [43].
3D scene flow estimates frame to frame correspondence whereas 4D temporal coherence estimates correspondence across the complete sequence to obtain a single surface model. Methods to estimate 3D scene flow have been reported in the literature [41] for autonomous vehicles. However this approach is limited to narrow baseline cameras. Other scene flow approaches are dependent on 2D optical flow [66,6] and they require an accurate estimate for most of the pixels which fails in the case of large motion. However, 3D scene flow methods align two frames independently and do not produce temporally coherent 4D models.
Research investigating spatio-temporal reconstruction across multiple frames was proposed by [20,37,24], exploiting the temporal information from previous frames using optical flow. An approach for recovering space-time consistent depth maps from multiple video sequences captured by stationary, synchronized and calibrated cameras for depth-based free-viewpoint video rendering was proposed by [39]. However, these methods require accurate initialisation and fixed, calibrated cameras, and are limited to simple scenes. Other approaches to temporally coherent reconstruction [4] either require a large number of closely spaced cameras or rely on bi-layer segmentation [72,30] as a constraint for reconstruction. Recent approaches for spatio-temporal reconstruction of multi-view data are restricted to indoor studio data [47].
The framework proposed in this paper addresses limitations of existing approaches and gives 4D temporally coherent reconstruction for general dynamic indoor or outdoor scenes with large non-rigid motions, repetitive texture, uncontrolled illumination, and large capture volume. The scenes are captured with sparse static/moving cameras. The proposed approach gives 4D models of complete scenes with both static and dynamic objects for real-world applications (FVV and VR) with no prior knowledge of scene structure.
Multi-view Video Segmentation
In the field of image segmentation, approaches have been proposed to provide temporally consistent monocular video segmentation [21,49,45,71]. Hierarchical segmentation based on graphs was proposed in [21], and directed acyclic graphs were used to propose an object followed by segmentation [71]. Optical flow is used to identify and consistently segment objects [45,49]. Recently, a number of approaches have been proposed for multi-view foreground object segmentation by exploiting appearance similarity spatially across views [16,35,38,70]. An approach for space-time multi-view segmentation was proposed by [17]. However, multi-view approaches assume a static background and different colour distributions for the foreground and background, which limits their applicability for general scenes and non-rigid objects.
To address this issue we introduce a novel method for spatio-temporal multi-view segmentation of dynamic scenes using shape constraints. Single image segmentation techniques using shape constraints provide good results for complex scene segmentation [25] (convex and concave shapes), but require manual interaction. The proposed approach performs automatic multi-view video segmentation by initializing the foreground object model using spatio-temporal information from wide-baseline feature correspondence, followed by a multi-layer optimization framework. Geodesic star convexity, previously used in single view segmentation [25], is applied to constrain the segmentation in each view. Our multi-view formulation naturally enforces coherent segmentation between views and also resolves ambiguities such as the similarity of background and foreground in isolated views.
Summary and Motivation
Image-based temporally coherent 4D dynamic scene reconstruction without a prior model or constraints on the scene structure is a key problem in computer vision. Existing dense reconstruction algorithms need some strong initial prior and constraints for the solution to converge such as background, structure, and segmentation, which limits their application for automatic reconstruction of general scenes. Current approaches are also commonly limited to independent per-frame reconstruction and do not exploit temporal information or produce a coherent model with known correspondence.
The approach proposed in this paper aims to overcome the limitations of existing approaches to enable robust temporally coherent wide-baseline multiple view reconstruction of general dynamic scenes without prior assumptions on scene appearance, structure or segmentation of the moving objects. Static and dynamic objects in the scene are identified for simultaneous segmentation and reconstruction using geometry and appearance cues in a sparse-to-dense optimization framework. Temporal coherence is introduced to improve the quality of the reconstruction and geodesic star convexity is used to improve the quality of segmentation. The static and dynamic elements are fused automatically in both the temporal and spatial domain to obtain the final 4D scene reconstruction.
This paper presents a unified framework, novel in combining multiple view joint reconstruction and segmentation with temporal coherence to improve per-frame reconstruction performance, producing a single framework from the initial work presented in [43,42]. In particular, the approach gives a 4D surface model with full correspondence over time. A comprehensive experimental evaluation with comparison to the state-of-the-art in segmentation, reconstruction and 4D modelling is also presented, extending previous work. Application of the resulting 4D models to free-viewpoint video rendering and content production for immersive virtual reality experiences is also presented.
Methodology
This work is motivated by the limitations of existing multiple view reconstruction methods which either work independently at each frame resulting in errors due to visual ambiguity [19,23], or require restrictive assumptions on scene complexity and structure and often assume prior camera calibration and foreground segmentation [60,24]. We address these issues by initializing the joint reconstruction and segmentation algorithm automatically, introducing temporal coherence in the reconstruction and geodesic star convexity in segmentation to reduce ambiguity and ensure consistent non-rigid structure initialization at successive frames. The proposed approach is demonstrated to achieve improved reconstruction and segmentation performance over state-of-the-art approaches and produce temporally coherent 4D models of complex dynamic scenes.
Overview
An overview of the proposed framework for temporally coherent multi-view reconstruction is presented in Figure 2 and consists of the following stages:
Multi-view video: The scenes are captured using multiple video cameras (static/moving) separated by a wide baseline (> 15°). The cameras can be synchronized during capture using a time-code generator or afterwards using the audio information. Camera extrinsic calibration and scene structure are assumed to be unknown.
Sparse reconstruction: The intrinsics are assumed to be known. Segmentation-based feature detection (SFD) [44] is used to obtain a relatively large number of sparse features suitable for wide-baseline matching, distributed throughout the scene including on dynamic objects such as people. SFD features are matched between views using a SIFT descriptor, giving camera extrinsics and a sparse 3D point cloud for each time instant of the entire sequence [27].
Initial scene segmentation and reconstruction - Section 3.2: Automatic initialisation is performed without prior knowledge of the scene structure or appearance to obtain an initial approximation for each object. The sparse point cloud is clustered in 3D [51], with each cluster representing a unique foreground object. Object segmentation increases efficiency and improves the robustness of the 4D models. This reconstruction is refined using the framework explained in Section 3.4 to obtain the segmentation and dense reconstruction of each object.
Sparse-to-dense temporal reconstruction with temporal coherence - Section 3.3: Temporal coherence is introduced in the framework to initialize the coarse reconstruction and obtain frame-to-frame dense correspondences for dynamic objects. Dynamic object regions are detected at each time instant by sparse temporal correspondence of SFD features at successive frames. Sparse temporal feature correspondence allows propagation of the dense reconstruction for each dynamic object to obtain an initial approximation.
Joint object-based sparse-to-dense temporally coherent refinement of shape and segmentation - Section 3.4: The initial estimate is refined for each object per view through joint optimisation of shape and segmentation, using a robust cost function combining matching, color, contrast and smoothness information for wide-baseline matching with a geodesic star convexity constraint. A single 3D model for each dynamic object is obtained by fusion of the view-dependent depth maps using Poisson surface reconstruction [31] (a minimal sketch of this fusion step is given after this overview). Surface orientation is estimated based on neighbouring pixels.
Applications - Section 5: The 4D representation from the proposed joint segmentation and reconstruction framework has a number of applications in media production, including free-viewpoint video (FVV) rendering and virtual reality (VR).
The process above is repeated for the entire sequence for all objects in the first frame and for dynamic objects at each time-instant. The proposed approach enables automatic reconstruction of all objects in the scene as a 4D mesh sequence. Subsequent sections present the novel contributions of this work in initialisation and refinement to obtain a dense temporally coherent reconstruction. The approach is demonstrated to outperform previous approaches to dynamic scene reconstruction and does not require prior knowledge of the scene.
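To make the depth-map fusion step concrete, the following is a minimal sketch of how per-view depth maps for one object could be back-projected and fused into a single surface with Poisson reconstruction. It uses Open3D as a stand-in for the Poisson implementation of [31]; the camera conventions, normal-estimation parameters and Poisson octree depth are illustrative assumptions, not the exact settings used in the paper.

```python
import numpy as np
import open3d as o3d

def backproject_depth(depth, K, cam_to_world):
    """Back-project a per-view depth map (H x W) into world-space 3D points."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    valid = depth > 0
    z = depth[valid]
    x = (u[valid] - K[0, 2]) * z / K[0, 0]
    y = (v[valid] - K[1, 2]) * z / K[1, 1]
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=1)
    return (cam_to_world @ pts_cam.T).T[:, :3]

def fuse_depth_maps(depth_maps, intrinsics, extrinsics, poisson_depth=9):
    """Fuse view-dependent depth maps of one object into a single mesh."""
    pts = np.concatenate([
        backproject_depth(d, K, T)
        for d, K, T in zip(depth_maps, intrinsics, extrinsics)
    ], axis=0)
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(pts)
    # Surface orientation estimated from neighbouring points, as described above.
    pcd.estimate_normals(o3d.geometry.KDTreeSearchParamKNN(knn=30))
    pcd.orient_normals_consistent_tangent_plane(30)
    mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=poisson_depth)
    return mesh
```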
Initial Scene Segmentation and Reconstruction
For general dynamic scene reconstruction, we need to reconstruct and segment the objects in the scene. This requires an initial coarse approximation for initialisation of a subsequent refinement step that optimises the segmentation and reconstruction with respect to each camera view. We introduce an approach based on sparse point cloud clustering; an overview is shown in Figure 3. Initialisation gives a complete coarse segmentation and reconstruction of each object in the first frame of the sequence for subsequent refinement. The dense reconstructions of the foreground objects and the background are combined to obtain a full scene reconstruction at the first time instant. A rough geometric proxy of the background is created using this method. For consecutive time instants, dynamic objects and newly appearing objects are identified, and only these objects are reconstructed and segmented. The reconstruction of static objects is retained, which reduces computational complexity. The optic flow and cluster information for each dynamic object ensure that the same labels are retained for the entire sequence.
Sparse Point-cloud Clustering
The sparse representation of the scene is processed to remove outliers using point neighbourhood statistics to filter outlier data [51]. We segment the objects in the sparse scene reconstruction; this allows only moving objects to be reconstructed at each frame for efficiency, and it also allows object shape similarity to be propagated across frames to increase the robustness of reconstruction.
We use a data clustering approach based on the 3D grid subdivision of the space using an octree data structure in Euclidean space to segment objects at each frame. In a more general sense, nearest neighbour information is used to cluster the points, which is essentially similar to a flood fill algorithm. We choose this clustering approach because of its computational efficiency and robustness. The approach allows segmentation of the objects in the scene and is demonstrated to work well for cluttered and general outdoor scenes, as shown in Section 4.
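The clustering itself can be summarised by the following sketch, which performs a greedy Euclidean flood fill over a KD-tree rather than the octree structure described above; the neighbourhood radius and minimum cluster size are illustrative, scene-dependent values rather than the exact parameters used in the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def euclidean_clusters(points, radius=0.15, min_size=50):
    """Greedy Euclidean clustering of a sparse 3D point cloud via a flood fill
    over a KD-tree. Returns per-point labels: -1 for outliers/background,
    0..K-1 for object clusters."""
    tree = cKDTree(points)
    labels = -np.ones(len(points), dtype=int)
    current = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        frontier, members = [seed], [seed]
        labels[seed] = current
        while frontier:
            idx = frontier.pop()
            for nb in tree.query_ball_point(points[idx], r=radius):
                if labels[nb] == -1:
                    labels[nb] = current
                    frontier.append(nb)
                    members.append(nb)
        if len(members) < min_size:
            labels[members] = -1   # too small: treat as background/outliers
        else:
            current += 1
    return labels
```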
Objects with insufficient detected features are reconstructed as part of the scene background. Appearing, disappearing and reappearing objects are handled by sparse dynamic feature tracking, explained in Section 3.3. Clustering results are shown in Figure 3. This is followed by a sparse-to-dense coarse object based approach to segment and reconstruct general dynamic scenes.
Coarse Object Reconstruction
The process to obtain the coarse reconstruction for the first frame of the sequence is shown in Figure 4. The sparse representation of each element is back-projected on the rectified image pair for each view. Delaunay triangulation [18] is performed on the set of back projected points for each cluster on one image and is propagated to the second image using the sparse matched features. Triangles with edge length greater than the median length of edges of all triangles are removed. For each remaining triangle pair direct linear transform is used to estimate the affine homography. Displacement at each pixel within the triangle pair is estimated by interpolation to get an initial dense disparity map for each cluster in the 2D image pair labelled as R I depicted in red in Figure 4. The initial coarse reconstruction for the observed objects in the scene is used to define the depth hypotheses at each pixel for the optimization.
The region R_I does not ensure complete coverage of the object, so we extrapolate this region to obtain a region R_O (shown in yellow) in 2D, by 5% of the average distance between the boundary points (R_I) and the centroid of the object. To allow for errors in the initial approximate depth from sparse features, we add volume in front of and behind the projected surface by an error tolerance along the optical ray of the camera. This ensures that the object boundaries lie within the extrapolated initial coarse estimate; the depth at each pixel in the combined regions may not be accurate. The tolerance for extrapolation varies depending on whether a pixel belongs to R_I or R_O, as the propagated pixels of the extrapolated region (R_O) may have a higher level of error than the points from the sparse representation (R_I), requiring a comparatively higher tolerance. The calculation of the threshold depends on the capture volume of the dataset and is set to 1% of the capture volume for R_O and half that value for R_I. This volume in 3D corresponds to our initial coarse reconstruction of each object and enables us to remove the dependency of existing approaches on a background plate or visual hull estimate. This process of cluster identification and initial coarse object reconstruction is performed for multiple objects in general environments. The initial object segmentation using point cloud clustering and coarse segmentation is insensitive to parameters; throughout this work the same parameters are used for all datasets. The result of this process is a coarse initial object segmentation and reconstruction for each object.
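A simplified sketch of the densification step is shown below. It triangulates the back-projected sparse features, rejects unreliable long triangles and interpolates disparity inside the remaining triangles; the per-triangle affine homography estimation via direct linear transform described above is approximated here by piecewise-linear interpolation, and the edge-length factor is an illustrative choice.

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import LinearNDInterpolator

def coarse_disparity_from_sparse(pts2d, disparities, image_shape, edge_factor=1.0):
    """Densify sparse disparities for one cluster: Delaunay triangulation,
    rejection of triangles with any edge above the median edge length, and
    piecewise-linear interpolation of disparity inside retained triangles."""
    tri = Delaunay(pts2d)
    simplices = tri.simplices
    # Edge lengths per triangle (pairs of consecutive vertices).
    edges = np.linalg.norm(
        pts2d[simplices] - pts2d[np.roll(simplices, 1, axis=1)], axis=2)
    keep = edges.max(axis=1) <= edge_factor * np.median(edges)
    interp = LinearNDInterpolator(pts2d, disparities)
    H, W = image_shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u.ravel(), v.ravel()], axis=1)
    dense = interp(pix).reshape(H, W)
    # Mask pixels falling outside the hull or in rejected triangles.
    simplex_id = tri.find_simplex(pix)
    valid = (simplex_id >= 0) & keep[np.clip(simplex_id, 0, len(keep) - 1)]
    dense[~valid.reshape(H, W)] = np.nan
    return dense   # coarse disparity for region R_I of this cluster
```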
Sparse-to-dense temporal reconstruction with temporal coherence
Once the static scene reconstruction is obtained for the first frame, we perform temporally coherent reconstruction for dynamic objects at successive time instants instead of whole scene reconstruction for computational efficiency and to avoid redundancy. The initial coarse reconstruction for each dynamic region is refined in the subsequent optimization step with respect to each camera view. Dynamic scene objects are identified from the temporal correspondence of sparse feature points. Sparse correspondence is used to propagate an initial model of the moving object for refinement. Figure 5 presents the sparse reconstruction and temporal correspondence. New objects are identified per frame from the clustered sparse reconstruction and are labelled as dynamic objects. Sparse temporal dynamic feature tracking: Numerous approaches have been proposed to track moving objects in 2D using either features or optical flow. However these methods may fail in the case of occlusion, movement parallel to the view direction, large motions and moving cameras. To overcome these limitations we match the sparse 3D feature points obtained using SFD [44] from multiple wide-baseline views at each time instant. The use of sparse 3D features is robust to large non-rigid motion, occlusions and camera movement. SFD detects sparse features which are stable across wide-baseline views and consecutive time instants for a moving camera and dynamic scene. Sparse 3D feature matches between consecutive time instants are back-projected to each view. These features are matched temporally using SIFT descriptor to identify the moving points. Robust matching is achieved by enforcing multiple view consistency for the temporal feature correspondence in each view as illustrated in Figure 6. Each match must satisfy the constraint:
$\left\| H_{t,v}(p) + u_{t,r}(p + H_{t,v}(p)) - u_{t,v}(p) - H_{t,r}(p + u_{t,v}(p)) \right\| < \epsilon \quad (1)$
where $p$ is the feature image point in view $v$ at frame $t$, $H_{t,v}(p)$ is the disparity at frame $t$ between views $v$ and $r$, and $u_{t,v}(p)$ is the temporal correspondence from frame $t$ to $t+1$ for view $v$. The multi-view consistency check ensures that correspondences between any two views remain temporally consistent for successive frames. Matches in the 2D domain are sensitive to camera movement and occlusion, hence we map the set of refined matches into 3D to make the system robust to camera motion. The Frobenius norm is applied on the 3D point gradients in all directions [71] to obtain the 'net' motion at each sparse point. The 'net' motions between pairs of 3D points for consecutive time instants are ranked, and the top and bottom 5 percentile values are removed. Median filtering is then applied to identify the dynamic features. Figure 7 shows an example with moving cameras for the Juggler dataset [5].
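A direct transcription of the consistency test of Equation 1 is sketched below; the disparity and flow maps are assumed to be provided as callable look-ups returning 2D offsets for a pixel, and the pixel tolerance is an illustrative value rather than the threshold used in the paper.

```python
import numpy as np

def temporally_consistent(p, H_tv, H_tr, u_tv, u_tr, eps=2.0):
    """Multi-view temporal consistency check of Eq. (1) for a feature at pixel
    p = (x, y) in view v at frame t. H_tv/H_tr map a pixel to its inter-view
    disparity offset; u_tv/u_tr map a pixel to its temporal flow offset."""
    p = np.asarray(p, dtype=float)
    residual = (H_tv(p)
                + u_tr(p + H_tv(p))
                - u_tv(p)
                - H_tr(p + u_tv(p)))
    return np.linalg.norm(residual) < eps
```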
Sparse-to-dense model reconstruction: Dynamic 3D feature points are used to initialize the segmentation and reconstruction of the initial model. This avoids the assumption of static backgrounds and prior scene segmentation commonly used to initialise multiple view reconstruction with a coarse visual-hull approximation [23]. Temporal coherence also provides a more accurate initialisation to overcome visual ambiguities at individual frames. Figure 8 illustrates the use of temporal coherence for reconstruction initialisation and refinement. Dynamic feature correspondence is used to identify the mesh for each dynamic object. This mesh is back-projected on each view to obtain the region of interest. Lucas-Kanade optical flow [8] is performed on the projected mask for each view in the temporal domain, using the dynamic feature correspondences over time as initialization. Dense multi-view wide-baseline correspondences from the previous frame are propagated to the current frame using the information from the flow vectors to obtain dense multi-view matches in the current frame. The matches are triangulated in 3D to obtain a refined 3D dense model of the dynamic object for the current frame. For dynamic scenes, a new object may enter the scene or a new part may appear as the object moves. To allow the introduction of new objects and object parts we also use information from the cluster of sparse points for each dynamic object. The cluster corresponding to the dynamic features is identified and static points are removed. This ensures that the set of new points contains not only the dynamic features but also the unprocessed points which represent new parts of the object. These points are added to the refined sparse model of the dynamic object. To handle new objects we detect new clusters at each time instant and consider them as dynamic regions. The sparse-to-dense initial coarse reconstruction improves the quality of segmentation and reconstruction after the refinement. Examples of the improvement in segmentation and reconstruction for the Odzemok [1] and Juggler [5] datasets are shown in Figure 9. As observed, the limbs of the people are retained by using information from the previous frames in both cases.
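The flow-based propagation step can be sketched as follows using pyramidal Lucas-Kanade tracking with a forward-backward check; the window size, pyramid levels and error thresholds are illustrative, and the optional mask restricts propagated points to the back-projected object region.

```python
import cv2
import numpy as np

def propagate_dense_points(prev_gray, curr_gray, prev_pts, mask=None):
    """Propagate dense 2D correspondences of a dynamic object from frame t to
    t+1 with pyramidal Lucas-Kanade flow, keeping only well-tracked points."""
    prev_pts = prev_pts.astype(np.float32).reshape(-1, 1, 2)
    lk = dict(winSize=(21, 21), maxLevel=3,
              criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))
    nxt, st, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, prev_pts, None, **lk)
    back, st2, _ = cv2.calcOpticalFlowPyrLK(curr_gray, prev_gray, nxt, None, **lk)
    fb_err = np.linalg.norm(prev_pts - back, axis=2).ravel()
    good = (st.ravel() == 1) & (st2.ravel() == 1) & (fb_err < 1.0)
    if mask is not None:
        xy = np.round(nxt.reshape(-1, 2)).astype(int)
        inside = ((xy[:, 0] >= 0) & (xy[:, 0] < mask.shape[1]) &
                  (xy[:, 1] >= 0) & (xy[:, 1] < mask.shape[0]))
        in_mask = np.zeros(len(xy), dtype=bool)
        in_mask[inside] = mask[xy[inside, 1], xy[inside, 0]] > 0
        good &= in_mask
    return nxt.reshape(-1, 2), good   # propagated points and validity flags
```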
Joint object-based sparse-to-dense temporally coherent refinement of shape and segmentation
The initial reconstruction and segmentation from dense temporal feature correspondence is refined using a joint optimization framework. A novel shape constraint is introduced based on geodesic star convexity which has previously been shown to give improved performance in interactive image segmentation for structures with fine details (for example a person's fingers or hair) [25]. Shape is a powerful cue for object recognition and segmentation. Shape models represented as distance transforms from a template have been used for category specific segmentation [33]. Some works have introduced generic connectivity constraints for segmentation showing that obtaining a globally optimal solutions under the connectivity constraint is NP-hard [64]. Veksler et al. have used shape constraint in segmentation framework by enforcing star convexity prior on the segmentation, and globally optimal solutions are achieved subject to this constraint [63]. The star convexity constraint ensures connectivity to seed points, and is a stronger assumption than plain connectivity. An example of a star-convex object is shown in Figure 10 along with a failure case for a non-rigid articulate object. To handle more complex objects the idea of geodesic forests with multiple star centres was introduced to obtain a globally optimal solution for interactive 2D object segmentation [25]. The main focus was to introduce shape constraints in interactive segmentation, by means of a geodesic star convexity prior. The notion of connectivity was extended from Euclidean to geodesic so that paths can bend and adapt to image data as opposed to straight Euclidean rays, thus extending visibility and reducing the number of star centers required.
(Fig. 10: (a) Representation of star convexity — the left object is an example of a star-convex object, with a star center marked; the object on the right, with a plausible star center, shows deviations from star-convexity in the fine details. (b) Multiple star semantics for joint refinement — single star center based segmentation is depicted on the left and multiple star centers on the right.)
The geodesic star-convexity is integrated as a constraint on the energy minimisation for joint multi-view reconstruction and segmentation [23]. In this work the shape constraint is automatically initialised for each view from the initial segmentation. The shape constraint is based on the geodesic distance with the foreground object initialisation (seeds) as star centres to which the object shape is restricted. The union formed by multiple object seeds forms a geodesic forest. This allows complex shapes to be segmented. To automatically initialize the segmentation we use the sparse temporal feature correspondences as star centers (seeds) to build a geodesic forest automatically. The region outside the initial coarse reconstruction of all dynamic objects is initialized as the background seed for segmentation, as shown in Figure 12. The shape of the dynamic object is restricted by this geodesic distance constraint, which depends on the image gradient. Comparison with existing methods for multi-view segmentation demonstrates improvements in the recovery of fine detail structure, as illustrated in Figure 12.
Once we have a set of dense 3D points for each dynamic object, Poisson surface reconstruction is performed on the set of sparse points to obtain an initial coarse model of each dynamic region R, which is subsequently refined using the optimization framework (Section 3.4.1).
Optimization on initial coarse object reconstruction based on geodesic star convexity
The depth of the initial coarse reconstruction estimate is refined per view for each dynamic object at a per pixel level. View-dependent optimisation of depth is performed with respect to each camera which is robust to errors in camera calibration and initialisation. Calibration inaccuracies produce inconsistencies limiting the applicability of global reconstruction techniques which simultaneously consider all views; view-dependent techniques are more tolerant to such inaccuracies because they only use a subset of the views for reconstruction of depth from each camera view.
Our goal is to assign an accurate depth value from a set of depth values $\mathcal{D} = \{d_1, ..., d_{|\mathcal{D}|-1}, U\}$ and a layer label from a set of label values $\mathcal{L} = \{l_1, ..., l_{|\mathcal{L}|}\}$ to each pixel $p$ in the region $R$ of each dynamic object. Each $d_i$ is obtained by sampling the optical ray from the camera, and $U$ is an unknown depth value to handle occlusions. This is achieved by optimisation of a joint cost function [23] for label (segmentation) and depth (reconstruction):
$E(l, d) = \lambda_{data} E_{data}(d) + \lambda_{contrast} E_{contrast}(l) + \lambda_{smooth} E_{smooth}(l, d) + \lambda_{color} E_{color}(l) \quad (2)$
where $d$ is the depth at each pixel, $l$ is the layer label for multiple objects and the cost function terms are defined in Section 3.4.2. The equation consists of four terms: the data term is for the photo-consistency scores, the smoothness term is to avoid sudden peaks in depth and maintain consistency, and the color and contrast terms are to identify the object boundaries. Data and smoothness terms are commonly used to solve reconstruction problems [7] and the color and contrast terms are used for segmentation [34]. This is solved subject to a geodesic star-convexity constraint on the labels $l$. A label $l$ is star convex with respect to center $c$ if every point $p \in l$ is visible to a star center $c$ via $l$ in the image $x$, which can be expressed as an energy cost:
$E^{\star}(l \mid x, c) = \sum_{p \in R} \sum_{q \in \Gamma_{c,p}} E_{p,q}(l_p, l_q) \quad (3)$
$\forall q \in \Gamma_{c,p}: \quad E_{p,q}(l_p, l_q) = \begin{cases} \infty & \text{if } l_p \neq l_q \\ 0 & \text{otherwise} \end{cases} \quad (4)$
where $\forall p \in R: p \in l \Leftrightarrow l_p = 1$, and $\Gamma_{c,p}$ is the geodesic path joining $p$ to the star center $c$, given by:
$\Gamma_{c,p} = \arg\min_{\Gamma \in \mathcal{P}_{c,p}} L(\Gamma) \quad (5)$
where $\mathcal{P}_{c,p}$ denotes the set of all discrete paths between $c$ and $p$, and $L(\Gamma)$ is the length of the discrete geodesic path as defined in [25]. In the case of image segmentation the gradients in the underlying image provide the information to compute the discrete paths between each pixel and the star centers, and $L(\Gamma)$ is defined below:
$L(\Gamma) = \sum_{i=1}^{N_D - 1} \sqrt{(1 - \delta_g)\, j(\Gamma_i, \Gamma_{i+1})^2 + \delta_g\, \nabla I(\Gamma_i)^2} \quad (6)$
where $\Gamma$ is an arbitrary parametrized discrete path with $N_D$ pixels given by $\Gamma_1, \Gamma_2, \cdots, \Gamma_{N_D}$, $j(\Gamma_i, \Gamma_{i+1})$ is the Euclidean distance between successive pixels, and the quantity $\nabla I(\Gamma_i)^2$ is a finite difference approximation of the image gradient between the points $\Gamma_i, \Gamma_{i+1}$. The parameter $\delta_g$ weights the Euclidean distance against the geodesic length. Using the above definition, one can define the geodesic distance as in Equation 5.
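A compact way to realise the geodesic forest of Equations 5-6 is a Dijkstra search over the pixel grid with the blended Euclidean/gradient edge weight, as sketched below; the predecessor map it returns encodes the discrete geodesic paths used later by the star-convexity constraint. The 8-connected neighbourhood and the value of delta_g are illustrative assumptions.

```python
import heapq
import numpy as np

def geodesic_distance_map(gray, seeds, delta_g=0.7):
    """Geodesic distances from a set of star centres (seeds) over the image
    grid, using an edge weight blending Euclidean step length and image
    gradient as in Eq. (6). Returns (distance map, predecessor map)."""
    H, W = gray.shape
    dist = np.full((H, W), np.inf)
    pred = -np.ones((H, W, 2), dtype=int)
    heap = []
    for (y, x) in seeds:
        dist[y, x] = 0.0
        heapq.heappush(heap, (0.0, y, x))
    steps = [(-1, 0), (1, 0), (0, -1), (0, 1),
             (-1, -1), (-1, 1), (1, -1), (1, 1)]
    while heap:
        d, y, x = heapq.heappop(heap)
        if d > dist[y, x]:
            continue
        for dy, dx in steps:
            ny, nx = y + dy, x + dx
            if 0 <= ny < H and 0 <= nx < W:
                euc = np.hypot(dy, dx)
                grad = abs(float(gray[ny, nx]) - float(gray[y, x]))
                w = np.sqrt((1 - delta_g) * euc ** 2 + delta_g * grad ** 2)
                if d + w < dist[ny, nx]:
                    dist[ny, nx] = d + w
                    pred[ny, nx] = (y, x)   # geodesic predecessor towards a seed
                    heapq.heappush(heap, (d + w, ny, nx))
    return dist, pred
```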
An extension of single star-convexity is to use multiple stars to define a more general class of shapes. The introduction of multiple star centers reduces the path lengths and increases the visibility of small parts of objects like small limbs, as shown in Figure 10. Hence Equation 3 is extended to multiple stars. A label $l$ is star convex with respect to center $c_i$ if every point $p \in l$ is visible to a star center $c_i$ in the set $\mathcal{C} = \{c_1, ..., c_{N_T}\}$ via $l$ in the image $x$, where $N_T$ is the number of star centers [25]. This is expressed as an energy cost:
$E^{\star}(l \mid x, \mathcal{C}) = \sum_{p \in R} \sum_{q \in \Gamma_{c,p}} E_{p,q}(l_p, l_q) \quad (7)$
In our case all the correct temporal sparse feature correspondences are used as star centers; hence the segmentation will include all the points which are visible to these sparse features via geodesic distances in the region R, thereby enforcing the shape constraint. Since the star centers are selected automatically, the method is unsupervised. A comparison of the geodesic multi-star convexity constraint against no constraint and a Euclidean multi-star convexity constraint is shown in Figure 11. The figure demonstrates the usefulness of the proposed approach, with an improvement in segmentation quality on non-rigid complex objects. The energy in Equation 2 is minimized as follows:
$\min_{(l,d)} E(l, d) \;\; \text{s.t.} \;\; l \in \mathcal{S}^{\star}(\mathcal{C}) \quad \Leftrightarrow \quad \min_{(l,d)} E(l, d) + E^{\star}(l \mid x, \mathcal{C}) \quad (8)$
where $\mathcal{S}^{\star}(\mathcal{C})$ is the set of all shapes which lie within the geodesic distances with respect to the centers in $\mathcal{C}$. Optimization of Equation 8, subject to each pixel $p$ in the region $R$ being at a geodesic distance $\Gamma_{c,p}$ from the star centers in the set $\mathcal{C}$, is performed using the $\alpha$-expansion algorithm by iterating through the set of labels in $\mathcal{L} \times \mathcal{D}$ [10]. Graph-cut is used to obtain a local optimum [9]. The improvement in the results from using geodesic star convexity in the framework is shown in Figure 12, and from using temporal coherence in Figure 9. Figure 13 shows the improvements using the geodesic shape constraint, temporal coherence, and the combined proposed approach for the Dance2 [2] dataset.
(Fig. 12: Geodesic star convexity — a region $R$ with star centers $\mathcal{C}$ connected by geodesic distances $\Gamma_{c,p}$; segmentation results with and without the geodesic star convexity based optimization are shown on the right for the Juggler dataset.)
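For intuition, the sketch below shows how the hard star-convexity constraint can be imposed inside a binary foreground/background s-t min-cut: each pixel gets an infinite-capacity edge towards its geodesic predecessor (the pred map from the geodesic-distance sketch above), forbidding a foreground pixel whose path to the star centres passes through background. This is a simplified binary illustration using networkx max-flow, not the joint label-and-depth alpha-expansion optimisation used in the paper; the unary and pairwise inputs are assumed to be precomputed.

```python
import networkx as nx
import numpy as np

def segment_with_star_convexity(unary_fg, unary_bg, pairwise_w, pred, region):
    """Binary segmentation of one object region with a hard geodesic
    star-convexity constraint encoded as infinite-capacity edges."""
    H, W = unary_fg.shape
    G = nx.DiGraph()
    INF = 1e9
    node = lambda y, x: y * W + x
    for y in range(H):
        for x in range(W):
            if not region[y, x]:
                continue
            # Terminal links: s->p cut when p is background, p->t cut when foreground.
            G.add_edge('s', node(y, x), capacity=float(unary_bg[y, x]))
            G.add_edge(node(y, x), 't', capacity=float(unary_fg[y, x]))
            # Potts smoothness with 4-neighbours.
            if x + 1 < W and region[y, x + 1]:
                G.add_edge(node(y, x), node(y, x + 1), capacity=pairwise_w)
                G.add_edge(node(y, x + 1), node(y, x), capacity=pairwise_w)
            if y + 1 < H and region[y + 1, x]:
                G.add_edge(node(y, x), node(y + 1, x), capacity=pairwise_w)
                G.add_edge(node(y + 1, x), node(y, x), capacity=pairwise_w)
            # Star constraint: if p is foreground, its geodesic predecessor must be too.
            py, px = pred[y, x]
            if py >= 0:
                G.add_edge(node(y, x), node(py, px), capacity=INF)
    _, (src_side, _) = nx.minimum_cut(G, 's', 't')
    fg = np.zeros((H, W), dtype=bool)
    for n in src_side:
        if n != 's':
            fg[n // W, n % W] = True
    return fg
```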
Energy cost function for joint segmentation and reconstruction
For completeness, in this section we define each of the terms in Equation 2. These are based on the terms previously used for joint optimisation over depth for each pixel introduced in [42], with a modification of the color matching term to improve robustness and an extension to multiple labels.
Matching term: The data term for matching between views is specified as a measure of photo-consistency (Figure 14) as follows:
$E_{data}(d) = \sum_{p \in \mathcal{P}} e_{data}(p, d_p), \qquad e_{data}(p, d_p) = \begin{cases} M(p, q) = \sum_{i \in O_k} m(p, q) & \text{if } d_p \neq U \\ M_U & \text{if } d_p = U \end{cases} \quad (9)$
where $\mathcal{P}$ is the 4-connected neighbourhood of pixel $p$, $M_U$ is the fixed cost of labelling a pixel unknown, and $q$ denotes the projection of the hypothesised point $P$ in an auxiliary camera, where $P$ is a 3D point along the optical ray passing through pixel $p$ located at a distance $d_p$ from the reference camera. $O_k$ is the set of the $k$ most photo-consistent pairs. For textured scenes Normalized Cross Correlation (NCC) over a squared window is a common choice [53]. The NCC values range from -1 to 1 and are mapped to non-negative values by using the function $1 - NCC$.
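The photo-consistency measure can be sketched as a small NCC helper whose value is mapped to the non-negative cost 1 - NCC, as below; the epsilon guard against zero variance is an illustrative implementation detail.

```python
import numpy as np

def ncc(patch_a, patch_b, eps=1e-6):
    """Normalised cross-correlation of two equally sized image patches."""
    a = patch_a.astype(float) - patch_a.mean()
    b = patch_b.astype(float) - patch_b.mean()
    return float((a * b).sum() / (np.sqrt((a ** 2).sum() * (b ** 2).sum()) + eps))

def matching_cost(patch_a, patch_b):
    """Non-negative photo-consistency cost 1 - NCC, in the range [0, 2]."""
    return 1.0 - ncc(patch_a, patch_b)
```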
A maximum likelihood measure [40] is used in this function for confidence value calculation between the center pixel p and the other pixels q and is based on the survey on confidence measures for stereo [28]. The measure is defined as:
$m(p, q) = \dfrac{\exp\!\left(-\frac{c_{min}}{2\sigma_i^2}\right)}{\sum_{(p,q) \in N} \exp\!\left(-\frac{1 - NCC(p,q)}{2\sigma_i^2}\right)} \quad (10)$
where $\sigma_i^2$ is the noise variance for each auxiliary camera $i$; this parameter was fixed to 0.3. $N$ denotes the set of interacting pixels in $\mathcal{P}$. $c_{min}$ is the minimum cost for a pixel, obtained by evaluating the function $(1 - NCC(\cdot,\cdot))$ on a $15 \times 15$ window. Contrast term: Segmentation boundaries in images tend to align with contours of high contrast and it is desirable to represent this as a constraint in stereo matching. A consistent interpretation of segmentation-prior and contrast-likelihood is used from [34]. We used a modified version of this interpretation in our formulation to preserve the edges by using bilateral filtering [61] instead of Gaussian filtering. The contrast term is as follows:
$E_{contrast}(l) = \sum_{(p,q) \in N} e_{contrast}(p, q, l_p, l_q) \quad (11)$
$e_{contrast}(p, q, l_p, l_q) = \begin{cases} 0 & \text{if } l_p = l_q \\ \frac{1}{1+\epsilon}\left(\epsilon + \exp(-C(p,q))\right) & \text{otherwise} \end{cases} \quad (12)$
where $\|\cdot\|$ is the $L_2$ norm and $\epsilon = 1$. The simplest choice for $C(p,q)$ would be the squared Euclidean color distance between the intensities at pixels $p$ and $q$, as used in [23]. We propose a term for better segmentation:
$C(p,q) = \dfrac{\|B(p) - B(q)\|^2}{2\,\sigma_{pq}^2\, d_{pq}^2}$
where $B(\cdot)$ represents the bilateral filter, $d_{pq}$ is the Euclidean distance between $p$ and $q$, and $\sigma_{pq} = \left\langle \|B(p) - B(q)\|^2 / d_{pq}^2 \right\rangle$.
This term enables the removal of regions with low photo-consistency scores and weak edges, and thereby helps in estimating the object boundaries.
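A sketch of the contrast term follows, with the bilateral filtering done once per image; the bilateral filter parameters are illustrative, and sigma_pq is assumed to be supplied by the normalisation described above.

```python
import cv2
import numpy as np

def bilateral_image(img, d=9, sigma_color=75, sigma_space=75):
    """Edge-preserving smoothing B(.) used by the contrast term."""
    return cv2.bilateralFilter(img, d, sigma_color, sigma_space).astype(float)

def contrast_cost(B, p, q, sigma_pq, eps=1.0):
    """Pairwise contrast cost e_contrast of Eq. (12) for neighbouring pixels
    p = (y, x) and q = (y, x) carrying different labels."""
    diff2 = float(np.sum((B[p] - B[q]) ** 2))
    d2 = float((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)
    C = diff2 / (2.0 * sigma_pq ** 2 * d2)
    return (eps + np.exp(-C)) / (1.0 + eps)
```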
Smoothness term: This term is inspired by [23] and it ensures the depth labels vary smoothly within the object reducing noise and peaks in the reconstructed surface. This is useful when the photo-consistency score is low and insufficient to assign depth to a pixel ( Figure 14). It is defined as:
$E_{smooth}(l, d) = \sum_{(p,q) \in N} e_{smooth}(l_p, d_p, l_q, d_q) \quad (13)$
$e_{smooth}(l_p, d_p, l_q, d_q) = \begin{cases} \min(|d_p - d_q|,\, d_{max}) & \text{if } l_p = l_q \text{ and } d_p, d_q \neq U \\ 0 & \text{if } l_p = l_q \text{ and } d_p, d_q = U \\ d_{max} & \text{otherwise} \end{cases} \quad (14)$
$d_{max}$ is set to 50 times the size of the depth sampling step for all datasets.
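The truncated smoothness penalty of Equation 14 translates directly into code; the sentinel used for the unknown depth label is an illustrative choice.

```python
def smoothness_cost(l_p, d_p, l_q, d_q, d_max, UNKNOWN=None):
    """Pairwise smoothness e_smooth of Eq. (14): truncated depth difference for
    equally labelled pixels with known depths, zero when both depths are
    unknown, and the maximum penalty otherwise."""
    if l_p == l_q and d_p is not UNKNOWN and d_q is not UNKNOWN:
        return min(abs(d_p - d_q), d_max)
    if l_p == l_q and d_p is UNKNOWN and d_q is UNKNOWN:
        return 0.0
    return d_max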
Color term: This term is computed using the negative log likelihood [9] of the color models learned from the foreground and background markers. The star centers obtained from the sparse 3D features are foreground markers and for background markers we consider the region outside the projected initial coarse reconstruction for each view. The color models use GMMs with 5 components each for Foreground/Background mixed with uniform color models [14] as the markers are sparse.
$E_{color}(l) = \sum_{p \in \mathcal{P}} -\log P(I_p \mid l_p) \quad (15)$
where $P(I_p \mid l_p = l_i)$ denotes the probability of pixel $p$ in the reference image belonging to layer $l_i$.
(Fig. 15: Comparison of segmentation on benchmark static datasets using geodesic star-convexity.)
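The colour term can be sketched with standard Gaussian mixture models; the uniform-mixture weight and the assumption of an 8-bit RGB colour space are illustrative choices not specified above.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_color_models(fg_pixels, bg_pixels, n_components=5):
    """Fit the 5-component foreground/background colour GMMs from the sparse
    star-centre (foreground) and outside-region (background) samples."""
    fg = GaussianMixture(n_components=n_components).fit(fg_pixels)
    bg = GaussianMixture(n_components=n_components).fit(bg_pixels)
    return fg, bg

def color_cost(pixels, gmm, uniform_weight=0.1):
    """Per-pixel colour cost of Eq. (15): negative log-likelihood under the
    GMM, mixed with a uniform colour model because the markers are sparse."""
    log_gmm = gmm.score_samples(pixels)          # log p(I_p | l_p) under the GMM
    uniform = np.log(1.0 / (256.0 ** 3))         # uniform density over 8-bit RGB
    log_mix = np.logaddexp(np.log(1 - uniform_weight) + log_gmm,
                           np.log(uniform_weight) + uniform)
    return -log_mix
```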
Results and Performance Evaluation
The proposed system is tested on publicly available multi-view research datasets of indoor and outdoor scenes, details of datasets explained in Table 1. The parameters used for all the datasets are defined in Table 2. More information is available on the website 1 .
Multi-view segmentation evaluation
Segmentation is evaluated against the state-of-the-art methods for multi-view segmentation, Kowdle [35] and Djelouah [16], for static scenes, and against the joint segmentation and reconstruction methods Mustafa [42] (per frame) and Guillemaut [24] (using temporal information) for both static and dynamic scenes. For static multi-view data the segmentation is initialised as detailed in Section 3.1, followed by refinement using the constrained optimisation of Section 3.4.1. For dynamic scenes the full pipeline with temporal coherence is used, as detailed in Section 3. Ground-truth is obtained by manually labelling the foreground for the Office, Dance1 and Odzemok datasets; for the other datasets ground-truth is available online. We initialize all approaches with the same proposed initial coarse reconstruction for a fair comparison.
To evaluate the segmentation we measure completeness as the ratio of intersection to union with ground-truth [35]. Comparisons are shown in Table 3 and Figures 15 and 16 for static benchmark datasets. Comparisons for dynamic scene segmentation are shown in Table 4 and Figures 17 and 18. Results for multi-view segmentation of static scenes are more accurate than Djelouah, Mustafa, and Guillemaut, and comparable to Kowdle, with improved segmentation of some detail such as the back of the chair.
For dynamic scenes the geodesic star convexity based optimization together with temporal consistency gives improved segmentation of fine detail, such as the legs of the table in the Office dataset and the limbs of the person in the Juggler, Magician and Dance2 datasets, as shown in Figures 17 and 18. This overcomes limitations of previous multi-view per-frame segmentation.
Reconstruction evaluation
Reconstruction results obtained using the proposed method are compared against Mustafa [42], Guillemaut [24], and Furukawa [19] for dynamic sequences. Furukawa [19] is a per-frame multi-view wide-baseline stereo approach which ranks highly on the middlebury benchmark [53] but does not refine the segmentation.
The depth maps obtained using the proposed approach are compared against Mustafa and Guillemaut in Figure 19. The depth maps obtained using the proposed approach are smoother, with lower reconstruction noise compared to the state-of-the-art methods. Figures 20 and 21 present qualitative and quantitative comparisons of our method with the state-of-the-art approaches.
Comparison of reconstructions demonstrates that the proposed method gives consistently more complete and accurate models. The colour maps highlight the quantitative differences in reconstruction. As far as we are aware no ground-truth data exist for dynamic scene reconstruction from real multi-view video. In Figure 21 we present a comparison with the reference mesh available with the Dance2 dataset reconstructed using a visual-hull approach. This comparison demonstrates improved reconstruction of fine detail with the proposed technique.
In contrast to all previous approaches the proposed method gives temporally coherent 4D model reconstructions with dense surface correspondence over time. The introduction of temporal coherence constrains the reconstruction in regions which are ambiguous at a particular frame, such as the right leg of the juggler in Figure 20, resulting in a more complete shape. Figure 22 shows three complete scene reconstructions with 4D models of multiple objects. The Juggler and Magician sequences are reconstructed from moving handheld cameras.
Computational Complexity: Computation times for the proposed approach vs other methods are presented in Table 5. The proposed approach to reconstruct temporally coherent 4D models is comparable in computation time to per-frame multiple view reconstruction and gives a ∼50% reduction in computation cost compared to previous joint segmentation and reconstruction approaches using a known background. This efficiency is achieved through improved per-frame initialisation based on temporal propagation and the introduction of the geodesic star constraint in joint optimisation. Further results can be found in the supplementary material.
Temporal coherence: A frame-to-frame alignment is obtained using the proposed approach as shown in Figure 23 for the Dance1 and Juggler datasets. The meshes of the dynamic object in Frame 1 and Frame 9 are color coded in both datasets and the color is propagated to the next frame using the dense temporal coherence information. The color in different parts of the object is retained in the next frame, as seen in the figure. The proposed approach obtains sequential temporal alignment which drifts with large movement of the object, hence successive frames are shown in the figure.
Limitations: As with previous dynamic scene reconstruction methods the proposed approach has a number of limitations: persistent ambiguities in appearance between objects will degrade the improvement achieved with temporal coherence; scenes with a large number of inter-occluding dynamic objects will degrade performance; the approach requires sufficient wide-baseline views to cover the scene.
Applications to immersive content production
The 4D meshes generated from the proposed approach can be used for applications in immersive content production such as FVV rendering and VR. This section demonstrates the results of these applications.
Free-viewpoint rendering
In FVV, the virtual viewpoint is controlled interactively by the user. The appearance of the reconstruction is sampled and interpolated directly from the captured camera images using cameras located close to the virtual viewpoint [57].
The proposed joint segmentation and reconstruction framework generates per-view silhouettes and a temporally coherent 4D reconstruction at each time instant of the input video sequence. This representation of the dynamic sequence is used for FVV rendering. To create FVV, a view-dependent surface texture is computed based on the user selected virtual view. This virtual view is obtained by combining the information from camera views in close proximity to the virtual viewpoint [57]. FVV rendering gives user the freedom to interactively choose a novel viewpoint in space to observe the dynamic scene and reproduces fine scale temporal surface details, such as the movement of hair and clothing wrinkles, that may not be modelled geometrically. An example of a reconstructed scene and the camera configuration is shown in Figure 24.
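A minimal sketch of the view-dependent blending used for FVV is given below: the cameras whose viewing directions are closest to the user-selected virtual viewpoint are selected and their sampled colours blended with angular weights. The number of cameras and the falloff exponent are illustrative choices, not the exact scheme of [57].

```python
import numpy as np

def view_dependent_weights(virtual_dir, camera_dirs, k=3, power=4.0):
    """Select the k cameras closest in viewing direction to the virtual view
    and return their indices with normalised blending weights."""
    v = virtual_dir / np.linalg.norm(virtual_dir)
    C = camera_dirs / np.linalg.norm(camera_dirs, axis=1, keepdims=True)
    cos_sim = np.clip(C @ v, -1.0, 1.0)
    nearest = np.argsort(-cos_sim)[:k]
    w = np.maximum(cos_sim[nearest], 0.0) ** power
    w = w / (w.sum() + 1e-8)
    return nearest, w   # blend colours sampled from these cameras with weights w
```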
A qualitative evaluation of images synthesised using FVV is shown in Figure 25 and 26. These demonstrate reconstruction results rendered from novel viewpoints from the proposed method against Mustafa [43] and Guillemaut [23] on publicly available datasets. This is particularly important for wide-baseline camera configurations where this technique can be used to synthesize intermediate viewpoints where it may not be practical or economical to physically locate real cameras.
Virtual reality rendering
There is a growing demand for photo-realistic content in the creation of immersive VR experiences. The 4D temporally coherent reconstructions of the dynamic scenes obtained using the proposed approach enables the creation of photo-realistic digital assets that can be incorporated into VR environments using game engines such as Unity and Unreal Engine, as shown in Figure 27 for single frame of four datasets and for a series of frames for Dance1 dataset.
In order to efficiently render the reconstructions in a game engine for applications in VR, a UV texture atlas is extracted using the 4D meshes from the proposed approach as a geometric proxy. The UV texture atlas at each frame is applied to the models at render time in Unity for viewing in a VR headset. A UV texture atlas is constructed by projectively texturing and blending multiple view frames onto a 2D unwrapped UV texture atlas, see Figure 28. This is performed once for each static object and at each time instant for dynamic objects, allowing efficient storage and real-time playback of static and dynamic textured reconstructions within a VR headset.
Conclusion
This paper introduced a novel technique to automatically segment and reconstruct dynamic scenes captured from multiple moving cameras in general dynamic uncontrolled environments without any prior on background appearance or structure. The proposed automatic initialization was used to identify and initialize the segmentation and reconstruction of multiple objects. A framework was presented for temporally coherent 4D model reconstruction of dynamic scenes from a set of wide-baseline moving cameras. The approach gives a complete model of all static and dynamic non-rigid objects in the scene. Temporal coherence for dynamic objects addresses limitations of previous per-frame reconstruction, giving improved reconstruction and segmentation together with dense temporal surface correspondence for dynamic objects. A sparse-to-dense approach is introduced to establish temporal correspondence for non-rigid objects using robust sparse feature matching to initialise dense optical flow, providing an initial segmentation and reconstruction. Joint refinement of object reconstruction and segmentation is then performed using a multiple view optimisation with a novel geodesic star convexity constraint that gives improved shape estimation and is computationally efficient. Comparison against state-of-the-art techniques for multiple view segmentation and reconstruction demonstrates significant improvement in performance for complex scenes. The approach enables reconstruction of 4D models for complex scenes which has not been demonstrated previously. | 8,667
1907.08195 | 2963385316 | Existing techniques for dynamic scene reconstruction from multiple wide-baseline cameras primarily focus on reconstruction in controlled environments, with fixed calibrated cameras and strong prior constraints. This paper introduces a general approach to obtain a 4D representation of complex dynamic scenes from multi-view wide-baseline static or moving cameras without prior knowledge of the scene structure, appearance, or illumination. Contributions of the work are: An automatic method for initial coarse reconstruction to initialize joint estimation; Sparse-to-dense temporal correspondence integrated with joint multi-view segmentation and reconstruction to introduce temporal coherence; and a general robust approach for joint segmentation refinement and dense reconstruction of dynamic scenes by introducing shape constraint. Comparison with state-of-the-art approaches on a variety of complex indoor and outdoor scenes, demonstrates improved accuracy in both multi-view segmentation and dense reconstruction. This paper demonstrates unsupervised reconstruction of complete temporally coherent 4D scene models with improved non-rigid object segmentation and shape reconstruction and its application to free-viewpoint rendering and virtual reality. | In the field of image segmentation, approaches have been proposed to provide temporally consistent monocular video segmentation @cite_71 @cite_70 @cite_5 @cite_61 . Hierarchical segmentation based on graphs was proposed in @cite_71 , directed acyclic graph were used to propose an object followed by segmentation @cite_61 . Optical flow is used to identify and consistently segment objects @cite_5 @cite_70 . Recently a number of approaches have been proposed for multi-view foreground object segmentation by exploiting appearance similarity spatially across views @cite_24 @cite_49 @cite_44 @cite_20 . An approach for space-time multi-view segmentation was proposed by @cite_2 . However, multi-view approaches assume a static background and different colour distributions for the foreground and background which limits applicability for general scenes and non-rigid objects. | {
"abstract": [
"In this paper, we propose a novel approach to extract primary object segments in videos in the object proposal' domain. The extracted primary object regions are then used to build object models for optimized video segmentation. The proposed approach has several contributions: First, a novel layered Directed Acyclic Graph (DAG) based framework is presented for detection and segmentation of the primary object in video. We exploit the fact that, in general, objects are spatially cohesive and characterized by locally smooth motion trajectories, to extract the primary object from the set of all available proposals based on motion, appearance and predicted-shape similarity across frames. Second, the DAG is initialized with an enhanced object proposal set where motion based proposal predictions (from adjacent frames) are used to expand the set of object proposals for a particular frame. Last, the paper presents a motion scoring function for selection of object proposals that emphasizes high optical flow gradients at proposal boundaries to discriminate between moving objects and the background. The proposed approach is evaluated using several challenging benchmark videos and it outperforms both unsupervised and supervised state-of-the-art methods.",
"We present a technique for separating foreground objects from the background in a video. Our method is fast, fully automatic, and makes minimal assumptions about the video. This enables handling essentially unconstrained settings, including rapidly moving background, arbitrary object motion and appearance, and non-rigid deformations and articulations. In experiments on two datasets containing over 1400 video shots, our method outperforms a state-of-the-art background subtraction technique [4] as well as methods based on clustering point tracks [6, 18, 19]. Moreover, it performs comparably to recent video object segmentation methods based on object proposals [14, 16, 27], while being orders of magnitude faster.",
"Multiple view segmentation consists in segmenting objects simultaneously in several views. A key issue in that respect and compared to monocular settings is to ensure propagation of segmentation information between views while minimizing complexity and computational cost. In this work, we first investigate the idea that examining measurements at the projections of a sparse set of 3D points is sufficient to achieve this goal. The proposed algorithm softly assigns each of these 3D samples to the scene background if it projects on the background region in at least one view, or to the foreground if it projects on foreground region in all views. Second, we show how other modalities such as depth may be seamlessly integrated in the model and benefit the segmentation. The paper exposes a detailed set of experiments used to validate the algorithm, showing results comparable with the state of art, with reduced computational complexity. We also discuss the use of different modalities for specific situations, such as dealing with a low number of viewpoints or a scene with color ambiguities between foreground and background.",
"In this paper, we address the problem of object segmentation in multiple views or videos when two or more viewpoints of the same scene are available. We propose a new approach that propagates segmentation coherence information in both space and time, hence allowing evidences in one image to be shared over the complete set. To this aim the segmentation is cast as a single efficient labeling problem over space and time with graph cuts. In contrast to most existing multi-view segmentation methods that rely on some form of dense reconstruction, ours only requires a sparse 3D sampling to propagate information between viewpoints. The approach is thoroughly evaluated on standard multi-view datasets, as well as on videos. With static views, results compete with state of the art methods but they are achieved with significantly fewer viewpoints. With multiple videos, we report results that demonstrate the benefit of segmentation propagation through temporal cues.",
"In this paper, we present a method for extracting consistent foreground regions when multiple views of a scene are available. We propose a framework that automatically identifies such regions in images under the assumption that, in each image, background and foreground regions present different color properties. To achieve this task, monocular color information is not sufficient and we exploit the spatial consistency constraint that several image projections of the same space region must satisfy. Combining the monocular color consistency constraint with multiview spatial constraints allows us to automatically and simultaneously segment the foreground and background regions in multiview images. In contrast to standard background subtraction methods, the proposed approach does not require a priori knowledge of the background nor user interaction. Experimental results under realistic scenarios demonstrate the effectiveness of the method for multiple camera set ups.",
"We present an efficient and scalable technique for spatiotemporal segmentation of long video sequences using a hierarchical graph-based algorithm. We begin by over-segmenting a volumetric video graph into space-time regions grouped by appearance. We then construct a “region graph” over the obtained segmentation and iteratively repeat this process over multiple levels to create a tree of spatio-temporal segmentations. This hierarchical approach generates high quality segmentations, which are temporally coherent with stable region boundaries, and allows subsequent applications to choose from varying levels of granularity. We further improve segmentation quality by using dense optical flow to guide temporal connections in the initial graph. We also propose two novel approaches to improve the scalability of our technique: (a) a parallel out-of-core algorithm that can process volumes much larger than an in-core algorithm, and (b) a clip-based processing algorithm that divides the video into overlapping clips in time, and segments them successively while enforcing consistency. We demonstrate hierarchical segmentations on video shots as long as 40 seconds, and even support a streaming mode for arbitrarily long videos, albeit without the ability to process them hierarchically.",
"We present an automatic approach to segment an object in calibrated images acquired from multiple viewpoints. Our system starts with a new piecewise planar layer-based stereo algorithm that estimates a dense depth map that consists of a set of 3D planar surfaces. The algorithm is formulated using an energy minimization framework that combines stereo and appearance cues, where for each surface, an appearance model is learnt using an unsupervised approach. By treating the planar surfaces as structural elements of the scene and reasoning about their visibility in multiple views, we segment the object in each image independently. Finally, these segmentations are refined by probabilistically fusing information across multiple views. We demonstrate that our approach can segment challenging objects with complex shapes and topologies, which may have thin structures and non-Lambertian surfaces. It can also handle scenarios where the object and background color distributions overlap significantly.",
"In moving camera videos, motion segmentation is commonly performed using the image plane motion of pixels, or optical flow. However, objects that are at different depths from the camera can exhibit different optical flows even if they share the same real-world motion. This can cause a depth-dependent segmentation of the scene. Our goal is to develop a segmentation algorithm that clusters pixels that have similar real-world motion irrespective of their depth in the scene. Our solution uses optical flow orientations instead of the complete vectors and exploits the well-known property that under camera translation, optical flow orientations are independent of object depth. We introduce a probabilistic model that automatically estimates the number of observed independent motions and results in a labeling that is consistent with real-world motion in the scene. The result of our system is that static objects are correctly identified as one segment, even if they are at different depths. Color features and information from previous frames in the video sequence are used to correct occasional errors due to the orientation-based segmentation. We present results on more than thirty videos from different benchmarks. The system is particularly robust on complex background scenes containing objects at significantly different depths.",
"This invention relates to a novel energy absorbing isolation device which will absorb and dissipate a major portion of the energy associated with vehicle collisions. The present invention comprises a cylindrical tube, housing a plurality of Belleville spring washers which are compressed on impact by the wide portion of a movable shaft having a relatively wide portion and a relatively narrow portion. The relatively narrow portion of the shaft advances axially into the cylindrical tube as the Belleville washers are compressed. The energy of impact is absorbed and dissipated by compression of the Belleville washers and by interactions between the washers, the inside surface of the cylindrical tube, and the narrow portion of the shaft."
],
"cite_N": [
"@cite_61",
"@cite_70",
"@cite_2",
"@cite_24",
"@cite_44",
"@cite_71",
"@cite_49",
"@cite_5",
"@cite_20"
],
"mid": [
"2155598147",
"2113708607",
"2070926764",
"2150590906",
"2163046003",
"2030346542",
"1559395077",
"2171116555",
"1511535428"
]
} | Temporally coherent general dynamic scene reconstruction | (Fig. 1: Temporally consistent scene reconstruction for the Odzemok dataset, color-coded to show the scene object segmentation obtained.) [...] effects in film and broadcast production and for content production in virtual reality. The ultimate goal of modelling dynamic scenes from multiple cameras is automatic understanding of real-world scenes from distributed camera networks, for applications in robotics and other autonomous systems. Existing methods have applied multiple view dynamic scene reconstruction techniques in controlled environments with a known background or chroma-key studio [23,20,56,60]. Other multiple view stereo techniques require a relatively dense static camera network, resulting in a large number of cameras [19]. Extensions to more general outdoor scenes [5,32,60] use prior reconstruction of the static geometry from images of the empty environment. However, these methods either require accurate segmentation of dynamic foreground objects, or prior knowledge of the scene structure and background, or are limited to static cameras and controlled environments. Scenes are reconstructed semi-automatically, requiring manual intervention for segmentation/rotoscoping, and result in temporally incoherent per-frame mesh geometries. Temporally coherent geometry with known surface correspondence across the sequence is essential for real-world applications and compact representation.
Our paper addresses the limitations of existing approaches by introducing a methodology for unsupervised temporally coherent dynamic scene reconstruction from multiple wide-baseline static or moving camera views without prior knowledge of the scene structure or background appearance. This temporally coherent dynamic scene reconstruction is demonstrated to work in applications for immersive content production such as free-viewpoint video (FVV) and virtual reality (VR). This work combines two previously published papers in general dynamic reconstruction [42] and temporally coherent reconstruction [43] into a single framework and demonstrates application of this novel unsupervised joint segmentation and reconstruction in immersive content production FVV and VR (Section 5).
The input is a sparse set of synchronised videos from multiple moving cameras of an unknown dynamic scene, without prior scene segmentation or camera calibration. Our first contribution is automatic initialisation of camera calibration and sparse scene reconstruction from sparse feature correspondence, using feature detection and matching between pairs of frames. An initial coarse reconstruction and segmentation of all scene objects is obtained from sparse features matched across multiple views. This eliminates the requirement for prior knowledge of the background scene appearance or structure. Our second contribution is a sparse-to-dense reconstruction and segmentation approach that introduces temporal coherence at every frame. We exploit temporal coherence of the scene to overcome visual ambiguities inherent in single-frame reconstruction and multiple view segmentation methods for general scenes. Temporal coherence refers to the correspondence between the 3D surfaces of all objects observed over time. Our third contribution is spatio-temporal alignment to estimate dense surface correspondence for 4D reconstruction. A geodesic star convexity shape constraint is introduced for the shape segmentation to improve the quality of segmentation for non-rigid objects with complex appearance. The proposed approach overcomes the limitations of existing methods, allowing an unsupervised temporally coherent 4D reconstruction of complete models for general dynamic scenes.
The scene is automatically decomposed into a set of spatio-temporally coherent objects as shown in Figure 1 where the resulting 4D scene reconstruction has temporally coherent labels and surface correspondence for each object. This can be used for free-viewpoint video rendering and imported to a game engine for VR experience production. The contributions explained above can be summarized as follows: -Unsupervised temporally coherent dense reconstruction and segmentation of general complex dynamic scenes from multiple wide-baseline views. -Automatic initialization of dynamic object segmentation and reconstruction from sparse features. -A framework for space-time sparse-to-dense segmentation, reconstruction and temporal correspondence. -Robust spatio-temporal refinement of dense reconstruction and segmentation integrating error tolerant photo-consistency and edge information using geodesic star convexity. -Robust and computationally efficient reconstruction of dynamic scenes by exploiting temporal coherence. -Real-world applications of 4D reconstruction to freeviewpoint video rendering and virtual reality. This paper is structured as follows: First related work is reviewed. The methodology for general dynamic scene reconstruction is then introduced. Finally a thorough qualitative and quantitative evaluation and comparison to the state-of-the-art on challenging datasets is presented.
Related Work
Temporally coherent reconstruction is a challenging task for general dynamic scenes due to a number of factors such as motion blur, articulated, non-rigid and large motion of multiple people, resolution differences between camera views, occlusions, wide-baselines, errors in calibration and cluttered dynamic backgrounds. Segmentation of dynamic objects from such scenes is difficult because of foreground and background complexity and the likelihood of overlapping background and foreground color distributions. Reconstruction is also challenging due to limited visual cues and relatively large errors affecting both calibration and extraction of a globally consistent solution. This section reviews previous work on dynamic scene reconstruction and segmentation.
Dynamic Scene Reconstruction
Dense dynamic shape reconstruction is a fundamental problem and heavily studied area in the field of computer vision. Recovering accurate 3D models of a dynamically evolving, non-rigid scene observed by multiple synchronised cameras is a challenging task. Research on multiple view dense dynamic reconstruction has primarily focused on indoor scenes with controlled illumi-nation and static backgrounds, extending methods for multiple view reconstruction of static scenes [53] to sequences [62]. Deep learning based approaches have been introduced to estimate shape of dynamic objects from minimal camera views in constrained environment [29,68] and for rigid objects [58]. In the last decade, focus has shifted to more challenging outdoor scenes captured with both static and moving cameras. Reconstruction of non-rigid dynamic objects in uncontrolled natural environments is challenging due to the scene complexity, illumination changes, shadows, occlusion and dynamic backgrounds with clutter such as trees or people. Methods have been proposed for multi-view reconstruction [65,39,37] requiring a large number of closely spaced cameras for surface estimation of dynamic shape. Practical applications require relatively sparse moving cameras to acquire coverage over large areas such as outdoor. A number of approaches for mutli-view reconstruction of outdoor scenes require initial silhouette segmentation [67,32,22,23] to allow visual-hull reconstruction. Most of these approaches to general dynamic scene reconstruction fail in the case of complex (cluttered) scenes captured with moving cameras.
A recent work proposed reconstruction of dynamic fluids [50] for static cameras. Another work used RGB-D cameras to obtain reconstruction of non-rigid surfaces [55]. Pioneering research in general dynamic scene reconstruction from multiple handheld wide-baseline cameras [5,60] exploited prior reconstruction of the background scene to allow dynamic foreground segmentation and reconstruction. Recent work [46] estimates shape of dynamic objects from handheld cameras exploiting GANs. However these approaches either work for static/indoor scenes or exploit strong prior assumptions such as silhouette information, known background or scene structure. Also all these approaches give per frame reconstruction leading to temporally incoherent geometries. Our aim is to perform temporally coherent dense reconstruction of unknown dynamic non-rigid scenes automatically without strong priors or limitations on scene structure.
Joint Segmentation and Reconstruction
Many of the existing multi-view reconstruction approaches rely on a two-stage sequential pipeline where foreground or background segmentation is initially performed independently with respect to each camera, and then used as input to obtain visual hull for multi-view reconstruction. The problem with this approach is that the errors introduced at the segmentation stage cannot be recovered and are propagated to the reconstruction stage reducing the final reconstruction quality. Segmentation from multiple wide-baseline views has been proposed by exploiting appearance similarity [17,38,70]. These ap-proaches assume static backgrounds and different colour distributions for the foreground and background [52,17] which limits applicability for general scenes.
Joint segmentation and reconstruction methods incorporate estimation of segmentation or matting with reconstruction to provide a combined solution. Joint refinement avoids the propagation of errors between the two stages thereby making the solution more robust. Also, cues from segmentation and reconstruction can be combined efficiently to achieve more accurate results. The first multi-view joint estimation system was proposed by Szeliski et al. [59] which used iterative gradient descent to perform an energy minimization. A number of approaches were introduced for joint formulation in static scenes and one recent work used training data to classify the segments [69]. The focus shifted to joint segmentation and reconstruction for rigid objects in indoor and outdoor environments. These approaches used a variety of techniques such as patch-based refinement [54,48] and fixating cameras on the object of interest [11] for reconstructing rigid objects in the scene. However, these are either limited to static scenes [69,26] or process each frame independently thereby failing to enforce temporal consistency [11,23].
Joint reconstruction and segmentation on monocular video was proposed in [36,3,12] achieving semantic segmentation of scene limited to rigid objects in street scenes. Practical application of joint estimation requires these approaches to work on non-rigid objects such as humans with clothing. A multi-layer joint segmentation and reconstruction approach was proposed for multiple view video of sports and indoor scenes [23]. The algorithm used known background images of the scene without the dynamic foreground objects to obtain an initial segmentation. Visual-hull based reconstruction was performed with known prior foreground/background using a background image plate with fixed and calibrated cameras. This visual hull was used as a prior and was optimized by a combination of photo-consistency, silhouette, color and sparse feature information in an energy minimization framework to improve the segmentation and reconstruction quality. Although structurally similar to our approach, it requires the scene to be captured by fixed calibrated cameras and a priori known fixed background plate as a prior to estimate the initial visual hull by background subtraction. The proposed approach overcomes these limitations allowing moving cameras and unknown scene backgrounds.
An approach based on optical flow and graph cuts was shown to work well for non-rigid objects in indoor settings but requires known background segmentation to obtain silhouettes and is computationally expensive [24]. Practical application of temporally coherent joint estimation requires approaches that work on non-rigid objects for general scenes in uncontrolled environments. A quantitative evaluation of techniques for multi-view reconstruction was presented in [53]. These methods are able to produce high quality results, but rely on good initializations and strong prior assumptions with known and controlled (static) scene backgrounds.
The proposed method exploits the advantages of joint segmentation and reconstruction and addresses the limitations of existing methods by introducing a novel approach to reconstruct general dynamic scenes automatically from wide-baseline cameras with no prior. To overcome the limitations of existing methods, the proposed approach automatically initialises the foreground object segmentation from wide-baseline correspondence without prior knowledge of the scene. This is followed by a joint spatio-temporal reconstruction and segmentation of general scenes. Temporal correspondence is exploited to overcome visual ambiguities giving improved reconstruction together with temporal coherence of surface correspondence to obtain 4D scene models.
Temporally Coherent 4D Reconstruction
Temporally coherent 4D reconstruction refers to aligning the 3D surfaces of non-rigid objects over time for a dynamic sequence. This is achieved by estimating pointto-point correspondences for the 3D surfaces to obtain 4D temporally coherent reconstruction. 4D models allows to create efficient representation for practical applications in film, broadcast and immersive content production such as virtual, augmented and mixed reality. The majority of existing approaches for reconstruction of dynamic scenes from multi-view videos process each time frame independently due to the difficulty of simultaneously estimating temporal correspondence for non-rigid objects. Independent per-frame reconstruction can result in errors due to the inherent visual ambiguity caused by occlusion and similar object appearance for general scenes. Recent research has shown that exploiting temporal information can improve reconstruction accuracy as well as achieving temporal coherence [43].
3D scene flow estimates frame to frame correspondence whereas 4D temporal coherence estimates correspondence across the complete sequence to obtain a single surface model. Methods to estimate 3D scene flow have been reported in the literature [41] for autonomous vehicles. However this approach is limited to narrow baseline cameras. Other scene flow approaches are dependent on 2D optical flow [66,6] and they require an accurate estimate for most of the pixels which fails in the case of large motion. However, 3D scene flow methods align two frames independently and do not produce temporally coherent 4D models.
Research investigating spatio-temporal reconstruction across multiple frames was proposed by [20,37,24], exploiting temporal information from previous frames using optical flow. An approach for recovering space-time consistent depth maps from multiple video sequences captured by stationary, synchronized and calibrated cameras for depth-based free-viewpoint video rendering was proposed by [39]. However these methods require accurate initialisation and fixed, calibrated cameras, and are limited to simple scenes. Other approaches to temporally coherent reconstruction [4] either require a large number of closely spaced cameras or rely on bi-layer segmentation [72,30] as a constraint for reconstruction. Recent approaches for spatio-temporal reconstruction of multi-view data are limited to indoor studio data [47].
The framework proposed in this paper addresses limitations of existing approaches and gives 4D temporally coherent reconstruction for general dynamic indoor or outdoor scenes with large non-rigid motions, repetitive texture, uncontrolled illumination, and large capture volume. The scenes are captured with sparse static/moving cameras. The proposed approach gives 4D models of complete scenes with both static and dynamic objects for real-world applications (FVV and VR) with no prior knowledge of scene structure.
Multi-view Video Segmentation
In the field of image segmentation, approaches have been proposed to provide temporally consistent monocular video segmentation [21,49,45,71]. Hierarchical segmentation based on graphs was proposed in [21], directed acyclic graph were used to propose an object followed by segmentation [71]. Optical flow is used to identify and consistently segment objects [45,49]. Recently a number of approaches have been proposed for multi-view foreground object segmentation by exploiting appearance similarity spatially across views [16,35,38,70]. An approach for space-time multi-view segmentation was proposed by [17]. However, multi-view approaches assume a static background and different colour distributions for the foreground and background which limits applicability for general scenes and non-rigid objects.
To address this issue we introduce a novel method for spatio-temporal multi-view segmentation of dynamic scenes using shape constraints. Single-image segmentation techniques using shape constraints provide good results for complex scene segmentation [25] (convex and concave shapes), but require manual interaction. The proposed approach performs automatic multi-view video segmentation by initializing the foreground object model using spatio-temporal information from wide-baseline feature correspondence, followed by a multi-layer optimization framework. Geodesic star convexity, previously used in single-view segmentation [25], is applied to constrain the segmentation in each view. Our multi-view formulation naturally enforces coherent segmentation between views and also resolves ambiguities such as the similarity of background and foreground in isolated views.
Summary and Motivation
Image-based temporally coherent 4D dynamic scene reconstruction without a prior model or constraints on the scene structure is a key problem in computer vision. Existing dense reconstruction algorithms need some strong initial prior and constraints for the solution to converge such as background, structure, and segmentation, which limits their application for automatic reconstruction of general scenes. Current approaches are also commonly limited to independent per-frame reconstruction and do not exploit temporal information or produce a coherent model with known correspondence.
The approach proposed in this paper aims to overcome the limitations of existing approaches to enable robust temporally coherent wide-baseline multiple view reconstruction of general dynamic scenes without prior assumptions on scene appearance, structure or segmentation of the moving objects. Static and dynamic objects in the scene are identified for simultaneous segmentation and reconstruction using geometry and appearance cues in a sparse-to-dense optimization framework. Temporal coherence is introduced to improve the quality of the reconstruction and geodesic star convexity is used to improve the quality of segmentation. The static and dynamic elements are fused automatically in both the temporal and spatial domain to obtain the final 4D scene reconstruction.
This paper presents a unified framework, novel in combining multiple view joint reconstruction and segmentation with temporal coherence to improve per-frame reconstruction performance, building a single framework from the initial work presented in [43,42]. In particular, the approach gives a 4D surface model with full correspondence over time. A comprehensive experimental evaluation with comparison to the state-of-the-art in segmentation, reconstruction and 4D modelling is also presented, extending previous work. Application of the resulting 4D models to free-viewpoint video rendering and content production for immersive virtual reality experiences is also presented.
Methodology
This work is motivated by the limitations of existing multiple view reconstruction methods, which either work independently at each frame, resulting in errors due to visual ambiguity [19,23], or require restrictive assumptions on scene complexity and structure and often assume prior camera calibration and foreground segmentation [60,24]. We address these issues by initializing the joint reconstruction and segmentation algorithm automatically, introducing temporal coherence in the reconstruction and geodesic star convexity in the segmentation to reduce ambiguity and ensure consistent non-rigid structure initialization at successive frames. The proposed approach is demonstrated to achieve improved reconstruction and segmentation performance over state-of-the-art approaches and to produce temporally coherent 4D models of complex dynamic scenes.
Overview
An overview of the proposed framework for temporally coherent multi-view reconstruction is presented in Figure 2 and consists of the following stages:
Multi-view video: The scenes are captured using multiple video cameras (static/moving) separated by a wide baseline (> 15°). The cameras can be synchronized during capture using a time-code generator or afterwards using the audio information. Camera extrinsic calibration and scene structure are assumed to be unknown.
Sparse reconstruction: The intrinsics are assumed to be known. Segmentation-based feature detection (SFD) [44] is used to obtain a relatively large number of sparse features suitable for wide-baseline matching, distributed throughout the scene including on dynamic objects such as people. SFD features are matched between views using a SIFT descriptor, giving camera extrinsics and a sparse 3D point cloud for each time instant of the entire sequence [27] (a matching sketch is given after this overview).
Initial scene segmentation and reconstruction - Section 3.2: Automatic initialisation is performed without prior knowledge of the scene structure or appearance to obtain an initial approximation for each object. The sparse point cloud is clustered in 3D [51], with each cluster representing a unique foreground object. Object segmentation increases efficiency and improves the robustness of the 4D models. This reconstruction is refined using the framework explained in Section 3.4 to obtain the segmentation and dense reconstruction of each object.
Sparse-to-dense temporal reconstruction with temporal coherence - Section 3.3: Temporal coherence is introduced in the framework to initialize the coarse reconstruction and obtain frame-to-frame dense correspondences for dynamic objects. Dynamic object regions are detected at each time instant by sparse temporal correspondence of SFD features at successive frames. Sparse temporal feature correspondence allows propagation of the dense reconstruction for each dynamic object to obtain an initial approximation.
Joint object-based sparse-to-dense temporally coherent refinement of shape and segmentation - Section 3.4: The initial estimate is refined for each object per view through joint optimisation of shape and segmentation using a robust cost function combining matching, color, contrast and smoothness information for wide-baseline matching with a geodesic star convexity constraint. A single 3D model for each dynamic object is obtained by fusion of the view-dependent depth maps using Poisson surface reconstruction [31]. Surface orientation is estimated based on neighbouring pixels.
Applications - Section 5: The 4D representation from the proposed joint segmentation and reconstruction framework has a number of applications in media production, including free-viewpoint video (FVV) rendering and virtual reality (VR).
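The following is a minimal sketch of the sparse wide-baseline matching stage for one camera pair. It is illustrative only: the paper's SFD detector [44] is not available in common libraries, so standard SIFT detection stands in for it (an assumption), and the function name, ratio threshold and RANSAC parameters are hypothetical choices rather than values from the paper.

```python
# Minimal sketch of sparse wide-baseline matching for one camera pair.
# SIFT detection stands in for the paper's SFD detector [44] (assumption).
import cv2
import numpy as np

def match_pair(img_a, img_b, ratio=0.8):
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    knn = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_a, des_b, k=2)
    # Lowe's ratio test keeps distinctive matches for wide baselines.
    good = [m for m, n in (p for p in knn if len(p) == 2)
            if m.distance < ratio * n.distance]
    pts_a = np.float32([kp_a[m.queryIdx].pt for m in good])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in good])
    # Reject outliers with epipolar geometry (RANSAC on the fundamental matrix).
    F, mask = cv2.findFundamentalMat(pts_a, pts_b, cv2.FM_RANSAC, 1.0, 0.999)
    keep = mask.ravel().astype(bool)
    return pts_a[keep], pts_b[keep]
```

The inlier matches from all camera pairs can then be triangulated to form the sparse 3D point cloud used by the later stages.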
The process above is repeated for the entire sequence for all objects in the first frame and for dynamic objects at each time-instant. The proposed approach enables automatic reconstruction of all objects in the scene as a 4D mesh sequence. Subsequent sections present the novel contributions of this work in initialisation and refinement to obtain a dense temporally coherent reconstruction. The approach is demonstrated to outperform previous approaches to dynamic scene reconstruction and does not require prior knowledge of the scene.
Initial Scene Segmentation and Reconstruction
For general dynamic scene reconstruction, we need to reconstruct and segment the objects in the scene. This requires an initial coarse approximation to initialise a subsequent refinement step that optimises the segmentation and reconstruction with respect to each camera view. We introduce an approach based on sparse point cloud clustering; an overview is shown in Figure 3. Initialisation gives a complete coarse segmentation and reconstruction of each object in the first frame of the sequence for subsequent refinement. The dense reconstructions of the foreground objects and the background are combined to obtain a full scene reconstruction at the first time instant. A rough geometric proxy of the background is created. For consecutive time instants, dynamic objects and newly appearing objects are identified and only these objects are reconstructed and segmented. The reconstruction of static objects is retained, which reduces computational complexity. The optical flow and cluster information for each dynamic object ensure that the same labels are retained for the entire sequence.
Sparse Point-cloud Clustering
The sparse representation of the scene is processed to remove outliers using the point neighbourhood statistics to filter outlier data [51]. We segment the objects in the sparse scene reconstruction, this allows only moving objects to be reconstructed at each frame for efficiency and this also allows object shape similarity to be propagated across frames to increase robustness of reconstruction.
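A minimal sketch of the outlier filtering step follows, assuming a simple point-neighbourhood-statistics rule in the spirit of [51]; the neighbourhood size and threshold are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: statistical outlier removal on the sparse point cloud.
import numpy as np
from scipy.spatial import cKDTree

def remove_outliers(points, k=8, std_ratio=1.0):
    """points: (N, 3) array. Keeps points whose mean distance to their k
    nearest neighbours is within mean + std_ratio * std of the population."""
    tree = cKDTree(points)
    # k + 1 because the closest neighbour of a point is the point itself.
    dists, _ = tree.query(points, k=k + 1)
    mean_knn = dists[:, 1:].mean(axis=1)
    thresh = mean_knn.mean() + std_ratio * mean_knn.std()
    return points[mean_knn < thresh]
```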
We use a data clustering approach based on a 3D grid subdivision of the space with an octree data structure in Euclidean space to segment objects at each frame. In a more general sense, nearest-neighbour information is used for clustering, which is essentially similar to a flood-fill algorithm. We choose this clustering because of its computational efficiency and robustness. The approach allows segmentation of the objects in the scene and is demonstrated to work well for cluttered and general outdoor scenes, as shown in Section 4.
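Below is a hedged sketch of the clustering itself as a flood fill over a nearest-neighbour graph; it approximates the octree-based clustering described above, and the connection radius and minimum cluster size are assumed values for illustration.

```python
# Hedged sketch: Euclidean clustering of the sparse point cloud by flood fill.
import numpy as np
from scipy.spatial import cKDTree
from collections import deque

def euclidean_clusters(points, radius=0.1, min_size=30):
    tree = cKDTree(points)
    labels = -np.ones(len(points), dtype=int)
    current = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        labels[seed] = current
        queue = deque([seed])
        while queue:
            idx = queue.popleft()
            for nb in tree.query_ball_point(points[idx], radius):
                if labels[nb] == -1:
                    labels[nb] = current
                    queue.append(nb)
        current += 1
    # Small clusters are treated as unclustered background (label -1).
    for c in range(current):
        if (labels == c).sum() < min_size:
            labels[labels == c] = -1
    return labels
```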
Objects with insufficient detected features are reconstructed as part of the scene background. Appearing, disappearing and reappearing objects are handled by sparse dynamic feature tracking, explained in Section 3.3. Clustering results are shown in Figure 3. This is followed by a sparse-to-dense coarse object based approach to segment and reconstruct general dynamic scenes.
Coarse Object Reconstruction
The process to obtain the coarse reconstruction for the first frame of the sequence is shown in Figure 4. The sparse representation of each element is back-projected on the rectified image pair for each view. Delaunay triangulation [18] is performed on the set of back projected points for each cluster on one image and is propagated to the second image using the sparse matched features. Triangles with edge length greater than the median length of edges of all triangles are removed. For each remaining triangle pair direct linear transform is used to estimate the affine homography. Displacement at each pixel within the triangle pair is estimated by interpolation to get an initial dense disparity map for each cluster in the 2D image pair labelled as R I depicted in red in Figure 4. The initial coarse reconstruction for the observed objects in the scene is used to define the depth hypotheses at each pixel for the optimization.
The region R I does not ensure complete coverage of the object, so we extrapolate this region to obtain a region R O (shown in yellow) in 2D by 5% of the average distance between the boundary points(R I ) and the centroid of the object. To allow for errors in the initial approximate depth from sparse features we add volume in front and behind of the projected surface by an error tolerance, along the optical ray of the camera. This ensures that the object boundaries lie within the extrapolated initial coarse estimate and depth at each pixel for the combined regions may not be accurate. The tolerance for extrapolation may vary if a pixel belongs to R I or R O as the propagated pixels of the extrapolated regions (R O ) may have a high level of errors compared to error at the points from sparse representation (R I ) requiring a comparatively higher tolerance. The calculation of threshold depends on the capture volume of the datasets and is set to 1% of the capture volume for R O and half the value for R I . This volume in 3D corresponds to our initial coarse reconstruction of each object and enables us to remove the dependency of the existing approaches on background plate and visual hull estimates. This process of cluster identification and initial coarse object reconstruction is performed for multiple objects in general environments. Initial object segmentation using point cloud clustering and coarse segmentation is insensitive to parameters. Throughout this work the same parameters are used for all datasets. The result of this process is a coarse initial object segmentation and reconstruction for each object.
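The following sketch illustrates the core of the coarse per-cluster initialisation described above: Delaunay triangulation of the sparse matches in one image, a per-triangle affine transform to the matched points in the second image, and interpolation of the displacement for every pixel inside each triangle. It omits the median edge-length filtering and the R_O extrapolation, and the function name is hypothetical.

```python
# Hedged sketch: initial dense disparity for one cluster from sparse matches.
import numpy as np
import cv2
from scipy.spatial import Delaunay

def coarse_disparity(pts1, pts2, shape):
    """pts1, pts2: (N, 2) matched pixel positions in the two views;
    shape: (H, W) of the first image. Returns an (H, W, 2) displacement
    map that is NaN outside the triangulated region R_I."""
    pts1 = np.asarray(pts1, np.float64)
    pts2 = np.asarray(pts2, np.float64)
    tri = Delaunay(pts1)
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    grid = np.column_stack([xs.ravel(), ys.ravel()]).astype(np.float64)
    simplex = tri.find_simplex(grid)                 # -1 for pixels outside R_I
    disp = np.full((h * w, 2), np.nan, np.float32)
    for s, verts in enumerate(tri.simplices):
        inside = simplex == s
        if not inside.any():
            continue
        src = pts1[verts].astype(np.float32)
        dst = pts2[verts].astype(np.float32)
        A = cv2.getAffineTransform(src, dst)         # 2x3 affine per triangle
        p = grid[inside]
        q = p @ A[:, :2].T + A[:, 2]                 # mapped positions in view 2
        disp[inside] = (q - p).astype(np.float32)
    return disp.reshape(h, w, 2)
```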
Sparse-to-dense temporal reconstruction with temporal coherence
Once the static scene reconstruction is obtained for the first frame, we perform temporally coherent reconstruction for dynamic objects at successive time instants instead of whole scene reconstruction for computational efficiency and to avoid redundancy. The initial coarse reconstruction for each dynamic region is refined in the subsequent optimization step with respect to each camera view. Dynamic scene objects are identified from the temporal correspondence of sparse feature points. Sparse correspondence is used to propagate an initial model of the moving object for refinement. Figure 5 presents the sparse reconstruction and temporal correspondence. New objects are identified per frame from the clustered sparse reconstruction and are labelled as dynamic objects. Sparse temporal dynamic feature tracking: Numerous approaches have been proposed to track moving objects in 2D using either features or optical flow. However these methods may fail in the case of occlusion, movement parallel to the view direction, large motions and moving cameras. To overcome these limitations we match the sparse 3D feature points obtained using SFD [44] from multiple wide-baseline views at each time instant. The use of sparse 3D features is robust to large non-rigid motion, occlusions and camera movement. SFD detects sparse features which are stable across wide-baseline views and consecutive time instants for a moving camera and dynamic scene. Sparse 3D feature matches between consecutive time instants are back-projected to each view. These features are matched temporally using SIFT descriptor to identify the moving points. Robust matching is achieved by enforcing multiple view consistency for the temporal feature correspondence in each view as illustrated in Figure 6. Each match must satisfy the constraint:
\| H_{t,v}(p) + u_{t,r}(p + H_{t,v}(p)) - u_{t,v}(p) - H_{t+1,v}(p + u_{t,v}(p)) \| < \epsilon    (1)
where p is the feature image point in view v at frame t, H_{t,v}(p) is the disparity at frame t between views v and r, u_{t,v}(p) is the temporal correspondence from frame t to t+1 for view v, and \epsilon is a small threshold on the consistency error in pixels. The multi-view consistency check ensures that correspondences between any two views remain temporally consistent across successive frames. Matches in the 2D domain are sensitive to camera movement and occlusion, hence we map the set of refined matches into 3D to make the system robust to camera motion. The Frobenius norm is applied to the 3D point gradients in all directions [71] to obtain the 'net' motion at each sparse point. The 'net' motions between pairs of 3D points at consecutive time instants are ranked, and the top and bottom 5 percentile values are removed. Median filtering is then applied to identify the dynamic features. Figure 7 shows an example with moving cameras for the Juggler dataset [5].
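A minimal sketch of the consistency check of Equation (1) for a single feature point is shown below, assuming the disparity and flow fields are stored as (H, W, 2) arrays; the field names and the threshold value are assumptions.

```python
# Hedged sketch: multi-view temporal consistency check of Equation (1).
import numpy as np

def sample(field, p):
    """Nearest-pixel lookup of a 2D vector field at point p = (x, y)."""
    x, y = int(round(p[0])), int(round(p[1]))
    return field[y, x]

def temporally_consistent(p, H_t, H_t1, u_v, u_r, eps=2.0):
    """p: feature point (x, y) in view v at frame t.
    H_t, H_t1: disparity fields from view v to view r at frames t and t+1.
    u_v, u_r : temporal flow fields t -> t+1 in views v and r.
    eps      : assumed threshold on the consistency error, in pixels."""
    p = np.asarray(p, np.float64)
    d_t = sample(H_t, p)                    # H_{t,v}(p)
    flow_r = sample(u_r, p + d_t)           # u_{t,r}(p + H_{t,v}(p))
    flow_v = sample(u_v, p)                 # u_{t,v}(p)
    d_t1 = sample(H_t1, p + flow_v)         # H_{t+1,v}(p + u_{t,v}(p))
    return np.linalg.norm(d_t + flow_r - flow_v - d_t1) < eps
```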
Sparse-to-dense model reconstruction: Dynamic 3D feature points are used to initialize the segmentation and reconstruction of the initial model. This avoids the assumption of static backgrounds and prior scene segmentation commonly used to initialise multiple view reconstruction with a coarse visual-hull approximation [23]. Temporal coherence also provides a more accurate initialisation to overcome visual ambiguities at individual frames. Figure 8 illustrates the use of temporal coherence for reconstruction initialisation and refinement. Dynamic feature correspondence is used to identify the mesh for each dynamic object. This mesh is back projected on each view to obtain the region of interest. Lucas Kanade Optical flow [8] is performed on the projected mask for each view in the temporal domain using the dynamic feature correspondences over time as initialization. Dense multi-view wide-baseline correspondences from the previous frame are propagated to the current frame using the information from the flow vectors to obtain dense multi-view matches in the current frame. The matches are triangulated in 3D to obtain a refined 3D dense model of the dynamic object for the current frame. For dynamic scenes, a new object may enter the scene or a new part may appear as the object moves. To allow the introduction of new objects and object parts we also use information from the cluster of sparse points for each dynamic object. The cluster corresponding to the dynamic features is identified and static points are removed. This ensures that the set of new points not only contain the dynamic features but also the unprocessed points which represent new parts of the object. These points are added to the refined sparse model of the dynamic object. To handle the new objects we detect new clusters at each time instant and consider them as dynamic regions. The sparse-to-dense initial coarse reconstruction improves the quality of segmentation and reconstruction after the refinement. Examples of the improvement in segmentation and reconstruction for Odzemok [1] and Juggler [5] datasets are shown in Figure 9. As observed limbs of the people is retained by using information from the previous frames in both the cases.
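The sketch below illustrates the propagation step under simple assumptions: dense correspondences from the previous frame are advected with pyramidal Lucas-Kanade flow in two views and re-triangulated for the current frame. The 3x4 projection matrices are assumed to come from the calibration stage, and the window size and pyramid depth are illustrative.

```python
# Hedged sketch: propagate dense matches with LK flow and re-triangulate.
import numpy as np
import cv2

def propagate_and_triangulate(prev_v, cur_v, prev_r, cur_r,
                              pts_v, pts_r, P_v, P_r):
    """prev_*/cur_*: consecutive grayscale frames in views v and r.
    pts_v, pts_r: (N, 2) dense correspondences from the previous frame.
    P_v, P_r: 3x4 projection matrices from the calibration stage."""
    pts_v = np.asarray(pts_v, np.float32).reshape(-1, 1, 2)
    pts_r = np.asarray(pts_r, np.float32).reshape(-1, 1, 2)
    lk = dict(winSize=(21, 21), maxLevel=3)
    new_v, st_v, _ = cv2.calcOpticalFlowPyrLK(prev_v, cur_v, pts_v, None, **lk)
    new_r, st_r, _ = cv2.calcOpticalFlowPyrLK(prev_r, cur_r, pts_r, None, **lk)
    ok = (st_v.ravel() == 1) & (st_r.ravel() == 1)
    # Triangulate the propagated multi-view matches for the current frame.
    X_h = cv2.triangulatePoints(P_v, P_r,
                                new_v[ok].reshape(-1, 2).T,
                                new_r[ok].reshape(-1, 2).T)
    return (X_h[:3] / X_h[3]).T       # (M, 3) refined points of the dynamic object
```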
Joint object-based sparse-to-dense temporally coherent refinement of shape and segmentation
The initial reconstruction and segmentation from dense temporal feature correspondence is refined using a joint optimization framework. A novel shape constraint is introduced based on geodesic star convexity which has previously been shown to give improved performance in interactive image segmentation for structures with fine details (for example a person's fingers or hair) [25]. Shape is a powerful cue for object recognition and segmentation. Shape models represented as distance transforms from a template have been used for category specific segmentation [33]. Some works have introduced generic connectivity constraints for segmentation showing that obtaining a globally optimal solutions under the connectivity constraint is NP-hard [64]. Veksler et al. have used shape constraint in segmentation framework by enforcing star convexity prior on the segmentation, and globally optimal solutions are achieved subject to this constraint [63]. The star convexity constraint ensures connectivity to seed points, and is a stronger assumption than plain connectivity. An example of a star-convex object is shown in Figure 10 along with a failure case for a non-rigid articulate object. To handle more complex objects the idea of geodesic forests with multiple star centres was introduced to obtain a globally optimal solution for interactive 2D object segmentation [25]. The main focus was to introduce shape constraints in interactive segmentation, by means of a geodesic star convexity prior. The notion of connectivity was extended from Euclidean to geodesic so that paths can bend and adapt to image data as opposed to straight Euclidean rays, thus extending visibility and reducing the number of star centers required.
The geodesic star-convexity is integrated as a constraint on the energy minimisation for joint multi-view Fig. 10 (a) Representation of star convexity: The left object depicts example of star-convex objects, with a star center marked. The object on the right with a plausible star center shows deviations from star-convexity in the fine details, and (b) Multiple star semantics for joint refinement: Single star center based segmentation is depicted on the left and multiple star is shown on the right. reconstruction and segmentation [23]. In this work the shape constraint is automatically initialised for each view from the initial segmentation. The shape constraint is based on the geodesic distance with foreground object initialisation (seeds) as star centres to which the object shape is restricted. The union formed by multiple object seeds form a geodesic forest. This allows complex shapes to be segmented. In this work to automatically initialize the segmentation we use the sparse temporal feature correspondence as star centers (seeds) to build a geodesic forest automatically. The region outside the initial coarse reconstruction of all dynamic objects is initialized as the background seed for segmentation as shown in Figure 12. The shape of the dynamic object is restricted by this geodesic distance constraint that depends on the image gradient. Comparison with existing methods for multi-view segmentation demonstrates improvements in recovery of fine detail structure as illustrated in Figure 12.
Once we have a set of dense 3D points for each dynamic object, Poisson surface reconstruction is performed on the set of sparse points to obtain an initial coarse model of each dynamic region R, which is subsequently refined using the optimization framework (Section 3.4.1).
Optimization on initial coarse object reconstruction based on geodesic star convexity
The depth of the initial coarse reconstruction estimate is refined per view for each dynamic object at a per pixel level. View-dependent optimisation of depth is performed with respect to each camera which is robust to errors in camera calibration and initialisation. Calibration inaccuracies produce inconsistencies limiting the applicability of global reconstruction techniques which simultaneously consider all views; view-dependent techniques are more tolerant to such inaccuracies because they only use a subset of the views for reconstruction of depth from each camera view.
Our goal is to assign an accurate depth value from a set of depth values D = \{d_1, ..., d_{|D|-1}, U\} and a layer label from a set of label values L = \{l_1, ..., l_{|L|}\} to each pixel p in the region R of each dynamic object. Each d_i is obtained by sampling the optical ray from the camera, and U is an unknown depth value used to handle occlusions. This is achieved by optimisation of a joint cost function [23] for label (segmentation) and depth (reconstruction):
E(l, d) = \lambda_{data} E_{data}(d) + \lambda_{contrast} E_{contrast}(l) + \lambda_{smooth} E_{smooth}(l, d) + \lambda_{color} E_{color}(l)    (2)
where, d is the depth at each pixel, l is the layer label for multiple objects and the cost function terms are defined in section 3.4.2. The equation consists of four terms: the data term is for the photo-consistency scores, the smoothness term is to avoid sudden peaks in depth and maintain the consistency and the color and contrast terms are to identify the object boundaries. Data and smoothness terms are common to solve reconstruction problems [7] and the color and contrast terms are used for segmentation [34]. This is solved subject to a geodesic star-convexity constraint on the labels l. A label l is star convex with respect to center c, if every point p ∈ l is visible to a star center c via l in the image x which can be expressed as an energy cost:
E^{\star}(l \mid x, c) = \sum_{p \in R} \sum_{q \in \Gamma_{c,p}} E_{p,q}(l_p, l_q)    (3)

\forall q \in \Gamma_{c,p}: \quad E_{p,q}(l_p, l_q) = \begin{cases} \infty & \text{if } l_p = 1, \; l_q = 0 \\ 0 & \text{otherwise} \end{cases}    (4)
where \forall p \in R : p \in l \Leftrightarrow l_p = 1, and \Gamma_{c,p} is the geodesic path joining p to the star center c, given by:
\Gamma_{c,p} = \arg\min_{\Gamma \in \mathcal{P}_{c,p}} L(\Gamma)    (5)
where \mathcal{P}_{c,p} denotes the set of all discrete paths between c and p, and L(\Gamma) is the length of the discrete geodesic path as defined in [25]. In the case of image segmentation, the gradients in the underlying image provide the information used to compute the discrete paths between each pixel and the star centers, with L(\Gamma) defined below:
L(\Gamma) = \sum_{i=1}^{N_D - 1} \sqrt{(1 - \delta_g)\, j(\Gamma_i, \Gamma_{i+1})^2 + \delta_g\, \nabla I(\Gamma_i)^2}    (6)
where \Gamma is an arbitrary parametrized discrete path with N_D pixels given by \Gamma_1, \Gamma_2, \cdots, \Gamma_{N_D}, j(\Gamma_i, \Gamma_{i+1}) is the Euclidean distance between successive pixels, and \nabla I(\Gamma_i)^2 is a finite-difference approximation of the image gradient between the points \Gamma_i, \Gamma_{i+1}. The parameter \delta_g weights the Euclidean distance against the image-gradient term. Using the above definition, the geodesic distance is obtained as in Equation 5.
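A hedged sketch of the geodesic distance of Equations (5)-(6) follows, implemented as a shortest-path computation on a 4-connected pixel graph; the value of \delta_g and the use of grey-level differences as the gradient approximation are assumptions for illustration.

```python
# Hedged sketch: geodesic distance to the nearest star centre via Dijkstra.
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import dijkstra

def geodesic_distance(gray, seeds, delta_g=0.7):
    """gray: (H, W) float image; seeds: list of (x, y) star centres."""
    h, w = gray.shape
    idx = np.arange(h * w).reshape(h, w)
    flat = gray.ravel()
    edges = []
    for a, b in ((idx[:, :-1].ravel(), idx[:, 1:].ravel()),    # horizontal edges
                 (idx[:-1, :].ravel(), idx[1:, :].ravel())):    # vertical edges
        grad = np.abs(flat[a] - flat[b])                        # finite difference
        wt = np.sqrt((1.0 - delta_g) * 1.0 + delta_g * grad ** 2)
        edges.append((a, b, wt))
    rows = np.concatenate([e[0] for e in edges])
    cols = np.concatenate([e[1] for e in edges])
    wts = np.concatenate([e[2] for e in edges])
    graph = coo_matrix((wts, (rows, cols)), shape=(h * w, h * w))
    seed_idx = [int(round(y)) * w + int(round(x)) for x, y in seeds]
    dist = dijkstra(graph, directed=False, indices=seed_idx, min_only=True)
    return dist.reshape(h, w)     # geodesic distance to the nearest star centre
```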
An extension of single star-convexity is to use multiple stars to define a more general class of shapes. Introducing multiple star centers reduces the path lengths and increases the visibility of small parts of objects such as limbs, as shown in Figure 10. Hence Equation 3 is extended to multiple stars. A label l is star convex with respect to center c_i if every point p \in l is visible to a star center c_i in the set \mathcal{C} = \{c_1, ..., c_{N_T}\} via l in the image x, where N_T is the number of star centers [25]. This is expressed as an energy cost:
E^{\star}(l \mid x, \mathcal{C}) = \sum_{p \in R} \sum_{q \in \Gamma_{c,p}} E_{p,q}(l_p, l_q)    (7)
In our case all the correct temporal sparse feature correspondences are used as star centers, hence the segmentation will include all the points which are visible to these sparse features via geodesic distances in the region R, thereby employing the shape constraint. Since the star centers are selected automatically, the method is unsupervised. Comparison of segmentation constraint with geodesic multi-star convexity against no constraints and Euclidean multi-star convexity constraint is shown in Figure 11. The figure demonstrates the usefulness of the proposed approach with an improvement in segmentation quality on non-rigid complex objects. The energy in the Equation 2 is minimized as follows:
\min_{(l,d)} E(l, d) \;\; \text{s.t.} \;\; l \in S^{\star}(\mathcal{C}) \;\; \Leftrightarrow \;\; \min_{(l,d)} E(l, d) + E^{\star}(l \mid x, \mathcal{C})    (8)
where S^{\star}(\mathcal{C}) is the set of all shapes which lie within the geodesic distances with respect to the centers in \mathcal{C}. Optimization of Equation 8, subject to each pixel p in the region R being at a geodesic distance \Gamma_{c,p} from the star centers in the set \mathcal{C}, is performed using the \alpha-expansion algorithm by iterating through the set of labels in L \times D [10]. Graph-cut is used to obtain a local optimum [9]. (Fig. 12 illustrates geodesic star convexity: a region R with star centers \mathcal{C} connected by geodesic distances \Gamma_{c,p}, with segmentation results for the Juggler dataset shown with and without the geodesic star convexity constraint.) The improvement obtained using geodesic star convexity in the framework is shown in Figure 12, and the improvement from temporal coherence is shown in Figure 9. Figure 13 shows the improvements from the geodesic shape constraint, temporal coherence, and the combined proposed approach on the Dance2 dataset [2].
Energy cost function for joint segmentation and reconstruction
For completeness, in this section we define each of the terms in Equation 2. These are based on terms previously used for joint optimisation over depth for each pixel, introduced in [42], with a modification of the color matching term to improve robustness and an extension to multiple labels.
Matching term: The data term for matching between views is specified as a measure of photo-consistency (Figure 14) as follows:
E_{data}(d) = \sum_{p \in \mathcal{P}} e_{data}(p, d_p), \qquad e_{data}(p, d_p) = \begin{cases} M(p, q) = \sum_{i \in O_k} m(p, q), & \text{if } d_p \neq U \\ M_U, & \text{if } d_p = U \end{cases}    (9)
where \mathcal{P} is the 4-connected neighbourhood of pixel p, M_U is the fixed cost of labelling a pixel unknown, and q denotes the projection of the hypothesised point P in an auxiliary camera, where P is a 3D point along the optical ray passing through pixel p located at a distance d_p from the reference camera. O_k is the set of the k most photo-consistent pairs. For textured scenes, Normalized Cross Correlation (NCC) over a square window is a common choice [53]. The NCC values range from -1 to 1 and are mapped to non-negative values using the function 1 - NCC.
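A minimal sketch of the per-window photo-consistency cost (the 1 - NCC mapping described above) is shown below; patch extraction around p and q and the choice of window size are left to the caller, and are assumptions of this sketch.

```python
# Hedged sketch: NCC-based photo-consistency cost for one depth hypothesis.
import numpy as np

def ncc_cost(ref_patch, aux_patch, eps=1e-6):
    """ref_patch, aux_patch: equally sized grayscale windows around p and q."""
    a = ref_patch.astype(np.float64).ravel()
    b = aux_patch.astype(np.float64).ravel()
    a = a - a.mean()
    b = b - b.mean()
    ncc = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps)
    return 1.0 - ncc      # in [0, 2]; 0 means perfect photo-consistency
```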
A maximum likelihood measure [40] is used in this function for confidence value calculation between the center pixel p and the other pixels q and is based on the survey on confidence measures for stereo [28]. The measure is defined as:
m(p, q) = \frac{\exp\left(-\frac{c_{min}}{2\sigma_i^2}\right)}{\sum_{(p,q) \in N} \exp\left(-\frac{1 - NCC(p,q)}{2\sigma_i^2}\right)}    (10)
where \sigma_i^2 is the noise variance for each auxiliary camera i; this parameter was fixed to 0.3. N denotes the set of interacting pixels in \mathcal{P}, and c_{min} is the minimum cost for a pixel, obtained by evaluating the function 1 - NCC(\cdot,\cdot) on a 15 \times 15 window.
Contrast term: Segmentation boundaries in images tend to align with contours of high contrast, and it is desirable to represent this as a constraint in stereo matching. A consistent interpretation of segmentation-prior and contrast-likelihood from [34] is used. We use a modified version of this interpretation in our formulation that preserves edges by using bilateral filtering [61] instead of Gaussian filtering. The contrast term is as follows:
E_{contrast}(l) = \sum_{(p,q) \in N} e_{contrast}(p, q, l_p, l_q)    (11)
e_{contrast}(p, q, l_p, l_q) = \begin{cases} 0, & \text{if } l_p = l_q \\ \frac{1}{1+\epsilon}\left(\epsilon + \exp(-C(p,q))\right), & \text{otherwise} \end{cases}    (12)

where \|\cdot\| is the L_2 norm and \epsilon = 1. The simplest choice for C(p, q) would be the squared Euclidean color distance between the intensities at pixels p and q, as used in [23]. We propose a term for better segmentation:

C(p, q) = \frac{\|B(p) - B(q)\|^2}{2\,\sigma_{pq}^2\, d_{pq}^2}

where B(\cdot) represents the bilateral filter, d_{pq} is the Euclidean distance between p and q, and

\sigma_{pq} = \left\langle \frac{\|B(p) - B(q)\|^2}{d_{pq}^2} \right\rangle

with \langle\cdot\rangle denoting the average over the interacting pixel pairs.
This term helps remove regions with low photo-consistency scores and weak edges, and thereby helps in estimating the object boundaries.
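The sketch below illustrates one way to compute the bilateral-filtered contrast weight for horizontally adjacent pixel pairs; the bilateral filter parameters and the averaged estimate used for \sigma_{pq} are assumptions, and the vertical neighbour case is analogous.

```python
# Hedged sketch: edge-preserving contrast weight from a bilateral-filtered image.
import numpy as np
import cv2

def contrast_weights_horizontal(img_bgr):
    """Returns e_contrast for horizontally adjacent pixel pairs with l_p != l_q."""
    B = cv2.bilateralFilter(img_bgr, 9, 75, 75).astype(np.float64)
    diff2 = np.sum((B[:, 1:] - B[:, :-1]) ** 2, axis=2)   # ||B(p) - B(q)||^2
    d2 = 1.0                                              # squared neighbour distance
    sigma2 = diff2.mean() / d2 + 1e-12                    # assumed estimate of sigma_pq^2
    C = diff2 / (2.0 * sigma2 * d2)
    eps = 1.0
    return (eps + np.exp(-C)) / (1.0 + eps)
```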
Smoothness term: This term is inspired by [23] and it ensures the depth labels vary smoothly within the object reducing noise and peaks in the reconstructed surface. This is useful when the photo-consistency score is low and insufficient to assign depth to a pixel ( Figure 14). It is defined as:
E_{smooth}(l, d) = \sum_{(p,q) \in N} e_{smooth}(l_p, d_p, l_q, d_q)    (13)

e_{smooth}(l_p, d_p, l_q, d_q) = \begin{cases} \min(|d_p - d_q|, d_{max}), & \text{if } l_p = l_q \text{ and } d_p, d_q \neq U \\ 0, & \text{if } l_p = l_q \text{ and } d_p = d_q = U \\ d_{max}, & \text{otherwise} \end{cases}    (14)
d_{max} is set to 50 times the size of the depth sampling step for all datasets.
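A direct transcription of the pairwise smoothness cost of Equation (14) is given below; U marks the unknown depth label, and the function name is a hypothetical helper.

```python
# Hedged sketch: pairwise smoothness cost of Equation (14).
def e_smooth(l_p, d_p, l_q, d_q, depth_step, U=None):
    d_max = 50.0 * depth_step          # d_max rule used for all datasets
    if l_p == l_q:
        if d_p is not U and d_q is not U:
            return min(abs(d_p - d_q), d_max)
        if d_p is U and d_q is U:
            return 0.0
    return d_max                        # label change or only one depth unknown
```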
Color term: This term is computed using the negative log likelihood [9] of the color models learned from the foreground and background markers. The star centers obtained from the sparse 3D features are foreground markers and for background markers we consider the region outside the projected initial coarse reconstruction for each view. The color models use GMMs with 5 components each for Foreground/Background mixed with uniform color models [14] as the markers are sparse.
E_{color}(l) = -\sum_{p \in \mathcal{P}} \log P(I_p \mid l_p)    (15)
where P(I_p \mid l_p = l_i) denotes the probability of pixel p in the reference image belonging to layer l_i. Fig. 15 compares segmentation on benchmark static datasets using geodesic star-convexity.
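The sketch below shows one way to realise the colour term with scikit-learn GMMs learned from the sparse foreground/background markers; folding the uniform mixture component of [14] into a small floor value is a simplifying assumption, as are the variable names.

```python
# Hedged sketch: per-pixel negative log-likelihood under fg/bg GMM colour models.
import numpy as np
from sklearn.mixture import GaussianMixture

def color_neg_log_likelihood(image, fg_pixels, bg_pixels, floor=1e-8):
    """image: (H, W, 3) float RGB in [0, 1]; fg_pixels, bg_pixels: (N, 3) samples."""
    fg = GaussianMixture(n_components=5, covariance_type='full').fit(fg_pixels)
    bg = GaussianMixture(n_components=5, covariance_type='full').fit(bg_pixels)
    flat = image.reshape(-1, 3)
    # score_samples returns log p(x); the floor stands in for the uniform component.
    nll_fg = -np.log(np.exp(fg.score_samples(flat)) + floor).reshape(image.shape[:2])
    nll_bg = -np.log(np.exp(bg.score_samples(flat)) + floor).reshape(image.shape[:2])
    return nll_fg, nll_bg        # per-pixel E_color contributions per label
```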
Results and Performance Evaluation
The proposed system is tested on publicly available multi-view research datasets of indoor and outdoor scenes; details of the datasets are given in Table 1. The parameters used for all datasets are defined in Table 2. Further information is available on the project website.
Multi-view segmentation evaluation
Segmentation is evaluated against the state-of-the-art methods for multi-view segmentation, Kowdle [35] and Djelouah [16], for static scenes, and against the joint segmentation and reconstruction methods Mustafa [42] (per frame) and Guillemaut [24] (using temporal information) for both static and dynamic scenes. For static multi-view data the segmentation is initialised as detailed in Section 3.1, followed by refinement using the constrained optimisation of Section 3.4.1. For dynamic scenes the full pipeline with temporal coherence is used, as detailed in Section 3. Ground-truth is obtained by manually labelling the foreground for the Office, Dance1 and Odzemok datasets; for the other datasets ground-truth is available online. We initialize all approaches with the same proposed initial coarse reconstruction for a fair comparison.
To evaluate the segmentation we measure completeness as the ratio of intersection to union with the ground-truth [35]. Comparisons are shown in Table 3 and Figures 15 and 16 for static benchmark datasets. Comparisons for dynamic scene segmentation are shown in Table 4 and Figures 17 and 18. Results for multi-view segmentation of static scenes are more accurate than Djelouah, Mustafa, and Guillemaut, and comparable to Kowdle, with improved segmentation of some detail such as the back of the chair.
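For reference, the completeness measure reduces to a standard intersection-over-union computation on binary masks:

```python
# Completeness as intersection over union between predicted and ground-truth masks.
import numpy as np

def completeness(pred_mask, gt_mask):
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / union if union else 1.0
```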
For dynamic scenes the geodesic star convexity based optimization together with temporal consistency gives improved segmentation of fine detail such as the legs of the table in the Office dataset and the limbs of the person in the Juggler, Magician and Dance2 datasets, shown in Figures 17 and 18. This overcomes limitations of previous multi-view per-frame segmentation.
Reconstruction evaluation
Reconstruction results obtained using the proposed method are compared against Mustafa [42], Guillemaut [24], and Furukawa [19] for dynamic sequences. Furukawa [19] is a per-frame multi-view wide-baseline stereo approach which ranks highly on the middlebury benchmark [53] but does not refine the segmentation.
The depth maps obtained using the proposed approach are compared against Mustafa and Guillemaut in Figure 19. They are smoother, with lower reconstruction noise than the state-of-the-art methods. Figures 20 and 21 present qualitative and quantitative comparisons of our method with the state-of-the-art approaches.
Comparison of reconstructions demonstrates that the proposed method gives consistently more complete and accurate models. The colour maps highlight the quantitative differences in reconstruction. As far as we are aware no ground-truth data exist for dynamic scene reconstruction from real multi-view video. In Figure 21 we present a comparison with the reference mesh available with the Dance2 dataset reconstructed using a visual-hull approach. This comparison demonstrates improved reconstruction of fine detail with the proposed technique.
In contrast to all previous approaches the proposed method gives temporally coherent 4D model reconstructions with dense surface correspondence over time. The introduction of temporal coherence constrains the reconstruction in regions which are ambiguous on a particular frame such as the right leg of the juggler in Figure 20 resulting in more complete shape. Figure 22 shows three complete scene reconstructions with 4D models of multiple objects. The Juggler and Magician sequences are reconstructed from moving handheld cameras. Computational Complexity: Computation times for the proposed approach vs other methods are presented in Table 5. The proposed approach to reconstruct temporally coherent 4D models is comparable in computation time to per-frame multiple view reconstruction and gives a ∼50% reduction in computation cost compared to previous joint segmentation and reconstruction approaches using a known background. This efficiency is achieved through improved per-frame initialisation based on temporal propagation and the introduction of the geodesic star constraint in joint optimisation. Further results can be found in the supplementary material. Temporal coherence: A frame-to-frame alignment is obtained using the proposed approach as shown in Figure 23 for Dance1 and Juggle dataset. The meshes of the dynamic object in Frame 1 and Frame 9 are color coded in both the datasets and the color is propagated to the next frame using the dense temporal coherence information. The color in different parts of the object is retained to the next frame as seen from the figure. The proposed approach obtains sequential temporal alignment which drifts with large movement in the object, hence successive frames are shown in the figure.
Limitations: As with previous dynamic scene reconstruction methods the proposed approach has a number of limitations: persistent ambiguities in appearance between objects will degrade the improvement achieved with temporal coherence; scenes with a large number of inter-occluding dynamic objects will degrade performance; the approach requires sufficient wide-baseline views to cover the scene.
Applications to immersive content production
The 4D meshes generated from the proposed approach can be used for applications in immersive content production such as FVV rendering and VR. This section demonstrates the results of these applications.
Free-viewpoint rendering
In FVV, the virtual viewpoint is controlled interactively by the user. The appearance of the reconstruction is sampled and interpolated directly from the captured camera images using cameras located close to the virtual viewpoint [57].
The proposed joint segmentation and reconstruction framework generates per-view silhouettes and a temporally coherent 4D reconstruction at each time instant of the input video sequence. This representation of the dynamic sequence is used for FVV rendering. To create FVV, a view-dependent surface texture is computed based on the user selected virtual view. This virtual view is obtained by combining the information from camera views in close proximity to the virtual viewpoint [57]. FVV rendering gives user the freedom to interactively choose a novel viewpoint in space to observe the dynamic scene and reproduces fine scale temporal surface details, such as the movement of hair and clothing wrinkles, that may not be modelled geometrically. An example of a reconstructed scene and the camera configuration is shown in Figure 24.
A qualitative evaluation of images synthesised using FVV is shown in Figure 25 and 26. These demonstrate reconstruction results rendered from novel viewpoints from the proposed method against Mustafa [43] and Guillemaut [23] on publicly available datasets. This is particularly important for wide-baseline camera configurations where this technique can be used to synthesize intermediate viewpoints where it may not be practical or economical to physically locate real cameras.
Virtual reality rendering
There is a growing demand for photo-realistic content in the creation of immersive VR experiences. The 4D temporally coherent reconstructions of the dynamic scenes obtained using the proposed approach enables the creation of photo-realistic digital assets that can be incorporated into VR environments using game engines such as Unity and Unreal Engine, as shown in Figure 27 for single frame of four datasets and for a series of frames for Dance1 dataset.
In order to efficiently render the reconstructions in a game engine for applications in VR, a UV texture atlas is extracted using the 4D meshes from the proposed approach as a geometric proxy. The UV texture atlas at each frame is applied to the models at render time in Unity for viewing in a VR headset. A UV texture atlas is constructed by projectively texturing and blending multiple view frames onto a 2D unwrapped UV texture atlas, see Figure 28. This is performed once for each static object and at each time instance for dynamic objects, allowing efficient storage and real-time playback of static and dynamic textured reconstructions within a VR headset.
Conclusion
This paper introduced a novel technique to automatically segment and reconstruct dynamic scenes captured from multiple moving cameras in general, uncontrolled environments without any prior on background appearance or structure. The proposed automatic initialization was used to identify and initialize the segmentation and reconstruction of multiple objects. A framework was presented for temporally coherent 4D model reconstruction of dynamic scenes from a set of wide-baseline moving cameras. The approach gives a complete model of all static and dynamic non-rigid objects in the scene. Temporal coherence for dynamic objects addresses limitations of previous per-frame reconstruction, giving improved reconstruction and segmentation together with dense temporal surface correspondence for dynamic objects. A sparse-to-dense approach is introduced to establish temporal correspondence for non-rigid objects using robust sparse feature matching to initialise dense optical flow, providing an initial segmentation and reconstruction. Joint refinement of object reconstruction and segmentation is then performed using a multiple view optimisation with a novel geodesic star convexity constraint that gives improved shape estimation and is computationally efficient. Comparison against state-of-the-art techniques for multiple view segmentation and reconstruction demonstrates significant improvement in performance for complex scenes. The approach enables reconstruction of 4D models for complex scenes which has not been demonstrated previously. | 8,667
1907.08195 | 2963385316 | Existing techniques for dynamic scene reconstruction from multiple wide-baseline cameras primarily focus on reconstruction in controlled environments, with fixed calibrated cameras and strong prior constraints. This paper introduces a general approach to obtain a 4D representation of complex dynamic scenes from multi-view wide-baseline static or moving cameras without prior knowledge of the scene structure, appearance, or illumination. Contributions of the work are: An automatic method for initial coarse reconstruction to initialize joint estimation; Sparse-to-dense temporal correspondence integrated with joint multi-view segmentation and reconstruction to introduce temporal coherence; and a general robust approach for joint segmentation refinement and dense reconstruction of dynamic scenes by introducing shape constraint. Comparison with state-of-the-art approaches on a variety of complex indoor and outdoor scenes, demonstrates improved accuracy in both multi-view segmentation and dense reconstruction. This paper demonstrates unsupervised reconstruction of complete temporally coherent 4D scene models with improved non-rigid object segmentation and shape reconstruction and its application to free-viewpoint rendering and virtual reality. | To address this issue we introduce a novel method for spatio-temporal multi-view segmentation of dynamic scenes using shape constraints. Single image segmentation techniques using shape constraints provide good results for complex scene segmentation @cite_54 (convex and concave shapes), but require manual interaction. The proposed approach performs automatic multi-view video segmentation by initializing the foreground object model using spatio-temporal information from wide-baseline feature correspondence followed by a multi-layer optimization framework. Geodesic star convexity previously used in single view segmentation @cite_54 is applied to constraint the segmentation in each view. Our multi-view formulation naturally enforces coherent segmentation between views and also resolves ambiguities such as the similarity of background and foreground in isolated views. | {
"abstract": [
"In this paper we introduce a new shape constraint for interactive image segmentation. It is an extension of Veksler's [25] star-convexity prior, in two ways: from a single star to multiple stars and from Euclidean rays to Geodesic paths. Global minima of the energy function are obtained subject to these new constraints. We also introduce Geodesic Forests, which exploit the structure of shortest paths in implementing the extended constraints. The star-convexity prior is used here in an interactive setting and this is demonstrated in a practical system. The system is evaluated by means of a “robot user” to measure the amount of interaction required in a precise way. We also introduce a new and harder dataset which augments the existing Grabcut dataset [1] with images and ground truth taken from the PASCAL VOC segmentation challenge [7]."
],
"cite_N": [
"@cite_54"
],
"mid": [
"2168555635"
]
} | Temporally coherent general dynamic scene reconstruction | (Fig. 1 caption: Temporally consistent scene reconstruction for the Odzemok dataset, color-coded to show the scene object segmentation obtained.) Temporally coherent 4D reconstruction of dynamic scenes enables visual effects in film and broadcast production and content production in virtual reality. The ultimate goal of modelling dynamic scenes from multiple cameras is automatic understanding of real-world scenes from distributed camera networks, for applications in robotics and other autonomous systems. Existing methods have applied multiple view dynamic scene reconstruction techniques in controlled environments with a known background or chroma-key studio [23,20,56,60]. Other multiple view stereo techniques require a relatively dense static camera network resulting in a large number of cameras [19]. Extensions to more general outdoor scenes [5,32,60] use prior reconstruction of the static geometry from images of the empty environment. However these methods either require accurate segmentation of dynamic foreground objects, or prior knowledge of the scene structure and background, or are limited to static cameras and controlled environments. Scenes are reconstructed semi-automatically, requiring manual intervention for segmentation/rotoscoping, and result in temporally incoherent per-frame mesh geometries. Temporally coherent geometry with known surface correspondence across the sequence is essential for real-world applications and compact representation.
Our paper addresses the limitations of existing approaches by introducing a methodology for unsupervised temporally coherent dynamic scene reconstruction from multiple wide-baseline static or moving camera views without prior knowledge of the scene structure or background appearance. This temporally coherent dynamic scene reconstruction is demonstrated to work in applications for immersive content production such as free-viewpoint video (FVV) and virtual reality (VR). This work combines two previously published papers in general dynamic reconstruction [42] and temporally coherent reconstruction [43] into a single framework and demonstrates application of this novel unsupervised joint segmentation and reconstruction to immersive content production (FVV and VR) (Section 5).
The input is a sparse set of synchronised videos from multiple moving cameras of an unknown dynamic scene without prior scene segmentation or camera calibration. Our first contribution is automatic initialisation of camera calibration and sparse scene reconstruction from sparse feature correspondence using sparse feature detection and matching between pairs of frames. An initial coarse reconstruction and segmentation of all scene objects is obtained from sparse features matched across multiple views. This eliminates the requirement for prior knowledge of the background scene appearance or structure. Our second contribution is a sparse-to-dense reconstruction and segmentation approach to introduce temporal coherence for every frame. We exploit temporal coherence of the scene to overcome visual ambiguities inherent in single-frame reconstruction and multiple view segmentation methods for general scenes. Temporal coherence refers to the correspondence between the 3D surface of all objects observed over time. Our third contribution is spatio-temporal alignment to estimate dense surface correspondence for 4D reconstruction. A geodesic star convexity shape constraint is introduced for the shape segmentation to improve the quality of segmentation for non-rigid objects with complex appearance. The proposed approach overcomes the limitations of existing methods allowing an unsupervised temporally coherent 4D reconstruction of complete models for general dynamic scenes.
The scene is automatically decomposed into a set of spatio-temporally coherent objects as shown in Figure 1 where the resulting 4D scene reconstruction has temporally coherent labels and surface correspondence for each object. This can be used for free-viewpoint video rendering and imported to a game engine for VR experience production. The contributions explained above can be summarized as follows:
- Unsupervised temporally coherent dense reconstruction and segmentation of general complex dynamic scenes from multiple wide-baseline views.
- Automatic initialization of dynamic object segmentation and reconstruction from sparse features.
- A framework for space-time sparse-to-dense segmentation, reconstruction and temporal correspondence.
- Robust spatio-temporal refinement of dense reconstruction and segmentation integrating error-tolerant photo-consistency and edge information using geodesic star convexity.
- Robust and computationally efficient reconstruction of dynamic scenes by exploiting temporal coherence.
- Real-world applications of 4D reconstruction to free-viewpoint video rendering and virtual reality.
This paper is structured as follows: First related work is reviewed. The methodology for general dynamic scene reconstruction is then introduced. Finally a thorough qualitative and quantitative evaluation and comparison to the state-of-the-art on challenging datasets is presented.
Related Work
Temporally coherent reconstruction is a challenging task for general dynamic scenes due to a number of factors such as motion blur, articulated, non-rigid and large motion of multiple people, resolution differences between camera views, occlusions, wide-baselines, errors in calibration and cluttered dynamic backgrounds. Segmentation of dynamic objects from such scenes is difficult because of foreground and background complexity and the likelihood of overlapping background and foreground color distributions. Reconstruction is also challenging due to limited visual cues and relatively large errors affecting both calibration and extraction of a globally consistent solution. This section reviews previous work on dynamic scene reconstruction and segmentation.
Dynamic Scene Reconstruction
Dense dynamic shape reconstruction is a fundamental problem and heavily studied area in the field of computer vision. Recovering accurate 3D models of a dynamically evolving, non-rigid scene observed by multiple synchronised cameras is a challenging task. Research on multiple view dense dynamic reconstruction has primarily focused on indoor scenes with controlled illumination and static backgrounds, extending methods for multiple view reconstruction of static scenes [53] to sequences [62]. Deep learning based approaches have been introduced to estimate the shape of dynamic objects from minimal camera views in constrained environments [29,68] and for rigid objects [58]. In the last decade, focus has shifted to more challenging outdoor scenes captured with both static and moving cameras. Reconstruction of non-rigid dynamic objects in uncontrolled natural environments is challenging due to the scene complexity, illumination changes, shadows, occlusion and dynamic backgrounds with clutter such as trees or people. Methods have been proposed for multi-view reconstruction [65,39,37] requiring a large number of closely spaced cameras for surface estimation of dynamic shape. Practical applications require relatively sparse moving cameras to acquire coverage over large areas such as outdoor scenes. A number of approaches for multi-view reconstruction of outdoor scenes require initial silhouette segmentation [67,32,22,23] to allow visual-hull reconstruction. Most of these approaches to general dynamic scene reconstruction fail in the case of complex (cluttered) scenes captured with moving cameras.
A recent work proposed reconstruction of dynamic fluids [50] for static cameras. Another work used RGB-D cameras to obtain reconstruction of non-rigid surfaces [55]. Pioneering research in general dynamic scene reconstruction from multiple handheld wide-baseline cameras [5,60] exploited prior reconstruction of the background scene to allow dynamic foreground segmentation and reconstruction. Recent work [46] estimates shape of dynamic objects from handheld cameras exploiting GANs. However these approaches either work for static/indoor scenes or exploit strong prior assumptions such as silhouette information, known background or scene structure. Also all these approaches give per frame reconstruction leading to temporally incoherent geometries. Our aim is to perform temporally coherent dense reconstruction of unknown dynamic non-rigid scenes automatically without strong priors or limitations on scene structure.
Joint Segmentation and Reconstruction
Many of the existing multi-view reconstruction approaches rely on a two-stage sequential pipeline where foreground or background segmentation is initially performed independently with respect to each camera, and then used as input to obtain a visual hull for multi-view reconstruction. The problem with this approach is that the errors introduced at the segmentation stage cannot be recovered and are propagated to the reconstruction stage reducing the final reconstruction quality. Segmentation from multiple wide-baseline views has been proposed by exploiting appearance similarity [17,38,70]. These approaches assume static backgrounds and different colour distributions for the foreground and background [52,17] which limits applicability for general scenes.
Joint segmentation and reconstruction methods incorporate estimation of segmentation or matting with reconstruction to provide a combined solution. Joint refinement avoids the propagation of errors between the two stages thereby making the solution more robust. Also, cues from segmentation and reconstruction can be combined efficiently to achieve more accurate results. The first multi-view joint estimation system was proposed by Szeliski et al. [59] which used iterative gradient descent to perform an energy minimization. A number of approaches were introduced for joint formulation in static scenes and one recent work used training data to classify the segments [69]. The focus shifted to joint segmentation and reconstruction for rigid objects in indoor and outdoor environments. These approaches used a variety of techniques such as patch-based refinement [54,48] and fixating cameras on the object of interest [11] for reconstructing rigid objects in the scene. However, these are either limited to static scenes [69,26] or process each frame independently thereby failing to enforce temporal consistency [11,23].
Joint reconstruction and segmentation on monocular video was proposed in [36,3,12] achieving semantic segmentation of scene limited to rigid objects in street scenes. Practical application of joint estimation requires these approaches to work on non-rigid objects such as humans with clothing. A multi-layer joint segmentation and reconstruction approach was proposed for multiple view video of sports and indoor scenes [23]. The algorithm used known background images of the scene without the dynamic foreground objects to obtain an initial segmentation. Visual-hull based reconstruction was performed with known prior foreground/background using a background image plate with fixed and calibrated cameras. This visual hull was used as a prior and was optimized by a combination of photo-consistency, silhouette, color and sparse feature information in an energy minimization framework to improve the segmentation and reconstruction quality. Although structurally similar to our approach, it requires the scene to be captured by fixed calibrated cameras and a priori known fixed background plate as a prior to estimate the initial visual hull by background subtraction. The proposed approach overcomes these limitations allowing moving cameras and unknown scene backgrounds.
An approach based on optical flow and graph cuts was shown to work well for non-rigid objects in indoor settings but requires known background segmentation to obtain silhouettes and is computationally expensive [24]. Practical application of temporally coherent joint estimation requires approaches that work on non-rigid objects for general scenes in uncontrolled environments. A quantitative evaluation of techniques for multi-view reconstruction was presented in [53]. These methods are able to produce high quality results, but rely on good initializations and strong prior assumptions with known and controlled (static) scene backgrounds.
The proposed method exploits the advantages of joint segmentation and reconstruction and addresses the limitations of existing methods by introducing a novel approach to reconstruct general dynamic scenes automatically from wide-baseline cameras with no prior. To overcome the limitations of existing methods, the proposed approach automatically initialises the foreground object segmentation from wide-baseline correspondence without prior knowledge of the scene. This is followed by a joint spatio-temporal reconstruction and segmentation of general scenes. Temporal correspondence is exploited to overcome visual ambiguities giving improved reconstruction together with temporal coherence of surface correspondence to obtain 4D scene models.
Temporal coherent 4D Reconstruction
Temporally coherent 4D reconstruction refers to aligning the 3D surfaces of non-rigid objects over time for a dynamic sequence. This is achieved by estimating point-to-point correspondences for the 3D surfaces to obtain a 4D temporally coherent reconstruction. 4D models allow the creation of efficient representations for practical applications in film, broadcast and immersive content production such as virtual, augmented and mixed reality. The majority of existing approaches for reconstruction of dynamic scenes from multi-view videos process each time frame independently due to the difficulty of simultaneously estimating temporal correspondence for non-rigid objects. Independent per-frame reconstruction can result in errors due to the inherent visual ambiguity caused by occlusion and similar object appearance for general scenes. Recent research has shown that exploiting temporal information can improve reconstruction accuracy as well as achieve temporal coherence [43].
3D scene flow estimates frame to frame correspondence whereas 4D temporal coherence estimates correspondence across the complete sequence to obtain a single surface model. Methods to estimate 3D scene flow have been reported in the literature [41] for autonomous vehicles. However this approach is limited to narrow baseline cameras. Other scene flow approaches are dependent on 2D optical flow [66,6] and they require an accurate estimate for most of the pixels which fails in the case of large motion. However, 3D scene flow methods align two frames independently and do not produce temporally coherent 4D models.
Research investigating spatio-temporal reconstruction across multiple frames was proposed by [20,37,24] exploiting the temporal information from the previous frames using optical flow. An approach for recovering space-time consistent depth maps from multiple video sequences captured by stationary, synchronized and calibrated cameras for depth-based free-viewpoint video rendering was proposed by [39]. However these methods require accurate initialisation, fixed and calibrated cameras and are limited to simple scenes. Other approaches to temporally coherent reconstruction [4] either require a large number of closely spaced cameras or bi-layer segmentation [72,30] as a constraint for reconstruction. Recent approaches for spatio-temporal reconstruction of multi-view data work on indoor studio data [47].
The framework proposed in this paper addresses limitations of existing approaches and gives 4D temporally coherent reconstruction for general dynamic indoor or outdoor scenes with large non-rigid motions, repetitive texture, uncontrolled illumination, and large capture volume. The scenes are captured with sparse static/moving cameras. The proposed approach gives 4D models of complete scenes with both static and dynamic objects for real-world applications (FVV and VR) with no prior knowledge of scene structure.
Multi-view Video Segmentation
In the field of image segmentation, approaches have been proposed to provide temporally consistent monocular video segmentation [21,49,45,71]. Hierarchical segmentation based on graphs was proposed in [21], and directed acyclic graphs were used to propose an object followed by segmentation [71]. Optical flow is used to identify and consistently segment objects [45,49]. Recently a number of approaches have been proposed for multi-view foreground object segmentation by exploiting appearance similarity spatially across views [16,35,38,70]. An approach for space-time multi-view segmentation was proposed by [17]. However, multi-view approaches assume a static background and different colour distributions for the foreground and background which limits applicability for general scenes and non-rigid objects.
To address this issue we introduce a novel method for spatio-temporal multi-view segmentation of dynamic scenes using shape constraints. Single image segmentation techniques using shape constraints provide good results for complex scene segmentation [25] (convex and concave shapes), but require manual interaction. The proposed approach performs automatic multi-view video segmentation by initializing the foreground object model using spatio-temporal information from wide-baseline feature correspondence followed by a multi-layer optimization framework. Geodesic star convexity, previously used in single view segmentation [25], is applied to constrain the segmentation in each view. Our multi-view formulation naturally enforces coherent segmentation between views and also resolves ambiguities such as the similarity of background and foreground in isolated views.
Summary and Motivation
Image-based temporally coherent 4D dynamic scene reconstruction without a prior model or constraints on the scene structure is a key problem in computer vision. Existing dense reconstruction algorithms need some strong initial prior and constraints for the solution to converge such as background, structure, and segmentation, which limits their application for automatic reconstruction of general scenes. Current approaches are also commonly limited to independent per-frame reconstruction and do not exploit temporal information or produce a coherent model with known correspondence.
The approach proposed in this paper aims to overcome the limitations of existing approaches to enable robust temporally coherent wide-baseline multiple view reconstruction of general dynamic scenes without prior assumptions on scene appearance, structure or segmentation of the moving objects. Static and dynamic objects in the scene are identified for simultaneous segmentation and reconstruction using geometry and appearance cues in a sparse-to-dense optimization framework. Temporal coherence is introduced to improve the quality of the reconstruction and geodesic star convexity is used to improve the quality of segmentation. The static and dynamic elements are fused automatically in both the temporal and spatial domain to obtain the final 4D scene reconstruction.
This paper presents a unified framework, novel in combining multiple view joint reconstruction and segmentation with temporal coherence to improve per-frame reconstruction performance and produce a single framework from the initial work presented in [43,42]. In particular the approach gives a 4D surface model with full correspondence over time. A comprehensive experimental evaluation with comparison to the state-of-the-art in segmentation, reconstruction and 4D modelling is also presented extending previous work. Application of the resulting 4D models to free-viewpoint video rendering and content production for immersive virtual reality experiences is also presented.
Methodology
This work is motivated by the limitations of existing multiple view reconstruction methods which either work independently at each frame resulting in errors due to visual ambiguity [19,23], or require restrictive assumptions on scene complexity and structure and often assume prior camera calibration and foreground segmentation [60,24]. We address these issues by initializing the joint reconstruction and segmentation algorithm automatically, introducing temporal coherence in the reconstruction and geodesic star convexity in segmentation to reduce ambiguity and ensure consistent non-rigid structure initialization at successive frames. The proposed approach is demonstrated to achieve improved reconstruction and segmentation performance over state-of-the-art approaches and produce temporally coherent 4D models of complex dynamic scenes.
Overview
An overview of the proposed framework for temporally coherent multi-view reconstruction is presented in Figure 2 and consists of the following stages:
Multi-view video: The scenes are captured using multiple video cameras (static/moving) separated by wide-baseline (> 15°). The cameras can be synchronized during the capture using a time-code generator or later using the audio information. Camera extrinsic calibration and scene structure are assumed to be unknown.
Sparse reconstruction: The intrinsics are assumed to be known. Segmentation-based feature detection (SFD) [44] is used to obtain a relatively large number of sparse features suitable for wide-baseline matching which are distributed throughout the scene including on dynamic objects such as people. SFD features are matched between views using a SIFT descriptor giving camera extrinsics and a sparse 3D point-cloud for each time instant for the entire sequence [27].
Initial scene segmentation and reconstruction - Section 3.2: Automatic initialisation is performed without prior knowledge of the scene structure or appearance to obtain an initial approximation for each object. The sparse point cloud is clustered in 3D [51] with each cluster representing a unique foreground object. Object segmentation increases efficiency and improves robustness of 4D models. This reconstruction is refined using the framework explained in Section 3.4 to obtain segmentation and dense reconstruction of each object.
Sparse-to-dense temporal reconstruction with temporal coherence - Section 3.3: Temporal coherence is introduced in the framework to initialize the coarse reconstruction and obtain frame-to-frame dense correspondences for dynamic objects. Dynamic object regions are detected at each time instant by sparse temporal correspondence of SFD features at successive frames. Sparse temporal feature correspondence allows propagation of the dense reconstruction for each dynamic object to obtain an initial approximation.
Joint object-based sparse-to-dense temporally coherent refinement of shape and segmentation - Section 3.4: The initial estimate is refined for each object per-view in the scene through joint optimisation of shape and segmentation using a robust cost function combining matching, color, contrast and smoothness information for wide-baseline matching with a geodesic star convexity constraint. A single 3D model for each dynamic object is obtained by fusion of the view-dependent depth maps using Poisson surface reconstruction [31]. Surface orientation is estimated based on neighbouring pixels.
Applications - Section 5: The 4D representation from the proposed joint segmentation and reconstruction framework has a number of applications in media production, including free-viewpoint video (FVV) rendering and virtual reality (VR).
The process above is repeated for the entire sequence for all objects in the first frame and for dynamic objects at each time-instant. The proposed approach enables automatic reconstruction of all objects in the scene as a 4D mesh sequence. Subsequent sections present the novel contributions of this work in initialisation and refinement to obtain a dense temporally coherent reconstruction. The approach is demonstrated to outperform previous approaches to dynamic scene reconstruction and does not require prior knowledge of the scene.
Initial Scene Segmentation and Reconstruction
For general dynamic scene reconstruction, we need to reconstruct and segment the objects in the scene. This requires an initial coarse approximation for initialisation of a subsequent refinement step to optimise the segmentation and reconstruction with respect to each camera view. We introduce an approach based on sparse point cloud clustering; an overview is shown in Figure 3. Initialisation gives a complete coarse segmentation and reconstruction of each object in the first frame of the sequence for subsequent refinement. The dense reconstructions of the foreground objects and the background are combined to obtain a full scene reconstruction at the first time instant. A rough geometric proxy of the background is created using the method. For consecutive time instants dynamic objects and newly appeared objects are identified and only these objects are reconstructed and segmented. The reconstruction of static objects is retained, which reduces computational complexity. The optical flow and cluster information for each dynamic object ensures that we retain the same labels for the entire sequence.
Sparse Point-cloud Clustering
The sparse representation of the scene is processed to remove outliers using the point neighbourhood statistics to filter outlier data [51]. We segment the objects in the sparse scene reconstruction; this allows only moving objects to be reconstructed at each frame for efficiency, and it also allows object shape similarity to be propagated across frames to increase robustness of reconstruction.
We use a data clustering approach based on the 3D grid subdivision of the space using an octree data structure in Euclidean space to segment objects at each frame. In a more general sense, nearest neighbor information is used to cluster, which is essentially similar to a flood-fill algorithm. We choose this data clustering approach because of its computational efficiency and robustness. The approach allows segmentation of objects in the scene and is demonstrated to work well for cluttered and general outdoor scenes as shown in Section 4.
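The clustering step above can be sketched as a simple nearest-neighbour flood fill over the sparse point cloud; the radius and minimum cluster size below are illustrative values rather than parameters from the paper, and a KD-tree stands in for the octree spatial index.

```python
import numpy as np
from scipy.spatial import cKDTree

def euclidean_cluster(points, radius=0.1, min_size=50):
    """Flood-fill clustering of a sparse 3D point cloud: points closer than
    `radius` are linked, and connected components become object clusters."""
    tree = cKDTree(points)
    labels = -np.ones(len(points), dtype=int)
    current = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        queue = [seed]
        labels[seed] = current
        while queue:
            idx = queue.pop()
            for nb in tree.query_ball_point(points[idx], r=radius):
                if labels[nb] == -1:
                    labels[nb] = current
                    queue.append(nb)
        current += 1
    # Discard small clusters: their points are treated as background.
    keep = [c for c in range(current) if np.sum(labels == c) >= min_size]
    remap = {c: i for i, c in enumerate(keep)}
    labels = np.array([remap.get(l, -1) for l in labels])
    return labels  # -1 marks background / outlier points

# Example: two well-separated blobs are recovered as two clusters.
pts = np.vstack([np.random.randn(200, 3) * 0.05,
                 np.random.randn(200, 3) * 0.05 + 2.0])
print(np.unique(euclidean_cluster(pts, radius=0.2, min_size=20)))  # [0 1]
```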
Objects with insufficient detected features are reconstructed as part of the scene background. Appearing, disappearing and reappearing objects are handled by sparse dynamic feature tracking, explained in Section 3.3. Clustering results are shown in Figure 3. This is followed by a sparse-to-dense coarse object based approach to segment and reconstruct general dynamic scenes.
Coarse Object Reconstruction
The process to obtain the coarse reconstruction for the first frame of the sequence is shown in Figure 4. The sparse representation of each element is back-projected on the rectified image pair for each view. Delaunay triangulation [18] is performed on the set of back-projected points for each cluster on one image and is propagated to the second image using the sparse matched features. Triangles with edge length greater than the median length of edges of all triangles are removed. For each remaining triangle pair a direct linear transform is used to estimate the affine homography. Displacement at each pixel within the triangle pair is estimated by interpolation to get an initial dense disparity map for each cluster in the 2D image pair, labelled as $R_I$ and depicted in red in Figure 4. The initial coarse reconstruction for the observed objects in the scene is used to define the depth hypotheses at each pixel for the optimization.
The region $R_I$ does not ensure complete coverage of the object, so we extrapolate this region to obtain a region $R_O$ (shown in yellow) in 2D by 5% of the average distance between the boundary points ($R_I$) and the centroid of the object. To allow for errors in the initial approximate depth from sparse features we add volume in front of and behind the projected surface by an error tolerance, along the optical ray of the camera. This ensures that the object boundaries lie within the extrapolated initial coarse estimate, and depth at each pixel for the combined regions may not be accurate. The tolerance for extrapolation may vary if a pixel belongs to $R_I$ or $R_O$, as the propagated pixels of the extrapolated regions ($R_O$) may have a high level of errors compared to the error at the points from the sparse representation ($R_I$), requiring a comparatively higher tolerance. The calculation of the threshold depends on the capture volume of the datasets and is set to 1% of the capture volume for $R_O$ and half the value for $R_I$. This volume in 3D corresponds to our initial coarse reconstruction of each object and enables us to remove the dependency of the existing approaches on a background plate and visual hull estimates. This process of cluster identification and initial coarse object reconstruction is performed for multiple objects in general environments. Initial object segmentation using point cloud clustering and coarse segmentation is insensitive to parameters. Throughout this work the same parameters are used for all datasets. The result of this process is a coarse initial object segmentation and reconstruction for each object.
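A simplified sketch of the per-triangle disparity interpolation is shown below. It assumes matched feature locations in a rectified image pair (so disparity is purely horizontal) and omits the long-edge filtering, the $R_O$ extrapolation and the depth-tolerance volume described above; barycentric interpolation of the vertex disparities is used as a stand-in for the per-triangle affine model estimated with the direct linear transform.

```python
import numpy as np
from scipy.spatial import Delaunay

def coarse_disparity(pts_left, pts_right, shape):
    """Interpolate a dense disparity map inside the Delaunay triangles of
    matched sparse features (rectified pair: disparity is horizontal)."""
    disp_sparse = pts_left[:, 0] - pts_right[:, 0]
    tri = Delaunay(pts_left)
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    query = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
    simplex = tri.find_simplex(query)
    disp = np.full(h * w, np.nan)
    inside = simplex >= 0
    # Barycentric interpolation of the vertex disparities, equivalent to
    # applying a per-triangle affine model.
    trans = tri.transform[simplex[inside]]
    delta = query[inside] - trans[:, 2]
    bary = np.einsum('ijk,ik->ij', trans[:, :2], delta)
    bary = np.column_stack([bary, 1 - bary.sum(axis=1)])
    verts = tri.simplices[simplex[inside]]
    disp[inside] = np.einsum('ij,ij->i', bary, disp_sparse[verts])
    return disp.reshape(h, w)   # NaN outside the triangulated region R_I

# Toy example: a fronto-parallel plane gives a constant disparity.
left = np.array([[10., 10.], [90., 10.], [10., 90.], [90., 90.], [50., 50.]])
right = left - np.array([5., 0.])            # uniform 5-pixel shift
d = coarse_disparity(left, right, (100, 100))
print(round(np.nanmin(d), 3), round(np.nanmax(d), 3))   # 5.0 5.0
```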
Sparse-to-dense temporal reconstruction with temporal coherence
Once the static scene reconstruction is obtained for the first frame, we perform temporally coherent reconstruction for dynamic objects at successive time instants instead of whole scene reconstruction for computational efficiency and to avoid redundancy. The initial coarse reconstruction for each dynamic region is refined in the subsequent optimization step with respect to each camera view. Dynamic scene objects are identified from the temporal correspondence of sparse feature points. Sparse correspondence is used to propagate an initial model of the moving object for refinement. Figure 5 presents the sparse reconstruction and temporal correspondence. New objects are identified per frame from the clustered sparse reconstruction and are labelled as dynamic objects. Sparse temporal dynamic feature tracking: Numerous approaches have been proposed to track moving objects in 2D using either features or optical flow. However these methods may fail in the case of occlusion, movement parallel to the view direction, large motions and moving cameras. To overcome these limitations we match the sparse 3D feature points obtained using SFD [44] from multiple wide-baseline views at each time instant. The use of sparse 3D features is robust to large non-rigid motion, occlusions and camera movement. SFD detects sparse features which are stable across wide-baseline views and consecutive time instants for a moving camera and dynamic scene. Sparse 3D feature matches between consecutive time instants are back-projected to each view. These features are matched temporally using SIFT descriptor to identify the moving points. Robust matching is achieved by enforcing multiple view consistency for the temporal feature correspondence in each view as illustrated in Figure 6. Each match must satisfy the constraint:
$$\left\| H_{t,v}(p) + u_{t,r}(p + H_{t,v}(p)) - u_{t,v}(p) - H_{t,r}(p + u_{t,v}(p)) \right\| < \epsilon \quad (1)$$
where $p$ is the feature image point in view $v$ at frame $t$, $H_{t,v}(p)$ is the disparity at frame $t$ from views $v$ and $r$, $u_{t,v}(p)$ is the temporal correspondence from frames $t$ to $t+1$ for view $v$, and $\epsilon$ is a small consistency threshold. The multi-view consistency check ensures that correspondences between any two views remain temporally consistent for successive frames. Matches in the 2D domain are sensitive to camera movement and occlusion, hence we map the set of refined matches into 3D to make the system robust to camera motion. The Frobenius norm is applied on the 3D point gradients in all directions [71] to obtain the 'net' motion at each sparse point. The 'net' motion between pairs of 3D points for consecutive time instants is ranked, and the top and bottom 5 percentile values are removed. Median filtering is then applied to identify the dynamic features. Figure 7 shows an example with moving cameras for Juggler [5].
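A sketch of how dynamic sparse features might be separated from static ones using their frame-to-frame 3D motion is given below; the percentile trimming and the motion threshold are illustrative choices, and the 2D multi-view consistency check of Equation (1) is assumed to have already filtered the temporal matches.

```python
import numpy as np
from scipy.ndimage import median_filter

def dynamic_feature_mask(pts_t, pts_t1, motion_thresh=0.05):
    """Flag sparse 3D features as dynamic from their frame-to-frame motion.

    pts_t, pts_t1 : (N, 3) matched 3D feature positions at frames t and t+1.
    Returns a boolean mask, True for features classified as dynamic."""
    motion = pts_t1 - pts_t
    # 'Net' motion magnitude per feature (norm over the x, y, z gradients).
    net = np.linalg.norm(motion, axis=1)
    # Trim the top and bottom 5 percentiles to suppress outliers.
    lo, hi = np.percentile(net, [5, 95])
    net = np.clip(net, lo, hi)
    # Median filtering smooths isolated spurious responses before thresholding.
    net = median_filter(net, size=5)
    return net > motion_thresh

# Toy example: the second half of the features moves, the first half is static.
static = np.random.rand(100, 3)
moving = np.random.rand(100, 3)
pts_t = np.vstack([static, moving])
pts_t1 = np.vstack([static + np.random.randn(100, 3) * 1e-3,
                    moving + 0.2])
print(dynamic_feature_mask(pts_t, pts_t1).sum())   # roughly 100 dynamic features
```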
Sparse-to-dense model reconstruction: Dynamic 3D feature points are used to initialize the segmentation and reconstruction of the initial model. This avoids the assumption of static backgrounds and prior scene segmentation commonly used to initialise multiple view reconstruction with a coarse visual-hull approximation [23]. Temporal coherence also provides a more accurate initialisation to overcome visual ambiguities at individual frames. Figure 8 illustrates the use of temporal coherence for reconstruction initialisation and refinement. Dynamic feature correspondence is used to identify the mesh for each dynamic object. This mesh is back-projected on each view to obtain the region of interest. Lucas-Kanade optical flow [8] is performed on the projected mask for each view in the temporal domain using the dynamic feature correspondences over time as initialization. Dense multi-view wide-baseline correspondences from the previous frame are propagated to the current frame using the information from the flow vectors to obtain dense multi-view matches in the current frame. The matches are triangulated in 3D to obtain a refined 3D dense model of the dynamic object for the current frame. For dynamic scenes, a new object may enter the scene or a new part may appear as the object moves. To allow the introduction of new objects and object parts we also use information from the cluster of sparse points for each dynamic object. The cluster corresponding to the dynamic features is identified and static points are removed. This ensures that the set of new points not only contains the dynamic features but also the unprocessed points which represent new parts of the object. These points are added to the refined sparse model of the dynamic object. To handle the new objects we detect new clusters at each time instant and consider them as dynamic regions. The sparse-to-dense initial coarse reconstruction improves the quality of segmentation and reconstruction after the refinement. Examples of the improvement in segmentation and reconstruction for the Odzemok [1] and Juggler [5] datasets are shown in Figure 9. As observed, the limbs of the people are retained by using information from the previous frames in both cases.
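The flow-based propagation can be illustrated with OpenCV's pyramidal Lucas-Kanade tracker; the window size, pyramid levels and termination criteria below are illustrative choices rather than the parameters used in the paper.

```python
import numpy as np
import cv2

def propagate_matches(img_prev, img_cur, pts_prev):
    """Propagate per-view 2D correspondences from frame t to t+1 with sparse
    pyramidal Lucas-Kanade optical flow, keeping only points whose tracking
    status is valid."""
    p0 = pts_prev.reshape(-1, 1, 2).astype(np.float32)
    p1, status, err = cv2.calcOpticalFlowPyrLK(
        img_prev, img_cur, p0, None,
        winSize=(21, 21), maxLevel=3,
        criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))
    ok = status.ravel() == 1
    return pts_prev[ok], p1.reshape(-1, 2)[ok]

# Toy example: a bright square translated by (3, 2) pixels.
img0 = np.zeros((120, 120), np.uint8); img0[40:80, 40:80] = 255
img1 = np.zeros((120, 120), np.uint8); img1[42:82, 43:83] = 255
pts = np.array([[40, 40], [79, 40], [40, 79], [79, 79]], np.float32)
src, dst = propagate_matches(img0, img1, pts)
print(np.round(dst - src, 1))   # each row close to [3. 2.]
```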
Joint object-based sparse-to-dense temporally coherent refinement of shape and segmentation
The initial reconstruction and segmentation from dense temporal feature correspondence is refined using a joint optimization framework. A novel shape constraint is introduced based on geodesic star convexity which has previously been shown to give improved performance in interactive image segmentation for structures with fine details (for example a person's fingers or hair) [25]. Shape is a powerful cue for object recognition and segmentation. Shape models represented as distance transforms from a template have been used for category-specific segmentation [33]. Some works have introduced generic connectivity constraints for segmentation, showing that obtaining a globally optimal solution under the connectivity constraint is NP-hard [64]. Veksler et al. used a shape constraint in a segmentation framework by enforcing a star convexity prior on the segmentation, and globally optimal solutions are achieved subject to this constraint [63]. The star convexity constraint ensures connectivity to seed points, and is a stronger assumption than plain connectivity. An example of a star-convex object is shown in Figure 10 along with a failure case for a non-rigid articulated object. To handle more complex objects the idea of geodesic forests with multiple star centres was introduced to obtain a globally optimal solution for interactive 2D object segmentation [25]. The main focus was to introduce shape constraints in interactive segmentation, by means of a geodesic star convexity prior. The notion of connectivity was extended from Euclidean to geodesic so that paths can bend and adapt to image data as opposed to straight Euclidean rays, thus extending visibility and reducing the number of star centers required.
The geodesic star-convexity is integrated as a constraint on the energy minimisation for joint multi-view reconstruction and segmentation [23]. (Fig. 10 caption: (a) Representation of star convexity: the left object depicts an example of a star-convex object, with a star center marked; the object on the right with a plausible star center shows deviations from star-convexity in the fine details. (b) Multiple star semantics for joint refinement: single star center based segmentation is depicted on the left and multiple star on the right.) In this work the shape constraint is automatically initialised for each view from the initial segmentation. The shape constraint is based on the geodesic distance with foreground object initialisation (seeds) as star centres to which the object shape is restricted. The union formed by multiple object seeds forms a geodesic forest. This allows complex shapes to be segmented. To automatically initialize the segmentation, we use the sparse temporal feature correspondences as star centers (seeds) to build the geodesic forest. The region outside the initial coarse reconstruction of all dynamic objects is initialized as the background seed for segmentation as shown in Figure 12. The shape of the dynamic object is restricted by this geodesic distance constraint that depends on the image gradient. Comparison with existing methods for multi-view segmentation demonstrates improvements in recovery of fine detail structure as illustrated in Figure 12.
Once we have a set of dense 3D points for each dynamic object, Poisson surface reconstruction is performed on the set of sparse points to obtain an initial coarse model of each dynamic region R, which is subsequently refined using the optimization framework (Section 3.4.1).
Optimization on initial coarse object reconstruction based on geodesic star convexity
The depth of the initial coarse reconstruction estimate is refined per view for each dynamic object at a per pixel level. View-dependent optimisation of depth is performed with respect to each camera which is robust to errors in camera calibration and initialisation. Calibration inaccuracies produce inconsistencies limiting the applicability of global reconstruction techniques which simultaneously consider all views; view-dependent techniques are more tolerant to such inaccuracies because they only use a subset of the views for reconstruction of depth from each camera view.
Our goal is to assign an accurate depth value from a set of depth values $\mathcal{D} = \{d_1, ..., d_{|\mathcal{D}|-1}, U\}$ and assign a layer label from a set of label values $\mathcal{L} = \{l_1, ..., l_{|\mathcal{L}|}\}$ to each pixel $p$ for the region $R$ of each dynamic object. Each $d_i$ is obtained by sampling the optical ray from the camera and $U$ is an unknown depth value to handle occlusions. This is achieved by optimisation of a joint cost function [23] for label (segmentation) and depth (reconstruction):
$$E(l, d) = \lambda_{data} E_{data}(d) + \lambda_{contrast} E_{contrast}(l) + \lambda_{smooth} E_{smooth}(l, d) + \lambda_{color} E_{color}(l) \quad (2)$$
where $d$ is the depth at each pixel, $l$ is the layer label for multiple objects, and the cost function terms are defined in Section 3.4.2. The equation consists of four terms: the data term is for the photo-consistency scores, the smoothness term is to avoid sudden peaks in depth and maintain consistency, and the color and contrast terms are to identify the object boundaries. Data and smoothness terms are common to solve reconstruction problems [7] and the color and contrast terms are used for segmentation [34]. This is solved subject to a geodesic star-convexity constraint on the labels $l$. A label $l$ is star convex with respect to a center $c$ if every point $p \in l$ is visible to the star center $c$ via $l$ in the image $x$, which can be expressed as an energy cost:
$$E^{\star}(l \mid x, c) = \sum_{p \in R} \sum_{q \in \Gamma_{c,p}} E_{p,q}(l_p, l_q) \quad (3)$$
$$\forall q \in \Gamma_{c,p}: \quad E_{p,q}(l_p, l_q) = \begin{cases} \infty & \text{if } l_p \neq l_q \\ 0 & \text{otherwise} \end{cases} \quad (4)$$
where $\forall p \in R : p \in l \Leftrightarrow l_p = 1$, and $\Gamma_{c,p}$ is the geodesic path joining $p$ to the star center $c$, given by:
$$\Gamma_{c,p} = \arg\min_{\Gamma \in \mathcal{P}_{c,p}} L(\Gamma) \quad (5)$$
where $\mathcal{P}_{c,p}$ denotes the set of all discrete paths between $c$ and $p$ and $L(\Gamma)$ is the length of the discrete geodesic path as defined in [25]. In the case of image segmentation the gradients in the underlying image provide information to compute the discrete paths between each pixel and the star centers, and $L(\Gamma)$ is defined below:
$$L(\Gamma) = \sum_{i=1}^{N_D - 1} \sqrt{(1 - \delta_g)\, j(\Gamma_i, \Gamma_{i+1})^2 + \delta_g\, \nabla I(\Gamma_i)^2} \quad (6)$$
where $\Gamma$ is an arbitrary parametrized discrete path with $N_D$ pixels given by $\Gamma_1, \Gamma_2, \cdots, \Gamma_{N_D}$, $j(\Gamma_i, \Gamma_{i+1})$ is the Euclidean distance between successive pixels, and the quantity $\nabla I(\Gamma_i)^2$ is a finite difference approximation of the image gradient between the points $\Gamma_i, \Gamma_{i+1}$. The parameter $\delta_g$ weights the Euclidean distance against the geodesic length. Using the above definition, one can define the geodesic distance as defined in Equation 5.
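The geodesic distance underlying Equations (5) and (6) can be sketched as a shortest-path computation on the 4-connected pixel grid, with per-step costs that mix Euclidean length and image-gradient magnitude; the value of delta_g and the toy image below are illustrative.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import dijkstra

def geodesic_distance(image, centers, delta_g=0.7):
    """Geodesic distance from a set of star centers on a 4-connected pixel
    grid; edge weights mix Euclidean step length and image-gradient magnitude
    as in Equation (6)."""
    h, w = image.shape
    idx = np.arange(h * w).reshape(h, w)
    rows, cols, weights = [], [], []
    for dy, dx in [(0, 1), (1, 0)]:
        a = idx[:h - dy, :w - dx].ravel()
        b = idx[dy:, dx:].ravel()
        grad = (image[dy:, dx:] - image[:h - dy, :w - dx]).ravel()
        wgt = np.sqrt((1 - delta_g) * 1.0 + delta_g * grad ** 2)
        rows.append(a); cols.append(b); weights.append(wgt)
    graph = coo_matrix((np.concatenate(weights),
                        (np.concatenate(rows), np.concatenate(cols))),
                       shape=(h * w, h * w))
    sources = [y * w + x for y, x in centers]
    dist = dijkstra(graph, directed=False, indices=sources)
    return dist.min(axis=0).reshape(h, w)   # minimum over all star centers

# Toy example: a dark/bright step edge makes crossing it expensive.
img = np.zeros((32, 32)); img[:, 16:] = 1.0
dmap = geodesic_distance(img, centers=[(16, 12)])
print(dmap[16, 8] < dmap[16, 16])   # True: equal grid distance from the center,
                                    # but the right-hand path crosses the edge
```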
An extension of single star-convexity is to use multiple stars to define a more general class of shapes. Introduction of multiple star centers reduces the path lengths and increases the visibility of small parts of objects like small limbs as shown in Figure 10. Hence Equation 3 is extended to multiple stars. A label $l$ is star convex with respect to center $c_i$, if every point $p \in l$ is visible to a star center $c_i$ in the set $\mathcal{C} = \{c_1, ..., c_{N_T}\}$ via $l$ in the image $x$, where $N_T$ is the number of star centers [25]. This is expressed as an energy cost:
$$E^{\star}(l \mid x, \mathcal{C}) = \sum_{p \in R} \sum_{q \in \Gamma_{c,p}} E_{p,q}(l_p, l_q) \quad (7)$$
In our case all the correct temporal sparse feature correspondences are used as star centers, hence the segmentation will include all the points which are visible to these sparse features via geodesic distances in the region $R$, thereby employing the shape constraint. Since the star centers are selected automatically, the method is unsupervised. Comparison of the segmentation constraint with geodesic multi-star convexity against no constraints and a Euclidean multi-star convexity constraint is shown in Figure 11. The figure demonstrates the usefulness of the proposed approach with an improvement in segmentation quality on non-rigid complex objects. The energy in Equation 2 is minimized as follows:
$$\min_{(l,d)} E(l, d) \ \text{s.t.}\ l \in \mathcal{S}(\mathcal{C}) \;\Leftrightarrow\; \min_{(l,d)} E(l, d) + E^{\star}(l \mid x, \mathcal{C}) \quad (8)$$
where $\mathcal{S}(\mathcal{C})$ is the set of all shapes which lie within the geodesic distances with respect to the centers in $\mathcal{C}$. Optimization of Equation 8, subject to each pixel $p$ in the region $R$ being at a geodesic distance $\Gamma_{c,p}$ from the star centers in the set $\mathcal{C}$, is performed using the α-expansion algorithm for a pixel $p$ by iterating through the set of labels in $\mathcal{L} \times \mathcal{D}$ [10]. Graph-cut is used to obtain a local optimum [9]. (Fig. 12 caption: Geodesic star convexity: a region $R$ with star centers $\mathcal{C}$ connected with geodesic distance $\Gamma_{c,p}$. Segmentation results with and without geodesic star convexity based optimization are shown on the right for the Juggler dataset.) The improvements in the results using geodesic star convexity in the framework are shown in Figure 12, and those from using temporal coherence are shown in Figure 9. Figure 13 shows improvements using the geodesic shape constraint, temporal coherence, and the combined proposed approach for the Dance2 [2] dataset.
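The effect of the hard constraint in Equations (3), (4) and (7) can be illustrated by projecting a candidate binary segmentation onto the set S(C): a pixel may stay foreground only if its whole geodesic path to the nearest star center is foreground. The sketch below reuses the same pixel-graph construction as the earlier geodesic-distance sketch and is a stand-in for how the constraint acts inside the α-expansion optimisation, not an implementation of the graph-cut solver itself.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import dijkstra

def pixel_graph(image, delta_g=0.7):
    """4-connected pixel graph with the edge weights of Equation (6)."""
    h, w = image.shape
    idx = np.arange(h * w).reshape(h, w)
    rows, cols, weights = [], [], []
    for dy, dx in [(0, 1), (1, 0)]:
        a = idx[:h - dy, :w - dx].ravel()
        b = idx[dy:, dx:].ravel()
        grad = (image[dy:, dx:] - image[:h - dy, :w - dx]).ravel()
        wgt = np.sqrt((1 - delta_g) + delta_g * grad ** 2)
        rows.append(a); cols.append(b); weights.append(wgt)
    return coo_matrix((np.concatenate(weights),
                       (np.concatenate(rows), np.concatenate(cols))),
                      shape=(h * w, h * w))

def enforce_star_convexity(mask, image, centers):
    """Keep a pixel foreground only if its whole geodesic path to the nearest
    star center stays inside the mask (the constraint that Equations (3)-(4)
    encode with infinite pairwise costs)."""
    h, w = image.shape
    srcs = [y * w + x for y, x in centers]
    dist, pred = dijkstra(pixel_graph(image), directed=False, indices=srcs,
                          return_predecessors=True)
    nearest = dist.argmin(axis=0)               # closest center per pixel
    flat = mask.ravel().copy()
    out = np.zeros_like(flat)
    for p in np.flatnonzero(flat):
        q, ok = p, True
        while q >= 0 and ok:                    # walk the path back to the center
            ok = bool(flat[q])
            q = pred[nearest[p], q]
        out[p] = ok
    return out.reshape(h, w)

# Toy example: a foreground blob not geodesically connected to the star
# center through foreground pixels is removed.
img = np.zeros((20, 20))
mask = np.zeros((20, 20), bool)
mask[8:12, 2:10] = True       # blob containing the center
mask[8:12, 14:18] = True      # detached blob
print(enforce_star_convexity(mask, img, centers=[(10, 4)]).sum())  # 32, not 48
```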
Energy cost function for joint segmentation and reconstruction
For completeness, in this section we define each of the terms in Equation 2; these are based on previous terms used for joint optimisation over depth for each pixel introduced in [42], with modification of the color matching term to improve robustness and extension to multiple labels.
Matching term: The data term for matching between views is specified as a measure of photo-consistency (Figure 14) as follows:
$$E_{data}(d) = \sum_{p \in \mathcal{P}} e_{data}(p, d_p), \qquad e_{data}(p, d_p) = \begin{cases} M(p,q) = \sum_{i \in O_k} m(p,q) & \text{if } d_p \neq U \\ M_U & \text{if } d_p = U \end{cases} \quad (9)$$
where $\mathcal{P}$ is the 4-connected neighbourhood of pixel $p$, $M_U$ is the fixed cost of labelling a pixel unknown, and $q$ denotes the projection of the hypothesised point $P$ in an auxiliary camera, where $P$ is a 3D point along the optical ray passing through pixel $p$ located at a distance $d_p$ from the reference camera. $O_k$ is the set of the $k$ most photo-consistent pairs. For textured scenes Normalized Cross Correlation (NCC) over a square window is a common choice [53]. The NCC values range from -1 to 1 and are mapped to non-negative values by using the function $1 - NCC$.
A maximum likelihood measure [40] is used in this function for confidence value calculation between the center pixel p and the other pixels q and is based on the survey on confidence measures for stereo [28]. The measure is defined as:
$$m(p, q) = \frac{\exp\left(-\frac{c_{min}}{2\sigma_i^2}\right)}{\sum_{(p,q) \in N} \exp\left(-\frac{1 - NCC(p,q)}{2\sigma_i^2}\right)} \quad (10)$$
where $\sigma_i^2$ is the noise variance for each auxiliary camera $i$; this parameter was fixed to 0.3. $N$ denotes the set of interacting pixels in $\mathcal{P}$. $c_{min}$ is the minimum cost for a pixel obtained by evaluating the function $1 - NCC(\cdot, \cdot)$ on a 15 × 15 window. Contrast term: Segmentation boundaries in images tend to align with contours of high contrast and it is desirable to represent this as a constraint in stereo matching. A consistent interpretation of segmentation-prior and contrast-likelihood is used from [34]. We used a modified version of this interpretation in our formulation to preserve the edges by using Bilateral filtering [61] instead of Gaussian filtering. The contrast term is as follows:
$$E_{contrast}(l) = \sum_{(p,q) \in N} e_{contrast}(p, q, l_p, l_q) \quad (11)$$
$$e_{contrast}(p, q, l_p, l_q) = \begin{cases} 0 & \text{if } l_p = l_q \\ \frac{1}{1+\epsilon}\left(\epsilon + \exp(-C(p,q))\right) & \text{otherwise} \end{cases} \quad (12)$$
where $\|\cdot\|$ is the $L_2$ norm and $\epsilon = 1$. The simplest choice for $C(p, q)$ would be the squared Euclidean color distance between intensities at pixels $p$ and $q$ as used in [23]. We propose a term for better segmentation as $C(p, q) = \frac{\|B(p) - B(q)\|^2}{2 \sigma_{pq}^2 d_{pq}^2}$, where $B(\cdot)$ represents the bilateral filter, $d_{pq}$ is the Euclidean distance between $p$ and $q$, and $\sigma_{pq} = \frac{\langle \|B(p) - B(q)\|^2 \rangle}{d_{pq}^2}$. This term enables removal of regions with low photo-consistency scores and weak edges and thereby helps in estimating the object boundaries.
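A sketch of the bilateral-filtered contrast cost for horizontally adjacent pixel pairs is given below. The sigma values are illustrative, the normalisation follows the usual GrabCut-style average of squared differences, and the exact constants may differ from those used in the paper.

```python
import numpy as np
import cv2

def contrast_cost(image, sigma_color=25, sigma_space=5):
    """Per-edge contrast cost e_contrast for horizontally adjacent pixels with
    different labels, using bilateral-filtered colour differences as the
    modified C(p, q) of Equation (12); sigma values are illustrative."""
    smoothed = cv2.bilateralFilter(image, d=-1,
                                   sigmaColor=sigma_color,
                                   sigmaSpace=sigma_space).astype(np.float64)
    diff = smoothed[:, 1:] - smoothed[:, :-1]      # B(p) - B(q), d_pq = 1
    sq = (diff ** 2).sum(axis=2)
    sigma_pq = sq.mean() + 1e-12                   # image-wide normaliser
    C = sq / (2.0 * sigma_pq)
    eps = 1.0
    return (1.0 / (1.0 + eps)) * (eps + np.exp(-C))  # cost when labels differ

# Strong edges give a low cost for placing a label boundary there.
img = np.zeros((50, 50, 3), np.uint8); img[:, 25:] = 255
cost = contrast_cost(img)
print(cost[:, 23].mean() > cost[:, 24].mean())   # True: the edge column is cheaper
```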
Smoothness term: This term is inspired by [23] and it ensures the depth labels vary smoothly within the object reducing noise and peaks in the reconstructed surface. This is useful when the photo-consistency score is low and insufficient to assign depth to a pixel ( Figure 14). It is defined as:
$$E_{smooth}(l, d) = \sum_{(p,q) \in N} e_{smooth}(l_p, d_p, l_q, d_q) \quad (13)$$
$$e_{smooth}(l_p, d_p, l_q, d_q) = \begin{cases} \min(|d_p - d_q|, d_{max}) & \text{if } l_p = l_q \text{ and } d_p, d_q \neq U \\ 0 & \text{if } l_p = l_q \text{ and } d_p, d_q = U \\ d_{max} & \text{otherwise} \end{cases} \quad (14)$$
$d_{max}$ is set to 50 times the size of the depth sampling step for all datasets.
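The pairwise smoothness cost of Equation (14) is simple enough to state directly; NaN is used below as a stand-in for the unknown depth label U, and d_max is the truncation constant mentioned above.

```python
import numpy as np

def e_smooth(d_p, d_q, same_label, d_max=50.0):
    """Pairwise smoothness cost of Equation (14): truncated absolute depth
    difference within an object, zero for two unknown-depth pixels of the
    same label, and the maximum penalty otherwise (e.g. across label
    boundaries). NaN stands in for the unknown depth U."""
    if same_label and not (np.isnan(d_p) or np.isnan(d_q)):
        return min(abs(d_p - d_q), d_max)
    if same_label and np.isnan(d_p) and np.isnan(d_q):
        return 0.0
    return d_max

print(e_smooth(10.0, 12.5, True))    # 2.5  (smooth within an object)
print(e_smooth(10.0, 500.0, True))   # 50.0 (truncated)
print(e_smooth(10.0, 12.5, False))   # 50.0 (label boundary)
```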
Color term: This term is computed using the negative log likelihood [9] of the color models learned from the foreground and background markers. The star centers obtained from the sparse 3D features are foreground markers and for background markers we consider the region outside the projected initial coarse reconstruction for each view. The color models use GMMs with 5 components each for Foreground/Background mixed with uniform color models [14] as the markers are sparse.
$$E_{color}(l) = \sum_{p \in \mathcal{P}} -\log P(I_p \mid l_p) \quad (15)$$
where $P(I_p \mid l_p = l_i)$ denotes the probability of pixel $p$ in the reference image belonging to layer $l_i$. (Fig. 15 caption: Comparison of segmentation on benchmark static datasets using geodesic star-convexity.)
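A sketch of the colour term is given below using scikit-learn's GaussianMixture for the five-component foreground and background models; the uniform mixing weight is an illustrative choice standing in for the mixture with uniform colour models mentioned above.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def color_neg_log_likelihood(pixels, fg_seeds, bg_seeds,
                             n_components=5, uniform_mix=0.1):
    """Negative log-likelihood colour term (Equation 15): GMMs with five
    components fitted to foreground / background seed colours, mixed with a
    uniform colour model over 8-bit RGB."""
    uniform = 1.0 / (256.0 ** 3)
    costs = []
    for seeds in (fg_seeds, bg_seeds):
        gmm = GaussianMixture(n_components=n_components,
                              covariance_type='full',
                              random_state=0).fit(seeds)
        like = np.exp(gmm.score_samples(pixels))          # p(I_p | layer)
        mixed = (1 - uniform_mix) * like + uniform_mix * uniform
        costs.append(-np.log(mixed))
    return costs[0], costs[1]      # per-pixel cost under each layer model

# Toy example: reddish pixels are cheaper under the foreground model.
rng = np.random.default_rng(0)
fg = rng.normal([200, 30, 30], 10, size=(300, 3))     # red-ish seeds
bg = rng.normal([30, 30, 200], 10, size=(300, 3))     # blue-ish seeds
test = np.array([[205., 25., 35.], [25., 35., 205.]])
fg_cost, bg_cost = color_neg_log_likelihood(test, fg, bg)
print(fg_cost[0] < bg_cost[0], fg_cost[1] > bg_cost[1])   # True True
```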
Results and Performance Evaluation
The proposed system is tested on publicly available multi-view research datasets of indoor and outdoor scenes; details of the datasets are given in Table 1. The parameters used for all the datasets are defined in Table 2. More information is available on the project website.
Multi-view segmentation evaluation
Segmentation is evaluated against the state-of-the-art methods for multi-view segmentation, Kowdle [35] and Djelouah [16], for static scenes, and against the joint segmentation and reconstruction methods Mustafa [42] (per frame) and Guillemaut [24] (using temporal information) for both static and dynamic scenes. For static multi-view data the segmentation is initialised as detailed in Section 3.1, followed by refinement using the constrained optimisation in Section 3.4.1. For dynamic scenes the full pipeline with temporal coherence is used as detailed in Section 3. Ground-truth is obtained by manually labelling the foreground for the Office, Dance1 and Odzemok datasets; for the other datasets ground-truth is available online. We initialize all approaches with the same proposed initial coarse reconstruction for a fair comparison.
To evaluate the segmentation we measure completeness as the ratio of intersection to union with the ground-truth [35]. Comparisons are shown in Table 3 and Figures 15 and 16 for static benchmark datasets. Comparisons for dynamic scene segmentations are shown in Table 4 and Figures 17 and 18. Results for multi-view segmentation of static scenes are more accurate than Djelouah, Mustafa, and Guillemaut, and comparable to Kowdle with improved segmentation of some detail such as the back of the chair.
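The completeness measure reduces to an intersection-over-union computation on binary masks, as in the short sketch below.

```python
import numpy as np

def completeness(pred_mask, gt_mask):
    """Segmentation completeness: the ratio of the intersection to the union
    of the estimated and ground-truth foreground masks."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / union if union else 1.0

gt = np.zeros((100, 100), bool); gt[20:80, 20:80] = True
pred = np.zeros((100, 100), bool); pred[25:85, 20:80] = True
print(round(completeness(pred, gt), 3))   # 0.846
```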
For dynamic scenes the geodesic star convexity based optimization together with temporal consistency gives improved segmentation of fine detail such as the legs of the table in the Office dataset and the limbs of the person in the Juggler, Magician and Dance2 datasets in Figures 17 and 18. This overcomes limitations of previous multi-view per-frame segmentation.
Reconstruction evaluation
Reconstruction results obtained using the proposed method are compared against Mustafa [42], Guillemaut [24], and Furukawa [19] for dynamic sequences. Furukawa [19] is a per-frame multi-view wide-baseline stereo approach which ranks highly on the Middlebury benchmark [53] but does not refine the segmentation.
The depth maps obtained using the proposed approach are compared against Mustafa and Guillemaut in Figure 19. The depth maps obtained using the proposed approach are smoother, with lower reconstruction noise, compared to the state-of-the-art methods. Figures 20 and 21 present qualitative and quantitative comparisons of our method with the state-of-the-art approaches.
Comparison of reconstructions demonstrates that the proposed method gives consistently more complete and accurate models. The colour maps highlight the quantitative differences in reconstruction. As far as we are aware no ground-truth data exist for dynamic scene reconstruction from real multi-view video. In Figure 21 we present a comparison with the reference mesh available with the Dance2 dataset reconstructed using a visual-hull approach. This comparison demonstrates improved reconstruction of fine detail with the proposed technique.
In contrast to all previous approaches the proposed method gives temporally coherent 4D model reconstructions with dense surface correspondence over time. The introduction of temporal coherence constrains the reconstruction in regions which are ambiguous on a particular frame such as the right leg of the juggler in Figure 20, resulting in more complete shape. Figure 22 shows three complete scene reconstructions with 4D models of multiple objects. The Juggler and Magician sequences are reconstructed from moving handheld cameras. Computational Complexity: Computation times for the proposed approach vs other methods are presented in Table 5. The proposed approach to reconstruct temporally coherent 4D models is comparable in computation time to per-frame multiple view reconstruction and gives a ∼50% reduction in computation cost compared to previous joint segmentation and reconstruction approaches using a known background. This efficiency is achieved through improved per-frame initialisation based on temporal propagation and the introduction of the geodesic star constraint in joint optimisation. Further results can be found in the supplementary material. Temporal coherence: A frame-to-frame alignment is obtained using the proposed approach as shown in Figure 23 for the Dance1 and Juggler datasets. The meshes of the dynamic object in Frame 1 and Frame 9 are color-coded in both datasets and the color is propagated to the next frame using the dense temporal coherence information. The color in different parts of the object is retained in the next frame as seen from the figure. The proposed approach obtains sequential temporal alignment which drifts with large movement in the object, hence successive frames are shown in the figure.
Limitations: As with previous dynamic scene reconstruction methods the proposed approach has a number of limitations: persistent ambiguities in appearance between objects will degrade the improvement achieved with temporal coherence; scenes with a large number of inter-occluding dynamic objects will degrade performance; the approach requires sufficient wide-baseline views to cover the scene.
Applications to immersive content production
The 4D meshes generated from the proposed approach can be used for applications in immersive content production such as FVV rendering and VR. This section demonstrates the results of these applications.
Free-viewpoint rendering
In FVV, the virtual viewpoint is controlled interactively by the user. The appearance of the reconstruction is sampled and interpolated directly from the captured camera images using cameras located close to the virtual viewpoint [57].
The proposed joint segmentation and reconstruction framework generates per-view silhouettes and a temporally coherent 4D reconstruction at each time instant of the input video sequence. This representation of the dynamic sequence is used for FVV rendering. To create FVV, a view-dependent surface texture is computed based on the user-selected virtual view. This virtual view is obtained by combining the information from camera views in close proximity to the virtual viewpoint [57]. FVV rendering gives the user the freedom to interactively choose a novel viewpoint in space to observe the dynamic scene and reproduces fine-scale temporal surface details, such as the movement of hair and clothing wrinkles, that may not be modelled geometrically. An example of a reconstructed scene and the camera configuration is shown in Figure 24.
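A minimal sketch of view-dependent blending for free-viewpoint rendering is given below: the capture cameras whose viewing directions are closest to the virtual viewpoint are blended with weights proportional to angular proximity. The choice of k and the weighting scheme are illustrative simplifications of the view-dependent texturing in [57], and visibility and resampling issues are ignored.

```python
import numpy as np

def view_blend_weights(virtual_dir, camera_dirs, k=2):
    """Blending weights for view-dependent rendering: the k capture cameras
    whose viewing directions are closest to the virtual viewpoint get weights
    proportional to angular proximity."""
    v = virtual_dir / np.linalg.norm(virtual_dir)
    C = camera_dirs / np.linalg.norm(camera_dirs, axis=1, keepdims=True)
    cos = C @ v
    nearest = np.argsort(-cos)[:k]
    w = np.clip(cos[nearest], 0, None)
    w = w / w.sum() if w.sum() > 0 else np.full(k, 1.0 / k)
    return nearest, w

def render_pixel(virtual_dir, camera_dirs, camera_colors):
    """Colour of a surface point under the virtual view as a weighted blend
    of the colours sampled from the nearest capture cameras."""
    idx, w = view_blend_weights(virtual_dir, camera_dirs)
    return (w[:, None] * camera_colors[idx]).sum(axis=0)

# Toy example: a virtual view between two cameras blends their samples.
cams = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [-1.0, 0.0, 0.0]])
colors = np.array([[255, 0, 0], [0, 255, 0], [0, 0, 255]], float)
print(render_pixel(np.array([1.0, 1.0, 0.0]), cams, colors))  # ~[127.5 127.5 0]
```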
A qualitative evaluation of images synthesised using FVV is shown in Figures 25 and 26. These demonstrate reconstruction results rendered from novel viewpoints with the proposed method against Mustafa [43] and Guillemaut [23] on publicly available datasets. This is particularly important for wide-baseline camera configurations where this technique can be used to synthesize intermediate viewpoints where it may not be practical or economical to physically locate real cameras.
Virtual reality rendering
There is a growing demand for photo-realistic content in the creation of immersive VR experiences. The 4D temporally coherent reconstructions of dynamic scenes obtained using the proposed approach enable the creation of photo-realistic digital assets that can be incorporated into VR environments using game engines such as Unity and Unreal Engine, as shown in Figure 27 for a single frame of four datasets and for a series of frames of the Dance1 dataset.
In order to efficiently render the reconstructions in a game engine for applications in VR, a UV texture atlas is extracted using the 4D meshes from the proposed approach as a geometric proxy. The UV texture atlas at each frame is applied to the models at render time in Unity for viewing in a VR headset. A UV texture atlas is constructed by projectively texturing and blending multiple view frames onto a 2D unwrapped UV texture atlas, see Figure 28. This is performed once for each static object and at each time instance for dynamic objects, allowing efficient storage and real-time playback of static and dynamic textured reconstructions within a VR headset.
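Per-vertex projective texturing can be sketched as below: mesh vertices are projected into each camera, colours are sampled bilinearly and blended with weights based on how directly each view faces the surface. A production texture atlas would instead rasterise per-texel into unwrapped UV charts and handle occlusion explicitly; the camera tuple format (projection matrix, centre, image) is an assumption of this sketch.

```python
import numpy as np

def project(P, X):
    """Project 3D points X (N, 3) with a 3x4 camera projection matrix P."""
    Xh = np.hstack([X, np.ones((len(X), 1))])
    x = (P @ Xh.T).T
    return x[:, :2] / x[:, 2:3]

def bilinear_sample(img, uv):
    """Bilinearly sample an H x W x 3 image at sub-pixel (x, y) locations."""
    h, w = img.shape[:2]
    x = np.clip(uv[:, 0], 0, w - 1.001); y = np.clip(uv[:, 1], 0, h - 1.001)
    x0, y0 = x.astype(int), y.astype(int)
    fx, fy = (x - x0)[:, None], (y - y0)[:, None]
    c00, c10 = img[y0, x0], img[y0, x0 + 1]
    c01, c11 = img[y0 + 1, x0], img[y0 + 1, x0 + 1]
    return (c00 * (1 - fx) * (1 - fy) + c10 * fx * (1 - fy)
            + c01 * (1 - fx) * fy + c11 * fx * fy)

def vertex_colors(vertices, normals, cameras):
    """Blend per-vertex colours from several views, weighting each view by
    how directly it faces the surface (a stand-in for full visibility tests).
    `cameras` is a list of (P, centre, image) tuples."""
    acc = np.zeros((len(vertices), 3)); wsum = np.zeros((len(vertices), 1))
    for P, centre, image in cameras:
        view_dir = centre - vertices
        view_dir /= np.linalg.norm(view_dir, axis=1, keepdims=True)
        w = np.clip((normals * view_dir).sum(1), 0, None)[:, None]
        acc += w * bilinear_sample(image, project(P, vertices))
        wsum += w
    return acc / np.maximum(wsum, 1e-9)

# Toy example: one camera at the origin looking down +z.
K = np.array([[100., 0., 50.], [0., 100., 50.], [0., 0., 1.]])
P = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
img = np.zeros((100, 100, 3))
img[..., 0] = np.arange(100)[None, :]          # red channel encodes x
verts = np.array([[0., 0., 5.]]); norms = np.array([[0., 0., -1.]])
print(vertex_colors(verts, norms, [(P, np.zeros(3), img)]))  # ~[[50. 0. 0.]]
```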
Conclusion
This paper introduced a novel technique to automatically segment and reconstruct dynamic scenes captured from multiple moving cameras in general dynamic uncontrolled environments without any prior on background appearance or structure. The proposed automatic initialization was used to identify and initialize the segmentation and reconstruction of multiple objects. A framework was presented for temporally coherent 4D model reconstruction of dynamic scenes from a set of wide-baseline moving cameras. The approach gives a complete model of all static and dynamic non-rigid objects in the scene. Temporal coherence for dynamic objects addresses limitations of previous per-frame reconstruction, giving improved reconstruction and segmentation together with dense temporal surface correspondence for dynamic objects. A sparse-to-dense approach is introduced to establish temporal correspondence for non-rigid objects using robust sparse feature matching to initialise dense optical flow, providing an initial segmentation and reconstruction. Joint refinement of object reconstruction and segmentation is then performed using a multiple view optimisation with a novel geodesic star convexity constraint that gives improved shape estimation and is computationally efficient. Comparison against state-of-the-art techniques for multiple view segmentation and reconstruction demonstrates significant improvement in performance for complex scenes. The approach enables reconstruction of 4D models for complex scenes which has not been demonstrated previously. | 8,667
1907.08377 | 2962977811 | We introduce DaiMoN, a decentralized artificial intelligence model network, which incentivizes peer collaboration in improving the accuracy of machine learning models for a given classification problem. It is an autonomous network where peers may submit models with improved accuracy and other peers may verify the accuracy improvement. The system maintains an append-only decentralized ledger to keep the log of critical information, including who has trained the model and improved its accuracy, when it has been improved, by how much it has improved, and where to find the newly updated model. DaiMoN rewards these contributing peers with cryptographic tokens. A main feature of DaiMoN is that it allows peers to verify the accuracy improvement of submitted models without knowing the test labels. This is an essential component in order to mitigate intentional model overfitting by model-improving peers. To enable this model accuracy evaluation with hidden test labels, DaiMoN uses a novel learnable Distance Embedding for Labels (DEL) function proposed in this paper. Specific to each test dataset, DEL scrambles the test label vector by embedding it in a low-dimension space while approximately preserving the distance between the dataset's test label vector and a label vector inferred by the classifier. It therefore allows proof-of-improvement (PoI) by peers without providing them access to true test labels. We provide analysis and empirical evidence that under DEL, peers can accurately assess model accuracy. We also argue that it is hard to invert the embedding function and thus, DEL is resilient against attacks aiming to recover test labels in order to cheat. Our prototype implementation of DaiMoN is available at this https URL. | One area of related work is on data-independent locality sensitive hashing (LSH) @cite_29 and data-dependent locality preserving hashing (LPH) @cite_27 @cite_8 . LSH hashes input vectors so that similar vectors have the same hash value with high probability. There are many algorithms in the family of LSH. One of the most common LSH methods is the random projection method called SimHash @cite_32 , which uses a random hyperplane to hash input vectors. | {
"abstract": [
"We consider localitg-preserving hashing — in which adjacent points in the domain are mapped to adjacent or nearlyadjacent points in the range — when the domain is a ddimensional cube. This problem has applications to highdimensional search and multimedia indexing. We show that simple and natural classes of hash functions are provably good for this problem. We complement this with lower bounds suggesting that our results are essentially the best possible.",
"We present two algorithms for the approximate nearest neighbor problem in high-dimensional spaces. For data sets of size n living in R d , the algorithms require space that is only polynomial in n and d, while achieving query times that are sub-linear in n and polynomial in d. We also show applications to other high-dimensional geometric problems, such as the approximate minimum spanning tree. The article is based on the material from the authors' STOC'98 and FOCS'01 papers. It unifies, generalizes and simplifies the results from those papers.",
"(MATH) A locality sensitive hashing scheme is a distribution on a family @math of hash functions operating on a collection of objects, such that for two objects x,y, PrheF[h(x) = h(y)] = sim(x,y), where sim(x,y) e [0,1] is some similarity function defined on the collection of objects. Such a scheme leads to a compact representation of objects so that similarity of objects can be estimated from their compact sketches, and also leads to efficient algorithms for approximate nearest neighbor search and clustering. Min-wise independent permutations provide an elegant construction of such a locality sensitive hashing scheme for a collection of subsets with the set similarity measure sim(A,B) = |A P B| |A P Ehe [d(h(P),h(Q))] x O(log n log log n). EMD(P, Q). .",
""
],
"cite_N": [
"@cite_27",
"@cite_29",
"@cite_32",
"@cite_8"
],
"mid": [
"2011015278",
"2147717514",
"2012833704",
""
]
} | DaiMoN: A Decentralized Artificial Intelligence Model Network | Network-based services are at the intersection of a revolution. Many centralized monolithic services are being replaced with decentralized microservices. The utility of decentralized ledgers showcases this change, and has been demonstrated by the usage of Bitcoin [1] and Ethereum [2].
The same trend towards decentralization is expected to affect the field of artificial intelligence (AI), and in particular machine learning, as well. Complex models such as deep neural networks require large amounts of computational power and resources to train. Yet, these large, complex models are being retrained over and over again by different parties for similar performance objectives, wasting computational power and resources. Currently, only a relatively small number of pretrained models such as pretrained VGG [3], ResNet [4], GoogLeNet [5], and BERT [6] are made available for reuse.
One reason for this is that the current system to share models is centralized, limiting both the number of available models and incentives for people to participate and share models. Examples of these centralized types of systems are Caffe Model Zoo [7], Pytorch Model Zoo [8], Tensorflow Model Zoo [9], and modelzoo.co [10].
In other fields seeking to incentivize community participation, cryptocurrencies and cryptographic tokens based on decentralized ledger technology (DLT) have been used [1], [2]. In addition to incentives, DLT offers the potential to support transparency, traceability, and digital trust at scale. The ledger is append-only, immutable, public, and can be audited and validated by anyone without a trusted third-party.
In this paper, we introduce DaiMoN, a decentralized artificial intelligence model network that brings the benefits of DLT to the field of machine learning. DaiMoN uses DLT and a token-based economy to incentivize people to improve machine learning models. The system will allow participants to collaborate on improving models in a decentralized manner without the need for a trusted third-party. We focus on applying DaiMoN for collaboratively improving classification models based on deep learning. However, the presented system can be used with other classes of machine learning models with minimal to no modification.
In traditional blockchains, proof-of-work (PoW) [1] incentivizes people to participate in the consensus protocol for a reward and, as a result, the network becomes more secure as more people participate. In DaiMoN, we introduce the concept of proof-of-improvement (PoI). PoI incentivizes people to participate in improving machine learning models for a reward and, as an analogous result, the models on the network become better as more people participate.
One example of a current centralized system that incentivizes data scientists to improve machine learning models for rewards is the Kaggle Competition system [11], where a sponsor puts up a reward for contestants to compete to increase the accuracy of their models on a test dataset. The test dataset inputs are given while labels are withheld to prevent competitors from overfitting to the test dataset.
In this example, a sponsor and competitors rely on Kaggle to keep the labels secret. If someone were to hack or compromise Kaggle's servers and gain access to the labels, Kaggle would be forced to cancel the competition. In contrast, because DaiMoN utilizes a DLT, it eliminates this concern to a large degree, as it does not have to rely on a centralized trusted entity.
However, DaiMoN faces a different challenge: in a decentralized ledger, all data are public. As a result, the public would be able to learn about labels in the test dataset if it were to be posted on the ledger. By knowing test labels, peers may intentionally overfit their models, resulting in models which are not generalizable. To solve the problem, we introduce a novel technique, called Distance Embedding for Labels (DEL), which can scramble the labels before putting them on the ledger. DEL preserves the error in a predicted label vector inferred by the classifier with respect to the true test label vector of the test dataset, so there is no need to divulge the true labels themselves.
With DEL, we can realize the vision of PoI over a DLT network. That is, any peer verifier can vouch for the accuracy improvement of a submitted model without having access to the true test labels. The proof is then included in a block and appended to the ledger for the record.
The structure of this paper is as follows: after introducing DEL and PoI, we introduce the DaiMoN system that provides incentive for people to participate in improving machine learning models.
The contributions of this paper include: 1) A learnable Distance Embedding for Labels (DEL) function specific to the test label vector of the test dataset for the classifier in question, and performance analysis regarding model accuracy estimation and security protection against attacks. To the best of our knowledge, DEL is the first solution which allows peers to verify model quality without knowing the true test labels. 2) Proof-of-improvement (PoI), including detailed PROVE and VERIFY procedures. 3) DaiMoN, a decentralized artificial intelligence model network, including an incentive mechanism. DaiMoN is one of the first proof-of-concept end-to-end systems in distributed machine learning based on decentralized ledger technology.
II. DISTANCE EMBEDDING FOR LABELS (DEL)
In this section, we describe our proposed Distance Embedding for Labels (DEL), a key technique by which DaiMoN can allow peers to verify the accuracy of a submitted model without knowing the labels of the test dataset. By keeping these labels confidential, the system prevents model-improving peers from overfitting their models intentionally to the test labels.
A. Learning the DEL Function with Multi-Layer Perceptron
Suppose that the test dataset for the given C-class classification problem consists of m (input, label) test pairs, and each label is an element in Q = {c ∈ Z | 1 ≤ c ≤ C}, where Z denotes the set of integers. For example, the FashionMNIST [12] classification problem has C = 10 image classes and m = 10,000 (input, label) test pairs, where for each pair, the input is a 28 × 28 greyscale image, and the label is an element in Q = {1, 2, . . . , 10}.
For a given test dataset, we consider the corresponding test label vector x_t ∈ Q^m, which is made of all labels in the test dataset. We seek an x_t-specific DEL function f : x ∈ Q^m → y ∈ R^n, where R denotes the set of real numbers, which can approximately preserve the distance from a predicted label vector x ∈ Q^m (inferred by the classification model or classifier whose accuracy we want to evaluate) to x_t, where n ≪ m. For example, we may have n = 256 and m = 10,000. The error of x, or the distance from x to x_t, is defined as
e(x, x_t) = (1/m) Σ_{i=1}^{m} 1(x_i ≠ x_{ti}), where 1(·) is the indicator function, x = {x_1, . . . , x_m} and x_t = {x_{t1}, . . . , x_{tm}}.
Finding such a distance-preserving embedding function f is generally a challenging mathematical problem. Fortunately, we have observed empirically that we can learn this x t -specific embedding function using a neural network.
More specifically, to learn an x_t-specific DEL function f, we train a multi-layer perceptron (MLP) for f as follows. For each randomly selected x ∈ Q^m, we minimize the loss:
L_θ(x, x_t) = | e(x, x_t) − d(f(x), f(x_t)) |,
where θ denotes the MLP parameters, and d(·, ·) is a modified cosine distance function defined as d(y_1, y_2) = 1 − (y_1 · y_2) / (‖y_1‖ ‖y_2‖) if y_1 · y_2 ≥ 0, and d(y_1, y_2) = 1 otherwise.
The MLP training finds a distance-preserving low-dimensional embedding function f specific to a given x_t. The existence of such an embedding is guaranteed by the Johnson-Lindenstrauss lemma [13], [14], under a more general setting which does not have the restriction about the embedding being specific to a given vector.
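As a rough illustration of the training described above, the sketch below (PyTorch; not the authors' code, and the optimiser settings and perturbation routine are assumptions consistent with the text) builds an MLP embedding and trains it so that the modified cosine distance between embeddings tracks the label error e(x, x_t).

import torch
import torch.nn as nn
import torch.nn.functional as F

m, n, C = 10000, 256, 10  # test-set size, embedding dimension, number of classes (values from the paper)

class DEL(nn.Module):
    # MLP that maps a length-m label vector to a unit-length n-dimensional embedding
    def __init__(self, m, n, hidden=1024):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(m, hidden), nn.ReLU(), nn.Linear(hidden, n))
    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

def mod_cos_dist(y1, y2):
    # modified cosine distance: 1 - cosine similarity when the dot product is non-negative, else 1
    cos = (y1 * y2).sum(-1)  # embeddings are already unit length
    return torch.where(cos >= 0, 1.0 - cos, torch.ones_like(cos))

def error(x, x_t):
    # fraction of labels that disagree with the true test label vector
    return (x != x_t).float().mean(-1)

x_t = torch.randint(1, C + 1, (m,))  # placeholder for the true test label vector
f_del = DEL(m, n)
opt = torch.optim.Adam(f_del.parameters())

for step in range(1000):
    x = x_t.clone()
    v = torch.randint(1, m, (1,)).item()        # number of labels to switch out
    idx = torch.randperm(m)[:v]
    x[idx] = torch.randint(1, C + 1, (v,))      # GENERATEDATA-style perturbation of x_t
    loss = torch.abs(error(x, x_t) - mod_cos_dist(f_del(x.float()), f_del(x_t.float())))
    opt.zero_grad()
    loss.backward()
    opt.step()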
B. Use of DEL Function
We use the trained DEL function f to evaluate the accuracy or the error of a classification model or a classifier on the given test dataset without needing to know the true test labels. As defined in the preceding section, for a given test dataset, x_t ∈ Q^m is the true test label vector of the test dataset. Given a classification model or a classifier, x ∈ Q^m is a predicted label vector consisting of labels inferred by the classifier on all the test inputs of the test dataset. A verifier peer can determine the error of the predicted label vector x without knowing the true test label vector x_t, by checking d(f(x), f(x_t)) instead of e(x, x_t). This is because these two quantities are approximately equal, as assured by the MLP training, which minimizes their absolute difference. If d(f(x), f(x_t)) is deemed to be sufficiently lower than that of the previously known model, then a verifier peer may conclude that the model has improved the accuracy on the test dataset. That is, the verifier peer uses d(f(x), f(x_t)) as a proxy for e(x, x_t).
procedure GENERATEDATA(x_t):
  pick v, the number of labels to switch out, and a random index set K of size v
  initialize x as {x_1, x_2, . . . , x_m} with x ← x_t
  for k ∈ K do
    pick a random number c in {1, 2, . . . , C}
    x_k ← c
  return x
Note that the DEL function f is x t -specific. For a different test dataset with a different test label vector x t , we will need to train another f . For most model benchmarking applications, we expect a stable test dataset; and thus we will not need to retrain f frequently.
III. TRAINING AND EVALUATION OF DEL
In this section, we evaluate how well the neural network approach described above can learn a DEL function f : Q^m → R^n with m = 10,000 and n = 256. We consider a simple multi-layer perceptron (MLP) with 1024 hidden units and a rectified linear unit (ReLU). The output of the network is normalized to a unit length. The network is trained using the Adam optimization algorithm [15]. The dataset used is FashionMNIST [12], which has C = 10 classes and m = 10,000 (input, label) test pairs. The true test label vector x_t is thus composed of these 10,000 test labels.
To generate the data to train the function, we perturb the test label vector x t by using the GENERATEDATA procedure shown. First, the procedure picks v, the number of labels in x t to switch out, and generates the set of indices K, indicating the positions of the label to replace. It then loops through the set K. For each k ∈ K, it generates the new label c to replace the old one. Note that with this procedure, the new label c can be the same as the old label. We use the procedure to generate the training dataset and the test dataset for the MLP. Figure 1 shows the convergence of the network in learning the function f . We see that as the number of epochs increases, both training and testing loss decrease, suggesting that the network is learning the function. After the neural network has been trained, we evaluate how well the learned f can preserve error in a predicted label vector x inferred by the classifier. Figure 2 shows the correlation between the error and the distance in the embedding space under f . We see that both are highly correlated.
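A minimal NumPy version of the GENERATEDATA perturbation follows, under the assumption that x_t is a NumPy integer array and that the number of labels to switch out is drawn uniformly (the paper does not specify the sampling distribution for v).

import numpy as np

def generate_data(x_t, C, rng=None):
    # perturb the true test label vector x_t by overwriting a random subset of positions
    if rng is None:
        rng = np.random.default_rng()
    m = len(x_t)
    v = rng.integers(1, m + 1)                 # number of labels to switch out (sampling scheme assumed)
    K = rng.choice(m, size=v, replace=False)   # positions of the labels to replace
    x = x_t.copy()
    x[K] = rng.integers(1, C + 1, size=v)      # the new label may coincide with the old one, as in the paper
    return x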
IV. ANALYSIS ON DEFENSE AGAINST BRUTE-FORCE ATTACKS
In this section, we show that it is difficult for an attacker to launch a brute-force attack on DEL. To learn about the test label vector x_t ∈ Q^m that produces y_t ∈ R^n under a known f : x ∈ Q^m → y ∈ R^n, the attacker's goal is to find x such that d(f(x), y_t) < ε, for a small ε. There are C^m possible instances of x to try, where C is the number of classes. Note C^m can be very large. For example, for a test dataset of 10 classes and 10,000 samples, we have C = 10, m = 10,000 and C^m = 10^10000. The attacker may use the following brute-force algorithm:
procedure BRUTEFORCEATTACK(y_t, ε, q):
  pick a random set X_0 of q values in Q^m
  for x ∈ X_0 do
    if d(f(x), y_t) < ε then return x
The success probability (α) of this attack, where at least one out of q tried instances for x is within distance ε of x_t, is
α = 1 − (1 − p)^q ≈ pq,
where p is the probability that d(f(x), y_t) < ε.
We now derive p and show its value is exceedingly small for a small ε, even under moderate values of n. Assume that the outputs of f are uniformly distributed on a unit (n−1)-sphere or, equivalently, normally distributed on an n-dimensional euclidean space [16]. Suppose that y_t = f(x_t). We align the top of the unit (n−1)-sphere at y_t. Then, p is the probability of a random vector on an (n−1)-hemisphere falling onto the cap [17], which is
p = I_{sin²β}((n − 1)/2, 1/2),
where n is the dimension of a vector, β is the angle between x_t and a vector on the sphere, and I_x(a, b) is the regularized incomplete beta function defined as:
I_x(a, b) = B(x; a, b) / B(a, b).
In the above expression, B(x; a, b) is the incomplete beta function, and B(a, b) is the beta function, defined as:
B(x; a, b) = ∫_0^x t^(a−1) (1 − t)^(b−1) dt,  B(a, b) = ∫_0^1 t^(a−1) (1 − t)^(b−1) dt.
Figure 3 shows the probability p as the distance ε from x_t decreases for different values of n. We observe that for a small ε, this probability is exceedingly low and thus, to guarantee the attacker's success (α = 1), the number of samples (q = α/p = 1/p) of x needed to be drawn randomly is very high. For instance, for a 10% error rate, ε = 0.10 and n = 32, the probability p is 6.12×10^−13 and the number of trials q needed to succeed is 1.63 × 10^12. In addition, the higher the n, the smaller the p and the larger the q. For example, for a 10% error rate, ε = 0.10 and n = 256, the probability p is 3.33 × 10^−93 and the number of trials q needed to succeed is 3.01 × 10^92.
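The cap probability p and the required number of trials q = 1/p can be reproduced with SciPy's regularized incomplete beta function. In the sketch below, the mapping from the distance threshold ε to the cap angle β (cos β = 1 − ε) is an assumption consistent with the modified cosine distance.

from scipy.special import betainc

def cap_probability(eps, n):
    # p = I_{sin^2(beta)}((n - 1)/2, 1/2), with cos(beta) = 1 - eps assumed from d(f(x), y_t) < eps
    sin2_beta = 1.0 - (1.0 - eps) ** 2
    return betainc((n - 1) / 2.0, 0.5, sin2_beta)

for n in (32, 256):
    p = cap_probability(0.10, n)
    print(n, p, 1.0 / p)  # chance of one random guess landing within eps, and trials needed (q = 1/p)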
V. ANALYSIS ON DEFENSE AGAINST INVERSE-MAPPING ATTACKS
In this section, we provide an analysis on defense against attacks attempting to recover the original test label vector x_t from y_t = f(x_t). We consider the case that the attacker tries to learn an inverse function f^{-1} : y ∈ R^n → x ∈ Q^m using a neural network. Suppose that the attacker uses a multi-layer perceptron (MLP) for this with 1024 hidden units and a rectified linear unit (ReLU). The network is trained using the Adam optimization algorithm [15]. The loss function used is the squared error function:
L_θ(y, y_t) = ‖ f^{-1}(y) − f^{-1}(y_t) ‖_2^2 .
procedure GENERATEINVERSEDATANEARBY(x_t, f):
  x ← GENERATEDATA(x_t)
  y ← f(x)
  return {y, x}
We generate the dataset using the two procedures: GENERATEINVERSEDATANEARBY and GENERATEINVERSEDATARANDOM shown. The former has the knowledge that the test label vector x_t is nearby, and the latter does not. We train the neural network to find the inverse function f^{-1} and compare how the neural network learns from these two generated datasets.
The GENERATEINVERSEDATANEARBY procedure generates a perturbation of the test label vector x_t, passes it through the function f, and returns a pair of the input y and the target output vector x used to learn the inverse function f^{-1}.
The GENERATEINVERSEDATARANDOM procedure generates a random label vector where each element of the vector has a value representing one of the C classes, passes it through the function f and returns a pair of the input y and the target output vector x used to learn the inverse function f^{-1}. Figure 4 shows the error e(f^{-1}(y_t), x_t) as the number of training epochs increases. On one hand, using the data generated without the knowledge of the test label vector x_t using the GENERATEINVERSEDATARANDOM procedure, we see that the network does not reduce the error as it trains. This means that it does not succeed in learning the inverse function f^{-1} and therefore, it will not be able to recover the test label vector x_t from its output vector y_t. On the other hand, using the data generated with the knowledge of the test label vector x_t using the GENERATEINVERSEDATANEARBY procedure, we see that the network does reduce the error as it trains and has found the test label vector x_t from its output vector y_t at around 40 epochs. This experiment gives empirical evidence that without the knowledge of x_t, it is hard to find f^{-1}.
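A compressed PyTorch sketch of the two attack datasets and the attacker's inverse network follows; the rounding of real-valued outputs back to integer labels and the batch handling are left out, and the names are illustrative rather than taken from the paper.

import torch
import torch.nn as nn

def nearby_pairs(x_t, f, C, k):
    # GENERATEINVERSEDATANEARBY: perturbations of the true label vector x_t, passed through f
    samples = []
    m = len(x_t)
    for _ in range(k):
        x = x_t.clone()
        v = torch.randint(1, m, (1,)).item()
        idx = torch.randperm(m)[:v]
        x[idx] = torch.randint(1, C + 1, (v,))
        samples.append(x)
    xs = torch.stack(samples).float()
    return f(xs).detach(), xs       # (input y, target x) pairs for training the inverse network

def random_pairs(m, f, C, k):
    # GENERATEINVERSEDATARANDOM: fully random label vectors, passed through f
    xs = torch.randint(1, C + 1, (k, m)).float()
    return f(xs).detach(), xs

# attacker's candidate inverse network (1024 hidden units and ReLU, as stated in the text)
inverse_net = nn.Sequential(nn.Linear(256, 1024), nn.ReLU(), nn.Linear(1024, 10000))
# it would then be trained on these (y, x) pairs with the squared-error loss from Section V
# and evaluated by the error against x_t, as in Figure 4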
VI. PROOF-OF-IMPROVEMENT
In this section, we introduce the concept of proof-of-improvement (PoI), a key mechanism supporting DaiMoN. PoI allows a prover P to convince a verifier V that a model M improves the accuracy or reduces the error on the test dataset via the use of DEL, without the knowledge of the true test label vector x_t. PoI is characterized by the PROVE and VERIFY procedures shown below. As a part of the system setup, a prover P has a public and private key pair (pk_P, sk_P) and a verifier V has a public and private key pair (pk_V, sk_V). Both are given our learnt DEL function f(·), y_t = f(x_t), the set of m test inputs Z = {z_i}_{i=1}^m, and the current best distance d_c achieved by submitted models, according to the distance function d(·, ·) described in Section II.
Let digest(·) be the message digest function, such as an IPFS hash [18], MD5 [19], SHA [20] or CRC [21]. Let {·}_sk denote a message signed by a secret key sk.
Let M be the classification model for which P will generate a PoI proof π_P. The model M takes an input and returns the corresponding predicted class label. The PROVE procedure called by a prover P generates the digest of M and calculates the DEL function output of the predicted labels of the test dataset Z by M. The results are concatenated to form the body of the proof, which is then signed using the prover's secret key sk_P. The PoI proof π_P shows that the prover P has found a model M that could reduce the error on a test dataset.
procedure PROVE(M) → π_P:
  g ← digest(M)
  y ← f(M(Z))
  return {g, y, pk_P}_{sk_P}
To verify, the verifier V runs the following procedure to generate the verification proof π_V, the proof that the verifier V has verified the PoI proof π_P generated by the prover P. The procedure first verifies the signature of the proof with the public key π_P.pk_P of the prover P. Second, it verifies that the digest is correct by computing digest(M) and comparing it with the digest in the proof π_P.g. Third, it verifies the DEL function output by computing f(M(Z)) and comparing it with the DEL function output in the proof π_P.y. Lastly, it verifies the distance by computing d(π_P.y, y_t) and sees if it is lower than the current best with a margin of δ ≥ 0, where δ is an improvement margin commonly agreed upon among peers. If all are verified, the verifier generates the body of the verification proof by concatenating the PoI proof π_P with the current best distance d_c and δ. Then, the body is signed with the verifier's secret key sk_V, and the verification proof is returned.
procedure VERIFY(M, π_P, d_c, δ) → π_V:
  verify the signature of π_P with π_P.pk_P
  verify the digest: π_P.g = digest(M)
  verify the DEL function output: π_P.y = f(M(Z))
  verify the distance: d(π_P.y, y_t) < d_c − δ, with δ ≥ 0
  if all verified then
    return {π_P, d_c, δ, pk_V}_{sk_V}
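Schematically, PROVE and VERIFY can be rendered in Python as below. The sign and verify_sig callables stand in for whatever signature scheme the peers use, the serialization of the proof body is an assumption, and the near-equality check on the DEL output is a pragmatic stand-in for the exact comparison in the procedure.

import hashlib
import pickle
import numpy as np

def digest(model_bytes):
    # stand-in for the message digest function (IPFS hash, MD5, SHA or CRC in the paper)
    return hashlib.sha256(model_bytes).hexdigest()

def prove(model, model_bytes, f, Z, pk_P, sign):
    # PROVE: digest the model, compute the DEL output of its predicted labels, sign the body
    body = {"g": digest(model_bytes), "y": f(model(Z)), "pk_P": pk_P}
    return {"body": body, "sig": sign(pickle.dumps(body))}

def verify(model, model_bytes, proof, f, Z, y_t, d, d_c, delta, pk_V, sign, verify_sig):
    # VERIFY: check signature, digest, DEL output and distance improvement, then sign a verification proof
    body = proof["body"]
    ok = (verify_sig(pickle.dumps(body), proof["sig"], body["pk_P"])
          and body["g"] == digest(model_bytes)
          and np.allclose(body["y"], f(model(Z)))
          and d(body["y"], y_t) < d_c - delta)
    if ok:
        v_body = {"poi": proof, "d_c": d_c, "delta": delta, "pk_V": pk_V}
        return {"body": v_body, "sig": sign(pickle.dumps(v_body))}
    return None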
VII. THE DAIMON SYSTEM
In this section, we describe the DaiMoN system that incentivizes participants to improve the accuracy of models solving a particular problem. In DaiMoN, each classification problem has its own DaiMoN blockchain with its own token. An append-only ledger maintains the log of improvements for that particular problem. A problem defines inputs and outputs which machine learning models will solve for. We call this the problem definition. For example, a classification problem on the FashionMNIST dataset [12] may define an input z as a 1-channel 1 × 28 × 28 pixel input whose values range from 0 to 1, and an output x as a 10-class label ranging from 1 to 10:
{z ∈ R^{1×28×28} | 0 ≤ z ≤ 1}, {x ∈ Z | 1 ≤ x ≤ 10}.
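In code, such a problem definition is only a lightweight specification; for example (the field names below are illustrative and not part of the paper):

# illustrative sketch of a problem definition record
problem_definition = {
    "input": {"shape": (1, 28, 28), "dtype": "float32", "range": (0.0, 1.0)},  # 1-channel 28x28 pixels in [0, 1]
    "output": {"classes": list(range(1, 11))},                                 # 10-class label, 1..10
}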
Each problem is characterized by a set of test dataset tuples. Each tuple (Z, f, y_t) consists of the test inputs Z = {z_i}_{i=1}^m, the DEL function f, and the DEL function output y_t = f(x_t) on the true test label vector x_t.
A participant is identified by its public key pk with the associated private key sk. There are six different roles in DaiMoN: problem contributors, model improvers, validators, block committers, model runners, and model users. A participant can be one or more of these roles. We now detail each role below:
Problem contributors contribute test dataset tuples to a problem. They can create a problem by submitting a problem definition and the first test dataset tuple. See Section VII-B on how additional test tuples can be added.
Model improvers compete to improve the accuracy of the model according to the problem definition defined in the chain. A model improver generates a PoI proof for the improved model and submit it.
Validators validate PoI proofs, generate a verification proof and submit it as a vote on the PoI proofs. Beyond being a verifier for verifying PoI, a validator submits the proof as a vote.
Block committers create a block from the highest voted PoI proof and its associated verification proofs and commit the block.
Model runners run the inference on the latest model given inputs and return outputs and get paid in tokens.
Model users request an inference computation from model runners with an input and pay for the computation in tokens.
A. The Chain
Each chain consists of two types of blocks: Problem blocks and Improvement blocks. A Problem block contains information, including but not limited to: the block number, the hash of the parent block, the problem definition, the test dataset tuples, and the block hash. An Improvement block contains information, including but not limited to: the block number, the hash of the parent block, the PoI proof π_P from model improver P, verification proofs {π_V} from validators {V}, and the block hash. The chain must start with a Problem block that defines the problem, followed by Improvement blocks that record the improvements made for the problem.
B. The Consensus
After a DaiMoN blockchain is created, there is a problem definition period T p . In this period, any participant is allowed to add test dataset tuples into the mix. After a time period T p has passed, a block committer commits a Problem block containing all test dataset tuples submitted within the period to the chain.
After the Problem block is committed, a competition period T_b begins. During this period, a model improver can submit the PoI proof of his/her model. A validator then validates the PoI proof and submits a verification proof as a vote. For each PoI proof, the associated number of unique verification proofs is tracked. At the end of each competition period, a block committer commits an Improvement block containing the model with the highest number of unique verification proofs, and the next competition period begins.
C. The Reward
Each committed block rewards tokens to the model improver and validators. The following reward function, or similar ones, can be used:
R(d, d_c) = I_{1−d}(a, 1/2) − I_{1−d_c}(a, 1/2),
where I_x(a, b) is the regularized incomplete beta function, d is the distance of the block, d_c is the current best distance so far, and a is a parameter to allow for the adjustment to the shape of the reward function. Figure 5 shows the reward function as the distance d decreases, for different current best distances d_c and a = 3. We see that more and more tokens are rewarded as the distance d reaches 0 and the improvement gap d_c − d increases.
Each validator is given a position as it submits the validation proof: the s-th validator to submit the validation proof is given the s-th position. The validator's reward is the model improver's reward scaled by 2^{−s}:
R(d, d_c) · 2^{−s}, where s ∈ Z_{>0}
is the validator's position, and Z_{>0} denotes the set of integers greater than zero. This factor encourages validators to compete to be the first one to submit the validation proof for the PoI proof in order to maximize the reward. Two is used as the base of the scaling factor here since Σ_{s=1}^{∞} 2^{−s} = 1.
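Both the improver reward and the validator scaling can be sketched directly with SciPy (a = 3 as in Figure 5); this illustrates the formula only, not the on-chain contract logic.

from scipy.special import betainc

def reward(d, d_c, a=3.0):
    # R(d, d_c) = I_{1-d}(a, 1/2) - I_{1-d_c}(a, 1/2)
    return betainc(a, 0.5, 1.0 - d) - betainc(a, 0.5, 1.0 - d_c)

def validator_reward(d, d_c, s, a=3.0):
    # the s-th validator to vote receives the improver's reward scaled by 2^(-s)
    return reward(d, d_c, a) * 2.0 ** (-s)

print(reward(0.2, 0.3), validator_reward(0.2, 0.3, s=1))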
D. The Market
In order to increase the value of the token of each problem, there should be demand for the token. One way to generate demand for the token is to allow it to be used as a payment for inference computation based on the latest model committed to the chain. To this end, model runners host the inference computation. Each inference call requested by users is paid for by the token of the problem chain that the model solves. Model runners automatically upgrade the model, as better ones are committed to the chain. The price of each call is set by the market according to the demand and supply of each service. This essentially determines the value of the token, which can later be exchanged with other cryptocurrencies or tokens on the exchanges. As the demand for the service increases, so will the token value of the problem chain.
Model runners periodically publish their latest services containing the price for the inference computation of a particular model. Once a service is selected, model users send a request with the payment according to the price specified. Model runners then verify the request from the user, run the computation, and return the result.
To keep a healthy ecosystem among peers, a reputation system may be used to recognize good model runners and users, and reprimand bad model runners and users. Participants in the network can upvote good model runners and users and downvote bad model runners and users.
E. System Implementation
DaiMoN is implemented on top of the Ethereum blockchain [2]. In this way, we can utilize the security and decentralization of the main Ethereum network. The ERC-20 [22] token standard is used to create a token for each problem chain. Tokens are used as an incentive mechanism and can be exchanged. Smart contracts are used to manage the DaiMoN blockchain for each problem.
Identity of a participant is represented by its Ethereum address. Every account on Ethereum is defined by a pair of keys, a private key and public key. Accounts are indexed by their address, which is the last 20 bytes of the Keccak [20] hash of the public key.
The position of the verifiers is recorded and verified on the Ethereum blockchain. As a verifier submits a vote on the smart contract on the Ethereum blockchain, his/her position is recorded and used to calculate the reward for the verifier.
The InterPlanetary File System (IPFS) [18] is used to store and share data files that are too big to store on the Ethereum blockchain. Files such as test input files and model files are stored on IPFS and only their associated IPFS hashes are stored in the smart contracts. Those IPFS hashes are then used by participants to refer to the files and download them. Note that since storing files on IPFS makes it public, it is possible that an attacker can find and submit the model before the creator of the model. To prevent this, model improvers must calculate the IPFS hash of the model and register it with the smart contract on the Ethereum blockchain before making the model available on IPFS.
VIII. DISCUSSION
One may compare a DEL function to an encoder of an autoencoder [23]. An autoencoder consists of an encoder and a decoder. The encoder maps an input to a lower-dimensional embedding which is then used by the decoder to reconstruct the original input. Although a DEL function also reduces the dimensionality of the input label vector, it does not require the embedding to reconstruct the original input and it adds the constraint that the output of the function should preserve the error or the distance of the input label vector to a specific test label vector x_t. In fact, for our purpose of hiding the test labels, we do not want the embedding to reconstruct the original input test labels. Adding a constraint to prevent the reconstruction may help further defense against the inverse-mapping attacks and can be explored in future work.
Note that a model with closer distance to the test label vector (x t ) in the embedding space may not have better accuracy. This results in a reward being given to a model with worse accuracy than the previous best model. This issue can be mitigated by increasing the margin δ. With the appropriate δ setting, this discrepancy should be minimal. Note also that as the model gets better, it will be easier for an attacker to recover the true test label vector (x t ). To mitigate this issue, multiple DEL and reward functions may be used at various distance intervals.
By building DaiMoN on top of Ethereum, we inherit the security and decentralization of the main Ethereum network as well as the limitations thereof. We now discuss the security of each individual DaiMoN blockchain. An attack to consider is the Sybil attack on the chain, in which an attacker tries to create multiple identities (accounts) and submit multiple verification proofs on an invalid PoI proof. Since each problem chain is managed using Ethereum smart contracts, there is an inherent gas cost associated with every block submission. Therefore, it may be costly for an attacker to overrun the votes of other validators. The more validators that chain has, the higher the cost is. In addition, this can be thwarted by increasing the cost of each submission by requiring validators to also pay Ether as they make the submission. All in all, if the public detects signs of such behavior, they can abandon the chain altogether. If there is not enough demand for the token, the value of the tokens will depreciate and the attacker will have less incentive to attack.
Since we use IPFS in the implementation, we are also limited by the limitations of IPFS: files stored on IPFS are not guaranteed to be persistent. In this case, problem contributors and model improvers need to make sure that their test input files and model files are available to be downloaded on IPFS. In addition to IPFS, other decentralized file storage systems that support persistent storage at a cost such as Filecoin [24], Storj [25], etc. can be used.
X. CONCLUSION
We have introduced DaiMoN, a decentralized artificial intelligence model network. DaiMoN uses a Distance Embedding for Labels (DEL) function. DEL embeds the predicted label vector inferred by a classifier in a low-dimensional space where its error or its distance to the true test label vector of the test dataset is approximately preserved. Under the embedding, DEL hides test labels from peers while allowing them to assess the accuracy improvement that a model makes. We present how to learn DEL, evaluate its effectiveness, and present the analysis of DEL's resilience against attacks. This analysis shows that it is hard to launch a brute-force attack or an inverse-mapping attack on DEL without knowing a priori a good estimate on the location of the test label vector, and that the hardness can be increased rapidly by increasing the dimension of the embedding space.
DEL enables proof-of-improvement (PoI), the core of DaiMoN. Participants use PoI to prove that they have found a model that improves the accuracy of a particular problem. This allows the network to keep an append-only log of model improvements and reward the participants accordingly. DaiMoN uses a reward function that scales according to the increase in accuracy a new model has achieved on a particular problem. We hope that DaiMoN will spur distributed collaboration in improving machine learning models.
XI. ACKNOWLEDGMENT
This work is supported in part by the Air Force Research Laboratory under agreement number FA8750-18-1-0112 and a gift from MediaTek USA. | 5,738 |
1901.11467 | 2951047368 | An obstacle to the development of many natural language processing products is the vast amount of training examples necessary to get satisfactory results. The generation of these examples is often a tedious and time-consuming task. This paper proposes a method to transform the sentiment of sentences in order to limit the work necessary to generate more training data. This means that one sentence can be transformed to an opposite sentiment sentence and should reduce by half the work required in the generation of text. The proposed pipeline consists of a sentiment classifier with an attention mechanism to highlight the short phrases that determine the sentiment of a sentence. Then, these phrases are changed to phrases of the opposite sentiment using a baseline model and an autoencoder approach. Experiments are run on both the separate parts of the pipeline as well as on the end-to-end model. The sentiment classifier is tested on its accuracy and is found to perform adequately. The autoencoder is tested on how well it is able to change the sentiment of an encoded phrase and it was found that such a task is possible. We use human evaluation to judge the performance of the full (end-to-end) pipeline and that reveals that a model using word vectors outperforms the encoder model. Numerical evaluation shows that a success rate of 54.7 is achieved on the sentiment change. | Sentiment analysis is a task in NLP that aims to predict the sentiment of a sentence @cite_26 . The task can range from a binary classification task where the aim is to predict whether a document is positive or negative to a fine-grained task with multiple classes. In sentiment analysis, state-of-the-art results have been achieved using neural network architectures such as convolutional neural networks @cite_9 and recurrent neural networks @cite_1 . Variants of RNNs, such as LSTMs and GRUs, have also been used to great success @cite_6 . | {
"abstract": [
"We report on a series of experiments with convolutional neural networks (CNN) trained on top of pre-trained word vectors for sentence-level classification tasks. We show that a simple CNN with little hyperparameter tuning and static vectors achieves excellent results on multiple benchmarks. Learning task-specific vectors through fine-tuning offers further gains in performance. We additionally propose a simple modification to the architecture to allow for the use of both task-specific and static vectors. The CNN models discussed herein improve upon the state of the art on 4 out of 7 tasks, which include sentiment analysis and question classification.",
"Sentiment analysis and opinion mining is the field of study that analyzes people's opinions, sentiments, evaluations, attitudes, and emotions from written language. It is one of the most active research areas in natural language processing and is also widely studied in data mining, Web mining, and text mining. In fact, this research has spread outside of computer science to the management sciences and social sciences due to its importance to business and society as a whole. The growing importance of sentiment analysis coincides with the growth of social media such as reviews, forum discussions, blogs, micro-blogs, Twitter, and social networks. For the first time in human history, we now have a huge volume of opinionated data recorded in digital form for analysis. Sentiment analysis systems are being applied in almost every business and social domain because opinions are central to almost all human activities and are key influencers of our behaviors. Our beliefs and perceptions of reality, and the choices we make, are largely conditioned on how others see and evaluate the world. For this reason, when we need to make a decision we often seek out the opinions of others. This is true not only for individuals but also for organizations. This book is a comprehensive introductory and survey text. It covers all important topics and the latest developments in the field with over 400 references. It is suitable for students, researchers and practitioners who are interested in social media analysis in general and sentiment analysis in particular. Lecturers can readily use it in class for courses on natural language processing, social media analysis, text mining, and data mining. Lecture slides are also available online.",
"Document level sentiment classification remains a challenge: encoding the intrinsic relations between sentences in the semantic meaning of a document. To address this, we introduce a neural network model to learn vector-based document representation in a unified, bottom-up fashion. The model first learns sentence representation with convolutional neural network or long short-term memory. Afterwards, semantics of sentences and their relations are adaptively encoded in document representation with gated recurrent neural network. We conduct document level sentiment classification on four large-scale review datasets from IMDB and Yelp Dataset Challenge. Experimental results show that: (1) our neural model shows superior performances over several state-of-the-art algorithms; (2) gated recurrent neural network dramatically outperforms standard recurrent neural network in document modeling for sentiment classification. 1",
"In this paper, we propose a novel neural network model called RNN Encoder-Decoder that consists of two recurrent neural networks (RNN). One RNN encodes a sequence of symbols into a fixed-length vector representation, and the other decodes the representation into another sequence of symbols. The encoder and decoder of the proposed model are jointly trained to maximize the conditional probability of a target sequence given a source sequence. The performance of a statistical machine translation system is empirically found to improve by using the conditional probabilities of phrase pairs computed by the RNN Encoder-Decoder as an additional feature in the existing log-linear model. Qualitatively, we show that the proposed model learns a semantically and syntactically meaningful representation of linguistic phrases."
],
"cite_N": [
"@cite_9",
"@cite_26",
"@cite_1",
"@cite_6"
],
"mid": [
"2949541494",
"2108646579",
"2250966211",
"2950635152"
]
} | Towards Controlled Transformation of Sentiment in Sentences | In its current state text generation is not able to capture the complexities of human language, making the generated text often of poor quality. (Hu et al., 2017) suggested a method to control the generation of text combining variational autoencoders and holistic attribute discriminators. Although the sentiment generated by their method was quite accurate, the generated sentences were still far from perfect. The short sentences generated by their model seem adequate, but the longer the sentence, the more the quality drops.
Most research tries to generate sentences completely from scratch and while this is one way to generate text, it might also be a possibility to only change parts of a sentence to transform the sentiment. In longer sentences, not every word is important for determining the sentiment of the sentence, so most words can be left unchanged while trying to transform the sentiment.
The model proposed in this work tries to determine the critical part of a sentence and transforms only this to a different sentiment. This method should change the sentiment of the sentence while keeping the grammatical structure and semantic meaning of the sentence intact. To find the critical part of a sentence the model uses an attention mechanism on a sentiment classifier. The phrases that are deemed important by the sentiment classifier are then encoded in an encoder-decoder network and transformed to a new phrase. This phrase is then inserted in the original sentence to create the new sentence with the opposite sentiment.
THE MODEL
In this paper two different pipelines are considered. Both pipelines contain a sentiment classifier with an attention mechanism to extract phrases from the input documents. The difference is that pipeline 1 (which can be seen in Figure 1) uses an encoder to encode the extracted phrases and find the closest phrase with the opposite sentiment in the vector space. This phrase is then either inserted into the sentence or the vector representation of this phrase is decoded and the resulting phrase is inserted into the sentence. Pipeline 2 (as seen in Figure 2) finds the words in the extracted phrases that are most likely to determine the sentiment and replaces these words with similar words of the opposite sentiment using word vectors. In the next sections all individual parts of the pipeline will be explained.
Sentiment classification with attention
To find the phrases that determine the sentiment of a sentence a sentiment classification model with attention is used. The network used is the network defined by (Yang et al., 2016). This model is chosen because in sequence modeling recurrent neural networks have shown to give better classification results than other models, such as convolutional neural networks (Yin et al., 2017). Recurrent neural networks have the added benefit of easily allowing for implementation of attention mechanisms, which are able to focus on small parts of a sequence at a time. The attention mechanism is used to extract the sequences that determine the sentiment. This classifier consists of a word- and sentence encoder and both a word and sentence level attention layer. The word encoder is a bidirectional GRU that encodes information about the whole sentence centered around word w_it with t ∈ [1, T]. The sentence encoder does the same thing, but for a sentence s_i which is constructed by taking an aggregate of the word vectors and the attention values composing the sentence. The sentence is then encoded by another bidirectional GRU and another attention layer to compose a document vector. This document vector can then be used to determine the sentiment of the document through the fully-connected layer.
Figure 1: The model using the closest phrase approach including an example
Attention for phrase extraction
The attention proposed by (Yang et al., 2016) is used to find the contribution to the sentiment that each individual word brings. The attention they propose feeds the word representation obtained from the word-level encoder through a one-layer MLP to get a hidden representation of the word. This hidden representation is then, together with a word context vector, fed to a softmax function to get the normalized attention weight. After the attention weights have been computed, words with a sufficiently high attention weight are extracted and passed to the encoder-decoder model.
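A minimal PyTorch sketch of this word-level attention, together with the thresholding used to pick out the phrase words, is given below; the threshold value is an assumption, since the text only states that words with a sufficiently high attention weight are extracted.

import torch
import torch.nn as nn

class WordAttention(nn.Module):
    # one-layer MLP plus a word context vector, in the style of Yang et al. (2016)
    def __init__(self, hidden_dim=300, att_dim=50):   # 300 = two directions of a GRU with hidden size 150
        super().__init__()
        self.proj = nn.Linear(hidden_dim, att_dim)
        self.context = nn.Parameter(torch.randn(att_dim))
    def forward(self, h):                              # h: (seq_len, hidden_dim) word encodings from the bi-GRU
        u = torch.tanh(self.proj(h))                   # hidden representation of each word
        alpha = torch.softmax(u @ self.context, dim=0) # normalized attention weights
        return alpha

def extract_phrase_words(words, alpha, threshold=0.05):
    # keep the words whose attention weight is sufficiently high (threshold assumed)
    return [w for w, a in zip(words, alpha.tolist()) if a > threshold]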
In the first step of the examples in Figures 1 and 2 the attention mechanism highlights a few phrases in a movie review. These are the phrases that are then later passed on to either the encoder or to the word changing part of the pipeline.
Sentiment transformation
For the sentiment transformation two approaches are proposed. The first approach is based on an encoder which encodes the extracted phrases into fixed-length vectors. The second approach transforms words from the extracted phrases using word embeddings and an emotion lexicon.
Encoder-Decoder approach
The encoder is the first technique used to transform a sentence to one with a different sentiment. This variant uses an encoder model and a transformation based on the distance between two phrases in the latent space. The first part of this model is to encode the phrases extracted by the attention in the latent space. The encoder used is similar to that proposed by .
The difference is that this model is not trained on two separate datasets, but on one set of phrases both as input and output, where the goal is that the model can echo the sequence. Both the encoder and the decoder are one-directional GRUs that are trained together to echo the sequence. First, the encoder encodes a sequence to a fixed-length vector representation. The decoder should then reconstruct the original sequence from this fixed-length vector representation. This is trained through maximizing the log-likelihood
max_θ (1/N) Σ_{n=1}^{N} log p_θ(y_n | x_n),
where θ is the set of model parameters and (x_n, y_n) is a pair of input and output sequences from the training set. In our case x_n and y_n are the same, as we want the encoder-decoder to echo the initial sequence. On top of this we also store a sentiment label with the encoded sequences to use them in the next step of the model.
Figure 2: The model using the word vector approach including an example
Afterwards these phrases are encoded into a fixed-length vector. The model then selects the vector closest to the current latent, fixed-length representation (but taking the one with a different sentiment label) using the cosine distance:
min_{y ∈ Y} (x · y) / (||x|| · ||y||),
where x is the encoded input phrase and Y is the set of all encoded vectors in the latent space with the opposite label from x. The closest vector is then decoded into a new phrase, which is inserted into the sentence to replace the old phrase. Obviously, the decoder model used here is the same model which is trained to encode the selected phrases.
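The retrieval step of the encoder approach amounts to a nearest-neighbour search over encoded phrases carrying the opposite sentiment label. A NumPy sketch follows, reading the selection criterion as minimising the cosine distance (variable names are illustrative):

import numpy as np

def closest_opposite(x_vec, phrase_vecs, phrase_labels, x_label):
    # candidates are the encoded phrases carrying the opposite sentiment label
    mask = phrase_labels != x_label
    cand = phrase_vecs[mask]
    cos = cand @ x_vec / (np.linalg.norm(cand, axis=1) * np.linalg.norm(x_vec) + 1e-12)
    dist = 1.0 - cos                                   # cosine distance to the encoded input phrase
    return np.flatnonzero(mask)[np.argmin(dist)]       # index of the phrase to decode and insert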
Word vector approach
The word vector approach also starts by extracting the relevant phrases from the document using the attention mechanism explained in the attention section. However, while the encoder approach uses an encoder to encode the phrases into the latent space, this approach is based on word vectors (Mikolov et al., 2013). First off, the words that are important to the sentiment are selected using the following formulas:
∀x ∈ X : p(neg|x) > 0.65 ∨ p(pos|x) > 0.65
where neg means the sentiment of the sentence is negative, pos means the sentiment of the sentence is positive, x is the current word and X is the set of all the words selected to be replaced. Threshold 0.65 was chosen empirically by inspecting different values.
The replacement word is selected as the closest word in the latent space according to the cosine distance. The candidates to replace the word are found using the EmoLex emotion lexicon (Mohammad and Turney, 2013). A negative and a positive word list are created based on this lexicon using the annotations. The negative list contains all words marked as negative and the positive list contains all words marked as positive. When a phrase is positive the closest word in the negative list is chosen and vice-versa when the phrase is negative. The chosen words are then replaced and the new phrase is inserted into the original sentence. Combining both the attention mechanism and the word embeddings was done because it was much faster than going through the whole sequence and replacing words according to the same formula.
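Putting the word-vector branch together, a rough sketch follows; the probability interface p_pos, the word-vector lookup and the EmoLex-derived word lists are placeholders for components described in the text.

import numpy as np

def replace_words(phrase, p_pos, word_vecs, pos_list, neg_list, threshold=0.65):
    # p_pos(word) is assumed to return the classifier's probability that the word signals positive sentiment
    out = []
    for w in phrase:
        p = p_pos(w)
        if max(p, 1.0 - p) > threshold and w in word_vecs:
            pool = neg_list if p > 0.5 else pos_list              # swap towards the opposite sentiment
            cands = [c for c in pool if c in word_vecs]
            if cands:
                cos = [word_vecs[w] @ word_vecs[c] /
                       (np.linalg.norm(word_vecs[w]) * np.linalg.norm(word_vecs[c]) + 1e-12) for c in cands]
                out.append(cands[int(np.argmax(cos))])            # closest word of the opposite polarity
                continue
        out.append(w)
    return out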
DATA
The data used come from the large movie review dataset (Maas et al., 2011). This dataset consists of a training set containing 50000 unlabeled, 12500 positive and 12500 negative reviews and a test set containing 12500 positive and 12500 negative reviews. The experiments in this paper were performed only using the positive and negative reviews, which meant the training set contained 25000 reviews and the test set also contained 25000 reviews.
In terms of preprocessing the text was converted to lower case and any punctuation was removed. Lowercasing was done to avoid the same words being treated differently because they were at the beginning of the sentence and punctuation was removed so that the punctuation would not be included in tokens or treated as its own token.
To see if the model would transfer well to another dataset the experiments were repeated on the Rotten Tomatoes review dataset (Pang and Lee, 2005). This dataset was limited to only full sentences and the labels were changed to binary classification labels. Only the instances that were negative and positive were included and the instances that were somewhat negative or somewhat positive (labels 1, 2 and 3) were ignored.
EXPERIMENTS
In order to properly test the proposed method, experiments were run on both the individual parts of the approach and on the whole (end-to-end) pipeline. Evaluating the full pipeline was difficult as different existing metrics seemed insufficient because of the nature of the project. For example the BLEU-score would always be high since most of the original sequence is left intact, and it has been criticized in the past (Novikova et al., 2017). The percentage of sentences that changed sentiment according to the sentiment classifier was used as a metric, but as sentiment classifiers do not have an accuracy of 100%, this number is a rough estimate. Lastly, a random subset of 15 sentences was given to a test group of 4 people, who were asked whether they deemed the sentences correct and considered the sentiment changed.
Sentiment classifier
To test the performance of the sentiment classifier individually, the proposed attention RNN model was trained on the 25000 training reviews of the imdb dataset. The sentiment classifier was then used to predict the sentiment on 2000 test reviews of the same dataset. These 2000 reviews were randomly selected. The accuracy of the sentiment classifier was tested because the classifier will later be used in testing the full model and the performance of the encoder-decoder, which makes the performance of the sentiment classifier important to report.
The attention component, for which this sentiment classification model was chosen, is more difficult to test. The performance of the attention will be tested by the experiments with the full model. The higher the score for sentiment change is, the better the attention mechanism will have functioned, as for a perfect score the attention mechanism will need to have picked out all phrases that contribute towards the sentiment.
The parameters we use consist of an embedding dimension of 300, a size of the hidden layer of 150, an attention vector of size 50 and a batch size of 256. The network makes use of randomly initialized word vectors that are updated during training. The network is trained on the positive and negative training reviews of the imdb dataset and the accuracy is measured using the test reviews. Loss is determined using cross entropy and the optimizer is the Adam optimizer (Kingma and Ba, 2014).
The proposed sentiment classifier was tested in terms of accuracy on the imdb dataset and in comparison to state-of-the-art models. The numbers used to compare the results are reported by (McCann et al., 2017). Table 1 shows that the result of the sentiment classifier used in this paper is slightly below the state of the art. On the imdb dataset we achieve an accuracy (on a binary sentiment classification task) of 89.6 percent, a bit lower than the state of the art on the same dataset. However, the reason this algorithm is used is its ability to highlight the parts of the sentence that contribute most towards the sentiment.
Table 1: Accuracy on the imdb binary sentiment classification task
This Model: 89.6
SA-LSTM (Dai and Le, 2015): 92.8
bmLSTM (Radford et al., 2017): 92.9
TRNN (Dieng et al., 2016): 93.8
oh-LSTM (Johnson and Zhang, 2016): 94.1
Virtual (Miyato et al., 2016): 94.1
Autoencoder
The autoencoder's purpose is to encode short phrases in the latent space so that the closest phrase of the opposite class (sentiment) can be found. To test the performance of the autoencoder for the task presented in this paper, phrases were extracted from the test reviews of the imdb dataset and were then encoded using the autoencoder. The closest vector was then decoded and the sentiment of the resulting sequence was determined using the sentiment classifier described in this paper. In the results section the percentage of phrases that changed sentiment is reported. This experiment, which assesses the performance of the autoencoder, is conducted to better interpret the results of the full model. For training, an embedding dimension of 100 and a hidden layer of size 250 are used. The word vectors are pretrained GloVe embedding vectors. The network is trained on the training set of phrases acquired by the attention network, using a negative log likelihood loss function and a stochastic gradient descent optimizer with a learning rate of 0.01. The training objective is to echo the phrases in the training set. After encoding, the sentiment label of the phrase is saved together with the fixed-length vector, which makes it possible to later find the closest vector of the opposite sentiment.
Table 2 shows the success rate of the autoencoder in terms of changing the sentiment of the phrases extracted by the attention mechanism. After being decoded, 50.8 percent of the phrases are classified as a different sentiment from the one they were originally assigned. The number reported is the ratio of sentences that got assigned a different sentiment by the sentiment classifier after the transformation. Furthermore, some of the phrases in the extracted set had a length of only one or two words, for which it is hard to predict the sentiment. These short sequences were included because in the final model they would also be extracted, so they do have an impact on the performance. The model was also tested while leaving out the shorter phrases, both on phrases longer than two words and on phrases longer than five words, which slightly increases the success rate.
Table 3: Examples of transformed sentences generated by the encoder-decoder model
Original sequence -> Generated sequence (sentiment change)
no movement , no yuks , not much of anything -> no movement , no yuks , not much of anything (no)
this is one of polanski 's best films -> this is one of polanski 's lowest films (yes)
most new movies have a bright sheen -> most new movies a unhappy moments (yes w/error)
gollum 's ' performance ' is incredible -> gollum 's ' performance ' not well received ! (yes)
as a singular character study , it 's perfect -> as a give study , it 's perfect (no w/error)
Table 4: Examples of transformed sentences (same as Table 3) using the word vectors approach
Original sequence -> Generated sequence (sentiment change)
no movement , no yuks , not much of anything -> obvious movement, obvious yuks, much of kind (no w/error)
this is one of polanski 's best films -> this is one of polanski 's worst films (yes)
most new movies have a bright sheen -> most new movies have a bleak ooze (yes w/error)
gollum 's ' performance ' is incredible -> gollum 's ' performance ' is unbelievable (undefined)
as a singular character study , it 's perfect -> as a singular character examination it 's crisp (yes w/error)
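For clarity, the success rate reported in Table 2 amounts to a loop of the following shape; `encode`, `decode`, `nearest_opposite`, and `classify_sentiment` are placeholder names for the components described above, not published functions.

```python
def sentiment_change_rate(phrases, labels, encode, decode, nearest_opposite, classify_sentiment):
    """Fraction of extracted phrases whose predicted sentiment flips after replacement (sketch)."""
    changed = 0
    for phrase, label in zip(phrases, labels):
        z = encode(phrase)                      # fixed-length vector of the extracted phrase
        z_opp = nearest_opposite(z, label)      # closest stored vector carrying the opposite label
        new_phrase = decode(z_opp)              # decoded replacement phrase
        if classify_sentiment(new_phrase) != label:
            changed += 1
    return changed / len(phrases)
```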
Full model
The full pipeline was tested in two ways. First, sentences were evaluated by a group of human evaluators to determine whether the generated sentences were grammatically and semantically correct on top of the change in sentiment. Next, the change in sentiment was tested using the sentiment classifier described by (Yang et al., 2016). To find how well the full pipeline performed in changing the sentiment of sequences, a basic human evaluation was performed (as a first experiment) on a subset of generated sequences based on sentences from the Rotten Tomatoes dataset (Pang and Lee, 2005). The reason for choosing this dataset is that its sentences are shorter, so the readability was better than when using the imdb dataset. The setup was as follows: reviewers were shown the original sentence and the two variants generated by two versions of the algorithm, the encoder-decoder model and the word vector model. Reviewers were then asked to rate each generated sentence on a scale from 1 to 5, both in terms of grammatical and semantic correctness and in terms of the extent to which the sentiment had changed. The rating of grammatical and semantic correctness allowed the reviewers to indicate whether a sentence was still intelligible after the change was performed. The rating of the sentiment change was an indication of how much the sentiment changed towards the opposite sentiment. In this case, a perfect change of sentiment from positive to negative (or vice versa) would be rated as 5 and the sentiment remaining exactly the same would be rated as 1. Reviewers also had the option to mark that a sentence had not changed, as that would not change the sentiment but would give a perfect score in correctness. After all reviewers had reviewed all sentences, the average score for both correctness and sentiment change was calculated for both approaches. The number of times a sentence had not changed was also reported. The two approaches were then compared to see which approach performed better.
Tables 3 and 4 show how some sentences are transformed using the encoder-decoder and the word vectors approach respectively, along with information on whether the sentiment was changed (and whether that happened while introducing a grammatical or semantic error). The word vectors approach seems to do a better job at replacing words correctly, however in both cases some errors are introduced. Table 5 shows the results obtained by the human evaluation. The numbers for grammatical correctness and sentiment change are the average ratings that sentences received from the evaluation panel. The last row shows the percentage of sentences that did not change at all. The test group indicated that the encoder approach changed the sentence in slightly more than 60% of the cases, while the word vectors approach changed the sentence in more than 90% of the cases. This is possibly caused by the number of unknown tokens in the sentences, which caused problems for the encoder, but not for the word vector approach, as it would just ignore the unknown tokens and move on. Another explanation for this result is that the attention mechanism only highlights single words and, without the help of an emotion lexicon, these single replacements often do not change the sentiment of the sentence, as can be seen in Table 3.
Table 5 also shows that both the grammatical quality of the sentences and the sentiment change achieved by the word vectors approach were rated higher than those of the sentences generated by the encoder approach. Inspecting the changes made to sentences shows that the replacements in the word vector approach were more sensible in terms of word type and sentiment. The cause of this is that the word vector approach makes use of an emotion lexicon, which ensures that each inserted word is of the desired sentiment. The encoder approach relies on the fixed-length vector and the sentiment of the whole encoded phrase as determined by the sentiment classifier, allowing for less control over the exact sentiment of the inserted phrase.
Question                  Encoder   Word vectors
Grammatical correctness   2.7/5     4.4/5
Sentiment change          3.5/5     4.3/5
Unchanged                 36.67%    6.67%
Table 5: Average score on a scale from 1 to 5 for correctness and sentiment change that reviewers assigned to the sentences, and ratio of sentences that remained unchanged.
The goal of the second experiment was to measure the ratio of sentences whose sentiment changed compared to the original one. This evaluation also gives a more objective measure of how well the model does what it is supposed to do, namely changing the sentiment.
Model          Rotten Tomatoes   IMDB
Decoder        53.6              53.7
Word vectors   49.1              53.3
Table 6: Percentage of sentences whose sentiment changed according to the sentiment classifier.
Table 6 shows that on the Rotten Tomatoes corpus (Pang and Lee, 2005) the accuracy in changing the sentiment is around 5% higher for the decoder than for the word vectors approach, while the two are similar on the imdb corpus (Maas et al., 2011). It should be noted that the performance of the encoder-decoder is almost identical for both datasets.
DISCUSSION
The model proposed in this paper transforms the sentiment of a sentence by replacing short phrases that determine the sentiment. Extraction of these phrases is done using a sentiment classifier with an attention mechanism. These phrases are then encoded using an encoder-decoder network that is trained on these phrases. After the phrases are encoded, the closest phrase of the opposite sentiment is found and replaced into the original sentence. Alternatively, the extracted phrase is transformed by finding the closest word of the opposite sentiment using an emotion lexicon to assign sentiment to words.
The model was evaluated both on its individual parts and end-to-end, using both automatic metrics and human evaluation. When testing the success rate of changing the sentiment, the best results were achieved with the encoder-decoder method, which scores more than 50% on both datasets. Human evaluation gave the best scores to the word vector based model, both in terms of the change of sentiment and in terms of grammatical and semantic correctness.
The results raise the issue of how language is interpreted by humans and machines. Our method seems to create samples that sufficiently change the sentiment according to the classifier (thus the goal of creating new data points is met), but this is not confirmed by the human evaluators, who judge the actual content of the sentence. It should be noted that the human evaluation experiments need to be extended once the approach is more robust in order to confirm these results.
As for future work, we plan to introduce a more carefully assembled dataset for the encoder-decoder approach, since that might improve the quality of the decoder output. The prominence of unknown tokens in the data suggests that experimenting with a character-level implementation might improve the results, as such algorithms can often infer the meaning of all words, regardless of how often they appear in the data. This could solve the problem of not all words being present in the vocabulary which results in many unknown tokens in the generated sentences.
Finally, another way to improve the model is to have the encoder-decoder better capture the phrases in the latent space. We based our model on an existing encoder-decoder architecture but used fewer hidden units (due to hardware limitations), which may have led to learning a worse representation of the phrases in the latent space. Using more hidden units (or a different architecture for the encoder/decoder model) is a way to further explore how results could be improved. | 3,956
1901.11467 | 2951047368 | An obstacle to the development of many natural language processing products is the vast amount of training examples necessary to get satisfactory results. The generation of these examples is often a tedious and time-consuming task. This paper proposes a method to transform the sentiment of sentences in order to limit the work necessary to generate more training data. This means that one sentence can be transformed into a sentence of the opposite sentiment, which should reduce by half the work required in the generation of text. The proposed pipeline consists of a sentiment classifier with an attention mechanism to highlight the short phrases that determine the sentiment of a sentence. Then, these phrases are changed to phrases of the opposite sentiment using a baseline model and an autoencoder approach. Experiments are run both on the separate parts of the pipeline and on the end-to-end model. The sentiment classifier is tested on its accuracy and is found to perform adequately. The autoencoder is tested on how well it is able to change the sentiment of an encoded phrase and it was found that such a task is possible. We use human evaluation to judge the performance of the full (end-to-end) pipeline, which reveals that a model using word vectors outperforms the encoder model. Numerical evaluation shows that a success rate of 54.7 is achieved on the sentiment change. | The attention mechanism was first proposed for the task of machine translation @cite_0 . Attention allows a network to 'focus' on one part of the sentence at a time. This is done by keeping an additional vector that contains information on the impact of individual words. Attention has also been used in other tasks within the NLP area, such as document classification @cite_22 , sentiment analysis @cite_13 and teaching machines to read @cite_5 . | {
"abstract": [
"Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.",
"Teaching machines to read natural language documents remains an elusive challenge. Machine reading systems can be tested on their ability to answer questions posed on the contents of documents that they have seen, but until now large scale training and test datasets have been missing for this type of evaluation. In this work we define a new methodology that resolves this bottleneck and provides large scale supervised reading comprehension data. This allows us to develop a class of attention based deep neural networks that learn to read real documents and answer complex questions with minimal prior knowledge of language structure.",
"",
"We propose a hierarchical attention network for document classification. Our model has two distinctive characteristics: (i) it has a hierarchical structure that mirrors the hierarchical structure of documents; (ii) it has two levels of attention mechanisms applied at the wordand sentence-level, enabling it to attend differentially to more and less important content when constructing the document representation. Experiments conducted on six large scale text classification tasks demonstrate that the proposed architecture outperform previous methods by a substantial margin. Visualization of the attention layers illustrates that the model selects qualitatively informative words and sentences."
],
"cite_N": [
"@cite_0",
"@cite_5",
"@cite_13",
"@cite_22"
],
"mid": [
"2133564696",
"2949615363",
"2562607067",
"2470673105"
]
} | Towards Controlled Transformation of Sentiment in Sentences | In its current state text generation is not able to capture the complexities of human language, making the generated text often of poor quality. (Hu et al., 2017) suggested a method to control the generation of text combining variational autoencoders and holistic attribute discriminators. Although the sentiment generated by their method was quite accurate, the generated sentences were still far from perfect. The short sentences generated by their model seem adequate, but the longer the sentence, the more the quality drops.
Most research tries to generate sentences completely from scratch, and while this is one way to generate text, another possibility is to change only parts of a sentence in order to transform the sentiment. In longer sentences, not every word is important for determining the sentiment, so most words can be left unchanged while trying to transform it.
The model proposed in this work tries to determine the critical part of a sentence and transforms only this to a different sentiment. This method should change the sentiment of the sentence while keeping the grammatical structure and semantic meaning of the sentence intact. To find the critical part of a sentence the model uses an attention mechanism on a sentiment classifier. The phrases that are deemed important by the sentiment classifier are then encoded in an encoder-decoder network and transformed to a new phrase. This phrase is then inserted in the original sentence to create the new sentence with the opposite sentiment.
THE MODEL
In this paper two different pipelines are considered. Both pipelines contain a sentiment classifier with an attention mechanism to extract phrases from the input documents. The difference is that pipeline 1 (which can be seen in Figure 1) uses an encoder to encode the extracted phrases and find the closest phrase with the opposite sentiment in the vector space. This phrase is then either inserted into the sentence or the vector representation of this phrase is decoded and the resulting phrase is inserted into the sentence. Pipeline 2 (as seen in Figure 2) finds the words in the extracted phrases that are most likely to determine the sentiment and replaces these words with similar words of the opposite sentiment using word vectors. In the next sections all individual parts of the pipeline will be explained.
Sentiment classification with attention
To find the phrases that determine the sentiment of a sentence, a sentiment classification model with attention is used. The network used is the one defined by (Yang et al., 2016). This model is chosen because, in sequence modeling, recurrent neural networks have been shown to give better classification results than other models, such as convolutional neural networks (Yin et al., 2017). Recurrent neural networks have the added benefit of easily allowing for the implementation of attention mechanisms, which are able to focus on small parts of a sequence at a time. The attention mechanism is used to extract the sequences that determine the sentiment. This classifier consists of a word- and sentence-level encoder and both a word- and sentence-level attention layer. The word encoder is a bidirectional GRU that encodes information about the whole sentence centered around word w_it with t ∈ [1, T]. The sentence encoder does the same thing, but for a sentence s_i, which is constructed by taking an aggregate of the word vectors and the attention values composing the sentence. The sentences are then encoded by another bidirectional GRU and another attention layer to compose a document vector. This document vector can then be used to determine the sentiment of the document through the fully-connected layer.
Figure 1: The model using the closest phrase approach, including an example.
Attention for phrase extraction
The attention proposed by (Yang et al., 2016) is used to find the contribution that each individual word makes to the sentiment. The attention they propose feeds the word representation obtained from the word-level encoder through a one-layer MLP to get a hidden representation of the word. This hidden representation is then, together with a word context vector, fed to a softmax function to obtain the normalized attention weight. After the attention weights have been computed, words with a sufficiently high attention weight are extracted and passed to the encoder-decoder model.
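As a rough illustration of this extraction step, attention weights could be turned into phrases as sketched below; the threshold value and the grouping of consecutive high-weight words are assumptions, since the text only states that "sufficiently high" weights are kept.

```python
def extract_phrases(tokens, attn_weights, threshold=0.05):
    """Group consecutive words whose attention weight exceeds a threshold (sketch)."""
    phrases, current = [], []
    for token, weight in zip(tokens, attn_weights):
        if weight > threshold:
            current.append(token)            # word contributes strongly to the sentiment
        elif current:
            phrases.append(" ".join(current))
            current = []
    if current:
        phrases.append(" ".join(current))
    return phrases

# Example: high weights on the last two tokens of
# ["this", "movie", "was", "absolutely", "wonderful"] would yield "absolutely wonderful".
```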
In the first step of the examples in Figures 1 and 2, the attention mechanism highlights a few phrases in a movie review. These are the phrases that are later passed on to either the encoder or the word-changing part of the pipeline.
Sentiment transformation
For the sentiment transformation two approaches are proposed. The first approach is based on an encoder which encodes the extracted phrases into fixed-length vectors. The second approach transforms words from the extracted phrases using word embeddings and an emotion lexicon.
Encoder-Decoder approach
The encoder is the first technique used to transform a sentence into one with a different sentiment. This variant uses an encoder model and a transformation based on the distance between two phrases in the latent space. The first part of this model encodes the phrases extracted by the attention mechanism in the latent space. The encoder used is similar to previously proposed encoder-decoder architectures.
The difference is that this model is not trained on two separate datasets, but on one set of phrases both as input and output, where the goal is that the model can echo the sequence. Both the encoder and the decoder are one-directional GRUs that are trained together to echo the sequence. First, the encoder encodes a sequence to a fixed-length vector representation. The decoder should then reconstruct the original sequence from this fixed-length vector representation. This is trained through maximizing the log-likelihood
\max_{\theta} \; \frac{1}{N} \sum_{n=1}^{N} \log p_{\theta}(y_n \mid x_n)
where θ is the set of model parameters and (x_n, y_n) is a pair of input and output sequences from the training set. In our case x_n and y_n are the same, as we want the encoder-decoder to echo the initial sequence. On top of this, we also store a sentiment label with the encoded sequences to use them in the next step of the model.
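A minimal sketch of such an echo objective in PyTorch is given below, assuming the dimensions reported in the experiments (embedding size 100, hidden size 250, negative log likelihood loss, SGD with learning rate 0.01); start-token handling and batching details are simplified and all names are illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn

class PhraseAutoencoder(nn.Module):
    """One-directional GRU encoder and decoder trained to echo a phrase (sketch)."""
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=250):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)    # initialized from GloVe in the paper
        self.encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.decoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, token_ids):
        emb = self.embed(token_ids)
        _, h = self.encoder(emb)                            # h: fixed-length phrase representation
        # Shift decoder inputs right by one step (simple stand-in for a start-of-sequence token).
        dec_in = torch.cat([torch.zeros_like(emb[:, :1]), emb[:, :-1]], dim=1)
        dec_out, _ = self.decoder(dec_in, h)
        return torch.log_softmax(self.out(dec_out), dim=-1), h

model = PhraseAutoencoder(vocab_size=50_000)                # vocabulary size is an assumption
criterion = nn.NLLLoss()                                    # negative log likelihood, as in the text
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)    # SGD with learning rate 0.01

def train_step(token_ids):
    """token_ids: (batch, seq); the model learns to reproduce its own input."""
    optimizer.zero_grad()
    log_probs, _ = model(token_ids)                         # (batch, seq, vocab)
    loss = criterion(log_probs.transpose(1, 2), token_ids)  # target is the input phrase itself
    loss.backward()
    optimizer.step()
    return loss.item()
```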
Figure 2: The model using the word vector approach, including an example.
Afterwards these phrases are encoded into a fixed-length vector. The model then selects the vector closest to the current latent, fixed-length representation (but taking the one with a different sentiment label) using the cosine distance:
\min_{y \in Y} \; \frac{x \cdot y}{\lVert x \rVert \, \lVert y \rVert}
where x is the encoded input phrase and Y is the set of all encoded vectors in the latent space with the opposite label from x. The closest vector is then decoded into a new phrase, which is inserted into the sentence to replace the old phrase. The decoder used here is the decoder of the same model that was trained to echo the selected phrases.
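A sketch of this lookup is shown below; it assumes that "closest" means maximizing cosine similarity over the stored vectors carrying the opposite label, and the function and variable names are illustrative.

```python
import numpy as np

def closest_opposite(query_vec, stored_vecs, stored_labels, query_label):
    """Index of the most similar stored vector with the opposite sentiment label (sketch)."""
    best_idx, best_sim = None, -np.inf
    for i, (vec, label) in enumerate(zip(stored_vecs, stored_labels)):
        if label == query_label:
            continue                                  # only consider the opposite sentiment
        sim = np.dot(query_vec, vec) / (np.linalg.norm(query_vec) * np.linalg.norm(vec))
        if sim > best_sim:
            best_idx, best_sim = i, sim
    return best_idx
```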
Word vector approach
The word vector approach also starts by extracting the relevant phrases from the document using the attention mechanism explained in the attention section. However, while the encoder approach uses an encoder to embed the phrases into the latent space, this approach is based on word vectors (Mikolov et al., 2013). First, the words that are important to the sentiment are selected using the following criterion:
∀x ∈ X : p(neg|x) > 0.65 ∨ p(pos|x) > 0.65
where neg means the sentiment of the sentence is negative, pos means the sentiment of the sentence is positive, x is the current word and X is the set of all the words selected to be replaced. Threshold 0.65 was chosen empirically by inspecting different values.
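In code, this selection step could look roughly like the following; the per-word probability representation is an assumption, since the paper does not specify how the classifier's word-level scores are exposed.

```python
def select_replaceable(words, sentiment_probs, threshold=0.65):
    """Keep words whose positive or negative class probability exceeds the threshold (sketch)."""
    return [w for w, p in zip(words, sentiment_probs)
            if p["pos"] > threshold or p["neg"] > threshold]

# Example: with threshold 0.65, a word with p["pos"] = 0.93 is selected, one with p["pos"] = 0.51 is not.
```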
The replacement word is selected as the closest word in the embedding space according to the cosine distance. The candidates to replace the word are found using the EmoLex emotion lexicon (Mohammad and Turney, 2013). A negative and a positive word list are created from the annotations in this lexicon: the negative list contains all words marked as negative and the positive list contains all words marked as positive. When a phrase is positive, the closest word in the negative list is chosen, and vice versa when the phrase is negative. The chosen words are then replaced and the new phrase is inserted into the original sentence. Combining the attention mechanism and the word embeddings was done because it is much faster than going through the whole sequence and replacing words according to the same criterion.
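A sketch of the replacement step under these assumptions is given below; `word_vectors`, `positive_words`, and `negative_words` are placeholders for the loaded embeddings and the two lists derived from EmoLex, not published resources of the paper.

```python
import numpy as np

def replace_word(word, phrase_is_positive, word_vectors, positive_words, negative_words):
    """Swap a sentiment-bearing word for its nearest neighbour of the opposite polarity (sketch)."""
    candidates = negative_words if phrase_is_positive else positive_words
    if word not in word_vectors:
        return word                                   # unknown words are left untouched
    w = word_vectors[word]
    best_word, best_sim = word, -np.inf
    for cand in candidates:
        if cand not in word_vectors:
            continue
        c = word_vectors[cand]
        sim = np.dot(w, c) / (np.linalg.norm(w) * np.linalg.norm(c))
        if sim > best_sim:
            best_word, best_sim = cand, sim
    return best_word
```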
DATA
The data used come from the large movie review dataset (Maas et al., 2011). This dataset consists of a training set containing 50000 unlabeled, 12500 positive and 12500 negative reviews, and a test set containing 12500 positive and 12500 negative reviews. The experiments in this paper were performed using only the positive and negative reviews, which means the training set contained 25000 reviews and the test set also contained 25000 reviews.
In terms of preprocessing, the text was converted to lower case and any punctuation was removed. Lowercasing was done to avoid the same words being treated differently because they were at the beginning of a sentence, and punctuation was removed so that it would not be included in tokens or treated as its own token.
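This preprocessing amounts to something like the following sketch; the exact tokenization used in the paper is not specified, so whitespace splitting is an assumption.

```python
import string

def preprocess(review):
    """Lowercase a review, strip punctuation, and split it into tokens (sketch)."""
    review = review.lower()
    review = review.translate(str.maketrans("", "", string.punctuation))
    return review.split()

# Example: preprocess("This movie was GREAT!") -> ["this", "movie", "was", "great"]
```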
To see if the model would transfer well to another dataset, the experiments were repeated on the Rotten Tomatoes review dataset (Pang and Lee, 2005). This dataset was limited to full sentences only and the labels were changed to binary classification labels. Only the instances that were negative or positive were included, and the instances that were somewhat negative or somewhat positive (labels 1, 2 and 3) were ignored.
EXPERIMENTS
In order to properly test the proposed method, experiments were run on both the individual parts of the approach and on the whole (end-to-end) pipeline. Evaluating the full pipeline was difficult, as existing metrics seemed insufficient given the nature of the project. For example, the BLEU score would always be high since most of the original sequence is left intact, and the metric has been criticized in the past (Novikova et al., 2017). The percentage of sentences that changed sentiment according to the sentiment classifier was used as a metric, but since sentiment classifiers do not have an accuracy of 100%, this number is a rough estimate. Lastly, a random subset of 15 sentences was given to a test group of 4 people, who were asked whether they deemed the sentences correct and considered the sentiment changed.
Sentiment classifier
To test the performance of the sentiment classifier individually, the proposed attention RNN model was trained on the 25000 training reviews of the imdb dataset. The sentiment classifier was then used to predict the sentiment on 2000 test reviews of the same dataset. These 2000 reviews were randomly selected. The accuracy of the sentiment classifier was tested because the classifier will later be used in testing the full model and the performance of the encoder-decoder, which makes the performance of the sentiment classifier important to report.
The attention component, for which this sentiment classification model was chosen, is more difficult to test. The performance of the attention mechanism is assessed through the experiments with the full model: the higher the score for sentiment change, the better the attention mechanism has functioned, since a perfect score requires the attention mechanism to have picked out all phrases that contribute towards the sentiment.
The parameters we use consist of an embedding dimension of 300, a hidden layer of size 150, an attention vector of size 50 and a batch size of 256. The network makes use of randomly initialized word vectors that are updated during training. The network is trained on the positive and negative training reviews of the imdb dataset and the accuracy is measured on the test reviews. Loss is determined using cross-entropy and the optimizer is the Adam optimizer (Kingma and Ba, 2014).
The proposed sentiment classifier was tested in terms of accuracy on the imdb dataset and compared to state-of-the-art models. The numbers used to compare the results are reported by (McCann et al., 2017). Table 1 shows that the result of the sentiment classifier used in this paper is slightly below the state of the art. On the imdb dataset we achieve an accuracy (on a binary sentiment classification task) of 89.6 percent, a bit lower than the state of the art on the same dataset. However, the reason this algorithm is used is its ability to highlight the parts of the sentence that contribute most towards the sentiment, which only this model offers.

Model                              Accuracy
This Model                         89.6
SA-LSTM (Dai and Le, 2015)         92.8
bmLSTM (Radford et al., 2017)      92.9
TRNN (Dieng et al., 2016)          93.8
oh-LSTM (Johnson and Zhang, 2016)  94.1
Virtual (Miyato et al., 2016)      94.1
Table 1: Accuracy (in percent) on the imdb binary sentiment classification task.
Autoencoder
The autoencoder's purpose is to encode short phrases in the latent space so that the closest phrase of the opposite class (sentiment) can be found. To test the performance of the autoencoder for the task presented in this paper, phrases were extracted from the test reviews of the imdb dataset and were then encoded using the autoencoder. The closest vector was then decoded and the sentiment of the resulting sequence was determined using the sentiment classifier described in this paper. In the results section the percentage of phrases that changed sentiment is reported. This experiment, which assesses the performance of the autoencoder, is conducted to better interpret the results of the full model. For training, an embedding dimension of 100 and a hidden layer of size 250 are used. The word vectors are pretrained GloVe embedding vectors. The network is trained on the training set of phrases acquired by the attention network, using a negative log likelihood loss function and a stochastic gradient descent optimizer with a learning rate of 0.01. The training objective is to echo the phrases in the training set. After encoding, the sentiment label of the phrase is saved together with the fixed-length vector, which makes it possible to later find the closest vector of the opposite sentiment.
Table 2 shows the success rate of the autoencoder in terms of changing the sentiment of the phrases extracted by the attention mechanism. After being decoded, 50.8 percent of the phrases are classified as a different sentiment from the one they were originally assigned. The number reported is the ratio of sentences that got assigned a different sentiment by the sentiment classifier after the transformation. Furthermore, some of the phrases in the extracted set had a length of only one or two words, for which it is hard to predict the sentiment. These short sequences were included because in the final model they would also be extracted, so they do have an impact on the performance. The model was also tested while leaving out the shorter phrases, both on phrases longer than two words and on phrases longer than five words, which slightly increases the success rate.
Table 3: Examples of transformed sentences generated by the encoder-decoder model
Original sequence -> Generated sequence (sentiment change)
no movement , no yuks , not much of anything -> no movement , no yuks , not much of anything (no)
this is one of polanski 's best films -> this is one of polanski 's lowest films (yes)
most new movies have a bright sheen -> most new movies a unhappy moments (yes w/error)
gollum 's ' performance ' is incredible -> gollum 's ' performance ' not well received ! (yes)
as a singular character study , it 's perfect -> as a give study , it 's perfect (no w/error)
Table 4: Examples of transformed sentences (same as Table 3) using the word vectors approach
Original sequence -> Generated sequence (sentiment change)
no movement , no yuks , not much of anything -> obvious movement, obvious yuks, much of kind (no w/error)
this is one of polanski 's best films -> this is one of polanski 's worst films (yes)
most new movies have a bright sheen -> most new movies have a bleak ooze (yes w/error)
gollum 's ' performance ' is incredible -> gollum 's ' performance ' is unbelievable (undefined)
as a singular character study , it 's perfect -> as a singular character examination it 's crisp (yes w/error)
Full model
The full pipeline was tested in two ways. First, sentences were evaluated by a group of human evaluators to determine whether the generated sentences were grammatically and semantically correct on top of the change in sentiment. Next, the change in sentiment was tested using the sentiment classifier described by (Yang et al., 2016). To find how well the full pipeline performed in changing the sentiment of sequences, a basic human evaluation was performed (as a first experiment) on a subset of generated sequences based on sentences from the Rotten Tomatoes dataset (Pang and Lee, 2005). The reason for choosing this dataset is that its sentences are shorter, so the readability was better than when using the imdb dataset. The setup was as follows: reviewers were shown the original sentence and the two variants generated by two versions of the algorithm, the encoder-decoder model and the word vector model. Reviewers were then asked to rate each generated sentence on a scale from 1 to 5, both in terms of grammatical and semantic correctness and in terms of the extent to which the sentiment had changed. The rating of grammatical and semantic correctness allowed the reviewers to indicate whether a sentence was still intelligible after the change was performed. The rating of the sentiment change was an indication of how much the sentiment changed towards the opposite sentiment. In this case, a perfect change of sentiment from positive to negative (or vice versa) would be rated as 5 and the sentiment remaining exactly the same would be rated as 1. Reviewers also had the option to mark that a sentence had not changed, as that would not change the sentiment but would give a perfect score in correctness. After all reviewers had reviewed all sentences, the average score for both correctness and sentiment change was calculated for both approaches. The number of times a sentence had not changed was also reported. The two approaches were then compared to see which approach performed better.
Tables 3 and 4 show how some sentences are transformed using the encoder-decoder and the word vectors approach respectively, along with information on whether the sentiment was changed (and whether that happened while introducing a grammatical or semantic error). The word vectors approach seems to do a better job at replacing words correctly, however in both cases some errors are introduced. Table 5 shows the results obtained by the human evaluation. The numbers for grammatical correctness and sentiment change are the average ratings that sentences received from the evaluation panel. The last row shows the percentage of sentences that did not change at all. The test group indicated that the encoder approach changed the sentence in slightly more than 60% of the cases, while the word vectors approach changed the sentence in more than 90% of the cases. This is possibly caused by the number of unknown tokens in the sentences, which caused problems for the encoder, but not for the word vector approach, as it would just ignore the unknown tokens and move on. Another explanation for this result is that the attention mechanism only highlights single words and, without the help of an emotion lexicon, these single replacements often do not change the sentiment of the sentence, as can be seen in Table 3.
Table 5 also shows that both the grammatical quality of the sentences and the sentiment change achieved by the word vectors approach were rated higher than those of the sentences generated by the encoder approach. Inspecting the changes made to sentences shows that the replacements in the word vector approach were more sensible in terms of word type and sentiment. The cause of this is that the word vector approach makes use of an emotion lexicon, which ensures that each inserted word is of the desired sentiment. The encoder approach relies on the fixed-length vector and the sentiment of the whole encoded phrase as determined by the sentiment classifier, allowing for less control over the exact sentiment of the inserted phrase.
Question                  Encoder   Word vectors
Grammatical correctness   2.7/5     4.4/5
Sentiment change          3.5/5     4.3/5
Unchanged                 36.67%    6.67%
Table 5: Average score on a scale from 1 to 5 for correctness and sentiment change that reviewers assigned to the sentences, and ratio of sentences that remained unchanged.
The goal of the second experiment was to measure the ratio of sentences whose sentiment changed compared to the original one. This evaluation also gives a more objective measure of how well the model does what it is supposed to do, namely changing the sentiment.
Model          Rotten Tomatoes   IMDB
Decoder        53.6              53.7
Word vectors   49.1              53.3
Table 6: Percentage of sentences whose sentiment changed according to the sentiment classifier.
Table 6 shows that on the Rotten Tomatoes corpus (Pang and Lee, 2005) the accuracy in changing the sentiment is around 5% higher for the decoder than for the word vectors approach, while the two are similar on the imdb corpus (Maas et al., 2011). It should be noted that the performance of the encoder-decoder is almost identical for both datasets.
DISCUSSION
The model proposed in this paper transforms the sentiment of a sentence by replacing short phrases that determine the sentiment. Extraction of these phrases is done using a sentiment classifier with an attention mechanism. These phrases are then encoded using an encoder-decoder network that is trained on these phrases. After the phrases are encoded, the closest phrase of the opposite sentiment is found and replaced into the original sentence. Alternatively, the extracted phrase is transformed by finding the closest word of the opposite sentiment using an emotion lexicon to assign sentiment to words.
The model was evaluated both on its individual parts and end-to-end, using both automatic metrics and human evaluation. When testing the success rate of changing the sentiment, the best results were achieved with the encoder-decoder method, which scores more than 50% on both datasets. Human evaluation gave the best scores to the word vector based model, both in terms of the change of sentiment and in terms of grammatical and semantic correctness.
The results raise the issue of how language is interpreted by humans and machines. Our method seems to create samples that sufficiently change the sentiment according to the classifier (thus the goal of creating new data points is met), but this is not confirmed by the human evaluators, who judge the actual content of the sentence. It should be noted that the human evaluation experiments need to be extended once the approach is more robust in order to confirm these results.
As for future work, we plan to introduce a more carefully assembled dataset for the encoder-decoder approach, since that might improve the quality of the decoder output. The prominence of unknown tokens in the data suggests that experimenting with a character-level implementation might improve the results, as such algorithms can often infer the meaning of all words, regardless of how often they appear in the data. This could solve the problem of not all words being present in the vocabulary which results in many unknown tokens in the generated sentences.
Finally, another way to improve the model is to have the encoder-decoder better capture the phrases in the latent space. We based our model on an existing encoder-decoder architecture but used fewer hidden units (due to hardware limitations), which may have led to learning a worse representation of the phrases in the latent space. Using more hidden units (or a different architecture for the encoder/decoder model) is a way to further explore how results could be improved. | 3,956
1901.11467 | 2951047368 | An obstacle to the development of many natural language processing products is the vast amount of training examples necessary to get satisfactory results. The generation of these examples is often a tedious and time-consuming task. This paper proposes a method to transform the sentiment of sentences in order to limit the work necessary to generate more training data. This means that one sentence can be transformed into a sentence of the opposite sentiment, which should reduce by half the work required in the generation of text. The proposed pipeline consists of a sentiment classifier with an attention mechanism to highlight the short phrases that determine the sentiment of a sentence. Then, these phrases are changed to phrases of the opposite sentiment using a baseline model and an autoencoder approach. Experiments are run both on the separate parts of the pipeline and on the end-to-end model. The sentiment classifier is tested on its accuracy and is found to perform adequately. The autoencoder is tested on how well it is able to change the sentiment of an encoded phrase and it was found that such a task is possible. We use human evaluation to judge the performance of the full (end-to-end) pipeline, which reveals that a model using word vectors outperforms the encoder model. Numerical evaluation shows that a success rate of 54.7 is achieved on the sentiment change. | Encoder-decoder networks @cite_10 @cite_6 are often used in neural machine translation to translate a sequence from one language to another. These networks use RNNs or other types of neural networks to encode the information in the sentence and another network to decode this sequence into the target language. Since RNNs do not perform well on longer sequences, the LSTM @cite_10 unit is often used for its memory component. Gated Recurrent Units @cite_6 are simpler variants of the LSTM, as they do not have an output gate. | {
"abstract": [
"Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT-14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous state of the art. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier.",
"In this paper, we propose a novel neural network model called RNN Encoder-Decoder that consists of two recurrent neural networks (RNN). One RNN encodes a sequence of symbols into a fixed-length vector representation, and the other decodes the representation into another sequence of symbols. The encoder and decoder of the proposed model are jointly trained to maximize the conditional probability of a target sequence given a source sequence. The performance of a statistical machine translation system is empirically found to improve by using the conditional probabilities of phrase pairs computed by the RNN Encoder-Decoder as an additional feature in the existing log-linear model. Qualitatively, we show that the proposed model learns a semantically and syntactically meaningful representation of linguistic phrases."
],
"cite_N": [
"@cite_10",
"@cite_6"
],
"mid": [
"2130942839",
"2950635152"
]
} | Towards Controlled Transformation of Sentiment in Sentences | In its current state text generation is not able to capture the complexities of human language, making the generated text often of poor quality. (Hu et al., 2017) suggested a method to control the generation of text combining variational autoencoders and holistic attribute discriminators. Although the sentiment generated by their method was quite accurate, the generated sentences were still far from perfect. The short sentences generated by their model seem adequate, but the longer the sentence, the more the quality drops.
Most research tries to generate sentences completely from scratch, and while this is one way to generate text, another possibility is to change only parts of a sentence in order to transform the sentiment. In longer sentences, not every word is important for determining the sentiment, so most words can be left unchanged while trying to transform it.
The model proposed in this work tries to determine the critical part of a sentence and transforms only this to a different sentiment. This method should change the sentiment of the sentence while keeping the grammatical structure and semantic meaning of the sentence intact. To find the critical part of a sentence the model uses an attention mechanism on a sentiment classifier. The phrases that are deemed important by the sentiment classifier are then encoded in an encoder-decoder network and transformed to a new phrase. This phrase is then inserted in the original sentence to create the new sentence with the opposite sentiment.
THE MODEL
In this paper two different pipelines are considered. Both pipelines contain a sentiment classifier with an attention mechanism to extract phrases from the input documents. The difference is that pipeline 1 (which can be seen in Figure 1) uses an encoder to encode the extracted phrases and find the closest phrase with the opposite sentiment in the vector space. This phrase is then either inserted into the sentence or the vector representation of this phrase is decoded and the resulting phrase is inserted into the sentence. Pipeline 2 (as seen in Figure 2) finds the words in the extracted phrases that are most likely to determine the sentiment and replaces these words with similar words of the opposite sentiment using word vectors. In the next sections all individual parts of the pipeline will be explained.
Sentiment classification with attention
To find the phrases that determine the sentiment of a sentence, a sentiment classification model with attention is used. The network used is the one defined by (Yang et al., 2016). This model is chosen because, in sequence modeling, recurrent neural networks have been shown to give better classification results than other models, such as convolutional neural networks (Yin et al., 2017). Recurrent neural networks have the added benefit of easily allowing for the implementation of attention mechanisms, which are able to focus on small parts of a sequence at a time. The attention mechanism is used to extract the sequences that determine the sentiment. This classifier consists of a word- and sentence-level encoder and both a word- and sentence-level attention layer. The word encoder is a bidirectional GRU that encodes information about the whole sentence centered around word w_it with t ∈ [1, T]. The sentence encoder does the same thing, but for a sentence s_i, which is constructed by taking an aggregate of the word vectors and the attention values composing the sentence. The sentences are then encoded by another bidirectional GRU and another attention layer to compose a document vector. This document vector can then be used to determine the sentiment of the document through the fully-connected layer.
Figure 1: The model using the closest phrase approach, including an example.
Attention for phrase extraction
The attention proposed by (Yang et al., 2016) is used to find the contribution that each individual word makes to the sentiment. The attention they propose feeds the word representation obtained from the word-level encoder through a one-layer MLP to get a hidden representation of the word. This hidden representation is then, together with a word context vector, fed to a softmax function to obtain the normalized attention weight. After the attention weights have been computed, words with a sufficiently high attention weight are extracted and passed to the encoder-decoder model.
In the first step of the examples in Figures 1 and 2, the attention mechanism highlights a few phrases in a movie review. These are the phrases that are later passed on to either the encoder or the word-changing part of the pipeline.
Sentiment transformation
For the sentiment transformation two approaches are proposed. The first approach is based on an encoder which encodes the extracted phrases into fixed-length vectors. The second approach transforms words from the extracted phrases using word embeddings and an emotion lexicon.
Encoder-Decoder approach
The encoder is the first technique used to transform a sentence into one with a different sentiment. This variant uses an encoder model and a transformation based on the distance between two phrases in the latent space. The first part of this model encodes the phrases extracted by the attention mechanism in the latent space. The encoder used is similar to previously proposed encoder-decoder architectures.
The difference is that this model is not trained on two separate datasets, but on one set of phrases both as input and output, where the goal is that the model can echo the sequence. Both the encoder and the decoder are one-directional GRUs that are trained together to echo the sequence. First, the encoder encodes a sequence to a fixed-length vector representation. The decoder should then reconstruct the original sequence from this fixed-length vector representation. This is trained through maximizing the log-likelihood
\max_{\theta} \; \frac{1}{N} \sum_{n=1}^{N} \log p_{\theta}(y_n \mid x_n)
where θ is the set of model parameters and (x_n, y_n) is a pair of input and output sequences from the training set. In our case x_n and y_n are the same, as we want the encoder-decoder to echo the initial sequence. On top of this, we also store a sentiment label with the encoded sequences to use them in the next step of the model.
Figure 2: The model using the word vector approach, including an example.
Afterwards these phrases are encoded into a fixed-length vector. The model then selects the vector closest to the current latent, fixed-length representation (but taking the one with a different sentiment label) using the cosine distance:
\min_{y \in Y} \; \frac{x \cdot y}{\lVert x \rVert \, \lVert y \rVert}
where x is the encoded input phrase and Y is the set of all encoded vectors in the latent space with the opposite label from x. The closest vector is then decoded into a new phrase, which is inserted into the sentence to replace the old phrase. The decoder used here is the decoder of the same model that was trained to echo the selected phrases.
Word vector approach
The word vector approach also starts by extracting the relevant phrases from the document using the attention mechanism explained in the attention section. However, while the encoder approach uses an encoder to embed the phrases into the latent space, this approach is based on word vectors (Mikolov et al., 2013). First, the words that are important to the sentiment are selected using the following criterion:
∀x ∈ X : p(neg|x) > 0.65 ∨ p(pos|x) > 0.65
where neg means the sentiment of the sentence is negative, pos means the sentiment of the sentence is positive, x is the current word and X is the set of all the words selected to be replaced. Threshold 0.65 was chosen empirically by inspecting different values.
The replacement word is selected as the closest word in the embedding space according to the cosine distance. The candidates to replace the word are found using the EmoLex emotion lexicon (Mohammad and Turney, 2013). A negative and a positive word list are created from the annotations in this lexicon: the negative list contains all words marked as negative and the positive list contains all words marked as positive. When a phrase is positive, the closest word in the negative list is chosen, and vice versa when the phrase is negative. The chosen words are then replaced and the new phrase is inserted into the original sentence. Combining the attention mechanism and the word embeddings was done because it is much faster than going through the whole sequence and replacing words according to the same criterion.
DATA
The data used come from the large movie review dataset (Maas et al., 2011). This dataset consists of a training set containing 50000 unlabeled, 12500 positive and 12500 negative reviews, and a test set containing 12500 positive and 12500 negative reviews. The experiments in this paper were performed using only the positive and negative reviews, which means the training set contained 25000 reviews and the test set also contained 25000 reviews.
In terms of preprocessing, the text was converted to lower case and any punctuation was removed. Lowercasing was done to avoid the same words being treated differently because they were at the beginning of a sentence, and punctuation was removed so that it would not be included in tokens or treated as its own token.
To see if the model would transfer well to another dataset, the experiments were repeated on the Rotten Tomatoes review dataset (Pang and Lee, 2005). This dataset was limited to full sentences only and the labels were changed to binary classification labels. Only the instances that were negative or positive were included, and the instances that were somewhat negative or somewhat positive (labels 1, 2 and 3) were ignored.
EXPERIMENTS
In order to properly test the proposed method, experiments were run on both the individual parts of the approach and on the whole (end-to-end) pipeline. Evaluating the full pipeline was difficult, as existing metrics seemed insufficient given the nature of the project. For example, the BLEU score would always be high since most of the original sequence is left intact, and the metric has been criticized in the past (Novikova et al., 2017). The percentage of sentences that changed sentiment according to the sentiment classifier was used as a metric, but since sentiment classifiers do not have an accuracy of 100%, this number is a rough estimate. Lastly, a random subset of 15 sentences was given to a test group of 4 people, who were asked whether they deemed the sentences correct and considered the sentiment changed.
Sentiment classifier
To test the performance of the sentiment classifier individually, the proposed attention RNN model was trained on the 25000 training reviews of the imdb dataset. The sentiment classifier was then used to predict the sentiment on 2000 test reviews of the same dataset. These 2000 reviews were randomly selected. The accuracy of the sentiment classifier was tested because the classifier will later be used in testing the full model and the performance of the encoder-decoder, which makes the performance of the sentiment classifier important to report.
The attention component, for which this sentiment classification model was chosen, is more difficult to test. The performance of the attention mechanism is assessed through the experiments with the full model: the higher the score for sentiment change, the better the attention mechanism has functioned, since a perfect score requires the attention mechanism to have picked out all phrases that contribute towards the sentiment.
The parameters we use consist of an embedding dimension of 300, a hidden layer of size 150, an attention vector of size 50 and a batch size of 256. The network makes use of randomly initialized word vectors that are updated during training. The network is trained on the positive and negative training reviews of the imdb dataset and the accuracy is measured on the test reviews. Loss is determined using cross-entropy and the optimizer is the Adam optimizer (Kingma and Ba, 2014).
The proposed sentiment classifier was tested in terms of accuracy on the imdb dataset and compared to state-of-the-art models. The numbers used to compare the results are reported by (McCann et al., 2017). Table 1 shows that the result of the sentiment classifier used in this paper is slightly below the state of the art. On the imdb dataset we achieve an accuracy (on a binary sentiment classification task) of 89.6 percent, a bit lower than the state of the art on the same dataset. However, the reason this algorithm is used is its ability to highlight the parts of the sentence that contribute most towards the sentiment, which only this model offers.

Model                              Accuracy
This Model                         89.6
SA-LSTM (Dai and Le, 2015)         92.8
bmLSTM (Radford et al., 2017)      92.9
TRNN (Dieng et al., 2016)          93.8
oh-LSTM (Johnson and Zhang, 2016)  94.1
Virtual (Miyato et al., 2016)      94.1
Table 1: Accuracy (in percent) on the imdb binary sentiment classification task.
Autoencoder
The autoencoder's purpose is to encode short phrases in the latent space so that the closest phrase of the opposite class (sentiment) can be found. To test the performance of the autoencoder for the task presented in this paper, phrases were extracted from the test reviews of the imdb dataset and were then encoded using the autoencoder. The closest vector was then decoded and the sentiment of the resulting sequence was determined using the sentiment classifier described in this paper. In the results section the percentage of phrases that changed sentiment is reported. This experiment, which assesses the performance of the autoencoder, is conducted to better interpret the results of the full model. For training, an embedding dimension of 100 and a hidden layer of size 250 are used. The word vectors are pretrained GloVe embedding vectors. The network is trained on the training set of phrases acquired by the attention network, using a negative log likelihood loss function and a stochastic gradient descent optimizer with a learning rate of 0.01. The training objective is to echo the phrases in the training set. After encoding, the sentiment label of the phrase is saved together with the fixed-length vector, which makes it possible to later find the closest vector of the opposite sentiment.
Table 2 shows the success rate of the autoencoder in terms of changing the sentiment of the phrases extracted by the attention mechanism. After being decoded, 50.8 percent of the phrases are classified as a different sentiment from the one they were originally assigned. The number reported is the ratio of sentences that got assigned a different sentiment by the sentiment classifier after the transformation. Furthermore, some of the phrases in the extracted set had a length of only one or two words, for which it is hard to predict the sentiment. These short sequences were included because in the final model they would also be extracted, so they do have an impact on the performance. The model was also tested while leaving out the shorter phrases, both on phrases longer than two words and on phrases longer than five words, which slightly increases the success rate.
Table 3: Examples of transformed sentences generated by the encoder-decoder model
Original sequence -> Generated sequence (sentiment change)
no movement , no yuks , not much of anything -> no movement , no yuks , not much of anything (no)
this is one of polanski 's best films -> this is one of polanski 's lowest films (yes)
most new movies have a bright sheen -> most new movies a unhappy moments (yes w/error)
gollum 's ' performance ' is incredible -> gollum 's ' performance ' not well received ! (yes)
as a singular character study , it 's perfect -> as a give study , it 's perfect (no w/error)
Table 4: Examples of transformed sentences (same as Table 3) using the word vectors approach
Original sequence -> Generated sequence (sentiment change)
no movement , no yuks , not much of anything -> obvious movement, obvious yuks, much of kind (no w/error)
this is one of polanski 's best films -> this is one of polanski 's worst films (yes)
most new movies have a bright sheen -> most new movies have a bleak ooze (yes w/error)
gollum 's ' performance ' is incredible -> gollum 's ' performance ' is unbelievable (undefined)
as a singular character study , it 's perfect -> as a singular character examination it 's crisp (yes w/error)
Full model
The full pipeline was tested in two ways. First, sentences were evaluated using a group of human evaluators to determine whether the sentences generated were grammatically and semantically correct on top of the change in sentiment. Next, the change in sentiment was tested using the sentiment classifier described by (Yang et al., 2016). To find how well the full pipeline performed in changing the sentiment of sequences, a basic human evaluation was performed (as a first experiment) on a subset of generated sequences based on sentences from the rotten tomatoes dataset (Pang and Lee, 2005). The reason for choosing this dataset is that the sentences were shorter, so the readability was better than using the imdb dataset. The setup was as follows: Reviewers were shown the original sentence and the two variants generated by two versions of the algorithm, the encoder-decoder model and the word vector model. Reviewers were then asked to rate the generated sentence on a scale from 1 to 5, both in terms of grammatical and semantic correctness and the extent to which the sentiment had changed. The rating of grammatical and semantic correctness was so that the reviewers could indicate whether a sentence was still intelligible after the change was performed. The rating of the sentiment change was an indication of how much the sentiment changed towards the opposite sentiment. In this case, a perfect change of sentiment from positive to negative (or vice-versa) would be rated as 5 and the sentiment remaining exactly the same would be rated as 1. Reviewers also had the option to mark that a sentence hadn't changed, as that would not change the sentiment but give a perfect score in correctness. After all reviewers had reviewed all sentences, the average score for both correctness and sentiment change was calculated for both approaches. The number of times a sentence hadn't changed was also reported. The two approaches were then compared to see which approach performed better. Tables 3 and 4 show how some sentences are transformed using the encoder-decoder and the word vectors approach respectively, along with information on whether the sentiment was changed (and if that happened with introducing some grammatical or semantic error). The word vectors approach seems to do a better job at replacing words correctly, however in both cases there are some errors which are being introduced. Table 5 shows the results obtained by the human evaluation. The numbers at grammatical correctness and sentiment change are the average ratings that sentences got by the evaluation panel. Last row shows the percentage of sentences that did not change at all. The test group indicated that the encoder approach changed the sentence in slightly more than 60% the cases, while the word vectors approach did change the sentence in more than 90% of the cases. This is possibly caused by the number of unknown tokens in the sentences, which caused problems for the encoder, but not for the word vector approach, as it would just ignore the unknown tokens and move on. Another explanation for this result is that the attention mechanism only highlights single words and without the help of an emotion lexicon these single replacements often do not change the sentiment of the sentence, as can be seen in Table 3.
Table 5 also shows that the grammatical quality of the sentences and the sentiment change as performed by the word vectors approach was evaluated to be higher than the ones generated by the encoder approach. Observing the changes made to sentences shows that the replacements in the word vector approach were more sensible when it comes to word type and sentiment. The cause of this is that the word vector approach makes use of an emotion lexicon, which ensures that each word inserted is of the desired sentiment. The encoder approach makes use of the fixed-word vector and the sentiment as determined by the sentiment classifier of the whole encoded phrase, allowing for less control on the exact sentiment of the inserted phrase.
Table 5: Average score on a scale from 1 to 5 that reviewers assigned to the sentences for correctness and sentiment change, and ratio of sentences that remained unchanged. Grammatical correctness: Encoder 2.7/5, Word vectors 4.4/5. Sentiment change: Encoder 3.5/5, Word vectors 4.3/5. Unchanged: Encoder 36.67%, Word vectors 6.67%.
The second experiment conducted had the goal to test the ratio of sentences that changed sentiment compared to the original one. This model is also better able to give an objective measure on how well the model does what it is supposed to do, namely changing the sentiment.
Table 6 (ratio of sentences that changed sentiment, in %): Decoder: Rotten Tomatoes 53.6, IMDB 53.7; Word vectors: Rotten Tomatoes 49.1, IMDB 53.3.
Table 6 shows that the accuracy in changing the sentiment is around 5% higher for the decoder than for the word vectors approach on the rotten tomatoes corpus (Pang and Lee, 2005) but similar for the imdb corpus (Maas et al., 2011). It should be noted that the performance of the encoder-decoder is almost identical for both datasets.
DISCUSSION
The model proposed in this paper transforms the sentiment of a sentence by replacing short phrases that determine the sentiment. Extraction of these phrases is done using a sentiment classifier with an attention mechanism. These phrases are then encoded using an encoder-decoder network that is trained on these phrases. After the phrases are encoded, the closest phrase of the opposite sentiment is found and replaced into the original sentence. Alternatively, the extracted phrase is transformed by finding the closest word of the opposite sentiment using an emotion lexicon to assign sentiment to words.
The model was evaluated on both its individual parts and end-to-end. We used both automatic metrics and human evaluation. When testing the success rate of changing the sentiment, the best results were achieved with the encoder-decoder method, which scores more than 50% on both datasets. Human evaluation of the model gave the best scores to the word-vector-based model, both in terms of the change of sentiment and in terms of grammatical and semantic correctness.
Results raise the issue of language interpretability by humans and machines. Our method seems to create samples that are sufficiently changing the sentiment for the classifier (thus the goal of creating new data points is successful), however this is not confirmed by the human evaluators who judge the actual content of the sentence. However, it should be noted here that human evaluation experiments need to be extended once the approach is more robust to confirm the results.
As for future work, we plan to introduce a more carefully assembled dataset for the encoder-decoder approach, since that might improve the quality of the decoder output. The prominence of unknown tokens in the data suggests that experimenting with a character-level implementation might improve the results, as such algorithms can often infer the meaning of all words, regardless of how often they appear in the data. This could solve the problem of not all words being present in the vocabulary which results in many unknown tokens in the generated sentences.
Finally, another way to improve the model is to have the encoder-decoder better capture the phrases in the latent space. We based our model on but used fewer hidden units (due to hardware limitations), which may have caused learning a worse representation of the phrases in the latent space. Using more hidden units (or a different architecture for the encoder/decoder model) is a way to further explore how results could be improved. | 3,956
1901.11467 | 2951047368 | An obstacle to the development of many natural language processing products is the vast amount of training examples necessary to get satisfactory results. The generation of these examples is often a tedious and time-consuming task. This paper proposes a method to transform the sentiment of sentences in order to limit the work necessary to generate more training data. This means that one sentence can be transformed to an opposite sentiment sentence and should reduce by half the work required in the generation of text. The proposed pipeline consists of a sentiment classifier with an attention mechanism to highlight the short phrases that determine the sentiment of a sentence. Then, these phrases are changed to phrases of the opposite sentiment using a baseline model and an autoencoder approach. Experiments are run on both the separate parts of the pipeline as well as on the end-to-end model. The sentiment classifier is tested on its accuracy and is found to perform adequately. The autoencoder is tested on how well it is able to change the sentiment of an encoded phrase and it was found that such a task is possible. We use human evaluation to judge the performance of the full (end-to-end) pipeline and that reveals that a model using word vectors outperforms the encoder model. Numerical evaluation shows that a success rate of 54.7% is achieved on the sentiment change. | Transforming the sentiment of sentences has not been systematically attempted, however there are some previous pieces of research into this particular topic. @cite_21 propose a method where a sentence or phrase with the target attribute, in this case sentiment, is extracted and either inserted in the new sentence or completely replacing the previous sentence. Their approach finds phrases based on how often they appear in text with a certain attribute and not in text with the other attribute. However, this approach cannot take phrases into account that by themselves are not necessarily strongly leaning towards one sentiment, but still essential to the sentiment of the sentence. | {
"abstract": [
"We consider the task of text attribute transfer: transforming a sentence to alter a specific attribute (e.g., sentiment) while preserving its attribute-independent content (e.g., changing \"screen is just the right size\" to \"screen is too small\"). Our training data includes only sentences labeled with their attribute (e.g., positive or negative), but not pairs of sentences that differ only in their attributes, so we must learn to disentangle attributes from attribute-independent content in an unsupervised way. Previous work using adversarial methods has struggled to produce high-quality outputs. In this paper, we propose simpler methods motivated by the observation that text attributes are often marked by distinctive phrases (e.g., \"too small\"). Our strongest method extracts content words by deleting phrases associated with the sentence's original attribute value, retrieves new phrases associated with the target attribute, and uses a neural model to fluently combine these into a final output. On human evaluation, our best method generates grammatical and appropriate responses on 22 more inputs than the best previous system, averaged over three attribute transfer datasets: altering sentiment of reviews on Yelp, altering sentiment of reviews on Amazon, and altering image captions to be more romantic or humorous."
],
"cite_N": [
"@cite_21"
],
"mid": [
"2797227342"
]
} | Towards Controlled Transformation of Sentiment in Sentences | In its current state text generation is not able to capture the complexities of human language, making the generated text often of poor quality. (Hu et al., 2017) suggested a method to control the generation of text combining variational autoencoders and holistic attribute discriminators. Although the sentiment generated by their method was quite accurate, the generated sentences were still far from perfect. The short sentences generated by their model seem adequate, but the longer the sentence, the more the quality drops.
Most research tries to generate sentences completely from scratch and while this is one way to generate text, it might also be a possibility to only change parts of a sentence to transform the sentiment. In longer sentences, not every word is important for determining the sentiment of the sentence, so most words can be left unchanged while trying to transform the sentiment.
The model proposed in this work tries to determine the critical part of a sentence and transforms only this to a different sentiment. This method should change the sentiment of the sentence while keeping the grammatical structure and semantic meaning of the sentence intact. To find the critical part of a sentence the model uses an attention mechanism on a sentiment classifier. The phrases that are deemed important by the sentiment classifier are then encoded in an encoder-decoder network and transformed to a new phrase. This phrase is then inserted in the original sentence to create the new sentence with the opposite sentiment.
THE MODEL
In this paper two different pipelines are considered. Both pipelines contain a sentiment classifier with an attention mechanism to extract phrases from the input documents. The difference is that pipeline 1 (which can be seen in Figure 1) uses an encoder to encode the extracted phrases and find the closest phrase with the opposite sentiment in the vector space. This phrase is then either inserted into the sentence or the vector representation of this phrase is decoded and the resulting phrase is inserted into the sentence. Pipeline 2 (as seen in Figure 2) finds the words in the extracted phrases that are most likely to determine the sentiment and replaces these words with similar words of the opposite sentiment using word vectors. In the next sections all individual parts of the pipeline will be explained.
Sentiment classification with attention
To find the phrases that determine the sentiment of a sentence, a sentiment classification model with attention is used. The network used is the network defined by (Yang et al., 2016). This model is chosen because in sequence modeling recurrent neural networks have been shown to give better classification results than other models, such as convolutional neural networks (Yin et al., 2017). Recurrent neural networks have the added benefit of easily allowing for implementation of attention mechanisms, which are able to focus on small parts of a sequence at a time. The attention mechanism is used to extract the sequences that determine the sentiment. This classifier consists of a word and a sentence encoder, with both a word-level and a sentence-level attention layer. The word encoder is a bidirectional GRU that encodes information about the whole sentence centered around word $w_{it}$ with $t \in [1, T]$. The sentence encoder does the same thing, but for a sentence $s_i$, which is constructed by taking an aggregate of the word vectors and the attention values composing the sentence. The sentence is then encoded by another bidirectional GRU and another attention layer to compose a document vector. This document vector can then be used to determine the sentiment of the document through the fully-connected layer. (Figure 1: The model using the closest phrase approach, including an example.)
Attention for phrase extraction
The attention proposed by (Yang et al., 2016) is used to find the contribution to the sentiment that each individual word brings. The attention they propose feeds the word representation obtained from the word-level encoder through a one-layer MLP to get a hidden representation of the word. This hidden representation is then, together with a word context vector, fed to a softmax function to get the normalized attention weight. After the attention weights have been computed, words with a sufficiently high attention weight are extracted and passed to the encoder-decoder model.
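A minimal sketch of this word-level attention and thresholded extraction is given below, written in PyTorch purely for illustration; the layer sizes, the threshold value, and all names are assumptions rather than the authors' implementation.

import torch
import torch.nn as nn

class WordAttention(nn.Module):
    """Word-level attention in the style of Yang et al. (2016): a one-layer MLP
    over the encoder states followed by a softmax against a context vector."""
    def __init__(self, hidden_dim, attn_dim=50):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, attn_dim)          # one-layer MLP
        self.context = nn.Parameter(torch.randn(attn_dim))   # word context vector u_w

    def forward(self, encoder_states):                       # (seq_len, hidden_dim)
        u = torch.tanh(self.proj(encoder_states))            # hidden word representations
        alpha = torch.softmax(u @ self.context, dim=0)       # normalized attention weights
        return alpha

def extract_phrase(tokens, alpha, threshold=0.1):
    """Keep the words whose attention weight exceeds a threshold (the value 0.1 is assumed)."""
    return [t for t, a in zip(tokens, alpha.tolist()) if a > threshold]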
In the first step of the examples in Figure 1 and 2 the attention mechanism is highlighting a few phrases in a movie review. These are the phrases that are then later passed on to either the encoder or to the word changing part of the pipeline.
Sentiment transformation
For the sentiment transformation two approaches are proposed. The first approach is based on an encoder which encodes the extracted phrases into fixed-length vectors. The second approach transforms words from the extracted phrases using word embeddings and an emotion lexicon.
Encoder-Decoder approach
The encoder is the first technique used to transform a sentence to one with a different sentiment. This variant uses an encoder model and a transformation based on the distance between two phrases in the latent space. The first part of this model is to encode the phrases extracted by the attention in the latent space. The encoder used is similar to that proposed by .
The difference is that this model is not trained on two separate datasets, but on one set of phrases both as input and output, where the goal is that the model can echo the sequence. Both the encoder and the decoder are one-directional GRUs that are trained together to echo the sequence. First, the encoder encodes a sequence to a fixed-length vector representation. The decoder should then reconstruct the original sequence from this fixed-length vector representation. This is trained through maximizing the log-likelihood
$\max_\theta \frac{1}{N} \sum_{n=1}^{N} \log p_\theta(y_n \mid x_n)$
where $\theta$ is the set of model parameters and $(x_n, y_n)$ is a pair of input and output sequences from the training set. In our case $x_n$ and $y_n$ are the same, as we want the encoder-decoder to echo the initial sequence. On top of this we also store a sentiment label with the encoded sequences to use them in the next step of the model.
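A minimal sketch of such an echoing encoder-decoder is shown below (PyTorch, single-layer GRUs, teacher forcing with the phrase itself). The sizes follow the values reported later in the paper, but the exact architecture, vocabulary size, and GloVe initialisation are assumptions.

import torch
import torch.nn as nn

class PhraseAutoencoder(nn.Module):
    """GRU encoder-decoder trained to echo a phrase (a sketch, not the authors' code)."""
    def __init__(self, vocab_size, emb_dim=100, hidden_dim=250):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)   # could be initialised from GloVe
        self.encoder = nn.GRU(emb_dim, hidden_dim)
        self.decoder = nn.GRU(emb_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def encode(self, tokens):                            # tokens: (seq_len,) LongTensor
        emb = self.embed(tokens).unsqueeze(1)            # (seq_len, 1, emb_dim)
        _, h = self.encoder(emb)
        return h                                         # fixed-length code, (1, 1, hidden_dim)

    def forward(self, tokens):
        h = self.encode(tokens)
        emb = self.embed(tokens).unsqueeze(1)            # teacher forcing with the phrase itself
        dec_out, _ = self.decoder(emb, h)
        return torch.log_softmax(self.out(dec_out.squeeze(1)), dim=-1)

model = PhraseAutoencoder(vocab_size=20000)              # vocabulary size is an assumption
criterion = nn.NLLLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

def train_step(tokens):
    optimizer.zero_grad()
    logp = model(tokens)
    loss = criterion(logp, tokens)                       # echo objective: target equals input
    loss.backward()
    optimizer.step()
    return loss.item()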
(Figure 2: The model using the word vector approach, including an example.) Afterwards these phrases are encoded into a fixed-length vector. The model then selects the vector closest to the current latent, fixed-length representation (but taking the one with a different sentiment label) using the cosine distance:
$\min_{y \in Y} \; \frac{x \cdot y}{\|x\| \cdot \|y\|}$
where $x$ is the encoded input phrase and $Y$ is the set of all encoded vectors in the latent space with the opposite label from $x$. The closest vector is then decoded into a new phrase, which is inserted into the sentence to replace the old phrase. Naturally, the decoder used here belongs to the same encoder-decoder model that was trained to echo the selected phrases.
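Following the prose above (retrieve the closest stored code that carries the opposite sentiment label), a small sketch of the lookup could look as follows; the function and variable names are illustrative.

import numpy as np

def closest_opposite(code, codes, labels, label):
    """Index of the stored phrase code with the opposite sentiment label that is
    closest to `code` under the cosine distance."""
    codes = np.asarray(codes, dtype=float)
    labels = np.asarray(labels)
    candidates = np.flatnonzero(labels != label)          # only opposite-sentiment codes
    sims = codes[candidates] @ code / (
        np.linalg.norm(codes[candidates], axis=1) * np.linalg.norm(code) + 1e-12)
    return candidates[np.argmax(sims)]                    # max similarity = min cosine distance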
Word vector approach
The word vector approach also starts by extracting the relevant phrases from the document using the attention mechanism explained in the attention section. However, while the encoder approach uses an encoder to encode the phrases into the latent space, this approach is based on word vectors (Mikolov et al., 2013). First off, the words that are important to the sentiment are selected using the following formulas:
$\forall x \in X : p(\mathrm{neg} \mid x) > 0.65 \,\vee\, p(\mathrm{pos} \mid x) > 0.65$
where neg means the sentiment of the sentence is negative, pos means the sentiment of the sentence is positive, x is the current word and X is the set of all the words selected to be replaced. Threshold 0.65 was chosen empirically by inspecting different values.
The replacement word is selected using the closest word in the latent space using the cosine distance. The candidates to replace the word are found using the EmoLex emotion lexicon (Mohammad and Turney, 2013). A negative and a positive word list are created based on this lexicon using the annotations. The negative list contains all words marked as negative and the positive list contains all words marked as positive. When a phrase is positive the closest word in the negative list is chosen and vice-versa when the phrase is negative. The chosen words are then replaced and the new phrase is inserted into the original sentence. Combining both the attention mechanism and the word embeddings was done because it was much faster than going through the whole sequence and replacing words according to the same formula.
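A sketch of this replacement step is given below; `embeddings` is assumed to be a word-to-vector mapping (for example loaded GloVe vectors) and the positive/negative word lists are assumed to come from the EmoLex annotations.

import numpy as np

def replace_with_opposite(word, phrase_is_positive, embeddings, neg_words, pos_words):
    """Swap `word` for the closest word of the opposite polarity in embedding space."""
    candidates = neg_words if phrase_is_positive else pos_words
    v = embeddings[word]
    best, best_sim = word, -1.0
    for cand in candidates:
        if cand not in embeddings:
            continue
        w = embeddings[cand]
        sim = float(v @ w / (np.linalg.norm(v) * np.linalg.norm(w) + 1e-12))
        if sim > best_sim:
            best, best_sim = cand, sim
    return best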
DATA
The data used come from the large movie review dataset (Maas et al., 2011). This dataset consists of a training set containing 50000 unlabeled, 12500 positive and 12500 negative reviews and a test set containing 12500 positive and 12500 negative reviews. The experiments in this paper were performed only using the positive and negative reviews, which meant the training set contained 25000 reviews and the test set also contained 25000 reviews.
In terms of preprocessing the text was converted to lower case and any punctuation was removed. Lowercasing was done to avoid the same words being treated differently because they were at the beginning of the sentence and punctuation was removed so that the punctuation would not be included in tokens or treated as its own token.
To see if the model would transfer well to another dataset the experiments were repeated on the Rotten tomato review dataset (Pang and Lee, 2005). This dataset was limited to only full sentences and the labels were changed to binary classification labels. Only the instances that were negative and positive were included and the instances that were somewhat negative or somewhat positive (labels 1, 2 and 3) were ignored.
EXPERIMENTS
In order to properly test the proposed method, experiments were run on both the individual parts of the approach and on the whole (end-to-end) pipeline. Evaluating the full pipeline was difficult, as existing metrics seemed insufficient for the nature of the project. For example, the BLEU score would always be high since most of the original sequence is left intact, and it has been criticized in the past (Novikova et al., 2017). The percentage of sentences that changed sentiment according to the sentiment classifier was used as a metric, but as sentiment classifiers do not have an accuracy of 100%, this number is a rough estimate. Lastly, a random subset of 15 sentences was given to a test group of 4 people, who were asked whether they deemed the sentences correct and considered the sentiment changed.
Sentiment classifier
To test the performance of the sentiment classifier individually, the proposed attention RNN model was trained on the 25000 training reviews of the imdb dataset. The sentiment classifier was then used to predict the sentiment on 2000 test reviews of the same dataset. These 2000 reviews were randomly selected. The accuracy of the sentiment classifier was tested because the classifier will later be used in testing the full model and the performance of the encoder-decoder, which makes the performance of the sentiment classifier important to report.
The attention component, for which this sentiment classification model was chosen is more difficult to test. The performance in the attention will be tested by the experiments with the full model. The higher the score for sentiment change is, the better the attention mechanism will have functioned as for a perfect score the attention mechanism will need to have picked out all phrases that contribute towards the sentiment.
The parameters we use consist of an embedding dimension of 300, a hidden layer size of 150, an attention vector of size 50 and a batch size of 256. The network makes use of randomly initialized word vectors that are updated during training. The network is trained on the positive and negative training reviews of the imdb dataset and the accuracy is measured using the test reviews. Loss is determined using cross entropy and the optimizer is the Adam optimizer (Kingma and Ba, 2014).
The proposed sentiment classifier was tested in terms of accuracy on the imdb dataset and compared to state-of-the-art models. The numbers used for comparison are those reported by (McCann et al., 2017). Table 1 shows that the result of the sentiment classifier used in this paper is slightly below the state of the art. On the imdb dataset we achieve an accuracy (on a binary sentiment classification task) of 89.6 percent, a bit lower than the state of the art on the same dataset. However, the reason this algorithm is used is its ability to highlight the parts of the sentence that contribute most towards the sentiment. Table 1 (accuracy on the imdb binary sentiment classification task): This Model 89.6; SA-LSTM (Dai and Le, 2015) 92.8; bmLSTM (Radford et al., 2017) 92.9; TRNN (Dieng et al., 2016) 93.8; oh-LSTM (Johnson and Zhang, 2016) 94.1; Virtual (Miyato et al., 2016) 94.1.
Autoencoder
The autoencoder's purpose is to encode short phrases in the latent space so that the closest phrase of the opposite class (sentiment) can be found. To test the performance of the autoencoder for the task presented in this paper, phrases were extracted from the test reviews of the imdb dataset and were then encoded using the autoencoder. The closest vector was then decoded and the sentiment of the resulting sequence was determined using the sentiment classifier described in this paper. In the results section the percentage of phrases that changed sentiment is reported. This experiment, which assesses the performance of the autoencoder, is conducted to better interpret the results of the full model. For training, an embedding dimension of 100 and a hidden layer size of 250 are used. The word vectors are pretrained GloVe embedding vectors from the GloVe vector set. The network is trained on the training set of phrases acquired by the attention network, using a negative log likelihood loss function and a stochastic gradient optimizer with a learning rate of 0.01. The training objective is to echo the phrases in the training set. After encoding, the sentiment label of the phrase is saved together with the fixed-length vector. This later allows finding the closest vector of the opposite sentiment. Table 2 shows the success rate of the autoencoder in terms of changing the sentiment of the phrases extracted by the attention mechanism. After being decoded, 50.8 percent of the phrases are classified as a different sentiment from the one they were originally assigned. The number reported is the ratio of sentences that got assigned a different sentiment by the sentiment classifier after the transformation. Furthermore, some of the phrases in the extracted set had a length of only one or two words, for which it is hard to predict the sentiment. These short sequences were included because the final model would also extract them, so they do have an impact on the performance. The model was also tested while leaving out the shorter phrases, both on phrases longer than two and longer than five words, which slightly increases the success rate.
Table 3: Examples of transformed sentences generated by the encoder-decoder model (original sequence -> generated sequence; sentiment change): "no movement , no yuks , not much of anything" -> "no movement , no yuks , not much of anything" (no); "this is one of polanski 's best films" -> "this is one of polanski 's lowest films" (yes); "most new movies have a bright sheen" -> "most new movies a unhappy moments" (yes, with error); "gollum 's ' performance ' is incredible" -> "gollum 's ' performance ' not well received !" (yes); "as a singular character study , it 's perfect" -> "as a give study , it 's perfect" (no, with error).
Table 4: Examples of transformed sentences (same originals as Table 3) using the word vectors approach: "no movement , no yuks , not much of anything" -> "obvious movement, obvious yuks, much of kind" (no, with error); "this is one of polanski 's best films" -> "this is one of polanski 's worst films" (yes); "most new movies have a bright sheen" -> "most new movies have a bleak ooze" (yes, with error); "gollum 's ' performance ' is incredible" -> "gollum 's ' performance ' is unbelievable" (undefined); "as a singular character study , it 's perfect" -> "as a singular character examination it 's crisp" (yes, with error).
Full model
The full pipeline was tested in two ways. First, sentences were evaluated using a group of human evaluators to determine whether the sentences generated were grammatically and semantically correct on top of the change in sentiment. Next, the change in sentiment was tested using the sentiment classifier described by (Yang et al., 2016). To find how well the full pipeline performed in changing the sentiment of sequences, a basic human evaluation was performed (as a first experiment) on a subset of generated sequences based on sentences from the rotten tomatoes dataset (Pang and Lee, 2005). The reason for choosing this dataset is that the sentences were shorter, so the readability was better than using the imdb dataset. The setup was as follows: Reviewers were shown the original sentence and the two variants generated by two versions of the algorithm, the encoder-decoder model and the word vector model. Reviewers were then asked to rate the generated sentence on a scale from 1 to 5, both in terms of grammatical and semantic correctness and the extent to which the sentiment had changed. The rating of grammatical and semantic correctness was so that the reviewers could indicate whether a sentence was still intelligible after the change was performed. The rating of the sentiment change was an indication of how much the sentiment changed towards the opposite sentiment. In this case, a perfect change of sentiment from positive to negative (or vice-versa) would be rated as 5 and the sentiment remaining exactly the same would be rated as 1. Reviewers also had the option to mark that a sentence hadn't changed, as that would not change the sentiment but give a perfect score in correctness. After all reviewers had reviewed all sentences, the average score for both correctness and sentiment change was calculated for both approaches. The number of times a sentence hadn't changed was also reported. The two approaches were then compared to see which approach performed better. Tables 3 and 4 show how some sentences are transformed using the encoder-decoder and the word vectors approach respectively, along with information on whether the sentiment was changed (and if that happened with introducing some grammatical or semantic error). The word vectors approach seems to do a better job at replacing words correctly, however in both cases there are some errors which are being introduced. Table 5 shows the results obtained by the human evaluation. The numbers at grammatical correctness and sentiment change are the average ratings that sentences got by the evaluation panel. Last row shows the percentage of sentences that did not change at all. The test group indicated that the encoder approach changed the sentence in slightly more than 60% the cases, while the word vectors approach did change the sentence in more than 90% of the cases. This is possibly caused by the number of unknown tokens in the sentences, which caused problems for the encoder, but not for the word vector approach, as it would just ignore the unknown tokens and move on. Another explanation for this result is that the attention mechanism only highlights single words and without the help of an emotion lexicon these single replacements often do not change the sentiment of the sentence, as can be seen in Table 3.
Table 5 also shows that the grammatical quality of the sentences and the sentiment change as performed by the word vectors approach was evaluated to be higher than the ones generated by the encoder approach. Observing the changes made to sentences shows that the replacements in the word vector approach were more sensible when it comes to word type and sentiment. The cause of this is that the word vector approach makes use of an emotion lexicon, which ensures that each word inserted is of the desired sentiment. The encoder approach makes use of the fixed-word vector and the sentiment as determined by the sentiment classifier of the whole encoded phrase, allowing for less control on the exact sentiment of the inserted phrase.
Table 5: Average score on a scale from 1 to 5 that reviewers assigned to the sentences for correctness and sentiment change, and ratio of sentences that remained unchanged. Grammatical correctness: Encoder 2.7/5, Word vectors 4.4/5. Sentiment change: Encoder 3.5/5, Word vectors 4.3/5. Unchanged: Encoder 36.67%, Word vectors 6.67%.
The second experiment conducted had the goal to test the ratio of sentences that changed sentiment compared to the original one. This model is also better able to give an objective measure on how well the model does what it is supposed to do, namely changing the sentiment.
Table 6 (ratio of sentences that changed sentiment, in %): Decoder: Rotten Tomatoes 53.6, IMDB 53.7; Word vectors: Rotten Tomatoes 49.1, IMDB 53.3.
Table 6 shows that the accuracy in changing the sentiment is around 5% higher for the decoder than for the word vectors approach on the rotten tomatoes corpus (Pang and Lee, 2005) but similar for the imdb corpus (Maas et al., 2011). It should be noted that the performance of the encoder-decoder is almost identical for both datasets.
DISCUSSION
The model proposed in this paper transforms the sentiment of a sentence by replacing short phrases that determine the sentiment. Extraction of these phrases is done using a sentiment classifier with an attention mechanism. These phrases are then encoded using an encoder-decoder network that is trained on these phrases. After the phrases are encoded, the closest phrase of the opposite sentiment is found and replaced into the original sentence. Alternatively, the extracted phrase is transformed by finding the closest word of the opposite sentiment using an emotion lexicon to assign sentiment to words.
The model was evaluated on both its individual parts and end-to-end. We used both automatic metrics and human evaluation. When testing the success rate of changing the sentiment, the best results were achieved with the encoder-decoder method, which scores more than 50% on both datasets. Human evaluation of the model gave the best scores to the word-vector-based model, both in terms of the change of sentiment and in terms of grammatical and semantic correctness.
Results raise the issue of language interpretability by humans and machines. Our method seems to create samples that are sufficiently changing the sentiment for the classifier (thus the goal of creating new data points is successful), however this is not confirmed by the human evaluators who judge the actual content of the sentence. However, it should be noted here that human evaluation experiments need to be extended once the approach is more robust to confirm the results.
As for future work, we plan to introduce a more carefully assembled dataset for the encoder-decoder approach, since that might improve the quality of the decoder output. The prominence of unknown tokens in the data suggests that experimenting with a character-level implementation might improve the results, as such algorithms can often infer the meaning of all words, regardless of how often they appear in the data. This could solve the problem of not all words being present in the vocabulary which results in many unknown tokens in the generated sentences.
Finally, another way to improve the model is to have the encoder-decoder better capture the phrases in the latent space. We based our model on but used fewer hidden units (due to hardware limitations), which may have caused learning a worse representation of the phrases in the latent space. Using more hidden units (or a different architecture for the encoder/decoder model) is a way to further explore how results could be improved. | 3,956
1901.11409 | 2914666366 | Large datasets have been crucial to the success of deep learning models in the recent years, which keep performing better as they are trained with more labelled data. While there have been sustained efforts to make these models more data-efficient, the potential benefit of understanding the data itself, is largely untapped. Specifically, focusing on object recognition tasks, we wonder if for common benchmark datasets we can do better than random subsets of the data and find a subset that can generalize on par with the full dataset when trained on. To our knowledge, this is the first result that can find notable redundancies in CIFAR-10 and ImageNet datasets (at least 10 ). Interestingly, we observe semantic correlations between required and redundant images. We hope that our findings can motivate further research into identifying additional redundancies and exploiting them for more efficient training or data-collection. | There are approaches which try to prioritize different examples to train on as the learning process goes on such as @cite_0 and @cite_23 . Although these techniques involve selecting examples to train on, they do not seek to identify redundant subsets of the data, but rather to sample the full dataset in a way that speeds up convergence. | {
"abstract": [
"Mini-batch based Stochastic Gradient Descent(SGD) has been widely used to train deep neural networks efficiently. In this paper, we design a general framework to automatically and adaptively select training data for SGD. The framework is based on neural networks and we call it eural ata ilter (). In Neural Data Filter, the whole training process of the original neural network is monitored and supervised by a deep reinforcement network, which controls whether to filter some data in sequentially arrived mini-batches so as to maximize future accumulative reward (e.g., validation accuracy). The SGD process accompanied with NDF is able to use less data and converge faster while achieving comparable accuracy as the standard SGD trained on the full dataset. Our experiments show that NDF bootstraps SGD training for different neural network models including Multi Layer Perceptron Network and Recurrent Neural Network trained on various types of tasks including image classification and text understanding.",
"Deep neural network training spends most of the computation on examples that are properly handled, and could be ignored. We propose to mitigate this phenomenon with a principled importance sampling scheme that focuses computation on \"informative\" examples, and reduces the variance of the stochastic gradients during training. Our contribution is twofold: first, we derive a tractable upper bound to the per-sample gradient norm, and second we derive an estimator of the variance reduction achieved with importance sampling, which enables us to switch it on when it will result in an actual speedup. The resulting scheme can be used by changing a few lines of code in a standard SGD procedure, and we demonstrate experimentally, on image classification, CNN fine-tuning, and RNN training, that for a fixed wall-clock time budget, it provides a reduction of the train losses of up to an order of magnitude and a relative improvement of test errors between 5 and 17 ."
],
"cite_N": [
"@cite_0",
"@cite_23"
],
"mid": [
"2752119372",
"2794302998"
]
} | Semantic Redundancies in Image-Classification Datasets: The 10% You Don't Need | Large datasets have played a central role in the recent success of deep learning. In fact, the performance of AlexNet [Krizhevsky et al., 2012] trained on ImageNet [Deng et al., 2009] in 2012 is often considered as the starting point of the current deep learning era. Undoubtedly, prominent datasets of ImageNet, CI-FAR, and CIFAR-100 [Krizhevsky and Hinton, 2009] have had a crucial role in the evolution of deep learning methods since then; with even bigger datasets like OpenImages [Kuznetsova et al., 2018] and Tencent ML-images [Wu et al., 2019] recently emerging.
These developments have led to state-of-the-art architectures such as ResNets [He et al., 2016a], DenseNets [Huang et al., 2017], VGG [Simonyan and Zisserman, 2014], AmoebaNets [Huang et al., 2018], and regularization techniques such as Dropout [Srivastava et al., 2014] and Shake-Shake [Gastaldi, 2017]. However, understanding the properties of these datasets themselves has remained relatively untapped. Limited study along this direction includes [Lin et al., 2018], which proposes a modified loss function to deal with the class imbalance inherent in object detection datasets and [Tobin et al., 2017], which * Work Done as Google AI Resident. studies modifications to simulated data to help models adapt to the real world, and [Carlini et al., 2018] that demonstrates the existence of prototypical examples and verifies that they match human intuition.
This work studies the properties of ImageNet, CIFAR-10, and CIFAR-100 datasets from the angle of redundancy. We find that at least 10% of ImageNet and CIFAR-10 can be safely removed by a technique as simple as clustering. Particularly, we identify a certain subset of ImageNet and CIFAR-10 whose removal does not affect the test accuracy when the architecture is trained from scratch on the remaining subset. This is striking, as deep learning techniques are believed to be data hungry [Halevy et al., 2009, Sun et al., 2017]. In fact, recently the work by [Vodrahalli et al., 2018] specifically studying the redundancy of these datasets concludes that there is no redundancy. Our work refutes that claim by providing counterexamples.
Contributions. This work resolves some recent misconceptions about the absence of notable redundancy in major image classification datasets [Vodrahalli et al., 2018]. We do this by identifying a specific subset, which constitutes above 10% of the training set, and yet its removal causes no drop in the test accuracy. To our knowledge, this is the first time such significant redundancy is shown to exist for these datasets. We emphasize that our contribution is merely to demonstrate the existence of such redundancy, but we do not claim any algorithmic contributions. However, we hope that our findings can motivate further research into identifying additional redundancies and exploiting them for more efficient training or data-collection. Our findings may also be of interest to the active learning community, as it provides an upper bound on the best performance.
Method
Motivation
In order to find redundancies, it is crucial to analyze each sample in the context of other samples in the dataset. Unlike previous attempts, we seek to measure redundancy by explicitly looking at a dissimilarity measure between samples. In case of there being near-duplicates in the training data, the approach of [Vodrahalli et al., 2018] will not be able to decide between them if their resulting gradient magnitude is high, whereas a dissimilarity measure can conclude that they are redundant if it evaluates to a low value.
Algorithm
To find redundancies in datasets, we look at the semantic space of a pre-trained model trained on the full dataset. In our case, the semantic representation comes from the penultimate layer of a neural network. To find groups of points which are close by in the semantic space we use Agglomerative Clustering [Defays, 1977]. Agglomerative Clustering assumes that each point starts out as its own cluster initially, and at each step, the pair of clusters which are closest according to the dissimilarity criterion are joined together. Given two images $I_1$ and $I_2$ whose latent representations are denoted by vectors $x_1$ and $x_2$, we denote the dissimilarity between $x_1$ and $x_2$ by $d(x_1, x_2)$, defined using the cosine angle between them as follows:
$d(x_1, x_2) = 1 - \frac{\langle x_1, x_2 \rangle}{\|x_1\| \, \|x_2\|} \qquad (1)$
The dissimilarity between two clusters $C_1$ and $C_2$, $D(C_1, C_2)$, is the maximum dissimilarity between any two of their constituent points:
$D(C_1, C_2) = \max_{x_1 \in C_1,\, x_2 \in C_2} d(x_1, x_2) \qquad (2)$
For Agglomerative Clustering, we process points belonging to each class independently. Since the dissimilarity is a pairwise measure, processing each class separately leads to faster computations. We run the clustering algorithm until there are k clusters left, where k is the size of the desired subset. We assume that points inside a cluster belong to the same redundant group of images. In each redundant group, we select the image whose representation is closest to the cluster center and discard the rest. Henceforth, we refer to this procedure as semantic space clustering or semantic clustering for brevity.
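A sketch of this per-class procedure with scikit-learn could look as follows; the function is illustrative and, depending on the scikit-learn version, the affinity keyword may be named metric instead.

import numpy as np
from sklearn.cluster import AgglomerativeClustering

def select_subset_for_class(features, keep):
    """Cluster one class's penultimate-layer features into `keep` clusters with
    complete-linkage agglomerative clustering under the cosine distance, then keep
    the sample closest to each cluster mean."""
    clustering = AgglomerativeClustering(
        n_clusters=keep, affinity="cosine", linkage="complete")
    assignments = clustering.fit_predict(features)
    kept = []
    for c in range(keep):
        members = np.flatnonzero(assignments == c)
        center = features[members].mean(axis=0)
        # cosine distance of each member to the cluster center
        d = 1 - features[members] @ center / (
            np.linalg.norm(features[members], axis=1) * np.linalg.norm(center) + 1e-12)
        kept.append(members[np.argmin(d)])
    return np.array(kept)   # indices of retained images for this class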
Experiments
We use the ResNet [He et al., 2016a] architecture for all our experiments with the variant described in [He et al., 2016b]. For each dataset, we compare the performance after training on different random subsets to subsets found with semantic clustering. Given a fixed pre-trained model, semantic clustering subsets are deterministic and the only source of stochasticity is due to the random network weight initialization and random mini-batch choices during optimization by SGD.
The semantic space embedding is obtained by pretraining a network on the full dataset. We chose the output after the last average pooling layer as our semantic space representation. All hyperparameters are kept identical during pre-training and also when training with different subset sizes.
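A sketch of extracting this representation with tf.keras is shown below; the layer name 'avg_pool' is an assumption that depends on how the pre-trained classifier was defined.

import tensorflow as tf

def extract_features(model, images, layer_name="avg_pool", batch_size=256):
    """Return the output of the last average-pooling layer for a batch of images."""
    feature_model = tf.keras.Model(inputs=model.input,
                                   outputs=model.get_layer(layer_name).output)
    return feature_model.predict(images, batch_size=batch_size)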
As the baseline, we compare against a subset of size k uniformly sampled from the full set. Each class is sampled independently in order to be consistent with the semantic clustering scheme. Note that the random sampling scheme adds an additional source of stochasticity compared to clustering. For both uniform sampling and cluster-based subset selection, we report the mean and standard deviation of the test accuracy of the model trained from scratch on the subset.
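The per-class random baseline can be sketched as follows (illustrative, not the authors' code):

import numpy as np

def random_subset_per_class(labels, keep_per_class, seed=0):
    """Uniformly sample `keep_per_class` examples from every class independently,
    mirroring the per-class selection of the clustering scheme."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    chosen = []
    for c in np.unique(labels):
        members = np.flatnonzero(labels == c)
        chosen.append(rng.choice(members, size=keep_per_class, replace=False))
    return np.concatenate(chosen)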
CIFAR-10 & CIFAR-100
We train a 32-layer ResNet for the CIFAR-10 and CIFAR-100 [Krizhevsky and Hinton, 2009] datasets. The semantic representation obtained was a 64-dimensional vector. For both datasets, we train for 100,000 steps with a batch size of 128 and a learning rate that is cosine annealed [Loshchilov and Hutter, 2016] from 0.1 to 0.
For optimization we use Stochastic Gradient Descent with a momentum coefficient of 0.9. We regularize our weights by penalizing their L2 norm with a factor of 0.0001. We found that warming up the learning rate was necessary to prevent the weights from diverging when training with subsets of all sizes. We use linear learning rate warm-up from 0 for 2500 steps. We verified that warming up the learning rate performs slightly better than using no warm-up when using the full dataset. In all these experiments, we report average test accuracy across 10 trials. (Figure 2 caption: We see no drop in test accuracy until 10% of the data considered redundant by semantic clustering is removed.)
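The schedule described above (linear warm-up for 2500 steps followed by cosine annealing from 0.1 to 0 over 100,000 steps) can be sketched as a pure function of the step; whether the warm-up steps are counted inside the annealing horizon is an assumption here.

import math

def learning_rate(step, total_steps=100_000, base_lr=0.1, warmup_steps=2500):
    """Linear warm-up from 0, then cosine annealing to 0 (CIFAR settings from the text)."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))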
CIFAR-10
We see in the case of the CIFAR-10 dataset in Figure 2 that the same test accuracy can be achieved even after 10% of the training data is discarded using semantic clustering. In contrast, training on random subsets of smaller sizes results in a monotonic drop in performance. Therefore, while we show that at least 10% of the data in the CIFAR-10 dataset is redundant, this redundancy cannot be observed by uniform sampling. Figure 3 shows examples of images considered redundant with semantic clustering while choosing a subset of 90% of the size of the full dataset. Each set denotes images that were placed into the same (redundant) group by semantic clustering. Images in green boxes were retained while the rest were discarded. Figure 4 shows the number of redundant groups of different sizes for two classes in the CIFAR-10 dataset when seeking a 90% subset. Since a majority of points are retained, most clusters end up containing one element upon termination. Redundant points arise from clusters with two or more elements in them.
CIFAR-100
In the case of the CIFAR-100 dataset, our proposed scheme fails to find redundancies, as is shown in Figure 5, although it does slightly better than random subsets. Both the proposed and random methods show a monotonic decrease in test accuracy with decreasing subset size. Figure 6 looks at redundant groups found with semantic clustering to retain 90% of the dataset. Compared to Figure 3, the images within a group show much more semantic variation. Redundant groups in Figure 3 are slight variations of the same object, whereas in Figure 6, redundant groups do not contain the same object. We note that in this case the model is not able to be invariant to these semantic changes.
Similar to Figure 4, we plot the number of redundant groups of each size for two classes in CIFAR-100 in Figure 7.
To quantify the semantic variation of CIFAR-100 in relation to CIFAR-10, we select redundant groups of size two or more, and measure the average dissimilarity (from Equation 1) to the retained sample. We report the average over groups in 3 different classes as well as over the entire dataset in Table 1. It is clear that the higher semantic variation in the redundant groups of CIFAR-100 seen in Figure 6 translates to a higher average dissimilarity in Table 1. (Figure 6 caption: Each column contains a specific class of images. In contrast to Figure 3, the images within each redundant group show much more variation. The groups were found when retaining a 90% subset, and retaining only the selected images (in green boxes) while discarding the rest had a negative impact on test performance.)
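The quantity reported in Table 1 can be sketched as follows; each group is assumed to be a list of sample indices whose first entry is the retained representative.

import numpy as np

def mean_group_dissimilarity(groups, features):
    """Average cosine dissimilarity (Equation 1) between the retained sample and the
    other members, over all redundant groups with two or more elements."""
    dists = []
    for group in groups:
        if len(group) < 2:
            continue
        kept = features[group[0]]
        for idx in group[1:]:
            other = features[idx]
            cos = kept @ other / (np.linalg.norm(kept) * np.linalg.norm(other) + 1e-12)
            dists.append(1.0 - cos)
    return float(np.mean(dists))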
Choice of semantic representation.
To determine the best choice of semantic representation from a pre-trained model, we run experiments after selecting the semantic representation from 3 different layers in the network. Figure 8 shows the results. Here "Start" denotes the semantic representation after the first convolution layer in a ResNet, "Middle" denotes the representation after the second residual block, and "End" denotes the output of the last average pooling layer. We see that the "End" layer's semantic representation is able to find the largest redundancy. (Table 1 caption: Average dissimilarity to the retained sample across redundant groups (clusters) of size greater than 1. We report the class-wise mean for 3 classes as well as the average over the entire dataset. All clusters were created to find a subset of 90% the size of the full set. We can observe that the average dissimilarity is about an order of magnitude higher for the CIFAR-100 dataset, indicating that there is more variation in the redundant groups.) (Figure 9 caption: Validation accuracy after training with subsets of various sizes of ImageNet. We plot the average over 5 trials with the vertical bars denoting standard deviation. There is no drop in validation accuracy when 10% of the training data considered redundant by semantic clustering is removed.)
ImageNet
We train a 101-layer ResNet on the ImageNet dataset, which gives us a semantic representation of 2048 dimensions. We use a batch size of 1024 during training and train for 120,000 steps with a learning rate cosine annealed from 0.4 to 0. Using the strategy from [Goyal et al., 2017], we linearly warm up our learning rate from 0 for 5000 steps to be able to train with large batches. We regularize our weights with an L2 penalty with a factor of 0.0001. For optimization, we use Stochastic Gradient Descent with a momentum coefficient of 0.9 and the Nesterov momentum update. Since the test set is not publicly available, we report the average validation accuracy, measured over 5 trials.
The results of training with subsets of varying sizes of ImageNet dataset are shown in Figure 9. Our proposed scheme is able to successfully show that at least 10% of the data can be removed from the training set without any negative impact on the validation accuracy, whereas training on random subsets always gives a drop with decrease in subset size. Figure 1 shows different redundant groups found in the ImageNet dataset. It is noteworthy that the semantic change considered redundant is different across each group. Figure 11 highlights the similarities between images of the same redundant group and the variation across different redundant groups.
In each row of Figure 12, we plot two images from a redundant group on the left where the retained image is highlighted in a green box. On the right we display the image closest to each retained image in dissimilarity but excluded from the redundant group. These images were close in semantic space to the corresponding retained images, but were not considered similar enough to be redundant. For example the redundant group in the first row of Figure 12 contains Sedan-like looking red cars. The 2-seater sports car on the right, in spite of looking similar to the cars on the left, was not considered redundant with them. Figure 10 shows the number of redundant groups of each size when creating a 90% subset. Much akin to Figure 4, a majority of images are not considered redundant and form a group of size 1.
Additional examples of redundant groups on ImageNet are provided in the appendix.
Implementation Details
We use the open source Tensorflow [Abadi et al., 2016] and tensor2tensor [Vaswani et al., 2018] frameworks to train our models. For clustering, we used the scikit-learn [Pedregosa et al., 2011] library. For the CIFAR-10 and CIFAR-100 experiments we train on a single NVIDIA Tesla P100 GPU. For our ImageNet experiments we perform distributed training on 16 Cloud TPUs.
Conclusion
In this work we present a method to find redundant subsets of training data. We explicitly model a dissimilarity metric into our formulation which allows us to find semantically close samples that can be considered redundant. We use an agglomerative clustering algorithm to find redundant groups of images in the semantic space. Through our experiments we are able to show that at least 10% of ImageNet and CIFAR-10 datasets are redundant.
We analyze these redundant groups both qualitatively and quantitatively. Upon visual observation, we see that the semantic change considered redundant varies from cluster to cluster. We show examples of a variety of varying attributes in redundant groups, all of which are redundant from the point of view of training the network.
One particular justification for not needing this variation during training could be that the network learns to be invariant to them because of its shared parameters and seeing similar variations in other parts of the dataset.
In Figure 2 and 9, the accuracy without 5% and 10% of the data is slightly higher than that obtained with the full dataset. This could indicate that redundancies in training datasets hamper the optimization process.
For the CIFAR-100 dataset our proposed scheme fails to find any redundancies. We qualitatively compare the redundant groups in CIFAR-100 ( Figure 6) to the ones found in CIFAR-10 ( Figure 3) and find that the semantic variation across redundant groups is much larger in the former case. Quantitatively this can be seen in Table 1 which shows points in redundant groups of CIFAR-100 are much more spread out in semantic space as compared to CIFAR-10 .
Although we could not find any redundancies in the CIFAR-100 dataset, there could be a better algorithm that could find them. Moreover, we hope that this work inspires a line of work into finding these redundancies and leveraging them for faster and more efficient training.
Acknowledgement
We would like to thank colleagues at Google Research for comments and discussions: Thomas Leung, Yair Movshovitz-Attias, Shraman Ray Chaudhuri, Azade Nazi, Serge Ioffe. Figure 11: This figure highlights semantic similarities between images from the same redundant group and variation seen across different redundant groups of the same class. The redundant groups were found while creating a 90% subset of the ImageNet dataset. Each sub-figure is a redundant group of images according to our algorithm. Each column contains images belonging to the same class, with each row in a column being a different redundant group. For example, the first column contains the Clock class. Clocks in 11a are in one group of redundant images whereas clocks in 11e are in another group. From each of the groups in the sub-figures, only the images marked in green boxes are selected by our algorithm and the others are discarded. Discarding these images had no negative impact on validation accuracy. Figure 12: In each row we plot two images from the same redundant group while creating a 90% subset on the left with the retained image highlighted in a green box. On the right we plot the image closest to the retained image in the semantic space but not included in the same redundant group. Note that the image on the right shows a semantic variation which is inconsistent with the one seen in the redundant group. | 2,891 |
1901.11409 | 2914666366 | Large datasets have been crucial to the success of deep learning models in the recent years, which keep performing better as they are trained with more labelled data. While there have been sustained efforts to make these models more data-efficient, the potential benefit of understanding the data itself, is largely untapped. Specifically, focusing on object recognition tasks, we wonder if for common benchmark datasets we can do better than random subsets of the data and find a subset that can generalize on par with the full dataset when trained on. To our knowledge, this is the first result that can find notable redundancies in CIFAR-10 and ImageNet datasets (at least 10 ). Interestingly, we observe semantic correlations between required and redundant images. We hope that our findings can motivate further research into identifying additional redundancies and exploiting them for more efficient training or data-collection. | An early mention of trying to reduce the training dataset size can be seen in @cite_28 . Their proposed algorithm splits the training dataset into many smaller training sets and iteratively removes these smaller sets until the generalization performance falls below an acceptable threshold. However, the algorithm relies on creating many small sets out of the given training set, rendering it impractical for modern usage. | {
"abstract": [
"Neural network models and other machine learning methods have successfully been applied to several medical classification problems. These models can be periodically refined and retrained as new cases become available. Since training neural networks by backpropagation is time consuming, it is desirable that a minimum number of representative cases be kept in the training set (i.e., redundant cases should be removed). The removal of redundant cases should be carefully monitored so that classification performance is not significantly affected. We made experiments on data removal on a data set of 700 patients suspected of having myocardial infarction and show that there is no statistical difference in classification performance (measured by the differences in areas under the ROC curve on two previously unknown sets of 553 and 500 cases) when as many as 86 of the cases are randomly removed. A proportional reduction in the amount of time required to train the neural network model is achieved."
],
"cite_N": [
"@cite_28"
],
"mid": [
"209945127"
]
} | Semantic Redundancies in Image-Classification Datasets: The 10% You Don't Need | Large datasets have played a central role in the recent success of deep learning. In fact, the performance of AlexNet [Krizhevsky et al., 2012] trained on ImageNet [Deng et al., 2009] in 2012 is often considered as the starting point of the current deep learning era. Undoubtedly, prominent datasets of ImageNet, CI-FAR, and CIFAR-100 [Krizhevsky and Hinton, 2009] have had a crucial role in the evolution of deep learning methods since then; with even bigger datasets like OpenImages [Kuznetsova et al., 2018] and Tencent ML-images [Wu et al., 2019] recently emerging.
These developments have led to state-of-the-art architectures such as ResNets [He et al., 2016a], DenseNets [Huang et al., 2017], VGG [Simonyan and Zisserman, 2014], AmoebaNets [Huang et al., 2018], and regularization techniques such as Dropout [Srivastava et al., 2014] and Shake-Shake [Gastaldi, 2017]. However, understanding the properties of these datasets themselves has remained relatively untapped. The limited work along this direction includes [Lin et al., 2018], which proposes a modified loss function to deal with the class imbalance inherent in object detection datasets, [Tobin et al., 2017], which studies modifications to simulated data to help models adapt to the real world, and [Carlini et al., 2018], which demonstrates the existence of prototypical examples and verifies that they match human intuition.
This work studies the properties of ImageNet, CIFAR-10 , and CIFAR-100 datasets from the angle of redundancy. We find that at least 10% of ImageNet and CIFAR-10 can be safely removed by a technique as simple as clustering. Particularly, we identify a certain subset of ImageNet and CIFAR-10 whose removal does not affect the test accuracy when the architecture is trained from scratch on the remaining subset. This is striking, as deep learning techniques are believed to be data hungry [Halevy et al., 2009, Sun et al., 2017. In fact, recently the work by [Vodrahalli et al., 2018] specifically studying the redundancy of these datasets concludes that there is no redundancy. Our work refutes that claim by providing counter examples.
Contributions. This work resolves some recent misconceptions about the absence of notable redundancy in major image classification datasets [Vodrahalli et al., 2018]. We do this by identifying a specific subset, which constitutes above 10% of the training set, and yet its removal causes no drop in the test accuracy. To our knowledge, this is the first time such significant redundancy is shown to exist for these datasets. We emphasize that our contribution is merely to demonstrate the existence of such redundancy, but we do not claim any algorithmic contributions. However, we hope that our findings can motivate further research into identifying additional redundancies and exploiting them for more efficient training or data-collection. Our findings may also be of interest to active learning community, as it provides an upper-bound on the best performance 1 .
Method
Motivation
In order to find redundancies, it is crucial to analyze each sample in the context of other samples in the dataset. Unlike previous attempts, we seek to measure redundancy by explicitly looking at a dissimilarity measure between samples. In case of there being near-duplicates in the training data, the approach of [Vodrahalli et al., 2018] will not be able to decide between them if their resulting gradient magnitude is high, whereas a dissimilarity measure can conclude that they are redundant if it evaluates to a low value.
Algorithm
To find redundancies in datasets, we look at the semantic space of a pre-trained model trained on the full dataset. In our case, the semantic representation comes from the penultimate layer of a neural network. To find groups of points which are close by in the semantic space, we use Agglomerative Clustering [Defays, 1977]. Agglomerative Clustering starts with each point as its own cluster and, at each step, joins together the pair of clusters that are closest according to the dissimilarity criterion. Consider two images I_1 and I_2 whose latent representations are denoted by the vectors x_1 and x_2. We denote the dissimilarity between x_1 and x_2 by d(x_1, x_2), defined via the cosine angle between them as follows:
d(x_1, x_2) = 1 - \frac{\langle x_1, x_2 \rangle}{\lVert x_1 \rVert \, \lVert x_2 \rVert} \quad (1)
The dissimilarity between two clusters C_1 and C_2, D(C_1, C_2), is the maximum dissimilarity between any two of their constituent points:
D(C_1, C_2) = \max_{x_1 \in C_1,\, x_2 \in C_2} d(x_1, x_2) \quad (2)
For Agglomerative Clustering, we process points belonging to each class independently. Since the dissimilarity is a pairwise measure, processing each class separately leads to faster computations. We run the clustering algorithm until there are k clusters left, where k is the size of the desired subset. We assume that points inside a cluster belong to the same redundant group of images. In each redundant group, we select the image whose representation is closest to the cluster center and discard the rest. Henceforth, we refer to this procedure as semantic space clustering or semantic clustering for brevity.
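As an illustration only (not the authors' released code), this per-class procedure could be sketched with SciPy's hierarchical-clustering utilities, where `embeddings` is assumed to hold the penultimate-layer features of a single class and `k` is the number of samples to retain for that class:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

def select_class_subset(embeddings, k):
    """Cluster one class's penultimate-layer features into k redundant groups
    (complete-linkage agglomerative clustering under cosine dissimilarity) and
    keep only the member closest to each group's mean."""
    dists = pdist(embeddings, metric="cosine")           # Eq. (1): 1 - cosine similarity
    tree = linkage(dists, method="complete")             # Eq. (2): max-dissimilarity merges
    labels = fcluster(tree, t=k, criterion="maxclust")   # cut the dendrogram into <= k clusters

    keep = []
    for c in np.unique(labels):
        members = np.where(labels == c)[0]
        center = embeddings[members].mean(axis=0)
        # Retain the member nearest to the cluster mean; the rest are "redundant".
        keep.append(members[np.argmin(np.linalg.norm(embeddings[members] - center, axis=1))])
    return np.sort(np.array(keep))
```

Here, complete linkage reproduces the max-dissimilarity merge rule of Equation (2) and the cosine metric in pdist matches Equation (1); picking the representative by Euclidean distance to the cluster mean is an assumption, since the text only says that the sample closest to the cluster center is kept.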
Experiments
We use the ResNet [He et al., 2016a] architecture for all our experiments with the variant described in [He et al., 2016b]. For each dataset, we compare the performance after training on different random subsets to subsets found with semantic clustering. Given a fixed pre-trained model, semantic clustering subsets are deterministic and the only source of stochasticity is due to the random network weight initialization and random mini-batch choices during optimization by SGD.
The semantic space embedding is obtained by pretraining a network on the full dataset. We chose the output after the last average pooling layer as our semantic space representation. All hyperparameters are kept identical during pre-training and also when training with different subset sizes.
As the baseline, we compare against a subset of size k uniformly sampled from the full set. Each class is sampled independently in order to be consistent with the semantic clustering scheme. Note that the random sampling scheme adds an additional source of stochasticity compared to clustering. For both uniform sampling and cluster-based subset selection, we report the mean and standard deviation of the test accuracy of the model trained from scratch using the subset.
CIFAR-10 & CIFAR-100
We train a 32-layer ResNet for the CIFAR-10 and CIFAR-100 [Krizhevsky and Hinton, 2009] datasets. The semantic representation obtained was a 64-dimensional vector. For both datasets, we train for 100,000 steps with a batch size of 128 and a learning rate that is cosine annealed [Loshchilov and Hutter, 2016] from 0.1 to 0.
For optimization we use Stochastic Gradient Descent with a momentum coefficient of 0.9. We regularize our weights by penalizing their ℓ2 norm with a factor of 0.0001. We found that, to prevent weights from diverging when training with subsets of all sizes, warming up the learning rate was necessary. We use linear learning rate warm-up from 0 for 2500 steps. We verified that warming up the learning rate performs slightly better than using no warm-up when using the full dataset. In all these experiments, we report average test accuracy across 10 trials.
Figure 2: We see no drop in test accuracy until 10% of the data considered redundant by semantic clustering is removed.
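For illustration, the learning-rate schedule described above (linear warm-up from 0 for 2500 steps, then cosine annealing from 0.1 to 0 over the 100,000 training steps) could be written as a step-indexed function; annealing over only the post-warm-up steps is an assumption, as the text does not spell out this detail:

```python
import math

def cifar_learning_rate(step, base_lr=0.1, warmup_steps=2500, total_steps=100_000):
    """Linear warm-up from 0 to base_lr, then cosine annealing back down to 0."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    progress = (step - warmup_steps) / float(total_steps - warmup_steps)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))
```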
CIFAR-10
We see in the case of the CIFAR-10 dataset in Figure 2 that the same test accuracy can be achieved even after 10% of the training data is discarded using semantic clustering. In contrast, training on random subsets of smaller sizes results in a monotonic drop in performance. Therefore, while we show that at least 10% of the data in the CIFAR-10 dataset is redundant, this redundancy cannot be observed by uniform sampling. Figure 3 shows examples of images considered redundant with semantic clustering while choosing a subset of 90% the size of the full dataset. Each set denotes images that were placed into the same (redundant) group by semantic clustering. Images in green boxes were retained while the rest were discarded. Figure 4 shows the number of redundant groups of different sizes for two classes in the CIFAR-10 dataset when seeking a 90% subset. Since a majority of points are retained, most clusters end up containing one element upon termination. Redundant points arise from clusters with two or more elements in them.
CIFAR-100
In the case of the CIFAR-100 dataset, our proposed scheme fails to find redundancies, as is shown in Figure 5, although it does slightly better than random subsets. Both the proposed and random methods show a monotonic decrease in test accuracy with decreasing subset size.
Figure 4: Number of redundant groups of each size in the CIFAR-10 dataset when finding a 90% subset for two classes. Note that the y-axis is logarithmic.
Figure 6 looks at redundant groups found with semantic clustering to retain 90% of the dataset. As compared to Figure 3, the images within a group show much more semantic variation. Redundant groups in Figure 3 are slight variations of the same object, whereas in Figure 6, redundant groups do not contain the same object. We note that in this case the model is not able to be invariant to these semantic changes.
Similar to Figure 4, we plot the number of redundant groups of each size for two classes in CIFAR-100 in Figure 7.
To quantify the semantic variation of CIFAR-100 in relation to CIFAR-10, we select redundant groups of size two or more and measure the average dissimilarity (from Equation 1) to the retained sample. We report the average over groups in 3 different classes, as well as over the entire datasets, in Table 1. It is clear that the higher semantic variation in the redundant groups of CIFAR-100 seen in Figure 6 translates to a higher average dissimilarity in Table 1.
Figure 6: Each column contains a specific class of images. In contrast to Figure 3, the images within each redundant group show much more variation. The groups were found when retaining a 90% subset, and retraining on only the selected images (in green boxes) while discarding the rest had a negative impact on test performance.
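The Table 1 statistic could be computed along the following lines; this is a sketch in which pooling all discarded members before averaging is one plausible reading of the text, and the `groups` and `retained` structures are hypothetical names:

```python
import numpy as np
from scipy.spatial.distance import cosine

def mean_dissimilarity_to_retained(embeddings, groups, retained):
    """Average Eq. (1) dissimilarity from each discarded member to the retained
    sample of its group, over redundant groups with two or more members.

    groups:   list of index arrays, one per cluster
    retained: list giving the index of the kept sample for each cluster
    """
    vals = []
    for gid, members in enumerate(groups):
        if len(members) < 2:
            continue
        kept = retained[gid]
        vals.extend(cosine(embeddings[m], embeddings[kept]) for m in members if m != kept)
    return float(np.mean(vals))
```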
Choice of semantic representation.
To determine the best choice of semantic representation from a pre-trained model, we run experiments after selecting the semantic representation from 3 different layers in the network. Figure 8 shows the results. Here "Start" denotes the semantic representation after the first convolution layer in a ResNet, "Middle" denotes the representation after the second residual block, and "End" denotes the output of the last average pooling layer. We see that the "End" layer's semantic representation is able to find the largest redundancy.
Table 1: Average dissimilarity to the retained sample across redundant groups (clusters) of size greater than 1. We report the class-wise mean for 3 classes as well as the average over the entire dataset. All clusters were created to find a subset of 90% the size of the full set. We can observe that the average dissimilarity is about an order of magnitude higher for the CIFAR-100 dataset, indicating that there is more variation in the redundant groups.
Figure 9: Validation accuracy after training with subsets of various sizes of ImageNet. We plot the average over 5 trials with the vertical bars denoting standard deviation. There is no drop in validation accuracy when 10% of the training data considered redundant by semantic clustering is removed.
ImageNet
We train a 101-layer ResNet on the ImageNet dataset, which gives us a semantic representation of 2048 dimensions. We use a batch size of 1024 during training and train for 120,000 steps with a learning rate cosine annealed from 0.4 to 0. Using the strategy from [Goyal et al., 2017], we linearly warm up our learning rate from 0 for 5000 steps to be able to train with large batches. We regularize our weights with an ℓ2 penalty with a factor of 0.0001. For optimization, we use Stochastic Gradient Descent with a momentum coefficient of 0.9 and the Nesterov momentum update. Since the test set is not publicly available, we report the average validation accuracy, measured over 5 trials.
The results of training with subsets of varying sizes of ImageNet dataset are shown in Figure 9. Our proposed scheme is able to successfully show that at least 10% of the data can be removed from the training set without any negative impact on the validation accuracy, whereas training on random subsets always gives a drop with decrease in subset size. Figure 1 shows different redundant groups found in the ImageNet dataset. It is noteworthy that the semantic change considered redundant is different across each group. Figure 11 highlights the similarities between images of the same redundant group and the variation across different redundant groups.
In each row of Figure 12, we plot two images from a redundant group on the left where the retained image is highlighted in a green box. On the right we display the image closest to each retained image in dissimilarity but excluded from the redundant group. These images were close in semantic space to the corresponding retained images, but were not considered similar enough to be redundant. For example the redundant group in the first row of Figure 12 contains Sedan-like looking red cars. The 2-seater sports car on the right, in spite of looking similar to the cars on the left, was not considered redundant with them. Figure 10 shows the number of redundant groups of each size when creating a 90% subset. Much akin to Figure 4, a majority of images are not considered redundant and form a group of size 1.
Additional examples of redundant groups on ImageNet are provided in the appendix.
Implementation Details
We use the open source Tensorflow [Abadi et al., 2016] and tensor2tensor [Vaswani et al., 2018] frameworks to train our models. For clustering, we used the scikit-learn [Pedregosa et al., 2011] library. For the CIFAR-10 and CIFAR-100 experiments we train on a single NVIDIA Tesla P100 GPU. For our ImageNet experiments we perform distributed training on 16 Cloud TPUs.
Conclusion
In this work we present a method to find redundant subsets of training data. We explicitly model a dissimilarity metric into our formulation which allows us to find semantically close samples that can be considered redundant. We use an agglomerative clustering algorithm to find redundant groups of images in the semantic space. Through our experiments we are able to show that at least 10% of ImageNet and CIFAR-10 datasets are redundant.
We analyze these redundant groups both qualitatively and quantitatively. Upon visual observation, we see that the semantic change considered redundant varies from cluster to cluster. We show examples of a variety of varying attributes in redundant groups, all of which are redundant from the point of view of training the network.
One particular justification for not needing this variation during training could be that the network learns to be invariant to them because of its shared parameters and seeing similar variations in other parts of the dataset.
In Figures 2 and 9, the accuracy without 5% or 10% of the data is slightly higher than that obtained with the full dataset. This could indicate that redundancies in training datasets hamper the optimization process.
For the CIFAR-100 dataset our proposed scheme fails to find any redundancies. We qualitatively compare the redundant groups in CIFAR-100 (Figure 6) to the ones found in CIFAR-10 (Figure 3) and find that the semantic variation across redundant groups is much larger in the former case. Quantitatively, this can be seen in Table 1, which shows that points in redundant groups of CIFAR-100 are much more spread out in semantic space than those of CIFAR-10.
Although we could not find any redundancies in the CIFAR-100 dataset, a better algorithm might still find them. We hope that this work inspires a line of research into finding such redundancies and leveraging them for faster and more efficient training.
Acknowledgement
We would like to thank colleagues at Google Research for comments and discussions: Thomas Leung, Yair Movshovitz-Attias, Shraman Ray Chaudhuri, Azade Nazi, Serge Ioffe. Figure 11: This figure highlights semantic similarities between images from the same redundant group and variation seen across different redundant groups of the same class. The redundant groups were found while creating a 90% subset of the ImageNet dataset. Each sub-figure is a redundant group of images according to our algorithm. Each column contains images belonging to the same class, with each row in a column being a different redundant group. For example, the first column contains the Clock class. Clocks in 11a are in one group of redundant images whereas clocks in 11e are in another group. From each of the groups in the sub-figures, only the images marked in green boxes are selected by our algorithm and the others are discarded. Discarding these images had no negative impact on validation accuracy. Figure 12: In each row we plot two images from the same redundant group while creating a 90% subset on the left with the retained image highlighted in a green box. On the right we plot the image closest to the retained image in the semantic space but not included in the same redundant group. Note that the image on the right shows a semantic variation which is inconsistent with the one seen in the redundant group. | 2,891 |
1901.11409 | 2914666366 | Large datasets have been crucial to the success of deep learning models in the recent years, which keep performing better as they are trained with more labelled data. While there have been sustained efforts to make these models more data-efficient, the potential benefit of understanding the data itself, is largely untapped. Specifically, focusing on object recognition tasks, we wonder if for common benchmark datasets we can do better than random subsets of the data and find a subset that can generalize on par with the full dataset when trained on. To our knowledge, this is the first result that can find notable redundancies in CIFAR-10 and ImageNet datasets (at least 10 ). Interestingly, we observe semantic correlations between required and redundant images. We hope that our findings can motivate further research into identifying additional redundancies and exploiting them for more efficient training or data-collection. | @cite_32 pose the problem of subset selection as a constrained sub-modular maximization problem and use it to propose an active learning algorithm. The proposed techniques are used by @cite_3 in the context of image recognition tasks. These drawback however, is that when used with deep-neural networks, simple uncertainty based strategies out-perform the mentioned algorithm. | {
"abstract": [
"We study the problem of selecting a subset of big data to train a classifier while incurring minimal performance loss. We show the connection of submodularity to the data likelihood functions for Naive Bayes (NB) and Nearest Neighbor (NN) classifiers, and formulate the data subset selection problems for these classifiers as constrained submodular maximization. Furthermore, we apply this framework to active learning and propose a novel scheme called filtered active submodular selection (FASS), where we combine the uncertainty sampling method with a submodular data subset selection framework. We extensively evaluate the proposed framework on text categorization and handwritten digit recognition tasks with four different classifiers, including deep neural network (DNN) based classifiers. Empirical results indicate that the proposed framework yields significant improvement over the state-of-the-art algorithms on all classifiers.",
"Supervised machine learning based state-of-the-art computer vision techniques are in general data hungry and pose the challenges of not having adequate computing resources and of high costs involved in human labeling efforts. Training data subset selection and active learning techniques have been proposed as possible solutions to these challenges respectively. A special class of subset selection functions naturally model notions of diversity, coverage and representation and they can be used to eliminate redundancy and thus lend themselves well for training data subset selection. They can also help improve the efficiency of active learning in further reducing human labeling efforts by selecting a subset of the examples obtained using the conventional uncertainty sampling based techniques. In this work we empirically demonstrate the effectiveness of two diversity models, namely the Facility-Location and Disparity-Min models for training-data subset selection and reducing labeling effort. We do this for a variety of computer vision tasks including Gender Recognition, Scene Recognition and Object Recognition. Our results show that subset selection done in the right way can add 2-3 in accuracy on existing baselines, particularly in the case of less training data. This allows the training of complex machine learning models (like Convolutional Neural Networks) with much less training data while incurring minimal performance loss."
],
"cite_N": [
"@cite_32",
"@cite_3"
],
"mid": [
"1912128066",
"2806138827"
]
} | Semantic Redundancies in Image-Classification Datasets: The 10% You Don't Need | Large datasets have played a central role in the recent success of deep learning. In fact, the performance of AlexNet [Krizhevsky et al., 2012] trained on ImageNet [Deng et al., 2009] in 2012 is often considered as the starting point of the current deep learning era. Undoubtedly, prominent datasets of ImageNet, CI-FAR, and CIFAR-100 [Krizhevsky and Hinton, 2009] have had a crucial role in the evolution of deep learning methods since then; with even bigger datasets like OpenImages [Kuznetsova et al., 2018] and Tencent ML-images [Wu et al., 2019] recently emerging.
These developments have led to state-of-the-art architectures such as ResNets [He et al., 2016a], DenseNets [Huang et al., 2017], VGG [Simonyan and Zisserman, 2014], AmoebaNets [Huang et al., 2018], and regularization techniques such as Dropout [Srivastava et al., 2014] and Shake-Shake [Gastaldi, 2017]. However, understanding the properties of these datasets themselves has remained relatively untapped. The limited work along this direction includes [Lin et al., 2018], which proposes a modified loss function to deal with the class imbalance inherent in object detection datasets, [Tobin et al., 2017], which studies modifications to simulated data to help models adapt to the real world, and [Carlini et al., 2018], which demonstrates the existence of prototypical examples and verifies that they match human intuition.
This work studies the properties of ImageNet, CIFAR-10 , and CIFAR-100 datasets from the angle of redundancy. We find that at least 10% of ImageNet and CIFAR-10 can be safely removed by a technique as simple as clustering. Particularly, we identify a certain subset of ImageNet and CIFAR-10 whose removal does not affect the test accuracy when the architecture is trained from scratch on the remaining subset. This is striking, as deep learning techniques are believed to be data hungry [Halevy et al., 2009, Sun et al., 2017. In fact, recently the work by [Vodrahalli et al., 2018] specifically studying the redundancy of these datasets concludes that there is no redundancy. Our work refutes that claim by providing counter examples.
Contributions. This work resolves some recent misconceptions about the absence of notable redundancy in major image classification datasets [Vodrahalli et al., 2018]. We do this by identifying a specific subset, which constitutes above 10% of the training set, and yet its removal causes no drop in the test accuracy. To our knowledge, this is the first time such significant redundancy is shown to exist for these datasets. We emphasize that our contribution is merely to demonstrate the existence of such redundancy, but we do not claim any algorithmic contributions. However, we hope that our findings can motivate further research into identifying additional redundancies and exploiting them for more efficient training or data-collection. Our findings may also be of interest to active learning community, as it provides an upper-bound on the best performance 1 .
Method
Motivation
In order to find redundancies, it is crucial to analyze each sample in the context of other samples in the dataset. Unlike previous attempts, we seek to measure redundancy by explicitly looking at a dissimilarity measure between samples. In case of there being near-duplicates in the training data, the approach of [Vodrahalli et al., 2018] will not be able to decide between them if their resulting gradient magnitude is high, whereas a dissimilarity measure can conclude that they are redundant if it evaluates to a low value.
Algorithm
To find redundancies in datasets, we look at the semantic space of a pre-trained model trained on the full dataset. In our case, the semantic representation comes from the penultimate layer of a neural network. To find groups of points which are close by in the semantic space, we use Agglomerative Clustering [Defays, 1977]. Agglomerative Clustering starts with each point as its own cluster and, at each step, joins together the pair of clusters that are closest according to the dissimilarity criterion. Consider two images I_1 and I_2 whose latent representations are denoted by the vectors x_1 and x_2. We denote the dissimilarity between x_1 and x_2 by d(x_1, x_2), defined via the cosine angle between them as follows:
d(x_1, x_2) = 1 - \frac{\langle x_1, x_2 \rangle}{\lVert x_1 \rVert \, \lVert x_2 \rVert} \quad (1)
The dissimilarity between two clusters C_1 and C_2, D(C_1, C_2), is the maximum dissimilarity between any two of their constituent points:
D(C_1, C_2) = \max_{x_1 \in C_1,\, x_2 \in C_2} d(x_1, x_2) \quad (2)
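Read literally, the two definitions above translate into a few lines of NumPy (a didactic sketch; a practical implementation would vectorize the pairwise computation rather than loop):

```python
import numpy as np

def d(x1, x2):
    """Eq. (1): cosine dissimilarity between two feature vectors."""
    return 1.0 - np.dot(x1, x2) / (np.linalg.norm(x1) * np.linalg.norm(x2))

def D(c1, c2):
    """Eq. (2): complete-linkage dissimilarity between two clusters, i.e. the
    worst pairwise dissimilarity between their members."""
    return max(d(x1, x2) for x1 in c1 for x2 in c2)
```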
For Agglomerative Clustering, we process points belonging to each class independently. Since the dissimilarity is a pairwise measure, processing each class separately leads to faster computations. We run the clustering algorithm until there are k clusters left, where k is the size of the desired subset. We assume that points inside a cluster belong to the same redundant group of images. In each redundant group, we select the image whose representation is closest to the cluster center and discard the rest. Henceforth, we refer to this procedure as semantic space clustering or semantic clustering for brevity.
Experiments
We use the ResNet [He et al., 2016a] architecture for all our experiments with the variant described in [He et al., 2016b]. For each dataset, we compare the performance after training on different random subsets to subsets found with semantic clustering. Given a fixed pre-trained model, semantic clustering subsets are deterministic and the only source of stochasticity is due to the random network weight initialization and random mini-batch choices during optimization by SGD.
The semantic space embedding is obtained by pretraining a network on the full dataset. We chose the output after the last average pooling layer as our semantic space representation. All hyperparameters are kept identical during pre-training and also when training with different subset sizes.
As the baseline, we compare against a subset of size k uniformly sampled from the full set. Each class is sampled independently in order to be consistent with the semantic clustering scheme. Note that the random sampling scheme adds an additional source of stochasticity compared to clustering. For both uniform sampling and cluster-based subset selection, we report the mean and standard deviation of the test accuracy of the model trained from scratch using the subset.
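A minimal sketch of this class-balanced random baseline (the function and argument names are ours, not the paper's):

```python
import numpy as np

def random_class_balanced_subset(labels, fraction, seed=0):
    """Uniformly sample `fraction` of each class's indices, independently per
    class, mirroring how the clustering-based subsets are constructed."""
    rng = np.random.default_rng(seed)
    keep = []
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        k = int(round(fraction * len(idx)))
        keep.append(rng.choice(idx, size=k, replace=False))
    return np.sort(np.concatenate(keep))
```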
CIFAR-10 & CIFAR-100
We train a 32-layer ResNet for the CIFAR-10 and CIFAR-100 [Krizhevsky and Hinton, 2009] datasets. The semantic representation obtained was a 64-dimensional vector. For both datasets, we train for 100,000 steps with a batch size of 128 and a learning rate that is cosine annealed [Loshchilov and Hutter, 2016] from 0.1 to 0.
For optimization we use Stochastic Gradient Descent with a momentum coefficient of 0.9. We regularize our weights by penalizing their ℓ2 norm with a factor of 0.0001. We found that, to prevent weights from diverging when training with subsets of all sizes, warming up the learning rate was necessary. We use linear learning rate warm-up from 0 for 2500 steps. We verified that warming up the learning rate performs slightly better than using no warm-up when using the full dataset. In all these experiments, we report average test accuracy across 10 trials.
Figure 2: We see no drop in test accuracy until 10% of the data considered redundant by semantic clustering is removed.
CIFAR-10
We see in the case of the CIFAR-10 dataset in Figure 2 that the same test accuracy can be achieved even after 10% of the training data is discarded using semantic clustering. In contrast, training on random subsets of smaller sizes results in a monotonic drop in performance. Therefore, while we show that at least 10% of the data in the CIFAR-10 dataset is redundant, this redundancy cannot be observed by uniform sampling. Figure 3 shows examples of images considered redundant with semantic clustering while choosing a subset of 90% the size of the full dataset. Each set denotes images that were placed into the same (redundant) group by semantic clustering. Images in green boxes were retained while the rest were discarded. Figure 4 shows the number of redundant groups of different sizes for two classes in the CIFAR-10 dataset when seeking a 90% subset. Since a majority of points are retained, most clusters end up containing one element upon termination. Redundant points arise from clusters with two or more elements in them.
CIFAR-100
In the case of the CIFAR-100 dataset, our proposed scheme fails to find redundancies, as is shown in Figure 5, although it does slightly better than random subsets. Both the proposed and random methods show a monotonic decrease in test accuracy with decreasing subset size.
Figure 4: Number of redundant groups of each size in the CIFAR-10 dataset when finding a 90% subset for two classes. Note that the y-axis is logarithmic.
Figure 6 looks at redundant groups found with semantic clustering to retain 90% of the dataset. As compared to Figure 3, the images within a group show much more semantic variation. Redundant groups in Figure 3 are slight variations of the same object, whereas in Figure 6, redundant groups do not contain the same object. We note that in this case the model is not able to be invariant to these semantic changes.
Similar to Figure 4, we plot the number of redundant groups of each size for two classes in CIFAR-100 in Figure 7.
To quantify the semantic variation of CIFAR-100 in relation to CIFAR-10, we select redundant groups of size two or more and measure the average dissimilarity (from Equation 1) to the retained sample. We report the average over groups in 3 different classes, as well as over the entire datasets, in Table 1. It is clear that the higher semantic variation in the redundant groups of CIFAR-100 seen in Figure 6 translates to a higher average dissimilarity in Table 1.
Figure 6: Each column contains a specific class of images. In contrast to Figure 3, the images within each redundant group show much more variation. The groups were found when retaining a 90% subset, and retraining on only the selected images (in green boxes) while discarding the rest had a negative impact on test performance.
Choice of semantic representation.
To determine the best choice of semantic representation from a pre-trained model, we run experiments after selecting the semantic representation from 3 different layers in the network. Figure 8 shows the results. Here "Start" denotes the semantic representation after the first convolution layer in a ResNet, "Middle" denotes the representation after the second residual block, and "End" denotes the output of the last average pooling layer. We see that the "End" layer's semantic representation is able to find the largest redundancy.
Table 1: Average dissimilarity to the retained sample across redundant groups (clusters) of size greater than 1. We report the class-wise mean for 3 classes as well as the average over the entire dataset. All clusters were created to find a subset of 90% the size of the full set. We can observe that the average dissimilarity is about an order of magnitude higher for the CIFAR-100 dataset, indicating that there is more variation in the redundant groups.
Figure 9: Validation accuracy after training with subsets of various sizes of ImageNet. We plot the average over 5 trials with the vertical bars denoting standard deviation. There is no drop in validation accuracy when 10% of the training data considered redundant by semantic clustering is removed.
ImageNet
We train a 101-layer ResNet on the ImageNet dataset, which gives us a semantic representation of 2048 dimensions. We use a batch size of 1024 during training and train for 120,000 steps with a learning rate cosine annealed from 0.4 to 0. Using the strategy from [Goyal et al., 2017], we linearly warm up our learning rate from 0 for 5000 steps to be able to train with large batches. We regularize our weights with an ℓ2 penalty with a factor of 0.0001. For optimization, we use Stochastic Gradient Descent with a momentum coefficient of 0.9 and the Nesterov momentum update. Since the test set is not publicly available, we report the average validation accuracy, measured over 5 trials.
The results of training with subsets of varying sizes of ImageNet dataset are shown in Figure 9. Our proposed scheme is able to successfully show that at least 10% of the data can be removed from the training set without any negative impact on the validation accuracy, whereas training on random subsets always gives a drop with decrease in subset size. Figure 1 shows different redundant groups found in the ImageNet dataset. It is noteworthy that the semantic change considered redundant is different across each group. Figure 11 highlights the similarities between images of the same redundant group and the variation across different redundant groups.
In each row of Figure 12, we plot two images from a redundant group on the left where the retained image is highlighted in a green box. On the right we display the image closest to each retained image in dissimilarity but excluded from the redundant group. These images were close in semantic space to the corresponding retained images, but were not considered similar enough to be redundant. For example the redundant group in the first row of Figure 12 contains Sedan-like looking red cars. The 2-seater sports car on the right, in spite of looking similar to the cars on the left, was not considered redundant with them. Figure 10 shows the number of redundant groups of each size when creating a 90% subset. Much akin to Figure 4, a majority of images are not considered redundant and form a group of size 1.
Additional examples of redundant groups on ImageNet are provided in the appendix.
Implementation Details
We use the open source Tensorflow [Abadi et al., 2016] and tensor2tensor [Vaswani et al., 2018] frameworks to train our models. For clustering, we used the scikit-learn [Pedregosa et al., 2011] library. For the CIFAR-10 and CIFAR-100 experiments we train on a single NVIDIA Tesla P100 GPU. For our ImageNet experiments we perform distributed training on 16 Cloud TPUs.
Conclusion
In this work we present a method to find redundant subsets of training data. We explicitly model a dissimilarity metric into our formulation which allows us to find semantically close samples that can be considered redundant. We use an agglomerative clustering algorithm to find redundant groups of images in the semantic space. Through our experiments we are able to show that at least 10% of ImageNet and CIFAR-10 datasets are redundant.
We analyze these redundant groups both qualitatively and quantitatively. Upon visual observation, we see that the semantic change considered redundant varies from cluster to cluster. We show examples of a variety of varying attributes in redundant groups, all of which are redundant from the point of view of training the network.
One particular justification for not needing this variation during training could be that the network learns to be invariant to them because of its shared parameters and seeing similar variations in other parts of the dataset.
In Figures 2 and 9, the accuracy without 5% or 10% of the data is slightly higher than that obtained with the full dataset. This could indicate that redundancies in training datasets hamper the optimization process.
For the CIFAR-100 dataset our proposed scheme fails to find any redundancies. We qualitatively compare the redundant groups in CIFAR-100 (Figure 6) to the ones found in CIFAR-10 (Figure 3) and find that the semantic variation across redundant groups is much larger in the former case. Quantitatively, this can be seen in Table 1, which shows that points in redundant groups of CIFAR-100 are much more spread out in semantic space than those of CIFAR-10.
Although we could not find any redundancies in the CIFAR-100 dataset, a better algorithm might still find them. We hope that this work inspires a line of research into finding such redundancies and leveraging them for faster and more efficient training.
Acknowledgement
We would like to thank colleagues at Google Research for comments and discussions: Thomas Leung, Yair Movshovitz-Attias, Shraman Ray Chaudhuri, Azade Nazi, Serge Ioffe. Figure 11: This figure highlights semantic similarities between images from the same redundant group and variation seen across different redundant groups of the same class. The redundant groups were found while creating a 90% subset of the ImageNet dataset. Each sub-figure is a redundant group of images according to our algorithm. Each column contains images belonging to the same class, with each row in a column being a different redundant group. For example, the first column contains the Clock class. Clocks in 11a are in one group of redundant images whereas clocks in 11e are in another group. From each of the groups in the sub-figures, only the images marked in green boxes are selected by our algorithm and the others are discarded. Discarding these images had no negative impact on validation accuracy. Figure 12: In each row we plot two images from the same redundant group while creating a 90% subset on the left with the retained image highlighted in a green box. On the right we plot the image closest to the retained image in the semantic space but not included in the same redundant group. Note that the image on the right shows a semantic variation which is inconsistent with the one seen in the redundant group. | 2,891 |
1901.11409 | 2914666366 | Large datasets have been crucial to the success of deep learning models in the recent years, which keep performing better as they are trained with more labelled data. While there have been sustained efforts to make these models more data-efficient, the potential benefit of understanding the data itself, is largely untapped. Specifically, focusing on object recognition tasks, we wonder if for common benchmark datasets we can do better than random subsets of the data and find a subset that can generalize on par with the full dataset when trained on. To our knowledge, this is the first result that can find notable redundancies in CIFAR-10 and ImageNet datasets (at least 10 ). Interestingly, we observe semantic correlations between required and redundant images. We hope that our findings can motivate further research into identifying additional redundancies and exploiting them for more efficient training or data-collection. | Another example of trying to identify a smaller, more informative set can be seen in @cite_25 . Using their own definition of value of a training example, they demonstrate that prioritizing training over examples of high training value can result in improved performance for object detection tasks. The authors suggest that their definition of training value encourages prototypicality and thus results is better learning. | {
"abstract": [
"When learning a new concept, not all training examples may prove equally useful for training: some may have higher or lower training value than others. The goal of this paper is to bring to the attention of the vision community the following considerations: (1) some examples are better than others for training detectors or classifiers, and (2) in the presence of better examples, some examples may negatively impact performance and removing them may be beneficial. In this paper, we propose an approach for measuring the training value of an example, and use it for ranking and greedily sorting examples. We test our methods on different vision tasks, models, datasets and classifiers. Our experiments show that the performance of current state-of-the-art detectors and classifiers can be improved when training on a subset, rather than the whole training set."
],
"cite_N": [
"@cite_25"
],
"mid": [
"1644040291"
]
} | Semantic Redundancies in Image-Classification Datasets: The 10% You Don't Need | Large datasets have played a central role in the recent success of deep learning. In fact, the performance of AlexNet [Krizhevsky et al., 2012] trained on ImageNet [Deng et al., 2009] in 2012 is often considered as the starting point of the current deep learning era. Undoubtedly, prominent datasets of ImageNet, CI-FAR, and CIFAR-100 [Krizhevsky and Hinton, 2009] have had a crucial role in the evolution of deep learning methods since then; with even bigger datasets like OpenImages [Kuznetsova et al., 2018] and Tencent ML-images [Wu et al., 2019] recently emerging.
These developments have led to state-of-the-art architectures such as ResNets [He et al., 2016a], DenseNets [Huang et al., 2017], VGG [Simonyan and Zisserman, 2014], AmoebaNets [Huang et al., 2018], and regularization techniques such as Dropout [Srivastava et al., 2014] and Shake-Shake [Gastaldi, 2017]. However, understanding the properties of these datasets themselves has remained relatively untapped. The limited work along this direction includes [Lin et al., 2018], which proposes a modified loss function to deal with the class imbalance inherent in object detection datasets, [Tobin et al., 2017], which studies modifications to simulated data to help models adapt to the real world, and [Carlini et al., 2018], which demonstrates the existence of prototypical examples and verifies that they match human intuition.
This work studies the properties of ImageNet, CIFAR-10 , and CIFAR-100 datasets from the angle of redundancy. We find that at least 10% of ImageNet and CIFAR-10 can be safely removed by a technique as simple as clustering. Particularly, we identify a certain subset of ImageNet and CIFAR-10 whose removal does not affect the test accuracy when the architecture is trained from scratch on the remaining subset. This is striking, as deep learning techniques are believed to be data hungry [Halevy et al., 2009, Sun et al., 2017. In fact, recently the work by [Vodrahalli et al., 2018] specifically studying the redundancy of these datasets concludes that there is no redundancy. Our work refutes that claim by providing counter examples.
Contributions. This work resolves some recent misconceptions about the absence of notable redundancy in major image classification datasets [Vodrahalli et al., 2018]. We do this by identifying a specific subset, which constitutes above 10% of the training set, and yet its removal causes no drop in the test accuracy. To our knowledge, this is the first time such significant redundancy is shown to exist for these datasets. We emphasize that our contribution is merely to demonstrate the existence of such redundancy, but we do not claim any algorithmic contributions. However, we hope that our findings can motivate further research into identifying additional redundancies and exploiting them for more efficient training or data-collection. Our findings may also be of interest to active learning community, as it provides an upper-bound on the best performance 1 .
Method
Motivation
In order to find redundancies, it is crucial to analyze each sample in the context of other samples in the dataset. Unlike previous attempts, we seek to measure redundancy by explicitly looking at a dissimilarity measure between samples. In case of there being near-duplicates in the training data, the approach of [Vodrahalli et al., 2018] will not be able to decide between them if their resulting gradient magnitude is high, whereas a dissimilarity measure can conclude that they are redundant if it evaluates to a low value.
Algorithm
To find redundancies in datasets, we look at the semantic space of a pre-trained model trained on the full dataset. In our case, the semantic representation comes from the penultimate layer of a neural network. To find groups of points which are close by in the semantic space, we use Agglomerative Clustering [Defays, 1977]. Agglomerative Clustering starts with each point as its own cluster and, at each step, joins together the pair of clusters that are closest according to the dissimilarity criterion. Consider two images I_1 and I_2 whose latent representations are denoted by the vectors x_1 and x_2. We denote the dissimilarity between x_1 and x_2 by d(x_1, x_2), defined via the cosine angle between them as follows:
d(x_1, x_2) = 1 - \frac{\langle x_1, x_2 \rangle}{\lVert x_1 \rVert \, \lVert x_2 \rVert} \quad (1)
The dissimilarity between two clusters C_1 and C_2, D(C_1, C_2), is the maximum dissimilarity between any two of their constituent points:
D(C_1, C_2) = \max_{x_1 \in C_1,\, x_2 \in C_2} d(x_1, x_2) \quad (2)
For Agglomerative Clustering, we process points belonging to each class independently. Since the dissimilarity is a pairwise measure, processing each class separately leads to faster computations. We run the clustering algorithm until there are k clusters left, where k is the size of the desired subset. We assume that points inside a cluster belong to the same redundant group of images. In each redundant group, we select the image whose representation is closest to the cluster center and discard the rest. Henceforth, we refer to this procedure as semantic space clustering or semantic clustering for brevity.
Experiments
We use the ResNet [He et al., 2016a] architecture for all our experiments with the variant described in [He et al., 2016b]. For each dataset, we compare the performance after training on different random subsets to subsets found with semantic clustering. Given a fixed pre-trained model, semantic clustering subsets are deterministic and the only source of stochasticity is due to the random network weight initialization and random mini-batch choices during optimization by SGD.
The semantic space embedding is obtained by pretraining a network on the full dataset. We chose the output after the last average pooling layer as our semantic space representation. All hyperparameters are kept identical during pre-training and also when training with different subset sizes.
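For illustration only, such a feature extractor could be assembled with Keras by cutting a ResNet at its global average pooling layer; the ResNet50 here is a stand-in, since the paper pretrains its own 32- and 101-layer ResNets from scratch on the target dataset:

```python
import tensorflow as tf

# Stand-in feature extractor: a randomly initialized ResNet50 cut at its global
# average pooling layer. The paper instead pretrains its own ResNet on the
# target dataset, so this is illustrative only.
feature_extractor = tf.keras.applications.ResNet50(
    include_top=False, weights=None, pooling="avg", input_shape=(224, 224, 3))

def embed(images):
    """Map a batch of preprocessed images to semantic-space vectors."""
    return feature_extractor(images, training=False).numpy()
```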
As the baseline, we compare against a subset of size k uniformly sampled from the full set. Each class is sampled independently in order to be consistent with the semantic clustering scheme. Note that the random sampling scheme adds an additional source of stochasticity compared to clustering. For both uniform sampling and cluster-based subset selection, we report the mean and standard deviation of the test accuracy of the model trained from scratch using the subset.
CIFAR-10 & CIFAR-100
We train a 32-layer ResNet for the CIFAR-10 and CIFAR-100 [Krizhevsky and Hinton, 2009] datasets. The semantic representation obtained was a 64-dimensional vector. For both datasets, we train for 100,000 steps with a batch size of 128 and a learning rate that is cosine annealed [Loshchilov and Hutter, 2016] from 0.1 to 0.
For optimization we use Stochastic Gradient Descent with a momentum coefficient of 0.9. We regularize our weights by penalizing their ℓ2 norm with a factor of 0.0001. We found that, to prevent weights from diverging when training with subsets of all sizes, warming up the learning rate was necessary. We use linear learning rate warm-up from 0 for 2500 steps. We verified that warming up the learning rate performs slightly better than using no warm-up when using the full dataset. In all these experiments, we report average test accuracy across 10 trials.
Figure 2: We see no drop in test accuracy until 10% of the data considered redundant by semantic clustering is removed.
CIFAR-10
We see in the case of the CIFAR-10 dataset in Figure 2 that the same test accuracy can be achieved even after 10% of the training data is discarded using semantic clustering. In contrast, training on random subsets of smaller sizes results in a monotonic drop in performance. Therefore, while we show that at least 10% of the data in the CIFAR-10 dataset is redundant, this redundancy cannot be observed by uniform sampling. Figure 3 shows examples of images considered redundant with semantic clustering while choosing a subset of 90% the size of the full dataset. Each set denotes images that were placed into the same (redundant) group by semantic clustering. Images in green boxes were retained while the rest were discarded. Figure 4 shows the number of redundant groups of different sizes for two classes in the CIFAR-10 dataset when seeking a 90% subset. Since a majority of points are retained, most clusters end up containing one element upon termination. Redundant points arise from clusters with two or more elements in them.
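Given integer cluster labels like those produced by a hierarchical-clustering routine, a histogram of redundant-group sizes of the kind plotted in Figure 4 could be tallied as follows (a sketch, not the authors' plotting code):

```python
import numpy as np

def group_size_histogram(labels):
    """Tally how many redundant groups (clusters) have each size, assuming
    non-negative integer cluster labels such as those returned by fcluster."""
    sizes = np.bincount(np.asarray(labels))
    sizes = sizes[sizes > 0]                       # drop unused label ids
    group_sizes, counts = np.unique(sizes, return_counts=True)
    return dict(zip(group_sizes.tolist(), counts.tolist()))
```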
CIFAR-100
In the case of the CIFAR-100 dataset, our proposed scheme fails to find redundancies, as is shown in Figure 5, although it does slightly better than random subsets. Both the proposed and random methods show a monotonic decrease in test accuracy with decreasing subset size.
Figure 4: Number of redundant groups of each size in the CIFAR-10 dataset when finding a 90% subset for two classes. Note that the y-axis is logarithmic.
Figure 6 looks at redundant groups found with semantic clustering to retain 90% of the dataset. As compared to Figure 3, the images within a group show much more semantic variation. Redundant groups in Figure 3 are slight variations of the same object, whereas in Figure 6, redundant groups do not contain the same object. We note that in this case the model is not able to be invariant to these semantic changes.
Similar to Figure 4, we plot the number of redundant groups of each size for two classes in CIFAR-100 in Figure 7.
To quantify the semantic variation of CIFAR-100 in relation to CIFAR-10, we select redundant groups of size two or more and measure the average dissimilarity (from Equation 1) to the retained sample. We report the average over groups in 3 different classes, as well as over the entire datasets, in Table 1. It is clear that the higher semantic variation in the redundant groups of CIFAR-100 seen in Figure 6 translates to a higher average dissimilarity in Table 1.
Figure 6: Each column contains a specific class of images. In contrast to Figure 3, the images within each redundant group show much more variation. The groups were found when retaining a 90% subset, and retraining on only the selected images (in green boxes) while discarding the rest had a negative impact on test performance.
Choice of semantic representation.
To determine the best choice of semantic representation from a pre-trained model, we run experiments after selecting the semantic representation from 3 different layers in the network. Figure 8 shows the results. Here "Start" denotes the semantic representation after the first convolution layer in a ResNet, "Middle" denotes the representation after the second residual block, and "End" denotes the output of the last average pooling layer. We see that the "End" layer's semantic representation is able to find the largest redundancy.
Table 1: Average dissimilarity to the retained sample across redundant groups (clusters) of size greater than 1. We report the class-wise mean for 3 classes as well as the average over the entire dataset. All clusters were created to find a subset of 90% the size of the full set. We can observe that the average dissimilarity is about an order of magnitude higher for the CIFAR-100 dataset, indicating that there is more variation in the redundant groups.
Figure 9: Validation accuracy after training with subsets of various sizes of ImageNet. We plot the average over 5 trials with the vertical bars denoting standard deviation. There is no drop in validation accuracy when 10% of the training data considered redundant by semantic clustering is removed.
ImageNet
We train a 101-layer ResNet on the ImageNet dataset, which gives us a semantic representation of 2048 dimensions. We use a batch size of 1024 during training and train for 120,000 steps with a learning rate cosine annealed from 0.4 to 0. Using the strategy from [Goyal et al., 2017], we linearly warm up our learning rate from 0 for 5000 steps to be able to train with large batches. We regularize our weights with an ℓ2 penalty with a factor of 0.0001. For optimization, we use Stochastic Gradient Descent with a momentum coefficient of 0.9 and the Nesterov momentum update. Since the test set is not publicly available, we report the average validation accuracy, measured over 5 trials.
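A sketch of these optimizer settings using Keras APIs rather than the authors' tensor2tensor setup (the variable names and exact wiring are ours):

```python
import tensorflow as tf

# SGD with Nesterov momentum 0.9 and an L2 weight penalty of 1e-4, attached to
# each weight layer via a kernel regularizer. The learning rate shown is the
# pre-annealing peak; in practice it follows the warm-up/cosine schedule above.
optimizer = tf.keras.optimizers.SGD(learning_rate=0.4, momentum=0.9, nesterov=True)
l2_reg = tf.keras.regularizers.l2(1e-4)

conv = tf.keras.layers.Conv2D(64, 3, padding="same", kernel_regularizer=l2_reg)
```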
The results of training with subsets of varying sizes of ImageNet dataset are shown in Figure 9. Our proposed scheme is able to successfully show that at least 10% of the data can be removed from the training set without any negative impact on the validation accuracy, whereas training on random subsets always gives a drop with decrease in subset size. Figure 1 shows different redundant groups found in the ImageNet dataset. It is noteworthy that the semantic change considered redundant is different across each group. Figure 11 highlights the similarities between images of the same redundant group and the variation across different redundant groups.
In each row of Figure 12, we plot two images from a redundant group on the left where the retained image is highlighted in a green box. On the right we display the image closest to each retained image in dissimilarity but excluded from the redundant group. These images were close in semantic space to the corresponding retained images, but were not considered similar enough to be redundant. For example the redundant group in the first row of Figure 12 contains Sedan-like looking red cars. The 2-seater sports car on the right, in spite of looking similar to the cars on the left, was not considered redundant with them. Figure 10 shows the number of redundant groups of each size when creating a 90% subset. Much akin to Figure 4, a majority of images are not considered redundant and form a group of size 1.
Additional examples of redundant groups on ImageNet are provided in the appendix.
Implementation Details
We use the open source Tensorflow [Abadi et al., 2016] and tensor2tensor [Vaswani et al., 2018] frameworks to train our models. For clustering, we used the scikit-learn [Pedregosa et al., 2011] library. For the CIFAR-10 and CIFAR-100 experiments we train on a single NVIDIA Tesla P100 GPU. For our ImageNet experiments we perform distributed training on 16 Cloud TPUs.
Conclusion
In this work we present a method to find redundant subsets of training data. We explicitly model a dissimilarity metric into our formulation which allows us to find semantically close samples that can be considered redundant. We use an agglomerative clustering algorithm to find redundant groups of images in the semantic space. Through our experiments we are able to show that at least 10% of ImageNet and CIFAR-10 datasets are redundant.
We analyze these redundant groups both qualitatively and quantitatively. Upon visual observation, we see that the semantic change considered redundant varies from cluster to cluster. We show examples of a variety of varying attributes in redundant groups, all of which are redundant from the point of view of training the network.
One particular justification for not needing this variation during training could be that the network learns to be invariant to them because of its shared parameters and seeing similar variations in other parts of the dataset.
In Figure 2 and 9, the accuracy without 5% and 10% of the data is slightly higher than that obtained with the full dataset. This could indicate that redundancies in training datasets hamper the optimization process.
For the CIFAR-100 dataset our proposed scheme fails to find any redundancies. We qualitatively compare the redundant groups in CIFAR-100 ( Figure 6) to the ones found in CIFAR-10 ( Figure 3) and find that the semantic variation across redundant groups is much larger in the former case. Quantitatively this can be seen in Table 1 which shows points in redundant groups of CIFAR-100 are much more spread out in semantic space as compared to CIFAR-10 .
Although we could not find any redundancies in the CIFAR-100 dataset, there could be a better algorithm that could find them. Moreover, we hope that this work inspires a line of work into finding these redundancies and leveraging them for faster and more efficient training.
Acknowledgement
We would like to thank colleagues at Google Research for comments and discussions: Thomas Leung, Yair Movshovitz-Attias, Shraman Ray Chaudhuri, Azade Nazi, Serge Ioffe. Figure 11: This figure highlights semantic similarities between images from the same redundant group and variation seen across different redundant groups of the same class. The redundant groups were found while creating a 90% subset of the ImageNet dataset. Each sub-figure is a redundant group of images according to our algorithm. Each column contains images belonging to the same class, with each row in a column being a different redundant group. For example, the first column contains the Clock class. Clocks in 11a are in one group of redundant images whereas clocks in 11e are in another group. From each of the groups in the sub-figures, only the images marked in green boxes are selected by our algorithm and the others are discarded. Discarding these images had no negative impact on validation accuracy. Figure 12: In each row we plot two images from the same redundant group while creating a 90% subset on the left with the retained image highlighted in a green box. On the right we plot the image closest to the retained image in the semantic space but not included in the same redundant group. Note that the image on the right shows a semantic variation which is inconsistent with the one seen in the redundant group. | 2,891 |
1901.11409 | 2914666366 | Large datasets have been crucial to the success of deep learning models in the recent years, which keep performing better as they are trained with more labelled data. While there have been sustained efforts to make these models more data-efficient, the potential benefit of understanding the data itself, is largely untapped. Specifically, focusing on object recognition tasks, we wonder if for common benchmark datasets we can do better than random subsets of the data and find a subset that can generalize on par with the full dataset when trained on. To our knowledge, this is the first result that can find notable redundancies in CIFAR-10 and ImageNet datasets (at least 10 ). Interestingly, we observe semantic correlations between required and redundant images. We hope that our findings can motivate further research into identifying additional redundancies and exploiting them for more efficient training or data-collection. | Most recently @cite_11 attempts to find redundancies in image recognition datasets by analyzing gradient magnitudes as a measure of importance. They prioritize examples with high gradient magnitude according to a pre-trained classifier. Their method fails to find redundancies in and datasets. | {
"abstract": [
"Modern computer vision algorithms often rely on very large training datasets. However, it is conceivable that a carefully selected subsample of the dataset is sufficient for training. In this paper, we propose a gradient-based importance measure that we use to empirically analyze relative importance of training images in four datasets of varying complexity. We find that in some cases, a small subsample is indeed sufficient for training. For other datasets, however, the relative differences in importance are negligible. These results have important implications for active learning on deep networks. Additionally, our analysis method can be used as a general tool to better understand diversity of training examples in datasets."
],
"cite_N": [
"@cite_11"
],
"mid": [
"2902142571"
]
} | Semantic Redundancies in Image-Classification Datasets: The 10% You Don't Need | Large datasets have played a central role in the recent success of deep learning. In fact, the performance of AlexNet [Krizhevsky et al., 2012] trained on ImageNet [Deng et al., 2009] in 2012 is often considered as the starting point of the current deep learning era. Undoubtedly, prominent datasets of ImageNet, CIFAR, and CIFAR-100 [Krizhevsky and Hinton, 2009] have had a crucial role in the evolution of deep learning methods since then, with even bigger datasets like OpenImages [Kuznetsova et al., 2018] and Tencent ML-images [Wu et al., 2019] recently emerging.
These developments have led to state-of-the-art architectures such as ResNets [He et al., 2016a], DenseNets [Huang et al., 2017], VGG [Simonyan and Zisserman, 2014], AmoebaNets [Huang et al., 2018], and regularization techniques such as Dropout [Srivastava et al., 2014] and Shake-Shake [Gastaldi, 2017]. However, understanding the properties of these datasets themselves has remained relatively untapped. Limited study along this direction includes [Lin et al., 2018], which proposes a modified loss function to deal with the class imbalance inherent in object detection datasets, [Tobin et al., 2017], which studies modifications to simulated data to help models adapt to the real world, and [Carlini et al., 2018], which demonstrates the existence of prototypical examples and verifies that they match human intuition.
This work studies the properties of the ImageNet, CIFAR-10, and CIFAR-100 datasets from the angle of redundancy. We find that at least 10% of ImageNet and CIFAR-10 can be safely removed by a technique as simple as clustering. Particularly, we identify a certain subset of ImageNet and CIFAR-10 whose removal does not affect the test accuracy when the architecture is trained from scratch on the remaining subset. This is striking, as deep learning techniques are believed to be data hungry [Halevy et al., 2009, Sun et al., 2017]. In fact, recently the work by [Vodrahalli et al., 2018] specifically studying the redundancy of these datasets concludes that there is no redundancy. Our work refutes that claim by providing counterexamples.
Contributions. This work resolves some recent misconceptions about the absence of notable redundancy in major image classification datasets [Vodrahalli et al., 2018]. We do this by identifying a specific subset, which constitutes above 10% of the training set, and yet its removal causes no drop in the test accuracy. To our knowledge, this is the first time such significant redundancy is shown to exist for these datasets. We emphasize that our contribution is merely to demonstrate the existence of such redundancy, but we do not claim any algorithmic contributions. However, we hope that our findings can motivate further research into identifying additional redundancies and exploiting them for more efficient training or data-collection. Our findings may also be of interest to active learning community, as it provides an upper-bound on the best performance 1 .
Method
Motivation
In order to find redundancies, it is crucial to analyze each sample in the context of other samples in the dataset. Unlike previous attempts, we seek to measure redundancy by explicitly looking at a dissimilarity measure between samples. In case of there being near-duplicates in the training data, the approach of [Vodrahalli et al., 2018] will not be able to decide between them if their resulting gradient magnitude is high, whereas a dissimilarity measure can conclude that they are redundant if it evaluates to a low value.
Algorithm
To find redundancies in datasets, we look at the semantic space of a pre-trained model trained on the full dataset. In our case, the semantic representation comes from the penultimate layer of a neural network. To find groups of points which are close by in the semantic space we use Agglomerative Clustering [Defays, 1977]. Agglomerative Clustering assumes that each point starts out as its own cluster initially, and at each step, the pair of clusters which are closest according to the dissimilarity criterion are joined together. Given two images $I_1$ and $I_2$ whose latent representations are denoted by the vectors $x_1$ and $x_2$, we denote the dissimilarity between $x_1$ and $x_2$ by $d(x_1, x_2)$, defined using the cosine angle between them as follows:
$$d(x_1, x_2) = 1 - \frac{\langle x_1, x_2 \rangle}{\|x_1\|\,\|x_2\|}. \qquad (1)$$
The dissimilarity between two clusters $C_1$ and $C_2$, denoted $D(C_1, C_2)$, is the maximum dissimilarity between any two of their constituent points:
$$D(C_1, C_2) = \max_{x_1 \in C_1,\, x_2 \in C_2} d(x_1, x_2). \qquad (2)$$
For Agglomerative Clustering, we process points belonging to each class independently. Since the dissimilarity is a pairwise measure, processing each class separately leads to faster computations. We run the clustering algorithm until there are k clusters left, where k is the size of the desired subset. We assume that points inside a cluster belong to the same redundant group of images. In each redundant group, we select the image whose representation is closest to the cluster center and discard the rest. Henceforth, we refer to this procedure as semantic space clustering or semantic clustering for brevity.
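To make the procedure above concrete, the following is a minimal, illustrative sketch (not the authors' released code) of the per-class redundancy-selection step. It assumes `embeddings` is an (n, d) array of penultimate-layer features for a single class and `keep` is the number of clusters retained for that class; SciPy's hierarchical-clustering routines are used here for brevity, even though the paper reports using scikit-learn.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import cdist


def select_representatives(embeddings: np.ndarray, keep: int) -> np.ndarray:
    """Return indices of the images retained for one class."""
    # Complete-linkage agglomerative clustering under cosine dissimilarity,
    # matching Equations (1) and (2).
    tree = linkage(embeddings, method="complete", metric="cosine")
    labels = fcluster(tree, t=keep, criterion="maxclust")  # cluster ids in 1..keep

    retained = []
    for cluster_id in np.unique(labels):
        members = np.where(labels == cluster_id)[0]
        center = embeddings[members].mean(axis=0, keepdims=True)
        # Keep the member closest to the cluster center; the remaining members
        # form a redundant group and are discarded.
        dists = cdist(embeddings[members], center, metric="cosine").ravel()
        retained.append(members[np.argmin(dists)])
    return np.array(sorted(retained))


# Example: retain a 90% subset of a class with 1000 images and 64-d features.
feats = np.random.randn(1000, 64)
kept = select_representatives(feats, keep=900)
print(len(kept), "images retained")
```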
Experiments
We use the ResNet [He et al., 2016a] architecture for all our experiments with the variant described in [He et al., 2016b]. For each dataset, we compare the performance after training on different random subsets to subsets found with semantic clustering. Given a fixed pre-trained model, semantic clustering subsets are deterministic and the only source of stochasticity is due to the random network weight initialization and random mini-batch choices during optimization by SGD.
The semantic space embedding is obtained by pretraining a network on the full dataset. We chose the output after the last average pooling layer as our semantic space representation. All hyperparameters are kept identical during pre-training and also when training with different subset sizes.
As the baseline, we compare against a subset of size k uniformly sampled from the full set. Each class is sampled independently in order to be consistent with the semantic clustering scheme. Note that the random sampling scheme adds an additional source of stochasticity compared to clustering. For either uniform sampling or cluster-based subset selection, we report the mean and standard deviation of the test accuracy of the model trained from scratch using the subset.
CIFAR-10 & CIFAR-100
We train a 32-layer ResNet for the CIFAR-10 and CIFAR-100 [Krizhevsky and Hinton, 2009] datasets. The semantic representation obtained was a 64-dimensional vector. For both datasets, we train for 100,000 steps with a learning rate which is cosine annealed [Loshchilov and Hutter, 2016] from 0.1 to 0 with a batch size of 128.
For optimization we use Stochastic Gradient Descent with a momentum coefficient of 0.9. We regularize our weights by penalizing their $\ell_2$ norm with a factor of 0.0001. We found that to prevent weights from diverging when training with subsets of all sizes, warming up the learning rate was necessary. We use linear learning rate warm-up for 2500 steps from 0. We verified that warming up the learning rate performs slightly better than using no warm-up when using the full dataset. In all these experiments, we report average test accuracy across 10 trials. Figure 2: We see no drop in test accuracy until 10% of the data considered redundant by semantic clustering is removed.
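For illustration, the learning-rate schedule described above (linear warm-up followed by cosine annealing to zero) can be written as a small function; the name and default values below mirror the CIFAR setup (base rate 0.1, 2500 warm-up steps, 100,000 total steps) but are our own sketch rather than code from the paper.

```python
import math


def learning_rate(step, base_lr=0.1, warmup_steps=2500, total_steps=100_000):
    if step < warmup_steps:
        # Linear warm-up from 0 to the base learning rate.
        return base_lr * step / warmup_steps
    # Cosine annealing from base_lr down to 0 over the remaining steps.
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))
```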
CIFAR-10
We see in the case of the CIFAR-10 dataset in Figure 2 that the same test accuracy can be achieved even after 10% of the training data is discarded using semantic clustering. In contrast, training on random subsets of smaller sizes results in a monotonic drop in performance. Therefore, while we show that at least 10% of the data in the CIFAR-10 dataset is redundant, this redundancy cannot be observed by uniform sampling. Figure 3 shows examples of images considered redundant with semantic clustering while choosing a subset of 90% the size of the full dataset. Each set denotes images that were placed into the same (redundant) group by semantic clustering. Images in green boxes were retained while the rest were discarded. Figure 4 shows the number of redundant groups of different sizes for two classes in the CIFAR-10 dataset when seeking a 90% subset; note that its y-axis is logarithmic. Since a majority of points are retained, most clusters end up containing one element upon termination. Redundant points arise from clusters with two or more elements in them.
CIFAR-100
In the case of the CIFAR-100 dataset, our proposed scheme fails to find redundancies, as is shown in Figure 5, while it does slightly better than random subsets. Both the proposed and random methods show a monotonic decrease in test accuracy with decreasing subset size. Figure 6 looks at redundant groups found with semantic clustering to retain 90% of the dataset. As compared to Figure 3, the images within a group show much more semantic variation. Redundant groups in Figure 3 are slight variations of the same object, whereas in Figure 6, redundant groups do not contain the same object. We note that in this case the model is not able to be invariant to these semantic changes.
Similar to Figure 4, we plot the number of redundant groups of each size for two classes in CIFAR-100 in Figure 7.
To quantify the semantic variation of CIFAR-100 in relation to CIFAR-10, we select redundant groups of size two or more, and measure the average dissimilarity (from Equation 1) to the retained sample. We report the average over groups in 3 different classes as well as over the entire datasets in Table 1. It is clear that the higher semantic variation in the redundant groups of CIFAR-100 seen in Figure 6 translates to a higher average dissimilarity in Table 1. Figure 6: Each column contains a specific class of images. In contrast to Figure 3, the images within each redundant group show much more variation. The groups were found when retaining a 90% subset, and retraining on only the selected images (in green boxes) and discarding the rest had a negative impact on test performance.
Choice of semantic representation.
To determine the best choice of semantic representation from a pre-trained model, we run experiments after selecting the semantic representation from 3 different layers in the network. Figure 8 shows the results. Here "Start" denotes the semantic representation after the first convolution layer of a ResNet, "Middle" denotes the representation after the second residual block, and "End" denotes the output of the last average pooling layer. We see that the "End" layer's semantic representation is able to find the largest redundancy. Table 1: Average dissimilarity to the retained sample across redundant groups (clusters) of size greater than 1. We report the class-wise mean for 3 classes as well as the average over the entire dataset. All clusters were created to find a subset of 90% the size of the full set. We can observe that the average dissimilarity is about an order of magnitude higher for the CIFAR-100 dataset, indicating that there is more variation in the redundant groups. Figure 9: Validation accuracy after training with subsets of various sizes of ImageNet. We plot the average over 5 trials with the vertical bars denoting standard deviation. There is no drop in validation accuracy when 10% of the training data considered redundant by semantic clustering is removed.
ImageNet
We train a 101-layer ResNet with the ImageNet dataset. It gave us a semantic representation of 2048 dimensions. We use a batch size of 1024 during training and train for 120,000 steps with a learning rate cosine annealed from 0.4 to 0. Using the strategy from [Goyal et al., 2017], we linearly warm up our learning rate from 0 for 5000 steps to be able to train with large batches. We regularize our weights with an $\ell_2$ penalty with a factor of 0.0001. For optimization, we use Stochastic Gradient Descent with a momentum coefficient of 0.9 while using the Nesterov momentum update. Since the test set is not publicly available, we report the average validation accuracy, measured over 5 trials.
The results of training with subsets of varying sizes of ImageNet dataset are shown in Figure 9. Our proposed scheme is able to successfully show that at least 10% of the data can be removed from the training set without any negative impact on the validation accuracy, whereas training on random subsets always gives a drop with decrease in subset size. Figure 1 shows different redundant groups found in the ImageNet dataset. It is noteworthy that the semantic change considered redundant is different across each group. Figure 11 highlights the similarities between images of the same redundant group and the variation across different redundant groups.
In each row of Figure 12, we plot two images from a redundant group on the left where the retained image is highlighted in a green box. On the right we display the image closest to each retained image in dissimilarity but excluded from the redundant group. These images were close in semantic space to the corresponding retained images, but were not considered similar enough to be redundant. For example the redundant group in the first row of Figure 12 contains Sedan-like looking red cars. The 2-seater sports car on the right, in spite of looking similar to the cars on the left, was not considered redundant with them. Figure 10 shows the number of redundant groups of each size when creating a 90% subset. Much akin to Figure 4, a majority of images are not considered redundant and form a group of size 1.
Additional examples of redundant groups on ImageNet are provided in the appendix.
Implementation Details
We use the open source Tensorflow [Abadi et al., 2016] and tensor2tensor [Vaswani et al., 2018] frameworks to train our models. For clustering, we used the scikit-learn [Pedregosa et al., 2011] library. For the CIFAR-10 and CIFAR-100 experiments we train on a single NVIDIA Tesla P100 GPU. For our ImageNet experiments we perform distributed training on 16 Cloud TPUs.
Conclusion
In this work we present a method to find redundant subsets of training data. We explicitly model a dissimilarity metric into our formulation which allows us to find semantically close samples that can be considered redundant. We use an agglomerative clustering algorithm to find redundant groups of images in the semantic space. Through our experiments we are able to show that at least 10% of ImageNet and CIFAR-10 datasets are redundant.
We analyze these redundant groups both qualitatively and quantitatively. Upon visual observation, we see that the semantic change considered redundant varies from cluster to cluster. We show examples of a variety of varying attributes in redundant groups, all of which are redundant from the point of view of training the network.
One particular justification for not needing this variation during training could be that the network learns to be invariant to them because of its shared parameters and seeing similar variations in other parts of the dataset.
In Figure 2 and 9, the accuracy without 5% and 10% of the data is slightly higher than that obtained with the full dataset. This could indicate that redundancies in training datasets hamper the optimization process.
For the CIFAR-100 dataset our proposed scheme fails to find any redundancies. We qualitatively compare the redundant groups in CIFAR-100 ( Figure 6) to the ones found in CIFAR-10 ( Figure 3) and find that the semantic variation across redundant groups is much larger in the former case. Quantitatively this can be seen in Table 1 which shows points in redundant groups of CIFAR-100 are much more spread out in semantic space as compared to CIFAR-10 .
Although we could not find any redundancies in the CIFAR-100 dataset, there could be a better algorithm that could find them. Moreover, we hope that this work inspires a line of work into finding these redundancies and leveraging them for faster and more efficient training.
Acknowledgement
We would like to thank colleagues at Google Research for comments and discussions: Thomas Leung, Yair Movshovitz-Attias, Shraman Ray Chaudhuri, Azade Nazi, Serge Ioffe. Figure 11: This figure highlights semantic similarities between images from the same redundant group and variation seen across different redundant groups of the same class. The redundant groups were found while creating a 90% subset of the ImageNet dataset. Each sub-figure is a redundant group of images according to our algorithm. Each column contains images belonging to the same class, with each row in a column being a different redundant group. For example, the first column contains the Clock class. Clocks in 11a are in one group of redundant images whereas clocks in 11e are in another group. From each of the groups in the sub-figures, only the images marked in green boxes are selected by our algorithm and the others are discarded. Discarding these images had no negative impact on validation accuracy. Figure 12: In each row we plot two images from the same redundant group while creating a 90% subset on the left with the retained image highlighted in a green box. On the right we plot the image closest to the retained image in the semantic space but not included in the same redundant group. Note that the image on the right shows a semantic variation which is inconsistent with the one seen in the redundant group. | 2,891 |
1901.11150 | 2972504565 | Adaptive gradient-based optimizers such as Adagrad and Adam are crucial for achieving state-of-the-art performance in machine translation and language modeling. However, these methods maintain second-order statistics for each parameter, thus introducing significant memory overheads that restrict the size of the model being used as well as the number of examples in a mini-batch. We describe an effective and flexible adaptive optimization method with greatly reduced memory overhead. Our method retains the benefits of per-parameter adaptivity while allowing significantly larger models and batch sizes. We give convergence guarantees for our method, and demonstrate its effectiveness in training very large translation and language models with up to 2-fold speedups compared to the state-of-the-art. | Adaptive learning rates in online and stochastic optimization date back at least to @cite_8 and were popularized in @cite_14 @cite_11 , the former of which introduced the well-known AdaGrad algorithm. Several variants of AdaGrad have now been proposed in the optimization and machine learning literature (see and the references therein), the most notable of which is the Adam algorithm @cite_4 . All of these methods require (at least) linear space for maintaining various per-parameter statistics along their execution. | {
"abstract": [
"We present a new family of subgradient methods that dynamically incorporate knowledge of the geometry of the data observed in earlier iterations to perform more informative gradient-based learning. Metaphorically, the adaptation allows us to find needles in haystacks in the form of very predictive but rarely seen features. Our paradigm stems from recent advances in stochastic optimization and online learning which employ proximal functions to control the gradient steps of the algorithm. We describe and analyze an apparatus for adaptively modifying the proximal function, which significantly simplifies setting a learning rate and results in regret guarantees that are provably as good as the best proximal function that can be chosen in hindsight. We give several efficient algorithms for empirical risk minimization problems with common and important regularization functions and domain constraints. We experimentally study our theoretical analysis and show that adaptive subgradient methods outperform state-of-the-art, yet non-adaptive, subgradient algorithms.",
"We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. The method is straightforward to implement, is computationally efficient, has little memory requirements, is invariant to diagonal rescaling of the gradients, and is well suited for problems that are large in terms of data and or parameters. The method is also appropriate for non-stationary objectives and problems with very noisy and or sparse gradients. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms, on which Adam was inspired, are discussed. We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. Empirical results demonstrate that Adam works well in practice and compares favorably to other stochastic optimization methods. Finally, we discuss AdaMax, a variant of Adam based on the infinity norm.",
"We introduce a new online convex optimization algorithm that adaptively chooses its regularization function based on the loss functions observed so far. This is in contrast to previous algorithms that use a fixed regularization function such as L2-squared, and modify it only via a single time-dependent parameter. Our algorithm’s regret bounds are worst-case optimal, and for certain realistic classes of loss functions they are much better than existing bounds. These bounds are problem-dependent, which means they can exploit the structure of the actual problem instance. Critically, however, our algorithm does not need to know this structure in advance. Rather, we prove competitive guarantees that show the algorithm provides a bound within a constant factor of the best possible bound (of a certain functional form) in hindsight.",
"Abstract We study on-line learning in the linear regression framework. Most of the performance bounds for on-line algorithms in this framework assume a constant learning rate. To achieve these bounds the learning rate must be optimized based on a posteriori information. This information depends on the whole sequence of examples and thus it is not available to any strictly on-line algorithm. We introduce new techniques for adaptively tuning the learning rate as the data sequence is progressively revealed. Our techniques allow us to prove essentially the same bounds as if we knew the optimal learning rate in advance. Moreover, such techniques apply to a wide class of on-line algorithms, including p -norm algorithms for generalized linear regression and Weighted Majority for linear regression with absolute loss. Our adaptive tunings are radically different from previous techniques, such as the so-called doubling trick. Whereas the doubling trick restarts the on-line algorithm several times using a constant learning rate for each run, our methods save information by changing the value of the learning rate very smoothly. In fact, for Weighted Majority over a finite set of experts our analysis provides a better leading constant than the doubling trick."
],
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_11",
"@cite_8"
],
"mid": [
"2146502635",
"1522301498",
"1518461880",
"2055639053"
]
} | Memory-Efficient Adaptive Optimization for Large-Scale Learning | Adaptive gradient-based optimizers such as AdaGrad [9] and Adam [14] are among the de facto methods of choice in modern machine learning. These methods adaptively tune the learning rate for each parameter during the optimization process using cumulative second-order statistics of the parameter. Often offering superior convergence properties, these methods are very attractive in large scale applications due to their moderate time and space requirements, which are linear in the number of parameters.
However, in extremely large scale applications even the modest memory overhead imposes grave limitations on the quality of the trained model. For example, recent advances in machine translation hinge on inflating the number of parameters in the trained language model to hundreds of millions. In such applications, the memory overhead of the optimizer severely restricts the size of the model that can be used as well as the number of examples in each mini-batch, both of which have been shown to have a dramatic effect on the accuracy of the model.
Motivated by these challenges, we describe an adaptive optimization method that retains the benefits of standard per-parameter adaptivity while significantly reducing its memory costs. Our construction is general and flexible, yet is remarkably simple and almost trivial to implement. We give simple convergence guarantees in the convex (stochastic and online) optimization setting, which show our method to be most effective when the gradients have a natural activation pattern, namely, the parameters can be subdivided into (not necessarily disjoint) sets such that the gradient entries within each set are correlated with each other and tend to share a similar order of magnitude. For example, in deep networks the incoming or outgoing edges of a neuron are jointly activated and, loosely speaking, their associated gradients exhibit similar statistical characteristics. That said, we do not assume that the activation pattern is fully-prescribed to the optimization algorithm before its run.
Large scale experiments show that our algorithm achieves comparable, and at times superior, rates of convergence to those obtained by standard, linear-space adaptive methods using the same batch size. Focusing primarily on language modeling tasks that are notorious for their huge models, we further demonstrate that the reduction in memory footprint can be utilized for a substantial increase in the batch size, which greatly speeds up convergence. As a byproduct of the diminished memory costs, our method also exhibits improved (wall-clock) runtime, which could be attributed to the reduced frequency of memory access.
Preliminaries
We begin by establishing some basic notation. For a vector $g$ and $\alpha \in \mathbb{R}$, we use the notation $g^{\alpha}$ to refer to the vector obtained by raising each of the entries of $g$ to the power $\alpha$. We also use $\mathrm{diag}(g)$ to denote the square matrix whose diagonal elements are the entries of $g$ (and whose off-diagonal entries are zeros). We use $[d]$ to denote the set $\{1, \ldots, d\}$. Finally, $\mathbf{1}_d$ is the $d$-dimensional vector whose entries are all 1.
Optimization setup
We henceforth assume the general online optimization setting (see [20, 11]). Optimization takes place in rounds $t = 1, \ldots, T$, where in each round the algorithm has to choose a parameter vector $w_t \in \mathbb{R}^d$. After making the choice on round $t$, the algorithm receives a loss function $\ell_t : \mathbb{R}^d \to \mathbb{R}$ which is used to perform an update; often, and as will be the case in this paper, this update is determined by the gradient $g_t = \nabla \ell_t(w_t)$ of the instantaneous loss $\ell_t$ at the current iterate $w_t$. The algorithm is measured by its $T$-round regret, defined as the quantity $\sum_{t=1}^{T} \ell_t(w_t) - \min_{w} \sum_{t=1}^{T} \ell_t(w)$; an algorithm is convergent if its regret is $o(T)$, i.e., if its average regret approaches zero as the number of rounds $T$ grows.
The above setup includes stochastic (possibly mini-batched) optimization as a special case. In the latter, one desires to minimize a population loss $L(w) = \mathbb{E}_{z \sim \mathcal{D}}[\ell(w, z)]$ based on samples of $z$, where $\ell(w, z)$ defines the loss of parameters $w$ on a batch $z$. The online loss function $\ell_t(w) = \ell(w, z_t)$ is then the average loss over a mini-batch $z_t$ received on iteration $t$, and the stochastic gradient $g_t$ is a conditionally unbiased estimate of the gradient of $L$ at the current parameter vector $w_t$. Under convexity assumptions, an online algorithm with vanishing average regret can be converted to a stochastic optimization algorithm for minimizing the population loss $L$ [4].
Adaptive methods
For the sake of self-containment, we give a brief description of the AdaGrad algorithm [9]. Ada-Grad maintains at every step t the following parameter-wise accumulated statistics, computed based on the previously obtained gradients g 1 , . . . , g t :
$$\gamma_t(i) = \sum_{s=1}^{t} g_s^2(i), \qquad \forall\, i \in [d].$$
Relying on these statistics, the update rule of the algorithm on step $t$ takes the form:
$$w_{t+1}(i) = w_t(i) - \eta\, \frac{g_t(i)}{\sqrt{\gamma_t(i)}}, \qquad \forall\, i \in [d],$$
where $\eta > 0$ is an external learning rate parameter. AdaGrad has been shown to be particularly effective in training sparse models, where the effective learning rates $\eta / \sqrt{\gamma_t(i)}$ decay in a moderate way for rare (yet possibly informative) features. In these cases, AdaGrad can potentially lead to huge gains in terms of convergence; see the discussion in [9].
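For concreteness, a tiny NumPy sketch of the diagonal AdaGrad update described above is given below; the small `eps` constant is our addition for numerical stability and is not part of the idealized update.

```python
import numpy as np


def adagrad_step(w, g, accum, lr=0.1, eps=1e-10):
    """One AdaGrad step; `accum` holds the running sum of squared gradients."""
    accum += g ** 2                        # gamma_t(i) = sum_s g_s(i)^2
    w -= lr * g / (np.sqrt(accum) + eps)   # per-parameter rate eta / sqrt(gamma_t(i))
    return w, accum
```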
Activation patterns and covers
While the theoretical analysis of AdaGrad and related algorithms does not make any assumptions on gradient values, in practice we often observe that certain entries of a gradient have similar values, and exhibit what we call an activation pattern. For example, in embedding layers of deep networks, an entire column is either zero or non-zero. Similarly, in layers with ReLU activations it is often observed that all gradients corresponding to the same unit are jointly either zero or non-zero, and in the latter case, their absolute values share a similar order of magnitude.
In both examples, for each parameter i ∈ [d] there is a certain set of indices S i such that for all gradients g t we expect that g t (j) ≈ g t (i) for all j ∈ S. We do not attempt to formalize this notion further, and the analysis of our algorithm does not rely on a definition of an activation pattern. Rather, we leave it as an intuitive concept that serves as a motivation for our use of a cover.
Definition. A cover of a set of parameters $[d]$ is a collection of $k$ nonempty sets $\{S_r\}_{r=1}^{k}$, such that $S_r \subseteq [d]$ and $\bigcup_r S_r = [d]$.
In particular, each index i ∈ [d] may be contained in multiple sets S r . k is the size of the cover.
Specific covers of interest include:
(i) Singletons: S r = {r} for all r ∈ [d]; this is a degenerate case which does not model any correlations between parameters.
(ii) Matrix rows/columns: parameters are organized as an m × n matrix, and each S r is the set of indices corresponding to a row/column of this matrix.
(iii) Tensor slices: parameters are organized as a tensor of dimension k 1 × · · · × k n , and each S r is an (n − 1)-dimensional slice of the tensor.
(iv) Multiple tensors: parameters are organized in multiple tensors, each of which has its own cover. The cover {S r } k r=1 is then the union of all the individual covers.
Our algorithm is provided with a prescribed cover as input, and its convergence is characterized in terms of the cover. We further argue, though only informally, that when a cover is "consistent" with the natural activation pattern of the parameters, we can expect the convergence of our algorithm to be significantly better.
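As a small illustration of how a cover can be represented in code, the helper below (our own, hypothetical construction, not part of the paper's implementation) builds the row/column cover of case (ii) above for an m-by-n matrix of parameters, storing each set S_r as a list of flat parameter indices.

```python
import numpy as np


def row_column_cover(m, n):
    idx = np.arange(m * n).reshape(m, n)
    rows = [list(idx[i, :]) for i in range(m)]  # one set per row
    cols = [list(idx[:, j]) for j in range(n)]  # one set per column
    return rows + cols  # k = m + n sets whose union is [d], with d = m * n
```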
The SM3 Algorithm
The idea behind our algorithm is to keep a single variable for each set $S_r$ in the cover. Thus, the additional space it requires is $O(k)$ rather than $O(d)$; typically $k$ is substantially smaller than $d$, which yields tangible savings in memory. Concretely, for each set $S_r$, the algorithm maintains a running sum, $\mu(r)$, of the maximal variance over all gradient entries $j \in S_r$. Next, for each parameter $i$, we take the minimum over all variables $\mu(r)$ associated with sets which cover $i$, denoted $S_r \ni i$. Thereafter, the learning rate corresponding to the $i$'th gradient entry is determined by taking the square-root of this minimum, denoted by $\nu(i)$. Accordingly, we name our algorithm the Square-root of Minima of Sums of Maxima of Squared-gradients Method, or in short, SM3. See Algorithm SM3-I for its pseudocode; its update at step $t$ is:
4: receive gradient $g_t = \nabla \ell_t(w_t)$
5: for $r = 1, \ldots, k$ do
6: set $\mu_t(r) \leftarrow \mu_{t-1}(r) + \max_{j \in S_r} g_t^2(j)$
7: for $i = 1, \ldots, d$ do
8: set $\nu_t(i) \leftarrow \min_{r : S_r \ni i} \mu_t(r)$
9: update $w_{t+1}(i) \leftarrow w_t(i) - \eta\, g_t(i) / \sqrt{\nu_t(i)}$
In case (i) above, where there is a set $S_j = \{j\}$ for each $j \in [d]$, the algorithm reduces to the AdaGrad algorithm [9]. The more interesting cases are where $k \ll d$ and each index $i \in [d]$ is covered by multiple sets. In such settings, the memory overhead of the algorithm is sublinear in $d$. In particular, in setting (ii) the memory footprint reduces from $O(mn)$ to $O(m + n)$, which can be quite substantial in large scale. In setting (iii) the improvement is more pronounced, as the space requirement drops from $O(\prod_{i=1}^{k} n_i)$ to $O(\sum_{i=1}^{k} n_i)$.
The time per iteration of SM3-I is $O(\sum_{r=1}^{k} |S_r|)$. To see this, consider a bipartite graph defined over $d + k$ vertices. Nodes on one side of the graph correspond to indices $i \in [d]$, while nodes on the other side correspond to indices $j \in [k]$. The edges of the graph are all pairs $(i, j)$ such that $i \in S_j$. The complexity of each of the inner for-loops of the algorithm scales with the number of edges in this graph, which is equal to $O(\sum_{r=1}^{k} |S_r|)$. (Applying the update to the weights $w_t$ takes $O(d)$ time, but this is always dominated by the former quantity.)
As a final remark, notice that the update rule of SM3-I seems to involve a division by zero when $\nu_t(i) = 0$. However, whenever $\nu_t(i) = 0$ then necessarily also $g_t(i) = 0$. (This is a direct consequence of Claim 1 below.) In other words, whenever the denominator in the update rule is zero, the corresponding entry has zero gradient and thus need not be updated.
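To make the update concrete, here is a hedged NumPy sketch of the SM3-I step for a single matrix-shaped parameter under the row/column cover of case (ii). It is our own illustrative re-implementation, not the authors' released optimizer; the class and variable names are ours, and the small `eps` guards the division-by-zero case discussed above (where the corresponding gradient entry is zero anyway).

```python
import numpy as np


class SM3IMatrix:
    """SM3-I for an m-by-n parameter matrix with the row/column cover."""

    def __init__(self, shape, lr=0.1, eps=1e-30):
        m, n = shape
        self.lr, self.eps = lr, eps
        self.mu_rows = np.zeros(m)  # mu(r) for the m row sets
        self.mu_cols = np.zeros(n)  # mu(r) for the n column sets

    def step(self, w, g):
        g2 = g ** 2
        # Step 6: add the max of g^2 over each cover set to its accumulator.
        self.mu_rows += g2.max(axis=1)
        self.mu_cols += g2.max(axis=0)
        # Step 8: nu(i, j) is the minimum over the two sets covering entry (i, j).
        nu = np.minimum(self.mu_rows[:, None], self.mu_cols[None, :])
        # Step 9: AdaGrad-style update with nu in place of the exact second moment.
        return w - self.lr * g / np.sqrt(nu + self.eps)
```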
Analysis
We now prove convergence guarantees for SM3-I. We first show two elementary properties of the step sizes the algorithm computes.
Claim 1. For any $i \in [d]$ and $t \ge 1$, the sequence $\nu_1(i), \nu_2(i), \ldots$ is monotonically increasing, and
$$\nu_t(i) \;\ge\; \sum_{s=1}^{t} g_s^2(i).$$
Proof. The monotonicity is immediate: for any $r \in [k]$ the variable $\mu_t(r)$ is increasing in $t$ by definition, thus $\nu_t(i) = \min_{r : S_r \ni i} \mu_t(r)$ is also increasing for all $i \in [d]$.
Next, since $g_s^2(i) \le \max_{j \in S} g_s^2(j)$ for any set $S$ that contains $i$, we have $\sum_{s=1}^{t} g_s^2(i) \le \sum_{s=1}^{t} \max_{j \in S_r} g_s^2(j) = \mu_t(r)$ for every $r$ such that $S_r \ni i$. The claim now follows since $\min_{r : S_r \ni i} \mu_t(r) = \nu_t(i)$.
Proposition 2. Assume that the loss functions $\ell_1, \ell_2, \ldots$ are convex, and let $w_1, w_2, \ldots$ be the iterates generated by SM3-I. Then, for any $w^\star \in \mathbb{R}^d$,
$$\sum_{t=1}^{T} \big( \ell_t(w_t) - \ell_t(w^\star) \big) \;\le\; 2D \sum_{i=1}^{d} \sqrt{\min_{r : S_r \ni i} \sum_{t=1}^{T} \max_{j \in S_r} g_t^2(j)}\,,$$
where $\max_{1 \le t \le T} \|w_t - w^\star\|_\infty \le D$ and choosing $\eta = D$.
Consequently, by the standard online-to-batch conversion discussed above, the average iterate $\bar{w}_T$ satisfies
$$\mathbb{E}\big[L(\bar{w}_T)\big] - L(w^\star) \;=\; O\!\left( \frac{1}{T} \sum_{i=1}^{d} \mathbb{E}\!\left[ \sqrt{\min_{r : S_r \ni i} \sum_{t=1}^{T} \max_{j \in S_r} g_t^2(j)} \,\right] \right).$$
In the above proposition we implicitly assume that the iterates of SM3-I remain bounded and D is a constant. This can be enforced by projecting the iterates to a bounded set of choice. We avoid introducing projections explicitly as they are rarely used in practice.
Proof. Let $H_t = \mathrm{diag}\big(\nu_t^{1/2}\big)$ and $G_t = \mathrm{diag}\big(\gamma_t^{1/2}\big)$, where $\gamma_t(i) = \sum_{s=1}^{t} g_s^2(i)$; the SM3-I updates can then be written as $w_{t+1} = w_t - \eta H_t^{-1} g_t$, and a standard analysis of adaptive methods with diagonal preconditioners (see, e.g., [9]) bounds their regret by
$$\frac{1}{2\eta} \sum_{t=1}^{T} \Big( \|w_t - w^\star\|_{H_t}^2 - \|w_{t+1} - w^\star\|_{H_t}^2 \Big) \;+\; \frac{\eta}{2} \sum_{t=1}^{T} \big( \|g_t\|_{H_t}^{*} \big)^2 .$$
Here, $\|x\|_H = \sqrt{x^{\mathsf{T}} H x}$ and $\|\cdot\|_H^{*}$ is the corresponding dual norm, $\|x\|_H^{*} = \sqrt{x^{\mathsf{T}} H^{-1} x}$.
Henceforth, for notational convenience we set $\nu_0 = 0$. Simplifying the first sum above using the fact that the $H_t$ are diagonal matrices, we have
$$\sum_{t=1}^{T} \Big( \|w_t - w^\star\|_{H_t}^2 - \|w_{t+1} - w^\star\|_{H_t}^2 \Big) \;\le\; \sum_{t=1}^{T} \big(\nu_t^{1/2} - \nu_{t-1}^{1/2}\big) \cdot (w_t - w^\star)^2 \;\le\; \sum_{t=1}^{T} \big(\nu_t^{1/2} - \nu_{t-1}^{1/2}\big) \cdot \|w_t - w^\star\|_\infty^2\, \mathbf{1}_d \;\le\; D^2\, \nu_T^{1/2} \cdot \mathbf{1}_d \;=\; D^2\, \mathrm{Tr}(H_T) .$$
Also, from Claim 1 we know that for all $t$, $H_t \succeq G_t$, thus
$$\sum_{t=1}^{T} \big( \|g_t\|_{H_t}^{*} \big)^2 \;\le\; \sum_{t=1}^{T} \big( \|g_t\|_{G_t}^{*} \big)^2 \;\le\; 2\, \mathrm{Tr}(G_T) \;\le\; 2\, \mathrm{Tr}(H_T) .$$
In summary, we have established that
$$\sum_{t=1}^{T} \big( \ell_t(w_t) - \ell_t(w^\star) \big) \;\le\; \Big( \frac{D^2}{2\eta} + \eta \Big)\, \mathrm{Tr}(H_T) .$$
Plugging in η = D and the expression for the diagonal elements of H T , we obtain the claim. For the degenerate case where the matrices H t may not be strictly positive definite, a careful yet technical inspection of the proof above reveals that our arguments apply to this case as well by replacing inverses with pseudo-inverses. The rest of the proof remains intact as the algorithm does not update parameter i on step t if the corresponding diagonal entry in H t is zero.
Discussion
Notice that adding more sets S r to the cover used by SM3 improves its convergence bound, but results in a worse space complexity and a higher runtime per step. Therefore, it makes sense in practice to include in the cover only the sets for which we can quickly compute the max and min operations as required by the algorithm. We discuss this point from a practical perspective in Section 4.
As we mentioned above, when $k = d$ and $S_i = \{i\}$ for all $i \in [d]$, SM3-I reduces to the AdaGrad algorithm. The regret bound in Proposition 2 then precisely recovers the bound attained by AdaGrad (see [9, Eq. 6]),
$$\sum_{t=1}^{T} \big( \ell_t(w_t) - \ell_t(w^\star) \big) = O\bigg( D \sum_{i=1}^{d} \sqrt{\sum_{s=1}^{T} g_s^2(i)} \bigg).$$
In the general case, we have
$$\sum_{i=1}^{d} \sqrt{\min_{r : S_r \ni i} \sum_{s=1}^{T} \max_{j \in S_r} g_s^2(j)} \;\ge\; \sum_{i=1}^{d} \sqrt{\sum_{s=1}^{T} g_s^2(i)}\,,$$
as follows from Claim 1. Thus, as can be expected from a space-restricted scheme, our bound is never superior to AdaGrad's regret bound. Nevertheless, the two bounds above are of similar order of magnitude when the cover $\{S_r\}_{r=1}^{k}$ is consistent with the activation pattern of the gradients $g_1, \ldots, g_T$. Indeed, if for any entry $i$ there is a set $S_r$ that covers $i$ such that $g(j) \approx g(i)$ for all $j \in S_r$, then $\max_{j \in S_r} g_s^2(j) \approx g_s^2(i)$, and thus $\min_{r : S_r \ni i} \sum_{s=1}^{t} \max_{j \in S_r} g_s^2(j) \approx \sum_{s=1}^{t} g_s^2(i)$. Therefore, in these scenarios we inherit the convergence properties of AdaGrad while using sublinear memory. In particular, if in addition the gradients are sparse, we can obtain an improved dependence on the dimension as discussed in Duchi et al. [9].
It is also worthwhile to compare our algorithm to Adafactor [21]. The two algorithms differ in a number of important ways. First, Adafactor is only defined for matrix-shaped parameter sets while SM3 applies to tensors of arbitrary dimensions, and even more generally, to any predefined cover of the parameters. Second, Adafactor is essentially a fixed step-size algorithm and often requires an external step-size decay schedule for ensuring convergence. SM3 in contrast decays its learning rates automatically, similarly to AdaGrad. Finally, SM3 has the benefit of entertaining rigorous, albeit elementary, convergence guarantees in the convex case.
SM3-II
We now discuss a slightly more efficient variant of SM3, which we describe in SM3-II. It is very similar to SM3-I, and improves on the latter in the following sense.
Proposition 3. Denote by $\nu'_t(i)$ and $\mu'_t(r)$ the quantities maintained by SM3-II, and by $\nu_t(i)$ and $\mu_t(r)$ those of SM3-I. For any $i \in [d]$, the sequence $\nu'_1(i), \ldots, \nu'_T(i)$ is monotonically increasing. Further, fixing a sequence of gradient vectors $g_1, \ldots, g_T$, we have for all $t$ and $i$ that
$$\sum_{s=1}^{t} g_s^2(i) \;\le\; \nu'_t(i) \;\le\; \nu_t(i)\,,$$
where $\nu_1(i), \ldots, \nu_T(i)$ is the sequence produced by SM3-I upon receiving the gradient vectors $g_1, \ldots, g_T$.
In other words, SM3-II provides a tighter upper bound on the cumulative gradient squares than SM3-I. Consequently, we can show, along similar lines to the proof of Proposition 2, a slightly better convergence bound for SM3-II that scales with the quantity $\sum_{i=1}^{d} \sqrt{\nu'_T(i)}$, which is always smaller than the one appearing in the bound of SM3-I.
Proof of Proposition 3. First, to establish monotonicity, note that the algorithm maintains $\mu'_t(r) = \max_{j \in S_r} \nu'_t(j)$ for $t \ge 1$ and $r \in [k]$. Hence, for $t \ge 1$ and $i \in [d]$ we have
$$\nu'_{t+1}(i) \;=\; \min_{r : S_r \ni i} \max_{j \in S_r} \nu'_t(j) + g_{t+1}^2(i) \;\ge\; \nu'_t(i)\,.$$
Let $\gamma_t(i) = \sum_{s=1}^{t} g_s^2(i)$. We next prove by induction that $\gamma_t(i) \le \nu'_t(i) \le \nu_t(i)$ for all $t$ and $i \in [d]$. For $t = 1$ this is true, as $\nu'_1(i) = \gamma_1(i) = g_1^2(i) \le \nu_1(i)$ for all $i$ by Claim 1. For the induction step, assume that $\gamma_t(i) \le \nu'_t(i) \le \nu_t(i)$ for all $i$, and write
$$\nu'_{t+1}(i) \;=\; \min_{r : S_r \ni i} \max_{j \in S_r} \nu'_t(j) + g_{t+1}^2(i) \;\ge\; \nu'_t(i) + g_{t+1}^2(i) \;\ge\; \gamma_t(i) + g_{t+1}^2(i) \;=\; \gamma_{t+1}(i)\,.$$
On the other hand, by the induction hypothesis $\nu'_t(j) \le \nu_t(j)$ for all $j$, and $\nu_t(j) \le \mu_t(r)$ for any $r$ such that $S_r \ni j$ by the definition of $\nu_t$; together with $g_{t+1}^2(i) \le \max_{j \in S_r} g_{t+1}^2(j)$, this gives, for every $r$ such that $S_r \ni i$,
$$\max_{j \in S_r} \nu'_t(j) + g_{t+1}^2(i) \;\le\; \mu_t(r) + \max_{j \in S_r} g_{t+1}^2(j) \;=\; \mu_{t+1}(r)\,.$$
Taking the minimum over all such $r$ yields $\nu'_{t+1}(i) \le \nu_{t+1}(i)$, which completes the induction.
Implementation details
We implemented SM3 as an optimizer in TensorFlow [1]. Our implementation follows the pseudocode of SM3-II, as it performed slightly yet consistently better than SM3-I in our experiments (as predicted by our bounds). The implementation of the SM3-II optimizer will be released very soon as open source code. Algorithm SM3-II performs, at step $t$ and for each $i \in [d]$:
7: set $\nu'_t(i) \leftarrow \min_{r : S_r \ni i} \mu'_{t-1}(r) + g_t^2(i)$
8: update $w_{t+1}(i) \leftarrow w_t(i) - \eta\, g_t(i) / \sqrt{\nu'_t(i)}$
9: set $\mu'_t(r) \leftarrow \max\{\mu'_t(r), \nu'_t(i)\}$ for all $r : S_r \ni i$
Table 1: Learning rate decay schedules used by the algorithms we experimented with. Here, $t$ is the current time step, $\eta$ is the base learning rate, $\alpha < 1$ is a decay constant, $\tau$ is the staircase step interval, $\eta_0$ is the minimum learning rate for the staircase schedule, and $T$ is a large constant defining the total number of training steps.
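As a companion to the listing above, here is an equally small, hedged NumPy sketch of the SM3-II step for a matrix-shaped parameter under the row/column cover; it is an illustrative re-implementation (class and variable names are ours) rather than the TensorFlow optimizer mentioned above. The difference from SM3-I is that each entry's accumulator adds its own squared gradient, and the per-set statistics are rebuilt every step as maxima of the per-entry values.

```python
import numpy as np


class SM3IIMatrix:
    """SM3-II for an m-by-n parameter matrix with the row/column cover."""

    def __init__(self, shape, lr=0.1, eps=1e-30):
        m, n = shape
        self.lr, self.eps = lr, eps
        self.mu_rows = np.zeros(m)  # mu'(r) for the m row sets
        self.mu_cols = np.zeros(n)  # mu'(r) for the n column sets

    def step(self, w, g):
        # Step 7: nu'(i, j) = min of the covering accumulators + g(i, j)^2.
        nu = np.minimum(self.mu_rows[:, None], self.mu_cols[None, :]) + g ** 2
        # Step 9: rebuild mu' as the maximum of nu' over each row / column.
        self.mu_rows = nu.max(axis=1)
        self.mu_cols = nu.max(axis=0)
        # Step 8: parameter update.
        return w - self.lr * g / np.sqrt(nu + self.eps)
```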
Default covers. Our implementation employs covers induced by rows and columns of matrices, and more generally, by slices of higher-order tensors (e.g., in convolutional layers). These covers allow us to exploit highly efficient tensor operations provided by GPUs and TPUs for computing max and min over the sets.
Momentum. Our optimizer can be used in conjunction with momentum for improved performance. We found that momentum, set at 0.9, adds stability and allows use of larger learning rates for all optimizers that we compared.
Hyperparameters and learning-rate. An important feature of SM3, compared to other widespread optimizers, is that it only has a single hyper-parameter that requires tuning, the learning rate η. Concretely, SM3 does not rely on a learning-rate decay schedule that is often difficult to tune. The experiments reported in Table 1 of Section 5 verify this empirically. This aspect of SM3 makes it particularly appealing for training large scale models where the training time is too long to allow for exhaustive hyperparameter tuning.
Learning-rate ramp up. Having said the above, we do often find in deep learning tasks that a high learning rate setting in the early stages of optimization causes instability and might result in failure to converge. Therefore, while SM3 does not require an external learning rate decay schedule, it is often helpful to gradually increase the parameter η from zero to its maximal value, typically over the course of the first few thousand updates. While we used this ad hoc safeguard in our experiments, we plan to replace it in the future with norm constraints on the cover sets.
Experiments
We demonstrate the practical benefits of SM3 on several machine learning tasks using the published state-of-the-art architectures and algorithms as baselines. We performed experiments on the following three tasks: 1. Machine translation on the WMT'14 English→German and English→French datasets; 2. Language modeling with BERT on the combined Wikipedia and BooksCorpus datasets; 3. Image classification with the ImageNet dataset [18] for which there are a slew of empirical studies [7].
Machine translation
We first ran our experiments using the Transformer model [23] on the smaller WMT'14 en→de dataset. We trained models using the Lingvo [22] sequence modeling framework, available in TensorFlow. We compared SM3 with Adafactor which has similar space requirements. Results are provided in Figure 1 and Table 2. SM3 performed slightly better than Adafactor in both test perplexity and BLEU score of the trained models. We then moved on to the larger WMT'14 en→fr dataset using a larger transformer model (Transformer-Big) architecture from [5]. Our results are shown in Figure 2 and Table 2. We see significant (more than 2x) improvement in convergence rate which further translates into a substantial improvement in BLEU score.
We trained both models on a 4×4 Cloud TPU-V2 [13]. A 4×4 configuration has 32 cores each with 8GB of memory. The transformer model for WMT'14 en→de was trained with batches of size 1536 for 700k steps. The Transformer-Big model for WMT'14 en→fr was trained with the maximal batch size that could fit on each core, yielding an effective batch of size 768, for 1M steps. The Transformer-Big model consists of 6 layers for its encoder and decoder, each layer is composed of 1024 model dimensions, 8192 hidden dimensions, and 16 attention heads. In total the Transformer-Big has 375.4M parameters (1.432GB) and uses a significant fraction of the overall memory, thus making SM3 more effective there.
All experiments were run with synchronous (stochastic) gradient updates. The models used 32K word-pieces [19] for each language pair. We computed BLEU scores on the Newstest 2014 for evaluation. We also disabled checkpoint averaging in order to underscore the improved convergence rate of SM3. Our BLEU scores are not directly comparable to those of [23], instead we followed the experimental protocol described in [5]. BLEU scores were computed on tokenized, true-case outputs and without manual post-processing of the text similar to [24]. Figure 2: Test loss (log-perplexity) of Transformer-Big on the WMT'14 en→fr dataset. Adam is infeasible with this particular batch size due to memory constraints.
Language modeling
We trained a BERT-Large language model from [8] on the combined Wikipedia and BooksCorpus [25]. BERT-Large is a large bidirectional transformer model containing 24 transformer blocks with 1024 hidden dimensions and 16 self-attention heads. It has 340M parameters (1.297 GiB), and is set up to optimize two losses jointly: (a) masked language model (Masked-LM) loss, where the task is to predict masked tokens based on surrounding context, and (b) next sentence prediction (NSP) loss, where the task is to predict if a sentence follows another sentence, with negative sentences randomly selected from the corpus. We ran all our experiments using the open-sourced code from [8] on an 8×8 Cloud TPU-V2 configuration which has 128 cores. The baseline used was the Adam optimizer with learning rate $\eta = 10^{-4}$, $\beta_1 = 0.9$, and $\beta_2 = 0.999$. The learning rate was warmed up over the first 10,000 steps, followed by a linear decay. SM3 used the same warmup as a safety mechanism, with no further tinkering. Momentum was set to 0.9. We trained all models for 500K steps. We split the dataset into a 90-10 train-test split.
Our results are presented in Figure 3. We see that SM3 works as well as Adam for the same batch size. However SM3 lets us train with a much larger batch size using a similar amount of memory as Adam. We were able to increase the number of examples in each batch by a factor of 2, yielding quality improvements and faster convergence.
AmoebaNet-D on ImageNet
We trained AmoebaNet-D described in [16] which was originally constructed to have low training cost on the ImageNet dataset. We used the open-source code available from [6] where we changed the optimizer to SM3 and removed learning rate decay. The model was trained on a 4 × 4 Cloud TPU-v2 configuration. The baseline used RMSProp [12] with Nesterov momentum and a staircase learning rate decay schedule. The model was trained with a batch-size of 1024, as recommended in [6]. Our results in Figure 4 indicate that SM3 performed very well in this task and resulted in improved top-1 (77.95) and top-5 (93.89) accuracies.
Conclusions
We presented SM3, a simple and effective adaptive optimization algorithm for stochastic optimization in settings where memory during training is severely limited. In these settings, the memory overhead of adaptive methods such as AdaGrad and Adam is prohibitively large, and thus limits the size of models that can be trained as well as the number of samples in each mini-batch. We demonstrated empirically that SM3 can be effectively used in such settings and dramatically decreases memory overhead. Utilizing the freed memory for increasing the batch size, our experiments show that this saving can also lead to significant gains in performance. In future work we will focus on extending and strengthening our theoretical guarantees, improving the robustness of SM3, and further experimentation with various covers for additional domains. In particular, we plan to evaluate SM3 on training recurrent networks for speech recognition and audio generation. | 4,591 |
1901.11153 | 2911758669 | Recovering the 3D representation of an object from single-view or multi-view RGB images by deep neural networks has attracted increasing attention in the past few years. Several mainstream works (e.g., 3D-R2N2) use recurrent neural networks (RNNs) to fuse multiple feature maps extracted from input images sequentially. However, when given the same set of input images with different orders, RNN-based approaches are unable to produce consistent reconstruction results. Moreover, due to long-term memory loss, RNNs cannot fully exploit input images to refine reconstruction results. To solve these problems, we propose a novel framework for single-view and multi-view 3D reconstruction, named Pix2Vox. By using a well-designed encoder-decoder, it generates a coarse 3D volume from each input image. Then, a context-aware fusion module is introduced to adaptively select high-quality reconstructions for each part (e.g., table legs) from different coarse 3D volumes to obtain a fused 3D volume. Finally, a refiner further refines the fused 3D volume to generate the final output. Experimental results on the ShapeNet and Pix3D benchmarks indicate that the proposed Pix2Vox outperforms state-of-the-arts by a large margin. Furthermore, the proposed method is 24 times faster than 3D-R2N2 in terms of backward inference time. The experiments on ShapeNet unseen 3D categories have shown the superior generalization abilities of our method. | Theoretically, recovering 3D shape from single-view images is an ill-posed problem. To address this issue, many attempts have been made, such as ShapeFromX @cite_18 @cite_13 , where X may represent silhouettes @cite_28 , shading @cite_26 , and texture @cite_8 . However, these methods are barely applicable to use in the real-world scenarios, because all of them require strong presumptions and abundant expertise in natural images @cite_9 . | {
"abstract": [
"A fundamental problem in computer vision is that of inferring the intrinsic, 3D structure of the world from flat, 2D images of that world. Traditional methods for recovering scene properties such as shape, reflectance, or illumination rely on multiple observations of the same scene to overconstrain the problem. Recovering these same properties from a single image seems almost impossible in comparison—there are an infinite number of shapes, paint, and lights that exactly reproduce a single image. However, certain explanations are more likely than others: surfaces tend to be smooth, paint tends to be uniform, and illumination tends to be natural. We therefore pose this problem as one of statistical inference, and define an optimization problem that searches for the most likely explanation of a single image. Our technique can be viewed as a superset of several classic computer vision problems (shape-from-shading, intrinsic images, color constancy, illumination estimation, etc) and outperforms all previous solutions to those constituent problems.",
"Estimating surface normals from just a single image is challenging. To simplify the problem, previous work focused on special cases, including directional lighting, known reflectance maps, etc., making shape from shading impractical outside the lab. To cope with more realistic settings, shading cues need to be combined and generalized to natural illumination. This significantly increases the complexity of the approach, as well as the number of parameters that require tuning. Enabled by a new large-scale dataset for training and analysis, we address this with a discriminative learning approach to shape from shading, which uses regression forests for efficient pixel-independent prediction and fast learning. Von Mises-Fisher distributions in the leaves of each tree enable the estimation of surface normals. To account for their expected spatial regularity, we introduce spatial features, including texton and silhouette features. The proposed silhouette features are computed from the occluding contours of the surface and provide scale-invariant context. Aside from computational efficiency, they enable good generalization to unseen data and importantly allow for a robust estimation of the reflectance map, extending our approach to the uncalibrated setting. Experiments show that our discriminative approach outperforms state-of-the-art methods on synthetic and real-world datasets.",
"Texture provides an important source of information about the three-dimensional structure of visible surfaces, particularly for stationary monocular views. To recover 3d structure, the distorting effects of projection must be distinguished from properties of the texture on which the distortion acts. This requires that assumptions must be made about the texture, yet the unpredictability of natural textures precludes the use of highly restrictive assumptions. The recovery method reported in this paper exploits the minimal assumption that textures do not mimic projective effects. This assumption determines the strategy of attributing as much as possible of the variation observed in the image to projection. Equivalently, the interpretation is chosen for which the texture, prior to projection, is made as uniform as possible. This strategy was implemented using statistical methods, first for the restricted case of planar surfaces and then, by extension, for curved surfaces. The technique was applied successfully to natural images.",
"In this work, we present a novel method for capturing human body shape from a single scaled silhouette. We combine deep correlated features capturing different 2D views, and embedding spaces based on 3D cues in a novel convolutional neural network (CNN) based architecture. We first train a CNN to find a richer body shape representation space from pose invariant 3D human shape descriptors. Then, we learn a mapping from silhouettes to this representation space, with the help of a novel architecture that exploits correlation of multi-view data during training time, to improve prediction at test time. We extensively validate our results on synthetic and real data, demonstrating significant improvements in accuracy as compared to the state-of-the-art, and providing a practical system for detailed human body measurements from a single image.",
"3D point cloud generation by the deep neural network from a single image has been attracting more and more researchers' attention. However, recently-proposed methods require the objects be captured with relatively clean backgrounds, fixed viewpoint, while this highly limits its application in the real environment. To overcome these drawbacks, we proposed to integrate the prior 3D shape knowledge into the network to guide the 3D generation. By taking additional 3D information, the proposed network can handle the 3D object generation from a single real image captured from any viewpoint and complex background. Specifically, giving a query image, we retrieve the nearest shape model from a pre-prepared 3D model database. Then, the image together with the retrieved shape model is fed into the proposed network to generate the fine-grained 3D point cloud. The effectiveness of our proposed framework has been verified on different kinds of datasets. Experimental results show that the proposed framework achieves state-of-the-art accuracy compared to other volumetric-based and point set generation methods. Furthermore, the proposed framework works well for real images in complex backgrounds with various view angles.",
""
],
"cite_N": [
"@cite_18",
"@cite_26",
"@cite_8",
"@cite_28",
"@cite_9",
"@cite_13"
],
"mid": [
"2027560260",
"1942545097",
"2013599012",
"2749324691",
"2890749518",
""
]
} | Pix2Vox: Context-aware 3D Reconstruction from Single and Multi-view Images | 3D reconstruction is an important problem in robotics, CAD, virtual reality and augmented reality. Traditional methods, such as Structure from Motion (SfM) [13] and Simultaneous Localization and Mapping (SLAM) [5], match image features across views. However, establishing feature correspondences becomes extremely difficult when multiple viewpoints are separated by a large margin due to local appearance changes or self-occlusions [11]. To overcome these limitations, several deep learning based approaches, including 3D-R2N2 [2], LSM [8], and 3DensiNet [23], have been proposed to recover the 3D shape of an object and obtained promising results.
To generate 3D volumes, 3D-R2N2 [2] and LSM [8] formulate multi-view 3D reconstruction as a sequence learning problem and use recurrent neural networks (RNNs) to fuse multiple feature maps extracted by a shared encoder from input images. The feature maps are incrementally refined when more views of an object are available. However, RNN-based methods suffer from three limitations. First, when given the same set of images in different orders, RNNs are unable to estimate the 3D shape of an object consistently due to permutation variance [22]. Second, due to the long-term memory loss of RNNs, the input images cannot be fully exploited to refine reconstruction results [14]. Last but not least, RNN-based methods are time-consuming since input images are processed sequentially without parallelization [7].
(Figure 2: An overview of the proposed Pix2Vox. The network recovers the shape of 3D objects from arbitrary (uncalibrated) single or multiple images. The reconstruction results can be refined when more input images are available. Note that the weights of the encoder and decoder are shared among all views.)
To address the issues mentioned above, we propose Pix2Vox, a novel framework for single-view and multi-view 3D reconstruction that contains four modules: encoder, decoder, context-aware fusion, and refiner. The encoder and decoder generate coarse 3D volumes from multiple input images in parallel, which eliminates the effect of the order of input images and accelerates computation. Then, the context-aware fusion module selects high-quality reconstructions from all coarse 3D volumes and generates a fused 3D volume, which fully exploits the information of all input images without long-term memory loss. Finally, the refiner further corrects wrongly recovered parts of the fused 3D volume to obtain a refined reconstruction. To achieve a good balance between accuracy and model size, we implement two versions of the proposed framework: Pix2Vox-F and Pix2Vox-A (Figure 1).
The contributions can be summarized as follows:
• We present a unified framework for both single-view and multi-view 3D reconstruction, namely Pix2Vox. We equip Pix2Vox with a well-designed encoder, decoder, and refiner, which shows a powerful ability to handle 3D reconstruction of both synthetic and real-world images.
• We propose a context-aware fusion module to adaptively select high-quality reconstructions for each part from different coarse 3D volumes in parallel to produce a fused reconstruction of the whole object. To the best of our knowledge, this is the first work to exploit context across multiple views for 3D reconstruction.
• Experimental results on the ShapeNet [29] and Pascal 3D+ [31] datasets demonstrate that the proposed approaches outperform state-of-the-art methods in terms of both accuracy and efficiency. Additional experiments also show strong generalization abilities in reconstructing unseen 3D objects.
The Method
Overview
The proposed Pix2Vox aims to reconstruct the 3D shape of an object from either single or multiple RGB images. The 3D shape of an object is represented by a 3D voxel grid, where 0 denotes an empty cell and 1 denotes an occupied cell. The key components of Pix2Vox are shown in Figure 2. First, the encoder produces feature maps from the input images. Second, the decoder takes each feature map as input and generates a coarse 3D volume correspondingly. Third, single or multiple 3D volumes are forwarded to the context-aware fusion module, which adaptively selects high-quality reconstructions for each part from the coarse 3D volumes to obtain a fused 3D volume. Finally, the refiner with skip-connections further refines the fused 3D volume to generate the final reconstruction result. Figure 3 shows the detailed architectures of Pix2Vox-F and Pix2Vox-A. The former involves much fewer parameters and lower computational complexity, while the latter has more parameters, which can reconstruct more accurate 3D shapes but has higher computational complexity.
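To make the data flow concrete, the sketch below wires the four modules together for a batch of multi-view images. The module interfaces (in particular, a decoder that returns both coarse volumes and their context features) are our assumptions for illustration, not the authors' released code.

```python
import torch

def pix2vox_forward(images, encoder, decoder, fusion, refiner):
    """Hypothetical forward pass: images (B, V, 3, 224, 224) -> volume (B, 32, 32, 32)."""
    B, V = images.shape[:2]
    feats = encoder(images.flatten(0, 1))            # encoder/decoder weights shared across views
    coarse, context = decoder(feats)                 # per-view coarse volumes and context features
    coarse = coarse.view(B, V, *coarse.shape[1:])    # regroup by view: (B, V, 32, 32, 32)
    context = context.view(B, V, *context.shape[1:])
    fused = fusion(coarse, context)                  # context-aware fusion across views
    return refiner(fused)                            # refiner corrects wrongly recovered parts
```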
Network Architecture
Encoder
The encoder computes a set of features for the decoder to recover the 3D shape of the object. The first nine convolutional layers, along with the corresponding batch normalization layers and ReLU activations, of a pre-trained VGG-16 [18] are used to extract a 512 × 28 × 28 feature tensor from a 224 × 224 × 3 image. This feature extraction is followed by three sets of 2D convolutional layers, batch normalization layers, and ELU layers to embed semantic information into feature vectors. In Pix2Vox-F, the kernel size of the first convolutional layer is 1², while the kernel sizes of the other two are 3². The number of output channels starts at 512, halves in each subsequent layer, and ends at 128. In Pix2Vox-A, the kernel sizes of the three convolutional layers are 3², 3², and 1², respectively. The output channels of the three convolutional layers are 512, 512, and 256, respectively. After the second convolutional layer, there is a max pooling layer with a kernel size of 3² in Pix2Vox-F and 4² in Pix2Vox-A. The feature vectors produced by Pix2Vox-F and Pix2Vox-A are of sizes 2048 and 16384, respectively.
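As a rough illustration of this design, the sketch below builds a similar encoder in PyTorch (the paper states the network is implemented in PyTorch). The backbone slice index, padding, and pooling settings are our assumptions chosen to make a runnable example, not the authors' released configuration.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class Encoder(nn.Module):
    """Rough sketch of the Pix2Vox-A image encoder described above."""
    def __init__(self):
        super().__init__()
        vgg = models.vgg16_bn(pretrained=True)
        # First nine convolutional layers (with BN + ReLU) of VGG16-BN:
        # for a 224 x 224 x 3 input this yields a 512 x 28 x 28 feature tensor.
        self.backbone = nn.Sequential(*list(vgg.features.children())[:30])
        # Three conv/BN/ELU blocks; padding and pooling here are illustrative choices.
        self.head = nn.Sequential(
            nn.Conv2d(512, 512, kernel_size=3), nn.BatchNorm2d(512), nn.ELU(),
            nn.Conv2d(512, 512, kernel_size=3), nn.BatchNorm2d(512), nn.ELU(),
            nn.MaxPool2d(kernel_size=4),
            nn.Conv2d(512, 256, kernel_size=1), nn.BatchNorm2d(256), nn.ELU(),
        )

    def forward(self, images):            # images: (B, 3, 224, 224)
        features = self.backbone(images)  # (B, 512, 28, 28)
        return self.head(features)        # (B, 256, 6, 6) with these illustrative settings
```

In the full model, this feature map would be passed to the decoder; its exact dimensionality depends on the padding and pooling choices above.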
Decoder
The decoder is responsible for transforming the information of 2D feature maps into 3D volumes. There are five 3D transposed convolutional layers in both Pix2Vox-F and Pix2Vox-A. Specifically, the first four transposed convolutional layers have a kernel size of 4³, with a stride of 2 and padding of 1. There is an additional transposed convolutional layer with a bank of 1³ filters. Each transposed convolutional layer is followed by a batch normalization layer and a ReLU activation, except for the last layer, which is followed by a sigmoid function. In Pix2Vox-F, the numbers of output channels of the
Context-aware Fusion
From different viewpoints, we can see different visible parts of an object. The reconstruction qualities of visible parts are much higher than those of invisible parts. Inspired by this observation, we propose a context-aware fusion module to adaptively select high-quality reconstruction for each part (e.g., table legs) from different coarse 3D volumes. The selected reconstructions are fused to generate a 3D volume of the whole object ( Figure 4).
As shown in Figure 5, given coarse 3D volumes and the corresponding context, the context-aware fusion module generates a score map for each coarse volume and then fuses them into one volume by the weighted summation of all coarse volumes according to their score maps. The spatial information of voxels is preserved in the context-aware fusion module, and thus Pix2Vox can utilize multi-view information to recover the structure of an object better.
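A minimal sketch of such a fusion module is given below, assuming the per-view context tensors have already been assembled by the decoder; the scoring-network input width is a free parameter here, and the layer widths (9, 16, 8, 4, 1 output channels) follow the scoring network detailed in the next paragraph.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextAwareFusion(nn.Module):
    """Sketch of context-aware fusion: score each coarse volume per voxel,
    normalize the scores across views with softmax, and take a weighted sum."""
    def __init__(self, context_channels):
        super().__init__()
        channels = [context_channels, 9, 16, 8, 4, 1]   # scoring-network output channels
        layers = []
        for c_in, c_out in zip(channels[:-1], channels[1:]):
            layers += [nn.Conv3d(c_in, c_out, kernel_size=3, padding=1),
                       nn.BatchNorm3d(c_out),
                       nn.LeakyReLU(0.2)]
        self.scoring = nn.Sequential(*layers)

    def forward(self, coarse_volumes, contexts):
        # coarse_volumes: (B, V, D, D, D); contexts: (B, V, C, D, D, D)
        B, V = contexts.shape[:2]
        scores = self.scoring(contexts.flatten(0, 1))              # (B*V, 1, D, D, D)
        scores = scores.view(B, V, *coarse_volumes.shape[2:])      # (B, V, D, D, D)
        weights = F.softmax(scores, dim=1)                         # normalize across views
        return (weights * coarse_volumes).sum(dim=1)               # fused volume: (B, D, D, D)
```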
Specifically, the context-aware fusion module generates the context c_r of the r-th coarse volume v^c_r by concatenating the outputs of the last two layers in the decoder. Then, the context scoring network generates a score m_r for the context of the r-th coarse volume. The context scoring network is composed of five sets of 3D convolutional layers, each of which has a kernel size of 3³ and padding of 1, followed by a batch normalization and a leaky ReLU activation. The numbers of output channels of the convolutional layers are 9, 16, 8, 4, and 1, respectively. The learned score m_r for context c_r is normalized across all learned scores. We choose softmax as the normalization function. Therefore, the score s
Refiner
The refiner can be seen as a residual network, which aims to correct wrongly recovered parts of a 3D volume. It follows the idea of a 3D encoder-decoder with U-net connections [16]. With the help of the U-net connections between the encoder and decoder, the local structure of the fused volume can be preserved. Specifically, the encoder has three 3D convolutional layers, each of which has a bank of 4³ filters with padding of 2, followed by a batch normalization layer, a leaky ReLU activation, and a max pooling layer with a kernel size of 2³. The numbers of output channels of the convolutional layers are 32, 64, and 128, respectively. The encoder is finally followed by two fully connected layers with dimensions of 2048 and 8192. The decoder consists of three transposed convolutional layers, each of which has a bank of 4³ filters with padding of 2 and stride of 1. Except for the last transposed convolutional layer, which is followed by a sigmoid function, each layer is followed by a batch normalization layer and a ReLU activation.
(Table 1: Single-view reconstruction on ShapeNet compared using Intersection-over-Union (IoU). The best number for each category is highlighted in bold. The numbers in parentheses are results trained and tested with the released code. Note that DRC [21] is trained/tested per category and PSGN [4] takes object masks as an additional input.)
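The sketch below mirrors the refiner described above at a 32³ resolution. The stride/padding values of the transposed convolutions are adjusted so that shapes remain consistent in a runnable example; they are illustrative rather than the exact published settings.

```python
import torch
import torch.nn as nn

class Refiner(nn.Module):
    """Sketch of the refiner: 3D encoder-decoder with U-net style skip connections."""
    def __init__(self):
        super().__init__()
        def down(c_in, c_out):  # conv -> BN -> LeakyReLU -> pool, halving resolution
            return nn.Sequential(nn.Conv3d(c_in, c_out, kernel_size=4, padding=2),
                                 nn.BatchNorm3d(c_out), nn.LeakyReLU(0.2),
                                 nn.MaxPool3d(2))
        self.enc1, self.enc2, self.enc3 = down(1, 32), down(32, 64), down(64, 128)
        self.fc = nn.Sequential(nn.Linear(8192, 2048), nn.ReLU(),
                                nn.Linear(2048, 8192), nn.ReLU())
        self.dec3 = nn.Sequential(nn.ConvTranspose3d(128, 64, 4, stride=2, padding=1),
                                  nn.BatchNorm3d(64), nn.ReLU())
        self.dec2 = nn.Sequential(nn.ConvTranspose3d(64, 32, 4, stride=2, padding=1),
                                  nn.BatchNorm3d(32), nn.ReLU())
        self.dec1 = nn.Sequential(nn.ConvTranspose3d(32, 1, 4, stride=2, padding=1),
                                  nn.Sigmoid())

    def forward(self, volume):                  # volume: (B, 32, 32, 32) fused occupancies
        x = volume.unsqueeze(1)                 # (B, 1, 32, 32, 32)
        e1 = self.enc1(x)                       # (B, 32, 16, 16, 16)
        e2 = self.enc2(e1)                      # (B, 64, 8, 8, 8)
        e3 = self.enc3(e2)                      # (B, 128, 4, 4, 4)
        b = self.fc(e3.flatten(1)).view_as(e3)  # bottleneck fully connected layers
        d3 = self.dec3(b) + e2                  # skip connection preserves local structure
        d2 = self.dec2(d3) + e1
        return self.dec1(d2).squeeze(1)         # refined (B, 32, 32, 32) volume
```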
Loss Function
The loss function of the network is defined as the mean value of the voxel-wise binary cross entropies between the reconstructed object and the ground truth. More formally, it can be defined as
$\ell = \frac{1}{N} \sum_{i=1}^{N} \left[ gt_i \log(p_i) + (1 - gt_i) \log(1 - p_i) \right]$   (3)
where N denotes the number of voxels in the ground truth. p_i and gt_i represent the predicted occupancy and the corresponding ground truth. The smaller the value is, the closer the prediction is to the ground truth.
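Equivalently, the objective can be computed with the standard voxel-wise binary cross entropy (the negative of the bracketed sum above, minimized during training). A minimal sketch:

```python
import torch
import torch.nn.functional as F

def reconstruction_loss(pred_volumes, gt_volumes):
    """Mean voxel-wise binary cross entropy between predicted occupancies and ground truth.

    pred_volumes: predicted occupancy probabilities in [0, 1], e.g. shape (B, 32, 32, 32)
    gt_volumes:   binary ground-truth occupancies of the same shape
    """
    return F.binary_cross_entropy(pred_volumes, gt_volumes.float())
```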
Experiments
Datasets and Metrics
Datasets We evaluate the proposed Pix2Vox-F and Pix2Vox-A on both synthetic images of objects from the ShapeNet [29] dataset and real images from the Pascal 3D+ [31] dataset. More specifically, we use a subset of ShapeNet consisting of 13 major categories and 44k 3D models following the settings of [2]. As for Pascal 3D+, there are 12 categories and 22k models. Evaluation Metrics To evaluate the quality of the output from the proposed methods, we binarize the probabilities at a fixed threshold of 0.4 and use intersection over union (IoU) as the similarity measure. More formally,
$\mathrm{IoU} = \frac{\sum_{i,j,k} I\left(p_{(i,j,k)} > t\right) \, I\left(gt_{(i,j,k)}\right)}{\sum_{i,j,k} I\left[ I\left(p_{(i,j,k)} > t\right) + I\left(gt_{(i,j,k)}\right) \right]}$   (4)
where p_{(i,j,k)} and gt_{(i,j,k)} represent the predicted occupancy probability and the ground truth at (i, j, k), respectively. I(·) is an indicator function and t denotes a voxelization threshold. Higher IoU values indicate better reconstruction results.
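A direct implementation of Eq. (4) on voxel grids, using the fixed threshold t = 0.4 mentioned above:

```python
import torch

def voxel_iou(pred_volume, gt_volume, threshold=0.4):
    """Intersection-over-Union between a thresholded prediction and a binary ground truth."""
    occupied = pred_volume > threshold              # I(p_(i,j,k) > t)
    gt = gt_volume.bool()                           # I(gt_(i,j,k))
    intersection = (occupied & gt).float().sum()
    union = (occupied | gt).float().sum()           # voxels set in either volume
    return (intersection / union).item()
```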
Implementation Details
We use 224 × 224 RGB images as input to train the proposed methods with a batch size of 64. The output voxelized reconstruction is 32³ in size. We implement our network in PyTorch and train both Pix2Vox-F and Pix2Vox-A using an Adam optimizer [9] with a β1 of 0.9 and a β2 of 0.999. The initial learning rate is set to 0.001 and decayed by a factor of 2 after 150 epochs. The optimization is set to stop after 250 epochs.
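A hedged sketch of this training setup is shown below; `model` and `train_loader` are hypothetical placeholders for a Pix2Vox network and a ShapeNet data loader, and the MultiStepLR schedule stands in for "decayed by a factor of 2 after 150 epochs".

```python
import torch
import torch.nn.functional as F

# `model` and `train_loader` are assumed to exist; they are not defined in the paper text.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, betas=(0.9, 0.999))
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[150], gamma=0.5)

for epoch in range(250):                             # optimization stops after 250 epochs
    for images, gt_volumes in train_loader:          # images: (B, V, 3, 224, 224), B = 64
        pred_volumes = model(images)                 # predicted 32^3 occupancy probabilities
        loss = F.binary_cross_entropy(pred_volumes, gt_volumes.float())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    scheduler.step()                                 # halves the learning rate after epoch 150
```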
Reconstruction of Synthetic Images
To evaluate the performance of the proposed methods in handling synthetic images, we compare our methods against several state-of-the-art methods on the ShapeNet testing set. Table 1 shows the performance of single-view reconstruction, while Table 2 shows the mean IoU scores of multi-view reconstruction with different numbers of views.
(Figure 6: Single-view (left) and multi-view (right) reconstructions on the ShapeNet testing set. GT represents the ground truth of the 3D object. Note that DRC [21] is trained/tested per category.)
The single-view reconstruction results of Pix2Vox-F and Pix2Vox-A significantly outperform other methods (Table 1). Pix2Vox-A increases IoU over 3D-R2N2 by 18%. In multi-view reconstruction, Pix2Vox-A consistently outperforms 3D-R2N2 for all numbers of views (Table 2). The IoU of Pix2Vox-A is 13% higher than that of 3D-R2N2. Figure 6 shows several reconstruction examples from the ShapeNet testing set. Both Pix2Vox-F and Pix2Vox-A are able to recover the thin parts of objects, such as lamps and table legs. Compared with Pix2Vox-F, we also observe that the higher-dimensional feature maps in Pix2Vox-A do contribute to better 3D reconstruction. Moreover, in multi-view reconstruction, both Pix2Vox-A and Pix2Vox-F produce better results than 3D-R2N2.
Reconstruction of Real-world Images
To evaluate the performance of the proposed methods on real-world images, we test our methods for single-view reconstruction on the Pascal 3D+ dataset. First, the images are cropped according to the bounding box of the largest object within each image. Then, the cropped images are rescaled to the input size of the reconstruction network.
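A simple version of this preprocessing step might look as follows; the bounding-box source (e.g., the Pascal 3D+ annotations) and file paths are assumptions for illustration.

```python
from PIL import Image
import torchvision.transforms.functional as TF

def preprocess_real_image(image_path, bbox):
    """Crop the largest object by its bounding box and rescale to the 224 x 224 network input.

    bbox: (left, upper, right, lower) pixel coordinates of the largest object.
    """
    image = Image.open(image_path).convert("RGB")
    cropped = image.crop(bbox)                       # keep only the object of interest
    resized = cropped.resize((224, 224), Image.BILINEAR)
    return TF.to_tensor(resized)                     # (3, 224, 224) float tensor in [0, 1]
```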
The mean IoU of each category is reported in Table 3. Both Pix2Vox-F and Pix2Vox-A significantly outperform the competing approaches on the Pascal 3D+ testing set. Compared with other methods, our methods are able to better reconstruct the overall shape and capture finer details from the input images. The qualitative analysis is given in Figure 7, which indicates that the proposed methods are more effective in handling real-world scenarios.
(Figure 7: Reconstructions on the Pascal 3D+ testing set from single-view images. GT represents the ground truth of the 3D object. Note that DRC [21] is trained/tested per category.)
Reconstruction of Unseen Objects
In order to test how well our methods generalize to unseen objects, we conduct additional experiments on ShapeNet. More specifically, all models are trained on the 13 major categories of ShapeNet and tested on the remaining 44 categories of ShapeNet. None of the pretrained models has ever "seen" either the objects in these categories or their labels before. The reconstruction results of 3D-R2N2 are obtained with its released pretrained model. Several reconstruction results are presented in Figure 8. The reconstruction IoU of 3D-R2N2 on unseen objects is 0.119, while those of Pix2Vox-F and Pix2Vox-A are 0.209 and 0.227, respectively. Experimental results demonstrate that 3D-R2N2 can hardly recover the shape of unseen objects. In contrast, Pix2Vox-F and Pix2Vox-A show satisfactory generalization to unseen objects.
Ablation Study
In this section, we validate the context-aware fusion and the refiner by ablation studies. Context-aware fusion To quantitatively evaluate the context-aware fusion, we replace the context-aware fusion in Pix2Vox-A with average fusion, where the fused voxel v^f is calculated as the mean of the corresponding coarse voxels across all views.
(Figure 8: Reconstruction on unseen objects of ShapeNet from 5-view images. GT represents the ground truth of the 3D object.)
Table 2 shows that the context-aware fusion performs better than average fusion in selecting high-quality reconstructions for each part from different coarse volumes. Refiner Pix2Vox-A uses a refiner to further refine the fused 3D volume. For single-view reconstruction on ShapeNet, the IoU of Pix2Vox-A is 0.658. In contrast, the IoU of Pix2Vox-A without the refiner decreases to 0.643. Removing the refiner causes a considerable degradation in reconstruction accuracy. However, as the number of views increases, the effect of the refiner becomes weaker. The reconstruction results of the two networks (with/without the refiner) are almost the same when the number of input images is more than 3.
The ablation studies indicate that both the context-aware fusion and the refiner play important roles in our framework in achieving the performance improvements over previous state-of-the-art methods. Table 4 and Figure 1 show the numbers of parameters of different methods. There is an 80% reduction in parameters in Pix2Vox-F compared to 3D-R2N2.
Space and Time Complexity
The running times are obtained on the same PC with an NVIDIA GTX 1080 Ti GPU. For more precise timing, we exclude the reading and writing time when evaluating the forward and backward inference time. Both Pix2Vox-F and Pix2Vox-A are about 8 times faster in forward inference than 3D-R2N2 in single-view reconstruction. In backward inference, Pix2Vox-F and Pix2Vox-A are about 24 and 4 times faster than 3D-R2N2, respectively.
Discussion
To give a detailed analysis of the context-aware fusion module, we visualized the score maps of three coarse volumes when reconstructing the 3D shape of a table from 3-view images, as shown in Figure 4. The reconstruction of the table top on the right is clearly of low quality, and the score of the corresponding part is lower than in the other two coarse volumes. The fused 3D volume is obtained by combining the selected high-quality reconstruction parts, where bad reconstructions are eliminated effectively by our scoring scheme.
Although our methods outperform state-of-the-art methods, the reconstruction results are still of low resolution. We can further improve the reconstruction resolution in future work by introducing GANs [6].
Conclusion and Future Works
In this paper, we propose a unified framework for both single-view and multi-view 3D reconstruction, named Pix2Vox. Compared with existing methods that fuse deep features generated by a shared encoder, the proposed method fuses multiple coarse volumes produced by the decoder and better preserves multi-view spatial constraints. Quantitative and qualitative evaluations for both single-view and multi-view reconstruction on the ShapeNet and Pascal 3D+ benchmarks indicate that the proposed methods outperform state-of-the-art methods by a large margin. Pix2Vox is computationally efficient, being 24 times faster than 3D-R2N2 in terms of backward inference time. In future work, we will work on improving the resolution of the reconstructed 3D objects. In addition, we also plan to extend Pix2Vox to reconstruct 3D objects from RGB-D images.
1901.11153 | 2911758669 | Recovering the 3D representation of an object from single-view or multi-view RGB images by deep neural networks has attracted increasing attention in the past few years. Several mainstream works (e.g., 3D-R2N2) use recurrent neural networks (RNNs) to fuse multiple feature maps extracted from input images sequentially. However, when given the same set of input images with different orders, RNN-based approaches are unable to produce consistent reconstruction results. Moreover, due to long-term memory loss, RNNs cannot fully exploit input images to refine reconstruction results. To solve these problems, we propose a novel framework for single-view and multi-view 3D reconstruction, named Pix2Vox. By using a well-designed encoder-decoder, it generates a coarse 3D volume from each input image. Then, a context-aware fusion module is introduced to adaptively select high-quality reconstructions for each part (e.g., table legs) from different coarse 3D volumes to obtain a fused 3D volume. Finally, a refiner further refines the fused 3D volume to generate the final output. Experimental results on the ShapeNet and Pix3D benchmarks indicate that the proposed Pix2Vox outperforms state-of-the-arts by a large margin. Furthermore, the proposed method is 24 times faster than 3D-R2N2 in terms of backward inference time. The experiments on ShapeNet unseen 3D categories have shown the superior generalization abilities of our method. | With the success of generative adversarial networks (GANs) @cite_30 and variational autoencoders (VAEs) @cite_16 , 3D-VAE-GAN @cite_4 adopts GAN and VAE to generate 3D objects by taking a single-view image as input. However, 3D-VAE-GAN requires class labels for reconstruction. MarrNet @cite_17 reconstructs 3D objects by estimating depth, surface normals, and silhouettes of 2D images, which is challenging and usually leads to severe distortion @cite_3 . OGN @cite_14 and O-CNN @cite_23 use octree to represent higher resolution volumetric 3D objects with a limited memory budget. However, OGN representations are complex and consume more computational resources due to the complexity of octree representations. PSGN @cite_7 and 3D-LMNet @cite_12 generate point clouds from single-view images. However, the points have a large degree of freedom in the point cloud representation because of the limited connections between points. Consequently, these methods cannot recover 3D volumes accurately @cite_2 . | {
"abstract": [
"We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.",
"We present a deep convolutional decoder architecture that can generate volumetric 3D outputs in a compute- and memory-efficient manner by using an octree representation. The network learns to predict both the structure of the octree, and the occupancy values of individual cells. This makes it a particularly valuable technique for generating 3D shapes. In contrast to standard decoders acting on regular voxel grids, the architecture does not have cubic complexity. This allows representing much higher resolution outputs with a limited memory budget. We demonstrate this in several application domains, including 3D convolutional autoencoders, generation of objects and whole scenes from high-level representations, and shape from a single image.",
"We study the problem of 3D object generation. We propose a novel framework, namely 3D Generative Adversarial Network (3D-GAN), which generates 3D objects from a probabilistic space by leveraging recent advances in volumetric convolutional networks and generative adversarial nets. The benefits of our model are three-fold: first, the use of an adversarial criterion, instead of traditional heuristic criteria, enables the generator to capture object structure implicitly and to synthesize high-quality 3D objects; second, the generator establishes a mapping from a low-dimensional probabilistic space to the space of 3D objects, so that we can sample objects without a reference image or CAD models, and explore the 3D object manifold; third, the adversarial discriminator provides a powerful 3D shape descriptor which, learned without supervision, has wide applications in 3D object recognition. Experiments demonstrate that our method generates high-quality 3D objects, and our unsupervisedly learned features achieve impressive performance on 3D object recognition, comparable with those of supervised learning methods.",
"Generation of 3D data by deep neural network has been attracting increasing attention in the research community. The majority of extant works resort to regular representations such as volumetric grids or collection of images, however, these representations obscure the natural invariance of 3D shapes under geometric transformations, and also suffer from a number of other issues. In this paper we address the problem of 3D reconstruction from a single image, generating a straight-forward form of output – point cloud coordinates. Along with this problem arises a unique and interesting issue, that the groundtruth shape for an input image may be ambiguous. Driven by this unorthordox output form and the inherent ambiguity in groundtruth, we design architecture, loss function and learning paradigm that are novel and effective. Our final solution is a conditional shape sampler, capable of predicting multiple plausible 3D point clouds from an input image. In experiments not only can our system outperform state-of-the-art methods on single image based 3D reconstruction benchmarks, but it also shows strong performance for 3D shape completion and promising ability in making multiple plausible predictions.",
"Author(s): Tulsiani, Shubham | Advisor(s): Malik, Jitendra | Abstract: We address the task of inferring the 3D structure underlying an image, in particular focusing on two questions -- how we can plausibly obtain supervisory signal for this task, and what forms of representation should we pursue. We first show that we can leverage image-based supervision to learn single-view 3D prediction, by using geometry as a bridge between the learning systems and the available indirect supervision. We demonstrate that this approach enables learning 3D structure across diverse setups e.g. learning deformable models, predctive models for volumetric 3D, or inferring textured meshes. We then advocate the case for inferring interpretable and compositional 3D representations. We present a method that discovers the coherent compositional structure across objects in a unsupervised manner by attempting to assemble shapes using volumetric primitives, and then demonstrate the advantages of predicting similar factored 3D representations for complex scenes.",
"We present O-CNN, an Octree-based Convolutional Neural Network (CNN) for 3D shape analysis. Built upon the octree representation of 3D shapes, our method takes the average normal vectors of a 3D model sampled in the finest leaf octants as input and performs 3D CNN operations on the octants occupied by the 3D shape surface. We design a novel octree data structure to efficiently store the octant information and CNN features into the graphics memory and execute the entire O-CNN training and evaluation on the GPU. O-CNN supports various CNN structures and works for 3D shapes in different representations. By restraining the computations on the octants occupied by 3D surfaces, the memory and computational costs of the O-CNN grow quadratically as the depth of the octree increases, which makes the 3D CNN feasible for high-resolution 3D models. We compare the performance of the O-CNN with other existing 3D CNN solutions and demonstrate the efficiency and efficacy of O-CNN in three shape analysis tasks, including object classification, shape retrieval, and shape segmentation.",
"We propose an end-to-end deep learning architecture that produces a 3D shape in triangular mesh from a single color image. Limited by the nature of deep neural network, previous methods usually represent a 3D shape in volume or point cloud, and it is non-trivial to convert them to the more ready-to-use mesh model. Unlike the existing methods, our network represents 3D mesh in a graph-based convolutional neural network and produces correct geometry by progressively deforming an ellipsoid, leveraging perceptual features extracted from the input image. We adopt a coarse-to-fine strategy to make the whole deformation procedure stable, and define various of mesh related losses to capture properties of different levels to guarantee visually appealing and physically accurate 3D geometry. Extensive experiments show that our method not only qualitatively produces mesh model with better details, but also achieves higher 3D shape estimation accuracy compared to the state-of-the-art.",
"",
"",
"3D object reconstruction from a single image is a highly under-determined problem, requiring strong prior knowledge of plausible 3D shapes. This introduces challenges for learning-based approaches, as 3D object annotations are scarce in real images. Previous work chose to train on synthetic data with ground truth 3D information, but suffered from domain adaptation when tested on real data. In this work, we propose MarrNet, an end-to-end trainable model that sequentially estimates 2.5D sketches and 3D object shape. Our disentangled, two-step formulation has three advantages. First, compared to full 3D shape, 2.5D sketches are much easier to be recovered from a 2D image; models that recover 2.5D sketches are also more likely to transfer from synthetic to real data. Second, for 3D reconstruction from 2.5D sketches, systems can learn purely from synthetic data. This is because we can easily render realistic 2.5D sketches without modeling object appearance variations in real images, including lighting, texture, etc. This further relieves the domain adaptation problem. Third, we derive differentiable projective functions from 3D shape to 2.5D sketches; the framework is therefore end-to-end trainable on real images, requiring no human annotations. Our model achieves state-of-the-art performance on 3D shape reconstruction."
],
"cite_N": [
"@cite_30",
"@cite_14",
"@cite_4",
"@cite_7",
"@cite_3",
"@cite_23",
"@cite_2",
"@cite_16",
"@cite_12",
"@cite_17"
],
"mid": [
"2099471712",
"2603429625",
"2949551726",
"2560722161",
"2894639990",
"2737234477",
"2796312544",
"",
"2963111259",
"2767503796"
]
} | Pix2Vox: Context-aware 3D Reconstruction from Single and Multi-view Images | 3D reconstruction is an important problem in robotics, CAD, virtual reality and augmented reality. Traditional methods, such as Structure from Motion (SfM) [13] and Simultaneous Localization and Mapping (SLAM) [5], match image features across views. However, establishing feature correspondences becomes extremely difficult when multiple viewpoints are separated by a large margin due to local appearance changes or self-occlusions [11]. To overcome these limitations, several deep learning based approaches, including 3D-R2N2 [2], LSM [8], and 3DensiNet [23], have been proposed to recover the 3D shape of an object and obtained promising results.
To generate 3D volumes, 3D-R2N2 [2] and LSM [8] formulate multi-view 3D reconstruction as a sequence learning problem and use recurrent neural networks (RNNs) to fuse multiple feature maps extracted by a shared encoder from input images. The feature maps are incrementally refined when more views of an object are available. However, RNN-based methods suffer from three limitations. First, when given the same set of images in different orders, RNNs are unable to estimate the 3D shape of an object consistently due to permutation variance [22]. Second, due to the long-term memory loss of RNNs, the input images cannot be fully exploited to refine reconstruction results [14]. Last but not least, RNN-based methods are time-consuming since input images are processed sequentially without parallelization [7].
(Figure 2: An overview of the proposed Pix2Vox. The network recovers the shape of 3D objects from arbitrary (uncalibrated) single or multiple images. The reconstruction results can be refined when more input images are available. Note that the weights of the encoder and decoder are shared among all views.)
To address the issues mentioned above, we propose Pix2Vox, a novel framework for single-view and multi-view 3D reconstruction that contains four modules: encoder, decoder, context-aware fusion, and refiner. The encoder and decoder generate coarse 3D volumes from multiple input images in parallel, which eliminates the effect of the order of input images and accelerates computation. Then, the context-aware fusion module selects high-quality reconstructions from all coarse 3D volumes and generates a fused 3D volume, which fully exploits the information of all input images without long-term memory loss. Finally, the refiner further corrects wrongly recovered parts of the fused 3D volume to obtain a refined reconstruction. To achieve a good balance between accuracy and model size, we implement two versions of the proposed framework: Pix2Vox-F and Pix2Vox-A (Figure 1).
The contributions can be summarized as follows:
• We present a unified framework for both single-view and multi-view 3D reconstruction, namely Pix2Vox. We equip Pix2Vox with a well-designed encoder, decoder, and refiner, which shows a powerful ability to handle 3D reconstruction of both synthetic and real-world images.
• We propose a context-aware fusion module to adaptively select high-quality reconstructions for each part from different coarse 3D volumes in parallel to produce a fused reconstruction of the whole object. To the best of our knowledge, this is the first work to exploit context across multiple views for 3D reconstruction.
• Experimental results on the ShapeNet [29] and Pascal 3D+ [31] datasets demonstrate that the proposed approaches outperform state-of-the-art methods in terms of both accuracy and efficiency. Additional experiments also show strong generalization abilities in reconstructing unseen 3D objects.
The Method
Overview
The proposed Pix2Vox aims to reconstruct the 3D shape of an object from either single or multiple RGB images. The 3D shape of an object is represented by a 3D voxel grid, where 0 denotes an empty cell and 1 denotes an occupied cell. The key components of Pix2Vox are shown in Figure 2. First, the encoder produces feature maps from the input images. Second, the decoder takes each feature map as input and generates a coarse 3D volume correspondingly. Third, single or multiple 3D volumes are forwarded to the context-aware fusion module, which adaptively selects high-quality reconstructions for each part from the coarse 3D volumes to obtain a fused 3D volume. Finally, the refiner with skip-connections further refines the fused 3D volume to generate the final reconstruction result. Figure 3 shows the detailed architectures of Pix2Vox-F and Pix2Vox-A. The former involves much fewer parameters and lower computational complexity, while the latter has more parameters, which can reconstruct more accurate 3D shapes but has higher computational complexity.
Network Architecture
Encoder
The encoder computes a set of features for the decoder to recover the 3D shape of the object. The first nine convolutional layers, along with the corresponding batch normalization layers and ReLU activations, of a pre-trained VGG-16 [18] are used to extract a 512 × 28 × 28 feature tensor from a 224 × 224 × 3 image. This feature extraction is followed by three sets of 2D convolutional layers, batch normalization layers, and ELU layers to embed semantic information into feature vectors. In Pix2Vox-F, the kernel size of the first convolutional layer is 1², while the kernel sizes of the other two are 3². The number of output channels starts at 512, halves in each subsequent layer, and ends at 128. In Pix2Vox-A, the kernel sizes of the three convolutional layers are 3², 3², and 1², respectively. The output channels of the three convolutional layers are 512, 512, and 256, respectively. After the second convolutional layer, there is a max pooling layer with a kernel size of 3² in Pix2Vox-F and 4² in Pix2Vox-A. The feature vectors produced by Pix2Vox-F and Pix2Vox-A are of sizes 2048 and 16384, respectively.
Decoder
The decoder is responsible for transforming the information of 2D feature maps into 3D volumes. There are five 3D transposed convolutional layers in both Pix2Vox-F and Pix2Vox-A. Specifically, the first four transposed convolutional layers have a kernel size of 4³, with a stride of 2 and padding of 1. There is an additional transposed convolutional layer with a bank of 1³ filters. Each transposed convolutional layer is followed by a batch normalization layer and a ReLU activation, except for the last layer, which is followed by a sigmoid function. In Pix2Vox-F, the numbers of output channels of the
Context-aware Fusion
From different viewpoints, we can see different visible parts of an object. The reconstruction qualities of visible parts are much higher than those of invisible parts. Inspired by this observation, we propose a context-aware fusion module to adaptively select high-quality reconstruction for each part (e.g., table legs) from different coarse 3D volumes. The selected reconstructions are fused to generate a 3D volume of the whole object ( Figure 4).
As shown in Figure 5, given coarse 3D volumes and the corresponding context, the context-aware fusion module generates a score map for each coarse volume and then fuses them into one volume by the weighted summation of all coarse volumes according to their score maps. The spatial information of voxels is preserved in the context-aware fusion module, and thus Pix2Vox can utilize multi-view information to recover the structure of an object better.
Specifically, the context-aware fusion module generates the context c_r of the r-th coarse volume v^c_r by concatenating the outputs of the last two layers in the decoder. Then, the context scoring network generates a score m_r for the context of the r-th coarse volume. The context scoring network is composed of five sets of 3D convolutional layers, each of which has a kernel size of 3³ and padding of 1, followed by a batch normalization and a leaky ReLU activation. The numbers of output channels of the convolutional layers are 9, 16, 8, 4, and 1, respectively. The learned score m_r for context c_r is normalized across all learned scores. We choose softmax as the normalization function. Therefore, the score s
Refiner
The refiner can be seen as a residual network, which aims to correct wrongly recovered parts of a 3D volume. It follows the idea of a 3D encoder-decoder with U-net connections [16]. With the help of the U-net connections between the encoder and decoder, the local structure of the fused volume can be preserved. Specifically, the encoder has three 3D convolutional layers, each of which has a bank of 4³ filters with padding of 2, followed by a batch normalization layer, a leaky ReLU activation, and a max pooling layer with a kernel size of 2³. The numbers of output channels of the convolutional layers are 32, 64, and 128, respectively. The encoder is finally followed by two fully connected layers with dimensions of 2048 and 8192. The decoder consists of three transposed convolutional layers, each of which has a bank of 4³ filters with padding of 2 and stride of 1. Except for the last transposed convolutional layer, which is followed by a sigmoid function, each layer is followed by a batch normalization layer and a ReLU activation.
(Table 1: Single-view reconstruction on ShapeNet compared using Intersection-over-Union (IoU). The best number for each category is highlighted in bold. The numbers in parentheses are results trained and tested with the released code. Note that DRC [21] is trained/tested per category and PSGN [4] takes object masks as an additional input.)
Loss Function
The loss function of the network is defined as the mean value of the voxel-wise binary cross entropies between the reconstructed object and the ground truth. More formally, it can be defined as
$\ell = \frac{1}{N} \sum_{i=1}^{N} \left[ gt_i \log(p_i) + (1 - gt_i) \log(1 - p_i) \right]$   (3)
where N denotes the number of voxels in the ground truth. p_i and gt_i represent the predicted occupancy and the corresponding ground truth. The smaller the value is, the closer the prediction is to the ground truth.
Experiments
Datasets and Metrics
Datasets We evaluate the proposed Pix2Vox-F and Pix2Vox-A on both synthetic images of objects from the ShapeNet [29] dataset and real images from the Pascal 3D+ [31] dataset. More specifically, we use a subset of ShapeNet consisting of 13 major categories and 44k 3D models following the settings of [2]. As for Pascal 3D+, there are 12 categories and 22k models. Evaluation Metrics To evaluate the quality of the output from the proposed methods, we binarize the probabilities at a fixed threshold of 0.4 and use intersection over union (IoU) as the similarity measure. More formally,
$\mathrm{IoU} = \frac{\sum_{i,j,k} I\left(p_{(i,j,k)} > t\right) \, I\left(gt_{(i,j,k)}\right)}{\sum_{i,j,k} I\left[ I\left(p_{(i,j,k)} > t\right) + I\left(gt_{(i,j,k)}\right) \right]}$   (4)
where p_{(i,j,k)} and gt_{(i,j,k)} represent the predicted occupancy probability and the ground truth at (i, j, k), respectively. I(·) is an indicator function and t denotes a voxelization threshold. Higher IoU values indicate better reconstruction results.
Implementation Details
We use 224 × 224 RGB images as input to train the proposed methods with a batch size of 64. The output voxelized reconstruction is 32³ in size. We implement our network in PyTorch and train both Pix2Vox-F and Pix2Vox-A using an Adam optimizer [9] with a β1 of 0.9 and a β2 of 0.999. The initial learning rate is set to 0.001 and decayed by a factor of 2 after 150 epochs. The optimization is set to stop after 250 epochs.
Reconstruction of Synthetic Images
To evaluate the performance of the proposed methods in handling synthetic images, we compare our methods against several state-of-the-art methods on the ShapeNet testing set. Table 1 shows the performance of single-view reconstruction, while Table 2 shows the mean IoU scores of multi-view reconstruction with different numbers of views.
(Figure 6: Single-view (left) and multi-view (right) reconstructions on the ShapeNet testing set. GT represents the ground truth of the 3D object. Note that DRC [21] is trained/tested per category.)
The single-view reconstruction results of Pix2Vox-F and Pix2Vox-A significantly outperform other methods (Table 1). Pix2Vox-A increases IoU over 3D-R2N2 by 18%. In multi-view reconstruction, Pix2Vox-A consistently outperforms 3D-R2N2 for all numbers of views (Table 2). The IoU of Pix2Vox-A is 13% higher than that of 3D-R2N2. Figure 6 shows several reconstruction examples from the ShapeNet testing set. Both Pix2Vox-F and Pix2Vox-A are able to recover the thin parts of objects, such as lamps and table legs. Compared with Pix2Vox-F, we also observe that the higher-dimensional feature maps in Pix2Vox-A do contribute to better 3D reconstruction. Moreover, in multi-view reconstruction, both Pix2Vox-A and Pix2Vox-F produce better results than 3D-R2N2.
Reconstruction of Real-world Images
To evaluate the performance of the proposed methods on real-world images, we test our methods for single-view reconstruction on the Pascal 3D+ dataset. First, the images are cropped according to the bounding box of the largest object within each image. Then, the cropped images are rescaled to the input size of the reconstruction network.
The mean IoU of each category is reported in Table 3. Both Pix2Vox-F and Pix2Vox-A significantly outperform the competing approaches on the Pascal 3D+ testing set. Compared with other methods, our methods are able to better reconstruct the overall shape and capture finer details from the input images. The qualitative analysis is given in Figure 7, which indicates that the proposed methods are more effective in handling real-world scenarios.
(Figure 7: Reconstructions on the Pascal 3D+ testing set from single-view images. GT represents the ground truth of the 3D object. Note that DRC [21] is trained/tested per category.)
Reconstruction of Unseen Objects
In order to test how well our methods generalize to unseen objects, we conduct additional experiments on ShapeNet. More specifically, all models are trained on the 13 major categories of ShapeNet and tested on the remaining 44 categories of ShapeNet. None of the pretrained models has ever "seen" either the objects in these categories or their labels before. The reconstruction results of 3D-R2N2 are obtained with its released pretrained model. Several reconstruction results are presented in Figure 8. The reconstruction IoU of 3D-R2N2 on unseen objects is 0.119, while those of Pix2Vox-F and Pix2Vox-A are 0.209 and 0.227, respectively. Experimental results demonstrate that 3D-R2N2 can hardly recover the shape of unseen objects. In contrast, Pix2Vox-F and Pix2Vox-A show satisfactory generalization to unseen objects.
Ablation Study
In this section, we validate the context-aware fusion and the refiner by ablation studies. Context-aware fusion To quantitatively evaluate the context-aware fusion, we replace the context-aware fusion in Pix2Vox-A with average fusion, where the fused voxel v^f is calculated as the mean of the corresponding coarse voxels across all views.
(Figure 8: Reconstruction on unseen objects of ShapeNet from 5-view images. GT represents the ground truth of the 3D object.)
Table 2 shows that the context-aware fusion performs better than average fusion in selecting high-quality reconstructions for each part from different coarse volumes. Refiner Pix2Vox-A uses a refiner to further refine the fused 3D volume. For single-view reconstruction on ShapeNet, the IoU of Pix2Vox-A is 0.658. In contrast, the IoU of Pix2Vox-A without the refiner decreases to 0.643. Removing the refiner causes a considerable degradation in reconstruction accuracy. However, as the number of views increases, the effect of the refiner becomes weaker. The reconstruction results of the two networks (with/without the refiner) are almost the same when the number of input images is more than 3.
The ablation studies indicate that both the context-aware fusion and the refiner play important roles in our framework in achieving the performance improvements over previous state-of-the-art methods. Table 4 and Figure 1 show the numbers of parameters of different methods. There is an 80% reduction in parameters in Pix2Vox-F compared to 3D-R2N2.
Space and Time Complexity
The running times are obtained on the same PC with an NVIDIA GTX 1080 Ti GPU. For more precise timing, we exclude the reading and writing time when evaluating the forward and backward inference time. Both Pix2Vox-F and Pix2Vox-A are about 8 times faster in forward inference than 3D-R2N2 in single-view reconstruction. In backward inference, Pix2Vox-F and Pix2Vox-A are about 24 and 4 times faster than 3D-R2N2, respectively.
Discussion
To give a detailed analysis of the context-aware fusion module, we visualized the score maps of three coarse volumes when reconstructing the 3D shape of a table from 3-view images, as shown in Figure 4. The reconstruction of the table top on the right is clearly of low quality, and the score of the corresponding part is lower than in the other two coarse volumes. The fused 3D volume is obtained by combining the selected high-quality reconstruction parts, where bad reconstructions are eliminated effectively by our scoring scheme.
Although our methods outperform state-of-the-art methods, the reconstruction results are still of low resolution. We can further improve the reconstruction resolution in future work by introducing GANs [6].
Conclusion and Future Works
In this paper, we propose a unified framework for both single-view and multi-view 3D reconstruction, named Pix2Vox. Compared with existing methods that fuse deep features generated by a shared encoder, the proposed method fuses multiple coarse volumes produced by the decoder and better preserves multi-view spatial constraints. Quantitative and qualitative evaluations for both single-view and multi-view reconstruction on the ShapeNet and Pascal 3D+ benchmarks indicate that the proposed methods outperform state-of-the-art methods by a large margin. Pix2Vox is computationally efficient, being 24 times faster than 3D-R2N2 in terms of backward inference time. In future work, we will work on improving the resolution of the reconstructed 3D objects. In addition, we also plan to extend Pix2Vox to reconstruct 3D objects from RGB-D images.
1901.11153 | 2911758669 | Recovering the 3D representation of an object from single-view or multi-view RGB images by deep neural networks has attracted increasing attention in the past few years. Several mainstream works (e.g., 3D-R2N2) use recurrent neural networks (RNNs) to fuse multiple feature maps extracted from input images sequentially. However, when given the same set of input images with different orders, RNN-based approaches are unable to produce consistent reconstruction results. Moreover, due to long-term memory loss, RNNs cannot fully exploit input images to refine reconstruction results. To solve these problems, we propose a novel framework for single-view and multi-view 3D reconstruction, named Pix2Vox. By using a well-designed encoder-decoder, it generates a coarse 3D volume from each input image. Then, a context-aware fusion module is introduced to adaptively select high-quality reconstructions for each part (e.g., table legs) from different coarse 3D volumes to obtain a fused 3D volume. Finally, a refiner further refines the fused 3D volume to generate the final output. Experimental results on the ShapeNet and Pix3D benchmarks indicate that the proposed Pix2Vox outperforms state-of-the-arts by a large margin. Furthermore, the proposed method is 24 times faster than 3D-R2N2 in terms of backward inference time. The experiments on ShapeNet unseen 3D categories have shown the superior generalization abilities of our method. | SfM @cite_11 and SLAM @cite_0 methods are successful in handling many scenarios. These methods match features among images and estimate the camera pose for each image. However, the matching process becomes difficult when multiple viewpoints are separated by a large margin. Besides, scanning all surfaces of an object before reconstruction is sometimes impossible, which leads to incomplete 3D shapes with occluded or hollowed-out areas @cite_27 . | {
"abstract": [
"Visual SLAM (simultaneous localization and mapping) refers to the problem of using images, as the only source of external information, in order to establish the position of a robot, a vehicle, or a moving camera in an environment, and at the same time, construct a representation of the explored zone. SLAM is an essential task for the autonomy of a robot. Nowadays, the problem of SLAM is considered solved when range sensors such as lasers or sonar are used to built 2D maps of small static environments. However SLAM for dynamic, complex and large scale environments, using vision as the sole external sensor, is an active area of research. The computer vision techniques employed in visual SLAM, such as detection, description and matching of salient features, image recognition and retrieval, among others, are still susceptible of improvement. The objective of this article is to provide new researchers in the field of visual SLAM a brief and comprehensible review of the state-of-the-art.",
"In this paper, we propose a novel approach, 3D-RecGAN++, which reconstructs the complete 3D structure of a given object from a single arbitrary depth view using generative adversarial networks. Unlike existing work which typically requires multiple views of the same object or class labels to recover the full 3D geometry, the proposed 3D-RecGAN++ only takes the voxel grid representation of a depth view of the object as input, and is able to generate the complete 3D occupancy grid with a high resolution of @math by recovering the occluded missing regions. The key idea is to combine the generative capabilities of 3D encoder-decoder and the conditional adversarial networks framework, to infer accurate and fine-grained 3D structures of objects in high-dimensional voxel space. Extensive experiments on large synthetic datasets and real-world Kinect datasets show that the proposed 3D-RecGAN++ significantly outperforms the state of the art in single view 3D object reconstruction, and is able to reconstruct unseen types of objects.",
""
],
"cite_N": [
"@cite_0",
"@cite_27",
"@cite_11"
],
"mid": [
"1979266466",
"2888702972",
"2963221299"
]
} | Pix2Vox: Context-aware 3D Reconstruction from Single and Multi-view Images | 3D reconstruction is an important problem in robotics, CAD, virtual reality and augmented reality. Traditional methods, such as Structure from Motion (SfM) [13] and Simultaneous Localization and Mapping (SLAM) [5], match image features across views. However, establishing feature correspondences becomes extremely difficult when multiple viewpoints are separated by a large margin due to local appearance changes or self-occlusions [11]. To overcome these limitations, several deep learning based approaches, including 3D-R2N2 [2], LSM [8], and 3DensiNet [23], have been proposed to recover the 3D shape of an object and obtained promising results.
To generate 3D volumes, 3D-R2N2 [2] and LSM [8] formulate multi-view 3D reconstruction as a sequence learning problem and use recurrent neural networks (RNNs) to fuse multiple feature maps extracted by a shared encoder from input images. The feature maps are incrementally refined when more views of an object are available. However, RNN-based methods suffer from three limitations. First, when given the same set of images in different orders, RNNs are unable to estimate the 3D shape of an object consistently due to permutation variance [22]. Second, due to the long-term memory loss of RNNs, the input images cannot be fully exploited to refine reconstruction results [14]. Last but not least, RNN-based methods are time-consuming since input images are processed sequentially without parallelization [7].
(Figure 2: An overview of the proposed Pix2Vox. The network recovers the shape of 3D objects from arbitrary (uncalibrated) single or multiple images. The reconstruction results can be refined when more input images are available. Note that the weights of the encoder and decoder are shared among all views.)
To address the issues mentioned above, we propose Pix2Vox, a novel framework for single-view and multi-view 3D reconstruction that contains four modules: encoder, decoder, context-aware fusion, and refiner. The encoder and decoder generate coarse 3D volumes from multiple input images in parallel, which eliminates the effect of the orders of input images and accelerates the computation. Then, the context-aware fusion module selects high-quality reconstructions from all coarse 3D volumes and generates a fused 3D volume, which fully exploits information of all input images without long-term memory loss. Finally, the refiner further corrects wrongly recovered parts of the fused 3D volumes to obtain a refined reconstruction. To achieve a good balance between accuracy and model size, we implement two versions of the proposed framework: Pix2Vox-F and Pix2Vox-A ( Figure 1).
The contributions can be summarized as follows:
• We present a unified framework for both single-view and multi-view 3D reconstruction, namely Pix2Vox. We equip Pix2Vox with a well-designed encoder, decoder, and refiner, which shows a powerful ability to handle 3D reconstruction in both synthetic and real-world images.
• We propose a context-aware fusion module to adaptively select high-quality reconstructions for each part from different coarse 3D volumes in parallel to produce a fused reconstruction of the whole object. To the best of our knowledge, this is the first time that context across multiple views has been exploited for 3D reconstruction.
• Experimental results on the ShapeNet [29] and Pascal 3D+ [31] datasets demonstrate that the proposed approaches outperform state-of-the-art methods in terms of both accuracy and efficiency. Additional experiments also show its strong generalization abilities in reconstructing unseen 3D objects.
The Method
Overview
The proposed Pix2Vox aims to reconstruct the 3D shape of an object from either single or multiple RGB images. The 3D shape of an object is represented by a 3D voxel grid, where 0 is an empty cell and 1 denotes an occupied cell. The key components of Pix2Vox are shown in Figure 2. First, the encoder produces feature maps from input images. Second, the decoder takes each feature map as input and generates a coarse 3D volume correspondingly. Third, single or multiple 3D volumes are forwarded to the context-aware fusion module, which adaptively selects high-quality reconstructions for each part from coarse 3D volumes to obtain a fused 3D volume. Finally, the refiner with skip-connections further refines the fused 3D volume to generate the final reconstruction result. Figure 3 shows the detailed architectures of Pix2Vox-F and Pix2Vox-A. The former involves much fewer parameters and lower computational complexity, while the latter has more parameters, which can construct more accurate 3D shapes but has higher computational complexity.
Network Architecture
Encoder
The encoder computes a set of features for the decoder to recover the 3D shape of the object. The first nine convolutional layers, along with the corresponding batch normalization layers and ReLU activations of a pre-trained VGG-16 [18], are used to extract a 512 × 28 × 28 feature tensor from a 224 × 224 × 3 image. This feature extraction is followed by three sets of 2D convolutional layers, batch normalization layers and ELU layers to embed semantic information into feature vectors. In Pix2Vox-F, the kernel size of the first convolutional layer is 1², while the kernel sizes of the other two are 3². The number of output channels of the convolutional layers starts with 512, decreases by half for the subsequent layer, and ends up with 128. In Pix2Vox-A, the kernel sizes of the three convolutional layers are 3², 3², and 1², respectively. The output channels of the three convolutional layers are 512, 512, and 256, respectively. After the second convolutional layer, there is a max pooling layer with kernel sizes of 3² and 4² in Pix2Vox-F and Pix2Vox-A, respectively. The feature vectors produced by Pix2Vox-F and Pix2Vox-A are of sizes 2048 and 16384, respectively.
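As a rough illustration of the encoder just described, the following PyTorch sketch (our own, not the authors' released code) wires the first nine VGG-16 convolutional layers to the three extra convolutional blocks, using the Pix2Vox-A kernel sizes and channel counts quoted above; the padding and pooling details are assumptions.

    import torch
    from torch import nn
    from torchvision import models

    class EncoderSketch(nn.Module):
        """Encoder sketch: first nine VGG-16 conv layers, then three conv blocks (Pix2Vox-A numbers)."""
        def __init__(self):
            super().__init__()
            vgg = models.vgg16_bn()  # pre-trained ImageNet weights would be loaded here
            # for torchvision's vgg16_bn, slice [:30] keeps the first nine conv layers (with BN/ReLU),
            # which map a 224 x 224 x 3 image to a 512 x 28 x 28 feature tensor
            self.backbone = nn.Sequential(*list(vgg.features.children())[:30])
            self.head = nn.Sequential(
                nn.Conv2d(512, 512, kernel_size=3, padding=1), nn.BatchNorm2d(512), nn.ELU(),
                nn.Conv2d(512, 512, kernel_size=3, padding=1), nn.BatchNorm2d(512), nn.ELU(),
                nn.MaxPool2d(kernel_size=4),
                nn.Conv2d(512, 256, kernel_size=1), nn.BatchNorm2d(256), nn.ELU(),
            )

        def forward(self, images):                 # images: (batch, 3, 224, 224)
            return self.head(self.backbone(images))

    features = EncoderSketch()(torch.rand(1, 3, 224, 224))
    print(features.shape)                          # per-view feature maps fed to the decoder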
Decoder
The decoder is responsible for transforming information of 2D feature maps into 3D volumes. There are five 3D transposed convolutional layers in both Pix2Vox-F and Pix2Vox-A. Specifically, the first four transposed convolutional layers have a kernel size of 4³, with a stride of 2 and padding of 1. There is an additional transposed convolutional layer with a bank of 1³ filters. Each transposed convolutional layer is followed by a batch normalization layer and a ReLU activation, except for the last layer, which is followed by a sigmoid function. In Pix2Vox-F, the numbers of output channels of the
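A minimal sketch of such a decoder is shown below; the kernel size, stride, and padding follow the text, while the intermediate channel widths and the reshaping of the feature vector into a small 3D grid are our assumptions (the corresponding sentence above is truncated in this copy).

    import torch
    from torch import nn

    class DecoderSketch(nn.Module):
        """Five 3D transposed convolutions turning a 2048-d feature vector into a 32^3 occupancy volume."""
        def __init__(self):
            super().__init__()
            def up(cin, cout):
                return nn.Sequential(
                    nn.ConvTranspose3d(cin, cout, kernel_size=4, stride=2, padding=1),
                    nn.BatchNorm3d(cout), nn.ReLU())
            self.body = nn.Sequential(
                up(256, 128), up(128, 64), up(64, 32), up(32, 8),   # 2^3 -> 4^3 -> 8^3 -> 16^3 -> 32^3
                nn.ConvTranspose3d(8, 1, kernel_size=1),            # final 1^3 transposed convolution
                nn.Sigmoid())

        def forward(self, features):                  # features: (batch, 2048)
            x = features.view(features.size(0), 256, 2, 2, 2)
            return self.body(x).squeeze(1)            # coarse volume: (batch, 32, 32, 32)

    print(DecoderSketch()(torch.rand(1, 2048)).shape)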
Context-aware Fusion
From different viewpoints, we can see different visible parts of an object. The reconstruction qualities of visible parts are much higher than those of invisible parts. Inspired by this observation, we propose a context-aware fusion module to adaptively select high-quality reconstruction for each part (e.g., table legs) from different coarse 3D volumes. The selected reconstructions are fused to generate a 3D volume of the whole object ( Figure 4).
As shown in Figure 5, given coarse 3D volumes and the corresponding context, the context-aware fusion module generates a score map for each coarse volume and then fuses them into one volume by the weighted summation of all coarse volumes according to their score maps. The spatial information of voxels is preserved in the context-aware fusion module, and thus Pix2Vox can utilize multi-view information to recover the structure of an object better.
Specifically, the context-aware fusion module generates the context c_r of the r-th coarse volume v^c_r by concatenating the output of the last two layers in the decoder. Then, the context scoring network generates a score m_r for the context of the r-th coarse volume. The context scoring network is composed of five sets of 3D convolutional layers, each of which has a kernel size of 3³ and padding of 1, followed by a batch normalization and a leaky ReLU activation. The numbers of output channels of the convolutional layers are 9, 16, 8, 4, and 1, respectively. The learned score m_r for context c_r is normalized across all learnt scores. We choose softmax as the normalization function. Therefore, the score s
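The scoring-and-fusion logic can be sketched as follows; the five 3D convolution widths follow the text, while the way the context tensor is assembled and its channel count are assumptions on our part.

    import torch
    from torch import nn

    class ContextAwareFusionSketch(nn.Module):
        """Score each coarse volume voxel-wise, softmax-normalize across views, fuse by weighted sum."""
        def __init__(self, context_channels=9):
            super().__init__()
            def block(cin, cout):
                return nn.Sequential(nn.Conv3d(cin, cout, kernel_size=3, padding=1),
                                     nn.BatchNorm3d(cout), nn.LeakyReLU(0.2))
            self.scoring = nn.Sequential(block(context_channels, 9), block(9, 16),
                                         block(16, 8), block(8, 4), block(4, 1))

        def forward(self, coarse_volumes, contexts):
            # coarse_volumes: (batch, views, D, D, D); contexts: (batch, views, C, D, D, D)
            b, n = coarse_volumes.shape[:2]
            scores = self.scoring(contexts.flatten(0, 1)).view(b, n, *coarse_volumes.shape[2:])
            weights = torch.softmax(scores, dim=1)          # normalize scores across the views
            return (weights * coarse_volumes).sum(dim=1)    # fused volume: (batch, D, D, D)

    fusion = ContextAwareFusionSketch()
    print(fusion(torch.rand(1, 3, 32, 32, 32), torch.rand(1, 3, 9, 32, 32, 32)).shape)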
Refiner
The refiner can be seen as a residual network, which aims to correct wrongly recovered parts of a 3D volume. It follows the idea of a 3D encoder-decoder with the U-net connections [16]. With the help of the U-net connections between the encoder and decoder, the local structure in the fused volume can be preserved. Specifically, the encoder has three 3D convolutional layers, each of which has a bank of 4³ filters with padding of 2, followed by a batch normalization layer, a leaky ReLU activation and a max pooling layer with a kernel size of 2³. The numbers of output channels of the convolutional layers are 32, 64, and 128, respectively. The encoder is finally followed by two fully connected layers with dimensions of 2048 and 8192. The decoder consists of three transposed convolutional layers, each of which has a bank of 4³ filters with padding of 2 and stride of 1. Except for the last transposed convolutional layer, which is followed by a sigmoid function, the other layers are followed by a batch normalization layer and a ReLU activation.
Table 1: Single-view reconstruction on ShapeNet compared using Intersection-over-Union (IoU). The best number for each category is highlighted in bold. The numbers in the parenthesis are results trained and tested with the released code. Note that DRC [21] is trained/tested per category and PSGN [4] takes object masks as an additional input.
Loss Function
The loss function of the network is defined as the mean value of the voxel-wise binary cross entropies between the reconstructed object and the ground truth. More formally, it can be defined as
\ell = \frac{1}{N} \sum_{i=1}^{N} \left[ gt_i \log(p_i) + (1 - gt_i) \log(1 - p_i) \right] \quad (3)
where N denotes the number of voxels in the ground truth. p i and gt i represent the predicted occupancy and the corresponding ground truth. The smaller the value is, the closer the prediction is to the ground truth.
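For concreteness, the voxel-wise binary cross entropy of Eq. (3) can be computed with PyTorch's built-in as below (note that the built-in returns the usual negated form, which is what is actually minimized); shapes are illustrative.

    import torch
    import torch.nn.functional as F

    def reconstruction_loss(pred, gt):
        """Mean voxel-wise binary cross entropy between predicted occupancies and the ground truth."""
        return F.binary_cross_entropy(pred, gt)

    gt = (torch.rand(2, 32, 32, 32) > 0.5).float()
    near_perfect = gt.clamp(1e-6, 1 - 1e-6)          # a (near-)perfect prediction
    print(reconstruction_loss(near_perfect, gt))     # close to zero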
Experiments
Datasets and Metrics
Datasets We evaluate the proposed Pix2Vox-F and Pix2Vox-A on both synthetic images of objects from the ShapeNet [29] dataset and real images from the Pascal 3D+ [31] dataset. More specifically, we use a subset of ShapeNet consisting of 13 major categories and 44k 3D models following the settings of [2]. As for Pascal 3D+, there are 12 categories and 22k models. Evaluation Metrics To evaluate the quality of the output from the proposed methods, we binarize the probabilities at a fixed threshold of 0.4 and use intersection over union (IoU) as the similarity measure. More formally,
IoU = \frac{\sum_{i,j,k} I(p_{(i,j,k)} > t) \, I(gt_{(i,j,k)})}{\sum_{i,j,k} I\left[ I(p_{(i,j,k)} > t) + I(gt_{(i,j,k)}) \right]} \quad (4)
where p (i,j,k) and gt (i,j,k) represent the predicted occupancy probability and the ground truth at (i, j, k), respectively. I(·) is an indicator function and t denotes a voxelization threshold. Higher IoU values indicate better reconstruction results.
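Eq. (4) amounts to the standard voxel IoU after binarizing the prediction at the threshold t; a small sketch:

    import torch

    def voxel_iou(pred, gt, threshold=0.4):
        """Intersection-over-Union between a binarized predicted volume and a binary ground truth (Eq. 4)."""
        occupied = pred > threshold
        gt = gt.bool()
        intersection = (occupied & gt).float().sum()
        union = (occupied | gt).float().sum()
        return (intersection / union).item()

    print(voxel_iou(torch.rand(32, 32, 32), torch.rand(32, 32, 32) > 0.5))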
Implementation Details
We use 224 × 224 RGB images as input to train the proposed methods with a batch size of 64. The output voxelized reconstruction is 32³ in size. We implement our network in PyTorch 1 and train both Pix2Vox-F and Pix2Vox-A using an Adam optimizer [9] with a β1 of 0.9 and a β2 of 0.999. The initial learning rate is set to 0.001 and decayed by a factor of 2 after 150 epochs. The optimization is set to stop after 250 epochs.
Reconstruction of Synthetic Images
To evaluate the performance of the proposed methods in handling synthetic images, we compare our methods against several state-of-the-art methods on the ShapeNet testing set. Table 1 shows the performance of single-view reconstruction, while Table 2 shows the mean IoU scores of multi-view reconstruction with different numbers of views.
Figure 6: Single-view (left) and multi-view (right) reconstructions on the ShapeNet testing set. GT represents the ground truth of the 3D object. Note that DRC [21] is trained/tested per category.
The single-view reconstruction results of Pix2Vox-F and Pix2Vox-A significantly outperform other methods (Table 1). Pix2Vox-A increases IoU over 3D-R2N2 by 18%. In multi-view reconstruction, Pix2Vox-A consistently outperforms 3D-R2N2 for all numbers of views (Table 2). The IoU of Pix2Vox-A is 13% higher than that of 3D-R2N2. Figure 6 shows several reconstruction examples from the ShapeNet testing set. Both Pix2Vox-F and Pix2Vox-A are able to recover the thin parts of objects, such as lamps and table legs. Compared with Pix2Vox-F, we also observe that the higher dimensional feature maps in Pix2Vox-A do contribute to 3D reconstruction. Moreover, in multi-view reconstruction, both Pix2Vox-A and Pix2Vox-F produce better results than 3D-R2N2.
Reconstruction of Real-world Images
To evaluate the performance of the proposed methods on real-world images, we test our methods for single-view reconstruction on the Pascal 3D+ dataset. First, the images are cropped according to the bounding box of the largest object within the image. Then, these cropped images are rescaled to the input size of the reconstruction network.
The mean IoU of each category is reported in Table 3. Both Pix2Vox-F and Pix2Vox-A significantly outperform the competing approaches on the Pascal 3D+ testing set. Compared with other methods, our methods are able to better reconstruct the overall shape and capture finer details from the input images. The qualitative analysis is given in Figure 7, which indicates that the proposed methods are more effective in handling real-world scenarios.
Reconstruction of Unseen Objects
In order to test how well our methods can generalize to unseen objects, we conduct additional experiments on ShapeNet. More specifically, all models are trained on the 13 major categories of ShapeNet and tested on the remaining 44 categories of ShapeNet. All pretrained models have never "seen" either the objects in these categories or the labels of objects before. The reconstruction results of 3D-R2N2 are obtained with the released pretrained model. Several reconstruction results are presented in Figure 8. The reconstruction IoU of 3D-R2N2 on unseen objects is 0.119, while Pix2Vox-F and Pix2Vox-A are 0.209 and 0.227, respectively. Experimental results demonstrate that 3D-R2N2 can hardly recover the shape of unseen objects. In contrast, Pix2Vox-F and Pix2Vox-A show satisfactory generalization abilities to unseen objects.
Figure 7: Reconstructions on the Pascal 3D+ testing set from single-view images. GT represents the ground truth of the 3D object. Note that DRC [21] is trained/tested per category.
Ablation Study
In this section, we validate the context-aware fusion and the refiner by ablation studies. Context-aware fusion To quantitatively evaluate the context-aware fusion, we replace the context-aware fusion in Pix2Vox-A with the average fusion, where the fused voxel v^f is calculated as the element-wise average of the n coarse volumes, i.e., v^f = (1/n) ∑_{r=1}^{n} v^c_r.
Figure 8: Reconstruction on unseen objects of ShapeNet from 5-view images. GT represents the ground truth of the 3D object.
Table 2 shows that the context-aware fusion performs better than the average fusion in selecting the high-quality reconstructions for each part from different coarse volumes. Refiner Pix2Vox-A uses a refiner to further refine the fused 3D volume. For single-view reconstruction on ShapeNet, the IoU of Pix2Vox-A is 0.658. In contrast, the IoU of Pix2Vox-A without the refiner decreases to 0.643. Removing the refiner causes considerable degeneration of the reconstruction accuracy. However, as the number of views increases, the effect of the refiner becomes weaker. The reconstruction results of the two networks (with/without the refiner) are almost the same if the number of the input images is more than 3.
The ablation studies indicate that both the context-aware fusion and the refiner play important roles in our framework for the performance improvements against previous state-of-the-art methods. Table 4 and Figure 1 show the numbers of parameters of different methods. There is an 80% reduction in parameters in Pix2Vox-F compared to 3D-R2N2.
Space and Time Complexity
The running times are obtained on the same PC with an NVIDIA GTX 1080 Ti GPU. For more precise timing, we exclude the reading and writing time when evaluating the forward and backward inference time. Both Pix2Vox-F and Pix2Vox-A are about 8 times faster in forward inference than 3D-R2N2 in single-view reconstruction. In backward inference, Pix2Vox-F and Pix2Vox-A are about 24 and 4 times faster than 3D-R2N2, respectively.
Discussion
To give a detailed analysis of the context-aware fusion module, we visualized the score maps of three coarse volumes when reconstructing the 3D shape of a table from 3-view images, as shown in Figure 4. The reconstruction of the table top on the right is clearly of low quality, and the score of the corresponding part is lower than those in the other two coarse volumes. The fused 3D volume is obtained by combining the selected high-quality reconstruction parts, where bad reconstructions can be eliminated effectively by our scoring scheme.
Although our methods outperform state-of-the-art methods, the reconstruction results are still of low resolution. We can further improve the reconstruction resolution in future work by introducing GANs [6].
Conclusion and Future Works
In this paper, we propose a unified framework for both single-view and multi-view 3D reconstruction, named Pix2Vox. Compared with existing methods that fuse deep features generated by a shared encoder, the proposed method fuses multiple coarse volumes produced by a decoder and preserves multi-view spatial constraints better. Quantitative and qualitative evaluation for both single-view and multi-view reconstruction on the ShapeNet and Pascal 3D+ benchmarks indicate that the proposed methods outperform state-of-the-arts by a large margin. Pix2Vox is computationally efficient, which is 24 times faster than 3D-R2N2 in terms of backward inference time. In future work, we will work on improving the resolution of the reconstructed 3D objects. In addition, we also plan to extend Pix2Vox to reconstruct 3D objects from RGB-D images. | 2,898 |
1901.11153 | 2911758669 | Recovering the 3D representation of an object from single-view or multi-view RGB images by deep neural networks has attracted increasing attention in the past few years. Several mainstream works (e.g., 3D-R2N2) use recurrent neural networks (RNNs) to fuse multiple feature maps extracted from input images sequentially. However, when given the same set of input images with different orders, RNN-based approaches are unable to produce consistent reconstruction results. Moreover, due to long-term memory loss, RNNs cannot fully exploit input images to refine reconstruction results. To solve these problems, we propose a novel framework for single-view and multi-view 3D reconstruction, named Pix2Vox. By using a well-designed encoder-decoder, it generates a coarse 3D volume from each input image. Then, a context-aware fusion module is introduced to adaptively select high-quality reconstructions for each part (e.g., table legs) from different coarse 3D volumes to obtain a fused 3D volume. Finally, a refiner further refines the fused 3D volume to generate the final output. Experimental results on the ShapeNet and Pix3D benchmarks indicate that the proposed Pix2Vox outperforms state-of-the-arts by a large margin. Furthermore, the proposed method is 24 times faster than 3D-R2N2 in terms of backward inference time. The experiments on ShapeNet unseen 3D categories have shown the superior generalization abilities of our method. | Powered by large-scale datasets of 3D CAD models (e.g., ShapeNet @cite_24 ), deep-learning-based methods have been proposed for 3D reconstruction. Both 3D-R2N2 @cite_10 and LSM @cite_6 use RNNs to infer 3D shape from single or multiple input images and achieve impressive results. However, RNNs are time-consuming and permutation-variant, which produce inconsistent reconstruction results. 3DensiNet @cite_22 uses max pooling to aggregate the features from multiple images. However, max pooling only extracts maximum values from features, which may ignore other valuable features that are useful for 3D reconstruction. | {
"abstract": [
"3D shape is a crucial but heavily underutilized cue in today's computer vision systems, mostly due to the lack of a good generic shape representation. With the recent availability of inexpensive 2.5D depth sensors (e.g. Microsoft Kinect), it is becoming increasingly important to have a powerful 3D shape representation in the loop. Apart from category recognition, recovering full 3D shapes from view-based 2.5D depth maps is also a critical part of visual understanding. To this end, we propose to represent a geometric 3D shape as a probability distribution of binary variables on a 3D voxel grid, using a Convolutional Deep Belief Network. Our model, 3D ShapeNets, learns the distribution of complex 3D shapes across different object categories and arbitrary poses from raw CAD data, and discovers hierarchical compositional part representations automatically. It naturally supports joint object recognition and shape completion from 2.5D depth maps, and it enables active object recognition through view planning. To train our 3D deep learning model, we construct ModelNet -- a large-scale 3D CAD model dataset. Extensive experiments show that our 3D deep representation enables significant performance improvement over the-state-of-the-arts in a variety of tasks.",
"Inspired by the recent success of methods that employ shape priors to achieve robust 3D reconstructions, we propose a novel recurrent neural network architecture that we call the 3D Recurrent Reconstruction Neural Network (3D-R2N2). The network learns a mapping from images of objects to their underlying 3D shapes from a large collection of synthetic data [13]. Our network takes in one or more images of an object instance from arbitrary viewpoints and outputs a reconstruction of the object in the form of a 3D occupancy grid. Unlike most of the previous works, our network does not require any image annotations or object class labels for training or testing. Our extensive experimental analysis shows that our reconstruction framework (i) outperforms the state-of-the-art methods for single view reconstruction, and (ii) enables the 3D reconstruction of objects in situations when traditional SFM SLAM methods fail (because of lack of texture and or wide baseline).",
"3D volumetric object generation prediction from single 2D image is a quite challenging but meaningful task in 3D visual computing. In this paper, we propose a novel neural network architecture, named \"3DensiNet\", which uses density heat-map as an intermediate supervision tool for 2D-to-3D transformation. Specifically, we firstly present a 2D density heat-map to 3D volumetric object encoding-decoding network, which outperforms classical 3D autoencoder. Then we show that using 2D image to predict its density heat-map via a 2D to 2D encoding-decoding network is feasible. In addition, we leverage adversarial loss to fine tune our network, which improves the generated predicted 3D voxel objects to be more similar to the ground truth voxel object. Experimental results on 3D volumetric prediction from 2D images demonstrates superior performance of 3DensiNet over other state-of-the-art techniques in handling 3D volumetric object generation prediction from single 2D image.",
"We present a learnt system for multi-view stereopsis. In contrast to recent learning based methods for 3D reconstruction, we leverage the underlying 3D geometry of the problem through feature projection and unprojection along viewing rays. By formulating these operations in a differentiable manner, we are able to learn the system end-to-end for the task of metric 3D reconstruction. End-to-end learning allows us to jointly reason about shape priors while conforming to geometric constraints, enabling reconstruction from much fewer images (even a single image) than required by classical approaches as well as completion of unseen surfaces. We thoroughly evaluate our approach on the ShapeNet dataset and demonstrate the benefits over classical approaches and recent learning based methods."
],
"cite_N": [
"@cite_24",
"@cite_10",
"@cite_22",
"@cite_6"
],
"mid": [
"2951755740",
"2342277278",
"2765901044",
"2963966978"
]
} | Pix2Vox: Context-aware 3D Reconstruction from Single and Multi-view Images | 3D reconstruction is an important problem in robotics, CAD, virtual reality and augmented reality. Traditional methods, such as Structure from Motion (SfM) [13] and Simultaneous Localization and Mapping (SLAM) [5], match image features across views. However, establishing feature correspondences becomes extremely difficult when multiple viewpoints are separated by a large margin due to local appearance changes or self-occlusions [11]. To overcome these limitations, several deep learning based approaches, including 3D-R2N2 [2], LSM [8], and 3DensiNet [23], have been proposed to recover the 3D shape of an object and obtained promising results.
To generate 3D volumes, 3D-R2N2 [2] and LSM [8] formulate multi-view 3D reconstruction as a sequence learning problem and use recurrent neural networks (RNNs) to fuse multiple feature maps extracted by a shared encoder from input images. The feature maps are incrementally refined when more views of an object are available. However, RNN-based methods suffer from three limitations. First, when given the same set of images with different orders, RNNs are unable to estimate the 3D shape of an object consistently due to permutation variance [22]. Second, due to long-term memory loss of RNNs, the input images cannot be fully exploited to refine reconstruction results [14]. Last but not least, RNN-based methods are time-consuming since input images are processed sequentially without parallelization [7].
Figure 2: An overview of the proposed Pix2Vox. The network recovers the shape of 3D objects from arbitrary (uncalibrated) single or multiple images. The reconstruction results can be refined when more input images are available. Note that the weights of the encoder and decoder are shared among all views.
To address the issues mentioned above, we propose Pix2Vox, a novel framework for single-view and multi-view 3D reconstruction that contains four modules: encoder, decoder, context-aware fusion, and refiner. The encoder and decoder generate coarse 3D volumes from multiple input images in parallel, which eliminates the effect of the orders of input images and accelerates the computation. Then, the context-aware fusion module selects high-quality reconstructions from all coarse 3D volumes and generates a fused 3D volume, which fully exploits information of all input images without long-term memory loss. Finally, the refiner further corrects wrongly recovered parts of the fused 3D volumes to obtain a refined reconstruction. To achieve a good balance between accuracy and model size, we implement two versions of the proposed framework: Pix2Vox-F and Pix2Vox-A ( Figure 1).
The contributions can be summarized as follows:
• We present a unified framework for both single-view and multi-view 3D reconstruction, namely Pix2Vox. We equip Pix2Vox with a well-designed encoder, decoder, and refiner, which shows a powerful ability to handle 3D reconstruction in both synthetic and real-world images.
• We propose a context-aware fusion module to adaptively select high-quality reconstructions for each part from different coarse 3D volumes in parallel to produce a fused reconstruction of the whole object. To the best of our knowledge, this is the first time that context across multiple views has been exploited for 3D reconstruction.
• Experimental results on the ShapeNet [29] and Pascal 3D+ [31] datasets demonstrate that the proposed approaches outperform state-of-the-art methods in terms of both accuracy and efficiency. Additional experiments also show its strong generalization abilities in reconstructing unseen 3D objects.
The Method
Overview
The proposed Pix2Vox aims to reconstruct the 3D shape of an object from either single or multiple RGB images. The 3D shape of an object is represented by a 3D voxel grid, where 0 is an empty cell and 1 denotes an occupied cell. The key components of Pix2Vox are shown in Figure 2. First, the encoder produces feature maps from input images. Second, the decoder takes each feature map as input and generates a coarse 3D volume correspondingly. Third, single or multiple 3D volumes are forwarded to the context-aware fusion module, which adaptively selects high-quality reconstructions for each part from coarse 3D volumes to obtain a fused 3D volume. Finally, the refiner with skip-connections further refines the fused 3D volume to generate the final reconstruction result. Figure 3 shows the detailed architectures of Pix2Vox-F and Pix2Vox-A. The former involves much fewer parameters and lower computational complexity, while the latter has more parameters, which can construct more accurate 3D shapes but has higher computational complexity.
Network Architecture
Encoder
The encoder computes a set of features for the decoder to recover the 3D shape of the object. The first nine convolutional layers, along with the corresponding batch normalization layers and ReLU activations of a pre-trained VGG-16 [18], are used to extract a 512 × 28 × 28 feature tensor from a 224 × 224 × 3 image. This feature extraction is followed by three sets of 2D convolutional layers, batch normalization layers and ELU layers to embed semantic information into feature vectors. In Pix2Vox-F, the kernel size of the first convolutional layer is 1², while the kernel sizes of the other two are 3². The number of output channels of the convolutional layers starts with 512, decreases by half for the subsequent layer, and ends up with 128. In Pix2Vox-A, the kernel sizes of the three convolutional layers are 3², 3², and 1², respectively. The output channels of the three convolutional layers are 512, 512, and 256, respectively. After the second convolutional layer, there is a max pooling layer with kernel sizes of 3² and 4² in Pix2Vox-F and Pix2Vox-A, respectively. The feature vectors produced by Pix2Vox-F and Pix2Vox-A are of sizes 2048 and 16384, respectively.
Decoder
The decoder is responsible for transforming information of 2D feature maps into 3D volumes. There are five 3D transposed convolutional layers in both Pix2Vox-F and Pix2Vox-A. Specifically, the first four transposed convolutional layers have a kernel size of 4³, with a stride of 2 and padding of 1. There is an additional transposed convolutional layer with a bank of 1³ filters. Each transposed convolutional layer is followed by a batch normalization layer and a ReLU activation, except for the last layer, which is followed by a sigmoid function. In Pix2Vox-F, the numbers of output channels of the
Context-aware Fusion
From different viewpoints, we can see different visible parts of an object. The reconstruction qualities of visible parts are much higher than those of invisible parts. Inspired by this observation, we propose a context-aware fusion module to adaptively select high-quality reconstruction for each part (e.g., table legs) from different coarse 3D volumes. The selected reconstructions are fused to generate a 3D volume of the whole object ( Figure 4).
As shown in Figure 5, given coarse 3D volumes and the corresponding context, the context-aware fusion module generates a score map for each coarse volume and then fuses them into one volume by the weighted summation of all coarse volumes according to their score maps. The spatial information of voxels is preserved in the context-aware fusion module, and thus Pix2Vox can utilize multi-view information to recover the structure of an object better.
Specifically, the context-aware fusion module generates the context c_r of the r-th coarse volume v^c_r by concatenating the output of the last two layers in the decoder. Then, the context scoring network generates a score m_r for the context of the r-th coarse volume. The context scoring network is composed of five sets of 3D convolutional layers, each of which has a kernel size of 3³ and padding of 1, followed by a batch normalization and a leaky ReLU activation. The numbers of output channels of the convolutional layers are 9, 16, 8, 4, and 1, respectively. The learned score m_r for context c_r is normalized across all learnt scores. We choose softmax as the normalization function. Therefore, the score s
Refiner
The refiner can be seen as a residual network, which aims to correct wrongly recovered parts of a 3D volume. It follows the idea of a 3D encoder-decoder with the U-net connections [16]. With the help of the U-net connections between the encoder and decoder, the local structure in the fused volume can be preserved. Specifically, the encoder has three 3D convolutional layers, each of which has a bank of 4³ filters with padding of 2, followed by a batch normalization layer, a leaky ReLU activation and a max pooling layer with a kernel size of 2³. The numbers of output channels of the convolutional layers are 32, 64, and 128, respectively. The encoder is finally followed by two fully connected layers with dimensions of 2048 and 8192. The decoder consists of three transposed convolutional layers, each of which has a bank of 4³ filters with padding of 2 and stride of 1. Except for the last transposed convolutional layer, which is followed by a sigmoid function, the other layers are followed by a batch normalization layer and a ReLU activation.
Table 1: Single-view reconstruction on ShapeNet compared using Intersection-over-Union (IoU). The best number for each category is highlighted in bold. The numbers in the parenthesis are results trained and tested with the released code. Note that DRC [21] is trained/tested per category and PSGN [4] takes object masks as an additional input.
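As a rough sketch of such a refiner (ours, not the released implementation), the block below uses the layer widths and fully connected sizes quoted in the text; the exact skip wiring (here an element-wise addition) and activation placement are assumptions.

    import torch
    from torch import nn

    class RefinerSketch(nn.Module):
        """3D encoder-decoder with U-net style skips that corrects a fused 32^3 volume."""
        def __init__(self):
            super().__init__()
            def down(cin, cout):
                return nn.Sequential(nn.Conv3d(cin, cout, kernel_size=4, padding=2),
                                     nn.BatchNorm3d(cout), nn.LeakyReLU(0.2), nn.MaxPool3d(2))
            self.enc1, self.enc2, self.enc3 = down(1, 32), down(32, 64), down(64, 128)
            self.fc = nn.Sequential(nn.Linear(8192, 2048), nn.ReLU(),
                                    nn.Linear(2048, 8192), nn.ReLU())
            def up(cin, cout, act):
                return nn.Sequential(nn.ConvTranspose3d(cin, cout, kernel_size=4, stride=2, padding=1), act)
            self.dec3, self.dec2, self.dec1 = up(128, 64, nn.ReLU()), up(64, 32, nn.ReLU()), up(32, 1, nn.Sigmoid())

        def forward(self, volume):                     # volume: (batch, 32, 32, 32)
            e1 = self.enc1(volume.unsqueeze(1))        # (batch, 32, 16, 16, 16)
            e2 = self.enc2(e1)                         # (batch, 64, 8, 8, 8)
            e3 = self.enc3(e2)                         # (batch, 128, 4, 4, 4)
            b = self.fc(e3.flatten(1)).view_as(e3)     # fully connected bottleneck
            d3 = self.dec3(b + e3)                     # skip connections by addition (an assumption)
            d2 = self.dec2(d3 + e2)
            return self.dec1(d2 + e1).squeeze(1)       # refined volume: (batch, 32, 32, 32)

    print(RefinerSketch()(torch.rand(1, 32, 32, 32)).shape)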
Loss Function
The loss function of the network is defined as the mean value of the voxel-wise binary cross entropies between the reconstructed object and the ground truth. More formally, it can be defined as
\ell = \frac{1}{N} \sum_{i=1}^{N} \left[ gt_i \log(p_i) + (1 - gt_i) \log(1 - p_i) \right] \quad (3)
where N denotes the number of voxels in the ground truth. p i and gt i represent the predicted occupancy and the corresponding ground truth. The smaller the value is, the closer the prediction is to the ground truth.
Experiments
Datasets and Metrics
Datasets We evaluate the proposed Pix2Vox-F and Pix2Vox-A on both synthetic images of objects from the ShapeNet [29] dataset and real images from the Pascal 3D+ [31] dataset. More specifically, we use a subset of ShapeNet consisting of 13 major categories and 44k 3D models following the settings of [2]. As for Pascal 3D+, there are 12 categories and 22k models. Evaluation Metrics To evaluate the quality of the output from the proposed methods, we binarize the probabilities at a fixed threshold of 0.4 and use intersection over union (IoU) as the similarity measure. More formally,
IoU = \frac{\sum_{i,j,k} I(p_{(i,j,k)} > t) \, I(gt_{(i,j,k)})}{\sum_{i,j,k} I\left[ I(p_{(i,j,k)} > t) + I(gt_{(i,j,k)}) \right]} \quad (4)
where p (i,j,k) and gt (i,j,k) represent the predicted occupancy probability and the ground truth at (i, j, k), respectively. I(·) is an indicator function and t denotes a voxelization threshold. Higher IoU values indicate better reconstruction results.
Implementation Details
We use 224 × 224 RGB images as input to train the proposed methods with a batch size of 64. The output voxelized reconstruction is 32³ in size. We implement our network in PyTorch 1 and train both Pix2Vox-F and Pix2Vox-A using an Adam optimizer [9] with a β1 of 0.9 and a β2 of 0.999. The initial learning rate is set to 0.001 and decayed by a factor of 2 after 150 epochs. The optimization is set to stop after 250 epochs.
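A hypothetical training-loop skeleton matching these settings is sketched below; the model stand-in and the loop body are placeholders, not the authors' code.

    import torch

    model = torch.nn.Linear(8, 8)                       # stand-in for the full Pix2Vox network
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, betas=(0.9, 0.999))
    # learning rate halved after 150 epochs, training stopped after 250 epochs
    scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[150], gamma=0.5)

    for epoch in range(250):
        # ... iterate over 224x224 RGB batches of size 64 and minimize the voxel-wise BCE ...
        optimizer.step()
        scheduler.step()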
Reconstruction of Synthetic Images
To evaluate the performance of the proposed methods in handling synthetic images, we compare our methods against several state-of-the-art methods on the ShapeNet testing set. Table 1 shows the performance of single-view reconstruction, while Table 2 shows the mean IoU scores of multi-view reconstruction with different numbers of views.
Figure 6: Single-view (left) and multi-view (right) reconstructions on the ShapeNet testing set. GT represents the ground truth of the 3D object. Note that DRC [21] is trained/tested per category.
The single-view reconstruction results of Pix2Vox-F and Pix2Vox-A significantly outperform other methods (Table 1). Pix2Vox-A increases IoU over 3D-R2N2 by 18%. In multi-view reconstruction, Pix2Vox-A consistently outperforms 3D-R2N2 for all numbers of views (Table 2). The IoU of Pix2Vox-A is 13% higher than that of 3D-R2N2. Figure 6 shows several reconstruction examples from the ShapeNet testing set. Both Pix2Vox-F and Pix2Vox-A are able to recover the thin parts of objects, such as lamps and table legs. Compared with Pix2Vox-F, we also observe that the higher dimensional feature maps in Pix2Vox-A do contribute to 3D reconstruction. Moreover, in multi-view reconstruction, both Pix2Vox-A and Pix2Vox-F produce better results than 3D-R2N2.
Reconstruction of Real-world Images
To evaluate the performance of the proposed methods on real-world images, we test our methods for single-view reconstruction on the Pascal 3D+ dataset. First, the images are cropped according to the bounding box of the largest object within the image. Then, these cropped images are rescaled to the input size of the reconstruction network.
The mean IoU of each category is reported in Table 3. Both Pix2Vox-F and Pix2Vox-A significantly outperform the competing approaches on the Pascal 3D+ testing set. Compared with other methods, our methods are able to better reconstruct the overall shape and capture finer details from the input images. The qualitative analysis is given in Figure 7, which indicates that the proposed methods are more effective in handling real-world scenarios.
Reconstruction of Unseen Objects
In order to test how well our methods can generalize to unseen objects, we conduct additional experiments on ShapeNet. More specifically, all models are trained on the 13 major categories of ShapeNet and tested on the remaining 44 categories of ShapeNet. All pretrained models have never "seen" either the objects in these categories or the labels of objects before. The reconstruction results of 3D-R2N2 are obtained with the released pretrained model. Several reconstruction results are presented in Figure 8. The reconstruction IoU of 3D-R2N2 on unseen objects is 0.119, while Pix2Vox-F and Pix2Vox-A are 0.209 and 0.227, respectively. Experimental results demonstrate that 3D-R2N2 can hardly recover the shape of unseen objects. In contrast, Pix2Vox-F and Pix2Vox-A show satisfactory generalization abilities to unseen objects.
Figure 7: Reconstructions on the Pascal 3D+ testing set from single-view images. GT represents the ground truth of the 3D object. Note that DRC [21] is trained/tested per category.
Ablation Study
In this section, we validate the context-aware fusion and the refiner by ablation studies. Context-aware fusion To quantitatively evaluate the context-aware fusion, we replace the context-aware fusion in Pix2Vox-A with the average fusion, where the fused voxel v^f is calculated as the element-wise average of the n coarse volumes, i.e., v^f = (1/n) ∑_{r=1}^{n} v^c_r.
Figure 8: Reconstruction on unseen objects of ShapeNet from 5-view images. GT represents the ground truth of the 3D object.
Table 2 shows that the context-aware fusion performs better than the average fusion in selecting the high-quality reconstructions for each part from different coarse volumes. Refiner Pix2Vox-A uses a refiner to further refine the fused 3D volume. For single-view reconstruction on ShapeNet, the IoU of Pix2Vox-A is 0.658. In contrast, the IoU of Pix2Vox-A without the refiner decreases to 0.643. Removing the refiner causes considerable degeneration of the reconstruction accuracy. However, as the number of views increases, the effect of the refiner becomes weaker. The reconstruction results of the two networks (with/without the refiner) are almost the same if the number of the input images is more than 3.
The ablation studies indicate that both the context-aware fusion and the refiner play important roles in our framework for the performance improvements against previous state-of-the-art methods. Table 4 and Figure 1 show the numbers of parameters of different methods. There is an 80% reduction in parameters in Pix2Vox-F compared to 3D-R2N2.
Space and Time Complexity
The running times are obtained on the same PC with an NVIDIA GTX 1080 Ti GPU. For more precise timing, we exclude the reading and writing time when evaluating the forward and backward inference time. Both Pix2Vox-F and Pix2Vox-A are about 8 times faster in forward inference than 3D-R2N2 in single-view reconstruction. In backward inference, Pix2Vox-F and Pix2Vox-A are about 24 and 4 times faster than 3D-R2N2, respectively.
Discussion
To give a detailed analysis of the context-aware fusion module, we visualized the score maps of three coarse volumes when reconstructing the 3D shape of a table from 3-view images, as shown in Figure 4. The reconstruction of the table top on the right is clearly of low quality, and the score of the corresponding part is lower than those in the other two coarse volumes. The fused 3D volume is obtained by combining the selected high-quality reconstruction parts, where bad reconstructions can be eliminated effectively by our scoring scheme.
Although our methods outperform state-of-the-art methods, the reconstruction results are still of low resolution. We can further improve the reconstruction resolution in future work by introducing GANs [6].
Conclusion and Future Works
In this paper, we propose a unified framework for both single-view and multi-view 3D reconstruction, named Pix2Vox. Compared with existing methods that fuse deep features generated by a shared encoder, the proposed method fuses multiple coarse volumes produced by a decoder and preserves multi-view spatial constraints better. Quantitative and qualitative evaluation for both single-view and multi-view reconstruction on the ShapeNet and Pascal 3D+ benchmarks indicate that the proposed methods outperform state-of-the-arts by a large margin. Pix2Vox is computationally efficient, which is 24 times faster than 3D-R2N2 in terms of backward inference time. In future work, we will work on improving the resolution of the reconstructed 3D objects. In addition, we also plan to extend Pix2Vox to reconstruct 3D objects from RGB-D images. | 2,898 |
1907.10453 | 2962723636 | Link streams model interactions over time in a wide range of fields. Under this model, the challenge is to mine efficiently both temporal and topological structures. Community detection and change point detection are one of the most powerful tools to analyze such evolving interactions. In this paper, we build on both to detect stable community structures by identifying change points within meaningful communities. Unlike existing dynamic community detection algorithms, the proposed method is able to discover stable communities efficiently at multiple temporal scales. We test the effectiveness of our method on synthetic networks, and on high-resolution time-varying networks of contacts drawn from real social networks. | The problem of detecting communities in dynamic networks has attracted a lot of attention in recent years, with various approaches tackling different aspects of the problem, see @cite_13 for a recent survey. Most of these methods consider that the studied dynamic networks are represented as sequences of snapshots, with each snapshot being a well formed graph with meaningful community structure, see for instance @cite_7 @cite_11 . Some other methods work with interval graphs, and update the community structure at each network change, e.g., @cite_6 @cite_12 . However, all those methods are not adapted to deal with link streams, for which the network is usually not well formed at any given time. Using them on such a network would require to first aggregate the links of the stream by choosing an arbitrarily temporal scale (aggregation window). | {
"abstract": [
"Network science is an interdisciplinary endeavor, with methods and applications drawn from across the natural, social, and information sciences. A prominent problem in network science is the algorithmic detection of tightly connected groups of nodes known as communities. We developed a generalized framework of network quality functions that allowed us to study the community structure of arbitrary multislice networks, which are combinations of individual networks coupled through links that connect each node in one network slice to itself in other slices. This framework allows studies of community structure in a general setting encompassing networks that evolve over time, have multiple types of links (multiplexity), and have multiple scales.",
"Community discovery has emerged during the last decade as one of the most challenging problems in social network analysis. Many algorithms have been proposed to find communities on static networks, i.e. networks which do not change in time. However, social networks are dynamic realities (e.g. call graphs, online social networks): in such scenarios static community discovery fails to identify a partition of the graph that is semantically consistent with the temporal information expressed by the data. In this work we propose Tiles, an algorithm that extracts overlapping communities and tracks their evolution in time following an online iterative procedure. Our algorithm operates following a domino effect strategy, dynamically recomputing nodes community memberships whenever a new interaction takes place. We compare Tiles with state-of-the-art community detection algorithms on both synthetic and real world networks having annotated community structure: our experiments show that the proposed approach is able to guarantee lower execution times and better correspondence with the ground truth communities than its competitors. Moreover, we illustrate the specifics of the proposed approach by discussing the properties of identified communities it is able to identify.",
"Several research studies have shown that complex networks modeling real-world phenomena are characterized by striking properties: (i) they are organized according to community structure, and (ii) their structure evolves with time. Many researchers have worked on methods that can efficiently unveil substructures in complex networks, giving birth to the field of community discovery. A novel and fascinating problem started capturing researcher interest recently: the identification of evolving communities. Dynamic networks can be used to model the evolution of a system: nodes and edges are mutable, and their presence, or absence, deeply impacts the community structure that composes them. This survey aims to present the distinctive features and challenges of dynamic community discovery and propose a classification of published approaches. As a “user manual,” this work organizes state-of-the-art methodologies into a taxonomy, based on their rationale, and their specific instantiation. Given a definition of network dynamics, desired community characteristics, and analytical needs, this survey will support researchers to identify the set of approaches that best fit their needs. The proposed classification could also help researchers choose in which direction to orient their future research.",
"Abstract Community structure is one of the most prominent features of complex networks. Community structure detection is of great importance to provide insights into the network structure and functionalities. Most proposals focus on static networks. However, finding communities in a dynamic network is even more challenging, especially when communities overlap with each other. In this article, we present an online algorithm, called OLCPM, based on clique percolation and label propagation methods. OLCPM can detect overlapping communities and works on temporal networks with a fine granularity. By locally updating the community structure, OLCPM delivers significant improvement in running time compared with previous clique percolation techniques. The experimental results on both synthetic and real-world networks illustrate the effectiveness of the method.",
"Real-world social networks from a variety of domains can naturally be modelled as dynamic graphs. However, approaches to detecting communities have largely focused on identifying communities in static graphs. Recently, researchers have begun to consider the problem of tracking the evolution of groups of users in dynamic scenarios. Here we describe a model for tracking the progress of communities over time in a dynamic network, where each community is characterised by a series of significant evolutionary events. This model is used to motivate a community-matching strategy for efficiently identifying and tracking dynamic communities. Evaluations on synthetic graphs containing embedded events demonstrate that this strategy can successfully track communities over time in volatile networks. In addition, we describe experiments exploring the dynamic communities detected in a real mobile operator network containing millions of users."
],
"cite_N": [
"@cite_7",
"@cite_6",
"@cite_13",
"@cite_12",
"@cite_11"
],
"mid": [
"2074617510",
"2508833275",
"2734601503",
"2963315742",
"2145977038"
]
} | Detecting Stable Communities in Link Streams at Multiple Temporal Scales * | In recent years, studying interactions over time has witnessed a growing interest in a wide range of fields, such as sociology, biology, physics, etc. Such dynamic interactions are often represented using the snapshot model: the network is divided into a sequence of static networks, i.e., snapshots, aggregating all contacts occurring in a given time window. The main drawback of this model is that it often requires to choose arbitrarily a temporal scale of analysis. The link stream model [9] is a more effective way for representing interactions over time, that can fully capture the underling temporal information.
Real world networks evolve frequently at many different time scales. Fluctuations in such networks can be observed at yearly, monthly, daily, hourly, or even smaller scales. For instance, if one were to look at interactions among workers in a company or laboratory, one could expect to discover clusters of people corresponding to meetings and/or coffee breaks, interacting at high frequency (e.g., every few seconds) for short periods (e.g., few minutes), project members interacting at medium frequency (e.g., once a day) for medium periods (e.g., a few months), coordination groups interacting at low frequency (e.g., once a month) for longer periods (e.g., a few years), etc.
An analysis of communities found at an arbitrary chosen scale would necessarily miss some of these communities: low latency ones are invisible using short aggregation windows, while high frequency ones are lost in the noise for long aggregation windows. A multiple temporal scale analysis of communities seems therefore the right solution to study networks of interactions represented as link streams.
To the best of our knowledge, no such method exists in the literature. In this article, we propose a method having roots both in the literature on change point detection and in dynamic community detection. It detects what we call stable communities, i.e., groups of nodes forming a coherent community throughout a period of time, at a given temporal scale.
The remainder of this paper is organized as follows. In Section 2, we present a brief review of related works. Then, we describe the proposed framework in detail in section 3. We experimentally evaluate the proposed method on both synthetic and real-world networks in section 4.
Dynamic Community Detection
The problem of detecting communities in dynamic networks has attracted a lot of attention in recent years, with various approaches tackling different aspects of the problem, see [16] for a recent survey. Most of these methods consider that the studied dynamic networks are represented as sequences of snapshots, with each snapshot being a well formed graph with meaningful community structure, see for instance [12,5]. Some other methods work with interval graphs, and update the community structure at each network change, e.g., [17,3]. However, all those methods are not adapted to deal with link streams, for which the network is usually not well formed at any given time. Using them on such a network would require to first aggregate the links of the stream by choosing an arbitrarily temporal scale (aggregation window).
Change Point Detection
Our work is also related to research conducted on change point detection considering community structures. In these approaches, given a sequence of snapshots, one wants to detect the periods during which the network organization and/or the community structure remains stable. In [15], the authors proposed the first change-point detection method for evolving networks that uses generative network models and statistical hypothesis testing. Wang et al. [19] proposed a hierarchical change point detection method to detect both inter-community(local change) and intra-community(global change) evolution. A recent work by Masuda et al. [11] used graph distance measures and hierarchical clustering to identify sequences of system state dynamics.
From those methods, our proposal keeps the principle of stable periods delimited by change points, and the idea of detecting changes at local and global scales. But our method differs in two directions: i) we are searching for stable individual communities instead of stable graph periods, and ii) we search for stable structures at multiple levels of temporal granularity.
Method
The goal of our proposed method is i) to detect stable communities ii) at multiple scales without redundancy and iii) to do so efficiently. We adopt an iterative approach, searching communities from the coarser to the more detailed temporal scales. At each temporal scale, we use a three step process:
1. Seed Discovery, to find relevant community seeds at this temporal scale.
2. Seed Pruning, to remove seeds which are redundant with communities found at higher scales.
3. Seed Expansion, expanding seeds in time to discover stable communities.
We start by presenting each of these three steps, and then we describe the method used to iterate through the different scales in section 3.4.
Our work aims to provide a general framework that could serve as baseline for further work in this field. We define three generic functions that can be set according to the user needs:
-CD(g), a static community detection algorithm on a graph g.
-QC(N, g), a function to assess the quality of a community defined by the set of nodes N on a graph g.
-CSS(N_1, N_2), a function to assess the similarity of two sets of nodes N_1 and N_2.
See section 3.5 on how to choose proper functions for those tasks. We define a stable dynamic community c as a triplet c = (N, p, γ), with c.N the list of nodes in the community, c.p its period of existence defined as an interval, e.g., c.p = [t_1, t_2[ means that the community c exists from t_1 to t_2, and c.γ the temporal granularity at which c has been discovered.
We denote the set of all stable dynamic communities D.
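A direct transcription of this triplet into code could look as follows (the field and class names are our own, not part of the paper):

    from dataclasses import dataclass
    from typing import FrozenSet, Tuple

    @dataclass(frozen=True)
    class StableCommunity:
        """A stable dynamic community c = (N, p, gamma); the period p is a half-open interval [start, end)."""
        N: FrozenSet[str]          # member nodes
        p: Tuple[float, float]     # period of existence
        gamma: float               # temporal granularity at which the community was found

    # e.g., three nodes forming a stable community from t=0 to t=3600 at a 20-minute granularity
    c = StableCommunity(frozenset({"a", "b", "c"}), (0.0, 3600.0), 1200.0)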
Seed Discovery
For each temporal scale, we first search for interesting seeds. A temporal scale is defined by a granularity γ, expressed as a period of time (e.g., 20 minutes, 1 hour, 2 weeks, etc.). We use this granularity as a window size, and, starting from a time t_0 (by default, the date of the first observed interaction), we create a cumulative graph (snapshot) for every period [t_0, t_0+γ[, [t_0+γ, t_0+2γ[, [t_0+2γ, t_0+3γ[, etc., until all interactions belong to a cumulative graph. This process yields a sequence of static graphs, such that G_{t_0,γ} is a cumulated snapshot of link stream G for the period starting at t_0 and of duration γ. G_γ is the list of all such graphs. Given a static community detection algorithm CD yielding a set of communities, and a function to assess the quality of communities QC, we apply CD on each snapshot and filter promising seeds, i.e., high quality communities, using QC. The set of valid seeds S is therefore defined as:
S = {∀g ∈ G_γ, ∀s ∈ CD(g), QC(s, g) > θ_q}    (1)
With θ q a threshold of community quality.
Since community detection at each step is independent, we can run it in parallel on all steps, this is an important aspect for scalability.
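A possible implementation of this step (using networkx >= 2.8 and leaving the quality function QC as a parameter, since the framework keeps it generic) is sketched below; the function names and the representation of the link stream as (t, u, v) triples are our assumptions.

    import networkx as nx
    from networkx.algorithms.community import louvain_communities

    def snapshots(links, t0, gamma):
        """Aggregate a link stream, given as (t, u, v) triples, into one static graph per window of length gamma."""
        graphs = {}
        for t, u, v in links:
            start = t0 + ((t - t0) // gamma) * gamma
            graphs.setdefault(start, nx.Graph()).add_edge(u, v)
        return graphs                                   # {window start: aggregated snapshot}

    def discover_seeds(links, t0, gamma, QC, theta_q, CD=louvain_communities):
        """Seeds = communities found in each snapshot whose quality QC exceeds theta_q (Eq. 1)."""
        seeds = []
        for start, g in snapshots(links, t0, gamma).items():    # snapshots are independent -> parallelizable
            for nodes in CD(g):
                if QC(set(nodes), g) > theta_q:
                    seeds.append({"N": frozenset(nodes), "p": (start, start + gamma), "gamma": gamma})
        return seeds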
Seed Pruning
The seed pruning step has a twofold objective: i) reducing redundancy and ii) speed up the multi-scale community detection process. Given a measure of structural similarity CSS, we prune the less interesting seeds, such as the set of filtered seeds FS is defined as:
FS = {∀s ∈ S, ∀c ∈ D, (CSS(s.N, c.N) > θ_s) ∨ (s.p ∩ c.p = {∅})}    (2)
Where D is the set of stable communities discovered at coarser (or similar, see next section) scales, s.p is the interval corresponding to the snapshot at which this seed has been discovered, and θ s is a threshold of similarity.
Said otherwise, we keep as interesting seeds those that are not redundant topologically (in term of nodes/edges), OR not redundant temporally. A seed is kept if it corresponds to a situation never seen before.
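Following the prose above (a seed is discarded only when it is both topologically similar to and temporally overlapping with an already-found community), the pruning step could be written as below; a Jaccard index is one natural, though not prescribed, choice for CSS.

    def prune_seeds(seeds, stable_communities, CSS, theta_s):
        """Keep a seed unless some stable community is both similar (CSS > theta_s) and overlapping in time."""
        def overlaps(p1, p2):
            return p1[0] < p2[1] and p2[0] < p1[1]
        return [s for s in seeds
                if not any(CSS(s["N"], c["N"]) > theta_s and overlaps(s["p"], c["p"])
                           for c in stable_communities)]

    def jaccard(a, b):
        """One possible community similarity measure CSS."""
        return len(a & b) / len(a | b)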
Seed Expansion
The aim of this step is to assess whether a seed corresponds to a stable dynamic community. The instability problem has been identified since the early stages of the dynamic community detection field [1]. It means that the same algorithm ran twice on the same network after introducing minor random modifications might yield very different results. As a consequence, one cannot know if the differences observed between the community structure found at t and at t + 1 are due to structural changes or to the instability of the algorithm. This problem is usually solved by introducing smoothing techniques [16]. Our method use a similar approach, but instead of comparing communities found at step t and t − 1, we check whether a community found at t is still relevant in previous and following steps, recursively.
More formally, for each seed s ∈ FS found on the graph G_{t,γ}, we iteratively expand the period of the seed s.p = [t, t+γ[ in both temporal directions, window by window over
... [t−2γ, t−γ[, [t−γ, t[ ; [t+γ, t+2γ[, [t+2γ, t+3γ[ ...
as long as the quality QC(s.N, G_{t_i,γ}) of the community defined by the nodes s.N on the graph G_{t_i,γ} is good enough. Here, we use the same similarity threshold θ_s as in the seed pruning step. If the final period of existence |s.p| of the expanded seed is longer than θ_p·γ, with θ_p a threshold of stability, the expanded seed is added to the list of stable communities; otherwise, it is discarded. This step is formalized in Algorithm 1.
Algorithm 1: Forward seed expansion. Forward temporal expansion of a seed s found at time t of granularity γ. The reciprocal algorithm is used for backward expansion: t + 1 becomes t − 1.
Input: s, γ, θ_p, θ_s
1  t ← t_start, where s.p = [t_start, t_end[
2  g ← G_{t,γ}
3  p ← [t, t+γ[
4  while QC(s.N, g) > θ_s do
5      s.p ← s.p ∪ p
6      t ← t + γ
7      p ← [t, t+γ[
8      g ← G_{t,γ}

In order to select the most relevant stable communities, we consider seeds in descending order of their QC score, i.e., seeds with higher quality scores are considered first. Due to the pruning strategy, a community of lower quality might be pruned by a community of higher quality at the same granularity γ.
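The forward half of this expansion could look as follows in Python; snapshots is assumed to map a window start time to the cumulative graph of that window, and qc(nodes, g) stands for the chosen quality function (defined in the QC section below). Both conventions are choices of this sketch, not details taken from the paper.

```python
import networkx as nx

def expand_forward(nodes, start, gamma, snapshots, qc, theta_s):
    """Extend the seed period [start, start + gamma) to the right while qc stays above theta_s."""
    end = start + gamma
    t = start + gamma
    while qc(nodes, snapshots.get(t, nx.Graph())) > theta_s:
        end = t + gamma
        t += gamma
    return start, end   # expanded half-open interval [start, end)
```

The backward direction is symmetric (t decreases by γ), and a seed whose final interval is shorter than θ_p·γ is discarded, as stated above.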
Multi-scale Iterative Process
So far, we have seen how communities are found at a particular time scale. In order to detect communities at multiple scales, we first define the ordered list of studied scales Γ. The largest scale is defined as γ_max = |G.d|/θ_p, with |G.d| the total duration of the dynamic graph. Since we need to observe at least θ_p successive steps to consider a community stable, γ_max is the largest scale at which communities can be found.
We then define Γ as the ordered list:
$$\Gamma = \left[\, \gamma_{max},\ \gamma_{max}/2^{1},\ \gamma_{max}/2^{2},\ \gamma_{max}/2^{3},\ \ldots,\ \gamma_{max}/2^{k} \,\right] \qquad (3)$$
With k such that γ_max/2^k > θ_γ ≥ γ_max/2^{k+1}, θ_γ being a parameter corresponding to the finest temporal granularity to evaluate, which is necessarily data-dependent (if time is represented as a continuous property, this value can be set at least at the sampling rate of the data collection). This exponential reduction of the studied scales guarantees a limited number of scales to study.
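The list of studied scales is a direct transcription of equation (3); the short helper below implements it under the stopping condition just stated, with variable names of our choosing.

```python
def studied_scales(total_duration, theta_p, theta_gamma):
    """Gamma: scales from gamma_max = total_duration / theta_p down to theta_gamma, halving each time."""
    gamma = total_duration / theta_p
    scales = []
    while gamma > theta_gamma:
        scales.append(gamma)
        gamma /= 2
    return scales
```

For instance, with the synthetic setting used later (total duration 5000, θ_p = 3, θ_γ = 1), this would yield a geometric sequence of roughly eleven window sizes.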
The process to find seeds and extend them into communities is then summarized in Algorithm 2.
Algorithm 2: Multi-temporal-scale stable community finding. Summary of the proposed method; see the corresponding sections for the details of each step. G is the link stream to analyze; θ_q, θ_s, θ_p, θ_γ are threshold parameters.
Input: G, θ_q, θ_s, θ_p, θ_γ
1  D ← ∅
2  Γ ← studied_scales(G, θ_γ)
3  for γ ∈ Γ do
4      S ← Seed Discovery(γ, CD, QC, θ_q)
5      FS ← Seed Pruning(S, D, CSS, θ_s)
6      D ← D ∪ Seed Expansion(FS, θ_p, θ_s)
7  return D
Choosing Functions and Parameters
The proposed method is a general framework that can be implemented using different functions for CD, QC and CSS. This section provides explicit guidance for selecting each function, and introduces the choices we make for the experimental section.
Community Detection - CD Any community detection algorithm could be used, including overlapping methods, since each community is considered as an independent seed. Following the literature consensus, we use the Louvain method [2], which yields non-overlapping communities using a greedy modularity-maximization approach. The Louvain method performs well on static networks; it is in particular among the fastest and most efficient methods. Note that it would be meaningful to adopt an algorithm yielding communities of good quality according to the chosen QC; this is not what we do in our experiments, as we wanted to use the most standard algorithm and quality function in order to show the genericity of our approach.
Quality of Communities -QC The QC quality function must express the quality of a set of nodes w.r.t a given network, unlike functions such as the modularity, which express the quality of a whole partition w.r.t a given network.
Many such functions exist, like Link Density or Scaled Density [7], but the most studied one is probably the Conductance [10]. Conductance is defined as the ratio between i) the number of edges between nodes inside the community and nodes outside the community, and ii) the sum of the degrees of the nodes inside the community (or outside, whichever is smaller). More formally, the conductance φ of a community C is:
$$\phi(C) = \frac{\displaystyle\sum_{i \in C,\ j \notin C} A_{i,j}}{\min\!\big(A(C),\ A(\bar{C})\big)}$$
Where A is the adjacency matrix of the network, $A(C) = \sum_{i \in C} \sum_{j \in V} A_{i,j}$, and $\bar{C}$ is the complement of C. Its value ranges from 0 (best: all edges starting from nodes of the community are internal) to 1 (worst: no edges between this community and the rest of the network). Since our generic framework expects good communities to have QC scores higher than the threshold θ_q, we adopt the definition QC = 1 − conductance.
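In practice this quality function is a thin wrapper around the conductance implementation shipped with networkx; the guard clauses below (returning 0 when the score is undefined) are a convention of this sketch.

```python
import networkx as nx

def qc(nodes, g):
    """QC = 1 - conductance of `nodes` within the snapshot `g` (0 when undefined)."""
    nodes = set(nodes) & set(g)
    if not nodes or len(nodes) == len(g):
        return 0.0
    try:
        return 1.0 - nx.conductance(g, nodes)
    except ZeroDivisionError:   # node set with no incident edges in this snapshot
        return 0.0
```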
Community Seed Similarity - CSS This function takes as input two sets of nodes and returns their similarity. Such a function is often used in dynamic community detection to assess the similarity between communities found in different time steps. Following [5], we choose the Jaccard Index as the reference function. Given two sets A and B, it is defined as:
$$J(A, B) = \frac{|A \cap B|}{|A \cup B|}$$
Parameters
The algorithm has four parameters, θ_γ, θ_q, θ_s, θ_p, defining different thresholds. We describe them below and provide the values used in the experiments.
1. θ_γ is data-dependent. It corresponds to the smallest temporal scale that will be studied, and should be set at least at the collection rate. For synthetic networks, it is set to 1 (the smallest temporal unit needed to generate a new stream), while for the SocioPatterns dataset it is set to 20 seconds (the minimum length of time required to capture a contact).
2. θ_q determines the minimal quality a seed must have to be preserved and expanded. The higher this value, the more strict we are on the quality of communities. We set θ_q = 0.7 in all experiments. It is dependent on the choice of the QC function.
3. θ_s determines the threshold above which two communities are considered redundant. The higher this value, the more communities will be obtained. We set θ_s = 0.3 in all experiments. It is dependent on the choice of the CSS function.
4. θ_p is the minimum number of consecutive periods over which a seed must be expanded in order to be considered a stable community. We set θ_p = 3 in all experiments. The value should not be lower, in order to avoid spurious detections due to pure chance. Higher values could be used to limit the number of results.

(a) Planted (ground-truth) communities. [Figure 1, panel (a): node-time raster plot; axis tick labels omitted.]
(b) Stable communities discovered by the proposed method. Fig. 1: Visual comparison between planted and discovered communities. Time steps on the horizontal axis, nodes on the vertical axis. Colors correspond to communities and are randomly assigned. We can observe that most communities are correctly discovered, both in terms of nodes and of duration.
Experiments and Results
The validation of our method encompasses three main aspects: i) the validity of the communities found, ii) the multi-scale aspect of our method, and iii) its scalability. We conduct two kinds of experiments: on synthetic data, on which we use a planted ground truth to quantitatively compare our results, and on real networks, on which we use both qualitative and quantitative evaluation to validate our method.
Validation on Synthetic Data
To the best of our knowledge, no existing network generator allows generating dynamic communities at multiple temporal scales. We therefore introduce a simple solution to do so. Let us consider a dynamic network composed of T steps and N different nodes. We start by adding some random noise: at each step, an Erdos-Renyi random graph [4] is generated, with a probability of edge presence equal to p. We then add a number SC of random stable communities. For each community, we randomly attribute a set of n ∈ [4, N/4] nodes, a duration d ∈ [10, T/4] and a starting date s ∈ [0, T − d]. n and d are chosen using a logarithmic probability, in order to increase variability. The temporal scale of the community is determined by the probability of observing an edge between any two of its nodes during the period of its existence, set to 10/d. As a consequence, a community of duration 10 will have edges between all of its nodes at every step of its existence, while a community of duration 100 will have an edge between any two of its nodes only every 10 steps on average.
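A minimal version of this generator is sketched below; it follows the description above (Erdos-Renyi noise plus planted communities whose internal edge probability is 10/d), while details such as the log-uniform sampling helper and the output format are simplifications of our own.

```python
import math
import random

def log_uniform(low, high):
    """Sample an integer with (approximately) log-uniform probability in [low, high]."""
    return int(round(math.exp(random.uniform(math.log(low), math.log(high)))))

def generate_benchmark(T=5000, N=100, p=0.1, n_communities=10, seed=0):
    """Return (interactions, planted): a list of (t, u, v) triples and the planted communities."""
    random.seed(seed)
    interactions, planted = [], []
    # Background noise: an Erdos-Renyi graph G(N, p) at every step.
    for t in range(T):
        for u in range(N):
            for v in range(u + 1, N):
                if random.random() < p:
                    interactions.append((t, u, v))
    # Planted stable communities, with internal edge probability 10/d during their lifetime.
    for _ in range(n_communities):
        size = log_uniform(4, N // 4)
        duration = log_uniform(10, T // 4)
        start = random.randint(0, T - duration)
        members = random.sample(range(N), size)
        planted.append((members, start, duration))
        for t in range(start, start + duration):
            for i, u in enumerate(members):
                for v in members[i + 1:]:
                    if random.random() < 10 / duration:
                        interactions.append((t, u, v))
    return interactions, planted
```

With T = 5000, N = 100 and p = 10/N this roughly reproduces the synthetic setting described above, up to the random seed.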
Since no algorithm exists to detect communities at multiple temporal scales, we compare our solution to a baseline: communities found by a static algorithm on each window, for different window sizes. This corresponds to detect-and-match methods for dynamic community detection such as [5]. We then compare the results by computing the overlapping NMI as defined in [8], at each step. For those experiments, we set T = 5000, N = 100, p = 10/N, and we vary the number of communities SC.

Table 1: Comparison of the average NMI scores (over 10 runs) obtained for the proposed method (Proposed) and for each of the temporal scales (γ ∈ Γ) used by the proposed method, taken independently.

Figure 1 represents the synthetic communities to find for SC = 10, and the communities discovered by the proposed method. We can observe a good match, with communities discovered throughout multiple scales (short-lasting and long-lasting ones). We report the results of the comparison with baselines in Table 1. We can observe that the proposed method outperforms the baseline at every scale in all cases in terms of average NMI.
The important implication is that the problem of dynamic community detection is not only a question of choosing the right scale through a window size: if the network contains communities at multiple temporal scales, one needs an adapted method to discover them.
Validation on Real Datasets
We validate our approach by applying it to two real datasets. Because no ground-truth data exists to compare our results with, we validate our method using both quantitative and qualitative evaluation. We use the quantitative approach to analyze the scalability of the method and the characteristics of the discovered communities compared with those of existing algorithms. We use the qualitative approach to show that the communities found are meaningful and could allow an analyst to uncover interesting patterns in a dynamic dataset.
The datasets used are the following:
- SocioPatterns primary school data [18]: face-to-face interactions between children in a school (323 nodes, 125 773 interactions).
- Math Overflow stack exchange interaction dataset [14]: a larger network used to evaluate scalability (24 818 nodes, 506 550 interactions).
Qualitative evaluation For the qualitative evaluation, we used the primary school data [18] collected by the SocioPatterns collaboration using RFID devices. These devices capture the face-to-face proximity of the individuals wearing them, at a rate of one capture every 20 seconds. The dataset contains face-to-face interactions between 323 children and 10 teachers collected over two consecutive days in October 2009. The school has 5 levels, and each level is divided into 2 classes (A and B), for a total of 10 classes. No community ground-truth data exists to quantitatively validate our findings. We therefore focus on the descriptive information highlighted in the SocioPatterns study [18], and we show how the results yielded by our method match the course of the day as recorded by the authors of this study.
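To reproduce this kind of analysis, the contact records first have to be turned into the link-stream format used in the sketches above; the loader below assumes the common whitespace-separated `t i j ...` layout of SocioPatterns releases and should be adapted to the actual file at hand.

```python
def load_link_stream(path):
    """Read a contact file with one 't i j [metadata...]' record per line."""
    interactions = []
    with open(path) as handle:
        for line in handle:
            parts = line.split()
            if len(parts) < 3:
                continue
            t, u, v = int(parts[0]), parts[1], parts[2]
            interactions.append((t, u, v))
    return interactions
```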
In order to make an accurate analysis of our results, the visualization has been reduced to one day (the second day), and we limited ourselves to 4 classes (1B, 2B, 3B, 5B). 120 communities are discovered in total on this dataset. We created three different figures, corresponding to communities lasting respectively i) less than half an hour, ii) between half an hour and 2 hours, and iii) more than 2 hours. Figure 2 depicts the results. Node affiliations are ordered by class, as marked on the right side of the figure. The following observations can be made:
- Communities with the longest period of existence clearly correspond to the class structure. Similar communities had been found by the authors of the original study using networks aggregated per day.
- Most communities of the shortest duration are detected during what are probably breaks between classes. In the original study, it had been noted that break periods are marked by the highest interaction rates. We know from the data description that classes have 20/30-minute breaks, and that those breaks are not necessarily synchronized between classes. This is compatible with our observations, in particular with communities found between 10:00 and 10:30 in the morning, and between 4:00 and 4:30 in the afternoon.
- Most communities of medium duration occur during the lunch break. We can also observe that most of these communities are separated into two intervals, 12:00-13:00 and 13:00-14:00. This can be explained by the fact that children share a common canteen and a common playground. As the playground and the canteen do not have enough capacity to host all the students at the same time, only two or three classes have breaks at the same time, and lunches are taken in two consecutive turns of one hour. Some children do not belong to any community during the lunch period, which matches the information that about half of the children go back home for lunch [18].
- During lunch breaks and class breaks, some communities involve children from different classes; see the dark-green community during lunch time (medium-duration figure) or the pink community around 10:00 for short communities, when classes 2B and 3B are probably on break at the same time. This confirms that an analysis at coarser scales only can be misleading, as it leads only to the detection of the stronger class structure, ignoring that communities also exist between classes during shorter periods.
Quantitative evaluation In this section, we compare our proposition with other methods on two aspects: scalability, and the aggregate properties of the communities found. The methods we compare to are:
- An Identify-and-Match framework proposed by Greene et al. [5]. We implement it using the Louvain method for community detection and the Jaccard coefficient to match communities, with a minimal similarity threshold of 0.7. We used a custom implementation that shares its community detection phase with our method.
- The multislice method introduced by Mucha et al. [12]. We used the authors' implementation, with interslice coupling ω = 0.5.
- The dynamic clique percolation method (D-CPM) introduced by Palla et al. [13]. We used a custom implementation; the detection in each snapshot is done using the implementation in the networkx library [6].
For Identify and Match, D-CPM, and our approach, the community detection phase is performed in parallel over all snapshots. This is not possible for Mucha et al., since the method operates on all snapshots simultaneously. On the other hand, D-CPM and Identify and Match are methods with no dynamic smoothing. Figure 3 presents the time taken by those methods and by our proposition, for each temporal granularity, on the Math Overflow network.

Fig. 3 (caption fragment): ...189s (about 36h). OUR and OUR-MP correspond to our method with and without multiprocessing (4 cores).

The task accomplished by our method is, of course, not directly comparable, since it must not only discover communities but also avoid redundancy between communities at different temporal scales, while the other methods yield redundant communities at the different levels. Nevertheless, we can observe that the method scales to networks with tens of thousands of nodes and hundreds of thousands of interactions. It is slower than the Identify and Match (CD&Match) approach, but does not suffer from the scalability problems of the two other ones (D-CPM and Mucha et al.). In particular, the clique percolation method is not scalable to large and dense networks, a known problem due to the exponential growth in the number of cliques to find. For the method by Mucha et al., the scalability issue is due to the memory representation of a single modularity matrix for all snapshots.

Table 2: Average properties of communities found by each method (independently of their temporal granularity). #Communities: number of communities found. Persistence: number of consecutive snapshots. Size: number of nodes. Stability: average Jaccard coefficient between nodes of the same community in successive snapshots. Density: average degree / (size − 1). Q: 1 − Conductance (higher is better).
In Table 2, we summarize the number of communities found by each method, as well as their persistence, size, stability, density, and conductance. It is not possible to formally rank those methods based on these values alone, as they correspond to vastly different scenarios. What we can observe is that existing methods yield many more communities than the method we propose, usually at the cost of lower overall quality. When digging into the results, it is clear that other methods yield many noisy communities: found on a single snapshot for the methods without smoothing, unstable for the smoothed Mucha method, and often with low density or Q.
Conclusion and future work
To conclude, this article only scratches the surface of the possibilities of multiple-temporal-scale community detection. We have proposed a first method for the detection of such structures, which we validated on both synthetic and real-world networks, highlighting the interest of such an approach. The method is proposed as a general, extensible framework, and its code is available as an easy-to-use library, for replication, application, and extension.
As an exploratory work, further investigations and improvements are needed. Heuristics or statistical selection procedures could be implemented to reduce the computational complexity. Hierarchical organization of relations -both temporal and structural-between communities could greatly simplify the interpretation of results. | 7,911 |
1907.10453 | 2962723636 | Link streams model interactions over time in a wide range of fields. Under this model, the challenge is to mine efficiently both temporal and topological structures. Community detection and change point detection are one of the most powerful tools to analyze such evolving interactions. In this paper, we build on both to detect stable community structures by identifying change points within meaningful communities. Unlike existing dynamic community detection algorithms, the proposed method is able to discover stable communities efficiently at multiple temporal scales. We test the effectiveness of our method on synthetic networks, and on high-resolution time-varying networks of contacts drawn from real social networks. | Our work is also related to research conducted on change point detection considering community structures. In these approaches, given a sequence of snapshots, one wants to detect the periods during which the network organization and or the community structure remains stable. In @cite_4 , the authors proposed the first change-point detection method for evolving networks that uses generative network models and statistical hypothesis testing. @cite_17 proposed a hierarchical change point detection method to detect both inter-community(local change) and intra-community(global change) evolution. A recent work by @cite_16 used graph distance measures and hierarchical clustering to identify sequences of system state dynamics. From those methods, our proposal keeps the principle of stable periods delimited by change points, and the idea of detecting changes at local and global scales. But our method differs in two directions: @math we are searching for stable individual communities instead of stable graph periods, and @math we search for stable structures at multiple levels of temporal granularity. | {
"abstract": [
"Many time-evolving systems in nature, society and technology leave traces of the interactions within them. These interactions form temporal networks that reflect the states of the systems. In this work, we pursue a coarse-grained description of these systems by proposing a method to assign discrete states to the systems and inferring the sequence of such states from the data. Such states could, for example, correspond to a mental state (as inferred from neuroimaging data) or the operational state of an organization (as inferred by interpersonal communication). Our method combines a graph distance measure and hierarchical clustering. Using several empirical data sets of social temporal networks, we show that our method is capable of inferring the system’s states such as distinct activities in a school and a weekday state as opposed to a weekend state. We expect the methods to be equally useful in other settings such as temporally varying protein interactions, ecological interspecific interactions, functional connectivity in the brain and adaptive social networks.",
"Interactions among people or objects are often dynamic in nature and can be represented as a sequence of networks, each providing a snapshot of the interactions over a brief period of time. An important task in analyzing such evolving networks is change-point detection, in which we both identify the times at which the large-scale pattern of interactions changes fundamentally and quantify how large and what kind of change occurred. Here, we formalize for the first time the network change-point detection problem within an online probabilistic learning framework and introduce a method that can reliably solve it. This method combines a generalized hierarchical random graph model with a Bayesian hypothesis test to quantitatively determine if, when, and precisely how a change point has occurred. We analyze the detectability of our method using synthetic data with known change points of different types and magnitudes, and show that this method is more accurate than several previously used alternatives. Applied to two high-resolution evolving social networks, this method identifies a sequence of change points that align with known external \"shocks\" to these networks.",
""
],
"cite_N": [
"@cite_16",
"@cite_4",
"@cite_17"
],
"mid": [
"2792758678",
"1896833666",
"2964069854"
]
} | Detecting Stable Communities in Link Streams at Multiple Temporal Scales * | In recent years, studying interactions over time has witnessed a growing interest in a wide range of fields, such as sociology, biology, physics, etc. Such dynamic interactions are often represented using the snapshot model: the network is divided into a sequence of static networks, i.e., snapshots, aggregating all contacts occurring in a given time window. The main drawback of this model is that it often requires arbitrarily choosing a temporal scale of analysis. The link stream model [9] is a more effective way of representing interactions over time, which can fully capture the underlying temporal information.
Real-world networks frequently evolve at many different time scales. Fluctuations in such networks can be observed at yearly, monthly, daily, hourly, or even smaller scales. For instance, if one were to look at interactions among workers in a company or laboratory, one could expect to discover clusters of people corresponding to meetings and/or coffee breaks, interacting at high frequency (e.g., every few seconds) for short periods (e.g., a few minutes); project members interacting at medium frequency (e.g., once a day) for medium periods (e.g., a few months); coordination groups interacting at low frequency (e.g., once a month) for longer periods (e.g., a few years); etc.
An analysis of communities found at an arbitrarily chosen scale would necessarily miss some of these communities: low-frequency ones are invisible using short aggregation windows, while high-frequency ones are lost in the noise for long aggregation windows.
To the best of our knowledge, no such method exists in the literature. In this article, we propose a method having roots both in the literature on change point detection and in dynamic community detection. It detects what we call stable communities, i.e., groups of nodes forming a coherent community throughout a period of time, at a given temporal scale.
The remainder of this paper is organized as follows. In Section 2, we present a brief review of related works. Then, we describe the proposed framework in detail in section 3. We experimentally evaluate the proposed method on both synthetic and real-world networks in section 4.
Dynamic Community Detection
The problem of detecting communities in dynamic networks has attracted a lot of attention in recent years, with various approaches tackling different aspects of the problem; see [16] for a recent survey. Most of these methods consider that the studied dynamic networks are represented as sequences of snapshots, with each snapshot being a well-formed graph with a meaningful community structure, see for instance [12,5]. Some other methods work with interval graphs and update the community structure at each network change, e.g., [17,3]. However, none of those methods is adapted to deal with link streams, for which the network is usually not well formed at any given time. Using them on such a network would require first aggregating the links of the stream by choosing an arbitrary temporal scale (aggregation window).
Change Point Detection
Our work is also related to research conducted on change point detection considering community structures. In these approaches, given a sequence of snapshots, one wants to detect the periods during which the network organization and/or the community structure remains stable. In [15], the authors proposed the first change-point detection method for evolving networks that uses generative network models and statistical hypothesis testing. Wang et al. [19] proposed a hierarchical change point detection method to detect both inter-community(local change) and intra-community(global change) evolution. A recent work by Masuda et al. [11] used graph distance measures and hierarchical clustering to identify sequences of system state dynamics.
From those methods, our proposal keeps the principle of stable periods delimited by change points, and the idea of detecting changes at local and global scales. But our method differs in two directions: i) we are searching for stable individual communities instead of stable graph periods, and ii) we search for stable structures at multiple levels of temporal granularity.
Method
The goal of our proposed method is i) to detect stable communities, ii) at multiple scales and without redundancy, and iii) to do so efficiently. We adopt an iterative approach, searching for communities from the coarsest to the finest temporal scales. At each temporal scale, we use a three-step process:
1. Seed Discovery, to find relevant community seeds at this temporal scale.
2. Seed Pruning, to remove seeds which are redundant with communities found at higher scales.
3. Seed Expansion, expanding seeds in time to discover stable communities.
We start by presenting each of these three steps, and then we describe the method used to iterate through the different scales in section 3.4.
Our work aims to provide a general framework that could serve as a baseline for further work in this field. We define three generic functions that can be set according to the user's needs:
- CD(g), a static community detection algorithm applied to a graph g.
- QC(N, g), a function to assess the quality of the community defined by the set of nodes N on a graph g.
- CSS(N1, N2), a function to assess the similarity of two sets of nodes N1 and N2.
See section 3.5 on how to choose proper functions for those tasks. We define a stable dynamic community c as a triplet c = (N, p, γ), with c.N the list of nodes in the community, c.p its period of existence defined as an interval (e.g., c.p = [t1, t2[ means that the community c exists from t1 to t2), and c.γ the temporal granularity at which c has been discovered.
We denote the set of all stable dynamic communities D.
Seed Discovery
For each temporal scale, we first search for interesting seeds. A temporal scale is defined by a granularity γ, expressed as a period of time (e.g., 20 minutes, 1 hour, 2 weeks, etc.). We use this granularity as a window size and, starting from a time t0 (by default, the date of the first observed interaction), we create a cumulative graph (snapshot) for every period
[t0, t0+γ[, [t0+γ, t0+2γ[, [t0+2γ, t0+3γ[, etc.,
until all interactions belong to a cumulative graph. This process yields a sequence of static graphs, such that G_{t0,γ} is the cumulative snapshot of the link stream G for the period starting at t0 and of duration γ. G_γ is the list of all such graphs. Given a static community detection algorithm CD yielding a set of communities, and a function QC to assess the quality of communities, we apply CD on each snapshot and filter promising seeds, i.e., high-quality communities, using QC. The set of valid seeds S is therefore defined as:
$$S = \{\, s \in CD(g) \;\mid\; g \in G_{\gamma},\ QC(s, g) > \theta_q \,\} \qquad (1)$$
With θ q a threshold of community quality.
Since community detection at each step is independent, we can run it in parallel on all snapshots, which is an important aspect for scalability.
Seed Pruning
The seed pruning step has a twofold objective: i) reducing redundancy, and ii) speeding up the multi-scale community detection process. Given a measure of structural similarity CSS, we prune the less interesting seeds, such that the set of filtered seeds FS is defined as:
$$FS = \{\, s \in S \;\mid\; \forall c \in D:\ CSS(s.N, c.N) \le \theta_s \ \lor\ s.p \cap c.p = \emptyset \,\} \qquad (2)$$
Where D is the set of stable communities discovered at coarser (or similar, see next section) scales, s.p is the interval corresponding to the snapshot at which this seed has been discovered, and θ s is a threshold of similarity.
In other words, we keep as interesting seeds those that are not topologically redundant (in terms of nodes/edges) OR not temporally redundant: a seed is kept if it corresponds to a situation never seen before.
Seed Expansion
The aim of this step is to assess whether a seed corresponds to a stable dynamic community. The instability problem has been identified since the early stages of the dynamic community detection field [1]. It means that the same algorithm, run twice on the same network after introducing minor random modifications, might yield very different results. As a consequence, one cannot know if the differences observed between the community structure found at t and at t + 1 are due to structural changes or to the instability of the algorithm. This problem is usually solved by introducing smoothing techniques [16]. Our method uses a similar approach, but instead of comparing communities found at steps t and t − 1, we check whether a community found at t is still relevant in previous and following steps, recursively.
More formally, for each seed s ∈ FS found on the graph G_{t,γ}, we iteratively expand the period of the seed s.p = [t, t+γ[ in both temporal directions, window by window over
... [t−2γ, t−γ[, [t−γ, t[ ; [t+γ, t+2γ[, [t+2γ, t+3γ[ ...
as long as the quality QC(s.N, G_{t_i,γ}) of the community defined by the nodes s.N on the graph G_{t_i,γ} is good enough. Here, we use the same similarity threshold θ_s as in the seed pruning step. If the final period of existence |s.p| of the expanded seed is longer than θ_p·γ, with θ_p a threshold of stability, the expanded seed is added to the list of stable communities; otherwise, it is discarded. This step is formalized in Algorithm 1.
Algorithm 1: Forward seed expansion. Forward temporal expansion of a seed s found at time t of granularity γ. The reciprocal algorithm is used for backward expansion: t + 1 becomes t − 1.
Input: s, γ, θ_p, θ_s
1  t ← t_start, where s.p = [t_start, t_end[
2  g ← G_{t,γ}
3  p ← [t, t+γ[
4  while QC(s.N, g) > θ_s do
5      s.p ← s.p ∪ p
6      t ← t + γ
7      p ← [t, t+γ[
8      g ← G_{t,γ}

In order to select the most relevant stable communities, we consider seeds in descending order of their QC score, i.e., seeds with higher quality scores are considered first. Due to the pruning strategy, a community of lower quality might be pruned by a community of higher quality at the same granularity γ.
Multi-scale Iterative Process
So far, we have seen how communities are found at a particular time scale. In order to detect communities at multiple scales, we first define the ordered list of studied scales Γ. The largest scale is defined as γ_max = |G.d|/θ_p, with |G.d| the total duration of the dynamic graph. Since we need to observe at least θ_p successive steps to consider a community stable, γ_max is the largest scale at which communities can be found.
We then define Γ as the ordered list:
$$\Gamma = \left[\, \gamma_{max},\ \gamma_{max}/2^{1},\ \gamma_{max}/2^{2},\ \gamma_{max}/2^{3},\ \ldots,\ \gamma_{max}/2^{k} \,\right] \qquad (3)$$
With k such that γ_max/2^k > θ_γ ≥ γ_max/2^{k+1}, θ_γ being a parameter corresponding to the finest temporal granularity to evaluate, which is necessarily data-dependent (if time is represented as a continuous property, this value can be set at least at the sampling rate of the data collection). This exponential reduction of the studied scales guarantees a limited number of scales to study.
The process to find seeds and extend them into communities is then summarized in Algorithm 2.
Algorithm 2: Multi-temporal-scale stable community finding. Summary of the proposed method; see the corresponding sections for the details of each step. G is the link stream to analyze; θ_q, θ_s, θ_p, θ_γ are threshold parameters.
Input: G, θ_q, θ_s, θ_p, θ_γ
1  D ← ∅
2  Γ ← studied_scales(G, θ_γ)
3  for γ ∈ Γ do
4      S ← Seed Discovery(γ, CD, QC, θ_q)
5      FS ← Seed Pruning(S, D, CSS, θ_s)
6      D ← D ∪ Seed Expansion(FS, θ_p, θ_s)
7  return D
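Putting the pieces together, the overall loop over scales could be sketched as follows; the helper names (studied_scales, build_snapshots, discover_seeds, prune_seeds, expand_forward, qc, StableCommunity) refer to the illustrative sketches given earlier in this document and are not the API of the authors' released library.

```python
def find_stable_communities(interactions, t0, total_duration,
                            theta_q=0.7, theta_s=0.3, theta_p=3, theta_gamma=1):
    """Multi-temporal-scale detection, iterating from the coarsest to the finest scale.

    Backward expansion is omitted for brevity; it mirrors expand_forward with t decreasing.
    """
    stable = []   # D: the stable communities found so far
    for gamma in studied_scales(total_duration, theta_p, theta_gamma):
        snapshots = build_snapshots(interactions, t0, gamma)
        raw_seeds = [((start, start + gamma), nodes)
                     for start, nodes in discover_seeds(snapshots, theta_q)]
        for (start, _), nodes in prune_seeds(raw_seeds, stable, theta_s):
            begin, end = expand_forward(nodes, start, gamma, snapshots, qc, theta_s)
            if end - begin >= theta_p * gamma:
                stable.append(StableCommunity(frozenset(nodes), (begin, end), gamma))
    return stable
```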
Choosing Functions and Parameters
The proposed method is a general framework that can be implemented using different functions for CD, QC and CSS. This section provides explicit guidance for selecting each function, and introduces the choices we make for the experimental section.
Community Detection - CD Any community detection algorithm could be used, including overlapping methods, since each community is considered as an independent seed. Following the literature consensus, we use the Louvain method [2], which yields non-overlapping communities using a greedy modularity-maximization approach. The Louvain method performs well on static networks; it is in particular among the fastest and most efficient methods. Note that it would be meaningful to adopt an algorithm yielding communities of good quality according to the chosen QC; this is not what we do in our experiments, as we wanted to use the most standard algorithm and quality function in order to show the genericity of our approach.
Quality of Communities -QC The QC quality function must express the quality of a set of nodes w.r.t a given network, unlike functions such as the modularity, which express the quality of a whole partition w.r.t a given network.
Many such functions exist, like Link Density or Scaled Density [7], but the most studied one is probably the Conductance [10]. Conductance is defined as the ratio between i) the number of edges between nodes inside the community and nodes outside the community, and ii) the sum of the degrees of the nodes inside the community (or outside, whichever is smaller). More formally, the conductance φ of a community C is:
$$\phi(C) = \frac{\displaystyle\sum_{i \in C,\ j \notin C} A_{i,j}}{\min\!\big(A(C),\ A(\bar{C})\big)}$$
Where A is the adjacency matrix of the network, $A(C) = \sum_{i \in C} \sum_{j \in V} A_{i,j}$, and $\bar{C}$ is the complement of C. Its value ranges from 0 (best: all edges starting from nodes of the community are internal) to 1 (worst: no edges between this community and the rest of the network). Since our generic framework expects good communities to have QC scores higher than the threshold θ_q, we adopt the definition QC = 1 − conductance.
Community Seed Similarity - CSS This function takes as input two sets of nodes and returns their similarity. Such a function is often used in dynamic community detection to assess the similarity between communities found in different time steps. Following [5], we choose the Jaccard Index as the reference function. Given two sets A and B, it is defined as:
$$J(A, B) = \frac{|A \cap B|}{|A \cup B|}$$
Parameters
The algorithm has four parameters, θ_γ, θ_q, θ_s, θ_p, defining different thresholds. We describe them below and provide the values used in the experiments.
1. θ_γ is data-dependent. It corresponds to the smallest temporal scale that will be studied, and should be set at least at the collection rate. For synthetic networks, it is set to 1 (the smallest temporal unit needed to generate a new stream), while for the SocioPatterns dataset it is set to 20 seconds (the minimum length of time required to capture a contact).
2. θ_q determines the minimal quality a seed must have to be preserved and expanded. The higher this value, the more strict we are on the quality of communities. We set θ_q = 0.7 in all experiments. It is dependent on the choice of the QC function.
3. θ_s determines the threshold above which two communities are considered redundant. The higher this value, the more communities will be obtained. We set θ_s = 0.3 in all experiments. It is dependent on the choice of the CSS function.
4. θ_p is the minimum number of consecutive periods over which a seed must be expanded in order to be considered a stable community. We set θ_p = 3 in all experiments. The value should not be lower, in order to avoid spurious detections due to pure chance. Higher values could be used to limit the number of results.

(a) Planted (ground-truth) communities. [Figure 1, panel (a): node-time raster plot; axis tick labels omitted.]
(b) Stable communities discovered by the proposed method. Fig. 1: Visual comparison between planted and discovered communities. Time steps on the horizontal axis, nodes on the vertical axis. Colors correspond to communities and are randomly assigned. We can observe that most communities are correctly discovered, both in terms of nodes and of duration.
Experiments and Results
The validation of our method encompasses three main aspects: i) the validity of the communities found, ii) the multi-scale nature of our method, and iii) its scalability. We conduct two kinds of experiments: on synthetic data, where planted ground-truth communities allow a quantitative comparison of our results, and on real networks, where we use both qualitative and quantitative evaluation to validate our method.
Validation on Synthetic Data
To the best of our knowledge, no existing network generator can produce dynamic communities at multiple temporal scales. We therefore introduce a simple solution to do so. Let us consider a dynamic network composed of T steps and N different nodes. We start by adding some random noise: at each step, an Erdos-Renyi random graph [4] is generated, with a probability of edge presence equal to p. We then add a number SC of random stable communities. For each community, we randomly assign a set of n ∈ [4, N/4] nodes, a duration d ∈ [10, T/4] and a starting date s ∈ [0, T − d]. n and d are chosen with a logarithmic probability, in order to increase variability. The temporal scale of the community is determined by the probability of observing an edge between any two of its nodes during the period of its existence, set to 10/d. As a consequence, a community of duration 10 will have edges between all of its nodes at every step of its existence, while a community of duration 100 will have an edge between any two of its nodes only every 10 steps on average.
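To make the generation procedure concrete, the following is a minimal Python sketch of such a generator (our own illustrative code, not the exact implementation used in the paper); the function and variable names are placeholders, and the log-uniform sampling of n and d is one possible reading of the "logarithmic probability" mentioned above.

```python
import random
import networkx as nx

def generate_multiscale_dynamic_network(T=5000, N=100, p=None, SC=10, seed=0):
    """Erdos-Renyi noise at every step, plus SC planted stable communities
    whose internal edge probability 10/d encodes their temporal scale."""
    rng = random.Random(seed)
    p = p if p is not None else 10 / N
    # One noise snapshot per time step.
    snapshots = [nx.erdos_renyi_graph(N, p, seed=rng.randint(0, 10**9))
                 for _ in range(T)]
    planted = []
    for _ in range(SC):
        n = int(round(4 * (N / 16) ** rng.random()))   # log-uniform, roughly in [4, N/4]
        d = int(round(10 * (T / 40) ** rng.random()))  # log-uniform, roughly in [10, T/4]
        s = rng.randint(0, T - d)
        nodes = rng.sample(range(N), n)
        p_in = 10 / d  # duration 10 -> edges at every step; duration 100 -> sparse edges
        for t in range(s, s + d):
            for i, u in enumerate(nodes):
                for v in nodes[i + 1:]:
                    if rng.random() < p_in:
                        snapshots[t].add_edge(u, v)
        planted.append((set(nodes), s, d))
    return snapshots, planted
```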
Since no algorithm exists to detect communities at multiple temporal scales, we compare our solution to a baseline: communities found by a static algorithm on each window, for different window sizes. This corresponds to detect & match methods for dynamic community detection such as [5]. We then compare the results by computing the overlapping NMI, as defined in [8], at each step. For these experiments, we set T = 5000, N = 100, p = 10/N, and we vary the number of communities SC.
Table 1: Comparison of the average NMI scores (over 10 runs) obtained for the proposed method (Proposed) and for each of the temporal scales (γ ∈ Γ) used by the proposed method, taken independently.
Figure 1 represents the synthetic communities to find for SC = 10, and the communities discovered by the proposed method. We can observe a good match, with communities discovered across multiple scales (both short-lasting and long-lasting ones). We report the results of the comparison with the baselines in Table 1. We can observe that the proposed method outperforms the baseline at every scale in all cases in terms of average NMI.
The important implication is that the problem of dynamic community detection is not only a question of choosing the right scale through a window size: if the network contains communities at multiple temporal scales, one needs an adapted method to discover them.
Validation on Real Datasets
We validate our approach by applying it to two real datasets. Because no ground truth data exists to compare our results with, we validate our method using both quantitative and qualitative evaluation. We use the quantitative approach to analyze the scalability of the method and the characteristics of the communities discovered, compared with other existing algorithms. We use the qualitative approach to show that the communities found are meaningful and could allow an analyst to uncover interesting patterns in a dynamic dataset.
The datasets used are the following:
- SocioPatterns primary school data [18]: face-to-face interactions between children in a school (323 nodes, 125 773 interactions).
- Math Overflow stack exchange interaction dataset [14]: a larger network used to evaluate scalability (24 818 nodes, 506 550 interactions).
Qualitative evaluation. For the qualitative evaluation, we used the primary school data [18] collected by the SocioPatterns collaboration using RFID devices, which capture the face-to-face proximity of the individuals wearing them at a rate of one capture every 20 seconds. The dataset contains face-to-face interactions between 323 children and 10 teachers, collected over two consecutive days in October 2009. The school has 5 levels, and each level is divided into 2 classes (A and B), for a total of 10 classes. No community ground truth exists to validate our findings quantitatively. We therefore focus on the descriptive information highlighted in the SocioPatterns study [18], and we show how the results yielded by our method match the course of the day as recorded by the authors of that study.
In order to make an accurate analysis of our results, the visualization has been reduced to one day (the second day), and we limited ourselves to 4 classes (1B, 2B, 3B, 5B). In total, 120 communities are discovered on this dataset. We created three different figures, corresponding to communities lasting respectively i) less than half an hour, ii) between half an hour and 2 hours, and iii) more than 2 hours. Figure 2 depicts the results. Node affiliations are ordered by class, as marked on the right side of the figure. The following observations can be made:
- Communities with the longest period of existence clearly correspond to the class structure. Similar communities had been found by the authors of the original study using networks aggregated per day.
- Most communities of the shortest duration are detected during what are probably breaks between classes. In the original study, it had been noted that break periods are marked by the highest interaction rates. We know from the data description that classes have 20/30-minute breaks, and that those breaks are not necessarily synchronized between classes. This is compatible with our observations, in particular with the communities found between 10:00 and 10:30 in the morning, and between 4:00 and 4:30 in the afternoon.
- Most communities of medium duration occur during the lunch break. We can also observe that most of these communities are separated into two intervals, 12:00-13:00 and 13:00-14:00. This can be explained by the fact that the children share a common canteen and playground. As the playground and the canteen do not have enough capacity to host all the students at the same time, only two or three classes have breaks at the same time, and lunches are taken in two consecutive turns of one hour. Some children do not belong to any community during the lunch period, which matches the information that about half of the children go back home for lunch [18].
- During lunch breaks and class breaks, some communities involve children from different classes; see the dark-green community during lunch time (medium-duration figure) or the pink community around 10:00 among the short communities, when classes 2B and 3B are probably on break at the same time. This confirms that an analysis at coarser scales only can be misleading, as it leads only to the detection of the stronger class structure, ignoring that communities also exist between classes during shorter periods.
Quantitative evaluation. In this section, we compare our proposal with other methods on two aspects: scalability and the aggregated properties of the communities found. The methods we compare against are:
- An Identify and Match framework proposed by Greene et al. [5]. We implement it using the Louvain method for community detection and the Jaccard coefficient to match communities, with a minimal similarity threshold of 0.7. We used a custom implementation, sharing the community detection phase with our method (a minimal sketch of this baseline is given after this list).
- The multislice method introduced by Mucha et al. [12]. We used the authors' implementation, with interslice coupling ω = 0.5.
- The dynamic clique percolation method (D-CPM) introduced by Palla et al. [13]. We used a custom implementation; the detection in each snapshot is done using the implementation in the networkx library [6].
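As a reference point, a compact sketch of such a detect & match baseline could look as follows (illustrative code only, not the implementation used here; `window_graphs` is assumed to be the list of graphs aggregated per window, and `louvain_communities` requires networkx >= 2.8):

```python
import networkx as nx

def detect_and_match(window_graphs, jaccard_threshold=0.7):
    """Run Louvain independently on each window, then chain communities across
    consecutive windows when their Jaccard similarity exceeds the threshold."""
    def jaccard(a, b):
        return len(a & b) / len(a | b) if (a | b) else 0.0

    per_window = [[set(c) for c in nx.community.louvain_communities(g)]
                  for g in window_graphs]
    tracks = []  # each track: list of (window_index, node_set)
    for t, communities in enumerate(per_window):
        for c in communities:
            best, best_sim = None, jaccard_threshold
            for track in tracks:
                last_t, last_nodes = track[-1]
                if last_t == t - 1 and jaccard(c, last_nodes) >= best_sim:
                    best, best_sim = track, jaccard(c, last_nodes)
            if best is None:
                tracks.append([(t, c)])   # a new dynamic community is born
            else:
                best.append((t, c))       # continuation of an existing one
    return tracks
```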
For Identify and Match, D-CPM and our approach, the community detection phase is performed in parallel for all snapshots. This is not possible for the method of Mucha et al., since it is performed on all snapshots simultaneously. On the other hand, D-CPM and Identify and Match are methods with no dynamic smoothing. Figure 3 presents the time taken by those methods and by our proposition, for each temporal granularity, on the Math Overflow network.
Figure 3 (caption fragment): 189s (about 36h). OUR and OUR-MP correspond to our method with and without multiprocessing (4 cores).
The task accomplished by our method is, of course, not directly comparable, since it must not only discover communities but also avoid redundancy between communities at different temporal scales, while the other methods yield redundant communities at different levels. Nevertheless, we can observe that the method is scalable to networks with tens of thousands of nodes and hundreds of thousands of interactions. It is slower than the Identify and Match (CD&Match) approach, but does not suffer from the scalability problems of the two other ones (D-CPM and Mucha et al.). In particular, the clique percolation method is not scalable to large and dense networks, a known problem due to the exponential growth in the number of cliques to find. For the method by Mucha et al., the scalability issue is due to the memory representation of a single modularity matrix for all snapshots.
Table 2: Average properties of communities found by each method (independently of their temporal granularity). #Communities: number of communities found. Persistence: number of consecutive snapshots. Size: number of nodes. Stability: average Jaccard coefficient between nodes of the same community in successive snapshots. Density: average degree / (size − 1). Q: 1 − Conductance (higher is better).
In Table 2, we summarize the number of communities found by each method, along with their persistence, size, stability, density and conductance. It is not possible to formally rank those methods based on these values alone, as they correspond to vastly different scenarios. What we can observe is that existing methods yield far more communities than the method we propose, usually at the cost of lower overall quality. When digging into the results, it is clear that the other methods yield many noisy communities: found on a single snapshot for the methods without smoothing, unstable for the smoothed method of Mucha et al., and often with low density or Q.
Conclusion and future work
To conclude, this article only scratches the surface of the possibilities of multiple-temporal-scale community detection. We have proposed a first method for the detection of such structures, which we validated on both synthetic and real-world networks, highlighting the interest of such an approach. The method is proposed as a general, extensible framework, and its code is available as an easy-to-use library, for replication, application and extension.
As an exploratory work, further investigations and improvements are needed. Heuristics or statistical selection procedures could be implemented to reduce the computational complexity. A hierarchical organization of the relations (both temporal and structural) between communities could greatly simplify the interpretation of results. | 7,911 |
1901.10650 | 2924934677 | Person re-identification (re-ID) has attracted much attention recently due to its great importance in video surveillance. In general, distance metrics used to identify two person images are expected to be robust under various appearance changes. However, our work observes the extreme vulnerability of existing distance metrics to adversarial examples, generated by simply adding human-imperceptible perturbations to person images. Hence, the security danger is dramatically increased when deploying commercial re-ID systems in video surveillance. Although adversarial examples have been extensively applied for classification analysis, it is rarely studied in metric analysis like person re-identification. The most likely reason is the natural gap between the training and testing of re-ID networks, that is, the predictions of a re-ID network cannot be directly used during testing without an effective metric. In this work, we bridge the gap by proposing Adversarial Metric Attack, a parallel methodology to adversarial classification attacks. Comprehensive experiments clearly reveal the adversarial effects in re-ID systems. Meanwhile, we also present an early attempt of training a metric-preserving network, thereby defending the metric against adversarial attacks. At last, by benchmarking various adversarial settings, we expect that our work can facilitate the development of adversarial attack and defense in metric-based applications. | Adversarial learning @cite_16 @cite_14 @cite_38 @cite_10 has been incorporated into the training procedure of re-ID systems in many previous works. In these works, generative adversarial networks (GANs) @cite_19 typically act as a data augmentation strategy by generating photo-realistic person images to enhance the training set. For example, Zheng et al. @cite_17 applied a GAN to generate unlabeled images and assigned a uniform label distribution to them during training. Wei et al. @cite_23 proposed the Person Transfer Generative Adversarial Network (PTGAN) to bridge the gap between different datasets. Moreover, Ge et al. @cite_1 proposed the Feature Distilling Generative Adversarial Network to learn identity-related and pose-unrelated representations. In @cite_36 , binary codes are learned for efficient pedestrian matching via the proposed Adversarial Binary Coding. | {
"abstract": [
"",
"",
"Person re-identification (ReID) aims at matching persons across different views scenes. In addition to accuracy, the matching efficiency has received more and more attention because of demanding applications using large-scale data. Several binary coding based methods have been proposed for efficient ReID, which either learn projections to map high-dimensional features to compact binary codes, or directly adopt deep neural networks by simply inserting an additional fully-connected layer with tanh-like activations. However, the former approach requires time-consuming hand-crafted feature extraction and complicated (discrete) optimizations; the latter lacks the necessary discriminative information greatly due to the straightforward activation functions. In this paper, we propose a simple yet effective framework for efficient ReID inspired by the recent advances in adversarial learning. Specifically, instead of learning explicit projections or adding fully-connected mapping layers, the proposed Adversarial Binary Coding (ABC) framework guides the extraction of binary codes implicitly and effectively. The discriminability of the extracted codes is further enhanced by equipping the ABC with a deep triplet network for the ReID task. More importantly, the ABC and triplet network are simultaneously optimized in an end-to-end manner. Extensive experiments on three large-scale ReID benchmarks demonstrate the superiority of our approach over the state-of-the-art methods.",
"Person re-identification (reID) is an important task that requires to retrieve a person's images from an image dataset, given one image of the person of interest. For learning robust person features, the pose variation of person images is one of the key challenges. Existing works targeting the problem either perform human alignment, or learn human-region-based representations. Extra pose information and computational cost is generally required for inference. To solve this issue, a Feature Distilling Generative Adversarial Network (FD-GAN) is proposed for learning identity-related and pose-unrelated representations. It is a novel framework based on a Siamese structure with multiple novel discriminators on human poses and identities. In addition to the discriminators, a novel same-pose loss is also integrated, which requires appearance of a same person's generated images to be similar. After learning pose-unrelated person features with pose guidance, no auxiliary pose information and additional computational cost is required during testing. Our proposed FD-GAN achieves state-of-the-art performance on three person reID datasets, which demonstrates that the effectiveness and robust feature distilling capability of the proposed FD-GAN.",
"We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.",
"Although the performance of person Re-Identification (ReID) has been significantly boosted, many challenging issues in real scenarios have not been fully investigated, e.g., the complex scenes and lighting variations, viewpoint and pose changes, and the large number of identities in a camera network. To facilitate the research towards conquering those issues, this paper contributes a new dataset called MSMT171 with many important features, e.g., 1) the raw videos are taken by an 15-camera network deployed in both indoor and outdoor scenes, 2) the videos cover a long period of time and present complex lighting variations, and 3) it contains currently the largest number of annotated identities, i.e., 4,101 identities and 126,441 bounding boxes. We also observe that, domain gap commonly exists between datasets, which essentially causes severe performance drop when training and testing on different datasets. This results in that available training data cannot be effectively leveraged for new testing domains. To relieve the expensive costs of annotating new training samples, we propose a Person Transfer Generative Adversarial Network (PTGAN) to bridge the domain gap. Comprehensive experiments show that the domain gap could be substantially narrowed-down by the PTGAN.",
"Person re-identification (ReID) is the task of retrieving particular persons across different cameras. Despite its great progress in recent years, it is still confronted with challenges like pose variation, occlusion, and similar appearance among different persons. The large gap between training and testing performance with existing models implies the insufficiency of generalization. Considering this fact, we propose to augment the variation of training data by introducing Adversarially Occluded Samples. These special samples are both a) meaningful in that they resemble real-scene occlusions, and b) effective in that they are tough for the original model and thus provide the momentum to jump out of local optimum. We mine these samples based on a trained ReID model and with the help of network visualization techniques. Extensive experiments show that the proposed samples help the model discover new discriminative clues on the body and generalize much better at test time. Our strategy makes significant improvement over strong baselines on three large-scale ReID datasets, Market1501, CUHK03 and DukeMTMC-reID.",
"Person Re-identification (re-id) faces two major challenges: the lack of cross-view paired training data and learning discriminative identity-sensitive and view-invariant features in the presence of large pose variations. In this work, we address both problems by proposing a novel deep person image generation model for synthesizing realistic person images conditional on the pose. The model is based on a generative adversarial network (GAN) designed specifically for pose normalization in re-id, thus termed pose-normalization GAN (PN-GAN). With the synthesized images, we can learn a new type of deep re-id features free of the influence of pose variations. We show that these features are complementary to features learned with the original images. Importantly, a more realistic unsupervised learning setting is considered in this work, and our model is shown to have the potential to be generalizable to a new re-id dataset without any fine-tuning. The codes will be released at https: github.com naiq PN_GAN.",
"The main contribution of this paper is a simple semisupervised pipeline that only uses the original training set without collecting extra data. It is challenging in 1) how to obtain more training data only from the training set and 2) how to use the newly generated data. In this work, the generative adversarial network (GAN) is used to generate unlabeled samples. We propose the label smoothing regularization for outliers (LSRO). This method assigns a uniform label distribution to the unlabeled images, which regularizes the supervised model and improves the baseline. We verify the proposed method on a practical problem: person re-identification (re-ID). This task aims to retrieve a query person from other cameras. We adopt the deep convolutional generative adversarial network (DCGAN) for sample generation, and a baseline convolutional neural network (CNN) for representation learning. Experiments show that adding the GAN-generated data effectively improves the discriminative ability of learned CNN embeddings. On three large-scale datasets, Market- 1501, CUHK03 and DukeMTMC-reID, we obtain +4.37 , +1.6 and +2.46 improvement in rank-1 precision over the baseline CNN, respectively. We additionally apply the proposed method to fine-grained bird recognition and achieve a +0.6 improvement over a strong baseline. The code is available at https: github.com layumi Person-reID_GAN."
],
"cite_N": [
"@cite_38",
"@cite_14",
"@cite_36",
"@cite_1",
"@cite_19",
"@cite_23",
"@cite_16",
"@cite_10",
"@cite_17"
],
"mid": [
"",
"",
"2794742204",
"2890159224",
"2099471712",
"2963047834",
"2798590501",
"2964186374",
"2585635281"
]
} | Metric Attack and Defense for Person Re-identification | In recent years, person re-identification (re-ID) [22,50] has attracted great attention in the computer vision community, driven by the increasing demand of video surveillance in public space. Hence, great effort has been devoted to developing robust re-ID features [8,15,38,9,26] and distance metrics [35,51,29,7] to overcome the large intraclass variations of person images in viewpoint, pose, illumination, blur, occlusion and resolution. For example, the rank-1 accuracy of the latest state-of-the-art on the Market-1501 dataset [49] is 93.8 [28], increasing rapidly from 44.4 when the dataset was first released in 2015. However, we draw researchers' attention to the fact that re-ID systems can be very vulnerable to adversarial attacks. Fig. 1 shows a case where a probe image is presented. Of the two gallery images, the true positive has a large similarity value and the true negative has a small one. Nevertheless, after adding human-imperceptible perturbations to the gallery images, the metric is easily fooled even though the new gallery images appear the same as the original ones.
Adversarial examples have been extensively investigated in classification analysis, such as image classification [24,6], object detection [44], semantic segmentation [1], etc. However, they have not attracted much attention in the field of re-ID, a metric analysis task whose basic goal is to learn a discriminative distance metric. A very likely reason is the existence of a natural gap between the training and testing of re-ID networks.
Fig. 2 (caption): The adversarial examples (red color) generated by the classification attack cross over the class decision boundary, but preserve the pairwise distance between them to a large extent.
While a re-ID model is usually trained with a certain classification loss, it discards the concept of class decision boundaries during testing and adopts a metric function to measure the pairwise distances between the probe and gallery images. Consequently, previous works on classification attacks [24,6] do not generalize to re-ID systems, i.e., they attempt to push images across the class decision boundaries and do not necessarily lead to a corrupted pairwise distance between images (see Fig. 2). Note that some re-ID networks are directly guided by metric losses (e.g., contrastive loss [16]), and their output can measure the between-object distances. However, it is still infeasible to directly attack such output owing to the sampling difficulty and computational complexity. Therefore, a common practice in re-ID is to take the trained model as a feature extractor and measure the similarities in a metric space.
Considering the importance of security for re-ID systems and the lack of systematic studies on their robustness towards adversarial examples, we propose Adversarial Metric Attack, an efficient and generic methodology to generate adversarial examples by attacking metric learning systems. The contributions of this work are five-fold: 1) Our work presents what is, to our knowledge, the first systematic and rigorous investigation of adversarial effects in person re-identification, which should be taken into consideration when deploying re-ID algorithms in real surveillance systems.
2) We propose adversarial metric attack, a parallel methodology to the existing adversarial classification attacks [39,14], which can potentially be applied to other safety-critical applications that rely on distance metrics (e.g., face verification [40] and tracking [18]).
3) We define and benchmark various experimental settings for metric attack in re-ID, including white-box and black-box attack, non-targeted and targeted attack, single-model and multi-model attack, etc. Under those experimental settings, comprehensive experiments are carried out with different distance metrics and attack methods.
4) We present an early attempt on adversarial metric defense, and show that adversarial examples generated by attacking metrics can be used in turn to train a metric-preserving network.
5) The code will be publicly available to easily generate the adversarial version of benchmark datasets (see examples in supplementary material), which can serve as a useful testbed to evaluate the robustness of re-ID algorithms.
We hope that our work can facilitate the development of robust feature learning and accelerate the progress on adversarial attack and defense of re-ID systems with the methodology and the experimental conclusions presented.
Adversarial Metric Attack
Person re-identification [12] is comprised of three sets of images: the probe set $P = \{p_i\}_{i=1}^{N_p}$, the gallery set $X = \{x_i\}_{i=1}^{N_x}$, and the training set $Y = \{y_i\}_{i=1}^{N_y}$. A label set L is also given to annotate the identity of each image for training and evaluation. A general re-ID pipeline is: 1) learn a feature extractor F with parameters Θ (usually by training a neural network) by imposing a loss function J on L and Y; 2) extract the activations of intermediate layers of P and X as their visual features F(P, Θ) and F(X, Θ), respectively; 3) compute the distance between F(P, Θ) and F(X, Θ) for indexing. When representing features, F(·) and Θ are omitted where possible for notational simplicity.
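A minimal sketch of this test-time pipeline is given below (illustrative code; `model` stands for any trained backbone used as a feature extractor, and images are assumed to be batched torch tensors):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def extract_features(model, images, batch_size=32):
    """Step 2: run the trained network as a feature extractor and
    L2-normalize the activations taken before the loss layer."""
    feats = [model(images[i:i + batch_size])
             for i in range(0, len(images), batch_size)]
    return F.normalize(torch.cat(feats), dim=1)

def pairwise_sq_euclidean(probe_feats, gallery_feats):
    """Step 3: squared Euclidean distance between every probe/gallery pair,
    used to rank the gallery for each probe."""
    return torch.cdist(probe_feats, gallery_feats, p=2) ** 2
```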
In this paper, we aim to generate adversarial examples for re-ID models. As explained in Sec. 1, a different attack mechanism is required for metric learning systems as opposed to the existing attack methods which focus on classification-based models [14,24]. Instead of attacking the loss function used in training the neural network as done in these previous works, we discard the training loss and propose to attack the distance metric. Such an attack mechanism directly results in the corruption of the pairwise distance between images, thus leading to guaranteed accuracy compromises of a re-ID system. This is the gist of the methodology proposed in this work, which we call adversarial metric attack.
Adversarial metric attack consists of four components, including models for attack, metrics for attack, methods for attack and adversarial settings for attack. In the first component (Sec. 3.1), we train the model (with parameters Θ) on the training set Y as existing re-ID algorithms do. The model parameters are then fixed during attacking. In the second component (Sec. 3.2), a metric loss D is determined as the attack target. In the third component (Sec. 3.3), an optimization method for producing adversarial examples is selected. In the last component (Sec. 3.4), by setting the probe set P as a reference, we generate adversarial examples on the gallery set X in a specific adversarial setting.
Models for Attack
In the proposed methodology, the model for attack is not limited to classification-based models, as opposed to [14,24]. Instead, most re-ID models [19,4,37,31] can be used. We only review two representative baseline models, which are commonly seen in person re-identification.
Cross Entropy Loss. The re-ID model is trained with the standard cross-entropy loss and the labels are the identities of training images. It is defined as
$$J(Y, L) = -\sum_{i}\sum_{j} \mathbb{1}\big(l(y_i) = j\big)\, \log q_i^j, \qquad (1)$$
where $q_i^j$ is the classification probability of the i-th training sample for the j-th category and $l(y_i)$ is the ground-truth label of $y_i \in Y$.
Triplet Loss. The re-ID model is trained with the triplet loss, defined as
$$J(Y, L) = \sum_{l_a = l_p \neq l_n} \big[\, d(y_a, y_p) - d(y_a, y_n) + m \,\big]_{+}, \qquad (2)$$
where $y_a$ denotes the anchor point, $y_p$ the positive point and $y_n$ the negative point. The motivation is that the positive $y_p$, belonging to the same identity as the anchor $y_a$, should be closer to $y_a$ than the negative $y_n$, which belongs to another identity, by at least a margin m.
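Both training losses are standard; a hedged PyTorch sketch of Eqs. (1) and (2) is shown below (the naive triplet enumeration is for clarity only, and the margin value is a placeholder):

```python
import torch
import torch.nn.functional as F

def identity_cross_entropy(logits, labels):
    # Eq. (1): classify each training image into its identity.
    return F.cross_entropy(logits, labels)

def triplet_loss(features, labels, margin=0.3):
    # Eq. (2): hinge over (anchor, positive, negative) triplets within a batch.
    dist = torch.cdist(features, features, p=2) ** 2
    loss = features.new_tensor(0.0)
    n = len(labels)
    for a in range(n):
        for p in range(n):
            if p == a or labels[p] != labels[a]:
                continue
            for neg in range(n):
                if labels[neg] != labels[a]:
                    loss = loss + F.relu(dist[a, p] - dist[a, neg] + margin)
    return loss
```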
Metrics for Attack
Metric learning (e.g., XQDA [29], KISSME [23]) has dominated the landscape of re-ID for a long time. Mathematically, a metric defined between the probe set P and the gallery set X is a function D : P × X → [0, ∞), which assigns non-negative values for each pair of p ∈ P and x ∈ X. We also use notation d(p, x) to denote the distance between p and x in the metric space D. In this section, we give the formal definition of metric loss used in adversarial metric attack. It should be mentioned that any differentiable metric (or similarity) function can be used as the target loss.
Euclidean distance is a widely used distance metric. The metric loss is defined as
$$d(p, x) = \|p - x\|_2^2, \qquad (3)$$
which computes the squared Euclidean distance between p and x.
Mahalanobis distance is a generalization of the Euclidean distance that considers the correlation of different feature dimensions. Accordingly, we can have a metric loss as
$$d(p, x) = (p - x)^{T} M (p - x), \qquad (4)$$
where M is a positive semidefinite matrix.
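Written as differentiable functions of the extracted features, the two metric losses of Eqs. (3) and (4) can be sketched as follows (illustrative code; M is assumed to be a positive semidefinite matrix, e.g., one learned by XQDA):

```python
import torch

def euclidean_metric_loss(probe_feats, gallery_feats):
    # Eq. (3), summed over all probe/gallery pairs.
    return (torch.cdist(probe_feats, gallery_feats, p=2) ** 2).sum()

def mahalanobis_metric_loss(probe_feats, gallery_feats, M):
    # Eq. (4): (p - x)^T M (p - x), summed over all probe/gallery pairs.
    diff = probe_feats.unsqueeze(1) - gallery_feats.unsqueeze(0)  # [Np, Nx, d]
    return torch.einsum('pxd,de,pxe->', diff, M, diff)
```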
Methods for Attack
Given a metric loss defined above, we aim at learning an adversarial example $x^{adv} = x + r$, where $x \in X$ denotes a certain gallery image and r denotes the adversarial perturbation. The $L_\infty$ norm is used to measure the perceptibility of the perturbation, i.e., $\|r\|_\infty \leq \epsilon$, where $\epsilon$ is a small constant.
To this end, we introduce the following three attack methods, including:
Fast Gradient Sign Method (FGSM) [14] is a single step attack method. It generates adversarial examples by
$$X^{adv} = X + \epsilon \cdot \mathrm{sign}\!\left(\frac{\partial D(P, X)}{\partial X}\right), \qquad (5)$$
where $\epsilon$ measures the maximum magnitude of the adversarial perturbation and sign(·) denotes the signum function.
Iterative Fast Gradient Sign Method (I-FGSM) [24] is an iterative version of FGSM, defined as
$$X^{adv}_0 = X, \qquad X^{adv}_{n+1} = \Psi_X\!\left(X^{adv}_n + \alpha \cdot \mathrm{sign}\!\left(\frac{\partial D(P, X^{adv}_n)}{\partial X^{adv}_n}\right)\right), \qquad (6)$$
where n denotes the iteration number and α is the step size. $\Psi_X$ is a clip function that ensures the generated adversarial example stays within the $\epsilon$-ball of the original image.
Momentum Iterative Fast Gradient Sign Method (MI-FGSM) [6] adds the momentum term on top of I-FGSM to stabilize update directions. It is defined as
$$g_{n+1} = \mu \cdot g_n + \frac{\partial D(P, X^{adv}_n)/\partial X^{adv}_n}{\left\|\partial D(P, X^{adv}_n)/\partial X^{adv}_n\right\|_1}, \qquad X^{adv}_{n+1} = \Psi_X\!\left(X^{adv}_n + \alpha \cdot \mathrm{sign}(g_{n+1})\right), \qquad (7)$$
where µ is the decay factor of the momentum term and $g_n$ is the accumulated gradient at the n-th iteration.
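A hedged PyTorch sketch of the non-targeted iterative attacks (Eqs. (5)-(7)) applied to a metric loss is given below; `metric_loss` is one of the functions sketched above, `model` is the fixed feature extractor, images are assumed to be NCHW batches with pixel values in [0, 255], and passing a momentum value turns I-FGSM into MI-FGSM:

```python
import torch

def iterative_metric_attack(model, metric_loss, probe_imgs, gallery_imgs,
                            eps=5.0, alpha=1.0, n_iter=6, momentum=None):
    """Gradient ascent on D(P, X) w.r.t. the gallery images, clipped to the
    eps-ball of the originals (the Psi_X operator) and to valid pixel values."""
    with torch.no_grad():
        probe_feats = model(probe_imgs)      # the probe set stays clean
    x_adv = gallery_imgs.clone().detach()
    g = torch.zeros_like(x_adv)
    for _ in range(n_iter):
        x_adv.requires_grad_(True)
        loss = metric_loss(probe_feats, model(x_adv))
        grad, = torch.autograd.grad(loss, x_adv)
        if momentum is not None:             # MI-FGSM accumulation, Eq. (7)
            g = momentum * g + grad / grad.abs().sum(dim=(1, 2, 3), keepdim=True)
        else:                                # plain I-FGSM, Eq. (6)
            g = grad
        x_adv = x_adv.detach() + alpha * g.sign()
        x_adv = torch.min(torch.max(x_adv, gallery_imgs - eps), gallery_imgs + eps)
        x_adv = x_adv.clamp(0, 255)
    return x_adv.detach()
```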
Benchmark Adversarial Settings
In this section, we benchmark the experimental settings for adversarial metric attack in re-ID .
White-box and Black-box Attack
White-box attack requires the attackers to have prior knowledge of the target networks, which means that the adversarial examples are generated with and tested on the same network having parameters Θ.
It should be mentioned that for adversarial metric attack, the loss layer used during training is replaced by the metric loss when attacking the network.
Black-box attack means that the attackers do not know the structures or the weights of the target network. That is to say, the adversarial examples are generated with a network having parameters Θ and used to attack metric on another network which differs in structures, parameters or both.
Targeted and Non-targeted Attack
Non-targeted attack aims to widen the metric distance between image pairs of the same identity. Given a probe image p and a gallery image x with l(p) = l(x), their distance d(p, x) is ideally small. After imposing a non-targeted attack on the distance metric, the distance $d(p, x^{adv})$ between p and the generated adversarial example $x^{adv}$ is enlarged. Hence, when p serves as the query, $x^{adv}$ will not be ranked high in the ranking list of p (see Fig. 4(a)).
Non-targeted attack can be achieved by applying the attack methods described in Sec. 3.3 to the metric losses described in Sec. 3.2.
Targeted attack aims to draw the gallery image towards the probe image in the metric space. This type of attack is usually performed on image pairs with different identities, i.e., l(p) ≠ l(x), which correspond to a large d(p, x) value. The generated $x^{adv}$ becomes closer to the query image p in the metric space, deceiving the network into predicting $l(x^{adv}) = l(p)$. Hence, one can frequently observe adversarial examples generated by a targeted attack in top positions of the ranking list of p (see Fig. 4(b)).
Unlike non-targeted attack, where adversarial examples do not steer the network towards a specific identity, targeted attack finds adversarial perturbations with pre-determined target labels during the learning procedure and tries to decrease the value of the objective function. This incurs a slight modification to the attack methods described in Sec. 3.3. For example, the formulation of FGSM [14] is changed to
$$X^{adv} = X - \epsilon \cdot \mathrm{sign}\!\left(\frac{\partial D(P, X)}{\partial X}\right). \qquad (8)$$
The update procedure of I-FGSM [24] and MI-FGSM [6] can be modified similarly.
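In code, the targeted variant only flips the sign of the update in the attack sketch above, i.e., gradient descent instead of ascent on the metric loss (reusing the `x_adv`, `g` and `alpha` names from that sketch):

```python
def targeted_update(x_adv, g, alpha):
    # Targeted counterpart of the update step, cf. Eq. (8): descend on D(P, X)
    # so that the adversarial gallery image moves towards the (wrong-identity) probe.
    return x_adv.detach() - alpha * g.sign()
```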
Single-model and Multi-model Attack
Adversarial Metric Defense
Here we present an early attempt at training a metric-preserving network to defend a distance metric.
The procedure is divided into four steps: 1) learn a clean model F with parameters Θ by imposing a loss function J on L and Y; 2) perform the adversarial metric attack described in Sec. 3 on F with the training set Y, obtaining the adversarial version of the training set, $Y^{adv}$; 3) merge Y and $Y^{adv}$, and re-train a metric-preserving model $F^{adv}$; 4) use $F^{adv}$ as the testing model in place of F.
As for the performance, we find that $F^{adv}$ closely matches F when testing on the original (clean) gallery set X, but significantly outperforms F when testing on the adversarial version of the gallery set, $X^{adv}$. In this sense, re-ID systems gain robustness to adversarial metric attacks.
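A sketch of this defense procedure is shown below (illustrative code; `build_model` and `train_fn` are hypothetical helpers, the attack reuses the `iterative_metric_attack` sketch above, and treating the training set as both probe and gallery in step 2 is one simple instantiation, not necessarily the exact protocol used in the experiments):

```python
import torch

def train_metric_preserving_model(train_imgs, train_labels, build_model, train_fn,
                                  metric_loss, eps=5.0):
    clean_model = train_fn(build_model(), train_imgs, train_labels)       # step 1
    adv_imgs = iterative_metric_attack(clean_model, metric_loss,          # step 2
                                       probe_imgs=train_imgs,
                                       gallery_imgs=train_imgs, eps=eps)
    merged_imgs = torch.cat([train_imgs, adv_imgs])                       # step 3
    merged_labels = torch.cat([train_labels, train_labels])
    return train_fn(build_model(), merged_imgs, merged_labels)            # step 4
```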
Experiments
This section evaluates the proposed adversarial metric attack and adversarial metric defense.
Datasets. The Market-1501 dataset [49] is a widely accepted benchmark for person re-ID. It consists of 1501 identities.
Baselines. Four base models are implemented. Specifically, we take ResNet-50 [17], ResNeXt-50 [45] and DenseNet-121 [20] pretrained on ImageNet [5] as the backbone models. The three networks are supervised by the cross-entropy loss, yielding three base models denoted as B1, B2 and B3, respectively. Meanwhile, we also supervise ResNet-50 [17] with the triplet loss [19] and obtain the base model B4.
All the models are trained using the Adam optimizer for 60 epochs with a batch size of 32. When testing, we extract the $L_2$-normalized activations from the networks before the loss layer as the image features.
State-of-the-art Methods. As there exists a huge number of re-ID algorithms [3,2,27,10,30,47,46], it is unrealistic to evaluate all of them. Here, we reproduce two representatives which achieve the latest state-of-the-art performances, i.e., Harmonious Attention CNN (HACNN) [28] and Multi-task Attentional Network with Curriculum Sampling (Mancs) [41]. Both of them employ attention mechanisms to address person misalignment. We follow the default settings correspondingly and report their performances as well as those of the four base models in Table 1.
Experimental Design. The design of experiments involves various settings, including different distance metrics, different attack methods, white-box and black-box attack, non-targeted and targeted attack, and single-model and multi-model attack, as described in Sec. 3.
If not specified otherwise, we use the Euclidean distance defined in Eq. (3) as the metric, use I-FGSM defined in Eq. (6) with $\epsilon = 5$ as the attack method, and perform white-box non-targeted attacks on base model B1. For the other parameters, we set α = 1 in Eq. (6) and µ = 1 in Eq. (7). The iteration number n is set to min($\epsilon + 4$, $1.25\epsilon$) following [24].
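For reference, these default hyper-parameters can be collected in a few lines (the rounding of the iteration rule is our own choice):

```python
def default_attack_config(eps=5):
    return dict(
        eps=eps,                                     # maximum perturbation magnitude
        alpha=1,                                     # step size in Eq. (6)
        mu=1,                                        # momentum decay in Eq. (7)
        n_iter=int(min(eps + 4, round(1.25 * eps)))  # iteration rule following [24]
    )
```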
White-box and Black-box Attack
Adversarial metric attack is first evaluated with a single model. For each query class, we generate adversarial examples on the corresponding gallery set. Thus, an adversarial version of the gallery set can be stored off-line and used for performance evaluation. The maximum magnitude of adversarial perturbation is set to 5 on the Market-1501 dataset in Table 2 and on the DukeMTMC-reID dataset in Table 3, which is still imperceptible to human vision (examples shown in Fig. 1). Therein, we present the networks that we attack in rows and the networks that we test on in columns.
At first glance, one can clearly observe the adversarial effect of different metrics. For instance, the performance of B1 decreases sharply from an mAP of 77.52 to 0.367 under white-box attack, and to 22.29 under black-box attack, on the Market-1501 dataset. On the DukeMTMC-reID dataset, its performance drops from 67.72 to 0.178 under white-box attack, and to 18.12 under black-box attack. The state-of-the-art methods HACNN [28] and Mancs [41] suffer a dramatic performance decrease from mAP 75.28 to 37.98 and from 82.50 to 30.90, respectively, on the Market-1501 dataset.
Second, the performance under white-box attack is much lower than that under black-box attack. This is easy to understand, as the attack methods can generate adversarial examples that overfit the attacked model. Among the three attack methods, I-FGSM [24] delivers the strongest white-box attacks. Comparatively, MI-FGSM [6] is the most capable of learning adversarial examples for black-box attack. This observation is consistent across different base models, different state-of-the-art methods, different magnitudes of adversarial perturbation and different datasets. This conclusion is somewhat contrary to that drawn for classification attacks [25], where non-iterative algorithms like FGSM [14] generally generalize better. In summary, we suggest integrating iteration-based attack methods for adversarial metric attack as they have a higher attack rate.
Moreover, HACNN [28] and Mancs [41] are more robust to adversarial examples than the four base models. When attacked by the same set of adversarial examples, they outperform the baselines by a large margin, although Table 1 shows that they only achieve comparable or even worse performances on clean images. For instance, in Table 2, when attacking B1 using MI-FGSM in the black-box setting, the best mAP achieved by the baselines is 25.53 on the Market-1501 dataset. In comparison, HACNN reports an mAP of 37.98 and Mancs an mAP of 30.90. A possible reason is that they both have more sophisticated modules and computational mechanisms, e.g., attention selection. However, it remains unclear, and needs to be investigated in the future, which kinds of modules are robust and why they manifest robustness to adversary.
Table 3: The mAP comparison of white-box attack (in shadow) and black-box attack (others) when $\epsilon = 5$ on the DukeMTMC-reID dataset. For each combination of settings, the worst performances are marked in bold.
Lastly, the robustness of HACNN [28] and that of Mancs [41] to adversary are also quite different. In most adversarial settings, HACNN outperforms Mancs remarkably, revealing that it is less vulnerable to adversary. Only when attacking B2 or B3 using FGSM on the DukeMTMC-reID dataset does Mancs seem to be better than HACNN (mAP 42.87 vs. 41.42). However, it should be emphasized that the baseline performance of HACNN is much worse than that of Mancs on clean images, as presented in Table 1 (mAP 75.28 vs. 85.20 on the Market-1501 dataset and mAP 64.44 vs. 72.89 on the DukeMTMC-reID dataset). To eliminate the influence of the differences in baseline performance, we adopt a relative measurement of accuracy using the mAP ratio, i.e., the ratio of the mAP on adversarial examples to that on clean images. A large mAP ratio indicates that the performance decrease is smaller and thus that the model is more robust to adversary. We compare the mAP ratios of HACNN and Mancs in Fig. 3. As shown, HACNN consistently achieves a higher mAP ratio than Mancs in the adversarial settings.
From another point of view, achieving better performances on benchmark datasets does not necessarily mean that the algorithm has better generalization capacity. Therefore, it would be helpful to evaluate re-ID algorithms under the same adversarial settings to justify the potential of deploying them in real environments.
Single-model and Multi-model Attack
As shown in Sec. 5.1, black-box attacks yield much higher mAP than white-box attacks, which means that the generated adversarial examples do not transfer well to other models for testing. Attacking multiple models simultaneously can be helpful to improve the transferability.
To achieve this, we perform adversarial metric attack on an ensemble of three out of the four base models. Then, the evaluation is done on the ensembled network and on the hold-out network. Note that in this case, attacks on the "ensembled network" correspond to white-box attacks, as the base models in the ensemble have been seen by the attacker during adversarial metric attack. In contrast, attacks on the "hold-out network" correspond to black-box attacks, as this network is not used to generate adversarial examples.
Table 4: The mAP comparison of multi-model attack (white-box in shadow) when $\epsilon = 5$. The symbol "-" indicates the name of the hold-out base model. For each combination of settings, the worst performances are marked in bold.
We list the performances of multi-model attacks in Table 4. As indicated clearly, the identification rate under black-box attacks continues to degenerate. For example, Table 2 shows that the worst performance of B1 is an mAP of 22.29 when attacking the single model B3 via MI-FGSM on the Market-1501 dataset. Under the same adversarial setting, the performance of B1 becomes 14.94 when attacking an ensemble of B2, B3 and B4. When attacking multiple models, the lowest mAP of HACNN [28] is merely 30.45 on the Market-1501 dataset, a sharp decrease of 7.53 from the 37.98 reported in Table 2 under the same adversarial settings.
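Multi-model attack can be sketched by accumulating the metric loss over all white-box models of the ensemble before taking the gradient (loss-level averaging is an assumption here; other fusion schemes are possible):

```python
import torch

def ensemble_metric_loss(models, metric_loss, probe_imgs, x_adv):
    """Average the metric loss over an ensemble of feature extractors;
    its gradient drives a single shared perturbation on the gallery images."""
    total = 0.0
    for model in models:
        with torch.no_grad():
            probe_feats = model(probe_imgs)
        total = total + metric_loss(probe_feats, model(x_adv))
    return total / len(models)
```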
Targeted and Non-targeted Attack
From Fig. 4, one can clearly observe the different effects of non-targeted and targeted attacks.
The goal of a non-targeted metric attack is to maximize the distances (minimize the similarities) between a given probe and the adversarial gallery images. Consequently, true positives are pushed down in the ranking list, as shown in the first two rows of Fig. 4(a). However, it is indeterminable beforehand what the top-ranked images will be and to which probe the adversary will be similar, as shown in the third row. In comparison, a targeted metric attack tries to minimize the distances between the given probe and the adversarial gallery images. Therefore, we find a large portion of adversarial images among the top-ranked candidates in the third row of Fig. 4(b). It is surprising to see that the metric is so easily fooled, incorrectly retrieving male person images when a female person image serves as the probe.
For real applications in video surveillance, the non-targeted metric attack prevents the system from correctly retrieving the desired results, while the targeted metric attack deliberately tricks the system into retrieving person images of a wrong identity.
Euclidean and Mahalanobis Metric
Within our framework, distance metrics are used in two phases, that is, one metric is used to perform the adversarial metric attack and one is used to evaluate the performance. For the Mahalanobis distance, we use a representative called Cross-view Quadratic Discriminant Analysis (XQDA) [29]. Unfortunately, by integrating metric learning with deep features, we do not observe an improvement of the baseline performance, despite the fact that metric learning is extensively proven to be compatible with non-deep features (e.g., LOMO [29], GOG [34]). We obtain a rank-1 accuracy of 89.73 and an mAP of 75.86 using XQDA, lower than the rank-1 accuracy of 91.30 and mAP of 77.52 achieved by the Euclidean distance reported in Table 1.
From Fig. 5, it is unsurprising to observe that the performance of different metric combinations decreases quickly as the maximum magnitude of adversarial perturbation increases. We also note that the iteration-based attack methods such as I-FGSM and MI-FGSM can severely mislead the distance metric with 5-pixel perturbations.
Second, we observe an interesting phenomenon which is consistent across the different attack methods. When attacking the Euclidean distance and testing with XQDA, the performance is better than in the setting where attacking and testing are both carried out with the Euclidean distance. This is also the case when we attack XQDA and test with the Euclidean distance. In other words, it is beneficial to adversarial metric defense if we use different metrics for metric attack and for performance evaluation. From another perspective, this can be interpreted through the conclusion drawn in Sec. 5.1, i.e., we can regard the change of metrics as a kind of black-box attack. In other words, we are using adversarial examples generated with a model using a certain distance metric to test another model which differs from the original model in its choice of distance metric.
Table 5: The mAP comparison between normally trained models (denoted by #N) and metric-preserving models (denoted by #M) on the Market-1501 dataset. #I means the relative improvement.
Figure 5: The mAP comparison of FGSM (a) and I-FGSM (b) when varying the maximum magnitude of adversarial perturbation and the selection of distance metric. In the legend, the part before the symbol "/" denotes the metric loss used for the metric attack and the part after "/" denotes the metric used to evaluate the performance.
Evaluating Adversarial Metric Defense
In Table 5, we evaluate metric defense by comparing the performance of normally trained models with that of metric-preserving models on the Market-1501 dataset. When testing on the original clean gallery set, a slight performance decrease, generally smaller than 10%, is observed after using metric-preserving models. However, when purely testing on the adversarial version of the gallery images, the performance is significantly improved. For instance, when attacking B3 and testing on B1, the performance is originally 24.72 and is improved to 70.46, a relative improvement of 185%. In real video surveillance, deploying metric-preserving models can therefore improve the robustness of re-ID systems.
Conclusion
In this work, we have studied the adversarial effects in person re-identification (re-ID). By observing that most existing works on adversarial examples only perform classification attacks, we propose the adversarial metric attack as a parallel methodology to be used in metric analysis.
By performing metric attack, adversarial examples can be easily generated for person re-identification. The latest state-of-the-art re-ID algorithms suffer a dramatic performance drop when they are attacked by the adversarial examples generated in this work, exposing the potential security issue of deploying re-ID algorithms in real video surveillance systems. To facilitate the development of metric attack in person re-identification, we have benchmarked and introduced various adversarial settings, including white-box and black-box attack, targeted and non-targeted attack, single-model and multi-model attack, etc. Extensive experiments on two large-scale re-ID datasets have reached some useful conclusions, which can be a helpful reference for future works. Moreover, benefiting from adversarial metric attack, we present an early attempt at training metric-preserving networks to significantly improve the robustness of re-ID models to adversary.
1901.10650 | 2924934677 | Person re-identification (re-ID) has attracted much attention recently due to its great importance in video surveillance. In general, distance metrics used to identify two person images are expected to be robust under various appearance changes. However, our work observes the extreme vulnerability of existing distance metrics to adversarial examples, generated by simply adding human-imperceptible perturbations to person images. Hence, the security danger is dramatically increased when deploying commercial re-ID systems in video surveillance. Although adversarial examples have been extensively applied for classification analysis, it is rarely studied in metric analysis like person re-identification. The most likely reason is the natural gap between the training and testing of re-ID networks, that is, the predictions of a re-ID network cannot be directly used during testing without an effective metric. In this work, we bridge the gap by proposing Adversarial Metric Attack, a parallel methodology to adversarial classification attacks. Comprehensive experiments clearly reveal the adversarial effects in re-ID systems. Meanwhile, we also present an early attempt of training a metric-preserving network, thereby defending the metric against adversarial attacks. At last, by benchmarking various adversarial settings, we expect that our work can facilitate the development of adversarial attack and defense in metric-based applications. | However, to the best of our knowledge, no prior work has rigorously considered the robustness of re-ID systems towards adversarial attacks, which have received wide attention in the context of classification-based tasks, including image classification @cite_46 , object detection @cite_6 and semantic segmentation @cite_3 . As these vision tasks aim to sort an image into a category, they are special cases of the broader classification problem. On such systems, it has been demonstrated that adding carefully generated human-imperceptible perturbations to an input image can easily cause the network to misclassify the perturbed image with high confidence. These tampered images are known as adversarial examples. Great efforts have been devoted to the generation of adversarial examples @cite_28 @cite_46 @cite_26 . In contrast, our work focuses on adversarial attacks on metric learning systems, which analyze the relationship between two images. | {
"abstract": [
"Deep neural networks are vulnerable to adversarial examples, which poses security concerns on these algorithms due to the potentially severe consequences. Adversarial attacks serve as an important surrogate to evaluate the robustness of deep learning models before they are deployed. However, most of existing adversarial attacks can only fool a black-box model with a low success rate. To address this issue, we propose a broad class of momentum-based iterative algorithms to boost adversarial attacks. By integrating the momentum term into the iterative process for attacks, our methods can stabilize update directions and escape from poor local maxima during the iterations, resulting in more transferable adversarial examples. To further improve the success rates for black-box attacks, we apply momentum iterative algorithms to an ensemble of models, and show that the adversarially trained models with a strong defense ability are also vulnerable to our black-box attacks. We hope that the proposed methods will serve as a benchmark for evaluating the robustness of various deep models and defense methods. With this method, we won the first places in NIPS 2017 Non-targeted Adversarial Attack and Targeted Adversarial Attack competitions.",
"Abstract: Several machine learning models, including neural networks, consistently misclassify adversarial examples---inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature. This explanation is supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets. Moreover, this view yields a simple and fast method of generating adversarial examples. Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset.",
"Deep Neural Networks (DNNs) have been demonstrated to perform exceptionally well on most recognition tasks such as image classification and segmentation. However, they have also been shown to be vulnerable to adversarial examples. This phenomenon has recently attracted a lot of attention but it has not been extensively studied on multiple, large-scale datasets and complex tasks such as semantic segmentation which often require more specialised networks with additional components such as CRFs, dilated convolutions, skip-connections and multiscale processing. In this paper, we present what to our knowledge is the first rigorous evaluation of adversarial attacks on modern semantic segmentation models, using two large-scale datasets. We analyse the effect of different network architectures, model capacity and multiscale processing, and show that many observations made on the task of classification do not always transfer to this more complex task. Furthermore, we show how mean-field inference in deep structured models and multiscale processing naturally implement recently proposed adversarial defenses. Our observations will aid future efforts in understanding and defending against adversarial examples. Moreover, in the shorter term, we show which segmentation models should currently be preferred in safety-critical applications due to their inherent robustness.",
"It has been well demonstrated that adversarial examples, i.e., natural images with visually imperceptible perturbations added, cause deep networks to fail on image classification. In this paper, we extend adversarial examples to semantic segmentation and object detection which are much more difficult. Our observation is that both segmentation and detection are based on classifying multiple targets on an image (e.g., the target is a pixel or a receptive field in segmentation, and an object proposal in detection). This inspires us to optimize a loss function over a set of targets for generating adversarial perturbations. Based on this, we propose a novel algorithm named Dense Adversary Generation (DAG), which applies to the state-of-the-art networks for segmentation and detection. We find that the adversarial perturbations can be transferred across networks with different training data, based on different architectures, and even for different recognition tasks. In particular, the transfer ability across networks with the same architecture is more significant than in other cases. Besides, we show that summing up heterogeneous perturbations often leads to better transfer performance, which provides an effective method of black-box adversarial attack.",
"Most existing machine learning classifiers are highly vulnerable to adversarial examples. An adversarial example is a sample of input data which has been modified very slightly in a way that is intended to cause a machine learning classifier to misclassify it. In many cases, these modifications can be so subtle that a human observer does not even notice the modification at all, yet the classifier still makes a mistake. Adversarial examples pose security concerns because they could be used to perform an attack on machine learning systems, even if the adversary has no access to the underlying model. Up to now, all previous work have assumed a threat model in which the adversary can feed data directly into the machine learning classifier. This is not always the case for systems operating in the physical world, for example those which are using signals from cameras and other sensors as an input. This paper shows that even in such physical world scenarios, machine learning systems are vulnerable to adversarial examples. We demonstrate this by feeding adversarial images obtained from cell-phone camera to an ImageNet Inception classifier and measuring the classification accuracy of the system. We find that a large fraction of adversarial examples are classified incorrectly even when perceived through the camera."
],
"cite_N": [
"@cite_26",
"@cite_28",
"@cite_3",
"@cite_6",
"@cite_46"
],
"mid": [
"2774644650",
"2963207607",
"2769999273",
"2604505099",
"2460937040"
]
} | Metric Attack and Defense for Person Re-identification | In recent years, person re-identification (re-ID) [22,50] has attracted great attention in the computer vision community, driven by the increasing demand of video surveillance in public space. Hence, great effort has been devoted to developing robust re-ID features [8,15,38,9,26] and distance metrics [35,51,29,7] to overcome the large intraclass variations of person images in viewpoint, pose, illumination, blur, occlusion and resolution. For example, the rank-1 accuracy of the latest state-of-the-art on the Market-1501 dataset [49] is 93.8 [28], increasing rapidly from 44.4 when the dataset was first released in 2015. However, we draw researchers' attention to the fact that re-ID systems can be very vulnerable to adversarial attacks. Fig. 1 shows a case where a probe image is presented. Of the two gallery images, the true positive has a large similarity value and the true negative has a small one. Nevertheless, after adding human-imperceptible perturbations to the gallery images, the metric is easily fooled even though the new gallery images appear the same as the original ones.
Adversarial examples have been extensively investigated in classification analysis, such as image classification [24,6], object detection [44], semantic segmentation [1], etc. However, they have not attracted much attention in the field of re-ID, a metric analysis task whose basic goal is to learn a discriminative distance metric. A very likely reason is the existence of a natural gap between the training and testing of re-ID networks. While a re-ID model is usually trained with a certain classification loss, it discards the concept of class decision boundaries during testing and adopts a metric function to measure the pairwise distances between the probe and gallery images. Consequently, previous works on classification attacks [24,6] do not generalize to re-ID systems, i.e., they attempt to push images across the class decision boundaries and do not necessarily lead to a corrupted pairwise distance between images (see Fig. 2). Note that some re-ID networks are directly guided by metric losses (e.g., contrastive loss [16]), and their output can measure the between-object distances. However, it is still infeasible to directly attack such output owing to the sampling difficulty and computational complexity. Therefore, a common practice in re-ID is to take the trained model as a feature extractor and measure the similarities in a metric space.
Figure 2. The adversarial examples (red color) generated by the classification attack cross over the class decision boundary, but preserve the pairwise distance between them to a large extent.
Considering the importance of security for re-ID systems and the lack of systematic studies on their robustness towards adversarial examples, we propose Adversarial Metric Attack, an efficient and generic methodology to generate adversarial examples by attacking metric learning systems. The contributions of this work are fivefold: 1) Our work presents what is, to our knowledge, the first systematic and rigorous investigation of adversarial effects in person re-identification, which should be taken into consideration when deploying re-ID algorithms in real surveillance systems.
2) We propose adversarial metric attack, a parallel methodology to the existing adversarial classification attack [39,14], which can be potentially applied to other safety-critical applications that rely on distance metric (e.g., face verification [40] and tracking [18]).
3) We define and benchmark various experimental settings for metric attack in re-ID, including white-box and black-box attack, non-targeted and targeted attack, single-model and multi-model attack, etc. Under those experimental settings, comprehensive experiments are carried out with different distance metrics and attack methods.
4) We present an early attempt at adversarial metric defense, and show that adversarial examples generated by attacking metrics can be used in turn to train a metric-preserving network.
5) The code will be publicly available to easily generate the adversarial version of benchmark datasets (see examples in supplementary material), which can serve as a useful testbed to evaluate the robustness of re-ID algorithms.
We hope that our work can facilitate the development of robust feature learning and accelerate the progress on adversarial attack and defense of re-ID systems with the methodology and the experimental conclusions presented.
Adversarial Metric Attack
Person re-identification [12] is comprised of three sets of images, including the probe set P = {p_i}_{i=1}^{N_p}, the gallery set X = {x_i}_{i=1}^{N_x}, and the training set Y = {y_i}_{i=1}^{N_y}. A label set L is also given to annotate the identity of each image for training and evaluation. A general re-ID pipeline is: 1) learn a feature extractor F with parameters Θ (usually by training a neural network) by imposing a loss function J to L and Y; 2) extract the activations of intermediate layers for P and X as their visual features F(P, Θ) and F(X, Θ), respectively; 3) compute the distance between F(P, Θ) and F(X, Θ) for indexing. When representing features, F(·) and Θ are omitted where possible for notation simplicity.
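To make the pipeline concrete, the following is a minimal sketch of steps 2) and 3) in PyTorch. The backbone network, the L2 normalisation of features and the use of the squared Euclidean distance are illustrative assumptions rather than a prescription from the text.

import torch

def extract_features(backbone, images):
    # Step 2: use the trained network purely as a feature extractor F(., Theta)
    with torch.no_grad():
        feats = backbone(images)                             # (N, D) intermediate activations
    return torch.nn.functional.normalize(feats, dim=1)       # L2-normalised features (assumption)

def pairwise_distances(probe_feats, gallery_feats):
    # Step 3: squared Euclidean distance between every probe/gallery pair, used for indexing
    return torch.cdist(probe_feats, gallery_feats, p=2) ** 2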
In this paper, we aim to generate adversarial examples for re-ID models. As explained in Sec. 1, a different attack mechanism is required for metric learning systems as opposed to the existing attack methods which focus on classification-based models [14,24]. Instead of attacking the loss function used in training the neural network as done in these previous works, we discard the training loss and propose to attack the distance metric. Such an attack mechanism directly results in the corruption of the pairwise distance between images, thus leading to guaranteed accuracy compromises of a re-ID system. This is the gist of the methodology proposed in this work, which we call adversarial metric attack.
Adversarial metric attack consists of four components, including models for attack, metrics for attack, methods for attack and adversarial settings for attack. In the first component (Sec. 3.1), we train the model (with parameters Θ) on the training set Y as existing re-ID algorithms do. The model parameters are then fixed during attacking. In the second component (Sec. 3.2), a metric loss D is determined as the attack target. In the third component (Sec. 3.3), an optimization method for producing adversarial examples is selected. In the last component (Sec. 3.4), by setting the probe set P as a reference, we generate adversarial examples on the gallery set X in a specific adversarial setting.
Models for Attack
In the proposed methodology, the model for attack is not limited to being classification-based, in contrast to [14,24]. Instead, most re-ID models [19,4,37,31] can be used. We only review two representative baseline models, which are commonly seen in person re-identification.
Cross Entropy Loss. The re-ID model is trained with the standard cross-entropy loss and the labels are the identities of training images. It is defined as
J(Y, L) = -\sum_i \sum_j 1(l(y_i) = j) \log q_i^j,    (1)
where q_i^j is the classification probability of the i-th training sample to the j-th category and l(y_i) is the ground-truth label of y_i ∈ Y.
Triplet Loss. The re-ID model is trained with the triplet loss, defined as
J(Y, L) = \sum_{l_a = l_p \neq l_n} [d(y_a, y_p) - d(y_a, y_n) + m]_+,    (2)
where y_a denotes the anchor point, y_p denotes the positive point and y_n denotes the negative point. The motivation is that the positive y_p belonging to the same identity as the anchor y_a is closer to y_a than the negative y_n belonging to another identity, by at least a margin m.
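As a rough illustration, a single-triplet version of Eq. (2) could be written as below; the margin value and the use of squared Euclidean distances between feature vectors are assumptions made for the example.

import torch

def triplet_loss(y_a, y_p, y_n, margin=0.3):
    # d(y_a, y_p) and d(y_a, y_n) as squared Euclidean distances between feature vectors
    d_ap = torch.sum((y_a - y_p) ** 2)
    d_an = torch.sum((y_a - y_n) ** 2)
    return torch.clamp(d_ap - d_an + margin, min=0.0)   # the [.]_+ hinge of Eq. (2)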
Metrics for Attack
Metric learning (e.g., XQDA [29], KISSME [23]) has dominated the landscape of re-ID for a long time. Mathematically, a metric defined between the probe set P and the gallery set X is a function D : P × X → [0, ∞), which assigns non-negative values for each pair of p ∈ P and x ∈ X. We also use notation d(p, x) to denote the distance between p and x in the metric space D. In this section, we give the formal definition of metric loss used in adversarial metric attack. It should be mentioned that any differentiable metric (or similarity) function can be used as the target loss.
Euclidean distance is a widely used distance metric. The metric loss is defined as
d(p, x) = ||p - x||_2^2,    (3)
which computes the squared Euclidean distance between p and x.
Mahalanobis distance is a generalization of the Euclidean distance that considers the correlation of different feature dimensions. Accordingly, we can have a metric loss as
d(p, x) = (p - x)^T M (p - x),    (4)
where M is a positive semidefinite matrix.
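Both metric losses are straightforward to express in a differentiable framework; the sketch below assumes PyTorch tensors and that M is supplied by the user (for example, a matrix learned by XQDA).

import torch

def euclidean_metric_loss(p, x):
    return torch.sum((p - x) ** 2, dim=-1)                    # ||p - x||_2^2, Eq. (3)

def mahalanobis_metric_loss(p, x, M):
    diff = p - x
    return torch.einsum('...i,ij,...j->...', diff, M, diff)  # (p - x)^T M (p - x), Eq. (4)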
Methods for Attack
Given a metric loss defined above, we aim at learning an adversarial example x_adv = x + r, where x ∈ X denotes a certain gallery image and r denotes the adversarial perturbation. The L_∞ norm is used to measure the perceptibility of the perturbation, i.e., ||r||_∞ ≤ ε, where ε is a small constant.
To this end, we introduce the following three attack methods, including:
Fast Gradient Sign Method (FGSM) [14] is a single step attack method. It generates adversarial examples by
X_adv = X + ε · sign(∂D(P, X)/∂X),    (5)
where ε measures the maximum magnitude of adversarial perturbation and sign(·) denotes the signum function.
Iterative Fast Gradient Sign Method (I-FGSM) [24] is an iterative version of FGSM, defined as
X_0^adv = X,
X_{n+1}^adv = Ψ_X(X_n^adv + α · sign(∂D(P, X_n^adv)/∂X_n^adv)),    (6)
where n denotes the iteration number and α is the step size. Ψ_X is a clip function that ensures the generated adversarial example is within the ε-ball of the original image.
Momentum Iterative Fast Gradient Sign Method (MI-FGSM) [6] adds the momentum term on top of I-FGSM to stabilize update directions. It is defined as
g_{n+1} = μ · g_n + (∂D(P, X_n^adv)/∂X_n^adv) / ||∂D(P, X_n^adv)/∂X_n^adv||_1,
X_{n+1}^adv = Ψ_X(X_n^adv + α · sign(g_{n+1})),    (7)
where μ is the decay factor of the momentum term and g_n is the accumulated gradient at the n-th iteration.
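A hedged sketch of how I-FGSM in Eq. (6) could be applied to a metric loss is given below; the ε and α values, the pixel range [0, 1] and the assumption that the model returns feature vectors are illustrative choices rather than the exact setup used in the experiments.

import torch

def ifgsm_metric_attack(model, metric_loss, probe_feats, x, eps=5/255, alpha=1/255, n_iter=10):
    x_adv = x.clone().detach()
    for _ in range(n_iter):
        x_adv.requires_grad_(True)
        loss = metric_loss(probe_feats, model(x_adv)).sum()    # D(P, X_adv)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()           # non-targeted: increase D (flip the sign for a targeted attack, Eq. (8))
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # the clip function Psi_X: stay inside the eps-ball
        x_adv = x_adv.clamp(0.0, 1.0)                          # keep a valid image
    return x_adv.detach()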
Benchmark Adversarial Settings
In this section, we benchmark the experimental settings for adversarial metric attack in re-ID.
White-box and Black-box Attack
White-box attack requires the attackers to have prior knowledge of the target networks, which means that the adversarial examples are generated with and tested on the same network having parameters Θ.
It should be mentioned that for adversarial metric attack, the loss layer used during training is replaced by the metric loss when attacking the network.
Black-box attack means that the attackers do not know the structures or the weights of the target network. That is to say, the adversarial examples are generated with a network having parameters Θ and used to attack metric on another network which differs in structures, parameters or both.
Targeted and Non-targeted Attack
Non-targeted attack aims to widen the metric distance between image pairs of the same identity. Given a probe image p and a gallery image x, where l(p) = l(x), their distance d(p, x) is ideally small. After imposing a non-targeted attack on the distance metric, the distance d(p, x_adv) between p and the generated adversarial example x_adv is enlarged. Hence, when p serves as the query, x_adv will not be ranked high in the ranking list of p (see Fig. 4(a)).
Non-targeted attack can be achieved by applying the attack methods described in Sec. 3.3 to the metric losses described in Sec. 3.2.
Targeted attack aims to draw the gallery image towards the probe image in the metric space. This type of attack is usually performed on image pairs with different identities, i.e., l(p) ≠ l(x), which correspond to a large d(p, x) value. The generated x_adv becomes closer to the query image p in the metric space, deceiving the network into predicting l(x_adv) = l(p). Hence, one can frequently observe adversarial examples generated by a targeted attack in top positions of the ranking list of p (see Fig. 4(b)).
Unlike non-targeted attack where adversarial examples do not steer the network towards a specific identity, targeted attack finds adversarial perturbations with pre-determined target labels during the learning procedure and tries to decrease the value of objective function. This incurs a slight modification to the attack methods described in Sec. 3.3. For example, the formulation of FGSM [14] is changed to
X_adv = X - ε · sign(∂D(P, X)/∂X).    (8)
The update procedure of I-FGSM [24] and MI-FGSM [6] can be modified similarly.
Single-model and Multi-model Attack
Adversarial Metric Defense
Here we present an early attempt at training a metric-preserving network to defend a distance metric.
The procedure is divided into four steps: 1) learn a clean model F with parameters Θ by imposing a loss function J to L and Y; 2) perform the adversarial metric attack described in Sec. 3 on F with the training set Y, then obtain the adversarial version of the training set, Y_adv; 3) merge Y and Y_adv, and re-train a metric-preserving model F_adv; 4) use F_adv as the testing model in place of F.
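The four steps above could be organised into a training loop such as the sketch below, where the attack function, the loss, the optimiser and the use of the clean model's own features as the attack reference are assumptions made for illustration.

import torch

def train_metric_preserving(model_clean, model_def, loader, attack_fn, loss_fn, optimiser, epochs=60):
    # attack_fn(model, reference_feats, images) -> adversarial images, e.g. the I-FGSM sketch above
    for _ in range(epochs):
        for images, labels in loader:
            with torch.no_grad():
                ref_feats = model_clean(images)                # reference features for the metric attack (assumption)
            adv_images = attack_fn(model_clean, ref_feats, images)
            batch = torch.cat([images, adv_images], dim=0)     # step 3: merge Y and Y_adv
            targets = torch.cat([labels, labels], dim=0)
            optimiser.zero_grad()
            loss_fn(model_def(batch), targets).backward()      # re-train the metric-preserving model F_adv
            optimiser.step()
    return model_def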
As for the performance, we find that F_adv closely matches F when testing on the original (clean) gallery set X, but significantly outperforms F when testing on the adversarial version of the gallery set, X_adv. In this sense, re-ID systems gain robustness to adversarial metric attacks.
Experiments
This section evaluates the proposed adversarial metric attack and adversarial metric defense.
Datasets. Market-1501 dataset [49] is a widely accepted benchmark for person re-ID. It consists of 1501 identities.
Baselines. Four base models are implemented. Specifically, we take ResNet-50 [17], ResNeXt-50 [45] and DenseNet-121 [20] pretrained on ImageNet [5] as the backbone models. The three networks are supervised by the cross-entropy loss, yielding three base models denoted as B1, B2 and B3, respectively. Meanwhile, we also supervise ResNet-50 [17] with triplet loss [19] and obtain the base model B4.
All the models are trained using the Adam optimizer for 60 epochs with a batch size of 32. When testing, we extract the L_2-normalized activations from the networks before the loss layer as the image features.
State-of-the-art Methods. As there exists a huge number of re-ID algorithms [3,2,27,10,30,47,46], it is unrealistic to evaluate all of them. Here, we reproduce two representatives which achieve the latest state-of-the-art performances, i.e., Harmonious Attention CNN (HACNN) [28] and Multi-task Attentional Network with Curriculum Sampling (Mancs) [41]. Both of them employ attention mechanisms to address person misalignment. We follow the default settings correspondingly and report their performances as well as those of the four base models in Table 1.
Experimental Design. The design of experiments involves various settings, including different distance metrics, different attack methods, white-box and black-box attack, non-targeted and targeted attack, and single-model and multi-model attack, as described in Sec. 3.
If not specified otherwise, we use the Euclidean distance defined in Eq. (3) as the metric and I-FGSM defined in Eq. (6) with ε = 5 as the attack method, and perform white-box non-targeted attacks on base model B1. For other parameters, we set α = 1 in Eq. (6) and μ = 1 in Eq. (7). The iteration number n is set to min(ε + 4, 1.25ε) following [24].
White-box and Black-box Attack
Adversarial metric attack is first evaluated with a single model. For each query class, we generate adversarial examples on the corresponding gallery set. Thus, an adversarial version of the gallery set can be stored off-line and used for performance evaluation. The maximum magnitude of adversarial perturbation is set to 5 on the Market-1501 dataset in Table 2 and the DukeMTMC-reID dataset in Table 3, which are still imperceptible to human vision (examples shown in Fig. 1). Therein, we present the networks that we attack in rows and networks that we test on in columns.
At first glance, one can clearly observe the adversarial effect of different metrics. For instance, the performance of B1 decreases sharply from mAP 77.52 to 0.367 in white-box attack, and to 22.29 in black-box attack on the Market-1501 dataset. On the DukeMTMC-reID dataset, its performance drops from 67.72 to 0.178 in white-box attack, and to 18.12 in black-box attack. The state-of-the-art methods HACNN [28] and Mancs [41] are subjected to a dramatic performance decrease from mAP 75.28 to 37.98 and from 82.50 to 30.90, respectively, on the Market-1501 dataset.
Second, the performance under white-box attack is much lower than that under black-box attack. This is easy to understand, as the attack methods can generate adversarial examples that overfit the attacked model. Among the three attack methods, I-FGSM [24] delivers the strongest white-box attacks. Comparatively, MI-FGSM [6] is the most capable of learning adversarial examples for black-box attack. This observation is consistent across different base models, different state-of-the-art methods, different magnitudes of adversarial perturbation and different datasets. This conclusion is somewhat contrary to that drawn for classification attack [25], where non-iterative algorithms like FGSM [14] generally generalize better. In summary, we suggest integrating iteration-based attack methods for adversarial metric attack as they have a higher attack rate.
Moreover, HACNN [28] and Mancs [41] are more robust to adversarial examples compared with the four base models. When attacked by the same set of adversarial examples, they outperform the baselines by a large margin, although Table 1 shows that they only achieve comparable or even worse performances with clean images. For instance in Table 2, when attacking B1 using MI-FGSM in the black-box setting, the best mAP achieved by the baselines is 25.53 on the Market-1501 dataset. In comparison, HACNN reports an mAP of 37.98 and Mancs reports an mAP of 30.90. A possible reason is that they both have more sophisticated modules and computational mechanisms, e.g., attention selection. However, it remains unclear, and needs to be investigated in the future, which kinds of modules are robust and why they manifest robustness to adversary.
Table 3. The mAP comparison of white-box attack (in shadow) and black-box attack (others) when ε = 5 on the DukeMTMC-reID dataset. For each combination of settings, the worst performances are marked in bold.
Finally, the robustness of HACNN [28] and Mancs [41] to adversarial examples is also quite different. In most adversarial settings, HACNN outperforms Mancs remarkably, revealing that it is less vulnerable to adversarial attack. Only when attacking B2 or B3 using FGSM on the DukeMTMC-reID dataset does Mancs seem to be better than HACNN (mAP 42.87 vs. 41.42). However, it should be emphasized that the baseline performance of HACNN is much worse than that of Mancs with clean images, as presented in Table 1 (mAP 75.28 vs. 85.20 on the Market-1501 dataset and mAP 64.44 vs. 72.89 on the DukeMTMC-reID dataset). To eliminate the influence of the differences in baseline performance, we adopt a relative measurement of accuracy using the mAP ratio, i.e., the ratio of mAP on adversarial examples to that on clean images. A large mAP ratio indicates that the performance decrease is smaller, and thus the model is more robust to adversary. We compare the mAP ratio of HACNN and Mancs in Fig. 3. As shown, HACNN consistently achieves a higher mAP ratio than Mancs in the adversarial settings.
From another point of view, achieving better performances on benchmark datasets does not necessarily mean that the algorithm has better generalization capacity. Therefore, it would be helpful to evaluate re-ID algorithms under the same adversarial settings to justify the potential of deploying them in real environments.
Single-model and Multi-model Attack
As shown in Sec. 5.1, black-box attacks yield much higher mAP than white-box attacks, which means that the generated adversarial examples do not transfer well to other models for testing. Attacking multiple models simultaneously can be helpful to improve the transferability.
To achieve this, we perform adversarial metric attack on an ensemble of three out of the four base models. Then, the evaluation is done on the ensembled network and the hold-out network. Note that in this case, attacks on the "ensembled network" correspond to white-box attacks as the base models in the ensemble have been seen by the attacker during adversarial metric attack. In contrast, attacks on the "hold-out network" correspond to black-box attacks as this network is not used to generate adversarial examples.
Table 4. The mAP comparison of multi-model attack (white-box in shadow) when ε = 5. The symbol "-" indicates the name of the hold-out base model. For each combination of settings, the worst performances are marked in bold.
We list the performances of multi-model attacks in Table 4. As indicated clearly, the identification rate of black-box attacks continues to degenerate. For example, Table 2 shows that the worst performance of B1 is mAP 22.29 when attacking the single model B3 via MI-FGSM on the Market-1501 dataset. Under the same adversarial setting, the performance of B1 becomes 14.94 when attacking an ensemble of B2, B3 and B4. When attacking multiple models, the lowest mAP of HACNN [28] is merely 30.45 on the Market-1501 dataset, a sharp decrease of 7.53 from 37.98 as reported in Table 2 under the same adversarial settings.
Targeted and Non-targeted Attack
From Fig. 4, one can clearly observe the different effects of non-targeted and targeted attacks.
The goal of a non-targeted metric attack is to maximize the distances (minimize the similarities) between a given probe and adversarial gallery images. Consequently, true positives are pushed down in the ranking list, as shown in the first two rows of Fig. 4(a). However, it cannot be determined beforehand what the top-ranked images will be, or to which probe the adversarial image will be similar, as shown in the third row. In comparison, a targeted metric attack tries to minimize the distances between the given probe and the adversarial gallery images. Therefore, we find a large portion of adversarial images among the top-ranked candidates in the third row of Fig. 4(b). It is surprising to see how easily the metric is fooled: it incorrectly retrieves male person images when a female person image serves as the probe.
For real applications in video surveillance, the non-targeted metric attack prevents the system from correctly retrieving desired results, while the targeted metric attack deliberately tricks the system into retrieving person images of a wrong identity.
Euclidean and Mahalanobis Metric
Within our framework, distance metrics can be used in two phases, that is, the one used to perform adversarial metric attack and the one used to evaluate the performance. For the Mahalanobis distance, we use a representative called Cross-view Quadratic Discriminant Analysis (XQDA) [29]. Unfortunately, by integrating metric learning with deep features, we do not observe an improvement of baseline performance, despite the fact that metric learning is extensively proven to be compatible with non-deep features (e.g., LOMO [29], GOG [34]). We obtain a rank-1 accuracy of 89.73 and an mAP of 75.86 using XQDA, lower than the rank-1 accuracy of 91.30 and mAP of 77.52 achieved by the Euclidean distance reported in Table 1.
From Fig. 5, it is unsurprising to observe that the performance of different metric combinations decreases quickly as the maximum magnitude of adversarial perturbation increases. We also note that the iteration-based attack methods such as I-FGSM and MI-FGSM can severely mislead the distance metric with 5-pixel perturbations.
Second, we observe an interesting phenomenon which is consistent across different attack methods. When attacking the Euclidean distance and testing with XQDA, the performance is better than in the setting where attacking and testing are both carried out with the Euclidean distance. This is also the case when we attack XQDA and test with the Euclidean distance. In other words, it is beneficial to adversarial metric defense if we use different metrics for metric attack and performance evaluation. From another perspective, it can be interpreted by the conclusion drawn in Sec. 5.1, i.e., we can take the change of metrics as a kind of black-box attack. In other words, we are using adversarial examples generated with a model using a certain distance metric to test another model which differs from the original model in its choice of distance metric.
Table 5. The mAP comparison between normally trained models (denoted by #N) and metric-preserving models (denoted by #M) on the Market-1501 dataset. #I means the relative improvement.
Figure 5. The mAP comparison of FGSM (a) and I-FGSM (b) by varying the maximum magnitude of adversarial perturbation and a selection of distance metrics. In the legend, the part before the symbol "/" denotes the metric loss used for metric attack and the part after "/" denotes the metric used to evaluate the performance.
Evaluating Adversarial Metric Defense
In Table 5, we evaluate metric defense by comparing the performance of normally trained models with metric-preserving models on the Market-1501 dataset. When testing on the original clean gallery set, a slight performance decrease, generally smaller than 10%, is observed after using metric-preserving models. However, when testing purely on the adversarial version of the gallery images, the performance is significantly improved. For instance, when attacking B3 and testing on B1, the performance is originally 24.72, and is improved to 70.46, a relative improvement of 185%. In real video surveillance, deploying metric-preserving models can therefore improve the robustness of re-ID systems.
Conclusion
In this work, we have studied the adversarial effects in person re-identification (re-ID). By observing that most existing works on adversarial examples only perform classification attacks, we propose the adversarial metric attack as a parallel methodology to be used in metric analysis.
By performing metric attack, adversarial examples can be easily generated for person re-identification. The latest state-of-the-art re-ID algorithms suffer a dramatic performance drop when they are attacked by the adversarial examples generated in this work, exposing the potential security issue of deploying re-ID algorithms in real video surveillance systems. To facilitate the development of metric attack in person re-identification, we have benchmarked and introduced various adversarial settings, including white-box and black-box attack, targeted and non-targeted attack, single-model and multi-model attack, etc. Extensive experiments on two large-scale re-ID datasets have reached some useful conclusions, which can be a helpful reference for future works. Moreover, benefiting from adversarial metric attack, we present an early attempt at training metric-preserving networks to significantly improve the robustness of re-ID models to adversary. | 4,384
1901.10244 | 2952416736 | We present a novel method to explicitly incorporate topological prior knowledge into deep learning based segmentation, which is, to our knowledge, the first work to do so. Our method uses the concept of persistent homology, a tool from topological data analysis, to capture high-level topological characteristics of segmentation results in a way which is differentiable with respect to the pixelwise probability of being assigned to a given class. The topological prior knowledge consists of the sequence of desired Betti numbers of the segmentation. As a proof-of-concept we demonstrate our approach by applying it to the problem of left-ventricle segmentation of cardiac MR images of 500 subjects from the UK Biobank dataset, where we show that it improves segmentation performance in terms of topological correctness without sacrificing pixelwise accuracy. | Other approaches have involved encouraging the correct adjacencies of various object classes, whether they were learned from the data as in @cite_15 or provided as a prior as in @cite_0 . Such methods allow the introduction of this simple topological feature into a loss function when performing image segmentation but cannot be easily generalised to any other kinds of higher-order feature such as the presence of holes, handles or voids. | {
"abstract": [
"Image segmentation based on convolutional neural networks is proving to be a powerful and efficient solution for medical applications. However, the lack of annotated data, presence of artifacts and variability in appearance can still result in inconsistencies during the inference. We choose to take advantage of the invariant nature of anatomical structures, by enforcing a semantic constraint to improve the robustness of the segmentation. The proposed solution is applied on a brain structures segmentation task, where the output of the network is constrained to satisfy a known adjacency graph of the brain regions. This criteria is introduced during the training through an original penalization loss named NonAdjLoss. With the help of a new metric, we show that the proposed approach significantly reduces abnormalities produced during the segmentation. Additionally, we demonstrate that our framework can be used in a semi-supervised way, opening a path to better generalization to unseen data.",
"We propose a generic and efficient learning framework that is applicable to segment images in which individual objects are mainly discernible by boundary cues. Our approach starts by first hierarchically clustering the image and then explaining the image in terms of a cost-minimal subset of non-overlapping segments. The cost of a segmentation is defined as a weighted sum of features of the selected candidates. This formulation allows us to take into account an extensible set of arbitrary features. The maximally discriminative linear combination of features is learned from training data using a margin-rescaled structured SVM. At the core of our formulation is a novel and simple topology-based structured loss which is a combination of counts and geodesic distance of topological errors (splits, merges, false positives and false negatives) relative to the training set. We demonstrate the generality and accuracy of our approach on three challenging 2D cell segmentation problems, where we improve accuracy compared to the current state of the art."
],
"cite_N": [
"@cite_0",
"@cite_15"
],
"mid": [
"2889890911",
"2296226447"
]
} | Explicit topological priors for deep-learning based image segmentation using persistent homology | Image segmentation, the task of assigning a class label to each pixel in an image, is a key problem in computer vision and medical image analysis. The most successful segmentation algorithms now use deep convolutional neural networks (CNN), with recent progress made in combining fine-grained local features with coarse-grained global features, such as in the popular U-net architecture [17]. Such methods allow information from a large spatial neighbourhood to be used in classifying each pixel. However, the loss function is usually one which considers each pixel individually rather than considering higher-level structures collectively. In many applications it is important to correctly capture the topological characteristics of the anatomy in a segmentation result. For example, detecting and counting distinct cells in electron microscopy images requires that neighbouring cells are correctly distinguished. Even very small pixelwise errors, such as incorrectly labelling one pixel in a thin boundary between cells, can cause two distinct cells to appear to merge. In this way significant topological errors can be caused by small pixelwise errors that have little effect on the loss function during training but may have large effects on downstream tasks. Another example is the modelling of blood flow in vessels, which requires accurate determination of vessel connectivity. In this case, small pixelwise errors can have a significant impact on the subsequent modelling task. Finally, when imaging subjects who may have congenital heart defects, the presence or absence of small holes in the walls between two chambers is diagnostically important and can be identified from images, but using current techniques it is difficult to incorporate this relevant information into a segmentation algorithm. For downstream tasks it is important that these holes are correctly segmented but they are frequently missed by current segmentation algorithms as they are insufficiently penalised during training. See Figure 1 for examples of topologically correct and incorrect segmentations of cardiac magnetic resonance images (MRI).
There has been some recent interest in introducing topological features into the training of CNNs, and this literature is reviewed in section 2 below. However, such approaches have generally involved detecting the presence or absence of topological features implicitly in order to quantify them in a differentiable way that can be incorporated into the training of the segmentation network. The weakness of this approach is that it is hard to know exactly which topological features are being learned. Instead, it would be desirable to explicitly specify the presence or absence of certain topological features directly in a loss function. This would enable us to designate, for example, that the segmentation result should have one connected component which has one hole in it. This is challenging due to the inherently discrete nature of topological features, making it hard to create a differentiable loss function which accounts for them.
In this paper we demonstrate that persistent homology (PH), a tool from the field of topological data analysis, can be used to address this problem by quantifying the persistence, or stability, of all topological features present in an image. Our method uses these high-level structural features to provide a pixelwise gradient that increases or decreases the persistence of desired or undesired topological features in a segmentation. These gradients can then be back-propagated through the weights of any segmentation network and combined with any other pixelwise loss function. In this way, the desired topological features of a segmentation can be used to help train a network even in the absence of a ground truth, and without the need for those features to be implicitly learned from a large amount of training data, which is not always available. This topologically driven gradient can be incorporated into supervised learning or used in a semi-supervised learning scenario, which is our focus here.
Our main contribution is the presentation of, to the best of our knowledge, the first method to explicitly incorporate topological prior information into deeplearning based segmentation. The explicit topological prior is the sequence of desired Betti numbers of the segmentations and our method provides a gradient calculated such that the network learns to produce segmentations with the correct topology. We begin by reviewing literature related to introducing topology into deep learning in section 2. In section 3 we cover the theory of PH and introduce the relevant notation. In section 4 we then describe in detail our approach for integrating PH and deep learning for image segmentation, and then demonstrate the method in a case study on cardiac MRI in section 5.
Theory
PH is an algebraic tool developed as part of the growing mathematical field of topological data analysis, which involves computing topological features of shapes and data. We give a brief overview of PH here, but direct the reader to [6,5,14] for more thorough reviews, discussions and historical background of the subject. Although PH most commonly considers simplicial complexes due to their generality, for the analysis of images and volumes consisting of pixels and voxels, cubical complexes are considerably more convenient and so we introduce them, and their theory of PH, here.
Cubical Complexes
A cubical complex is a set consisting of points, unit line segments, and unit squares, cubes, hypercubes, and so on. Following the notation of [11] its fundamental building blocks are elementary intervals, which are each a closed subset of the real line of the form I = [z, z + 1] for z ∈ Z. These represent unit line segments. Points are represented by degenerate intervals I = [z, z]. From these, we can define elementary cubes, Q, as the product of elementary intervals,
Q = I_1 × I_2 × ... × I_d,   Q ⊂ R^d.    (1)
The set of all elementary cubes in R^d is K^d, and K = ∪_d K^d. The dimension of an elementary cube, dim Q, is the number of non-degenerate components in the product defining Q, and we will denote the set of all d-dimensional elementary cubes as K_d = {Q ∈ K | dim Q = d}. By setting up the theory in this way, we are restricting the class of objects we can talk about to unit cubes, which will represent pixels in the images we will consider. For simplicity, from hereon we will describe the two-dimensional case, but our approach is generalisable. Consider a 2D array, S, of N × N pixels, where the pixel in row i and column j has a value S_[i,j] ∈ [0, 1]. In terms of the cubical complex each pixel covers a unit square described by the elementary cube Q_{i,j} = [i, i+1] × [j, j+1], where Q_{i,j} ∈ K_2.
We then consider filtrations of this cubical complex. For each value of a threshold p ∈ [0, 1], we can find the cubical complex B(p) given by
B(p) = ∪_{i,j=0}^{N-1} { Q_{i,j} : S_[i,j] ≥ (1 − p) }.    (2)
In the context of an image, B(p) represents a binarised image made by setting pixels with a value above 1 − p to 1, and below 1 − p to 0. By considering these sets for an increasing sequence of filtration values we obtain a sequence
B(0) ⊆ B(p_1) ⊆ B(p_2) ⊆ ... ⊆ B(1).    (3)
This filtered space is the key object of PH.
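As a small concrete illustration of Eq. (2), the binarised complex B(p) can be represented as a boolean pixel mask; encoding it this way is an assumption of the example rather than something prescribed above.

import numpy as np

def binarise(S, p):
    # pixels whose value is at least 1 - p belong to B(p)
    return np.asarray(S) >= (1.0 - p)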
Persistent Homology
PH measures the lifetimes of topological features within a filtration such as the sequence above. The premise is that those features with long lifetimes, in terms of the filtration value p, are significant features of the data. Those with short lifetimes are usually considered to be noise. For each complex B(p) ⊂ R^d, we can consider its topology by finding the homology group H_n, the rank of which is the n-th Betti number, β_n. (We avoid the details of how the homology groups and Betti numbers are computed here. In our experiments, we used the implementation from the Python library Gudhi, available at [1]. In our implementation diagonally adjacent pixels are considered as neighbouring, but this does not generally need to be the case.) These numbers are topological invariants which, informally speaking, count the number of d-dimensional holes in an object. β_0 counts the number of connected components, β_1 counts the number of loops, and, although not relevant to the 2D case we consider here, β_2 counts the number of hollow cavities, and so on. As the filtration value p increases, more pixels join the cubical complex and topological features in the binarised image are created and destroyed. A useful way of visualising the PH of a dataset is to use a barcode diagram, an example of which is given in Figure 2. This diagram plots the lifespans of all topological features in the data, where each feature is represented by one bar, and with different colour bars representing different Betti numbers. The Betti numbers of B(p*) are given by the number of bars present at the x-coordinate x = p*. A key feature of the barcode diagram is that it is stable in the presence of noise, and there are theoretical guarantees that small changes to the original data can only make small changes to the positions and lengths of the bars [10]. For an array of input data S, we will describe its PH by denoting each bar in the barcode diagram as H_d^ℓ(S) = (p_birth, p_death), which is an ordered pair of the birth and death filtration values of the ℓ-th longest bar of dimension d, where ℓ ≥ 1 and d ≥ 0.
Our method will use these barcode diagrams as a description of the topological features in a predicted segmentation mask. In the case we consider below, we begin with the prior knowledge that the object being segmented should contain one hole (i.e. β 1 = 1) and so aim to extend the length of the bar corresponding to the most persistent 1-dimensional feature. It is important to note both that our method can be applied generally to encourage the presence or absence of topological features of any number or dimension, but also that this prior information must be specified for the task at hand.
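Since the text mentions the Gudhi library, the following sketch shows one way the barcode and the longest bar of a given dimension could be computed; the exact Gudhi API and the sign convention used to mimic the superlevel-set filtration are assumptions that may differ between library versions.

import numpy as np
import gudhi

def barcode(S):
    # Gudhi filters cells by increasing value, so pass 1 - S to mimic the
    # superlevel-set filtration B(p) = {pixels with S >= 1 - p} described above.
    cc = gudhi.CubicalComplex(top_dimensional_cells=1.0 - np.asarray(S, dtype=float))
    cc.persistence()
    return {d: cc.persistence_intervals_in_dimension(d) for d in (0, 1)}

def longest_bar(bars, d=1):
    intervals = bars[d]                       # array of (birth, death) filtration values
    lengths = intervals[:, 1] - intervals[:, 0]
    return intervals[np.argmax(lengths)]      # H_d^1(S): the most persistent d-dimensional feature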
Method
Throughout this paper we consider only the problem of binary segmentation, that is, assigning a value between 0 and 1 to each pixel in an image which represents the probability of it being classified as part of a particular structure. Our approach does generalise to multi-class segmentation (inasmuch as it can be described as several binary segmentation problems) but, for convenience and simplicity, we will discuss only the binary case here.
Topological Pixelwise Gradient
In our approach the desired topology of the segmentation mask needs to be specified in the form of its Betti numbers. For ease of explanation we consider the case in which β 1 = 1 is specified, corresponding to the prior knowledge that the segmentation mask should contain exactly one closed cycle. Given a neural network f which performs binary segmentation and is parameterised by a set of weights ω, an N ×N image X produces an N ×N array of pixelwise probabilities, S = f (X; ω). In the supervised learning setting, a pixelwise gradient is calculated by, for example, calculating the binary cross-entropy or Dice loss between S and some ground-truth labels Y .
We additionally calculate a pixelwise gradient for a topological loss as follows. Firstly the PH of S is calculated, producing a set of lifetimes of topological features H d , such as that shown in figure 2a. For each desired feature, the longest bars (and so the most persistent features) of the corresponding dimension are identified. In our case the presence of a closed cycle corresponds to the longest green bar in the barcode diagram, denoted by H 1 1 (S). In order to make this feature more persistent we need to identify the pixels which, if assigned a higher/lower probability of appearing in the segmentation, will extend the length of this bar in the barcode diagram, and therefore increase the persistence of that topological feature. These pixels are identified by an iterative process which begins at the pixels with the filtration values at precisely the ends of the relevant bar, which are H 1 1 (S) = (p * birth , p * death ) for the left and right ends of the bar respectively. For each of k iterations, where k is an integer parameter which can be freely chosen, the pixels with these extremal filtration values are filled in (with a 1 and 0 respectively), extending the bar in the barcode, and these pixels have a gradient of ∓1 applied to them. The PH is recomputed, and another pixel chosen for each end of the bar. These pixels are also filled in, and more chosen, and so on, until k pixels have been identified for each end of the bar. These are now the 2k pixels which, if their filtration values are adjusted, will result in the most significant change in the persistence of the relevant topological object, and it is these 2k pixels which will have a gradient applied to them. Algorithm 1 shows pseudo-code for the β 1 = 1 example 3 .
Algorithm 1: Topological loss gradient for β_1 = 1
Input: S: array of real numbers - pixelwise segmentation probabilities; k: integer - number of pixels to apply gradient to; ε: real number > 0 - threshold to avoid modifying already persistent features
Output: G: array of real numbers - pixelwise gradients
1: procedure TopoGrad(S, k, ε)
2:   t ← 0
3:   G is initialised as an N × N array of 0
4:   while t < k do
5:     H_d(S) ← persistent homology (barcode) of S
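Algorithm 1 can be read alongside the following hedged Python sketch of the same procedure for the β_1 = 1 prior; the use of Gudhi, the way extremal pixels are located by matching filtration values, and the interpretation of the ε threshold are all assumptions of this illustration rather than the exact implementation used by the authors.

import numpy as np
import gudhi

def topo_gradient(S, k=5, eps=0.01):
    # For up to k iterations: find the longest 1-dimensional bar, push the pixel at its birth
    # end towards 1 and the pixel at its death end towards 0, and record -/+1 gradients.
    S = np.array(S, dtype=float)
    G = np.zeros_like(S)
    for _ in range(k):
        cc = gudhi.CubicalComplex(top_dimensional_cells=1.0 - S)   # superlevel-set convention, as in the barcode sketch
        cc.persistence()
        bars = cc.persistence_intervals_in_dimension(1)
        if len(bars) == 0:
            break
        birth, death = max(bars, key=lambda b: b[1] - b[0])        # the most persistent loop, H_1^1(S)
        if (death - birth) > 1.0 - eps:                            # already persistent enough (assumed meaning of eps)
            break
        b_pix = np.unravel_index(np.argmin(np.abs((1.0 - S) - birth)), S.shape)
        d_pix = np.unravel_index(np.argmin(np.abs((1.0 - S) - death)), S.shape)
        S[b_pix], S[d_pix] = 1.0, 0.0                              # fill in the two extremal pixels
        G[b_pix] -= 1.0                                            # gradient that raises the birth-end pixel
        G[d_pix] += 1.0                                            # gradient that lowers the death-end pixel
    return G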
Semi-supervised Learning
We incorporate the topological prior into a semi-supervised learning scheme as follows. In each training batch, firstly the binary cross-entropy loss from the N labelled cases is calculated. Next the pixelwise gradients, G, for the N_u unlabelled cases are calculated as in Algorithm 1 and multiplied by a positive constant λ, which weights this term. The gradient from the cross-entropy loss and the topological gradient are then summed. In our experiments we set k = 5, ε = 0.01 and experiment with a choice of λ, chosen by manual tuning.
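To show how the supervised and topological terms could be combined in practice, here is a hedged sketch of one training step; the tensor shapes (a batch of (1, H, W) probability maps), the choice of binary cross-entropy and the topo_gradient helper from the previous sketch are assumptions for illustration.

import torch

def semi_supervised_step(net, optimiser, x_lab, y_lab, x_unlab, topo_gradient, lam=1.0):
    optimiser.zero_grad()
    s_lab = net(x_lab)                                             # (N, 1, H, W) probabilities
    sup_loss = torch.nn.functional.binary_cross_entropy(s_lab, y_lab)
    sup_loss.backward()                                            # supervised pixelwise gradients
    s_unlab = net(x_unlab)                                         # (N_u, 1, H, W) probabilities
    g = torch.stack([torch.as_tensor(topo_gradient(s[0].detach().cpu().numpy()),
                                     dtype=s_unlab.dtype, device=s_unlab.device)
                     for s in s_unlab]).unsqueeze(1)
    s_unlab.backward(lam * g)                                      # inject the lambda-weighted topological gradient
    optimiser.step()
    return sup_loss.item()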
Experiments and Results
We demonstrate our approach on real data with the task of myocardial segmentation of cardiac MRI. We use a subset of the UK Biobank dataset [19,15], which consists of the mid-slice of the short-axis view of the heart. Example images and segmentations from this dataset are shown in Figure 3. We use one end-systole image from each subject, each of which has a gold-standard left-ventricle segmentation provided. The images were cropped to a 64x64 square centred around the left ventricle. Since the UK Biobank dataset contains high-quality images compared to a typical clinical acquisition we made the task more challenging, degrading the images by removing k-space lines in order to lower image quality and create artefacts. For each image in the dataset we compute the Fourier transform, and k-space lines outside of a central band of 8 lines are removed with 3/4 probability and zero-filled. The degraded image is then reconstructed by performing the inverse Fourier transform, and it is these images which are used for both training and testing. Examples of original and degraded images are shown in Figure 3. Our method is demonstrated in the semi-supervised setting, where a small number of labelled cases, N, and N_u = 400 unlabelled cases are used. As a baseline we evaluated a fully supervised method using just the labelled cases, and also post-processed the supervised results using image processing tools commonly used to correct small topological errors. We used the binary closure morphology operator with a circular structuring element with a radius of 3 pixels. Additionally, we compared our method to an iterative semi-supervised approach similar to [3]. In this method the predicted segmentations from unlabelled cases were used as labels for training such that the network's weights and the predicted segmentations of unlabelled cases are iteratively improved. In our experiments, as in [3] we use 3 iterations of 100 epochs after the initial supervised training. Each of these methods was evaluated with the same network architecture. We used a simple U-net-like network [17] but with 3 levels of spatial resolution (with 16, 32, and 64 feature maps in each, and spatial downsampling by a factor of 2) and with 3 3x3 convolution plus ReLU operations before each upsampling or downsampling step, with the final layer having an additional 1x1 convolution followed by a sigmoidal activation. This results in 16 convolutional layers in total. All models were trained using the Adam optimiser with a learning rate of 10^-4 and the supervised part of the model was trained with the Dice loss. The trained networks were then evaluated against a held-out test set of N_test = 500 cases. To evaluate our approach we measured the Dice score of the predicted segmentations, as a quantifier of their pixelwise accuracy, and the proportion of segmentations with the correct topology when thresholded at p = 0.5. Table 1 shows the mean results over 500 test cases averaged over 20 training runs (over which both the allocation of images into training and test sets and the image degradation were randomised). Our method provides a significant reduction in the proportion of incorrect topologies of the segmentations compared to the baseline supervised learning scenario. Notably, this can occur without significantly sacrificing the pixelwise metrics of segmentation quality, demonstrating that an increase in topological accuracy does not need to come at a cost to pixelwise accuracy.
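The k-space degradation used to make the task harder could be implemented roughly as below; treating the removed k-space lines as image rows and the exact FFT shifting convention are assumptions of this sketch.

import numpy as np

def degrade(image, keep_centre=8, drop_prob=0.75, seed=None):
    rng = np.random.default_rng(seed)
    k = np.fft.fftshift(np.fft.fft2(image))                        # centred k-space of the image
    n = k.shape[0]
    lo, hi = n // 2 - keep_centre // 2, n // 2 + keep_centre // 2  # central band of 8 lines is always kept
    for row in range(n):
        if not (lo <= row < hi) and rng.random() < drop_prob:
            k[row, :] = 0.0                                        # remove (zero-fill) this k-space line
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k)))               # reconstruct the degraded image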
In Figure 1 we show a typical clinically acquired short-axis image and its estimated segmentations with and without our method. This image has not been artificially degraded as in our experiment above and is shown to illustrate that clinically acquired scans are often of a low quality compared to the UK Biobank dataset on which we demonstrate our method, and so are challenging to segment. Qualitatively observing these cases we see that a topological prior is beneficial in this realistic scenario.
Table 1: Comparison of supervised learning (SL), supervised learning with binary closure (SL + BC), the semi-supervised approach of [3] (SSL) and our semi-supervised learning (Ours) with a topological prior, averaged over the 20 training/testing runs. Bolded results are those which are statistically significantly better than all three benchmark methods. Starred results are statistically significantly worse than at least one benchmark method. Significance was determined by p < 0.01 when tested with a paired t-test.
Discussion
Although we have only demonstrated our approach for the segmentation of 2D images here, in the often challenging task of 3D segmentation, the ability to impose a topological loss function could be of significant use as the number of connected components, handles, and cavities may be specified. Our future work will investigate this generalisation and its utility in challenging tasks such as the 3D segmentation of cardiac MRI volumes of subjects with congenital conditions causing atypical connections between chambers of the heart. We will also investigate extending our approach to incorporate first learning the topology of a structure from the image, and then incorporating that knowledge into the segmentation, which would allow our approach to be applicable to cases such as cell segmentation where the number of components in the desired segmentation is not known a priori but can be deduced from the image.
In our experiments we found that setting λ = 1 meant that our method had no significant difference in Dice score to the other methods but an improved topological accuracy. As seen in table 1 a higher λ results in even better performance according to the segmentation topology, but pixelwise accuracy begins to drop. We found that changing λ allows one to trade off the pixelwise and topological accuracies and in future work we will also investigate the extent to which this trade-off can be managed so as to learn the optimal value of λ for a given objective.
The dominant computational cost in our method is the repeated PH calculation, which occurs k times when calculating the pixelwise gradients for each image. Computing the PH for a cubical complex containing V pixels/voxels in d dimensions can be achieved in Θ(3^d V + d 2^d V) time and Θ(d 2^d V) memory (see [20]), and so scales linearly with respect to the number of pixels/voxels in an image. We found that, using 64 × 64 pixel 2D images, the PH for one image was calculated in approximately 0.01s on a desktop PC. Consequently, when using k = 5 and a batch of 100 images for semi-supervised learning, one batch took about 5s to process. On large 3D volumes this cost could become prohibitive. However, the implementation of PH that we use is not optimised for our task and our algorithm allows for parallel computation of the PH of each predicted segmentation in the batch of semi-supervised images. With a GPU implementation for calculating the PH of a cubical complex, many parallel calculations could allow for significant improvements in overall run-time.
Conclusions
We have presented the first work to incorporate explicit topological priors into deep-learning based image segmentation, and demonstrated our approach in the 2D case using cardiac MRI data. We found that including prior information about the segmentation topology in a semi-supervised setting improved performance in terms of topological correctness on a challenging segmentation task with small amounts of labelled data. | 3,606 |
1901.10244 | 2952416736 | We present a novel method to explicitly incorporate topological prior knowledge into deep learning based segmentation, which is, to our knowledge, the first work to do so. Our method uses the concept of persistent homology, a tool from topological data analysis, to capture high-level topological characteristics of segmentation results in a way which is differentiable with respect to the pixelwise probability of being assigned to a given class. The topological prior knowledge consists of the sequence of desired Betti numbers of the segmentation. As a proof-of-concept we demonstrate our approach by applying it to the problem of left-ventricle segmentation of cardiac MR images of 500 subjects from the UK Biobank dataset, where we show that it improves segmentation performance in terms of topological correctness without sacrificing pixelwise accuracy. | The recent work of @cite_2 introduced a topological regulariser for classification problems by considering the stability of connected components of the classification boundary and can be extended to higher-order topological features. It also provided a differentiable loss function which can be incorporated in the training of a neural network. This approach differs from ours in that firstly, it imposes topological constraints on the shape of the classification boundary in the feature space of inputs to the network, rather than topological constraints in the space of the pixels in the image, and secondly it aims only to reduce overall topological complexity. Our approach aims to fit the desired absence or presence of certain features and so complex features can be penalised or rewarded, as is appropriate for the task at hand. | {
"abstract": [
"Regularization plays a crucial role in supervised learning. A successfully regularized model strikes a balance between a perfect description of the training data and the ability to generalize to unseen data. Most existing methods enforce a global regularization in a structure agnostic manner. In this paper, we initiate a new direction and propose to enforce the structural simplicity of the classification boundary by regularizing over its topological complexity. In particular, our measurement of topological complexity incorporates the importance of topological features (e.g., connected components, handles, and so on) in a meaningful manner, and provides a direct control over spurious topological structures. We incorporate the new measurement as a topological loss in training classifiers. We also propose an efficient algorithm to compute the gradient. Our method provides a novel way to topologically simplify the global structure of the model, without having to sacrifice too much of the flexibility of the model. We demonstrate the effectiveness of our new topological regularizer on a range of synthetic and real-world datasets."
],
"cite_N": [
"@cite_2"
],
"mid": [
"2810219751"
]
} | Explicit topological priors for deep-learning based image segmentation using persistent homology | Image segmentation, the task of assigning a class label to each pixel in an image, is a key problem in computer vision and medical image analysis. The most successful segmentation algorithms now use deep convolutional neural networks (CNN), with recent progress made in combining fine-grained local features with coarse-grained global features, such as in the popular U-net architecture [17]. Such methods allow information from a large spatial neighbourhood to be used in classifying each pixel. However, the loss function is usually one which considers each pixel individually rather than considering higher-level structures collectively. In many applications it is important to correctly capture the topological characteristics of the anatomy in a segmentation result. For example, detecting and counting distinct cells in electron microscopy images requires that neighbouring cells are correctly distinguished. Even very small pixelwise errors, such as incorrectly labelling one pixel in a thin boundary between cells, can cause two distinct cells to appear to merge. In this way significant topological errors can be caused by small pixelwise errors that have little effect on the loss function during training but may have large effects on downstream tasks. Another example is the modelling of blood flow in vessels, which requires accurate determination of vessel connectivity. In this case, small pixelwise errors can have a significant impact on the subsequent modelling task. Finally, when imaging subjects who may have congenital heart defects, the presence or absence of small holes in the walls between two chambers is diagnostically important and can be identified from images, but using current techniques it is difficult to incorporate this relevant information into a segmentation algorithm. For downstream tasks it is important that these holes are correctly segmented but they are frequently missed by current segmentation algorithms as they are insufficiently penalised during training. See Figure 1 for examples of topologically correct and incorrect segmentations of cardiac magnetic resonance images (MRI).
There has been some recent interest in introducing topological features into the training of CNNs, and this literature is reviewed in section 2 below. However, such approaches have generally involved detecting the presence or absence of topological features implicitly in order to quantify them in a differentiable way that can be incorporated into the training of the segmentation network. The weakness of this approach is that it is hard to know exactly which topological features are being learned. Instead, it would be desirable to explicitly specify the presence or absence of certain topological features directly in a loss function. This would enable us to designate, for example, that the segmentation result should have one connected component which has one hole in it. This is challenging due to the inherently discrete nature of topological features, making it hard to create a differentiable loss function which accounts for them.
In this paper we demonstrate that persistent homology (PH), a tool from the field of topological data analysis, can be used to address this problem by quantifying the persistence, or stability, of all topological features present in an image. Our method uses these high-level structural features to provide a pixelwise gradient that increases or decreases the persistence of desired or undesired topological features in a segmentation. These gradients can then be back-propagated through the weights of any segmentation network and combined with any other pixelwise loss function. In this way, the desired topological features of a segmentation can be used to help train a network even in the absence of a ground truth, and without the need for those features to be implicitly learned from a large amount of training data, which is not always available. This topologically driven gradient can be incorporated into supervised learning or used in a semi-supervised learning scenario, which is our focus here.
Our main contribution is the presentation of, to the best of our knowledge, the first method to explicitly incorporate topological prior information into deeplearning based segmentation. The explicit topological prior is the sequence of desired Betti numbers of the segmentations and our method provides a gradient calculated such that the network learns to produce segmentations with the correct topology. We begin by reviewing literature related to introducing topology into deep learning in section 2. In section 3 we cover the theory of PH and introduce the relevant notation. In section 4 we then describe in detail our approach for integrating PH and deep learning for image segmentation, and then demonstrate the method in a case study on cardiac MRI in section 5.
Theory
PH is an algebraic tool developed as part of the growing mathematical field of topological data analysis, which involves computing topological features of shapes and data. We give a brief overview of PH here, but direct the reader to [6,5,14] for more thorough reviews, discussions and historical background of the subject. Although PH most commonly considers simplicial complexes 1 due to their generality, for the analysis of images and volumes consisting of pixels and voxels, cubical complexes are considerably more convenient and so we introduce them, and their theory of PH here.
Cubical Complexes
A cubical complex is a set consisting of points, unit line segments, and unit squares, cubes, hypercubes, and so on. Following the notation of [11], its fundamental building blocks are elementary intervals, each a closed subset of the real line of the form I = [z, z + 1] for z ∈ Z. These represent unit line segments. Points are represented by degenerate intervals I = [z, z]. From these, we can define elementary cubes, Q, as the product of elementary intervals,
Q = I_1 × I_2 × ... × I_d, Q ⊂ R^d. (1)
The set of all elementary cubes in R^d is K^d, and K = ∪_d K^d. The dimension of an elementary cube, dim Q, is the number of non-degenerate components in the product defining Q, and we will denote the set of all d-dimensional elementary cubes as K_d = {Q ∈ K | dim Q = d}. By setting up the theory in this way, we are restricting the class of objects we can talk about to unit cubes, which will represent pixels in the images we will consider. For simplicity, from hereon we will describe the two-dimensional case, but our approach is generalisable. Consider a 2D array, S, of N × N pixels, where the pixel in row i and column j has a value S_[i,j] ∈ [0, 1]. In terms of the cubical complex each pixel covers a unit square described by the elementary cube
Q_{i,j} = [i, i + 1] × [j, j + 1], where Q_{i,j} ∈ K_2.
We then consider filtrations of this cubical complex. For each value of a threshold p ∈ [0, 1], we can find the cubical complex B(p) given by
B(p) = ⋃_{i,j=0}^{N−1} { Q_{i,j} : S_[i,j] ≥ (1 − p) }. (2)
In the context of an image, B(p) represents a binarised image made by setting pixels with a value above 1 − p to 1, and below 1 − p to 0. By considering these sets for an increasing sequence of filtration values we obtain a sequence
B(0) ⊆ B(p_1) ⊆ B(p_2) ⊆ ... ⊆ B(1). (3)
This filtered space is the key object of PH.
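As a concrete illustration, the filtration can be computed with a few lines of NumPy; the helper names and the number of threshold levels below are illustrative choices rather than part of the method:

import numpy as np

def binarise_at(S, p):
    """Return the binarised image B(p): pixels with S >= 1 - p are included.

    S is an (N, N) array of pixelwise probabilities in [0, 1].
    """
    return S >= 1.0 - p

def filtration(S, num_levels=11):
    """Nested sequence B(0) <= B(p_1) <= ... <= B(1) for evenly spaced thresholds."""
    return [binarise_at(S, p) for p in np.linspace(0.0, 1.0, num_levels)]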
Persistent Homology
PH measures the lifetimes of topological features within a filtration such as the sequence above. The premise is that those features with long lifetimes, in terms of the filtration value p, are significant features of the data. Those with short lifetimes are usually considered to be noise. For each complex B(p) ⊂ R^d, we can consider its topology by finding the homology group H_n, the rank of which is the n-th Betti number, β_n. These numbers are topological invariants which, informally speaking, count the number of d-dimensional holes in an object. β_0 counts the number of connected components, β_1 counts the number of loops, and, although not relevant to the 2D case we consider here, β_2 counts the number of hollow cavities, and so on. (We avoid the details of how the homology groups and Betti numbers are computed here. In our experiments, we used the implementation from the Python library Gudhi, available at [1]. In our implementation diagonally adjacent pixels are considered as neighbouring, but this does not generally need to be the case.) As the filtration value p increases, more pixels join the cubical complex and topological features in the binarised image are created and destroyed. A useful way of visualising the PH of a dataset is to use a barcode diagram, an example of which is given in Figure 2. This diagram plots the lifespans of all topological features in the data, where each feature is represented by one bar, and with different colour bars representing different Betti numbers. The Betti numbers of B(p*) are given by the number of bars present at the x-coordinate x = p*. A key feature of the barcode diagram is that it is stable in the presence of noise, and there are theoretical guarantees that small changes to the original data can only make small changes to the positions and lengths of the bars [10]. For an array of input data S, we will describe its PH by denoting each bar in the barcode diagram as H^ℓ_d(S) = (p_birth, p_death), which is an ordered pair of the birth and death filtration values of the ℓ-th longest bar of dimension d, where ℓ ≥ 1 and d ≥ 0.
Our method will use these barcode diagrams as a description of the topological features in a predicted segmentation mask. In the case we consider below, we begin with the prior knowledge that the object being segmented should contain one hole (i.e. β 1 = 1) and so aim to extend the length of the bar corresponding to the most persistent 1-dimensional feature. It is important to note both that our method can be applied generally to encourage the presence or absence of topological features of any number or dimension, but also that this prior information must be specified for the task at hand.
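The barcode can be obtained with the Gudhi library mentioned above by building a cubical complex whose sublevel-set filtration uses the values 1 − S, so that high-probability pixels enter the complex first. The helper below is an illustrative sketch only (Gudhi's default connectivity may differ slightly from the diagonal-adjacency convention described in the text, and the exact API can vary between Gudhi versions):

import numpy as np
import gudhi

def barcode(S):
    """Persistence intervals of the filtration induced by a probability map S.

    Returns {dimension: array of (birth, death) filtration values}, where a
    filtration value p corresponds to the threshold 1 - p on S.
    """
    cc = gudhi.CubicalComplex(top_dimensional_cells=1.0 - S)
    cc.persistence()  # must be computed before querying intervals
    return {d: cc.persistence_intervals_in_dimension(d) for d in (0, 1)}

def longest_bar(S, dim=1, rank=1):
    """The rank-th longest bar of the given dimension, e.g. H^1_1(S).

    Assumes at least `rank` bars of that dimension exist; infinite deaths are
    clipped to 1 for the purpose of ranking bar lengths.
    """
    bars = np.asarray(barcode(S)[dim])
    lengths = np.clip(bars[:, 1], None, 1.0) - bars[:, 0]
    order = np.argsort(lengths)[::-1]
    return tuple(bars[order[rank - 1]])  # (p_birth, p_death)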
Method
Throughout this paper we consider only the problem of binary segmentation, that is, assigning a value between 0 and 1 to each pixel in an image which represents the probability of it being classified as part of a particular structure. Our approach does generalise to multi-class segmentation (inasmuch as it can be described as several binary segmentation problems) but, for convenience and simplicity, we will discuss only the binary case here.
Topological Pixelwise Gradient
In our approach the desired topology of the segmentation mask needs to be specified in the form of its Betti numbers. For ease of explanation we consider the case in which β 1 = 1 is specified, corresponding to the prior knowledge that the segmentation mask should contain exactly one closed cycle. Given a neural network f which performs binary segmentation and is parameterised by a set of weights ω, an N ×N image X produces an N ×N array of pixelwise probabilities, S = f (X; ω). In the supervised learning setting, a pixelwise gradient is calculated by, for example, calculating the binary cross-entropy or Dice loss between S and some ground-truth labels Y .
We additionally calculate a pixelwise gradient for a topological loss as follows. Firstly the PH of S is calculated, producing a set of lifetimes of topological features H d , such as that shown in figure 2a. For each desired feature, the longest bars (and so the most persistent features) of the corresponding dimension are identified. In our case the presence of a closed cycle corresponds to the longest green bar in the barcode diagram, denoted by H 1 1 (S). In order to make this feature more persistent we need to identify the pixels which, if assigned a higher/lower probability of appearing in the segmentation, will extend the length of this bar in the barcode diagram, and therefore increase the persistence of that topological feature. These pixels are identified by an iterative process which begins at the pixels with the filtration values at precisely the ends of the relevant bar, which are H 1 1 (S) = (p * birth , p * death ) for the left and right ends of the bar respectively. For each of k iterations, where k is an integer parameter which can be freely chosen, the pixels with these extremal filtration values are filled in (with a 1 and 0 respectively), extending the bar in the barcode, and these pixels have a gradient of ∓1 applied to them. The PH is recomputed, and another pixel chosen for each end of the bar. These pixels are also filled in, and more chosen, and so on, until k pixels have been identified for each end of the bar. These are now the 2k pixels which, if their filtration values are adjusted, will result in the most significant change in the persistence of the relevant topological object, and it is these 2k pixels which will have a gradient applied to them. Algorithm 1 shows pseudo-code for the β 1 = 1 example 3 .
Algorithm 1 Topological loss gradient, β_1 = 1
Input S: array of real numbers - pixelwise segmentation probabilities; k: integer - number of pixels to apply gradient to; ε: real number > 0 - threshold to avoid modifying already persistent features
Output G: array of real numbers - pixelwise gradients
1: procedure TopoGrad(S, k, ε)
2:   t ← 0
3:   G is initialised as an N × N array of 0
4:   while t < k do
5:     H_d(S) ← the persistent homology of S
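A Python sketch of the full gradient computation, reconstructed from the textual description above for the β_1 = 1 case, is given below; it reuses the longest_bar helper from the earlier sketch, and details such as tie-breaking between pixels with equal filtration values and the exact use of the ε threshold are assumptions rather than an exact reproduction of Algorithm 1:

import numpy as np

def topo_grad(S, k=5, eps=0.01):
    """Pixelwise gradient encouraging one persistent 1-dimensional feature.

    S   : (N, N) array of segmentation probabilities.
    k   : number of pixels to modify at each end of the bar.
    eps : tolerance for treating an end of the bar as already persistent.
    Returns G, an (N, N) array of gradients (-1 / +1 on the chosen pixels).
    """
    S = S.copy()                   # working copy; chosen pixels are filled in
    G = np.zeros_like(S)
    for _ in range(k):
        p_birth, p_death = longest_bar(S, dim=1, rank=1)
        # Left end of the bar: the pixel whose filtration value equals p_birth.
        if p_birth > eps:
            hits = np.argwhere(np.isclose(1.0 - S, p_birth))
            if hits.size:
                i, j = hits[0]
                S[i, j] = 1.0      # fill in, so the loop is born earlier
                G[i, j] -= 1.0     # gradient descent pushes this probability up
        # Right end of the bar: the pixel whose filtration value equals p_death.
        if p_death < 1.0 - eps:
            hits = np.argwhere(np.isclose(1.0 - S, min(p_death, 1.0)))
            if hits.size:
                i, j = hits[0]
                S[i, j] = 0.0      # remove, so the loop dies later
                G[i, j] += 1.0     # gradient descent pushes this probability down
    return G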
Semi-supervised Learning
We incorporate the topological prior into a semi-supervised learning scheme as follows. In each training batch, firstly the binary cross-entropy loss from the N labelled cases is calculated. Next the pixelwise gradients, G, for the N_u unlabelled cases are calculated as in Algorithm 1 and multiplied by a positive constant λ, which weights this term. The gradient from the cross-entropy loss and the topological gradient are then summed. In our experiments we set k = 5, ε = 0.01 and experiment with a choice of λ, chosen by manual tuning.
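The combination of the two gradients can be sketched as a single training step as follows; PyTorch is used purely for illustration, the network is assumed to output a (B, N, N) array of probabilities, and topo_grad refers to the sketch given after Algorithm 1:

import torch
import torch.nn.functional as F

def semi_supervised_step(net, optimiser, x_lab, y_lab, x_unlab, lam=1.0, k=5, eps=0.01):
    """One training step: supervised BCE on labelled images plus the
    topological gradient (weighted by lam) on unlabelled images."""
    optimiser.zero_grad()

    # Supervised part: standard pixelwise loss on the labelled batch.
    s_lab = net(x_lab)
    loss_sup = F.binary_cross_entropy(s_lab, y_lab)

    # Topological part: compute the pixelwise gradient G for each unlabelled
    # prediction outside the autograd graph, then inject it through the
    # identity d/dS [sum(S * G)] = G.
    s_unlab = net(x_unlab)
    G = torch.stack([
        torch.from_numpy(topo_grad(s.detach().cpu().numpy(), k, eps)).to(s_unlab)
        for s in s_unlab
    ])
    loss_topo = (s_unlab * G).sum()

    (loss_sup + lam * loss_topo).backward()
    optimiser.step()
    return loss_sup.item()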
Experiments and Results
We demonstrate our approach on real data with the task of myocardial segmentation of cardiac MRI. We use a subset of the UK Biobank dataset [19,15], which consists of the mid-slice of the short-axis view of the heart. Example images and segmentations from this dataset are shown in Figure 3. We use one end-systole image from each subject, each of which has a gold-standard left-ventricle segmentation provided. The images were cropped to a 64x64 square centred around the left ventricle. Since the UK Biobank dataset contains high-quality images compared to a typical clinical acquisition we made the task more challenging, degrading the images by removing k-space lines in order to lower image quality and create artefacts. For each image in the dataset we compute the Fourier transform, and k-space lines outside of a central band of 8 lines are removed with 3/4 probability and zero-filled. The degraded image is then reconstructed by performing the inverse Fourier transform, and it is these images which are used for both training and testing. Examples of original and degraded images are shown in Figure 3. Our method is demonstrated in the semi-supervised set- ting, where a small number of labelled cases, N , and N u = 400 unlabelled cases are used. As a baseline we evaluated a fully supervised method using just the labelled cases, and also post-processed the supervised results using image processing tools commonly used to correct small topological errors. We used the binary closure morphology operator with a circular structuring element with a radius of 3 pixels. Additionally, we compared our method to an iterative semisupervised approach similar to [3]. In this method the predicted segmentations from unlabelled cases were used as labels for training such that the network's weights and the predicted segmentations of unlabelled cases are iteratively improved. In our experiments, as in [3] we use 3 iterations of 100 epochs after the initial supervised training. Each of these methods was evaluated with the same network architecture. We used a simple U-net-like network [17] but with 3 levels of spatial resolution (with 16, 32, and 64 feature maps in each, and spatial downsampling by a factor of 2) and with 3 3x3 convolution plus ReLU operations before each upsampling or downsampling step, with the final layer having an additional 1x1 convolution followed by a sigmoidal activation. This results in 16 convolutional layers in total. All models were trained using the Adam optimiser with a learning rate of 10 −4 and the supervised part of the model was trained with the Dice loss. The trained networks were then evaluated against a held-out test set of N test = 500 cases. To evaluate our approach we measured the Dice score of the predicted segmentations, as a quantifier of their pixelwise accuracy, and the proportion of segmentations with the correct topology when thresholded at p = 0.5. Table 1 shows the mean results over 500 test cases averaged over 20 training runs (over which both the allocation of images into training and test sets and the image degradation were randomised). Our method provides a significant reduction in the proportion of incorrect topologies of the segmentations compared to the baseline supervised learning scenario. Notably, this can occur without significantly sacrificing the pixelwise metrics of segmentation quality demonstrating that an increase in topological accuracy does not need to come at a cost to pixelwise accuracy. 
In Figure 1 we show a typical clinically acquired short-axis image and its estimated segmentations with and without our method. This image has not been artificially degraded as in our experiment above and is shown to illustrate that clinically acquired scans are often of a low quality compared to the UK Biobank dataset on which we demonstrate our method, and so are challenging to segment. Qualitatively observing these cases we see that a topological prior is beneficial in this realistic scenario. Table 1: Comparison of supervised learning (SL), supervised learning with binary closure (SL + BC), the semi-supervised approach of [3] (SSL) and our semi-supervised learning (Ours) with a topological prior, averaged over the 20 training/testing runs. Bolded results are those which are statistically significantly better than all three benchmark methods. Starred results are statistically significantly worse than at least one benchmark method. Significance was determined by p < 0.01 when tested with a paired t-test.
Discussion
Although we have only demonstrated our approach for the segmentation of 2D images here, in the often challenging task of 3D segmentation, the ability to impose a topological loss function could be of significant use as the number of connected components, handles, and cavities may be specified. Our future work will investigate this generalisation and its utility in challenging tasks such as the 3D segmentation of cardiac MRI volumes of subjects with congenital conditions causing atypical connections between chambers of the heart. We will also investigate extending our approach to incorporate first learning the topology of a structure from the image, and then incorporating that knowledge into the segmentation, which would allow our approach to be applicable to cases such as cell segmentation where the number of components in the desired segmentation is not known a priori but can be deduced from the image.
In our experiments we found that setting λ = 1 meant that our method had no significant difference in Dice score to the other methods but an improved topological accuracy. As seen in table 1 a higher λ results in even better performance according to the segmentation topology, but pixelwise accuracy begins to drop. We found that changing λ allows one to trade off the pixelwise and topological accuracies and in future work we will also investigate the extent to which this trade-off can be managed so as to learn the optimal value of λ for a given objective.
The dominant computational cost in our method is the repeated PH calculation which occurs k times when calculating the pixelwise gradients for each image. Computing the PH for a cubical complex containing V pixels/voxels in d dimensions can be achieved in Θ(3^d V + d·2^d V) time and Θ(d·2^d V) memory (see [20]), and so scales linearly with respect to the number of pixels/voxels in an image. We found that, using 64 × 64 pixel 2D images, the PH for one image was calculated in approximately 0.01s on a desktop PC. Consequently, when using k = 5 and a batch of 100 images for semi-supervised learning, one batch took about 5s to process. On large 3D volumes this cost could become prohibitive. However, the implementation of PH that we use is not optimised for our task and our algorithm allows for parallel computation of the PH of each predicted segmentation in the batch of semi-supervised images. With a GPU implementation for calculating the PH of a cubical complex, many parallel calculations could allow for significant improvements in overall run-time.
Conclusions
We have presented the first work to incorporate explicit topological priors into deep-learning based image segmentation, and demonstrated our approach in the 2D case using cardiac MRI data. We found that including prior information about the segmentation topology in a semi-supervised setting improved performance in terms of topological correctness on a challenging segmentation task with small amounts of labelled data. | 3,606 |
1901.10244 | 2952416736 | We present a novel method to explicitly incorporate topological prior knowledge into deep learning based segmentation, which is, to our knowledge, the first work to do so. Our method uses the concept of persistent homology, a tool from topological data analysis, to capture high-level topological characteristics of segmentation results in a way which is differentiable with respect to the pixelwise probability of being assigned to a given class. The topological prior knowledge consists of the sequence of desired Betti numbers of the segmentation. As a proof-of-concept we demonstrate our approach by applying it to the problem of left-ventricle segmentation of cardiac MR images of 500 subjects from the UK Biobank dataset, where we show that it improves segmentation performance in terms of topological correctness without sacrificing pixelwise accuracy. | Persistent homology has previously been applied to the problem of semantic segmentation, such as in @cite_19 @cite_12 @cite_14 . The important distinction between our method and these previous works is that they apply PH to the input image to extract features, which are then used as inputs to some other algorithm for training. Such approaches can capture complex features of the input images but require those topological features to be directly extractable from the raw image data. Our approach instead processes the image with a CNN and it is the output of the CNN, representing the pixelwise likelihood of the structure we want to segment, which has PH applied to it. | {
"abstract": [
"We introduce a novel algorithm for segmenting the high resolution CT images of the left ventricle (LV), particularly the papillary muscles and the trabeculae. High quality segmentations of these structures are necessary in order to better understand the anatomical function and geometrical properties of LV. These fine structures, however, are extremely challenging to capture due to their delicate and complex nature in both geometry and topology. Our algorithm computes the potential missing topological structures of a given initial segmentation. Using techniques from computational topology, e.g. persistent homology, our algorithm find topological handles which are likely to be the true signal. To further increase accuracy, these proposals are measured by the saliency and confidence from a trained classifier. Handles with high scores are restored in the final segmentation, leading to high quality segmentation results of the complex structures.",
"Topological tools provide features about spaces, which are insensitive to continuous deformations. Applied to images, the topological analysis reveals important characteristics: how many connected components are present, which ones have holes and how many, how are they related one to another, how to measure them and find their locations. We show in this paper that the extraction of such features by computing persistent homology is suitable for grayscale image segmentation.",
"Automated tumor segmentation in Hematoxylin & Eosin stained histology images is an essential step towards a computer-aided diagnosis system. In this work we propose a novel tumor segmentation approach for a histology whole-slide image (WSI) by exploring the degree of connectivity among nuclei using the novel idea of persistent homology profiles. Our approach is based on 3 steps: 1) selection of exemplar patches from the training dataset using convolutional neural networks (CNNs); 2) construction of persistent homology profiles based on topological features; 3) classification using variant of k-nearest neighbors (k-NN). Extensive experimental results favor our algorithm over a conventional CNN."
],
"cite_N": [
"@cite_19",
"@cite_14",
"@cite_12"
],
"mid": [
"2124496089",
"2570204044",
"2505036509"
]
} | Explicit topological priors for deep-learning based image segmentation using persistent homology | Image segmentation, the task of assigning a class label to each pixel in an image, is a key problem in computer vision and medical image analysis. The most successful segmentation algorithms now use deep convolutional neural networks (CNN), with recent progress made in combining fine-grained local features with coarse-grained global features, such as in the popular U-net architecture [17]. Such methods allow information from a large spatial neighbourhood to be used in classifying each pixel. However, the loss function is usually one which considers each pixel individually rather than considering higher-level structures collectively. In many applications it is important to correctly capture the topological characteristics of the anatomy in a segmentation result. For example, detecting and counting distinct cells in electron microscopy images requires that neighbouring cells are correctly distinguished. Even very small pixelwise errors, such as incorrectly labelling one pixel in a thin boundary between cells, can cause two distinct cells to appear to merge. In this way significant topological errors can be caused by small pixelwise errors that have little effect on the loss function during training but may have large effects on downstream tasks. Another example is the modelling of blood flow in vessels, which requires accurate determination of vessel connectivity. In this case, small pixelwise errors can have a significant impact on the subsequent modelling task. Finally, when imaging subjects who may have congenital heart defects, the presence or absence of small holes in the walls between two chambers is diagnostically important and can be identified from images, but using current techniques it is difficult to incorporate this relevant information into a segmentation algorithm. For downstream tasks it is important that these holes are correctly segmented but they are frequently missed by current segmentation algorithms as they are insufficiently penalised during training. See Figure 1 for examples of topologically correct and incorrect segmentations of cardiac magnetic resonance images (MRI).
There has been some recent interest in introducing topological features into the training of CNNs, and this literature is reviewed in section 2 below. However, such approaches have generally involved detecting the presence or absence of topological features implicitly in order to quantify them in a differentiable way that can be incorporated into the training of the segmentation network. The weakness of this approach is that it is hard to know exactly which topological features are being learned. Instead, it would be desirable to explicitly specify the presence or absence of certain topological features directly in a loss function. This would enable us to designate, for example, that the segmentation result should have one connected component which has one hole in it. This is challenging due to the inherently discrete nature of topological features, making it hard to create a differentiable loss function which accounts for them.
In this paper we demonstrate that persistent homology (PH), a tool from the field of topological data analysis, can be used to address this problem by quantifying the persistence, or stability, of all topological features present in an image. Our method uses these high-level structural features to provide a pixelwise gradient that increases or decreases the persistence of desired or undesired topological features in a segmentation. These gradients can then be back-propagated through the weights of any segmentation network and combined with any other pixelwise loss function. In this way, the desired topological features of a segmentation can be used to help train a network even in the absence of a ground truth, and without the need for those features to be implicitly learned from a large amount of training data, which is not always available. This topologically driven gradient can be incorporated into supervised learning or used in a semi-supervised learning scenario, which is our focus here.
Our main contribution is the presentation of, to the best of our knowledge, the first method to explicitly incorporate topological prior information into deeplearning based segmentation. The explicit topological prior is the sequence of desired Betti numbers of the segmentations and our method provides a gradient calculated such that the network learns to produce segmentations with the correct topology. We begin by reviewing literature related to introducing topology into deep learning in section 2. In section 3 we cover the theory of PH and introduce the relevant notation. In section 4 we then describe in detail our approach for integrating PH and deep learning for image segmentation, and then demonstrate the method in a case study on cardiac MRI in section 5.
Theory
PH is an algebraic tool developed as part of the growing mathematical field of topological data analysis, which involves computing topological features of shapes and data. We give a brief overview of PH here, but direct the reader to [6,5,14] for more thorough reviews, discussions and historical background of the subject. Although PH most commonly considers simplicial complexes 1 due to their generality, for the analysis of images and volumes consisting of pixels and voxels, cubical complexes are considerably more convenient and so we introduce them, and their theory of PH here.
Cubical Complexes
A cubical complex is a set consisting of points, unit line segments, and unit squares, cubes, hypercubes, and so on. Following the notation of [11], its fundamental building blocks are elementary intervals, each a closed subset of the real line of the form I = [z, z + 1] for z ∈ Z. These represent unit line segments. Points are represented by degenerate intervals I = [z, z]. From these, we can define elementary cubes, Q, as the product of elementary intervals,
Q = I_1 × I_2 × ... × I_d, Q ⊂ R^d. (1)
The set of all elementary cubes in R^d is K^d, and K = ∪_d K^d. The dimension of an elementary cube, dim Q, is the number of non-degenerate components in the product defining Q, and we will denote the set of all d-dimensional elementary cubes as K_d = {Q ∈ K | dim Q = d}. By setting up the theory in this way, we are restricting the class of objects we can talk about to unit cubes, which will represent pixels in the images we will consider. For simplicity, from hereon we will describe the two-dimensional case, but our approach is generalisable. Consider a 2D array, S, of N × N pixels, where the pixel in row i and column j has a value S_[i,j] ∈ [0, 1]. In terms of the cubical complex each pixel covers a unit square described by the elementary cube
Q_{i,j} = [i, i + 1] × [j, j + 1], where Q_{i,j} ∈ K_2.
We then consider filtrations of this cubical complex. For each value of a threshold p ∈ [0, 1], we can find the cubical complex B(p) given by
B(p) = ⋃_{i,j=0}^{N−1} { Q_{i,j} : S_[i,j] ≥ (1 − p) }. (2)
In the context of an image, B(p) represents a binarised image made by setting pixels with a value above 1 − p to 1, and below 1 − p to 0. By considering these sets for an increasing sequence of filtration values we obtain a sequence
B(0) ⊆ B(p_1) ⊆ B(p_2) ⊆ ... ⊆ B(1). (3)
This filtered space is the key object of PH.
Persistent Homology
PH measures the lifetimes of topological features within a filtration such as the sequence above. The premise is that those features with long lifetimes, in terms of the filtration value p, are significant features of the data. Those with short lifetimes are usually considered to be noise. For each complex B(p) ⊂ R^d, we can consider its topology by finding the homology group H_n, the rank of which is the n-th Betti number, β_n. These numbers are topological invariants which, informally speaking, count the number of d-dimensional holes in an object. β_0 counts the number of connected components, β_1 counts the number of loops, and, although not relevant to the 2D case we consider here, β_2 counts the number of hollow cavities, and so on. (We avoid the details of how the homology groups and Betti numbers are computed here. In our experiments, we used the implementation from the Python library Gudhi, available at [1]. In our implementation diagonally adjacent pixels are considered as neighbouring, but this does not generally need to be the case.) As the filtration value p increases, more pixels join the cubical complex and topological features in the binarised image are created and destroyed. A useful way of visualising the PH of a dataset is to use a barcode diagram, an example of which is given in Figure 2. This diagram plots the lifespans of all topological features in the data, where each feature is represented by one bar, and with different colour bars representing different Betti numbers. The Betti numbers of B(p*) are given by the number of bars present at the x-coordinate x = p*. A key feature of the barcode diagram is that it is stable in the presence of noise, and there are theoretical guarantees that small changes to the original data can only make small changes to the positions and lengths of the bars [10]. For an array of input data S, we will describe its PH by denoting each bar in the barcode diagram as H^ℓ_d(S) = (p_birth, p_death), which is an ordered pair of the birth and death filtration values of the ℓ-th longest bar of dimension d, where ℓ ≥ 1 and d ≥ 0.
Our method will use these barcode diagrams as a description of the topological features in a predicted segmentation mask. In the case we consider below, we begin with the prior knowledge that the object being segmented should contain one hole (i.e. β 1 = 1) and so aim to extend the length of the bar corresponding to the most persistent 1-dimensional feature. It is important to note both that our method can be applied generally to encourage the presence or absence of topological features of any number or dimension, but also that this prior information must be specified for the task at hand.
Method
Throughout this paper we consider only the problem of binary segmentation, that is, assigning a value between 0 and 1 to each pixel in an image which represents the probability of it being classified as part of a particular structure. Our approach does generalise to multi-class segmentation (inasmuch as it can be described as several binary segmentation problems) but, for convenience and simplicity, we will discuss only the binary case here.
Topological Pixelwise Gradient
In our approach the desired topology of the segmentation mask needs to be specified in the form of its Betti numbers. For ease of explanation we consider the case in which β 1 = 1 is specified, corresponding to the prior knowledge that the segmentation mask should contain exactly one closed cycle. Given a neural network f which performs binary segmentation and is parameterised by a set of weights ω, an N ×N image X produces an N ×N array of pixelwise probabilities, S = f (X; ω). In the supervised learning setting, a pixelwise gradient is calculated by, for example, calculating the binary cross-entropy or Dice loss between S and some ground-truth labels Y .
We additionally calculate a pixelwise gradient for a topological loss as follows. Firstly the PH of S is calculated, producing a set of lifetimes of topological features H d , such as that shown in figure 2a. For each desired feature, the longest bars (and so the most persistent features) of the corresponding dimension are identified. In our case the presence of a closed cycle corresponds to the longest green bar in the barcode diagram, denoted by H 1 1 (S). In order to make this feature more persistent we need to identify the pixels which, if assigned a higher/lower probability of appearing in the segmentation, will extend the length of this bar in the barcode diagram, and therefore increase the persistence of that topological feature. These pixels are identified by an iterative process which begins at the pixels with the filtration values at precisely the ends of the relevant bar, which are H 1 1 (S) = (p * birth , p * death ) for the left and right ends of the bar respectively. For each of k iterations, where k is an integer parameter which can be freely chosen, the pixels with these extremal filtration values are filled in (with a 1 and 0 respectively), extending the bar in the barcode, and these pixels have a gradient of ∓1 applied to them. The PH is recomputed, and another pixel chosen for each end of the bar. These pixels are also filled in, and more chosen, and so on, until k pixels have been identified for each end of the bar. These are now the 2k pixels which, if their filtration values are adjusted, will result in the most significant change in the persistence of the relevant topological object, and it is these 2k pixels which will have a gradient applied to them. Algorithm 1 shows pseudo-code for the β 1 = 1 example 3 .
Algorithm 1 Topological loss gradient, β_1 = 1
Input S: array of real numbers - pixelwise segmentation probabilities; k: integer - number of pixels to apply gradient to; ε: real number > 0 - threshold to avoid modifying already persistent features
Output G: array of real numbers - pixelwise gradients
1: procedure TopoGrad(S, k, ε)
2:   t ← 0
3:   G is initialised as an N × N array of 0
4:   while t < k do
5:     H_d(S) ← the persistent homology of S
Semi-supervised Learning
We incorporate the topological prior into a semi-supervised learning scheme as follows. In each training batch, firstly the binary cross-entropy loss from the N labelled cases is calculated. Next the pixelwise gradients, G, for the N_u unlabelled cases are calculated as in Algorithm 1 and multiplied by a positive constant λ, which weights this term. The gradient from the cross-entropy loss and the topological gradient are then summed. In our experiments we set k = 5, ε = 0.01 and experiment with a choice of λ, chosen by manual tuning.
Experiments and Results
We demonstrate our approach on real data with the task of myocardial segmentation of cardiac MRI. We use a subset of the UK Biobank dataset [19,15], which consists of the mid-slice of the short-axis view of the heart. Example images and segmentations from this dataset are shown in Figure 3. We use one end-systole image from each subject, each of which has a gold-standard left-ventricle segmentation provided. The images were cropped to a 64x64 square centred around the left ventricle. Since the UK Biobank dataset contains high-quality images compared to a typical clinical acquisition we made the task more challenging, degrading the images by removing k-space lines in order to lower image quality and create artefacts. For each image in the dataset we compute the Fourier transform, and k-space lines outside of a central band of 8 lines are removed with 3/4 probability and zero-filled. The degraded image is then reconstructed by performing the inverse Fourier transform, and it is these images which are used for both training and testing. Examples of original and degraded images are shown in Figure 3. Our method is demonstrated in the semi-supervised set- ting, where a small number of labelled cases, N , and N u = 400 unlabelled cases are used. As a baseline we evaluated a fully supervised method using just the labelled cases, and also post-processed the supervised results using image processing tools commonly used to correct small topological errors. We used the binary closure morphology operator with a circular structuring element with a radius of 3 pixels. Additionally, we compared our method to an iterative semisupervised approach similar to [3]. In this method the predicted segmentations from unlabelled cases were used as labels for training such that the network's weights and the predicted segmentations of unlabelled cases are iteratively improved. In our experiments, as in [3] we use 3 iterations of 100 epochs after the initial supervised training. Each of these methods was evaluated with the same network architecture. We used a simple U-net-like network [17] but with 3 levels of spatial resolution (with 16, 32, and 64 feature maps in each, and spatial downsampling by a factor of 2) and with 3 3x3 convolution plus ReLU operations before each upsampling or downsampling step, with the final layer having an additional 1x1 convolution followed by a sigmoidal activation. This results in 16 convolutional layers in total. All models were trained using the Adam optimiser with a learning rate of 10 −4 and the supervised part of the model was trained with the Dice loss. The trained networks were then evaluated against a held-out test set of N test = 500 cases. To evaluate our approach we measured the Dice score of the predicted segmentations, as a quantifier of their pixelwise accuracy, and the proportion of segmentations with the correct topology when thresholded at p = 0.5. Table 1 shows the mean results over 500 test cases averaged over 20 training runs (over which both the allocation of images into training and test sets and the image degradation were randomised). Our method provides a significant reduction in the proportion of incorrect topologies of the segmentations compared to the baseline supervised learning scenario. Notably, this can occur without significantly sacrificing the pixelwise metrics of segmentation quality demonstrating that an increase in topological accuracy does not need to come at a cost to pixelwise accuracy. 
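The k-space degradation can be sketched in NumPy as below; the central band of 8 lines and the 3/4 drop probability follow the description above, while the choice of image axis treated as the line direction is an assumption:

import numpy as np

def degrade(image, keep_centre=8, drop_prob=0.75, rng=None):
    """Degrade an image by removing k-space lines outside a central band.

    Rows of k-space outside the central `keep_centre` band are zero-filled
    with probability `drop_prob`; the image is then reconstructed by the
    inverse FFT and its magnitude is returned.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = image.shape[0]
    kspace = np.fft.fftshift(np.fft.fft2(image))

    centre = n // 2
    keep = np.zeros(n, dtype=bool)
    keep[centre - keep_centre // 2: centre + keep_centre // 2] = True
    keep |= rng.random(n) >= drop_prob      # keep the other lines with prob 1/4

    kspace[~keep, :] = 0.0                  # zero-fill the dropped lines
    return np.abs(np.fft.ifft2(np.fft.ifftshift(kspace)))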
In Figure 1 we show a typical clinically acquired short-axis image and its estimated segmentations with and without our method. This image has not been artificially degraded as in our experiment above and is shown to illustrate that clinically acquired scans are often of a low quality compared to the UK Biobank dataset on which we demonstrate our method, and so are challenging to segment. Qualitatively observing these cases we see that a topological prior is beneficial in this realistic scenario. Table 1: Comparison of supervised learning (SL), supervised learning with binary closure (SL + BC), the semi-supervised approach of [3] (SSL) and our semi-supervised learning (Ours) with a topological prior, averaged over the 20 training/testing runs. Bolded results are those which are statistically significantly better than all three benchmark methods. Starred results are statistically significantly worse than at least one benchmark method. Significance was determined by p < 0.01 when tested with a paired t-test.
Discussion
Although we have only demonstrated our approach for the segmentation of 2D images here, in the often challenging task of 3D segmentation, the ability to impose a topological loss function could be of significant use as the number of connected components, handles, and cavities may be specified. Our future work will investigate this generalisation and its utility in challenging tasks such as the 3D segmentation of cardiac MRI volumes of subjects with congenital conditions causing atypical connections between chambers of the heart. We will also investigate extending our approach to incorporate first learning the topology of a structure from the image, and then incorporating that knowledge into the segmentation, which would allow our approach to be applicable to cases such as cell segmentation where the number of components in the desired segmentation is not known a priori but can be deduced from the image.
In our experiments we found that setting λ = 1 meant that our method had no significant difference in Dice score to the other methods but an improved topological accuracy. As seen in table 1 a higher λ results in even better performance according to the segmentation topology, but pixelwise accuracy begins to drop. We found that changing λ allows one to trade off the pixelwise and topological accuracies and in future work we will also investigate the extent to which this trade-off can be managed so as to learn the optimal value of λ for a given objective.
The dominant computational cost in our method is the repeated PH calculation which occurs k times when calculating the pixelwise gradients for each image. Computing the PH for a cubical complex containing V pixels/voxels in d dimensions can be achieved in Θ(3^d V + d·2^d V) time and Θ(d·2^d V) memory (see [20]), and so scales linearly with respect to the number of pixels/voxels in an image. We found that, using 64 × 64 pixel 2D images, the PH for one image was calculated in approximately 0.01s on a desktop PC. Consequently, when using k = 5 and a batch of 100 images for semi-supervised learning, one batch took about 5s to process. On large 3D volumes this cost could become prohibitive. However, the implementation of PH that we use is not optimised for our task and our algorithm allows for parallel computation of the PH of each predicted segmentation in the batch of semi-supervised images. With a GPU implementation for calculating the PH of a cubical complex, many parallel calculations could allow for significant improvements in overall run-time.
Conclusions
We have presented the first work to incorporate explicit topological priors into deep-learning based image segmentation, and demonstrated our approach in the 2D case using cardiac MRI data. We found that including prior information about the segmentation topology in a semi-supervised setting improved performance in terms of topological correctness on a challenging segmentation task with small amounts of labelled data. | 3,606 |
1901.10254 | 2911594054 | Multi-model fitting has been extensively studied from the random sampling and clustering perspectives. Most assume that only a single type (class) of model is present, and their generalizations to fitting multiple types of model structures simultaneously are non-trivial. The inherent challenges include the choice of types and numbers of models, sampling imbalance and parameter tuning, all of which render conventional approaches ineffective. In this work, we formulate the multi-model multi-type fitting problem as one of learning a deep feature embedding that is clustering-friendly. In other words, points of the same cluster are embedded closer together through the network. For inference, we apply K-means to cluster the data in the embedded feature space, and model selection is enabled by analyzing the K-means residuals. Experiments are carried out on both synthetic and real world multi-type fitting datasets, producing state-of-the-art results. Comparisons are also made on single-type multi-model fitting tasks with promising results as well. | Using deep learning to solve geometric model fitting has received growing consideration. The dense approaches start from raw image pairs to estimate models such as homography @cite_28 or non-rigid transformation @cite_53. @cite_36 proposed to estimate the camera pose directly from image sequences. | {
"abstract": [
"We present a deep convolutional neural network for estimating the relative homography between a pair of images. Our feed-forward network has 10 layers, takes two stacked grayscale images as input, and produces an 8 degree of freedom homography which can be used to map the pixels from the first image to the second. We present two convolutional neural network architectures for HomographyNet: a regression network which directly estimates the real-valued homography parameters, and a classification network which produces a distribution over quantized homographies. We use a 4-point homography parameterization which maps the four corners from one image into the second image. Our networks are trained in an end-to-end fashion using warped MS-COCO images. Our approach works without the need for separate local feature detection and transformation estimation stages. Our deep models are compared to a traditional homography estimator based on ORB features and we highlight the scenarios where HomographyNet outperforms the traditional technique. We also describe a variety of applications powered by deep homography estimation, thus showcasing the flexibility of a deep learning approach.",
"We address the problem of determining correspondences between two images in agreement with a geometric model such as an affine or thin-plate spline transformation, and estimating its parameters. The contributions of this work are three-fold. First, we propose a convolutional neural network architecture for geometric matching. The architecture is based on three main components that mimic the standard steps of feature extraction, matching and simultaneous inlier detection and model parameter estimation, while being trainable end-to-end. Second, we demonstrate that the network parameters can be trained from synthetically generated imagery without the need for manual annotation and that our matching layer significantly increases generalization capabilities to never seen before images. Finally, we show that the same model can perform both instance-level and category-level matching giving state-of-the-art results on the challenging Proposal Flow dataset.",
"This paper presents a convolutional neural network based approach for estimating the relative pose between two cameras. The proposed network takes RGB images from both cameras as input and directly produces the relative rotation and translation as output. The system is trained in an end-to-end manner utilising transfer learning from a large scale classification dataset. The introduced approach is compared with widely used local feature based methods (SURF, ORB) and the results indicate a clear improvement over the baseline. In addition, a variant of the proposed architecture containing a spatial pyramid pooling (SPP) layer is evaluated and shown to further improve the performance."
],
"cite_N": [
"@cite_28",
"@cite_53",
"@cite_36"
],
"mid": [
"2439114332",
"2604233003",
"2592936284"
]
} | Learning for Multi-Model and Multi-Type Fitting | Multi-model fitting has been a key problem in computer vision for decades. It aims to discover multiple independent structures, e.g. lines, circles, rigid motions, etc, often in the presence of noise. Here, by multi-model, we mean there are multiple models of a specific type, e.g. lines only. If in addition, there is a mixture of types (e.g. both lines and circles), we specifically term the problem as multi-model multi-type.
Various attempts towards solving the multi-model clustering problem have been made. The early works tend to be based on extensions of RANSAC [9] to the multi-model setting, e.g. simply running RANSAC multiple times consecutively [47,49]. More recent works in this approach involve analyzing the interplay between data and hypotheses. J-Linkage [46], its variant T-Linkage [30] and ORK [3,4] rely on extensively sampling hypothesis models and compute the residual of data to each hypothesis. Either clustering is carried out on the mapping induced by the residu-als, or an energy minimization is performed on the point to model distance, and various regularization terms (e.g. the label count penalty [25] and spatial smoothness (PEaRL) [17]). Another class of approach involves direct analytic expressions characterizing the underlying subspaces, e.g., the powerful self-expressiveness assumption has inspired various elegant methods [8,28,24,18].
Despite the considerable development of multi-model fitting techniques in the past two decades, there are still major lacuna in the problem. First of all, in contrast with having multiple instances of the same type/class, many real world model fitting problem consists of data sampled from multiple types of models. Fig. 1 shows both a toy example of line, circle and ellipses co-existing together, and a realistic motion segmentation scenario, where the appropriate model to fit the foreground object motions (or even the background) can waver between affine motions, homography, and fundamental matrix [55] with no clear division. With few exceptions [1,43,47], none of the aforementioned works have considered this realistic scenario. Even if one attempts to fit multiple types of model sequentially like in [43], it is non-trivial to decide the type when the dichotomy of the models is unclear in the first place. Secondly, for problems where there are a significant number of models, the hypothesis-and-test approach is often overwhelmed by sampling imbalance, i.e., points from the same subspace represent only a minority, rendering the probability of hitting upon the correct hypothesis very small. This problem becomes severe when a large number of data samples are required for hypothesizing a model (e.g., eight points are needed for a linear estimation of the fundamental matrix and 5 points for fitting an ellipse). Lastly, for optimal performance, there is inevitably a lot of manipulation of parameters needed, among which the most sensitive include those for deciding what constitutes an inlier for a model [30,31], for sparsifying the affinity matrices [22,55], and for selecting the model type [47]. Often, dataset-specific tuning is required, with very little theory to guide the tuning.
There has been some recent foray into deep learning as a means to learn geometric model, e.g. camera pose [2] and essential matrix [59] from feature correspondences, but extending such deep geometric model fitting approach to the multi-model and multi-type scenario has not been attempted. Generalizing the deep learning counterparts of RANSAC to multi-model fitting is not trivial due to the same reason as conventional sequential approaches. Furthermore, in many geometric model fitting problems, there are often significant overlap between the subspaces occupied by the multiple model instances (e.g. in motion segmentation, both the foreground and the background contain the camera-induced motion). We want the network to learn the best representation so that the different model instances can be well-separated. This is in contrast to the traditional clustering approaches where hand-crafted design of the similarity metric is needed.When there are no clear division between multiple types of models (e.g. the transitions from a circle to an ellipse), the network would also need to learn the appropriate preference from the labelled examples in the training data.
Another open challenge in multi-model fitting is to automatically determine the number of models, also referred to as model selection in the literature [45,3,27,22]. Traditional methods proceed from statistical analysis of the residual of the clustering [45,39]. Other methods approach from various heuristic standpoints including analyzing eigen values [60,51], over-segment and merge [27,22], soft thresholding [28] or adding penalty terms [26]. Most of the above works cannot deal with mixed-types in the models. To redress this gap in the literature, we want our network to learn good feature representations so that the number of clusters, even in the presence of mixed types, can be readily estimated.
With the above objectives in mind, we propose a multimodel multi-type fitting network. The network is given labelled data (inlier points for each model and outliers) and is supposed to learn the various geometric models in a completely data-driven manner. Since the input to the network is often not regular grid data like images, we use what we called the CorresNet from [59] as a backbone (see Fig. 2).
As the output of network should be amenable for grouping into the respective, possibly mixed models, and invariant to any permutation of model indices among the multiple instances of the same class in the training data, we consider both an existing metric learning loss and its variant and propose a new distribution aware loss, the latter based on Fisher linear discriminant analysis (LDA). In the testing phase, standard K-means clustering is applied to the feature embeddings to obtain a discrete cluster assignment. As feature points are embedded in a clustering friendly way, we can just look into the K-means fitting residual to estimate the number of models should it be unknown.
Methodology
In this section, we first explain the training process of our multi-model multi-type fitting network. We then introduce existing metric learning loss and our MaxInterMinIntra loss.
Figure 2: Our multi-model multi-type fitting network. We adopt the same cascaded CorresNet blocks as [59]. The metric learning loss is defined to learn good feature representations.
Network Architecture
We denote the input sparse data with N points as X = {x_i}_{i=1···N} ∈ R^{D×N}, where each individual point is x_i ∈ R^D. The input sparse data could be geometric shapes, feature correspondences in two frames or feature trajectories in multiple frames. We further denote the one-hot key encoded labels accompanying the input data as Y = {y_i} ∈ {0, 1}^{K×N}, where y_i ∈ {0, 1}^K and K is the number of clusters or partitions of the input data.
Cascaded multi-layer perceptrons (MLPs) have been used to learn feature representations from generic point input [35,59]. We adopt a backbone network similar to CorresNet [59], shown in Fig. 2 (alternative sparse data networks, e.g. PointNet [35], are applicable as well). The output embedding of the CorresNet is denoted as Z = {f(X; Θ)} ∈ R^{K×N}.
To make the output Z clustering-friendly, we apply a differentiable, clustering-specific loss function L(Z, Y), measuring the match of the output feature representation with the ground-truth labels. The problem now becomes that of learning a CorresNet backbone f (X; Θ) that minimizes the loss L(Z, Y; Θ).
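A minimal sketch of such a backbone is given below; it follows the per-point shared-MLP idea of CorresNet, but the width, depth and the simple context normalisation used here are illustrative simplifications rather than the exact architecture of [59]:

import torch
import torch.nn as nn

class PointBlock(nn.Module):
    """Shared per-point MLP block with a simple context normalisation
    (zero mean / unit variance across the points of each instance)."""
    def __init__(self, channels=128):
        super().__init__()
        self.conv = nn.Conv1d(channels, channels, kernel_size=1)
        self.bn = nn.BatchNorm1d(channels)

    def forward(self, x):                      # x: (B, C, N)
        x = self.conv(x)
        x = (x - x.mean(dim=2, keepdim=True)) / (x.std(dim=2, keepdim=True) + 1e-5)
        return torch.relu(self.bn(x))

class Backbone(nn.Module):
    """Maps D-dimensional input points to K-dimensional, L2-normalised embeddings."""
    def __init__(self, in_dim, embed_dim, width=128, depth=6):
        super().__init__()
        self.inp = nn.Conv1d(in_dim, width, kernel_size=1)
        self.blocks = nn.Sequential(*[PointBlock(width) for _ in range(depth)])
        self.out = nn.Conv1d(width, embed_dim, kernel_size=1)

    def forward(self, x):                      # x: (B, D, N)
        z = self.out(self.blocks(self.inp(x)))
        return nn.functional.normalize(z, dim=1)   # unit-norm embeddings, (B, K, N)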
Clustering Loss
We expect our clustering loss function to have the following characteristics. First, it should be invariant to permutation of models, i.e. the order of the models is exchangeable. Second, the loss must be adaptable to a varying number of groups. Lastly, the loss should enable good separation of data points into clusters. We consider the following loss functions. L2Regression Loss: Given the ground-truth labels Y and the output embeddings Z = f(X; Θ), the ideal and reconstructed affinity matrices are respectively,
$K = Y^\top Y, \quad \hat{K} = Z^\top Z \quad (1)$
The training objective is to minimize the difference between $K$ and $\hat{K}$ measured by the element-wise L2 distance [14]:
$L(\Theta) = \|K - \hat{K}\|_F^2 = \|Y^\top Y - Z^\top Z\|_F^2 = \|f(X;\Theta)^\top f(X;\Theta)\|_F^2 - 2\|f(X;\Theta)\,Y^\top\|_F^2 \quad (2)$
The above L2 Regression loss is obviously differentiable w.r.t. $f(X; \Theta)$. Since the output embedding $Z$ is L2-normalized, the inner product between two point representations satisfies $z_i^\top z_j \in [-1, 1]$.
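For reference, here is a minimal PyTorch sketch of the L2 Regression loss of Eq. (2), assuming Z holds the embeddings column-wise and Y the one-hot labels as defined above:

```python
# Sketch of the L2 Regression loss of Eq. (2): match the reconstructed
# affinity Z^T Z to the ideal affinity Y^T Y in the Frobenius norm.
import torch
import torch.nn.functional as F

def l2_regression_loss(Z, Y):
    """Z: (d, N) embeddings, Y: (K, N) one-hot labels."""
    Z = F.normalize(Z, dim=0)
    K_ideal = Y.t() @ Y          # (N, N): 1 if two points share a cluster, else 0
    K_hat = Z.t() @ Z            # (N, N): inner products in [-1, 1]
    return ((K_ideal - K_hat) ** 2).sum()   # squared Frobenius norm
```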
Cross-Entropy Loss: As an alternative to the L2 distance, one could measure the discrepancy between $K$ and $\hat{K}$ as a KL-divergence. Since $D_{kl}(K \,\|\, S(\hat{K})) = H(K, S(\hat{K})) - H(K)$, where $H(\cdot)$ is the entropy function and $S(\cdot)$ is the sigmoid function, with fixed $K$ we simply need to minimize the cross-entropy $H(K, S(\hat{K}))$, which yields the following element-wise cross-entropy loss:
$L(\Theta) = \sum_{i,j} H\big(y_i^\top y_j,\, S(z_i^\top z_j)\big) = \sum_{i,j} H\big(y_i^\top y_j,\, S(f(x_i;\Theta)^\top f(x_j;\Theta))\big) \quad (3)$
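A corresponding sketch of the element-wise cross-entropy loss of Eq. (3), under the same shape assumptions; the sigmoid and per-entry cross-entropy are fused into a single numerically stable call:

```python
# Sketch of the element-wise cross-entropy loss of Eq. (3): each entry of the
# ideal affinity Y^T Y is a binary target for the sigmoid of z_i^T z_j.
import torch
import torch.nn.functional as F

def pairwise_cross_entropy_loss(Z, Y):
    """Z: (d, N) embeddings, Y: (K, N) one-hot labels."""
    Z = F.normalize(Z, dim=0)
    target = (Y.t() @ Y).float()                 # (N, N) binary same-cluster matrix
    logits = Z.t() @ Z                           # (N, N) inner products
    # H(y_i^T y_j, sigmoid(z_i^T z_j)) summed over all pairs
    return F.binary_cross_entropy_with_logits(logits, target, reduction='sum')
```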
1 Alternative sparse data networks, e.g. PointNet [35], are applicable as well.
Figure 3: Illustration of the MaxInterMinIntra loss for point representation metric learning. The objective considers the minimal distance $\min_{m,n} \|\mu_m - \mu_n\|_2^2$ between clusters and the maximal scatter $\max_l s_l$ within clusters. (Left panel: input point representations; right panel: embedded point representations $z = f(x;\Theta)$, annotated with cluster means $\mu_1,\ldots,\mu_3$ and scatters $s_1,\ldots,s_3$.)
The cross-entropy loss is more likely to push points $i$ and $j$ of the same cluster together faster than the L2 Regression loss, i.e. inner product $z_i^\top z_j \to 1$, and those of different clusters apart, i.e. inner product $z_i^\top z_j \to -1$. MaxInterMinIntra Loss: Both of the above losses consider the pairwise relation between points; the overall point distribution in the output embedding is not explicitly considered. We now propose a new loss which takes a more global view of the point distribution rather than just the pairwise relations. Specifically, we are inspired by the classical Fisher LDA [10]. LDA discovers a linear mapping $z = w^\top x$ that maximizes the distance between class centers/means $\mu_i = \frac{1}{N_i}\sum_{j\in C_i} z_j$ and minimizes the scatter/variance within each class $s_i = \sum_{j\in C_i} (z_j - \mu_i)^2$. Formally, the objective for a two-class problem is written as,
$J(w) = \dfrac{|\mu_1 - \mu_2|^2}{s_1^2 + s_2^2} \quad (4)$
which is to be maximized over $w$. For linearly non-separable problems, one has to design a kernel function to map the input features before applying the LDA objective. Equipped now with more powerful nonlinear mapping networks, we adapt the LDA objective (for the multi-class scenario) to perform these mappings automatically, as below:
$J(\Theta) = \dfrac{\min_{m,n\in\{1\cdots K\},\, m\neq n} \|\mu_m - \mu_n\|_2^2}{\max_{l\in\{1\cdots K\}} s_l} \quad (5)$
where $\mu_m = \frac{1}{|C_m|}\sum_{i\in C_m} z_i$, $s_l = \sum_{i\in C_l} \|z_i - \mu_l\|_2^2$, and $C_l$ denotes the set of points belonging to cluster $l$. We use the extrema of the inter-cluster distances and intra-cluster scatters (see Fig. 3) so that the worst case is explicitly optimized. Hence, we term the loss MaxInterMinIntra (MIMI). By applying a log operation to the objective, we arrive at the following loss function to be minimized:
$L(\Theta) = -\log \min_{m,n} \|\mu_m - \mu_n\|_2^2 + \log \max_l s_l \quad (6)$
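The following is a minimal PyTorch sketch of the MIMI loss of Eq. (6); cluster means and scatters are computed from the ground-truth assignment, and every cluster is assumed to be non-empty. Autograd then supplies the gradient of Eq. (7) without implementing it by hand.

```python
# Sketch of the MaxInterMinIntra (MIMI) loss of Eq. (6). The min/max over
# clusters are computed on the embedded points; autograd provides the
# gradient used during training.
import torch
import torch.nn.functional as F

def mimi_loss(Z, labels, K):
    """Z: (d, N) embeddings, labels: (N,) integer cluster ids in [0, K)."""
    Z = F.normalize(Z, dim=0)
    means, scatters = [], []
    for c in range(K):                               # assumes each cluster is non-empty
        Zc = Z[:, labels == c]
        mu = Zc.mean(dim=1)
        means.append(mu)
        scatters.append(((Zc - mu.unsqueeze(1)) ** 2).sum())
    mu = torch.stack(means)                          # (K, d) cluster centers
    d2 = torch.cdist(mu, mu) ** 2                    # pairwise ||mu_m - mu_n||^2
    d2 = d2 + torch.eye(K) * 1e9                     # mask the diagonal
    min_inter = d2.min()                             # closest pair of clusters
    max_intra = torch.stack(scatters).max()          # largest within-cluster scatter
    return -torch.log(min_inter) + torch.log(max_intra)
```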
One can easily verify that the MaxInterMinIntra loss is differentiable w.r.t. $z_i$; we give the gradient in Eq. (7). Optimization: The Adam optimizer [21] is used to minimize the loss $L(\Theta)$. The learning rate is fixed at 1e-4 and the mini-batch is one frame pair or sequence. The mini-batch size cannot exceed one because the number of points/correspondences is not uniform across different sequences. For all tasks, we train the network for 300 epochs.
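A sketch of this training procedure, reusing the hypothetical EmbeddingNet and mimi_loss from the sketches above; train_set is an assumed list of (points, labels, K) tuples, one sequence per step:

```python
# Sketch of the training loop described above: Adam with a fixed learning
# rate of 1e-4, a "mini-batch" of one sequence (point counts vary across
# sequences), and 300 epochs.
import torch

def train(model, train_set, epochs=300, lr=1e-4):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for epoch in range(epochs):
        for X, labels, K in train_set:             # one frame pair / sequence at a time
            Z = model(X.unsqueeze(0)).squeeze(0)   # (d, N) embeddings
            loss = mimi_loss(Z, labels, K)         # or the L2 / cross-entropy losses
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```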
Inference
During testing, we apply standard K-means to the output embeddings $\{z_j\}_{j=1\cdots N_{te}}$. This step is applicable to both multi-model and multi-type fitting problems, as we do not need to explicitly specify the type of model to fit. Finally, when the number of models $K$ is unknown, we propose to analyze the K-means residual,
$r(K) = \sum_{m=1}^{K} \sum_{i\in C_m} \|z_i - \mu_m\|_2^2 \quad (8)$
A good estimate of $K$ often yields a low $r(K)$, and further increasing $K$ does not significantly reduce $r(K)$. We therefore find the $K$ at the 'elbow' position. We adopt two off-the-shelf approaches for this purpose, the second-order difference (SOD) [61] and silhouette analysis [39]. Both are parameter-free.
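A possible implementation of this inference and model selection step, using scikit-learn's K-means (whose inertia_ equals r(K) in Eq. (8)); the exact SOD criterion of [61] may differ from the simple discrete second difference used here:

```python
# Sketch of inference and model selection: K-means on the embedded points,
# the residual r(K) of Eq. (8), and two parameter-free elbow heuristics.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def estimate_models(Z, k_max=7):
    """Z: (N, d) embedded points; returns labels and two estimates of K."""
    residual, sil = {}, {}
    for k in range(2, k_max + 1):
        km = KMeans(n_clusters=k, n_init=10).fit(Z)
        residual[k] = km.inertia_                  # r(K) of Eq. (8)
        sil[k] = silhouette_score(Z, km.labels_)
    ks = sorted(residual)
    r = np.array([residual[k] for k in ks])
    sod = r[:-2] - 2 * r[1:-1] + r[2:]             # discrete second-order difference
    k_sod = ks[1 + int(np.argmax(sod))]            # elbow of the residual curve
    k_sil = max(sil, key=sil.get)                  # best silhouette score
    labels = KMeans(n_clusters=k_sod, n_init=10).fit_predict(Z)
    return labels, k_sod, k_sil
```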
Experiment
We demonstrate the performance of our network on both synthetic and real-world data, with extensive comparisons against traditional geometric model fitting algorithms. Our focus is on the multi-type setting (the first two experiments, on LCE and KT3DMoSeg), but we also carry out experiments on the pure multi-model scenario (LCE-Unmixed and Adelaide RMF).
Datasets
Synthesized Lines, Circles and Ellipses (LCE): Fitting ellipses has been a fundamental problem in computer vision [11]. We synthesize for each sample four conic curves of different types in a 2D space, specifically, one straight line, two ellipses and one circle. We randomly generate 8,000 training samples, 200 validation samples and 200 testing samples. Each point is perturbed by adding Gaussian noise with σ = 0.05.
KT3DMoSeg [55]: This benchmark was created based upon the KITTI self-driving dataset [12], with 22 sequences in total. Each sequence contains two to five rigid motions. As analyzed by [55], the geometric model for each individual motion can range from an affine transformation, a homography, to a fundamental matrix, with no clear dividing line between them. We evaluate this benchmark to demonstrate our network's ability to tackle multi-model multi-type fitting. For fair comparison with all existing approaches, we only crop the first 5 frames of each sequence for evaluation, so that broken trajectories do not give undue advantage to certain methods.
$\nabla_\Theta L(\Theta) = -\sum_{i\in C_m} \frac{\frac{1}{|C_m|^2}\big(2 z_i + \sum_{j\in C_m, j\neq i} z_j\big) - \frac{1}{|C_m||C_n|}\sum_{j\in C_n} z_j}{\|\mu_m - \mu_n\|_2^2}\, \nabla_\Theta f(x_i;\Theta) - \sum_{j\in C_n} \frac{\frac{1}{|C_n|^2}\big(2 z_j + \sum_{i\in C_n, i\neq j} z_i\big) - \frac{1}{|C_n||C_m|}\sum_{k\in C_m} z_k}{\|\mu_m - \mu_n\|_2^2}\, \nabla_\Theta f(x_j;\Theta) + \alpha \sum_{k\in C_l} \Big(2 z_k - \frac{1}{|C_l|}\big(2 z_k + \sum_{j\in C_l, j\neq k} z_j\big) + \frac{1}{|C_l|^2}\big(2 z_k + 2\sum_{j\in C_l, j\neq k} z_j\big)\Big)\, \nabla_\Theta f(x_k;\Theta) \quad (7)$
Synthesized Lines, Circles and Ellipses Unmixed (LCE-Unmixed): To demonstrate the ability of our network on single-type multi-model fitting, we also randomly generate in each sample a single class of conic curves in 2D space (lines, circles, or ellipses), but with multiple instances (2-4) of them. The numbers of training, validation and testing samples are the same as in the multi-type LCE setting. The same perturbation as in LCE is applied here.
Adelaide RMF Dataset [52]: This dataset consists of 38 frame pairs, of which half are designed for multi-model fitting (the model being homographies induced by planes). The number of planes is between two and seven. The other 19 frame pairs are designed for two-view motion segmentation. It is nominally a single-type multiple-fundamental-matrix fitting problem and has been treated as such by the community. While we put the results under the single-type category, we hasten to add that there might indeed be degeneracies, i.e. near-planar rigid objects (and hence mixed types), present in this dataset, no matter how minor. The number of motions is between one and five.
Multi-Type Curve Fitting
The multiple types in this curve fitting task comprise lines, circles, and ellipses in the LCE dataset. Note that there is no clear dividing boundary between them, as they can all be explained by the general conic equation (with the special cases of lines and circles obtained by setting some coefficients to 0):
$Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0 \quad (9)$
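As an illustration, the following NumPy sketch generates one LCE-style sample: each structure is a special case of Eq. (9) (a line has A = B = C = 0, a circle has B = 0 and A = C), and every point is perturbed with Gaussian noise of σ = 0.05; the specific centers, radii and orientations are arbitrary choices.

```python
# Sketch of LCE-style data synthesis: one line, one circle, two ellipses,
# each a special case of the general conic, with Gaussian noise sigma = 0.05.
import numpy as np

rng = np.random.default_rng(0)

def sample_ellipse(cx, cy, a, b, theta, n=100, sigma=0.05):
    t = rng.uniform(0, 2 * np.pi, n)
    x, y = a * np.cos(t), b * np.sin(t)                 # canonical ellipse
    c, s = np.cos(theta), np.sin(theta)
    pts = np.stack([cx + c * x - s * y, cy + s * x + c * y], axis=1)
    return pts + rng.normal(0, sigma, pts.shape)

def sample_circle(cx, cy, r, n=100, sigma=0.05):
    return sample_ellipse(cx, cy, r, r, 0.0, n, sigma)  # B = 0, A = C in Eq. (9)

def sample_line(p0, p1, n=100, sigma=0.05):
    t = rng.uniform(0, 1, (n, 1))                       # A = B = C = 0 in Eq. (9)
    pts = (1 - t) * np.asarray(p0) + t * np.asarray(p1)
    return pts + rng.normal(0, sigma, pts.shape)

# one LCE sample: one straight line, one circle, two ellipses
X = np.concatenate([
    sample_line((-1, -1), (1, 1)),
    sample_circle(0.3, -0.2, 0.6),
    sample_ellipse(-0.4, 0.4, 0.8, 0.3, 0.5),
    sample_ellipse(0.5, 0.5, 0.7, 0.4, -0.8),
])
```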
There are two ways to adapt the traditional multi-model methods to this multi-type setting. One approach is to formulate the multi-type fitting problem as fitting multiple models parameterized by the same conic equation in Eq. (9). This approach is termed HighOrder (H.O.) fitting. Alternatively, one could sequentially fit three types of models, which is termed Sequential (Seq.) fitting. For ellipse-specific fitting, the direct least squares approach [11] is adopted. For our model, we evaluate the various metric learning losses introduced in Section 3.2 and present the results in Tab. 1. The results are reported with the optimal setting determined by the validation set. We evaluate the performance by two clustering metrics, Classification Error Rate (Error Rate), i.e. the best classification result subject to permutation of the clustering labels, and Normalized Mutual Information (NMI). Comparisons are made with state-of-the-art multi-model fitting algorithms including T-Linkage [30], RPA [31] and RansaCov [32]. We notice that T-Linkage returns extremely over-segmented results in the sequential setting, e.g. more than 10 lines, making classification error evaluation intractable, as it involves finding the label permutation with the lowest error rate. For our model, we evaluate the three loss variants, the L2 Regression loss (L2), Cross-Entropy loss (CE) and MaxInterMinIntra loss (MIMI). We make the following observations about the results. First, all our metric learning variants outperform the HighOrder and Sequential multi-type fitting approaches. Second, the all-encompassing model used in the HighOrder approach suffers from ill-conditioning when fitting simpler models. Thus, its performance is much inferior to that of Sequential fitting. However, it is worth noting that despite the Sequential approach being given the strong a priori knowledge of both the model type and the number of models for each type, its performance is still significantly worse than ours.
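For clarity, the two metrics can be computed as follows; the best label permutation is found with the Hungarian algorithm on the label co-occurrence matrix:

```python
# Sketch of the evaluation metrics: classification error under the best
# permutation of cluster labels and normalized mutual information.
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import normalized_mutual_info_score

def classification_error(pred, gt):
    """pred, gt: (N,) integer labels; returns error rate in [0, 1]."""
    k = int(max(pred.max(), gt.max())) + 1
    cost = np.zeros((k, k))
    for p, g in zip(pred, gt):
        cost[p, g] += 1                          # co-occurrence counts
    row, col = linear_sum_assignment(-cost)      # best label permutation
    correct = cost[row, col].sum()
    return 1.0 - correct / len(gt)

def nmi(pred, gt):
    return normalized_mutual_info_score(gt, pred)
```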
For qualitative comparison, we visualize the ground truth and the segmentation results of each method in Fig. 4. Our clustering results on the bottom row show success in discovering all individual shapes, with mistakes made only at the intersections of individual structures. Though good at separating straight lines, RPA failed to discover ellipses, as sampling all 5 inliers amidst the large number of outliers and fitting an ellipse from even 5 correct support points with noise (noise in the coordinates) are both very difficult, the latter
Multi-Type Motion Segmentation
The KT3DMoSeg benchmark [55] is put forth for the task of motion segmentation. Each sequence often consists of a background whose motion can in general be explained by a fundamental matrix, while the models for the foreground motions can sometimes be ambiguous due to the limited spatial extent of the objects, thus giving rise to mixed types of models. For example, in Fig. 5, the vehicles in 'Seq009 Clip01' and 'Seq028 Clip03' can be roughly explained by an affine transformation or a homography, while the oil tanker in 'Seq095 Clip01' should be modeled by a fundamental matrix. Even the background motion can be ambiguous to model when the background is dominated by a plane; for instance, the quasi-planar row of trees on the right side of the road in 'Seq028 Clip03' is likely to lead to degeneracies in the fundamental matrix estimation and thus cause errors in the traditional method (second row). For this dataset, we use the first five frames of each sequence for fair comparison and apply leave-one-out cross-validation, i.e. repeatedly training on 21 sequences and testing on the left-out sequence; we dub this the 'Vanilla' setting. Each sequence has between 10 and 20 frames, so we could further increase the training data by augmenting with all the remaining five-frame clips from each sequence with no overlap; this is termed the 'Augment' setting. The testing clips (the first five frames of each sequence) are kept the same for both settings. We compare with the subspace clustering approaches GPCA [48], LSA [56], ALC [37], LRR [28], MSMC [7] and SSC [8], and the multi-view clustering (MVC) methods in [55]. Results are presented in Tab. 2.
We make the following observations about the results. Our vanilla leave-one-out approach achieved very competitive performance on all 22 sequences in KT3DMoSeg. In the 'Augment' setting, our approach even outperforms the state-of-the-art multi-view clustering approaches (MVC) [55]. Of all benchmark methods, only MVC has considered the multi-type fitting issue. However, the multi-view fusion proposed therein still does not guarantee that each rigid motion is explained by the correct model. Furthermore, we notice that our proposed MIMI metric is comparable to both the L2 Regression and cross-entropy losses and gives even lower error when augmented with additional data. This suggests that optimizing the distribution of the embedded features with a clustering-specific loss is effective.
Finally, we present qualitative comparison between the results of MVC and ours in Fig. 5. Not only is the proposed network capable of correctly segmenting the aforementioned degenerate motions, it surpasses our expectations in how it performs in 'Seq009 Clip01'. Here the independently moving car (the yellow group in the ground truth image) has a flow field that is consistent with the epipolar constraint associated with the background motion (due to them both translating in the same direction) [55]. Without resorting to reconstructing the depth of the car, it would be impossible to separate it from the background. However, criteria involving depth would be very unwieldy to specify analytically in the existing approaches. Here, without having any preconceived notion of the geometrical model, our network seems to have learnt the requisite criteria to separate the independent motion.
Multi-Model Fitting
In this section, we further demonstrate the ability of our network to handle conventional (i.e., single-type) multi-model fitting problems. Synthetic Multi-Model Fitting: In this experiment, we evaluate multi-model fitting of a single type (the type being line, circle or ellipse). We adopt a similar training and testing split as in the synthetic LCE task, i.e. 8,000 training samples and 200 testing samples, and compare with RPA [31]. The results are presented in Fig. 6. We conclude from the figure that, first, our multi-model network performs comparably with RPA on the multi-line segmentation task while outperforming RPA by a large margin on the more challenging multi-circle and multi-ellipse segmentation tasks. Moreover, the performance drops sharply (higher error) from multi-line (blue) to multi-ellipse (green) fitting for RPA, with the drop getting more acute as the number of models increases. This suggests that the increasing size of the minimal support set (2 points for a line, 3 points for a circle and 5 points for an ellipse) introduces a great challenge for the RANSAC-based approaches due to sampling imbalance. Hitting the true model becomes very difficult for models with a larger support set and a higher noise level.
Figure 5: Qualitative comparison on 4 sequences from KT3DMoSeg (Seq009_Clip01, Seq028_Clip03, Seq095_Clip01, Seq005_Clip01). The first row is the ground truth. The second and third rows are the results of Multi-View Clustering [55] and our multi-type network, respectively. The last row shows the point feature embeddings before and after learning.
It is evident that our multi-model network is less sensitive to the complexity of the model, as the drop in performance (purple and cyan bars) is less significant. Fig. 6 thus demonstrates that our deep learning approach is better able to deal with sampling imbalance, probably by picking up and leveraging the additional regularity in the way the points are distributed. Two-View Multi-Model Fitting: Finally, we evaluate the multi-model fitting task on the Adelaide RMF dataset [52]. For both the multi-planar and motion segmentation tasks, we carry out leave-one-out cross-validation. For fair comparison, we report the classification error rate (Error Rate). The state-of-the-art models being compared include J-Linkage [46], T-Linkage [30], RPA [31], RCMSA [34] and ILP-RansaCov [32]. The comparisons are presented in Tab. 3. We observe that our multi-model network gives very competitive results on both the multi-planar and motion segmentation tasks. For the former task, our proposed MaxInterMinIntra (MIMI) loss yields 17.33%, which is better than many benchmark models. For the motion segmentation task, our model with the L2 Regression loss gives a mean error of 8.98%. We note this performance is achieved by training on only a very small amount of data (18 sequences) and without any dataset-specific parameter tuning. We further note that here, without the problems posed by mixed types, the traditional methods are able to reap the benefits of the given geometrical models (an advantage compared to our method, which does not have any preconceived model).
Further Study
In this section, we first further analyze the impact of metric learning on transforming the point feature representations. We then present results on model selection and finally conduct an ablation study for the proposed MaxInterMinIntra loss.
Feature Embedding: To gain some insight into how the learned feature representations are more clustering-friendly, we provide a direct visualization of the representations. For that purpose, we use t-SNE [29] to project both the KT3DMoSeg raw feature points (of dimension ten for 5 frames) and the network output embeddings to a 2-dimensional space. Three example sequences are presented in the last row of Fig. 5. We conclude from the figure that: (i) the original feature points are hard to group correctly with K-means; and (ii) after our network embedding, feature points are more likely to be grouped according to the respective motions, regardless of the underlying types of motions.
Model Selection: As can be seen from Fig. 5, the point distribution in the learned feature embedding is amenable to model selection (estimating the number of clusters/motions). We evaluate both the Second Order Difference (SOD) [61] and Silhouette Analysis (Silh.) [39] to estimate the number of motions. We also compare with alternative subspace clustering approaches with built-in model selection, namely LRR [28] and MSMC [7], and additionally apply self-tuning spectral clustering (S.T.) [60] to the affinity matrix obtained in MVC [55]. Performance is evaluated in terms of mean classification error (Err.) and correct rate (Corr.), i.e. the percentage of samples/sequences with a correctly estimated number of clusters (higher is better). Comparisons are presented in Tab. 4. Thanks to the deep feature learning, both SOD and Silh. applied to our method give strong performance even though they are very simple heuristics.
Dimension of Output Embedding: We investigate the impact of the dimension of the output embedding z on the performance of multi-model/type fitting. Here, we vary the embedding dimension from 3 to 7 for all three tasks and present the resulting error rates against the dimension in Fig. 7 (left). As we can see, the errors are relatively stable w.r.t. the output embedding dimension from 4 to 7 for all three tasks, with the optimum between 5 and 6, coinciding with the maximal number of clusters for each task (max 5 motions for KT3DMoSeg and max 4 structures for Synthetic). Thus the maximal number of clusters serves as a good heuristic for the dimension of the network output embedding.
MIMI Loss: Here we investigate the necessity of both maximizing the inter-cluster distance and minimizing the intra-cluster variance. Specifically, we compare the following variants.
(i) MaxInter: only maximizing the inter-cluster distance is considered, equivalent to the first term in Eq. (6). (ii) MinIntra: only minimizing the intra-cluster variance is considered, i.e. the second term in Eq. (6). (iii) K-means loss: we further note that the k-means loss [57] proposed for unsupervised deep clustering shares the same objective as MinIntra.
We therefore adapt the k-means loss to supervised learning with a fixed point-to-cluster assignment during training. We compare the three variants with our final MIMI loss on KT3DMoSeg and present the results in Fig. 7 (right). The MIMI loss is consistently better (lower error) than all three variants. In particular, the MinIntra and K-means losses produce large errors. This indicates that pushing points of different clusters away is vital to feature embedding for clustering.
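For completeness, the two ablated objectives can be written as follows, reusing the cluster statistics of the MIMI sketch above; MaxInter keeps only the inter-cluster term of Eq. (6) and MinIntra only the intra-cluster term:

```python
# Sketch of the ablated objectives compared in Fig. 7 (right). MinIntra shares
# its objective with the supervised k-means-style loss.
import torch
import torch.nn.functional as F

def cluster_stats(Z, labels, K):
    Z = F.normalize(Z, dim=0)
    mus = torch.stack([Z[:, labels == c].mean(dim=1) for c in range(K)])
    scat = torch.stack([((Z[:, labels == c] - mus[c].unsqueeze(1)) ** 2).sum()
                        for c in range(K)])
    d2 = torch.cdist(mus, mus) ** 2 + torch.eye(K) * 1e9
    return d2.min(), scat.max()

def max_inter_loss(Z, labels, K):        # first term of Eq. (6) only
    min_inter, _ = cluster_stats(Z, labels, K)
    return -torch.log(min_inter)

def min_intra_loss(Z, labels, K):        # second term of Eq. (6) only
    _, max_intra = cluster_stats(Z, labels, K)
    return torch.log(max_intra)
```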
Conclusion
In this work, we investigate training a deep neural network for general multi-model and multi-type fitting. We formulate the problem as learning non-linear feature embeddings that maximize the distance between points of different clusters and minimize the variance within clusters. For inference, the output features are fed into K-means to obtain the grouping. Model selection is easily achieved by analyzing the K-means residual in a parameter-free manner. Experiments are carried out on both synthetic and real geometric multi-model multi-type fitting tasks. Comparison with state-of-the-art approaches shows that our network can better deal with multiple types of models simultaneously, without any preconceived notion of the underlying model. Our method is also less sensitive to the sampling imbalance brought about by an increasing number of models, and it works well over a broad range of parameter values, without the kind of careful tuning required in conventional approaches. | 4,771 |
1901.10254 | 2911594054 | Multi-model fitting has been extensively studied from the random sampling and clustering perspectives. Most assume that only a single type class of model is present and their generalizations to fitting multiple types of models structures simultaneously are non-trivial. The inherent challenges include choice of types and numbers of models, sampling imbalance and parameter tuning, all of which render conventional approaches ineffective. In this work, we formulate the multi-model multi-type fitting problem as one of learning deep feature embedding that is clustering-friendly. In other words, points of the same clusters are embedded closer together through the network. For inference, we apply K-means to cluster the data in the embedded feature space and model selection is enabled by analyzing the K-means residuals. Experiments are carried out on both synthetic and real world multi-type fitting datasets, producing state-of-the-art results. Comparisons are also made on single-type multi-model fitting tasks with promising results as well. | In contrast to the preceding works, DSAC @cite_34 learns to extract from sparse feature correspondences some geometric models in a manner akin to RANSAC. The ability to learn representations from sparse points was also developed recently @cite_0 @cite_49 . This ability was exploited by @cite_4 to fit camera motion (essential matrix) from noisy correspondences. Despite the promising results, none of the existing works have considered generic model fitting and, more importantly, fitting data of multiple models and even multiple types. In this work, we formulate the generic multi-model multi-type fitting problem as one of learning good representations for clustering. | {
"abstract": [
"Point cloud is an important type of geometric data structure. Due to its irregular format, most researchers transform such data to regular 3D voxel grids or collections of images. This, however, renders data unnecessarily voluminous and causes issues. In this paper, we design a novel type of neural network that directly consumes point clouds, which well respects the permutation invariance of points in the input. Our network, named PointNet, provides a unified architecture for applications ranging from object classification, part segmentation, to scene semantic parsing. Though simple, PointNet is highly efficient and effective. Empirically, it shows strong performance on par or even better than state of the art. Theoretically, we provide analysis towards understanding of what the network has learnt and why the network is robust with respect to input perturbation and corruption.",
"RANSAC is an important algorithm in robust optimization and a central building block for many computer vision applications. In recent years, traditionally hand-crafted pipelines have been replaced by deep learning pipelines, which can be trained in an end-to-end fashion. However, RANSAC has so far not been used as part of such deep learning pipelines, because its hypothesis selection procedure is non-differentiable. In this work, we present two different ways to overcome this limitation. The most promising approach is inspired by reinforcement learning, namely to replace the deterministic hypothesis selection by a probabilistic selection for which we can derive the expected loss w.r.t. to all learnable parameters. We call this approach DSAC, the differentiable counterpart of RANSAC. We apply DSAC to the problem of camera localization, where deep learning has so far failed to improve on traditional approaches. We demonstrate that by directly minimizing the expected loss of the output camera poses, robustly estimated by RANSAC, we achieve an increase in accuracy. In the future, any deep learning pipeline can use DSAC as a robust optimization component.",
"We develop a deep architecture to learn to find good correspondences for wide-baseline stereo. Given a set of putative sparse matches and the camera intrinsics, we train our network in an end-to-end fashion to label the correspondences as inliers or outliers, while simultaneously using them to recover the relative pose, as encoded by the essential matrix. Our architecture is based on a multi-layer perceptron operating on pixel coordinates rather than directly on the image, and is thus simple and small. We introduce a novel normalization technique, called Context Normalization, which allows us to process each data point separately while embedding global information in it, and also makes the network invariant to the order of the correspondences. Our experiments on multiple challenging datasets demonstrate that our method is able to drastically improve the state of the art with little training data.",
"Few prior works study deep learning on point sets. PointNet is a pioneer in this direction. However, by design PointNet does not capture local structures induced by the metric space points live in, limiting its ability to recognize fine-grained patterns and generalizability to complex scenes. In this work, we introduce a hierarchical neural network that applies PointNet recursively on a nested partitioning of the input point set. By exploiting metric space distances, our network is able to learn local features with increasing contextual scales. With further observation that point sets are usually sampled with varying densities, which results in greatly decreased performance for networks trained on uniform densities, we propose novel set learning layers to adaptively combine features from multiple scales. Experiments show that our network called PointNet++ is able to learn deep point set features efficiently and robustly. In particular, results significantly better than state-of-the-art have been obtained on challenging benchmarks of 3D point clouds."
],
"cite_N": [
"@cite_0",
"@cite_34",
"@cite_4",
"@cite_49"
],
"mid": [
"2560609797",
"2556455135",
"2963674285",
"2963121255"
]
} | Learning for Multi-Model and Multi-Type Fitting | Multi-model fitting has been a key problem in computer vision for decades. It aims to discover multiple independent structures, e.g. lines, circles, rigid motions, etc, often in the presence of noise. Here, by multi-model, we mean there are multiple models of a specific type, e.g. lines only. If in addition, there is a mixture of types (e.g. both lines and circles), we specifically term the problem as multi-model multi-type.
Various attempts towards solving the multi-model clustering problem have been made. The early works tend to be based on extensions of RANSAC [9] to the multi-model setting, e.g. simply running RANSAC multiple times consecutively [47,49]. More recent works in this approach involve analyzing the interplay between data and hypotheses. J-Linkage [46], its variant T-Linkage [30] and ORK [3,4] rely on extensively sampling hypothesis models and computing the residual of the data to each hypothesis. Either clustering is carried out on the mapping induced by the residuals, or an energy minimization is performed on the point-to-model distance together with various regularization terms (e.g. the label count penalty [25] and spatial smoothness (PEaRL) [17]). Another class of approaches involves direct analytic expressions characterizing the underlying subspaces; e.g., the powerful self-expressiveness assumption has inspired various elegant methods [8,28,24,18].
Despite the considerable development of multi-model fitting techniques in the past two decades, there are still major lacuna in the problem. First of all, in contrast with having multiple instances of the same type/class, many real world model fitting problem consists of data sampled from multiple types of models. Fig. 1 shows both a toy example of line, circle and ellipses co-existing together, and a realistic motion segmentation scenario, where the appropriate model to fit the foreground object motions (or even the background) can waver between affine motions, homography, and fundamental matrix [55] with no clear division. With few exceptions [1,43,47], none of the aforementioned works have considered this realistic scenario. Even if one attempts to fit multiple types of model sequentially like in [43], it is non-trivial to decide the type when the dichotomy of the models is unclear in the first place. Secondly, for problems where there are a significant number of models, the hypothesis-and-test approach is often overwhelmed by sampling imbalance, i.e., points from the same subspace represent only a minority, rendering the probability of hitting upon the correct hypothesis very small. This problem becomes severe when a large number of data samples are required for hypothesizing a model (e.g., eight points are needed for a linear estimation of the fundamental matrix and 5 points for fitting an ellipse). Lastly, for optimal performance, there is inevitably a lot of manipulation of parameters needed, among which the most sensitive include those for deciding what constitutes an inlier for a model [30,31], for sparsifying the affinity matrices [22,55], and for selecting the model type [47]. Often, dataset-specific tuning is required, with very little theory to guide the tuning.
There has been some recent foray into deep learning as a means to learn geometric models, e.g. camera pose [2] and the essential matrix [59] from feature correspondences, but extending such deep geometric model fitting approaches to the multi-model and multi-type scenario has not been attempted. Generalizing the deep learning counterparts of RANSAC to multi-model fitting is not trivial, for the same reason as for conventional sequential approaches. Furthermore, in many geometric model fitting problems, there is often significant overlap between the subspaces occupied by the multiple model instances (e.g. in motion segmentation, both the foreground and the background contain the camera-induced motion). We want the network to learn the best representation so that the different model instances can be well-separated. This is in contrast to the traditional clustering approaches, where hand-crafted design of the similarity metric is needed. When there is no clear division between multiple types of models (e.g. the transition from a circle to an ellipse), the network would also need to learn the appropriate preference from the labelled examples in the training data.
Another open challenge in multi-model fitting is to automatically determine the number of models, also referred to as model selection in the literature [45,3,27,22]. Traditional methods proceed from statistical analysis of the residual of the clustering [45,39]. Other methods approach the problem from various heuristic standpoints, including analyzing eigenvalues [60,51], over-segmenting and merging [27,22], soft thresholding [28] or adding penalty terms [26]. Most of the above works cannot deal with mixed types in the models. To redress this gap in the literature, we want our network to learn good feature representations so that the number of clusters, even in the presence of mixed types, can be readily estimated.
With the above objectives in mind, we propose a multi-model multi-type fitting network. The network is given labelled data (inlier points for each model and outliers) and is expected to learn the various geometric models in a completely data-driven manner. Since the input to the network is often not regular grid data like images, we use what we call CorresNet from [59] as a backbone (see Fig. 2).
As the output of network should be amenable for grouping into the respective, possibly mixed models, and invariant to any permutation of model indices among the multiple instances of the same class in the training data, we consider both an existing metric learning loss and its variant and propose a new distribution aware loss, the latter based on Fisher linear discriminant analysis (LDA). In the testing phase, standard K-means clustering is applied to the feature embeddings to obtain a discrete cluster assignment. As feature points are embedded in a clustering friendly way, we can just look into the K-means fitting residual to estimate the number of models should it be unknown.
Methodology
In this section, we first explain the training process of our multi-model multi-type fitting network. We then introduce existing metric learning loss and our MaxInterMinIntra loss.
Figure 2: Our multi-model multi-type fitting network. We adopt the same cascaded CorresNet blocks as [59]. The metric learning loss is defined to learn good feature representations. (Diagram components: input X, CorresNet backbone followed by an MLP head (d, 128), metric learning loss against the cluster labels Y (N x K) during training; at test time, K-means on the embeddings yields the cluster index (N x 1).)
Network Architecture
We denote the input sparse data with $N$ points as $X = \{x_i\}_{i=1\cdots N} \in \mathbb{R}^{D\times N}$, where each individual point is $x_i \in \mathbb{R}^D$. The input sparse data could be geometric shapes, feature correspondences in two frames or feature trajectories in multiple frames. We further denote the one-hot encoded labels accompanying the input data as $Y = \{y_i\} \in \{0, 1\}^{K\times N}$, where $y_i \in \{0, 1\}^K$ and $K$ is the number of clusters or partitions of the input data.
Cascaded multi-layer perceptrons (MLPs) have been used to learn feature representations from generic point input [35,59]. We adopt a backbone network similar to CorresNet [59] 1 shown in Fig. 2. The output embedding of the CorresNet is denoted as $Z = f(X; \Theta) \in \mathbb{R}^{K\times N}$.
To make the output $Z$ clustering-friendly, we apply a differentiable, clustering-specific loss function $L(Z, Y)$, measuring the match of the output feature representation with the ground-truth labels. The problem now becomes that of learning a CorresNet backbone $f(X; \Theta)$ that minimizes the loss $L(Z, Y; \Theta)$.
Clustering Loss
We expect our clustering loss function to have the following characteristics. First, it should be invariant to permutation of the models, i.e. the order of the models is exchangeable. Second, the loss must be adaptable to a varying number of groups. Lastly, the loss should enable good separation of the data points into clusters. We consider the following loss functions. L2 Regression Loss: Given the ground-truth labels $Y$ and the output embeddings $Z = f(X; \Theta)$, the ideal and reconstructed affinity matrices are, respectively,
$K = Y^\top Y, \quad \hat{K} = Z^\top Z \quad (1)$
The training objective is to minimize the difference between $K$ and $\hat{K}$ measured by the element-wise L2 distance [14]:
$L(\Theta) = \|K - \hat{K}\|_F^2 = \|Y^\top Y - Z^\top Z\|_F^2 = \|f(X;\Theta)^\top f(X;\Theta)\|_F^2 - 2\|f(X;\Theta)\,Y^\top\|_F^2 \quad (2)$
The above L2 Regression loss is obviously differentiable w.r.t. $f(X; \Theta)$. Since the output embedding $Z$ is L2-normalized, the inner product between two point representations satisfies $z_i^\top z_j \in [-1, 1]$.
Cross-Entropy Loss: As an alternative to the L2 distance, one could measure the discrepancy between $K$ and $\hat{K}$ as a KL-divergence. Since $D_{kl}(K \,\|\, S(\hat{K})) = H(K, S(\hat{K})) - H(K)$, where $H(\cdot)$ is the entropy function and $S(\cdot)$ is the sigmoid function, with fixed $K$ we simply need to minimize the cross-entropy $H(K, S(\hat{K}))$, which yields the following element-wise cross-entropy loss:
$L(\Theta) = \sum_{i,j} H\big(y_i^\top y_j,\, S(z_i^\top z_j)\big) = \sum_{i,j} H\big(y_i^\top y_j,\, S(f(x_i;\Theta)^\top f(x_j;\Theta))\big) \quad (3)$
1 Alternative sparse data networks, e.g. PointNet [35], are applicable as well.
Figure 3: Illustration of the MaxInterMinIntra loss for point representation metric learning. The objective considers the minimal distance $\min_{m,n} \|\mu_m - \mu_n\|_2^2$ between clusters and the maximal scatter $\max_l s_l$ within clusters. (Left panel: input point representations; right panel: embedded point representations $z = f(x;\Theta)$, annotated with cluster means $\mu_1,\ldots,\mu_3$ and scatters $s_1,\ldots,s_3$.)
The cross-entropy loss is more likely to push points $i$ and $j$ of the same cluster together faster than the L2 Regression loss, i.e. inner product $z_i^\top z_j \to 1$, and those of different clusters apart, i.e. inner product $z_i^\top z_j \to -1$. MaxInterMinIntra Loss: Both of the above losses consider the pairwise relation between points; the overall point distribution in the output embedding is not explicitly considered. We now propose a new loss which takes a more global view of the point distribution rather than just the pairwise relations. Specifically, we are inspired by the classical Fisher LDA [10]. LDA discovers a linear mapping $z = w^\top x$ that maximizes the distance between class centers/means $\mu_i = \frac{1}{N_i}\sum_{j\in C_i} z_j$ and minimizes the scatter/variance within each class $s_i = \sum_{j\in C_i} (z_j - \mu_i)^2$. Formally, the objective for a two-class problem is written as,
$J(w) = \dfrac{|\mu_1 - \mu_2|^2}{s_1^2 + s_2^2} \quad (4)$
which is to be maximized over $w$. For linearly non-separable problems, one has to design a kernel function to map the input features before applying the LDA objective. Equipped now with more powerful nonlinear mapping networks, we adapt the LDA objective (for the multi-class scenario) to perform these mappings automatically, as below:
$J(\Theta) = \dfrac{\min_{m,n\in\{1\cdots K\},\, m\neq n} \|\mu_m - \mu_n\|_2^2}{\max_{l\in\{1\cdots K\}} s_l} \quad (5)$
where $\mu_m = \frac{1}{|C_m|}\sum_{i\in C_m} z_i$, $s_l = \sum_{i\in C_l} \|z_i - \mu_l\|_2^2$, and $C_l$ denotes the set of points belonging to cluster $l$. We use the extrema of the inter-cluster distances and intra-cluster scatters (see Fig. 3) so that the worst case is explicitly optimized. Hence, we term the loss MaxInterMinIntra (MIMI). By applying a log operation to the objective, we arrive at the following loss function to be minimized:
$L(\Theta) = -\log \min_{m,n} \|\mu_m - \mu_n\|_2^2 + \log \max_l s_l \quad (6)$
One can easily verify that the MaxInterMinIntra loss is differentiable w.r.t. $z_i$; we give the gradient in Eq. (7). Optimization: The Adam optimizer [21] is used to minimize the loss $L(\Theta)$. The learning rate is fixed at 1e-4 and the mini-batch is one frame pair or sequence. The mini-batch size cannot exceed one because the number of points/correspondences is not uniform across different sequences. For all tasks, we train the network for 300 epochs.
Inference
During testing, we apply standard K-means to the output embeddings $\{z_j\}_{j=1\cdots N_{te}}$. This step is applicable to both multi-model and multi-type fitting problems, as we do not need to explicitly specify the type of model to fit. Finally, when the number of models $K$ is unknown, we propose to analyze the K-means residual,
$r(K) = \sum_{m=1}^{K} \sum_{i\in C_m} \|z_i - \mu_m\|_2^2 \quad (8)$
A good estimate of $K$ often yields a low $r(K)$, and further increasing $K$ does not significantly reduce $r(K)$. We therefore find the $K$ at the 'elbow' position. We adopt two off-the-shelf approaches for this purpose, the second-order difference (SOD) [61] and silhouette analysis [39]. Both are parameter-free.
Experiment
We demonstrate the performance of our network on both synthetic and real-world data, with extensive comparisons against traditional geometric model fitting algorithms. Our focus is on the multi-type setting (the first two experiments, on LCE and KT3DMoSeg), but we also carry out experiments on the pure multi-model scenario (LCE-Unmixed and Adelaide RMF).
Datasets
Synthesized Lines, Circles and Ellipses (LCE): Fitting ellipses has been a fundamental problem in computer vision [11]. We synthesize for each sample four conic curves of different types in a 2D space, specifically, one straight line, two ellipses and one circle. We randomly generate 8,000 training samples, 200 validation samples and 200 testing samples. Each point is perturbed by adding Gaussian noise with σ = 0.05.
KT3DMoSeg [55]: This benchmark was created based upon the KITTI self-driving dataset [12], with 22 sequences in total. Each sequence contains two to five rigid motions. As analyzed by [55], the geometric model for each individual motion can range from an affine transformation, a homography, to a fundamental matrix, with no clear dividing line between them. We evaluate this benchmark to demonstrate our network's ability to tackle multi-model multi-type fitting. For fair comparison with all existing approaches, we only crop the first 5 frames of each sequence for evaluation, so that broken trajectories do not give undue advantage to certain methods.
$\nabla_\Theta L(\Theta) = -\sum_{i\in C_m} \frac{\frac{1}{|C_m|^2}\big(2 z_i + \sum_{j\in C_m, j\neq i} z_j\big) - \frac{1}{|C_m||C_n|}\sum_{j\in C_n} z_j}{\|\mu_m - \mu_n\|_2^2}\, \nabla_\Theta f(x_i;\Theta) - \sum_{j\in C_n} \frac{\frac{1}{|C_n|^2}\big(2 z_j + \sum_{i\in C_n, i\neq j} z_i\big) - \frac{1}{|C_n||C_m|}\sum_{k\in C_m} z_k}{\|\mu_m - \mu_n\|_2^2}\, \nabla_\Theta f(x_j;\Theta) + \alpha \sum_{k\in C_l} \Big(2 z_k - \frac{1}{|C_l|}\big(2 z_k + \sum_{j\in C_l, j\neq k} z_j\big) + \frac{1}{|C_l|^2}\big(2 z_k + 2\sum_{j\in C_l, j\neq k} z_j\big)\Big)\, \nabla_\Theta f(x_k;\Theta) \quad (7)$
Synthesized Lines, Circles and Ellipses Unmixed (LCE-Unmixed): To demonstrate the ability of our network on single-type multi-model fitting, we also randomly generate in each sample a single class of conic curves in 2D space (lines, circles, or ellipses), but with multiple instances (2-4) of them. The numbers of training, validation and testing samples are the same as in the multi-type LCE setting. The same perturbation as in LCE is applied here.
Adelaide RMF Dataset [52]: This dataset consists of 38 frame pairs, of which half are designed for multi-model fitting (the model being homographies induced by planes). The number of planes is between two and seven. The other 19 frame pairs are designed for two-view motion segmentation. It is nominally a single-type multiple-fundamental-matrix fitting problem and has been treated as such by the community. While we put the results under the single-type category, we hasten to add that there might indeed be degeneracies, i.e. near-planar rigid objects (and hence mixed types), present in this dataset, no matter how minor. The number of motions is between one and five.
Multi-Type Curve Fitting
The multiple types in this curve fitting task comprise lines, circles, and ellipses in the LCE dataset. Note that there is no clear dividing boundary between them, as they can all be explained by the general conic equation (with the special cases of lines and circles obtained by setting some coefficients to 0):
$Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0 \quad (9)$
There are two ways to adapt the traditional multi-model methods to this multi-type setting. One approach is to formulate the multi-type fitting problem as fitting multiple models parameterized by the same conic equation in Eq. (9). This approach is termed HighOrder (H.O.) fitting. Alternatively, one could sequentially fit three types of models, which is termed Sequential (Seq.) fitting. For ellipse-specific fitting, the direct least squares approach [11] is adopted. For our model, we evaluate the various metric learning losses introduced in Section 3.2 and present the results in Tab. 1. The results are reported with the optimal setting determined by the validation set. We evaluate the performance by two clustering metrics, Classification Error Rate (Error Rate), i.e. the best classification result subject to permutation of the clustering labels, and Normalized Mutual Information (NMI). Comparisons are made with state-of-the-art multi-model fitting algorithms including T-Linkage [30], RPA [31] and RansaCov [32]. We notice that T-Linkage returns extremely over-segmented results in the sequential setting, e.g. more than 10 lines, making classification error evaluation intractable, as it involves finding the label permutation with the lowest error rate. For our model, we evaluate the three loss variants, the L2 Regression loss (L2), Cross-Entropy loss (CE) and MaxInterMinIntra loss (MIMI). We make the following observations about the results. First, all our metric learning variants outperform the HighOrder and Sequential multi-type fitting approaches. Second, the all-encompassing model used in the HighOrder approach suffers from ill-conditioning when fitting simpler models. Thus, its performance is much inferior to that of Sequential fitting. However, it is worth noting that despite the Sequential approach being given the strong a priori knowledge of both the model type and the number of models for each type, its performance is still significantly worse than ours.
For qualitative comparison, we visualize the ground truth and the segmentation results of each method in Fig. 4. Our clustering results on the bottom row show success in discovering all individual shapes, with mistakes made only at the intersections of individual structures. Though good at separating straight lines, RPA failed to discover ellipses, as sampling all 5 inliers amidst the large number of outliers and fitting an ellipse from even 5 correct support points with noise (noise in the coordinates) are both very difficult, the latter
Multi-Type Motion Segmentation
The KT3DMoSeg benchmark [55] is put forth for the task of motion segmentation. Each sequence often consists of a background whose motion can in general be explained by a fundamental matrix, while the models for the foreground motions can sometimes be ambiguous due to the limited spatial extent of the objects, thus giving rise to mixed types of models. For example, in Fig. 5, the vehicles in 'Seq009 Clip01' and 'Seq028 Clip03' can be roughly explained by an affine transformation or a homography, while the oil tanker in 'Seq095 Clip01' should be modeled by a fundamental matrix. Even the background motion can be ambiguous to model when the background is dominated by a plane; for instance, the quasi-planar row of trees on the right side of the road in 'Seq028 Clip03' is likely to lead to degeneracies in the fundamental matrix estimation and thus cause errors in the traditional method (second row). For this dataset, we use the first five frames of each sequence for fair comparison and apply leave-one-out cross-validation, i.e. repeatedly training on 21 sequences and testing on the left-out sequence; we dub this the 'Vanilla' setting. Each sequence has between 10 and 20 frames, so we could further increase the training data by augmenting with all the remaining five-frame clips from each sequence with no overlap; this is termed the 'Augment' setting. The testing clips (the first five frames of each sequence) are kept the same for both settings. We compare with the subspace clustering approaches GPCA [48], LSA [56], ALC [37], LRR [28], MSMC [7] and SSC [8], and the multi-view clustering (MVC) methods in [55]. Results are presented in Tab. 2.
We make the following observations about the results. Our vanilla leave-one-out approach achieved very competitive performance on all 22 sequences in KT3DMoSeg. In the 'Augment' setting, our approach even outperforms the state-of-the-art multi-view clustering approaches (MVC) [55]. Of all benchmark methods, only MVC has considered the multi-type fitting issue. However, the multi-view fusion proposed therein still does not guarantee that each rigid motion is explained by the correct model. Furthermore, we notice that our proposed MIMI metric is comparable to both the L2 Regression and cross-entropy losses and gives even lower error when augmented with additional data. This suggests that optimizing the distribution of the embedded features with a clustering-specific loss is effective.
Finally, we present qualitative comparison between the results of MVC and ours in Fig. 5. Not only is the proposed network capable of correctly segmenting the aforementioned degenerate motions, it surpasses our expectations in how it performs in 'Seq009 Clip01'. Here the independently moving car (the yellow group in the ground truth image) has a flow field that is consistent with the epipolar constraint associated with the background motion (due to them both translating in the same direction) [55]. Without resorting to reconstructing the depth of the car, it would be impossible to separate it from the background. However, criteria involving depth would be very unwieldy to specify analytically in the existing approaches. Here, without having any preconceived notion of the geometrical model, our network seems to have learnt the requisite criteria to separate the independent motion.
Multi-Model Fitting
In this section, we further demonstrate the ability of our network to handle conventional (i.e., single-type) multi-model fitting problems. Synthetic Multi-Model Fitting: In this experiment, we evaluate multi-model fitting of a single type (the type being line, circle or ellipse). We adopt a similar training and testing split as in the synthetic LCE task, i.e. 8,000 training samples and 200 testing samples, and compare with RPA [31]. The results are presented in Fig. 6. We conclude from the figure that, first, our multi-model network performs comparably with RPA on the multi-line segmentation task while outperforming RPA by a large margin on the more challenging multi-circle and multi-ellipse segmentation tasks. Moreover, the performance drops sharply (higher error) from multi-line (blue) to multi-ellipse (green) fitting for RPA, with the drop getting more acute as the number of models increases. This suggests that the increasing size of the minimal support set (2 points for a line, 3 points for a circle and 5 points for an ellipse) introduces a great challenge for the RANSAC-based approaches due to sampling imbalance. Hitting the true model becomes very difficult for models with a larger support set and a higher noise level.
Figure 5: Qualitative comparison on 4 sequences from KT3DMoSeg (Seq009_Clip01, Seq028_Clip03, Seq095_Clip01, Seq005_Clip01). The first row is the ground truth. The second and third rows are the results of Multi-View Clustering [55] and our multi-type network, respectively. The last row shows the point feature embeddings before and after learning.
It is evident that our multi-model network is less sensitive to the complexity of the model, as the drop in performance (purple and cyan bars) is less significant. Fig. 6 thus demonstrates that our deep learning approach is better able to deal with sampling imbalance, probably by picking up and leveraging the additional regularity in the way the points are distributed. Two-View Multi-Model Fitting: Finally, we evaluate the multi-model fitting task on the Adelaide RMF dataset [52]. For both the multi-planar and motion segmentation tasks, we carry out leave-one-out cross-validation. For fair comparison, we report the classification error rate (Error Rate). The state-of-the-art models being compared include J-Linkage [46], T-Linkage [30], RPA [31], RCMSA [34] and ILP-RansaCov [32]. The comparisons are presented in Tab. 3. We observe that our multi-model network gives very competitive results on both the multi-planar and motion segmentation tasks. For the former task, our proposed MaxInterMinIntra (MIMI) loss yields 17.33%, which is better than many benchmark models. For the motion segmentation task, our model with the L2 Regression loss gives a mean error of 8.98%. We note this performance is achieved by training on only a very small amount of data (18 sequences) and without any dataset-specific parameter tuning. We further note that here, without the problems posed by mixed types, the traditional methods are able to reap the benefits of the given geometrical models (an advantage compared to our method, which does not have any preconceived model).
Further Study
In this section, we first further analyze the impact of metric learning on transforming the point feature representations. We then present results on model selection and finally conduct an ablation study for the proposed MaxInterMinIntra loss.
Feature Embedding: To gain some insight into how the learned feature representations are more clustering-friendly, we provide a direct visualization of the representations. For that purpose, we use t-SNE [29] to project both the KT3DMoSeg raw feature points (of dimension ten for 5 frames) and the network output embeddings to a 2-dimensional space. Three example sequences are presented in the last row of Fig. 5. We conclude from the figure that: (i) the original feature points are hard to group correctly with K-means; and (ii) after our network embedding, feature points are more likely to be grouped according to the respective motions, regardless of the underlying types of motions.
Model Selection: As can be seen from Fig. 5, the point distribution in the learned feature embedding is amenable to model selection (estimating the number of clusters/motions). We evaluate both the Second Order Difference (SOD) [61] and Silhouette Analysis (Silh.) [39] to estimate the number of motions. We also compare with alternative subspace clustering approaches with built-in model selection, namely LRR [28] and MSMC [7], and additionally apply self-tuning spectral clustering (S.T.) [60] to the affinity matrix obtained in MVC [55]. Performance is evaluated in terms of mean classification error (Err.) and correct rate (Corr.), i.e. the percentage of samples/sequences with a correctly estimated number of clusters (higher is better). Comparisons are presented in Tab. 4. Thanks to the deep feature learning, both SOD and Silh. applied to our method give strong performance even though they are very simple heuristics.
Dimension of Output Embedding: We investigate the impact of the dimension of the output embedding z on the performance of multi-model/type fitting. Here, we vary the embedding dimension from 3 to 7 for all three tasks and present the resulting error rates against the dimension in Fig. 7 (left). As we can see, the errors are relatively stable w.r.t. the output embedding dimension from 4 to 7 for all three tasks, with the optimum between 5 and 6, coinciding with the maximal number of clusters for each task (max 5 motions for KT3DMoSeg and max 4 structures for Synthetic). Thus the maximal number of clusters serves as a good heuristic for the dimension of the network output embedding.
MIMI Loss: Here we investigate the necessity of both maximizing the inter-cluster distance and minimizing the intra-cluster variance. Specifically, we compare the following variants.
(i) MaxInter: only maximizing the inter-cluster distance is considered, equivalent to the first term in Eq. (6). (ii) MinIntra: only minimizing the intra-cluster variance is considered, i.e. the second term in Eq. (6). (iii) K-means loss: we further note that the k-means loss [57] proposed for unsupervised deep clustering shares the same objective as MinIntra.
We therefore adapt the k-means loss to supervised learning with a fixed point-to-cluster assignment during training. We compare the three variants with our final MIMI loss on KT3DMoSeg and present the results in Fig. 7 (right). The MIMI loss is consistently better (lower error) than all three variants. In particular, the MinIntra and K-means losses produce large errors. This indicates that pushing points of different clusters away is vital to feature embedding for clustering.
Conclusion
In this work, we investigate training a deep neural network for general multi-model and multi-type fitting. We formulate the problem as learning non-linear feature embeddings that maximize the distance between points of different clusters and minimize the variance within clusters. For inference, the output features are fed into K-means to obtain the grouping. Model selection is easily achieved by analyzing the K-means residual in a parameter-free manner. Experiments are carried out on both synthetic and real geometric multi-model multi-type fitting tasks. Comparison with state-of-the-art approaches shows that our network can better deal with multiple types of models simultaneously, without any preconceived notion of the underlying model. Our method is also less sensitive to the sampling imbalance brought about by an increasing number of models, and it works well over a broad range of parameter values, without the kind of careful tuning required in conventional approaches. | 4,771 |
1901.10197 | 2912711185 | Query expansion (QE) is a well-known technique used to enhance the effectiveness of information retrieval. QE reformulates the initial query by adding similar terms that help in retrieving more relevant results. Several approaches have been proposed in the literature producing quite favorable results, but they are not evenly favorable for all types of queries (individual and phrase queries). One of the main reasons for this is the use of the same kind of data sources and weighting scheme while expanding both the individual and the phrase query terms. As a result, the holistic relationship among the query terms is not well captured or scored. To address this issue, we have presented a new approach for QE using Wikipedia and WordNet as data sources. Specifically, Wikipedia gives rich expansion terms for phrase terms, while WordNet does the same for individual terms. We have also proposed novel weighting schemes for expansion terms: in-link score (for terms extracted from Wikipedia) and a tf-idf based scheme (for terms extracted from WordNet). In the proposed Wikipedia-WordNet-based QE technique (WWQE), we weigh the expansion terms twice: first, they are scored by the weighting scheme individually, and then, the weighting scheme scores the selected expansion terms concerning the entire query using a correlation score. The proposed approach gains improvements of 24% on the MAP score and 48% on the GMAP score over unexpanded queries on the FIRE dataset. Experimental results achieve a significant improvement over individual expansion and other related state-of-the-art approaches. We also analyzed the effect on retrieval effectiveness of the proposed technique by varying the number of expansion terms. | Query expansion has a rich literature in the area of Information Retrieval (IR). In the 1960s, @cite_94 was the first researcher who applied QE for literature indexing and searching in a mechanized library system. In 1971, Rocchio @cite_57 brought QE to the spotlight through the "relevance feedback method" and its characterization in a vector space model. This method is still used in its original and modified forms in automatic query expansion (AQE). Rocchio's work was further extended and applied in techniques such as collection-based term co-occurrence @cite_0 @cite_48 , cluster-based information retrieval @cite_104 @cite_38 , comparative analysis of term distribution @cite_89 @cite_36 @cite_20 and automatic text processing @cite_98 @cite_6 @cite_15 . | {
"abstract": [
"",
"Identifying expansion forms for acronyms is beneficial to many natural language processing and information retrieval tasks. In this work, we study the problem of finding expansions in texts for given acronym queries by modeling the problem as a sequence labeling task. However, it is challenging for traditional sequence labeling models like Conditional Random Fields (CRF) due to the complexity of the input sentences and the substructure of the categories. In this paper, we propose a Latent-state Neural Conditional Random Fields model (LNCRF) to deal with the challenges. On one hand, we extend CRF by coupling it with nonlinear hidden layers to learn multi-granularity hierarchical representations of the input data under the framework of Conditional Random Fields. On the other hand, we introduce latent variables to capture the fine granular information from the intrinsic substructures within the structured output labels implicitly. The experimental results on real data show that our model achieves the best performance against the state-of-the-art baselines.",
"",
"",
"Abstract Efficient distributed numerical word representation models (word embeddings) combined with modern machine learning algorithms have recently yielded considerable improvement on automatic document classification tasks. However, the effectiveness of such techniques has not been assessed for the hierarchical text classification (HTC) yet. This study investigates application of those models and algorithms on this specific problem by means of experimentation and analysis. We trained classification models with prominent machine learning algorithm implementations—fastText, XGBoost, SVM, and Keras’ CNN—and noticeable word embeddings generation methods—GloVe, word2vec, and fastText—with publicly available data and evaluated them with measures specifically appropriate for the hierarchical context. FastText achieved an lca F 1 of 0.893 on a single-labeled version of the RCV1 dataset. An analysis indicates that using word embeddings and its flavors is a very promising approach for HTC.",
"Abstract Clustering based on grid and density for multi-density datasets plays a key role in data mining. In this work, a clustering method that consists of a grid ranking strategy based on local density and priority-based anchor expansion is proposed. In the proposed method, grid cells are ranked first according to local grid properties so the dataset is transformed into a ranked grid. An adjusted shifting grid is then introduced to calculate grid cell density. A cell expansion strategy that simulates the growth of bacterial colony is used to improve the completeness of each cluster. An adaptive technique is finally adopted to handle noisy cells to ensure accurate clustering. The accuracy, parameter sensitivity and computation cost of the proposed algorithm are analysed. The performance of the proposed algorithm is then compared to other clustering methods using four two-dimensional datasets, and the applicability of the proposed method to high-dimensional, large-scale dataset is discussed. Experimental results demonstrate that the proposed algorithm shows good performance in terms of accuracy, de-noising capability, robustness (parameters sensitivity) and computational efficiency. In addition, the results show that the proposed algorithm can handle effectively the problem of multi-density clustering.",
"1332840 Primer compositions DOW CORNINGCORP 6 Oct 1971 [30 Dec 1970] 46462 71 Heading C3T [Also in Divisions B2 and C4] A primer composition comprises 1 pbw of tetra ethoxy or propoxy silane or poly ethyl or propyl silicate or any mixture thereof, 0A75-2A5 pbw of bis(acetylacetonyl) diisopropyl titanate, 0A75- 5 pbw of a compound CF 3 CH 2 CH 2 Si[OSi(CH 3 ) 2 - X] 3 wherein each X is H or -CH 2 CH 2 Si- (OOCCH 3 ) 3 , at least one being the latter, and 1-20 pbw of a ketone, hydrocarbon or halohydrocarbon solvent boiling not above 150‹ C. In the examples 1 pbw each of bis(acetylacetonyl)diisopropyl titanate, polyethyl silicate and are dissolved in 10 pbw of acetone or in 9 pbw of light naphtha and 1 of methylisobutylketone. The solutions are used to prime Ti panels, to which a Pt-catalysed room-temperature vulcanizable poly-trifluoropropylmethyl siloxanebased rubber is then applied.",
"Abstract Spatial keyword query (SKQ) processing is gaining great interest with the proliferation of location-based devices and services. However, most of the existing SKQ processing methods are either focused on Euclidean space or suffer from poor scalability. This paper addresses the problem of SKQ processing in road networks under wireless broadcast environments, and devises a novel air index called SKQAI , which combines a road network weighted quad-tree, several keyword quad-trees and a distance bound array, to facilitate SKQ processing in road networks. Based on SKQAI , efficient algorithms for processing Boolean Range, Top-k and Ranked SKQs are proposed. The proposed methods can efficiently prune irrelevant regions of the road network based on both road network distance and keyword information, and thus improve query processing efficiency significantly. Finally, simulation studies on two real road networks and two geo-textual datasets are conducted to demonstrate the effectiveness and efficiency of the proposed algorithms.",
"",
"",
"",
"This paper reports on a novel technique for literature indexing and searching in a mechanized library system. The notion of relevance is taken as the key concept in the theory of information retrieval and a comparative concept of relevance is explicated in terms of the theory of probability. The resulting technique called “Probabilistic Indexing,” allows a computing machine, given a request for information, to make a statistical inference and derive a number (called the “relevance number”) for each document, which is a measure of the probability that the document will satisfy the given request. The result of a search is an ordered list of those documents which satisfy the request ranked according to their probable relevance. The paper goes on to show that whereas in a conventional library system the cross-referencing (“see” and “see also”) is based solely on the “semantical closeness” between index terms, statistical measures of closeness between index terms can be defined and computed. Thus, given an arbitrary request consisting of one (or many) index term(s), a machine can elaborate on it to increase the probability of selecting relevant documents that would not otherwise have been selected. Finally, the paper suggests an interpretation of the whole library problem as one where the request is considered as a clue on the basis of which the library system makes a concatenated statistical inference in order to provide as an output an ordered list of those documents which most probably satisfy the information needs of the user."
],
"cite_N": [
"@cite_38",
"@cite_36",
"@cite_48",
"@cite_98",
"@cite_6",
"@cite_104",
"@cite_57",
"@cite_0",
"@cite_89",
"@cite_15",
"@cite_20",
"@cite_94"
],
"mid": [
"",
"2474509017",
"",
"",
"2891768540",
"2885712284",
"2164547069",
"2770603453",
"",
"",
"",
"2082729696"
]
} | A New Approach for Query Expansion using Wikipedia and WordNet | Web is arguably the largest information source available on this planet and it's growing day by day. According to a recent survey [26] of the computer world magazine, approximately 70-80 percent of all data available to enterprises/organizations is unstructured information, i.e., information that either does not organize in a pre-defined manner or does not have a pre-defined data model. This makes information processing a big challenge and, creates a vocabulary gap between user queries and indexed documents. It is common for a user's query Q and its relevant document D (in a document collection) to use different vocabulary and language styles while referring to the same concept. For example, terms 'buy' and 'purchase' have the same meaning, only one of these can be present in documents-index while the other one can be user's query term. This makes it difficult to retrieve the information actually wanted by the user. An effective strategy to fill this gap is to use Query expansion (QE) technique that enhances the retrieval effectiveness by adding expansion terms to the initial query. Selection of the expansion terms plays a crucial role in QE because only a small subset of the expanded terms are actually relevant to the query. In this sense, the approach for selection of expansion terms is equally important in comparison to what we do further with the expanded terms in order to retrieve desired information. QE has a long research history in Information retrieval (IR) [64,83]. It has potential to enhance the IR effectiveness by adding relevant terms that can help to discriminate the relevant documents from irrelevant ones. The source of expansion terms plays a significant role in QE. A variety of sources have been researched for extracting the expansion terms, e.g., the entire target document collection [14,24,110], feedback documents (few top ranked documents are retrieved in response to the initial query) [31,59] or external knowledge resources [1,33,54].
References [10,25] provide comprehensive surveys on the data sources used for QE. Broadly, such sources can be classified into four categories: documents used in the retrieval process [14] (e.g., the corpus), hand-built knowledge resources [76] (e.g., WordNet, ConceptNet, thesauri, ontologies), external text collections and resources [1] (e.g., the Web, Wikipedia), and hybrid data sources [32].
In corpus-based sources, a corpus is prepared that contains a cluster of terms for each possible query term. During expansion, the corresponding cluster is used as the set of expansion terms (e.g., [14,24,110]). However, corpus-based sources fail to establish a relationship between a word in the corpus and related words used in different communities, e.g., "senior citizen" and "elderly" [39].
QE based on hand-built knowledge resources extracts knowledge from textual, manually built data sources such as dictionaries, thesauri, ontologies and the LOD cloud (e.g., [9,76,95,102,108]). Thesaurus-based QE can be either automatic or hand-built. One of the best-known hand-built thesauri is WordNet [66]. While it significantly improves the retrieval effectiveness of badly constructed queries, it does not show much improvement for well-formulated user queries. Hand-built knowledge resources have three main limitations: they are commonly domain specific, they usually do not contain proper nouns, and they have to be kept up to date.
External text collections and resources such as web, Wikipedia, Query logs and anchor texts are the most common and effective data sources for QE ( [1,11,16,33,54,97,106]). In such cases, QE approaches show overall better results in comparison to the other previously discussed data sources.
Hybrid Data Sources are a combination of two or more data sources. For example, reference [28] uses WordNet, an external corpus, and the top retrieved documents as data sources for QE. Some of the other research works based on hybrid resources are [32,44,57,100].
Among the above data sources, Wikipedia and WordNet are popular choices for semantic enrichment of the initial query [1,4,38,76,95,104]. They are also two of the most widely used knowledge resources in natural language processing. Wikipedia is the largest encyclopedia describing entities [99], while WordNet is a large lexical database of English words. An entity is described in Wikipedia by a web article that contains detailed information related to that entity, and each such article describes only one entity. The important keywords present in an article can prove very useful as expansion terms for queries about the entity it describes. WordNet, on the other hand, consists of a graph of synsets, which are collections of synonymous words linked by a number of useful relations. WordNet also provides a precise and carefully assembled hierarchy of useful concepts. These features make WordNet an ideal knowledge resource for QE.
Many articles [1,4,38,60,76,95,104] have used Wikipedia and WordNet separately, with promising results. However, they do not produce consistent results for different types of queries (individual and phrase queries).
This article proposes a novel technique, the Wikipedia-WordNet based QE technique (WWQE), which combines Wikipedia and WordNet as data sources to improve retrieval effectiveness. We have also proposed novel schemes for weighting the expansion terms: an in-link score (for terms extracted from Wikipedia) and a tf-idf based scheme (for terms extracted from WordNet). Experimental results show that the proposed WWQE technique produces consistently better results for all kinds of queries (individual and phrase queries) when compared with query expansion based on either of the two data sources individually. The experiments were carried out on the FIRE dataset [50] using popular weighting models and evaluation metrics. They produced improved results on popular metrics such as MAP (mean average precision), GM MAP (geometric mean average precision), P@10 (precision at top 10 ranks), P@20, P@30, bpref (binary preference) and overall recall. The comparison was made with results obtained on the individual data sources (i.e., Wikipedia and WordNet).
Organization
The remainder of the article is organized as follows. Section 2 discusses related work. Section 3 describes the proposed approach. Experimental Setup, dataset and evaluation matrices are discussed in Section 4. Section 5 discusses the experimental results. Finally, we conclude in Section 6.
Use of WordNet as Data Source for QE
WordNet [66] is one of the most popular hand-built thesauri and has been used extensively for QE and word-sense disambiguation (WSD). Here, our focus is on the use of WordNet for query expansion. Several issues need to be addressed when using WordNet as a data source, such as:
- When a query term appears in multiple synsets, which synset(s) should be considered for query expansion?
- Can only the synsets of a query term have meanings similar to the query term, or can synsets of these synsets also have similar meanings, and hence also be considered as potential expansion terms?
- When considering a synset of a query term, should only synonyms be considered, or should other relations (i.e., hypernyms, hyponyms, holonyms, meronyms, etc.) also be looked at? Further, when considering terms under a given relation, which terms should be selected?
In earlier works, a number of researchers have explored these issues. References [94,95] added manually selected WordNet synsets for QE, but unfortunately no significant improvements were obtained. Reference [87] uses synonyms of the initial query terms and assigns them half weight. Reference [60] used word senses to add synonyms, hyponyms and the terms' WordNet glosses to expand the query; their experiments yielded significant improvements on TREC datasets. Reference [41] uses semantic similarity, while reference [108] uses sense disambiguation of query terms to add synonyms for QE. During experimental evaluation, the method of reference [108] produced an improvement of around 7% in P@10 over the CACM collection. Reference [35] uses a set of candidate expansion terms (CET) that includes all the terms from all the synsets in which the query terms occur; a CET is chosen based on the vocabulary overlap between its glosses and the glosses of the query terms. Recently, reference [76] used semantic relations from WordNet. The authors proposed a novel query expansion technique where candidate expansion terms are selected from a set of pseudo-relevant documents, and the usefulness of these terms is determined by considering multiple sources of information. The semantic relation between the expanded terms and the query terms is determined using WordNet. On the TREC collection, their method showed significant improvement in IR over the user's unexpanded queries. Reference [58] presents an automatic query expansion (AQE) approach that uses word relations to increase the chances of finding relevant code; as a data source for query expansion, it uses a thesaurus containing only software-related word relations along with WordNet. More recently, reference [62] used WordNet for effective code search, where it was used to generate synonyms that served as query expansion terms. During experimental evaluation, their approach improved precision and recall by 5% and 8%, respectively.
In almost all the aforementioned studies, CETs are taken from WordNet as synsets of the initial query terms. In contrast, we select CETs not only from the synsets of the initial query terms, but also from the synsets of these synsets. We then assign weights to the synonyms level-wise.
Use of Wikipedia as Data Source for QE
Wikipedia [99] is a freely available and the largest multilingual Online encyclopedia on the web, where articles are regularly updated and new articles are added by a large number of web users. The exponential growth and reliability of Wikipedia makes it an ideal knowledge resource for information retrieval.
Recently, Wikipedia is being used widely for QE and a number of studies have reported significant improvements in IR over TREC and Cultural Heritage in CLEF (CHiC) datasets (e.g., [1,4,7,34,43,59,104]). Reference [59] performed an investigation using Wikipedia and retrieved all articles corresponding to the original query as a source of expansion terms for pseudo relevance feedback. It observed that for a particular query where the usual pseudo relevance feedback fails to improve the query, Wikipedia-based pseudo relevance feedback improves it significantly. Reference [34] uses link-based QE on Wikipedia and focuses on anchor text. It also proposed a phrase scoring function. Reference [104] utilized Wikipedia to categorize the original query into three types: (1) ambiguous queries (queries with terms having more than one potential meaning), (2) entity queries (queries having a specific meaning that cover a narrow topic) and (3) broader queries (queries having neither ambiguous nor specific meaning). They consolidated the expansion terms into the original query and evaluated these techniques using language modeling IR. Reference [4] uses Wikipedia for semantic enrichment of short queries based on in-link and out-link articles. Reference [32] proposed Entity Query Feature Expansion (EQFE) technique. It uses data sources such as Wikipedia and Freebase to expand the initial query with features from entities and their links to knowledge bases (Wikipedia and Freebase). It also uses structured attributes and the text of the knowledge bases for query expansion. The main motive for linking entities to knowledge bases is to improve the understanding and representation of text documents and queries.
Our proposed WWQE method differs from the above-mentioned expansion methods in three ways:
1. Our method uses both Wikipedia and WordNet for query expansion, whereas the above-discussed methods use either only one of these sources or some other sources.
2. For extracting expansion terms from WordNet, our method employs a novel two-level approach where the synsets of a query term as well as the synsets of these synsets are selected.
3. For extracting expansion terms from Wikipedia, terms are selected on the basis of a novel scheme called the 'in-link score', which is based on the in-links and out-links of Wikipedia articles.
Other QE Approaches
On the basis of data sources used in QE, several approaches have been proposed. All these approaches can be classified into four main categories: Linguistic approaches: The approaches in this category analyze expansion features such as lexical, morphological, semantic and syntactic term relationships to reformulate the initial query terms. They use thesaurus, dictionaries, ontologies, Linked Open Data (LOD) cloud or other similar knowledge resources such as WordNet or ConceptNet to determine the expansion terms by dealing with each keyword of initial query independently. Word stemming is one of the first and among the most influential QE approaches in linguistic association to reduce the inflected word to its root word. The stemming algorithm (e.g., [77]) can be utilized either at retrieval time or at indexing time. When used during retrieval, terms from initially retrieved documents are picked, and then, these terms are harmonized with the morphological types of query terms (e.g., [55,73]). When used during indexing time, words picked from the document collection are stemmed, and then, these words are harmonized with the query root word stems (e.g., [49]). Morphological approach [55,73] is an ordered way of studying the internal structure of the word. It has been shown to give better results than the stemming approach [20,69], however, it requires querying to be done in a structured way.
Use of semantic and contextual analysis are other popular QE approaches in linguistic association. It includes knowledge sources such as Ontologies, LOD cloud, dictionaries and thesaurus. In the context of ontological based QE, reference [17] uses domain-specific and domain-independent ontologies. Reference [101] utilizes the rich semantics of domain ontology and evaluates the trade off between the improvement in retrieval effectiveness and the computational cost. Several research works have been done on QE using a thesaurus. WordNet is a well known thesaurus for expanding the initial query using word synsets. As discussed earlier, many of the research works use WordNet for expanding the initial query. For example, reference [95] uses WordNet to find the synonyms. Reference [87] uses WordNet and POS tagger for expanding the initial query. However, this approach suffers some practical issues such as absence of accurate matching between query and senses, absence of proper nouns, and, one query term mapping to many noun synsets and collections. Generally, utilization of WordNet for QE is beneficial only if the query words are unambiguous in nature [42,95]; using word sense disambiguation (WSD) to remove ambiguity is not easy [71,74]. Several research works have attempted to address the WSD problem. For example, reference [72] suggests that instead of considering the replacement of the initial query term with its synonyms, hyponyms, and hyperonyms, it is better to extract similar concepts from the same domain of the given query from WordNet (such as the common nodes and glossy terms).
Another important approach that improves the linguistic information of the initial query is syntactic analysis [109]. Syntactic based QE uses the enhanced relational features of the query terms for expanding the initial query. It expands the query mostly through statistical approaches [101].
It recognizes the term dependency statistically [80] by employing techniques such as term cooccurrence. Reference [89] uses this approach for extracting contextual terms and relations from external corpus. Here, it uses two dependency relation based query expansion techniques for passage retrieval: Density based system (DBS) and Relation based system (RBS). DBS makes use of relation analysis to extract high quality contextual terms. RBS extracts relation paths for QE in a density and relation based passage retrieval framework. The syntactic analysis approach may be beneficial for natural language queries in search tasks, where linguistic analysis can break the task into a sequence of decisions [109] or integrate the taxonomic information effectively [61].
However, the above approaches fail to solve ambiguity problems [10,25]. Corpus-based approaches: Corpus-based Approaches examine the contents of the whole text corpus to recognize the expansion features to be utilized for QE. They are one of the earliest statistical approaches for QE. They create co-relations between terms based on co-occurrence statistics in the corpus to form sentences, paragraphs or neighboring words, which are used in the expanded query. Corpus-based approaches have two admissible strategies: (1) term clustering [29,52,68], which groups document terms into clusters based on their co-occurrences, and, (2) concept based terms [37,70,79], where expansion terms are based on the concept of query rather than the original query terms. Reference [56] selects the expansion terms after the analysis of the corpus using word embeddings, where each term in the corpus is characterized with a vector embedded in a vector space. Reference [110] uses four corpora as data sources (including one industry and three academic corpora) and presents a Two-stage Feature Selection (TFS) framework for QE known as Supervised Query Expansion (SQE).
Some of the other approaches established an association thesaurus based on the whole corpus by using, e.g., context vectors [39], term co-occurrence [24], mutual information [45] and interlinked Wikipedia articles [67]. Search log-based approaches: These approaches are based on the analysis of search logs. User feedback, which is an important source for suggesting a set of similar terms based on the user's initial query, is generally explored through the analysis of search logs. With the fast growing size of the web and the increasing use of web search engines, the abundance of search logs and their ease of use have made them an important source for QE. It usually contains user queries corresponding to the URLs of Web pages. Reference [30] uses the query logs to extract probabilistic correlations between query terms and document terms. These correlations are further used for expanding the user's initial query. Similarly, reference [31] uses search logs for QE; their experiments show better results when compared with QE based on pseudo relevance feedback. One of the advantages of using search logs is that it implicitly incorporates relevance feedback. On the other hand, it has been shown in reference [98] that implicit measurements are relatively good, however, their performance may not be the same for all types of users and search tasks.
There are commonly two types of QE approaches used on the basis of web search logs. The first type considers queries as documents and extracts features of these queries that are related to the user's initial query [47]. Among the techniques based on the first approach, some use their combined retrieval results [48], while some do not (e.g., [47,106]).
In the second type of approach, the features are extracted on relational behavior of queries. For example, reference [12] represents queries in a graph based vector space model (query-click bipartite graph) and analyzes the graph constructed using the query logs. References [23,31,80] extract the expansion terms directly from the clicked results. References [36,96] use the top results from past query terms entered by the users. Queries are also extracted from related documents [19,97], or through user clicks [46,105,106]. The second type of approach is more popular and has been shown to give better results. Web-based approaches: These approaches include Wikipedia and anchor texts from websites for expanding the user's original query. These approaches have gained popularity in recent times. Anchor text was first used in reference [65] for associating hyper-links with linked pages and with the pages in which anchor texts are found. In the context of a web-page, an anchor text can play a role similar to the title since the anchor text pointing to a page can serve as a concise summary of its contents. It has been shown that user search queries and anchor texts are very similar because an anchor text is a brief characterization of its target page. Article [54] used anchor texts for QE; their experimental results suggest that anchor texts can be used to improve the traditional QE based on query logs. On similar lines, reference [33] suggested that anchor texts can be an effective substitute for query logs. It demonstrated effectiveness of QE techniques using log-based stemming through experiments on standard TREC collection dataset.
Another popular approach is the use of Wikipedia articles, titles and hyper-links (in-link and out-link) [4,7]. We have already mentioned the importance of Wikipedia as an ideal knowledge source for QE. Recently, quite a few research works have used it for QE (e.g., [1,4,7,59,104]). Article [3] attempts to enrich initial queries using semantic annotations in Wikipedia articles combined with phrase-disambiguation. Their experiments show better results in comparison to the relevance based language model.
FAQs are another important web-based source of information for improving QE. Recently published article [53] uses domain specific FAQs data for manual QE. Some of the other works using FAQs are [2,80,88].
Our Approach
The proposed approach consists of four main steps: pre-processing of the initial query, QE using Wikipedia, QE using WordNet, and re-weighting of the expanded terms. Figure 1 summarizes these steps.
Pre-processing
In the pre-processing step, Brill's tagger [22] is used to lemmatize each query and assign a part of speech (POS) to each word in the query. The POS information is used to recognize the phrases and individual words, which are then used in the subsequent steps of QE. Many researchers agree that, instead of considering only term-to-term relationships, dealing with the query in terms of phrases gives better results [3,31,61]. Phrases usually offer richer context and have less ambiguity. Hence, documents retrieved in response to phrases from the initial query are given more importance than documents retrieved in response to non-phrase words from the initial query. A phrase usually has a specific meaning that goes beyond the cumulative meaning of its individual component words. Therefore, we give more priority to phrases in the query than to individual words when finding expansion terms from Wikipedia and WordNet. For example, consider the following query (Query ID-126) from the FIRE dataset to demonstrate our pre-processing approach: <top> <num>126</num> <title>Swine flu vaccine</title> <desc>Indigenous vaccine made in India for swine flu prevention</desc> <narr>Relevant documents should contain information related to making indigenous swine flu vaccines in India, the vaccine's use on humans and animals, arrangements that are in place to prevent scarcity / unavailability of the vaccine, and the vaccine's role in saving lives.</narr> </top> Multiple such queries in the standard SGML format are present in the query file of the FIRE dataset. For extracting the root query, we extract the title from each query and tag it using the Stanford POS tagger library [91]. For example, the result of POS tagging the title of the above query is: Swine/NN flu/NN vaccine/NN. For extracting phrases, we consider only nouns, adjectives and verbs as words of interest, and a phrase is identified whenever two or more consecutive noun, adjective or verb words are found. Based on this, we get the following individual terms and phrases from the above query: the individual terms Swine, flu and vaccine, and the phrases Swine flu, flu vaccine and Swine flu vaccine.
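As a rough illustration of this pre-processing step, the sketch below uses NLTK's off-the-shelf tokenizer, POS tagger and WordNet lemmatizer in place of the Brill/Stanford taggers used above; the function name, the content-tag filter and the phrase-building loop are our own simplifications rather than the authors' implementation.

```python
# Minimal sketch of query pre-processing, assuming NLTK is installed and its
# tokenizer, POS-tagger and WordNet resources have been downloaded.
import nltk
from nltk.stem import WordNetLemmatizer

CONTENT_TAGS = ("NN", "JJ", "VB")  # nouns, adjectives, verbs (and their sub-tags)

def preprocess_query(title):
    """Return (individual_terms, phrases) extracted from a query title."""
    lemmatizer = WordNetLemmatizer()
    tagged = nltk.pos_tag(nltk.word_tokenize(title))
    lemmas = [(lemmatizer.lemmatize(w.lower()), tag) for w, tag in tagged]

    terms, phrases, run = [], [], []
    for w, tag in lemmas + [("", "")]:          # sentinel flushes the last run
        if tag.startswith(CONTENT_TAGS):
            terms.append(w)
            run.append(w)
        else:
            # a run of >= 2 consecutive content words yields candidate phrases
            for i in range(len(run)):
                for j in range(i + 2, len(run) + 1):
                    phrases.append(" ".join(run[i:j]))
            run = []
    return terms, phrases

print(preprocess_query("Swine flu vaccine"))
# roughly: (['swine', 'flu', 'vaccine'],
#           ['swine flu', 'swine flu vaccine', 'flu vaccine'])
```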
QE using Wikipedia
After Pre-processing of the initial query we consider individual words and phrases as keywords to expand the initial query using Wikipedia. To select CETs from Wikipedia, we mainly focus on Wikipedia titles, in-links and out-links. Before going into further details, we first discuss our Wikipedia representation.
Wikipedia Representation
Wikipedia is an ideal information source for QE and can be represented as directed graph G(A, L), where A and L indicate articles and links respectively. Each article x ∈ A effectively summarizes its entity (title(x)) and provides links to the user to browse other related articles. In our work, we consider the two types of links: in-links and out-links. In-links (I(x)): Set of articles that point to the article x. It can be defined as
I(x) = \{ x_i \mid (x_i, x) \in L \} \qquad (1)
For example, assume we have an article titled "Computer Science". The in-links of this article are all the Wikipedia articles that hyperlink to the article titled "Computer Science" in their main text or body. Out-links (O(x)): the set of articles that x points to. It can be defined as
O(x) = \{ x_i \mid (x, x_i) \in L \} \qquad (2)
For example, again consider the article titled "Computer Science". The out-links refer to all the hyperlinks within the body of the Wikipedia page of the article titled "Computer Science" (i.e., https://en.wikipedia.org/wiki/Computer_Science). The in-links and out-links are illustrated in Fig. 2.
Fig. 2: In-links & Out-links structure of Wikipedia
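To make the in-link/out-link notation of Eqs. (1) and (2) concrete, here is a minimal sketch that assumes the Wikipedia link structure has already been parsed from the dump into a mapping from each article title to the set of titles it links to; the helper name and the toy data are ours, not part of the original system.

```python
# Derive in-links I(x) from out-links O(x) by inverting the link graph.
from collections import defaultdict

def build_inlinks(outlinks):
    """outlinks: dict {article_title: set of linked titles} -> dict of in-links."""
    inlinks = defaultdict(set)
    for article, targets in outlinks.items():
        for target in targets:
            inlinks[target].add(article)   # 'article' points to 'target'
    return inlinks

# toy example
outlinks = {
    "Bird":  {"Wing", "Feather"},
    "Wing":  {"Bird", "Flight"},
    "Plane": {"Flight", "Wing"},
}
inlinks = build_inlinks(outlinks)
print(sorted(inlinks["Wing"]))   # ['Bird', 'Plane']   -> I("Wing")
print(sorted(outlinks["Wing"]))  # ['Bird', 'Flight']  -> O("Wing")
```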
In addition to the article pages, Wikipedia contains "redirect" pages that provide an alternative way to reach the target article for abbreviated query terms. For example, query "ISRO" redirects to the article "Indian Space Research Organisation" and "UK" redirects to "United Kingdom".
In our proposed WWQE approach, the following steps are taken for QE using Wikipedia.
-Extraction of In-links.
-Extraction of Out-links.
-Assignment of the in-link score to expansion terms.
- Selection of the top n terms as expansion terms.
-Re-weighting of expansion terms.
Extraction of In-links
This step involves two sub-steps: first, the extraction of in-links, and second, the computation of the term frequency (tf) of the initial query terms. The in-links of an initial query term consist of the titles of all those Wikipedia articles that contain a hyperlink to the given query term in their main text or body. The tf of an initial query term is the term frequency of the initial query term and its synonyms (obtained from WordNet) in the in-link articles (see Fig. 3). For example, if the initial query term is "Bird", and "Wings" is one of its in-links, then the tf of "Bird" in the article "Wings" is the frequency of the word "Bird" and its synonyms (obtained from WordNet) in the article "Wings".
Fig. 4: Out-links Extraction
Assigning the in-link score to expansion terms
After extracting the in-links and out-links of a query term, expansion terms are selected from the out-links on the basis of semantic similarity, which is calculated using the in-link score. Let t be a query term and t_1 one of its candidate expansion terms. With reference to Wikipedia, the two articles t and t_1 are considered semantically similar if (i) t_1 is both an out-link and an in-link of t, and (ii) t_1 has a high in-link score. The in-link score is based on the popular tf-idf weighting scheme [86] in IR and is calculated as follows:
Score(I(t_1)) = tf(t, t_1) \cdot idf(t_1, W_D) \qquad (3)
where tf(t, t_1) is the term frequency of the query term t and its synonyms (obtained from WordNet) in the article t_1, and idf(t_1, W_D) is the inverse document frequency of the term t_1 in the whole Wikipedia dump W_D. The idf is calculated as follows:
idf(t_1, W_D) = \log \frac{N}{|\{ d \in W_D : t_1 \in d \}|} \qquad (4)
where N is the total number of articles in the Wikipedia dump, and |\{ d \in W_D : t_1 \in d \}| is the number of articles in which the term t_1 appears. The intuition behind the in-link score is to capture (1) the degree of similarity between the expansion term and the initial query term, and (2) the amount of useful information provided by the expansion term with respect to QE, i.e., whether the expansion term is common or rare across the whole Wikipedia dump.
Elaborating on the above two points, the term frequency tf captures the semantic similarity between the initial query term and the expansion term, whereas the idf scores the rareness of the expansion term. The latter assigns lower priority to stop words and common terms in Wikipedia articles (e.g., Main page, Contents, Edit, References, Help, About Wikipedia, etc.). In Wikipedia, both such common terms and genuine expansion terms appear as hyperlinks in the query term's article; the idf helps in removing these common hyperlinks, which are present in the articles of all candidate expansion terms.
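The following sketch spells out the in-link score of Eqs. (3)-(4). It assumes the caller supplies the text of the candidate article t_1, the query term together with its WordNet synonyms, the document frequency of t_1 over the dump, and the dump size N; the whitespace tokenization, the function name and the toy numbers are our own simplifications.

```python
import math

def inlink_score(article_text, query_term_and_synonyms, df_t1, num_articles):
    """Score(I(t1)) = tf(t, t1) * idf(t1, W_D), cf. Eqs. (3)-(4)."""
    tokens = article_text.lower().split()
    variants = {w.lower() for w in query_term_and_synonyms}
    tf = sum(1 for tok in tokens if tok in variants)           # tf(t, t1)
    idf = math.log(num_articles / df_t1) if df_t1 else 0.0     # idf(t1, W_D)
    return tf * idf

score = inlink_score(
    article_text="the bird spreads its wings and the bird takes flight",
    query_term_and_synonyms=["bird", "fowl"],
    df_t1=1200,              # assumed number of dump articles containing t1
    num_articles=5_000_000,  # assumed dump size N
)
print(round(score, 2))
```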
After assigning an in-link score to each candidate expansion term, for each term in the initial query we select the top n terms based on their in-link scores. These top n terms form the intermediate expanded query. These intermediate terms are then re-weighted using the correlation score (as described in Sec. 3.4). The top m terms chosen on the basis of the correlation score become one part of the expanded query; the other part is obtained from WordNet, as described next.
QE using WordNet
After pre-processing of the initial query, the individual terms and phrases obtained as keywords are looked up in WordNet for QE. While extracting semantically similar terms from WordNet, more priority is given to the phrases in the query than to the individual terms. Specifically, phrases (formed by two consecutive words) are looked up first in WordNet for expansion. Only when no entity is found in WordNet corresponding to a phrase are its individual terms looked up separately in WordNet. It should be noted that phrases are considered only at the time of finding semantically similar terms from WordNet.
When querying WordNet for semantically similar terms, only the synonym and hyponym sets of the query term are considered as candidate expansion terms. Synonyms and hyponyms are fetched at two levels: for an initial query term Q_i, its synonyms, denoted x_i, are considered at level one, and the synonyms of the x_i are considered at level two, as shown in Fig. 5. The final synonym set used for QE is the union of the level-one and level-two synonyms. Hyponyms are fetched similarly at two levels. After fetching synonyms and hyponyms at both levels, a wide range of semantically similar terms is obtained. Next, we rank these terms using a tf-idf score:
Score(t_1) = tf(t_1, t) \cdot idf(t_1, W_D) \qquad (5)
where t is the initial query term, t_1 is an expansion term, tf(t_1, t) is the term frequency of the expansion term t_1 in the Wikipedia article of the query term t, and idf(t_1, W_D) is the inverse document frequency of the term t_1 in the whole Wikipedia dump W_D.
The idf is calculated as given in Eq. 4. After ranking the expansion terms based on the above score, we collect the top n terms as the intermediate expanded query. These intermediate terms are then re-weighted using the correlation score, and the top m terms chosen on the basis of the correlation score (as described in Sec. 3.4) become the second part of the expanded query, the first part having been obtained from Wikipedia as described before.
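A minimal sketch of the two-level WordNet expansion is given below, using NLTK's WordNet interface (which requires the WordNet corpus to be downloaded). The exact reading of "synsets of these synsets" — here, synsets of the level-one lemmas plus direct hyponyms of the level-one synsets — is our interpretation, not necessarily the authors' implementation.

```python
from nltk.corpus import wordnet as wn

def two_level_expansion(term):
    """Collect synonym/hyponym lemmas of `term` at level one and level two."""
    def lemmas_of(synsets):
        return {l.name().replace("_", " ") for s in synsets for l in s.lemmas()}

    level1 = wn.synsets(term)
    level1_terms = lemmas_of(level1)

    # level two: synsets of the level-one lemmas, plus direct hyponyms of level-one synsets
    level2 = [s for t in level1_terms for s in wn.synsets(t.replace(" ", "_"))]
    level2 += [h for s in level1 for h in s.hyponyms()]
    level2_terms = lemmas_of(level2)

    return (level1_terms | level2_terms) - {term}

print(sorted(two_level_expansion("vaccine"))[:10])
```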
Re-weighting Expanded Terms
So far, a set of candidate expansion terms has been obtained, where each expansion term is strongly connected to an individual query term or phrase. These terms have been assigned weights using the in-link score (for terms obtained from Wikipedia) and a tf-idf based score (for terms obtained from WordNet). However, this may not properly capture the relationship of an expansion term to the query as a whole. For example, the word "technology" is frequently associated with the word "information". Expanding the query term "technology" with "information" might work well for some queries such as "engineering technology", "science technology" and "educational technology", but might not work well for others such as "music technology", "food technology" and "financial technology". This problem has also been discussed in reference [13]. To resolve this language ambiguity problem, we re-weight the expansion terms using a correlation score [79,103]. The rationale is that if an expansion feature is correlated with several individual query terms, then the chances are high that it will be correlated with the query as a whole as well.
The correlation score is described as follows. Let q be the original query and let t 1 be a candidate expansion term. The correlation score of t 1 with q is calculated as:
C_{q,t_1} = \frac{1}{|q|} \sum_{t \in q} c_{t,t_1} = \frac{1}{|q|} \sum_{t \in q} w_{t,a_t} \cdot w_{t_1,a_t} \qquad (6)
where c_{t,t_1} denotes the correlation (similarity) score between the terms t and t_1, and w_{t,a_t} (respectively w_{t_1,a_t}) is the weight of the term t (respectively t_1) in the article a_t of the term t.
The weight of the term t in its article a_t, denoted w_{t,a_t} (w_{t_1,a_t} is defined analogously), is computed as:
w_{t,a_t} = tf(t, a_t) \cdot itf(t, a_q) = tf(t, a_t) \cdot \log \frac{T}{|T_{a_t}|} \qquad (7)
where tf(t, a_t) is the term frequency of the term t in its article a_t, a_q denotes the set of Wikipedia articles corresponding to the terms in the original query q, itf(t, a_q) is the inverse term frequency of the term t with respect to a_q, T is the frequency of the term t in all the Wikipedia articles in the set a_q, and |T_{a_t}| is the frequency of the term t in the article a_t.
After assigning the correlation score to expansion terms, we collect the top m terms from both data sources to form the final set of expanded terms.
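To illustrate the re-weighting of Eqs. (6)-(7), here is a small sketch that represents each query term's Wikipedia article simply as a list of tokens; the helper names and the toy data are assumptions made for the example, not the authors' code.

```python
import math

def term_weight(term, article_tokens, freq_in_all_query_articles):
    """w_{t,a_t} = tf(t, a_t) * log(T / |T_{a_t}|), cf. Eq. (7)."""
    tf = article_tokens.count(term)      # |T_{a_t}|
    T = freq_in_all_query_articles       # frequency of the term over all query articles
    return tf * math.log(T / tf) if tf else 0.0

def correlation_score(query_terms, candidate, query_articles):
    """C_{q,t1}: average correlation of the candidate with every query term, cf. Eq. (6)."""
    total = 0.0
    for t in query_terms:
        tokens = query_articles[t]       # bag of words of article a_t
        T_t = sum(query_articles[u].count(t) for u in query_terms)
        T_c = sum(query_articles[u].count(candidate) for u in query_terms)
        total += term_weight(t, tokens, T_t) * term_weight(candidate, tokens, T_c)
    return total / len(query_terms)

articles = {
    "swine": "swine flu virus outbreak vaccine swine".split(),
    "flu":   "flu vaccine influenza swine flu season".split(),
}
print(round(correlation_score(["swine", "flu"], "vaccine", articles), 3))
```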
Experimental Setup
In order to evaluate the proposed WWQE approach, experiments were carried out on 50 queries from the FIRE ad-hoc test collection [50]. As real-life queries are short, we used only the title field of the queries. We used Brill's tagger to assign a POS tag to each query term in order to extract the phrases and individual words, which were then used for QE. We used the most recent Windows version of WordNet (2.1) to extract two levels of synset terms, and Wikipedia for in-link extraction for QE.
We use the Wikipedia dump (also known as the 'WikiDump') for in-link extraction. The Wikipedia dump contains every Wikipedia article in XML format and, as an open resource, can be downloaded from https://dumps.wikimedia.org/. We downloaded the English Wikipedia dump of January 2017, titled "enwiki-20170101-pages-articles-multistream.xml".
We compare the performance of our query expansion technique with several existing weighting models as described in Sec.4.2.
Dataset
We use the well-known benchmark dataset of the Forum for Information Retrieval Evaluation (FIRE) [50] to evaluate our proposed WWQE approach. Table 1 summarizes the dataset used. The FIRE collection consists of a very large set of documents on which IR is performed, a set of questions (called topics), and the right answers (called relevance judgments) stating the relevance of documents to the corresponding topic(s). The FIRE dataset consists of a large collection of newswire articles from two sources, namely BDnews24 [15] and The Telegraph [8], provided by the Indian Statistical Institute, Kolkata, India.
Evaluation Metrics
We used the TERRIER retrieval system for all our experimental evaluations. We use the title field of the topics in the FIRE dataset. For indexing the documents, stopwords are first removed and Porter's stemmer is then used for stemming. All experimental evaluations are based on the unigram word assumption, i.e., all documents and queries in the corpus are indexed using single terms; we did not use any phrase or positional information. To compare the effectiveness of our expansion technique, we used the following weighting models: IFB2, a probabilistic divergence from randomness (DFR) model [6]; the BM25 model of Okapi [82]; Laplace's law of succession I(n)L2 [90]; the log-logistic DFR model LGD [27]; the divergence from independence model DPH [5]; and the standard tf-idf model. The parameters for these models were set to their default values in TERRIER. We evaluate the results on standard evaluation metrics: MAP (mean average precision), GM MAP (geometric mean average precision), P@10 (precision at top 10 ranks), P@20, P@30, bpref (binary preference) and the overall recall (number of relevant documents retrieved). Additionally, we report the percentage improvement in MAP over the baseline (unexpanded query) for each expansion method.
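For reference, the sketch below shows how the ranking-based metrics reported here (average precision, from which MAP is obtained by averaging over topics, and P@k) are computed from a ranked result list and the set of judged-relevant document ids; it is a stand-in for the TERRIER output rather than a reproduction of that tool, and the toy run and judgments are invented for the example.

```python
def precision_at_k(ranked_docs, relevant, k):
    """P@k: fraction of the top-k retrieved documents that are relevant."""
    return sum(1 for d in ranked_docs[:k] if d in relevant) / k

def average_precision(ranked_docs, relevant):
    """AP: mean of the precision values at the ranks of the relevant documents."""
    hits, score = 0, 0.0
    for rank, d in enumerate(ranked_docs, start=1):
        if d in relevant:
            hits += 1
            score += hits / rank
    return score / len(relevant) if relevant else 0.0

run = ["d3", "d7", "d1", "d9", "d4"]   # ranked retrieval result for one topic
qrels = {"d3", "d9", "d5"}             # judged relevant documents for that topic
print(precision_at_k(run, qrels, 5))   # 0.4
print(average_precision(run, qrels))   # 0.5
```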
Experimental Results
The aim of our experiments is to explore the effectiveness of the proposed Wikipedia-WordNet based QE technique (WWQE) by comparing it against three baselines on popular weighting models and evaluation metrics: (i) the unexpanded query, (ii) query expansion using Wikipedia alone, and (iii) query expansion using WordNet alone. The comparative analysis is shown in Tables 2, 3 and 4. Table 4 shows the performance of the proposed WWQE technique over popular weighting models in terms of MAP, GM MAP, P@10, P@20, P@30 and relevant documents returned. The table shows that the proposed WWQE technique is compatible with the existing popular weighting models and effectively improves information retrieval. It also shows the relative percentage improvements (within parentheses) of the various standard evaluation metrics measured against no expansion. With the proposed query expansion technique, the weighting models improve MAP by up to 24% and GM MAP by up to 48%. Based on the results presented in Table 4, we can say that, for all evaluation parameters, the proposed QE technique performs well with all weighting models.
Figure 6 shows the comparative analysis of the precision-recall curves of the WWQE technique with various weighting models. These graphs plot the interpolated precision of an IR system at the 11 standard recall levels {0, 0.1, 0.2, 0.3, ..., 1.0} and are widely used to evaluate IR systems that return ranked documents (i.e., by averaging and plotting retrieval results). Comparisons are best made in three different recall ranges: 0 to 0.2, 0.2 to 0.8, and 0.8 to 1, which characterize high-precision, middle-recall, and high-recall performance, respectively. Based on the graphs presented in Figures 6a and 6b, we conclude that the P-R curves of the various weighting models show nearly the same retrieval behaviour with or without QE. Therefore, for improving information retrieval through QE, the choice of weighting model is not so important; what matters is the technique used for selecting the relevant expansion terms, which in turn come from the data sources. Hence, the data sources also play an important role in effective QE. This conclusion also supports our proposed WWQE technique, in which we select the expansion terms on the basis of individual term weighting and additionally assign a correlation score on the basis of the entire query.
P@5 measures the precision over the top 5 documents retrieved, and bpref measures a preference relation capturing how many judged relevant documents are ranked before judged irrelevant documents. Figure 9 compares the WWQE technique in terms of MAP, bpref and P@5 with the baseline (unexpanded query), QE using WordNet alone and QE using Wikipedia alone. The IFB2 model is used for term weighting in this experimental evaluation.
Fig. 9: Comparative analysis of WWQE technique with baseline, WordNet and Wikipedia
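The 11-point interpolated precision-recall curves discussed above (Fig. 6) can be computed as in the following sketch, which assumes a list of (recall, precision) points measured down a single ranked list; the function name and the toy points are ours.

```python
def eleven_point_interpolated(pr_points):
    """Interpolated precision at recall levels 0.0, 0.1, ..., 1.0."""
    levels = [i / 10 for i in range(11)]
    curve = []
    for r in levels:
        # interpolated precision at r = max precision observed at any recall >= r
        candidates = [p for recall, p in pr_points if recall >= r]
        curve.append(max(candidates) if candidates else 0.0)
    return curve

points = [(0.1, 1.0), (0.4, 0.67), (0.5, 0.6), (1.0, 0.3)]
print(eleven_point_interpolated(points))
```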
After evaluating the performance of the proposed QE technique on several popular evaluation metrics, it can be concluded that the proposed WWQE technique performs well with all weighting models across several evaluation parameters. Therefore, the proposed WWQE technique is effective in improving information retrieval results.
Conclusion
This article presents a novel Wikipedia-WordNet based Query Expansion (WWQE) technique that considers both individual terms and phrases as expansion terms. The proposed technique employs a two-level strategy to select terms from WordNet: it first fetches the synsets of the initial query terms and then extracts the synsets of these synsets. To score expansion terms obtained from Wikipedia, we proposed a new weighting scheme named the in-link score, while a tf-idf based scoring scheme is used for expansion terms extracted from WordNet. After scoring the terms with respect to individual query terms, we further re-weight the selected expansion terms using a correlation score computed with respect to the entire query. The combination of the two data sources works well for extracting relevant expansion terms, and the proposed QE technique performs well with these terms across several weighting models. It also yields better results than either of the two methods used individually. The results on several evaluation metrics over the FIRE dataset demonstrate the effectiveness of the proposed QE technique for information retrieval, and the technique improves IR effectively when evaluated with several popular weighting models. | 6,631
1901.10197 | 2912711185 | Abstract Query expansion (QE) is a well-known technique used to enhance the effectiveness of information retrieval. QE reformulates the initial query by adding similar terms that help in retrieving more relevant results. Several approaches have been proposed in literature producing quite favorable results, but they are not evenly favorable for all types of queries (individual and phrase queries). One of the main reasons for this is the use of the same kind of data sources and weighting scheme while expanding both the individual and the phrase query terms. As a result, the holistic relationship among the query terms is not well captured or scored. To address this issue, we have presented a new approach for QE using Wikipedia and WordNet as data sources. Specifically, Wikipedia gives rich expansion terms for phrase terms, while WordNet does the same for individual terms. We have also proposed novel weighting schemes for expansion terms: in-link score (for terms extracted from Wikipedia) and a tf-idf based scheme (for terms extracted from WordNet). In the proposed Wikipedia-WordNet-based QE technique (WWQE), we weigh the expansion terms twice: first, they are scored by the weighting scheme individually, and then, the weighting scheme scores the selected expansion terms concerning the entire query using correlation score. The proposed approach gains improvements of 24 on the MAP score and 48 on the GMAP score over unexpanded queries on the FIRE dataset. Experimental results achieve a significant improvement over individual expansion and other related state-of-the-art approaches. We also analyzed the effect on retrieval effectiveness of the proposed technique by varying the number of expansion terms. | Another popular approach is the use of Wikipedia articles, titles and hyper-links (in-link and out-link) @cite_85 @cite_8 . We have already mentioned the importance of Wikipedia as an ideal knowledge source for QE. Recently, quite a few research works have used it for QE (e.g., @cite_26 @cite_85 @cite_100 @cite_84 @cite_8 ). Article @cite_77 attempts to enrich initial queries using semantic annotations in Wikipedia articles combined with phrase-disambiguation. Their experiments show better results in comparison to the relevance based language model. | {
"abstract": [
"In an ad-hoc retrieval task, the query is usually short and the user expects to find the relevant documents in the first several result pages. We explored the possibilities of using Wikipedia's articles as an external corpus to expand ad-hoc queries. Results show promising improvements over measures that emphasize on weak queries.",
"We deal, in this paper, with the short queries (containing one or two words) problem. Short queries have no sufficient information to express their semantics in a non ambiguous way. Pseudo-relevance feedback (PRF) approach for query expansion is useful in many Information Retrieval (IR) tasks. However, this approach does not work well in the case of very short queries. Therefore, we present instead of PRF a semantic query enrichment method based on Wikipedia. This method expands short queries by semantically related terms extracted from Wikipedia. Our experiments on cultural heritage corpora show significant improvement in the retrieval performance.",
"",
"In this paper, we describe our query expansion approach submitted for the Semantic Enrichment task in Cultural Heritage in CLEF (CHiC) 2012. Our approach makes use of an external knowledge base such as Wikipedia and DBpedia. It consists of two major steps, concept candidates generation from knowledge bases and the selection of K-best related concepts. For selecting the K-best concepts, we ranked them according to their semantic relatedness with the query. We used Wikipedia-based Explicit Semantic Analysis to calculate the semantic relatedness scores. We evaluate our approach on 25 queries from the CHiC Semantic Enrichment dataset.",
"Relevance feedback methods generally suffer from topic drift caused by word ambiguities and synonymous uses of words. Topic drift is an important issue in patent information retrieval as people tend to use different expressions describing similar concepts causing low precision and recall at the same time. Furthermore, failing to retrieve relevant patents to an application during the examination process may cause legal problems caused by granting an existing invention. A possible cause of topic drift is utilizing a relevance feedback-based search method. As a way to alleviate the inherent problem, we propose a novel query phrase expansion approach utilizing semantic annotations in Wikipedia pages, trying to enrich queries with phrases disambiguating the original query words. The idea was implemented for patent search where patents are classified into a hierarchy of categories, and the analyses of the experimental results showed not only the positive roles of phrases and words in retrieving additional relevant documents through query expansion but also their contributions to alleviating the query drift problem. More specifically, our query expansion method was compared against relevance-based language model, a state-of-the-art query expansion method, to show its superiority in terms of MAP on all levels of the classification hierarchy.",
"Pseudo-relevance feedback (PRF) via query-expansion has been proven to be e®ective in many information retrieval (IR) tasks. In most existing work, the top-ranked documents from an initial search are assumed to be relevant and used for PRF. One problem with this approach is that one or more of the top retrieved documents may be non-relevant, which can introduce noise into the feedback process. Besides, existing methods generally do not take into account the significantly different types of queries that are often entered into an IR system. Intuitively, Wikipedia can be seen as a large, manually edited document collection which could be exploited to improve document retrieval effectiveness within PRF. It is not obvious how we might best utilize information from Wikipedia in PRF, and to date, the potential of Wikipedia for this task has been largely unexplored. In our work, we present a systematic exploration of the utilization of Wikipedia in PRF for query dependent expansion. Specifically, we classify TREC topics into three categories based on Wikipedia: 1) entity queries, 2) ambiguous queries, and 3) broader queries. We propose and study the effectiveness of three methods for expansion term selection, each modeling the Wikipedia based pseudo-relevance information from a different perspective. We incorporate the expansion terms into the original query and use language modeling IR to evaluate these methods. Experiments on four TREC test collections, including the large web collection GOV2, show that retrieval performance of each type of query can be improved. In addition, we demonstrate that the proposed method out-performs the baseline relevance model in terms of precision and robustness."
],
"cite_N": [
"@cite_26",
"@cite_8",
"@cite_85",
"@cite_84",
"@cite_77",
"@cite_100"
],
"mid": [
"2007585013",
"1986297686",
"",
"2394716307",
"2090867003",
"2102563107"
]
} | A New Approach for Query Expansion using Wikipedia and WordNet | Web is arguably the largest information source available on this planet and it's growing day by day. According to a recent survey [26] of the computer world magazine, approximately 70-80 percent of all data available to enterprises/organizations is unstructured information, i.e., information that either does not organize in a pre-defined manner or does not have a pre-defined data model. This makes information processing a big challenge and, creates a vocabulary gap between user queries and indexed documents. It is common for a user's query Q and its relevant document D (in a document collection) to use different vocabulary and language styles while referring to the same concept. For example, terms 'buy' and 'purchase' have the same meaning, only one of these can be present in documents-index while the other one can be user's query term. This makes it difficult to retrieve the information actually wanted by the user. An effective strategy to fill this gap is to use Query expansion (QE) technique that enhances the retrieval effectiveness by adding expansion terms to the initial query. Selection of the expansion terms plays a crucial role in QE because only a small subset of the expanded terms are actually relevant to the query. In this sense, the approach for selection of expansion terms is equally important in comparison to what we do further with the expanded terms in order to retrieve desired information. QE has a long research history in Information retrieval (IR) [64,83]. It has potential to enhance the IR effectiveness by adding relevant terms that can help to discriminate the relevant documents from irrelevant ones. The source of expansion terms plays a significant role in QE. A variety of sources have been researched for extracting the expansion terms, e.g., the entire target document collection [14,24,110], feedback documents (few top ranked documents are retrieved in response to the initial query) [31,59] or external knowledge resources [1,33,54].
References [10,25] provide comprehensive surveys on data sources used for QE. Broadly, such sources can be classified into four categories: documents used in retrieval process [14] (e.g., corpus), hand-built knowledge resources [76] (e.g., WordNet 1 , ConceptNet 2 , thesaurus, ontologies), external text collections and resources [1] (e.g., Web, Wikipedia), and hybrid data sources [32].
In corpus based sources, a corpus is prepared that contains a cluster of terms for each possible query term. During expansion, the corresponding cluster is used as the set of expanded terms (e.g., [14,24,110]). However, corpus based sources fail to establish a relationship between a word in the corpus and related words used in different communities, e.g., "senior citizen" and "elderly" [39].
QE based on hand-built knowledge resources extracts knowledge from textual hand-built data sources such as dictionaries, thesauri, ontologies and the LOD cloud (e.g., [9,76,95,102,108]). Thesaurus-based QE can be either automatic or hand-built. One famous hand-built thesaurus is WordNet [66]. While it significantly improves the retrieval effectiveness of badly constructed queries, it does not show much improvement for well-formulated user queries. Primarily, there are three limitations of hand-built knowledge resources: they are commonly domain-specific, they usually do not contain proper nouns, and they have to be kept up to date.
External text collections and resources such as web, Wikipedia, Query logs and anchor texts are the most common and effective data sources for QE ( [1,11,16,33,54,97,106]). In such cases, QE approaches show overall better results in comparison to the other previously discussed data sources.
Hybrid Data Sources are a combination of two or more data sources. For example, reference [28] uses WordNet, an external corpus, and the top retrieved documents as data sources for QE. Some of the other research works based on hybrid resources are [32,44,57,100].
Among the above data sources, Wikipedia and WordNet are popular choices for semantic enrichment of the initial query [1,4,38,76,95,104]. They are also two of the most widely used knowledge resources in natural language processing. Wikipedia is the largest encyclopedia describing entities [99]. WordNet is a large lexicon database of words in the English language. An entity is described by Wikipedia through a web-article that contains detailed related information about the entity. Each such web-article describes only one entity. The information present in the article has important keywords that can prove very useful as expansion terms for queries based on the entity being described by the article. On the other hand, WordNet consists of a graph of synsets that are collections of synonymous words linked by a number of useful properties. WordNet also provides a precise and attentively assembled hierarchy of useful concepts. These features make WordNet an ideal knowledge resource for QE.
Many of the articles [1,4,38,60,76,95,104] have used Wikipedia and WordNet separately with promising results. However, they don't produce consistent results for different types of queries (individual and phrase queries).
This article proposes a novel Wikipedia-WordNet based QE technique (WWQE) that combines the Wikipedia and WordNet data sources to improve retrieval effectiveness. We have also proposed novel schemes for weighting expanded terms: an in-link score (for terms extracted from Wikipedia) and a tf-idf based scheme (for terms extracted from WordNet). Experimental results show that the proposed WWQE technique produces consistently better results for all kinds of queries (individual and phrase queries) when compared with query expansion based on the two data sources individually. The experiments were carried out on the FIRE dataset [50] using popular weighting models and evaluation metrics. They produced improved results on popular metrics such as MAP (mean average precision), GM MAP (geometric mean average precision), P@10 (precision at top 10 ranks), P@20, P@30, bpref (binary preference) and overall recall. The comparison was made with results obtained on the individual data sources (i.e., Wikipedia and WordNet).
Organization
The remainder of the article is organized as follows. Section 2 discusses related work. Section 3 describes the proposed approach. The experimental setup, dataset and evaluation metrics are discussed in Section 4. Section 5 discusses the experimental results. Finally, we conclude in Section 6.
Use of WordNet as Data Source for QE
WordNet [66] is one of the most popular hand-built thesauri and has been widely used for QE and word-sense disambiguation (WSD). Here, our focus is on the use of WordNet for query expansion. There are many issues that need to be addressed when using WordNet as a data source, such as:
-When a query term appears in multiple synsets, which synset(s) should be considered for query expansion?
-Can only the synsets of a query term have meanings similar to the query term, or can synsets of these synsets also have similar meanings, and hence also be considered as potential expansion terms?
-When considering a synset of a query term, should only synonyms be considered, or should other relations (i.e., hypernyms, hyponyms, holonyms, meronyms, etc.) also be looked at? Further, when considering terms under a given relation, which terms should be selected?
In earlier works, a number of researchers have explored these issues. References [94,95] added manually selected WordNet synsets for QE, but unfortunately no significant improvement were obtained. Reference [87] uses synonyms of the initial query and assigns half weight. Reference [60] used word sense to add synonyms, hyponyms and terms's WordNet glosses to expand query. Their experiments yielded significant improvements on TREC datasets. Reference [41] uses semantic similarity while reference [108] uses sense disambiguation of query terms to add synonyms for QE. During experimental evaluation, in response to the user's initial query, reference [108]'s method produces an improvement of around 7% in P@10 value over the CACM collection. Reference [35] uses a set of candidate expansion terms (CET) that include all the terms from all the synsets where the query terms exist. Basically, a CET is chosen based on the vocabulary overlap between its glosses and the glosses of query terms. Recently, reference [76] used semantic relations from the WordNet. The authors proposed a novel query expansion technique where Candidate Expansion Terms (CET) are selected from a set of pseudo-relevant documents. The usefulness of these terms is determined by considering multiple sources of information. The semantic relation between the expanded terms and the query terms is determined using WordNet. On the TREC collection, their method showed significant improvement in IR over the user's unexpanded queries. Reference [58] presents an automatic query expansion (AQE) approach that uses word relations to increase the chances of finding relevant code. As data source for query expansion, it uses a thesaurus containing only software-related word relations along with WordNet. More recently, reference [62] used WordNet for effective code search, where it was used to generate synonyms, which were used as query expansion terms. During experimental evaluation, their approach showed improvement in precision and recall by values by 5% and 8% respectively.
In almost all the aforementioned studies, CETs are taken from WordNet as synsets of the initial queries. In contrast, we select CETs not only from the synsets of the initial query, but also from the synsets of these synsets. We then assign weights to the synonyms level-wise.
Use of Wikipedia as Data Source for QE
Wikipedia [99] is a freely available and the largest multilingual online encyclopedia on the web, where articles are regularly updated and new articles are added by a large number of web users. The exponential growth and reliability of Wikipedia make it an ideal knowledge resource for information retrieval.
Recently, Wikipedia is being used widely for QE and a number of studies have reported significant improvements in IR over TREC and Cultural Heritage in CLEF (CHiC) datasets (e.g., [1,4,7,34,43,59,104]). Reference [59] performed an investigation using Wikipedia and retrieved all articles corresponding to the original query as a source of expansion terms for pseudo relevance feedback. It observed that for a particular query where the usual pseudo relevance feedback fails to improve the query, Wikipedia-based pseudo relevance feedback improves it significantly. Reference [34] uses link-based QE on Wikipedia and focuses on anchor text. It also proposed a phrase scoring function. Reference [104] utilized Wikipedia to categorize the original query into three types: (1) ambiguous queries (queries with terms having more than one potential meaning), (2) entity queries (queries having a specific meaning that cover a narrow topic) and (3) broader queries (queries having neither ambiguous nor specific meaning). They consolidated the expansion terms into the original query and evaluated these techniques using language modeling IR. Reference [4] uses Wikipedia for semantic enrichment of short queries based on in-link and out-link articles. Reference [32] proposed Entity Query Feature Expansion (EQFE) technique. It uses data sources such as Wikipedia and Freebase to expand the initial query with features from entities and their links to knowledge bases (Wikipedia and Freebase). It also uses structured attributes and the text of the knowledge bases for query expansion. The main motive for linking entities to knowledge bases is to improve the understanding and representation of text documents and queries.
Our proposed WWQE method differs from the above-mentioned expansion methods in three ways:
1. Our method uses both Wikipedia and WordNet for query expansion, whereas the above-discussed methods either use only one of these sources or some other sources.
2. For extracting expansion terms from WordNet, our method employs a novel two-level approach where the synsets of the query term as well as the synsets of these synsets are selected.
3. For extracting expansion terms from Wikipedia, terms are selected on the basis of a novel scheme called the 'in-link score', which is based on the in-links and out-links of Wikipedia articles.
Other QE Approaches
On the basis of data sources used in QE, several approaches have been proposed. All these approaches can be classified into four main categories: Linguistic approaches: The approaches in this category analyze expansion features such as lexical, morphological, semantic and syntactic term relationships to reformulate the initial query terms. They use thesaurus, dictionaries, ontologies, Linked Open Data (LOD) cloud or other similar knowledge resources such as WordNet or ConceptNet to determine the expansion terms by dealing with each keyword of initial query independently. Word stemming is one of the first and among the most influential QE approaches in linguistic association to reduce the inflected word to its root word. The stemming algorithm (e.g., [77]) can be utilized either at retrieval time or at indexing time. When used during retrieval, terms from initially retrieved documents are picked, and then, these terms are harmonized with the morphological types of query terms (e.g., [55,73]). When used during indexing time, words picked from the document collection are stemmed, and then, these words are harmonized with the query root word stems (e.g., [49]). Morphological approach [55,73] is an ordered way of studying the internal structure of the word. It has been shown to give better results than the stemming approach [20,69], however, it requires querying to be done in a structured way.
Use of semantic and contextual analysis are other popular QE approaches in linguistic association. It includes knowledge sources such as Ontologies, LOD cloud, dictionaries and thesaurus. In the context of ontological based QE, reference [17] uses domain-specific and domain-independent ontologies. Reference [101] utilizes the rich semantics of domain ontology and evaluates the trade off between the improvement in retrieval effectiveness and the computational cost. Several research works have been done on QE using a thesaurus. WordNet is a well known thesaurus for expanding the initial query using word synsets. As discussed earlier, many of the research works use WordNet for expanding the initial query. For example, reference [95] uses WordNet to find the synonyms. Reference [87] uses WordNet and POS tagger for expanding the initial query. However, this approach suffers some practical issues such as absence of accurate matching between query and senses, absence of proper nouns, and, one query term mapping to many noun synsets and collections. Generally, utilization of WordNet for QE is beneficial only if the query words are unambiguous in nature [42,95]; using word sense disambiguation (WSD) to remove ambiguity is not easy [71,74]. Several research works have attempted to address the WSD problem. For example, reference [72] suggests that instead of considering the replacement of the initial query term with its synonyms, hyponyms, and hyperonyms, it is better to extract similar concepts from the same domain of the given query from WordNet (such as the common nodes and glossy terms).
Another important approach that improves the linguistic information of the initial query is syntactic analysis [109]. Syntactic based QE uses the enhanced relational features of the query terms for expanding the initial query. It expands the query mostly through statistical approaches [101].
It recognizes the term dependency statistically [80] by employing techniques such as term cooccurrence. Reference [89] uses this approach for extracting contextual terms and relations from external corpus. Here, it uses two dependency relation based query expansion techniques for passage retrieval: Density based system (DBS) and Relation based system (RBS). DBS makes use of relation analysis to extract high quality contextual terms. RBS extracts relation paths for QE in a density and relation based passage retrieval framework. The syntactic analysis approach may be beneficial for natural language queries in search tasks, where linguistic analysis can break the task into a sequence of decisions [109] or integrate the taxonomic information effectively [61].
However, the above approaches fail to solve ambiguity problems [10,25]. Corpus-based approaches: Corpus-based Approaches examine the contents of the whole text corpus to recognize the expansion features to be utilized for QE. They are one of the earliest statistical approaches for QE. They create co-relations between terms based on co-occurrence statistics in the corpus to form sentences, paragraphs or neighboring words, which are used in the expanded query. Corpus-based approaches have two admissible strategies: (1) term clustering [29,52,68], which groups document terms into clusters based on their co-occurrences, and, (2) concept based terms [37,70,79], where expansion terms are based on the concept of query rather than the original query terms. Reference [56] selects the expansion terms after the analysis of the corpus using word embeddings, where each term in the corpus is characterized with a vector embedded in a vector space. Reference [110] uses four corpora as data sources (including one industry and three academic corpora) and presents a Two-stage Feature Selection (TFS) framework for QE known as Supervised Query Expansion (SQE).
Some of the other approaches established an association thesaurus based on the whole corpus by using, e.g., context vectors [39], term co-occurrence [24], mutual information [45] and interlinked Wikipedia articles [67]. Search log-based approaches: These approaches are based on the analysis of search logs. User feedback, which is an important source for suggesting a set of similar terms based on the user's initial query, is generally explored through the analysis of search logs. With the fast growing size of the web and the increasing use of web search engines, the abundance of search logs and their ease of use have made them an important source for QE. It usually contains user queries corresponding to the URLs of Web pages. Reference [30] uses the query logs to extract probabilistic correlations between query terms and document terms. These correlations are further used for expanding the user's initial query. Similarly, reference [31] uses search logs for QE; their experiments show better results when compared with QE based on pseudo relevance feedback. One of the advantages of using search logs is that it implicitly incorporates relevance feedback. On the other hand, it has been shown in reference [98] that implicit measurements are relatively good, however, their performance may not be the same for all types of users and search tasks.
There are commonly two types of QE approaches used on the basis of web search logs. The first type considers queries as documents and extracts features of these queries that are related to the user's initial query [47]. Among the techniques based on the first approach, some use their combined retrieval results [48], while some do not (e.g., [47,106]).
In the second type of approach, the features are extracted on relational behavior of queries. For example, reference [12] represents queries in a graph based vector space model (query-click bipartite graph) and analyzes the graph constructed using the query logs. References [23,31,80] extract the expansion terms directly from the clicked results. References [36,96] use the top results from past query terms entered by the users. Queries are also extracted from related documents [19,97], or through user clicks [46,105,106]. The second type of approach is more popular and has been shown to give better results. Web-based approaches: These approaches include Wikipedia and anchor texts from websites for expanding the user's original query. These approaches have gained popularity in recent times. Anchor text was first used in reference [65] for associating hyper-links with linked pages and with the pages in which anchor texts are found. In the context of a web-page, an anchor text can play a role similar to the title since the anchor text pointing to a page can serve as a concise summary of its contents. It has been shown that user search queries and anchor texts are very similar because an anchor text is a brief characterization of its target page. Article [54] used anchor texts for QE; their experimental results suggest that anchor texts can be used to improve the traditional QE based on query logs. On similar lines, reference [33] suggested that anchor texts can be an effective substitute for query logs. It demonstrated effectiveness of QE techniques using log-based stemming through experiments on standard TREC collection dataset.
Another popular approach is the use of Wikipedia articles, titles and hyper-links (in-link and out-link) [4,7]. We have already mentioned the importance of Wikipedia as an ideal knowledge source for QE. Recently, quite a few research works have used it for QE (e.g., [1,4,7,59,104]). Article [3] attempts to enrich initial queries using semantic annotations in Wikipedia articles combined with phrase-disambiguation. Their experiments show better results in comparison to the relevance based language model.
FAQs are another important web-based source of information for improving QE. A recently published article [53] uses domain-specific FAQ data for manual QE. Some of the other works using FAQs are [2,80,88].
Our Approach
The proposed approach consists of four main steps: Pre-processing of the Initial Query, QE using Wikipedia, QE using WordNet, and Re-weighting Expanded Terms. Figure 1 summarizes these steps.
Pre-processing
In the pre-processing step, Brill's tagger [22] is used to lemmatize each query and assign a part of speech (POS) to each word in the query. The POS tagging is done on queries and the POS information is used to recognize the phrases and individual words. These phrases and individual words are used in the subsequent steps of QE. Many researchers agree that instead of considering the term-to-term relationship, dealing with the query in terms of phrases gives better results [3,31,61]. Phrases usually offer richer context and have less ambiguity. Hence, documents retrieved in response to phrases from the initial query have more importance than documents retrieved in response to non-phrase words from the initial query. A phrase usually has a specific meaning that goes beyond the cumulative meaning of the individual component words. Therefore, we give more priority to phrases in the query than to individual words when finding expansion terms from Wikipedia and WordNet. For example, consider the following query (Query ID-126) from the FIRE dataset to demonstrate our pre-processing approach:
<top>
<num> 126 </num>
<title> Swine flu vaccine </title>
<desc> Indigenous vaccine made in India for swine flu prevention </desc>
<narr> Relevant documents should contain information related to making indigenous swine flu vaccines in India, the vaccine's use on humans and animals, arrangements that are in place to prevent scarcity / unavailability of the vaccine, and the vaccine's role in saving lives. </narr>
</top>
Multiple such queries in the standard SGML format are present in the query file of the FIRE dataset. For extracting the root query, we extract the title from each query and tag it using the Stanford POS tagger library [91]. For example, the result of POS tagging the title of the above query is: Swine NN flu NN vaccine NN. For extracting phrases, we only consider nouns, adjectives and verbs as the words of interest. We consider a phrase to have been identified whenever two or more consecutive noun, adjective or verb words are found. Based on this, we get the following individual terms and phrases from the above query: the individual terms "Swine", "flu" and "vaccine", and the phrases "Swine flu", "flu vaccine" and "Swine flu vaccine".
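The pre-processing step can be sketched in a few lines of Python. This is only an illustration: it substitutes NLTK's off-the-shelf POS tagger for the Brill/Stanford taggers used above, the phrase rule (two or more consecutive noun, adjective or verb words) follows the description in this section, and all function names are our own.

```python
# Illustrative pre-processing sketch: POS-tag a query title and extract
# individual terms plus phrases of consecutive Noun/Adjective/Verb words.
# Requires: pip install nltk, plus nltk.download('punkt') and
# nltk.download('averaged_perceptron_tagger').
import nltk

CONTENT_TAGS = ("NN", "JJ", "VB")  # noun, adjective, verb tag prefixes

def extract_terms_and_phrases(title):
    tokens = nltk.word_tokenize(title)
    tagged = nltk.pos_tag(tokens)          # e.g. [('Swine', 'NN'), ('flu', 'NN'), ...]
    words = [w for w, t in tagged if t.startswith(CONTENT_TAGS)]
    # Build phrases from runs of consecutive content words (length >= 2).
    phrases, run = [], []
    for w, t in tagged:
        if t.startswith(CONTENT_TAGS):
            run.append(w)
        else:
            run = []
        # every contiguous sub-run of length >= 2 ending at the current word
        for start in range(len(run) - 1):
            phrases.append(" ".join(run[start:]))
    return words, phrases

if __name__ == "__main__":
    terms, phrases = extract_terms_and_phrases("Swine flu vaccine")
    print(terms)    # expected: ['Swine', 'flu', 'vaccine']
    print(phrases)  # expected: ['Swine flu', 'Swine flu vaccine', 'flu vaccine']
```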
QE using Wikipedia
After Pre-processing of the initial query we consider individual words and phrases as keywords to expand the initial query using Wikipedia. To select CETs from Wikipedia, we mainly focus on Wikipedia titles, in-links and out-links. Before going into further details, we first discuss our Wikipedia representation.
Wikipedia Representation
Wikipedia is an ideal information source for QE and can be represented as a directed graph G(A, L), where A and L denote articles and links, respectively. Each article x ∈ A effectively summarizes its entity (title(x)) and provides links for the user to browse other related articles. In our work, we consider two types of links: in-links and out-links. In-links (I(x)): the set of articles that point to the article x. It can be defined as
I(x) = {x_i | (x_i, x) ∈ L}    (1)
For example, assume we have an article titled "Computer Science". The in-links to this article are all the titles in Wikipedia that hyperlink to the article titled "Computer Science" in their main text or body. Out-links (O(x)): the set of articles that x points to. It can be defined as
O(x) = {x_i | (x, x_i) ∈ L}    (2)
For example, again consider the article titled "Computer Science". The out-links refer to all the hyperlinks within the body of the Wikipedia page of the article titled "Computer Science" (i.e., https://en.wikipedia.org/wiki/Computer_Science). The in-links and out-links are diagrammatically demonstrated in Fig. 2.
Fig. 2: In-links & Out-links structure of Wikipedia
In addition to the article pages, Wikipedia contains "redirect" pages that provide an alternative way to reach the target article for abbreviated query terms. For example, query "ISRO" redirects to the article "Indian Space Research Organisation" and "UK" redirects to "United Kingdom".
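The in-link and out-link sets of Eqs. (1)-(2) can be illustrated with a toy link graph. The sketch below assumes the article-to-out-link lists have already been parsed from the Wikipedia dump; the mini graph and helper names are invented for the example.

```python
# Toy illustration of Eqs. (1)-(2): derive in-links and out-links from a
# parsed link graph (article title -> list of titles it links to).
from collections import defaultdict

# Assumed, hand-made mini link graph standing in for a parsed Wikipedia dump.
OUT_LINKS = {
    "Computer Science": ["Algorithm", "Alan Turing", "Mathematics"],
    "Algorithm":        ["Computer Science", "Mathematics"],
    "Alan Turing":      ["Computer Science", "Mathematics"],
    "Mathematics":      ["Algorithm"],
}

def build_in_links(out_links):
    """Invert the out-link graph: I(x) = {x_i | (x_i, x) in L}."""
    in_links = defaultdict(set)
    for src, targets in out_links.items():
        for tgt in targets:
            in_links[tgt].add(src)
    return in_links

IN_LINKS = build_in_links(OUT_LINKS)

def O(x):
    return set(OUT_LINKS.get(x, []))

def I(x):
    return IN_LINKS.get(x, set())

if __name__ == "__main__":
    print(sorted(I("Computer Science")))  # ['Alan Turing', 'Algorithm']
    print(sorted(O("Computer Science")))  # ['Alan Turing', 'Algorithm', 'Mathematics']
```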
In our proposed WWQE approach, the following steps are taken for QE using Wikipedia.
-Extraction of In-links.
-Extraction of Out-links.
-Assignment of the in-link score to expansion terms.
-Selection of the top n terms as expansion terms.
-Re-weighting of expansion terms.
Extraction of In-links
This step involves two sub-steps: first, the extraction of in-links, and second, the computation of the term frequency (tf) of the initial query terms. The in-links of an initial query term consist of the titles of all those Wikipedia articles that contain a hyper-link to the given query term in their main text or body. The tf of an initial query term is the term frequency of the initial query term and its synonyms obtained from WordNet in the in-link articles (see Fig. 3). For example, if the initial query term is "Bird" and "Wings" is one of its in-links, then the tf of "Bird" in the article "Wings" is the frequency of the word "Bird" and its synonyms obtained from WordNet in the article "Wings".
Fig. 4: Out-links Extraction (from the WikiDump)
Assigning in-link score to expansion terms
After extraction of the in-links and out-links of the query term, expansion terms are selected from the out-links on the basis of semantic similarity. Semantic similarity is calculated based on in-link scores. Let t be a query term and t_1 be one of its candidate expansion terms. With reference to Wikipedia, the two articles t and t_1 are considered to be semantically similar if (i) t_1 is both an out-link and an in-link of t and (ii) t_1 has a high in-link score. The in-link score is based on the popular tf·idf weighting scheme [86] in IR and is calculated as follows:
Score(I(t_1)) = tf(t, t_1) · idf(t_1, W_D)    (3)
where: tf(t, t_1) is the term frequency of 'query term t and its synonyms obtained from WordNet' in the article t_1, and idf(t_1, W_D) is the inverse document frequency of the term t_1 in the whole Wikipedia dump W_D. The idf can be calculated as follows:
idf(t_1, W_D) = log( N / |{d ∈ W_D : t_1 ∈ d}| )    (4)
where: N is the total number of articles in the Wikipedia dump, and |{d ∈ W_D : t_1 ∈ d}| is the number of articles in which the term t_1 appears. The intuition behind the in-link score is to capture (1) the amount of similarity between the expansion term and the initial query term, and (2) the amount of useful information provided by the expansion term with respect to QE, i.e., whether the expansion term is common or rare across the whole Wikipedia dump.
Elaborating on the above two points, the term frequency tf provides the semantic similarity between the initial query term and the expansion term, whereas idf provides score for the rareness of an expansion term. The latter assigns lower priority to the stop words (common terms) in Wikipedia articles (e.g., Main page, contents, edit, References, Help, About Wikipedia, etc.). In Wikipedia both common terms and expansion terms are hyper-links of the query term article; the idf helps in removing these common hyper-links present in all the articles of the candidate expansion terms.
After assigning an in-link score to each expanded term, for each term in the initial query, we select top n terms based on their in-link scores. These top n terms form the intermediate expanded query. After this, these intermediate terms are re-weighted using correlation score (as described in Sec. 3.4). Top m terms chosen on the basis of correlation score become one part of the expanded query. The other part is obtained from WordNet as described next.
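A minimal sketch of the in-link scoring of Eqs. (3)-(4), assuming a tiny in-memory stand-in for the Wikipedia dump; the helper names and toy texts are illustrative only, and the synonym list would come from WordNet in the full pipeline.

```python
# Sketch of the in-link scoring of Eqs. (3)-(4): tf counts the query term and
# its WordNet synonyms inside the candidate article, idf measures how rare the
# candidate term is across the (here: toy) Wikipedia dump.
import math
import re

def tf(query_term, synonyms, article_text):
    """Frequency of the query term or any of its synonyms in the article."""
    words = re.findall(r"\w+", article_text.lower())
    targets = {query_term.lower(), *(s.lower() for s in synonyms)}
    return sum(1 for w in words if w in targets)

def idf(term, dump):
    """log(N / number of articles containing the term), as in Eq. (4)."""
    n_containing = sum(1 for text in dump.values()
                       if term.lower() in text.lower())
    return math.log(len(dump) / max(1, n_containing))

def in_link_score(query_term, synonyms, candidate, dump):
    return tf(query_term, synonyms, dump[candidate]) * idf(candidate, dump)

if __name__ == "__main__":
    # Toy stand-in for the Wikipedia dump: title -> article text.
    dump = {
        "Influenza vaccine": "A flu vaccine protects against influenza ...",
        "Swine influenza":   "Swine flu is an infection; a vaccine exists ...",
        "Vaccination":       "Vaccination is the administration of a vaccine ...",
    }
    score = in_link_score("flu", ["influenza", "grippe"],
                          "Influenza vaccine", dump)
    print(round(score, 3))
```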
QE using WordNet
After preprocessing of the initial query, the individual terms and phrases obtained as keywords are searched in WordNet for QE. While extracting semantically similar terms from WordNet, more priority is given to the phrases in the query than the individual terms. Specifically, phrases (formed by two consecutive words) are looked up first in WordNet for expansion. Only when no entity is found in WordNet corresponding to a phrase, its individual terms are looked up separately in WordNet. It should be noted that phrases are considered only at the time of finding semantically similar terms from WordNet.
When querying for semantically similar terms from WordNet, only the synonym and hyponym sets of the query term are considered as candidate expansion terms. Here, synonyms and hyponyms are fetched at two levels, i.e., for an initial query term Q_i, at level one its synonyms, denoted x_i, are considered, and, at level two, the synonyms of the x_i's are considered, as shown in Fig. 5. The final synonym set used for QE is the union of the level-one and level-two synonyms. Hyponyms are also fetched similarly at two levels. After fetching synonyms and hyponyms at two levels, a wide range of semantically similar terms is obtained. Next, we rank these terms using tf·idf:
Score(t_1) = tf(t_1, t) · idf(t_1, W_D)    (5)
where: t is the initial query term, t_1 is an expanded term, tf(t_1, t) is the term frequency of the expanded term t_1 in the Wikipedia article of the query term t, and idf(t_1, W_D) is the inverse document frequency of the term t_1 in the whole Wikipedia dump W_D.
The idf is calculated as given in Eq. 4. After ranking the expanded terms based on the above score, we collect the top n terms as the intermediate expanded query. These intermediate terms are re-weighted using the correlation score. The top m terms chosen on the basis of the correlation score (as described in Sec. 3.4) become the second part of the expanded query; the first part is obtained from Wikipedia as described before.
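The two-level WordNet expansion described above can be sketched with NLTK's WordNet interface; the function names are ours and the snippet only gathers candidate terms (the tf·idf ranking of Eq. (5) would then follow as in the previous sketch).

```python
# Sketch of the two-level WordNet expansion: collect synonyms/hyponyms of the
# query term (level one) and then synonyms/hyponyms of those terms (level two).
# Requires: pip install nltk, plus nltk.download('wordnet').
from nltk.corpus import wordnet as wn

def one_level(term):
    """Synonym and hyponym lemmas of every synset of `term`."""
    out = set()
    for syn in wn.synsets(term):
        out.update(l.name().replace("_", " ") for l in syn.lemmas())
        for hypo in syn.hyponyms():
            out.update(l.name().replace("_", " ") for l in hypo.lemmas())
    out.discard(term)
    return out

def two_level_expansion(term):
    level1 = one_level(term)
    level2 = set()
    # Level two can grow quickly; in practice the terms are pruned by scoring.
    for t in level1:
        level2 |= one_level(t)
    return level1 | level2

if __name__ == "__main__":
    candidates = two_level_expansion("vaccine")
    print(len(candidates), sorted(candidates)[:10])
```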
Re-weighting Expanded Terms
So far, a set of candidate expansion terms has been obtained, where each expansion term is strongly connected to an individual query term or phrase. These terms have been assigned weights using the in-link score (for terms obtained from Wikipedia) and the tf·idf score (for terms obtained from WordNet). However, this may not properly capture the relationship of the expansion term to the query as a whole. For example, the word "technology" is frequently associated with the word "information". Here, expanding the query term "technology" with "information" might work well for some queries such as "engineering technology", "science technology" and "educational technology" but might not work well for others such as "music technology", "food technology", and "financial technology". This problem has also been discussed in reference [13]. To resolve this language ambiguity problem, we re-weight the expanded terms using a correlation score [79,103]. The logic behind doing so is that if an expansion feature is correlated to several individual query terms, then the chances are high that it will be correlated to the query as a whole as well.
The correlation score is described as follows. Let q be the original query and let t_1 be a candidate expansion term. The correlation score of t_1 with q is calculated as:
C_{q,t_1} = (1/|q|) Σ_{t∈q} c_{t,t_1} = (1/|q|) Σ_{t∈q} w_{t,a_t} · w_{t_1,a_t}    (6)
where: c_{t,t_1} denotes the correlation (similarity) score between the terms t and t_1, and w_{t,a_t} (w_{t_1,a_t}) is the weight of the term t (t_1) in the article a_t of the term t.
The weight of the term t in its article a_t, denoted w_{t,a_t} (w_{t_1,a_t} is similarly defined), is computed as:
w_{t,a_t} = tf(t, a_t) · itf(t, a_q) = tf(t, a_t) · log( T / |T_{a_t}| )    (7)
where: tf(t, a_t) is the term frequency of the term t in its article a_t, a_q denotes all Wikipedia articles corresponding to the terms in the original query q, itf(t, a_q) is the inverse term frequency of the term t associated with a_q, T is the frequency of the term t in all the Wikipedia articles in the set a_q, and |T_{a_t}| is the frequency of the term t in the article a_t.
After assigning the correlation score to expansion terms, we collect the top m terms from both data sources to form the final set of expanded terms.
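A small sketch of the correlation-based re-weighting of Eqs. (6)-(7), under the assumption that the Wikipedia article text of each query term is available as a plain string; the toy texts and helper names are illustrative.

```python
# Sketch of the correlation-based re-weighting of Eqs. (6)-(7): a candidate
# expansion term is scored against every query term using the Wikipedia
# article a_t associated with that query term (toy texts below).
import math
import re

def term_freq(term, text):
    return len(re.findall(r"\b%s\b" % re.escape(term.lower()), text.lower()))

def weight(term, article_text, all_query_articles):
    """w_{t,a_t} = tf(t, a_t) * log(T / T_{a_t}), in the spirit of Eq. (7)."""
    tf_in_article = term_freq(term, article_text)
    total = sum(term_freq(term, a) for a in all_query_articles)
    if tf_in_article == 0 or total == 0:
        return 0.0
    return tf_in_article * math.log(total / tf_in_article + 1e-9)

def correlation_score(candidate, query_terms, articles):
    """C_{q,t1}: average over query terms of w_{t,a_t} * w_{t1,a_t}, Eq. (6)."""
    texts = [articles[t] for t in query_terms]
    score = 0.0
    for t, a_t in zip(query_terms, texts):
        score += weight(t, a_t, texts) * weight(candidate, a_t, texts)
    return score / len(query_terms)

if __name__ == "__main__":
    # Toy article texts standing in for the Wikipedia pages of the query terms.
    articles = {
        "swine": "swine flu spread among pigs and humans vaccine trials",
        "flu":   "flu influenza vaccine seasonal flu outbreaks vaccination",
    }
    print(correlation_score("vaccine", ["swine", "flu"], articles))
```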
Experimental Setup
In order to evaluate the proposed WWQE approach, the experiments were carried out on 50 queries from the FIRE ad-hoc test collections [50]. As real-life queries are short, we used only the title field of all queries. We used Brill's tagger to assign a POS tag to each query term for extracting the phrases and individual words. These phrases and individual words were used for QE. We used the most recent Windows version of WordNet 2.1 to extract the two levels of synset terms, and Wikipedia for in-link extraction for QE.
We use the Wikipedia dump (also known as the 'WikiDump') for in-link extraction. The Wikipedia dump contains every Wikipedia article in XML format. As an open-source project, the Wikipedia dump can be downloaded from https://dumps.wikimedia.org/. We downloaded the English Wikipedia dump titled "enwiki-20170101-pages-articles-multistream.xml" of January 2017.
We compare the performance of our query expansion technique with several existing weighting models as described in Sec.4.2.
Dataset
We use the well-known benchmark dataset of the Forum for Information Retrieval Evaluation (FIRE) [50] to evaluate our proposed WWQE approach. Table 1 summarizes the dataset used. The FIRE collection consists of a very large set of documents on which IR is done, a set of questions (called topics) and the right answers (called relevance judgments) stating the relevance of documents to the corresponding topic(s). The FIRE dataset consists of a large collection of newswire articles from two sources, namely BDnews24 [15] and The Telegraph [8], provided by the Indian Statistical Institute, Kolkata, India.
Evaluation Metrics
We used the TERRIER retrieval system for all our experimental evaluations. We use the title field of the topics in the FIRE dataset. For indexing the documents, stopwords are first removed, and then Porter's stemmer is used for stemming the documents. All experimental evaluations are based on the unigram word assumption, i.e., all documents and queries in the corpus are indexed using single terms. We did not use any phrase or positional information. To compare the effectiveness of our expansion technique, we used the following weighting models: IFB2, a probabilistic divergence from randomness (DFR) model [6]; the BM25 model of Okapi [82]; Laplace's law of succession I(n)L2 [90]; the log-logistic DFR model LGD [27]; the divergence from independence model DPH [5]; and the standard tf.idf model. The parameters for these models were set to the default values in TERRIER. We evaluate the results on standard evaluation metrics: MAP (mean average precision), GM MAP (geometric mean average precision), P@10 (precision at top 10 ranks), P@20, P@30, bpref (binary preference) and the overall recall (number of relevant documents retrieved). Additionally, we report the percentage improvement in MAP over the baseline (non-expanded query) for each expansion method.
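For reference, two of the rank-based metrics that recur throughout the evaluation, P@k and MAP, can be computed as in the sketch below (bpref and GM MAP are omitted); the toy run and relevance judgments are invented for illustration.

```python
# Reference-style sketch of two reported metrics: precision at k and
# mean average precision (MAP), computed from ranked result lists and the
# relevance judgments (qrels).
def precision_at_k(ranked_docs, relevant, k):
    top = ranked_docs[:k]
    return sum(1 for d in top if d in relevant) / float(k)

def average_precision(ranked_docs, relevant):
    hits, score = 0, 0.0
    for i, d in enumerate(ranked_docs, start=1):
        if d in relevant:
            hits += 1
            score += hits / float(i)
    return score / max(1, len(relevant))

def mean_average_precision(runs, qrels):
    """runs/qrels: dicts keyed by query id -> ranked list / set of relevant ids."""
    aps = [average_precision(runs[q], qrels[q]) for q in runs]
    return sum(aps) / len(aps)

if __name__ == "__main__":
    runs  = {"q1": ["d3", "d1", "d7", "d2"], "q2": ["d5", "d9"]}
    qrels = {"q1": {"d1", "d2"}, "q2": {"d9"}}
    print(precision_at_k(runs["q1"], qrels["q1"], 2))        # 0.5
    print(round(mean_average_precision(runs, qrels), 3))     # 0.5
```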
Experimental Results
The aim of our experiments is to explore the effectiveness of the proposed Wikipedia-WordNet based QE technique (WWQE) by comparing it with the three baselines on popular weighting models and evaluation metrics. The comparison was done over three baselines: (i) the unexpanded query, (ii) query expansion using Wikipedia alone, and (iii) query expansion using WordNet alone. The comparative analysis is shown in Tables 2, 3 and 4. Table 4 shows the performance comparison of the proposed WWQE technique over popular weighting models in the context of MAP, GM MAP, P@10, P@20, P@30 and relevant returns. The table shows that the proposed WWQE technique is compatible with the existing popular weighting models and that it also improves information retrieval effectively. It also shows the relative percentage improvements (within parentheses) of various standard evaluation metrics measured against no expansion. By using the proposed query expansion technique (WWQE), the weighting models improve the MAP by up to 24% and GM MAP by 48%. Based on the results presented in Table 4, we can say that in the context of all evaluation parameters, the proposed QE technique performs well with all weighting models. Figure 6 shows the comparative analysis of the precision-recall curve of the WWQE technique with various weighting models. This graph plots the interpolated precision of an IR system using 11 standard cutoff values from the recall levels, i.e., {0, 0.1, 0.2, 0.3, ..., 1.0}. These graphs are widely used to evaluate IR systems that return ranked documents (i.e., averaging and plotting retrieval results). Comparisons are best made in three different recall ranges: 0 to 0.2, 0.2 to 0.8, and 0.8 to 1. These ranges characterize high precision, middle recall, and high recall performance, respectively. Based on the graphs presented in Figures 6a and 6b, we arrive at the conclusion that the P-R curves of the various weighting models have nearly the same retrieval results, with or without QE, respectively. Therefore, we can say that for improving information retrieval in QE, the choice of the weighting model is not so important. The importance lies in the choice of the technique used for selecting the relevant expansion terms. The relevant expansion terms, in turn, come from data sources. Hence, the data sources also play an important role for effective QE. This conclusion also supports our proposed WWQE technique, where we select the expansion terms on the basis of individual term weighting as well as assign a correlation score on the basis of the entire query. P@5 measures the precision over the top 5 documents retrieved, and bpref measures a preference relation about how many judged relevant documents are ranked before judged irrelevant documents. Figure 9 compares the WWQE technique in terms of MAP, bpref and P@5 with the baseline (unexpanded), QE using WordNet alone and QE using Wikipedia alone. The IFB2 model is used for term weighting in this experimental evaluation.
After evaluating the performance of the proposed QE technique on several popular evaluation metrics, it can be concluded that the proposed QE technique (WWQE) performs well with all weighting models on several evaluation parameters. Therefore, the proposed WWQE technique is effective in improving information retrieval results.
Conclusion
This article presents a novel Wikipedia WordNet based Query Expansion (WWQE) technique that considers individual terms and phrases as the expansion terms. The proposed technique employs a two-level strategy to select terms from WordNet. First, it fetches the synsets of the initial query terms. Then, it extracts the synsets of these synsets. In order to score the expansion terms on Wikipedia, we proposed a new weighting score named the in-link score. The in-link score assigns a score to each expansion term extracted from Wikipedia, and a tf-idf based scoring system is used to assign a score to expansion terms extracted from WordNet. After assigning scores to individual query terms, we further re-weight the selected expansion terms using a correlation score with respect to the entire query. The combination of the two data sources works well for extracting relevant expansion terms, and the proposed QE technique performs well with these terms on several weighting models. It also yields better results when compared to the two methods individually. The results on the basis of several evaluation metrics on the FIRE dataset demonstrate the effectiveness of our proposed QE technique in the field of information retrieval. The proposed query expansion technique effectively improves IR when evaluated with several popular weighting models.
Fig. 9: Comparative analysis of the WWQE technique with the baseline, WordNet and Wikipedia | 6,631 |
1901.10185 | 2952051141 | When facing large-scale image datasets, online hashing serves as a promising solution for online retrieval and prediction tasks. It encodes the online streaming data into compact binary codes, and simultaneously updates the hash functions to renew codes of the existing dataset. To this end, the existing methods update hash functions solely based on the new data batch, without investigating the correlation between such new data and the existing dataset. In addition, existing works update the hash functions using a relaxation process in its corresponding approximated continuous space. And it remains as an open problem to directly apply discrete optimizations in online hashing. In this paper, we propose a novel supervised online hashing method, termed Balanced Similarity for Online Discrete Hashing (BSODH), to solve the above problems in a unified framework. BSODH employs a well-designed hashing algorithm to preserve the similarity between the streaming data and the existing dataset via an asymmetric graph regularization. We further identify the "data-imbalance" problem brought by the constructed asymmetric graph, which restricts the application of discrete optimization in our problem. Therefore, a novel balanced similarity is further proposed, which uses two equilibrium factors to balance the similar and dissimilar weights and eventually enables the usage of discrete optimizations. Extensive experiments conducted on three widely-used benchmarks demonstrate the advantages of the proposed method over the state-of-the-art methods. | For SGD-based methods, Online Kernel Hashing (OKH) @cite_6 is the first attempt to learn hash functions via an online passive-aggressive strategy @cite_5 , which updates hash functions to retain important information while embracing information from new pairwise input. Adaptive Hashing (AdaptHash) @cite_10 adopts a hinge loss to decide which hash function to be updated. Similar to OKH, labels of pairwise similarity are needed for AdaptHash. Inspired by Error Correcting Output Codes (ECOCs) @cite_22 , Online Supervised Hashing (OSH) @cite_17 adopts a more general two-step hash learning framework, where each class is firstly deployed with a vector from ECOCs, and then an convex function is further exploited to replace the @math loss. In @cite_12 , an OH with Mutual Information (MIHash) is developed which targets at optimizing the mutual information between neighbors and non-neighbors. | {
"abstract": [
"Multiclass learning problems involve finding a definition for an unknown function f(x) whose range is a discrete set containing k > 2 values (i.e., k \"classes\"). The definition is acquired by studying collections of training examples of the form (xi, f(xi)). Existing approaches to multiclass learning problems include direct application of multiclass algorithms such as the decision-tree algorithms C4.5 and CART, application of binary concept learning algorithms to learn individual binary functions for each of the k classes, and application of binary concept learning algorithms with distributed output representations. This paper compares these three approaches to a new technique in which error-correcting codes are employed as a distributed output representation. We show that these output representations improve the generalization performance of both C4.5 and backpropagation on a wide range of multiclass learning tasks. We also demonstrate that this approach is robust with respect to changes in the size of the training sample, the assignment of distributed representations to particular classes, and the application of overfitting avoidance techniques such as decision-tree pruning. Finally, we show that--like the other methods--the error-correcting code technique can provide reliable class probability estimates. Taken together, these results demonstrate that error-correcting output codes provide a general-purpose method for improving the performance of inductive learning programs on multiclass problems.",
"",
"",
"With the staggering growth in image and video datasets, algorithms that provide fast similarity search and compact storage are crucial. Hashing methods that map the data into Hamming space have shown promise, however, many of these methods employ a batch-learning strategy in which the computational cost and memory requirements may become intractable and infeasible with larger and larger datasets. To overcome these challenges, we propose an online learning algorithm based on stochastic gradient descent in which the hash functions are updated iteratively with streaming data. In experiments with three image retrieval benchmarks, our online algorithm attains retrieval accuracy that is comparable to competing state-of-the-art batch-learning solutions, while our formulation is orders of magnitude faster and being online it is adaptable to the variations of the data. Moreover, our formulation yields improved retrieval performance over a recently reported online hashing technique, Online Kernel Hashing.",
"Learning-based hashing methods are widely used for nearest neighbor retrieval, and recently, online hashing methods have demonstrated good performance-complexity trade-offs by learning hash functions from streaming data. In this paper, we first address a key challenge for online hashing: the binary codes for indexed data must be recomputed to keep pace with updates to the hash functions. We propose an efficient quality measure for hash functions, based on an information-theoretic quantity, mutual information, and use it successfully as a criterion to eliminate unnecessary hash table updates. Next, we also show how to optimize the mutual information objective using stochastic gradient descent. We thus develop a novel hashing method, MIHash, that can be used in both online and batch settings. Experiments on image retrieval benchmarks (including a 2.5M image dataset) confirm the effectiveness of our formulation, both in reducing hash table recomputations and in learning high-quality hash functions.",
"Fast nearest neighbor search is becoming more and more crucial given the advent of large-scale data in many computer vision applications. Hashing approaches provide both fast search mechanisms and compact index structures to address this critical need. In image retrieval problems where labeled training data is available, supervised hashing methods prevail over unsupervised methods. Most state-of-the-art supervised hashing approaches employ batch-learners. Unfortunately, batch-learning strategies may be inefficient when confronted with large datasets. Moreover, with batch-learners, it is unclear how to adapt the hash functions as the dataset continues to grow and new variations appear over time. To handle these issues, we propose OSH: an Online Supervised Hashing technique that is based on Error Correcting Output Codes. We consider a stochastic setting where the data arrives sequentially and our method learns and adapts its hashing functions in a discriminative manner. Our method makes no assumption about the number of possible class labels, and accommodates new classes as they are presented in the incoming data stream. In experiments with three image retrieval benchmarks, our method yields state-of-the-art retrieval performance as measured in Mean Average Precision, while also being orders-of-magnitude faster than competing batch methods for supervised hashing. Also, our method significantly outperforms recently introduced online hashing solutions."
],
"cite_N": [
"@cite_22",
"@cite_6",
"@cite_5",
"@cite_10",
"@cite_12",
"@cite_17"
],
"mid": [
"1676820704",
"",
"",
"2204148968",
"2950680818",
"2535352129"
]
} | Towards Optimal Discrete Online Hashing with Balanced Similarity | With the increasing amount of image data available on the Internet, hashing has been widely applied to approximate nearest neighbor (ANN) search (Wang et al. 2016). It aims at mapping real-valued image features to compact binary codes, which offers both low storage and efficient computation on large-scale datasets. One promising direction is online hashing (OH), which has attracted increasing attention recently. Under such an application scenario, data are often fed into the system in a streaming fashion, while traditional hashing methods can hardly accommodate this configuration. In OH, the online streaming data is encoded into compact binary codes, while the hash functions are simultaneously updated in order to renew the codes of the existing data.
In principle, OH aims to analyze the streaming data while preserving the structure of the existing dataset. In the literature, several recent works have been proposed to handle OH. The representative works include, but are not limited to, OKH (Huang, Yang, and Zheng 2013), SketchHash (Leng et al. 2015), AdaptHash (Fatih and Sclaroff 2015), OSH (Fatih, Bargal, and Sclaroff 2017), FROSH (Chen, King, and Lyu 2017) and MIHash (Fatih et al. 2017). However, the performance of OH is still far from satisfactory for real-world applications. We attribute this to two open issues, i.e., updating imbalance and optimization inefficiency.
In terms of the updating imbalance, the existing OH schemes update hash functions solely based on the newly coming data batch, without investigating the correlation between such new data and the existing dataset. To that effect, an asymmetric graph can be constructed to preserve the similarity between the new data and the existing dataset, as shown in Fig. 1. Under the online setting, the similarity matrix is usually sparse and unbalanced, i.e., a data-imbalance phenomenon, since most image pairs are dissimilar and only a few are similar. The updating imbalance issue, if not well addressed, might render the learned binary codes ineffective for both the new data and the existing data, and hence lead to severe performance degeneration for OH schemes.
In terms of the optimization inefficiency, the existing OH schemes still rely on the traditional relaxation (Gong and Lazebnik 2011; Datar et al. 2004; Jiang and Li 2015) over the approximated continuous space to learn hash functions, which often makes the produced hash functions less effective, especially when the code length increases (Liu et al. 2014; Shen et al. 2015b). Despite the recent advances in direct discrete optimizations in offline hashing (Ji et al. 2017; Jiang and Li 2018) with discrete cyclic coordinate descent (DCC) (Shen et al. 2015b), such discrete optimizations cannot be directly applied to the online case, which involves a serious data-imbalance problem, since the optimization heavily relies on the dissimilar pairs and thus loses the information of the similar pairs.
Figure 1: An example of the data-imbalance problem and the learned binary codes. The similarity matrix S^t is highly sparse under the online setting and thus tends to generate consistent binary codes, which are indiscriminate and uninformative. With the introduction of the balanced similarity S̃^t, codes of similar items are tightened while codes of dissimilar items are expanded. By combining with discrete optimizations, advanced retrieval results are obtained.
We argue that the above two issues are not independent. In particular, to conduct discrete optimizations, the existing offline methods typically adopt an asymmetric graph regularization to preserve the similarity between training data. Constructing the asymmetric graph consumes both time and memory. Note that, since the streaming data comes in a small batch, such an asymmetric graph between the streaming data and the existing dataset can be dynamically constructed under the online setting. However, as verified both theoretically and experimentally later, it still cannot avoid the generation of consistent codes (most bits are the same) due to the data-imbalance problem brought by the constructed asymmetric graph in online learning, as illustrated in Fig. 1.
In this paper, we propose a novel supervised OH method, termed Balanced Similarity for Online Discrete Hashing (BSODH), to handle the updating imbalance and optimization inefficiency problems in a unified framework. First, unlike the previous OH schemes, the proposed BSODH mainly considers updating the hash functions with the correlation between the online streaming data and the existing dataset. Therefore, we adopt an asymmetric graph regularization to preserve this relation in the produced Hamming space. Second, we further integrate discrete optimizations into OH, which essentially tackles the challenge of the quantization error brought by relaxation learning. Finally, we present a new similarity measurement, termed balanced similarity, to solve the problem of data imbalance during the discrete binary learning process. In particular, we introduce two equilibrium factors to balance the weights of similar and dissimilar data, and thus enable the discrete optimizations. Extensive experimental results on three widely-used benchmarks, i.e., CIFAR10, Places205 and MNIST, demonstrate the advantages of the proposed BSODH over the state-of-the-art methods.
To summarize, the main contributions of the proposed BSODH in this paper include:
• To capture the data correlation between online streaming data and the existing dataset, we introduce an asymmetric graph regularization to preserve such correlation in the produced Hamming space.
• To reduce the quantization error in the Hamming space, we design a customized discrete optimization algorithm. It handles the optimization inefficiency issue in the existing OH schemes, making discrete learning feasible for the first time in the online framework.
• We propose a balanced similarity matrix to handle the data-imbalance problem, which further prevents the generation of consistent binary codes, i.e., a phenomenon that previously occurred when directly applying discrete optimizations in the online setting.
The Proposed Method
Problem Definition
Given a dataset X = [x_1, ..., x_n] ∈ R^{d×n} with its corresponding labels L = [l_1, ..., l_n] ∈ N^n, where x_i ∈ R^d is the i-th instance with its class label l_i ∈ N. The goal of hashing is to learn a set of k-bit binary codes B = [b_1, ..., b_n] ∈ {−1, +1}^{k×n}, where b_i is the binary vector of x_i. A widely-adopted hash function is the linear hash mapping (Gong and Lazebnik 2011; Fatih, Bargal, and Sclaroff 2017), i.e.,
B = F(X) = sgn(W^T X),    (1)
where W = [w_1, ..., w_k] ∈ R^{d×k} is the projection matrix to be learned, with w_i being responsible for the i-th hash bit. The sign function sgn(x) returns +1 if the input variable x > 0, and returns −1 otherwise. For the online learning problem, the data comes in a streaming fashion. Therefore, X is not available once and for all. Without loss of generality, we denote X_s^t = [x_{s1}^t, ..., x_{sn_t}^t] ∈ R^{d×n_t} as the input streaming data at the t-th stage, and denote L_s^t = [l_{s1}^t, ..., l_{sn_t}^t] ∈ N^{n_t} as the corresponding label set, where n_t is the size of the batch. We denote X_e^t = [X_s^1, ..., X_s^{t−1}] = [x_{e1}^t, ..., x_{em_t}^t] ∈ R^{d×m_t}, where m_t = n_1 + ... + n_{t−1}, as the previously existing dataset with its label set L_e^t = [L_s^1, ..., L_s^{t−1}] = [l_{e1}^t, ..., l_{em_t}^t] ∈ N^{m_t}. Correspondingly, we denote B_s^t = sgn((W^t)^T X_s^t) = [b_{s1}^t, ..., b_{sn_t}^t] ∈ R^{k×n_t} and B_e^t = sgn((W^t)^T X_e^t) = [b_{e1}^t, ..., b_{em_t}^t] ∈ R^{k×m_t} as the discretely learned binary codes for X_s^t and X_e^t, respectively. Under the online setting, the parameter matrix W^t should be updated based on the newly coming batch X_s^t instead of the existing dataset X_e^t.
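A minimal NumPy sketch of the notation above: random data stand in for real image features, and W is drawn at random rather than learned, purely to show the shapes involved in B = sgn(W^T X).

```python
# Minimal NumPy sketch: a linear hash function B = sgn(W^T X) applied to an
# existing set X_e and a newly arriving streaming batch X_s.
import numpy as np

rng = np.random.default_rng(0)
d, k = 32, 8                  # feature dimension, code length
m_t, n_t = 100, 10            # existing set size, streaming batch size

W = rng.standard_normal((d, k))        # projection matrix (to be learned online)
X_e = rng.standard_normal((d, m_t))    # existing dataset at stage t
X_s = rng.standard_normal((d, n_t))    # newly arriving batch at stage t

def hash_codes(W, X):
    """B = sgn(W^T X) in {-1, +1}^{k x n}; zeros are mapped to +1."""
    B = np.sign(W.T @ X)
    B[B == 0] = 1
    return B

B_e = hash_codes(W, X_e)   # codes of the existing data
B_s = hash_codes(W, X_s)   # codes of the streaming batch
print(B_s.shape, np.unique(B_s))   # (8, 10) [-1.  1.]
```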
The Proposed Framework
Ideally, if data x_i and x_j are similar, the Hamming distance between their binary codes should be minimized, and vice versa. This is achieved by minimizing the quantization error between the similarity matrix and the Hamming similarity matrix (Liu et al. 2012). However, considering the streaming batch data alone does not reflect the structural relationship of all data samples. Therefore, following (Shen et al. 2015a; Jiang and Li 2018), we resort to preserving the similarity in the Hamming space between the new data batch X_s^t and the existing dataset X_e^t at the t-th stage with an asymmetric graph, as shown in Fig. 1. To that effect, we minimize the Frobenius norm loss between the supervised similarity and the inner products of B_s^t and B_e^t as follows:
min_{B_s^t, B_e^t} ||(B_s^t)^T B_e^t − k S^t||_F^2
s.t. B_s^t ∈ {−1, 1}^{k×n_t}, B_e^t ∈ {−1, 1}^{k×m_t}.    (2)
where S^t ∈ R^{n_t×m_t} is the similarity matrix between X_s^t and X_e^t. Note that s_{ij}^t = 1 iff x_{si}^t and x_{ej}^t share the same label, i.e., l_{si}^t = l_{ej}^t; otherwise, s_{ij}^t = −1. ||·||_F denotes the Frobenius norm.
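The asymmetric graph term of Eq. (2) can be illustrated as follows; the labels, codes and sizes are random placeholders, so the printed loss only shows how S^t is built from label agreement and plugged into the Frobenius-norm objective.

```python
# Sketch of the asymmetric graph term of Eq. (2): build S^t from the labels of
# the streaming batch and the existing set, then measure the Frobenius-norm
# gap between k*S^t and the code inner products (B_s^t)^T B_e^t.
import numpy as np

rng = np.random.default_rng(1)
k, n_t, m_t = 8, 10, 100

labels_s = rng.integers(0, 5, size=n_t)     # labels of the streaming batch
labels_e = rng.integers(0, 5, size=m_t)     # labels of the existing dataset
B_s = rng.choice([-1, 1], size=(k, n_t))    # placeholder binary codes
B_e = rng.choice([-1, 1], size=(k, m_t))

# S^t_{ij} = 1 if the pair shares a label, -1 otherwise.
S = np.where(labels_s[:, None] == labels_e[None, :], 1, -1)

loss = np.linalg.norm(B_s.T @ B_e - k * S, ord="fro") ** 2
print(S.shape, loss)   # (10, 100) and the (unoptimized) loss value
```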
Besides, we aim to learn the hash functions by minimizing the error term between the linear hash functions F in Eq. 1 and the corresponding binary codes B_s^t, which is constrained by ||B_s^t − F(X_s^t)||_F^2. It can be easily combined with the above asymmetric graph, which can be seen as a regularizer for learning the hash functions, and the objective is rewritten as:
min_{B_s^t, B_e^t, W^t} ||(B_s^t)^T B_e^t − k S^t||_F^2 (term 1) + σ_t ||F(X_s^t) − B_s^t||_F^2 (term 2) + λ_t ||W^t||_F^2 (term 3)
s.t. B_s^t ∈ {−1, 1}^{k×n_t}, B_e^t ∈ {−1, 1}^{k×m_t},    (3)
where σ_t and λ_t serve as two constants at the t-th stage to balance the trade-offs among the three learning parts. We argue that using such a framework can learn better coding functions. Firstly, in term 2, W^t is optimized based on the dynamic streaming data X_s^t, which makes the hash function more adaptive to unseen data. Secondly, as in Eq. 7, the training complexity for learning W^t based on X_s^t is O(d^2 n_t + d^3), while it is O(d^2 m_t + d^3) for learning W^t based on X_e^t. Therefore, updating W^t based on X_e^t is impractical when m_t ≫ n_t with the increasing number of new data batches. Further, it also violates the basic principle of OH that W^t can only be updated based on the newly coming data. Last but not least, with the asymmetric graph loss in term 1, the structural relationship in the original space can be well preserved in the produced Hamming space, which makes the learned binary codes B_s^t more robust. The above discussion will be verified in the subsequent experiments.
The Data-Imbalance Issue
As shown in Fig.1, the similarity matrix $\mathbf{S}^t$ between the streaming data and the existing dataset is very sparse. That is to say, there is a severe data-imbalance phenomenon: most image pairs are dissimilar and only a few pairs are similar. As a consequence, the optimization relies heavily on the dissimilar information and misses the similar information, which leads to performance degeneration.
As a theoretical analysis, we decouple the whole sparse similarity matrix into two subparts, where similar pairs and dissimilar pairs are separately considered. Term 1 in Eq.3 is then reformulated as:
$$\text{term 1} = \underbrace{\sum_{i,j:\, S_{ij}^t = 1} \big(\mathbf{b}_{si}^{tT}\mathbf{b}_{ej}^t - k\big)^2}_{\text{term A}} + \underbrace{\sum_{i,j:\, S_{ij}^t = -1} \big(\mathbf{b}_{si}^{tT}\mathbf{b}_{ej}^t + k\big)^2}_{\text{term B}} \quad \text{s.t.} \;\; \mathbf{b}_{si}^t \in \{-1,1\}^k, \; \mathbf{b}_{ej}^t \in \{-1,1\}^k. \quad (4)$$
Analysis 1. We denote $\mathcal{S}_1^t = \{S_{ij}^t \in \mathbf{S}^t \,|\, S_{ij}^t = 1\}$, i.e., the set of similar pairs, and $\mathcal{S}_2^t = \{S_{ij}^t \in \mathbf{S}^t \,|\, S_{ij}^t = -1\}$, i.e., the set of dissimilar pairs. In the online setting, when $n_t \ll m_t$ as new data batches accumulate, the similarity matrix $\mathbf{S}^t$ becomes highly sparse, i.e., $|\mathcal{S}_1^t| \ll |\mathcal{S}_2^t|$. In other words, term 1 suffers from a severe data-imbalance problem. Furthermore, since term 1 $\gg$ term 2 in Eq.3 and term B $\gg$ term A in Eq.4, the learning of $\mathbf{B}_s^t$ and $\mathbf{B}_e^t$ relies heavily on term B.
A straightforward way to minimize term B is to have $\mathbf{b}_{si}^{tT}\mathbf{b}_{ej}^t = -k$, i.e., $\mathbf{b}_{si}^t = -\mathbf{b}_{ej}^t$. Similarly, for any $\mathbf{b}_{eg}^t \in \mathbf{B}_e^t$ with $g \neq j$, we have $\mathbf{b}_{si}^t = -\mathbf{b}_{eg}^t$. It is then easy to see that $\mathbf{b}_{ej}^t = \mathbf{b}_{eg}^t$. In other words, all items in $\mathbf{B}_e^t$ share the same binary code; likewise, all items in $\mathbf{B}_s^t$ share a single binary code that is the opposite of $\mathbf{B}_e^t$. Fig.1 illustrates this extreme circumstance. However, as can be seen from term 2 in Eq.3, the performance of the hash functions depends deeply on the learned $\mathbf{B}_s^t$. Therefore, such a data-imbalance problem biases all the codes produced by $\mathbf{W}^t$, which seriously affects the retrieval performance.
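A quick numerical sanity check of Analysis 1 (a toy sketch with made-up sizes, not the paper's experimental protocol) shows how lopsided $\mathbf{S}^t$ becomes once the existing set dwarfs the batch:

```python
import numpy as np

rng = np.random.default_rng(0)
num_classes, n_t, m_t = 205, 50, 20000         # e.g., a Places205-like label space

labels_s = rng.integers(0, num_classes, n_t)   # labels of the streaming batch
labels_e = rng.integers(0, num_classes, m_t)   # labels of the existing dataset

# S^t_{ij} = 1 if the labels match, -1 otherwise
S = np.where(labels_s[:, None] == labels_e[None, :], 1, -1)

num_similar = int((S == 1).sum())
num_dissimilar = int((S == -1).sum())
print(f"|S1| = {num_similar}, |S2| = {num_dissimilar}, "
      f"dissimilar/similar ratio ~ {num_dissimilar / num_similar:.0f}x")
```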
Balanced Similarity
To solve the above problem, a common remedy is to rebalance term 1 and term 2 in Eq.3 by scaling up the parameter $\sigma^t$. However, as verified later in our experiments (see Fig.5), such a scheme still yields unsatisfactory performance and gets stuck on how to choose an appropriate value of $\sigma^t$ from a large range.⁴ Therefore, we present another scheme that handles this problem by expanding the set of feasible solutions for both $\mathbf{B}_e^t$ and $\mathbf{B}_s^t$. Concretely, we propose to use a balanced similarity matrix $\tilde{\mathbf{S}}^t$ with each element defined as follows:
$$\tilde{S}_{ij}^t = \begin{cases} \eta_s S_{ij}^t, & S_{ij}^t = 1, \\ \eta_d S_{ij}^t, & S_{ij}^t = -1, \end{cases} \quad (5)$$
where $\eta_s$ and $\eta_d$ are two positive equilibrium factors used to balance the similar and dissimilar weights, respectively. When $\eta_s > \eta_d$, the Hamming distances among similar pairs are reduced, while those among dissimilar pairs are enlarged. Analysis 2. With the balanced similarity, the goal of term B in Eq.4 becomes $\mathbf{b}_{si}^{tT}\mathbf{b}_{ej}^t \approx -k\eta_d$. The number of common hash bits between $\mathbf{b}_{si}^t$ and $\mathbf{b}_{ej}^t$ is then at least $\lfloor k(1-\eta_d)/2 \rfloor$.⁵ Therefore, fixing $\mathbf{b}_{si}^t$, the number of feasible solutions for $\mathbf{b}_{ej}^t$ is at least $\binom{k}{\lfloor k(1-\eta_d)/2 \rfloor}$. Thus, the balanced similarity matrix $\tilde{\mathbf{S}}^t$ effectively avoids the generation of consistent binary codes, as shown in Fig.1. By replacing the similarity matrix $\mathbf{S}^t$ in Eq.3 with the balanced similarity matrix $\tilde{\mathbf{S}}^t$, the overall objective function can be written as:
$$\min_{\mathbf{B}_s^t, \mathbf{B}_e^t, \mathbf{W}^t} \underbrace{\left\| \mathbf{B}_s^{tT}\mathbf{B}_e^t - k\tilde{\mathbf{S}}^t \right\|_F^2}_{\text{term 1}} + \sigma^t \underbrace{\left\| F(\mathbf{X}_s^t) - \mathbf{B}_s^t \right\|_F^2}_{\text{term 2}} + \lambda^t \underbrace{\left\| \mathbf{W}^t \right\|_F^2}_{\text{term 3}}$$
$$\text{s.t.} \;\; \mathbf{B}_s^t \in \{-1,1\}^{k \times n_t}, \; \mathbf{B}_e^t \in \{-1,1\}^{k \times m_t}. \quad (6)$$
⁴ Under the balanced similarity, we constrain $\sigma^t$ to $[0, 1]$.
⁵ $\lfloor\cdot\rfloor$ denotes the rounding-down (floor) operation.
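The balanced similarity of Eq.5 is cheap to construct from labels alone. A minimal sketch follows (the function and variable names are ours; the default factors are the values reported later in the paper to work well on MNIST):

```python
import numpy as np

def balanced_similarity(labels_s, labels_e, eta_s=1.2, eta_d=0.2):
    """Eq.5: up-weight similar pairs by eta_s and down-weight dissimilar ones by eta_d."""
    S = np.where(labels_s[:, None] == labels_e[None, :], 1.0, -1.0)   # plain S^t
    return np.where(S > 0, eta_s * S, eta_d * S)                       # balanced S~^t

labels_s = np.array([0, 1, 2])
labels_e = np.array([0, 0, 1, 3])
print(balanced_similarity(labels_s, labels_e))
# [[ 1.2  1.2 -0.2 -0.2]
#  [-0.2 -0.2  1.2 -0.2]
#  [-0.2 -0.2 -0.2 -0.2]]
```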
The Optimization
Due to the binary constraints, the optimization problem of Eq.6 is non-convex with respect to $\mathbf{W}^t$, $\mathbf{B}_s^t$ and $\mathbf{B}_e^t$. To find a feasible solution, we adopt an alternating optimization approach, i.e., updating one variable while keeping the other two fixed, until convergence.
1) $\mathbf{W}^t$-step: Fix $\mathbf{B}_e^t$ and $\mathbf{B}_s^t$, then learn the hash weights $\mathbf{W}^t$. This sub-problem of Eq.6 is a classical linear regression that finds the best projection $\mathbf{W}^t$ by jointly minimizing term 2 and term 3. Therefore, we update $\mathbf{W}^t$ with the closed-form solution
$$\mathbf{W}^t = \sigma^t\big(\sigma^t \mathbf{X}_s^t\mathbf{X}_s^{tT} + \lambda^t\mathbf{I}\big)^{-1}\mathbf{X}_s^t\mathbf{B}_s^{tT}, \quad (7)$$
where $\mathbf{I}$ is the $d \times d$ identity matrix.
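The W-step is a ridge-regression-style update; a direct NumPy transcription of Eq.7 (a sketch with our own names, solving the linear system instead of forming an explicit inverse) is:

```python
import numpy as np

def update_W(X_s, B_s, sigma_t, lambda_t):
    """Eq.7: closed-form W^t from the current batch X_s (d x n_t) and its codes B_s (k x n_t)."""
    d = X_s.shape[0]
    A = sigma_t * (X_s @ X_s.T) + lambda_t * np.eye(d)     # d x d system matrix
    return sigma_t * np.linalg.solve(A, X_s @ B_s.T)       # d x k projection matrix

rng = np.random.default_rng(0)
W_t = update_W(rng.standard_normal((128, 50)),
               rng.choice([-1, 1], size=(32, 50)), sigma_t=0.5, lambda_t=0.6)
print(W_t.shape)  # (128, 32)
```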
2) $\mathbf{B}_e^t$-step: Fix $\mathbf{W}^t$ and $\mathbf{B}_s^t$, then update $\mathbf{B}_e^t$. Since only term 1 in Eq.6 contains $\mathbf{B}_e^t$, we optimize this term directly via a discrete optimization similar to (Kang, Li, and Zhou 2016), where the squared Frobenius norm in term 1 is replaced with the $L_1$ norm. The new formulation is:
$$\min_{\mathbf{B}_e^t} \left\| \mathbf{B}_s^{tT}\mathbf{B}_e^t - k\tilde{\mathbf{S}}^t \right\|_1 \quad \text{s.t.} \;\; \mathbf{B}_e^t \in \{-1,1\}^{k \times m_t}. \quad (8)$$
Similar to (Kang, Li, and Zhou 2016), the solution of Eq.8 is
$$\mathbf{B}_e^t = \mathrm{sgn}\big(\mathbf{B}_s^t\tilde{\mathbf{S}}^t\big). \quad (9)$$
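The codes of the existing data then follow in a single line; a sketch of Eq.9 (mapping sgn(0) to +1 is our own convention for keeping codes in {-1, +1}):

```python
import numpy as np

def update_B_e(B_s, S_tilde):
    """Eq.9: B_e^t = sgn(B_s^t @ S~^t); result has shape (k, m_t)."""
    return np.where(B_s @ S_tilde >= 0, 1, -1)   # (k x n_t) @ (n_t x m_t) -> k x m_t
```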
3) $\mathbf{B}_s^t$-step: Fix $\mathbf{B}_e^t$ and $\mathbf{W}^t$, then update $\mathbf{B}_s^t$. The corresponding sub-problem is:
$$\min_{\mathbf{B}_s^t} \left\| \mathbf{B}_s^{tT}\mathbf{B}_e^t - k\tilde{\mathbf{S}}^t \right\|_F^2 + \sigma^t \left\| \mathbf{W}^{tT}\mathbf{X}_s^t - \mathbf{B}_s^t \right\|_F^2 \quad \text{s.t.} \;\; \mathbf{B}_s^t \in \{-1,1\}^{k \times n_t}. \quad (10)$$
By expanding each term in Eq.10, the sub-problem for $\mathbf{B}_s^t$ reduces to minimizing the following formulation:
$$\min_{\mathbf{B}_s^t} \left\| \mathbf{B}_e^{tT}\mathbf{B}_s^t \right\|_F^2 + \underbrace{\left\| k\tilde{\mathbf{S}}^t \right\|_F^2}_{\text{const}} - 2\,\mathrm{tr}\big(k\tilde{\mathbf{S}}^t\mathbf{B}_e^{tT}\mathbf{B}_s^t\big) + \sigma^t\Big(\underbrace{\left\| \mathbf{W}^{tT}\mathbf{X}_s^t \right\|_F^2}_{\text{const}} + \underbrace{\left\| \mathbf{B}_s^t \right\|_F^2}_{\text{const}} - 2\,\mathrm{tr}\big(\mathbf{X}_s^{tT}\mathbf{W}^t\mathbf{B}_s^t\big)\Big)$$
$$\text{s.t.} \;\; \mathbf{B}_s^t \in \{-1,1\}^{k \times n_t}, \quad (11)$$
where the "const" terms denote constants. The optimization problem of Eq.11 is equivalent to
$$\min_{\mathbf{B}_s^t} \big\| \underbrace{\mathbf{B}_e^{tT}\mathbf{B}_s^t}_{\text{term I}} \big\|_F^2 - 2\,\mathrm{tr}\big(\underbrace{\mathbf{P}^T\mathbf{B}_s^t}_{\text{term II}}\big) \quad \text{s.t.} \;\; \mathbf{B}_s^t \in \{-1,1\}^{k \times n_t}, \quad (12)$$
where $\mathbf{P} = k\mathbf{B}_e^t\tilde{\mathbf{S}}^{tT} + \sigma^t \mathbf{W}^{tT}\mathbf{X}_s^t$ and $\mathrm{tr}(\cdot)$ denotes the trace. The problem in Eq.12 is NP-hard if the binary code matrix $\mathbf{B}_s^t$ is optimized directly. Inspired by recent advances in binary code optimization (Shen et al. 2015b), a closed-form solution for one row of $\mathbf{B}_s^t$ can be obtained while fixing all the other rows. Therefore, we first reformulate term I and term II in Eq.12 as follows:
$$\text{term I} = \mathbf{b}_{er}^{tT}\mathbf{b}_{sr}^t + \hat{\mathbf{B}}_e^{tT}\hat{\mathbf{B}}_s^t, \quad (13)$$
$$\text{term II} = \mathbf{p}_r^{T}\mathbf{b}_{sr}^t + \hat{\mathbf{P}}^{T}\hat{\mathbf{B}}_s^t, \quad (14)$$
where $\mathbf{b}_{er}^t$, $\mathbf{b}_{sr}^t$ and $\mathbf{p}_r$ denote the $r$-th rows of $\mathbf{B}_e^t$, $\mathbf{B}_s^t$ and $\mathbf{P}$, respectively, while $\hat{\mathbf{B}}_e^t$, $\hat{\mathbf{B}}_s^t$ and $\hat{\mathbf{P}}$ denote the matrices $\mathbf{B}_e^t$, $\mathbf{B}_s^t$ and $\mathbf{P}$ with those rows excluded.
Algorithm 1: Balanced Similarity for Online Discrete Hashing (BSODH). Require: training dataset $\mathbf{X}$ with its label space $\mathbf{L}$, the number of hash bits $k$, the parameters $\sigma$ and $\lambda$, and the total number of streaming data batches $T$. Ensure: binary codes $\mathbf{B}$ for $\mathbf{X}$ and hash weights $\mathbf{W}$.
Substituting Eq.13 and Eq.14 back into Eq.12 and expanding, and noting that $\|\mathbf{b}_{er}^{tT}\mathbf{b}_{sr}^t\|_F^2 = k^2$ is a constant, the resulting optimization problem is equivalent to:
$$\min_{\mathbf{b}_{sr}^t} \mathrm{tr}\Big(\big(\hat{\mathbf{B}}_s^{tT}\hat{\mathbf{B}}_e^t\mathbf{b}_{er}^{tT} - \mathbf{p}_r^T\big)\mathbf{b}_{sr}^t\Big) \quad \text{s.t.} \;\; \mathbf{b}_{sr}^t \in \{-1,1\}^{n_t}. \quad (16)$$
Therefore, this sub-problem can be solved by the following updating rule:
$$\mathbf{b}_{sr}^t = \mathrm{sgn}\big(\mathbf{p}_r - \mathbf{b}_{er}^t\hat{\mathbf{B}}_e^{tT}\hat{\mathbf{B}}_s^t\big). \quad (17)$$
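Putting the row-wise rule of Eq.17 into code gives the following sketch of the B_s-step (our own implementation of the cyclic update; a fixed small number of sweeps stands in for the convergence check of Alg.1):

```python
import numpy as np

def update_B_s(B_s, B_e, W, X_s, S_tilde, k, sigma_t, n_iters=3):
    """Cyclic row updates of Eq.17; B_s (k x n_t) is modified in place and returned."""
    P = k * (B_e @ S_tilde.T) + sigma_t * (W.T @ X_s)      # k x n_t
    for _ in range(n_iters):
        for r in range(k):
            b_er = B_e[r, :]                               # r-th row of B_e, length m_t
            B_e_hat = np.delete(B_e, r, axis=0)            # B_e without row r
            B_s_hat = np.delete(B_s, r, axis=0)            # B_s without row r
            v = P[r, :] - b_er @ B_e_hat.T @ B_s_hat       # length n_t
            B_s[r, :] = np.where(v >= 0, 1, -1)            # Eq.17
    return B_s
```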
The main procedure of the proposed BSODH is summarized in Alg.1. Note that in the first training stage, i.e., $t = 1$, we initialize $\mathbf{W}^1$ with a standard Gaussian distribution as in line 4 and compute $\mathbf{B}_s^1$ as in line 5. When $t \geq 2$, we initialize $\mathbf{B}_s^t$ as in line 9 to speed up the training iterations in lines 11-15. In this way, as shown quantitatively in the experiments, only one or two iterations are needed to reach convergence (see Fig.6).
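For orientation, the pieces sketched above can be wired together into one training stage as follows. This is only an illustrative driver, assuming the helper functions (`balanced_similarity`, `update_W`, `update_B_e`, `update_B_s`) from the earlier sketches; the alternation order and the random initialization of the batch codes are our simplifications of Alg.1, which warm-starts $\mathbf{B}_s^t$ from the previous hash functions when $t \geq 2$:

```python
import numpy as np

def bsodh_stage(X_s, labels_s, labels_e, k, sigma_t, lambda_t,
                eta_s=1.2, eta_d=0.2, n_inner=3, n_outer=2, seed=0):
    """One t-stage of the sketched BSODH loop, reusing the helpers defined above."""
    rng = np.random.default_rng(seed)
    S_tilde = balanced_similarity(labels_s, labels_e, eta_s, eta_d)   # n_t x m_t
    # Random init for brevity; the paper warm-starts B_s from the previous W.
    B_s = rng.choice([-1, 1], size=(k, X_s.shape[1]))
    for _ in range(n_outer):                                          # 1-2 rounds suffice (cf. Fig.6)
        W = update_W(X_s, B_s, sigma_t, lambda_t)                     # Eq.7
        B_e = update_B_e(B_s, S_tilde)                                # Eq.9
        B_s = update_B_s(B_s, B_e, W, X_s, S_tilde, k, sigma_t, n_inner)  # Eq.17
    return W, B_s, B_e
```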
Experiments
Datasets
CIFAR-10 contains 60K samples from 10 classes, each represented by a 4,096-dimensional CNN feature (Simonyan and Zisserman 2015). Following (Fatih et al. 2017), we partition the dataset into a retrieval set with 59K samples and a test set with 1K samples. From the retrieval set, 20K instances are adopted to learn the hash functions.
Places205 is a 2.5-million image set with 205 classes. Following (Fatih et al. 2017; Fatih, Bargal, and Sclaroff 2017), features are first extracted from the fc7 layer of AlexNet (Krizhevsky, Sutskever, and Hinton 2012) and then reduced to 128 dimensions by PCA. 20 instances from each category are randomly sampled to form the test set, and the remainder forms the retrieval set. 100K samples from the retrieval set are used to learn the hash functions.
MNIST consists of 70K handwritten digit images with 10 classes, each of which is represented by 784 normalized original pixels. We construct the test set by sampling 100 instances from each class, and form a retrieval set using the rest. A random subset of 20K images from the retrieval set is used to learn the hash functions.
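A small sketch of the MNIST-style split described above (function and parameter names are ours, not from the paper's code) could look as follows:

```python
import numpy as np

def split_retrieval_test(labels, per_class_test=100, train_subset=20000, seed=0):
    """Per-class test sampling, the rest as retrieval set, plus a random training subset."""
    rng = np.random.default_rng(seed)
    test_idx = np.concatenate([
        rng.choice(np.flatnonzero(labels == c), size=per_class_test, replace=False)
        for c in np.unique(labels)
    ])
    retrieval_idx = np.setdiff1d(np.arange(len(labels)), test_idx)
    train_idx = rng.choice(retrieval_idx, size=train_subset, replace=False)
    return train_idx, retrieval_idx, test_idx
```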
Baselines and Evaluated Metrics
We compare the proposed BSODH with several state-of-the-art OH methods, including Online Kernel Hashing (OKH) (Huang, Yang, and Zheng 2013), Online Sketch Hashing (SketchHash) (Leng et al. 2015), Adaptive Hashing (AdaptHash) (Fatih and Sclaroff 2015), Online Supervised Hashing (OSH) (Fatih, Bargal, and Sclaroff 2017) and OH with Mutual Information (MIHash) (Fatih et al. 2017).
To evaluate the proposed method, we adopt a set of widely-used protocols, including mean Average Precision (denoted as mAP), mean precision of the top-R retrieved neighbors (denoted as Precision@R) and precision within a Hamming ball of radius 2 centered on each query (denoted as Precision@H2). Note that, following (Fatih et al. 2017), we only compute mAP on the top-1,000 retrieved items (denoted as mAP@1,000) on Places205 due to its large scale. For SketchHash (Leng et al. 2015), the batch size has to be larger than the number of hash bits, so we only report its performance for 32-bit codes.
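For reference, a small sketch of how two of these protocols can be computed from binary codes (our own simplified implementations; tie-breaking and the treatment of empty Hamming balls follow the simplest possible conventions and may differ from the evaluation scripts used in the paper):

```python
import numpy as np

def precision_at_h2(query_codes, db_codes, query_labels, db_labels, radius=2):
    """Mean precision within a Hamming ball of the given radius around each query."""
    k = query_codes.shape[0]
    dist = (k - query_codes.T @ db_codes) // 2          # n_q x n_db Hamming distances
    precisions = []
    for i in range(query_codes.shape[1]):
        in_ball = dist[i] <= radius
        if in_ball.sum() == 0:
            precisions.append(0.0)                      # convention: empty ball counts as 0
            continue
        precisions.append(float((db_labels[in_ball] == query_labels[i]).mean()))
    return float(np.mean(precisions))

def mean_average_precision(query_codes, db_codes, query_labels, db_labels, top_r=None):
    """mAP, optionally truncated to the top_r retrieved items (e.g., mAP@1000)."""
    k = query_codes.shape[0]
    dist = (k - query_codes.T @ db_codes) // 2
    aps = []
    for i in range(query_codes.shape[1]):
        order = np.argsort(dist[i], kind="stable")
        if top_r is not None:
            order = order[:top_r]
        rel = (db_labels[order] == query_labels[i]).astype(float)
        if rel.sum() == 0:
            aps.append(0.0)
            continue
        cum = np.cumsum(rel)
        precision_at_hit = cum[rel == 1] / (np.flatnonzero(rel) + 1)
        aps.append(float(precision_at_hit.mean()))
    return float(np.mean(aps))
```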
Quantitative Results
We first report the mAP (mAP@1,000) and Precision@H2 results on CIFAR-10, Places205 and MNIST, shown in Tab.1 and Tab.2. Generally, the proposed BSODH is consistently better under these two metrics on all three benchmarks. For a deeper analysis, in terms of mAP, compared with the second-best method, i.e., MIHash, the proposed method achieves improvements of 5.11%, 1.40% and 6.48% on CIFAR-10, Places205 and MNIST, respectively. As for Precision@H2, compared with MIHash, the proposed method obtains gains of 29.97%, 2.63% and 9.2% on CIFAR-10, Places205 and MNIST, respectively. We also evaluate Precision@R with R ranging from 1 to 100 under 64-bit codes. The results, shown in Fig.2, verify that the proposed BSODH also achieves superior performance on all three benchmarks.
Parameter Sensitivity
The following experiments are conducted on MNIST with the hash bit fixed to 64.
Sensitivities to λ t and σ t . The left two figures in Fig.3 present the effects of the hyper-parameters λ t and σ t . For simplicity, we regard λ t and σ t as two constants across the whole training process. As shown in Fig.3, the performance of the proposed BSODH is sensitive to the values of σ t and λ t . The best combination for (λ t , σ t ) is (0.6, 0.5). By conducting similar experiments on CIFAR-10 and Places-205, we finally set the tuple value of (λ t , σ t ) as (0.3, 0.5) and (0.9, 0.8) for these two benchmarks.
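The tuning procedure described here amounts to a simple grid search over the two constants; a minimal sketch, assuming a hypothetical `train_and_eval(lambda_t, sigma_t)` helper that trains BSODH with the given constants and returns Precision@H2 on a validation split (such a helper is ours, not part of the paper):

```python
import itertools

def grid_search(train_and_eval,
                lambdas=(0.1, 0.3, 0.5, 0.6, 0.8, 0.9),
                sigmas=(0.2, 0.5, 0.8, 1.0)):
    """Exhaustively evaluate (lambda_t, sigma_t) pairs and return the best one."""
    best_pair, best_score = None, -1.0
    for lam, sig in itertools.product(lambdas, sigmas):
        score = train_and_eval(lam, sig)
        if score > best_score:
            best_pair, best_score = (lam, sig), score
    return best_pair, best_score
```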
Necessity of $\tilde{\mathbf{S}}^t$. We validate the effectiveness of the proposed balanced similarity $\tilde{\mathbf{S}}^t$ by plotting the Precision@H2 curves with respect to the two positive equilibrium factors, i.e., $\eta_s$ and $\eta_d$. As shown in the right two figures of Fig.3, the performance stabilizes when $\eta_s \geq 1$ and $\eta_d \leq 0.3$. When $\eta_d = 1$ and $\eta_s = 1$, $\tilde{\mathbf{S}}^t$ degenerates into the un-balanced version $\mathbf{S}^t$. However, as observed from the rightmost chart in Fig.3, in that case the proposed method suffers a severe performance loss. Precisely, Precision@H2 reaches its best value of 0.814 when $\eta_s = 1.2$ and $\eta_d = 0.2$, while it is only 0.206 when $\eta_s = 1$ and $\eta_d = 1$. Compared with the un-balanced $\mathbf{S}^t$, the proposed balanced similarity $\tilde{\mathbf{S}}^t$ therefore brings a substantial gain. To verify the aforementioned Analysis 1 and Analysis 2, we further visualize the learned binary codes in the last training stage via t-SNE (Maaten and Hinton 2008), as shown in Fig.4. Without the balanced similarity, the discretely optimized binary codes $\mathbf{B}_e^t$ (a), $\mathbf{B}_s^t$ (b) and the linearly mapped binary codes $\mathrm{sgn}(\mathbf{W}^{tT}\mathbf{X}_s^t)$ (c) are clustered, but each cluster mixes items from different classes and only four out of ten clusters are formed, each lying close to the others. That is to say, the majority of the Hamming codes are the same, which conforms to Analysis 1. However, under the balanced setting, both $\mathbf{B}_e^t$ and $\mathbf{B}_s^t$ form ten separated clusters without mixed items, which conforms to Analysis 2. In this case, the hash functions $\mathbf{W}^t$ are well derived from $\mathbf{B}_s^t$, and the hash codes in Fig.4 (f) are more discriminative.
Scaling up $\sigma^t$. As mentioned above, an alternative way to address the data-imbalance problem in Analysis 1 is to rebalance term 1 and term 2 in Eq.3 by scaling up the parameter $\sigma^t$. To test the feasibility of this scheme, we plot Precision@H2 while varying $\sigma^t$ over a large range in Fig.5. Intuitively, scaling up $\sigma^t$ affects the performance considerably. Quantitatively, Precision@H2 peaks at 0.341 when $\sigma^t$ is set to 10,000. We argue that this scheme has two drawbacks. First, its performance is unsatisfactory: as shown in Tab.2, with 64-bit codes the proposed BSODH reaches 0.814 Precision@H2 on MNIST, more than 2.5 times better than scaling up $\sigma^t$. Second, scaling up $\sigma^t$ makes it hard to choose an appropriate value because of the large range of $\sigma^t$; deciding the best value requires repeating extensive experiments, which is infeasible in online learning. In contrast, $\sigma^t$ is limited to $[0, 1]$ under the proposed BSODH, so choosing an appropriate value is much more convenient.
Convergence of $\mathbf{B}_s^t$. Each time new streaming data arrives, $\mathbf{B}_s^t$ is updated via the iterative process in lines 11-15 of Alg.1. Fig.6 shows the convergence behavior of the proposed BSODH on the input streaming data at stage $t$. As can be seen, when $t \leq 2$ it takes merely two iterations to converge, and only one iteration is needed to finish updating $\mathbf{B}_s^t$ when $t > 2$, which validates both the convergence and the efficiency of the proposed BSODH.
Conclusions
In this paper, we present a novel supervised OH method, termed BSODH. The proposed BSODH learns the correlation between the binary codes of the newly streaming data and those of the existing database via a discrete optimization, which, to the best of our knowledge, is the first such attempt. To this end, we first use an asymmetric graph regularization to preserve the similarity in the produced Hamming space. Then, to reduce the quantization error, we mathematically formulate the optimization problem and derive the discrete optimal solutions. Finally, to solve the data-imbalance problem, we propose a balanced similarity, where two equilibrium factors are introduced to balance the similar/dissimilar weights. Extensive experiments on three benchmarks demonstrate that our approach outperforms several state-of-the-art OH methods in both effectiveness and efficiency. | 4,912
1901.10185 | 2952051141 | When facing large-scale image datasets, online hashing serves as a promising solution for online retrieval and prediction tasks. It encodes the online streaming data into compact binary codes, and simultaneously updates the hash functions to renew codes of the existing dataset. To this end, the existing methods update hash functions solely based on the new data batch, without investigating the correlation between such new data and the existing dataset. In addition, existing works update the hash functions using a relaxation process in its corresponding approximated continuous space. And it remains as an open problem to directly apply discrete optimizations in online hashing. In this paper, we propose a novel supervised online hashing method, termed Balanced Similarity for Online Discrete Hashing (BSODH), to solve the above problems in a unified framework. BSODH employs a well-designed hashing algorithm to preserve the similarity between the streaming data and the existing dataset via an asymmetric graph regularization. We further identify the "data-imbalance" problem brought by the constructed asymmetric graph, which restricts the application of discrete optimization in our problem. Therefore, a novel balanced similarity is further proposed, which uses two equilibrium factors to balance the similar and dissimilar weights and eventually enables the usage of discrete optimizations. Extensive experiments conducted on three widely-used benchmarks demonstrate the advantages of the proposed method over the state-of-the-art methods. | Motivated by the idea of ``data sketching'' @cite_14 , sketch-based methods provide a good alternative for unsupervised online binary coding, via which a large dataset is summarized by a much smaller data batch. Leng et al. proposed the Online Sketching Hashing (SketchHash) @cite_9 , which adopts an efficient variant of SVD decomposition to learn hash functions. More recently, the Subsampled Randomized Hadamard Transform (SRHT) was adopted in FasteR Online Sketching Hashing (FROSH) @cite_18 to accelerate the training process of SketchHash. | {
"abstract": [
"Recently, hashing based approximate nearest neighbor (ANN) search has attracted much attention. Extensive new algorithms have been developed and successfully applied to different applications. However, two critical problems are rarely mentioned. First, in real-world applications, the data often comes in a streaming fashion but most of existing hashing methods are batch based models. Second, when the dataset becomes huge, it is almost impossible to load all the data into memory to train hashing models. In this paper, we propose a novel approach to handle these two problems simultaneously based on the idea of data sketching. A sketch of one dataset preserves its major characters but with significantly smaller size. With a small size sketch, our method can learn hash functions in an online fashion, while needs rather low computational complexity and storage space. Extensive experiments on two large scale benchmarks and one synthetic dataset demonstrate the efficacy of the proposed method.",
"We give near-optimal space bounds in the streaming model for linear algebra problems that include estimation of matrix products, linear regression, low-rank approximation, and approximation of matrix rank. In the streaming model, sketches of input matrices are maintained under updates of matrix entries; we prove results for turnstile updates, given in an arbitrary order. We give the first lower bounds known for the space needed by the sketches, for a given estimation error e. We sharpen prior upper bounds, with respect to combinations of space, failure probability, and number of passes. The sketch we use for matrix A is simply STA, where S is a sign matrix. Our results include the following upper and lower bounds on the bits of space needed for 1-pass algorithms. Here A is an n x d matrix, B is an n x d' matrix, and c := d+d'. These results are given for fixed failure probability; for failure probability δ>0, the upper bounds require a factor of log(1 δ) more space. We assume the inputs have integer entries specified by O(log(nc)) bits, or O(log(nd)) bits. (Matrix Product) Output matrix C with F(ATB-C) ≤ e F(A) F(B). We show that Θ(ce-2log(nc)) space is needed. (Linear Regression) For d'=1, so that B is a vector b, find x so that Ax-b ≤ (1+e) minx' ∈ Reald Ax'-b. We show that Θ(d2e-1 log(nd)) space is needed. (Rank-k Approximation) Find matrix tAk of rank no more than k, so that F(A-tAk) ≤ (1+e) F A-Ak , where Ak is the best rank-k approximation to A. Our lower bound is Ω(ke-1(n+d)log(nd)) space, and we give a one-pass algorithm matching this when A is given row-wise or column-wise. For general updates, we give a one-pass algorithm needing [O(ke-2(n + d e2)log(nd))] space. We also give upper and lower bounds for algorithms using multiple passes, and a sketching analog of the CUR decomposition.",
""
],
"cite_N": [
"@cite_9",
"@cite_14",
"@cite_18"
],
"mid": [
"1893754589",
"2059867647",
"2771454759"
]
} | Towards Optimal Discrete Online Hashing with Balanced Similarity |
1901.10185 | 2952051141 | When facing large-scale image datasets, online hashing serves as a promising solution for online retrieval and prediction tasks. It encodes the online streaming data into compact binary codes, and simultaneously updates the hash functions to renew codes of the existing dataset. To this end, the existing methods update hash functions solely based on the new data batch, without investigating the correlation between such new data and the existing dataset. In addition, existing works update the hash functions using a relaxation process in its corresponding approximated continuous space. And it remains as an open problem to directly apply discrete optimizations in online hashing. In this paper, we propose a novel supervised online hashing method, termed Balanced Similarity for Online Discrete Hashing (BSODH), to solve the above problems in a unified framework. BSODH employs a well-designed hashing algorithm to preserve the similarity between the streaming data and the existing dataset via an asymmetric graph regularization. We further identify the "data-imbalance" problem brought by the constructed asymmetric graph, which restricts the application of discrete optimization in our problem. Therefore, a novel balanced similarity is further proposed, which uses two equilibrium factors to balance the similar and dissimilar weights and eventually enables the usage of discrete optimizations. Extensive experiments conducted on three widely-used benchmarks demonstrate the advantages of the proposed method over the state-of-the-art methods. | However, existing sketch-based algorithms are based on unsupervised learning, and their retrieval performance is mostly unsatisfactory without fully utilizing label information. Although most SGD-based algorithms aim to preserve the label information via online hash function learning, the relaxation process is adopted to update the hash functions, which contradicts with the recent advances in offline hashing where discrete optimizations are adopted directly, such as Discrete Graph Hashing @cite_19 and Discrete Supervised Hashing @cite_16 . In this paper, we are the first to investigate OH with discrete optimizations, which have shown superior performance compared with the quantization-based schemes. | {
"abstract": [
"Hashing has emerged as a popular technique for fast nearest neighbor search in gigantic databases. In particular, learning based hashing has received considerable attention due to its appealing storage and search efficiency. However, the performance of most unsupervised learning based hashing methods deteriorates rapidly as the hash code length increases. We argue that the degraded performance is due to inferior optimization procedures used to achieve discrete binary codes. This paper presents a graph-based unsupervised hashing model to preserve the neighborhood structure of massive data in a discrete code space. We cast the graph hashing problem into a discrete optimization framework which directly learns the binary codes. A tractable alternating maximization algorithm is then proposed to explicitly deal with the discrete constraints, yielding high-quality codes to well capture the local neighborhoods. Extensive experiments performed on four large datasets with up to one million samples show that our discrete optimization based graph hashing method obtains superior search accuracy over state-of-the-art un-supervised hashing methods, especially for longer codes.",
"Recently, learning based hashing techniques have attracted broad research interests because they can support efficient storage and retrieval for high-dimensional data such as images, videos, documents, etc. However, a major difficulty of learning to hash lies in handling the discrete constraints imposed on the pursued hash codes, which typically makes hash optimizations very challenging (NP-hard in general). In this work, we propose a new supervised hashing framework, where the learning objective is to generate the optimal binary hash codes for linear classification. By introducing an auxiliary variable, we reformulate the objective such that it can be solved substantially efficiently by employing a regularization algorithm. One of the key steps in this algorithm is to solve a regularization sub-problem associated with the NP-hard binary optimization. We show that the sub-problem admits an analytical solution via cyclic coordinate descent. As such, a high-quality discrete solution can eventually be obtained in an efficient computing manner, therefore enabling to tackle massive datasets. We evaluate the proposed approach, dubbed Supervised Discrete Hashing (SDH), on four large image datasets and demonstrate its superiority to the state-of-the-art hashing methods in large-scale image retrieval."
],
"cite_N": [
"@cite_19",
"@cite_16"
],
"mid": [
"2142881874",
"1910300841"
]
} | Towards Optimal Discrete Online Hashing with Balanced Similarity | With the increasing amount of image data available on the Internet, hashing has been widely applied to approximate nearest neighbor (ANN) search (Wang et al. 2016;. It aims at mapping real-valued image features to compact binary codes, which merits in both low storage and efficient computation on large-scale datasets. One promising direction is online hashing (OH), which has attracted increasing attentions recently. Under such an application scenario, data are often fed into the system via a streaming fashion, while traditional hashing methods can hardly accommodate this configuration. In OH, the online streaming data is encoded into compact binary codes, while the hash functions are simultaneously updated in order to renew codes of the existing data.
In principle, OH aims to analyze the streaming data while preserving structure of the existing dataset 1 . In the literature, several recent works have been proposed to handle OH. The representative works include, but not limited to, OKH (Huang, Yang, and Zheng 2013), SketchHash (Leng et al. 2015), AdaptHash (Fatih and Sclaroff 2015), OSH (Fatih, Bargal, and Sclaroff 2017), FROSH (Chen, King, and Lyu 2017) and MIHash (Fatih et al. 2017). However, the performance of OH is still far from satisfactory for real-world applications. We attribute it to two open issues, i.e., updating imbalance and optimization inefficiency.
In terms of the updating imbalance, the existing OH schemes update hash functions solely based on the newly coming data batch, without investigating the correlation between such new data and the existing dataset. To that effect, an asymmetric graph can be constructed to preserve similarity between the new data and the existing dataset as shown in Fig.1. Under online setting, the similarity matrix is usually sparse and unbalanced, i.e., data-imbalance phenomenon, since most image pairs are dissimilar and only a few are similar. The updating imbalance issue, if not well addressed, might cause the learned binary codes ineffective for both the new data and the existing data, and hence lead to severe performance degeneration for OH schemes.
In terms of the optimization inefficiency, the existing OH schemes still rely on the traditional relaxation (Gong and Lazebnik 2011;Datar et al. 2004;Jiang and Li 2015;) over the approximated continuous space to learn hash functions, which often makes the produced hash functions less effective, especially when the code length increases (Liu et al. 2014;Shen et al. 2015b). Despite the recent advances in direct discrete optimizations in offline hashing (Ji et al. 2017;Jiang and Li 2018) with discrete cyclic coordinate descent (DCC) (Shen et al. 2015b), such discrete optimizations can not be directly applied to online case that contains serious data-imbalance problem, since the optimization heavily relies on the dissimilar pairs, and thus lose the information of similar pairs. Figure 1: An example of data-imbalance problem and the learned binary codes. The similarity matrix S t is highly sparse under online setting and thus tends to generate consistent binary codes, which are indiscriminate and uninformative. With the introduction of the balanced similarityS t , codes of similar items are tightened while codes of dissimilar items are expanded. By combining with discrete optimizations, advanced retrieval results are obtained.
We argue that, the above two issues are not independent. In particular, to conduct discrete optimizations, the existing offline methods typically adopt an asymmetric graph regularization to preserve the similarity between training data. Constructing the asymmetric graph consumes both time and memory. Note that, since the streaming data is in a small batch, such an asymmetric graph between the streaming data and the existing dataset can be dynamically constructed under online setting. However, as verified both theoretically and experimentally later, it still can not avoid the generation of consistent codes (most bits are the same) due to the dataimbalance problem brought by the constructed asymmetric graph in online learning, as illustrated in Fig.1.
In this paper, we propose a novel supervised OH method, termed Balanced Similarity for Online Discrete Hashing (BSODH) to handle the updating imbalance and optimization inefficiency problems in a unified framework. First, unlike the previous OH schemes, the proposed BSODH mainly considers updating the hash functions with correlation between the online streaming data and the existing dataset. Therefore, we aim to adopt an asymmetric graph regularization to preserve the relation in the produced Hamming space. Second, we further integrate the discrete optimizations into OH, which essentially tackles the challenge of quantization error brought by the relaxation learning. Finally, we present a new similarity measurement, termed balanced similarity, to solve the problem of data-imbalance during the discrete binary learning process. In particular, we introduce two equilibrium factors to balance the weights of similar and dissimilar data, and thus enable the discrete optimizations. Extensive experimental results on three widely-used benchmarks, i.e., CIFAR10, Places205 and MNIST, demonstrate the advantages of the proposed BSODH over the state-ofthe-art methods.
To summarize, the main contributions of the proposed BSODH in this paper include:
• To capture the data correlation between online streaming data and the existing dataset, we introduce an asymmetric graph regularization to preserve such correlation in the produced Hamming space.
• To reduce the quantization error in the Hamming space, we design a customized discrete optimization algorithm. It handles the optimization inefficiency issue in the existing OH schemes, making discrete learning feasible for the first time in the online framework.
• We propose a balanced similarity matrix to handle the data-imbalance problem, which further prevents the generation of consistent binary codes, i.e., a phenomenon that previously occurred when directly applying discrete optimizations in the online setting.
The Proposed Method
Problem Definition
Given a dataset $X = [x_1, \dots, x_n] \in \mathbb{R}^{d \times n}$ with its corresponding labels $L = [l_1, \dots, l_n] \in \mathbb{N}^{n}$, where $x_i \in \mathbb{R}^{d}$ is the $i$-th instance with its class label $l_i \in \mathbb{N}$. The goal of hashing is to learn a set of $k$-bit binary codes $B = [b_1, \dots, b_n] \in \{-1, +1\}^{k \times n}$, where $b_i$ is the binary vector of $x_i$. A widely adopted hash function is the linear hash mapping (Gong and Lazebnik 2011; Fatih, Bargal, and Sclaroff 2017), i.e.,
$B = F(X) = \mathrm{sgn}(W^{T} X), \quad (1)$
where $W = [w_1, \dots, w_k] \in \mathbb{R}^{d \times k}$ is the projection matrix to be learned, with $w_i$ being responsible for the $i$-th hash bit. The sign function $\mathrm{sgn}(x)$ returns $+1$ if the input variable $x > 0$, and returns $-1$ otherwise. For the online learning problem, the data comes in a streaming fashion; therefore $X$ is not available once and for all. Without loss of generality, we denote $X^{t}_{s} = [x^{t}_{s_1}, \dots, x^{t}_{s_{n_t}}] \in \mathbb{R}^{d \times n_t}$ as the input streaming data at the $t$-th stage, and $L^{t}_{s} = [l^{t}_{s_1}, \dots, l^{t}_{s_{n_t}}] \in \mathbb{N}^{n_t}$ as the corresponding label set, where $n_t$ is the size of the batch. We denote $X^{t}_{e} = [X^{1}_{s}, \dots, X^{t-1}_{s}] = [x^{t}_{e_1}, \dots, x^{t}_{e_{m_t}}] \in \mathbb{R}^{d \times m_t}$, where $m_t = n_1 + \dots + n_{t-1}$, as the previously existing dataset with its label set $L^{t}_{e} = [L^{1}_{s}, \dots, L^{t-1}_{s}] = [l^{t}_{e_1}, \dots, l^{t}_{e_{m_t}}] \in \mathbb{N}^{m_t}$. Correspondingly, we denote $B^{t}_{s} = \mathrm{sgn}(W^{tT} X^{t}_{s}) = [b^{t}_{s_1}, \dots, b^{t}_{s_{n_t}}] \in \mathbb{R}^{k \times n_t}$ and $B^{t}_{e} = \mathrm{sgn}(W^{tT} X^{t}_{e}) = [b^{t}_{e_1}, \dots, b^{t}_{e_{m_t}}] \in \mathbb{R}^{k \times m_t}$ as the discretely learned binary codes for $X^{t}_{s}$ and $X^{t}_{e}$, respectively. Under the online setting, the parameter matrix $W^{t}$ should be updated based on the newly coming batch $X^{t}_{s}$ instead of the existing dataset $X^{t}_{e}$.
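As a quick illustration of the notation above, the following NumPy sketch (our own toy setup, not code from the paper) applies the linear hash mapping of Eq. 1 to streaming batches and accumulates the existing dataset over stages:

import numpy as np

def hash_codes(W, X):
    # Eq. 1: B = sgn(W^T X); map the rare exact zeros to +1 so codes stay in {-1, +1}
    B = np.sign(W.T @ X)
    B[B == 0] = 1
    return B

d, k, n_t = 8, 4, 5                   # toy sizes: feature dim, code length, batch size
rng = np.random.default_rng(0)
W = rng.normal(size=(d, k))           # projection matrix W^t, learned online in practice
X_stream = [rng.normal(size=(d, n_t)) for _ in range(3)]   # X_s^1, X_s^2, X_s^3

X_e = np.empty((d, 0))                # existing dataset X_e^t, grows with each stage
for t, X_s in enumerate(X_stream, start=1):
    B_s = hash_codes(W, X_s)          # codes of the newly coming batch
    B_e = hash_codes(W, X_e) if X_e.shape[1] > 0 else np.empty((k, 0))
    X_e = np.concatenate([X_e, X_s], axis=1)   # m_{t+1} = m_t + n_t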
The Proposed Framework
Ideally, if data x i and x j are similar, the Hamming distance between their binary codes should be minimized, and vice versa. This is achieved by minimizing the quantization error between the similarity matrix and the Hamming similarity matrix (Liu et al. 2012). However, considering the streaming batch data alone does not reflect the structural relationship of all data samples. Therefore, following (Shen et al. 2015a; Jiang and Li 2018), we resort to preserve the similarity in the Hamming space between new data batch X t s and the existing dataset X t e at t-stage with an asymmetric graph as shown in Fig.1. To that effect, we minimize the Frobenius norm loss between the supervised similarity and the inner products of B t s and B t e as follows:
$\min_{B^{t}_{s}, B^{t}_{e}} \ \| B^{tT}_{s} B^{t}_{e} - k S^{t} \|_F^2 \quad \text{s.t. } B^{t}_{s} \in \{-1, 1\}^{k \times n_t}, \ B^{t}_{e} \in \{-1, 1\}^{k \times m_t}, \quad (2)$
where $S^{t} \in \mathbb{R}^{n_t \times m_t}$ is the similarity matrix between $X^{t}_{s}$ and $X^{t}_{e}$. Note that $s^{t}_{ij} = 1$ iff $x^{t}_{s_i}$ and $x^{t}_{e_j}$ share the same label, i.e., $l^{t}_{s_i} = l^{t}_{e_j}$; otherwise, $s^{t}_{ij} = -1$. $\|\cdot\|_F$ denotes the Frobenius norm.
Besides, we aim to learn the hash functions by minimizing the error term between the linear hash functions F in Eq.1 and the corresponding binary codes B t s , which is constrained by B t s − F (X t s ) 2 F . It can be easily combined with the above asymmetric graph that can be seen as a regularizer for learning the hash functions, which is rewritten as:
$\min_{B^{t}_{s}, B^{t}_{e}, W^{t}} \ \underbrace{\| B^{tT}_{s} B^{t}_{e} - k S^{t} \|_F^2}_{\text{term 1}} + \sigma^{t} \underbrace{\| F(X^{t}_{s}) - B^{t}_{s} \|_F^2}_{\text{term 2}} + \lambda^{t} \underbrace{\| W^{t} \|_F^2}_{\text{term 3}} \quad \text{s.t. } B^{t}_{s} \in \{-1, 1\}^{k \times n_t}, \ B^{t}_{e} \in \{-1, 1\}^{k \times m_t}, \quad (3)$
where $\sigma^{t}$ and $\lambda^{t}$ serve as two constants at the $t$-th stage to balance the trade-offs among the three learning parts. We argue that such a framework learns better coding functions. Firstly, in term 2, $W^{t}$ is optimized based on the dynamic streaming data $X^{t}_{s}$, which makes the hash function more adaptive to unseen data. Secondly, as shown in Eq. 7, the training complexity of learning $W^{t}$ based on $X^{t}_{s}$ is $O(d^2 n_t + d^3)$, while it is $O(d^2 m_t + d^3)$ when $W^{t}$ is learned based on $X^{t}_{e}$. Therefore, updating $W^{t}$ based on $X^{t}_{e}$ is impractical since $m_t \gg n_t$ as the number of new data batches increases. Further, it also violates the basic principle of OH that $W^{t}$ can only be updated based on the newly coming data. Last but not least, with the asymmetric graph loss in term 1, the structural relationship in the original space can be well preserved in the produced Hamming space, which makes the learned binary codes $B^{t}_{s}$ more robust. The above discussion will be verified in the subsequent experiments.
The Data-Imbalance Issue
As shown in Fig. 1, the similarity matrix $S^{t}$ between the streaming data and the existing dataset is very sparse. That is to say, there exists a severe data-imbalance phenomenon, i.e., most image pairs are dissimilar and only a few pairs are similar. Due to this problem, the optimization heavily relies on the dissimilar information and misses the similar information, which leads to performance degeneration.
As a theoretical analysis, we decouple the whole sparse similarity matrix into two subparts, where similar pairs and dissimilar pairs are separately considered. Term 1 in Eq.3 is then reformulated as:
$\text{term 1} = \underbrace{\sum_{i,j:\, S^{t}_{ij}=1} (b^{tT}_{s_i} b^{t}_{e_j} - k)^2}_{\text{term A}} + \underbrace{\sum_{i,j:\, S^{t}_{ij}=-1} (b^{tT}_{s_i} b^{t}_{e_j} + k)^2}_{\text{term B}} \quad \text{s.t. } b^{t}_{s_i} \in \{-1, 1\}^{k}, \ b^{t}_{e_j} \in \{-1, 1\}^{k}. \quad (4)$
Analysis 1. We denote $S^{t}_{1} = \{S^{t}_{ij} \in S^{t} \,|\, S^{t}_{ij} = 1\}$, i.e., the set of similar pairs, and $S^{t}_{2} = \{S^{t}_{ij} \in S^{t} \,|\, S^{t}_{ij} = -1\}$, i.e., the set of dissimilar pairs. In the online setting, when $n_t \ll m_t$ with the increase of new data batches, the similarity matrix $S^{t}$ becomes highly sparse, i.e., $|S^{t}_{1}| \ll |S^{t}_{2}|$. In other words, term 1 suffers from a severe data-imbalance problem. Furthermore, since term 1 $\gg$ term 2 in Eq. 3 and term B $\gg$ term A in Eq. 4, the learning process of $B^{t}_{s}$ and $B^{t}_{e}$ heavily relies on term B.
A suitable way to minimize term B is to have $b^{tT}_{s_i} b^{t}_{e_j} = -k$, i.e., $b^{t}_{s_i} = -b^{t}_{e_j}$. Similarly, for any $b^{t}_{e_g} \in B^{t}_{e}$ with $g \neq j$, we have $b^{t}_{s_i} = -b^{t}_{e_g}$. It is easy to see that $b^{t}_{e_j} = b^{t}_{e_g}$. In other words, all items in $B^{t}_{e}$ share consistent binary codes. Similarly, all items in $B^{t}_{s}$ also share consistent binary codes, which are opposite to those of $B^{t}_{e}$. Fig. 1 illustrates such an extreme circumstance. However, as can be seen from term 2 in Eq. 3, the performance of the hash functions deeply relies on the learned $B^{t}_{s}$. Therefore, such a data-imbalance problem will cause all the codes produced by $W^{t}$ to be biased, which will seriously affect the retrieval performance.
Balanced Similarity
To solve the above problem, a common method is to keep a balance between term 1 and term 2 in Eq.3 by scaling up the parameter σ t . However, as verified later in our experiments (see Fig.5), such a scheme still suffers from unsatisfactory performance and will get stuck in how to choose an appropriate value of σ t from a large range 4 . Therefore, we present another scheme to handle this problem, which expands the feasible solutions for both B t e and B t s . Concretely, we propose to use a balanced similarity matrixS t with each element defined as follows:
$\tilde{S}^{t}_{ij} = \begin{cases} \eta_s S^{t}_{ij}, & S^{t}_{ij} = 1, \\ \eta_d S^{t}_{ij}, & S^{t}_{ij} = -1, \end{cases} \quad (5)$
where $\eta_s$ and $\eta_d$ are two positive equilibrium factors used to balance the similar and dissimilar weights, respectively. When setting $\eta_s > \eta_d$, the Hamming distances among similar pairs will be reduced, while those among dissimilar pairs will be enlarged. Analysis 2. With the balanced similarity, the goal of term B in Eq. 4 is to have $b^{tT}_{s_i} b^{t}_{e_j} \approx -k\eta_d$. The number of common hash bits between $b^{t}_{s_i}$ and $b^{t}_{e_j}$ is then at least $\lfloor \frac{k(1-\eta_d)}{2} \rfloor$. Therefore, by fixing $b^{t}_{s_i}$, the cardinal number of feasible solutions for $b^{t}_{e_j}$ is at least $\binom{k}{\lfloor k(1-\eta_d)/2 \rfloor}$. Thus, the balanced similarity matrix $\tilde{S}^{t}$ can effectively solve the problem of generating consistent binary codes, as shown in Fig. 1. By replacing the similarity matrix $S^{t}$ in Eq. 3 with the balanced similarity matrix $\tilde{S}^{t}$, the overall objective function can be written as:
$\min_{B^{t}_{s}, B^{t}_{e}, W^{t}} \ \underbrace{\| B^{tT}_{s} B^{t}_{e} - k \tilde{S}^{t} \|_F^2}_{\text{term 1}} + \sigma^{t} \underbrace{\| F(X^{t}_{s}) - B^{t}_{s} \|_F^2}_{\text{term 2}} + \lambda^{t} \underbrace{\| W^{t} \|_F^2}_{\text{term 3}} \quad \text{s.t. } B^{t}_{s} \in \{-1, 1\}^{k \times n_t}, \ B^{t}_{e} \in \{-1, 1\}^{k \times m_t}. \quad (6)$
Footnote 4: Under the balanced similarity, we constrain $\sigma^{t}$ to $[0, 1]$. Footnote 5: $\lfloor \cdot \rfloor$ denotes the operation of rounding down.
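The balanced similarity of Eq. 5 can be built directly from the label sets. The sketch below is our own minimal NumPy version; $\eta_s$ and $\eta_d$ are hyper-parameters, and the default values 1.2 and 0.2 are simply the ones reported later as working best on MNIST.

import numpy as np

def balanced_similarity(labels_s, labels_e, eta_s=1.2, eta_d=0.2):
    # S^t: +1 for pairs sharing a label, -1 otherwise (shape n_t x m_t)
    S = np.where(labels_s[:, None] == labels_e[None, :], 1.0, -1.0)
    # Eq. 5: scale similar entries by eta_s and dissimilar entries by eta_d
    S_bal = np.where(S > 0, eta_s * S, eta_d * S)
    return S, S_bal

labels_s = np.array([0, 1, 2])            # labels of the streaming batch
labels_e = np.array([0, 0, 1, 3, 2])      # labels of the existing dataset
S, S_bal = balanced_similarity(labels_s, labels_e)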
The Optimization
Due to the binary constraints, the optimization problem of Eq.6 is still non-convex with respect to W t , B t s , B t e . To find a feasible solution, we adopt an alternative optimization approach, i.e., updating one variable with the rest two fixed until convergence.
1) $W^{t}$-step: Fix $B^{t}_{e}$ and $B^{t}_{s}$, then learn the hash weights $W^{t}$. This sub-problem of Eq. 6 is a classical linear regression that aims to find the best projection coefficients $W^{t}$ by minimizing term 2 and term 3 jointly. Therefore, we update $W^{t}$ with the closed-form solution:
$W^{t} = \sigma^{t} (\sigma^{t} X^{t}_{s} X^{tT}_{s} + \lambda^{t} I)^{-1} X^{t}_{s} B^{tT}_{s}, \quad (7)$
where I is a d × d identity matrix.
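A minimal NumPy sketch of this closed-form update (variable names are ours, not from the paper):

import numpy as np

def update_W(X_s, B_s, sigma_t, lambda_t):
    # Eq. 7: W^t = sigma^t (sigma^t X_s X_s^T + lambda^t I)^{-1} X_s B_s^T
    # X_s: d x n_t streaming batch, B_s: k x n_t binary codes
    d = X_s.shape[0]
    A = sigma_t * X_s @ X_s.T + lambda_t * np.eye(d)
    return sigma_t * np.linalg.solve(A, X_s @ B_s.T)   # d x k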
2) B t e -step: Fix W t and B t s , then update B t e . Since only term 1 in Eq.6 contains B t e , we directly optimize this term via a discrete optimization similar to (Kang, Li, and Zhou 2016), where the squared Frobenius norm in term 1 is replaced with the L 1 norm. The new formulation is:
$\min_{B^{t}_{e}} \ \| B^{tT}_{s} B^{t}_{e} - k \tilde{S}^{t} \|_1 \quad \text{s.t. } B^{t}_{e} \in \{-1, 1\}^{k \times m_t}. \quad (8)$
Similar to (Kang, Li, and Zhou 2016), the solution of Eq.8 is as follows:
$B^{t}_{e} = \mathrm{sgn}(B^{t}_{s} \tilde{S}^{t}). \quad (9)$
3) $B^{t}_{s}$-step: Fix $B^{t}_{e}$ and $W^{t}$, then update $B^{t}_{s}$. The corresponding sub-problem is:
$\min_{B^{t}_{s}} \ \| B^{tT}_{s} B^{t}_{e} - k \tilde{S}^{t} \|_F^2 + \sigma^{t} \| W^{tT} X^{t}_{s} - B^{t}_{s} \|_F^2 \quad \text{s.t. } B^{t}_{s} \in \{-1, 1\}^{k \times n_t}. \quad (10)$
By expanding each term in Eq.10, we get the sub-optimal problem of B t s by minimizing the following formulation:
$\min_{B^{t}_{s}} \ \| B^{tT}_{e} B^{t}_{s} \|_F^2 + \underbrace{\| k \tilde{S}^{t} \|_F^2}_{\text{const}} - 2\,\mathrm{tr}(k \tilde{S}^{t} B^{tT}_{e} B^{t}_{s}) + \sigma^{t} \big( \underbrace{\| W^{tT} X^{t}_{s} \|_F^2}_{\text{const}} + \underbrace{\| B^{t}_{s} \|_F^2}_{\text{const}} - 2\,\mathrm{tr}(X^{tT}_{s} W^{t} B^{t}_{s}) \big) \quad \text{s.t. } B^{t}_{s} \in \{-1, 1\}^{k \times n_t}, \quad (11)$
where the "const" terms denote constants. The optimization problem of Eq.11 is equivalent to
$\min_{B^{t}_{s}} \ \| \underbrace{B^{tT}_{e} B^{t}_{s}}_{\text{term I}} \|_F^2 - 2\,\mathrm{tr}(\underbrace{P^{T} B^{t}_{s}}_{\text{term II}}) \quad \text{s.t. } B^{t}_{s} \in \{-1, 1\}^{k \times n_t}, \quad (12)$
where $P = k B^{t}_{e} \tilde{S}^{tT} + \sigma^{t} W^{tT} X^{t}_{s}$ and $\mathrm{tr}(\cdot)$ denotes the trace. The problem in Eq. 12 is NP-hard when directly optimizing the binary code matrix $B^{t}_{s}$. Inspired by the recent advances in binary code optimization (Shen et al. 2015b), a closed-form solution for one row of $B^{t}_{s}$ can be obtained while fixing all the other rows. Therefore, we first reformulate term I and term II in Eq. 12 as follows:
$\text{term I} = \hat{b}^{tT}_{e_r} \hat{b}^{t}_{s_r} + \widetilde{B}^{tT}_{e} \widetilde{B}^{t}_{s}, \quad (13)$
where $\hat{b}^{t}_{e_r}$, $\hat{b}^{t}_{s_r}$ and $\hat{p}_{r}$ stand for the $r$-th row of $B^{t}_{e}$, $B^{t}_{s}$ and $P$, respectively. Also, $\widetilde{B}^{t}_{e}$, $\widetilde{B}^{t}_{s}$ and $\widetilde{P}$ represent the matrix $B^{t}_{e}$ excluding $\hat{b}^{t}_{e_r}$, the matrix $B^{t}_{s}$ excluding $\hat{b}^{t}_{s_r}$, and the matrix $P$ excluding $\hat{p}_{r}$, respectively.
Algorithm 1: Balanced Similarity for Online Discrete Hashing (BSODH). Require: training data set $X$ with its label space $L$, the number of hash bits $k$, the parameters $\sigma$ and $\lambda$, the total number of streaming data batches $T$. Ensure: binary codes $B$ for $X$ and hash weights $W$.
Taking Eq.13 and Eq.14 back to Eq.12 and expanding it, we obtain the following optimization problem:
Note that $\| \hat{b}^{tT}_{e_r} \hat{b}^{t}_{s_r} \|_F^2 = k^2$, which is a constant value. The above optimization problem is equivalent to:
$\min_{\hat{b}^{t}_{s_r}} \ \mathrm{tr}\big( (\widetilde{B}^{tT}_{s} \widetilde{B}^{t}_{e} \hat{b}^{tT}_{e_r} - \hat{p}^{T}_{r}) \hat{b}^{t}_{s_r} \big) \quad \text{s.t. } \hat{b}^{t}_{s_r} \in \{-1, 1\}^{n_t}. \quad (16)$
Therefore, this sub-problem can be solved by the following updating rule:
$\hat{b}^{t}_{s_r} = \mathrm{sgn}\big( \hat{p}_{r} - \hat{b}^{t}_{e_r} \widetilde{B}^{tT}_{e} \widetilde{B}^{t}_{s} \big). \quad (17)$
The main procedure of the proposed BSODH is summarized in Alg. 1. Note that, in the first training stage, i.e., $t = 1$, we initialize $W^{1}$ with a normal Gaussian distribution as in line 4 and compute $B^{1}_{s}$ as in line 5. When $t \geq 2$, we initialize $B^{t}_{s}$ in line 9 to speed up the training iterations from line 11 to line 15. In this way, as shown quantitatively in the experiments, it takes only one or two iterations to converge (see Fig. 6).
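For concreteness, the sketch below outlines one training stage of Alg. 1 in NumPy, alternating the W-step (Eq. 7), the $B^t_e$-step (Eq. 9) and the row-wise DCC $B^t_s$-step (Eq. 17). The initialization and stopping criteria are simplified relative to the paper, so treat this as an illustrative sketch rather than the reference implementation.

import numpy as np

def sgn(M):
    out = np.sign(M)
    out[out == 0] = 1
    return out

def train_stage(X_s, S_bal, k, sigma_t=0.5, lambda_t=0.6, iters=2):
    # one streaming stage: alternate the W-step (Eq. 7), the B_e-step (Eq. 9)
    # and the row-wise DCC B_s-step (Eq. 17)
    d, n_t = X_s.shape
    B_s = sgn(np.random.randn(k, n_t))      # the paper warm-starts this from the previous W^t
    for _ in range(iters):
        A = sigma_t * X_s @ X_s.T + lambda_t * np.eye(d)
        W = sigma_t * np.linalg.solve(A, X_s @ B_s.T)          # Eq. 7
        B_e = sgn(B_s @ S_bal)                                  # Eq. 9, k x m_t
        P = k * B_e @ S_bal.T + sigma_t * W.T @ X_s             # k x n_t
        for r in range(k):                                      # Eq. 17, one row at a time
            rest = np.arange(k) != r
            B_s[r] = sgn(P[r] - B_e[r] @ B_e[rest].T @ B_s[rest])
    return W, B_s, B_e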
Experiments
Datasets
CIFAR-10 contains 60K samples from 10 classes, with each represented by a 4,096-dimensional CNN feature (Simonyan and Zisserman 2015). Following (Fatih et al. 2017), we partition the dataset into a retrieval set with 59K samples, and a test set with 1K samples. From the retrieval set, 20K instances are adopted to learn the hash functions.
Places205 is a 2.5-million image set with 205 classes. Following (Fatih et al. 2017;Fatih, Bargal, and Sclaroff 2017), features are first extracted from the fc7 layer of the AlexNet (Krizhevsky, Sutskever, and Hinton 2012), and then reduced to 128 dimensions by PCA. 20 instances from each category are randomly sampled to form a test set, the remaining of which are formed as a retrieval set. 100K samples from the retrieval set are sampled to learn hash functions.
MNIST consists of 70K handwritten digit images with 10 classes, each of which is represented by 784 normalized original pixels. We construct the test set by sampling 100 instances from each class, and form a retrieval set using the rest. A random subset of 20K images from the retrieval set is used to learn the hash functions.
Baselines and Evaluated Metrics
We compare the proposed BSODH with several state-of-the-art OH methods, including Online Kernel Hashing (OKH) (Huang, Yang, and Zheng 2013), Online Sketch Hashing (SketchHash) (Leng et al. 2015), Adaptive Hashing (AdaptHash) (Fatih and Sclaroff 2015), Online Supervised Hashing (OSH) (Fatih, Bargal, and Sclaroff 2017) and OH with Mutual Information (MIHash) (Fatih et al. 2017).
To evaluate the proposed method, we adopt a set of widely-used protocols, including mean Average Precision (denoted as mAP), mean precision of the top-R retrieved neighbors (denoted as Precision@R), and precision within a Hamming ball of radius 2 centered on each query (denoted as Precision@H2). Note that, following the work of (Fatih et al. 2017), we only compute mAP on the top-1,000 retrieved items (denoted as mAP@1,000) on Places205 due to its large scale. For SketchHash (Leng et al. 2015), the batch size has to be larger than the number of hash bits; thus, we only report its performance when the hash bit is 32.
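As a reference, Precision@H2 can be computed as sketched below. This is our own implementation of the stated definition; for $\pm 1$ codes the Hamming distance is $(k - b_q^T b_d)/2$.

import numpy as np

def precision_at_h2(B_query, B_db, y_query, y_db, radius=2):
    # codes are k x n matrices in {-1, +1}
    k = B_query.shape[0]
    dists = (k - B_query.T @ B_db) / 2          # n_query x n_db Hamming distances
    precisions = []
    for i in range(B_query.shape[1]):
        hits = dists[i] <= radius               # items inside the Hamming ball of radius 2
        if hits.any():
            precisions.append(np.mean(y_db[hits] == y_query[i]))
        else:
            precisions.append(0.0)              # one common convention for an empty ball
    return float(np.mean(precisions))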
Quantitative Results
We first show the experimental results of mAP (mAP@1,000) and Precision@H2 on CIFAR-10, Places205 and MNIST. The results are shown in Tab. 1 and Tab. 2. Generally, the proposed BSODH is consistently better under these two evaluation metrics on all three benchmarks. For an in-depth analysis, in terms of mAP, compared with the second best method, i.e., MIHash, the proposed method achieves improvements of 5.11%, 1.40%, and 6.48% on CIFAR-10, Places-205 and MNIST, respectively. As for Precision@H2, compared with MIHash, the proposed method acquires 29.97%, 2.63% and 9.2% gains on CIFAR-10, Places-205 and MNIST, respectively. We also evaluate Precision@R with R ranging from 1 to 100 under the hash bit of 64. The experimental results are shown in Fig. 2, which verifies that the proposed BSODH also achieves superior performance on all three benchmarks.
Parameter Sensitivity
The following experiments are conducted on MNIST with the hash bit fixed to 64.
Sensitivities to λ t and σ t . The left two figures in Fig.3 present the effects of the hyper-parameters λ t and σ t . For simplicity, we regard λ t and σ t as two constants across the whole training process. As shown in Fig.3, the performance of the proposed BSODH is sensitive to the values of σ t and λ t . The best combination for (λ t , σ t ) is (0.6, 0.5). By conducting similar experiments on CIFAR-10 and Places-205, we finally set the tuple value of (λ t , σ t ) as (0.3, 0.5) and (0.9, 0.8) for these two benchmarks.
Necessity of $\tilde{S}^{t}$. We validate the effectiveness of the proposed balanced similarity $\tilde{S}^{t}$ by plotting the Precision@H2 curves with respect to the two positive equilibrium factors, i.e., $\eta_s$ and $\eta_d$. As shown in the right two figures of Fig. 3, the performance stabilizes when $\eta_s \geq 1$ and $\eta_d \leq 0.3$. When $\eta_d = 1$ and $\eta_s = 1$, $\tilde{S}^{t}$ degenerates into the un-balanced version $S^{t}$. However, as observed from the rightmost chart in Fig. 3, when $\eta_s = 1$ the proposed method suffers from a severe performance loss. Precisely, Precision@H2 reaches its best value of 0.814 when $\eta_s = 1.2$ and $\eta_d = 0.2$, while it is only 0.206 when $\eta_s = 1$ and $\eta_d = 1$. Compared with the un-balanced $S^{t}$, the proposed balanced similarity $\tilde{S}^{t}$ thus gains a substantial improvement. To verify the aforementioned Analysis 1 and Analysis 2, we further visualize the learned binary codes in the last training stage via t-SNE (Maaten and Hinton 2008), as shown in Fig. 4. In the un-balanced setting, though the discretely optimized binary codes $B^{t}_{e}$ (a), $B^{t}_{s}$ (b) and the linearly mapped binary codes $\mathrm{sgn}(W^{tT} X^{t}_{s})$ (c) are clustered, each cluster is mixed with items from different classes, and only four out of ten clusters are formed, each close to the others. That is to say, the majority of the Hamming codes are the same, which conforms with Analysis 1. However, under the balanced setting, both $B^{t}_{e}$ and $B^{t}_{s}$ form ten separated clusters without mixed items in each cluster, which conforms with Analysis 2. In this situation, the hash functions $W^{t}$ are well deduced from $B^{t}_{s}$, with the hash codes in Fig. 4 (f) being more discriminative.
Scaling up $\sigma^{t}$. As aforementioned, an alternative approach to solving the data-imbalance problem in Analysis 1 is to keep a balance between term 1 and term 2 in Eq. 3 by scaling up the parameter $\sigma^{t}$. To test the feasibility of this scheme, we plot the values of Precision@H2 with $\sigma^{t}$ varying over a large range in Fig. 5. Intuitively, scaling up $\sigma^{t}$ affects the performance considerably. Quantitatively, when the value of $\sigma^{t}$ is set to 10,000, Precision@H2 achieves its best value, i.e., 0.341. We argue that this scheme has two drawbacks. First, it suffers from unsatisfactory performance: as shown in Tab. 2, when the hash bit is 64, the proposed BSODH achieves 0.814 in terms of Precision@H2 on MNIST, which is more than 2.5 times better than scaling up $\sigma^{t}$. Second, scaling up $\sigma^{t}$ also makes it hard to choose an appropriate value due to the large range of $\sigma^{t}$; to decide the best value, extensive experiments have to be repeated, which is infeasible in online learning. In contrast, $\sigma^{t}$ is limited to $[0, 1]$ under the proposed BSODH, so it is much more convenient to choose an appropriate value for $\sigma^{t}$.
Convergence of B t s . Each time when the new streaming data arrives, B t s is updated based on iterative process, as shown in lines 11−15 in Alg.1. Fig.6 shows the convergence ability of the proposed BSODH on the input streaming data at t-stage. As can be seen, when t ≤ 2, it merely takes two iterations to get convergence. What's more, it costs only one iteration to finish updating B t s when t > 2, which validates not only the convergence ability, but also the efficiency of the proposed BSODH.
Conclusions
In this paper, we present a novel supervised OH method, termed BSODH. The proposed BSODH learns the correlation of binary codes between the newly streaming data and the existing database via a discrete optimization, which is the first to the best of our knowledge. To this end, first we use an asymmetric graph regularization to preserve the similarity in the produced Hamming space. Then, to reduce the quantization error, we mathematically formulate the optimization problem and derive the discrete optimal solutions. Finally, to solve the data-imbalance problem, we propose a balanced similarity, where two equilibrium factors are introduced to balance the similar/dissimilar weights. Extensive experiments on three benchmarks demonstrate that our approach merits in both effectiveness and efficiency over several state-of-the-art OH methods. | 4,912 |
1901.09970 | 2912634655 | In this paper, we propose an auto-encoder based generative neural network model whose encoder compresses the inputs into vectors in the tangent space of a special Lie group manifold: upper triangular positive definite affine transform matrices (UTDATs). UTDATs are representations of Gaussian distributions and can straightforwardly generate Gaussian distributed samples. Therefore, the encoder is trained together with a decoder (generator) which takes Gaussian distributed latent vectors as input. Compared with related generative models such as variational auto-encoder, the proposed model incorporates the information on geometric properties of Gaussian distributions. As a special case, we derive an exponential mapping layer for diagonal Gaussian UTDATs which eliminates matrix exponential operator compared with general exponential mapping in Lie group theory. Moreover, we derive an intrinsic loss for UTDAT Lie group which can be calculated as l-2 loss in the tangent space. Furthermore, inspired by the Lie group theory, we propose to use the Lie algebra vectors rather than the raw parameters (e.g. mean) of Gaussian distributions as compressed representations of original inputs. Experimental results verity the effectiveness of the proposed new generative model and the benefits gained from the Lie group structural information of UTDATs. | GANs @cite_13 @cite_16 @cite_17 @cite_15 are proven effective in generating photo-realistic images in recent developments of neural networks. Because of the adversarial training approach, it is difficult for GANs to map inputs to latent vectors. Although some approaches @cite_7 @cite_14 are proposed to address this problem, it still remains open and requires further investigation. Compared to GANs, VAEs @cite_6 @cite_9 are generative models which can easily map an input to its corresponding latent vector. This advantage enables VAEs to be either used as data compressors or employed in application scenarios where manipulation of the latent space is required @cite_4 @cite_12 . Compared with AEs @cite_3 , VAEs encode inputs to Gaussian distributions instead of deterministic latent vectors, and thus enable them to generate examples. On one hand, Gaussian distributions do not form a vector space. Naively treating them as vectors will ignore its geometric properties. On the other hand, most machine learning models including neural networks are designed to work with vector outputs. To incorporate the geometric properties of Gaussian distributions, the type of space of Gaussian distributions needs to be identified first; then corresponding techniques from geometric theories will be adopted to design the neural networks. | {
"abstract": [
"",
"High-level manipulation of facial expressions in images --- such as changing a smile to a neutral expression --- is challenging because facial expression changes are highly non-linear, and vary depending on the appearance of the face. We present a fully automatic approach to editing faces that combines the advantages of flow-based face manipulation with the more recent generative capabilities of Variational Autoencoders (VAEs). During training, our model learns to encode the flow from one expression to another over a low-dimensional latent space. At test time, expression editing can be done simply using latent vector arithmetic. We evaluate our methods on two applications: 1) single-image facial expression editing, and 2) facial expression interpolation between two images. We demonstrate that our method generates images of higher perceptual quality than previous VAE and flow-based methods.",
"The ability of the Generative Adversarial Networks (GANs) framework to learn generative models mapping from simple latent distributions to arbitrarily complex data distributions has been demonstrated empirically, with compelling results showing that the latent space of such generators captures semantic variation in the data distribution. Intuitively, models trained to predict these semantic latent representations given data may serve as useful feature representations for auxiliary problems where semantics are relevant. However, in their existing form, GANs have no means of learning the inverse mapping -- projecting data back into the latent space. We propose Bidirectional Generative Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and demonstrate that the resulting learned feature representation is useful for auxiliary supervised discrimination tasks, competitive with contemporary approaches to unsupervised and self-supervised feature learning.",
"In just three years, Variational Autoencoders (VAEs) have emerged as one of the most popular approaches to unsupervised learning of complicated distributions. VAEs are appealing because they are built on top of standard function approximators (neural networks), and can be trained with stochastic gradient descent. VAEs have already shown promise in generating many kinds of complicated data, including handwritten digits, faces, house numbers, CIFAR images, physical models of scenes, segmentation, and predicting the future from static images. This tutorial introduces the intuitions behind VAEs, explains the mathematics behind them, and describes some empirical behavior. No prior knowledge of variational Bayesian methods is assumed.",
"",
"High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such “autoencoder” networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data.",
"",
"In this paper, we propose the Self-Attention Generative Adversarial Network (SAGAN) which allows attention-driven, long-range dependency modeling for image generation tasks. Traditional convolutional GANs generate high-resolution details as a function of only spatially local points in lower-resolution feature maps. In SAGAN, details can be generated using cues from all feature locations. Moreover, the discriminator can check that highly detailed features in distant portions of the image are consistent with each other. Furthermore, recent work has shown that generator conditioning affects GAN performance. Leveraging this insight, we apply spectral normalization to the GAN generator and find that this improves training dynamics. The proposed SAGAN achieves the state-of-the-art results, boosting the best published Inception score from 36.8 to 52.52 and reducing Frechet Inception distance from 27.62 to 18.65 on the challenging ImageNet dataset. Visualization of the attention layers shows that the generator leverages neighborhoods that correspond to object shapes rather than local regions of fixed shape.",
"We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.",
"Colorization is an ambiguous problem, with multiple viable colorizations for a single grey-level image. However, previous methods only produce the single most probable colorization. Our goal is to model the diversity intrinsic to the problem of colorization and produce multiple colorizations that display long-scale spatial co-ordination. We learn a low dimensional embedding of color fields using a variational autoencoder (VAE). We construct loss terms for the VAE decoder that avoid blurry outputs and take into account the uneven distribution of pixel colors. Finally, we build a conditional model for the multi-modal distribution between grey-level image and the color field embeddings. Samples from this conditional model result in diverse colorization. We demonstrate that our method obtains better diverse colorizations than a standard conditional variational autoencoder (CVAE) model, as well as a recently proposed conditional generative adversarial network (cGAN).",
"One of the challenges in the study of generative adversarial networks is the instability of its training. In this paper, we propose a novel weight normalization technique called spectral normalization to stabilize the training of the discriminator. Our new normalization technique is computationally light and easy to incorporate into existing implementations. We tested the efficacy of spectral normalization on CIFAR10, STL-10, and ILSVRC2012 dataset, and we experimentally confirmed that spectrally normalized GANs (SN-GANs) is capable of generating images of better or equal quality relative to the previous training stabilization techniques."
],
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_7",
"@cite_9",
"@cite_6",
"@cite_3",
"@cite_15",
"@cite_16",
"@cite_13",
"@cite_12",
"@cite_17"
],
"mid": [
"",
"2558578635",
"2412320034",
"2467604901",
"",
"2100495367",
"",
"2950893734",
"2099471712",
"2584890299",
"2963836885"
]
} | Lie Group Auto-Encoder | Unsupervised deep learning is an active research area which has shown considerable progress recently. Many deep neural network models have been invented to address various problems. For example, auto-encoders (AEs) (Hinton & Salakhutdinov, 2006) are used to learn efficient data codings, i.e., latent representations. (Figure 1: Overview of the proposed LGAE model. An input example x is encoded into a vector in the tangent Lie algebra of the Lie group manifold formed by Gaussian distributions. Then, the vector is mapped to a UTDAT representation of a Gaussian distribution. A latent vector is then sampled from this Gaussian distribution and fed to a decoder. The whole process is differentiable and optimized using stochastic gradient descent.) Generative adversarial networks (GANs) (Goodfellow et al., 2014) are powerful at generating photo-realistic images from latent variables. While having achieved numerous successes, both AEs and GANs are not without their disadvantages. On one hand, AEs are good at obtaining a compressed latent representation of a given input, but find it hard to generate realistic samples randomly. On the other hand, GANs are good at randomly generating realistic samples, but find it hard to map a given input to its latent space representation. As a variant of AE, variational auto-encoders (VAEs) (Kingma & Welling, 2013) are another kind of generative model which can also obtain the latent representation of a given input. The architectures of VAEs are similar to AEs except that the encoders encode inputs into Gaussian distributions instead of deterministic vectors. Trained with a Bayesian framework, the decoder of a VAE is able to generate random samples from latent vectors which are Gaussian distributed random noise. As a result, many applications that require manipulating the latent space representations are also feasible with VAEs.
One major problem of VAEs is that the geometric structure of Gaussian distributions is not considered. Traditional machine learning models including neural networks as the encoders of VAEs are designed for vector outputs. However, Gaussian distributions do not form a vector space. This can be easily shown because the parameter vectors are not closed under regular vector operators such as vector subtraction. The variance-covariance matrix must be positive definite but simple vector subtraction will break this requirement. Naively treating Gaussians as parameter vectors ignores the geometric structure information of the space formed by them. To exploit the geometric structural property, we need to identify what kind of space it is. Gong et al. (Gong et al., 2009) reveals that Gaussians can be represented as a special kind of affine transformations which are identified as a Lie group.
In this paper, we view Gaussian distributions from a geometrical perspective using Lie group theory, and propose a novel generative model using the encoder-decoder architecture. The overview of our model is presented in Figure 1. As illustrated therein, the central part of our model is a special Lie group: upper triangular positive definite affine transform matrices (UTDATs). On the one hand, UTDATs are matrix representations of Gaussian distributions. That's to say, there is a one-to-one map between UTDATs and Gaussian distributions. Therefore, we can analyze the geometric properties of Gaussian distributions by analyzing the space of UTDAT. Also, we can sample from Gaussian distributions by matrix-vector multiplying UTDAT with a standard Gaussian noise vector. On the other hand, UTDATs form a Lie group. Therefore, one can work on the tangent spaces (which are Lie algebras) first, then project back to Lie group by exponential mapping. Since Lie algebras are vector spaces, they are suitable for most neural network architectures. As a result, the encoder in our model outputs vectors in the Lie algebra space. Those vectors are then projected to UTDATs by a proposed exponential mapping layer. Latent vectors are then generated by UTDATs and fed to a decoder. Specifically, for Gaussian distributions with diagonal variance-covariance matrices, we derive a closed form solution of exponential mapping which is fast and differentiable. Therefore, our model can be trained by stochastic gradient descents.
Gaussians as Lie group
Let $v_0$ be a standard $n$-dimensional Gaussian random vector, $v_0 \sim N(0, I)$; then any new vector $v = A v_0 + \mu$ which is affine-transformed from $v_0$ is also Gaussian distributed, $v \sim N(\mu, \Sigma)$, where $\Sigma = A A^{T}$. That is, any affine transformation can produce a Gaussian distributed random vector from the standard Gaussian. Furthermore, if we restrict the affine transformation to be $v = U v_0 + \mu$, where $U$ is upper triangular and invertible (i.e., it has positive eigenvalues only), then conversely we can find a unique $U$ for any non-degenerate $\Sigma$ such that $U U^{T} = \Sigma$. In other words, non-degenerate Gaussian distributions are isomorphic to UTDATs. Let $G$ denote the matrix form of the following UTDAT:
$G = \begin{bmatrix} U & \mu \\ 0 & 1 \end{bmatrix}, \quad (1)$
then we can identify the type of spaces of Gaussian distributions by identifying the type of spaces of G.
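A small NumPy sketch of this correspondence (our own construction, not code from the paper; the row/column-flip trick used to obtain an upper-triangular factor with $U U^T = \Sigma$ is one possible choice):

import numpy as np

def utdat(mu, Sigma):
    # build G of Eq. 1: find an upper-triangular U with U U^T = Sigma
    n = len(mu)
    J = np.eye(n)[::-1]                        # exchange matrix
    L = np.linalg.cholesky(J @ Sigma @ J)      # lower-triangular factor
    U = J @ L @ J                              # flipped back: upper triangular, U U^T = Sigma
    G = np.eye(n + 1)
    G[:n, :n] = U
    G[:n, n] = mu
    return G

mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.5], [0.5, 1.0]])
G = utdat(mu, Sigma)

# sampling: append 1 to a standard Gaussian vector and apply G
v = np.random.randn(100000, 2)
z = (G @ np.vstack([v.T, np.ones(len(v))]))[:2].T
# np.mean(z, 0) and np.cov(z.T) should be close to mu and Sigma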
According to Lie theory (Knapp, 2002), invertible affine transformations form a Lie group with matrix multiplication and inversion as its group operator. It can be easily verified that UTDATs are closed under matrix multiplication and inversion. So UTDATs form a subgroup of the general affine group. Since any subgroup of a Lie group is still a Lie group, UTDATs form a Lie group. In consequence, Gaussian distributions are elements of a Lie group.
A Lie group is also a differentiable manifold, with the property that the group operators are compatible with the smooth structure. An abstract Lie group has many isomorphic instances. Each of them is called a representation. In Lie theory, matrix representation is a useful tool for structure analysis. In our case, UTDAT is the matrix representation of the abstract Lie group formed by Gaussian distributions.
To exploit the geometric property of Lie group manifolds, the most important tools are the logarithmic mapping, the exponential mapping and the geodesic distance. At a specific point of the group manifold, we can obtain a tangent space, which is called a Lie algebra in Lie theory. The Lie group manifold and its Lie algebras are analogous to a curve and its tangent lines in a Euclidean space. Tangent spaces (i.e., Lie algebras) of a Lie group manifold are vector spaces. In our case, for $n$-dimensional Gaussians, the corresponding Lie group is $\frac{1}{2}n(n+3)$-dimensional. Accordingly, its tangent spaces are $\mathbb{R}^{\frac{1}{2}n(n+3)}$. Note that, at each point of the group manifold, we have a Lie algebra. We can project a point $G$ of the UTDAT group manifold to the tangent space at a specific point $G_0$ by the logarithmic mapping defined as
$g = \log(G_0^{-1} G), \quad (2)$
where the log operator at the right hand side is matrix logarithm operator. Note that the points are projected to a vector space even though the form of the results are still matrices, which means that we will flatten them to vectors wherever vectors are required. Specifically, the point G 0 will be projected to 0 at its own tangent Lie algebra.
Conversely, the exponential mapping projects points in a tangent space back to the Lie group manifold. Let g be a point in the tangent space of G 0 , then the exponential mapping is defined as
$G = G_0 \exp(g), \quad (3)$
where the exp operator at the right hand side is matrix exponential operator. For two points G 1 and G 2 of a Lie group manifold, the geodesic distance is the length of the shortest path connecting them along the manifold, which is defined as
$d_{LG}(G_1, G_2) = \| \log(G_1^{-1} G_2) \|_F, \quad (4)$
where $\|\cdot\|_F$ is the Frobenius norm.
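These three operations can be sketched with SciPy's matrix logarithm and exponential as follows. This is a plain numerical illustration, not the differentiable layers used later in the model, and logm may return tiny imaginary round-off in practice.

import numpy as np
from scipy.linalg import expm, inv, logm

def log_map(G0, G):
    # Eq. 2: project G onto the tangent Lie algebra at G0
    return logm(inv(G0) @ G)

def exp_map(G0, g):
    # Eq. 3: map a tangent element g back onto the group manifold
    return G0 @ expm(g)

def geodesic_dist(G1, G2):
    # Eq. 4: length of the shortest path on the manifold between G1 and G2
    return np.linalg.norm(logm(inv(G1) @ G2), 'fro')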
4. Lie group auto-encoder 4.1. Overall architecture
Suppose we want to generate samples from a complex distribution P (X) where X ∈ R D . One way to accomplish this task is to generate samples from a joint distribution P (Z, X) first, then discard the part belonging to Z and keep the part belonging to X only. This seems giving us no benefit at first sight because it is usually difficult to sample from P (Z, X) if sampling from P (X) is hard. However, if we decompose the joint distribution with a Bayesian formula
$P(Z, X) = P(X|Z) P(Z), \quad (5)$
then the joint distribution can be sampled by a two step process: Firstly sample from P (Z), then sample from P (X|Z). The benefits come from the fact that both P (Z) and P (X|Z) may be much easier to sample from.
Estimating parameters in P (Z, X) as modeled in Eq. 5 is not easy because samples from the joint distribution are required; however, in most scenarios, we only have samples {x i : i = 1, 2, · · · , n} from the marginal distribution P (X). To overcome this problem, we augment each example x i from the marginal distribution to several examples {(z ij , x i ) : j = 1, 2, · · · , m} in the joint distribution by sampling z ij from the conditional distribution P (Z|X).
Note that Z is an auxiliary random vector helping us perform sampling from the marginal distribution P (X), so it can be any kind of distribution but should be easy to sample from. In this paper, we let P (Z) ∼ N (0, I).
In practice, P (Z|X) should be chosen according to the type of data space of X. For example, if X is continuous, we can model P (X|Z) as a Gaussian distribution with a fixed isotropic variance-covariance matrix. For binary and categorical typed X, Bernoulli and multinomial distributions can be used, respectively.
Given P (Z) and P (X|Z), P (Z|X) is usually complex and thus difficult to sample from. So we sample z ij from another distribution Q(Z|X) instead. In this paper, we model Q(Z|X) as Gaussian distributions with diagonal variance-covariance matrices. Q(Z|X) should satisfy the following objectives as much as possible:
• Q(Z|X) should approximate P (Z|X). Therefore, given z ij sampled from Q(Z|x i ),x i sampled from P (X|Z) should reconstruct x i .
• {z ij : i = 1, 2, · · · , n; j = 1, 2, · · · , m} should fit the marginal P (Z) well.
To optimize the first objective, we minimize the reconstruction loss L rec , which is the mean squared error (MSE) for continuous X and cross-entropy for binary and categorical X.
For the second objective, directly optimizing it using z ij is not practical because we need a large sample size m for P (Z|X) to accurately estimate model parameters. The total sample size of z ij , which is mn, is too big for computation. To overcome this problem, we consider Gaussian distributions as points in the corresponding Lie group as we discussed in section 3. Note that the set {z ij , i = 1, 2, · · · , n; j = 1, 2, · · · , m} is sampled from a set of Gaussian distributions Q(Z|x i ). The second objective implies the average distribution of those Gaussians should be P (Z), which is a standard Gaussian. However, Gaussian distributions, which are equivalently represented as UTDATs, do not conform to the commonly used Euclidean geometry. Instead, we need to find the intrinsic mean of those Gaussians through Lie group geometry because Gaussian distributions have a Lie group structure. We derive a Lie group intrinsic loss L LG to optimize the second objective. The details of L LG will be present in subsection 4.3.
In our proposed Lie group auto-encoder (LGAE), P (X|Z) is called a decoder or generator, and is implemented with neural networks. Q(Z|X) is also implemented with neural networks. Note that Q(Z|X) is a Gaussian distribution, so the corresponding neural network is a function whose output is a Gaussian distribution. Neural networks as well as many other machine learning models are typically designed for vector outputs. Being intrinsically a Lie group as discussed in section 3, Gaussian distributions do not form a vector space. To best exploit the geometric structure of the Gaussians, we first estimate corresponding points g i in the tangent Lie algebra at the position of the intrinsic mean of {G i , i = 1, 2, · · · , n} using neural networks. As L LG requires the intrinsic mean to be the standard Gaussian P (Z) = N (0, I), whose UTDAT representation is the identity matrix I, the corresponding point g i in the tangent space of G i is g i = log(G i ).
Since {g i , i = 1, 2, · · · , n} are in a vector space, they can be well estimated by neural networks. g i s are then projected to the Lie group by an exponential mapping layer
$G_i = \exp(g_i). \quad (7)$
For diagonal Gaussians, we derive a closed-form solution of the exponential mapping which eliminates the requirement of matrix exponential operator. The details will be presented in subsection 4.2.
The whole architecture of LGAE is summarized in Figure 1.
A typical forward procedure works as follows: Firstly, the encoder encodes an input x i into a point g i in the tangent Lie algebra. The exponential mapping layer then projects g i to the UTDAT matrix G i of the Lie group manifold. A latent vector z is then sampled from the Gaussian distribution represented by G i by multiplying G i with a standard Gaussian noise vector. The details of the sampling operation will be described in section 4.4. The decoder (or generator) network then generatesx i which is the reconstructed version of x i . The whole network is optimized by minimizing the following loss
$L = \lambda L_{LG} + L_{rec}, \quad (8)$
where L LG and L rec are the Lie group intrinsic loss and reconstruction loss, respectively. Because the whole forward process and the loss are differentiable, the optimization can be achieved by stochastic gradient descent method.
Exponential mapping layer
We derive the exponential mapping $G_i = \exp(g_i)$ for diagonal Gaussians. When $G_i \sim N(\mu_i, \Sigma_i)$ is diagonal, we have
$\Sigma_i = \mathrm{diag}(\sigma_{i1}^2, \sigma_{i2}^2, \dots, \sigma_{iK}^2). \quad (9)$
The following theorem gives the forms of G i and g i , as well as their relationship.
Theorem 1. Let $G_i$ be the UTDAT and $g_i$ be the corresponding vector in the tangent Lie algebra at the standard Gaussian. Then
$G_i = \begin{bmatrix} \sigma_{i1} & & & \mu_{i1} \\ & \sigma_{i2} & & \mu_{i2} \\ & & \ddots & \vdots \\ 0 & & & 1 \end{bmatrix} \quad (10) \qquad g_i = \begin{bmatrix} \phi_{i1} & & & \theta_{i1} \\ & \phi_{i2} & & \theta_{i2} \\ & & \ddots & \vdots \\ 0 & & & 0 \end{bmatrix}, \quad (11)$
where
$\phi_{ik} = \log(\sigma_{ik}), \quad (12)$
$\theta_{ik} = \frac{\mu_{ik} \log(\sigma_{ik})}{\sigma_{ik} - 1}. \quad (13)$
Proof. By the definition of UTDAT, we can straightforwardly get Eq. 10. Let H = G i − I. Using the series form of matrix logarithm, we have
$g_i = \log(G_i) = \log(I + H) = \sum_{t=1}^{\infty} (-1)^{t-1} \frac{H^{t}}{t}. \quad (14)$
By substituting H into 14, we get Eq. 11 and the following:
$\phi_{ik} = \sum_{t=1}^{\infty} (-1)^{t-1} \frac{(\sigma_{ik} - 1)^{t}}{t} = \log(\sigma_{ik})$ and $\theta_{ik} = \sum_{t=1}^{\infty} (-1)^{t-1} \frac{\mu_{ik} (\sigma_{ik} - 1)^{t-1}}{t} = \frac{\mu_{ik} \log(\sigma_{ik})}{\sigma_{ik} - 1}$.
Alternatively, after we identify g i has the form as in Eq. 11, we can derive the exponential mapping by the definition of matrix exponential
$G_i = \exp(g_i) = \sum_{t=0}^{\infty} \frac{g_i^{t}}{t!} = \begin{bmatrix} \sum_{t=0}^{\infty} \frac{\phi_{i1}^{t}}{t!} & \theta_{i1} \sum_{t=1}^{\infty} \frac{\phi_{i1}^{t-1}}{t!} \\ & \ddots & \vdots \\ 0 & & 1 \end{bmatrix} = \begin{bmatrix} e^{\phi_{i1}} & \frac{\theta_{i1}}{\phi_{i1}} \big( \sum_{t=0}^{\infty} \frac{\phi_{i1}^{t}}{t!} - 1 \big) \\ & \ddots & \vdots \\ 0 & & 1 \end{bmatrix} = \begin{bmatrix} e^{\phi_{i1}} & \frac{\theta_{i1} (e^{\phi_{i1}} - 1)}{\phi_{i1}} \\ & \ddots & \vdots \\ 0 & & 1 \end{bmatrix}.$
The exponential mapping layer is expressed as
$\sigma_{ik} = e^{\phi_{ik}}, \quad (15)$
$\mu_{ik} = \frac{\theta_{ik} (e^{\phi_{ik}} - 1)}{\phi_{ik}}. \quad (16)$
Note that if $\sigma_{ik} = 1$ (i.e., $\phi_{ik} = 0$), then $\mu_{ik} = \theta_{ik}$ due to the fact that $\lim_{x \to 0} \frac{\log(x+1)}{x} = 1$ or, equivalently, $\lim_{x \to 0} \frac{e^{x} - 1}{x} = 1$.
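A minimal NumPy version of this exponential mapping layer (Eqs. 15-16), with the $\phi_{ik} \to 0$ limit handled explicitly; in the actual model the same element-wise formulas would be written with a differentiable framework:

import numpy as np

def exp_mapping_layer(phi, theta, eps=1e-6):
    # Eq. 15: sigma = exp(phi)
    sigma = np.exp(phi)
    # Eq. 16: mu = theta (exp(phi) - 1) / phi, with the limit mu -> theta as phi -> 0
    safe_phi = np.where(np.abs(phi) > eps, phi, 1.0)
    mu = np.where(np.abs(phi) > eps, theta * np.expm1(phi) / safe_phi, theta)
    return sigma, mu

phi = np.array([[0.0, 0.3, -1.2]])    # Lie-algebra outputs of the encoder (batch x K)
theta = np.array([[0.5, -0.2, 2.0]])
sigma, mu = exp_mapping_layer(phi, theta)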
Lie group intrinsic loss
Let G i be the UTDAT representation of P (Z|x i ). The intrinsic mean G * of those G i s is defined as
$G^{*} = \arg\min_{G} \sum_{i=1}^{n} d_{LG}^{2}(G, G_i). \quad (17)$
The second objective in the previous subsection requires that G * = I, which is equivalent to minimizing the loss
$L_{LG} = \sum_{i=1}^{n} d_{LG}^{2}(I, G_i) \quad (18)$
$= \sum_{i=1}^{n} \| \log(G_i) \|_F^{2} = \sum_{i=1}^{n} \| g_i \|_F^{2}. \quad (19)$
So the intrinsic loss plays a role of regularization during the training. Since the tangent Lie algebra is a vector space, the Frobenius norm is equivalent to the l 2 -norm if we flatten matrix g i to a vector. Eq. 18 plays a role of regularization which requires all the Gaussians G i to be grouped together around the standard Gaussian. Eq. 19 shows that we can regularize on the tangent Lie algebra instead, which avoids the matrix logarithm operation. Specifically, for diagonal Gaussians, we have
$L_{LG} = \sum_{i=1}^{n} \| g_i \|_F^{2} \quad (20)$
$= \sum_{i=1}^{n} \sum_{k=1}^{K} (\phi_{ik}^{2} + \theta_{ik}^{2}). \quad (21)$
Sampling from Gaussians
According to the properties of Gaussian distributions discussed in Section 3, sampling from an arbitrary Gaussian distribution can be achieved by transforming a standard Gaussian sample with the corresponding UTDAT, i.e.,
$\begin{bmatrix} z_{ij} \\ 1 \end{bmatrix} = G_i \begin{bmatrix} v_{ij} \\ 1 \end{bmatrix}, \quad (22)$
where $v_{ij}$ is sampled from $V \sim N(0, I)$. Note that this sampling operator is differentiable, which means that gradients can be back-propagated through the sampling layer to the previous layers. When $G_i$ represents a diagonal Gaussian, we have
$z_{ij} = \sigma \odot v_{ij} + \mu, \quad (23)$
where $\sigma = [\sigma_{i1}, \cdots, \sigma_{iK}]^{T}$, $\mu = [\mu_{i1}, \cdots, \mu_{iK}]^{T}$ and $\odot$ is the element-wise multiplication. Therefore, the reparameterization trick in (Kingma & Welling, 2013) is a special case of sampling from UTDAT-represented Gaussian distributions.
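The diagonal sampling step (Eq. 23) and the diagonal intrinsic loss (Eq. 21) reduce to a few lines; the sketch below uses NumPy for illustration only, with toy values for the encoder outputs:

import numpy as np

def sample_latent(mu, sigma):
    # Eq. 23: z = sigma (element-wise) v + mu, with v ~ N(0, I); differentiable w.r.t. (mu, sigma)
    v = np.random.randn(*mu.shape)
    return sigma * v + mu

def lie_group_intrinsic_loss(phi, theta):
    # Eq. 21: sum of phi^2 + theta^2 over the batch and the K code dimensions
    return float(np.sum(phi ** 2 + theta ** 2))

mu = np.array([[0.1, -0.4, 1.0]])
sigma = np.array([[1.2, 0.8, 0.5]])
phi, theta = np.log(sigma), mu * np.log(sigma) / (sigma - 1.0)   # Eqs. 12-13
z = sample_latent(mu, sigma)
loss_reg = lie_group_intrinsic_loss(phi, theta)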
Discussion
Although our proposed LGAE and VAE both (Kingma & Welling, 2013) have an encoder-decoder based architecture, they are essentially different. The loss function of VAE, which is
$L = KL(Q(Z|X) \,\|\, N(0, I)) + L_{rec}, \quad (24)$
is derived from the Bayesian lower bound of the marginal likelihood of the data. In contrast, the loss function of LGAE is derived from a geometrical perspective. Further, the Lie group intrinsic loss L LG in Eq. (8) is a real metric, but the KL-divergence in Eq. (24) is not. For examples, the KL-divergence is not symmetric, nor does it satisfy the triangle inequality.
Further, while both LGAE and VAE estimate Gaussian distributions using neural networks, VAE does not address the non-vector output problem. As a contrast, we systematically address this problem and design an exponential mapping layer to solve it. One requirement arising from the non-vector property of Gaussian distributions is that the variance parameters be positive. To satisfy this requirement, (Kingma & Welling, 2013) estimate the logarithm of variance instead. This technique is equivalent to performing the exponential mapping for the variance part. Without a theoretical foundation, it was trial-and-error to choose exp over other activations such as relu and softplus. Our theoretical results confirms that exp makes more sense than others. Moreover, our theoretical results further show that a better way is to consider a Gaussian distribution as a whole rather than treat its variance part only and address the problem in an empirical way.
Because the points of the tangent Lie algebra are already vectors, we propose to use them as compressed representations of the input data examples. These vectors contain information of the Gaussian distributions and already incorporate the Lie group structural information of the Gaussian distributions; therefore, they are more informative than either a single mean vector or concatenating the mean vector and variance vector naively together.
6. Experiments
Datasets
The proposed LGAE model is evaluated on two benchmark datasets:
• MNIST: The MNIST dataset (Lecun et al., 1998) consists of a training set of 60,000 examples of handwritten digits, and a test set of 10,000 examples. The digits have been size-normalized and centered into fixed-size 28 × 28 images.
• SVHN: The SVHN dataset (Netzer et al., 2011) is also a collection of images of digits. But the background of image is more clutter than MNIST, so it is significantly harder to classify.
Settings
Since VAE (Kingma & Welling, 2013) is the most related model of LGAE, we use VAE as a baseline for comparisons. We follow the exact experimental settings of (Kingma & Welling, 2013). That is, MLP with 500 hidden units are used as encoder and decoder. In each hidden layer, non-linear activation tanh are applied. The parameters of neurons are initialized by random sampling from N (0, 0.01) and are optimized by Adagrad (Duchi et al., 2011) (Paszke et al., 2017). Note that there is no matrix operation in the LGAE implementation thanks to the element-wise closed-form solution presented in Section 4.2 and 4.3. Therefore, the run-time is almost the same as VAE. On a Nvidia GeForce GTX 1080 graphic card, it takes about 12.5 and 25 seconds to train on the training set and test on both the training and test sets for one epoch with mini-batches of size 100.
Results
In the first experiment, we investigate the effectiveness of the proposed exponential mapping layer. We design a variant of LGAE which uses the same loss as VAE; i.e., we replace the Lie group intrinsic loss with KL divergence but keep the exponential mapping layer in the model.
Figure 3. Generated images from randomly sampled latent vectors for the MNIST dataset (panels: LGAE with K = 2, 5, 10, 20). The upper and lower rows are generated by the VAE and LGAE models, respectively.
which indicates the superiority of the Lie group intrinsic loss over KL divergence. Moreover, the results also show that naively concatenating covariance with mean does not contribute much to the performances, and sometimes even hurts it. This phenomenon indicates that treating Gaussians as vectors cannot fully extract important geometric structural information from the manifold they formed.
To illustrate the generative capability of LGAE, we randomly generate images using the model and plot them along with images generated from VAE. Figures 3 and 4 show the generated images from both models trained on the MNIST and SVHN datasets, respectively.
Conclusions
We propose Lie group auto-encoder (LGAE), which is a encoder-decoder type of neural network model. Similar to VAE, the proposed LGAE model has the advantages of generating examples from the training data distribution, as well as mapping inputs to latent representations. The Lie group structure of Gaussian distributions is systematically exploited to help design the network. Specifically, we design an exponential mapping layer, derive a Lie group intrinsic loss, and propose to use Lie algebra vectors as latent representations. Experimental results on the MNIST and SVHN datasets testify to the effectiveness of the proposed method.
Figure 4. Generated images from randomly sampled latent vectors for the MNIST dataset (panels: VAE and LGAE with K = 2, 5, 10). The upper and lower rows are generated by the VAE and LGAE models, respectively. | 4,042
1901.09970 | 2912634655 | In this paper, we propose an auto-encoder based generative neural network model whose encoder compresses the inputs into vectors in the tangent space of a special Lie group manifold: upper triangular positive definite affine transform matrices (UTDATs). UTDATs are representations of Gaussian distributions and can straightforwardly generate Gaussian distributed samples. Therefore, the encoder is trained together with a decoder (generator) which takes Gaussian distributed latent vectors as input. Compared with related generative models such as variational auto-encoder, the proposed model incorporates the information on geometric properties of Gaussian distributions. As a special case, we derive an exponential mapping layer for diagonal Gaussian UTDATs which eliminates matrix exponential operator compared with general exponential mapping in Lie group theory. Moreover, we derive an intrinsic loss for UTDAT Lie group which can be calculated as l-2 loss in the tangent space. Furthermore, inspired by the Lie group theory, we propose to use the Lie algebra vectors rather than the raw parameters (e.g. mean) of Gaussian distributions as compressed representations of original inputs. Experimental results verity the effectiveness of the proposed new generative model and the benefits gained from the Lie group structural information of UTDATs. | Geometric theories have been applied to analyze image feature space. In @cite_10 , covariance matrices are used as image feature representations for object detection. Because covariance matrices are symmetric positive definite (SPD) matrices, which form a Riemannian manifold, a corresponding boosting algorithm is designed for SPD inputs. In @cite_2 , Gaussian distributions are used to model image features and the input space is analyzed using Lie group theory. | {
"abstract": [
"We present a new algorithm to detect pedestrian in still images utilizing covariance matrices as object descriptors. Since the descriptors do not form a vector space, well known machine learning techniques are not well suited to learn the classifiers. The space of d-dimensional nonsingular covariance matrices can be represented as a connected Riemannian manifold. The main contribution of the paper is a novel approach for classifying points lying on a connected Riemannian manifold using the geometry of the space. The algorithm is tested on INRIA and DaimlerChrysler pedestrian datasets where superior detection rates are observed over the previous approaches.",
"This paper introduces a feature descriptor called shape of Gaussian (SOG), which is based on a general feature descriptor design framework called shape of signal probability density function (SOSPDF). SOSPDF takes the shape of a signal's probability density function (pdf) as its feature. Under such a view, both histogram and region covariance often used in computer vision are SOSPDF features. Histogram describes SOSPDF by a discrete approximation way. Region covariance describes SOSPDF as an incomplete parameterized multivariate Gaussian distribution. Our proposed SOG descriptor is a full parameterized Gaussian, so it has all the advantages of region covariance and is more effective. Furthermore, we identify that SOGs form a Lie group. Based on Lie group theory, we propose a distance metric for SOG. We test SOG features in tracking problem. Experiments show better tracking results compared with region covariance. Moreover, experiment results indicate that SOG features attempt to harvest more useful information and are less sensitive against noise."
],
"cite_N": [
"@cite_10",
"@cite_2"
],
"mid": [
"2116022929",
"2119628306"
]
} | Lie Group Auto-Encoder | Unsupervised deep learning is an active research area which has shown considerable progress recently. Many deep neural network models have been invented to address various problems. For example, auto-encoders (AEs) (Hinton & Salakhutdinov, 2006) are used to learn efficient data codings, i.e., latent representations. (Figure 1: Overview of the proposed LGAE model. An input example x is encoded into a vector in the tangent Lie algebra of the Lie group manifold formed by Gaussian distributions. Then, the vector is mapped to a UTDAT representation of a Gaussian distribution. A latent vector is then sampled from this Gaussian distribution and fed to a decoder. The whole process is differentiable and optimized using stochastic gradient descent.) Generative adversarial networks (GANs) (Goodfellow et al., 2014) are powerful at generating photo-realistic images from latent variables. While having achieved numerous successes, both AEs and GANs are not without their disadvantages. On one hand, AEs are good at obtaining a compressed latent representation of a given input, but find it hard to generate realistic samples randomly. On the other hand, GANs are good at randomly generating realistic samples, but find it hard to map a given input to its latent space representation. As a variant of AE, variational auto-encoders (VAEs) (Kingma & Welling, 2013) are another kind of generative model which can also obtain the latent representation of a given input. The architectures of VAEs are similar to AEs except that the encoders encode inputs into Gaussian distributions instead of deterministic vectors. Trained with a Bayesian framework, the decoder of a VAE is able to generate random samples from latent vectors which are Gaussian distributed random noise. As a result, many applications that require manipulating the latent space representations are also feasible with VAEs.
One major problem of VAEs is that the geometric structure of Gaussian distributions is not considered. Traditional machine learning models, including the neural networks used as the encoders of VAEs, are designed for vector outputs. However, Gaussian distributions do not form a vector space. This can be easily shown because the parameter vectors are not closed under regular vector operators such as vector subtraction: the variance-covariance matrix must be positive definite, but simple vector subtraction will break this requirement. Naively treating Gaussians as parameter vectors ignores the geometric structure information of the space formed by them. To exploit the geometric structural property, we need to identify what kind of space it is. Gong et al. (2009) reveal that Gaussians can be represented as a special kind of affine transformation, which is identified as a Lie group.
In this paper, we view Gaussian distributions from a geometrical perspective using Lie group theory, and propose a novel generative model using the encoder-decoder architecture. The overview of our model is presented in Figure 1. As illustrated therein, the central part of our model is a special Lie group: upper triangular positive definite affine transform matrices (UTDATs). On the one hand, UTDATs are matrix representations of Gaussian distributions. That is to say, there is a one-to-one map between UTDATs and Gaussian distributions. Therefore, we can analyze the geometric properties of Gaussian distributions by analyzing the space of UTDATs. Also, we can sample from Gaussian distributions by multiplying a UTDAT with a standard Gaussian noise vector. On the other hand, UTDATs form a Lie group. Therefore, one can work on the tangent spaces (which are Lie algebras) first, then project back to the Lie group by the exponential mapping. Since Lie algebras are vector spaces, they are suitable for most neural network architectures. As a result, the encoder in our model outputs vectors in the Lie algebra space. Those vectors are then projected to UTDATs by a proposed exponential mapping layer. Latent vectors are then generated by the UTDATs and fed to a decoder. Specifically, for Gaussian distributions with diagonal variance-covariance matrices, we derive a closed-form solution of the exponential mapping which is fast and differentiable. Therefore, our model can be trained by stochastic gradient descent.
3. Gaussians as Lie group
Let $v_0$ be a standard $n$-dimensional Gaussian random vector, $v_0 \sim \mathcal{N}(0, I)$; then any new vector $v = A v_0 + \mu$ which is affine transformed from $v_0$ is also Gaussian distributed, $v \sim \mathcal{N}(\mu, \Sigma)$, where $\Sigma = A A^T$. That is, any affine transformation can produce a Gaussian distributed random vector from the standard Gaussian. Furthermore, if we restrict the affine transformation to be $v = U v_0 + \mu$ where $U$ is upper triangular and invertible (i.e. it has positive eigenvalues only), then conversely we can find a unique $U$ for any non-degenerate $\Sigma$ such that $U U^T = \Sigma$. In other words, non-degenerate Gaussian distributions are isomorphic to UTDATs. Let $G$ denote the matrix form of the following UTDAT:
$$G = \begin{bmatrix} U & \mu \\ 0 & 1 \end{bmatrix}, \quad (1)$$
then we can identify the type of space of Gaussian distributions by identifying the type of space of $G$.
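For illustration, the construction above can be sketched in a few lines of NumPy. This is an illustrative sketch rather than the paper's code: it builds the UTDAT $G$ of Eq. (1) from a given mean and non-degenerate covariance via a reversed Cholesky factorization (one of several valid ways to obtain an upper-triangular factor), and checks that $G$ maps standard Gaussian samples to $\mathcal{N}(\mu, \Sigma)$.

```python
# Illustrative sketch (not from the paper): build the UTDAT G of Eq. (1) from
# (mu, Sigma) and verify that v = U v0 + mu turns N(0, I) samples into N(mu, Sigma).
import numpy as np

def utdat_from_gaussian(mu, Sigma):
    """Return G = [[U, mu], [0, 1]] with U upper triangular and U @ U.T = Sigma."""
    d = len(mu)
    J = np.eye(d)[::-1]                    # exchange (reversal) matrix
    L = np.linalg.cholesky(J @ Sigma @ J)  # lower Cholesky of the reversed Sigma
    U = J @ L @ J                          # reversing rows/cols yields an upper-triangular factor
    G = np.eye(d + 1)
    G[:d, :d] = U
    G[:d, d] = mu
    return G

mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.5], [0.5, 1.0]])
G = utdat_from_gaussian(mu, Sigma)

v0 = np.random.randn(100000, 2)            # samples from N(0, I)
v = v0 @ G[:2, :2].T + G[:2, 2]            # v = U v0 + mu
print(np.allclose(v.mean(0), mu, atol=0.05), np.allclose(np.cov(v.T), Sigma, atol=0.05))
```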
According to Lie theory (Knapp, 2002), invertible affine transformations form a Lie group with matrix multiplication and inversion as its group operator. It can be easily verified that UTDATs are closed under matrix multiplication and inversion. So UTDATs form a subgroup of the general affine group. Since any subgroup of a Lie group is still a Lie group, UTDATs form a Lie group. In consequence, Gaussian distributions are elements of a Lie group.
A Lie group is also a differentiable manifold, with the property that the group operators are compatible with the smooth structure. An abstract Lie group has many isomorphic instances. Each of them is called a representation. In Lie theory, matrix representation is a useful tool for structure analysis. In our case, UTDAT is the matrix representation of the abstract Lie group formed by Gaussian distributions.
To exploit the geometric property of Lie group manifolds, the most important tools are the logarithmic mapping, the exponential mapping and the geodesic distance. At a specific point of the group manifold, we can obtain a tangent space, which is called a Lie algebra in Lie theory. The Lie group manifold and its Lie algebras are analogous to a curve and its tangent lines in a Euclidean space. Tangent spaces (i.e. Lie algebras) of a Lie group manifold are vector spaces. In our case, for $n$-dimensional Gaussians, the corresponding Lie group is $\frac{1}{2}n(n+3)$-dimensional. Accordingly, its tangent spaces are $\mathbb{R}^{\frac{1}{2}n(n+3)}$. Note that, at each point of the group manifold, we have a Lie algebra. We can project a point $G$ of the UTDAT group manifold to the tangent space at a specific point $G_0$ by the logarithmic mapping defined as
$$g = \log(G_0^{-1} G), \quad (2)$$
where the $\log$ operator on the right hand side is the matrix logarithm operator. Note that the points are projected to a vector space even though the results are still in matrix form, which means that we will flatten them to vectors wherever vectors are required. Specifically, the point $G_0$ will be projected to $0$ in its own tangent Lie algebra.
Conversely, the exponential mapping projects points in a tangent space back to the Lie group manifold. Let $g$ be a point in the tangent space of $G_0$; then the exponential mapping is defined as
$$G = G_0 \exp(g), \quad (3)$$
where the $\exp$ operator on the right hand side is the matrix exponential operator. For two points $G_1$ and $G_2$ of a Lie group manifold, the geodesic distance is the length of the shortest path connecting them along the manifold, which is defined as
$$d_{LG}(G_1, G_2) = \left\| \log(G_1^{-1} G_2) \right\|_F, \quad (4)$$
where $\| \cdot \|_F$ is the Frobenius norm.
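These three tools can be sketched directly with SciPy's matrix functions. The snippet below is an illustration under the assumption of well-conditioned inputs, not the authors' implementation.

```python
# Illustrative sketch of the logarithmic map (Eq. 2), exponential map (Eq. 3)
# and geodesic distance (Eq. 4) for UTDAT matrices.
import numpy as np
from scipy.linalg import expm, logm, inv

def log_map(G0, G):
    """Project G onto the tangent space (Lie algebra) at G0."""
    return logm(inv(G0) @ G)

def exp_map(G0, g):
    """Project a tangent element g at G0 back onto the group manifold."""
    return G0 @ expm(g)

def geodesic_dist(G1, G2):
    return np.linalg.norm(logm(inv(G1) @ G2), 'fro')

# Round trip for a 1-D Gaussian N(mu=0.3, sigma=1.7) written as the UTDAT of Eq. (1).
G0 = np.eye(2)                               # the standard Gaussian N(0, 1)
G = np.array([[1.7, 0.3], [0.0, 1.0]])
g = log_map(G0, G)
print(np.allclose(exp_map(G0, g), G))                               # True
print(np.isclose(geodesic_dist(G0, G), np.linalg.norm(g, 'fro')))   # True, since G0 = I
```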
4. Lie group auto-encoder
4.1. Overall architecture
Suppose we want to generate samples from a complex distribution $P(X)$ where $X \in \mathbb{R}^D$. One way to accomplish this task is to generate samples from a joint distribution $P(Z, X)$ first, then discard the part belonging to $Z$ and keep only the part belonging to $X$. This seems to give us no benefit at first sight, because it is usually difficult to sample from $P(Z, X)$ if sampling from $P(X)$ is hard. However, if we decompose the joint distribution with a Bayesian formula
$$P(Z, X) = P(X|Z) P(Z), \quad (5)$$
then the joint distribution can be sampled by a two-step process: first sample from $P(Z)$, then sample from $P(X|Z)$. The benefits come from the fact that both $P(Z)$ and $P(X|Z)$ may be much easier to sample from.
Estimating the parameters in $P(Z, X)$ as modeled in Eq. 5 is not easy because samples from the joint distribution are required; however, in most scenarios, we only have samples $\{x_i : i = 1, 2, \cdots, n\}$ from the marginal distribution $P(X)$. To overcome this problem, we augment each example $x_i$ from the marginal distribution to several examples $\{(z_{ij}, x_i) : j = 1, 2, \cdots, m\}$ in the joint distribution by sampling $z_{ij}$ from the conditional distribution $P(Z|X)$.
Note that $Z$ is an auxiliary random vector helping us perform sampling from the marginal distribution $P(X)$, so it can be any kind of distribution but should be easy to sample from. In this paper, we let $P(Z) = \mathcal{N}(0, I)$.
In practice, $P(X|Z)$ should be chosen according to the type of the data space of $X$. For example, if $X$ is continuous, we can model $P(X|Z)$ as a Gaussian distribution with a fixed isotropic variance-covariance matrix. For binary and categorical typed $X$, Bernoulli and multinomial distributions can be used, respectively.
Given $P(Z)$ and $P(X|Z)$, $P(Z|X)$ is usually complex and thus difficult to sample from. So we sample $z_{ij}$ from another distribution $Q(Z|X)$ instead. In this paper, we model $Q(Z|X)$ as Gaussian distributions with diagonal variance-covariance matrices. $Q(Z|X)$ should satisfy the following objectives as much as possible:
• $Q(Z|X)$ should approximate $P(Z|X)$. Therefore, given $z_{ij}$ sampled from $Q(Z|x_i)$, $\hat{x}_i$ sampled from $P(X|Z)$ should reconstruct $x_i$.
• $\{z_{ij} : i = 1, 2, \cdots, n;\ j = 1, 2, \cdots, m\}$ should fit the marginal $P(Z)$ well.
To optimize the first objective, we minimize the reconstruction loss $L_{rec}$, which is the mean squared error (MSE) for continuous $X$ and the cross-entropy for binary and categorical $X$.
For the second objective, directly optimizing it using $z_{ij}$ is not practical because we need a large sample size $m$ from $Q(Z|X)$ to accurately estimate the model parameters. The total sample size of $z_{ij}$, which is $mn$, is too big for computation. To overcome this problem, we consider Gaussian distributions as points in the corresponding Lie group, as we discussed in section 3. Note that the set $\{z_{ij},\ i = 1, 2, \cdots, n;\ j = 1, 2, \cdots, m\}$ is sampled from a set of Gaussian distributions $Q(Z|x_i)$. The second objective implies that the average distribution of those Gaussians should be $P(Z)$, which is a standard Gaussian. However, Gaussian distributions, which are equivalently represented as UTDATs, do not conform to the commonly used Euclidean geometry. Instead, we need to find the intrinsic mean of those Gaussians through Lie group geometry, because Gaussian distributions have a Lie group structure. We derive a Lie group intrinsic loss $L_{LG}$ to optimize the second objective. The details of $L_{LG}$ will be presented in subsection 4.3.
In our proposed Lie group auto-encoder (LGAE), $P(X|Z)$ is called a decoder or generator, and is implemented with neural networks. $Q(Z|X)$ is also implemented with neural networks. Note that $Q(Z|X)$ is a Gaussian distribution, so the corresponding neural network is a function whose output is a Gaussian distribution. Neural networks, as well as many other machine learning models, are typically designed for vector outputs. Being intrinsically a Lie group as discussed in section 3, Gaussian distributions do not form a vector space. To best exploit the geometric structure of the Gaussians, we first estimate the corresponding points $g_i$ in the tangent Lie algebra at the position of the intrinsic mean of $\{G_i,\ i = 1, 2, \cdots, n\}$ using neural networks. As $L_{LG}$ requires the intrinsic mean to be the standard Gaussian $P(Z) = \mathcal{N}(0, I)$, whose UTDAT representation is the identity matrix $I$, the point corresponding to $G_i$ in this tangent space is
$$g_i = \log(G_i). \quad (6)$$
Since $\{g_i,\ i = 1, 2, \cdots, n\}$ are in a vector space, they can be well estimated by neural networks. The $g_i$'s are then projected to the Lie group by an exponential mapping layer
$$G_i = \exp(g_i). \quad (7)$$
For diagonal Gaussians, we derive a closed-form solution of the exponential mapping which eliminates the requirement of matrix exponential operator. The details will be presented in subsection 4.2.
The whole architecture of LGAE is summarized in Figure 1.
A typical forward procedure works as follows: first, the encoder encodes an input $x_i$ into a point $g_i$ in the tangent Lie algebra. The exponential mapping layer then projects $g_i$ to the UTDAT matrix $G_i$ of the Lie group manifold. A latent vector $z$ is then sampled from the Gaussian distribution represented by $G_i$ by multiplying $G_i$ with a standard Gaussian noise vector. The details of the sampling operation will be described in section 4.4. The decoder (or generator) network then generates $\hat{x}_i$, which is the reconstructed version of $x_i$. The whole network is optimized by minimizing the following loss
$$L = \lambda L_{LG} + L_{rec}, \quad (8)$$
where $L_{LG}$ and $L_{rec}$ are the Lie group intrinsic loss and the reconstruction loss, respectively. Because the whole forward process and the loss are differentiable, the optimization can be achieved by the stochastic gradient descent method.
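A compact PyTorch sketch of one such training step for the diagonal case is given below. The layer sizes, the Bernoulli decoder, the Adagrad learning rate and the weight $\lambda$ are illustrative assumptions rather than the authors' exact configuration; the sketch uses the closed-form exponential mapping of Eqs. (15)-(16) and the intrinsic loss of Eq. (21), both derived in the following subsections.

```python
# Illustrative sketch of one LGAE training step for diagonal Gaussians (Figure 1, Eq. 8).
import torch
import torch.nn as nn
import torch.nn.functional as F

D, K, lam = 784, 20, 1.0                     # data dim, latent dim, loss weight (assumed values)
encoder = nn.Sequential(nn.Linear(D, 500), nn.Tanh(), nn.Linear(500, 2 * K))
decoder = nn.Sequential(nn.Linear(K, 500), nn.Tanh(), nn.Linear(500, D))
opt = torch.optim.Adagrad(list(encoder.parameters()) + list(decoder.parameters()), lr=0.01)

def training_step(x):                        # x: (batch, D) in [0, 1]
    theta, phi = encoder(x).chunk(2, dim=1)  # point g_i in the tangent Lie algebra
    sigma = torch.exp(phi)                                # Eq. (15)
    mu = theta * torch.special.expm1(phi) / phi           # Eq. (16); the phi = 0 limit is discussed after Eq. (16)
    z = sigma * torch.randn_like(mu) + mu                 # sampling, Eq. (23)
    x_rec = decoder(z)
    l_rec = F.binary_cross_entropy_with_logits(x_rec, x, reduction='sum') / x.size(0)
    l_lg = (phi ** 2 + theta ** 2).sum(dim=1).mean()      # Eq. (21), averaged over the batch
    loss = lam * l_lg + l_rec                              # Eq. (8)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

print(training_step(torch.rand(8, D)))
```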
4.2. Exponential mapping layer
We derive the exponential mapping $G_i = \exp(g_i)$ for diagonal Gaussians. When $G_i \sim \mathcal{N}(\mu_i, \Sigma_i)$ is diagonal, we have
$$\Sigma_i = \begin{bmatrix} \sigma_{i1}^2 & & & \\ & \sigma_{i2}^2 & & \\ & & \ddots & \\ & & & \sigma_{iK}^2 \end{bmatrix}. \quad (9)$$
The following theorem gives the forms of $G_i$ and $g_i$, as well as their relationship.
Theorem 1. Let $G_i$ be the UTDAT and $g_i$ be the corresponding vector in its tangent Lie algebra at the standard Gaussian. Then
$$G_i = \begin{bmatrix} \sigma_{i1} & & & \mu_{i1} \\ & \sigma_{i2} & & \mu_{i2} \\ & & \ddots & \vdots \\ 0 & & & 1 \end{bmatrix}, \quad (10) \qquad g_i = \begin{bmatrix} \phi_{i1} & & & \theta_{i1} \\ & \phi_{i2} & & \theta_{i2} \\ & & \ddots & \vdots \\ 0 & & & 0 \end{bmatrix}, \quad (11)$$
where
$$\phi_{ik} = \log(\sigma_{ik}), \quad (12)$$
$$\theta_{ik} = \frac{\mu_{ik} \log(\sigma_{ik})}{\sigma_{ik} - 1}. \quad (13)$$
Proof. By the definition of UTDAT, we can straightforwardly get Eq. 10. Let $H = G_i - I$. Using the series form of the matrix logarithm, we have
$$g_i = \log(G_i) = \log(I + H) = \sum_{t=1}^{\infty} (-1)^{t-1} \frac{H^t}{t}. \quad (14)$$
By substituting $H$ into Eq. 14, we get Eq. 11 and the following:
$$\phi_{ik} = \sum_{t=1}^{\infty} (-1)^{t-1} \frac{(\sigma_{ik} - 1)^t}{t} = \log(\sigma_{ik}) \quad \text{and} \quad \theta_{ik} = \sum_{t=1}^{\infty} (-1)^{t-1} \frac{\mu_{ik} (\sigma_{ik} - 1)^{t-1}}{t} = \frac{\mu_{ik} \log(\sigma_{ik})}{\sigma_{ik} - 1}.$$
Alternatively, after we identify that $g_i$ has the form in Eq. 11, we can derive the exponential mapping from the definition of the matrix exponential:
$$G_i = \exp(g_i) = \sum_{t=0}^{\infty} \frac{g_i^t}{t!} = \begin{bmatrix} \sum_{t=0}^{\infty} \frac{\phi_{i1}^t}{t!} & & \theta_{i1} \sum_{t=1}^{\infty} \frac{\phi_{i1}^{t-1}}{t!} \\ & \ddots & \vdots \\ 0 & \cdots & 1 \end{bmatrix} = \begin{bmatrix} e^{\phi_{i1}} & & \frac{\theta_{i1}}{\phi_{i1}} \left( \sum_{t=0}^{\infty} \frac{\phi_{i1}^t}{t!} - 1 \right) \\ & \ddots & \vdots \\ 0 & \cdots & 1 \end{bmatrix} = \begin{bmatrix} e^{\phi_{i1}} & & \frac{\theta_{i1} (e^{\phi_{i1}} - 1)}{\phi_{i1}} \\ & \ddots & \vdots \\ 0 & \cdots & 1 \end{bmatrix}.$$
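The closed forms of Theorem 1 can be checked numerically against SciPy's matrix logarithm and exponential. The following is a quick illustration for a 1-dimensional Gaussian, not part of the paper.

```python
# Numerical check of Theorem 1: the closed-form (phi, theta) of Eqs. (12)-(13)
# matches logm(G_i), and expm maps the Lie algebra element back to G_i.
import numpy as np
from scipy.linalg import expm, logm

sigma, mu = 1.8, -0.7
G = np.array([[sigma, mu], [0.0, 1.0]])

phi = np.log(sigma)                        # Eq. (12)
theta = mu * np.log(sigma) / (sigma - 1)   # Eq. (13)
g = np.array([[phi, theta], [0.0, 0.0]])

print(np.allclose(logm(G), g))   # True
print(np.allclose(expm(g), G))   # True
```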
The exponential mapping layer is expressed as
$$\sigma_{ik} = e^{\phi_{ik}}, \quad (15)$$
$$\mu_{ik} = \frac{\theta_{ik} (e^{\phi_{ik}} - 1)}{\phi_{ik}}. \quad (16)$$
Note that if $\sigma_{ik} = 1$ (i.e. $\phi_{ik} = 0$), then $\mu_{ik} = \theta_{ik}$, due to the fact that $\lim_{x \to 0} \frac{\log(x+1)}{x} = 1$ or, equivalently, $\lim_{x \to 0} \frac{e^x - 1}{x} = 1$.
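A minimal PyTorch version of this element-wise exponential mapping layer is sketched below. It is an illustration rather than the official layer, and it handles the $\phi_{ik} = 0$ limit explicitly.

```python
# Illustrative element-wise exponential mapping of Eqs. (15)-(16) with the
# sigma_ik = 1 (phi_ik = 0) limit handled via torch.where, so that mu_ik = theta_ik there.
import torch

def exp_mapping(theta: torch.Tensor, phi: torch.Tensor):
    sigma = torch.exp(phi)                                    # Eq. (15)
    safe_phi = torch.where(phi == 0, torch.ones_like(phi), phi)
    ratio = torch.where(phi == 0, torch.ones_like(phi),
                        torch.special.expm1(phi) / safe_phi)  # (e^phi - 1) / phi -> 1 as phi -> 0
    mu = theta * ratio                                        # Eq. (16)
    return mu, sigma

theta = torch.tensor([0.5, -1.0, 2.0])
phi = torch.tensor([0.0, 0.3, -0.2])
print(exp_mapping(theta, phi))   # mu[0] equals theta[0] exactly, since phi[0] == 0
```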
4.3. Lie group intrinsic loss
Let $G_i$ be the UTDAT representation of $Q(Z|x_i)$. The intrinsic mean $G^*$ of those $G_i$'s is defined as
$$G^* = \arg\min_{G} \sum_{i=1}^{n} d_{LG}^2(G, G_i). \quad (17)$$
The second objective in the previous subsection requires that $G^* = I$, which is equivalent to minimizing the loss
$$L_{LG} = \sum_{i=1}^{n} d_{LG}^2(I, G_i) \quad (18)$$
$$\;\;\; = \sum_{i=1}^{n} \left\| \log(G_i) \right\|_F^2 = \sum_{i=1}^{n} \left\| g_i \right\|_F^2. \quad (19)$$
The intrinsic loss thus plays the role of a regularizer during training. Since the tangent Lie algebra is a vector space, the Frobenius norm is equivalent to the $\ell_2$-norm if we flatten the matrix $g_i$ into a vector. Eq. 18 requires all the Gaussians $G_i$ to be grouped together around the standard Gaussian, while Eq. 19 shows that we can regularize on the tangent Lie algebra instead, which avoids the matrix logarithm operation. Specifically, for diagonal Gaussians, we have
$$L_{LG} = \sum_{i=1}^{n} \left\| g_i \right\|_F^2 \quad (20)$$
$$\;\;\; = \sum_{i=1}^{n} \sum_{k=1}^{K} \left( \phi_{ik}^2 + \theta_{ik}^2 \right). \quad (21)$$
4.4. Sampling from Gaussians
According to the properties of Gaussian distributions discussed in section 3, sampling from an arbitrary Gaussian distribution can be achieved by transforming a standard Gaussian distribution with the corresponding UTDAT, i.e.
$$\begin{bmatrix} z_{ij} \\ 1 \end{bmatrix} = G_i \begin{bmatrix} v_{ij} \\ 1 \end{bmatrix}, \quad (22)$$
where $v_{ij}$ is sampled from $V \sim \mathcal{N}(0, I)$. Note that this sampling operator is differentiable, which means that gradients can be back-propagated through the sampling layer to the previous layers. When $G_i$ is a diagonal Gaussian, we have
$$z_{ij} = \sigma \odot v_{ij} + \mu, \quad (23)$$
where $\sigma = [\sigma_{i1}, \cdots, \sigma_{iK}]^T$, $\mu = [\mu_{i1}, \cdots, \mu_{iK}]^T$, and $\odot$ is the element-wise multiplication. Therefore, the reparameterization trick in (Kingma & Welling, 2013) is a special case of sampling from UTDAT-represented Gaussian distributions.
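The equivalence between Eq. (22) and Eq. (23) for the diagonal case can be verified with a tiny NumPy check (illustration only).

```python
# Multiplying the homogeneous vector [v; 1] by the UTDAT G_i (Eq. 22) coincides
# with the element-wise reparameterization sigma * v + mu (Eq. 23).
import numpy as np

sigma = np.array([1.5, 0.4])
mu = np.array([0.2, -1.0])
G = np.block([[np.diag(sigma), mu[:, None]], [np.zeros((1, 2)), np.ones((1, 1))]])

v = np.random.randn(2)
z_homogeneous = (G @ np.append(v, 1.0))[:2]   # Eq. (22)
z_elementwise = sigma * v + mu                # Eq. (23)
print(np.allclose(z_homogeneous, z_elementwise))   # True
```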
5. Discussion
Although our proposed LGAE and VAE (Kingma & Welling, 2013) both have an encoder-decoder based architecture, they are essentially different. The loss function of VAE, which is
$$L = KL\big( Q(Z|X) \,\|\, \mathcal{N}(0, I) \big) + L_{rec}, \quad (24)$$
is derived from the Bayesian lower bound of the marginal likelihood of the data. In contrast, the loss function of LGAE is derived from a geometrical perspective. Further, the Lie group intrinsic loss $L_{LG}$ in Eq. (8) is a real metric, but the KL-divergence in Eq. (24) is not. For example, the KL-divergence is not symmetric, nor does it satisfy the triangle inequality.
Further, while both LGAE and VAE estimate Gaussian distributions using neural networks, VAE does not address the non-vector output problem. In contrast, we systematically address this problem and design an exponential mapping layer to solve it. One requirement arising from the non-vector property of Gaussian distributions is that the variance parameters be positive. To satisfy this requirement, (Kingma & Welling, 2013) estimate the logarithm of the variance instead. This technique is equivalent to performing the exponential mapping for the variance part. Without a theoretical foundation, it was trial-and-error to choose exp over other activations such as relu and softplus. Our theoretical results confirm that exp makes more sense than the others. Moreover, our theoretical results further show that a better way is to consider a Gaussian distribution as a whole rather than treat its variance part only and address the problem in an empirical way.
Because the points of the tangent Lie algebra are already vectors, we propose to use them as compressed representations of the input data examples. These vectors contain information of the Gaussian distributions and already incorporate the Lie group structural information of the Gaussian distributions; therefore, they are more informative than either a single mean vector or concatenating the mean vector and variance vector naively together.
6. Experiments
Datasets
The proposed LGAE model is evaluated on two benchmark datasets:
• MNIST: The MNIST dataset (Lecun et al., 1998) consists of a training set of 60,000 examples of handwritten digits, and a test set of 10,000 examples. The digits have been size-normalized and centered into fixed-size 28 × 28 images.
• SVHN: The SVHN dataset (Netzer et al., 2011) is also a collection of images of digits, but the image backgrounds are more cluttered than in MNIST, so it is significantly harder to classify.
Settings
Since VAE (Kingma & Welling, 2013) is the model most closely related to LGAE, we use VAE as a baseline for comparisons. We follow the exact experimental settings of (Kingma & Welling, 2013). That is, MLPs with 500 hidden units are used as the encoder and decoder. In each hidden layer, the non-linear activation tanh is applied. The parameters of the neurons are initialized by random sampling from $\mathcal{N}(0, 0.01)$ and are optimized by Adagrad (Duchi et al., 2011); the implementation is based on PyTorch (Paszke et al., 2017). Note that there is no matrix operation in the LGAE implementation thanks to the element-wise closed-form solution presented in Sections 4.2 and 4.3. Therefore, the run-time is almost the same as VAE. On an Nvidia GeForce GTX 1080 graphics card, it takes about 12.5 and 25 seconds, respectively, to train on the training set and to test on both the training and test sets for one epoch with mini-batches of size 100.
Results
In the first experiment, we investigate the effectiveness of the proposed exponential mapping layer. We design a variant of LGAE which uses the same loss as VAE; i.e., we replace the Lie group intrinsic loss with KL divergence but keep the exponential mapping layer in the model.
Figure 3. Generated images from randomly sampled latent vectors for the MNIST dataset (LGAE panels for K = 2, 5, 10, 20). The upper and lower rows are generated by the VAE and LGAE models, respectively.
The results of this comparison indicate the superiority of the Lie group intrinsic loss over KL divergence. Moreover, the results also show that naively concatenating the covariance with the mean does not contribute much to the performance, and sometimes even hurts it. This phenomenon indicates that treating Gaussians as vectors cannot fully extract the important geometric structural information from the manifold they form.
To illustrate the generative capability of LGAE, we randomly generate images using the model and plot them along with images generated from VAE. Figures 3 and 4 show the generated images from both models trained on the MNIST and SVHN datasets, respectively.
Conclusions
We propose the Lie group auto-encoder (LGAE), which is an encoder-decoder type of neural network model. Similar to VAE, the proposed LGAE model has the advantages of generating examples from the training data distribution, as well as mapping inputs to latent representations. The Lie group structure of Gaussian distributions is systematically exploited to help design the network. Specifically, we design an exponential mapping layer, derive a Lie group intrinsic loss, and propose to use Lie algebra vectors as latent representations. Experimental results on the MNIST and SVHN datasets testify to the effectiveness of the proposed method.
LGAE (K=2) LGAE (K=5) LGAE (K=10) Figure 4. Generated images from randomly sampled latent vectors for the MNIST dataset. The upper and lower rows are generated by the VAE and LGAE models, respectively. | 4,042 |
1907.09837 | 2962743035 | The colorization of grayscale images is an ill-posed problem, with multiple correct solutions. In this paper, an adversarial learning approach is proposed. A generator network is used to infer the chromaticity of a given grayscale image. The same network also performs a semantic classification of the image. This network is framed in an adversarial model that learns to colorize by incorporating perceptual and semantic understanding of color and class distributions. The model is trained via a fully self-supervised strategy. Qualitative and quantitative results show the capacity of the proposed method to colorize images in a realistic way, achieving top-tier performances relative to the state-of-the-art. | In these methods the user provides local hints, as for instance color scribbles, which are then propagated to the whole image. They were initiated with the work of Levin et al. @cite_5 . They assume that spatially neighboring pixels having similar intensities should have similar colors. They formalize this premise by optimizing a quadratic cost function constrained by the values given by the scribbles. Several improvements were proposed. Huang et al. @cite_8 reduce the bleeding artifact using edge information of the grayscale image. Yatziv et al. @cite_34 propose a luminance-weighted chrominance blending to relax the dependency on the position of the scribbles. Then, Luan et al. @cite_45 use the input scribbles to segment the grayscale image and thus better propagate the colors. This class of methods suffers from requiring large amounts of user input, in particular when dealing with complex textures. Moreover, choosing the correct color palette is not an easy task. | {
"abstract": [
"Colorization is a computer-assisted process of adding color to a monochrome image or movie. The process typically involves segmenting images into regions and tracking these regions across image sequences. Neither of these tasks can be performed reliably in practice; consequently, colorization requires considerable user intervention and remains a tedious, time-consuming, and expensive task.In this paper we present a simple colorization method that requires neither precise image segmentation, nor accurate region tracking. Our method is based on a simple premise; neighboring pixels in space-time that have similar intensities should have similar colors. We formalize this premise using a quadratic cost function and obtain an optimization problem that can be solved efficiently using standard techniques. In our approach an artist only needs to annotate the image with a few color scribbles, and the indicated colors are automatically propagated in both space and time to produce a fully colorized image or sequence. We demonstrate that high quality colorizations of stills and movie clips may be obtained from a relatively modest amount of user input.",
"In this paper, we present an interactive system for users to easily colorize the natural images of complex scenes. In our system, colorization procedure is explicitly separated into two stages: Color labeling and Color mapping. Pixels that should roughly share similar colors are grouped into coherent regions in the color labeling stage, and the color mapping stage is then introduced to further fine-tune the colors in each coherent region. To handle textures commonly seen in natural images, we propose a new color labeling scheme that groups not only neighboring pixels with similar intensity but also remote pixels with similar texture. Motivated by the insight into the complementary nature possessed by the highly contrastive locations and the smooth locations, we employ a smoothness map to guide the incorporation of intensity-continuity and texture-similarity constraints in the design of our labeling algorithm. Within each coherent region obtained from the color labeling stage, the color mapping is applied to generate vivid colorization effect by assigning colors to a few pixels in the region. A set of intuitive interface tools is designed for labeling, coloring and modifying the result. We demonstrate compelling results of colorizing natural images using our system, with only a modest amount of user input.",
"Colorization, the task of coloring a grayscale image or video, involves assigning from the single dimension of intensity or luminance a quantity that varies in three dimensions, such as red, green, and blue channels. Mapping between intensity and color is, therefore, not unique, and colorization is ambiguous in nature and requires some amount of human interaction or external information. A computationally simple, yet effective, approach of colorization is presented in this paper. The method is fast and it can be conveniently used \"on the fly,\" permitting the user to interactively get the desired results promptly after providing a reduced set of chrominance scribbles. Based on the concepts of luminance-weighted chrominance blending and fast intrinsic distance computations, high-quality colorization results for still images and video are obtained at a fraction of the complexity and computational cost of previously reported techniques. Possible extensions of the algorithm introduced here included the capability of changing the colors of an existing color image or video, as well as changing the underlying luminance, and many other special effects demonstrated here.",
"Colorization is a computer-assisted process for adding colors to grayscale images or movies. It can be viewed as a process for assigning a three-dimensional color vector (YUV or RGB) to each pixel of a grayscale image. In previous works, with some color hints the resultant chrominance value varies linearly with that of the luminance. However, it is easy to find that existing methods may introduce obvious color bleeding, especially, around region boundaries. It then needs extra human-assistance to fix these artifacts, which limits its practicability. Facing such a challenging issue, we introduce a general and fast colorization methodology with the aid of an adaptive edge detection scheme. By extracting reliable edge information, the proposed approach may prevent the colorization process from bleeding over object boundaries. Next, integration of the proposed fast colorization scheme to a scribble-based colorization system, a modified color transferring system and a novel chrominance coding approach are investigated. In our experiments, each system exhibits obvious improvement as compared to those corresponding previous works."
],
"cite_N": [
"@cite_5",
"@cite_45",
"@cite_34",
"@cite_8"
],
"mid": [
"2136154655",
"2103155998",
"2120963736",
"2007607024"
]
} | 0 |
||
1907.09837 | 2962743035 | The colorization of grayscale images is an ill-posed problem, with multiple correct solutions. In this paper, an adversarial learning approach is proposed. A generator network is used to infer the chromaticity of a given grayscale image. The same network also performs a semantic classification of the image. This network is framed in an adversarial model that learns to colorize by incorporating perceptual and semantic understanding of color and class distributions. The model is trained via a fully self-supervised strategy. Qualitative and quantitative results show the capacity of the proposed method to colorize images in a realistic way, achieving top-tier performances relative to the state-of-the-art. | @cite_48 , a supervised learning method is proposed through a linear parametric model and a variational autoencoder which is computed by quadratic regression on a large dataset of color images. These approaches are improved by the use of CNNs and large-scale datasets. For instance, Iizuka al @cite_41 extract local and global features to predict the colorization. The network is trained jointly for classification and colorization in a labeled dataset. | {
"abstract": [
"We present a novel technique to automatically colorize grayscale images that combines both global priors and local image features. Based on Convolutional Neural Networks, our deep network features a fusion layer that allows us to elegantly merge local information dependent on small image patches with global priors computed using the entire image. The entire framework, including the global and local priors as well as the colorization model, is trained in an end-to-end fashion. Furthermore, our architecture can process images of any resolution, unlike most existing approaches based on CNN. We leverage an existing large-scale scene classification database to train our model, exploiting the class labels of the dataset to more efficiently and discriminatively learn the global priors. We validate our approach with a user study and compare against the state of the art, where we show significant improvements. Furthermore, we demonstrate our method extensively on many different types of images, including black-and-white photography from over a hundred years ago, and show realistic colorizations.",
"We describe an automated method for image colorization that learns to colorize from examples. Our method exploits a LEARCH framework to train a quadratic objective function in the chromaticity maps, comparable to a Gaussian random field. The coefficients of the objective function are conditioned on image features, using a random forest. The objective function admits correlations on long spatial scales, and can control spatial error in the colorization of the image. Images are then colorized by minimizing this objective function. We demonstrate that our method strongly outperforms a natural baseline on large-scale experiments with images of real scenes using a demanding loss function. We demonstrate that learning a model that is conditioned on scene produces improved results. We show how to incorporate a desired color histogram into the objective function, and that doing so can lead to further improvements in results."
],
"cite_N": [
"@cite_41",
"@cite_48"
],
"mid": [
"2461158874",
"2211456655"
]
} | 0 |
||
1907.09837 | 2962743035 | The colorization of grayscale images is an ill-posed problem, with multiple correct solutions. In this paper, an adversarial learning approach is proposed. A generator network is used to infer the chromaticity of a given grayscale image. The same network also performs a semantic classification of the image. This network is framed in an adversarial model that learns to colorize by incorporating perceptual and semantic understanding of color and class distributions. The model is trained via a fully self-supervised strategy. Qualitative and quantitative results show the capacity of the proposed method to colorize images in a realistic way, achieving top-tier performances relative to the state-of-the-art. | Zhang et al. @cite_12 learn the color distribution of every pixel and infer the colorization from the learnt distribution. The network is trained with a multinomial cross entropy loss with rebalanced rare classes, allowing for rare colors to appear in the colorized image. In a similar spirit, Larsson et al. @cite_1 train a deep CNN to learn per-pixel color histograms. They use a VGG network in order to interpret the semantic composition of the scene as well as the localization of objects, and then predict the color histograms of every pixel based on this interpretation. They train the network with the Kullback-Leibler divergence. Again, the colorization is inferred from the color histograms. | {
"abstract": [
"We develop a fully automatic image colorization system. Our approach leverages recent advances in deep networks, exploiting both low-level and semantic representations. As many scene elements naturally appear according to multimodal color distributions, we train our model to predict per-pixel color histograms. This intermediate output can be used to automatically generate a color image, or further manipulated prior to image formation. On both fully and partially automatic colorization tasks, we outperform existing methods. We also explore colorization as a vehicle for self-supervised visual representation learning.",
"Given a grayscale photograph as input, this paper attacks the problem of hallucinating a plausible color version of the photograph. This problem is clearly underconstrained, so previous approaches have either relied on significant user interaction or resulted in desaturated colorizations. We propose a fully automatic approach that produces vibrant and realistic colorizations. We embrace the underlying uncertainty of the problem by posing it as a classification task and use class-rebalancing at training time to increase the diversity of colors in the result. The system is implemented as a feed-forward pass in a CNN at test time and is trained on over a million color images. We evaluate our algorithm using a “colorization Turing test,” asking human participants to choose between a generated and ground truth color image. Our method successfully fools humans on 32 of the trials, significantly higher than previous methods. Moreover, we show that colorization can be a powerful pretext task for self-supervised feature learning, acting as a cross-channel encoder. This approach results in state-of-the-art performance on several feature learning benchmarks."
],
"cite_N": [
"@cite_1",
"@cite_12"
],
"mid": [
"2308529009",
"2326925005"
]
} | 0 |
||
1907.09837 | 2962743035 | The colorization of grayscale images is an ill-posed problem, with multiple correct solutions. In this paper, an adversarial learning approach is proposed. A generator network is used to infer the chromaticity of a given grayscale image. The same network also performs a semantic classification of the image. This network is framed in an adversarial model that learns to colorize by incorporating perceptual and semantic understanding of color and class distributions. The model is trained via a fully self-supervised strategy. Qualitative and quantitative results show the capacity of the proposed method to colorize images in a realistic way, achieving top-tier performances relative to the state-of-the-art. | Other CNN-based approaches are combined with user interactions. For instance, Zhang et al. @cite_32 propose to train a deep network given the grayscale version and a set of sparse user inputs. This allows the user to have more than one plausible solution. Also, He et al. @cite_33 propose an exemplar-based colorization method using a deep learning approach. The colorization network jointly learns faithful local colorization to a meaningful reference and plausible color prediction when a reliable reference is unavailable. | {
"abstract": [
"We propose a deep learning approach for user-guided image colorization. The system directly maps a grayscale image, along with sparse, local user \"hints\" to an output colorization with a Convolutional Neural Network (CNN). Rather than using hand-defined rules, the network propagates user edits by fusing low-level cues along with high-level semantic information, learned from large-scale data. We train on a million images, with simulated user inputs. To guide the user towards efficient input selection, the system recommends likely colors based on the input image and current user inputs. The colorization is performed in a single feed-forward pass, enabling real-time use. Even with randomly simulated user inputs, we show that the proposed system helps novice users quickly create realistic colorizations, and offers large improvements in colorization quality with just a minute of use. In addition, we demonstrate that the framework can incorporate other user \"hints\" to the desired colorization, showing an application to color histogram transfer. Our code and models are available at this https URL",
"This paper proposes the first deep learning approach for exemplar-based colorization, in which a convolutional neural network robustly maps a grayscale image to a colorized output given a color reference."
],
"cite_N": [
"@cite_32",
"@cite_33"
],
"mid": [
"2949359905",
"2809852002"
]
} | 0 |
||
1907.09837 | 2962743035 | The colorization of grayscale images is an ill-posed problem, with multiple correct solutions. In this paper, an adversarial learning approach is proposed. A generator network is used to infer the chromaticity of a given grayscale image. The same network also performs a semantic classification of the image. This network is framed in an adversarial model that learns to colorize by incorporating perceptual and semantic understanding of color and class distributions. The model is trained via a fully self-supervised strategy. Qualitative and quantitative results show the capacity of the proposed method to colorize images in a realistic way, achieving top-tier performances relative to the state-of-the-art. | Some methods use GANs to colorize grayscale images. Isola et al. @cite_27 propose to use conditional GANs to map an input image to an output image using a U-Net based generator. They train their network by combining the @math -loss with an adapted GAN loss. An extension is proposed by Nazeri et al. @cite_23 , generalizing the procedure to high resolution images, and speeding up and stabilizing the training. Cao et al. @cite_26 also use conditional GANs but, to obtain diverse possible colorizations, they sample the input noise several times; the noise is incorporated in multiple layers of the proposed network architecture, which consists of a fully convolutional non-stride network. Their choice of the LSUN bedroom dataset helps their method to learn the diversity of bedroom colors. Notice that none of these GAN-based methods use additional information such as classification. | {
"abstract": [
"",
"Colorization of grayscale images is a hot topic in computer vision. Previous research mainly focuses on producing a color image to recover the original one in a supervised learning fashion. However, since many colors share the same gray value, an input grayscale image could be diversely colorized while maintaining its reality. In this paper, we design a novel solution for unsupervised diverse colorization. Specifically, we leverage conditional generative adversarial networks to model the distribution of real-world item colors, in which we develop a fully convolutional generator with multi-layer noise to enhance diversity, with multi-layer condition concatenation to maintain reality, and with stride 1 to keep spatial information. With such a novel network architecture, the model yields highly competitive performance on the open LSUN bedroom dataset. The Turing test on 80 humans further indicates our generated color schemes are highly convincible.",
"Over the last decade, the process of automatic image colorization has been of significant interest for several application areas including restoration of aged or degraded images. This problem is highly ill-posed due to the large degrees of freedom during the assignment of color information. Many of the recent developments in automatic colorization involve images that contain a common theme or require highly processed data such as semantic maps as input. In our approach, we attempt to fully generalize the colorization procedure using a conditional Deep Convolutional Generative Adversarial Network (DCGAN). The network is trained over datasets that are publicly available such as CIFAR-10 and Places365. The results between the generative model and traditional deep neural networks are compared."
],
"cite_N": [
"@cite_27",
"@cite_26",
"@cite_23"
],
"mid": [
"",
"2590274298",
"2792021479"
]
} | 0 |
||
1901.09221 | 2953343723 | Along with the deraining performance improvement of deep networks, their structures and learning become more and more complicated and diverse, making it difficult to analyze the contribution of various network modules when developing new deraining networks. To handle this issue, this paper provides a better and simpler baseline deraining network by considering network architecture, input and output, and loss functions. Specifically, by repeatedly unfolding a shallow ResNet, progressive ResNet (PRN) is proposed to take advantage of recursive computation. A recurrent layer is further introduced to exploit the dependencies of deep features across stages, forming our progressive recurrent network (PReNet). Furthermore, intra-stage recursive computation of ResNet can be adopted in PRN and PReNet to notably reduce network parameters with graceful degradation in deraining performance. For network input and output, we take both stage-wise result and original rainy image as input to each ResNet and finally output the prediction of the residual image. As for loss functions, single MSE or negative SSIM losses are sufficient to train PRN and PReNet. Experiments show that PRN and PReNet perform favorably on both synthetic and real rainy images. Considering its simplicity, efficiency and effectiveness, our models are expected to serve as a suitable baseline in future deraining research. The source codes are available at this https URL. | In general, a rainy image can be formed as the composition of a clean background image layer and a rain layer. On the one hand, linear summation is usually adopted as the composition model @cite_21 @cite_3 @cite_27 . Then, image deraining can be formulated by incorporating proper regularizers on both the background image and the rain layer, and solved by specific optimization algorithms. Among these methods, Gaussian mixture models (GMM) @cite_3 , sparse representation @cite_27 , and low rank representation @cite_21 have been adopted for modeling the background image or the rain layer. Based on the linear summation model, optimization-based methods have also been extended for video deraining @cite_32 @cite_2 @cite_24 @cite_17 @cite_14 . On the other hand, the screen blend model @cite_16 is assumed to be more realistic for the composition of a rainy image, based on which Luo et al. @cite_16 use discriminative dictionary learning to separate rain streaks by enforcing that the two layers share the fewest dictionary atoms. However, the real composition generally is more complicated and the regularizers are still insufficient in characterizing the background and rain layers, making optimization-based methods remain limited in deraining performance. | {
"abstract": [
"Videos captured by outdoor surveillance equipments sometimes contain unexpected rain streaks, which brings difficulty in subsequent video processing tasks. Rain streak removal from a video is thus an important topic in recent computer vision research. In this paper, we raise two intrinsic characteristics specifically possessed by rain streaks. Firstly, the rain streaks in a video contain repetitive local patterns sparsely scattered over different positions of the video. Secondly, the rain streaks are with multiscale configurations due to their occurrence on positions with different distances to the cameras. Based on such understanding, we specifically formulate both characteristics into a multiscale convolutional sparse coding (MS-CSC) model for the video rain streak removal task. Specifically, we use multiple convolutional filters convolved on the sparse feature maps to deliver the former characteristic, and further use multiscale filters to represent different scales of rain streaks. Such a new encoding manner makes the proposed method capable of properly extracting rain streaks from videos, thus getting fine video deraining effects. Experiments implemented on synthetic and real videos verify the superiority of the proposed method, as compared with the state-of-the-art ones along this research line, both visually and quantitatively.",
"In this paper, we propose a novel low-rank appearance model for removing rain streaks. Different from previous work, our method needs neither rain pixel detection nor time-consuming dictionary learning stage. Instead, as rain streaks usually reveal similar and repeated patterns on imaging scene, we propose and generalize a low-rank model from matrix to tensor structure in order to capture the spatio-temporally correlated rain streaks. With the appearance model, we thus remove rain streaks from image video (and also other high-order image structure) in a unified way. Our experimental results demonstrate competitive (or even better) visual quality and efficient run-time in comparison with state of the art.",
"The visual effects of rain are complex. Rain consists of spatially distributed drops falling at high velocities. Each drop refracts and reflects the environment, producing sharp intensity changes in an image. A group of such falling drops creates a complex time varying signal in images and videos. In addition, due to the finite exposure time of the camera, intensities due to rain are motion blurred and hence depend on the background intensities. Thus, the visual manifestations of rain are a combination of both the dynamics of rain and the photometry of the environment. In this paper, we present the first comprehensive analysis of the visual effects of rain on an imaging system. We develop a correlation model that captures the dynamics of rain and a physics-based motion blur model that explains the photometry of rain. Based on these models, we develop efficient algorithms for detecting and removing rain from videos. The effectiveness of our algorithms is demonstrated using experiments on videos of complex scenes with moving objects and time-varying textures. The techniques described in this paper can be used in a wide range of applications including video surveillance, vision based navigation, video movie editing and video indexing retrieval.",
"This paper addresses the problem of rain streak removal from a single image. Rain streaks impair visibility of an image and introduce undesirable interference that can severely affect the performance of computer vision algorithms. Rain streak removal can be formulated as a layer decomposition problem, with a rain streak layer superimposed on a background layer containing the true scene content. Existing decomposition methods that address this problem employ either dictionary learning methods or impose a low rank structure on the appearance of the rain streaks. While these methods can improve the overall visibility, they tend to leave too many rain streaks in the background image or over-smooth the background image. In this paper, we propose an effective method that uses simple patch-based priors for both the background and rain layers. These priors are based on Gaussian mixture models and can accommodate multiple orientations and scales of the rain streaks. This simple approach removes rain streaks better than the existing methods qualitatively and quantitatively. We overview our method and demonstrate its effectiveness over prior work on a number of examples.",
"Rain streaks removal is an important issue in outdoor vision systems and has recently been investigated extensively. In this paper, we propose a novel video rain streak removal approach FastDeRain, which fully considers the discriminative characteristics of rain streaks and the clean video in the gradient domain. Specifically, on the one hand, rain streaks are sparse and smooth along the direction of the raindrops, whereas on the other hand, clean videos exhibit piecewise smoothness along the rain-perpendicular direction and continuity along the temporal direction. Theses smoothness and continuity result in the sparse distribution in the different directional gradient domain. Thus, we minimize: 1) the @math norm to enhance the sparsity of the underlying rain streaks; 2) two @math norm of unidirectional total variation regularizers to guarantee the anisotropic spatial smoothness; and 3) an @math norm of the time-directional difference operator to characterize the temporal continuity. A split augmented Lagrangian shrinkage algorithm-based algorithm is designed to solve the proposed minimization model. Experiments conducted on synthetic and real data demonstrate the effectiveness and efficiency of the proposed method. According to the comprehensive quantitative performance measures, our approach outperforms other state-of-the-art methods, especially on account of the running time. The code of FastDeRain can be downloaded at https: github.com TaiXiangJiang FastDeRain .",
"",
"Rain streaks removal is an important issue of the outdoor vision system and has been recently investigated extensively. In this paper, we propose a novel tensor based video rain streaks removal approach by fully considering the discriminatively intrinsic characteristics of rain streaks and clean videos, which needs neither rain detection nor time-consuming dictionary learning stage. In specific, on the one hand, rain streaks are sparse and smooth along the raindrops direction, and on the other hand, the clean videos possess smoothness along the rain-perpendicular direction and global and local correlation along time direction. We use the l1 norm to enhance the sparsity of the underlying rain, two unidirectional Total Variation (TV) regularizers to guarantee the different discriminative smoothness, and a tensor nuclear norm and a time directional difference operator to characterize the exclusive correlation of the clean video along time. Alternation direction method of multipliers (ADMM) is employed to solve the proposed concise tensor based convex model. Experiments implemented on synthetic and real data substantiate the effectiveness and efficiency of the proposed method. Under comprehensive quantitative performance measures, our approach outperforms other state-of-the-art methods.",
"Visual distortions on images caused by bad weather conditions can have a negative impact on the performance of many outdoor vision systems. One often seen bad weather is rain which causes significant yet complex local intensity fluctuations in images. The paper aims at developing an effective algorithm to remove visual effects of rain from a single rainy image, i.e. separate the rain layer and the de-rained image layer from an rainy image. Built upon a non-linear generative model of rainy image, namely screen blend mode, we proposed a dictionary learning based algorithm for single image de-raining. The basic idea is to sparsely approximate the patches of two layers by very high discriminative codes over a learned dictionary with strong mutual exclusivity property. Such discriminative sparse codes lead to accurate separation of two layers from their non-linear composite. The experiments showed that the proposed method outperformed the existing single image de-raining methods on tested rain images.",
"A novel algorithm to remove rain or snow streaks from a video sequence using temporal correlation and low-rank matrix completion is proposed in this paper. Based on the observation that rain streaks are too small and move too fast to affect the optical flow estimation between consecutive frames, we obtain an initial rain map by subtracting temporally warped frames from a current frame. Then, we decompose the initial rain map into basis vectors based on the sparse representation, and classify those basis vectors into rain streak ones and outliers with a support vector machine. We then refine the rain map by excluding the outliers. Finally, we remove the detected rain streaks by employing a low-rank matrix completion technique. Furthermore, we extend the proposed algorithm to stereo video deraining. Experimental results demonstrate that the proposed algorithm detects and removes rain or snow streaks efficiently, outperforming conventional algorithms."
],
"cite_N": [
"@cite_14",
"@cite_21",
"@cite_32",
"@cite_3",
"@cite_24",
"@cite_27",
"@cite_2",
"@cite_16",
"@cite_17"
],
"mid": [
"2798401637",
"2154621477",
"2119535410",
"2466666260",
"2789288870",
"",
"2737207197",
"2209874411",
"1909316225"
]
} | Progressive Image Deraining Networks: A Better and Simpler Baseline | Rain is a common weather condition, and has severe adverse effects on not only human visual perception but also the performance of various high-level vision tasks such as image classification, object detection, and video surveillance [7,14]. Single image deraining aims at restoring the clean background image from a rainy image, and has drawn considerable recent research attention. Figure 1 shows deraining results by RESCAN [20] and PReNet (T = 6) at stages t = 1, 2, 4, 6, respectively, for an input rainy image. For example, several traditional optimization based methods [1,9,21,22] have been suggested for modeling and separating rain streaks from the clean background image. However, due to the complex composition of rain and background layers, image deraining remains a challenging ill-posed problem. Driven by the unprecedented success of deep learning in low level vision [3,15,18,28,34], recent years have also witnessed the rapid progress of deep convolutional neural networks (CNNs) in image deraining. In [5], Fu et al. show that it is difficult to train a CNN to directly predict the background image from a rainy image, and utilize a 3-layer CNN to remove rain streaks from a high-pass detail layer instead of the input image. Subsequently, other formulations are also introduced, such as residual learning for predicting the rain streak layer [20], joint detection and removal of rain streaks [30], and joint rain density estimation and deraining [32].
On the other hand, many modules are suggested to constitute different deraining networks, including residual blocks [6,10], dilated convolution [30,31], dense blocks [32], squeeze-and-excitation [20], and recurrent layers [20,25]. Multi-stream [32] and multi-stage [20] networks are also deployed to capture multi-scale characteristics and to remove heavy rain. Moreover, several models are designed to improve computational efficiency by utilizing lightweight networks in a cascaded scheme [4] or a Laplacian pyramid framework [7], but at the cost of obvious degradation in deraining performance. To sum up, albeit the progress of deraining performance, the structures of deep networks become more and more complicated and diverse. As a result, it is difficult to analyze the contribution of various modules and their combinations, and to develop new models by introducing modules to existing deeper and complex deraining networks.
In this paper, we aim to present a new baseline network for single image deraining to demonstrate that: (i) by combining only a few modules, a better and simpler baseline network can be constructed and achieve noteworthy performance gains over state-of-the-art deeper and complex deraining networks, and (ii) unlike [5], the improvement of deraining networks may ease the difficulty of training CNNs to directly recover the clean image from the rainy image. Moreover, the simplicity of the baseline network makes it easier to develop new deraining models by introducing other network modules or modifying the existing ones.
To this end, we consider the network architecture, input and output, and loss functions to form better and simpler baseline networks. In terms of network architecture, we begin with a basic shallow residual network (ResNet) with five residual blocks (ResBlocks). Then, progressive ResNet (PRN) is introduced by recursively unfolding the ResNet into multiple stages without the increase of model parameters (see Fig. 2(a)). Moreover, a recurrent layer [11,27] is introduced to exploit the dependencies of deep features across recursive stages to form the PReNet in Fig. 2(b). From Fig. 1, a 6-stage PReNet can remove most rain streaks at the first stage, and then the remaining rain streaks can be progressively removed, leading to promising deraining quality at the final stage. Furthermore, PRN_r and PReNet_r are presented by adopting intra-stage recursive unfolding of only one ResBlock, which reduces network parameters only at the cost of unsubstantial performance degradation.
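A minimal PyTorch sketch of this progressive recurrent structure is given below. The channel counts, kernel sizes, number of ResBlocks and number of stages are illustrative assumptions and do not reproduce the authors' exact PReNet configuration.

```python
# Illustrative sketch of a progressive recurrent deraining network: each stage takes
# the original rainy image concatenated with the current estimate, runs a small shared
# ResNet body with a simple convolutional recurrent state, and outputs an updated estimate.
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
                                  nn.Conv2d(ch, ch, 3, padding=1))
    def forward(self, x):
        return torch.relu(x + self.body(x))

class ProgressiveDerainer(nn.Module):
    def __init__(self, ch=32, n_blocks=5, n_stages=6):
        super().__init__()
        self.ch, self.n_stages = ch, n_stages
        self.head = nn.Sequential(nn.Conv2d(6, ch, 3, padding=1), nn.ReLU(inplace=True))
        self.rnn_gate = nn.Conv2d(2 * ch, ch, 3, padding=1)   # simple recurrent fusion across stages
        self.body = nn.Sequential(*[ResBlock(ch) for _ in range(n_blocks)])
        self.tail = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, y):
        x = y                                                 # stage-0 estimate: the rainy image itself
        h = torch.zeros(y.size(0), self.ch, y.size(2), y.size(3), device=y.device)
        outputs = []
        for _ in range(self.n_stages):                        # all stages share their parameters
            f = self.head(torch.cat([y, x], dim=1))           # input: rainy image + current estimate
            h = torch.tanh(self.rnn_gate(torch.cat([f, h], dim=1)))  # carry deep features across stages
            x = y + self.tail(self.body(h))                   # predict a residual w.r.t. the rainy image
            outputs.append(x)
        return outputs

net = ProgressiveDerainer()
print(net(torch.rand(1, 3, 64, 64))[-1].shape)                # torch.Size([1, 3, 64, 64])
```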
Using PRN and PReNet, we further investigate the effect of network input/output and loss function. In terms of network input, we take both stage-wise result and original rainy image as input to each ResNet, and empirically find that the introduction of original image does benefit deraining performance. In terms of network output, we adopt the residual learning formulation by predicting rain streak layer, and find that it is also feasible to directly learn a PRN or PReNet model for predicting clean background from rainy image. Finally, instead of hybrid losses with careful hyperparameters tuning [4,6], a single negative SSIM [29] or MSE loss can readily train PRN and PReNet with favorable deraining performance.
Comprehensive experiments have been conducted to evaluate our baseline networks PRN and PReNet. On four synthetic datasets, our PReNet and PRN are computationally very efficient, and achieve much better quantitative and qualitative deraining results in comparison with the stateof-the-art methods. In particular, on heavy rainy dataset Rain100H [30], the performance gains by our PRN and PReNet are still significant. The visually pleasing deraining results on real rainy images and videos have also validated the generalization ability of the trained PReNet and PRN models.
The contribution of this work is four-fold:
• Baseline deraining networks, i.e., PRN and PReNet, are proposed, by which better and simpler networks can work well in removing rain streaks, and provide a suitable basis for future studies on image deraining.
• By taking advantage of intra-stage recursive computation, PRN r and PReNet r are also suggested to reduce network parameters while maintaining state-of-the-art deraining performance.
• Using PRN and PReNet, the deraining performance can be further improved by taking both the stage-wise result and the original rainy image as input to each ResNet, and our progressive networks can be readily trained with a single negative SSIM or MSE loss.
• Extensive experiments show that our baseline networks are computationally very efficient, and perform favorably against state-of-the-art methods on both synthetic and real rainy images.
Optimization-based Deraining Methods
In general, a rainy image can be formed as the composition of a clean background image layer and a rain layer. On the one hand, linear summation is usually adopted as the composition model [1,21,35]. Then, image deraining can be formulated by incorporating proper regularizers on both the background image and the rain layer, and solved by specific optimization algorithms. Among these methods, the Gaussian mixture model (GMM) [21], sparse representation [35], and low rank representation [1] have been adopted for modeling the background image or rain layer. Based on the linear summation model, optimization-based methods have also been extended to video deraining [8,12,13,16,19]. On the other hand, the screen blend model [22,26] is assumed to be more realistic for the composition of a rainy image, based on which Luo et al. [22] use discriminative dictionary learning to separate rain streaks by enforcing that the two layers share the fewest dictionary atoms. However, the real composition generally is more complicated and the regularizers are still insufficient in characterizing the background and rain layers, so optimization-based methods remain limited in deraining performance.
Deep Network-based Deraining Methods
When applying deep networks to single image deraining, one natural solution is to learn a direct mapping to predict the clean background image x from the rainy image y. However, it is suggested that plain fully convolutional networks (FCN) are ineffective in learning the direct mapping [5,6]. Instead, Fu et al. [5,6] apply a low-pass filter to decompose y into a base layer y_base and a detail layer y_detail. By assuming y_base ≈ x_base, FCNs are then deployed to predict x_detail from y_detail. In contrast, Li et al. [20] adopt the residual learning formulation to predict the rain layer y − x from y. More complicated learning formulations, such as joint detection and removal of rain streaks [30], and joint rain density estimation and deraining [32], are also suggested. Adversarial losses are also introduced to enhance the texture details of deraining results [25,33]. In this work, we show that the improvement of deraining networks actually eases the difficulty of learning, and it is also feasible to train PRN and PReNet to learn either the direct or the residual mapping.
For the architecture of deraining networks, Fu et al. first adopt a shallow CNN [5] and then a deeper ResNet [6]. In [30], a multi-task CNN architecture is designed for joint detection and removal of rain streaks, in which contextualized dilated convolution and a recurrent structure are adopted to handle multi-scale and heavy rain streaks. Subsequently, Zhang et al. [32] propose a density aware multi-stream densely connected CNN for jointly estimating rain density and removing rain streaks. In [25], an attentive-recurrent network is developed for single image raindrop removal. Most recently, Li et al. [20] recurrently utilize dilated CNN and squeeze-and-excitation blocks to remove heavy rain streaks. In comparison to these deeper and complex networks, our work incorporates ResNet, a recurrent layer and multi-stage recursion to constitute a better, simpler and more efficient deraining network.
Besides, several lightweight networks, e.g., the cascaded scheme [4] and the Laplacian pyramid framework [7], are also developed to improve computational efficiency but at the cost of obvious performance degradation. As for PRN and PReNet, we further introduce intra-stage recursive computation to reduce network parameters while maintaining state-of-the-art deraining performance, resulting in our PRN r and PReNet r models.
Progressive Image Deraining Networks
In this section, progressive image deraining networks are presented by considering network architecture, input and output, and loss functions. To this end, we first describe the general framework of progressive networks as well as input/output, then implement the network modules, and finally discuss the learning objectives of progressive networks.
Progressive Networks
A simple deep network generally cannot succeed in removing rain streaks from rainy images [5,6]. Instead of designing deeper and complex networks, we suggest tackling the deraining problem in multiple stages, where a shallow ResNet is deployed at each stage. One natural multi-stage solution is to stack several sub-networks, which inevitably leads to an increase of network parameters and susceptibility to over-fitting. In comparison, we take advantage of inter-stage recursive computation [15, 20, 28] by requiring multiple stages to share the same network parameters. Besides, we can incorporate intra-stage recursive unfolding of only 1 ResBlock to significantly reduce network parameters with graceful degradation in deraining performance.
Progressive Residual Network
We first present a progressive residual network (PRN) as shown in Fig. 2(a). In particular, we adopt a basic ResNet with three parts: (i) a convolution layer f in receives network inputs, (ii) several residual blocks (ResBlocks) f res extract deep representation, and (iii) a convolution layer f out outputs deraining results. The inference of PRN at stage t can be formulated as
x^{t-0.5} = f_in(x^{t-1}, y),   x^t = f_out(f_res(x^{t-0.5})),   (1)
where f_in, f_res and f_out are stage-invariant, i.e., the network parameters are reused across different stages.
We note that f_in takes the concatenation of the current estimation x^{t-1} and the rainy image y as input. In comparison to using only x^{t-1} as in [20], the inclusion of y can further improve the deraining performance. The network output can be the prediction of either the rain layer or the clean background image. Our empirical study shows that, although predicting the rain layer performs moderately better, it is also possible to learn progressive networks for predicting the background image.
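To make the stage recursion in Eq. (1) concrete, a minimal PyTorch sketch of PRN is given below. This is an illustration rather than the authors' released implementation: the channel sizes follow the Network Architectures subsection, the ReLU placement inside each ResBlock is one plausible reading of the text, and initializing the estimate x^0 with the rainy image y is an assumption.

```python
# Minimal PRN sketch: one shallow ResNet (f_in, f_res, f_out) unfolded T times
# with shared parameters; each stage receives cat(x^{t-1}, y), following Eq. (1).
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True))

    def forward(self, x):
        return x + self.body(x)          # residual connection within the block

class PRN(nn.Module):
    def __init__(self, stages=6, ch=32, n_resblocks=5):
        super().__init__()
        self.stages, self.ch = stages, ch
        self.f_in = nn.Sequential(nn.Conv2d(6, ch, 3, padding=1), nn.ReLU(inplace=True))
        self.f_res = nn.Sequential(*[ResBlock(ch) for _ in range(n_resblocks)])
        self.f_out = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, y):
        x = y                            # x^0 initialized with the rainy image (assumed)
        outputs = []
        for _ in range(self.stages):     # the same parameters are reused at every stage
            feat = self.f_in(torch.cat([x, y], dim=1))   # x^{t-0.5} = f_in(x^{t-1}, y)
            x = self.f_out(self.f_res(feat))             # x^t = f_out(f_res(x^{t-0.5}))
            outputs.append(x)
        return outputs                   # supervision is imposed on outputs[-1] by default
```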
Progressive Recurrent Network
We further introduce a recurrent layer into PRN, by which feature dependencies across stages can be propagated to facilitate rain streak removal, resulting in our progressive recurrent network (PReNet). The only difference between PReNet and PRN is the inclusion of recurrent state s t ,
x^{t-0.5} = f_in(x^{t-1}, y),   s^t = f_recurrent(s^{t-1}, x^{t-0.5}),   x^t = f_out(f_res(s^t)),   (2)
where the recurrent layer f_recurrent takes both x^{t-0.5} and the recurrent state s^{t-1} as input at stage t. f_recurrent can be implemented using either a convolutional Long Short-Term Memory (LSTM) [11,27] or a convolutional Gated Recurrent Unit (GRU) [2]. In PReNet, we choose LSTM due to its empirical superiority in image deraining.
The architecture of PReNet is shown in Fig. 2(b). By unfolding PReNet with T recursive stages, the deep representation that facilitates rain streak removal is propagated by the recurrent states. The deraining results at intermediate stages in Fig. 1 show that heavy rain streak accumulation can be gradually removed stage by stage.
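The recurrent variant in Eq. (2) can be sketched by inserting a convolutional LSTM between f_in and f_res. The ConvLSTMCell below uses the standard gating equations and is written only for this sketch (the authors' exact recurrent layer may differ in details); the class builds on the PRN sketch above, and initializing the recurrent state with zeros is an assumption.

```python
# Illustrative PReNet: Eq. (2) with a convolutional LSTM carrying state across stages.
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.gates = nn.Conv2d(2 * ch, 4 * ch, 3, padding=1)   # input, forget, output, cell gates

    def forward(self, x, state):
        h, c = state
        i, f, o, g = self.gates(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, (h, c)

class PReNet(PRN):                       # reuses f_in / f_res / f_out from the PRN sketch above
    def __init__(self, stages=6, ch=32, n_resblocks=5):
        super().__init__(stages, ch, n_resblocks)
        self.f_recurrent = ConvLSTMCell(ch)

    def forward(self, y):
        b, _, height, width = y.shape
        zeros = y.new_zeros(b, self.ch, height, width)
        state = (zeros, zeros)           # s^0 = 0 (assumed initialization)
        x, outputs = y, []
        for _ in range(self.stages):
            feat = self.f_in(torch.cat([x, y], dim=1))   # x^{t-0.5}
            s, state = self.f_recurrent(feat, state)     # s^t propagates features across stages
            x = self.f_out(self.f_res(s))                # x^t = f_out(f_res(s^t))
            outputs.append(x)
        return outputs
```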
Network Architectures
We hereby present the network architectures of PRN and PReNet. All the filters are of size 3×3 with padding 1×1. Generally, f_in is a 1-layer convolution with ReLU nonlinearity [23], f_res includes 5 ResBlocks, and f_out is also a 1-layer convolution. Due to the concatenation of the 3-channel RGB y and the 3-channel RGB x^{t-1}, the convolution in f_in has 6 and 32 channels for input and output, respectively. f_out takes the output of f_res (or f_recurrent) with 32 channels as input and outputs a 3-channel RGB image for PRN (or PReNet). In f_recurrent, all the convolutions in the LSTM have 32 input channels and 32 output channels. f_res is the key component for extracting the deep representation for rain streak removal, and we provide two implementations, i.e., conventional ResBlocks shown in Fig. 3(a) and recursive ResBlocks shown in Fig. 3(b). Conventional ResBlocks: As shown in Fig. 3(a), we first implement f_res with 5 ResBlocks in its conventional form, i.e., each ResBlock includes 2 convolution layers followed by ReLU [23]. All the convolution layers receive 32-channel features without downsampling or upsampling operations. Conventional ResBlocks are adopted in PRN and PReNet.
Recursive ResBlocks: Motivated by [15,28], we also implement f_res by recursively unfolding one ResBlock 5 times, as shown in Fig. 3(b). Since network parameters mainly come from the ResBlocks, the intra-stage recursive computation leads to a much smaller model size, resulting in PRN r and PReNet r . We have evaluated the performance of recursive ResBlocks in Sec. 4.2, which indicates an elegant tradeoff between model size and deraining performance.
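The only difference between the two implementations of f_res is whether five distinct ResBlocks are stacked or a single ResBlock is applied five times; a minimal sketch of the latter, which could be swapped in for f_res in the sketches above to obtain PRN_r / PReNet_r, is shown below (class and variable names are illustrative).

```python
# Intra-stage recursion sketch: one ResBlock unfolded 5 times, so f_res stores
# the parameters of a single block instead of five.
import torch.nn as nn

class RecursiveResBlocks(nn.Module):
    def __init__(self, ch=32, n_unfold=5):
        super().__init__()
        self.n_unfold = n_unfold
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True))

    def forward(self, x):
        for _ in range(self.n_unfold):   # the same weights are reused at every unfolding
            x = x + self.body(x)
        return x

# e.g., model.f_res = RecursiveResBlocks(ch=32, n_unfold=5)
```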
Learning Objective
Recently, hybrid loss functions, e.g., MSE+SSIM [4], ℓ1+SSIM [7] and even adversarial loss [33], have been widely adopted for training deraining networks, but they considerably increase the burden of hyper-parameter tuning. In contrast, benefiting from the progressive network architecture, we empirically find that a single loss function, e.g., the MSE loss or the negative SSIM loss [29], is sufficient to train PRN and PReNet. For a model with T stages, we have T outputs, i.e., x^1, x^2, ..., x^T. By only imposing supervision on the final output x^T, the MSE loss is
L = ||x^T − x_gt||^2,   (3)
and the negative SSIM loss is
L = −SSIM(x^T, x_gt),   (4)
where x_gt is the corresponding ground-truth clean image. It is worth noting that our empirical study shows that the negative SSIM loss outperforms the MSE loss in terms of both PSNR and SSIM. Moreover, recursive supervision can be imposed on each intermediate result,
L = − ∑_{t=1}^{T} λ_t SSIM(x^t, x_gt),   (5)
where λ_t is the tradeoff parameter for stage t. Experimental results in Sec. 4.1.1 show that recursive supervision does not bring any performance gain on the final output at t = T, but it can be adopted to generate visually satisfying results at early stages.
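A sketch of the three objectives in Eqs. (3)-(5) is given below. The ssim argument stands for any differentiable function returning the mean SSIM of a batch (a third-party or custom implementation, treated as a given here), and the default lambda values follow the ablation setting reported in Sec. 4.1.1.

```python
# Loss sketches for Eqs. (3)-(5); `outputs` is the list of stage-wise predictions
# returned by the PRN/PReNet sketches, and `ssim` is an assumed SSIM function.
import torch

def mse_loss(outputs, x_gt):
    # Eq. (3): MSE between the final-stage output x^T and the ground truth
    return torch.mean((outputs[-1] - x_gt) ** 2)

def neg_ssim_loss(outputs, x_gt, ssim):
    # Eq. (4): negative SSIM on the final-stage output only
    return -ssim(outputs[-1], x_gt)

def recursive_neg_ssim_loss(outputs, x_gt, ssim, lambdas=None):
    # Eq. (5): weighted negative SSIM over all stage outputs; lambda_t = 0.5 for
    # intermediate stages and 1.5 for the final stage in the reported ablation.
    if lambdas is None:
        lambdas = [0.5] * (len(outputs) - 1) + [1.5]
    return -sum(w * ssim(x_t, x_gt) for w, x_t in zip(lambdas, outputs))
```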
Experimental Results
In this section, we first conduct ablation studies to verify the main components of our methods, then quantitatively and qualitatively evaluate progressive networks, and finally assess PReNet on real rainy images and video. All the source code, pre-trained models and results can be found at https://github.com/csdwren/PReNet.
Our progressive networks are implemented using PyTorch [24], and are trained on a PC equipped with two NVIDIA GTX 1080Ti GPUs. In our experiments, all the progressive networks share the same training setting. The patch size is 100 × 100, and the batch size is 18. The ADAM [17] algorithm is adopted to train the models with an initial learning rate of 1 × 10^{-3}, and training ends after 100 epochs. When reaching 30, 50 and 80 epochs, the learning rate is decayed by multiplying it by 0.2.
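The reported schedule maps onto a standard PyTorch optimizer and scheduler setup. The sketch below assumes the PReNet class and the neg_ssim_loss/ssim helpers from the earlier sketches, and train_loader is a placeholder for a dataloader yielding (rainy, clean) pairs of 100x100 patches with batch size 18.

```python
# Training-schedule sketch: Adam with initial lr 1e-3, 100 epochs, and the
# learning rate multiplied by 0.2 at epochs 30, 50 and 80.
import torch

model = PReNet(stages=6)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[30, 50, 80], gamma=0.2)

for epoch in range(100):
    for rainy, clean in train_loader:            # placeholder dataloader (not defined here)
        outputs = model(rainy)
        loss = neg_ssim_loss(outputs, clean, ssim)   # negative SSIM loss, Eq. (4)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    scheduler.step()                             # decay the learning rate per the milestones
```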
Ablation Studies
All the ablation studies are conducted on a heavy rain dataset [30] with 1,800 rainy images for training and 100 rainy images (Rain100H) for testing. However, we find that 546 rainy images from the 1,800 training samples have the same background contents as the testing images. Therefore, we exclude these 546 images from the training set, and train all our models on the remaining 1,254 training images.
Loss Functions
Using PReNet (T = 6) as an example, we discuss the effect of loss functions on deraining performance, including MSE loss, negative SSIM loss, and recursive supervision loss.
Negative SSIM vs. MSE. We train two PReNet models by minimizing the MSE loss (PReNet-MSE) and the negative SSIM loss (PReNet-SSIM), and Table 1 lists their PSNR and SSIM values on Rain100H. Unsurprisingly, PReNet-SSIM outperforms PReNet-MSE in terms of SSIM. We also note that PReNet-SSIM even achieves higher PSNR, partially attributed to the fact that PReNet-MSE may be inclined to get trapped into a poor solution. As shown in Fig. 4, the deraining result by PReNet-SSIM is also visually more plausible than that by PReNet-MSE. Therefore, the negative SSIM loss is adopted as the default in the following experiments. We further train a PReNet model (PReNet-RecSSIM) with the recursive supervision loss in Eqn. (5). For PReNet-RecSSIM, we set λ_t = 0.5 (t = 1, 2, ..., 5) and λ_6 = 1.5, where the tradeoff parameter for the final stage is larger than the others. From Table 1, PReNet-RecSSIM performs moderately inferior to PReNet-SSIM. As shown in Fig. 4, the intermediate results of PReNet-RecSSIM are already visually plausible at early stages, which makes it appealing for computing resource constrained environments by stopping the inference at any stage t.
Network Architecture
In this subsection, we assess the effect of several key modules of progressive networks, including recurrent layer, multi-stage recursion, and intra-stage recursion.
Recurrent Layer. Using PReNet (T = 6), we test two types of recurrent layers, i.e., LSTM (PReNet-LSTM) and GRU (PReNet-GRU). It can be seen from Table 3 that LSTM performs slightly better than GRU in terms of quantitative metrics, and it is thus adopted as the default implementation of the recurrent layer in our experiments. We further compare progressive networks with and without the recurrent layer, i.e., PReNet and PRN, in Table 4, and the introduction of the recurrent layer clearly benefits the deraining performance in terms of PSNR and SSIM.
Intra-stage Recursion. From Table 4, intra-stage recursion, i.e., recursive ResBlocks, is introduced to significantly reduce the number of parameters of progressive networks, resulting in PRN r and PReNet r . As for deraining performance, it is reasonable to see that PRN and PReNet respectively achieve higher average PSNR and SSIM values than PRN r and PReNet r . But in terms of visual quality, PRN r and PReNet r are comparable with PRN and PReNet, as shown in Fig. 6.
Recursive Stage Number T . Table 2 lists the PSNR and SSIM values of PReNet models with stage numbers T = 2, 3, 4, 5, 6, 7. One can see that PReNet with more stages (from 2 stages to 6 stages) usually leads to higher average PSNR and SSIM values. However, a larger T also makes PReNet more difficult to train. When T = 7, PReNet 7 performs slightly worse than PReNet 6 . Thus, we set T = 6 in the following experiments.
Effect of Network Input/Output
Network Input. We also test a variant of PReNet that only takes x^{t-1} at each stage as input to f_in (i.e., PReNet x ), a strategy that has been adopted in [20,30]. From Table 3, PReNet x is obviously inferior to PReNet in terms of both PSNR and SSIM, indicating the benefit of receiving y at each stage.
Network Output. We consider two types of network outputs, i.e., with the residual learning formulation (PReNet in Table 3) or without it (PReNet-LSTM in Table 3). From Table 3, residual learning contributes a further performance gain. It is worth noting that, benefiting from the progressive networks, it is feasible to learn PReNet to directly predict the clean background from the rainy image, and even PReNet-LSTM can achieve appealing deraining performance.
Evaluation on Synthetic Datasets
Our progressive networks are evaluated on three synthetic datasets, i.e., Rain100H [30], Rain100L [30] and Rain12 [21]. Five competing methods are considered, including one traditional optimization-based method (GMM [21]) and three state-of-the-art deep CNN-based models, i.e., DDN [6], JORDER [30] and RESCAN [20], and one lightweight network (RGN [4]). For heavy rainy images (Rain100H) and light rainy images (Rain100L), the models are respectively trained, and the models for light rain are directly applied on Rain12. Since the source codes of RGN are not available, we adopt the numerical results reported in [4]. As for JORDER, we directly compute average PSNR and SSIM on deraining results provided by the authors. We re-train RESCAN [20] for Rain100H with the default settings. Besides, we have tried to train RESCAN for light rainy images, but the results are much inferior to the others. So its results on Rain100L and Rain12 are not reported in our experiments.
Table 5. Average PSNR and SSIM comparison on the synthetic datasets Rain100H [30], Rain100L [30] and Rain12 [21]. Red, blue and cyan colors are used to indicate the top 1st, 2nd and 3rd rank, respectively. One marker means the metrics are copied from [4]; • means the metrics are directly computed based on the deraining images provided by the authors [30]; another marker denotes the method is re-trained with its default settings (i.e., all the 1,800 training samples for Rain100H).
Our PReNet achieves significant PSNR and SSIM gains over all the competing methods. We also note that, although RESCAN [20] is re-trained on the full 1,800 rainy images for Rain100H, the performance gain by our PReNet trained on the stricter 1,254 training images is still notable. Moreover, even PReNet r can perform better than all the competing methods. From Fig. 7, visible dark noises along rain directions can still be observed in the results by the other methods. In comparison, the results by PRN and PReNet are visually more pleasing.
We further evaluate progressive networks on another dataset [6] which includes 12,600 rainy images for training and 1,400 rainy images for testing (Rain1400). From Table 6, all four versions of progressive networks outperform DDN in terms of PSNR and SSIM. As shown in Fig. 8, the visual quality improvement by our methods is also significant, while the result by DDN still contains visible rain streaks. Table 7 lists the running time of different methods on a computer equipped with an NVIDIA GTX 1080Ti GPU. We only give the running time of DDN [6], JORDER [30] and RESCAN [20], since the codes of the other competing methods are not available. We note that the running time of DDN [6] takes the separation of the detail layer into account. Unsurprisingly, PRN and PReNet are much more efficient due to their simple network architectures.
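As a side note on how such timings are usually obtained, GPU running time is typically measured with explicit CUDA synchronization around the timed loop; the sketch below illustrates this, where the model and the input resolution are placeholders rather than the exact evaluation protocol used here.

```python
# GPU timing sketch with warm-up and CUDA synchronization.
import time
import torch

model = PReNet(stages=6).cuda().eval()           # model from the sketches above
x = torch.rand(1, 3, 480, 320, device="cuda")    # placeholder input resolution

with torch.no_grad():
    for _ in range(10):                          # warm-up iterations
        model(x)
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(100):
        model(x)
    torch.cuda.synchronize()
    print("average seconds per image:", (time.time() - start) / 100)
```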
Evaluation on Real Rainy Images
Using two real rainy images in Fig. 9, we compare PReNet with two state-of-the-art deep methods, i.e., JORDER [30] and DDN [6]. It can be seen that PReNet and JORDER perform better than DDN in removing rain streaks. For the first image, rain streaks remain visible in the result by DDN, while PReNet and JORDER can generate satisfying deraining results. For the second image, there are more or less rain streaks in the results by DDN and JORDER, while the result by PReNet is clearer.
Evaluation on Real Rainy Videos
Finally, PReNet is adopted to process a rainy video in a frame-by-frame manner, and is compared with the state-of-the-art video deraining method FastDerain [12]. As shown in Fig. 10, for frame #510, both FastDerain and our PReNet can remove all the rain streaks, indicating the performance of PReNet even without the help of temporal consistency. However, FastDerain fails at frame switches, since it is developed by exploiting the consistency of adjacent frames. As a result, for frames #571, #572 and #640, rain streaks remain in the results by FastDerain, while our PReNet performs favorably and is not affected by frame switches or accumulation errors.
Conclusion
In this paper, a better and simpler baseline network is presented for single image deraining. Instead of deeper and complex networks, we find that the simple combination of ResNet and multi-stage recursion, i.e., PRN, can result in favorable performance. Moreover, the deraining performance can be further boosted by the inclusion of the recurrent layer, and the stage-wise result is also taken as input to each ResNet, resulting in our PReNet model. Furthermore, the network parameters can be reduced by incorporating inter- and intra-stage recursive computation (PRN r and PReNet r ). And our progressive deraining networks can be readily trained with a single negative SSIM or MSE loss. Extensive experiments validate the superiority of our PReNet and PReNet r on synthetic and real rainy images in comparison to state-of-the-art deraining methods. Taking their simplicity, effectiveness and efficiency into account, it is also appealing to exploit our models as baselines when developing new deraining networks. | 4,123 |
1901.09221 | 2953343723 | Along with the deraining performance improvement of deep networks, their structures and learning become more and more complicated and diverse, making it difficult to analyze the contribution of various network modules when developing new deraining networks. To handle this issue, this paper provides a better and simpler baseline deraining network by considering network architecture, input and output, and loss functions. Specifically, by repeatedly unfolding a shallow ResNet, progressive ResNet (PRN) is proposed to take advantage of recursive computation. A recurrent layer is further introduced to exploit the dependencies of deep features across stages, forming our progressive recurrent network (PReNet). Furthermore, intra-stage recursive computation of ResNet can be adopted in PRN and PReNet to notably reduce network parameters with graceful degradation in deraining performance. For network input and output, we take both stage-wise result and original rainy image as input to each ResNet and finally output the prediction of residual image. As for loss functions, single MSE or negative SSIM losses are sufficient to train PRN and PReNet. Experiments show that PRN and PReNet perform favorably on both synthetic and real rainy images. Considering its simplicity, efficiency and effectiveness, our models are expected to serve as a suitable baseline in future deraining research. The source codes are available at this https URL. | When applying deep networks to single image deraining, one natural solution is to learn a direct mapping to predict the clean background image @math from the rainy image @math . However, it is suggested that plain fully convolutional networks (FCN) are ineffective in learning the direct mapping @cite_0 @cite_4 . Instead, Fu et al. @cite_0 @cite_4 apply a low-pass filter to decompose @math into a base layer @math and a detail layer @math . By assuming @math , FCNs are then deployed to predict @math from @math . In contrast, Li et al. @cite_10 adopt the residual learning formulation to predict the rain layer @math from @math . More complicated learning formulations, such as joint detection and removal of rain streaks @cite_20 , and joint rain density estimation and deraining @cite_8 , are also suggested. Adversarial losses are also introduced to enhance the texture details of deraining results @cite_1 @cite_7 . In this work, we show that the improvement of deraining networks actually eases the difficulty of learning, and it is also feasible to train PRN and PReNet to learn either the direct or the residual mapping. | {
"abstract": [
"We propose a new deep network architecture for removing rain streaks from individual images based on the deep convolutional neural network (CNN). Inspired by the deep residual network (ResNet) that simplifies the learning process by changing the mapping form, we propose a deep detail network to directly reduce the mapping range from input to output, which makes the learning process easier. To further improve the de-rained result, we use a priori image domain knowledge by focusing on high frequency detail during training, which removes background interference and focuses the model on the structure of rain in images. This demonstrates that a deep architecture not only has benefits for high-level vision tasks but also can be used to solve low-level imaging problems. Though we train the network on synthetic data, we find that the learned network generalizes well to real-world test images. Experiments show that the proposed method significantly outperforms state-of-the-art methods on both synthetic and real-world images in terms of both qualitative and quantitative measures. We discuss applications of this structure to denoising and JPEG artifact reduction at the end of the paper.",
"Raindrops adhered to a glass window or camera lens can severely hamper the visibility of a background scene and degrade an image considerably. In this paper, we address the problem by visually removing raindrops, and thus transforming a raindrop degraded image into a clean one. The problem is intractable, since first the regions occluded by raindrops are not given. Second, the information about the background scene of the occluded regions is completely lost for most part. To resolve the problem, we apply an attentive generative network using adversarial training. Our main idea is to inject visual attention into both the generative and discriminative networks. During the training, our visual attention learns about raindrop regions and their surroundings. Hence, by injecting this information, the generative network will pay more attention to the raindrop regions and the surrounding structures, and the discriminative network will be able to assess the local consistency of the restored regions. This injection of visual attention to both generative and discriminative networks is the main contribution of this paper. Our experiments show the effectiveness of our approach, which outperforms the state of the art methods quantitatively and qualitatively.",
"",
"Severe weather conditions such as rain and snow adversely affect the visual quality of images captured under such conditions thus rendering them useless for further usage and sharing. In addition, such degraded images drastically affect performance of vision systems. Hence, it is important to solve the problem of single image de-raining de-snowing. However, this is a difficult problem to solve due to its inherent ill-posed nature. Existing approaches attempt to introduce prior information to convert it into a well-posed problem. In this paper, we investigate a new point of view in addressing the single image de-raining problem. Instead of focusing only on deciding what is a good prior or a good framework to achieve good quantitative and qualitative performance, we also ensure that the de-rained image itself does not degrade the performance of a given computer vision algorithm such as detection and classification. In other words, the de-rained result should be indistinguishable from its corresponding clear image to a given discriminator. This criterion can be directly incorporated into the optimization framework by using the recently introduced conditional generative adversarial networks (GANs). To minimize artifacts introduced by GANs and ensure better visual quality, a new refined loss function is introduced. Based on this, we propose a novel single image de-raining method called Image De-raining Conditional General Adversarial Network (ID-CGAN), which considers quantitative, visual and also discriminative performance into the objective function. Experiments evaluated on synthetic images and real images show that the proposed method outperforms many recent state-of-the-art single image de-raining methods in terms of quantitative and visual performance.",
"We introduce a deep network architecture called DerainNet for removing rain streaks from an image. Based on the deep convolutional neural network (CNN), we directly learn the mapping relationship between rainy and clean image detail layers from data. Because we do not possess the ground truth corresponding to real-world rainy images, we synthesize images with rain for training. In contrast to other common strategies that increase depth or breadth of the network, we use image processing domain knowledge to modify the objective function and improve deraining with a modestly sized CNN. Specifically, we train our DerainNet on the detail (high-pass) layer rather than in the image domain. Though DerainNet is trained on synthetic data, we find that the learned network translates very effectively to real-world images for testing. Moreover, we augment the CNN framework with image enhancement to improve the visual results. Compared with the state-of-the-art single image de-raining methods, our method has improved rain removal and much faster computation time after network training.",
"",
"In this paper, we address a rain removal problem from a single image, even in the presence of heavy rain and rain streak accumulation. Our core ideas lie in our new rain image model and new deep learning architecture. We add a binary map that provides rain streak locations to an existing model, which comprises a rain streak layer and a background layer. We create a model consisting of a component representing rain streak accumulation (where individual streaks cannot be seen, and thus visually similar to mist or fog), and another component representing various shapes and directions of overlapping rain streaks, which usually happen in heavy rain. Based on the model, we develop a multi-task deep learning architecture that learns the binary rain streak map, the appearance of rain streaks, and the clean background, which is our ultimate output. The additional binary map is critically beneficial, since its loss function can provide additional strong information to the network. To handle rain streak accumulation (again, a phenomenon visually similar to mist or fog) and various shapes and directions of overlapping rain streaks, we propose a recurrent rain detection and removal network that removes rain streaks and clears up the rain accumulation iteratively and progressively. In each recurrence of our method, a new contextualized dilated network is developed to exploit regional contextual information and to produce better representations for rain detection. The evaluation on real images, particularly on heavy rain, shows the effectiveness of our models and architecture."
],
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_8",
"@cite_1",
"@cite_0",
"@cite_10",
"@cite_20"
],
"mid": [
"2740982616",
"2963800716",
"",
"2580458810",
"2509784253",
"",
"2559264300"
]
} | Progressive Image Deraining Networks: A Better and Simpler Baseline | Rain is a common weather condition, and has severe adverse effects on not only human visual perception but also the performance of various high level vision tasks such as image classification, object detection, and video surveillance [7,14]. Single image deraining aims at restoring the clean background image from a rainy image, and has drawn considerable recent research attention. (Figure 1 shows deraining results by RESCAN [20] and PReNet (T = 6) at stages t = 1, 2, 4, 6, respectively.) For example, several traditional optimization based methods [1,9,21,22] have been suggested for modeling and separating rain streaks from the background clean image. However, due to the complex composition of rain and background layers, image deraining remains a challenging ill-posed problem. Driven by the unprecedented success of deep learning in low level vision [3,15,18,28,34], recent years have also witnessed the rapid progress of deep convolutional neural networks (CNN) in image deraining. In [5], Fu et al. show that it is difficult to train a CNN to directly predict the background image from a rainy image, and utilize a 3-layer CNN to remove rain streaks from the high-pass detail layer instead of the input image. Subsequently, other formulations are also introduced, such as residual learning for predicting the rain streak layer [20], joint detection and removal of rain streaks [30], and joint rain density estimation and deraining [32].
1901.09221 | 2953343723 | Along with the deraining performance improvement of deep networks, their structures and learning become more and more complicated and diverse, making it difficult to analyze the contribution of various network modules when developing new deraining networks. To handle this issue, this paper provides a better and simpler baseline deraining network by considering network architecture, input and output, and loss functions. Specifically, by repeatedly unfolding a shallow ResNet, progressive ResNet (PRN) is proposed to take advantage of recursive computation. A recurrent layer is further introduced to exploit the dependencies of deep features across stages, forming our progressive recurrent network (PReNet). Furthermore, intra-stage recursive computation of ResNet can be adopted in PRN and PReNet to notably reduce network parameters with graceful degradation in deraining performance. For network input and output, we take both stage-wise result and original rainy image as input to each ResNet and finally output the prediction of residual image. As for loss functions, single MSE or negative SSIM losses are sufficient to train PRN and PReNet. Experiments show that PRN and PReNet perform favorably on both synthetic and real rainy images. Considering its simplicity, efficiency and effectiveness, our models are expected to serve as a suitable baseline in future deraining research. The source codes are available at this https URL. | For the architecture of deraining networks, Fu et al. first adopt a shallow CNN @cite_0 and then a deeper ResNet @cite_4 . In @cite_20 , a multi-task CNN architecture is designed for joint detection and removal of rain streaks, in which contextualized dilated convolution and a recurrent structure are adopted to handle multi-scale and heavy rain streaks. Subsequently, Zhang et al. @cite_8 propose a density aware multi-stream densely connected CNN for jointly estimating rain density and removing rain streaks. In @cite_7 , an attentive-recurrent network is developed for single image raindrop removal. Most recently, Li et al. @cite_10 recurrently utilize dilated CNN and squeeze-and-excitation blocks to remove heavy rain streaks. In comparison to these deeper and complex networks, our work incorporates ResNet, a recurrent layer and multi-stage recursion to constitute a better, simpler and more efficient deraining network. | {
"abstract": [
"We propose a new deep network architecture for removing rain streaks from individual images based on the deep convolutional neural network (CNN). Inspired by the deep residual network (ResNet) that simplifies the learning process by changing the mapping form, we propose a deep detail network to directly reduce the mapping range from input to output, which makes the learning process easier. To further improve the de-rained result, we use a priori image domain knowledge by focusing on high frequency detail during training, which removes background interference and focuses the model on the structure of rain in images. This demonstrates that a deep architecture not only has benefits for high-level vision tasks but also can be used to solve low-level imaging problems. Though we train the network on synthetic data, we find that the learned network generalizes well to real-world test images. Experiments show that the proposed method significantly outperforms state-of-the-art methods on both synthetic and real-world images in terms of both qualitative and quantitative measures. We discuss applications of this structure to denoising and JPEG artifact reduction at the end of the paper.",
"Raindrops adhered to a glass window or camera lens can severely hamper the visibility of a background scene and degrade an image considerably. In this paper, we address the problem by visually removing raindrops, and thus transforming a raindrop degraded image into a clean one. The problem is intractable, since first the regions occluded by raindrops are not given. Second, the information about the background scene of the occluded regions is completely lost for most part. To resolve the problem, we apply an attentive generative network using adversarial training. Our main idea is to inject visual attention into both the generative and discriminative networks. During the training, our visual attention learns about raindrop regions and their surroundings. Hence, by injecting this information, the generative network will pay more attention to the raindrop regions and the surrounding structures, and the discriminative network will be able to assess the local consistency of the restored regions. This injection of visual attention to both generative and discriminative networks is the main contribution of this paper. Our experiments show the effectiveness of our approach, which outperforms the state of the art methods quantitatively and qualitatively.",
"",
"We introduce a deep network architecture called DerainNet for removing rain streaks from an image. Based on the deep convolutional neural network (CNN), we directly learn the mapping relationship between rainy and clean image detail layers from data. Because we do not possess the ground truth corresponding to real-world rainy images, we synthesize images with rain for training. In contrast to other common strategies that increase depth or breadth of the network, we use image processing domain knowledge to modify the objective function and improve deraining with a modestly sized CNN. Specifically, we train our DerainNet on the detail (high-pass) layer rather than in the image domain. Though DerainNet is trained on synthetic data, we find that the learned network translates very effectively to real-world images for testing. Moreover, we augment the CNN framework with image enhancement to improve the visual results. Compared with the state-of-the-art single image de-raining methods, our method has improved rain removal and much faster computation time after network training.",
"",
"In this paper, we address a rain removal problem from a single image, even in the presence of heavy rain and rain streak accumulation. Our core ideas lie in our new rain image model and new deep learning architecture. We add a binary map that provides rain streak locations to an existing model, which comprises a rain streak layer and a background layer. We create a model consisting of a component representing rain streak accumulation (where individual streaks cannot be seen, and thus visually similar to mist or fog), and another component representing various shapes and directions of overlapping rain streaks, which usually happen in heavy rain. Based on the model, we develop a multi-task deep learning architecture that learns the binary rain streak map, the appearance of rain streaks, and the clean background, which is our ultimate output. The additional binary map is critically beneficial, since its loss function can provide additional strong information to the network. To handle rain streak accumulation (again, a phenomenon visually similar to mist or fog) and various shapes and directions of overlapping rain streaks, we propose a recurrent rain detection and removal network that removes rain streaks and clears up the rain accumulation iteratively and progressively. In each recurrence of our method, a new contextualized dilated network is developed to exploit regional contextual information and to produce better representations for rain detection. The evaluation on real images, particularly on heavy rain, shows the effectiveness of our models and architecture."
],
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_8",
"@cite_0",
"@cite_10",
"@cite_20"
],
"mid": [
"2740982616",
"2963800716",
"",
"2509784253",
"",
"2559264300"
]
} | Progressive Image Deraining Networks: A Better and Simpler Baseline | Rain is a common weather condition, and has a severe adverse effect not only on human visual perception but also on the performance of various high-level vision tasks such as image classification, object detection, and video surveillance [7,14]. Single image deraining aims at restoring the clean background image from a rainy image, and has drawn considerable recent research attention. (Figure 1: Deraining results by RESCAN [20] and PReNet (T = 6) at stages t = 1, 2, 4, 6, respectively.) For example, several traditional optimization-based methods [1,9,21,22] have been suggested for modeling and separating rain streaks from the clean background image. However, due to the complex composition of rain and background layers, image deraining remains a challenging ill-posed problem. Driven by the unprecedented success of deep learning in low-level vision [3,15,18,28,34], recent years have also witnessed the rapid progress of deep convolutional neural networks (CNNs) in image deraining. In [5], Fu et al. show that it is difficult to train a CNN to directly predict the background image from the rainy image, and utilize a 3-layer CNN to remove rain streaks from the high-pass detail layer instead of the input image. Subsequently, other formulations are also introduced, such as residual learning for predicting the rain streak layer [20], joint detection and removal of rain streaks [30], and joint rain density estimation and deraining [32].
On the other hand, many modules are suggested to constitute different deraining networks, including residual blocks [6,10], dilated convolution [30,31], dense blocks [32], squeeze-and-excitation [20], and recurrent layers [20,25]. Multi-stream [32] and multi-stage [20] networks are also deployed to capture multi-scale characteristics and to remove heavy rain. Moreover, several models are designed to improve computational efficiency by utilizing lightweight networks in a cascaded scheme [4] or a Laplacian pyramid framework [7], but at the cost of obvious degradation in deraining performance. To sum up, despite the progress in deraining performance, the structures of deep networks have become more and more complicated and diverse. As a result, it is difficult to analyze the contribution of various modules and their combinations, and to develop new models by introducing modules to existing deeper and complex deraining networks.
In this paper, we aim to present a new baseline network for single image deraining to demonstrate that: (i) by combining only a few modules, a better and simpler baseline network can be constructed and can achieve noteworthy performance gains over state-of-the-art deeper and complex deraining networks; (ii) unlike [5], the improvement of deraining networks may ease the difficulty of training CNNs to directly recover the clean image from the rainy image. Moreover, the simplicity of the baseline network makes it easier to develop new deraining models by introducing other network modules or modifying the existing ones.
To this end, we consider the network architecture, input and output, and loss functions to form a better and simpler baseline network. In terms of network architecture, we begin with a basic shallow residual network (ResNet) with five residual blocks (ResBlocks). Then, progressive ResNet (PRN) is introduced by recursively unfolding the ResNet into multiple stages without increasing the model parameters (see Fig. 2(a)). Moreover, a recurrent layer [11,27] is introduced to exploit the dependencies of deep features across recursive stages to form the PReNet in Fig. 2(b). From Fig. 1, a 6-stage PReNet can remove most rain streaks at the first stage, and then the remaining rain streaks can be progressively removed, leading to promising deraining quality at the final stage. Furthermore, PRN_r and PReNet_r are presented by adopting intra-stage recursive unfolding of only one ResBlock, which reduces network parameters at the cost of only insubstantial performance degradation.
Using PRN and PReNet, we further investigate the effect of network input/output and loss function. In terms of network input, we take both the stage-wise result and the original rainy image as input to each ResNet, and empirically find that the introduction of the original image does benefit deraining performance. In terms of network output, we adopt the residual learning formulation by predicting the rain streak layer, and find that it is also feasible to directly learn a PRN or PReNet model for predicting the clean background from the rainy image. Finally, instead of hybrid losses with careful hyperparameter tuning [4,6], a single negative SSIM [29] or MSE loss can readily train PRN and PReNet with favorable deraining performance.
Comprehensive experiments have been conducted to evaluate our baseline networks PRN and PReNet. On four synthetic datasets, our PReNet and PRN are computationally very efficient, and achieve much better quantitative and qualitative deraining results in comparison with the state-of-the-art methods. In particular, on the heavy rainy dataset Rain100H [30], the performance gains by our PRN and PReNet are still significant. The visually pleasing deraining results on real rainy images and videos have also validated the generalization ability of the trained PReNet and PRN models.
The contribution of this work is four-fold:
• Baseline deraining networks, i.e., PRN and PReNet, are proposed, by which better and simpler networks can work well in removing rain streaks, and provide a suitable basis for future studies on image deraining.
• By taking advantage of intra-stage recursive computation, PRN_r and PReNet_r are also suggested to reduce network parameters while maintaining state-of-the-art deraining performance.
• Using PRN and PReNet, the deraining performance can be further improved by taking both the stage-wise result and the original rainy image as input to each ResNet, and our progressive networks can be readily trained with a single negative SSIM or MSE loss.
• Extensive experiments show that our baseline networks are computationally very efficient, and perform favorably against the state-of-the-art methods on both synthetic and real rainy images.
Optimization-based Deraining Methods
In general, a rainy image can be formed as the composition of a clean background image layer and a rain layer. On the one hand, linear summation is usually adopted as the composition model [1,21,35]. Then, image deraining can be formulated by incorporating proper regularizers on both the background image and the rain layer, and solved by specific optimization algorithms. Among these methods, the Gaussian mixture model (GMM) [21], sparse representation [35], and low-rank representation [1] have been adopted for modeling the background image or the rain layer. Based on the linear summation model, optimization-based methods have also been extended to video deraining [8,12,13,16,19]. On the other hand, the screen blend model [22,26] is assumed to be more realistic for the composition of a rainy image, based on which Luo et al. [22] use discriminative dictionary learning to separate rain streaks by enforcing the two layers to share the fewest dictionary atoms. However, the real composition is generally more complicated, and the regularizers are still insufficient for characterizing the background and rain layers, so optimization-based methods remain limited in deraining performance.
Deep Network-based Deraining Methods
When applying deep networks to single image deraining, one natural solution is to learn a direct mapping to predict the clean background image x from the rainy image y. However, it has been suggested that plain fully convolutional networks (FCNs) are ineffective in learning this direct mapping [5,6]. Instead, Fu et al. [5,6] apply a low-pass filter to decompose y into a base layer $y_{base}$ and a detail layer $y_{detail}$. By assuming $y_{base} \approx x_{base}$, FCNs are then deployed to predict $x_{detail}$ from $y_{detail}$. In contrast, Li et al. [20] adopt the residual learning formulation to predict the rain layer $y - x$ from y. More complicated learning formulations, such as joint detection and removal of rain streaks [30] and joint rain density estimation and deraining [32], are also suggested. Adversarial losses have also been introduced to enhance the texture details of deraining results [25,33]. In this work, we show that the improvement of deraining networks actually eases the difficulty of learning, and it is also feasible to train PRN and PReNet to learn either a direct or a residual mapping.
For the architecture of the deraining network, Fu et al. first adopt a shallow CNN [5] and then a deeper ResNet [6]. In [30], a multi-task CNN architecture is designed for joint detection and removal of rain streaks, in which contextualized dilated convolution and a recurrent structure are adopted to handle multi-scale and heavy rain streaks. Subsequently, Zhang et al. [32] propose a density-aware multi-stream densely connected CNN for jointly estimating rain density and removing rain streaks. In [25], an attentive-recurrent network is developed for single image raindrop removal. Most recently, Li et al. [20] recurrently utilize a dilated CNN and squeeze-and-excitation blocks to remove heavy rain streaks. In comparison to these deeper and complex networks, our work incorporates ResNet, a recurrent layer and multi-stage recursion to constitute a better, simpler and more efficient deraining network.
Besides, several lightweight networks, e.g., the cascaded scheme [4] and the Laplacian pyramid framework [7], are also developed to improve computational efficiency, but at the cost of obvious performance degradation. As for PRN and PReNet, we further introduce intra-stage recursive computation to reduce network parameters while maintaining state-of-the-art deraining performance, resulting in our PRN_r and PReNet_r models.
Progressive Image Deraining Networks
In this section, progressive image deraining networks are presented by considering network architecture, input and output, and loss functions. To this end, we first describe the general framework of progressive networks as well as input/output, then implement the network modules, and finally discuss the learning objectives of progressive networks.
Progressive Networks
A simple deep network generally cannot succeed in removing rain streaks from rainy images [5,6]. Instead of designing deeper and complex networks, we suggest tackling the deraining problem in multiple stages, where a shallow ResNet is deployed at each stage. One natural multi-stage solution is to stack several sub-networks, which inevitably leads to an increase of network parameters and susceptibility to over-fitting. In comparison, we take advantage of inter-stage recursive computation [15,20,28] by requiring multiple stages to share the same network parameters. Besides, we can incorporate intra-stage recursive unfolding of only 1 ResBlock to significantly reduce network parameters with graceful degradation in deraining performance.
Progressive Residual Network
We first present a progressive residual network (PRN) as shown in Fig. 2(a). In particular, we adopt a basic ResNet with three parts: (i) a convolution layer $f_{in}$ receives the network inputs, (ii) several residual blocks (ResBlocks) $f_{res}$ extract the deep representation, and (iii) a convolution layer $f_{out}$ outputs the deraining result. The inference of PRN at stage t can be formulated as
$$x^{t-0.5} = f_{in}(x^{t-1}, y), \quad x^t = f_{out}(f_{res}(x^{t-0.5})), \quad (1)$$
where $f_{in}$, $f_{res}$ and $f_{out}$ are stage-invariant, i.e., the network parameters are reused across different stages.
We note that $f_{in}$ takes the concatenation of the current estimation $x^{t-1}$ and the rainy image $y$ as input. In comparison to only $x^{t-1}$ in [20], the inclusion of $y$ can further improve the deraining performance. The network output can be the prediction of either the rain layer or the clean background image. Our empirical study shows that, although predicting the rain layer performs moderately better, it is also possible to learn progressive networks for predicting the background image.
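As a concrete illustration, the stage recursion of Eq. (1) can be sketched in PyTorch as below. The module and parameter names are ours; the sketch predicts the background image directly and initializes x^0 with the rainy image, which is only one of the input/output choices discussed in the text.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """A plain residual block: two 3x3 convolutions with ReLU and an identity skip."""
    def __init__(self, channels=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True))

    def forward(self, x):
        return x + self.body(x)

class PRN(nn.Module):
    """Progressive ResNet: one shallow ResNet unfolded over T stages, Eq. (1)."""
    def __init__(self, stages=6, channels=32, n_blocks=5):
        super().__init__()
        self.stages = stages
        # f_in receives the concatenation of the rainy image y and the estimate x^{t-1}
        self.f_in = nn.Sequential(nn.Conv2d(6, channels, 3, padding=1), nn.ReLU(inplace=True))
        self.f_res = nn.Sequential(*[ResBlock(channels) for _ in range(n_blocks)])
        self.f_out = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, y):
        x = y  # assumed initialization x^0 = y
        for _ in range(self.stages):  # the same f_in / f_res / f_out are reused at every stage
            z = self.f_in(torch.cat([x, y], dim=1))   # x^{t-0.5}
            x = self.f_out(self.f_res(z))             # x^t
        return x
```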
Progressive Recurrent Network
We further introduce a recurrent layer into PRN, by which feature dependencies across stages can be propagated to facilitate rain streak removal, resulting in our progressive recurrent network (PReNet). The only difference between PReNet and PRN is the inclusion of the recurrent state $s^t$,
$$x^{t-0.5} = f_{in}(x^{t-1}, y), \quad s^t = f_{recurrent}(s^{t-1}, x^{t-0.5}), \quad x^t = f_{out}(f_{res}(s^t)), \quad (2)$$
where the recurrent layer $f_{recurrent}$ takes both $x^{t-0.5}$ and the recurrent state $s^{t-1}$ from stage $t-1$ as input. $f_{recurrent}$ can be implemented using either a convolutional Long Short-Term Memory (LSTM) [11,27] or a convolutional Gated Recurrent Unit (GRU) [2]. In PReNet, we choose LSTM due to its empirical superiority in image deraining.
The architecture of PReNet is shown in Fig. 2(b). By unfolding PReNet with T recursive stages, the deep representation that facilitates rain streak removal is propagated by the recurrent states. The deraining results at intermediate stages in Fig. 1 show that the heavy rain streak accumulation can be gradually removed stage-by-stage.
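A minimal sketch of how the recurrent state of Eq. (2) can be carried across stages with a convolutional LSTM is given below; this is a generic single LSTM cell, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """A standard convolutional LSTM cell with 32 input and 32 hidden channels."""
    def __init__(self, channels=32):
        super().__init__()
        # one convolution jointly produces the input, forget, output gates and the cell candidate
        self.gates = nn.Conv2d(2 * channels, 4 * channels, 3, padding=1)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, (h, c)

# Inside a PReNet forward pass, Eq. (2) then becomes roughly:
#   x = y; h = c = zeros
#   for t in range(T):
#       z = f_in(torch.cat([x, y], dim=1))   # x^{t-0.5}
#       s, (h, c) = lstm(z, (h, c))          # s^t propagates deep features across stages
#       x = f_out(f_res(s))                  # x^t
```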
Network Architectures
We hereby present the network architectures of PRN and PReNet. All the filters are of size 3×3 with padding 1×1. Generally, $f_{in}$ is a 1-layer convolution with ReLU nonlinearity [23], $f_{res}$ includes 5 ResBlocks, and $f_{out}$ is also a 1-layer convolution. Due to the concatenation of the 3-channel RGB $y$ and the 3-channel RGB $x^{t-1}$, the convolution in $f_{in}$ has 6 and 32 channels for input and output, respectively. $f_{out}$ takes the output of $f_{res}$ (or $f_{recurrent}$) with 32 channels as input and outputs a 3-channel RGB image for PRN (or PReNet). In $f_{recurrent}$, all the convolutions in the LSTM have 32 input channels and 32 output channels. $f_{res}$ is the key component to extract the deep representation for rain streak removal, and we provide two implementations, i.e., conventional ResBlocks shown in Fig. 3(a) and recursive ResBlocks shown in Fig. 3(b). Conventional ResBlocks: As shown in Fig. 3(a), we first implement $f_{res}$ with 5 ResBlocks in its conventional form, i.e., each ResBlock includes 2 convolution layers followed by ReLU [23]. All the convolution layers receive 32-channel features without downsampling or upsampling operations. Conventional ResBlocks are adopted in PRN and PReNet.
Recursive ResBlocks: Motivated by [15,28], we also implement $f_{res}$ by recursively unfolding one ResBlock 5 times, as shown in Fig. 3(b). Since network parameters mainly come from ResBlocks, the intra-stage recursive computation leads to a much smaller model size, resulting in PRN_r and PReNet_r. We evaluate the performance of recursive ResBlocks in Sec. 4.2, which indicates their elegant tradeoff between model size and deraining performance.
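The intra-stage recursion can be sketched by repeatedly applying one residual body with shared weights, so five "virtual" ResBlocks cost the parameters of a single one; channel width and unrolling depth below follow the numbers given above.

```python
import torch.nn as nn

class RecursiveResBlocks(nn.Module):
    """f_res built by unfolding ONE ResBlock several times (intra-stage recursion)."""
    def __init__(self, channels=32, unroll=5):
        super().__init__()
        self.unroll = unroll
        self.body = nn.Sequential(   # a single residual body whose weights are shared
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True))

    def forward(self, x):
        for _ in range(self.unroll):  # 5 recursive applications ~ 5 ResBlocks, 1/5 of the parameters
            x = x + self.body(x)
        return x
```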
Learning Objective
Recently, hybrid loss functions, e.g., MSE+SSIM [4], ℓ1+SSIM [7] and even adversarial loss [33], have been widely adopted for training deraining networks, but they considerably increase the burden of hyper-parameter tuning. In contrast, benefiting from the progressive network architecture, we empirically find that a single loss function, e.g., MSE loss or negative SSIM loss [29], is sufficient to train PRN and PReNet. For a model with T stages, we have T outputs, i.e., $x^1, x^2, \ldots, x^T$. By only imposing supervision on the final output $x^T$, the MSE loss is
$$L = \|x^T - x_{gt}\|^2, \quad (3)$$
and the negative SSIM loss is
$$L = -\mathrm{SSIM}(x^T, x_{gt}), \quad (4)$$
where $x_{gt}$ is the corresponding ground-truth clean image. It is worth noting that our empirical study shows that negative SSIM loss outperforms MSE loss in terms of both PSNR and SSIM. Moreover, recursive supervision can be imposed on each intermediate result,
$$L = -\sum_{t=1}^{T} \lambda_t\, \mathrm{SSIM}(x^t, x_{gt}), \quad (5)$$
where $\lambda_t$ is the tradeoff parameter for stage t. Experimental results in Sec. 4.1.1 show that recursive supervision cannot achieve any performance gain at t = T, but it can be adopted to generate visually satisfying results at early stages.
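The single-loss training described above can be sketched as follows; the `ssim` function is assumed to come from a differentiable SSIM implementation such as the pytorch_msssim package, and the λ values in the comment are the ones used later in the ablation study.

```python
from pytorch_msssim import ssim  # any differentiable SSIM implementation would do

def negative_ssim_loss(pred, gt):
    # Eq. (4): L = -SSIM(x^T, x_gt), assuming images scaled to [0, 1]
    return -ssim(pred, gt, data_range=1.0, size_average=True)

def recursive_ssim_loss(stage_preds, gt, lambdas):
    # Eq. (5): supervision on every intermediate output x^1, ..., x^T
    return -sum(l * ssim(p, gt, data_range=1.0, size_average=True)
                for p, l in zip(stage_preds, lambdas))

# Ablation setting used below for PReNet-RecSSIM (T = 6):
# lambdas = [0.5] * 5 + [1.5]
```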
Experimental Results
In this section, we first conduct ablation studies to verify the main components of our methods, then quantitatively and qualitatively evaluate progressive networks, and finally assess PReNet on real rainy images and video. All the source code, pre-trained models and results can be found at https://github.com/csdwren/PReNet.
Our progressive networks are implemented using PyTorch [24], and are trained on a PC equipped with two NVIDIA GTX 1080Ti GPUs. In our experiments, all the progressive networks share the same training setting. The patch size is 100 × 100, and the batch size is 18. The ADAM [17] algorithm is adopted to train the models with an initial learning rate of 1 × 10^{-3}, and training ends after 100 epochs. When reaching 30, 50 and 80 epochs, the learning rate is decayed by multiplying it by 0.2.
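The training schedule just described maps directly onto a standard PyTorch loop; `model`, `loader` and `negative_ssim_loss` are assumed to be defined as in the earlier sketches.

```python
import torch

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[30, 50, 80], gamma=0.2)

for epoch in range(100):
    for rainy, clean in loader:        # 100x100 patches, batch size 18
        optimizer.zero_grad()
        loss = negative_ssim_loss(model(rainy), clean)
        loss.backward()
        optimizer.step()
    scheduler.step()                   # decay the learning rate by 0.2 at epochs 30, 50 and 80
```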
Ablation Studies
All the ablation studies are conducted on a heavy rainy dataset [30] with 1,800 rainy images for training and 100 rainy images (Rain100H) for testing. However, we find that 546 rainy images from the 1,800 training samples have the same background contents as the testing images. Therefore, we exclude these 546 images from the training set, and train all our models on the remaining 1,254 training images.
Loss Functions
Using PReNet (T = 6) as an example, we discuss the effect of loss functions on deraining performance, including MSE loss, negative SSIM loss, and recursive supervision loss.
Negative SSIM vs. MSE. We train two PReNet models by minimizing the MSE loss (PReNet-MSE) and the negative SSIM loss (PReNet-SSIM), and Table 1 lists their PSNR and SSIM values on Rain100H. Unsurprisingly, PReNet-SSIM outperforms PReNet-MSE in terms of SSIM. We also note that PReNet-SSIM even achieves higher PSNR, which is partially attributable to the fact that PReNet-MSE may be inclined to get trapped in a poor solution. As shown in Fig. 4, the deraining result by PReNet-SSIM is also visually more plausible than that by PReNet-MSE. Therefore, the negative SSIM loss is adopted as the default in the following experiments. Recursive SSIM supervision. We also train a PReNet model (PReNet-RecSSIM) with the recursive supervision loss in Eq. (5). For PReNet-RecSSIM, we set λ_t = 0.5 (t = 1, 2, ..., 5) and λ_6 = 1.5, where the tradeoff parameter for the final stage is larger than the others. From Table 1, PReNet-RecSSIM performs moderately inferior to PReNet-SSIM. As shown in Fig. 4, the intermediate results of PReNet-RecSSIM at early stages are already visually plausible, making it appealing for computing-resource-constrained environments by stopping the inference at any stage t.
Network Architecture
In this subsection, we assess the effect of several key modules of progressive networks, including recurrent layer, multi-stage recursion, and intra-stage recursion.
Recurrent Layer. Using PReNet (T = 6), we test two types of recurrent layers, i.e., LSTM (PReNet-LSTM) and GRU (PReNet-GRU). It can be seen from Table 3 that LSTM performs slightly better than GRU in terms of quantitative metrics, and thus is adopted as the default implementation of recurrent layer in our experiments. We further compare progressive networks with and without recurrent layer, i.e., PReNet and PRN, in Table 4, and obviously the introduction of recurrent layer does benefit the deraining performance in terms of PSNR and SSIM.
Intra-stage Recursion. From Table 4, intra-stage recursion, i.e., recursive ResBlocks, is introduced to significantly reduce the number of parameters of progressive networks, resulting in PRN r and PReNet r . As for deraining performance, it is reasonable to see that PRN and PReNet respectively achieve higher average PSNR and SSIM values than PRN r and PReNet r . But in terms of visual quality, PRN r and PReNet r are comparable with PRN and PReNet, as shown in Fig. 6.
Recursive Stage Number T. Table 2 lists the PSNR and SSIM values of PReNet models with stage numbers T = 2, 3, 4, 5, 6 and 7. One can see that PReNet with more stages (from 2 stages to 6 stages) usually leads to higher average PSNR and SSIM values. However, a larger T also makes PReNet more difficult to train. When T = 7, PReNet_7 performs slightly worse than PReNet_6. Thus, we set T = 6 in the following experiments.
Effect of Network Input/Output
Network Input. We also test a variant of PReNet by only taking $x^{t-1}$ at each stage as input to $f_{in}$ (i.e., PReNet_x), where such a strategy has been adopted in [20,30]. From Table 3, PReNet_x is obviously inferior to PReNet in terms of both PSNR and SSIM, indicating the benefit of receiving y at each stage.
Network Output. We consider two types of network outputs by incorporating the residual learning formulation (i.e., PReNet in Table 3) or not (i.e., PReNet-LSTM in Table 3). From Table 3, residual learning can make a further contribution to the performance gain. It is worth noting that, benefiting from progressive networks, it is feasible to learn PReNet for directly predicting the clean background from the rainy image, and even PReNet-LSTM can achieve appealing deraining performance.
Evaluation on Synthetic Datasets
Our progressive networks are evaluated on three synthetic datasets, i.e., Rain100H [30], Rain100L [30] and Rain12 [21]. Five competing methods are considered, including one traditional optimization-based method (GMM [21]) and three state-of-the-art deep CNN-based models, i.e., DDN [6], JORDER [30] and RESCAN [20], and one lightweight network (RGN [4]). For heavy rainy images (Rain100H) and light rainy images (Rain100L), the models are respectively trained, and the models for light rain are directly applied on Rain12. Since the source codes of RGN are not available, we adopt the numerical results reported in [4]. As for JORDER, we directly compute average PSNR and SSIM on deraining results provided by the authors. We re-train RESCAN [20] for Rain100H with the default settings. Besides, we have tried to train RESCAN for light rainy images, but the results are much inferior to the others. So its results on Rain100L and Rain12 are not reported in our experiments.
Our PReNet achieves significant PSNR and SSIM gains over all the competing methods. We also note that for Rain100H, although RESCAN [20] is re-trained on the full 1,800 rainy images, the performance gain by our PReNet trained on the stricter set of 1,254 rainy images is still notable. Moreover, even PReNet_r can perform better than all the competing methods. From Fig. 7, visible dark noises along rain directions can still be observed in the results of the other methods. In comparison, the results by PRN and PReNet are visually more pleasing.
We further evaluate the progressive networks on another dataset [6] which includes 12,600 rainy images for training and 1,400 rainy images for testing (Rain1400). From Table 6, all four versions of the progressive networks outperform DDN in terms of PSNR and SSIM. As shown in Fig. 8, the visual quality improvement by our methods is also significant, while the result by DDN still contains visible rain streaks. Table 7 lists the running time of different methods based on a computer equipped with an NVIDIA GTX 1080Ti GPU. We only give the running time of DDN [6], JORDER [30] and RESCAN [20], because the codes of the other competing methods are not available. We note that the running time of DDN [6] takes the separation of the detail layer into account. Unsurprisingly, PRN and PReNet are much more efficient due to their simple network architecture.
Evaluation on Real Rainy Images
Using two real rainy images in Fig. 9, we compare PReNet with two state-of-the-art deep methods, i.e., JORDER [30] and DDN [6]. It can be seen that PReNet and JORDER perform better than DDN in removing rain streaks. For the first image, rain streaks remain visible in the result by DDN, while PReNet and JORDER can generate satisfying deraining results. For the second image, there are more or less rain streaks in the results by DDN and JORDER, while the result by PReNet is more clear.
Evaluation on Real Rainy Videos
Finally, PReNet is adopted to process a rainy video in a frame-by-frame manner, and is compared with the state-of-the-art video deraining method FastDerain [12]. As shown in Fig. 10, for frame #510, both FastDerain and our PReNet can remove all the rain streaks, indicating the performance of PReNet even without the help of temporal consistency. However, FastDerain fails on switching frames, since it is developed by exploiting the consistency of adjacent frames. As a result, for frames #571, #572 and #640, rain streaks remain in the results of FastDerain, while our PReNet performs favorably and is not affected by switching frames and accumulation error.
Conclusion
In this paper, a better and simpler baseline network is presented for single image deraining. Instead of deeper and complex networks, we find that the simple combination of ResNet and multi-stage recursion, i.e., PRN, can result in favorable performance. Moreover, the deraining performance can be further boosted by the inclusion of a recurrent layer, and the stage-wise result is also taken as input to each ResNet, resulting in our PReNet model. Furthermore, the network parameters can be reduced by incorporating inter- and intra-stage recursive computation (PRN_r and PReNet_r). And our progressive deraining networks can be readily trained with a single negative SSIM or MSE loss. Extensive experiments validate the superiority of our PReNet and PReNet_r on synthetic and real rainy images in comparison to state-of-the-art deraining methods. Taking their simplicity, effectiveness and efficiency into account, it is also appealing to exploit our models as baselines when developing new deraining networks.
(Table 5 caption: Average PSNR and SSIM comparison on the synthetic datasets Rain100H [30], Rain100L [30] and Rain12 [21]. Red, blue and cyan colors indicate the top 1st, 2nd and 3rd rank, respectively; one marker means the metrics are copied from [4], • means the metrics are directly computed on the deraining images provided by the authors of [30], and a further marker denotes that the method is re-trained with its default settings, i.e., all the 1800 training samples for Rain100H.) | 4,123
1907.09495 | 2964040729 | Deep learning models have achieved huge success in numerous fields, such as computer vision and natural language processing. However, unlike such fields, it is hard to apply traditional deep learning models on the graph data due to the node-orderless' property. Normally, we use an adjacent matrix to represent a graph, but an artificial and random node-order will be cast on the graphs, which renders the performance of deep models extremely erratic and not robust. In order to eliminate the unnecessary node-order constraint, in this paper, we propose a novel model named Isomorphic Neural Network (IsoNN), which learns the graph representation by extracting its isomorphic features via the graph matching between input graph and templates. IsoNN has two main components: graph isomorphic feature extraction component and classification component. The graph isomorphic feature extraction component utilizes a set of subgraph templates as the kernel variables to learn the possible subgraph patterns existing in the input graph and then computes the isomorphic features. A set of permutation matrices is used in the component to break the node-order brought by the matrix representation. To further lower down the computational cost and identify the optimal subgraph patterns, IsoNN adopts two min-pooling layers to find the optimal matching. The first min-pooling layer aims at finding the best permutation matrix, whereas the second one is used to determine the best templates for the input graph data. Three fully-connected layers are used as the classification component in IsoNN. Extensive experiments are conducted on real-world datasets, and the experimental results demonstrate both the effectiveness and efficiency of IsoNN. | Graph classification is an important problem with many practical applications. Data like social networks, chemical compounds, brain networks can be represented as graphs naturally and they can have applications such as community detection @cite_31 , anti-cancer activity identification @cite_30 @cite_2 and Alzheimer's patients diagnosis @cite_32 @cite_33 respectively. Traditionally, researchers mine the subgraphs by DFS or BFS @cite_16 @cite_24 , and use them as the features. With the rapid development of deep learning (DL), many works are done based on DL methods. GAM builds the model by RNN with self-attention mechanism @cite_5 . DCNN extend CNN to general graph-structured data by introducing a ‘diffusion-convolution’ operation @cite_27 . | {
"abstract": [
"Mining discriminative features for graph data has attracted much attention in recent years due to its important role in constructing graph classifiers, generating graph indices, etc. Most measurement of interestingness of discriminative subgraph features are defined on certain graphs, where the structure of graph objects are certain, and the binary edges within each graph represent the “presence” of linkages among the nodes. In many real-world applications, however, the linkage structure of the graphs is inherently uncertain. Therefore, existing measurements of interestingness based upon certain graphs are unable to capture the structural uncertainty in these applications effectively. In this paper, we study the problem of discriminative subgraph feature selection from uncertain graphs. This problem is challenging and different from conventional subgraph mining problems because both the structure of the graph objects and the discrimination score of each subgraph feature are uncertain. To address these challenges, we propose a novel discriminative subgraph feature selection method, Dug, which can find discriminative subgraph features in uncertain graphs based upon different statistical measures including expectation, median, mode and φ-probability. We first compute the probability distribution of the discrimination scores for each subgraph feature based on dynamic programming. Then a branch-and-bound algorithm is proposed to search for discriminative subgraphs efficiently. Extensive experiments on various neuroimaging applications (i.e., Alzheimers Disease, ADHD and HIV) have been performed to analyze the gain in performance by taking into account structural uncertainties in identifying discriminative subgraph features for graph classification.",
"",
"Recent studies have demonstrated that biomarkers from multiple modalities contain complementary information for the diagnosis of Alzheimer's disease AD and its prodromal stage mild cognitive impairment MCI. In order to fuse data from multiple modalities, most previous approaches calculate a mixed kernel or a similarity matrix by linearly combining kernels or similarities from multiple modalities. However, the complementary information from multi-modal data are not necessarily linearly related. In addition, this linear combination is also sensitive to the weights assigned to each modality. In this paper, we propose a nonlinear graph fusion method to efficiently exploit the complementarity in the multi-modal data for the classification of AD. Specifically, a graph is first constructed for each modality individually. Afterwards, a single unified graph is obtained via a nonlinear combination of the graphs in an iterative cross diffusion process. Using the unified graphs, we achieved classification accuracies of 91.8 between AD subjects and normal controls NC, 79.5 between MCI subjects and NC and 60.2 in a three-way classification, which are competitive with state-of-the-art results.",
"",
"We present diffusion-convolutional neural networks (DCNNs), a new model for graph-structured data. Through the introduction of a diffusion-convolution operation, we show how diffusion-based representations can be learned from graph-structured data and used as an effective basis for node classification. DCNNs have several attractive qualities, including a latent representation for graphical data that is invariant under isomorphism, as well as polynomial-time prediction and learning that can be represented as tensor operations and efficiently implemented on a GPU. Through several experiments with real structured datasets, we demonstrate that DCNNs are able to outperform probabilistic relational models and kernel-on-graph methods at relational node classification tasks.",
"",
"Graph classification is a problem with practical applications in many different domains. To solve this problem, one usually calculates certain graph statistics (i.e., graph features) that help discriminate between graphs of different classes. When calculating such features, most existing approaches process the entire graph. In a graphlet-based approach, for instance, the entire graph is processed to get the total count of different graphlets or subgraphs. In many real-world applications, however, graphs can be noisy with discriminative patterns confined to certain regions in the graph only. In this work, we study the problem of attention-based graph classification. The use of attention allows us to focus on small but informative parts of the graph, avoiding noise in the rest of the graph. We present a novel RNN model, called the Graph Attention Model (GAM), that processes only a portion of the graph by adaptively selecting a sequence of \"informative\" nodes. Experimental results on multiple real-world datasets show that the proposed method is competitive against various well-known methods in graph classification even though our method is limited to only a portion of the graph.",
"",
"Graph mining methods enumerate frequently appearing subgraph patterns, which can be used as features for subsequent classification or regression. However, frequent patterns are not necessarily informative for the given learning problem. We propose a mathematical programming boosting method (gBoost) that progressively collects informative patterns. Compared to AdaBoost, gBoost can build the prediction rule with fewer iterations. To apply the boosting method to graph data, a branch-and-bound pattern search algorithm is developed based on the DFS code tree. The constructed search space is reused in later iterations to minimize the computation time. Our method can learn more efficiently than the simpler method based on frequent substructure mining, because the output labels are used as an extra information source for pruning the search space. Furthermore, by engineering the mathematical program, a wide range of machine learning problems can be solved without modifying the pattern search algorithm."
],
"cite_N": [
"@cite_30",
"@cite_33",
"@cite_32",
"@cite_24",
"@cite_27",
"@cite_2",
"@cite_5",
"@cite_31",
"@cite_16"
],
"mid": [
"2962995121",
"",
"2273249530",
"",
"2963984147",
"",
"2809343047",
"2788919350",
"2148611932"
]
} | 0 |
||
1907.09245 | 2963324243 | Recognition of objects with subtle differences has been used in many practical applications, such as car model recognition and maritime vessel identification. For discrimination of the objects in fine-grained detail, we focus on deep embedding learning by using a multi-task learning framework, in which the hierarchical labels (coarse and fine labels) of the samples are utilized both for classification and a quadruplet-based loss function. In order to improve the recognition strength of the learned features, we present a novel feature selection method specifically designed for four training samples of a quadruplet. By experiments, it is observed that the selection of very hard negative samples with relatively easy positive ones from the same coarse and fine classes significantly increases some performance metrics in a fine-grained dataset when compared to selecting the quadruplet samples randomly. The feature embedding learned by the proposed method achieves favorable performance against its state-of-the-art counterparts. | Earlier works on metric learning are based on @cite_19 . In that study, two identical neural networks extract the features of two arbitrary images. Next, these features are compared by a metric which is based on a radial function (the distance between any two members in the feature space is defined as the cosine of the angle between them @cite_19 ). While their loss function forces the samples in the same class to be closer to each other in the sense of the selected distance function, the samples in the different classes are forced to be mapped far from each other. The cost function of such a network is given below @cite_2 where @math represents the operation of @math , and @math are distances in between samples. | {
"abstract": [
"This paper describes an algorithm for verification of signatures written on a pen-input tablet. The algorithm is based on a novel, artificial neural network, called a \"Siamese\" neural network. This network consists of two identical sub-networks joined at their outputs. During training the two sub-networks extract features from two signatures, while the joining neuron measures the distance between the two feature vectors. Verification consists of comparing an extracted feature vector with a stored feature vector for the signer. Signatures closer to this stored representation than a chosen threshold are accepted, all other signatures are rejected as forgeries.",
"Dimensionality reduction involves mapping a set of high dimensional input points onto a low dimensional manifold so that 'similar\" points in input space are mapped to nearby points on the manifold. We present a method - called Dimensionality Reduction by Learning an Invariant Mapping (DrLIM) - for learning a globally coherent nonlinear function that maps the data evenly to the output manifold. The learning relies solely on neighborhood relationships and does not require any distancemeasure in the input space. The method can learn mappings that are invariant to certain transformations of the inputs, as is demonstrated with a number of experiments. Comparisons are made to other techniques, in particular LLE."
],
"cite_N": [
"@cite_19",
"@cite_2"
],
"mid": [
"2127589108",
"2138621090"
]
} | QUADRUPLET SELECTION METHODS FOR DEEP EMBEDDING LEARNING | Recently, embedding learning has become one of the most popular issues in machine learning [1,2,22]. Proper mapping from the raw data to a feature space is commonly utilized for image retrieval [4] and duplicate detection [5], which are used in many applications such as online image search.
For training a model that can extract proper features, the distance between two samples of a dataset in the feature space should be considered. († This work was done when Erhan Gundogdu was with Middle East Technical University.) Moreover, some embedding learning methods are employed to increase the classification accuracy, e.g., fine-grained object recognition [6] by using deep convolutional neural network (CNN) models which require a significant amount of training samples. Fortunately, there are datasets for various purposes such as car model recognition [7] and maritime vessel classification and identification [8]. Some of these datasets can be used for classifying land, marine, and air vehicles in a real-world scenario. Concretely, car model recognition can be employed in the context of visual surveillance and security for land traffic control [6], and marine vessel recognition is used for the purpose of coastal surveillance [9] [10]. In this work, we focus on the feature learning problem specifically designed for car model recognition.
Recently developed studies on feature learning focus on extracting features from raw data such that the samples belonging to different classes are well-separated and the ones from the same classes are close to each other in the feature space. The state-of-the-art network architectures such as VGG [11] and GoogLeNet [12] are frequently used for extracting features from images by several different training processes. In the early years, pairwise similarity is used for signature verification with contrastive loss [13]. Since consideration of the whole pairs or triplet samples in a dataset is not computationally tractable, carefully designed mining techniques are proposed, such as hard positive [14] and negative [15] mining.
In the previous methods that employ a hard mining step during training, at each iteration of the optimization, they focus on the separation of samples in the feature space in a selected batch from the dataset. Therefore, the distance relations among the samples in a dataset are not fully exploited. Moreover, the classification loss function for the fine-grained labels is not considered in the training phase. On the other hand, our proposed method for the quadruplet sample selection enables to convey more information from the utilized dataset by considering the globally hard negatives and relatively easy positives in the distance loss terms and the auxiliary classification layers.
The contributions of this work are summarized as follows: (1) In order to improve embedding learning, we have proposed two novel quadruplet selection methods where the globally hardest negative and moderately easy positive samples are selected. (2) Our framework contains a CNN trained with the combination of the classification and distance losses. These losses are designed to exploit the hierarchical labels of the training samples. (3) To test the proposed method, we have conducted experiments on the Stanford Cars 196 dataset [7] and observed that the recognition accuracy of the unobserved classes has been improved with respect to the random selection of samples in the quadruplets while outperforming the state-of-the-art feature learning methods.
PROPOSED METHOD
Each quadruplet sample is represented as $Q_i = \{X^R_i, X^{P+}_i, X^{P-}_i, X^N_i\}$, where $X_i = (x_i, y_{i1}, y_{i2})$. Here, $x_i \in \mathbb{R}^n$ represents the vector of the pixels of an image ($n$ is the number of pixels in the image), and $y_{i1} \in C_1$ and $y_{i2} \in C_2$ represent the coarse and fine classes, respectively, where $C_1 = \{c^i_1\}_{i=1}^{k_1}$ ($k_1$ is the number of coarse classes) and, similarly, $C_2 = \{c^i_2\}_{i=1}^{k_2}$. Let the weights of a CNN be $\theta \in \mathbb{R}^m$, where $m$ is the number of weights; then the network can be defined as $f_\theta(x_i): \mathbb{R}^m \times \mathbb{R}^n \to \mathbb{R}^k$, where $k$ is the dimension of the feature space.
Our proposed cost function consists of two parts: the classification (Section 3.1) and distance (Section 3.2) cost functions. The aim of these cost functions is to form the feature space so that the fine classes are well-separated. However, the learning process highly depends on the selection of the quadruplets. (Footnote: in (3), $\sigma^2_{P+/-} = \mathrm{var}\{D_{R,P+/-}\}$, $\sigma^2_N = \mathrm{var}\{D_{R,N}\}$, and $\mu_{P+/-} = E\{D_{R,P+/-}\}$, $\mu_N = E\{D_{R,N}\}$, as defined in [20].) The learning process takes more time when the quadruplets are selected with an erroneous strategy. We propose to select the members of the quadruplets from the most informative region in the feature space in Section 3.3. As validated by the experiments (Section 4), the proposed method increases the separation performance significantly, as can be observed from both the Recall@K and Normalized Mutual Information (NMI) values in Table 1.
Classification Cost Function
In order to increase the discriminativeness of the features for the available class labels, a softmax loss is employed. Contrary to the traditional one, the proposed neural network has two outputs which are dedicated to the fine and coarse classes. Let $s_\theta = [g_\theta, h_\theta]$, where $g_\theta$ denotes the output for the coarse class, whereas $h_\theta$ is for the fine class. Then, the proposed cost function is obtained:
$$L_{C_1,C_2}(x) = -\lambda_{c_1} \sum_{i=1}^{k_1} p(c^i_1)\log\frac{e^{h^x_\theta(c^i_1)}}{\sum_{j=1}^{k_1} e^{h^x_\theta(c^j_1)}} - \lambda_{c_2} \sum_{i=1}^{k_2} p(c^i_2)\log\frac{e^{g^x_\theta(c^i_2)}}{\sum_{j=1}^{k_2} e^{g^x_\theta(c^j_2)}}. \quad (4)$$
$C_1$ and $C_2$ specify the coarse and fine classes, respectively. $p(c^i_1)$ is the probability that the vector $x$ belongs to the $i$-th coarse class. If $x \in c^j_1$, then by using a hard decision, $p(c^i_1) = \delta_{ij}$, where $\delta_{ij}$ is the Kronecker delta function. Similarly, $p(c^i_2)$ is also calculated for $C_2$. $h^x_\theta(c^i_1)$ represents the $i$-th element of the $h^x_\theta$ vector, where $h^x_\theta$ is the score vector for the coarse classes ($C_1$). Likewise, $g^x_\theta$ is the one for the fine classes ($C_2$). $\lambda_{c_1}$ and $\lambda_{c_2}$ are the weights of the fine and coarse classification terms of the cost function.
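A minimal sketch of Eq. (4) with PyTorch's cross-entropy (which already combines the log-softmax with the hard one-hot assignment p(c)); the pairing of the weights with the coarse/fine heads follows the equation as written above, and the default values are the ones reported in the experiments section.

```python
import torch.nn.functional as F

def hierarchical_classification_loss(coarse_logits, fine_logits,
                                     coarse_labels, fine_labels,
                                     lambda_c1=0.08, lambda_c2=0.25):
    """Weighted sum of two softmax cross-entropy terms, one per label level (Eq. (4))."""
    return (lambda_c1 * F.cross_entropy(coarse_logits, coarse_labels)
            + lambda_c2 * F.cross_entropy(fine_logits, fine_labels))
```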
Distance Cost Function
The distances between the samples in the feature space are commonly defined by a radial function [17]. For this reason, the representations which will be learned by our proposed framework are $m$-dimensional feature vectors. The distance between any two members can be defined by the $\ell_2$ norm. Hence, we can clearly formulate our goal by the inequality $D_{R,P+} < D_{R,P-} < D_{R,N}$. The first part can be rewritten as $D_{R,P+} + m_1 < D_{R,P-}$, and the second part would be $D_{R,P-} + m_2 < D_{R,N}$, where $m_1$ and $m_2$ are the margins, which should be positive numbers. Moreover, we emphasize the discrimination of the coarse classes by using the condition $m_1 > m_2 > 0$. Then, the new cost function can be proposed as:
$$L_{joint}(x_R, x_{P+}, x_{P-}, x_N) = \left[1 - \frac{D_{R,P-}}{D_{R,P+} + m_1 - m_2}\right]_+ + \left[1 - \frac{D_{R,N}}{D_{R,P-} + m_2}\right]_+ + L_{C_1,C_2}(x_R). \quad (5)$$
Finally, the overall proposed network is shown in Fig. 1 with the loss function given in (6). This loss function, which is the combination of (5) and (3), considers the distances of the samples in the feature space using $L_{joint}$, while $L_{global}$ regularizes the statistics of the distances batch-wise.
(Fig. 1 caption: The proposed framework is similar to the model used in [9]. The dimension of the last fully connected (FC) layer is 1024. Note that all the weights in the network are shared, including the weights in the FC layers.)
$$L_{comb}(Q) = \sum_{\forall i} L_{joint}(Q_i) + \eta L_{global}(Q). \quad (6)$$
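A sketch of the distance part of the objective, assuming the ratio-hinge reading of Eq. (5) above and using relu for the [.]_+ operation; the classification term of Eq. (4) and the batch-wise global regularizer L_global of Eq. (6) would be added on top of this.

```python
import torch
import torch.nn.functional as F

def joint_quadruplet_distance_loss(f_r, f_pp, f_pn, f_n, m1=0.7, m2=0.3):
    """Distance terms of Eq. (5) for one quadruplet of embeddings:
    reference, positive-positive, positive-negative and negative samples."""
    d_rpp = torch.norm(f_r - f_pp, p=2)   # D_{R,P+}
    d_rpn = torch.norm(f_r - f_pn, p=2)   # D_{R,P-}
    d_rn  = torch.norm(f_r - f_n,  p=2)   # D_{R,N}
    return (F.relu(1 - d_rpn / (d_rpp + m1 - m2))
            + F.relu(1 - d_rn / (d_rpn + m2)))
```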
Quadruplet Selection
In the previous section, we have briefly summarized our novel loss function. As mentioned before, selecting the quadruplet samples randomly makes it difficult to exploit the most informative training examples. Instead of attempting to cover all the quadruplet combinations in the training set, we propose two novel selection strategies. First, a reference sample is randomly selected with equal probability from the training set (let the reference sample be $X_R$, where $C^R_1$ and $C^R_2$ are the coarse and fine labels of the reference sample, respectively). The negative sample is selected from the set of samples belonging to a different coarse class. The critical point is that, like hard negative mining in [15], we should select the closest negative sample to $X_R$, i.e., $X_N := \arg\min_{X_N \notin C^R_1} \|f_\theta(x_R) - f_\theta(x_N)\|_2$. At this point, we propose two different methods for the selection of $X_{P+}$ and $X_{P-}$. The experimental comparison of these two methods is given in Section 4.
Method 1
For determining X P + , we select the sample whose fine class is the same as the fine class of X R , and which is closest to X N . At this point, the constraint for selection of X P + is as follows: the distance between X P + and X R is greater than the distance between X R and X N (D R,P + > D R,N ). Similarly, we select X P − whose coarse class is the same as the coarse class of X R , which is the closest sample to X N , and also satisfying D R,P − > D R,N . This method is visualized in Figure 2.
Method 2
In the second method, after selecting $X_N$, the distance between $X_R$ and $X_N$ ($D_{R,N}$) determines a hyper-sphere which takes $X_R$ as its center. After selecting the labels of $X_{P+}$ and $X_{P-}$ according to the constraints in Section 2, $X_{P+}$ and $X_{P-}$ are selected from the predetermined classes such that they are the closest points to $X_R$ but outside the region enclosed by this hyper-sphere. If there are no samples which are both close to $X_R$ and outside of the hyper-sphere, then the sample inside the hyper-sphere that is furthest from $X_R$ is selected. This selection method is illustrated in Figure 2. (Fig. 2 caption: After $X_R$ is selected, the nearest sample belonging to a different coarse class is selected as $X_N$. $X_{P+}$ and $X_{P-}$ are then selected as in Method 1 (left) and Method 2 (right).)
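An illustrative, unvectorized sketch of the Method 2 selection for a single reference index follows; the helper names are ours, and Method 1 differs mainly in picking the positives closest to X_N (subject to being farther from X_R than X_N is) instead of closest to X_R outside the hyper-sphere.

```python
import torch

def select_quadruplet_method2(feats, coarse, fine, r):
    """feats: (N, d) embeddings; coarse, fine: (N,) integer labels; r: reference index.
    Returns the indices (R, P+, P-, N) following the Method 2 rules."""
    d = torch.norm(feats - feats[r], dim=1)
    d[r] = float('inf')                                    # never pick the reference itself
    inf = torch.full_like(d, float('inf'))

    # hardest negative: closest sample from a different coarse class
    n = torch.argmin(torch.where(coarse != coarse[r], d, inf))
    radius = d[n]                                          # D_{R,N} defines the hyper-sphere

    def pick(mask):
        outside = torch.where(mask & (d > radius), d, inf)
        if torch.isfinite(outside).any():
            return torch.argmin(outside)                   # closest point beyond the hyper-sphere
        inside = torch.where(mask & (d <= radius), d, -inf)
        return torch.argmax(inside)                        # otherwise the farthest point inside it

    pp = pick(fine == fine[r])                             # X_{P+}: same fine class
    pn = pick((coarse == coarse[r]) & (fine != fine[r]))   # X_{P-}: same coarse, different fine
    return r, pp.item(), pn.item(), n.item()
```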
RESULTS
We compare the performance of our proposed method against the state-of-the-art feature learning approaches in [18,21,4,22,20] by using the same evaluation methods. In addition, randomly selected quadruplets are utilized as in [9]. The Stanford Cars 196 dataset [7] is used in the experiments. To implement the proposed methods, a hierarchical structure is required for all the samples in the dataset, where each sample originally has only one label. For this purpose, we should add the high-level classes (coarse labels) to the dataset. In other words, the 196 classes, which are originally in the dataset, are taken as the fine classes, and 22 coarse classes are added using the types of the cars, similar to the study in [6].
The important point in the generation of the training and test sets is that they should not share any fine class labels. With this restriction, we want to measure the adequacy of our neural network to separate the classes that have not been seen before. The most common performance analysis methods for zero-shot learning are Recall@K and NMI. Recall@K specifies whether the samples belonging to the same fine class are close to each other, and NMI is a measure of clustering quantity as mentioned in [22].
For this purpose, the first 98 fine classes of the dataset are selected as the training set, and the rest are used only as the test set, similar to the study in [1]. In our experimental setup, the pre-trained ResNet101 model [23] (trained on the ImageNet dataset [24]) is employed as our CNN model to extract the features. The experiments are performed on the PyTorch platform [25]. In addition, the hyper-parameters of the cost function are selected as 0.08 for $\lambda_{c_1}$, 0.25 for $\lambda_{c_2}$, and 1 for $\lambda_{g_1}$, $\lambda_{g_2}$ and $\eta$. The margins are 0.7 for $m_1$ and $t_1$, and 0.3 for $m_2$ and $t_2$. The learning parameters are as follows: the learning rate is 0.0003, the momentum is 0.9, and the stochastic gradient descent algorithm is used for optimization. The results can be examined in Table 1. Our proposed quadruplet-based learning framework has improved the precision in terms of Recall@K even when the quadruplets are selected randomly. According to the Recall@K metric, the random quadruplet selection method outperforms the previous studies in [18,21,4,22], and it is comparable to the study in [20]. On top of that, when the proposed selection methods are used, even higher levels of accuracy can be obtained. As demonstrated in Table 1, Method 1 results in 64.85% Recall@1 accuracy, which is an improvement of at least 3.4% compared to the other studies, while Method 2 results in 66.06% Recall@1 accuracy, corresponding to a 4.5% increase.
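The backbone and optimizer settings listed above correspond to a setup like the following; how the 1024-dimensional embedding layer and the two classification heads are attached is our assumption, not a detail given in the text.

```python
import torch
import torchvision

backbone = torchvision.models.resnet101(pretrained=True)           # ImageNet-pretrained ResNet101
backbone.fc = torch.nn.Linear(backbone.fc.in_features, 1024)        # hypothetical 1024-d embedding head
optimizer = torch.optim.SGD(backbone.parameters(), lr=3e-4, momentum=0.9)
```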
CONCLUSION
We have demonstrated that the proposed selection methods significantly increase the separation ability of a model in terms of recall performance. Unlike previous studies that consider only the distances between $X_R$-$X_{P+/-}$ and $X_R$-$X_N$, the proposed methods also consider the distances between $X_N$-$X_{P+/-}$ in the feature space. This consideration helps us improve the model and achieve better accuracy. These two proposed selection methods allow the loss function not only to enlarge the margins between samples of different classes but also to create tight clusters for the classes. Moreover, these two proposed methods have the advantage that they pay attention to the samples in the region around the critical hyper-sphere. In particular, the second method attacks an easier problem: while the first method can reshape only a particular region in the feature space, the second one can use the whole region on the surface of a hyper-sphere. Therefore, the feature space is manipulated through a better optimization procedure.
1907.09245 | 2963324243 | Recognition of objects with subtle differences has been used in many practical applications, such as car model recognition and maritime vessel identification. For discrimination of the objects in fine-grained detail, we focus on deep embedding learning by using a multi-task learning framework, in which the hierarchical labels (coarse and fine labels) of the samples are utilized both for classification and a quadruplet-based loss function. In order to improve the recognition strength of the learned features, we present a novel feature selection method specifically designed for four training samples of a quadruplet. By experiments, it is observed that the selection of very hard negative samples with relatively easy positive ones from the same coarse and fine classes significantly increases some performance metrics in a fine-grained dataset when compared to selecting the quadruplet samples randomly. The feature embedding learned by the proposed method achieves favorable performance against its state-of-the-art counterparts. | Another approach is to utilize the hierarchical class labels of the training samples @cite_15 . In that method, samples with similar fine labels have the same coarse label, i.e. a sample has more than one label. The cost function is modified by considering both the coarse and fine labels. For this purpose, each quadruplet sample is constructed as follows: (1) Reference sample (anchor sample), @math , (2) Positive positive sample, @math , (3) Positive negative sample, @math , (4) Negative sample, @math . Similar to the triplet selection, the quadruplets are selected such that three constraints should be taken into account. First, both the coarse and fine classes of @math and @math should be the same. Second, although the coarse class of @math is the same as the coarse class of @math , the fine classes are different. Finally, the coarse class of @math and @math should be different. | {
"abstract": [
"Recent algorithms in convolutional neural networks (CNN) considerably advance the fine-grained image classification, which aims to differentiate subtle differences among subordinate classes. However, previous studies have rarely focused on learning a fined-grained and structured feature representation that is able to locate similar images at different levels of relevance, e.g., discovering cars from the same make or the same model, both of which require high precision. In this paper, we propose two main contributions to tackle this problem. 1) A multitask learning framework is designed to effectively learn fine-grained feature representations by jointly optimizing both classification and similarity constraints. 2) To model the multi-level relevance, label structures such as hierarchy or shared attributes are seamlessly embedded into the framework by generalizing the triplet loss. Extensive and thorough experiments have been conducted on three finegrained datasets, i.e., the Stanford car, the Car-333, and the food datasets, which contain either hierarchical labels or shared attributes. Our proposed method has achieved very competitive performance, i.e., among state-of-the-art classification accuracy when not using parts. More importantly, it significantly outperforms previous fine-grained feature representations for image retrieval at different levels of relevance."
],
"cite_N": [
"@cite_15"
],
"mid": [
"2964189431"
]
} | QUADRUPLET SELECTION METHODS FOR DEEP EMBEDDING LEARNING | Recently, embedding learning has become one of the most popular issues in machine learning [1,2,22]. Proper mapping from the raw data to a feature space is commonly utilized for image retrieval [4] and duplicate detection [5], which are used in many applications such as online image search.
For training a model that can extract proper features, the distance between two samples of a dataset in the feature space should be considered. Moreover, some embedding learning methods are employed to increase the classification accuracy, e.g., fine-grained object recognition [6], by using deep convolutional neural network (CNN) models, which require a significant amount of training samples. Fortunately, there are datasets for various purposes such as car model recognition [7] and maritime vessel classification and identification [8]. Some of these datasets can be used for classifying land, marine, and air vehicles in real-world scenarios. Concretely, car model recognition can be employed in the context of visual surveillance and security for land traffic control [6], and marine vessel recognition is used for coastal surveillance [9] [10]. In this work, we focus on the feature learning problem specifically designed for car model recognition.
(Footnote: This work was done when Erhan Gundogdu was with Middle East Technical University.)
Recently developed studies on feature learning focus on extracting features from raw data such that the samples belonging to different classes are well-separated and the ones from the same classes are close to each other in the feature space. State-of-the-art network architectures such as VGG [11] and GoogLeNet [12] are frequently used for extracting features from images with several different training processes. In earlier work, pairwise similarity was used for signature verification with a contrastive loss [13]. Since considering all pairs or triplets in a dataset is not computationally tractable, carefully designed mining techniques have been proposed, such as hard positive [14] and hard negative [15] mining.
Previous methods that employ a hard mining step during training focus, at each iteration of the optimization, on the separation of samples in the feature space within a batch selected from the dataset. Therefore, the distance relations among the samples of the whole dataset are not fully exploited. Moreover, the classification loss function for the fine-grained labels is not considered in the training phase. In contrast, our proposed quadruplet sample selection method conveys more information from the dataset by considering globally hard negatives and relatively easy positives in the distance loss terms and the auxiliary classification layers.
The contributions of this work are summarized as follows: (1) In order to improve embedding learning, we have proposed two novel quadruplet selection methods where the globally hardest negative and moderately easy positive samples are selected. (2) Our framework contains a CNN trained with the combination of the classification and distance losses. These losses are designed to exploit the hierarchical labels of the training samples. (3) To test the proposed method, we have conducted experiments on the Stanford Cars 196 dataset [7] and observed that the recognition accuracy of the unobserved classes has been improved with respect to the random selection of samples in the quadruplets while outperforming the state-of-the-art feature learning methods.
PROPOSED METHOD
Each quadruplet sample is represented as $Q_i = \{X^R_i, X^{P^+}_i, X^{P^-}_i, X^N_i\}$, where $X_i = (x_i, y_{i1}, y_{i2})$. Here, $x_i \in \mathbb{R}^n$ is the vector of the pixels of an image ($n$ is the number of pixels in the image), while $y_{i1} \in C_1$ and $y_{i2} \in C_2$ denote the coarse and fine classes, respectively, where $C_1 = \{c^i_1\}_{i=1}^{k_1}$ ($k_1$ is the number of coarse classes) and, similarly, $C_2 = \{c^i_2\}_{i=1}^{k_2}$. Let the weights of a CNN be $\theta \in \mathbb{R}^m$, where $m$ is the number of weights; then the network can be defined as $f_\theta(x_i) : \mathbb{R}^m \times \mathbb{R}^n \rightarrow \mathbb{R}^k$, where $k$ is the dimension of the feature space.
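For readers who prefer code to notation, a quadruplet can be mirrored by a small container like the sketch below; the field names are illustrative and not taken from the paper's implementation.

from dataclasses import dataclass
import torch

@dataclass
class Sample:
    x: torch.Tensor   # image tensor (the raw pixels x_i)
    coarse: int       # y_i1 in C_1, e.g. the car type
    fine: int         # y_i2 in C_2, e.g. the exact car model

@dataclass
class Quadruplet:
    ref: Sample       # X_R
    pos_pos: Sample   # X_{P+}: same coarse and same fine class as X_R
    pos_neg: Sample   # X_{P-}: same coarse class, different fine class
    neg: Sample       # X_N: different coarse class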
Our proposed cost function consists of two parts: the classification (Section 3.1) and distance (Section 3.2) cost functions. The aim of these cost functions is to form the feature space so that fine classes are well-separated. However, the learning process highly depends on the selection of the quadruplets. The training process takes more time when the quadruplets are selected with a poor strategy. We propose to select the members of the quadruplets from the most informative region of the feature space in Section 3.3. As validated by the experiments (Section 4), the proposed method significantly increases the separation performance, as can be observed from both the Recall@K and Normalized Mutual Information (NMI) values in Table 1.
(Footnote: In (3), $\sigma^2_{P^{+/-}} = \mathrm{var}\{D_{R,P^{+/-}}\}$, $\sigma^2_N = \mathrm{var}\{D_{R,N}\}$, and $\mu_{P^{+/-}} = \mathrm{E}\{D_{R,P^{+/-}}\}$, $\mu_N = \mathrm{E}\{D_{R,N}\}$, as defined in [20].)
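The exact form of the global loss (3) is not reproduced in this excerpt, so the sketch below only illustrates how the batch statistics named in the footnote could be computed and combined in PyTorch; the particular combination (shrinking the variances while pushing the means apart by a margin $t_1$) is a guess in the spirit of the global loss of [20], not the paper's definition.

import torch

def global_stats_loss(d_rp, d_rn, t1=0.7, lam_g1=1.0, lam_g2=1.0):
    # d_rp: batch of distances D_{R,P+/-}; d_rn: batch of distances D_{R,N}
    mu_p, mu_n = d_rp.mean(), d_rn.mean()          # the means named in the footnote
    var_p, var_n = d_rp.var(), d_rn.var()          # the variances named in the footnote
    separation = torch.clamp(t1 - (mu_n - mu_p), min=0.0)  # assumed margin term
    return lam_g1 * (var_p + var_n) + lam_g2 * separation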
Classification Cost Function
In order to increase the discriminativeness of the features for the available class labels, softmax loss is employed. Contrary to the traditional one, the proposed neural network has two outputs which are dedicated to the fine and coarse classes. Let s θ = [g θ , h θ ] where g θ denotes the output for the coarse class, whereas h θ is for the fine class. Then, the proposed cost function is obtained:
$$L_{C_1,C_2}(x) = -\lambda_{c1} \sum_{i=1}^{k_1} p(c^i_1)\,\log\frac{e^{h^x_\theta(c^i_1)}}{\sum_{j=1}^{k_1} e^{h^x_\theta(c^j_1)}} \;-\; \lambda_{c2} \sum_{i=1}^{k_2} p(c^i_2)\,\log\frac{e^{g^x_\theta(c^i_2)}}{\sum_{j=1}^{k_2} e^{g^x_\theta(c^j_2)}}. \quad (4)$$
$C_1$ and $C_2$ specify the coarse and fine classes, respectively. $p(c^i_1)$ is the probability that the vector $x$ belongs to the $i$-th coarse class. If $x \in c^j_1$, then, by using a hard decision, $p(c^i_1) = \delta_{ij}$, where $\delta_{ij}$ is the Kronecker delta function. Similarly, $p(c^i_2)$ is also calculated for $C_2$. $h^x_\theta(c^i_1)$ denotes the $i$-th element of the vector $h^x_\theta$, where $h^x_\theta$ is the score vector for the coarse classes ($C_1$); likewise, $g^x_\theta$ is the score vector for the fine classes ($C_2$). $\lambda_{c1}$ and $\lambda_{c2}$ are the weights of the fine and coarse classification terms of the cost function.
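With hard (one-hot) labels, each term of (4) reduces to a standard cross-entropy over the corresponding head, so the loss can be sketched as below; which weight pairs with which head is ambiguous in the text, so the assignment here is an assumption.

import torch.nn.functional as F

def classification_loss(coarse_logits, fine_logits, y_coarse, y_fine,
                        lam_c1=0.08, lam_c2=0.25):
    # Weighted sum of the two cross-entropy terms in (4); pairing lam_c1 with the
    # coarse head follows the index k_1 in the first sum and is an assumption.
    return (lam_c1 * F.cross_entropy(coarse_logits, y_coarse) +
            lam_c2 * F.cross_entropy(fine_logits, y_fine))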
Distance Cost Function
The distances between the samples in the feature space are commonly defined by a radial function [17]. For this reason, the representations which will be learned by our proposed framework are $m$-dimensional feature vectors. The distance between any two members can be defined by the $\ell_2$ norm. Hence, we can formulate our goal by the inequality $D_{R,P^+} < D_{R,P^-} < D_{R,N}$. The first part can be rewritten as $D_{R,P^+} + m_1 < D_{R,P^-}$, and the second part as $D_{R,P^-} + m_2 < D_{R,N}$, where $m_1$ and $m_2$ are the margins, which should be positive numbers. Moreover, we emphasize the discrimination of the coarse classes by using the condition $m_1 > m_2 > 0$. Then, the new cost function can be proposed as:
$$L_{joint}(x_R, x_{P^+}, x_{P^-}, x_N) = \left(1 - \frac{D_{R,P^-}}{D_{R,P^+} + m_1 - m_2}\right)_{+} + \left(1 - \frac{D_{R,N}}{D_{R,P^-} + m_2}\right)_{+} + L_{C_1,C_2}(x_R). \quad (5)$$
Finally, the overall proposed network is shown in Figure 1 with the loss function given in (6). This loss function, which is the combination of (5) and (3), considers the distances of the samples in the feature space using $L_{joint}$, while $L_{global}$ regularizes the statistics of the distances batch-wise.
Fig. 1: The proposed framework is similar to the model used in [9]. The dimension of the last fully connected (FC) layer is 1024. Note that all the weights in the network are shared, including the weights in the FC layers.
$$L_{comb}(Q) = \sum_{\forall i} L_{joint}(Q_i) + \eta\, L_{global}(Q). \quad (6)$$
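Based on the reconstruction of (5) above, a minimal PyTorch sketch of the joint loss is given below (the combined loss (6) then sums this over the quadruplets in a batch and adds $\eta L_{global}$); the batched tensor layout is an assumption.

import torch

def joint_loss(f_r, f_pp, f_pn, f_n, cls_term, m1=0.7, m2=0.3):
    # f_*: (B, k) embeddings of X_R, X_{P+}, X_{P-}, X_N; cls_term: L_{C1,C2}(x_R)
    d_rpp = torch.norm(f_r - f_pp, dim=-1)   # D_{R,P+}
    d_rpn = torch.norm(f_r - f_pn, dim=-1)   # D_{R,P-}
    d_rn = torch.norm(f_r - f_n, dim=-1)     # D_{R,N}
    term1 = torch.clamp(1.0 - d_rpn / (d_rpp + m1 - m2), min=0.0)
    term2 = torch.clamp(1.0 - d_rn / (d_rpn + m2), min=0.0)
    return (term1 + term2).mean() + cls_term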
Quadruplet Selection
In the previous section, we have briefly summarized our novel loss function. As mentioned before, selecting the quadruplet samples randomly makes it difficult to exploit the most informative training examples. Instead of attempting to cover all the quadruplet combinations in the training set, we propose two novel selection strategies. First, a reference sample is randomly selected with equal probability from the training set (let the reference sample be $X_R$, where $C^R_1$ and $C^R_2$ are the coarse and fine labels of the reference sample, respectively). The negative sample is selected from the set of samples belonging to a different coarse class. The critical point is that, like hard negative mining in [15], we should select the negative sample closest to $X_R$, i.e., $X_N := \arg\min_{X_N} \|f_\theta(x_R) - f_\theta(x_N)\|_2$ over the samples whose coarse class differs from $C^R_1$. At this point, we propose two different methods for the selection of $X_{P^+}$ and $X_{P^-}$. The experimental comparison of these two methods is given in Section 4.
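The hardest-negative step described above can be sketched as follows, assuming embeddings for the whole training set have been precomputed; this is an illustration, not the authors' implementation.

import torch

def hardest_negative(ref_idx, embeddings, coarse_labels):
    # embeddings: (N, k) precomputed features; coarse_labels: (N,) coarse ids
    d = torch.norm(embeddings - embeddings[ref_idx], dim=1)
    d = d.masked_fill(coarse_labels == coarse_labels[ref_idx], float('inf'))
    return int(torch.argmin(d))   # closest sample from another coarse class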
Method 1
For determining $X_{P^+}$, we select the sample whose fine class is the same as the fine class of $X_R$ and which is closest to $X_N$. At this point, the constraint for the selection of $X_{P^+}$ is as follows: the distance between $X_{P^+}$ and $X_R$ is greater than the distance between $X_R$ and $X_N$ ($D_{R,P^+} > D_{R,N}$). Similarly, we select as $X_{P^-}$ the sample whose coarse class is the same as the coarse class of $X_R$, which is the closest sample to $X_N$, and which also satisfies $D_{R,P^-} > D_{R,N}$. This method is visualized in Figure 2.
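A sketch of Method 1 for choosing $X_{P^+}$ is given below ($X_{P^-}$ is obtained analogously with a same-coarse, different-fine mask); the fallback when no sample satisfies the distance constraint is an added assumption, since the text does not specify one.

import torch

def method1_positive(ref_idx, neg_idx, embeddings, fine_labels):
    d_ref = torch.norm(embeddings - embeddings[ref_idx], dim=1)   # distances to X_R
    d_neg = torch.norm(embeddings - embeddings[neg_idx], dim=1)   # distances to X_N
    # same fine class as X_R and farther from X_R than the negative (D_{R,P+} > D_{R,N})
    mask = (fine_labels == fine_labels[ref_idx]) & (d_ref > d_ref[neg_idx])
    mask[ref_idx] = False
    if not bool(mask.any()):      # fallback (not specified in the text): drop the distance constraint
        mask = fine_labels == fine_labels[ref_idx]
        mask[ref_idx] = False
    # among the admissible samples, pick the one closest to X_N
    return int(torch.argmin(d_neg.masked_fill(~mask, float('inf'))))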
Method 2
In the second method, after selecting $X_N$, the distance between $X_R$ and $X_N$ ($D_{R,N}$) determines a hyper-sphere with $X_R$ as its center. After selecting the labels of $X_{P^+}$ and $X_{P^-}$ according to the constraints in Section 2, $X_{P^+}$ and $X_{P^-}$ are selected from the predetermined classes such that they are the closest points to $X_R$ but outside the region enclosed by this hyper-sphere. If there are no samples that are both close to $X_R$ and outside the hyper-sphere, then the sample farthest from $X_R$ inside the hyper-sphere is selected. This selection method is illustrated in Figure 2.
Fig. 2: After $X_R$ is selected, the nearest sample belonging to a different coarse class is selected as $X_N$. $X_{P^+}$ and $X_{P^-}$ are then selected as in Method 1 (left) and Method 2 (right).
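Method 2 can be sketched as below, where candidate_idx holds the indices of the samples of the predetermined class (same fine class for $X_{P^+}$, same coarse but different fine class for $X_{P^-}$); the function name and tensor layout are assumptions.

import torch

def method2_select(ref_idx, neg_idx, candidate_idx, embeddings):
    d_ref = torch.norm(embeddings - embeddings[ref_idx], dim=1)
    radius = d_ref[neg_idx]                 # D_{R,N} defines the hyper-sphere
    cand_d = d_ref[candidate_idx]           # distances of the allowed-class candidates
    outside = cand_d > radius
    if bool(outside.any()):                 # closest candidate outside the sphere
        pick = torch.argmin(cand_d.masked_fill(~outside, float('inf')))
    else:                                   # otherwise: farthest candidate inside it
        pick = torch.argmax(cand_d)
    return int(candidate_idx[pick])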
RESULTS
We compare the performance of our proposed method against the state-of-the-art feature learning approaches in [18,21,4,22,20] by using the same evaluation methods. In addition, randomly selected quadruplets are utilized as in [9]. The Stanford Cars 196 dataset [7] is used in the experiments. To implement the proposed methods, a hierarchical structure is required for all the samples in the dataset, whereas each sample originally has only one label. For this purpose, we add high-level classes (coarse labels) to the dataset. In other words, the 196 classes originally in the dataset are taken as the fine classes, and 22 coarse classes are added using the types of the cars, similar to the study in [6].
The important point in the generation of the training and test sets is that they should not share any fine class labels. With this restriction, we want to measure the ability of our neural network to separate classes that it has not seen before. The most common performance analysis methods for zero-shot learning are Recall@K and NMI. Recall@K specifies whether the samples belonging to the same fine class are close to each other, and NMI is a measure of clustering quality, as mentioned in [22].
For this purpose, the first 98 fine classes of the dataset are selected as the training set, and the rest are used only as the test set, similar to the study in [1]. In our experimental setup, the pre-trained ResNet101 model [23] (trained on the ImageNet dataset [24]) is employed as our CNN model to extract the features. The experiments are performed on the PyTorch platform [25]. In addition, the hyper-parameters of the cost function are selected as 0.08 for $\lambda_{c1}$ and 0.25 for $\lambda_{c2}$; 1 for $\lambda_{g1}$, $\lambda_{g2}$, and $\eta$. The margins are 0.7 for $m_1$ and $t_1$, and 0.3 for $m_2$ and $t_2$. The learning parameters are as follows: the learning rate is 0.0003, the momentum is 0.9, and the stochastic gradient descent algorithm is used for optimization. The results can be examined in Table 1. Our proposed quadruplet-based learning framework improves the precision in terms of Recall@K even when the quadruplets are selected randomly. According to the Recall@K metric, the random quadruplet selection method outperforms the previous studies in [18,21,4,22] and is comparable to the study in [20]. On top of that, when the proposed selection methods are used, even higher accuracy can be obtained. As demonstrated in Table 1, Method 1 results in 64.85% Recall@1 accuracy, an improvement of at least 3.4% over the other studies, while Method 2 results in 66.06% Recall@1 accuracy, corresponding to a 4.5% increase.
CONCLUSION
We have demonstrated that the proposed selection methods significantly increase the separation capability of a model in terms of recall performance. Unlike previous studies that consider only the distances between $X_R$-$X_{P^{+/-}}$ and $X_R$-$X_N$, the proposed methods also consider the distances between $X_N$-$X_{P^{+/-}}$ in the feature space. This consideration helps us improve the model and achieve better accuracy. The two proposed selection methods allow the loss function not only to enlarge the margins between samples of different classes but also to create several tight clusters for each class. Moreover, both proposed methods have the advantage that they pay attention to the samples in the region around the critical hyper-sphere. In particular, the second method attacks an easier problem: while the first method can reshape only a particular region of the feature space, the second one can use the entire surface of the hyper-sphere. Therefore, the feature space is manipulated through a better optimization procedure. | 2,630
1907.09245 | 2963324243 | Recognition of objects with subtle differences has been used in many practical applications, such as car model recognition and maritime vessel identification. For discrimination of the objects in fine-grained detail, we focus on deep embedding learning by using a multi-task learning framework, in which the hierarchical labels (coarse and fine labels) of the samples are utilized both for classification and a quadruplet-based loss function. In order to improve the recognition strength of the learned features, we present a novel feature selection method specifically designed for four training samples of a quadruplet. By experiments, it is observed that the selection of very hard negative samples with relatively easy positive ones from the same coarse and fine classes significantly increases some performance metrics in a fine-grained dataset when compared to selecting the quadruplet samples randomly. The feature embedding learned by the proposed method achieves favorable performance against its state-of-the-art counterparts. | Moreover, the loss function for the quadruplets is similar to the triplet based methods @cite_15 . On the other hand, in @cite_7 , the use of the global loss has been proposed, while the quadruplet samples are selected randomly (Note that these quadruplets hold the constraints). The global loss penalizes the network in case of the mean and variance of the distances between the samples in a quadruplet are not appropriate, as given in In , @math , @math , and @math , @math as defined in @cite_21 . , where @math and @math are the margins, similar to . | {
"abstract": [
"Recent algorithms in convolutional neural networks (CNN) considerably advance the fine-grained image classification, which aims to differentiate subtle differences among subordinate classes. However, previous studies have rarely focused on learning a fined-grained and structured feature representation that is able to locate similar images at different levels of relevance, e.g., discovering cars from the same make or the same model, both of which require high precision. In this paper, we propose two main contributions to tackle this problem. 1) A multitask learning framework is designed to effectively learn fine-grained feature representations by jointly optimizing both classification and similarity constraints. 2) To model the multi-level relevance, label structures such as hierarchy or shared attributes are seamlessly embedded into the framework by generalizing the triplet loss. Extensive and thorough experiments have been conducted on three finegrained datasets, i.e., the Stanford car, the Car-333, and the food datasets, which contain either hierarchical labels or shared attributes. Our proposed method has achieved very competitive performance, i.e., among state-of-the-art classification accuracy when not using parts. More importantly, it significantly outperforms previous fine-grained feature representations for image retrieval at different levels of relevance.",
"Recent innovations in training deep convolutional neural network (ConvNet) models have motivated the design of new methods to automatically learn local image descriptors. The latest deep ConvNets proposed for this task consist of a siamese network that is trained by penalising misclassification of pairs of local image patches. Current results from machine learning show that replacing this siamese by a triplet network can improve the classification accuracy in several problems, but this has yet to be demonstrated for local image descriptor learning. Moreover, current siamese and triplet networks have been trained with stochastic gradient descent that computes the gradient from individual pairs or triplets of local image patches, which can make them prone to overfitting. In this paper, we first propose the use of triplet networks for the problem of local image descriptor learning. Furthermore, we also propose the use of a global loss that minimises the overall classification error in the training set, which can improve the generalisation capability of the model. Using the UBC benchmark dataset for comparing local image descriptors, we show that the triplet network produces a more accurate embedding than the siamese network in terms of the UBC dataset errors. Moreover, we also demonstrate that a combination of the triplet and global losses produces the best embedding in the field, using this triplet network. Finally, we also show that the use of the central-surround siamese network trained with the global loss produces the best result of the field on the UBC dataset. Pre-trained models are available online at this https URL",
"This paper addresses the problem of maritime vessel identification by exploiting the state-of-the-art techniques of distance metric learning and deep convolutional neural networks since vessels are the key constituents of marine surveillance. In order to increase the performance of visual vessel identification, we propose a joint learning framework which considers a classification and a distance metric learning cost function. The proposed method utilizes the quadruplet samples from a diverse image dataset to learn the ranking of the distances for hierarchical levels of labeling. The proposed method performs favorably well for vessel identification task against the conventional use of neuron activations towards the final layers of the classification networks. The proposed method achieves 60 percent vessel identification accuracy for 3965 different vessels without sacrificing vessel type classification accuracy."
],
"cite_N": [
"@cite_15",
"@cite_21",
"@cite_7"
],
"mid": [
"2964189431",
"2219193941",
"2730259892"
]
} | QUADRUPLET SELECTION METHODS FOR DEEP EMBEDDING LEARNING | Recently, embedding learning has become one of the most popular issues in machine learning [1,2,22]. Proper mapping from the raw data to a feature space is commonly utilized for image retrieval [4] and duplicate detection [5], which are used in many applications such as online image search.
For training a model that can extract proper features, the distance between two samples of a dataset in the feature space should be considered. Moreover, some embedding learning methods are employed to increase the classification accuracy, e.g., fine-grained object recognition [6], by using deep convolutional neural network (CNN) models, which require a significant amount of training samples. Fortunately, there are datasets for various purposes such as car model recognition [7] and maritime vessel classification and identification [8]. Some of these datasets can be used for classifying land, marine, and air vehicles in real-world scenarios. Concretely, car model recognition can be employed in the context of visual surveillance and security for land traffic control [6], and marine vessel recognition is used for coastal surveillance [9] [10]. In this work, we focus on the feature learning problem specifically designed for car model recognition.
(Footnote: This work was done when Erhan Gundogdu was with Middle East Technical University.)
Recently developed studies on feature learning focus on extracting features from raw data such that the samples belonging to different classes are well-separated and the ones from the same classes are close to each other in the feature space. State-of-the-art network architectures such as VGG [11] and GoogLeNet [12] are frequently used for extracting features from images with several different training processes. In earlier work, pairwise similarity was used for signature verification with a contrastive loss [13]. Since considering all pairs or triplets in a dataset is not computationally tractable, carefully designed mining techniques have been proposed, such as hard positive [14] and hard negative [15] mining.
Previous methods that employ a hard mining step during training focus, at each iteration of the optimization, on the separation of samples in the feature space within a batch selected from the dataset. Therefore, the distance relations among the samples of the whole dataset are not fully exploited. Moreover, the classification loss function for the fine-grained labels is not considered in the training phase. In contrast, our proposed quadruplet sample selection method conveys more information from the dataset by considering globally hard negatives and relatively easy positives in the distance loss terms and the auxiliary classification layers.
The contributions of this work are summarized as follows: (1) In order to improve embedding learning, we have proposed two novel quadruplet selection methods where the globally hardest negative and moderately easy positive samples are selected. (2) Our framework contains a CNN trained with the combination of the classification and distance losses. These losses are designed to exploit the hierarchical labels of the training samples. (3) To test the proposed method, we have conducted experiments on the Stanford Cars 196 dataset [7] and observed that the recognition accuracy of the unobserved classes has been improved with respect to the random selection of samples in the quadruplets while outperforming the state-of-the-art feature learning methods.
PROPOSED METHOD
Each quadruplet sample is represented as $Q_i = \{X^R_i, X^{P^+}_i, X^{P^-}_i, X^N_i\}$, where $X_i = (x_i, y_{i1}, y_{i2})$. Here, $x_i \in \mathbb{R}^n$ is the vector of the pixels of an image ($n$ is the number of pixels in the image), while $y_{i1} \in C_1$ and $y_{i2} \in C_2$ denote the coarse and fine classes, respectively, where $C_1 = \{c^i_1\}_{i=1}^{k_1}$ ($k_1$ is the number of coarse classes) and, similarly, $C_2 = \{c^i_2\}_{i=1}^{k_2}$. Let the weights of a CNN be $\theta \in \mathbb{R}^m$, where $m$ is the number of weights; then the network can be defined as $f_\theta(x_i) : \mathbb{R}^m \times \mathbb{R}^n \rightarrow \mathbb{R}^k$, where $k$ is the dimension of the feature space.
Our proposed cost function consists of two parts: the classification (Section 3.1) and distance (Section 3.2) cost functions. The aim of these cost functions is to form the feature space so that fine classes are well-separated. However, the learning process highly depends on the selection of the quadruplets. The training process takes more time when the quadruplets are selected with a poor strategy. We propose to select the members of the quadruplets from the most informative region of the feature space in Section 3.3. As validated by the experiments (Section 4), the proposed method significantly increases the separation performance, as can be observed from both the Recall@K and Normalized Mutual Information (NMI) values in Table 1.
(Footnote: In (3), $\sigma^2_{P^{+/-}} = \mathrm{var}\{D_{R,P^{+/-}}\}$, $\sigma^2_N = \mathrm{var}\{D_{R,N}\}$, and $\mu_{P^{+/-}} = \mathrm{E}\{D_{R,P^{+/-}}\}$, $\mu_N = \mathrm{E}\{D_{R,N}\}$, as defined in [20].)
Classification Cost Function
In order to increase the discriminativeness of the features for the available class labels, softmax loss is employed. Contrary to the traditional one, the proposed neural network has two outputs which are dedicated to the fine and coarse classes. Let s θ = [g θ , h θ ] where g θ denotes the output for the coarse class, whereas h θ is for the fine class. Then, the proposed cost function is obtained:
$$L_{C_1,C_2}(x) = -\lambda_{c1} \sum_{i=1}^{k_1} p(c^i_1)\,\log\frac{e^{h^x_\theta(c^i_1)}}{\sum_{j=1}^{k_1} e^{h^x_\theta(c^j_1)}} \;-\; \lambda_{c2} \sum_{i=1}^{k_2} p(c^i_2)\,\log\frac{e^{g^x_\theta(c^i_2)}}{\sum_{j=1}^{k_2} e^{g^x_\theta(c^j_2)}}. \quad (4)$$
$C_1$ and $C_2$ specify the coarse and fine classes, respectively. $p(c^i_1)$ is the probability that the vector $x$ belongs to the $i$-th coarse class. If $x \in c^j_1$, then, by using a hard decision, $p(c^i_1) = \delta_{ij}$, where $\delta_{ij}$ is the Kronecker delta function. Similarly, $p(c^i_2)$ is also calculated for $C_2$. $h^x_\theta(c^i_1)$ denotes the $i$-th element of the vector $h^x_\theta$, where $h^x_\theta$ is the score vector for the coarse classes ($C_1$); likewise, $g^x_\theta$ is the score vector for the fine classes ($C_2$). $\lambda_{c1}$ and $\lambda_{c2}$ are the weights of the fine and coarse classification terms of the cost function.
Distance Cost Function
The distances between the samples in the feature space are commonly defined by a radial function [17]. For this reason, the representations which will be learned by our proposed framework are $m$-dimensional feature vectors. The distance between any two members can be defined by the $\ell_2$ norm. Hence, we can formulate our goal by the inequality $D_{R,P^+} < D_{R,P^-} < D_{R,N}$. The first part can be rewritten as $D_{R,P^+} + m_1 < D_{R,P^-}$, and the second part as $D_{R,P^-} + m_2 < D_{R,N}$, where $m_1$ and $m_2$ are the margins, which should be positive numbers. Moreover, we emphasize the discrimination of the coarse classes by using the condition $m_1 > m_2 > 0$. Then, the new cost function can be proposed as:
$$L_{joint}(x_R, x_{P^+}, x_{P^-}, x_N) = \left(1 - \frac{D_{R,P^-}}{D_{R,P^+} + m_1 - m_2}\right)_{+} + \left(1 - \frac{D_{R,N}}{D_{R,P^-} + m_2}\right)_{+} + L_{C_1,C_2}(x_R). \quad (5)$$
Finally, the overall proposed network is shown in Figure 1 with the loss function given in (6). This loss function, which is the combination of (5) and (3), considers the distances of the samples in the feature space using $L_{joint}$, while $L_{global}$ regularizes the statistics of the distances batch-wise.
Fig. 1: The proposed framework is similar to the model used in [9]. The dimension of the last fully connected (FC) layer is 1024. Note that all the weights in the network are shared, including the weights in the FC layers.
$$L_{comb}(Q) = \sum_{\forall i} L_{joint}(Q_i) + \eta\, L_{global}(Q). \quad (6)$$
Quadruplet Selection
In the previous section, we have briefly summarized our novel loss function. As mentioned before, selecting the quadruplet samples randomly makes it difficult to exploit the most informative training examples. Instead of attempting to cover all the quadruplet combinations in the training set, we propose two novel selection strategies. First, a reference sample is randomly selected with equal probability from the training set (let the reference sample be $X_R$, where $C^R_1$ and $C^R_2$ are the coarse and fine labels of the reference sample, respectively). The negative sample is selected from the set of samples belonging to a different coarse class. The critical point is that, like hard negative mining in [15], we should select the negative sample closest to $X_R$, i.e., $X_N := \arg\min_{X_N} \|f_\theta(x_R) - f_\theta(x_N)\|_2$ over the samples whose coarse class differs from $C^R_1$. At this point, we propose two different methods for the selection of $X_{P^+}$ and $X_{P^-}$. The experimental comparison of these two methods is given in Section 4.
Method 1
For determining $X_{P^+}$, we select the sample whose fine class is the same as the fine class of $X_R$ and which is closest to $X_N$. At this point, the constraint for the selection of $X_{P^+}$ is as follows: the distance between $X_{P^+}$ and $X_R$ is greater than the distance between $X_R$ and $X_N$ ($D_{R,P^+} > D_{R,N}$). Similarly, we select as $X_{P^-}$ the sample whose coarse class is the same as the coarse class of $X_R$, which is the closest sample to $X_N$, and which also satisfies $D_{R,P^-} > D_{R,N}$. This method is visualized in Figure 2.
Method 2
In the second method, after selecting $X_N$, the distance between $X_R$ and $X_N$ ($D_{R,N}$) determines a hyper-sphere with $X_R$ as its center. After selecting the labels of $X_{P^+}$ and $X_{P^-}$ according to the constraints in Section 2, $X_{P^+}$ and $X_{P^-}$ are selected from the predetermined classes such that they are the closest points to $X_R$ but outside the region enclosed by this hyper-sphere. If there are no samples that are both close to $X_R$ and outside the hyper-sphere, then the sample farthest from $X_R$ inside the hyper-sphere is selected. This selection method is illustrated in Figure 2.
Fig. 2: After $X_R$ is selected, the nearest sample belonging to a different coarse class is selected as $X_N$. $X_{P^+}$ and $X_{P^-}$ are then selected as in Method 1 (left) and Method 2 (right).
RESULTS
We compare the performance of our proposed method against the state-of-the-art feature learning approaches in [18,21,4,22,20] by using the same evaluation methods. In addition, randomly selected quadruplets are utilized as in [9]. The Stanford Cars 196 dataset [7] is used in the experiments. To implement the proposed methods, a hierarchical structure is required for all the samples in the dataset, whereas each sample originally has only one label. For this purpose, we add high-level classes (coarse labels) to the dataset. In other words, the 196 classes originally in the dataset are taken as the fine classes, and 22 coarse classes are added using the types of the cars, similar to the study in [6].
The important point in the generation of the training and test sets is that they should not share any fine class labels. With this restriction, we want to measure the ability of our neural network to separate classes that it has not seen before. The most common performance analysis methods for zero-shot learning are Recall@K and NMI. Recall@K specifies whether the samples belonging to the same fine class are close to each other, and NMI is a measure of clustering quality, as mentioned in [22].
For this purpose, the first 98 fine classes of the dataset are selected as the training set, and the rest are used only as the test set, similar to the study in [1]. In our experimental setup, the pre-trained ResNet101 model [23] (trained on the ImageNet dataset [24]) is employed as our CNN model to extract the features. The experiments are performed on the PyTorch platform [25]. In addition, the hyper-parameters of the cost function are selected as 0.08 for $\lambda_{c1}$ and 0.25 for $\lambda_{c2}$; 1 for $\lambda_{g1}$, $\lambda_{g2}$, and $\eta$. The margins are 0.7 for $m_1$ and $t_1$, and 0.3 for $m_2$ and $t_2$. The learning parameters are as follows: the learning rate is 0.0003, the momentum is 0.9, and the stochastic gradient descent algorithm is used for optimization. The results can be examined in Table 1. Our proposed quadruplet-based learning framework improves the precision in terms of Recall@K even when the quadruplets are selected randomly. According to the Recall@K metric, the random quadruplet selection method outperforms the previous studies in [18,21,4,22] and is comparable to the study in [20]. On top of that, when the proposed selection methods are used, even higher accuracy can be obtained. As demonstrated in Table 1, Method 1 results in 64.85% Recall@1 accuracy, an improvement of at least 3.4% over the other studies, while Method 2 results in 66.06% Recall@1 accuracy, corresponding to a 4.5% increase.
CONCLUSION
We have demonstrated that the proposed selection methods significantly increase the separation capability of a model in terms of recall performance. Unlike previous studies that consider only the distances between $X_R$-$X_{P^{+/-}}$ and $X_R$-$X_N$, the proposed methods also consider the distances between $X_N$-$X_{P^{+/-}}$ in the feature space. This consideration helps us improve the model and achieve better accuracy. The two proposed selection methods allow the loss function not only to enlarge the margins between samples of different classes but also to create several tight clusters for each class. Moreover, both proposed methods have the advantage that they pay attention to the samples in the region around the critical hyper-sphere. In particular, the second method attacks an easier problem: while the first method can reshape only a particular region of the feature space, the second one can use the entire surface of the hyper-sphere. Therefore, the feature space is manipulated through a better optimization procedure. | 2,630
1907.09245 | 2963324243 | Recognition of objects with subtle differences has been used in many practical applications, such as car model recognition and maritime vessel identification. For discrimination of the objects in fine-grained detail, we focus on deep embedding learning by using a multi-task learning framework, in which the hierarchical labels (coarse and fine labels) of the samples are utilized both for classification and a quadruplet-based loss function. In order to improve the recognition strength of the learned features, we present a novel feature selection method specifically designed for four training samples of a quadruplet. By experiments, it is observed that the selection of very hard negative samples with relatively easy positive ones from the same coarse and fine classes significantly increases some performance metrics in a fine-grained dataset when compared to selecting the quadruplet samples randomly. The feature embedding learned by the proposed method achieves favorable performance against its state-of-the-art counterparts. | In @cite_15 , the hierarchical labels of the training samples are utilized. It should be noted that a model has difficulty in convergence when the samples are selected randomly since the most informative pairs are not effectively considered. Here, we propose two methods for sample selection to address this issue. | {
"abstract": [
"Recent algorithms in convolutional neural networks (CNN) considerably advance the fine-grained image classification, which aims to differentiate subtle differences among subordinate classes. However, previous studies have rarely focused on learning a fined-grained and structured feature representation that is able to locate similar images at different levels of relevance, e.g., discovering cars from the same make or the same model, both of which require high precision. In this paper, we propose two main contributions to tackle this problem. 1) A multitask learning framework is designed to effectively learn fine-grained feature representations by jointly optimizing both classification and similarity constraints. 2) To model the multi-level relevance, label structures such as hierarchy or shared attributes are seamlessly embedded into the framework by generalizing the triplet loss. Extensive and thorough experiments have been conducted on three finegrained datasets, i.e., the Stanford car, the Car-333, and the food datasets, which contain either hierarchical labels or shared attributes. Our proposed method has achieved very competitive performance, i.e., among state-of-the-art classification accuracy when not using parts. More importantly, it significantly outperforms previous fine-grained feature representations for image retrieval at different levels of relevance."
],
"cite_N": [
"@cite_15"
],
"mid": [
"2964189431"
]
} | QUADRUPLET SELECTION METHODS FOR DEEP EMBEDDING LEARNING | Recently, embedding learning has become one of the most popular issues in machine learning [1,2,22]. Proper mapping from the raw data to a feature space is commonly utilized for image retrieval [4] and duplicate detection [5], which are used in many applications such as online image search.
For training a model that can extract proper features, the distance between two samples of a dataset in the feature space should be considered. Moreover, some embedding learning methods are employed to increase the classification accuracy, e.g., fine-grained object recognition [6], by using deep convolutional neural network (CNN) models, which require a significant amount of training samples. Fortunately, there are datasets for various purposes such as car model recognition [7] and maritime vessel classification and identification [8]. Some of these datasets can be used for classifying land, marine, and air vehicles in real-world scenarios. Concretely, car model recognition can be employed in the context of visual surveillance and security for land traffic control [6], and marine vessel recognition is used for coastal surveillance [9] [10]. In this work, we focus on the feature learning problem specifically designed for car model recognition.
(Footnote: This work was done when Erhan Gundogdu was with Middle East Technical University.)
Recently developed studies on feature learning focus on extracting features from raw data such that the samples belonging to different classes are well-separated and the ones from the same classes are close to each other in the feature space. State-of-the-art network architectures such as VGG [11] and GoogLeNet [12] are frequently used for extracting features from images with several different training processes. In earlier work, pairwise similarity was used for signature verification with a contrastive loss [13]. Since considering all pairs or triplets in a dataset is not computationally tractable, carefully designed mining techniques have been proposed, such as hard positive [14] and hard negative [15] mining.
Previous methods that employ a hard mining step during training focus, at each iteration of the optimization, on the separation of samples in the feature space within a batch selected from the dataset. Therefore, the distance relations among the samples of the whole dataset are not fully exploited. Moreover, the classification loss function for the fine-grained labels is not considered in the training phase. In contrast, our proposed quadruplet sample selection method conveys more information from the dataset by considering globally hard negatives and relatively easy positives in the distance loss terms and the auxiliary classification layers.
The contributions of this work are summarized as follows: (1) In order to improve embedding learning, we have proposed two novel quadruplet selection methods where the globally hardest negative and moderately easy positive samples are selected. (2) Our framework contains a CNN trained with the combination of the classification and distance losses. These losses are designed to exploit the hierarchical labels of the training samples. (3) To test the proposed method, we have conducted experiments on the Stanford Cars 196 dataset [7] and observed that the recognition accuracy of the unobserved classes has been improved with respect to the random selection of samples in the quadruplets while outperforming the state-of-the-art feature learning methods.
PROPOSED METHOD
Each quadruplet sample is represented as $Q_i = \{X^R_i, X^{P^+}_i, X^{P^-}_i, X^N_i\}$, where $X_i = (x_i, y_{i1}, y_{i2})$. Here, $x_i \in \mathbb{R}^n$ is the vector of the pixels of an image ($n$ is the number of pixels in the image), while $y_{i1} \in C_1$ and $y_{i2} \in C_2$ denote the coarse and fine classes, respectively, where $C_1 = \{c^i_1\}_{i=1}^{k_1}$ ($k_1$ is the number of coarse classes) and, similarly, $C_2 = \{c^i_2\}_{i=1}^{k_2}$. Let the weights of a CNN be $\theta \in \mathbb{R}^m$, where $m$ is the number of weights; then the network can be defined as $f_\theta(x_i) : \mathbb{R}^m \times \mathbb{R}^n \rightarrow \mathbb{R}^k$, where $k$ is the dimension of the feature space.
Our proposed cost function consists of two parts: the classification (Section 3.1) and distance (Section 3.2) cost functions. The aim of these cost functions is to form the feature space so that fine classes are well-separated. However, the learning process highly depends on the selection of the quadruplets. The training process takes more time when the quadruplets are selected with a poor strategy. We propose to select the members of the quadruplets from the most informative region of the feature space in Section 3.3. As validated by the experiments (Section 4), the proposed method significantly increases the separation performance, as can be observed from both the Recall@K and Normalized Mutual Information (NMI) values in Table 1.
(Footnote: In (3), $\sigma^2_{P^{+/-}} = \mathrm{var}\{D_{R,P^{+/-}}\}$, $\sigma^2_N = \mathrm{var}\{D_{R,N}\}$, and $\mu_{P^{+/-}} = \mathrm{E}\{D_{R,P^{+/-}}\}$, $\mu_N = \mathrm{E}\{D_{R,N}\}$, as defined in [20].)
Classification Cost Function
In order to increase the discriminativeness of the features for the available class labels, softmax loss is employed. Contrary to the traditional one, the proposed neural network has two outputs which are dedicated to the fine and coarse classes. Let s θ = [g θ , h θ ] where g θ denotes the output for the coarse class, whereas h θ is for the fine class. Then, the proposed cost function is obtained:
$$L_{C_1,C_2}(x) = -\lambda_{c1} \sum_{i=1}^{k_1} p(c^i_1)\,\log\frac{e^{h^x_\theta(c^i_1)}}{\sum_{j=1}^{k_1} e^{h^x_\theta(c^j_1)}} \;-\; \lambda_{c2} \sum_{i=1}^{k_2} p(c^i_2)\,\log\frac{e^{g^x_\theta(c^i_2)}}{\sum_{j=1}^{k_2} e^{g^x_\theta(c^j_2)}}. \quad (4)$$
$C_1$ and $C_2$ specify the coarse and fine classes, respectively. $p(c^i_1)$ is the probability that the vector $x$ belongs to the $i$-th coarse class. If $x \in c^j_1$, then, by using a hard decision, $p(c^i_1) = \delta_{ij}$, where $\delta_{ij}$ is the Kronecker delta function. Similarly, $p(c^i_2)$ is also calculated for $C_2$. $h^x_\theta(c^i_1)$ denotes the $i$-th element of the vector $h^x_\theta$, where $h^x_\theta$ is the score vector for the coarse classes ($C_1$); likewise, $g^x_\theta$ is the score vector for the fine classes ($C_2$). $\lambda_{c1}$ and $\lambda_{c2}$ are the weights of the fine and coarse classification terms of the cost function.
Distance Cost Function
The distances between the samples in the feature space are commonly defined by a radial function [17]. For this reason, the representations which will be learned by our proposed framework are $m$-dimensional feature vectors. The distance between any two members can be defined by the $\ell_2$ norm. Hence, we can formulate our goal by the inequality $D_{R,P^+} < D_{R,P^-} < D_{R,N}$. The first part can be rewritten as $D_{R,P^+} + m_1 < D_{R,P^-}$, and the second part as $D_{R,P^-} + m_2 < D_{R,N}$, where $m_1$ and $m_2$ are the margins, which should be positive numbers. Moreover, we emphasize the discrimination of the coarse classes by using the condition $m_1 > m_2 > 0$. Then, the new cost function can be proposed as:
$$L_{joint}(x_R, x_{P^+}, x_{P^-}, x_N) = \left(1 - \frac{D_{R,P^-}}{D_{R,P^+} + m_1 - m_2}\right)_{+} + \left(1 - \frac{D_{R,N}}{D_{R,P^-} + m_2}\right)_{+} + L_{C_1,C_2}(x_R). \quad (5)$$
Finally, the overall proposed network is shown in Figure 1 with the loss function given in (6). This loss function, which is the combination of (5) and (3), considers the distances of the samples in the feature space using $L_{joint}$, while $L_{global}$ regularizes the statistics of the distances batch-wise.
Fig. 1: The proposed framework is similar to the model used in [9]. The dimension of the last fully connected (FC) layer is 1024. Note that all the weights in the network are shared, including the weights in the FC layers.
$$L_{comb}(Q) = \sum_{\forall i} L_{joint}(Q_i) + \eta\, L_{global}(Q). \quad (6)$$
Quadruplet Selection
In the previous section, we have briefly summarized our novel loss function. As mentioned before, selecting the quadruplet samples randomly makes it difficult to exploit the most informative training examples. Instead of attempting to cover all the quadruplet combinations in the training set, we propose two novel selection strategies. First, a reference sample is randomly selected with equal probability from the training set (let the reference sample be $X_R$, where $C^R_1$ and $C^R_2$ are the coarse and fine labels of the reference sample, respectively). The negative sample is selected from the set of samples belonging to a different coarse class. The critical point is that, like hard negative mining in [15], we should select the negative sample closest to $X_R$, i.e., $X_N := \arg\min_{X_N} \|f_\theta(x_R) - f_\theta(x_N)\|_2$ over the samples whose coarse class differs from $C^R_1$. At this point, we propose two different methods for the selection of $X_{P^+}$ and $X_{P^-}$. The experimental comparison of these two methods is given in Section 4.
Method 1
For determining $X_{P^+}$, we select the sample whose fine class is the same as the fine class of $X_R$ and which is closest to $X_N$. At this point, the constraint for the selection of $X_{P^+}$ is as follows: the distance between $X_{P^+}$ and $X_R$ is greater than the distance between $X_R$ and $X_N$ ($D_{R,P^+} > D_{R,N}$). Similarly, we select as $X_{P^-}$ the sample whose coarse class is the same as the coarse class of $X_R$, which is the closest sample to $X_N$, and which also satisfies $D_{R,P^-} > D_{R,N}$. This method is visualized in Figure 2.
Method 2
In the second method, after selecting $X_N$, the distance between $X_R$ and $X_N$ ($D_{R,N}$) determines a hyper-sphere with $X_R$ as its center. After selecting the labels of $X_{P^+}$ and $X_{P^-}$ according to the constraints in Section 2, $X_{P^+}$ and $X_{P^-}$ are selected from the predetermined classes such that they are the closest points to $X_R$ but outside the region enclosed by this hyper-sphere. If there are no samples that are both close to $X_R$ and outside the hyper-sphere, then the sample farthest from $X_R$ inside the hyper-sphere is selected. This selection method is illustrated in Figure 2.
Fig. 2: After $X_R$ is selected, the nearest sample belonging to a different coarse class is selected as $X_N$. $X_{P^+}$ and $X_{P^-}$ are then selected as in Method 1 (left) and Method 2 (right).
RESULTS
We compare the performance of our proposed method against the state-of-the-art feature learning approaches in [18,21,4,22,20] by using the same evaluation methods. In addition, randomly selected quadruplets are utilized as in [9]. The Stanford Cars 196 dataset [7] is used in the experiments. To implement the proposed methods, a hierarchical structure is required for all the samples in the dataset, whereas each sample originally has only one label. For this purpose, we add high-level classes (coarse labels) to the dataset. In other words, the 196 classes originally in the dataset are taken as the fine classes, and 22 coarse classes are added using the types of the cars, similar to the study in [6].
The important point in the generation of the training and test sets is that they should not share any fine class labels. With this restriction, we want to measure the ability of our neural network to separate classes that it has not seen before. The most common performance analysis methods for zero-shot learning are Recall@K and NMI. Recall@K specifies whether the samples belonging to the same fine class are close to each other, and NMI is a measure of clustering quality, as mentioned in [22].
For this purpose, the first 98 fine classes of the dataset are selected as the training set, and the rest are used only as the test set, similar to the study in [1]. In our experimental setup, the pre-trained ResNet101 model [23] (trained on the ImageNet dataset [24]) is employed as our CNN model to extract the features. The experiments are performed on the PyTorch platform [25]. In addition, the hyper-parameters of the cost function are selected as 0.08 for $\lambda_{c1}$ and 0.25 for $\lambda_{c2}$; 1 for $\lambda_{g1}$, $\lambda_{g2}$, and $\eta$. The margins are 0.7 for $m_1$ and $t_1$, and 0.3 for $m_2$ and $t_2$. The learning parameters are as follows: the learning rate is 0.0003, the momentum is 0.9, and the stochastic gradient descent algorithm is used for optimization. The results can be examined in Table 1. Our proposed quadruplet-based learning framework improves the precision in terms of Recall@K even when the quadruplets are selected randomly. According to the Recall@K metric, the random quadruplet selection method outperforms the previous studies in [18,21,4,22] and is comparable to the study in [20]. On top of that, when the proposed selection methods are used, even higher accuracy can be obtained. As demonstrated in Table 1, Method 1 results in 64.85% Recall@1 accuracy, an improvement of at least 3.4% over the other studies, while Method 2 results in 66.06% Recall@1 accuracy, corresponding to a 4.5% increase.
CONCLUSION
We have demonstrated that the proposed selection methods significantly increase the separation capability of a model in terms of recall performance. Unlike previous studies that consider only the distances between $X_R$-$X_{P^{+/-}}$ and $X_R$-$X_N$, the proposed methods also consider the distances between $X_N$-$X_{P^{+/-}}$ in the feature space. This consideration helps us improve the model and achieve better accuracy. The two proposed selection methods allow the loss function not only to enlarge the margins between samples of different classes but also to create several tight clusters for each class. Moreover, both proposed methods have the advantage that they pay attention to the samples in the region around the critical hyper-sphere. In particular, the second method attacks an easier problem: while the first method can reshape only a particular region of the feature space, the second one can use the entire surface of the hyper-sphere. Therefore, the feature space is manipulated through a better optimization procedure. | 2,630
1907.09242 | 2963549923 | We consider the robust version of items selection problem, in which the goal is to choose representatives from a family of sets, preserving constraints on the allowed items' combinations. We prove NP-hardness of the deterministic version, and establish polynomially solvable special cases. Next, we consider the robust version in which we aim at minimizing the maximum regret of the solution under interval parameter uncertainty. We show that this problem is hard for the second level of polynomial-time hierarchy. We develop an exact solution algorithm for the robust problem, based on cut generation, and present the results of computational experiments. | The basic variant of this problem has been first considered in @cite_8 under the name Representatives Selection Problem, where we are allowed to select one item from each set of alternatives. In order to alleviate the effects of cost uncertainty on decision making, the min-max and min-max regret criteria @cite_11 @cite_17 have been proposed to assess the solution quality. The problem formulations using these criteria belong to the class of robust optimization problems @cite_4 . Such approach appears to be more suitable for large scale design projects than an alternative stochastic optimization approach @cite_1 , when: 1) decision makers do not have sufficient historical data for estimating probability distributions; 2) there is a high factor of risk involved in one-shot decisions, and a precautionary approach is preferred. The robust approach to discrete optimization problems has been applied in many areas of industrial engineering, such as: scheduling and sequencing @cite_9 @cite_0 @cite_21 , network optimization @cite_10 @cite_15 , assignment @cite_19 @cite_7 , and others @cite_20 . | {
"abstract": [
"In Introduction, I explain the meaning I give to the qualifier term \"robust\" and justify my preference for the expression robustness concern rather than robustness analysis, which I feel is likely to be interpreted too narrowly. In Section 2, I discuss this concern in more details and I try to clarify the numerous raisons d'etre of this concern. As a means of examining the multiple facets of robustness concern more comprehensively, I explore the existing research about robustness, attempting to highlight what I see as the three different territories covered by these studies (Section 3). In Section 4, I refer to these territories to illustrate how responses to robustness concern could be even more varied than they currently are. In this perspective, I propose in Section 5 three new measures of robustness. In the last section, I identify several aspects of the problem that should be examined more closely because they could lead to new avenues of research, which could in turn yield new and innovative responses.",
"",
"The following optimization problem is studied. There are several sets of integer positive numbers whose values are uncertain. The problem is to select one representative of each set such that the sum of the selected numbers is minimum. The uncertainty is modeled by discrete and interval scenarios, and the min–max and min–max (relative) regret approaches are used for making a selection decision. The arising min–max, min–max regret and min–max relative regret optimization problems are shown to be polynomially solvable for interval scenarios. For discrete scenarios, they are proved to be NP-hard in the strong sense if the number of scenarios is part of the input. If it is part of the problem type, then they are NP-hard in the ordinary sense, pseudo-polynomially solvable by a dynamic programming algorithm and possess an FPTAS. This study is motivated by the problem of selecting tools of minimum total cost in the design of a production line.",
"We consider the problem of scheduling jobs on parallel identical machines, where only interval bounds of processing times of jobs are known. The optimality criterion of a schedule is the total completion time. In order to cope with the uncertainty, we consider the maximum regret objective and seek a schedule that performs well under all possible instantiations of processing times. We show how to compute the maximum regret, and prove that its minimization is strongly NP-hard.",
"",
"",
"",
"",
"We consider the Assignment Problem with interval data, where it is assumed that only upper and lower bounds are known for each cost coefficient. It is required to find a minmax regret assignment. The problem is known to be strongly NP-hard. We present and compare computationally several exact and heuristic methods, including Benders decomposition, using CPLEX, a variable depth neighborhood local search, and two hybrid population-based heuristics. We report results of extensive computational experiments.",
"",
"We propose an approach to address data uncertainty for discrete optimization and network flow problems that allows controlling the degree of conservatism of the solution, and is computationally tractable both practically and theoretically. In particular, when both the cost coefficients and the data in the constraints of an integer programming problem are subject to uncertainty, we propose a robust integer programming problem of moderately larger size that allows controlling the degree of conservatism of the solution in terms of probabilistic bounds on constraint violation. When only the cost coefficients are subject to uncertainty and the problem is a 0−1 discrete optimization problem on n variables, then we solve the robust counterpart by solving at most n+1 instances of the original problem. Thus, the robust counterpart of a polynomially solvable 0−1 discrete optimization problem remains polynomially solvable. In particular, robust matching, spanning tree, shortest path, matroid intersection, etc. are polynomially solvable. We also show that the robust counterpart of an NP-hard α-approximable 0−1 discrete optimization problem, remains α-approximable. Finally, we propose an algorithm for robust network flows that solves the robust counterpart by solving a polynomial number of nominal minimum cost flow problems in a modified network.",
"Robust optimization is a young and emerging field of research having received a considerable increase of interest over the last decade. In this paper, we argue that the the algorithm engineering methodology fits very well to the field of robust optimization and yields a rewarding new perspective on both the current state of research and open research directions. To this end we go through the algorithm engineering cycle of design and analysis of concepts, development and implementation of algorithms, and theoretical and experimental evaluation. We show that many ideas of algorithm engineering have already been applied in publications on robust optimization. Most work on robust optimization is devoted to analysis of the concepts and the development of algorithms, some papers deal with the evaluation of a particular concept in case studies, and work on comparison of concepts just starts. What is still a drawback in many papers on robustness is the missing link to include the results of the experiments again in the design.",
"Min-max and min-max regret criteria are commonly used to define robust solutions. After motivating the use of these criteria, we present general results. Then, we survey complexity results for the min-max and min-max regret versions of some combinatorial optimization problems: shortest path, spanning tree, assignment, min cut, min s-t cut, knapsack. Since most of these problems are NP-hard, we also investigate the approximability of these problems. Furthermore, we present algorithms to solve these problems to optimality."
],
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_8",
"@cite_9",
"@cite_21",
"@cite_1",
"@cite_17",
"@cite_0",
"@cite_19",
"@cite_15",
"@cite_10",
"@cite_20",
"@cite_11"
],
"mid": [
"2078858191",
"",
"2044046971",
"2963035806",
"",
"",
"",
"",
"1968866800",
"",
"2165775468",
"2952485782",
"2028589351"
]
} | 0 |
||
1907.09242 | 2963549923 | We consider the robust version of items selection problem, in which the goal is to choose representatives from a family of sets, preserving constraints on the allowed items' combinations. We prove NP-hardness of the deterministic version, and establish polynomially solvable special cases. Next, we consider the robust version in which we aim at minimizing the maximum regret of the solution under interval parameter uncertainty. We show that this problem is hard for the second level of polynomial-time hierarchy. We develop an exact solution algorithm for the robust problem, based on cut generation, and present the results of computational experiments. | Note that the deterministic version of the Representatives Selection Problem is easily solvable in polynomial time. For the interval uncertainty representation of cost parameters, the problem can still be solved in polynomial time, both in the case of minimizing the maximum regret and the relative regret @cite_8 . However, in the case of a discrete set of scenarios, the problem becomes NP-hard even for 2 scenarios, and strongly NP-hard when the number of scenarios @math is part of the input. In @cite_6 the authors prove that strong NP-hardness also holds when the sets of eligible items are bounded. In @cite_2 an @math -approximation algorithm for this variant was given. | {
"abstract": [
"In this paper new complexity and approximation results on the robust versions of the representatives selection problem, under the scenario uncertainty representation, are provided, which extend the results obtained in the recent papers by Dolgui and Kovalev (2012) and Deineko and Woeginger (2013). Namely, it is shown that if the number of scenarios is a part of input, then the min-max (regret) representatives selection problem is not approximable within a ratio of O ( log 1 - ? K ) for any ? > 0 , where K is the number of scenarios, unless the problems in NP have quasi-polynomial time algorithms. An approximation algorithm with an approximation ratio of O ( log K log log K ) for the min-max version of the problem is also provided.",
"We establish strong NP-hardness and in-approximability of the so-called representatives selection problem, a tool selection problem in the area of robust optimization. Our results answer a recent question of Dolgui and Kovalev (4OR Q J Oper Res 10:181–192, 2012).",
"The following optimization problem is studied. There are several sets of integer positive numbers whose values are uncertain. The problem is to select one representative of each set such that the sum of the selected numbers is minimum. The uncertainty is modeled by discrete and interval scenarios, and the min–max and min–max (relative) regret approaches are used for making a selection decision. The arising min–max, min–max regret and min–max relative regret optimization problems are shown to be polynomially solvable for interval scenarios. For discrete scenarios, they are proved to be NP-hard in the strong sense if the number of scenarios is part of the input. If it is part of the problem type, then they are NP-hard in the ordinary sense, pseudo-polynomially solvable by a dynamic programming algorithm and possess an FPTAS. This study is motivated by the problem of selecting tools of minimum total cost in the design of a production line."
],
"cite_N": [
"@cite_2",
"@cite_6",
"@cite_8"
],
"mid": [
"2027995829",
"2029243025",
"2044046971"
]
} | 0 |
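To make the min-max criterion discussed in the related-work text above concrete, here is a small illustrative Python sketch (invented for this example, not taken from any cited paper) for the unconstrained representatives selection setting with interval cost uncertainty. Because the worst case of any selection is attained when every cost sits at its upper bound, a min-max solution simply picks, from each set, the item with the smallest upper bound; note that the min-max regret variant is more involved and is not covered by this sketch.

```python
def min_max_selection(sets):
    """sets: list of sets of alternatives; each alternative is a
    (name, lower_cost, upper_cost) interval. Returns one representative
    per set minimizing the worst-case (all costs at upper bounds) total."""
    selection = []
    for alternatives in sets:
        # The worst case decomposes per set, so greedily take the item
        # whose upper bound is smallest.
        best = min(alternatives, key=lambda item: item[2])
        selection.append(best)
    worst_case_total = sum(item[2] for item in selection)
    return selection, worst_case_total

# Tiny example: two sets of alternatives with interval costs.
tools = [
    [("a1", 2, 5), ("a2", 3, 4)],
    [("b1", 1, 7), ("b2", 4, 6)],
]
print(min_max_selection(tools))  # picks a2 and b2, worst-case total 10
```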
||
1907.09387 | 2963297137 | Industry 4.0 is becoming more and more important for manufacturers as the developments in the area of Internet of Things advance. Another technology gaining more attention is data stream processing systems. Although such streaming frameworks seem to be a natural fit for Industry 4.0 scenarios, their application in this context is still low. The contributions in this paper are threefold. Firstly, we present industry findings that we derived from site inspections with a focus on Industry 4.0. Moreover, our view on Industry 4.0 and important related aspects is elaborated. As a third contribution, we illustrate our opinion on why data stream processing technologies could act as an enabler for Industry 4.0 and point out possible obstacles on this way. | A recent work developed a framework called Production Assessment 4.0, which aims to support enterprises in developing Industry 4.0 use cases. To do so, the authors made use of the design thinking approach. After elaborating on the framework and its processes, a section about its evaluation is presented. Production Assessment 4.0 was evaluated in several consulting projects with enterprises. However, no details about, e.g., their data characteristics or their state of Industry 4.0 adoption progress are given @cite_17 . | {
"abstract": [
"Facing a wide range of new technologies and best practices within Industry 4.0, companies are seeking a systematic approach to identify potential application scenarios in their production. Best practices are often customized use cases – generally too specific to apply in a different manufacturing setting right away. The Production Assessment 4.0 presents a pragmatic approach to support companies to develop Industry 4.0 use cases in their factories. The approach follows the Design Thinking method and focus especially on the human role as a key aspect in the use cases designing process."
],
"cite_N": [
"@cite_17"
],
"mid": [
"2811438692"
]
} | Application of Data Stream Processing Technologies in Industry 4.0 - What is Missing? | Against the backdrop of technological and economic developments, the term Industry 4.0 has gained more and more popularity. Technically, new Internet of Things (IoT) technologies, such as sensors, are being created, sensor accuracy is increasing, and analytical IT systems are being developed that allow querying huge amounts of data within seconds, to name but a few developments. On the economic side, a substantial price decrease for sensor IoT equipment can be observed. This trend is expected to continue in the following years. To be more concrete, the price for an IoT node is expected to drop by about 50% from 2015 to 2020 (McKinsey&Company, 2015). These developments have fostered the increased deployment of IoT technologies in companies, especially in the manufacturing sector, and thus more IoT data is available to companies (Weiner and Line, 2014). In monetary terms, the total global worth of IoT technology is expected to reach USD 6.2 trillion by 2025. One of the industry sectors investing the most in IoT is industrial manufacturing (Intel, 2014).
A related term in the context of manufacturing that has gained attention in the past years is Industry 4.0. One reason for that is the potential it is seen to offer for creating added value for enterprises. A survey conducted by McKinsey in January 2016 amongst enterprises in the US, Germany, and Japan with at least 50 employees highlights the significance of Industry 4.0. The study reveals, e.g., that the majority of companies expect Industry 4.0 to increase competitiveness (McKinsey&Company, 2016).
One of the identified key challenges is integrating data from different sources to enable Industry 4.0 applications (McKinsey&Company, 2016). Especially with the emerging significance of IoT data, the fairly old challenge of integrating disparate data sources gets a new flavor. Data Stream Processing Systems (DSPSs) can be a technology suitable for tackling this issue of data integration. Within this paper, a view on Industry 4.0 as well as the potential of Data Stream Processing technologies in that context is presented.
Following the introduction, industry insights related to Industry 4.0 observed through interviews and site inspections are highlighted. In Section 3, we elaborate our view on Industry 4.0, i.e., our definition as well as our view on data integration and IoT. Afterward, Section 4 discusses DSPSs and their role in the area of Industry 4.0, including challenges regarding their application in Industry 4.0 settings. A section on related work and a conclusion complete this paper.
OBSERVATIONS IN INDUSTRY
Beginning in 2015, we conducted interviews with multiple enterprises with a focus on Industry 4.0 implementation strategies and associated challenges and solutions. In this section, we describe and contrast the Industry 4.0 efforts of two selected companies. Both enterprises belong to the manufacturing sector and are comparatively large, with more than 10,000 employees and revenue of more than EUR 1bn each.
Company I
The first company collects sensor and log data from two sources, its machines used for manufacturing as well as from its sold products used by its customers. About 250 of the vended machines were configured to collect and send sensor and log data to an external central cloud storage service back in late 2015. The data is sent as a batch every 23 hours and includes several state values, such as temperature and position information. Overall, that results in about 800GB data on a monthly basis. Another external company is responsible for data cleansing and some basic calculations. The results are then used by Company I. As Company I is producing the machines, they also developed the format of the log data that is collected. Over time, this format changed with different software releases, which introduces additional complexity with respect to data integration.
Regarding the machines used for manufacturing, five machines are configured to collect sensor data. This data is recorded every 100ms and sent every hour to the same cloud storage service. Each batch is about 20MB in size. It contains, e.g., information about energy consumption and position data.
As of late 2015, none of the collected data has ever been deleted. Moreover, the stored sensor data had not found its way into an application at that point in time. However, Company I expected growth in its services area. As part of that, it could imagine offering several services around its products for which the collected data would be useful. Predictive maintenance or remote service and support scenarios are examples of such services. Besides, the collected data could reveal further insights about product usage and behavior, which could help product development. The internally captured data could be used for, e.g., predictive maintenance or quality improvement scenarios. The knowledge about the production behavior of previously manufactured products can be combined, and the learnings gained can be used to support product development and production planning.
Company II
The second company has several measurement stations in its production line. At these stations, certain parts of the product-to-be are gauged. Resulting data are mostly coordinates, e.g., borehole positions. By doing so, possible inaccuracies added in previous production steps are identified. If an inaccuracy exceeds a threshold, the corresponding product is removed from the production line and the mistakes are corrected if possible.
Furthermore, there is a central database storing all warning or error messages that appear in the production line. On average, a number of messages in the upper five-digit range occurs on a single day, and this number can go up to more than a million messages. Besides the time at which the deviation took place, the point in time at which it is remedied is stored, along with further values describing the event.
With respect to Industry 4.0 applications, the company was in the evaluation process, meaning it was thinking about how the existing data could be used for such scenarios. Back in 2015, the stored warnings and errors had a documentary character rather than being used in applications for, e.g., preventing future deviations or optimizing processes. However, it was an objective to leverage this data more in such programs. The measurement data was considered first for this kind of evaluation.
Industry Study Conclusions
Both companies presented and studied share a positive view of Industry 4.0, meaning they see it as a chance rather than a threat, which fits the aforementioned survey conducted by McKinsey (McKinsey&Company, 2015). However, neither of the companies, both of which can be considered leaders with respect to market share or revenue, has been able to significantly leverage the potential of Industry 4.0. Neither of them is using data stream processing technologies in this domain so far. To be more concrete, IoT data is collected, but no major new applications using this data or even combining it with business data have been introduced. That might serve as an example of technological leaders struggling to implement new innovations. This situation is often referred to as the innovator's dilemma, which describes the challenge for successful companies to stay innovative (Christensen, 2013).
INDUSTRY 4.0
In this section, we elaborate on the Industry 4.0-related topics of data integration and the Internet of Things. Based on that, we present our view on the term Industry 4.0 afterward.
Data Integration
Being able to map data from different sources belonging together is crucial to get holistic pictures of processes, entities, and relationships. The more data can be combined, the more complete and valuable is the created view that is needed for fact-based assessments and decisions within enterprises. Consequently, better data integration and so more available data can lead to greater insights and understanding, better decisions, and thus, to a competitive advantage.
After giving a brief overview of the current situation in enterprises that we discovered through conducted interviews, site inspections, and research projects, our views on the terms horizontal and vertical data integration are elaborated.
Current Situation in Enterprises
Business processes are central artifacts that describe an enterprise and the infrastructure they are embedded in. Business systems represent such processes digitally, e.g., in the form of data model entities like a customer, customer order, product, production order, or journal entries. For different companies, the semantics of these entities can vary, which can hamper data integration exceeding company boundaries. Within a single company, definitions should be clear. However, that does not necessarily represent reality.
Besides business systems, sensors and IoT-related technologies are becoming a greater source of data describing processes and infrastructure in an enterprise. This information is usually connected to business systems, such as an Enterprise Resource Planning (ERP) system or the like, via a Manufacturing Execution System (MES) in the manufacturing sector. Additionally, there might be further systems installed below the MES that are responsible for managing the shop floor.
A typical IT landscape comprises many different business systems. Döhler, for instance, a company with more than 6,000 employees from the food and beverage industry, has more than ten business systems and supporting systems that need to be managed and where ideally data can be exchanged amongst each other. In addition to an ERP system, there are, e.g., systems for customer relationship management, extended warehouse management, and an enterprise portal (Döhler, 2019;SAP, 2018). Often, fragmented IT landscapes have been developed historically and complexity increased through, e.g., acquisitions. Simplification is a challenge in companies, e.g., due to the lack of knowledge about old systems that might be still used.
But even if all business systems are from the same vendor, entities can differ between systems. A centralization to a single ERP system is unlikely to happen for multiple reasons. Such arguments can be related to aspects like the security of sensitive data, e.g., HR data shall be decoupled from the main ERP, or the wish not to be dependent on a single software vendor for economic or risk diversification reasons. Figure 1 visualizes a very simplified IT landscape as it can be found at companies belonging to the manufacturing sector. It distinguishes between different system categories and highlights the areas of horizontal and vertical integration, which are explained in Section 3.1.2 and Section 3.1.3, respectively.
Horizontal Data Integration
We see horizontal data integration as a holistic view of business processes, i.e., from the beginning to the end. Technically, that means joining database tables stored within business systems that are involved in the business process execution as conceptually outlined in Figure 1. These links can be established, e.g., through foreign key dependencies. The greater the number of tables that can be connected, the more detailed and valuable the resulting view on a process. As mentioned before, enterprises generally have multiple business systems for the elaborated reasons, which increases the effort for achieving a horizontal data integration. Compared to vertical integration, horizontal integration is further developed having relatively advanced software solutions for achieving it.
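As a minimal illustration of such a foreign-key join, the following hypothetical Python/pandas sketch combines an order table from one business system with a customer table from another via a shared key; all table and column names are invented for the example.

```python
import pandas as pd

# Hypothetical extracts from two business systems.
orders = pd.DataFrame({
    "order_id": [100, 101],
    "customer_id": [1, 2],
    "amount": [250.0, 90.0],
})
customers = pd.DataFrame({
    "customer_id": [1, 2],
    "name": ["ACME GmbH", "Beta AG"],
})

# Foreign-key join: the more tables that can be linked this way,
# the more complete the horizontal view of the business process.
process_view = orders.merge(customers, on="customer_id", how="left")
print(process_view)
```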
Vertical Data Integration
Vertical data integration describes the connection of technical data created by IoT technologies and business systems, including the systems in between these two layers as depicted in Figure 1. That means, two different kinds of data have to be combined in contrast to integrating only business data as in horizontal data integration. These distinct data characteristics introduce new challenges.
While business data is well-structured and has a comparatively high degree of correctness, sensor data can be relatively unstructured and error-prone. In contrast to the close business process reference of business data, sensor data has a strong time and location reference. Moreover, both the volume and the creation speed of IoT data are generally higher, which impacts, e.g., the performance requirements on IT systems handling this kind of data (Hesse et al., 2017b).
Moreover, it is a challenge to map entities in business processes, such as a product that is being produced, to the corresponding IoT data that has been measured while exactly this product has been produced at the corresponding workplace. In contrast to integrating relatively homogeneous data among business systems, foreign keys cannot simply be used. Instead, a time-based approach is often applied, which can potentially introduce errors due to imprecise time measurements. However, the progress of vertical data integration we experienced in site inspections, e.g., in the form of being able to map sensor measurements created at production machines to the corresponding products that were being produced, is not as advanced as the horizontal integration. Nevertheless, vertical integration as in the previously described scenario is desired since it can help to get further insights about processes and thus, support to create an added value.
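To illustrate the time-based mapping mentioned above, the following simplified Python sketch (an assumption-level example, not taken from the paper) assigns each sensor reading to the product whose recorded production interval contains the reading's timestamp; in practice, clock skew and imprecise interval boundaries make exactly this step error-prone.

```python
def map_readings_to_products(production_intervals, readings):
    """production_intervals: list of (product_id, start, end) tuples.
    readings: list of (timestamp, value) tuples.
    Returns {product_id: [readings]}; readings outside every interval
    are collected under None."""
    assigned = {}
    for timestamp, value in readings:
        product = next(
            (pid for pid, start, end in production_intervals
             if start <= timestamp <= end),
            None,
        )
        assigned.setdefault(product, []).append((timestamp, value))
    return assigned

intervals = [("P-001", 100, 160), ("P-002", 161, 220)]
sensor = [(120, 21.5), (180, 22.1), (300, 19.8)]
print(map_readings_to_products(intervals, sensor))
# {'P-001': [(120, 21.5)], 'P-002': [(180, 22.1)], None: [(300, 19.8)]}
```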
Internet of Things
Internet of Things is a term often used in the context of Industry 4.0 that has versatile meanings. Originally emerged out of the area of radio frequency identification (RFID), where it described connected physical objects using this technology, the term IoT became broader over recent years. It is not limited to RFID technology anymore, but also comprises, e.g., things connected via sensors or machine-to-machine communication. Additionally, applications leveraging these technologies are referred to as IoT (Ashton et al., 2009;Miorandi et al., 2012).
We see IoT as network-connected physical objects, whereas it does not matter which exact technology is used for establishing a connection. Moreover, IoT is an enabler for Industry 4.0 as it is driving vertical data integration and thus, paving the way for new business applications. Through making machines or physical objects in general digitally accessible, new data can be analyzed, new insights be gained and a more holistic view of processes can be created. This increased level of live information can lead to a competitive advantage for enterprises.
Our View on Industry 4.0
We see Industry 4.0 as a term describing an advanced way of manufacturing enabled and driven by technological progress in various areas.
These areas can be categorized into two groups: developments with respect to IoT technologies and developments regarding IT systems. While the advances related to IoT make it possible to obtain new, higher-volume, and more precise measurements, the progress in IT systems nowadays allows high volumes of data to be analyzed with reasonable response times. Moreover, high volumes of data created at high velocity can also be handled with the help of modern DSPSs.
These achievements lead to new opportunities in manufacturing. New data is being generated in high volume and velocity, which can now also be analyzed in a reasonable amount of time with state-of-the-art IT systems. This natural fit of two technological developments generates opportunities. Making use of both advances in combination with full data integration, i.e., horizontally as well as vertically, raises the level of detail and completeness enterprises can have on their processes and entities. This information gain
• leads to the enablement of better data-driven decisions,
• facilitates new insights into processes or entities,
• creates the opportunity for new business applications, and
• allows for rethinking the way of manufacturing.
Specifically, holistic data integration enables a flexible and more customizable production, i.e., moving from the nowadays commonly existing batch-wise production to piece-wise production while not sacrificing economic performance. Although we have not observed a batch size of one as an explicitly formulated objective in our site inspections, it was considered a desirable situation. Generally, we got the impression that there are greater challenges related to IT than to the engineering aspect of IoT.
DATA STREAM PROCESSING SYSTEMS
In this section, our view on the potential role of DSPSs in the context of Industry 4.0 is presented. Moreover, the related challenges that need to be tackled are highlighted.
A Possible Role in Industry 4.0
Although data stream processing is not a new technology, it has gained more attention in the past couple of years (Hesse and Lorenz, 2015). Reasons for that are, on the one hand, technological advances, e.g., with respect to distributed systems, and on the other hand the growing need for such systems due to the increasing data volumes created through developments like IoT. We think that stream processing technologies have the potential to play a central role in the context of Industry 4.0. A reason for that is their suitability regarding the characteristics of the processed data, which fit the overall purpose behind DSPSs. Instead of issuing a query that is executed once and returns a single result set, as in a database management system (DBMS), DSPSs execute queries on a continuous basis. Similarly, IoT data is often generated on a continuous basis, which is contrary to traditional business data.
Altering requirements, e.g., due to growing data volumes introduced by added machines or advanced IoT technologies, can be handled as modern DSPSs are typically scalable distributed systems. As another consequence, high elasticity is enabled, i.e., nodes can be added or removed from the cluster as the workload increases or decreases. This flexibility is advantageous from an economic perspective. Especially manufacturers that do not produce during certain periods, generally speaking companies with large IoT workload variations, can benefit. Scalability can be reached by using a message broker between the sources of streaming data and the DSPS. That is a common approach seen in many architectures, both in industry and science (Hesse et al., 2017b). A schematic overview of a possible architecture is visualized in Figure 2.
IoT devices, such as manufacturing equipment, can send their measurements to a message broker, from which a DSPS can consume the data. Streaming applications that require more than IoT data, i.e., programs that need vertical integration, can also be realized using DSPSs. Corresponding data can be consumed via established interfaces, such as JDBC, and used to enrich the IoT data. If a horizontal data integration can be achieved in the business system layer, a holistic view on entities or processes can then be created in the DSPS, where all data is brought together. Additionally, data from MES systems or the like can be integrated, as depicted in Figure 2. That makes data stream processing technologies a suitable framework for developing Industry 4.0 applications whose use cases do not have further requirements that cannot be satisfied in this setting. Summarizing, since DSPSs are capable of handling high-volume and high-velocity IoT data as mentioned previously, they can act as an enabler for vertical integration and thus for Industry 4.0 scenarios. Data can be analyzed on the fly without the need to store high volumes of data in advance, which has a positive impact economically as well as on the performance side. A conceivable pre-aggregation, which would diminish these positive effects, is not needed. Moreover, aggregation comes at the cost of data loss and thus sacrifices accuracy.
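The following schematic Python sketch mirrors the data flow described above and in Figure 2 using stand-ins: an in-memory queue plays the role of the message broker and a dictionary plays the role of business data that a real deployment would fetch, e.g., via JDBC; all names are illustrative, and a production setup would use an actual broker client and a DSPS instead.

```python
from queue import Queue

# Stand-in for the message broker topic fed by IoT devices.
broker = Queue()
broker.put({"machine_id": "M7", "temperature": 81.2, "timestamp": 1652})
broker.put({"machine_id": "M9", "temperature": 64.0, "timestamp": 1653})

# Stand-in for business data (e.g., production orders) that a DSPS
# would pull from a business system via an interface such as JDBC.
production_orders = {"M7": "ORDER-4711", "M9": "ORDER-4712"}

def enrich(event, orders):
    """Join one streaming IoT event with business context (vertical integration)."""
    return {**event, "production_order": orders.get(event["machine_id"])}

while not broker.empty():
    enriched = enrich(broker.get(), production_orders)
    print(enriched)  # downstream operators could aggregate, filter, or alert
```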
Challenges
Certain challenges exist that could hinder an establishment of data stream processing technologies in the context of Industry 4.0 on a broader scale.
One reason is the existing lack of a broadly accepted abstraction layer for formulating queries or developing applications, such as SQL for DBMSs. Similarly, Stonebraker, Çetintemel, and Zdonik mentioned the need for DSPSs to support a high-level stream processing SQL as one of the eight requirements of real-time stream processing they defined in (Stonebraker et al., 2005). The lack of such an established abstraction layer introduces multiple challenges. It reduces flexibility for enterprises as, after choosing a certain system, they are tied comparatively tightly to exactly this system. Switching to another framework, e.g., due to altered system requirements or changed performance ratios amongst the group of existing systems, is more complex and thus costlier for companies. Streaming applications need to be developed using native system APIs, which results in high porting effort if a system is supposed to be exchanged, compared to the effort needed for switching a DBMS. In the latter case, the potentially needed SQL adaptations are relatively small since the same abstraction layer, namely SQL, is also used in the new system in typical scenarios. There are multiple system-specific SQL dialects developed for stream processing frameworks, but none of them has gained broader acceptance. However, there is the open-source project Apache Beam aiming to close this gap. It is not a domain-specific language like SQL, but a software development kit that allows writing programs, which can be executed on any of the supported stream processing engines. The impact of using this abstraction layer on selected state-of-the-art DSPSs with respect to performance is analyzed in (Hesse et al., 2019).
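As a small, hypothetical example of the kind of engine-independent program the Apache Beam SDK allows (not taken from the paper or from (Hesse et al., 2019)), the following Python pipeline computes an average value per machine from a handful of in-memory events; the same code could be handed to any supported runner.

```python
import apache_beam as beam

events = [("M7", 81.2), ("M7", 79.8), ("M9", 64.0)]

# The pipeline is written once against the Beam SDK; the executing
# stream processing engine is chosen via the runner configuration.
with beam.Pipeline() as pipeline:
    (
        pipeline
        | "CreateEvents" >> beam.Create(events)
        | "AveragePerMachine" >> beam.CombinePerKey(beam.combiners.MeanCombineFn())
        | "Print" >> beam.Map(print)
    )
```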
Furthermore, identifying the most suitable system might be a challenge for enterprises. Although the growing number of DSPSs developed in recent years is generally a good thing, the more choices there are, the harder it is to decide on a system. Typically, performance benchmarks are used for this task. Similarly to the previously described circumstances regarding the abstraction layers, the situation for DBMSs is more mature. While there are many well-known and frequently used benchmarks for databases, such as TPC-C, TPC-H, or TPC-DS, the area of DSPS benchmarks is significantly less developed. Linear Road is probably the best-known benchmark for stream processing architectures (Arasu et al., 2004). However, it does not reflect typical Industry 4.0 scenarios, in contrast to a benchmark currently under development proposed by (Hesse et al., 2017a), which could close the gap of not having a suitable benchmark for comparing different DSPSs for use in the Industry 4.0 domain.
Another challenge we recognized in site inspections is the identification of Industry 4.0 scenarios that could possibly create added value. Although this situation is not directly linked to DSPSs, based on our industry experience, thoughts about Industry 4.0 and the technologies that can be used barely include stream processing frameworks and their capabilities. This lack of awareness of streaming technologies results in them not being considered for new application scenarios. Moreover, when they are taken into account, there are often reservations, such as that there is no or only little knowledge about these technologies amongst the employees. Another fear is that modern DSPSs are very complex systems, which are hard to maintain and difficult to use for application development. However, these points could be resolved almost automatically in the near future if the development efforts and improvements of DSPSs stay as high as they are at the moment.
CONCLUSION
This paper presents a point of view on Industry 4.0 and on data stream processing systems in its context. The contributions are threefold. First, we present insights about the current situations and opinions at two selected companies with respect to Industry 4.0. This includes information about data characteristics and Industry 4.0 applications. All findings were derived from site inspections and the like.
Secondly, a viewpoint on Industry 4.0 as well as on further important and closely related aspects is given. Among other things, it ensures the common understanding needed for the third contribution.
This third part is about data stream processing systems. Particularly, it is about why and how this technology could become an enabler for Industry 4.0. A possible architecture for Industry 4.0 scenarios is proposed and obstacles hindering DSPSs from being applied more in this context are pointed out. | 3,772 |
1907.09387 | 2963297137 | Industry 4.0 is becoming more and more important for manufacturers as the developments in the area of Internet of Things advance. Another technology gaining more attention is data stream processing systems. Although such streaming frameworks seem to be a natural fit for Industry 4.0 scenarios, their application in this context is still low. The contributions in this paper are threefold. Firstly, we present industry findings that we derived from site inspections with a focus on Industry 4.0. Moreover, our view on Industry 4.0 and important related aspects is elaborated. As a third contribution, we illustrate our opinion on why data stream processing technologies could act as an enabler for Industry 4.0 and point out possible obstacles on this way. | With respect to Industry 4.0, many definitions and views have been published. An overview of selected perceptions of Industry 4.0 is presented in @cite_7 . Moreover, the authors also state that there is no generally accepted definition for the term Industry 4.0. | {
"abstract": [
"Abstract Lean Production is widely recognized and accepted in the industrial setting. It concerns the strict integration of humans in the manufacturing process, a continuous improvement and focus on value-adding activities by avoiding waste. However, a new paradigm called Industry 4.0 or the fourth industrial revolution has recently emerged in the manufacturing sector. It allows creating a smart network of machines, products, components, properties, individuals and ICT systems in the entire value chain to have an intelligent factory. So, now a question arises if, and how these two approaches can coexist and support each other."
],
"cite_N": [
"@cite_7"
],
"mid": [
"2530328109"
]
} | Application of Data Stream Processing Technologies in Industry 4.0 - What is Missing? | Against the backdrop of technological and economic developments, the term Industry 4.0 has gained more and more popularity. Technically, new Internet of Things (IoT) technologies, such as sensors, are being created, sensor accuracy is increasing, and analytical IT systems are being developed that allow querying huge amounts of data within seconds, to name but a few developments. On the economic side, a substantial price decrease for sensor IoT equipment can be observed. This trend is expected to continue in the following years. To be more concrete, the price for an IoT node is expected to drop by about 50% from 2015 to 2020 (McKinsey&Company, 2015). These developments have fostered the increased deployment of IoT technologies in companies, especially in the manufacturing sector, and thus more IoT data is available to companies (Weiner and Line, 2014). In monetary terms, the total global worth of IoT technology is expected to reach USD 6.2 trillion by 2025. One of the industry sectors investing the most in IoT is industrial manufacturing (Intel, 2014).
A related term in the context of manufacturing that has gained attention in the past years is Industry 4.0. One reason for that is the potential it is seen to offer for creating added value for enterprises. A survey conducted by McKinsey in January 2016 amongst enterprises in the US, Germany, and Japan with at least 50 employees highlights the significance of Industry 4.0. The study reveals, e.g., that the majority of companies expect Industry 4.0 to increase competitiveness (McKinsey&Company, 2016).
One of the identified key challenges is integrating data from different sources to enable Industry 4.0 applications (McKinsey&Company, 2016). Especially with the emerging significance of IoT data, the fairly old challenge of integrating disparate data sources gets a new flavor. Data Stream Processing Systems (DSPSs) can be a technology suitable for tackling this issue of data integration. Within this paper, a view on Industry 4.0 as well as the potential of Data Stream Processing technologies in that context is presented.
Following the introduction, industry insights related to Industry 4.0 observed through interviews and site inspections are highlighted. In Section 3, we elaborate our view on Industry 4.0, i.e., our definition as well as our view on data integration and IoT. Afterward, Section 4 discusses DSPSs and their role in the area of Industry 4.0, including challenges regarding their application in Industry 4.0 settings. A section on related work and a conclusion complete this paper.
OBSERVATIONS IN INDUSTRY
Beginning in 2015, we conducted interviews with multiple enterprises with a focus on Industry 4.0 implementation strategies and associated challenges and solutions. In this section, we describe and contrast the Industry 4.0 efforts of two selected companies. Both enterprises belong to the manufacturing sector and are comparatively large, with more than 10,000 employees and revenue of more than EUR 1bn each.
Company I
The first company collects sensor and log data from two sources, its machines used for manufacturing as well as from its sold products used by its customers. About 250 of the vended machines were configured to collect and send sensor and log data to an external central cloud storage service back in late 2015. The data is sent as a batch every 23 hours and includes several state values, such as temperature and position information. Overall, that results in about 800GB data on a monthly basis. Another external company is responsible for data cleansing and some basic calculations. The results are then used by Company I. As Company I is producing the machines, they also developed the format of the log data that is collected. Over time, this format changed with different software releases, which introduces additional complexity with respect to data integration.
Regarding the machines used for manufacturing, five machines are configured to collect sensor data. This data is recorded every 100ms and sent every hour to the same cloud storage service. Each batch is about 20MB in size. It contains, e.g., information about energy consumption and position data.
As of late 2015, none of the collected data has ever been deleted. Moreover, the stored sensor data had not found its way into an application at that point in time. However, Company I expected growth in its services area. As part of that, it could imagine offering several services around its products for which the collected data would be useful. Predictive maintenance or remote service and support scenarios are examples of such services. Besides, the collected data could reveal further insights about product usage and behavior, which could help product development. The internally captured data could be used for, e.g., predictive maintenance or quality improvement scenarios. The knowledge about the production behavior of previously manufactured products can be combined, and the learnings gained can be used to support product development and production planning.
Company II
The second company has several measurement stations in its production line. At these stations, certain parts of the product-to-be are gauged. Resulting data are mostly coordinates, e.g., borehole positions. By doing so, possible inaccuracies added in previous production steps are identified. If an inaccuracy exceeds a threshold, the corresponding product is removed from the production line and the mistakes are corrected if possible.
Furthermore, there is a central database storing all warning or error messages that appear in the production line. On average, a number of messages in the upper five-digit range occurs on a single day, and this number can go up to more than a million messages. Besides the time at which the deviation took place, the point in time at which it is remedied is stored, along with further values describing the event.
With respect to Industry 4.0 applications, the company was in the evaluation process, meaning it was thinking about how the existing data could be used for such scenarios. Back in 2015, the stored warnings and errors had a documentary character rather than being used in applications for, e.g., preventing future deviations or optimizing processes. However, it was an objective to leverage this data more in such programs. The measurement data was considered first for this kind of evaluation.
Industry Study Conclusions
Both companies presented and studied share a positive view of Industry 4.0, meaning they see it as a chance rather than a threat, which fits the aforementioned survey conducted by McKinsey (McKinsey&Company, 2015). However, neither of the companies, both of which can be considered leaders with respect to market share or revenue, has been able to significantly leverage the potential of Industry 4.0. Neither of them is using data stream processing technologies in this domain so far. To be more concrete, IoT data is collected, but no major new applications using this data or even combining it with business data have been introduced. That might serve as an example of technological leaders struggling to implement new innovations. This situation is often referred to as the innovator's dilemma, which describes the challenge for successful companies to stay innovative (Christensen, 2013).
INDUSTRY 4.0
In this section, we elaborate on the Industry 4.0-related topics of data integration and the Internet of Things. Based on that, we present our view on the term Industry 4.0 afterward.
Data Integration
Being able to map data from different sources belonging together is crucial to get holistic pictures of processes, entities, and relationships. The more data can be combined, the more complete and valuable is the created view that is needed for fact-based assessments and decisions within enterprises. Consequently, better data integration and so more available data can lead to greater insights and understanding, better decisions, and thus, to a competitive advantage.
After giving a brief overview of the current situation in enterprises that we discovered through conducted interviews, site inspections, and research projects, our views on the terms horizontal and vertical data integration are elaborated.
Current Situation in Enterprises
Business processes are central artifacts that describe an enterprise and the infrastructure they are embedded in. Business systems represent such processes digitally, e.g., in the form of data model entities like a customer, customer order, product, production order, or journal entries. For different companies, the semantics of these entities can vary, which can hamper data integration exceeding company boundaries. Within a single company, definitions should be clear. However, that does not necessarily represent reality.
Besides business systems, sensors and IoT-related technologies are becoming a greater source of data describing processes and infrastructure in an enterprise. This information is usually connected to business systems, such as an Enterprise Resource Planning (ERP) system or the like, via a Manufacturing Execution System (MES) in the manufacturing sector. Additionally, there might be further systems installed below the MES that are responsible for managing the shop floor.
A typical IT landscape comprises many different business systems. Döhler, for instance, a company with more than 6,000 employees from the food and beverage industry, has more than ten business systems and supporting systems that need to be managed and where ideally data can be exchanged amongst each other. In addition to an ERP system, there are, e.g., systems for customer relationship management, extended warehouse management, and an enterprise portal (Döhler, 2019;SAP, 2018). Often, fragmented IT landscapes have been developed historically and complexity increased through, e.g., acquisitions. Simplification is a challenge in companies, e.g., due to the lack of knowledge about old systems that might be still used.
But even if all business systems are from the same vendor, entities can differ between systems. A centralization to a single ERP system is unlikely to happen for multiple reasons. Such arguments can be related to aspects like the security of sensitive data, e.g., HR data shall be decoupled from the main ERP, or the wish not to be dependent on a single software vendor for economic or risk diversification reasons. Figure 1 visualizes a very simplified IT landscape as it can be found at companies belonging to the manufacturing sector. It distinguishes between different system categories and highlights the areas of horizontal and vertical integration, which are explained in Section 3.1.2 and Section 3.1.3, respectively.
Horizontal Data Integration
We see horizontal data integration as a holistic view of business processes, i.e., from the beginning to the end. Technically, that means joining database tables stored within business systems that are involved in the business process execution as conceptually outlined in Figure 1. These links can be established, e.g., through foreign key dependencies. The greater the number of tables that can be connected, the more detailed and valuable the resulting view on a process. As mentioned before, enterprises generally have multiple business systems for the elaborated reasons, which increases the effort for achieving a horizontal data integration. Compared to vertical integration, horizontal integration is further developed having relatively advanced software solutions for achieving it.
Vertical Data Integration
Vertical data integration describes the connection of technical data created by IoT technologies and business systems, including the systems in between these two layers as depicted in Figure 1. That means, two different kinds of data have to be combined in contrast to integrating only business data as in horizontal data integration. These distinct data characteristics introduce new challenges.
While business data is well-structured and has a comparatively high degree of correctness, sensor data can be relatively unstructured and error-prone. In contrast to the close business process reference of business data, sensor data has a strong time and location reference. Moreover, both the volume and the creation speed of IoT data are generally higher, which impacts, e.g., the performance requirements on IT systems handling this kind of data (Hesse et al., 2017b).
Moreover, it is a challenge to map entities in business processes, such as a product that is being produced, to the corresponding IoT data that has been measured while exactly this product has been produced at the corresponding workplace. In contrast to integrating relatively homogeneous data among business systems, foreign keys cannot simply be used. Instead, a time-based approach is often applied, which can potentially introduce errors due to imprecise time measurements. However, the progress of vertical data integration we experienced in site inspections, e.g., in the form of being able to map sensor measurements created at production machines to the corresponding products that were being produced, is not as advanced as the horizontal integration. Nevertheless, vertical integration as in the previously described scenario is desired since it can help to get further insights about processes and thus, support to create an added value.
Internet of Things
Internet of Things is a term often used in the context of Industry 4.0 that has versatile meanings. Originally emerged out of the area of radio frequency identification (RFID), where it described connected physical objects using this technology, the term IoT became broader over recent years. It is not limited to RFID technology anymore, but also comprises, e.g., things connected via sensors or machine-to-machine communication. Additionally, applications leveraging these technologies are referred to as IoT (Ashton et al., 2009;Miorandi et al., 2012).
We see IoT as network-connected physical objects, whereas it does not matter which exact technology is used for establishing a connection. Moreover, IoT is an enabler for Industry 4.0 as it is driving vertical data integration and thus, paving the way for new business applications. Through making machines or physical objects in general digitally accessible, new data can be analyzed, new insights be gained and a more holistic view of processes can be created. This increased level of live information can lead to a competitive advantage for enterprises.
Our View on Industry 4.0
We see Industry 4.0 as a term describing an advanced way of manufacturing enabled and driven by technological progress in various areas.
These areas can be categorized into two groups: developments with respect to IoT technologies and developments regarding IT systems. While the advances related to IoT make it possible to obtain new, higher-volume, and more precise measurements, the progress in IT systems nowadays allows high volumes of data to be analyzed with reasonable response times. Moreover, high volumes of data created at high velocity can also be handled with the help of modern DSPSs.
These achievements lead to new opportunities in manufacturing. New data is being generated in high volume and velocity, which can now also be analyzed in a reasonable amount of time with state-of-the-art IT systems. This natural fit of two technological developments generates opportunities. Making use of both advances in combination with full data integration, i.e., horizontally as well as vertically, raises the level of detail and completeness enterprises can have on their processes and entities. This information gain
• leads to the enablement of better data-driven decisions,
• facilitates new insights into processes or entities,
• creates the opportunity for new business applications, and
• allows for rethinking the way of manufacturing.
Specifically, holistic data integration enables a flexible and more customizable production, i.e., moving from the nowadays commonly existing batch-wise production to piece-wise production while not sacrificing economic performance. Although we have not observed a batch size of one as an explicitly formulated objective in our site inspections, it was considered a desirable situation. Generally, we got the impression that there are greater challenges related to IT than to the engineering aspect of IoT.
DATA STREAM PROCESSING SYSTEMS
In this section, our view on the potential role of DSPSs in the context of Industry 4.0 is presented. Moreover, the related challenges that need to be tackled are highlighted.
A Possible Role in Industry 4.0
Although data stream processing is not a new technology, it has gained more attention in the past couple of years (Hesse and Lorenz, 2015). Reasons for that are, on the one hand, technological advances, e.g., with respect to distributed systems, and on the other hand the growing need for such systems due to the increasing data volumes created through developments like IoT. We think that stream processing technologies have the potential to play a central role in the context of Industry 4.0. A reason for that is their suitability regarding the characteristics of the processed data, which fit the overall purpose behind DSPSs. Instead of issuing a query that is executed once and returns a single result set, as in a database management system (DBMS), DSPSs execute queries on a continuous basis. Similarly, IoT data is often generated on a continuous basis, which is contrary to traditional business data.
Altering requirements, e.g., due to growing data volumes introduced by added machines or advanced IoT technologies, can be handled as modern DSPSs are typically scalable distributed systems. As another consequence, high elasticity is enabled, i.e., nodes can be added or removed from the cluster as the workload increases or decreases. This flexibility is advantageous from an economic perspective. Especially manufacturers that do not produce during certain periods, generally speaking companies with large IoT workload variations, can benefit. Scalability can be reached by using a message broker between the sources of streaming data and the DSPS. That is a common approach seen in many architectures, both in industry and science (Hesse et al., 2017b). A schematic overview of a possible architecture is visualized in Figure 2.
IoT devices, such as manufacturing equipment, can send their measurements to a message broker, from which a DSPS can consume the data. Streaming applications that require more than IoT data, i.e., programs that need vertical integration, can also be realized using DSPSs. Corresponding data can be consumed via established interfaces, such as JDBC, and used to enrich the IoT data. If a horizontal data integration can be achieved in the business system layer, a holistic view on entities or processes can then be created in the DSPS, where all data is brought together. Additionally, data from MES systems or the like can be integrated, as depicted in Figure 2. That makes data stream processing technologies a suitable framework for developing Industry 4.0 applications whose use cases do not have further requirements that cannot be satisfied in this setting. Summarizing, since DSPSs are capable of handling high-volume and high-velocity IoT data as mentioned previously, they can act as an enabler for vertical integration and thus for Industry 4.0 scenarios. Data can be analyzed on the fly without the need to store high volumes of data in advance, which has a positive impact economically as well as on the performance side. A conceivable pre-aggregation, which would diminish these positive effects, is not needed. Moreover, aggregation comes at the cost of data loss and thus sacrifices accuracy.
Challenges
Certain challenges exist that could hinder an establishment of data stream processing technologies in the context of Industry 4.0 on a broader scale.
One reason is the lack of a broadly accepted abstraction layer for formulating queries or developing applications, such as SQL for DBMSs. Similarly, Stonebraker, Çetintemel, and Zdonik mentioned the need for DSPSs to support a high-level stream processing SQL as one of the eight requirements of real-time stream processing they defined in (Stonebraker et al., 2005). The lack of such an established abstraction layer introduces multiple challenges. It reduces flexibility for enterprises: after choosing a certain system, they are tied comparatively tightly to exactly this system. Switching to another framework, e.g., due to altered system requirements or changed performance ratios amongst the group of existing systems, is more complex and thus costlier for companies. Streaming applications need to be developed using native system APIs, which results in a high porting effort if a system is supposed to be exchanged, compared to the effort needed for switching a DBMS; there, the potentially needed SQL adaptations are relatively small, since the same abstraction layer, namely SQL, is typically also used in the new system. There are multiple system-specific SQL dialects developed for stream processing frameworks, but none of them has gained broader acceptance. However, the open-source project Apache Beam aims to close this gap. It is not a domain-specific language like SQL, but a software development kit that allows writing programs that can be executed on any of the supported stream processing engines. The impact of using this abstraction layer on the performance of selected state-of-the-art DSPSs is analyzed in (Hesse et al., 2019).
Furthermore, identifying the most suitable system might be a challenge for enterprises. Although the growing number of DSPSs developed in recent years is generally a good thing, the more choices there are, the harder it is to decide on a system. Typically, performance benchmarks are used for this task. Similar to the previously described circumstances regarding abstraction layers, the situation for DBMSs is more mature. While there are many well-known and often used benchmarks for databases, such as TPC-C, TPC-H, or TPC-DS, the area of DSPS benchmarks is significantly less developed. Linear Road is probably the best-known benchmark for stream processing architectures (Arasu et al., 2004). However, it does not reflect typical Industry 4.0 scenarios, in contrast to a benchmark currently under development proposed by (Hesse et al., 2017a), which could close the gap of not having a suitable benchmark for comparing different DSPSs for use in the Industry 4.0 domain.
Another challenge we recognized in site inspections is the identification of Industry 4.0 scenarios that could create added value. Although this situation is not directly linked to DSPSs, in our industry experience, thoughts about Industry 4.0 and applicable technologies rarely include stream processing frameworks and their capabilities. This lack of awareness of streaming technologies results in them not being considered for new application scenarios. Moreover, when they are taken into account, there are often reservations, such as that there is little or no knowledge about these technologies amongst the employees. Another fear is that modern DSPSs are very complex systems that are hard to maintain and difficult to use for application development. However, these points could be resolved in the near future if development efforts and improvements of DSPSs remain as high as they are at the moment.
CONCLUSION
The present paper presents a point of view on Industry 4.0 and on data stream processing systems in its context. The contributions are threefold. First, we present insights about the current situations and opinions at two selected companies with respect to Industry 4.0. This includes information about data characteristics and Industry 4.0 applications. All findings were derived from site inspections and similar activities.
Secondly, a viewpoint on Industry 4.0 as well as on further important and closely related aspects is given. Among others, it ensures a common understanding needed for the third contribution.
This third part is about data stream processing systems. Particularly, it is about why and how this technology could become an enabler for Industry 4.0. A possible architecture for Industry 4.0 scenarios is proposed and obstacles hindering DSPSs from being applied more in this context are pointed out. | 3,772 |
1907.09387 | 2963297137 | Industry 4.0 is becoming more and more important for manufacturers as the developments in the area of Internet of Things advance. Another technology gaining more attention is data stream processing systems. Although such streaming frameworks seem to be a natural fit for Industry 4.0 scenarios, their application in this context is still low. The contributions in this paper are threefold. Firstly, we present industry findings that we derived from site inspections with a focus on Industry 4.0. Moreover, our view on Industry 4.0 and important related aspects is elaborated. As a third contribution, we illustrate our opinion on why data stream processing technologies could act as an enabler for Industry 4.0 and point out possible obstacles along the way. | Another work presents design principles for Industry 4.0 that are derived through text analysis and literature studies @cite_5 . This result is intended to help both the scientific community and practitioners. In total, four design principles were identified, namely technical assistance, interconnection, decentralized decisions, and information transparency.
"abstract": [
"The increasing integration of the Internet of Everything into the industrial value chain has built the foundation for the next industrial revolution called Industrie 4.0. Although Industrie 4.0 is currently a top priority for many companies, research centers, and universities, a generally accepted understanding of the term does not exist. As a result, discussing the topic on an academic level is difficult, and so is implementing Industrie 4.0 scenarios. Based on a quantitative text analysis and a qualitative literature review, the paper identifies design principles of Industrie 4.0. Taking into account these principles, academics may be enabled to further investigate on the topic, while practitioners may find assistance in identifying appropriate scenarios. A case study illustrates how the identified design principles support practitioners in identifying Industrie 4.0 scenarios."
],
"cite_N": [
"@cite_5"
],
"mid": [
"2295939521"
]
} | Application of Data Stream Processing Technologies in Industry 4.0 - What is Missing? | Against the backdrop of technological and economic developments, the term Industry 4.0 has gained more and more popularity. Technically, new Internet of Things (IoT) technologies, such as sensors, are being created, sensor accuracy is increasing, and analytical IT systems are being developed that allow querying huge amounts of data within seconds, to name but a few. On the economic side, a substantial price decrease for IoT sensor equipment can be recognized. This trend is expected to continue in the following years. To be more concrete, the price for an IoT node is expected to drop by about 50% from 2015 to 2020 (McKinsey&Company, 2015). These developments fostered the increased deployment of IoT technologies in companies, especially in the manufacturing sector, and thus more IoT data is available to companies (Weiner and Line, 2014). In monetary terms, the total global worth of IoT technology is expected to reach USD 6.2 trillion by 2025. One of the industry sectors investing the most in IoT is industrial manufacturing (Intel, 2014).
A related term in the context of manufacturing that has gained attention in the past years is Industry 4.0. One reason for that is the potential seen in it with respect to creating added value for enterprises. A survey conducted by McKinsey in January 2016 amongst enterprises in the US, Germany, and Japan with at least 50 employees highlights the significance of Industry 4.0. The study reveals, e.g., that the majority of companies expect Industry 4.0 to increase competitiveness (McKinsey&Company, 2016).
One of the identified key challenges is integrating data from different sources to enable Industry 4.0 applications (McKinsey&Company, 2016). Especially with the emerging significance of IoT data, the fairly old challenge of integrating disparate data sources gets a new flavor. Data Stream Processing Systems (DSPSs) can be a technology suitable for tackling this issue of data integration. Within this paper, a view on Industry 4.0 as well as the potential of Data Stream Processing technologies in that context is presented.
Following the introduction, industry insights related to Industry 4.0 observed through interviews and site inspections are highlighted. In Section 3, we elaborate our view on Industry 4.0, i.e., our definition as well as our view on data integration and IoT. Afterward, Section 4 discusses DSPSs and their role in the area of Industry 4.0, including challenges regarding their application in Industry 4.0 settings. A section to related work and a conclusion complete this paper.
OBSERVATIONS IN INDUSTRY
Beginning in 2015, we conducted interviews with multiple enterprises with a focus on Industry 4.0 implementation strategies and associated challenges and solutions. In this section, we describe and contrast the Industry 4.0 efforts of two selected companies. Both enterprises belong to the manufacturing sector and are comparatively large, with more than 10,000 employees and revenue of more than €1bn each.
Company I
The first company collects sensor and log data from two sources, its machines used for manufacturing as well as from its sold products used by its customers. About 250 of the vended machines were configured to collect and send sensor and log data to an external central cloud storage service back in late 2015. The data is sent as a batch every 23 hours and includes several state values, such as temperature and position information. Overall, that results in about 800GB data on a monthly basis. Another external company is responsible for data cleansing and some basic calculations. The results are then used by Company I. As Company I is producing the machines, they also developed the format of the log data that is collected. Over time, this format changed with different software releases, which introduces additional complexity with respect to data integration.
Regarding the machines used for manufacturing, five machines are configured to collect sensor data. This data is recorded every 100 ms and sent every hour to the same cloud storage service. Each batch is about 20 MB in size. It contains, e.g., information about energy consumption and position data.
As of late 2015, none of the collected data has ever been deleted. Moreover, the stored sensor data had not found its way into an application at that point in time. However, Company I expected growth in its services area. As part of that, it could imagine offering several services around its products for which the collected data would be useful. Predictive maintenance or remote service and support scenarios are an example of such services. Besides, the collected data could reveal further insights about product usage and behavior, which could help product development. The internally captured data could be used for, e.g., predictive maintenance or quality improvements scenarios. The knowledge about production behaviors of previously manufactured products can be combined and gained learnings can be used to support product development and production planning.
Company II
The second company has several measurement stations in its production line. At these stations, certain parts of the product-to-be are gauged. Resulting data are mostly coordinates, e.g., borehole positions. By doing so, possible inaccuracies added in previous production steps are identified. If an inaccuracy exceeds a threshold, the corresponding product is removed from the production line and the mistakes are corrected if possible.
Furthermore, there is a central database storing all warning and error messages that appear in the production line. On average, a high five-digit number of messages occurs on a single day, and this number can go up to more than a million messages. Besides the time at which a deviation took place, the point in time when it is remedied is stored, along with further values describing the event.
With respect to Industry 4.0 applications, the company was in the evaluation process, meaning thinking about how the existing data could be used for such scenarios. Back in 2015, the stored warnings and errors had a documentary character rather than being used in applications for, e.g., preventing future deviations or optimizing processes. However, it was an objective to leverage this data more in such kind of programs. The measurement data was considered first for this kind of evaluations.
Industry Study Conclusions
Both studied companies share a positive view on Industry 4.0, meaning they see it as a chance rather than a threat, which fits the aforementioned survey conducted by McKinsey (McKinsey&Company, 2015). However, neither of the companies, which can both be considered leaders with respect to market share or revenue, has been able to significantly leverage the potential of Industry 4.0. Neither of them is using data stream processing technologies in this domain so far. To be more concrete, IoT data is collected, but no major new applications using this data, or even combining it with business data, have been introduced. That might serve as an example of technological leaders struggling to implement new innovations. This situation is often referred to as the innovator's dilemma, which describes the challenge for successful companies to stay innovative (Christensen, 2013).
INDUSTRY 4.0
In this section, we elaborate on the Industry 4.0-related topics of data integration and the Internet of Things. Based on that, we present our view on the term Industry 4.0 afterward.
Data Integration
Being able to map data from different sources belonging together is crucial to get holistic pictures of processes, entities, and relationships. The more data can be combined, the more complete and valuable is the created view that is needed for fact-based assessments and decisions within enterprises. Consequently, better data integration and so more available data can lead to greater insights and understanding, better decisions, and thus, to a competitive advantage.
After giving a brief overview of the current situation in enterprises that we discovered through conducted interviews, site inspections, and research projects, our views on the terms horizontal and vertical data integration are elaborated.
Current Situation in Enterprises
Business processes are central artifacts that describe an enterprise and the infrastructure they are embedded in. Business systems represent such processes digitally, e.g., in the form of data model entities like a customer, customer order, product, production order, or journal entries. For different companies, the semantics of these entities can vary, which can hamper data integration exceeding company boundaries. Within a single company, definitions should be clear. However, that does not necessarily represent reality.
Besides business systems, sensors or IoT-related technologies become a greater source of data that is describing processes and infrastructure in an enterprise. This information is usually connected to business systems, such as an Enterprise Resource Planning (ERP) system or alike, via a Machine Execution System (MES) in the manufacturing sector. Additionally, there might be more systems installed underlying the MES responsible for managing the shop floor.
A typical IT landscape comprises many different business systems. Döhler, for instance, a company with more than 6,000 employees from the food and beverage industry, has more than ten business systems and supporting systems that need to be managed and where ideally data can be exchanged amongst each other. In addition to an ERP system, there are, e.g., systems for customer relationship management, extended warehouse management, and an enterprise portal (Döhler, 2019;SAP, 2018). Often, fragmented IT landscapes have been developed historically and complexity increased through, e.g., acquisitions. Simplification is a challenge in companies, e.g., due to the lack of knowledge about old systems that might be still used.
But even if all business systems are from the same vendor, entities can differ between systems. Centralization to a single ERP system is unlikely to happen for multiple reasons. Such arguments can relate to aspects like the security of sensitive data, e.g., HR data that shall be decoupled from the main ERP, or the wish not to be dependent on a single software vendor for economic or risk diversification reasons. Figure 1 visualizes a very simplified IT landscape as it can be found at companies belonging to the manufacturing sector. It distinguishes between different system categories and highlights the areas of horizontal and vertical integration, which are explained in Section 3.1.2 and Section 3.1.3, respectively.
Horizontal Data Integration
We see horizontal data integration as a holistic view of business processes, i.e., from beginning to end. Technically, that means joining database tables stored within the business systems that are involved in the business process execution, as conceptually outlined in Figure 1. These links can be established, e.g., through foreign key dependencies. The greater the number of tables that can be connected, the more detailed and valuable the resulting view of a process. As mentioned before, enterprises generally have multiple business systems for the reasons elaborated above, which increases the effort for achieving horizontal data integration. Compared to vertical integration, horizontal integration is more mature, with relatively advanced software solutions available for achieving it.
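To make the idea of joining business tables via foreign keys concrete, the following minimal sketch joins a hypothetical order table with a delivery table on an assumed order_id key using pandas; the table and column names are illustrative and not taken from the paper.

```python
import pandas as pd

# Hypothetical extracts from two business systems (e.g., ERP and logistics).
orders = pd.DataFrame({
    "order_id": [1001, 1002, 1003],
    "customer": ["ACME", "Globex", "Initech"],
    "product": ["valve", "pump", "valve"],
})
deliveries = pd.DataFrame({
    "order_id": [1001, 1002],
    "shipped_at": ["2019-03-01", "2019-03-04"],
    "carrier": ["DHL", "UPS"],
})

# Horizontal integration: connect the tables via the foreign key order_id.
# A left join keeps orders that have not been shipped yet.
process_view = orders.merge(deliveries, on="order_id", how="left")
print(process_view)
```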
Vertical Data Integration
Vertical data integration describes the connection of technical data created by IoT technologies and business systems, including the systems in between these two layers as depicted in Figure 1. That means, two different kinds of data have to be combined in contrast to integrating only business data as in horizontal data integration. These distinct data characteristics introduce new challenges.
While business data is well-structured and with a comparatively high degree of correctness, sensor data can be relatively unstructured and error-prone. Contrary to the close business process reference of business data, sensors have a strong time and location reference. Moreover, both volume and creation speed of IoT data is generally higher, which impacts, e.g., the performance requirements on IT systems handling this kind of data (Hesse et al., 2017b).
Moreover, it is a challenge to map entities in business processes, such as a product that is being produced, to the corresponding IoT data measured while exactly this product was being produced at the corresponding workplace. In contrast to integrating relatively homogeneous data among business systems, foreign keys cannot simply be used. Instead, a time-based approach is often applied, which can potentially introduce errors due to imprecise time measurements. However, the progress of vertical data integration we experienced in site inspections, e.g., in the form of being able to map sensor measurements created at production machines to the corresponding products being produced, is not as advanced as that of horizontal integration. Nevertheless, vertical integration as in the previously described scenario is desired, since it can help to gain further insights about processes and thus supports creating added value.
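As a rough illustration of such a time-based mapping, the sketch below assigns timestamped sensor readings to production orders whose start and end times bracket the reading; the data structures and field names are assumptions made only for illustration and are not part of the paper.

```python
from datetime import datetime

# Hypothetical production orders with the interval in which each product was produced.
orders = [
    {"product_id": "P-001", "start": datetime(2019, 3, 1, 8, 0), "end": datetime(2019, 3, 1, 8, 30)},
    {"product_id": "P-002", "start": datetime(2019, 3, 1, 8, 30), "end": datetime(2019, 3, 1, 9, 0)},
]

# Hypothetical sensor readings from the machine that produced these products.
readings = [
    {"ts": datetime(2019, 3, 1, 8, 10), "temperature": 71.3},
    {"ts": datetime(2019, 3, 1, 8, 45), "temperature": 74.8},
]

def assign_reading(reading, orders):
    """Return the product that was being produced when the reading was taken."""
    for order in orders:
        if order["start"] <= reading["ts"] < order["end"]:
            return order["product_id"]
    return None  # clock skew or production gaps can leave readings unassigned

for r in readings:
    print(r["ts"], "->", assign_reading(r, orders))
```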
Internet of Things
Internet of Things is a term often used in the context of Industry 4.0 that has versatile meanings. Originally emerged out of the area of radio frequency identification (RFID), where it described connected physical objects using this technology, the term IoT became broader over recent years. It is not limited to RFID technology anymore, but also comprises, e.g., things connected via sensors or machine-to-machine communication. Additionally, applications leveraging these technologies are referred to as IoT (Ashton et al., 2009;Miorandi et al., 2012).
We see IoT as network-connected physical objects, whereas it does not matter which exact technology is used for establishing a connection. Moreover, IoT is an enabler for Industry 4.0 as it is driving vertical data integration and thus, paving the way for new business applications. Through making machines or physical objects in general digitally accessible, new data can be analyzed, new insights be gained and a more holistic view of processes can be created. This increased level of live information can lead to a competitive advantage for enterprises.
Our View on Industry 4.0
We see Industry 4.0 as a term describing an advanced way of manufacturing enabled and driven by technological progress in various areas.
These areas can be categorized into two groups, developments with respect to IoT technologies and regarding IT systems. While the advances related to IoT enable to gain new, higher volumes and more precise measurements, the IT system development progresses allow to analyze high volumes of data with reasonable response times nowadays. Moreover, high volumes of data created with a high velocity can also be handled with the help of modern DSPSs.
These achievements lead to new opportunities in manufacturing. New data is being generated in high volume and velocity, which can now also be analyzed in a reasonable amount of time with state-of-the-art IT systems. This natural fit of two technological developments generates opportunities. Making use of both advances in combination with full data integration, i.e., horizontally as well as vertically, raises the level of detail and completeness enterprises can have on their processes and entities. This information gain
• leads to the enablement of better data-driven decisions,
• facilitates new insights into processes or entities,
• creates the opportunity for new business applications, and
• allows for rethinking the way of manufacturing.
Specifically, holistic data integration enables a flexible and more customizable production, i.e., moving from the batch-wise production that is common today to piece-wise production while not sacrificing economic performance. Although we have not observed a batch size of one as an explicitly formulated objective in our site inspections, it was considered a desirable situation. Generally, we got the impression that the challenges related to IT are greater than those related to the engineering aspect of IoT.
DATA STREAM PROCESSING SYSTEMS
In this section, our view on the potential role of DSPSs in the context of Industry 4.0 is presented. Moreover, the related challenges that need to be tackled are highlighted.
A Possible Role in Industry 4.0
Although data stream processing systems are not a new technology, they have gained more attention in the past couple of years (Hesse and Lorenz, 2015). Reasons for that are technological advances, e.g., with respect to distributed systems, on the one hand, and on the other hand the grown need for such systems due to the increased data volumes created through developments like IoT. We think that stream processing technologies have the potential to play a central role in the context of Industry 4.0. One reason is the suitability of the characteristics of the processed data, which fit the overall purpose behind DSPSs. Instead of issuing a query that is executed once and returns a single result set, as in a database management system (DBMS), DSPSs execute queries on a continuous basis. Similarly, IoT data is often generated on a continuous basis, in contrast to traditional business data.
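The difference between a one-shot DBMS query and a continuous query can be sketched in a few lines of Python; the generator below stands in for an unbounded IoT stream, and the one-minute tumbling window is an arbitrary choice rather than something prescribed by the paper.

```python
from collections import defaultdict

def sensor_stream():
    """Stand-in for an unbounded stream of (epoch_second, machine_id, temperature)."""
    yield from [
        (0, "m1", 70.1), (20, "m1", 70.9), (65, "m1", 72.4),
        (70, "m2", 64.0), (130, "m1", 73.0),
    ]

def continuous_avg(stream, window_s=60):
    """Continuously emit per-machine averages for tumbling windows of window_s seconds."""
    current, acc = 0, defaultdict(list)
    for ts, machine, value in stream:
        window = ts // window_s
        if window != current:                      # window closed: emit its results
            for m, values in acc.items():
                yield (current, m, sum(values) / len(values))
            current, acc = window, defaultdict(list)
        acc[machine].append(value)
    for m, values in acc.items():                  # flush the last open window
        yield (current, m, sum(values) / len(values))

for result in continuous_avg(sensor_stream()):
    print(result)
```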
Altering requirements, e.g., due to growing data volumes introduced by added machines or advanced IoT technologies, can be handled as modern DSPSs are typically scalable distributed systems. As another consequence, high elasticity is enabled, i.e., nodes can be added or removed from the cluster as the workload increases or decreases. This flexibility is advantageous from an economic perspective. Especially manufacturers that do not produce during certain periods, generally speaking companies with large IoT workload variations, can benefit. Scalability can be reached by using a message broker between the sources of streaming data and the DSPS. That is a common approach seen in many architectures, both in industry and science (Hesse et al., 2017b). A schematic overview of a possible architecture is visualized in Figure 2.
IoT devices, such as manufacturing equipment, can send their measurements to a message broker, from which a DSPS can consume the data. Streaming applications that require more than IoT data, i.e., programs that need vertical integration, can also be realized using DSPSs. Corresponding data can be consumed via established interfaces, such as JDBC, and used to enrich the IoT data. If horizontal data integration can be achieved in the business system layer, a holistic view of entities or processes can then be created in the DSPS, where all data is brought together. Additionally, data from MES systems or the like can be integrated as depicted in Figure 2. That makes data stream processing technologies a suitable framework for developing Industry 4.0 applications whose use cases do not have further requirements that cannot be satisfied in this setting. Summarizing, since DSPSs are capable of handling high-volume and high-velocity IoT data as mentioned previously, they can act as an enabler for vertical integration and thus for Industry 4.0 scenarios. Data can be analyzed on the fly without the need to store high volumes of data in advance, which has a positive impact economically as well as on the performance side. A conceivable pre-aggregation that would lower these effects is not needed. Moreover, aggregation comes at the cost of data loss and thus sacrifices accuracy.
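The following sketch mimics the enrichment step from Figure 2 in plain Python: IoT measurements arriving from a broker are joined with master data that would normally be read from a business system (e.g., via JDBC); the broker is replaced by an in-memory list, and all identifiers are made up for illustration.

```python
# Master data that a streaming job would typically load from a business system.
machine_master_data = {
    "m1": {"plant": "Berlin", "line": "A", "product_family": "valves"},
    "m2": {"plant": "Dresden", "line": "C", "product_family": "pumps"},
}

# Messages as they would be consumed from a message broker topic.
iot_messages = [
    {"machine_id": "m1", "ts": 1561970000, "vibration": 0.12},
    {"machine_id": "m2", "ts": 1561970003, "vibration": 0.31},
]

def enrich(message, master_data):
    """Vertical integration: attach business context to a raw IoT measurement."""
    context = master_data.get(message["machine_id"], {})
    return {**message, **context}

for msg in iot_messages:
    print(enrich(msg, machine_master_data))
```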
Challenges
Certain challenges exist that could hinder an establishment of data stream processing technologies in the context of Industry 4.0 on a broader scale.
One reason is the lack of a broadly accepted abstraction layer for formulating queries or developing applications, such as SQL for DBMSs. Similarly, Stonebraker, Çetintemel, and Zdonik mentioned the need for DSPSs to support a high-level stream processing SQL as one of the eight requirements of real-time stream processing they defined in (Stonebraker et al., 2005). The lack of such an established abstraction layer introduces multiple challenges. It reduces flexibility for enterprises: after choosing a certain system, they are tied comparatively tightly to exactly this system. Switching to another framework, e.g., due to altered system requirements or changed performance ratios amongst the group of existing systems, is more complex and thus costlier for companies. Streaming applications need to be developed using native system APIs, which results in a high porting effort if a system is supposed to be exchanged, compared to the effort needed for switching a DBMS; there, the potentially needed SQL adaptations are relatively small, since the same abstraction layer, namely SQL, is typically also used in the new system. There are multiple system-specific SQL dialects developed for stream processing frameworks, but none of them has gained broader acceptance. However, the open-source project Apache Beam aims to close this gap. It is not a domain-specific language like SQL, but a software development kit that allows writing programs that can be executed on any of the supported stream processing engines. The impact of using this abstraction layer on the performance of selected state-of-the-art DSPSs is analyzed in (Hesse et al., 2019).
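As a feel for what such an engine-agnostic program looks like, the snippet below uses the Apache Beam Python SDK to compute a per-machine maximum over a small bounded collection; it runs on the local runner by default, and the sample data and transform labels are invented for illustration.

```python
import apache_beam as beam

readings = [("m1", 70.1), ("m1", 72.4), ("m2", 64.0), ("m2", 65.2)]

# The same pipeline definition can be executed on any supported engine
# (e.g., Flink or Spark) by selecting a different runner.
with beam.Pipeline() as pipeline:
    (
        pipeline
        | "CreateReadings" >> beam.Create(readings)
        | "MaxPerMachine" >> beam.CombinePerKey(max)
        | "Print" >> beam.Map(print)
    )
```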
Furthermore, identifying the most suitable system might be a challenge for enterprises. Although the growing number of DSPSs developed in recent years is generally a good thing, the more choices there are, the harder it is to decide on a system. Typically, performance benchmarks are used for this task. Similar to the previously described circumstances regarding abstraction layers, the situation for DBMSs is more mature. While there are many well-known and often used benchmarks for databases, such as TPC-C, TPC-H, or TPC-DS, the area of DSPS benchmarks is significantly less developed. Linear Road is probably the best-known benchmark for stream processing architectures (Arasu et al., 2004). However, it does not reflect typical Industry 4.0 scenarios, in contrast to a benchmark currently under development proposed by (Hesse et al., 2017a), which could close the gap of not having a suitable benchmark for comparing different DSPSs for use in the Industry 4.0 domain.
Another challenge we recognized in site inspections is the identification of Industry 4.0 scenarios that could create added value. Although this situation is not directly linked to DSPSs, in our industry experience, thoughts about Industry 4.0 and applicable technologies rarely include stream processing frameworks and their capabilities. This lack of awareness of streaming technologies results in them not being considered for new application scenarios. Moreover, when they are taken into account, there are often reservations, such as that there is little or no knowledge about these technologies amongst the employees. Another fear is that modern DSPSs are very complex systems that are hard to maintain and difficult to use for application development. However, these points could be resolved in the near future if development efforts and improvements of DSPSs remain as high as they are at the moment.
CONCLUSION
The present paper presents a point of view on Industry 4.0 and on data stream processing systems in its context. The contributions are threefold. First, we present insights about the current situations and opinions at two selected companies with respect to Industry 4.0. This includes information about data characteristics and Industry 4.0 applications. All findings were derived from site inspections and similar activities.
Secondly, a viewpoint on Industry 4.0 as well as on further important and closely related aspects is given. Among others, it ensures a common understanding needed for the third contribution.
This third part is about data stream processing systems. Particularly, it is about why and how this technology could become an enabler for Industry 4.0. A possible architecture for Industry 4.0 scenarios is proposed and obstacles hindering DSPSs from being applied more in this context are pointed out. | 3,772 |
1901.08707 | 2963863924 | We investigate the effectiveness of a simple solution to the common problem of deep learning in medical image analysis with limited quantities of labeled training data. The underlying idea is to assign artificial labels to abundantly available unlabeled medical images and, through a process known as surrogate supervision, pre-train a deep neural network model for the target medical image analysis task lacking sufficient labeled training data. In particular, we employ 3 surrogate supervision schemes, namely rotation, reconstruction, and colorization, in 4 different medical imaging applications representing classification and segmentation for both 2D and 3D medical images. 3 key findings emerge from our research: 1) pre-training with surrogate supervision is effective for small training sets; 2) deep models trained from initial weights pre-trained through surrogate supervision outperform the same models when trained from scratch, suggesting that pre-training with surrogate supervision should be considered prior to training any deep 3D models; 3) pre-training models in the medical domain with surrogate supervision is more effective than transfer learning from an unrelated domain (e.g., natural images), indicating the practical value of abundant unlabeled medical image data. | Self-supervised learning with surrogate supervision is a relatively new trend in computer vision, with promising schemes appearing only in recent years. Consequently, the literature on the effectiveness of surrogate supervision in medical imaging is meager. @cite_5 proposed longitudinal relationships between medical images as the surrogate task to pre-train model weights. To generate surrogate supervision, they assign a label of 1 if two longitudinal studies belong to the same patient and 0 otherwise. @cite_4 used noise removal in small image patches as the surrogate task, wherein the surrogate supervision was created by mapping the patches with user-injected noise to the original clean image patches. @cite_18 used image colorization as the surrogate task, wherein color colonoscopy images are converted to gray-scale and then recovered using a conditional Generative Adversarial Network (GAN). | {
"abstract": [
"A significant proportion of patients scanned in a clinical setting have follow-up scans. We show in this work that such longitudinal scans alone can be used as a form of “free” self-supervision for training a deep network. We demonstrate this self-supervised learning for the case of T2-weighted sagittal lumbar Magnetic Resonance Images (MRIs). A Siamese convolutional neural network (CNN) is trained using two losses: (i) a contrastive loss on whether the scan is of the same person (i.e. longitudinal) or not, together with (ii) a classification loss on predicting the level of vertebral bodies. The performance of this pre-trained network is then assessed on a grading classification task. We experiment on a dataset of 1016 subjects, 423 possessing follow-up scans, with the end goal of learning the disc degeneration radiological gradings attached to the intervertebral discs. We show that the performance of the pre-trained CNN on the supervised classification task is (i) superior to that of a network trained from scratch; and (ii) requires far fewer annotated training samples to reach an equivalent performance to that of the network trained from scratch.",
"Purpose Surgical data science is a new research field that aims to observe all aspects of the patient treatment process in order to provide the right assistance at the right time. Due to the breakthrough successes of deep learning-based solutions for automatic image annotation, the availability of reference annotations for algorithm training is becoming a major bottleneck in the field. The purpose of this paper was to investigate the concept of self-supervised learning to address this issue.",
"The work explores the use of denoising autoencoders (DAEs) for brain lesion detection, segmentation, and false-positive reduction. Stacked denoising autoencoders (SDAEs) were pretrained using a large number of unlabeled patient volumes and fine-tuned with patches drawn from a limited number of patients (n=20, 40, 65). The results show negligible loss in performance even when SDAE was fine-tuned using 20 labeled patients. Low grade glioma (LGG) segmentation was achieved using a transfer learning approach in which a network pretrained with high grade glioma data was fine-tuned using LGG image patches. The networks were also shown to generalize well and provide good segmentation on unseen BraTS 2013 and BraTS 2015 test data. The manuscript also includes the use of a single layer DAE, referred to as novelty detector (ND). ND was trained to accurately reconstruct nonlesion patches. The reconstruction error maps of test data were used to localize lesions. The error maps were shown to assign unique error distributions to various constituents of the glioma, enabling localization. The ND learns the nonlesion brain accurately as it was also shown to provide good segmentation performance on ischemic brain lesions in images from a different database."
],
"cite_N": [
"@cite_5",
"@cite_18",
"@cite_4"
],
"mid": [
"2742126485",
"2962936819",
"2558464646"
]
} | SURROGATE SUPERVISION FOR MEDICAL IMAGE ANALYSIS: EFFECTIVE DEEP LEARNING FROM LIMITED QUANTITIES OF LABELED DATA | The limited training sample size problem in medical imaging has often been mitigated through transfer learning, where models pre-trained on ImageNet are fine-tuned for target medical image analysis tasks. Despite some promising results [1,2], this approach has its limitations. First, it may limit the designer to architectures that have been pre-trained on ImageNet, which are often needlessly deep for medical imaging, thus retarding training and inference. Second, as shown in [3], fine-tuning a pre-trained model even with a large labeled dataset results in a model where a large fraction of the neurons remain "loyal" to the unrelated source dataset. As such, fine-tuning may not leverage the full capacity of a pre-trained model. Third, transfer learning is barely applicable to 3D medical image analysis applications, because the 2D and 3D kernels are not shape compatible. Therefore, transfer learning from natural images is only a partial solution to the common problem of insufficient labeled data in medical imaging.
The limitations associated with transfer learning from a foreign domain, together with the abundance of unlabeled data in medical imaging leads to the following question: Can we pre-train network weights directly in the medical imaging domain by assigning artificial labels to unlabeled medical data? This question has recently been addressed in mainstream computer vision, leading in a number of surrogate supervision schemes including color prediction [4], rotation prediction [5], and noise prediction [6], where the key idea is to assign to unlabeled data artificial labels for the surrogate task and then use the resultant supervision, known as surrogate supervision, to pre-train a deep model for the vision task of interest. Surrogate supervision schemes have improved markedly; however, the resulting learned representation is not yet as effective as the representations learned through strong supervision. Nevertheless, surrogate supervision can still be a viable solution to the limited sample size problem in medical imaging, where strong supervision is expensive and difficult to obtain, sometimes even for small-scale datasets.
In this paper, we assess the effectiveness of surrogate supervision in tackling the limited sample size problem in medical imaging, as an alternative to training from scratch or transfer learning. Specifically, our research attempts to address the following central question: Does learning through surrogate supervision provide more effective weight initialization than random initialization or initial weights transferred from an unrelated domain? To answer this question, we consider four medical imaging applications: false positive reduction for nodule detection in chest CT images, lung lobe segmentation in these images, diabetic retinopathy classification in fundus images, and skin segmentation in color tele-medicine images. For each application, we train models using various fractions of the training data with initial weights pre-trained using surrogate supervision, pre-trained weights transferred from ImageNet (where possible), and random weights. Our experiments on 3D datasets show that surrogate supervision leads to improved performance over training from scratch. Our experiments on 2D datasets further show a performance gain over both training from scratch and transfer learning when the medical image dataset differs markedly from natural images.
SURROGATE SUPERVISION SCHEMES
We use rotation [5] as the surrogate supervision where possible. This is because of its simplicity and superior results over similar techniques such as learning by predicting noise [6] and learning by predicting color [4]. The underlying idea is for the model to learn high-level semantic features in order to estimate the degree by which an image has been rotated. For instance, to predict if a chest CT is flipped horizontally, the model may rely on heart orientation among other image cues, or the model may learn to distinguish between the apex and base of the lung to predict if the chest CT is flipped vertically. However, predicting the degree of rotation makes sense as a surrogate task only if the underlying images follow a consistent geometry and have landmarks adequate for the task of rotation prediction. For example, similar to chest CTs, fundus images show a consistent geometry with distinct anatomical landmarks, such as the optic disc and macula, whereas a small image cube extracted from a CT scan may lack distinct landmarks to enable a reliable prediction of 3D rotation. For such applications, we resort to other surrogate supervision schemes such as patch reconstruction using a Wasserstein GAN [10] and image colorization using a conditional GAN [4]. Figure 1 illustrates the surrogate supervision schemes used in our applications.
Fig. 1. Surrogate supervision schemes and the target tasks used in our study. In each cell, the left network belongs to the surrogate task trained with surrogate supervision and the right network belongs to the target task. The grey and yellow trapezoids indicate the untrained weights and pre-trained weights, respectively. See Section 4 for further details.
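A minimal PyTorch sketch of the rotation pretext task is given below: each unlabeled image is rotated by a multiple of 90 degrees and the network is trained to predict which rotation was applied. The tiny backbone, optimizer settings, and the use of all four rotations are illustrative choices and do not reproduce the paper's exact setup.

```python
import torch
import torch.nn as nn

def make_rotation_batch(images):
    """Rotate each image by k * 90 degrees and return (rotated images, rotation labels)."""
    rotated, labels = [], []
    for img in images:                        # img: (C, H, W)
        k = torch.randint(0, 4, (1,)).item()  # 0, 90, 180, or 270 degrees
        rotated.append(torch.rot90(img, k, dims=(1, 2)))
        labels.append(k)
    return torch.stack(rotated), torch.tensor(labels)

# Illustrative tiny backbone; in practice this would be the target architecture's encoder.
backbone = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
rotation_head = nn.Linear(32, 4)              # surrogate head, discarded after pre-training
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(
    list(backbone.parameters()) + list(rotation_head.parameters()), lr=1e-3
)

unlabeled_images = torch.rand(8, 3, 64, 64)   # stand-in for unlabeled medical images
for _ in range(2):                            # a couple of illustrative steps
    x, y = make_rotation_batch(unlabeled_images)
    loss = criterion(rotation_head(backbone(x)), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
print("surrogate loss:", loss.item())
```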
APPLICATIONS
To investigate the effectiveness of surrogate supervision, we considered 4 medical imaging applications consisting of classification and segmentation for both 2D and 3D medical images. Specifically, we studied False Positive Reduction (FPR) for nodule detection in chest CTs as a representative for 3D pattern classification, severity classification of diabetic retinopathy in fundus images as a representative for 2D pattern classification, lung lobe segmentation in chest CTs as a representative for 3D image segmentation, and skin segmentation in color tele-medicine images as a representative for 2D image segmentation. Table 1 summarizes the dataset for each application along with the selected surrogate supervision and experimental setup. Our supplementary material [11] provides further details on each studied application.
Diabetic retinopathy (DR) classification requires assigning a severity level between 0 and 4 to each fundus image. For this, we have used inception-resnet v2 [15]. To pre-train this architecture, we used rotation as surrogate supervision by appending a fully connected layer with 3 neurons corresponding to 0°, 90°, and 270° rotation, followed by a softmax with a cross entropy loss layer. Note that applying a 180° rotation to the retina image from the left eye results in an image with an appearance similar to the unrotated retina image from the right eye. In view of this ambiguity, we did not use a 180° rotation.
Table 1. Task, dataset, architecture, surrogate supervision, and data split (Tr_l | V_l | Te_l | D_u) for each application; ¦ denotes a fully labeled dataset, and thus D_u = Tr_l ∪ V_l. Excerpt: FPR for nodule det. — LIDC-IDRI [12], 8-layer 3D CNN [13], 3D patch reconstruction, 733|95|190|828¦ CTs; Lung lobe seg. — LIDC-IDRI [12], Progressive dense V-Net [14].
The FPR task for nodule detection requires labeling each nodule candidate as either nodule or non-nodule. We use a 3D faster RCNN model to generate nodule candidates and further use the 3D CNN architecture suggested in [13] for FPR. The FPR model consists of 8 convolution layers, with a pooling layer followed by a dropout layer placed after every 2 convolution layers. FPR is commonly done based on local 3D patches around candidates, but these patches are often structurally sparse with no clear landmark; hence, they are unsuitable for the task of rotation estimation. Instead, a GAN [10] is utilized for the surrogate supervision task, where 3D patches are reconstructed using a generator network and we use the real-or-fake signal predicted by the discriminator network as the surrogate label. For this purpose, we use a U-Net-like network [17] as the generator and employ the above 8-layer architecture as the discriminator. Model training is conducted in a progressive manner as suggested by [18] for generating higher quality 3D patches. Once converged, we use the discriminator as the pre-trained model for FPR. By learning to generate 3D patches, our model may learn about the continuity of vessels across slices as well as to distinguish nodules from vascular structures.
Lung lobe segmentation requires assigning every voxel in a chest CT to one of the 5 major lobes of the lung or background. For this, we use the progressive dense V-Net introduced in [14]. Since thorax geometry is consistent in chest CT scans (assuming images are re-oriented to a common axis code), it makes sense to use rotation as surrogate supervision. However, since 3D rotation is computationally expensive, we resort to flipping along the x, y, and z axes. The model is pre-trained by appending a global average pooling layer and a fully connected layer with 3 neurons where each neuron is followed by a sigmoid cross entropy loss layer. Essentially, each of the 3 neurons is responsible for predicting flipping along a particular axis. By learning to predict how the chest CT is flipped, the model may learn the global geometry of the lung and heart, which is likely to aid the target task of lung lobe segmentation. For instance, the model can learn to distinguish between the apex and base of the lung while learning to detect a flip along the z-axis, or it may learn the heart orientation while trying to detect a flip along the x-axis.
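The flip-prediction pretext for 3D volumes can be sketched as a three-way multi-label problem with one sigmoid output per axis, as below; the toy 3D encoder and training constants are assumptions made only to keep the example self-contained and do not match the paper's progressive dense V-Net.

```python
import torch
import torch.nn as nn

def random_flip(volume):
    """Flip a (C, D, H, W) volume along each spatial axis with probability 0.5."""
    flags = torch.randint(0, 2, (3,)).float()          # flipped along z, y, x?
    dims = [d for d, f in zip((1, 2, 3), flags) if f == 1]
    return (torch.flip(volume, dims=dims) if dims else volume), flags

encoder = nn.Sequential(                               # stand-in for the segmentation encoder
    nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool3d(1), nn.Flatten(),
)
flip_head = nn.Linear(16, 3)                           # one logit per axis
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(flip_head.parameters()), lr=1e-3
)

volumes = torch.rand(4, 1, 32, 64, 64)                 # stand-in for unlabeled chest CTs
flipped, targets = zip(*(random_flip(v) for v in volumes))
x, y = torch.stack(flipped), torch.stack(targets)
loss = criterion(flip_head(encoder(x)), y)
loss.backward()
optimizer.step()
print("flip-prediction loss:", loss.item())
```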
Body skin segmentation is a multi-class segmentation problem for tele-medicine images, where each pixel is labeled as background, uncertain, face, hand, foot, limb, trunk, scalp, or anogenital. The tele-medicine images are taken by cell phone cameras and, thus, the body parts can appear in arbitrary orientations in the captured images, which naturally rules out the possibility of using orientation as surrogate supervision. Instead, we use image colorization for pre-training, by which the model can learn about the color and texture of the skin in an unsupervised fashion, which in turn can aid the target skin segmentation task. For image colorization, we use a conditional GAN [19], where the generator is a U-Net architecture based on resnet50-DeepLabV3+ [16] and the discriminator is a simple CNN model with 3 convolution layers. Once trained, the generator of the conditional GAN serves as a pre-trained model for skin segmentation.
EXPERIMENT SETUP AND RESULTS
Figure 2 illustrates the experiment setup in its general form for the studied applications. Specifically, for the 2D applications, we trained the models from scratch using Xavier's initialization, from an ImageNet pre-trained model, and from a model pre-trained by surrogate supervision. By doing so, we can compare the impact of surrogate supervision (done directly in the target domain) against transfer learning from a distant domain and training from scratch. For the 3D applications, we trained the models from scratch using Xavier's initialization and from a model pre-trained by surrogate supervision. Note that transfer learning from natural images is not possible for 3D medical imaging applications. Also, to investigate the impact of surrogate supervision in the presence of limited labeled datasets, we have trained the models described above using k=10%, 25%, 50%, and 100% of the available labeled training data (see Figure 2).
The data split can change from one application to another depending on whether the corresponding dataset is partially or fully labeled. If the entire dataset is labeled, we first divide it into 3 disjoint training, validation, and test subsets: Tr_l, V_l, and Te_l. We then form the unlabeled dataset by merging Tr_l and V_l, followed by removing the labels of the target task, D_u = Tr_l ∪ V_l. During training for the surrogate task, surrogate labels are assigned to unlabeled images. On the other hand, if the dataset is only partially labeled, we first divide the labeled part of the dataset into 3 disjoint training, validation, and test subsets. We then form the unlabeled dataset by merging the remaining unlabeled images, denoted X, with Tr_l and V_l; i.e., D_u = X ∪ Tr_l ∪ V_l. As with the previous scenario, labels related to the target tasks are removed from the images placed in the unlabeled dataset. Table 1 specifies the size of each dataset split for each application under study. Figure 3 shows the results for the 4 applications under study. For DR classification, we used the Kappa statistic to measure the agreement between model predictions and ground truth, the average Dice score for lung lobe segmentation and skin segmentation, and the area under the FROC curve (up to 3 false positives) for nodule detection. We see that pre-training with surrogate supervision enables the training of better-performing models, particularly when limited labeled data (10% or 25%) are used for training. However, such improvement tends to diminish in some applications when the training set grows in size.
For 3D applications, pre-training with surrogate supervision shows marked improvement over training from scratch, particularly for lung lobe segmentation, which requires strong supervision and large quantities of labeled data. This is an important finding, because 3D architectures are commonly trained from scratch due to the scarcity of pre-trained 3D models, whereas they could have benefited from surrogate supervision. Our results suggest that pre-training 3D networks with surrogate supervision merits consideration as the first step towards solving 3D medical image analysis tasks.
Fig. 3. Performance evaluation for the applications under study. Except for skin segmentation (see text), pre-training with surrogate supervision (SS) is more effective than the ImageNet-trained model and random initialization.
Of the 2D applications under study, skin images are much closer to the domain of natural images than fundus images. This is because skin images show body parts, often along with indoor and outdoor scenes in the background depending on the camera-skin distance, whereas fundus images, which show only the retina, are quite distinct from natural images. The domain discrepancy between fundus images, skin images, and natural images is clearly reflected in our experimental results, inasmuch as pre-training with surrogate supervision is quite effective for DR classification, but inferior to ImageNet weights for skin segmentation.
CONCLUSION
We investigated the effectiveness of surrogate supervision in training deep models for medical image analysis. Furthermore, we studied the impact of surrogate supervision with respect to the size of the training set. Our experimental results showed that models trained from weights pre-trained using surrogate supervision consistently outperformed the same models when trained from scratch. This is a key finding because 3D models in medical imaging have commonly been trained from scratch whereas they could have benefited from surrogate supervision. Our results further demonstrated that pre-training models in the medical domain was more effective than transfer learning from an unrelated domain (natural images). This finding highlights the practical value of unlabeled data in the medical imaging domain. We also observed that surrogate supervision was effective for the small training sets, but its impact tended to diminish for some applications as the sizes of the training sets grew. | 2,452 |
1901.08163 | 2912974420 | Classifying semantic relations between entity pairs in sentences is an important task in Natural Language Processing (NLP). Most previous models for relation classification rely on the high-level lexical and syntactic features obtained by NLP tools such as WordNet, dependency parser, part-of-speech (POS) tagger, and named entity recognizers (NER). In addition, state-of-the-art neural models based on attention mechanisms do not fully utilize entity information, which may be the most crucial feature for relation classification. To address these issues, we propose a novel end-to-end recurrent neural model which incorporates an entity-aware attention mechanism with a latent entity typing (LET) method. Our model not only utilizes entities and their latent types as features effectively but also is more interpretable by visualizing attention mechanisms applied to our model and results of LET. Experimental results on the SemEval-2010 Task 8, one of the most popular relation classification tasks, demonstrate that our model outperforms existing state-of-the-art models without any high-level features. | There are several studies for solving the relation classification task. Early methods used handcrafted features through a series of NLP tools or manually designed kernels @cite_19 . These approaches use high-level lexical and syntactic features obtained from NLP tools and manually designed kernels, but the classification models relying on such features suffer from the propagation of implicit errors of the tools.
"abstract": [
"This paper describes our system for SemEval-2010 Task 8 on multi-way classification of semantic relations between nominals. First, the type of semantic relation is classified. Then a relation type-specific classifier determines the relation direction. Classification is performed using SVM classifiers and a number of features that capture the context, semantic role affiliation, and possible pre-existing relations of the nominals. This approach achieved an F1 score of 82.19 and an accuracy of 77.92 ."
],
"cite_N": [
"@cite_19"
],
"mid": [
"1887754209"
]
} | Semantic Relation Classification via Bidirectional LSTM Networks with Entity-aware Attention using Latent Entity Typing | Classifying semantic relations between entity pairs in sentences plays a vital role in various NLP tasks, such as information extraction, question answering, and knowledge base population [14]. The task of relation classification is defined as predicting a semantic relationship between two tagged entities in a sentence. For example, given a sentence with the tagged entity pair crash and attack, the sentence is classified into the relation Cause-Effect(e1,e2) 1 between the entity pair, as in Figure 1. The first entity is surrounded by <e1> and </e1>, and the second entity is surrounded by <e2> and </e2>.
Most previous relation classification models rely heavily on high-level lexical and syntactic features obtained from NLP tools such as WordNet, dependency parser, part-of-speech (POS) tagger, and named entity recognizer (NER). The classification models relying on such features suffer from propagation of implicit error of the tools and they are computationally expensive.
Recently, many studies therefore propose end-to-end neural models without the high-level features. Among them, attention-based models, which focus on the most important semantic information in a sentence, show state-of-the-art results in many NLP tasks. Since these models are mainly proposed for solving translation and language modeling tasks, they cannot fully utilize the information of tagged entities in the relation classification task. However, tagged entity pairs could be powerful hints for solving the relation classification task. For example, even if we consider no words other than crash and attack, we intuitively know that the entity pair is more likely to have the relation Cause-Effect(e1,e2) 1 than Component-Whole(e1,e2) 1 in Figure 1. To address these issues, we propose a novel end-to-end recurrent neural model which incorporates an entity-aware attention mechanism with a latent entity typing (LET) method. To capture the context of sentences, we obtain word representations by self attention mechanisms and build the recurrent neural architecture with Bidirectional Long Short-Term Memory (LSTM) networks. Entity-aware attention focuses on the most important semantic information, considering entity pairs, word positions relative to these pairs, and latent types obtained by LET.
The contributions of our work are summarized as follows: (1) We propose a novel end-to-end recurrent neural model and an entity-aware attention mechanism with LET, which focuses on the semantic information of entities and their latent types; (2) Our model obtains an 85.2% F1-score on SemEval-2010 Task 8 and outperforms existing state-of-the-art models without any high-level features; (3) We show that our model is more interpretable, since its decision-making process can be visualized with self attention, entity-aware attention, and LET.
Figure 2: The architecture of our model (best viewed in color). Entities 1 and 2 correspond to the 3rd and (n − 1)-th words, respectively, which are fed into the LET.
Model
In this section, we introduce in detail a novel recurrent neural model that incorporates an entity-aware attention mechanism with a LET method. As shown in Figure 2, our model consists of four main components: (1) Word Representation, which maps each word in a sentence into vector representations; (2) Self Attention, which captures the meaning of the correlations between words based on multi-head attention [20]; (3) BLSTM, which sequentially encodes the representations of the self attention layer; (4) Entity-aware Attention, which calculates attention weights with respect to the entity pairs, word positions relative to these pairs, and their latent types obtained by LET. After that, the features are averaged along the time steps to produce the sentence-level features.
Word Representation
Let an input sentence be denoted by $S = \{w_1, w_2, ..., w_n\}$, where $n$ is the number of words. We transform each word into a vector representation by looking up the word embedding matrix $W^{word} \in \mathbb{R}^{d_w \times |V|}$, where $d_w$ is the dimension of the vectors and $|V|$ is the size of the vocabulary. The word representations $X = \{x_1, x_2, ..., x_n\}$ are then obtained by mapping $w_i$, the $i$-th word, to a column vector $x_i \in \mathbb{R}^{d_w}$, and are fed into the next layer.
Self Attention
The word representations are fixed for each word, even though the meaning of a word varies depending on the context. Many neural models that encode sequences of words may be expected to learn the contextual meaning implicitly, but they may not learn it well because of long-term dependency problems [1]. In order for the representation vectors to capture the meaning of words in context, we employ self attention, a special case of the attention mechanism that only requires a single sequence. Self attention has been successfully applied to various NLP tasks such as machine translation, language understanding, and semantic role labeling [20,17,19].
We adopt the multi-head attention formulation [20], one of the methods for implementing self attentions. Figure 3 illustrates the multi-head attention mechanism that consists of several linear transformations and scaled dot-product attention corresponding to the center block of the figure. Given a matrix of n vectors, query Q, key K, and value V , the scaled dot-product attention is calculated by the following equation:
(3.1) $\text{Attention}(Q, K, V) = \text{softmax}\left(\dfrac{QK^{\top}}{\sqrt{d_w}}\right)V$
Figure 3: Multi-Head Self Attention. For self attention, the Q (query), K (key), and V (value) inputs of multi-head attention should be the same vectors. In our work, they are equivalent to X, the word representation vectors.
In multi-head attention, the scaled dot-product attention with linear transformations is performed on r parallel heads so as to pay attention to different parts. The formulation of multi-head attention is then defined as follows:
(3.2) $\text{MultiHead}(Q, K, V) = W^{M}[\text{head}_1; ...; \text{head}_r]$
(3.3) $\text{head}_i = \text{Attention}(W^{Q}_i Q, W^{K}_i K, W^{V}_i V)$
where $[;]$ indicates row concatenation and $r$ is the number of heads. The weights $W^{M} \in \mathbb{R}^{d_w \times d_w}$, $W^{Q}_i \in \mathbb{R}^{d_w/r \times d_w}$, $W^{K}_i \in \mathbb{R}^{d_w/r \times d_w}$, and $W^{V}_i \in \mathbb{R}^{d_w/r \times d_w}$ are learnable parameters for the linear transformations. $W^{M}$ is applied to the concatenated outputs of the scaled dot-product attention, and the others are applied to the query, key, and value of the $i$-th head, respectively.
Because our work requires self attention, the input matrices of multi-head attention, Q, K, and V, are all equivalent to X, the word representation vectors. As a result, the outputs of multi-head attention are denoted by $M = \{m_1, m_2, ..., m_n\} = \text{MultiHead}(X, X, X)$, where $m_i$ is the output vector corresponding to the $i$-th word. The output of the self attention layer is a sequence of representations which include the informative factors of the input sentence.
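To make the self-attention computation above concrete, the following is a minimal NumPy sketch of Equations 3.1–3.3, where the head count r, the dimension d_w, and the randomly initialized weights are illustrative placeholders rather than the authors' actual implementation.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V, d_w):
    # Equation 3.1: softmax(Q K^T / sqrt(d_w)) V
    scores = Q @ K.T / np.sqrt(d_w)                      # (n, n) attention scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # row-wise softmax
    return weights @ V                                   # (n, d_head) weighted values

def multi_head_self_attention(X, r=4):
    # Equations 3.2-3.3 with Q = K = V = X (self attention)
    n, d_w = X.shape
    d_head = d_w // r
    rng = np.random.default_rng(0)
    heads = []
    for _ in range(r):
        W_Q, W_K, W_V = (rng.normal(scale=0.1, size=(d_head, d_w)) for _ in range(3))
        heads.append(scaled_dot_product_attention(X @ W_Q.T, X @ W_K.T, X @ W_V.T, d_w))
    W_M = rng.normal(scale=0.1, size=(d_w, d_w))
    return np.concatenate(heads, axis=-1) @ W_M.T        # M = (n, d_w)

# Example: 6 words with 100-dimensional embeddings
M = multi_head_self_attention(np.random.randn(6, 100), r=4)
```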
Bidirectional LSTM Network
For sequentially encoding the output of the self attention layer, we use a BLSTM [5,4] that consists of two sub-LSTM networks: a forward LSTM network, which encodes the context of the input sentence, and a backward LSTM network, which encodes the context of the reversed sentence. More formally, the BLSTM works as follows:
(3.4) $\overrightarrow{h_t} = \overrightarrow{\text{LSTM}}(m_t)$
(3.5) $\overleftarrow{h_t} = \overleftarrow{\text{LSTM}}(m_t)$
(3.6) $h_t = [\overrightarrow{h_t}; \overleftarrow{h_t}]$
The representation vectors M obtained from the self attention layer are fed into the network step by step. At time step $t$, the hidden state $h_t \in \mathbb{R}^{2d_h}$ of the BLSTM is obtained by concatenating $\overrightarrow{h_t} \in \mathbb{R}^{d_h}$, the hidden state of the forward LSTM network, and $\overleftarrow{h_t} \in \mathbb{R}^{d_h}$, the backward one, where $d_h$ is the dimension of each LSTM's state:
(3.7) $\overrightarrow{h_t} \in \mathbb{R}^{d_h}, \quad \overleftarrow{h_t} \in \mathbb{R}^{d_h}$
Entity-aware Attention Mechanism
Many models with attention mechanisms have achieved state-of-the-art performance in many NLP tasks. However, for the relation classification task, these models lack prior knowledge about the given entity pairs, which could be powerful hints for solving the task. Relation classification differs from sentence classification in that information about entities is given along with the sentences.
We propose a novel entity-aware attention mechanism for fully utilizing the informative factors in given entity pairs. Entity-aware attention utilizes two additional features besides $H = \{h_1, h_2, ..., h_n\}$: (1) relative position features and (2) entity features with LET. The final sentence representation $z$, the result of the attention, is computed as follows:
(3.8) $u_i = \tanh\left(W^{H}[h_i; p^{e_1}_i; p^{e_2}_i] + W^{E}[h_{e_1}; t_1; h_{e_2}; t_2]\right)$
(3.9) $\alpha_i = \dfrac{\exp(v^{\top} u_i)}{\sum_{j=1}^{n} \exp(v^{\top} u_j)}$
(3.10) $z = \sum_{i=1}^{n} \alpha_i h_i$
Relative Position Features
In relation classification, the position of each word relative to entities has been widely used for word representations [30,14,8].
Recently, position-aware attention was proposed as a way to use the relative position features more effectively [33].
It is a variant of the attention mechanism that uses not only the outputs of the BLSTM but also the relative position features when calculating attention weights. We adopt this method with a slight modification, as shown in Equation 3.8. In the equation, $p^{e_1}_i \in \mathbb{R}^{d_p}$ and $p^{e_2}_i \in \mathbb{R}^{d_p}$ correspond to the position of the $i$-th word relative to the first entity (the $e_1$-th word) and the second entity (the $e_2$-th word) in a sentence, respectively, where $e_{j\in\{1,2\}}$ is the index of the $j$-th entity. Similar to word embeddings, the relative positions are converted to vector representations by looking them up in a learnable embedding matrix $W^{pos} \in \mathbb{R}^{d_p \times (2L-1)}$, where $d_p$ is the dimension of the relative position vectors and $L$ is the maximum sentence length.
Finally, the representations of the BLSTM layer take into account both the context and the positional relationship with the entities by concatenating $h_i$, $p^{e_1}_i$, and $p^{e_2}_i$. This representation is linearly transformed by $W^{H} \in \mathbb{R}^{d_a \times (2d_h + 2d_p)}$ as in Equation 3.8.
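As an illustration of the relative position lookup just described, the sketch below builds position indices for a toy sentence and maps them into a learnable embedding table of size 2L−1; the maximum length L and the embedding dimension d_p are arbitrary values chosen for the example, not the paper's settings.

```python
import numpy as np

L = 10      # assumed maximum sentence length (illustrative)
d_p = 5     # assumed relative position embedding size (illustrative)

# Learnable embedding table covering offsets -(L-1) .. +(L-1), i.e. 2L-1 rows
W_pos = np.random.randn(2 * L - 1, d_p) * 0.1

def relative_position_vectors(n_words, entity_index):
    """Return the d_p-dimensional position vector of every word
    relative to the entity at position `entity_index`."""
    offsets = np.arange(n_words) - entity_index   # e.g. -2, -1, 0, 1, ...
    rows = offsets + (L - 1)                      # shift so offset 0 maps to row L-1
    return W_pos[rows]                            # (n_words, d_p)

# Toy sentence of 6 words with entities at positions e1=1 and e2=4
p_e1 = relative_position_vectors(6, entity_index=1)
p_e2 = relative_position_vectors(6, entity_index=4)
# These vectors would be concatenated with the BLSTM states h_i as in Equation 3.8
```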
Entity Features with Latent Type
Since entity pairs are powerful hints for solving the relation classification task, we involve the entity pairs and their types in the attention mechanism to effectively learn the relations between entity pairs and the other words in a sentence. We employ two entity-aware features. The first is the hidden states of the BLSTM corresponding to the positions of the entity pair, which are high-level features representing the entities. These are denoted by $h_{e_i} \in \mathbb{R}^{2d_h}$, where $e_i$ is the index of the $i$-th entity.
The second feature is the latent types of the entities obtained by LET, our proposed novel method. Using types as features can be a great way to improve performance, since approximate relations can often be inferred from the entity types alone. Because annotated types are not given, we obtain latent type representations by applying LET, which is inspired by latent topic clustering, a method for predicting the latent topic of texts in the question answering task [26]. LET constructs the type representations by weighting K latent type vectors based on attention mechanisms. The mathematical formulation is as follows:
(3.11) $a^{j}_{i} = \dfrac{\exp(h_{e_j}^{\top} c_i)}{\sum_{k=1}^{K} \exp(h_{e_j}^{\top} c_k)}$
(3.12) $t_{j\in\{1,2\}} = \sum_{i=1}^{K} a^{j}_{i} c_i$
where $c_i$ is the $i$-th latent type vector and $K$ is the number of latent entity types. As a result, the entity features are constructed by concatenating the hidden states corresponding to the entity positions and the latent types of the entity pair. After a linear transformation of the entity features, they are added to the representations of the BLSTM layer as in Equation 3.8, and the sentence representation $z \in \mathbb{R}^{2d_h}$ is computed by Equations 3.8 to 3.10.
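The snippet below sketches the LET step (Equations 3.11–3.12) and the entity-aware scoring of Equations 3.8–3.10 in NumPy; the number of types K, all dimensions, and the randomly initialized weights are illustrative stand-ins for the learned parameters, not the paper's configuration.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def latent_entity_type(h_e, C):
    # Equations 3.11-3.12: attend over the K latent type vectors c_i
    a = softmax(C @ h_e)          # (K,) attention weights per type
    return a @ C                  # weighted sum of type vectors

def entity_aware_attention(H, P1, P2, e1, e2, C, W_H, W_E, v):
    # Equations 3.8-3.10: combine context, positions, entities and latent types
    t1, t2 = latent_entity_type(H[e1], C), latent_entity_type(H[e2], C)
    entity_term = W_E @ np.concatenate([H[e1], t1, H[e2], t2])
    U = np.tanh(np.concatenate([H, P1, P2], axis=1) @ W_H.T + entity_term)
    alpha = softmax(U @ v)        # (n,) word-level attention weights
    return alpha @ H              # sentence representation z

# Toy shapes: n=6 words, 2*d_h=8, d_p=4, K=3 latent types, d_a=8
n, dh2, d_p, K, d_a = 6, 8, 4, 3, 8
rng = np.random.default_rng(0)
H, P1, P2 = rng.normal(size=(n, dh2)), rng.normal(size=(n, d_p)), rng.normal(size=(n, d_p))
C = rng.normal(size=(K, dh2))                    # latent type vectors, same size as h_e here
W_H = rng.normal(size=(d_a, dh2 + 2 * d_p))
W_E = rng.normal(size=(d_a, 4 * dh2))            # input is [h_e1; t1; h_e2; t2]
v = rng.normal(size=d_a)
z = entity_aware_attention(H, P1, P2, e1=1, e2=4, C=C, W_H=W_H, W_E=W_E, v=v)
```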
Classification and Training
The sentence representation obtained from the entity-aware attention z is fed into a fully connected softmax layer for classification. It produces the conditional probability p(y|S, θ) over all relation types:
(3.13) $p(y|S, \theta) = \text{softmax}(W^{O} z + b^{O})$
where $y$ is a target relation class and $S$ is the input sentence. $\theta$ denotes all learnable parameters of the network, including $W^{O} \in \mathbb{R}^{|R| \times 2d_h}$ and $b^{O} \in \mathbb{R}^{|R|}$, where $|R|$ is the number of relation classes. The loss function $L$ is the cross entropy between the predictions and the ground truths, which is defined as:
(3.14) $L = -\sum_{i=1}^{|D|} \log p(y^{(i)} \mid S^{(i)}, \theta) + \lambda \|\theta\|_2^2$
where $|D|$ is the size of the training dataset and $(S^{(i)}, y^{(i)})$ is the $i$-th sample in the dataset. We minimize the loss $L$ using the AdaDelta optimizer [29] to learn the parameters $\theta$ of our model. To alleviate overfitting, we apply L2 regularization with coefficient $\lambda$ [13]. In addition, dropout is applied after the word embedding, LSTM network, and entity-aware attention layers to prevent co-adaptation of hidden units by randomly omitting feature detectors [7,28].
Experiments
Dataset and Evaluation Metrics
We evaluate our model on the SemEval-2010 Task 8 dataset, which is a commonly used benchmark for relation classification [6], and compare the results with the state-of-the-art models in this area. The dataset contains 10 distinct relations: Cause-Effect, Instrument-Agency, Product-Producer, Content-Container, Entity-Origin, Entity-Destination, Component-Whole, Member-Collection, Message-Topic, and Other. The former 9 relations have two directions, whereas Other is not directional, so the total number of relation classes is 19. There are 10,717 annotated sentences, consisting of 8,000 samples for training and 2,717 samples for testing. We adopt the official evaluation metric of SemEval-2010 Task 8, which is based on the macro-averaged F1-score (excluding Other) and takes directionality into consideration.
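As a rough approximation of that metric, the sketch below uses scikit-learn to macro-average F1 over the 18 directional classes while excluding Other; note that the official SemEval scorer has additional conventions (it aggregates the two directions of each relation before averaging), so this is only a simplified stand-in, and the label encoding is hypothetical.

```python
from sklearn.metrics import f1_score

# Hypothetical label encoding: 0..17 are the 9 directional relations x 2 directions,
# and 18 is the artificial class "Other".
OTHER = 18
directional_labels = list(range(18))

def approx_semeval_macro_f1(y_true, y_pred):
    # Macro-averaged F1 computed only over the non-Other classes;
    # predicting Other for a directional instance still hurts its recall.
    return f1_score(y_true, y_pred, labels=directional_labels,
                    average="macro", zero_division=0)

# Tiny usage example with made-up predictions
print(approx_semeval_macro_f1([0, 3, 18, 7], [0, 3, 7, 7]))
```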
Implementation Details
We tune the hyperparameters of our model on a development set of 800 sentences randomly sampled for validation. The best hyperparameters of our proposed model are shown in Table 1. We use pre-trained weights of the publicly available GloVe model [15] to initialize the word embeddings in our model, and the other weights are randomly initialized from a zero-mean Gaussian distribution [3]. Table 2 compares our Entity-aware Attention LSTM model with state-of-the-art models on this relation classification dataset. We divide the models into three groups: Non-Neural Models, SDP-based Models, and End-to-End Models. First, the SVM [16], a Non-Neural Model, was at the top of the SemEval-2010 task during the official competition period. It used many handcrafted features together with an SVM classifier and achieved an F1-score of 82.2%. The second group comprises SDP-based Models such as MVRNN [18], FCM [27], DepNN [9], depLCNN+NS [22], SDP-LSTM [24], and DRNNs [23]. The shortest dependency path (SDP) provides reasonable features for detecting the semantic structure of sentences. The SDP-based models indeed show high performance, but the SDP may not always be accurate and parsing time grows rapidly with sentence length. The last group, End-to-End Models, automatically learns internal representations between the original inputs and the final outputs, as is typical in deep learning. These include CNN-based models such as CNN [30,14], CR-CNN [2], and Attention-CNN [8], and RNN-based models such as BLSTM [32], Attention-BLSTM [34], and Hierarchical-BLSTM (Hier-BLSTM) [25].
Experimental Results
Our proposed model achieves an F1-score of 85.2%, which outperforms all competing state-of-the-art approaches except depLCNN+NS, DRNNs, and Attention-CNN. However, those models rely on high-level lexical features such as WordNet, dependency parse trees, POS tags, and NER tags obtained from NLP tools.
The experimental results also show that the LET is effective for relation classification. LET improves performance by 0.5% over the model without it, and the model showed the best performance with three latent types.
Visualization
We present three different visualizations to demonstrate that our model is more interpretable. First, the visualization of self attention shows which parts of a sentence each word focuses on. By showing the words that the entity pair attends to, we can find the words that best represent the relation between them. Next, the entity-aware attention visualization shows where the model attends in a sentence. This visualization highlights important words in a sentence, which are usually important keywords for classification. Finally, we visualize the type representations of LET using t-SNE [10], a method for dimensionality reduction, and group all entities in the dataset by their latent types.
Self Attention
We can obtain richer word representations by using self attention. These word representations take the context into account based on the correlations between words in a sentence. Figure 4 illustrates the results of the self attention for the sentence "the 〈e1〉pollution〈/e1〉 was caused by the 〈e2〉shipwreck〈/e2〉", which is labeled Cause-Effect(e1,e2). The figure visualizes two heads of the multi-head attention applied for self attention. The color density indicates the attention values, the results of Equation 3.1, which express how much an entity focuses on each word in the sentence. In Figure 4, the left side represents the words that pollution, the first entity, focuses on, and the right side represents the words that shipwreck, the second entity, focuses on. We can see that the entity pair commonly concentrates on was, caused, and each other. These words indeed play the most important role in semantically predicting Cause-Effect(e1,e2), the relation class of this entity pair.
Figure 5: Visualization of Entity-aware Attention
Figure 5 shows where the model focuses in a sentence to compute the relation between the entity pair, which is the result of visualizing the alpha vectors in Equation 3.9. The important words in a sentence are highlighted in yellow, where the clearer the color, the more important the word. For example, in the first sentence, inside is strongly highlighted, which is indeed the word that best represents the relation Component-Whole(e1,e2) between the given entity pair. As another example, in the third sentence, the highlighted assess and using represent well the relation Instrument-Agency(e2,e1) between the entity pair analysts and frequency. We can see that using is more highlighted than assess, because the former represents the relation better.
Figure 6: Visualization of latent type representations using t-SNE
Figure 6 visualizes the latent type representations $t_{j\in\{1,2\}}$ of Equation 3.12. Since the dimensionality of the representation vectors is too large to visualize directly, we applied t-SNE, one of the most popular dimensionality reduction methods. In Figure 6, the red points represent the latent type vectors $c_{i\in K}$ and the rest are the latent type representations $t_j$, where the color of each point is determined by the closest latent type vector in the vector space of the original dimensionality. The points are generally well divided and are almost uniformly distributed without being biased to one side.
Figure 7: Sets of Entities grouped by Latent Types
Figure 7 summarizes the 50 entities closest to each latent type vector. This allows us to roughly understand what the latent types of entities represent. We use a total of three types and find that similar characteristics appear among the words grouped together. In type 1, the words are related to human jobs and foods. Type 2 has many entities related to machines and engineering, like engine, woofer, and motor. Finally, in type 3, there are many words with negative meanings associated with disasters and drugs. As a result, each type contains a set of words with similar characteristics, which shows that LET works effectively.
Conclusion
In this paper, we proposed an entity-aware attention mechanism with latent entity typing and a novel end-to-end recurrent neural model which incorporates this mechanism for relation classification. Our model achieves an 85.2% F1-score on SemEval-2010 Task 8 using only raw sentences and word embeddings, without any high-level features from NLP tools, and it outperforms existing state-of-the-art methods. In addition, our three visualizations of the attention mechanisms applied to the model demonstrate that our model is more interpretable than previous models. We expect our model to be extended not only to the relation classification task but also to other tasks in which entities play an important role. In particular, latent entity typing can be effectively applied to sequence modeling tasks that use entity information without NER. In the future, we will propose new methods for question answering or knowledge base population based on the relations between entities extracted by our model. | 3,364
1901.08163 | 2912974420 | Classifying semantic relations between entity pairs in sentences is an important task in Natural Language Processing (NLP). Most previous models for relation classification rely on high-level lexical and syntactic features obtained by NLP tools such as WordNet, dependency parsers, part-of-speech (POS) taggers, and named entity recognizers (NER). In addition, state-of-the-art neural models based on attention mechanisms do not fully utilize information about entities, which may be the most crucial features for relation classification. To address these issues, we propose a novel end-to-end recurrent neural model which incorporates an entity-aware attention mechanism with a latent entity typing (LET) method. Our model not only utilizes entities and their latent types as features effectively but also is more interpretable by visualizing the attention mechanisms applied to our model and the results of LET. Experimental results on SemEval-2010 Task 8, one of the most popular relation classification tasks, demonstrate that our model outperforms existing state-of-the-art models without any high-level features. | On the other hand, deep neural networks have been shown to outperform previous models that use handcrafted features. In particular, many studies have tried to solve the problem with end-to-end models using only raw sentences and pre-trained word representations learned by Skip-gram and Continuous Bag-of-Words @cite_16 @cite_8 @cite_26 . A deep convolutional neural network (CNN) was employed for extracting lexical and sentence-level features @cite_25 . A ranking-loss-based model that learns a vector for each relation class was proposed to reduce the impact of artificial classes @cite_15 . Zhang and Wang used a bidirectional recurrent neural network (RNN) to learn long-term dependencies between entity pairs @cite_21 . Furthermore, a bidirectional LSTM network (BLSTM) utilizing word positions, POS tags, named entity information, and dependency parses was proposed @cite_24 . This model resolved the vanishing gradient problem that appears in RNNs by using a BLSTM.
"abstract": [
"",
"",
"Deep learning has gained much success in sentence-level relation classification. For example, convolutional neural networks (CNN) have delivered competitive performance without much effort on feature engineering as the conventional pattern-based methods. Thus a lot of works have been produced based on CNN structures. However, a key issue that has not been well addressed by the CNN-based method is the lack of capability to learn temporal features, especially long-distance dependency between nominal pairs. In this paper, we propose a simple framework based on recurrent neural networks (RNN) and compare it with CNN-based model. To show the limitation of popular used SemEval-2010 Task 8 dataset, we introduce another dataset refined from MIMLRE(, 2014). Experiments on two different datasets strongly indicates that the RNN-based model can deliver better performance on relation classification, and it is particularly capable of learning long-distance relation patterns. This makes it suitable for real-world applications where complicated expressions are often involved.",
"Relation classification is an important semantic processing, which has achieved great attention in recent years. The main challenge is the fact that important information can appear at any position in the sentence. Therefore, we propose bidirectional long short-term memory networks (BLSTM) to model the sentence with complete, sequential information about all words. At the same time, we also use features derived from the lexical resources such as WordNet or NLP systems such as dependency parser and named entity recognizers (NER). The experimental results on SemEval-2010 show that BLSTMbased method only with word embeddings as input features is sufficient to achieve state-of-the-art performance, and importing more features could further improve the performance.",
"Relation classification is an important semantic processing task for which state-ofthe-art systems still rely on costly handcrafted features. In this work we tackle the relation classification task using a convolutional neural network that performs classification by ranking (CR-CNN). We propose a new pairwise ranking loss function that makes it easy to reduce the impact of artificial classes. We perform experiments using the the SemEval-2010 Task 8 dataset, which is designed for the task of classifying the relationship between two nominals marked in a sentence. Using CRCNN, we outperform the state-of-the-art for this dataset and achieve a F1 of 84.1 without using any costly handcrafted features. Additionally, our experimental results show that: (1) our approach is more effective than CNN followed by a softmax classifier; (2) omitting the representation of the artificial class Other improves both precision and recall; and (3) using only word embeddings as input features is enough to achieve state-of-the-art results if we consider only the text between the two target nominals.",
"The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of \"Canada\" and \"Air\" cannot be easily combined to obtain \"Air Canada\". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible.",
"The state-of-the-art methods used for relation classification are primarily based on statistical machine learning, and their performance strongly depends on the quality of the extracted features. The extracted features are often derived from the output of pre-existing natural language processing (NLP) systems, which leads to the propagation of the errors in the existing tools and hinders the performance of these systems. In this paper, we exploit a convolutional deep neural network (DNN) to extract lexical and sentence level features. Our method takes all of the word tokens as input without complicated pre-processing. First, the word tokens are transformed to vectors by looking up word embeddings 1 . Then, lexical level features are extracted according to the given nouns. Meanwhile, sentence level features are learned using a convolutional approach. These two level features are concatenated to form the final extracted feature vector. Finally, the features are fed into a softmax classifier to predict the relationship between two marked nouns. The experimental results demonstrate that our approach significantly outperforms the state-of-the-art methods."
],
"cite_N": [
"@cite_26",
"@cite_8",
"@cite_21",
"@cite_24",
"@cite_15",
"@cite_16",
"@cite_25"
],
"mid": [
"",
"",
"1838058638",
"2293023260",
"2155454737",
"2153579005",
"2250521169"
]
} | Semantic Relation Classification via Bidirectional LSTM Networks with Entity-aware Attention using Latent Entity Typing | Classifying semantic relations between entity pairs in sentences plays a vital role in various NLP tasks, such as information extraction, question answering and knowledge base population [14]. A task of relation classification is defined as predicting a semantic relationship between two tagged entities in a sentence. For example, given a sentence with tagged entity pair, crash and attack, this sentence is classified into the re-lation Cause-Effect(e1,e2) 1 between the entity pair like Figure 1. A first entity is surrounded by e1 and /e1 , and a second entity is surrounded by e2 and /e2 .
Most previous relation classification models rely heavily on high-level lexical and syntactic features obtained from NLP tools such as WordNet, dependency parser, part-of-speech (POS) tagger, and named entity recognizer (NER). The classification models relying on such features suffer from propagation of implicit error of the tools and they are computationally expensive.
Recently, many studies therefore propose end-toend neural models without the high-level features. Among them, attention-based models, which focus to the most important semantic information in a sentence, show state-of-the-art results in a lot of NLP tasks. Since these models are mainly proposed for solving translation and language modeling tasks, they could not fully utilize the information of tagged entities in relation classification task. However, tagged entity pairs could be powerful hints for solving relation classification task. For example, even if we do not consider other words except the crash and attack, we intuitively know that the entity pair has a relation Cause-Effect(e1,e2) 1 better than Component-Whole(e1,e2) 1 in Figure 1 To address these issues, We propose a novel endto-end recurrent neural model which incorporates an entity-aware attention mechanism with a latent entity typing (LET). To capture the context of sentences, We obtain word representations by self attention mechanisms and build the recurrent neural architecture with Bidirectional Long Short-Term Memory (LSTM) networks. Entity-aware attention focuses on the most important semantic information considering entity pairs with word positions relative to these pairs and latent types obtained by LET.
The contributions of our work are summarized as follows: (1) We propose a novel end-to-end recurrent neural model and an entity-aware attention mechanism with LET, which focuses on the semantic information of entities and their latent types; (2) Our model obtains an 85.2% F1-score on SemEval-2010 Task 8 and outperforms existing state-of-the-art models without any high-level features; (3) We show that our model is more interpretable, since its decision-making process can be visualized with self attention, entity-aware attention, and LET.
Figure 2: The architecture of our model (best viewed in color). Entities 1 and 2 correspond to the 3rd and (n − 1)-th words, respectively, which are fed into the LET.
Model
In this section, we introduce a novel recurrent neural model that incorporate an entity-aware attention mechanism with a LET method in detail. As shown in Fig-ure 2, our model consists of four main components: (1) Word Representation that maps each word in a sentence into vector representations; (2) Self Attention that captures the meaning of the correlation between words based on multi-head attention [20]; (3) BLSTM which sequentially encodes the representations of self attention layer; (4) Entity-aware Attention that calculates attention weights with respect to the entity pairs, word positions relative to these pairs, and their latent types obtained by LET. After that, the features are averaged along the time steps to produce the sentencelevel features.
Word Representation
Let a input sentence is denoted by S = {w 1 , w 2 , ..., w n }, where n is the number of words. We transform each word into vector representations by looking up word embedding matrix W word ∈ R dw×|V | , where d w is the dimension of the vector and |V | is the size of vocabulary. Then the word representations X = {x 1 , x 2 , ..., x n } are obtained by mapping w i , the i-th word, to a column vector x i ∈ R dw are fed into the next layer.
Self Attention
The word representations are fixed for each word, even though meanings of words vary depending on the context. Many neural models encoding sequence of words may expect to learn implicitly of the contextual meaning, but they may not learn well because of the long-term dependency problems [1]. In order for the representation vectors to capture the meaning of words considering the context, we employ the self attention, a special case of attention mechanism, that only requires a single sequence. Self attention has been successfully applied to various NLP tasks such as machine translation, language understanding, and semantic role labeling [20,17,19].
We adopt the multi-head attention formulation [20], one of the methods for implementing self attentions. Figure 3 illustrates the multi-head attention mechanism that consists of several linear transformations and scaled dot-product attention corresponding to the center block of the figure. Given a matrix of n vectors, query Q, key K, and value V , the scaled dot-product attention is calculated by the following equation:
(3.1) $\text{Attention}(Q, K, V) = \text{softmax}\left(\dfrac{QK^{\top}}{\sqrt{d_w}}\right)V$
Figure 3: Multi-Head Self Attention. For self attention, the Q (query), K (key), and V (value) inputs of multi-head attention should be the same vectors. In our work, they are equivalent to X, the word representation vectors.
In multi-head attention, the scaled dot-product attention with linear transformations is performed on r parallel heads so as to pay attention to different parts. The formulation of multi-head attention is then defined as follows:
(3.2) $\text{MultiHead}(Q, K, V) = W^{M}[\text{head}_1; ...; \text{head}_r]$
(3.3) $\text{head}_i = \text{Attention}(W^{Q}_i Q, W^{K}_i K, W^{V}_i V)$
where $[;]$ indicates row concatenation and $r$ is the number of heads. The weights $W^{M} \in \mathbb{R}^{d_w \times d_w}$, $W^{Q}_i \in \mathbb{R}^{d_w/r \times d_w}$, $W^{K}_i \in \mathbb{R}^{d_w/r \times d_w}$, and $W^{V}_i \in \mathbb{R}^{d_w/r \times d_w}$ are learnable parameters for the linear transformations. $W^{M}$ is applied to the concatenated outputs of the scaled dot-product attention, and the others are applied to the query, key, and value of the $i$-th head, respectively.
Because our work requires self attention, the input matrices of multi-head attention, Q, K, and V are all equivalent to X, the word representation vectors. As a result, outputs of multi-head attention are denoted by M = {m 1 , m 2 , ..., m n } = MultiHead(X, X, X), where m i is the output vector corresponding to i-th word. The output of self attention layer is the sequence of representations whose include informative factors in the input sentence.
Bidirectional LSTM Network
For sequentially encoding the output of self attention layer, we use a BLSTM [5,4] that consists of two sub LSTM networks: a forward LSTM network which encodes the context of a input sentence and a backward LSTM network which encodes that one of the reverse sentence. More formally, BLSTM works as follows:
(3.4) $\overrightarrow{h_t} = \overrightarrow{\text{LSTM}}(m_t)$
(3.5) $\overleftarrow{h_t} = \overleftarrow{\text{LSTM}}(m_t)$
(3.6) $h_t = [\overrightarrow{h_t}; \overleftarrow{h_t}]$
The representation vectors M obtained from the self attention layer are fed into the network step by step. At time step $t$, the hidden state $h_t \in \mathbb{R}^{2d_h}$ of the BLSTM is obtained by concatenating $\overrightarrow{h_t} \in \mathbb{R}^{d_h}$, the hidden state of the forward LSTM network, and $\overleftarrow{h_t} \in \mathbb{R}^{d_h}$, the backward one, where $d_h$ is the dimension of each LSTM's state:
(3.7) $\overrightarrow{h_t} \in \mathbb{R}^{d_h}, \quad \overleftarrow{h_t} \in \mathbb{R}^{d_h}$
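For readers who want to connect Equations 3.4–3.6 to a concrete implementation, the following is a small PyTorch sketch of a bidirectional LSTM whose output at each time step is the concatenation of the forward and backward hidden states; the dimensions are illustrative only and not the paper's hyperparameters.

```python
import torch
import torch.nn as nn

d_w, d_h = 100, 64                      # illustrative input and hidden sizes
blstm = nn.LSTM(input_size=d_w, hidden_size=d_h,
                batch_first=True, bidirectional=True)

# M: outputs of the self attention layer, shape (batch, n_words, d_w)
M = torch.randn(2, 6, d_w)
H, _ = blstm(M)                         # H has shape (batch, n_words, 2*d_h)

# At each time step t, H[:, t, :d_h] is the forward state and
# H[:, t, d_h:] is the backward state, i.e. h_t = [h_t(fwd); h_t(bwd)].
h_t = H[:, 0, :]
assert h_t.shape[-1] == 2 * d_h
```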
Entity-aware Attention Mechanism
Although many models with attention mechanism achieved state-of-the-art performance in many NLP tasks. However, for the relation classification task, these models lack of prior knowledge for given entity pairs, which could be powerful hints for solving the task. Relation classification differs from sentence classification in that information about entities is given along with sentences.
We propose a novel entity-aware attention mechanism for fully utilizing informative factors in given entity pairs. Entity-aware attention utilizes the two additional features except H = {h 1 , h 2 , ..., h n }, (1) relative position features, (2) entity features with LET, and the final sentence representation z, result of the attention, is computed as follows:
(3.8) $u_i = \tanh\left(W^{H}[h_i; p^{e_1}_i; p^{e_2}_i] + W^{E}[h_{e_1}; t_1; h_{e_2}; t_2]\right)$
(3.9) $\alpha_i = \dfrac{\exp(v^{\top} u_i)}{\sum_{j=1}^{n} \exp(v^{\top} u_j)}$
(3.10) $z = \sum_{i=1}^{n} \alpha_i h_i$
Relative Position Features
In relation classification, the position of each word relative to entities has been widely used for word representations [30,14,8].
Recently, position-aware attention is published as a way to use the relative position features more effectively [33].
It is a variant of attention mechanisms, which use not only outputs of BLSTM but also the relative position features when calculating attention weights. We adopt this method with slightly modification as shown in Equation 3.8. In the equation, p e1 i ∈ R dp and p e2 i ∈ R dp corresponds to the position of the i-th word relative to the first entity (e 1 -th word) and second entity (e 2 -th word) in a sentence respectively, where e j∈{1,2} is a index of j-th entity. Similar to word embeddings, the relative positions are converted to vector representations by looking up learnable embedding matrix W pos ∈ R dp×(2L−1) , where d p is the dimension of the relative position vectors and L is the maximum sentence length.
Finally, the representations of BLSTM layer take into account the context and the positional relationship with entities by concatenating h i , p e1 i , and p e2 i . The representation is linearly transformed by W H ∈ R da×(2d h +2dp) as in the Equation 3.8.
Entity Features with Latent Type
Since entity pairs are powerful hints for solving relation classification task, we involve the entity pairs and their types in the attention mechanism to effectively train relations between entity pairs and other words in a sentence. We employ the two entity-aware features. The first is the hidden states of BLSTM corresponding to positions of entity pairs, which are high-level features representing entities. These are denoted by h ei ∈ R 2d h , where e i is index of i-th entity.
In addition, latent types of the entities obtained by LET, our proposed novel method, are the second one. Using types as features can be a great way to improve performance, since the types of entities alone can be inferred the approximate relations. Because the annotated types are not given, we use the latent type representations by applying the LET inspired by latent topic clustering, a method for predicting latent topic of texts in question answering task [26]. The LET constructs the type representations by weighting K latent type vectors based on attention mechanisms. The mathematical formulation is the follows:
(3.11) $a^{j}_{i} = \dfrac{\exp(h_{e_j}^{\top} c_i)}{\sum_{k=1}^{K} \exp(h_{e_j}^{\top} c_k)}$
(3.12) $t_{j\in\{1,2\}} = \sum_{i=1}^{K} a^{j}_{i} c_i$
where c i is the i-th latent type vector and K is the number of latent entity types. As a result, entity features are constructed by concatenating the hidden states corresponding entity positions and types of entity pairs. After linear transformation of the entity features, they add up with the representations of BLSTM layer as in Equation 3.8, and the representation of sentence z ∈ R 2d h is computed by Equations from 3.8 to 3.10.
Classification and Training
The sentence representation obtained from the entity-aware attention z is fed into a fully connected softmax layer for classification. It produces the conditional probability p(y|S, θ) over all relation types:
(3.13) $p(y|S, \theta) = \text{softmax}(W^{O} z + b^{O})$
where $y$ is a target relation class and $S$ is the input sentence. $\theta$ denotes all learnable parameters of the network, including $W^{O} \in \mathbb{R}^{|R| \times 2d_h}$ and $b^{O} \in \mathbb{R}^{|R|}$, where $|R|$ is the number of relation classes. The loss function $L$ is the cross entropy between the predictions and the ground truths, which is defined as:
(3.14) $L = -\sum_{i=1}^{|D|} \log p(y^{(i)} \mid S^{(i)}, \theta) + \lambda \|\theta\|_2^2$
where |D| is the size of training dataset and (S (i) , y (i) ) is the i-th sample in the dataset. We minimize the loss L using AdaDelta optimizer [29] to compute the parameters θ of our model. To alleviate overfitting, we constrain the L2 regularization with the coefficient λ [13]. In addition, the dropout method is applied after word embedding, LSTM network, and entity-aware attention to prevent co-adaptation of hidden units by randomly omitting feature detectors [7,28].
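As a rough illustration of this training setup (cross-entropy loss, L2 regularization, and the AdaDelta optimizer), the PyTorch fragment below uses a hypothetical classifier head standing in for the full model of Figure 2; the coefficient values and dimensions are placeholders, not the paper's hyperparameters.

```python
import torch
import torch.nn as nn

# Hypothetical classifier head standing in for the full model
model = nn.Sequential(nn.Linear(128, 19))            # |R| = 19 relation classes
criterion = nn.CrossEntropyLoss()                    # cross-entropy term of Eq. 3.14
optimizer = torch.optim.Adadelta(model.parameters(),
                                 weight_decay=1e-5)  # weight_decay acts as the L2 term

z = torch.randn(32, 128)                             # batch of sentence representations
y = torch.randint(0, 19, (32,))                      # ground-truth relation classes

optimizer.zero_grad()
loss = criterion(model(z), y)
loss.backward()
optimizer.step()
```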
Experiments
Dataset and Evaluation Metrics
We evaluate our model on the SemEval-2010 Task 8 dataset, which is an commonly used benchmark for relation classification [6] and compare the results with the state-of-the-art models in this area. The dataset contains 10 distinguished relations, Cause-Effect, Instrument-Agency, Product-Producer, Content-Container, Entity-Origin, Entity-Destination, Component-Whole, Member-Collection, Message-Topic, and Other. The former 9 relations have two directions, whereas Other is not directional, so the total number of relations is 19. There are 10,717 annotated sentences which consist of 8,000 samples for training and 2,717 samples for testing. We adopt the official evaluation metric of SemEval-2010 Task 8, which is based on the macro-averaged F1-score (excluding Other ), and takes into consideration the directionality.
Implementation Details
We tune the hyperparameters for our model on the development set randomly sampled 800 sentences for validation. The best hyperparameters in our proposed model are shown in following Table 1 We use pre-trained weights of the publicly available GloVe model [15] to initialize word embeddings in our model, and other weights are randomly initialized from zero-mean Gaussian distribution [3]. Table 2 compares our Entity-aware Attention LSTM model with state-of-theart models on this relation classification dataset. We divide the models into three groups, Non-Neural Model, SDP-based Model, and End-to-End Model. First, the SVM [16], Non-Neural Model, was top of the SemEval-2010 task, during the official competition period. They used many handcraft feature and SVM classifier. As a result, they achieved an F1-score of 82.2%. The second is SDP-based Model such as MVRNN [18], FCM [27], DepNN [9], depLCNN+NS [22], SDP-LSTM [24], and DRNNs [23]. The SDP is reasonable features for detecting semantic structure of sentences. Actually, the SDP-based models show high performance, but SDP may not always be accurate and the parsing time is exponentially increased by long sentences. The last model is End-to-End Model automatically learned internal representations can occur between the original inputs and the final outputs in deep learning. There are CNN-based models such as CNN [30,14], CR-CNN [2], and Attention-CNN [8] and RNN-based models such as BLSTM [32], Attention-BLSTM [34], and Hierarchical-BLSTM (Hier-BLSTM) [25] for this task.
Experimental Results
Our proposed model achieves an F1-score of 85.2%, which outperforms all competing state-of-the-art approaches except depLCNN+NS, DRNNs, and Attention-CNN. However, those models rely on high-level lexical features such as WordNet, dependency parse trees, POS tags, and NER tags obtained from NLP tools.
The experimental results show that the LET is effective for relation classification. The LET improve a performance of 0.5% than the model not applied it. The model showed the best performance with three types.
Visualization
There are three different visualization to demonstrate that our model is more interpretable. First, the visualization of self attention shows where each word focus on parts of a sentence. By showing the words that the entity pair attends, we can find the words that well represent the relation between them. Next, the entity-aware attention visualization shows where the model pays attend to a sentence. This visualization result highlights important words in a sentence, which are usually important keywords for classification. Finally, we visualize representation of type in LET by using t-SNE [10], a method for dimensionality reduction, and group the whole entities in the dataset by the its latent types.
Self Attention
We can obtain the richer word representations by using self attentions. These word representations are considered the context based on correlation between words in a sentence. The Figure 4 illustrates the results of the self attention in the sentence, "the 〈e1〉pollution〈/e1〉was caused by the 〈e2〉shipwrek〈/e2〉", which is labeled Cause-Effect(e1,e2). There are visualizations of the two heads in the multi-head attention applied for self attention. The color density indicates the attention values, results of Equation 3.1, which means how much an entity focuses on each word in a sentence. In Figure 4, the left represents the words that pollution, the first entity, focuses on and the right represents the words that shipwreck, the second entity, focuses on. We can recognize that the entity pair is commonly concentrated on was, caused, and each other. Actually, these words play the most important role in semantically predicting the Cause-Effect(e1,e2), which is the relation class of this entity pair. Figure 5 shows where the model focuses on the sentence to compute relations between entity pairs, which is the result of visualizing the alpha vectors in Equation 3.9. The important words in sentence are highlighted in yellow, which means that the more clearly the color is, the more important it is. For example, in the first sentence, the inside is strongly highlighted, which is actually the best word representing the relation Component-whole(e1,e2) between the given entity pair. As another example, in the third sentence, the highlighted assess and using represent the relation, Figure 5: Visualization of Entity-aware Attention Instrument-Agency(e2,e1) between entity pair, analysts and frequency, well. We can see that the using is more highlighted than the assess, because the former represents the relation better. Figure 6: Visualization of latent type representations using t-SNE Figure 6 visualizes latent type representation t j∈{1,2} in Equation 3.12 Since the dimensionality of representation vectors are too large to visualize, we applied the t-SNE, one of the most popular dimensionality reduction methods. In Figure 6, the red points represent latent type vectors c i∈K and the rests are latent type representations t j , where the colors of points are determined by the closest of the latent type vectors in the vector space of the original dimensionality. The points are generally well divided and are almost uniformly distributed without being biased to one side. Figure 7 summarizes the results of extracting 50 entities in close order with each latent type vector. This allows us to roughly understand what latent types of entities are. We use a total of three types and find that similar characteristics appear in words grouped by together. In the type 1, the words are related to human's jobs and foods. The type2 has a lot of entities related to machines and engineering like engine, woofer, and motor. Finally, in type3, there are many words with bad meanings related associated with disasters and Figure 7: Sets of Entities grouped by Latent Types drugs. As a result, each type has a set of words with similar characteristics, which can prove that LET works effectively.
Entity-aware Attention
Latent Entity Type
Conclusion
In this paper, we proposed entity-aware attention mechanism with latent entity typing and a novel end-to-end recurrent neural model which incorporates this mechanism for relation classification. Our model achieves 85.2% F1-score in SemEval-2010 Task 8 using only raw sentence and word embeddings without any high-level features from NLP tools and it outperforms existing state-of-the-art methods. In addition, our three visualizations of attention mechanisms applied to the model demonstrate that our model is more interpretable than previous models. We expect our model to be extended not only the relation classification task but also other tasks that entity plays an important role. Especially, latent entity typing can be effectively applied to sequence modeling task using entity information without NER. In the future, we will propose a new method in question answering or knowledge base population based on relations between entities extracted from our model. | 3,364 |
1901.08201 | 2911583982 | Abstract To improve the performance of Intensive Care Units (ICUs), the field of bio-statistics has developed scores which try to predict the likelihood of negative outcomes. These help evaluate the effectiveness of treatments and clinical practice, and also help to identify patients with unexpected outcomes. However, they have been shown by several studies to offer sub-optimal performance. Alternatively, Deep Learning offers state of the art capabilities in certain prediction tasks and research suggests deep neural networks are able to outperform traditional techniques. Nevertheless, a main impediment for the adoption of Deep Learning in healthcare is its reduced interpretability, for in this field it is crucial to gain insight into the why of predictions, to assure that models are actually learning relevant features instead of spurious correlations. To address this, we propose a deep multi-scale convolutional architecture trained on the Medical Information Mart for Intensive Care III (MIMIC-III) for mortality prediction, and the use of concepts from coalitional game theory to construct visual explanations aimed at showing how important these inputs are deemed by the network. Results show our model attains a ROC AUC of 0.8735 (±0.0025), which is competitive with the state of the art of Deep Learning mortality models trained on MIMIC-III data, while remaining interpretable. Supporting code can be found at https://github.com/williamcaicedo/ISeeU . | Although the most natural application of Deep Learning algorithms to medical diagnosis is automated medical image diagnosis @cite_38 , Physiological Time Series (PTS) and Electronic Medical Record (EMR) data are a more general source of data on which machine learning models can be trained. EMRs are very attractive as a potential data source since their use is widespread, which makes them abundant and accessible electronically. However, there are certain challenges associated with their “secondary use” in Machine Learning @cite_39 . Despite this, several works have reported the successful use of EMRs and PTS to train Machine Learning and Deep Learning based models for diagnosis.
"abstract": [
"This review covers computer-assisted analysis of images in the field of medical imaging. Recent advances in machine learning, especially with regard to deep learning, are helping to identify, classify, and quantify patterns in medical images. At the core of these advances is the ability to exploit hierarchical feature representations learned solely from data, instead of features designed by hand according to domain-specific knowledge. Deep learning is rapidly becoming the state of the art, leading to enhanced performance in various medical applications. We introduce the fundamentals of deep learning methods and review their successes in image registration, detection of anatomical and cellular structures, tissue segmentation, computer-aided disease diagnosis and prognosis, and so on. We conclude by discussing research issues and suggesting future directions for further improvement.",
"Clinical data management systems typically provide caregiver teams with useful information, derived from large, sometimes highly heterogeneous, data sources that are often changing dynamically. Over the last decade there has been a significant surge in interest in using these data sources, from simply reusing the standard clinical databases for event prediction or decision support, to including dynamic and patient-specific information into clinical monitoring and prediction problems. However, in most cases, commercial clinical databases have been designed to document clinical activity for reporting, liability, and billing reasons, rather than for developing new algorithms. With increasing excitement surrounding “secondary use of medical records” and “Big Data” analytics, it is important to understand the limitations of current databases and what needs to change in order to enter an era of “precision medicine.” This review article covers many of the issues involved in the collection and preprocessing of critical care data. The three challenges in critical care are considered: compartmentalization, corruption, and complexity. A range of applications addressing these issues are covered, including the modernization of static acuity scoring; online patient tracking; personalized prediction and risk assessment; artifact detection; state estimation; and incorporation of multimodal data sources such as genomic and free text data."
],
"cite_N": [
"@cite_38",
"@cite_39"
],
"mid": [
"2533800772",
"2277786047"
]
} | ISeeU: Visually interpretable deep learning for mortality prediction inside the ICU | Intensive Care Units (ICUs) have helped to make improvements in mortality, length of stay and complication rates among patients [1], but they are costly to operate and skilled personnel to staff them sometimes seem to be in short supply [1]. For this reason, research efforts to better equip ICUs to handle patients in a more cost-effective manner are warranted.
The field of bio-statistics has produced throughout the years a series of predictive scores which try to quantify the likelihood of negative outcomes (i.e. death) in clinical settings. These tools are necessary to evaluate the effectiveness of treatments and clinical practice, and to identify patients with unexpected outcomes [2]. Scores such as APACHE (in its several versions), SAPS, MODS and others have had moderate success [2]. Although their performance is not optimal, they have become de facto standards for severity and mortality risk prediction. These scores have been built using statistical techniques such as Logistic Regression, which are limited to the modeling of linear decision boundaries, when it is quite likely that the actual dynamics of the related biological systems do not conform to such a prior. A reason for limiting the modeling to linear/additive techniques such as Logistic Regression is that they tend to be readily interpretable, allowing medical staff to derive rules and gain insight into the reasons why such a score is predicting a certain risk or mortality probability. However, such statistical approaches (APACHE, SAPS, MODS, etc.) have been shown by several studies to generalize sub-optimally [3,4]. [4] show that over time, fixed scores' performance tends to deteriorate (e.g. APACHE III-j over-predicts mortality in Australasia), and cite as possible reasons changes in medical practice and better care. It is no wonder, then, that ICU mortality prediction appears to have reached a plateau [1].
On the other hand, Deep Learning offers state of the art capabilities in object recognition and several related areas, and those capabilities can be used to learn to detect patterns in patient data and predict the likelihood of negative outcomes. A reliable survival prediction system using Machine Learning concepts such as supervised fine-tuning (with pre-training that uses data from a related domain) and online learning (keep learning after deployment) could overcome the degradation problems exhibited by fixed scores, by being able to learn from the environments where they are being deployed. This would benefit ICUs everywhere, allowing staff to benchmark ICU performance and improve treatment protocols and practice [2].
Machine Learning models depend on data for training, and in the case of Deep Learning, the amount of data needed to reach adequate performance can be larger than what traditional Machine Learning models require. However, today there is a deluge of data coming from various disparate sources, and said data sometimes sits in databases without much use. In the case of Electronic Medical Records, detailed information about patients as visit records and sociodemographic data is stored indefinitely and could be leveraged to train predictive models that enable precision healthcare.
One of the main impediments for widespread adoption of advanced Machine Learning and Deep Learning in healthcare is lack of interpretability [5,6]. There seems to be a trade-off between predictive accuracy and interpretability in the landscape of learning algorithms, and in the case of Deep Neural Networks, models of greater depth consistently outperform shallower ones in some tasks [7,8,9], at the expense of simpler representations. Crucially, high capacity Machine Learning models can easily latch onto epistemically flawed correlations and statistical flukes as long as they help minimize the loss in the training set, because the minimization of the associated loss function does not care for causality but merely for correlation [6]. For instance, in one well-known case a neural network [10] was trained to predict the risk of death in patients with pneumonia, and it was found that the model consistently predicted lower probability of death for patients who also had asthma. There was a counter-intuitive correlation in the training data that did not reflect any causality whatsoever, just the fact that asthma patients were treated more aggressively and thus fared better in average. The model in question performed better than the rest of models considered but it was ultimately discarded in favor of less performant, but interpretable ones. It is crucial then to offer mechanisms to gain insight on the why of predictions, i.e. the features our models attend to when generating an output, to make sure that models are actually learning sensible features instead of spurious and misleading correlations
In this paper, we propose a multi-scale deep convolutional architecture to tackle the problem of mortality prediction inside the ICU, trained on clinical data. One central feature of our approach is that it is geared to offer interpretable predictions, i.e. predictions accompanied by explanations and/or justifications which make for a more transparent decision process. For the latter we leverage the concept of Shapley Values [11], to create visualizations that convey the importance that the convolutional model assigns to each input feature. The relationship of this work with the existing literature and its main contributions are summarized next.
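As a hedged illustration of how Shapley-value-based attributions can be obtained for a trained deep model, the snippet below uses the open-source shap library's DeepExplainer with a tiny stand-in network and synthetic data; the network, shapes, and call pattern are illustrative assumptions about one common way of using the library, not the exact procedure followed in this paper, and behavior may vary across shap versions.

```python
import torch
import torch.nn as nn
import shap

# Tiny stand-in network over flattened 48h x 22-feature inputs (illustrative only)
model = nn.Sequential(nn.Flatten(), nn.Linear(48 * 22, 16), nn.ReLU(), nn.Linear(16, 1))

background = torch.randn(100, 48, 22)          # background distribution (e.g. training samples)
patients = torch.randn(5, 48, 22)              # patients to explain

explainer = shap.DeepExplainer(model, background)
shap_values = explainer.shap_values(patients)  # per-feature, per-hour attributions

# Attributions can be aggregated over the time axis to rank features for a single
# patient (patient-level view) or averaged over a test set (dataset-level view).
```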
Our work relates to the existing literature in a number of ways. We use Deep Learning for mortality prediction inside the ICU as it also has been used by Che et al [12,5], Grnarova et al [13] and Purushotham et al [14], but our work has key differences:
• We are able to show that ConvNets offer predictive performance comparable to the reported performance of RNNs when dealing with physiological time-series data from MIMIC-III.
• We show evidence that a deep convolutional architecture can handle both static and dynamic data from MIMIC-III, making hybrid architectures (DNN/RNN) unnecessary at this particular task and performance level.
• We achieve the previously mentioned results using simple forward/backward filling imputation and mean imputation instead of more involved and computationally expensive approaches.
Regarding the problem of interpretability, the work most related to ours is the one by Che et al [5]. However, in this case there are also some important differences:
• Che et al sidestep the problem of interpreting a deep model directly by using Mimic Learning with an interpretable student model (Gradient Boosted Trees) [15], while our work focuses instead on directly interpreting a deep model trained to predict ICU mortality, without using any surrogate model.
• We are able to provide not only dataset-level interpretability but also patient-level interpretability.
• Our model works with raw features instead of pre-processed ones.
On the other hand, our architecture uses multi-scale convolutional layers and a "channel" input representation, similar to [16], but for a different task (mortality prediction instead of clinical intervention prediction). We also note that the use of Shapley Values [11] or their approximations for providing interpretability in the ICU setting has not, to the best of our knowledge, been reported in the relevant literature.
Methods and materials
Participants
We used the Medical Information Mart for Intensive Care III (MIMIC-III v1.4) to train our deep models. MIMIC-III is a database comprising more than a decade's worth of different modalities of detailed data from patients admitted to the Beth Israel Deaconess Medical Centre in Boston, Massachusetts, freely available for research [26]. To establish a cohort and build our dataset, several entry criteria were established: we only considered stays longer than 48 hours, only patients older than 16 years at the time of admission were included, and in case of multiple admissions to the ICU, only the first one was considered. Application of these entry criteria led to a dataset containing 22,413 distinct patients.
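A minimal pandas sketch of such a cohort filter is given below. The file and column names (icustays.csv, intime, outtime, age, subject_id) are illustrative placeholders and do not reflect the actual MIMIC-III schema.

```python
import pandas as pd

# Illustrative cohort filter; table and column names are hypothetical, not the MIMIC-III schema.
stays = pd.read_csv("icustays.csv", parse_dates=["intime", "outtime"])

# Length of stay in hours.
stays["los_hours"] = (stays["outtime"] - stays["intime"]).dt.total_seconds() / 3600.0

# Entry criteria: stays longer than 48 hours, patients older than 16 years.
cohort = stays[(stays["los_hours"] >= 48) & (stays["age"] > 16)]

# Keep only the first ICU admission per patient.
cohort = (cohort.sort_values("intime")
                .drop_duplicates(subset="subject_id", keep="first"))

print(f"{cohort['subject_id'].nunique()} distinct patients in cohort")
```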
Input features
For each patient, we extracted measurements of 22 different concepts roughly matching the concepts used by the SAPS-II score [31], during the first 48 hours of each patient's stay. In the case of temporal data, all measurements were extracted, and in case of multiple measurements in the same hour, values were averaged (except urine output, which was summed). To resolve inconsistencies and harmonize sometimes seemingly disparate concepts (e.g. temperature is reported both in Celsius and Fahrenheit, different codes are used for the same measurement, the same or related concepts are present in different tables, etc.), data was preprocessed, measurements were merged and renamed, and un-physiological values were discarded. For privacy reasons, MIMIC-III shifts ages greater than 89 years (such patients appear to be 300 years old). To address this, we clipped all ages greater than 80 years to 80 years (see figure 1). For reproducibility, all of our code is available at https://github.com/williamcaicedo/MortalityPrediction.
[Flattened cohort-description table fragment; recoverable rows (count, prevalence): Surgical admission 8030 (35.827%); AIDS 113 (0.504%); Metastatic cancer 688 (3.069%); Lymphoma 317 (1.414%); Mortality 2185 (9.748%). The remaining columns (N/A entries) do not apply to these binary features.]
Missing data. Due to the nature of patient monitoring, different physiological variables and features are sampled at different rates. This leads to a large number of missing observations, as not all measurements were available hourly. Given this situation, simple data imputation techniques were applied to obtain a 22x48 observation matrix for each patient (static features like age and admission were replicated). Concretely, except for urine observations, forward/backward filling imputation was attempted. After this, outstanding missing FiO2 values were then imputed to their normal values. On the other hand, when multiple observations were present in the same hour, values were averaged. In cases where a patient did not have a single observation recorded, we imputed the whole physiological time series using the empirical mean. Our data imputation procedure is summarized in table 3.
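A minimal pandas sketch of this imputation scheme (forward, then backward filling, with the empirical mean as a fallback for completely unobserved series) is shown below; variable names and the toy data are illustrative.

```python
import numpy as np
import pandas as pd

def impute_patient(hourly: pd.DataFrame, train_means: pd.Series) -> pd.DataFrame:
    """hourly: 48 rows (hours) x 22 columns (predictors), NaN where unobserved."""
    filled = hourly.ffill().bfill()     # forward then backward filling
    # Series with no observation at all fall back to the (training-set) empirical mean.
    filled = filled.fillna(train_means)
    return filled

# Toy usage with random missingness.
rng = np.random.default_rng(0)
demo = pd.DataFrame(rng.normal(size=(48, 22)))
demo[demo > 1.5] = np.nan
imputed = impute_patient(demo, train_means=demo.mean())
assert not imputed.isna().any().any()
```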
Deep Learning model
Our prediction model is a multi-scale deep convolutional neural network (ConvNet). ConvNets are Multi-Layer Neural Networks that use a particular architecture with sparse connections and parameter sharing [32]. They can be thought of as performing a discrete convolution operation between the input (often a two-dimensional image) and a set of trainable kernels at each layer. The discrete convolution operation, in the context of Deep Learning and computer vision, is defined in Eq. (1) below, where I is a two-dimensional image and K is a two-dimensional kernel. The kernel acts as a local feature detector that is displaced all over the image. Each convolution between the input and a kernel produces a spatial receptive field, also called a feature map, in which each kernel-image multiplication can be thought of as pattern matching, producing an output that is a function of the similarity between a certain image region and the kernel itself. After the convolution operation, the output of the receptive field is run through a non-linear activation function, which allows the network to work with transforms of the input space and construct non-linear features. The feature map can be thought of as a 2-D tensor (matrix) of neuron outputs, where the weights of each neuron are the same but have been shifted spatially (hence the parameter sharing), and which are not connected to every single pixel of the input (which can also be seen as having the corresponding weights set to zero). ConvNets were one of the first models to use Gradient Descent with Backpropagation [33] with success [34]. Convolution-based filters are extensively used to detect features such as shapes and edges in computer vision [35]. However, in traditional computer vision fixed kernels are used to detect specific features, in contrast to ConvNets, where kernels are learned directly from the data. We define our model as a multi-scale ConvNet, as we use convolution kernels of different sizes and then concatenate the resulting feature maps into a single layer output tensor (figure 3). To deal with the different characteristics of our input time series, we employ a multi-scale convolutional layer, followed by ReLU activations, average-pooling with a window size of 3, and dropout [36] plus Batch Normalization [37] performed after the concatenation operation. In this layer we employ three temporal scales: three hours, six hours, and twelve hours; each represented by a stack of convolution kernels with dimensions 3x1, 6x1, and 12x1, respectively. The convolutional layer is followed by a fully connected layer with ReLU activations, Dropout, Batch Normalization and a final one-neuron layer with logistic activation. Finally, our input representation places each feature as an image channel instead of stacking them as a 2-D input. This allows us to use 1-D temporal convolutions no matter how many input series we use.
$$s(i, j) = (I * K)(i, j) = \sum_{m,n} I(m, n)\, K(i - m, j - n) \qquad (1)$$
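To make the architecture description above concrete, the following is a minimal Keras sketch of the multi-scale block (22 predictors as channels over 48 hourly steps, 3x1/6x1/12x1 kernels, average pooling, dropout and batch normalization). Filter counts and the dense layer size are illustrative, since the text does not report them.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_multiscale_convnet(n_hours=48, n_features=22, n_filters=32):
    inputs = layers.Input(shape=(n_hours, n_features))  # predictors as channels
    # Multi-scale temporal convolutions: 3-, 6- and 12-hour kernels.
    branches = []
    for k in (3, 6, 12):
        b = layers.Conv1D(n_filters, kernel_size=k, padding="same", activation="relu")(inputs)
        b = layers.AveragePooling1D(pool_size=3)(b)
        branches.append(b)
    x = layers.Concatenate()(branches)
    x = layers.Dropout(0.45)(x)
    x = layers.BatchNormalization()(x)
    x = layers.Flatten()(x)
    x = layers.Dense(64, activation="relu")(x)  # fully connected layer (size assumed)
    x = layers.Dropout(0.45)(x)
    x = layers.BatchNormalization()(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)  # mortality probability
    return Model(inputs, outputs)

model = build_multiscale_convnet()
model.summary()
```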
Shapley Values and input relevance attribution
The Shapley Value [11] is a concept from game theory that formalizes the individual contributions of the players of a coalition to the attainment of a reward. If v describes the worth of a player coalition, we have that
$$Sh_i(v) = \sum_{S \subseteq N \setminus \{i\},\, s = |S|} \frac{(n - s - 1)!\, s!}{n!}\, \big( v(S \cup \{i\}) - v(S) \big) \qquad (2)$$
where Sh_i(v) is the individual contribution of player i to the total worth v(N), i.e. its Shapley value [11]. The summation runs over all possible subsets of players S ⊆ N that do not include player i, and each term involves the difference between the reward when player i is present and absent, v(S ∪ {i}) − v(S). Equation 2 not only considers the presence of a particular player, but also the position it occupies in the coalition. This is extremely useful in the context of our study, where values are time sensitive.
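For small player sets, Eq. (2) can be evaluated exactly by enumeration, as in the toy sketch below; the characteristic function is an arbitrary symmetric example and is not related to the model.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Exact Shapley values by enumerating all subsets (Eq. 2); exponential in |players|."""
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for s in range(len(others) + 1):
            for S in combinations(others, s):
                weight = factorial(s) * factorial(n - s - 1) / factorial(n)
                total += weight * (v(set(S) | {i}) - v(set(S)))
        phi[i] = total
    return phi

# Toy 3-player game: the grand coalition is worth 10, any pair 6, singletons 1.
worth = {0: 0, 1: 1, 2: 6, 3: 10}
print(shapley_values([0, 1, 2], lambda S: worth[len(S)]))  # each player gets 10/3
```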
Strumbelj et al [38] showed that such values can be used to represent the relevance of each input to a machine learning classifier, in order to gain insight into the patterns it considers important for predicting a particular class, and proposed a feature importance attribution method equivalent to calculating the Shapley values. It is worth mentioning that the use of Shapley values for importance attribution is able to take into account the possible interactions between input features in a way occlusion-based methods [39] cannot.
DeepLIFT
Computing equation 2 has combinatorial cost, making it unfeasible for many practical applications, which is why we must resort to approximations. In this context, we will discuss a new importance attribution method, called DeepLIFT. DeepLIFT [40] is an importance attribution method for feed-forward neural networks that is akin to the Layer-wise Relevance Propagation method (LRP) proposed by [41], in the sense that both use a backpropagation-like approach to the calculation and attribution of relevance/importance scores to the input features. DeepLIFT overcomes problems associated with gradient-based attribution methods [42,43], such as saturation, overlooking negative contributions and contributions when the associated gradient is zero, and discontinuities in the gradients [40]. Since the attribution output of LRP was later shown to be roughly equivalent to a factor of a gradient method's output [44], it follows that LRP suffers from problems similar to those outlined before. To compute feature importance the following procedure is carried out: first, a reference input value must be provided. This reference value can be informed by domain knowledge or simply be the empirical mean of the input features. Once the references have been defined, the corresponding network output is computed for both the original input and the reference input. Then the difference between outputs is calculated and backpropagated through the network layers using rules provided by DeepLIFT. This results in importance values that capture how a change in inputs contributes to the observed change in the output.
More formally, for a target neuron t and a collection of neurons $x_1, x_2, \ldots, x_n$ whose outputs are needed to compute the output of t, the method assigns importance attributions $C_{\Delta x_i \Delta t}$, subject to the fact that such attributions are additive and must satisfy
$$\sum_{i=1}^{n} C_{\Delta x_i \Delta t} = \Delta t \qquad (3)$$
where $\Delta t = t_o - t_r$ is the difference between the original and reference outputs of t. DeepLIFT introduces multipliers $m_{\Delta x_i \Delta t} = \frac{C_{\Delta x_i \Delta t}}{\Delta x_i}$ that allow the use of a chain rule to backpropagate the neuron attributions through a hidden layer.
The rule takes the form
$$m_{\Delta x_i \Delta z} = \sum_{j} m_{\Delta x_i \Delta y_j}\, m_{\Delta y_j \Delta z} \qquad (4)$$
where $m_{\Delta x_i \Delta z}$ is the contribution of neuron $x_i$ to the output of neuron z divided by the difference in outputs for neuron $x_i$, $\Delta x_i$, given a hidden layer of neurons $y_j$ in-between (see figure 4). The corresponding contribution $c_{\Delta x_i \Delta z}$ can be recovered from equation 4 as $c_{\Delta x_i \Delta z} = m_{\Delta x_i \Delta z}\, \Delta x_i$.
For a linear unit, the contribution of an input $x_i$ to the output difference $\Delta y$ is simply $w_i \Delta x_i$. To avoid the issues that affect other methods regarding negative contributions, DeepLIFT treats positive and negative contributions separately, which leads to $\Delta y$ and $\Delta x_i$ being decomposed into their positive and negative components
$$\Delta y^{+} = \sum_{i} 1\{w_i \Delta x_i > 0\}\, w_i\, (\Delta x_i^{+} + \Delta x_i^{-}) \qquad (5)$$
$$\Delta y^{-} = \sum_{i} 1\{w_i \Delta x_i < 0\}\, w_i\, (\Delta x_i^{+} + \Delta x_i^{-}) \qquad (6)$$
The contributions can be stated then as
$$c_{\Delta x_i^{+} \Delta y^{+}} = 1\{w_i \Delta x_i > 0\}\, w_i\, \Delta x_i^{+} \qquad (7)$$
$$c_{\Delta x_i^{-} \Delta y^{+}} = 1\{w_i \Delta x_i > 0\}\, w_i\, \Delta x_i^{-} \qquad (8)$$
$$c_{\Delta x_i^{+} \Delta y^{-}} = 1\{w_i \Delta x_i < 0\}\, w_i\, \Delta x_i^{+} \qquad (9)$$
$$c_{\Delta x_i^{-} \Delta y^{-}} = 1\{w_i \Delta x_i < 0\}\, w_i\, \Delta x_i^{-} \qquad (10)$$
For non-linear operations with a single input (e.g. ReLU activations), DeepLIFT proposes the so-called RevealCancel rule, which is able to better uncover nonlinear dynamics [40]. For this case, ∆y decomposes as
$$\Delta y^{+} = \tfrac{1}{2}\big(f(x_0 + \Delta x^{+}) - f(x_0)\big) + \tfrac{1}{2}\big(f(x_0 + \Delta x^{-} + \Delta x^{+}) - f(x_0 + \Delta x^{-})\big) \qquad (11)$$
$$\Delta y^{-} = \tfrac{1}{2}\big(f(x_0 + \Delta x^{-}) - f(x_0)\big) + \tfrac{1}{2}\big(f(x_0 + \Delta x^{+} + \Delta x^{-}) - f(x_0 + \Delta x^{+})\big) \qquad (12)$$
To satisfy Eq. (3) we have that $\Delta y^{+} = c_{\Delta x^{+} \Delta y^{+}}$ and $\Delta y^{-} = c_{\Delta x^{-} \Delta y^{-}}$. Given this, the multipliers for the RevealCancel rule are
$$m_{\Delta x^{+} \Delta y^{+}} = \frac{c_{\Delta x^{+} \Delta y^{+}}}{\Delta x^{+}} = \frac{\Delta y^{+}}{\Delta x^{+}} \qquad (13)$$
$$m_{\Delta x^{-} \Delta y^{-}} = \frac{c_{\Delta x^{-} \Delta y^{-}}}{\Delta x^{-}} = \frac{\Delta y^{-}}{\Delta x^{-}} \qquad (14)$$
What makes DeepLIFT especially relevant is that Lundberg et al [45] have shown that DeepLIFT can be understood as a fast approximation to the real Shapley Values when the feature reference values are set to their expected values. It can be seen that the RevealCancel rule computes the Shapley Values of the positive and negative contributions at the non-linear operations, and the successive application of the chain rule proposed by DeepLIFT allows the approximate Shapley Values to be propagated back to the inputs.
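As an illustration of this connection, the sketch below uses the SHAP library's DeepExplainer, which implements the DeepLIFT-based approximation of Shapley values. This is not the authors' setup (they used the reference DeepLIFT implementation, see below), and `model`, `X_train` and `X_val` are placeholders for a trained Keras network and its input arrays.

```python
import numpy as np
import shap  # assumption: SHAP's DeepExplainer as a stand-in for the reference DeepLIFT code

# `model` is a trained Keras network and X_train a (patients, 48, 22) array (see the sketch above).
background = X_train[np.random.choice(len(X_train), 100, replace=False)]  # reference inputs
explainer = shap.DeepExplainer(model, background)

# Approximate Shapley values with the same shape as the inputs being explained.
attributions = explainer.shap_values(X_val[:10])
```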
Results
We built our ConvNet using Keras [46] with Tensorflow [47] as back-end. Since our dataset is highly unbalanced, with the positive class (death) representing just under 10% of training examples, we used a weighted logarithmic loss giving more weight to positive examples (1:10 importance ratio). We used 5-fold cross validation for a more reliable performance estimate and we standardized the dataset (µ = 0, σ ≈ 1), calculating fold statistics independently to avoid data leakage. We did not perform any substantial hyper-parameter optimization and instead opted for heuristically chosen values (dropout probability of 0.45 and a batch size of 32). Our choice of optimizer was Stochastic Gradient Descent with Nesterov momentum of 0.9 and a learning rate of 0.01 with 1e-7 decay.
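A sketch of this training configuration in Keras follows. The number of epochs is illustrative (not reported in the text), `model` and the data arrays are placeholders, and in recent Keras versions the 1e-7 decay would be expressed as a learning-rate schedule rather than an optimizer argument.

```python
from tensorflow.keras.optimizers import SGD

# Weighted log loss (~1:10), SGD with Nesterov momentum 0.9, learning rate 0.01, batch size 32.
optimizer = SGD(learning_rate=0.01, momentum=0.9, nesterov=True)
model.compile(optimizer=optimizer, loss="binary_crossentropy", metrics=["AUC"])

model.fit(X_train, y_train,
          batch_size=32,
          epochs=20,                       # illustrative; not reported in the text
          class_weight={0: 1.0, 1: 10.0},  # up-weight the minority (death) class
          validation_data=(X_val, y_val))
```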
Model performance
Using this training configuration we obtained a cross validated Receiver Operating Characteristic Area Under the Curve (ROC AUC) of 0.8933 (±0.0032) for the training set, and 0.8735 (±0.0025) ROC AUC for the cross validation set. Using a 0.5 decision threshold, the model reaches 75.423% sensitivity at 82.776% specificity.
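For reference, these metrics can be reproduced from predicted probabilities as in the short sketch below; `model`, `X_val` and `y_val` are placeholders for the trained network and a validation fold.

```python
from sklearn.metrics import roc_auc_score, confusion_matrix

probs = model.predict(X_val).ravel()
print("ROC AUC:", roc_auc_score(y_val, probs))

preds = (probs >= 0.5).astype(int)            # 0.5 decision threshold
tn, fp, fn, tp = confusion_matrix(y_val, preds).ravel()
print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))
```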
Model interpretability
We used the DeepLIFT implementation provided by its authors to compute our input feature importances from a model trained on one cross validation fold. We selected zero (empirical mean) as the reference value. We also computed importances for individual patients and at the dataset level, and created a series of visualizations to offer explanations for the predictions of the model. Visualizations are designed to combine patient features with their importance towards the predicted probability of death. Our visualizations constitute a form of post hoc interpretability [6] insofar as they try to convey how the model regards the inputs in terms of their impact on the predicted probability of death, without having to explain the internal mechanisms of our neural network, nor sacrificing predictive performance.
Predictor importance. Here we treated the patient tensor representation as an image and we grouped feature importance attribution semantically (i.e. observations belong to a particular predictor, as pixels on an image belong to an object) to find net contributions per predictor. Figure 7 shows the feature importances computed for a single patient (predicted probability of mortality: 0.5764, observed mortality: 1), summed over 48 hours for each individual predictor and then normalized over the predictor set. As mentioned, this visualization shows the importance of each predictor as a whole, highlighting in red those predictors that contribute to a positive (death) prediction, and in blue those that contribute to a negative (survival) prediction. Since hourly importances can be either positive or negative in sign, it is possible that the total importance might be close to zero (gray background), even if the individual importances are not. We can clearly see that the network is assigning high positive importance to the components of the Glasgow Coma Scale (GCS), and high negative importance to the age of the patient. These are interesting because the GCS values are abnormal and the patient is very young (20 years old), and it is plausible that a young age is negatively correlated with mortality in the ICU. Predictor importance (hourly). In this visualization we further de-aggregate importance and show the individual approximate Shapley Values for each input value and hour (Figure 8). We can see evidence for the non-linear dynamics the network has learned, as values from the same predictor have different importance across the temporal axis.
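A small NumPy sketch of the aggregation used for this visualization (per-predictor net importance, normalized over the predictor set) is given below; the attribution matrix and feature names are assumed to come from the attribution step above.

```python
import numpy as np

def predictor_importance(attr, feature_names):
    """attr: (48, 22) attribution matrix for one patient (hours x predictors)."""
    net = attr.sum(axis=0)                 # net contribution per predictor over 48 h
    norm = net / np.abs(net).sum()         # normalize over the predictor set
    order = np.argsort(-np.abs(norm))      # most important predictors first
    return [(feature_names[i], float(norm[i])) for i in order]
```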
Positive and negative importance barplot. Alternatively, we can treat positive and negative importances separately to get a better sense of how each predictor affects the final prediction. Figure 9 shows a barplot with positive and negative importance grouped by predictor. Dataset-level feature importances. Additionally, we computed importances for the validation set to offer interpretability at the dataset level. Figure 10 shows dataset-level statistics for the normalized positive and negative importance of each predictor.
Discussion
Our ConvNet model shows strong performance on the MIMIC-III dataset with low variability across folds. Also, performance over training and validation data is quite close, evidencing that our model exhibits good generalization properties, as there is no serious overfitting occurring (0.8933 (±0.0032) vs 0.8735 (±0.0025) ROC AUC). Validation performance reaches the state of the art for mortality prediction on MIMIC-III data and a comparable feature set (95% CI [0.870396, 0.876604] against a 95% CI [0.873706, 0.882894] corresponding to the results reported by [14]). Moreover, our results show that a single convolutional architecture can handle both temporal and static features using simple time replication for the static inputs, instead of using a recurrent/feedforward hybrid architecture as in [14].
Regarding model interpretability, the predictor marginal importance visualization shows that the model is attending to sensible features to predict mortality. As mentioned previously, the model is attending to the components of the GCS scale which show abnormal values and assigns them a positive contribution to mortality. PO2, FiO2, blood sodium and temperature are also regarded, to various degrees, as evidence favoring predicting mortality. On the other hand the patient age is regarded by the model as strong evidence against mortality, followed by urine output.
The marginal importance visualization allows us to see something interesting: the model assigns a negative net contribution to the fact that the patient was admitted after surgery, that is, the model regards the surgical admission as evidence for survival (however, at the dataset level, median positive importance for surgical admissions is greater across classes than its negative counterpart, i.e. the model tends to see surgical admission as evidence for mortality). This could be due to correlations present in the underlying dataset, or to higher order interactions between predictors. The latter is attested by the predictor-plus-hour visualization, which shows that for static predictors, different observations across time of the same predictor are assigned different contributions, sometimes with different sign. It is also worth noticing that while the patient's surgical admission is regarded as evidence for survival, the fact that the surgery was not an elective surgery is considered as evidence for mortality, which is sensible. However, both input features must not be analysed separately (they correspond to a single concept in the SAPS-II score [31]). This is the kind of insight that interpretability efforts can reveal about black boxes, and which is absent in the majority of related works [14].
Dataset-level analysis of feature importance shows a high variance in attributed importance, both negative and positive. GCS components tend to be the features with the most importance attributed (especially positive importance for patients that eventually died), followed by age. On the other hand, there are a number of features with both low positive and low negative mean importance. The presence of AIDS or lymphoma is deemed by the ConvNet as not carrying much weight for predicting either survival or death. Also, some of the other predictors have modest mean importances. This could signal a possibility to simplify the input feature set and get better predictive performance.
Conclusions
In this paper we presented ISeeU, a novel multi-scale convolutional network for interpretable mortality prediction inside the ICU, trained on MIMIC-III. We showed that our model offers state of the art performance, while offering visual explanations, based on a concept from coalitional game theory, that show how important the input features are for the model's output. Such explanations are offered at the single patient level with different levels of de-aggregation, and at the dataset level, allowing for a more complete statistical understanding of how the model regards input predictors, compared to what related works have provided so far, and without resorting to auxiliary or surrogate models. We were able to show that a convolutional model can handle both temporal and static features at the same time without having to resort to hybrid neural architectures. We also showed that simple imputation techniques offer competitive performance without incurring the computational costs associated with more complex approaches.
1901.07822 | 2911565143 | This paper presents a new method for medical diagnosis of neurodegenerative diseases, such as Parkinson's, by extracting and using latent information from trained Deep convolutional, or convolutional-recurrent Neural Networks (DNNs). In particular, our approach adopts a combination of transfer learning, k-means clustering and k-Nearest Neighbour classification of deep neural network learned representations to provide enriched prediction of the disease based on MRI and or DaT Scan data. A new loss function is introduced and used in the training of the DNNs, so as to perform adaptation of the generated learned representations between data from different medical environments. Results are presented using a recently published database of Parkinson's related information, which was generated and evaluated in a hospital environment. | A Parkinson's database comprising MRI and DaT Scan data from 78 subjects, 55 patients with Parkinson's and 23 non patients, has been recently released @cite_14 ; it includes, in total 41528 MRI data (31147 from patients and 10381 from non patients) and 925 DaT scans (595 and 330 respectively). Our developments next are based on this database. | {
"abstract": [
"Neurodegenerative disorders, such as Alzheimer’s and Parkinson’s, constitute a major factor in long-term disability and are becoming more and more a serious concern in developed countries. As there..."
],
"cite_N": [
"@cite_14"
],
"mid": [
"2789037848"
]
} | Predicting Parkinson's Disease using Latent Information extracted from Deep Neural Networks | Machine learning techniques have been largely used in medical signal and image analysis for prediction of neurodegenerative disorders, such as Alzheimer's and Parkinson's, which significantly affect elderly people, especially in developed countries [1], [2], [3].
In the last few years, the development of deep learning technologies has boosted the investigation of using deep neural networks for early prediction of the above-mentioned neurodegenerative disorders. In [4], stacked auto-encoders have been used for diagnosis of Alzheimer's disease. 3-D Convolutional Neural Networks (CNNs) have been used in [5] to analyze imaging data for Alzheimer's diagnosis. Both methods were based on the Alzheimer's disease neuroimaging initiative dataset, including medical images and assessments of several hundred subjects. Recently, CNNs and convolutional-recurrent neural network (CNN-RNN) architectures have been developed for prediction of Parkinson's disease [6], based on a new database including Magnetic Resonance Imaging (MRI) data and Dopamine Transporters (DaT) Scans from patients with Parkinson's and non patients [7].
In this paper we focus on the early prediction of Parkinson's. It is the above two types of medical image data, i.e. MRI and DaT Scans that we explore for predicting an asymptomatic (healthy) status, or the stage of Parkinson's at which a subject appears to be. In particular, MRI data show the internal structure of the brain, using magnetic fields and radio waves. An atrophy of the Lentiform and Caudate Nucleus can be detected in MRI data of patients with Parkinson's. DaT Scans are a specific form of single-photon emission computed tomography, using Ioflupane Iodide-123 to detect lack of dopamine in patients' brain.
In the paper we base our developments on the deep neural network (DNN) structures (CNNs, CNN-RNNs) developed in [6] for predicting Parkinson's using MRI, or DaT Scan, or MRI & DaT Scan data from the recently developed Parkinson's database [7]. We extend these developments by extracting latent variable information from the DNNs trained with MRI & DaT Scan data and generate clusters of this information; these are evaluated by medical experts with reference to the corresponding status/stage of Parkinson's. The generated and medically annotated cluster centroids are used next in three different scenarios of major medical significance: 1) Transparently predicting a new subject's status/stage of Parkinson's; this is performed using nearest neighbor classification of new subjects' MRI and DaT scan data with reference to the cluster centroids and the respective medical annotations.
2) Retraining the DNNs with the new subjects' data, without forgetting the current medical cluster annotations; this is performed by considering the retraining as a constrained optimization problem and using a gradient projection training algorithm instead of the usual gradient descent method.
3) Transferring the learning achieved by DNNs fed with MRI & DaT scan data, to medical centers that only possess MRI information about subjects, thus improving their prediction capabilities; this is performed through a domain adaptation methodology, in which a new error criterion is introduced that includes the above-derived cluster centroids as desired outputs during training.
Section II describes related work where machine learning techniques have been applied to MRI and DaT Scan data for detecting Parkinson's. The new Parkinson's database we are using in this paper is also described in this section. Section III first describes the extraction of latent variable information from trained deep neural networks and then presents the proposed approach in the framework of the three considered testing, transfer learning and domain adaptation scenarios.
Section IV provides the experimental evaluation which illustrates the performance of the proposed approach using an augmented version of the Parkinson's database, which we also make publicly available. Conclusions and future work are presented in Section V.
III. THE PROPOSED APPROACH
A. Extracting Latent Variables from Trained Deep Neural Networks
The proposed approach begins with training a CNN, or a CNN-RNN architecture, on the (train) dataset of MRI and DaT Scan data. The CNN networks include a convolutional part and one or more Fully Connected (FC) layers, using neurons with a ReLU activation function. In the CNN-RNN case, these are followed by a recurrent part, including one or more hidden layers, composed of GRU neurons.
We then focus on the neuron outputs in the last FC layer (CNN case), or in the last RNN hidden layer (CNN-RNN case). These latent variables, extracted from the trained DNNs, represent the higher level information through which the networks produce their predictions, i.e., whether the input information indicates that the subject is a patient or not.
In particular, let us consider the following dataset for training the DNN to predict Parkinson's:
$$P = \{(x(j), d(j));\ j = 1, \ldots, n\} \qquad (1)$$
and the corresponding test dataset:
$$Q = \{(\tilde{x}(j), \tilde{d}(j));\ j = 1, \ldots, m\} \qquad (2)$$
where x(j) and d(j) represent the n network training inputs (each of which consists of an MRI triplet and a DaT Scan) and respective desired outputs (with a binary value 0/1, where 0 represents a non patient and 1 represents a patient case); $\tilde{x}(j)$ and $\tilde{d}(j)$ similarly represent the m network test inputs and respective desired outputs. After training the Deep Neural Network using dataset P, its l neurons' outputs in the final FC, or hidden, layer, $\{r(j)\}$ and $\{\tilde{r}(j)\}$, both $\in \mathbb{R}^l$, are extracted as latent variables, obtained through forward-propagation of each image, in train set $R_p$ and test set $R_q$ respectively:
$$R_p = \{r(j);\ j = 1, \ldots, n\} \qquad (3)$$
and
$$R_q = \{\tilde{r}(j);\ j = 1, \ldots, m\} \qquad (4)$$
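A minimal Keras sketch of this latent-variable extraction is shown below; `trained_dnn`, the layer name "last_fc" and the input arrays are placeholders for the actual trained model and data.

```python
from tensorflow.keras import Model

# Expose the last fully connected / hidden layer of a trained network as latent variables.
latent_model = Model(inputs=trained_dnn.input,
                     outputs=trained_dnn.get_layer("last_fc").output)

R_p = latent_model.predict(X_train)   # shape (n, l), here l = 128
R_q = latent_model.predict(X_test)    # shape (m, l)
```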
The following clustering procedure is then implemented on the {r(j)} in R p :
We generate a set of clusters $T = \{t_1, \ldots, t_k\}$ by minimizing the within-cluster $L_2$ norms of the function
$$T_{k\text{-means}} = \arg\min_{T} \sum_{j=1}^{k} \sum_{r \in R_p} \| r - \mu_j \|^2 \qquad (5)$$
where µ j is the mean value of the data in cluster j. This is done using the k-means++ [18] algorithm, with the first cluster centroid u(1) being selected at random from T . The class label of a given cluster is simply the mode class of the data points within it.
As a consequence, we generate a set of cluster centroids, representing the different types of input data included in our train set P:
$$U = \{u(j);\ j = 1, \ldots, k\} \qquad (6)$$
Through medical evaluation of the MRI and DaT Scan images corresponding to the cluster centroids, we can annotate each cluster according to the stage of Parkinson's that its centroid represents.
By computing the euclidean distances between the test data in R_q and the cluster centroids in U and by then using the nearest neighbor criterion, we can assign each one of the test data to a specific cluster and evaluate the obtained classification (disease prediction) performance. This is an alternative way to the prediction accomplished when the trained DNN is applied to the test data.
This alternative prediction is, however, of great significance: in the case of non-annotated new subject's data, selecting the nearest cluster centroid in U can be a transparent way for diagnosis of the subject's Parkinson's stage; the available MRI and DaT Scan data and related medical annotations of the cluster centroids being compared to the new subject's data.
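A minimal scikit-learn sketch of the clustering and nearest-centroid assignment described above is given below, assuming R_p and R_q are the latent matrices from the previous sketch; the number of clusters (5) follows the experimental section, and the medical annotation of the centroids happens offline.

```python
import numpy as np
from sklearn.cluster import KMeans

# k-means++ on the training latents, then nearest-centroid assignment of test latents.
kmeans = KMeans(n_clusters=5, init="k-means++", n_init=10, random_state=0).fit(R_p)
U = kmeans.cluster_centers_            # (5, 128) centroids, annotated by medical experts offline

# Assign each test latent vector to the closest (annotated) centroid.
dists = np.linalg.norm(R_q[:, None, :] - U[None, :, :], axis=-1)
assigned_cluster = dists.argmin(axis=1)
```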
B. Retraining of Deep Neural Networks with Annotated Latent Variables
Whenever new data, either from patients, or from non patients, are collected, they should be used to extend the knowledge already acquired by the DNN, by adapting its weights to the new data. In such a case, let us assume that a new train dataset, say P 1 , usually of small size, say s, is generated and an updated DNN should be created based on this dataset as well.
There are different methods developed in the framework of transfer learning [19], for training a new DNN on P 1 using the structure and weights of the above-described DNN. However, a major problem is that of catastrophic forgetting, i.e., the fact that the DNN forgets some formerly learned information when fine-tuning to the new data. This can lead to loss of annotations related to the latent variables extracted from the formerly trained DNN. To avoid this, we propose the following DNN adaptation method, which preserves annotated latent variables.
For simplicity of presentation, let us consider a CNN architecture, in which we keep the convolutional and pooling layers fixed and retrain the FC and output layers. Let W be a vector including the weights of the FC and output network layers of the original network, before retraining, and W' denote the new (updated) weight vector, obtained through retraining. Let us also denote by w and w', respectively, the weights connecting the outputs of the last FC layer, defined as r in Eq. (3), to the network outputs, y.
During retraining, the new network weights, W , are computed by minimizing the following error criterion:
$$E = E_{P_1} + \lambda \cdot E_{P} \qquad (7)$$
where $E_{P_1}$ represents the misclassifications done in P_1, which includes the new data, and $E_P$ represents the misclassifications in P, which includes the old information. λ is used to differentiate the focus between the new and old data. In the following we make the hypothesis that a small change of the weights W is enough to achieve good classification performance in the current conditions. Consequently, we get:
$$W' = W + \Delta W \qquad (8)$$
and in the output layer case:
$$w' = w + \Delta w \qquad (9)$$
in which ∆W and ∆w denote small weight increments. Under this formulation, we can apply a first-order Taylor series expansion to make neurons' activation linear.
Let us now give more attention to the new data in P 1 . We can do this, by expressing E P1 in Eq. (7) in terms of the following constraint:
$$y'(j) = d(j);\ j = 1, \ldots, s \qquad (10)$$
which requests that the new network outputs and the desired outputs are identical.
Moreover, to preserve the formerly extracted latent variables, we move the input data corresponding to the annotated cluster centroids in U from dataset P to P 1 . Consequently, Eq. (10) includes these inputs as well; the size of P 1 becomes:
$$s' = s + k \qquad (11)$$
where k is the number of clusters in U.
Let the difference of the retrained network output y' from the original one y be:
$$\Delta y(j) = y'(j) - y(j) \qquad (12)$$
Expressing the output y' as a weighted average of the last FC layer outputs r with the w weights, we get [6]
$$y'(j) = y(j) + f'_h \cdot \big( w \cdot \Delta r(j) + \Delta w \cdot r(j) \big) \qquad (13)$$
where $f'_h$ denotes the derivative of the former DNN output layer's neurons' activation function. Inserting Eq. (10) into Eq. (13) results in:
$$d(j) - y(j) = f'_h \cdot \big( w \cdot \Delta r(j) + \Delta w \cdot r(j) \big) \qquad (14)$$
All terms in Eq. (14) are known, except for the differences in weights ∆w and in the last FC neuron outputs ∆r. As a consequence, Eq. (14) can be used to compute the new DNN weights of the output layer in terms of the neuron outputs of the last FC layer.
If there is more than one FC layer, we apply the same procedure, i.e., we linearize the difference of the r values iteratively through the previous FC layers and express the ∆r in terms of the weight differences in these layers. When reaching the convolutional/pooling layers, where no retraining is to be performed, the procedure ends, since the respective ∆r is zero. It can be shown, similarly to [6], that the weight updates ∆W are finally estimated through the solution of a set of linear equations defined on P_1:
$$v = V \cdot \Delta W \qquad (15)$$
where matrix V includes weights of the original DNN and vector v is defined as follows:
$$v(j) = d(j) - y(j);\ j = 1, \ldots, s' \qquad (16)$$
with y(j) denoting the output of the original DNN applied to the data in P 1 . Similarly to [6], the size of v is lower than the size of ∆W; many methods exist, therefore, for solving Eq. (16). Following the assumption made in the beginning of this section, we choose the solution that provides minimal modification of the original DNN weight. This is the one that provides the minimum change in the value of E in Eq. (7).
Summarizing, the targeted adaptation can be solved as a nonlinear constrained optimization problem, minimizing Eq. (7), subject to Eq. (10) and the selection of minimal weight increments. In our implementation, we use the gradient projection method [20] for computing the network weight updates and consequently the adapted DNN architecture.
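The paper solves the full constrained problem with a gradient projection method; the sketch below illustrates only the last ingredient described above, i.e. picking, among the solutions of the under-determined linear system v = V · ∆W, the one with minimal norm (np.linalg.lstsq returns exactly that minimum-norm solution). Matrix sizes are toy values.

```python
import numpy as np

def minimal_weight_update(V: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Minimum-norm solution of v = V @ dW (under-determined system)."""
    dW, *_ = np.linalg.lstsq(V, v, rcond=None)
    return dW

# Toy dimensions: 20 equations (new data), 500 unknown weight increments.
rng = np.random.default_rng(0)
V_toy, v_toy = rng.normal(size=(20, 500)), rng.normal(size=20)
dW = minimal_weight_update(V_toy, v_toy)
print(np.allclose(V_toy @ dW, v_toy))  # the constraints are satisfied exactly
```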
C. Domain Adaptation of Deep Neural Networks through Annotated Latent Variables
In the two previous subsections we have focused on generation, based on extraction of latent variables from a trained DNN, and use of cluster centroids for prediction and adaptation of a Parkinson's diagnosis system. To do this, we have considered all available imaging information, consisting of MRI and DaT Scan data.
However, in many cases, especially in general purpose medical centers, DaT Scan equipment may not be available, whilst having access to MRI technology. In the following we present a domain adaptation methodology, using the annotated latent variables extracted from the originally trained DNN, to improve prediction of Parkinson's achieved when using only MRI input data. A new DNN training loss function is used to achieve this target.
Let us consider the following train and test datasets, similar to P and Q in Eq. (1) and Eq. (2) respectively, in which the input consists only of triplets of MRI data:
$$P' = \{(x'(j), d'(j));\ j = 1, \ldots, n\} \qquad (17)$$
and
$$Q' = \{(\tilde{x}'(j), \tilde{d}'(j));\ j = 1, \ldots, m\} \qquad (18)$$
where x'(j) and d'(j) represent the n network training inputs (each of which consists of only an MRI triplet) and respective desired outputs (with a binary value 0/1, where 0 represents a non patient and 1 represents a patient case); $\tilde{x}'(j)$ and $\tilde{d}'(j)$ similarly represent the m network test inputs and respective desired outputs. Using P', we train a similar DNN structure (as in the full MRI and DaT Scan case), producing the following vector of l neuron outputs in its last FC or hidden layer:
$$R'_p = \{r'(j);\ j = 1, \ldots, n\} \qquad (19)$$
with the dimension of each r' vector being l, as in the last FC, or hidden, layer of the original DNN. As far as the r' outputs are concerned, it would be desirable for these latent variables to be closer, e.g., according to the mean squared error criterion, to one of the cluster centroids in Eq. (6) that belongs to the same category (patient/non patient) as them.
In this way, training of the DNN with only MRI inputs, would also bring its output y closer to the one generated by the original DNN; this would potentially improve the network's performance, towards the much better one produced by the original DNN (trained with both MRI and DaT Scan data).
Let us compute the euclidean distances between the latent variables in R'_p and the cluster centroids in U as defined in Eq. (6). Using the nearest neighbor criterion we can define a set of desired vector values for the r' latent variables, with respect to the k cluster centroids, as follows:
$$Z_p = \{z(i, j);\ i = 1, \ldots, k;\ j = 1, \ldots, n\} \qquad (20)$$
where z(i, j) is equal either to 1, in the case of the cluster centroid u(i) that was selected as closest to r'(j) during the above-described procedure, or to 0 in the case of the rest of the cluster centroids. In the following, we introduce the z(i, j) values in a modified Error Criterion to be used in DNN learning to correctly classify the MRI inputs.
Normally, the DNN (CNN, or CNN-RNN) training is performed through minimization of the error criterion in Eq. (21) in terms of the DNN weights:
$$E_1 = \frac{1}{n} \sum_{j=1}^{n} \big( d'(j) - y'(j) \big)^2 \qquad (21)$$
where y' and d' denote the actual and desired network outputs and n is equal to the number of all MRI input triplets.
We propose a modified Error Criterion, introducing an additional term, using the following definitions:
$$g(i, j) = u(i) - r'(j), \quad i = 1, \ldots, k;\ j = 1, \ldots, n \qquad (22)$$
and
$$G(i, j) = g(i, j) \cdot g(i, j)^{T} \qquad (23)$$
with T indicating the transpose operator. It is desirable that the G(i, j) term whose respective z(i, j) value equals one is minimized, whilst the G(i, j) values corresponding to the rest of the z(i, j) values (which are equal to zero) are maximized. Similarly to [21], we pass G(i, j) through a softmax function f and subtract its output from 1, so as to obtain the above-described respective minimum and maximum values.
The generated Loss Function is expressed in terms of the differences of the transformed G(i, j) values from the corresponding desired responses z(i, j), as follows:
$$E_2 = \frac{1}{kn} \sum_{i=1}^{k} \sum_{j=1}^{n} \big( z(i, j) - [1 - f(G(i, j))] \big)^2 \qquad (24)$$
calculated on the n data and the k cluster centroids. In general, our target is to minimize together Eq. (21) and Eq. (24). We can achieve this using the following Loss Function:
$$E_{new} = \eta E_1 + (1 - \eta) E_2 \qquad (25)$$
where η is chosen in the interval [0, 1]. Using a value of η towards zero provides more importance to the introduced centroids of the clusters of the latent variables extracted from the best performing DNN, trained with both MRI and DaT Scan data. On the contrary, using a value towards one leads to normal error criterion minimization.
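A rough TensorFlow sketch of how the modified loss in Eq. (25) could be implemented is given below. The centroid matrix U, the choice of taking the softmax over the k centroids of each sample, and the need for a custom training loop (because the loss depends on the latent-layer outputs and the targets z, not only on the network output) are all assumptions not fixed by the text.

```python
import tensorflow as tf

def make_domain_adaptation_loss(centroids, eta=0.5):
    """Sketch of E_new = eta*E1 + (1-eta)*E2 (Eq. 25).

    `centroids` is the (k, l) matrix U of annotated cluster centroids; the softmax
    is taken over the k centroids of each sample (an assumption)."""
    U = tf.constant(centroids, dtype=tf.float32)

    def loss(y_true, y_pred, latents, z):
        # E1: squared error between desired and actual network outputs.
        e1 = tf.reduce_mean(tf.square(y_true - y_pred))
        # G(i, j): squared distances between latents r'(j) and centroids u(i).
        diff = tf.expand_dims(latents, 1) - tf.expand_dims(U, 0)   # (n, k, l)
        G = tf.reduce_sum(tf.square(diff), axis=-1)                # (n, k)
        # E2: match z against 1 - softmax(G), as in Eq. (24).
        e2 = tf.reduce_mean(tf.square(z - (1.0 - tf.nn.softmax(G, axis=-1))))
        return eta * e1 + (1.0 - eta) * e2

    return loss
```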
IV. EXPERIMENTAL EVALUATION
In this section we present a variety of experiments for evaluating the proposed approach. The implementation of all algorithms described in the former Section has been performed in Python using the Tensorflow library.
A. The Parkinson's Dataset
The data that are used in our experiments come from the Parkinson's database described in Section II. For training the CNN and CNN-RNN networks, we performed an augmentation procedure in the train dataset, as follows. After forming all triplets of consecutive MRI frames, we generated combinations of these image triplets with each one of the DaT Scans in each category (patients, non patients).
Consequently, we created a dataset of 66,176 training inputs, each of them consisting of 3 MRI and 1 DaT Scan images. In the test dataset, which referred to different subjects than the train dataset, we made this combination per subject; this created 1130 test inputs.
For possible reproduction of our experiments, both the training and test datasets, each being split in two folderspatients and non patients -are available upon request from the mlearn.lincoln.ac.uk web site.
B. Testing the proposed Approach for Parkinson's Prediction
We used the DNN structures described in [6], including both CNN and CNN-RNN architectures to perform Parkinson's diagnosis, using the train and test data of the abovedescribed database. The convolutional and pooling part of the architectures was based on the ResNet-50 structure; GRU units were used in the RNN part of the CNN-RNN architecture.
The best performing CNN and CNN-RNN structures, when trained with both MRI and DaT Scan data, are presented in Table I.
It is evident that the CNN-RNN architecture was able to provide excellent prediction results on the database test set. We, therefore, focus on this architecture for extracting latent variables. For comparison purposes, it can be mentioned that the performance of a similar CNN-RNN architecture when trained only with MRI inputs was about 70%.
It can be seen, from Table I, that the number l of neurons in the last FC layer of the CNN-RNN architecture was 128. This is, therefore, the dimension of the vectors r extracted as in Eq. (3) and used in the cluster generation procedure of Eq. (5).
We then implemented this cluster generation procedure, as described in the former Section. The k-means algorithm provided five clusters of the data in the 128-dimensional space. Fig. 2 depicts a 3-D visualization of the five cluster centroids; stars in blue color denote the two centroids corresponding to non patient data, while squares in red color represent the three cluster centroids corresponding to patient data.
With the aid of medical experts, we generated annotations of the images (3 MRI and 1 DaT Scan) corresponding to the 5 cluster centroids. It was very interesting to discover that these centroids represent different levels of Parkinson's evolution. Since the DaT Scans conveyed the major part of this discrimination, we show in Fig.3 the DaT Scans corresponding to each one of the cluster centroids.
According to the provided medical annotation, the 1st centroid (t 1 ) corresponds to a typical non patient case. The 2nd centroid (t 2 ) represents a non patient case as well, but with some findings that seem to be pathological. Moving to the patient cases, the 3rd centroid (t 3 ) shows an early step of Parkinson's -in stage 1 to stage 2, while the 4th centroid (t 4 ) denotes a typical Parkinson's case -in stage 2. Finally, the 5th centroid (t 5 ) represents an advanced step of Parkinson's -in stage 3. It is interesting to note here that, although the DNN was trained to classify input data in two categories -patients and non patients -, by extracting and clustering the latent variables, we were able to generate a richer representation of the diagnosis problem in five categories. It should be mentioned that the purity of each generated cluster was almost perfect. Table II shows the percentages of training data included in each one of the five generated clusters. It should be mentioned that almost two thirds of the data belong in clusters 2 and 3, i.e., in the categories which are close to the borderline between patients and non patients. These cases require major attention by the medical experts and the proposed procedure can be very helpful for diagnosis of such subjects' cases.
We tested this procedure on the Parkinson's test dataset, by computing the euclidean distances of the corresponding extracted latent variables from the 5 cluster centroids and by classifying them to the closest centroid. Table III shows the number of test data referring to six different subjects that were classified to each cluster. All non patient cases were correctly classified. In the patient cases, the great majority of the data of each patient were correctly classified to one of the respective centroids. In the small number of misclassifications, the disease symptoms were not so evident. However, based on the large majority of correct classifications, the subject would certainly attract the necessary interest from the medical expert.
We next examined the ability of the above-described DNN to be retrained using the procedure described in Subsection III.B.
In the developed scenario, we split the above test data in two parts: we included 3 of them (Non Patient 2, Patient 2 and Patient 3) in the retraining dataset P 1 and let the other 3 subjects in the new test dataset. The size s of P 1 was equal to 493 inputs, including the five inputs corresponding to cluster centroids in U; the size of the new test set was equal to 642 inputs.
We applied the proposed procedure to minimize the error over all train data in P and P 1 , focusing more on the latter, as described by Eq. (10).
The network managed to learn and correctly classify all 493 P_1 inputs, including the inputs corresponding to the cluster centroids, with a minimal degradation of its performance over the P input data. We then applied the trained network to the test dataset consisting of three subjects. In this case, there was also a slight improvement, since the performance was raised to 98.91%, compared to the corresponding performance on the same three subjects' data, shown in Table III, which was 98.44%. Table IV shows the clusters to which the newly extracted latent variables r were classified. A comparison with the corresponding results in Table III shows the differences produced through retraining.
We finally examined the performance of the domain adaptation approach that was presented in Subsection III.C.
We started by training the CNN-RNN network with only the MRI triplets in P as inputs. The obtained performance when the trained network was applied to the test set Q was only 70.6%. For illustration of the proposed developments we extracted the r latent variables from this trained network and classified them to a set of respectively extracted cluster centroids. Table V presents the results of this classification task, which is consistent with the acquired DNN performance. It can be seen that the MRI information leads DNN prediction towards the patient class, which indeed contained more samples in the train dataset. Most errors were made in the non patient class (subjects 1 and 2). We then examined the ability of the proposed approach to train the CNN-RNN network using the modified Loss Function, using various values of η; here we present the case when using a value equal to 0.5. The obtained performance when the trained network was applied to the test set Q was raised to 81.1%. For illustrating this improvement we also extracted the r latent variables from this trained network and classified them to one of the five annotated original cluster centroids U. Table VI presents the results of this classification task. It is evident that minimization of the modified Loss Function managed to force the extracted latent variables to get closer to cluster centroids which belonged to the correct class for Parkinson's diagnosis.
V. CONCLUSIONS AND FUTURE WORK
The paper proposed a new approach for extracting latent variables from trained DNNs, in particular CNN and CNN-RNN architectures, and using them in a clustering and nearest neighbor classification method for achieving high performance and transparency in Parkinson's diagnosis. We have used | 4,346 |
1901.07822 | 2911565143 | This paper presents a new method for medical diagnosis of neurodegenerative diseases, such as Parkinson's, by extracting and using latent information from trained Deep convolutional, or convolutional-recurrent Neural Networks (DNNs). In particular, our approach adopts a combination of transfer learning, k-means clustering and k-Nearest Neighbour classification of deep neural network learned representations to provide enriched prediction of the disease based on MRI and or DaT Scan data. A new loss function is introduced and used in the training of the DNNs, so as to perform adaptation of the generated learned representations between data from different medical environments. Results are presented using a recently published database of Parkinson's related information, which was generated and evaluated in a hospital environment. | Recent advances in deep neural networks @cite_4 , @cite_6 , @cite_8 , @cite_16 have been explored in @cite_0 , where convolutional (CNN) and convolutional-recurrent (CNN-RNN) neural networks were developed and trained to classify the information in the above Parkinson's database in two categories, i.e., patients and non patients, based on either MRI inputs, or DaT Scan inputs, or together MRI and DaT Scan inputs. | {
"abstract": [
"",
"The ability of Deep Neural Networks (DNNs) to provide very high accuracy in classification and recognition problems makes them the major tool for developments in such problems. It is, however, known that DNNs are currently used in a ‘black box’ manner, lacking transparency and interpretability of their decision-making process. Moreover, DNNs should use prior information on data classes, or object categories, so as to provide efficient classification of new data, or objects, without forgetting their previous knowledge. In this paper, we propose a novel class of systems that are able to adapt and contextualize the structure of trained DNNs, providing ways for handling the above-mentioned problems. A hierarchical and distributed system memory is generated and used for this purpose. The main memory is composed of the trained DNN architecture for classification prediction, i.e., its structure and weights, as well as of an extracted — equivalent — Clustered Representation Set (CRS) generated by the DNN during training at its final — before the output — hidden layer. The latter includes centroids — ‘points of attraction’ — which link the extracted representation to a specific area in the existing system memory. Drift detection, occurring, for example, in personalized data analysis, can be accomplished by comparing the distances of new data from the centroids, taking into account the intra-cluster distances. Moreover, using the generated CRS, the system is able to contextualize its decision-making process, when new data become available. A new public medical database on Parkinson's disease is used as testbed to illustrate the capabilities of the proposed architecture.",
"In this paper we utilize the first large-scale \"in-the-wild\" (Aff-Wild) database, which is annotated in terms of the valence-arousal dimensions, to train and test an end-to-end deep neural architecture for the estimation of continuous emotion dimensions based on visual cues. The proposed architecture is based on jointly training convolutional (CNN) and recurrent neural network (RNN) layers, thus exploiting both the invariant properties of convolutional features, while also modelling temporal dynamics that arise in human behaviour via the recurrent layers. Various pre-trained networks are used as starting structures which are subsequently appropriately fine-tuned to the Aff-Wild database. Obtained results show premise for the utilization of deep architectures for the visual analysis of human behaviour in terms of continuous emotion dimensions and analysis of different types of affect.",
"This paper presents a novel class of systems assisting diagnosis and personalised assessment of diseases in healthcare. The targeted systems are end-to-end deep neural architectures that are designed (trained and tested) and subsequently used as whole systems, accepting raw input data and producing the desired outputs. Such architectures are state-of-the-art in image analysis and computer vision, speech recognition and language processing. Their application in healthcare for prediction and diagnosis purposes can produce high accuracy results and can be combined with medical knowledge to improve effectiveness, adaptation and transparency of decision making. The paper focuses on neurodegenerative diseases, particularly Parkinson’s, as the development model, by creating a new database and using it for training, evaluating and validating the proposed systems. Experimental results are presented which illustrate the ability of the systems to detect and predict Parkinson’s based on medical imaging information.",
"In this work, a novel deep learning approach to unfold nuclear power reactor signals is proposed. It includes a combination of convolutional neural networks (CNN), denoising autoencoders (DAE) and @math -means clustering of representations. Monitoring nuclear reactors while running at nominal conditions is critical. Based on analysis of the core reactor neutron flux, it is possible to derive useful information for building fault anomaly detection systems. By leveraging signal and image pre-processing techniques, the high and low energy spectra of the signals were appropriated into a compatible format for CNN training. Firstly, a CNN was employed to unfold the signal into either twelve or forty-eight perturbation location sources, followed by a @math -means clustering and @math -Nearest Neighbour coarse-to-fine procedure, which significantly increases the unfolding resolution. Secondly, a DAE was utilised to denoise and reconstruct power reactor signals at varying levels of noise and or corruption. The reconstructed signals were evaluated w.r.t. their original counter parts, by way of normalised cross correlation and unfolding metrics. The results illustrate that the origin of perturbations can be localised with high accuracy, despite limited training data and obscured @math noisy signals, across various levels of granularity."
],
"cite_N": [
"@cite_4",
"@cite_8",
"@cite_6",
"@cite_0",
"@cite_16"
],
"mid": [
"",
"2772603561",
"2655404332",
"2769215776",
"2797302551"
]
} | Predicting Parkinson's Disease using Latent Information extracted from Deep Neural Networks | Machine learning techniques have been largely used in medical signal and image analysis for prediction of neurodegenerative disorders, such as Alzheimer's and Parkinson's, which significantly affect elderly people, especially in developed countries [1], [2], [3].
In the last few years, the development of deep learning technologies has boosted the investigation of using deep neural networks for early prediction of the above-mentioned neurodegenerative disorders. In [4], stacked auto-encoders have been used for diagnosis of Alzheimer's disease. 3-D Convolutional Neural Networks (CNNs) have been used in [5] to analyze imaging data for Alzheimer's diagnosis. Both methods were based on the Alzheimer's disease neuroimaging initiative dataset, including medical images and assessments of several hundred subjects. Recently, CNNs and convolutional-recurrent neural network (CNN-RNN) architectures have been developed for prediction of Parkinson's disease [6], based on a new database including Magnetic Resonance Imaging (MRI) data and Dopamine Transporters (DaT) Scans from patients with Parkinson's and non patients [7].
In this paper we focus on the early prediction of Parkinson's. It is the above two types of medical image data, i.e. MRI and DaT Scans that we explore for predicting an asymptomatic (healthy) status, or the stage of Parkinson's at which a subject appears to be. In particular, MRI data show the internal structure of the brain, using magnetic fields and radio waves. An atrophy of the Lentiform and Caudate Nucleus can be detected in MRI data of patients with Parkinson's. DaT Scans are a specific form of single-photon emission computed tomography, using Ioflupane Iodide-123 to detect lack of dopamine in patients' brain.
In the paper we base our developments on the deep neural network (DNN) structures (CNNs, CNN-RNNs) developed in [6] for predicting Parkinson's using MRI, or DaT Scan, or MRI & DaT Scan data from the recently developed Parkinson's database [7]. We extend these developments by extracting latent variable information from the DNNs trained with MRI & DaT Scan data and generate clusters of this information; these are evaluated by medical experts with reference to the corresponding status/stage of Parkinson's. The generated and medically annotated cluster centroids are used next in three different scenarios of major medical significance: 1) Transparently predicting a new subject's status/stage of Parkinson's; this is performed using nearest neighbor classification of new subjects' MRI and DaT scan data with reference to the cluster centroids and the respective medical annotations.
2) Retraining the DNNs with the new subjects' data, without forgetting the current medical cluster annotations; this is performed by considering the retraining as a constrained optimization problem and using a gradient projection training algorithm instead of the usual gradient descent method.
3) Transferring the learning achieved by DNNs fed with MRI & DaT scan data, to medical centers that only possess MRI information about subjects, thus improving their prediction capabilities; this is performed through a domain adaptation methodology, in which a new error criterion is introduced that includes the above-derived cluster centroids as desired outputs during training.
Section II describes related work where machine learning techniques have been applied to MRI and DaT Scan data for detecting Parkinson's. The new Parkinson's database we are using in this paper is also described in this section. Section III first describes the extraction of latent variable information from trained deep neural networks and then presents the proposed approach in the framework of the three considered testing, transfer learning and domain adaptation scenarios.
Section IV provides the experimental evaluation which illustrates the performance of the proposed approach using an augmented version of the Parkinson's database, which we also make publicly available. Conclusions and future work are presented in Section V.
III. THE PROPOSED APPROACH
A. Extracting Latent Variables from Trained Deep Neural Networks
The proposed approach begins with training a CNN, or a CNN-RNN architecture, on the (train) dataset of MRI and DaT Scan data. The CNN networks include a convolutional part and one or more Fully Connected (FC) layers, using neurons with a ReLU activation function. In the CNN-RNN case, these are followed by a recurrent part, including one or more hidden layers, composed of GRU neurons.
We then focus on the neuron outputs in the last FC layer (CNN case), or in the last RNN hidden layer (CNN-RNN case). These latent variables, extracted from the trained DNNs, represent the higher level information through which the networks produce their predictions, i.e., whether the input information indicates that the subject is a patient or not.
In particular, let us consider the following dataset for training the DNN to predict Parkinson's:
$P = \{(x(j), d(j)) : j = 1, \ldots, n\}$   (1)
and the corresponding test dataset:
$Q = \{(\tilde{x}(j), \tilde{d}(j)) : j = 1, \ldots, m\}$   (2)
where $x(j)$ and $d(j)$ represent the $n$ network training inputs (each of which consists of an MRI triplet and a DaT Scan) and respective desired outputs (with a binary value 0/1, where 0 represents a non patient and 1 represents a patient case); $\tilde{x}(j)$ and $\tilde{d}(j)$ similarly represent the $m$ network test inputs and respective desired outputs. After training the Deep Neural Network using dataset $P$, its $l$ neurons' outputs in the final FC, or hidden, layer, $\{r(j)\}$ and $\{\tilde{r}(j)\}$, both $\in \mathbb{R}^{l}$, are extracted as latent variables, obtained through forward propagation of each image, in train set $R_p$ and test set $R_q$ respectively:
$R_p = \{r(j) : j = 1, \ldots, n\}$   (3)
and
$R_q = \{\tilde{r}(j) : j = 1, \ldots, m\}$   (4)
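As an illustration of this extraction step, the following sketch shows one way to obtain such latent vectors from a trained Keras model by forward-propagating the inputs up to a chosen intermediate layer; the model object and the layer name "latent_fc" are illustrative assumptions, not the authors' actual code.

```python
import tensorflow as tf

def extract_latent_representations(trained_model, inputs, layer_name="latent_fc"):
    """Forward-propagate the inputs through a trained Keras model and return the
    outputs of a chosen intermediate layer (e.g. the last FC or GRU layer) as the
    latent vectors r(j), one row per input, each of dimension l."""
    feature_extractor = tf.keras.Model(
        inputs=trained_model.input,
        outputs=trained_model.get_layer(layer_name).output,
    )
    return feature_extractor.predict(inputs, verbose=0)

# Illustrative usage (model, x_train, x_test are assumed to exist):
# R_p = extract_latent_representations(model, x_train)  # shape (n, l)
# R_q = extract_latent_representations(model, x_test)   # shape (m, l)
```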
The following clustering procedure is then implemented on the $\{r(j)\}$ in $R_p$:
We generate a set of clusters $T = \{t_1, \ldots, t_k\}$ by minimizing the within-cluster $L_2$ norms of the function
$T_{k\text{-means}} = \arg\min_{T} \sum_{j=1}^{k} \sum_{r \in t_j} \| r - \mu_j \|^{2}$   (5)
where $\mu_j$ is the mean value of the data in cluster $j$. This is done using the k-means++ [18] algorithm, with the first cluster centroid $u(1)$ being selected at random from $T$. The class label of a given cluster is simply the mode class of the data points within it.
As a consequence, we generate a set of cluster centroids, representing the different types of input data included in our train set $P$:
$U = \{u(j) : j = 1, \ldots, k\}$   (6)
Through medical evaluation of the MRI and DaT Scan images corresponding to the cluster centroids, we can annotate each cluster according to the stage of Parkinson's that its centroid represents.
By computing the euclidean distances between the test data in $R_q$ and the cluster centroids in $U$ and by then using the nearest neighbor criterion, we can assign each one of the test data to a specific cluster and evaluate the obtained classification (disease prediction) performance. This is an alternative to the prediction accomplished when the trained DNN is applied to the test data.
This alternative prediction is, however, of great significance: in the case of a non-annotated new subject's data, selecting the nearest cluster centroid in $U$ provides a transparent way to diagnose the subject's Parkinson's stage, since the available MRI and DaT Scan data and the related medical annotations of the cluster centroids can be directly compared with the new subject's data.
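The clustering and nearest-centroid assignment described above can be sketched as follows with scikit-learn's k-means++ implementation; the variable names, the use of scikit-learn, and the majority-vote labelling helper are assumptions for illustration rather than the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_latents(R_p, train_labels, k=5, seed=0):
    """Cluster the training latent vectors with k-means++ and annotate each cluster
    with the mode class (0 = non patient, 1 = patient) of its members."""
    km = KMeans(n_clusters=k, init="k-means++", n_init=10, random_state=seed).fit(R_p)
    centroids = km.cluster_centers_                      # the set U of k centroids
    cluster_labels = np.array([
        np.bincount(train_labels[km.labels_ == j]).argmax()  # mode class per cluster
        for j in range(k)
    ])
    return centroids, cluster_labels

def nearest_centroid_predict(R_q, centroids, cluster_labels):
    """Assign each test latent vector to the closest centroid (Euclidean distance)
    and return the annotated label of that centroid."""
    dists = np.linalg.norm(R_q[:, None, :] - centroids[None, :, :], axis=-1)
    return cluster_labels[dists.argmin(axis=1)]
```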
B. Retraining of Deep Neural Networks with Annotated Latent Variables
Whenever new data, either from patients, or from non patients, are collected, they should be used to extend the knowledge already acquired by the DNN, by adapting its weights to the new data. In such a case, let us assume that a new train dataset, say P 1 , usually of small size, say s, is generated and an updated DNN should be created based on this dataset as well.
There are different methods developed in the framework of transfer learning [19], for training a new DNN on P 1 using the structure and weights of the above-described DNN. However, a major problem is that of catastrophic forgetting, i.e., the fact that the DNN forgets some formerly learned information when fine-tuning to the new data. This can lead to loss of annotations related to the latent variables extracted from the formerly trained DNN. To avoid this, we propose the following DNN adaptation method, which preserves annotated latent variables.
For simplicity of presentation, let us consider a CNN architecture, in which we keep the convolutional and pooling layers fixed and retrain the FC and output layers. Let $W$ be a vector including the weights of the FC and output network layers of the original network, before retraining, and $W'$ denote the new (updated) weight vector, obtained through retraining. Let us also denote by $w$ and $w'$, respectively, the weights connecting the outputs of the last FC layer, defined as $r$ in Eq. (3), to the network outputs $y$.
During retraining, the new network weights $W'$ are computed by minimizing the following error criterion:
$E = E_{P_1} + \lambda \cdot E_{P}$   (7)
where $E_{P_1}$ represents the misclassifications done in $P_1$, which includes the new data, and $E_{P}$ represents the misclassifications in $P$, which includes the old information. $\lambda$ is used to differentiate the focus between the new and old data. In the following we make the hypothesis that a small change of the weights $W$ is enough to achieve good classification performance in the current conditions. Consequently, we get:
$W' = W + \Delta W$   (8)
and in the output layer case:
$w' = w + \Delta w$   (9)
in which $\Delta W$ and $\Delta w$ denote small weight increments. Under this formulation, we can apply a first-order Taylor series expansion to linearize the neurons' activations.
Let us now give more attention to the new data in $P_1$. We can do this by expressing $E_{P_1}$ in Eq. (7) in terms of the following constraint:
$y'(j) = d(j), \quad j = 1, \ldots, s$   (10)
which requires that the new network outputs and the desired outputs be identical.
Moreover, to preserve the formerly extracted latent variables, we move the input data corresponding to the annotated cluster centroids in $U$ from dataset $P$ to $P_1$. Consequently, Eq. (10) includes these inputs as well; the size of $P_1$ becomes:
$s' = s + k$   (11)
where k is the number of clusters in U.
Let the difference of the retrained network output $y'$ from the original one $y$ be:
$\Delta y(j) = y'(j) - y(j)$   (12)
Expressing the output $y'$ as a weighted average of the last FC layer outputs $r'$ with the $w'$ weights, we get [6]
$y'(j) = y(j) + f'_h \cdot \big( w \cdot \Delta r(j) + \Delta w \cdot r(j) \big)$   (13)
where $f'_h$ denotes the derivative of the former DNN output layer's neurons' activation function. Inserting Eq. (10) into Eq. (13) results in:
$d(j) - y(j) = f'_h \cdot \big( w \cdot \Delta r(j) + \Delta w \cdot r(j) \big)$   (14)
All terms in Eq. (14) are known, except for the differences in weights $\Delta w$ and last FC neuron outputs $\Delta r$. As a consequence, Eq. (14) can be used to compute the new DNN weights of the output layer in terms of the neuron outputs of the last FC layer.
If there is more than one FC layer, we apply the same procedure, i.e., we linearize the difference of the $r'$ outputs iteratively through the previous FC layers and express the $\Delta r$ in terms of the weight differences in these layers. When reaching the convolutional/pooling layers, where no retraining is to be performed, the procedure ends, since the respective $\Delta r$ is zero. It can be shown, similarly to [6], that the weight updates $\Delta W$ are finally estimated through the solution of a set of linear equations defined on $P_1$:
$v = V \cdot \Delta W$   (15)
where matrix $V$ includes weights of the original DNN and vector $v$ is defined as follows:
$v(j) = d(j) - y(j), \quad j = 1, \ldots, s$   (16)
with $y(j)$ denoting the output of the original DNN applied to the data in $P_1$. Similarly to [6], the size of $v$ is lower than the size of $\Delta W$; many methods exist, therefore, for solving Eq. (16). Following the assumption made in the beginning of this section, we choose the solution that provides minimal modification of the original DNN weights. This is the one that provides the minimum change in the value of $E$ in Eq. (7).
Summarizing, the targeted adaptation can be solved as a nonlinear constrained optimization problem, minimizing Eq. (7), subject to Eq. (10) and the selection of minimal weight increments. In our implementation, we use the gradient projection method [20] for computing the network weight updates and consequently the adapted DNN architecture.
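For intuition only, the sketch below computes the minimum-norm solution of the underdetermined linear system of Eq. (15) with NumPy; the authors solve the full constrained problem with a gradient projection method, so this is merely an illustration of the "minimal weight modification" idea under the stated assumptions.

```python
import numpy as np

def minimal_weight_update(V, v):
    """Return the minimum-norm solution of v = V @ delta_w. When the system is
    underdetermined (fewer equations than unknowns), this is the smallest change
    of the original weights that still satisfies the new-data constraints."""
    delta_w, *_ = np.linalg.lstsq(V, v, rcond=None)
    return delta_w

# Illustrative check with more unknowns than equations:
# V = np.random.randn(10, 40); v = np.random.randn(10)
# dW = minimal_weight_update(V, v)            # shape (40,)
# assert np.allclose(V @ dW, v)               # constraints are met exactly
```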
C. Domain Adaptation of Deep Neural Networks through Annotated Latent Variables
In the two previous subsections we have focused on generation, based on extraction of latent variables from a trained DNN, and use of cluster centroids for prediction and adaptation of a Parkinson's diagnosis system. To do this, we have considered all available imaging information, consisting of MRI and DaT Scan data.
However, in many cases, especially in general purpose medical centers, DaT Scan equipment may not be available, whilst having access to MRI technology. In the following we present a domain adaptation methodology, using the annotated latent variables extracted from the originally trained DNN, to improve prediction of Parkinson's achieved when using only MRI input data. A new DNN training loss function is used to achieve this target.
Let us consider the following train and test datasets, similar to P and Q in Eq. (1) and Eq. (2) respectively, in which the input consists only of triplets of MRI data:
$P' = \{(x'(j), d'(j)) : j = 1, \ldots, n\}$   (17)
and
$Q' = \{(\tilde{x}'(j), \tilde{d}'(j)) : j = 1, \ldots, m\}$   (18)
where $x'(j)$ and $d'(j)$ represent the $n$ network training inputs (each of which consists of only an MRI triplet) and respective desired outputs (with a binary value 0/1, where 0 represents a non patient and 1 represents a patient case); $\tilde{x}'(j)$ and $\tilde{d}'(j)$ similarly represent the $m$ network test inputs and respective desired outputs. Using $P'$, we train a similar DNN structure (as in the full MRI and DaT Scan case), producing the following vector of $l$ neuron outputs in its last FC or hidden layer:
$R'_p = \{r'(j) : j = 1, \ldots, n\}$   (19)
with the dimension of each $r'$ vector being $l$, as in the original DNN's last FC, or hidden, layer. As far as the $r'$ outputs are concerned, it would be desirable for these latent variables to be closer, e.g., according to the mean squared error criterion, to one of the cluster centroids in Eq. (6) that belongs to the same category (patient/non patient) as them.
In this way, training the DNN with only MRI inputs would also bring its output $y'$ closer to the one generated by the original DNN; this would potentially improve the network's performance towards the much better one produced by the original DNN (trained with both MRI and DaT Scan data).
Let us compute the euclidean distances between the latent variables in $R'_p$ and the cluster centroids in $U$ as defined in Eq. (6). Using the nearest neighbor criterion, we can define a set of desired vector values for the $r'$ latent variables, with respect to the $k$ cluster centroids, as follows:
$Z_p = \{z(i, j) : i = 1, \ldots, k;\ j = 1, \ldots, n\}$   (20)
where $z(i, j)$ is equal to 1 for the cluster centroid $u(i)$ that was selected as closest to $r'(j)$ during the above-described procedure, and equal to 0 for the rest of the cluster centroids. In the following, we introduce the $z(i, j)$ values in a modified Error Criterion to be used in DNN learning to correctly classify the MRI inputs.
Normally, the DNN (CNN, or CNN-RNN) training is performed through minimization of the error criterion in Eq. (21) in terms of the DNN weights:
$E_1 = \frac{1}{n} \sum_{j=1}^{n} \big( d'(j) - y'(j) \big)^{2}$   (21)
where $y'$ and $d'$ denote the actual and desired network outputs and $n$ is equal to the number of all MRI input triplets.
We propose a modified Error Criterion, introducing an additional term, using the following definitions:
$g(i, j) = u(i) - r'(j), \quad i = 1, \ldots, k;\ j = 1, \ldots, n$   (22)
and
$G(i, j) = g(i, j) \cdot \big( g(i, j) \big)^{T}$   (23)
with $T$ indicating the transpose operator. It is desirable that the $G(i, j)$ term whose respective value of $z(i, j)$ equals one is minimized, whilst the $G(i, j)$ values corresponding to the rest of the $z(i, j)$ values, which are equal to zero, are maximized. Similarly to [21], we pass $G(i, j)$ through a softmax function $f$ and subtract its output from 1, so as to obtain the above-described respective minimum and maximum values.
The generated Loss Function is expressed in terms of the differences of the transformed G(i, j) values from the corresponding desired responses z(i, j), as follows:
$E_2 = \frac{1}{kn} \sum_{i=1}^{k} \sum_{j=1}^{n} \big( z(i, j) - [\, 1 - f(G(i, j)) \,] \big)^{2}$   (24)
calculated on the n data and the k cluster centroids. In general, our target is to minimize together Eq. (21) and Eq. (24). We can achieve this using the following Loss Function:
$E_{new} = \eta E_1 + (1 - \eta) E_2$   (25)
where η is chosen in the interval [0, 1]. Using a value of η towards zero provides more importance to the introduced centroids of the clusters of the latent variables extracted from the best performing DNN, trained with both MRI and DaT Scan data. On the contrary, using a value towards one leads to normal error criterion minimization.
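A possible TensorFlow sketch of this modified criterion is given below: E1 is the usual squared error of Eq. (21), E2 compares the softmax-transformed centroid distances with the nearest-centroid targets z(i, j) of Eq. (20), and the two are mixed as in Eq. (25). The tensor layouts, the axis over which the softmax is applied, and all variable names are assumptions based on the description above, not the authors' released code.

```python
import tensorflow as tf

def nearest_centroid_targets(r_latent, centroids):
    """Build z(i, j) as in Eq. (20): 1 for the closest centroid of each latent
    vector, 0 for the rest (returned here with shape (n, k))."""
    d = tf.reduce_sum(tf.square(r_latent[:, None, :] - centroids[None, :, :]), axis=-1)
    return tf.one_hot(tf.argmin(d, axis=-1), depth=centroids.shape[0])

def combined_loss(y_true, y_pred, r_latent, centroids, z_targets, eta=0.5):
    """E_new = eta * E1 + (1 - eta) * E2, following Eqs. (21), (24) and (25).

    y_true, y_pred : (n, 1) desired and actual network outputs      -> E1
    r_latent       : (n, l) latent vectors r'(j) of the MRI-only DNN
    centroids      : (k, l) annotated centroids u(i)
    z_targets      : (n, k) nearest-centroid targets z(i, j)        -> E2
    """
    e1 = tf.reduce_mean(tf.square(y_true - y_pred))

    # G(i, j): squared distances between latent vectors and centroids, shape (n, k).
    diff = r_latent[:, None, :] - centroids[None, :, :]
    G = tf.reduce_sum(tf.square(diff), axis=-1)
    # One reading of "pass G through a softmax f": softmax over the k centroids.
    f_G = tf.nn.softmax(G, axis=-1)
    e2 = tf.reduce_mean(tf.square(z_targets - (1.0 - f_G)))

    return eta * e1 + (1.0 - eta) * e2
```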
IV. EXPERIMENTAL EVALUATION
In this section we present a variety of experiments for evaluating the proposed approach. The implementation of all algorithms described in the former Section has been performed in Python using the Tensorflow library.
A. The Parkinson's Dataset
The data that are used in our experiments come from the Parkinson's database described in Section II. For training the CNN and CNN-RNN networks, we performed an augmentation procedure in the train dataset, as follows. After forming all triplets of consecutive MRI frames, we generated combinations of these image triplets with each one of the DaT Scans in each category (patients, non patients).
Consequently, we created a dataset of 66,176 training inputs, each of them consisting of 3 MRI and 1 DaT Scan images. In the test dataset, which referred to different subjects than the train dataset, we made this combination per subject; this created 1130 test inputs.
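The pairing described above can be reproduced with a few lines of Python; the array names and the per-category call pattern are illustrative assumptions.

```python
from itertools import product

def make_training_inputs(mri_frames, dat_scans):
    """Form all triplets of consecutive MRI frames and combine each triplet with
    every DaT Scan of the same category, yielding (3 MRI frames, 1 DaT Scan) inputs."""
    triplets = [mri_frames[i:i + 3] for i in range(len(mri_frames) - 2)]
    return [(triplet, dat) for triplet, dat in product(triplets, dat_scans)]

# Illustrative usage, one call per category:
# patient_inputs = make_training_inputs(patient_mri_frames, patient_dat_scans)
# control_inputs = make_training_inputs(control_mri_frames, control_dat_scans)
```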
For possible reproduction of our experiments, both the training and test datasets, each being split in two folders (patients and non patients), are available upon request from the mlearn.lincoln.ac.uk web site.
B. Testing the proposed Approach for Parkinson's Prediction
We used the DNN structures described in [6], including both CNN and CNN-RNN architectures to perform Parkinson's diagnosis, using the train and test data of the above-described database. The convolutional and pooling part of the architectures was based on the ResNet-50 structure; GRU units were used in the RNN part of the CNN-RNN architecture.
The best performing CNN and CNN-RNN structures, when trained with both MRI and DaT Scan data, are presented in Table I.
It is evident that the CNN-RNN architecture was able to provide excellent prediction results on the database test set. We, therefore, focus on this architecture for extracting latent variables. For comparison purposes, it can be mentioned that the performance of a similar CNN-RNN architecture when trained only with MRI inputs was about 70%.
It can be seen, from Table I, that the number l of neurons in the last FC layer of the CNN-RNN architecture was 128. This is, therefore, the dimension of the vectors r extracted as in Eq. (3) and used in the cluster generation procedure of Eq. (5).
We then implemented this cluster generation procedure, as described in the former Section. The k-means algorithm provided five clusters of the data in the 128-dimensional space. Fig. 2 depicts a 3-D visualization of the five cluster centroids; stars in blue color denote the two centroids corresponding to non patient data, while squares in red color represent the three cluster centroids corresponding to patient data.
With the aid of medical experts, we generated annotations of the images (3 MRI and 1 DaT Scan) corresponding to the 5 cluster centroids. It was very interesting to discover that these centroids represent different levels of Parkinson's evolution. Since the DaT Scans conveyed the major part of this discrimination, we show in Fig.3 the DaT Scans corresponding to each one of the cluster centroids.
According to the provided medical annotation, the 1st centroid (t 1 ) corresponds to a typical non patient case. The 2nd centroid (t 2 ) represents a non patient case as well, but with some findings that seem to be pathological. Moving to the patient cases, the 3rd centroid (t 3 ) shows an early step of Parkinson's -in stage 1 to stage 2, while the 4th centroid (t 4 ) denotes a typical Parkinson's case -in stage 2. Finally, the 5th centroid (t 5 ) represents an advanced step of Parkinson's -in stage 3. It is interesting to note here that, although the DNN was trained to classify input data in two categories -patients and non patients -, by extracting and clustering the latent variables, we were able to generate a richer representation of the diagnosis problem in five categories. It should be mentioned that the purity of each generated cluster was almost perfect. Table II shows the percentages of training data included in each one of the five generated clusters. It should be mentioned that almost two thirds of the data belong in clusters 2 and 3, i.e., in the categories which are close to the borderline between patients and non patients. These cases require major attention by the medical experts and the proposed procedure can be very helpful for diagnosis of such subjects' cases.
We tested this procedure on the Parkinson's test dataset, by computing the euclidean distances of the corresponding extracted latent variables from the 5 cluster centroids and by classifying them to the closest centroid. Table III shows the number of test data referring to six different subjects that were classified to each cluster. All non patient cases were correctly classified. In the patient cases, the great majority of the data of each patient were correctly classified to one of the respective centroids. In the small number of misclassifications, the disease symptoms were not so evident. However, based on the large majority of correct classifications, the subject would certainly attract the necessary interest from the medical expert.
We next examined the ability of the above-described DNN to be retrained using the procedure described in Subsection III.B.
In the developed scenario, we split the above test data in two parts: we included 3 of them (Non Patient 2, Patient 2 and Patient 3) in the retraining dataset $P_1$ and left the other 3 subjects in the new test dataset. The size $s$ of $P_1$ was equal to 493 inputs, including the five inputs corresponding to the cluster centroids in $U$; the size of the new test set was equal to 642 inputs.
We applied the proposed procedure to minimize the error over all train data in P and P 1 , focusing more on the latter, as described by Eq. (10).
The network managed to learn and correctly classify all 493 $P_1$ inputs, including the inputs corresponding to the cluster centroids, with a minimal degradation of its performance over the $P$ input data. We then applied the retrained network to the test dataset consisting of three subjects. In this case, there was also a slight improvement, since the performance was raised to 98.91%, compared to the corresponding performance on the same three subjects' data, shown in Table III, which was 98.44%. Table IV shows the clusters to which the newly extracted latent variables $r$ were classified. A comparison with the corresponding results in Table III shows the differences produced through retraining.
We finally examined the performance of the domain adaptation approach that was presented in Subsection III.C.
We started by training the CNN-RNN network with only the MRI triplets in $P'$ as inputs. The obtained performance when the trained network was applied to the test set $Q'$ was only 70.6%. For illustration of the proposed developments, we extracted the $r'$ latent variables from this trained network and classified them to a set of respectively extracted cluster centroids. Table V presents the results of this classification task, which is consistent with the acquired DNN performance. It can be seen that the MRI information leads the DNN prediction towards the patient class, which indeed contained more samples in the train dataset. Most errors were made in the non patient class (subjects 1 and 2). We then examined the ability of the proposed approach to train the CNN-RNN network using the modified Loss Function with various values of η; here we present the case of a value equal to 0.5. The obtained performance when the trained network was applied to the test set $Q'$ was raised to 81.1%. For illustrating this improvement, we also extracted the $r'$ latent variables from this trained network and classified them to one of the five annotated original cluster centroids in $U$. Table VI presents the results of this classification task. It is evident that minimization of the modified Loss Function managed to force the extracted latent variables to get closer to cluster centroids which belonged to the correct class for Parkinson's diagnosis.
V. CONCLUSIONS AND FUTURE WORK
The paper proposed a new approach for extracting latent variables from trained DNNs, in particular CNN and CNN-RNN architectures, and using them in a clustering and nearest neighbor classification method for achieving high performance and transparency in Parkinson's diagnosis. We have used | 4,346 |
1901.07822 | 2911565143 | This paper presents a new method for medical diagnosis of neurodegenerative diseases, such as Parkinson's, by extracting and using latent information from trained Deep convolutional, or convolutional-recurrent Neural Networks (DNNs). In particular, our approach adopts a combination of transfer learning, k-means clustering and k-Nearest Neighbour classification of deep neural network learned representations to provide enriched prediction of the disease based on MRI and or DaT Scan data. A new loss function is introduced and used in the training of the DNNs, so as to perform adaptation of the generated learned representations between data from different medical environments. Results are presented using a recently published database of Parkinson's related information, which was generated and evaluated in a hospital environment. | The developed networks included: transfer learning of the ResNet-50 network @cite_20 as far as the convolutional part of the networks was concerned, with retraining of the fully connected network layers; adding on top of this and training a recurrent network using Gated Recurrent Units (GRU) @cite_5 in an end-to-end manner. | {
"abstract": [
"In this paper we compare different types of recurrent units in recurrent neural networks (RNNs). Especially, we focus on more sophisticated units that implement a gating mechanism, such as a long short-term memory (LSTM) unit and a recently proposed gated recurrent unit (GRU). We evaluate these recurrent units on the tasks of polyphonic music modeling and speech signal modeling. Our experiments revealed that these advanced recurrent units are indeed better than more traditional recurrent units such as tanh units. Also, we found GRU to be comparable to LSTM.",
"Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation."
],
"cite_N": [
"@cite_5",
"@cite_20"
],
"mid": [
"1924770834",
"2949650786"
]
} | Predicting Parkinson's Disease using Latent Information extracted from Deep Neural Networks | Machine learning techniques have been largely used in medical signal and image analysis for prediction of neurodegenerative disorders, such as Alzheimer's and Parkinson's, which significantly affect elderly people, especially in developed countries [1], [2], [3].
1901.08100 | 2914653242 | Whole-body control (WBC) is a generic task-oriented control method for feedback control of loco-manipulation behaviors in humanoid robots. The combination of WBC and model-based walking controllers has been widely utilized in various humanoid robots. However, to date, the WBC method has not been employed for unsupported passive-ankle dynamic locomotion. As such, in this paper, we devise a new WBC, dubbed whole-body locomotion controller (WBLC), that can achieve experimental dynamic walking on unsupported passive-ankle biped robots. A key aspect of WBLC is the relaxation of contact constraints such that the control commands produce reduced jerk when switching foot contacts. To achieve robust dynamic locomotion, we conduct an in-depth analysis of uncertainty for our dynamic walking algorithm called time-to-velocity-reversal (TVR) planner. The uncertainty study is fundamental as it allows us to improve the control algorithms and mechanical structure of our robot to fulfill the tolerated uncertainty. In addition, we conduct extensive experimentation for: 1) unsupported dynamic balancing (i.e. in-place stepping) with a six degree-of-freedom (DoF) biped, Mercury; 2) unsupported directional walking with Mercury; 3) walking over an irregular and slippery terrain with Mercury; and 4) in-place walking with our newly designed ten-DoF viscoelastic liquid-cooled biped, DRACO. Overall, the main contributions of this work are on: a) achieving various modalities of unsupported dynamic locomotion of passive-ankle bipeds using a WBLC controller and a TVR planner, b) conducting an uncertainty analysis to improve the mechanical structure and the controllers of Mercury, and c) devising a whole-body control strategy that reduces movement jerk during walking. | Passive walking robots @cite_10 @cite_26 fall in the dynamic locomotion category too. These studies shed light on the important aspects of biped locomotion, but do not provide direct application for feedback control related to our methods. On the other hand, the progress made in actuated planar biped locomotion is impressive. @cite_21 @cite_23 show biped robots running and their capability to recover from disturbances on irregular terrains. However, there is an obvious gap between supported (or constrained) locomotion and unsupported walking. @cite_30 shows unsupported single leg hopping, which is a remarkable accomplishment. Besides the strong contribution in dynamic locomotion of that work, the study omitted several important aspects of unsupported biped locomotion such as body posture control, continuous interaction of the stance leg through the ground contact phases, and disturbances from the other limbs' motion, which are a focus of our paper. | {
"abstract": [
"In order to explore the balance in legged locomotion, we are studying systems that hop and run on one springy leg. Pre vious work has shown that relatively simple algorithms can achieve balance on one leg for the special case of a system that is constrained mechanically to operate in a plane (Rai bert, in press; Raibert and Brown, in press). Here we general ize the approach to a three-dimensional (3D) one-legged machine that runs and balances on an open floor without physical support. We decompose control of the machine into three separate parts: one part that controls forward running velocity, one part that controls attitude of the body, and a third part that controls hopping height. Experiments with a physical 3D one-legged hopping machine showed that this control scheme, while simple to implement, is powerful enough to permit hopping in place, running at a desired rate, and travel along a simple path. These algorithms that control locomotion in 3D are direct generalizations of those in 2D, with surpris...",
"Passive-dynamic walkers are simple mechanical devices, composed of solid parts connected by joints, that walk stably down a slope. They have no motors or controllers, yet can have remarkably humanlike motions. This suggests that these machines are useful models of human locomotion; however, they cannot walk on level ground. Here we present three robots based on passive-dynamics, with small active power sources substituted for gravity, which can walk on level ground. These robots use less control and less energy than other powered robots, yet walk more naturally, further suggesting the importance of passive-dynamics in human locomotion.",
"This report documents our work in exploring active balance for dynamic legged systems for the period from September 1985 through September 1989. The purpose of this research is to build a foundation of knowledge that can lead both to the construction of useful legged vehicles and to a better understanding of animal locomotion. In this report we focus on the control of biped locomotion, the use of terrain footholds, running at high speed, biped gymnastics, symmetry in running, and the mechanical design of articulated legs.",
"",
"There exists a class of two-legged machines for which walking is a natural dynamic mode. Once started on a shallow slope, a machine of this class will settle into a steady gait quite comparable to human walking, without active control or en ergy input. Interpretation and analysis of the physics are straightforward; the walking cycle, its stability, and its sensi tivity to parameter variations are easily calculated. Experi ments with a test machine verify that the passive walking effect can be readily exploited in practice. The dynamics are most clearly demonstrated by a machine powered only by gravity, but they can be combined easily with active energy input to produce efficient and dextrous walking over a broad range of terrain."
],
"cite_N": [
"@cite_30",
"@cite_26",
"@cite_21",
"@cite_23",
"@cite_10"
],
"mid": [
"2061063863",
"2029058516",
"1507646238",
"",
"2163668399"
]
} | Dynamic Locomotion For Passive-Ankle Biped Robots And Humanoids Using Whole-Body Locomotion Control | Passive-ankle walking has some key differences with respect to ankle-actuated biped legged locomotion: 1) bipeds with passive ankles have fewer degrees of freedom (DoF) than ankle-actuated legged robots, resulting in lower mechanical complexity and lighter lower legs; 2) bipeds with passive ankles have tiny feet, which lead to a small horizontal footprint of the robot. Our paper targets passive and quasi-passive ankle legged robots to leverage the above characteristics. In addition, there is a disconnect between dynamic legged locomotion methods, e.g. Rezazadeh et al. (2015); Hartley et al. (2017), and humanoid control methods, e.g. Koolen et al. (2016); Escande et al. (2014); Kuindersma et al. (2015), the latter focusing on coordinating loco-manipulation behaviors. Humanoid robots like the ones used during the DARPA Robotics Challenge (DRC) have often employed task-oriented inverse kinematics and inverse dynamics methods coupled with control of the robots' horizontal center of mass (CoM), demonstrating versatility for whole-body behaviors Kohlbrecher et al. (2014); Feng et al. (2015); Johnson et al. (2015); Radford et al. (2015a). However, they have been practically slower and less robust to external disturbances than bipeds employing dynamic locomotion methods which do not rely on horizontal CoM control. This paper aims to explore and offer a solution to close the gap between these two lines of control, i.e. versatile task-oriented controllers and dynamic locomotion controllers.
There is a family of walking control methods Hubicki et al. (2018); Raibert et al. (1984) that do not rely on explicit control of the horizontal CoM movement, enabling passive-ankle walking and also fulfilling many of the benefits listed above. These controllers use foot placements as a control mechanism to stabilize the under-actuated horizontal CoM dynamics. At no point do they attempt to directly control the instantaneous CoM state. Instead, they calculate a control policy in which the foot location is a feedback-weighted sum of the sensed CoM state. Our dynamic locomotion control policy falls into this category of controllers, albeit using a particular CoM feedback gain matrix based on the concept of time-to-velocity-reversal (TVR) Kim et al. (2014). Another important dynamic locomotion control strategy relies on the concept of hybrid zero dynamics (HZD) Westervelt et al. (2007). HZD considers an orbit for dynamic locomotion and a feedback control policy that guarantees asymptotic stability of the orbit Hartley et al. (2017); Hereid et al. (2016). Although these two lines of dynamic walking control have had an enormous impact on the legged locomotion field, they have not yet been extended to full humanoid systems. In particular, humanoid systems employing task-based whole-body control strategies require closing the gap with the above dynamic locomotion methods. This is precisely the main objective of this paper.
The main contribution of this paper is to achieve unsupported dynamic walking of passive-ankle and full humanoid robots using the whole-body control method. To do so, we: 1) devise a new task-based whole-body locomotion controller that meets the required maximum tracking error tolerances and significantly reduces contact jerk; 2) conduct an uncertainty analysis to improve the robot mechanics and controls; 3) integrate the whole-body control method with our dynamic locomotion planner on two experimental biped robots; and 4) extensively experiment with unsupported dynamic walking under disturbances such as thrown balls and pushes, as well as walking on irregular terrain.
One important improvement we have incorporated in our control scheme is to switch from joint torque control to joint position control. This low-level control change is due to the lessons we have learned regarding the overall system performance difference between low-level joint position control and torque control; namely, the joint position control used in this paper works better than the joint torque control of Kim et al. (2016). Additionally, our decision to use low-level joint position control is supported by previous studies showing that torque feedback reduces the ability to achieve a high-impedance behavior Calanca et al. (2016), which is needed for achieving dynamic biped locomotion with passive-ankle bipeds. Indeed, switching to joint position control has been a key performance improvement in achieving the difficult experimental results.
From the uncertainty analysis of our TVR dynamic locomotion planner, we found that to achieve stable locomotion the robot requires higher position tracking accuracy than initially expected. Our uncertainty analysis concludes that the landing foot positions need to be controlled within a 1 cm error and the CoM state needs to be estimated within a 0.5 cm error. Both the robot's posture control and the swing foot control require high tracking accuracy. For this reason, we remove the torque feedback in the low-level controller and instead impose a feedforward current command to compensate for whole-body inertial, Coriolis, and gravitational effects. However, this is not enough to overcome friction and stiction in the joint drivetrains. To overcome this issue, we introduce a motor position feedback controller Pratt et al. (2004). Next, the low-level joint commands are computed by our proposed whole-body locomotion controller (WBLC). WBLC consists of two sequential blocks: a kinematics-level whole-body controller, hereafter referred to as KinWBC, and a dynamics-level whole-body controller (DynWBC). The first block, KinWBC, computes joint position commands as a function of the desired operational task commands using feedback control over the robot's body posture and its foot position.
Given these joint position commands, DynWBC computes feedforward torque commands while incorporating gravity and Coriolis forces, as well as friction cone constraints at the contact points. One key characteristic of DynWBC is the formulation of reduced-jerk torque commands to handle sudden contact changes. Indeed, in our formulation, we avoid formulating contacts as hard constraints Herzog et al. (2016); Saab et al. (2013); Wensing and Orin (2013) and instead include them as a cost function. We then use the cost weights associated with the contacts to change behavior during contact transitions in a way that significantly reduces movement jerk. For instance, when we apply heavy cost weights to the contact accelerations, we effectively emulate the effect of contact constraints. During foot detachment, we continuously reduce the contact cost weights. By doing so, we accomplish smooth transitions as the contact conditions change. An approach based on whole-body inverse dynamics has been proposed for smooth task transitions Salini et al. (2011), but it has not been applied to contact transitions like ours, nor has it been implemented on experimental platforms.
The above WBLC and joint-level position feedback controller can achieve high fidelity real-time control of bipeds and humanoid robots. For locomotion control, we employ the time-to-velocity-reversal (TVR) planner presented in Kim et al. (2016). We use the TVR planner to update foot landing locations at every step as a function of the CoM state. And we do so by planning in the middle of leg swing motions. By continuously updating the foot landing locations, bipeds accomplish dynamic walking that is robust to control errors and to external disturbances. The capability of our walking controller is extensively tested in a passive-ankle biped robot and in a quasi-passive ankle lower body humanoid robot. By relying on foot landing location commands, our control scheme is generic to various types of bipeds and therefore, we can accomplish similar walking capabilities across various robots by simply switching the robot parameters. To demonstrate the generality of our controller, we test not only two experimental bipeds but also a simulation of other humanoid robots.
Indeed, experimental validation is a main contribution of this paper. The passive ankle biped, Mercury, is used for extensive testing of dynamic balancing, directional walking, and rough terrain walking. We also deploy the same methods to our new biped, DRACO, and accomplish dynamic walking within a few days after the robot had its joint position controllers developed. Such timely deployment showcases the robustness and versatility of the proposed control framework.
The paper is organized as follows. Section 5 introduces our robot hardware and its characteristic features. In Section 3, we explain the control framework consisting of the dynamic locomotion planner, the whole-body locomotion controller, and the joint-level position controller. Section 4 explains how measurement noise and landing location errors affect the stability of our dynamic locomotion controller, and analyzes the accuracy required for state estimation and swing foot control to asymptotically stabilize bipeds. In Section 6, we address implementation details. Section 7 extensively discusses experimental and simulation results. Finally, Section 8 concludes and summarizes our work. ATRIAS Rezazadeh et al. (2015); Hartley et al. (2017) is the closest prior example to the work proposed in this paper. It is one of the first passive-ankle biped robots that is able to dynamically balance and walk unsupported.
Locomotion Control Architecture
Our proposed control architecture consists of three components: 1) a whole-body locomotion controller (WBLC) which coordinates joint commands based on desired operational space goal trajectories, 2) a set of joint-level feedback controllers which execute the commanded joint trajectories, and 3) a dynamic locomotion planner for passive-ankle bipeds which generates the foot landing locations based on TVR considerations. In this section, we will describe the details of these layers as well as their interaction.
Whole-Body Locomotion Controller (WBLC)
Many WBCs include a robot dynamic model to compute joint torque/force commands to achieve desired operational space trajectories. If we had ideal motors with perfect gears, the computed torque commands of a WBC could be sent out as open-loop motor currents. However, excluding some special actuator designs Wensing et al. (2017), it is nontrivial to achieve the desired torque/force commands using open-loop motor currents because most actuators have high friction and stiction in their drivetrains. One established way to overcome drivetrain friction is to employ torque/force sensor feedback at the joint level. However, negative torque/force feedback control is known to reduce the maximum achievable closed-loop stiffness of joint controllers Calanca et al. (2016). In addition, torque/force feedback controllers used in combination with position control are known to be more sensitive to contact disturbances and time delays. Therefore, we need a solution that addresses all of these limitations.
Another consideration is related to the task space impedance behavior that is needed to achieve dynamic walking. Our observation is that a high impedance behavior in task space is preferred for dynamic walking because: 1) the foot landing location must be fairly accurate to stabilize the biped; 2) the swing leg must be able to overcome external disturbances; and 3) the robot's body posture needs to suppress oscillations caused by the effect of moving limbs or other disturbances. High stiffness control of robots with sizable mechanical imperfections is the only way to achieve stable passive-ankle biped walking despite making them less compliant with respect to the terrain.
To accomplish high gain position control, we have opted to remove sensor-based torque feedback at the joint level and replace it with motor position feedback control. Our observation is that this change significantly reduces the effect of the imperfect mechanics and achieves higher position control bandwidth than using torque feedback. In addition to the joint position commands, the desired torque commands computed via WBC are incorporated as feedforward motor current commands. Thus, to combine motor position and feedforward motor current commands for dynamic locomotion, we devise a new WBC formulation that we call whole-body locomotion control (WBLC).
WBLC is sequentially implemented with two control blocks. The first block is a kinematic-level WBC (KinWBC) that computes joint position, velocity, and acceleration commands. KinWBC does not rely on a dynamical model of the robot, instead it relies only on a kinematics model to coordinate multiple prioritized operation space tasks. The second block, called the dynamic-level WBC (DynWBC), takes the joint commands from KinWBC and computes the desired torque commands that are consistent with the robot dynamics and the changing contact constraints. The output of WBLC is therefore comprised of desired joint torque, position, and velocity commands, which are sent out to the joint-level feedback controllers.
Kinematic-level Whole-Body Controller (KinWBC)
We first formulate a kinematic whole-body controller to obtain joint commands given operational space commands. The basic idea is to compute incremental joint positions based on operational space position errors and add them to the current joint positions. This is done using null-space task prioritization as follows.
$\Delta q_1 = J_1^{\dagger}(x_1^{des} - x_1)$,  (1)
$\Delta q_2 = \Delta q_1 + J_{2|pre}^{\dagger}(x_2^{des} - x_2 - J_2 \Delta q_1)$,  (2)
$\vdots$
$\Delta q_i = \Delta q_{i-1} + J_{i|pre}^{\dagger}(x_i^{des} - x_i - J_i \Delta q_{i-1})$,  (3)
where $J_i$, $x_i^{des}$, and $\Delta q_i$ are the $i$-th task Jacobian, the desired position of the $i$-th task, and the change of joint configuration at the $i$-th task iteration. The $\{\cdot\}^{\dagger}$ denotes an SVD-based pseudo-inverse operator in which small singular values are set to 0. Note that there are no feedback gain terms in this formulation, which can be interpreted as the gains being equal to unity. In addition, the prioritized Jacobians take the form:
$J_{i|pre} = J_i N_{i-1}$,  (4)
$N_{i-1} = N_{1|0} \cdots N_{i-1|i-2}$,  (5)
$N_{i-1|i-2} = I - J_{i-1|pre}^{\dagger} J_{i-1|pre}$,  (6)
$N_0 = I$.  (7)
Then, the joint position commands can be found with
$q^d = q + \Delta q$,  (8)
where $\Delta q$ is the joint increment computed for the $i$-th task in Eq. (3). In addition, the joint velocity and acceleration for every task iteration can be computed as,
$\dot{q}_i^d = \dot{q}_{i-1}^d + J_{i|pre}^{\dagger}(\dot{x}^{des} - J_i \dot{q}_{i-1}^d)$,  (9)
$\ddot{q}_i^d = \ddot{q}_{i-1}^d + J_{i|pre}^{\dagger}(\ddot{x}^{des} - \dot{J}_i \dot{q} - J_i \ddot{q}_{i-1}^d)$.  (10)
Finally, the joint commands, $q^d$, $\dot{q}^d$, and $\ddot{q}^d$, are sent out to the DynWBC block. We note that $q$ is the full configuration of the robot containing both the floating base and the actuated joints.
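To make the recursion in Eqs. (1)-(10) concrete, the following minimal sketch implements the position part of KinWBC (Eqs. (1)-(8)) with a truncated-SVD pseudo-inverse. The Jacobians, task targets, dimensions, and singular-value threshold below are illustrative placeholders rather than Mercury's actual task set; in a real controller the body posture and swing foot Jacobians would come from the robot's kinematics model.

```python
import numpy as np

def svd_pinv(J, sv_tol=1e-4):
    """SVD-based pseudo-inverse; singular values below sv_tol are set to zero."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    s_inv = np.where(s > sv_tol, 1.0 / s, 0.0)
    return Vt.T @ np.diag(s_inv) @ U.T

def kin_wbc_positions(q, tasks, sv_tol=1e-4):
    """Prioritized kinematic WBC, Eqs. (1)-(8).

    q     : current full configuration (floating base + actuated joints), shape (n,)
    tasks : list of (J_i, x_des_i, x_i) in priority order (highest first)
    returns the joint position command q_d = q + delta_q
    """
    n = q.shape[0]
    dq = np.zeros(n)          # accumulated joint increment
    N = np.eye(n)             # null-space projector of all higher-priority tasks
    for J, x_des, x in tasks:
        J_pre = J @ N                                  # Eq. (4): prioritized Jacobian
        J_pre_pinv = svd_pinv(J_pre, sv_tol)
        dq = dq + J_pre_pinv @ (x_des - x - J @ dq)    # Eqs. (1)-(3)
        N = N @ (np.eye(n) - J_pre_pinv @ J_pre)       # Eqs. (5)-(6)
    return q + dq                                      # Eq. (8)

# Toy usage with random Jacobians (12-DoF model: 6 floating + 6 actuated)
rng = np.random.default_rng(0)
q = np.zeros(12)
body_task = (rng.standard_normal((3, 12)), np.array([0.0, 0.0, 0.9]),
             np.array([0.01, -0.02, 0.88]))
foot_task = (rng.standard_normal((3, 12)), np.array([0.2, 0.1, 0.0]),
             np.array([0.18, 0.12, 0.05]))
q_cmd = kin_wbc_positions(q, [body_task, foot_task])
```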
Dynamic-level Whole-Body Controller (DynWBC)
Given joint position, velocity, and acceleration commands from the KinWBC, the DynWBC computes torque commands while considering the robot dynamic model and various constraints. The optimization algorithm to compute torque commands in DynWBC is as follows:
$\min_{F_r,\, \ddot{x}_c,\, \delta\ddot{q}} \; F_r^{\top} W_r F_r + \ddot{x}_c^{\top} W_c \ddot{x}_c + \delta\ddot{q}^{\top} W_{\ddot{q}}\, \delta\ddot{q}$  (11)
s.t. $U F_r \ge 0$,  (12)
$S F_r \le F_{r,z}^{max}$,  (13)
$\ddot{x}_c = J_c \ddot{q} + \dot{J}_c \dot{q}$,  (14)
$A\ddot{q} + b + g = \begin{bmatrix} 0_{6\times 1} \\ \tau^{cmd} \end{bmatrix} + J_c^{\top} F_r$,  (15)
$\ddot{q} = \ddot{q}^{cmd} + \delta\ddot{q}$,  (16)
$\ddot{q}^{cmd} = \ddot{q}^d + k_d(\dot{q}^d - \dot{q}) + k_p(q^d - q)$,  (17)
$\tau^{min} \le \tau^{cmd} \le \tau^{max}$.  (18)
In turn, these computed torque commands are sent out as feedforward motor current commands to the joint-level controllers. One key difference with other QP formulations for whole-body control is that we do not use the null-space operators of the contact constraints, nor do we use a null velocity or acceleration assumption to describe the surface contacts of the robot with the ground. Instead, contact interactions are addressed with contact acceleration terms in the cost function regulated with weighting matrices that effectively model the changes in the contact state. This new term is particularly important since the traditional modeling of contacts as hard constraints causes torque command discontinuities due to sudden contact switches. As such, we call our formulation reduced-"jerk" whole-body control. We note that our formulation is the first attempt that we know of to use WBC for unsupported passive-ankle dynamic locomotion in experimental bipeds. Contact changes in passive-ankle biped locomotion are far more sudden than changes on robots that control the horizontal CoM movement. Our proposed formulation emerges from extensive experimentation and comparison between QP-based WBC formulations using hard contact constraints versus the proposed soft constraints. We report that the above formulation has empirically been shown to produce rapidly changing but smoother torque commands than WBCs with hard constraints. To achieve smooth contact switching, the contact Jacobian employed above includes both of the robot's feet even if one of them is not currently in contact. As mentioned above, we never set foot contact accelerations to zero even if the feet are in contact. Instead, we penalize foot accelerations in the cost function depending on whether they are in contact or not using the weight $W_c$. When a foot is in contact, we increase the values of $W_c$ for the block corresponding to that contact. Similarly, we reduce the values of the weights when the foot is removed from the contact. At the same time, we increase the weight $W_r$ for the swing foot and reduce the upper bound of the reaction force $F_{r,z}^{max}$. In essence, by smoothly changing the upper bounds, $F_{r,z}^{max}$, and the weights, $W_r$ and $W_c$, we practically achieve jerk-free walking motions. The concrete weights and bounds used in our experiments are explained in Section 6.2.
In the above algorithm, $U$ encodes the unilateral and friction cone constraints on the reaction forces, as described in Bouyarmane et al. (2018), and $F_r$ represents the contact reaction forces. Eq. (13) introduces upper bounds on the normal reaction forces to facilitate smooth contact transitions. As mentioned before, this upper bound is selected to decrease when a foot detaches from the ground and to increase again when the foot makes contact.
Eq. (15) models the full-body dynamics of the robot including the reaction forces. $A$, $b$, and $g$ are the generalized inertia matrix, Coriolis forces, and gravitational forces, respectively. The diagonal terms of the inertia matrix include the rotor inertia of each actuator in addition to the linkage inertia. The rotor inertia is an important inclusion to achieve good performance. Eq. (16) shows the relaxation of the joint acceleration commands, $\ddot{q}^{cmd}$, by the term $\delta\ddot{q}$. We include this relaxation for two reasons. First, the KinWBC specifies accelerations for the virtual (floating-base) joints, which cannot be perfectly attained. Second, the torque limits in the above optimization can prevent achieving the desired joint accelerations. Eq. (17) shows how the KinWBC's joint commands are used to find the desired acceleration commands. Here, $q^d$, $\dot{q}^d$, and $\ddot{q}^d$ are the computed commands from KinWBC. Eq. (18) represents the torque limits.
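The following sketch reproduces the structure of the soft-contact QP in Eqs. (11)-(18) using cvxpy, with contacts penalized in the cost rather than imposed as hard constraints. All model matrices, dimensions, friction parameters, and bounds are made-up placeholders (a real implementation would fill them in from the robot's rigid-body dynamics), and the solver choice is left to cvxpy's defaults.

```python
import numpy as np
import cvxpy as cp

# Placeholder model data; a real controller fills these from rigid-body dynamics.
n, nc = 12, 6                           # 6 floating-base + 6 actuated DoF; two 3-D foot contacts
rng = np.random.default_rng(1)
A = 5.0 * np.eye(n)                     # generalized inertia matrix (placeholder)
b = 0.1 * rng.standard_normal(n)        # Coriolis vector (placeholder)
g_vec = np.zeros(n); g_vec[2] = 98.1    # gravity vector (placeholder)
Jc = rng.standard_normal((nc, n))       # stacked contact Jacobian, both feet
dJc_dq = rng.standard_normal(nc)        # J_c_dot * q_dot term
ddq_cmd = rng.standard_normal(n)        # from KinWBC + PD terms, Eq. (17)
mu = 0.5
cone = np.array([[0, 0, 1], [-1, 0, mu], [1, 0, mu], [0, -1, mu], [0, 1, mu]])
U = np.block([[cone, np.zeros((5, 3))], [np.zeros((5, 3)), cone]])  # Eq. (12), pyramid cone
S = np.zeros((2, nc)); S[0, 2] = S[1, 5] = 1.0                      # selects the normal forces
F_max = np.array([400.0, 400.0])
tau_lim = 60.0 * np.ones(n - 6)
# Double-support weights from Table 4, stored as square roots for sum_squares.
w_r = np.sqrt([1, 1, 0.01, 1, 1, 0.01])
w_c = np.sqrt(1e3) * np.ones(nc)
w_q = np.sqrt(1e2) * np.ones(n)

# Soft-contact QP, Eqs. (11)-(18).
Fr = cp.Variable(nc)                    # contact reaction forces
dqe = cp.Variable(n)                    # acceleration relaxation, Eq. (16)
ddq = ddq_cmd + dqe
xc_dd = Jc @ ddq + dJc_dq               # contact accelerations, Eq. (14)
wrench = A @ ddq + b + g_vec - Jc.T @ Fr   # must equal [0_6; tau_cmd], Eq. (15)
tau = wrench[6:]
cost = (cp.sum_squares(cp.multiply(w_r, Fr))
        + cp.sum_squares(cp.multiply(w_c, xc_dd))
        + cp.sum_squares(cp.multiply(w_q, dqe)))
constraints = [U @ Fr >= 0, S @ Fr <= F_max,     # Eqs. (12)-(13)
               wrench[:6] == 0,                  # floating-base rows of Eq. (15)
               tau >= -tau_lim, tau <= tau_lim]  # Eq. (18)
cp.Problem(cp.Minimize(cost), constraints).solve()
print("feedforward torque command:", np.round(tau.value, 2))
```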
Joint-Level Controller
Each actuated joint has an embedded control board that we use to implement the motor position PD control with feedforward torque inputs:
$\tau_m = \tau^{cmd} + k_p(q_m^d - q_m) + k_d(\dot{q}_j^d - \dot{q}_m)$,  (19)
where $\tau_m$ and $\tau^{cmd}$ are the desired motor torque and the computed torque command, the latter obtained from Eq. (15) of the optimization problem. Thus, $\tau^{cmd}$ acts as the feedforward control input. $\dot{q}_j^d$ is the desired joint velocity computed from the KinWBC; it is obtained by applying the iterative algorithm in Eq. (9). $q_m^d$ is the desired motor position command and is computed using the following formula,
$q_m^d = q_j^d + \frac{\tau^{cmd}}{k_s}$,  (20)
where $k_s$ is the spring constant of each SEA joint. $q_j^d$ is obtained via the iterative algorithm shown in Eqs. (1) ∼ (8). We incorporate this spring deflection term because the computation of joint positions from motor positions, $q_m$, considers only the transmission ratio, $N$, and ignores the spring deflection.
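A minimal sketch of the joint-level law in Eqs. (19)-(20): a motor-position PD loop around the KinWBC command, offset by the expected SEA spring deflection, plus the DynWBC torque as a feedforward term. The gains and the example numbers are illustrative assumptions, not Mercury's tuned values.

```python
import numpy as np

def joint_level_command(q_j_des, dq_j_des, tau_cmd, q_m, dq_m, k_s,
                        kp=300.0, kd=5.0):
    """Joint-level motor PD with feedforward torque, Eqs. (19)-(20).

    q_j_des, dq_j_des : desired joint position/velocity from KinWBC
    tau_cmd           : feedforward torque from DynWBC
    q_m, dq_m         : measured motor position/velocity (joint-side units)
    k_s               : SEA spring constant
    kp, kd            : illustrative PD gains, not the robot's tuned values
    """
    q_m_des = q_j_des + tau_cmd / k_s                                 # Eq. (20)
    tau_m = tau_cmd + kp * (q_m_des - q_m) + kd * (dq_j_des - dq_m)   # Eq. (19)
    return tau_m

# Example: a knee joint holding 30 Nm against a 120 Nm/rad spring
print(joint_level_command(q_j_des=0.8, dq_j_des=0.0, tau_cmd=30.0,
                          q_m=0.9, dq_m=0.05, k_s=120.0))
```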
Time-to-Velocity-Reversal (TVR) Planner
At every step, a TVR planner computes foot placements as a function of the CoM state, i.e. its position and velocity. This is done around the middle of the swing foot motion. Our TVR planner operates on the principle of reversing the CoM velocity every step, and it can be shown that the resulting CoM movement is asymptotically stable. The original method was presented in our previous paper Kim et al. (2016). In this paper, we use a simplified version of TVR which considers a constant CoM height. This consideration has been beneficial for the experimental results across the multiple biped platforms explored in this paper. In Appendix 2 we explain the differences between our planner and the ones proposed by Raibert et al. (1984), Koolen et al. (2012), and Rezazadeh et al. (2015).
Uncertainty Analysis Of The Planner
One of the biggest challenges in unsupported passive-ankle dynamic locomotion is to determine what control accuracy is needed to effectively stabilize a biped. Given that a passive-ankle biped robot cannot use ankle torques to control the robot's CoM movement, foot position accuracy, state estimation, and other related considerations become much more important in achieving the desired dynamic behavior. For instance, the CoM dynamics emerging from passive-ankle behavior evolve exponentially with time, pointing out the need to determine the tolerable foot position and body estimation errors. In this section, we develop the tools to explicitly quantify the accuracy required to achieve asymptotically stable passive-ankle dynamic locomotion.
As previously mentioned, our TVR locomotion planner observes the CoM position and velocity and computes a foot landing location. For our analysis and experimentation, we enforce a constant CoM height constraint. Our reliance on the linear inverted pendulum (LIP) model enables a straightforward uncertainty analysis given noisy CoM state observations and landing location errors under kinematic constraints.
Formulation of the Planner
Our TVR planner relies on the LIP model:
$\ddot{x} = \frac{g}{h}(x - p)$,  (21)
where $g$ is the gravitational acceleration, $h$ is the constant CoM height, and $p$ is the foot landing location, which acts as a stabilizing input for reversing the CoM dynamics at every step. More concretely, the TVR planner aims to reverse the CoM velocity after a set time duration $t'$ by computing a new stance foot location, $p$. Note that Eq. (21) is linear, so it has an exact solution for the CoM state, $x(t)$. Thus, for a given $p$, the CoM state after a desired swing time $T$ can be described as a discrete system, where $k$ corresponds to the $k$-th walking step of the robot:
$x_{k+1} = A x_k + B p_k$,  (22)
$A = \begin{bmatrix} \cosh(\omega T) & \omega^{-1}\sinh(\omega T) \\ \omega\sinh(\omega T) & \cosh(\omega T) \end{bmatrix}$,  (23)
$B = \begin{bmatrix} 1 - \cosh(\omega T) \\ -\omega\sinh(\omega T) \end{bmatrix}$,  (24)
where $\omega = \sqrt{g/h}$. The system above can be straightforwardly obtained by applying known second-order linear ODE techniques to Eq. (21). Next, let $p_k$ correspond to the foot location of the $k$-th step in a sequence of steps. Our TVR planner is based on the objective of finding a $p_k$ which reverses the CoM velocity at every step. Setting the velocity component (bottom row) of Eq. (22) to zero after the desired reversal time, $t' < T$, results in the equality,
$0 = \begin{bmatrix} \omega\sinh(\omega t') & \cosh(\omega t') \end{bmatrix} x_k - \omega\sinh(\omega t')\, p_k$.  (25)
Solving for $p_k$ in the above equation results in the foot landing location policy that reverses the CoM velocity after $t'$. With the CoM velocity being reversed at every step, an additional bias term, $\kappa$, is added to steer the robot toward the origin. Further details about $\kappa$ can be found in Kim et al. (2016). Solving Eq. (25) and including the additional $\kappa$ term, we get
$p_k = \begin{bmatrix} 1 & \omega^{-1}\coth(\omega t') \end{bmatrix} x_k + \begin{bmatrix} \kappa & 0 \end{bmatrix} x_k$.  (26)
Incorporating the above feedback policy into Eq. (22), we get the closed loop dynamics,
$x_{k+1} = (A + BK) x_k$,  (27)
$K = \begin{bmatrix} 1 + \kappa & \omega^{-1}\coth(\omega t') \end{bmatrix}$.  (28)
Notice that the control policy in Eq. (27) has a simple PD control form; therefore, applying standard linear stability methods, the planner parameters, $(\kappa, t')$, can be tuned such that the closed-loop eigenvalues of $A + BK$ have magnitudes smaller than 1. In our case, we chose eigenvalues with magnitude equal to 0.8. Since our desired behavior is to take multiple small steps toward a desired reference position rather than a single big step, the eigenvalue magnitudes are intentionally set close to one rather than zero. The resulting motion (simulated numerically) in Fig. 2(a) shows the asymptotically converging trajectories in the phase plot.
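To illustrate Eqs. (22)-(28), the sketch below builds the step-to-step LIP matrices, forms the TVR foot-placement gain $K$, checks that the closed-loop eigenvalue magnitudes are below one, and simulates a few steps. The parameters (h, T, t', kappa) are illustrative assumptions, not the values used on Mercury.

```python
import numpy as np

# Illustrative parameters (not the values in Table 1)
g, h = 9.81, 0.8
T, t_p, kappa = 0.33, 0.26, 0.12
w = np.sqrt(g / h)

# Step-to-step LIP dynamics, Eqs. (22)-(24)
A = np.array([[np.cosh(w * T),     np.sinh(w * T) / w],
              [w * np.sinh(w * T), np.cosh(w * T)]])
B = np.array([[1.0 - np.cosh(w * T)],
              [-w * np.sinh(w * T)]])

# TVR foot-placement policy gain, Eqs. (26)-(28); coth = cosh/sinh
K = np.array([[1.0 + kappa, np.cosh(w * t_p) / (w * np.sinh(w * t_p))]])

Acl = A + B @ K
print("closed-loop eigenvalue magnitudes:", np.abs(np.linalg.eigvals(Acl)))

# Simulate a few steps from an initial CoM offset and velocity
x = np.array([[0.05], [0.3]])          # [position (m), velocity (m/s)]
for k in range(6):
    p = (K @ x).item()                 # commanded foot landing location
    x = A @ x + B * p                  # Eq. (22)
    print(f"step {k+1}: p = {p:+.3f} m, CoM = ({x[0,0]:+.3f} m, {x[1,0]:+.3f} m/s)")
```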
Uncertainty Analysis
During experimental walking tests, we observed notable body position and landing location errors due to the deflection of the mechanical linkages of the robot. We note that in our attempt to make Mercury a light-weight robot, we designed body and leg structures made of thin aluminum pieces and carbon fiber. In particular, the lower and upper legs of Mercury are constructed using carbon fiber without further rigid support. In addition, the abduction and flexion hip joints contain drivetrains made of thin aluminum with pin joints that deflect when contact occurs. Rather than focusing on the effect of these existing mechanical deformations, we decided to focus on the maximum errors that our dynamic locomotion controller can tolerate. After we found the maximum tolerances, we went back to the robot's mechanical design and replaced the hip joints and the leg linkages with significantly more rigid ones in order to fulfill the maximum tolerances. Therefore, our uncertainty analysis has been fundamental in driving the redesign of the mechanical structure of the original biped hardware to achieve the desired performance.
[Table 1: TVR planner parameters $t'_x$, $t'_y$, $\kappa_x$, $\kappa_y$ (values truncated in the source).]
To quantify the acceptable errors for our TVR planner, we perform here an analysis of stability borrowing ideas from robust control Bahnasawi et al. (1989). We apply some assumptions to simplify our analysis:
1. The robot's step size is limited to 0.5 m based on approximate leg kinematic limits.
2. State-dependent errors are ignored.
For our analysis, we model foot landing location errors (presumably resulting from mechanical deflection and limited control bandwidth) with a scalar term, $\eta$. On the other hand, we model CoM state estimation errors as a vector of position and velocity errors, $\delta$. Based on these error variables, we extend the dynamics of Eq. (22) to be
$x_{k+1} = A x_k + B(p_k + \eta), \quad p_k = K(x_k + \delta)$.  (29)
In order to provide design specifications to improve the robot mechanics, controllers, and estimation processes, we choose arbitrary bounds such that
$\|\delta\| \le \delta_M, \quad \|\eta\| \le \eta_M$.  (30)
Once again, we use the proposed uncertainty analysis to determine the maximum tolerance bounds $\delta_M$ and $\eta_M$, providing design specifications. Since the velocity of the state resulting from our TVR planner changes sign after every step, typical convergence analysis regards this effect as an oscillatory behavior despite the fact that the absolute value of the CoM state, $x$, effectively decreases over time.
To remedy this, we perform a convergence analysis after two steps instead of a single step. Therefore, given an initial state, $x$, after two steps the new state, $x''$, is obtained by applying Eq. (29) twice,
$x'' = A^2 x + AB(p + \eta) + B(p' + \eta'), \quad p = K(x + \delta), \quad p' = K(x' + \delta')$,  (31)
where $(\cdot)$, $(\cdot)'$, and $(\cdot)''$ represent the $k$-th, $(k+1)$-th, and $(k+2)$-th step, respectively. The main idea is to find the region in $x$ for which a Lyapunov function decreases in value after two steps subject to the maximum errors, $\delta_M$ and $\eta_M$:
$\Delta V = x''^{\top} P x'' - x^{\top} P x \le 0$.  (32)
Substituting Eq. (31), rearranging the terms, and bounding $\Delta V$ from above, it can be shown that
$\Delta V = x^{\top}(A_{cc}^{\top} P A_{cc} - P)x + 2\zeta^{\top} P A_{cc} x + \zeta^{\top} P \zeta \le -a\|x\|^2 + 2b\|x\| + c \le 0$,  (33)
where
$A_c = A - BK$,  (34)
$A_{cc} = A_c^2$,  (35)
$\zeta = A_c B K \delta + B K \delta' + A_c B \eta + B \eta'$,  (36)
$a = -\lambda_M\!\left(A_{cc}^{\top} P A_{cc} - P\right)$,  (37)
$b = \delta_M\left(\|A_{cc}^{\top} P A_c B K\| + \|A_{cc}^{\top} P B K\|\right) + \eta_M\left(\|A_{cc}^{\top} P A_c B\| + \|A_{cc}^{\top} P B\|\right)$,  (38)
$c = g(\zeta^{\top} P \zeta)$.  (39)
Notice that the upper bound defined by $a$, $b$, and $c$ has a quadratic form, which allows us to easily find a solution in terms of the Euclidean norm of the CoM state. $\|\cdot\|$ is the $\ell_2$-norm, $\lambda_M(\cdot)$ denotes the maximum eigenvalue of its matrix argument, and $g(\zeta^{\top} P \zeta)$ is the sum of the $\ell_2$-norms of every term in $\zeta^{\top} P \zeta$, similar to $b$. The definition of $g$ is deferred to Appendix 3 due to the length of the expression. Note that $a$ is positive if the planner parameters are tuned such that the LIP behavior is stable. Solving $-a\|x\|^2 + 2b\|x\| + c \le 0$, we get the uncertainty ball region,
$B_r = \left\{ x \;\middle|\; \|x\| \le \dfrac{b + \sqrt{b^2 + ac}}{a} \right\}$.  (40)
The above ball defines the region of states for which we cannot guarantee asymptotic stability. Conversely, the region of states outside of the ball, $x \notin B_r$, corresponds to asymptotically stable states. Note that a smaller ball means a larger stability region, and if the errors $\eta$ and $\delta$ were zero, the ball would have zero radius and any state would be asymptotically stable. However, because of mechanical deflection, limited control bandwidth, and estimation errors, $b$ and $c$ are non-zero. By substituting the planner's parameters from Table 1 into the above equation, we can quantify and analyze the effect of the errors mentioned above. Fig. 2(b) shows the CoM phase space plot. Take Eq. (26) and write it in the simple form,
$p = k_p x + k_d \dot{x}$.  (41)
As we said, this equation corresponds to the foot landing location control policy that stabilizes a biped robot. We also mention that the maximum step size for our robot, Mercury, is $-0.5\,\mathrm{m} < p < 0.5\,\mathrm{m}$. If we apply these kinematic limits to the above foot control policy, we obtain a pair of lines in the phase plane which define the area of feasible CoM states given the foot kinematic limits. This area is highlighted in light blue in our phase plot. To be clear, the light blue area defines the states from which the robot can recover within a single walking step without violating kinematic limits. Next, let us consider the uncertainty region defined by Eq. (40). Notice that the terms $b$ and $c$ depend on the uncertainty errors. For example, if we have a maximum foot landing error of 0.045 m and a maximum state estimation error of 0.03 m, then $\eta_M = 0.045$ m and $\delta_M = 0.03$ m. If we plug these values into Eqs. (38) ∼ (40), we get the orange ball shown in Fig. 2(b). The inside of this ball represents states for which we cannot guarantee asymptotic stability. The problem is that the orange uncertainty region includes states outside of the feasible CoM state region, the light blue region. This means that the actual CoM could have a value from which the robot cannot recover because it requires foot steps outside of the robot's kinematic limits. As we mentioned before, our biped robot, Mercury, underwent significant mechanical, control, and sensing improvements to remedy this problem. The errors represented by the orange ball are close to what we observed in our walking tests before we upgraded Mercury. After making hardware and control improvements, we reduced the errors to $\eta_M = 0.01$ m and $\delta_M = 0.007$ m.
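The following sketch shows how the radius of the uncertainty ball in Eq. (40) could be evaluated numerically for given error bounds. The planner parameters are the same illustrative ones as above (not the Table 1 values), the closed-loop matrix follows Eqs. (27)/(29) (Eq. (34) writes A - BK under the opposite sign convention for K), P is taken from a discrete Lyapunov equation so that a = 1, and the c term uses a conservative bound in place of the paper's g(.) expression from Appendix 3.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Illustrative LIP/TVR setup (not the Table 1 values)
grav, h, T, t_p, kappa = 9.81, 0.8, 0.33, 0.26, 0.12
w = np.sqrt(grav / h)
A = np.array([[np.cosh(w*T), np.sinh(w*T)/w], [w*np.sinh(w*T), np.cosh(w*T)]])
B = np.array([[1.0 - np.cosh(w*T)], [-w*np.sinh(w*T)]])
K = np.array([[1.0 + kappa, np.cosh(w*t_p)/(w*np.sinh(w*t_p))]])

Ac = A + B @ K                                   # closed-loop step-to-step matrix
Acc = Ac @ Ac                                    # Eq. (35): two steps
P = solve_discrete_lyapunov(Acc.T, np.eye(2))    # Acc^T P Acc - P = -I, hence a = 1

def ball_radius(delta_M, eta_M):
    nrm = lambda M: np.linalg.norm(M, 2)
    a = -np.max(np.linalg.eigvalsh(Acc.T @ P @ Acc - P))                # Eq. (37)
    b = (delta_M * (nrm(Acc.T @ P @ Ac @ B @ K) + nrm(Acc.T @ P @ B @ K))
         + eta_M * (nrm(Acc.T @ P @ Ac @ B) + nrm(Acc.T @ P @ B)))      # Eq. (38)
    # Conservative stand-in for c = g(zeta^T P zeta): bound ||zeta|| and use
    # zeta^T P zeta <= lambda_max(P) * ||zeta||^2.
    zeta_bound = (delta_M * (nrm(Ac @ B @ K) + nrm(B @ K))
                  + eta_M * (nrm(Ac @ B) + nrm(B)))
    c = np.max(np.linalg.eigvalsh(P)) * zeta_bound**2
    return (b + np.sqrt(b**2 + a*c)) / a                                 # Eq. (40)

print("radius before upgrades:", ball_radius(delta_M=0.03,  eta_M=0.045))
print("radius after upgrades: ", ball_radius(delta_M=0.007, eta_M=0.01))
```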
To reduce $\delta_M$ in particular, we employed a tactical IMU (STIM-300) and MoCap data from a PhaseSpace motion capture system, providing a body velocity estimation resolution of 0.005 m/s and a body position accuracy of 0.005 m. The blue ball in Fig. 2(b) represents the new uncertainty region given by these significant improvements. We can now see that the blue ball is completely contained within the light blue region. This means that although we do not know where the CoM state is located inside the blue ball, we know that whatever the state is, it is within the feasible CoM state region, and therefore the foot control policy will find stabilizing foot locations. [Fig. 3 caption: The virtual joints consist of three prismatic joints and a ball joint expressed as a quaternion. Each leg has actuated abduction/adduction (q6, q9), hip flexion/extension (q7, q10), and knee flexion/extension (q8, q11) joints. Three LED sensors are attached to the front of the robot's body to estimate the velocity of its physical base.]
Mercury Experimental Robot
The methods described in this paper have been extensively tested on two biped platforms. Most experiments are performed on our biped robot, Mercury, which we describe here. An additional experiment is performed on a new biped, called DRACO, which is described in Ahn et al. (2019). Mercury has six actuators which control the hip abduction/adduction, hip flexion/extension, and knee flexion/extension joints. Mercury uses series-elastic actuators (SEAs), which incorporate a spring between the drivetrains and the joint outputs. The springs protect the drivetrains from external impacts and are used for estimating torque outputs at the joints. Additionally, Mercury went through significant hardware upgrades from our previous robot, Hume Kim et al. (2016). In this section, we provide an overview of our system and discuss the upgrades. We also explain similarities and differences with respect to other humanoid robots in terms of mass distribution. The orientation of the virtual ball joint is represented by a quaternion and its angular velocity is represented in so(3) with respect to the local base frame. The actuated joints start from the right hip abduction/adduction and go down to the hip flexion/extension and knee flexion/extension joints. Then, the joint labels continue on to the left leg, starting also at the hip joint. Three LED sensors are attached to the front of the robot's body to estimate the robot's linear velocity and its global position via MoCap. In addition, we also estimate the robot's relative position with respect to the stance foot using joint encoder data. This last sensing procedure is partially used to control foot landing locations, and therefore the reference frame changes every time the robot switches contact.
Robot Configuration
Mercury's SEA actuators were built in 2011 by Meka, each having three encoders to measure joint position, spring deflection, and motor position. An absolute position encoder is used to measure the joint output position, $q_j$, while a low-noise quadrature encoder measures the motor position, $\theta$. Joint position and joint velocity sensing can be done either using the absolute encoder or by applying a transmission-ratio transformation to the motor's quadrature encoder data ($q_m$). In our experiments, we use the absolute encoders to obtain joint positions and the motor quadrature encoders to obtain joint velocities. The transmission ratio of all of Mercury's joints has a constant value except for the abduction/adduction joints, which have a non-constant one. The constant ratio occurs for transmissions consisting of a pulley mechanism with constant radius. On the other hand, the hip abduction/adduction joints consist of a spring cage directly connected to the joints, which results in a change of the moment arm. To account for this change, we use a look-up table mapping the moment arm length to the joint position.
Hardware Upgrades
The original biped, Hume, was mostly built in 2011 by Meka as a custom robot for our laboratory. It had several limitations that made dynamic locomotion difficult. It had a low-performance IMU which made it difficult to control the robot's body orientation. Hume's legs were not strong enough, causing buckling of the structure when supporting the robot's body mass. Because of this structural buckling, the estimated foot positions obtained from the joint encoders were off by 5 cm from their actual positions. We estimated this error by comparing the joint encoder data with the MoCap system data. Hume terminated its legs with cylindrical cups that would make contact with the ground. These cups had an extremely small contact surface with the ground. During walking, Hume suffered from significant vertical-axis rotation, i.e. yaw rotation, due to the minimal contact of its supporting foot with the ground. All of these problems, i.e. structural buckling, a poor IMU sensor, and small contact surfaces, prevented Hume from accomplishing stable walking. Therefore, for the proposed work, we have significantly upgraded the robot in all of these respects and changed its name to Mercury.
To improve state estimation, we upgraded the original IMU, a Microstrain 3DM-GX3-25-OEM, to a tactical one, a STIM-300 (Fig. 4(a)). Both IMUs are MEMS-based, but the bias instability of the tactical IMU is only 0.0087 rad/h. Such low bias noise allows us to estimate the robot's body orientation by simply integrating the angular velocity from the initial orientation. Another problem we were facing with our original biped was the aging electronics, originally built by Meka in 2011. For this reason, all control boards (Fig. 4(d)) have been replaced with new embedded electronics manufactured by Apptronik. [Fig. 4 caption: (b) The on-board electronics have been installed with cases that secure the electric cables in place, which significantly reduces loss of connections and cable damage during robot operation. (c) Carbon fiber cases were installed on Mercury's thighs to increase structural stiffness. (d) All of the embedded electronics were replaced with Apptronik's Medulla and Axon boards, which come with a variety of low-level controllers for SEAs. (e) Spring-loaded passive ankles with limit switches were also added to limit the uncontrollable yaw body rotation and to detect ground contacts.] These new control boards are equipped with a powerful microcontroller, a TI Delfino, that performs complex computations with low latency for signal processing and control. The control boards are installed in a special board case (Fig. 4(b)) that safely holds all cables connected to the board. This wire routing and housing detail is important because Mercury hits the ground hard when walking on rough terrain and performs experiments in which it is hit by people and balls; it secures signal and power cables to maintain solid signal communications.
Thirdly, we manufactured carbon shells (Fig. 4(c)) to reinforce the thigh linkages. We also redesigned the robot's shank to increase structural stiffness by including two carbon fiber cylinders as supporting linkages. In addition, we designed new passive feet in the form of thin and short prisms that are a few centimeters long. The feet pivot about a pin fulcrum which connects in parallel to a spring between the foot support and the pivoting ankle. A contact switch is located on the front of the foot and engages when the foot makes contact with the ground (see Fig. 4(e) for mechanical details). These contact switches are used to terminate swing foot motion controls when the swing foot touches the ground earlier than anticipated. The main purpose of the line feet is to prevent yaw rotations of the entire robot turning around the supporting foot. Previously, our robot had quasi-pointed feet, which caused the robot's heading to turn due to any vertical moments. The mechanical line contacts provided by the passive feet interact with the ground contacts as a friction moment preventing excessive body rotations.
Challenges in Passive-Ankle Locomotion
To discuss the locomotion challenges presented by Mercury, it is necessary to compare the mass distribution of Mercury against other bipeds (Fig. 5). The robots' inertia information used for this comparison is taken from open-source robot models found in the following public repositories: https://github.com/openhumanoids (Valkyrie), https://github.com/dartsim/ (ATLAS), and https://github.com/sir-avinash/atrias-matlab (ATRIAS). Mercury's mass distribution is somewhat similar to anthropomorphic humanoid robots such as Valkyrie Radford et al. (2015b) or Atlas Kuindersma et al. (2015). These robots have (1) a torso CoM located around the center of the body, and (2) a significant ratio between the total leg mass and the torso mass, about 0.4. On the other hand, ATRIAS Hubicki et al. (2018) has a mass distribution optimized to be a mechanical realization of the inverted pendulum model, which is designed to aid the implementation of locomotion controllers. Unlike other humanoid robots, ATRIAS's torso CoM location is close to the hip joints and the ratio of the total leg mass to the torso mass is negligible, less than 0.1.
While Mercury and ATRIAS are similar in their lack of ankle actuation and number of DoFs, the difference in mass distributions creates difficulties in locomotion control. Since ATRIAS has its torso CoM close to the hip joint axis, the link inertia reflected at the hip joint is small, which reduces the difficulty of controlling the robot body's orientation. In contrast, the CoM of Mercury and of the other humanoid robots mentioned above is located well above the hip joint, which creates a larger moment arm and increases the difficulty of body orientation control.
Next, since ATRIAS has negligible leg mass compared to its body, body perturbations caused by the swing leg are also negligible. However, Mercury's significant leg mass causes noticeable body perturbations during the swing phase. Thus, it becomes necessary for Mercury to have a whole-body controller which can compensate for the Coriolis and gravitational forces introduced by the swing leg to maintain desired body configurations, follow inverted pendulum dynamics, and control the swing foot to the desired landing locations. Overall, in addition to Mercury's SEAs and lack of ankle actuation, its mass distribution makes it more difficult to control.
Implementation Details
Walking Control
For our purposes, a biped's walking control process consists of three phases: swing (or single stance), double stance, and contact transition. In particular, the contact transition ensures a smooth transition from single to double contact. Each phase starts and ends following predefined temporal parameters, as shown in Table 2. The swing phase can, and often does, terminate earlier than the specified swing time because the biped might make contact with the ground earlier than planned. We automatically terminate the swing phase upon detecting contact to prevent the sudden jerks that can occur when pushing against the ground. The ground contact is detected by the limit switches attached to the spring-loaded passive ankles shown in Fig. 4(e). The locomotion phases are illustrated in Fig. 6. In the middle of each swing phase, our TVR planner computes the immediate foot step location to achieve stable locomotion based on the policy given by Eq. (26). This decision process works as follows. After breaking contact with the ground, the swing foot first moves to a predefined default location with respect to the stance foot. Then, a new foot landing location is computed using the TVR planner. Based on this computation, the swing trajectory is re-adjusted to move to the computed foot landing location, completing the second half of the swing motion until contact occurs.
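A small sketch of the swing-phase supervision just described: the TVR planner is invoked once at mid-swing, and the swing is terminated early if the passive-ankle limit switch reports contact. The callback and the tick-based structure are assumptions for illustration; the durations come from Table 2.

```python
# Nominal phase durations from Table 2 (seconds)
SWING_TIME, TRANSITION_TIME, DOUBLE_STANCE_TIME = 0.33, 0.03, 0.01

def swing_supervisor(t_in_swing, contact_switch_hit, replanned, plan_footstep):
    """Swing-phase logic described in the text: re-plan the landing location
    once at mid-swing via the TVR planner (plan_footstep is a stand-in
    callback), and terminate the swing early if the passive-ankle limit
    switch reports ground contact. Returns (swing_done, replanned)."""
    if not replanned and t_in_swing >= 0.5 * SWING_TIME:
        plan_footstep()          # TVR planner call at mid-swing
        replanned = True
    swing_done = contact_switch_hit or t_in_swing >= SWING_TIME
    return swing_done, replanned

# Example tick at 0.20 s into the swing, no early contact yet
done, replanned = swing_supervisor(0.20, contact_switch_hit=False,
                                   replanned=False,
                                   plan_footstep=lambda: print("TVR re-plan"))
print(done, replanned)
```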
Due to the non-negligible body-to-leg-weight ratio, the swing motion disturbs the robot's body. As the inertial coupling between the leg and the body has a strong negative effect on the robot's ability to walk and balance, it is important to reduce these types of disturbances.

Table 2. Temporal parameters of the walking phases: double stance 0.01 sec, transition 0.03 sec, swing 0.33 sec. The swing phase can be terminated earlier than the predefined swing time if contact is detected before the end of the swing; in the middle of the swing, the next foot placement is computed by the TVR planner.

In particular, we mentioned earlier that the robot's swing leg first moves to a default location, and from there it computes a new foot landing location to dynamically balance and walk. Therefore, we focus on reducing the jerky motion that results from re-adjusting the foot trajectory in the middle of the swing motion. In our experiments, we first move to the default swing location using a B-spline, and then compute a minimum-jerk trajectory to reach the final landing location. The inclusion of this minimum-jerk trajectory is important as it significantly reduces the aforementioned disturbances between the swinging leg and the robot's body posture. When the swing motion ends, the state machine switches to the contact transition phase. Here the DynWBC control block described in Section 3.1.2 plays a key role in smoothly transitioning the contact from single to double support without introducing additional jerky movement. On the other hand, when a contact occurs, triggering a switch from single to double support, the KinWBC control block can generate a discontinuity in the joint position command. To reduce this additional jerk caused by KinWBC, the joint position command of the swing leg at the end of the swing phase is linearly interpolated with the command from KinWBC during the transition phase. As the contact transition progresses, the weight on the end-of-swing joint position command decreases, which completes the transition. With all of these improvements, we accomplish smooth motions with reduced jerk for effective walking.
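The re-targeting of the swing foot can be done with a standard quintic minimum-jerk segment, as sketched below. The boundary conditions (zero final velocity and acceleration, zero initial acceleration) and the example numbers are assumptions; the text only states that a minimum-jerk trajectory is used for the second half of the swing.

```python
import numpy as np

def min_jerk(p0, v0, pf, T, t):
    """Quintic minimum-jerk segment from (p0, v0) to pf with zero final
    velocity and acceleration over duration T, evaluated at time t.
    Zero initial acceleration is assumed."""
    # Solve for the cubic/quartic/quintic coefficients from the end conditions.
    A = np.array([[T**3,    T**4,     T**5],
                  [3*T**2,  4*T**3,   5*T**4],
                  [6*T,     12*T**2,  20*T**3]])
    b = np.array([pf - p0 - v0*T, -v0, 0.0])
    a3, a4, a5 = np.linalg.solve(A, b)
    return p0 + v0*t + a3*t**3 + a4*t**4 + a5*t**5

# Re-target the swing foot x-coordinate from 0.10 m (moving at 0.4 m/s) to the
# freshly planned landing location 0.18 m over the remaining 0.165 s of swing.
for t in np.linspace(0.0, 0.165, 5):
    print(f"t = {t:.3f} s  x = {min_jerk(0.10, 0.4, 0.18, 0.165, t):.4f} m")
```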
Table 3. WBLC task setup for each walking phase (columns: double support, transition (right), swing (right)). Body posture task: $R_x$, $R_y$, $z$ in all three phases; swing foot task: -, -, foot $x$, $y$, $z$ (swing phase only).
Task and Weight Setup of WBLC
The WBLC task setup for each phase is summarized in Table 3. A common task for every control phase is the body posture task, which keeps the body's height, roll, and pitch constant.
Table 4. Weight setup. The values of the weight matrices are described in vector form because we consider only diagonal weight matrices. The components associated with the reaction and contact weights are six dimensional, starting from the right foot's x, y, and z directions and then the left foot's Cartesian components; therefore, $W_r$ and $W_c$ have six components. $W_{\ddot{q}}$: $10^2 \times 1_{12\times1}$ in all phases. $W_r$: double support (1, 1, 0.01, 1, 1, 0.01); transition (right) (1→5, 1→5, 0.01→0.5, 1, 1, 0.01); swing (right) (5, 5, 0.5, 1, 1, 0.01). $W_c$: double support $10^3 \times 1_{6\times1}$; transition (right) ($10^3$→$10^{-3}$) $\times 1_{1\times3}$, $10^3 \times 1_{1\times3}$; swing (right) $10^{-3} \times 1_{1\times3}$, $10^3 \times 1_{1\times3}$.
Since Mercury has only six actuators, the robot cannot directly control its body yaw rotation and horizontal movement most of the time. Therefore, we only control three components ($R_x$, $R_y$, $z$) of the six-dimensional body motion ($R_x$, $R_y$, $R_z$, $x$, $y$, $z$). Here $R_{\{\cdot\}}$ stands for rotations. During the swing phase, we control the linear motion of the foot in addition to the robot's body posture. The swing foot task is hierarchically ordered under the body posture task to prevent the swing motion from influencing the body posture control. However, this priority setup is not enough to completely isolate the body posture control from the swing motion control because the null space of the body task does not remove the entire six-DoF body motion. In our case, the body posture task only controls three of the six dimensions of body motion, which means that the other three components still reflect on the swing foot task even after the foot task has been projected into the null space of the body posture task. To further decrease the coupling between the body motion and the intended swing foot motion, we set to zero all of the terms corresponding to the floating-base DoFs in the foot task Jacobian. By doing so, the three actuators in the stance leg are dedicated only to body posture control while the other three actuators in the swing leg are dedicated to controlling the swing foot trajectory.
The values of the weights of the cost function in Eq. (11) of DynWBC are specified in Table 4. These values are presented in vector form because all of the cost matrices are diagonal. $W_{\ddot{q}}$ is the weight matrix for relaxing the desired joint accelerations to adjust for partially feasible acceleration commands. These weights are set to relatively large values to penalize deviations from the commanded joint accelerations, and the same values are kept for every phase. $W_r$ and $W_c$ change as a function of the walking control phase because the reaction forces and feet movements are regulated by those weights. During the double support phase, the weights related to the contact point accelerations, $W_c$, are assigned a large value, $10^3$. Penalizing contact accelerations approximates contact conditions without imposing hard constraints. Also during double support, the weight matrix regulating the reaction forces, $W_r$, is assigned relatively small values to provide sufficiently large forces to support the robot's body. Note that $W_r$ penalizes the tangential directions more than the normal direction, which helps to fulfill the friction limits associated with the contact reaction forces. $W_r$ and $W_c$ change value during the contact transition phase. The right arrows in Table 4 indicate that the weights transition smoothly from the left to the right values. For instance, 1 → 5 means that the weight is set to 1 at the beginning of the transition phase and is linearly increased to 5 by the end of the phase. Let us take the example of the right foot during the transition phase. At the beginning of the transition phase, the weight values coincide with the values of the previous phase, i.e. double support. At the end of the transition phase, when the right leg is about to leave the ground and start the swing phase, the first three terms of $W_r$, coinciding with the Cartesian components of the right foot reaction force, are set to large values to penalize reaction forces. At the same time, the first three terms of $W_c$ are set to tiny values to boost swing accelerations of the right foot. [Fig. 7 caption: base velocities computed from joint data measured by absolute encoders or quadrature encoders; the base velocity estimated from absolute encoders is too noisy and fluctuates significantly during the swing phases, and the fluctuation remains with quadrature encoders although the noise level is lower. During experiments, we use the filtered velocity data obtained from the MoCap system (yellow line).]
During this transition we perform an additional step: for the constraint defined in Eq. (13) of DynWBC, i.e. $S F_r \le F_{r,z}^{max}$, we linearly decrease the value of the upper bound $F_{r,z}^{max}$ to drive the right foot's normal force to zero before the swing motion initiates. This linear decrease starts from the value set during double support and ends at zero.
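A sketch of the weight and bound scheduling described above for the right foot's contact transition, using the Table 4 values. The linear ramp shape for $W_c$ and the double-support value of the normal-force bound (400 N) are assumptions; the text only specifies that $W_r$ increases linearly and $F_{r,z}^{max}$ decreases linearly to zero over the transition.

```python
import numpy as np

TRANSITION_TIME = 0.03   # from Table 2 (seconds)

def right_foot_transition_weights(t):
    """Ramps over the contact-transition phase for the right foot (Table 4):
    reaction-force weights W_r ramp 1->5, 1->5, 0.01->0.5 (x, y, z),
    contact-acceleration weights W_c ramp 1e3 -> 1e-3, and the normal-force
    upper bound ramps from its double-support value (assumed 400 N) to zero."""
    s = np.clip(t / TRANSITION_TIME, 0.0, 1.0)       # 0 at start, 1 at end
    lerp = lambda a, b: (1.0 - s) * a + s * b
    W_r_right = np.array([lerp(1.0, 5.0), lerp(1.0, 5.0), lerp(0.01, 0.5)])
    W_c_right = lerp(1e3, 1e-3) * np.ones(3)
    F_max_right = lerp(400.0, 0.0)
    return W_r_right, W_c_right, F_max_right

print(right_foot_transition_weights(0.015))   # halfway through the transition
```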
Base State Estimation
As the true CoM state is subject to errors from the model and to disturbances from the swing leg motion, our current implementation instead uses the robot's base state and assumes that the CoM of the robot is approximately at this location. The robot's base is a concrete point on the torso indicated by a black dot in Fig. 3. The base point was chosen by empirically comparing the CoM position and the base position to find the lowest discrepancies. Fig. 7 shows velocity estimation values. As we can observe, the difference between the CoM velocity (black dotted line) and the base velocity (blue solid line) is imperceptible. This enables us to (1) decouple the computation of the CoM state from the swing leg motion, and (2) perform a straightforward sensor-fusion process with a Kalman filter by combining the sensed body positions from the joint encoders and the overhead MoCap system.
As said, Fig. 7 compares the base velocity data obtained from different sensors. In Section 5, we stated that there are two ways to measure joint data: one is using the absolute encoders directly attached to the robot joints, and the other is using the quadrature encoders attached to the back of the electric motors and multiplying their values by the actuator's transmission ratio. The green and blue lines in the figure correspond to the base velocities computed from data measured by absolute encoders and quadrature encoders, respectively. The blue lines are less noisy, but neither the green nor the blue data is suitable for our walking planner because the velocity profiles show significant fluctuations, which make the prediction of the state challenging. However, the velocity data obtained from the MoCap system, i.e. the red lines, show a consistent trend with the walking phases, so we decided to rely on them. To deal with MoCap marker occlusions, we perform sensor fusion between the MoCap and encoder data via Kalman filtering and averaging-filter techniques. This data is shown as a yellow line in the figure and is fairly similar to the red line.
For the estimation of the base position in the global frame we use the MoCap system. For estimating the base position with respect to the stance foot we rely only on the robot's IMU and joint encoder data, without using the MoCap system. This last process is more robust than attaching LED sensors to the feet, because such sensors incur frequent occlusions and often break due to the repetitive impacts.
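A minimal 1-D sketch of the sensor fusion mentioned above: a constant-velocity Kalman filter that fuses MoCap position samples (skipping occluded ones) with encoder-derived velocity samples. The process/measurement covariances, rates, and the 1-D simplification are assumptions for illustration only.

```python
import numpy as np

def base_state_filter(z_pos, z_vel, dt=0.001, q=1e-3, r_pos=2.5e-5, r_vel=1e-2):
    """1-D constant-velocity Kalman filter fusing MoCap position (z_pos) with
    encoder-derived velocity (z_vel). Covariances are illustrative, not tuned
    values from the paper. NaN measurements (e.g. MoCap occlusions) are skipped."""
    F = np.array([[1.0, dt], [0.0, 1.0]])
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    x = np.zeros(2)                  # [position, velocity]
    P = np.eye(2)
    estimates = []
    for zp, zv in zip(z_pos, z_vel):
        x, P = F @ x, F @ P @ F.T + Q                          # predict
        for H, z, r in (([1.0, 0.0], zp, r_pos), ([0.0, 1.0], zv, r_vel)):
            if np.isnan(z):
                continue                                       # occluded / missing sample
            H = np.asarray(H)
            S = H @ P @ H + r
            Kk = P @ H / S
            x = x + Kk * (z - H @ x)                           # update
            P = (np.eye(2) - np.outer(Kk, H)) @ P
        estimates.append(x.copy())
    return np.array(estimates)

# Toy data: noisy MoCap positions with an occlusion gap, noisy encoder velocities
t = np.arange(0, 0.5, 0.001)
true_pos = 0.1 * np.sin(2 * np.pi * 2 * t)
z_pos = true_pos + 0.005 * np.random.randn(t.size)
z_pos[200:250] = np.nan
z_vel = np.gradient(true_pos, t) + 0.05 * np.random.randn(t.size)
est = base_state_filter(z_pos, z_vel)
```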
Kinematic Model Verification With MoCap Data
As we mentioned in the previous section, an accurate kinematic model is very important for computing stabilizing foot landing locations via the TVR planner. Moreover, for real-time WBLC, the model's accuracy significantly influences the landing location accuracy. To perfect our kinematic model, which was initially built using the parameters obtained from the CAD design, we utilize the MoCap system. By comparing the MoCap data and the kinematic model data, we tune the model parameters and enhance the accuracy of the kinematic model until the two sets of data are sufficiently close. For this calibration process, we first fix Mercury's torso on top of a table as shown in Fig. 8(a). With this fixed posture, we let the robot swing one of its legs and simultaneously gather MoCap and kinematic data. The positions of the LED sensors attached to the leg are post-processed to be described in the robot's local frame, which is defined by the three LED sensors attached to the robot's body (see Fig. 3). The two different sets of data, one obtained from the MoCap system and the other obtained from the current robot kinematic model, are used to further tune the kinematic parameters. Fig. 8 shows both the LED position data measured by the MoCap system and the same position data computed from the joint encoders using the tuned kinematic model. The result shows that our final kinematic model has less than a 5 mm error.
Results
We conducted extensive walking and stepping experiments of various kinds using our passive-ankle biped robot, Mercury. For all of these experiments, Mercury was unsupported, that is, without overhead support. The experiments show stable behavior during directional walking, push recovery, and mildly irregular terrain walking. We also deployed the same control and dynamic walking schemes to our new lower-body humanoid robot, DRACO, and rapidly accomplished dynamic balancing. Finally, we conducted simulations using other humanoid robots to show the versatility of our whole-body controller and walking algorithm.
Directional Walking
Directional walking means achieving dynamic walking in a particular direction. To achieve this, we manipulate the origin of Mercury's reference frame. In turn, our TVR planner controls Mercury's foot stepping to converge to the reference frame, which for this test is a moving target. In other words, we steer the robot in the four cardinal directions in this manner, see Fig. 9(a). Fig. 9(b) shows the time trajectory of the desired robot path and the actual robot location. The actual location is obtained using the MoCap system based on the LEDs attached to the robot's base. These results show that Mercury follows the commanded path relatively well, albeit with slow convergence rates in the lateral direction, possibly due to the limited hip abduction/adduction range. Fig. 9(c) shows commanded and sensed joint torque data. The vertical black lines indicate the walking control phases. As we can see, the torque commands transition smoothly despite the contact changes. The knee torque commands vary between 0 and 40 Nm depending on the control phase of the leg, but there is no discontinuity causing jerky behavior of the desired torque commands despite the short (0.06 sec) transition periods.
The right and left knee joint position data are shown in Fig. 9(d). As mentioned in Section 3.2, the desired motor position commands are adjusted to account for spring deflections. The data show that the joint positions sensed with the absolute encoders are close to the position commands, while the motor position data is off by the amount corresponding to the spring deflections. The spring deflection compensation is notable when the knee joint supports the body weight, i.e. the periods between 19.8 ∼ 20.2 sec for the right knee and 19.4 ∼ 19.8 sec for the left knee.
Robustness Of Balance Controller
To demonstrate the robustness of the proposed walking control scheme, we conducted multiple experiments involving external disturbances. The first test, shown in Fig. 10, analyzes Mercury recovering its balance after a junior football weighing 0.32 kg and traveling at a horizontal speed of about 9 m/s impacts its body. A second test, shown in Fig. 11, shows a person continuously pushing Mercury's body with gentle forces to see how the robot reacts. Finally, the last experiment, shown in Fig. 12, shows Mercury walking on mildly irregular terrain without knowledge or sensing of the terrain's topology. In all three experiments, Mercury successfully recovers from the disturbances.
For the ball impact experiment shown in Fig. 10, we show the phase plots of the lateral CoM direction. Since the ball hits the robot laterally, the analysis is done in the y direction. Lateral impact recovery is difficult because the hip abduction/adduction joints have a very limited range of motion, ±15°. Due to the very small width of the feet, the landing location has to be very accurate, as previously discussed. Each phase plot in this figure shows two sequential steps, depicted in blue and red lines. For instance, for the 28th step, we differentiate the solid blue line, which represents the sensed base trajectory for the actual 28th step, from the solid red line, which represents the trajectory for the next step, the 29th. Dotted blue and red lines represent the predicted trajectory given the TVR control policy and the pendulum dynamics hypothesis. The particular operating details of the TVR controller during this impact experiment are described in the caption of Fig. 10. In essence, the ball hits the robot at the 28th step; at the 30th step, Mercury fully recovers its balance, returning to the normal regime at the 31st step.
[Figure 10 caption: Mercury recovers its balance after being disturbed by a lateral impact applied by throwing a junior American rubber football weighing 0.32 kg. In the 28th step, the ball hits Mercury on its side, as depicted in the lateral change of the CoM state, i.e., the y direction. At the instant of the lateral impact, the next foot landing location, in this case for the left leg, has already been planned and cannot be changed, so Mercury finishes the lateral step without responding to the disturbance. For the following step, the 29th, the CoM velocity is positive in the y direction due to the lateral disturbance. This CoM state causes the TVR walking planner to trigger a recovery step with the right foot, which is commanded to move inward towards the stance foot. However, the commanded displacement would cause a collision with the stance leg, so the planner lands the right foot at the minimum lateral range of 10 cm from the stance leg. This choice causes the robot to only partially recover from the disturbance, failing to reverse its velocity. As a result, for the next step, the 30th, Mercury's TVR walking controller decides to take a large step, 48 cm from the stance leg, which enables it to reverse the velocity in the direction opposite to the impact. Finally, Mercury returns to its nominal balancing motion at the 31st step.]
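The clamping behavior described in the caption above, where a planned landing location is pulled back to a minimum lateral offset from the stance foot, can be sketched as a simple saturation of the lateral step command. The function below is illustrative only; the 10 cm minimum mirrors the value mentioned above, while the maximum reach and sign convention are assumptions.

```python
import numpy as np

def clamp_lateral_landing(y_des, y_stance, swing_side, y_min=0.10, y_max=0.48):
    """Saturate the planned lateral landing location so the swing foot stays at
    least y_min away from the stance foot (avoiding leg self-collision) and
    within y_max (an assumed kinematic reach limit). swing_side is +1 when the
    left foot swings and -1 when the right foot swings."""
    offset = swing_side * (y_des - y_stance)   # signed lateral offset from the stance foot
    offset = float(np.clip(offset, y_min, y_max))
    return y_stance + swing_side * offset

# Example: a disturbed state asks the right foot (swing_side=-1) to land 4 cm
# from the left stance foot; the command is pushed out to the 10 cm minimum.
y_cmd = clamp_lateral_landing(y_des=-0.04, y_stance=0.0, swing_side=-1)
```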
Also from Fig. 10, we analyze the foot landing accuracy. In the phase plots, the red star, the red circle, and the blue cross represent the stance foot, the commanded foot landing location, and the actual foot landing location, respectively. Except during the recovery steps (the 29th and 30th), the foot landing location errors are less than 0.5 cm, as seen in the 28th and 31st steps. This is significantly less than the maximum tolerable error shown in the uncertainty analysis of Fig. 2. In analyzing extended experimental data, the foot landing error is consistently less than 0.5 cm.
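As a minor illustration, the per-step landing error reported above is simply the planar distance between commanded and MoCap-measured landing locations; a minimal helper for computing it over logged data might look as follows (array names are placeholders).

```python
import numpy as np

def landing_errors(commanded, actual):
    """Per-step Euclidean distance between commanded and actual (MoCap-measured)
    foot landing locations; both inputs are N x 2 ground-plane positions."""
    return np.linalg.norm(np.asarray(actual) - np.asarray(commanded), axis=1)
```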
Our control and walking methods are robust to mild terrain variations, as shown in Fig. 12. For this particular experiment, we set κ_x, shown in Table 1, to a value of 0.25 to enable the robot to keep moving forward despite the terrain variations. In addition, the robot's feet sometimes get stuck on the edge of the mats, which adds difficulty to the locomotion process. Nonetheless, the robot successfully traverses the terrain.
[Figure 11 caption: Interaction with a human subject. Mercury maintains its balance despite continuous pushing forces.]
Experimental Evaluation On New Biped Robot DRACO
DRACO is our newest lower-body humanoid, with ten viscoelastic liquid-cooled actuators Kim et al. (2018) on its hips and legs. Each limb has five actuators: hip yaw, roll, pitch, knee pitch, and ankle pitch. The IMU is the same as in Mercury, a STIM 300, and the MoCap LED sensor system is configured similarly to Mercury's. The robot has many interesting features such as liquid cooling, tiny feet, quasi-passive ankles, and elastomers in the actuator drivetrains. We do not describe the hardware details of DRACO here, as they are being prepared for submission in an upcoming paper.
[Figure 12 caption: Forward walking over irregular terrain. Mercury walks forward over irregular terrain constructed with foam mats arranged on top of each other. The robot's feet sometimes slip over the mat segments since the latter do not stick tightly to each other, causing multiple disturbances. Our control and walking algorithms provide the robustness necessary to traverse this type of terrain, including height variations of 2.5 cm, foot slippage, and foot tripping.]
To make DRACO comparable to Mercury in some respects, we apply a soft joint-stiffness policy to the ankle pitch joints, emulating passive joints. For this first experiment, we set the hip yaw joint to a fixed position with a joint control task implemented in WBLC. From a controller's standpoint, Mercury and DRACO are thus very similar for this experiment: DRACO is forced to perform dynamic locomotion without controlling its ankles, in a similar way to Mercury. For now, we detect foot contacts on DRACO based on ankle joint velocity measurements. As shown in Fig. 13, DRACO balances successfully without support, just like Mercury.
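A minimal sketch of the two ideas used here, a soft joint-stiffness policy that emulates a passive ankle and a velocity-threshold touchdown check; the gains and threshold are hypothetical values, not DRACO's actual settings.

```python
# Passive-ankle emulation (illustrative gains): a very soft spring-damper torque
# around a neutral ankle angle makes the actuated ankle behave nearly passively.
def soft_ankle_torque(q, qdot, q_neutral=0.0, k_soft=2.0, b_soft=0.1):
    return -k_soft * (q - q_neutral) - b_soft * qdot

# Contact detection from ankle velocity (hypothetical threshold): when the swing
# foot touches down, the quasi-passive ankle is suddenly back-driven, so a spike
# in |ankle velocity| above a threshold is treated as a touchdown event.
def detect_touchdown(ankle_qdot, threshold=0.5):
    return abs(ankle_qdot) > threshold
```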
For WBLC on DRACO, we generated the robot's model using the CAD files and slightly adjusted the mass values based on gravity compensation tests. Except for the feedback gains of the joint position controllers, we use planner parameters similar to Mercury's: t is set to [0.21, 0.2] and κ is set to [0.08, 0.13] for the experiment. Testing on DRACO was successfully accomplished, demonstrating that our WBLC-TVR framework is easily transferable to multiple robots and showing the generality of our methods.
Simulation Results on Assorted Platforms
To show further applicability of the proposed control methods, we implement and test our WBLC and TVR algorithms on assorted robotic platforms: Mercury, DRACO, Atlas, and Valkyrie. We implemented two types of simulation scenarios: dynamic walking and locomanipulation. Mercury, DRACO, and Atlas are used to implement dynamic walking motions. As mentioned in Section 6.2, for locomotion we define a foot task and a body posture task, X_Mercury = {ẍ_foot, ẍ_body}, where ẍ_foot and ẍ_body are the specifications for the foot and body tasks. The height, roll, and pitch of the body are controlled to constant values. Since DRACO includes hip joints on both the left and right legs, we additionally formulate a hip configuration task for both hip joints on top of Mercury's tasks, X_DRACO = X_Mercury ∪ {ẍ_hip}. The body task of DRACO controls its body height and its roll, pitch, and yaw orientation. As shown in Fig. 14 (a) and (b), the simulation results for Mercury and DRACO demonstrate that both robots are able to perform dynamic walking without significant algorithmic modifications. The parameters of the planner are set to t = [0.2, 0.2] and κ = [0.16, 0.16]. Unlike the two robots above, Atlas and Valkyrie are full-body humanoid robots with actuated ankle joints, so we modify the task sets and constraints to test our algorithm in simulation. We modify the inequality constraint in (12) from the contact wrench cone to surface contacts. For these full-body humanoid robots, the height of the pelvis, which corresponds to the floating base, is controlled in the same way as for Mercury and DRACO. We define orientation tasks for the pelvis and torso, and a task for controlling foot orientation is introduced for stable foot contacts. Based on the defined tasks, the task set of Atlas is X_Atlas = {ẍ_foot, ẍ_pelvis, ẍ_torso, ẍ_jpos}, where ẍ_jpos represents a task controlling all joint positions of the robot. As shown in Fig. 14 (c), Atlas is able to perform dynamic walking similarly to Mercury and DRACO without modifying our algorithms.
We define additional tasks for controlling the left hand and the head orientation to demonstrate locomanipulation capabilities on Valkyrie, X_Valkyrie = X_Atlas ∪ {ẍ_hand, ẍ_head}. The simulation result shows that our algorithm can accomplish the desired locomanipulation behavior, as shown in Fig. 14 (d). These four simulations show that our algorithm is applicable to various biped humanoids.
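The incremental construction of the operational task sets across the four robots can be summarized compactly; the representation below is a hypothetical bookkeeping structure, not part of the WBLC implementation.

```python
# Hypothetical representation of the per-robot operational task sets described
# above. Each entry names an acceleration-level task handed to WBLC; the sets
# are built incrementally, mirroring X_DRACO = X_Mercury ∪ {hip} and
# X_Valkyrie = X_Atlas ∪ {hand, head}.
X_MERCURY  = ["foot", "body"]
X_DRACO    = X_MERCURY + ["hip"]
X_ATLAS    = ["foot", "pelvis", "torso", "jpos"]
X_VALKYRIE = X_ATLAS + ["hand", "head"]

TASK_SETS = {
    "Mercury":  X_MERCURY,
    "DRACO":    X_DRACO,
    "Atlas":    X_ATLAS,
    "Valkyrie": X_VALKYRIE,
}
```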
Conclusions
We demonstrated robust dynamic walking of various biped robots, including one with no ankle actuation, using a novel locomotion-control scheme consisting of two components, the WBLC controller and the TVR planner. The algorithmic generality has been verified on hardware with the bipeds Mercury and DRACO, and in simulation with other humanoids such as Valkyrie and Atlas. We performed an uncertainty analysis of the TVR planner and found maximum allowable errors for our state estimator and controllers, which enabled us to significantly redesign and rebuild the Mercury robot and to tune the controllers and estimators. By integrating a high-performance whole-body feedback controller, WBLC, a robust locomotion planner, TVR, and a reliable state estimator, our passive-ankle biped robot and lower-body humanoid robot accomplish unsupported dynamic locomotion that is robust to impact disturbances and rough terrain.
In devising our control scheme, we experimented with a variety of whole-body control formulations and feedback controllers. We compared different WBC operational task specifications, such as foot position vs. leg joint position control, base vs. CoM position control, and having vs. not having task priorities. In the low-level controller, we also experimented with torque feedback with disturbance observers, joint vs. motor position feedback, and joint position control with and without feedforward torques. The methodology presented here is our best performing controller after system-level integration and exhaustive testing.
With our new biped, DRACO, we have explored initial dynamic locomotion. In the future, we will explore more versatile locomotion behaviors such as turning and walking in cluttered environments. In the case of Mercury, we could not change the robot's heading because of the lack of yaw actuation. With simple additions to the current TVR planner, we will be able to test turning with DRACO, since the robot has hip yaw actuation. In addition, we will conduct more complex robustness tests by exploring cluttered environments involving contacts with many objects, including human crowds.