forum_id
stringlengths 8
20
| forum_title
stringlengths 4
171
| forum_authors
sequencelengths 0
25
| forum_abstract
stringlengths 4
4.27k
| forum_keywords
sequencelengths 0
10
| forum_pdf_url
stringlengths 38
50
| note_id
stringlengths 8
13
| note_type
stringclasses 6
values | note_created
int64 1,360B
1,736B
| note_replyto
stringlengths 8
20
| note_readers
sequencelengths 1
5
| note_signatures
sequencelengths 1
1
| note_text
stringlengths 10
16.6k
|
---|---|---|---|---|---|---|---|---|---|---|---|---|
r1QXQkSYg | Out-of-class novelty generation: an experimental foundation | [
"Mehdi Cherti",
"Balázs Kégl",
"Akın Kazakçı"
] | Recent advances in machine learning have brought the field closer to computational creativity research. From a creativity research point of view, this offers the potential to study creativity in relationship with knowledge acquisition. From a machine learning perspective, however, several aspects of creativity need to be better defined to allow the machine learning community to develop and test hypotheses in a systematic way. We propose an actionable definition of creativity as the generation of out-of-distribution novelty. We assess several metrics designed for evaluating the quality of generative models on this new task. We also propose a new experimental setup. Inspired by the usual held-out validation, we hold out entire classes for evaluating the generative potential of models. The goal of the novelty generator is then to use training classes to build a model that can generate objects from future (hold-out) classes, unknown at training time - and thus, are novel with respect to the knowledge the model incorporates. Through extensive experiments on various types of generative models, we are able to find architectures and hyperparameter combinations which lead to out-of-distribution novelty. | [
"Deep learning",
"Unsupervised Learning"
] | https://openreview.net/pdf?id=r1QXQkSYg | H1w-b4Esg | comment | 1,489,416,446,938 | SkUcM-bjl | [
"everyone"
] | [
"~mehdi_cherti1"
] | title: Answer
comment: Thank you for your comments and suggestions. We definitely want
to redo the same experiments and analysis on other settings or
datasets like Omniglot for which the availability of a large
number of classes will be helpful.
Regarding your question about how the pangrams were generated,
we took the set of images generated by a given model, then
we selected manually one character from the top 16 in every letter,
where the top 16 was selected automatically according to the predicted
probability of the letter according to the discriminator which was
trained on digits and letters. |
r1QXQkSYg | Out-of-class novelty generation: an experimental foundation | [
"Mehdi Cherti",
"Balázs Kégl",
"Akın Kazakçı"
] | Recent advances in machine learning have brought the field closer to computational creativity research. From a creativity research point of view, this offers the potential to study creativity in relationship with knowledge acquisition. From a machine learning perspective, however, several aspects of creativity need to be better defined to allow the machine learning community to develop and test hypotheses in a systematic way. We propose an actionable definition of creativity as the generation of out-of-distribution novelty. We assess several metrics designed for evaluating the quality of generative models on this new task. We also propose a new experimental setup. Inspired by the usual held-out validation, we hold out entire classes for evaluating the generative potential of models. The goal of the novelty generator is then to use training classes to build a model that can generate objects from future (hold-out) classes, unknown at training time - and thus, are novel with respect to the knowledge the model incorporates. Through extensive experiments on various types of generative models, we are able to find architectures and hyperparameter combinations which lead to out-of-distribution novelty. | [
"Deep learning",
"Unsupervised Learning"
] | https://openreview.net/pdf?id=r1QXQkSYg | rJf7vaeie | official_review | 1,489,192,729,762 | r1QXQkSYg | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper134/AnonReviewer2"
] | title: Review
rating: 7: Good paper, accept
review:
This paper attempts to formalize the notion of 'computational creativity' from a machine learning perspective, in order for machine learning researchers to make better progress on this problem. In particular, the authors propose measuring the 'computational creativity' of a model by several metrics intending to capture whether the model can generate new objects from classes unseen during training.
I think this is an interesting paper and a good first step in this area. Indeed, absent proper definitions and metrics for vague concepts such as 'creativity', it is difficult to make progress on related computational problems. While the proposed metrics are not perfect,* they seem reasonable enough to warrant future investigation, and thus I think this paper is worthy of acceptance as an ICLR workshop paper.
*Further thoughts: I'm not convinced that these metrics are selecting for the "right" models from a creativity point of view. If Figure 1 is really a random sample of digits generated by one of the 'most creative' models according to the proposed metrics, it seems like it is mostly just good at capturing lower-level correlations in the data, while generating random high-level details. Thus it seems like a 'creative' model is one that has been artificially limited in order to poorly model high-level features of the data. This seems intuitively to contrast with creativity as we perceive it in humans -- creative humans are still capable of modeling the world around them, they are just able to combine what they've learned in new and interesting ways. Perhaps 'true creativity' is out of the reach of current generative models? (Or, perhaps the word 'creativity' is not really meaningful from a computational perspective?) However, I'm not an expert in this area, and I still think the idea is worthwhile presenting, as it may generate interesting discussions. In future work, I'd like to see a more thorough analysis of what model settings lead to the most 'creative behaviour' according to these metrics.
confidence: 3: The reviewer is fairly confident that the evaluation is correct |
r1QXQkSYg | Out-of-class novelty generation: an experimental foundation | [
"Mehdi Cherti",
"Balázs Kégl",
"Akın Kazakçı"
] | Recent advances in machine learning have brought the field closer to computational creativity research. From a creativity research point of view, this offers the potential to study creativity in relationship with knowledge acquisition. From a machine learning perspective, however, several aspects of creativity need to be better defined to allow the machine learning community to develop and test hypotheses in a systematic way. We propose an actionable definition of creativity as the generation of out-of-distribution novelty. We assess several metrics designed for evaluating the quality of generative models on this new task. We also propose a new experimental setup. Inspired by the usual held-out validation, we hold out entire classes for evaluating the generative potential of models. The goal of the novelty generator is then to use training classes to build a model that can generate objects from future (hold-out) classes, unknown at training time - and thus, are novel with respect to the knowledge the model incorporates. Through extensive experiments on various types of generative models, we are able to find architectures and hyperparameter combinations which lead to out-of-distribution novelty. | [
"Deep learning",
"Unsupervised Learning"
] | https://openreview.net/pdf?id=r1QXQkSYg | SySL_Kpjl | comment | 1,490,028,621,029 | r1QXQkSYg | [
"everyone"
] | [
"ICLR.cc/2017/pcs"
] | decision: Accept
title: ICLR committee final decision |
rkdF0ZNKl | Fast Generation for Convolutional Autoregressive Models | [
"Prajit Ramachandran",
"Tom Le Paine",
"Pooya Khorrami",
"Mohammad Babaeizadeh",
"Shiyu Chang",
"Yang Zhang",
"Mark A. Hasegawa-Johnson",
"Roy H. Campbell",
"Thomas S. Huang"
] | Convolutional autoregressive models have recently demonstrated state-of-the-art performance on a number of generation tasks. While fast, parallel training methods have been crucial for their success, generation is typically implemented in a naive fashion where redundant computations are unnecessarily repeated. This results in slow generation, making such models infeasible for production environments. In this work, we describe a method to speed up generation in convolutional autoregressive models. The key idea is to cache hidden states to avoid redundant computation. We apply our fast generation method to the Wavenet and PixelCNN++ models and achieve up to 21x and 183x speedups respectively. | [
"Deep learning",
"Applications"
] | https://openreview.net/pdf?id=rkdF0ZNKl | B1C8tvgil | official_review | 1,489,168,726,329 | rkdF0ZNKl | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper62/AnonReviewer2"
] | title: Very simple idea, but likely to be used
rating: 7: Good paper, accept
review: This paper proposes a simple technique for speeding up generation from Convolutional Autoregressive Models (e.g., WaveNet and PixelCNN). The key observation is that if one naively generates each output from scratch without re-using any computation, then it is wasteful. The paper instead proposes to cache hidden state values across the generation of all the outputs that share the intermediate results. Experimentally the paper shows large speedups over the naive approach when the depth of a WaveNet is increased to 13+ layers and PixelCNN++ when the batch size is large.
Overall the paper is clear, and the approach is a clear improvement over the naive version. One question I have, though, is if it wouldn't be simpler to just build a TensorFlow model that generates an entire output at once. That is, instead of building a TensorFlow model that generates the next pixel and then calling this model repeatedly, would it be possible to define a TensorFlow model that outputs a full image? (To deal with having to sample output values, the Gumbel-max trick could be used with all of the randomness needed supplied as an input). Then presumably the TensorFlow execution model would take care of all the necessary caching.
A second question is about the relevance of the technique in the WaveNet experiments. The headline improvement of 21x doesn't happen until there are 15 layers in the WaveNet. Is this a useful parameter regime for the model?
Pros:
- This is a clearly better method than the naive approach, and the naive approach does appear to have been used before
- The idea is simple and clearly explained
- The authors are open-sourcing their implementation, which will likely be used by a number of people in the ICLR audience
Cons:
- It's not obvious to me that this is the simplest way to implement the idea
- The idea is very simple, effectively being "cache in the obvious way"
Overall I'd lean towards accepting, but I wouldn't fight strongly for it.
confidence: 3: The reviewer is fairly confident that the evaluation is correct |
rkdF0ZNKl | Fast Generation for Convolutional Autoregressive Models | [
"Prajit Ramachandran",
"Tom Le Paine",
"Pooya Khorrami",
"Mohammad Babaeizadeh",
"Shiyu Chang",
"Yang Zhang",
"Mark A. Hasegawa-Johnson",
"Roy H. Campbell",
"Thomas S. Huang"
] | Convolutional autoregressive models have recently demonstrated state-of-the-art performance on a number of generation tasks. While fast, parallel training methods have been crucial for their success, generation is typically implemented in a naive fashion where redundant computations are unnecessarily repeated. This results in slow generation, making such models infeasible for production environments. In this work, we describe a method to speed up generation in convolutional autoregressive models. The key idea is to cache hidden states to avoid redundant computation. We apply our fast generation method to the Wavenet and PixelCNN++ models and achieve up to 21x and 183x speedups respectively. | [
"Deep learning",
"Applications"
] | https://openreview.net/pdf?id=rkdF0ZNKl | rkO7_Ypjx | comment | 1,490,028,575,959 | rkdF0ZNKl | [
"everyone"
] | [
"ICLR.cc/2017/pcs"
] | decision: Accept
title: ICLR committee final decision |
rkdF0ZNKl | Fast Generation for Convolutional Autoregressive Models | [
"Prajit Ramachandran",
"Tom Le Paine",
"Pooya Khorrami",
"Mohammad Babaeizadeh",
"Shiyu Chang",
"Yang Zhang",
"Mark A. Hasegawa-Johnson",
"Roy H. Campbell",
"Thomas S. Huang"
] | Convolutional autoregressive models have recently demonstrated state-of-the-art performance on a number of generation tasks. While fast, parallel training methods have been crucial for their success, generation is typically implemented in a naive fashion where redundant computations are unnecessarily repeated. This results in slow generation, making such models infeasible for production environments. In this work, we describe a method to speed up generation in convolutional autoregressive models. The key idea is to cache hidden states to avoid redundant computation. We apply our fast generation method to the Wavenet and PixelCNN++ models and achieve up to 21x and 183x speedups respectively. | [
"Deep learning",
"Applications"
] | https://openreview.net/pdf?id=rkdF0ZNKl | rkpqv3xoe | official_review | 1,489,188,757,381 | rkdF0ZNKl | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper62/AnonReviewer1"
] | title: simple and good
rating: 7: Good paper, accept
review: This is a nice workshop paper. its a simple idea but people will be interested in it. If nothing else, the released code is valuable, and having the poster to advertise it is a good use of workshop poster space.
confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
SJgabgBFl | A Quantitative Measure of Generative Adversarial Network Distributions | [
"Dan Hendrycks*",
"Steven Basart*"
] | We introduce a new measure for evaluating the quality of distributions learned by Generative Adversarial Networks (GANs). This measure computes the Kullback-Leibler divergence from a GAN-generated image set to a real image set. Since our measure utilizes a GAN's whole distribution, our measure penalizes outputs lacking in diversity, and it contrasts with evaluating GANs based upon a few cherry-picked examples. We demonstrate the measure's efficacy on the MNIST, SVHN, and CIFAR-10 datasets. | [] | https://openreview.net/pdf?id=SJgabgBFl | BJ4-Knvox | comment | 1,489,647,868,095 | SJgabgBFl | [
"everyone"
] | [
"~Steven_Basart1"
] | title: Update
comment: Thanks to the reviewers’ comments, we have updated our draft.
Now we include Parzen window estimates, and we show that reducing the image to the primary PCA coefficients does not fix Parzen window estimates. We hope that this added analysis addresses much of our reviewers’ concerns. |
SJgabgBFl | A Quantitative Measure of Generative Adversarial Network Distributions | [
"Dan Hendrycks*",
"Steven Basart*"
] | We introduce a new measure for evaluating the quality of distributions learned by Generative Adversarial Networks (GANs). This measure computes the Kullback-Leibler divergence from a GAN-generated image set to a real image set. Since our measure utilizes a GAN's whole distribution, our measure penalizes outputs lacking in diversity, and it contrasts with evaluating GANs based upon a few cherry-picked examples. We demonstrate the measure's efficacy on the MNIST, SVHN, and CIFAR-10 datasets. | [] | https://openreview.net/pdf?id=SJgabgBFl | r1HN1EQox | official_review | 1,489,350,444,578 | SJgabgBFl | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper162/AnonReviewer1"
] | title: Review
rating: 3: Clear rejection
review: This paper is addressing the important problem of evaluating the distributions learnt by GANs. In the proposed approach, first a PCA is applied to the samples of real and generated images and then the distributions are approximated by GMMs on the principal components. The KL between these two GMMs does not have a closed form, so a nearest neighboring approximation is used.
In general, I find this approach very similar to Parzen window estimate. Given that Parzen window estimate is flawed in evaluating the likelihood of generative models (see Theis et al., 2015), I don't see why a simple linear transformation before GMM approximation would solve this problem. Especially, the KL divergence approximation of GMMs does not seem like a good approximation, and is essentially computing the nearest neighbor within the two sets of images.
confidence: 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature |
SJgabgBFl | A Quantitative Measure of Generative Adversarial Network Distributions | [
"Dan Hendrycks*",
"Steven Basart*"
] | We introduce a new measure for evaluating the quality of distributions learned by Generative Adversarial Networks (GANs). This measure computes the Kullback-Leibler divergence from a GAN-generated image set to a real image set. Since our measure utilizes a GAN's whole distribution, our measure penalizes outputs lacking in diversity, and it contrasts with evaluating GANs based upon a few cherry-picked examples. We demonstrate the measure's efficacy on the MNIST, SVHN, and CIFAR-10 datasets. | [] | https://openreview.net/pdf?id=SJgabgBFl | r1RvY2vol | comment | 1,489,647,974,486 | B1E6xogig | [
"everyone"
] | [
"~Steven_Basart1"
] | title: Response
comment: Thank you for your careful analysis of our paper. We initially left out a comparison to Parzen window estimates as we believed it was known to be a poor measure in high dimensions (Theis et al.), but due to its prevalence, you are right in saying it should be included.
To that end, we have updated our paper by running Parzen window estimates on the CIFAR-10 dataset which, by way of its high-dimensionality, most closely reflects real world data. Parzen windows did not track CIFAR-10 image quality. Moreover, we now show that it is not PCA compression that allows our method to work. We demonstrate this by embedding each sample with its the primary PCA components, and then we use Parzen window estimates on this compressed PCA coefficient embedding. Here Parzen windows estimation still fails as a measure of image quality.
We do not think that it is a trivial task to construct a metric that corresponds to image quality and correlates with diversity of samples, otherwise the measure would already exist and be used in GAN research.
Thank you again for reviewing our paper. We hope to have addressed your primary concern, and please let us know if there are any other concerns or suggestions for improving our paper. |
SJgabgBFl | A Quantitative Measure of Generative Adversarial Network Distributions | [
"Dan Hendrycks*",
"Steven Basart*"
] | We introduce a new measure for evaluating the quality of distributions learned by Generative Adversarial Networks (GANs). This measure computes the Kullback-Leibler divergence from a GAN-generated image set to a real image set. Since our measure utilizes a GAN's whole distribution, our measure penalizes outputs lacking in diversity, and it contrasts with evaluating GANs based upon a few cherry-picked examples. We demonstrate the measure's efficacy on the MNIST, SVHN, and CIFAR-10 datasets. | [] | https://openreview.net/pdf?id=SJgabgBFl | HJx5P_Kpjl | comment | 1,490,028,642,289 | SJgabgBFl | [
"everyone"
] | [
"ICLR.cc/2017/pcs"
] | decision: Reject
title: ICLR committee final decision |
SJgabgBFl | A Quantitative Measure of Generative Adversarial Network Distributions | [
"Dan Hendrycks*",
"Steven Basart*"
] | We introduce a new measure for evaluating the quality of distributions learned by Generative Adversarial Networks (GANs). This measure computes the Kullback-Leibler divergence from a GAN-generated image set to a real image set. Since our measure utilizes a GAN's whole distribution, our measure penalizes outputs lacking in diversity, and it contrasts with evaluating GANs based upon a few cherry-picked examples. We demonstrate the measure's efficacy on the MNIST, SVHN, and CIFAR-10 datasets. | [] | https://openreview.net/pdf?id=SJgabgBFl | B1E6xogig | official_review | 1,489,182,908,558 | SJgabgBFl | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper162/AnonReviewer2"
] | title: failure to relate to parzen window and other generative modeling evaluation metrics
rating: 3: Clear rejection
review: This paper addresses the problem of quantitatively evaluating GANs (or any generative model for which samples can be drawn). They propose to build a gaussian mixture model approx. to the generators distribution and the empirical data distribution by fitting a single gaussian to each point of the respective distributions (and having equal mixing proportions). Rather than using each image as the gaussian mean, they use the vector of the first k principle components. To compare the generative and empirical distributions they compute the min KL distribution between each pair of gaussians and same the expectation over all mixture components.
Pros:
The main advantage I see of this approach over other nearest neighbor based approaches (namely parzen window estimates) is that by running PCA on the image and then computing distance between images in this reduced dimensional space, some of the issue of high dimensional spaces can be alleviated.
Cons:
This paper does not mention parzen window estimates, which have long been used as a measure of generative modeling quality (specifically an approximation to the likelihood of held out data under the generative models distribution). Parzen window estimates are also known to be very ineffective in high dimensional image spaces and the authors do not mention this at all. Potentially their approach is better because of the dimensionality reduction of PCA but the authors do no mention this. More generally, the authors don't compare their metric against anything else! Their only experiment show that their metric correlates with image quality on a couple datasets. This is not very interesting since one could construct a number of metrics based on nearest neighbor based approaches that correlate with image samples but that doesn't mean they are better than anything that exists.
Summary:
Overall, I think the idea is potentially useful, but the authors have not shown either empirically or theoretically that this metric is any better than existing approaches. Furthermore, they don't even mention related approaches, namely the closely related parzen window estimate method. As such, I think there is potential if revised, but don't think the paper can be accepted in its current form.
confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
SJgabgBFl | A Quantitative Measure of Generative Adversarial Network Distributions | [
"Dan Hendrycks*",
"Steven Basart*"
] | We introduce a new measure for evaluating the quality of distributions learned by Generative Adversarial Networks (GANs). This measure computes the Kullback-Leibler divergence from a GAN-generated image set to a real image set. Since our measure utilizes a GAN's whole distribution, our measure penalizes outputs lacking in diversity, and it contrasts with evaluating GANs based upon a few cherry-picked examples. We demonstrate the measure's efficacy on the MNIST, SVHN, and CIFAR-10 datasets. | [] | https://openreview.net/pdf?id=SJgabgBFl | SkOrK3Psx | comment | 1,489,647,935,874 | r1HN1EQox | [
"everyone"
] | [
"~Steven_Basart1"
] | title: Response
comment: Thank you for your review of our paper. We now realize that we should have made a clearer distinction between our work and that Parzen window estimation, and we have updated the paper to delineate between them. The main difference is that we compare distributions rather than tally the average quality of generated examples. This is reflected in our choice of the KL divergence rather than the average log-likelihood. We show that Parzen windows does not track image quality in CIFAR-10. Moreover, we have also updated our paper to show that doing “a simple linear transformation before GMM approximation ” does not solve the problem either.
We appreciate your feedback, as it made us compare our measure with a prominent technique. If there are any other concerns or criticisms, let us know. Thank you again for reviewing our paper. |
S1-6egSFl | Unsupervised and Scalable Algorithm for Learning Node Representations | [
"Tiago Pimentel",
"Adriano Veloso",
"Nivio Ziviani"
] | Representation learning is one of the foundations of Deep Learning and allowed big improvements on several Machine Learning fields, such as Neural Machine Translation, Question Answering and Speech Recognition. Recent works have proposed new methods for learning representations for nodes and edges in graphs. In this work, we propose a new unsupervised and efficient method, called here Neighborhood Based Node Embeddings (NBNE), capable of generating node embeddings for very large graphs. This method is based on SkipGram and uses nodes' neighborhoods as contexts to generate representations. NBNE achieves results comparable or better to the state-of-the-art in three different datasets. | [
"Unsupervised Learning"
] | https://openreview.net/pdf?id=S1-6egSFl | Hk6bxt4ie | comment | 1,489,436,676,817 | rywV8Flie | [
"everyone"
] | [
"~Tiago_Pimentel1"
] | title: Updated paper and clarifications
comment: Thank you for your feedback. I think this ('state-of-the-art link prediction') was indeed stated poorly from our part, I've updated the paper's abstract, now saying that NBNE achieves results comparable or better than state-of-the-art feature learning algorithms. Instead of specifically stating state-of-the-art at the tasks themselves.
We compare our algorithm to the baselines in these two problems, i.e. node classification and link prediction, because it’s the usual benchmark when comparing node embedding algorithms. These problems are used for comparisons in Node2Vec (Grover & Leskovec, 2016) and SBNE (Wang et al., 2016), while DeepWalk (Perozzi et al., 2014) and LINE (Tang et al., 2015) evaluate using node classification only.
All these four methods, and NBNE, are supposed to generate general purpose embeddings, so they are not, nor should be, explicitly optimized for any such test. These chosen tests in tasks with different properties and in different datasets are mainly supposed to 'benchmark' the algorithm.
REFERENCES
Aditya Grover and Jure Leskovec. Node2vec: Scalable feature learning for networks. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 855–864, 2016.
Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. Deepwalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 701–710, 2014.
Jian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, and Qiaozhu Mei. Line: Large-scale information network embedding. In Proceedings of the 24th International Conference on World Wide Web, pp. 1067–1077, 2015.
Daixin Wang, Peng Cui, and Wenwu Zhu. Structural deep network embedding. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1225–1234, 2016. |
S1-6egSFl | Unsupervised and Scalable Algorithm for Learning Node Representations | [
"Tiago Pimentel",
"Adriano Veloso",
"Nivio Ziviani"
] | Representation learning is one of the foundations of Deep Learning and allowed big improvements on several Machine Learning fields, such as Neural Machine Translation, Question Answering and Speech Recognition. Recent works have proposed new methods for learning representations for nodes and edges in graphs. In this work, we propose a new unsupervised and efficient method, called here Neighborhood Based Node Embeddings (NBNE), capable of generating node embeddings for very large graphs. This method is based on SkipGram and uses nodes' neighborhoods as contexts to generate representations. NBNE achieves results comparable or better to the state-of-the-art in three different datasets. | [
"Unsupervised Learning"
] | https://openreview.net/pdf?id=S1-6egSFl | rywV8Flie | official_review | 1,489,176,110,741 | S1-6egSFl | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper158/AnonReviewer1"
] | rating: 7: Good paper, accept
review: Essentially the goal of the contribution is to adapt ideas from Word2Vec to learn node embeddings. I.e., like Node2Vec but borrowing ideas from SkipGrams rather than random walks. This is claimed to lead to faster training times and more general-purpose embeddings.
The basic idea is to form "sentences" based on random permutations of neighbors around some node, so that the ideas from Word2Vec can be adopted. This idea is relatively straightforward and perhaps a little ad-hoc, but makes sense.
The experiments on a few graphs show improvements on link prediction tasks. These are fine though it's not clear to me whether state-of-the-art link prediction methods are in fact similar to what's being shown, nor is this the task the methods being compared are optimized for. Some more thoroughness would be useful here, though what's shown is sufficient for a workshop paper.
confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
S1-6egSFl | Unsupervised and Scalable Algorithm for Learning Node Representations | [
"Tiago Pimentel",
"Adriano Veloso",
"Nivio Ziviani"
] | Representation learning is one of the foundations of Deep Learning and allowed big improvements on several Machine Learning fields, such as Neural Machine Translation, Question Answering and Speech Recognition. Recent works have proposed new methods for learning representations for nodes and edges in graphs. In this work, we propose a new unsupervised and efficient method, called here Neighborhood Based Node Embeddings (NBNE), capable of generating node embeddings for very large graphs. This method is based on SkipGram and uses nodes' neighborhoods as contexts to generate representations. NBNE achieves results comparable or better to the state-of-the-art in three different datasets. | [
"Unsupervised Learning"
] | https://openreview.net/pdf?id=S1-6egSFl | ryDPuF6ig | comment | 1,490,028,639,144 | S1-6egSFl | [
"everyone"
] | [
"ICLR.cc/2017/pcs"
] | decision: Accept
title: ICLR committee final decision |
S1-6egSFl | Unsupervised and Scalable Algorithm for Learning Node Representations | [
"Tiago Pimentel",
"Adriano Veloso",
"Nivio Ziviani"
] | Representation learning is one of the foundations of Deep Learning and allowed big improvements on several Machine Learning fields, such as Neural Machine Translation, Question Answering and Speech Recognition. Recent works have proposed new methods for learning representations for nodes and edges in graphs. In this work, we propose a new unsupervised and efficient method, called here Neighborhood Based Node Embeddings (NBNE), capable of generating node embeddings for very large graphs. This method is based on SkipGram and uses nodes' neighborhoods as contexts to generate representations. NBNE achieves results comparable or better to the state-of-the-art in three different datasets. | [
"Unsupervised Learning"
] | https://openreview.net/pdf?id=S1-6egSFl | S1uM0LVjx | comment | 1,489,427,983,626 | BJY1_1Vse | [
"everyone"
] | [
"~Tiago_Pimentel1"
] | title: Updated paper and clarifications
comment: Thank you for your feedback, I’m uploading a revised version of the paper which, I think, better describes the way sentences are generated. We would like to point out that, besides having a lower training time, our method is completely unsupervised, while node2vec is semi-supervised. Our method also only depends on a single parameter ‘n‘, which is easier to understand and choose and which can be selected dynamically, by increasing its value until the embeddings start to overfit.
Another point we would like to make is that choosing how sentences/context are generated in a graph is a fairly complex problem, due to the changing dimensionality in its structure. Unlike text or image, there’s no straight forward way to ‘read‘ it. Also, differences like the one between SkipGram and CBOW are ‘simple‘, since they only change how one word is predicted from the others in already constructed sentences, but create fairly different representations and results, being more efficient when applied to different datasets.
There was no space to fully state the differences in training time between our method and the baselines, but it was about 100 to 1000x faster than node2vec, when using n=1, n=5 or n=10 on the three datasets (respectively: Astro, Facebook and Blog).
About testing against different baselines, to the best of our knowledge, there’s no supervised method for learning representations specific for neither link prediction or node classification. Grover & Leskovec (2016) state that ”none of feature learning algorithms have been previously used for link prediction”. In it, they additionally test their algorithm against common heuristics of the problem, like Common Neighbours and Adamir Adar, strongly beating those baselines. Due to the lack of space in this workshop paper version, we found it was not necessary to compare against these weak baselines.
Most supervised learning algorithms for node classification/link prediction we found use, besides structural knowledge from the graph, node attributes, like sex, age, etc, which we do not use. We compare our algorithm to theirs in these two problems, i.e. node classification and link prediction, because it’s the usual benchmark when comparing node embedding algorithms. These problems are used for comparisons in Node2Vec (Grover & Leskovec, 2016) and SBNE (Wang et al., 2016), while DeepWalk (Perozzi et al., 2014) and LINE (Tang et al., 2015) evaluate using node classification only.
REFERENCES
Aditya Grover and Jure Leskovec. Node2vec: Scalable feature learning for networks. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 855–864, 2016.
Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. Deepwalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 701–710, 2014.
Jian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, and Qiaozhu Mei. Line: Large-scale information network embedding. In Proceedings of the 24th International Conference on World Wide Web, pp. 1067–1077, 2015.
Daixin Wang, Peng Cui, and Wenwu Zhu. Structural deep network embedding. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1225–1234, 2016. |
S1-6egSFl | Unsupervised and Scalable Algorithm for Learning Node Representations | [
"Tiago Pimentel",
"Adriano Veloso",
"Nivio Ziviani"
] | Representation learning is one of the foundations of Deep Learning and allowed big improvements on several Machine Learning fields, such as Neural Machine Translation, Question Answering and Speech Recognition. Recent works have proposed new methods for learning representations for nodes and edges in graphs. In this work, we propose a new unsupervised and efficient method, called here Neighborhood Based Node Embeddings (NBNE), capable of generating node embeddings for very large graphs. This method is based on SkipGram and uses nodes' neighborhoods as contexts to generate representations. NBNE achieves results comparable or better to the state-of-the-art in three different datasets. | [
"Unsupervised Learning"
] | https://openreview.net/pdf?id=S1-6egSFl | BJY1_1Vse | official_review | 1,489,397,729,102 | S1-6egSFl | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper158/AnonReviewer2"
] | title: Review
rating: 4: Ok but not good enough - rejection
review: The paper proposes a new method for computing nodes representations in large graphs. The idea is very close to the ideas of other existing papers and consists in transforming nodes+neighbors into sentences, and then to learn a word2vec model on the generated sentences. The originality of the paper is in the way these sentences are generated, using random permutations of nodes. Experimental results are made on both link prediction and node classification problems and show competitive results w.r.t baselines.
The originality of the approach is quite limited since the only new thing is how the sentences are generated. Moreover, due to the lack of details, I am not sure to exactly understand how the sentes are generated. Adding an example would be nice. The model seems competitive with other unsupervised methods and with a lower training time which is interesting. But comparisons could be done with supervised methods that have been already proposed, particularly for learning representations for node classification.
Pros:
• Simple idea
• Low training time
Cons:
• Not a string contribution
• Incomplete Experimental setting
confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
H1PMaa1Yg | Exploring loss function topology with cyclical learning rates | [
"Leslie N. Smith",
"Nicholay Topin"
] | We present observations and discussion of previously unreported phenomena discovered while training residual networks. The goal of this work is to better understand the nature of neural networks through the examination of these new empirical results. These behaviors were identified through the application of Cyclical Learning Rates (CLR) and linear network interpolation. Among these behaviors are counterintuitive increases and decreases in training loss and instances of rapid training. For example, we demonstrate how CLR can produce greater testing accuracy than traditional training despite using large learning rates. | [
"Deep learning"
] | https://openreview.net/pdf?id=H1PMaa1Yg | S1i9J52cx | official_review | 1,488,916,370,729 | H1PMaa1Yg | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper19/AnonReviewer2"
] | title: Official Review
rating: 4: Ok but not good enough - rejection
review: This work presents a series of observations gleaned from training a ResNet at different learning rates and schedules. While in general this sort of empirical analysis is a good thing, the paper does not put forward any novel explanation or theory based on these observations. Overall the paper is reasonably well written but lacks clear motivation or take-aways. The techniques in this paper are not novel but the analysis is interesting. I would recommend rejection at this time but encourage the authors to see if they can further explore possible insights their experiments may have uncovered.
confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
H1PMaa1Yg | Exploring loss function topology with cyclical learning rates | [
"Leslie N. Smith",
"Nicholay Topin"
] | We present observations and discussion of previously unreported phenomena discovered while training residual networks. The goal of this work is to better understand the nature of neural networks through the examination of these new empirical results. These behaviors were identified through the application of Cyclical Learning Rates (CLR) and linear network interpolation. Among these behaviors are counterintuitive increases and decreases in training loss and instances of rapid training. For example, we demonstrate how CLR can produce greater testing accuracy than traditional training despite using large learning rates. | [
"Deep learning"
] | https://openreview.net/pdf?id=H1PMaa1Yg | Hku-l_lse | official_review | 1,489,170,432,487 | H1PMaa1Yg | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper19/AnonReviewer1"
] | title: Interesting phenomena, but more experiments needed to rule out less interesting explanations
rating: 4: Ok but not good enough - rejection
review: This paper discusses several interesting phenomena regarding the training and testing error curves over the course of training deep network models on image classification tasks. Among the findings are that test error performance can be nonmonotonic with certain learning rates, and imposing a cyclic alternation between low and high learning rates can speed learning.
-While these results may point to something deeper, additional control experiments would greatly strengthen the paper. The finding that a cyclic learning schedule can speed learning would be potentially of practical interest, but the experiments compare just one particular cyclic scheme to one particular fixed learning rate. Does a carefully optimized fixed learning rate match the cyclic performance? Is a cycle really necessary, or can the learning rate just decrease monotonically over the course of learning?
-There may be simple standard explanations of these phenomena. The test error spikes up on each cycle as the learning rate crosses some threshold, which seems a straightforward case of SGD becoming unstable and diverging when the learning rate is made too high. After taking a giant bad step, higher learning rates can make progress because the network is terrible and fine adjustments are not necessary. More is necessary to back up the claim that these results provide insight into the "loss function topology."
+The finding of faster convergence with cyclic learning rate schedules, if it remains faster than the optimal fixed or monotonically decreasing schedule, would be very interesting and merits more investigation.
+The suggestion of interpolating many models to yield higher generalization performance is also a potentially interesting direction.
confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
H1PMaa1Yg | Exploring loss function topology with cyclical learning rates | [
"Leslie N. Smith",
"Nicholay Topin"
] | We present observations and discussion of previously unreported phenomena discovered while training residual networks. The goal of this work is to better understand the nature of neural networks through the examination of these new empirical results. These behaviors were identified through the application of Cyclical Learning Rates (CLR) and linear network interpolation. Among these behaviors are counterintuitive increases and decreases in training loss and instances of rapid training. For example, we demonstrate how CLR can produce greater testing accuracy than traditional training despite using large learning rates. | [
"Deep learning"
] | https://openreview.net/pdf?id=H1PMaa1Yg | HJ_Rv84jx | comment | 1,489,426,384,527 | Hku-l_lse | [
"everyone"
] | [
"~Leslie_N_Smith1"
] | title: Reply to AnonReviewers
comment: We believe the intent of the ICLR workshop is to provide a forum for late-breaking results, even if a paper hasn't been fully developed into what one expects for a conference paper. Our workshop paper is such a paper, providing experimental results that have not been seen before, though it isn't as fully developed as a conference paper.
Unfortunately, the 3-page limit meant not showing the control experiments and many of the other results we've obtained. We ran experiments with a variety of learning rate schedules, architectures, solvers, and hyper-parameters. We mentioned some of the other results very briefly in our conclusion but could not include them fully due to space limitations.
The original cyclical learning rate paper (Smith 2015, Smith 2017) discusses that the current scheme was compared with many other cyclic methods and the linear scheme was chosen because the more complex methods provided no additional benefit. Please skim the earlier paper for more details. The purpose of this current paper was not to introduce cyclical learning rate as a practical tool but to show it is also an experimental tool that demonstrates the new phenomena described.
Regarding Figure 2a, some simple explanations are possible, and there are certainly many examples in the literature where SGD becomes unstable and diverges. However, to our knowledge, the literature does not show examples where SGD becomes unstable, diverges, and then starts converging (note that the test accuracy falls slightly and recovers quickly), especially while the learning rate continues to increase. This is why we include this as a novel phenomenon. Furthermore, from a geometric perspective, one can imagine that the increasing learning rate causes the solution to jump out of a local minimum and hence the sudden jump but, if so, why would it continue to converge while learning rate increases? We believe these phenomena are unusual and are providing some insight into the loss function topology.
In addition, Figure 1 shows the plots that started our investigation and we don't think your explanation holds for this example. These plots show test accuracy during regular training (not using cyclical learning rates), so the learning rate is monotonically decreasing. Furthermore, the dip in test accuracy happens for an initial learning rate of 0.14 but not for 0.24 or 0.35.
Regarding Figure 2b, it does show the cyclical learning rate result compared to an optimal monotonically decreasing schedule. The point is that within 20,000 iterations it produced a better solution than the optimal schedule could in 80,000 - 100,000 iterations. We also feel it is interesting that such high performance is possible when the smallest value used for the learning rate is 0.1, which is commonly considered large.
As we say in the Conclusions, we are actively searching for a collaborator who can provide a theoretical analysis for a full follow-up paper. We welcome any readers who feel they understand the theoretical causes for these phenomena to please contact me.
|
H1PMaa1Yg | Exploring loss function topology with cyclical learning rates | [
"Leslie N. Smith",
"Nicholay Topin"
] | We present observations and discussion of previously unreported phenomena discovered while training residual networks. The goal of this work is to better understand the nature of neural networks through the examination of these new empirical results. These behaviors were identified through the application of Cyclical Learning Rates (CLR) and linear network interpolation. Among these behaviors are counterintuitive increases and decreases in training loss and instances of rapid training. For example, we demonstrate how CLR can produce greater testing accuracy than traditional training despite using large learning rates. | [
"Deep learning"
] | https://openreview.net/pdf?id=H1PMaa1Yg | B1XMuFpjl | comment | 1,490,028,555,013 | H1PMaa1Yg | [
"everyone"
] | [
"ICLR.cc/2017/pcs"
] | decision: Reject
title: ICLR committee final decision |
Sk1OOnNFx | Restricted Boltzmann Machines provide an accurate metric for retinal responses to visual stimuli | [
"Christophe Gardella",
"Olivier Marre",
"Thierry Mora"
] | How to discriminate visual stimuli based on the activity they evoke in sensory neurons is still an open challenge. To measure discriminability power, we search for a neural metric that preserves distances in stimulus space, so that responses to different stimuli are far apart and responses to the same stimulus are close. Here, we show that Restricted Boltzmann Machines (RBMs) provide such a distance-preserving neural metric. Even when learned in a unsupervised way, RBM-based metric can discriminate stimuli with higher resolution than classical metrics. | [] | https://openreview.net/pdf?id=Sk1OOnNFx | rJapli7sx | official_review | 1,489,379,524,614 | Sk1OOnNFx | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper101/AnonReviewer1"
] | title: nice method and analysis
rating: 9: Top 15% of accepted papers, strong accept
review: This paper proposes using the hidden units of an RBM to compute a metric of the similarity of neural responses to different stimuli. It seems like a sensible idea - to compare population activity in a latent space defined by the statistics rather than in the raw spike data - but it could use more explicit motivation rather than relying on the reader to come up with this for themselves. The proposed method exhibits good performance compared to other methods in discriminating spike trains in a meaningful way that is related to stimulus changes.
Some related work using RBM's to model spike trains:
Köster, Urs, et al. "Modeling higher-order correlations within cortical microcolumns." PLoS Comput Biol 10.7 (2014): e1003684.
confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
Sk1OOnNFx | Restricted Boltzmann Machines provide an accurate metric for retinal responses to visual stimuli | [
"Christophe Gardella",
"Olivier Marre",
"Thierry Mora"
] | How to discriminate visual stimuli based on the activity they evoke in sensory neurons is still an open challenge. To measure discriminability power, we search for a neural metric that preserves distances in stimulus space, so that responses to different stimuli are far apart and responses to the same stimulus are close. Here, we show that Restricted Boltzmann Machines (RBMs) provide such a distance-preserving neural metric. Even when learned in a unsupervised way, RBM-based metric can discriminate stimuli with higher resolution than classical metrics. | [] | https://openreview.net/pdf?id=Sk1OOnNFx | BJcohY1jg | official_review | 1,489,112,226,391 | Sk1OOnNFx | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper101/AnonReviewer2"
] | title: ok use of rbm for spike train metrics
rating: 6: Marginally above acceptance threshold
review: pro:
- overall it’s a sensible approach and seems to be a reasonable first step towards a deep learning spike train metric
- spike train metrics are a topic of some interest in the neuroscience community
- the authors understand the relevant literature and have cited it.
con:
- there is not much here in the way of novelty that will be of interest to the ICLR community
- the writing would benefit from a thorough edit for grammar and style.
- the layout of the experiments is not entirely clear; specifically, is there any training/validation/test data split, or are all the results training data only?
In short, a sensible idea and something that should at some point grow into a published work. I am marginally positive.
confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
Sk1OOnNFx | Restricted Boltzmann Machines provide an accurate metric for retinal responses to visual stimuli | [
"Christophe Gardella",
"Olivier Marre",
"Thierry Mora"
] | How to discriminate visual stimuli based on the activity they evoke in sensory neurons is still an open challenge. To measure discriminability power, we search for a neural metric that preserves distances in stimulus space, so that responses to different stimuli are far apart and responses to the same stimulus are close. Here, we show that Restricted Boltzmann Machines (RBMs) provide such a distance-preserving neural metric. Even when learned in a unsupervised way, RBM-based metric can discriminate stimuli with higher resolution than classical metrics. | [] | https://openreview.net/pdf?id=Sk1OOnNFx | SJxHuKTig | comment | 1,490,028,599,719 | Sk1OOnNFx | [
"everyone"
] | [
"ICLR.cc/2017/pcs"
] | decision: Accept
title: ICLR committee final decision |
Syh_o0pPx | CommAI: Evaluating the first steps towards a useful general AI | [
"Marco Baroni",
"Armand Joulin",
"Allan Jabri",
"Germàn Kruszewski",
"Angeliki Lazaridou",
"Klemen Simonic",
"Tomas Mikolov"
] | With machine learning successfully applied to new daunting problems almost every day, general AI starts looking like an attainable goal. However, most current research focuses instead on important but narrow applications, such as image classification or machine translation. We believe this to be largely due to the lack of objective ways to measure progress towards broad machine intelligence. In order to fill this gap, we propose here a set of concrete desiderata for general AI, together with a platform to test machines on how well they satisfy such desiderata, while keeping all further complexities to a minimum. | [
"Theory",
"Natural language processing",
"Reinforcement Learning",
"Transfer Learning"
] | https://openreview.net/pdf?id=Syh_o0pPx | Hk9M9Stix | comment | 1,489,750,546,248 | rJKbBmYsg | [
"everyone"
] | [
"~Marco_Baroni1"
] | title: thanks
comment: Thanks for your further comments and the pointer. I agree that it's difficult to continue this conversation on the forum: let's hope well' have chances to discuss these topics in person! |
Syh_o0pPx | CommAI: Evaluating the first steps towards a useful general AI | [
"Marco Baroni",
"Armand Joulin",
"Allan Jabri",
"Germàn Kruszewski",
"Angeliki Lazaridou",
"Klemen Simonic",
"Tomas Mikolov"
] | With machine learning successfully applied to new daunting problems almost every day, general AI starts looking like an attainable goal. However, most current research focuses instead on important but narrow applications, such as image classification or machine translation. We believe this to be largely due to the lack of objective ways to measure progress towards broad machine intelligence. In order to fill this gap, we propose here a set of concrete desiderata for general AI, together with a platform to test machines on how well they satisfy such desiderata, while keeping all further complexities to a minimum. | [
"Theory",
"Natural language processing",
"Reinforcement Learning",
"Transfer Learning"
] | https://openreview.net/pdf?id=Syh_o0pPx | Byewaz2cx | comment | 1,488,887,128,101 | rkl9cjs9e | [
"everyone"
] | [
"~Marcus_Abundis1"
] | title: Intuition . . .
comment: Dear Marco, Thank you for your note.
I am struck by your reply of an ‘intuitive notion of useful AI’, as intuition must precede firm models. Still, can you offer more detail on that intuition? I reread the material and I am left mostly with (I paraphrase) ‘learning the use of language is important’. I ask for more detail as I wonder how precise, well-formed, or extensible that intuition is – my own interpretation here feels a bit superficial.
An intriguing part of the CommAI environment is that it may target what I call a ‘universal grammar’ for machine learning. This, by itself, is interesting. Similarly, Shannon’s signal entropy gave objective structure our sense of ‘information’, and still underlies many modern advances. But this also led to ‘bizarre and unsatisfying’ (Shannon & Weaver, 1949) views of information. It would be sad to see ‘bizarre and unsatisfying’ aspects perpetuated – with that mistake now made for ‘intelligence’, as occurred with ‘information’.
For example, the CommAI environment is essentially semiotic, focusing on syntactical tasks but excluding semantic aspects (the *functional value* one might ascribe to a banana versus an apple, or to bananas and apples of different types). The later would need to be included if one wishes to call the system truly intelligent, no? I assume we both target a *true* general intelligence, so this seems like a critical matter. Do you have thoughts on treating syntactic/semantic differences?
Other parts of the proposal I find similarly bothersome . . .
‘. . . important to instruct the machine in new domains’ (sec 1).
Why is the system (ultimately) not set to posit, reveal, or articulate new domains, in an intelligent manner. For example, in a backward looking way, we might ask ‘How might a wheel be invented, from a set of previously existing elements?’ The developmental steps involved could then be mapped, as a bottoms-up model. Discrete domain mapping (top down) is needed, but at the risk of leaving explanatory gaps between domains? Enough 'local domains' may eventually be mapped that a synthesizing (general) view can be attempted, but at some far distant point. Bottoms-up modeling ‘forces the gap issue’ by requiring first principles (where possible) that close said gaps.
Also (perhaps trivial?), a ‘. . . common bit-level interface’ is variously referenced in the material, which puzzles me. A bit-to-bit role seems to imply something coded in machine language, rather than programs that pass through a conversion (compiling) process. The later is innately indirect. Even in processor design, subtle bit-to-bit differences are known to exist across architectures that affect processor outputs. This leads me to wonder if you mean something specific (or figurative) in pointing to bit-to-bit relationships?
Lastly, I see the proposal as circumspect in any claims on what is possible with a CommAI environment, so I do not wish to force a defensive position. I merely hope to better grasp the group’s thinking on this challenging topic. Thank you, in advance, for your reply.
Shannon, C., & Weaver, W. (1949). Advances in ‘a mathematical theory of communication’. Urbana, IL: University of Illinois Press. |
Syh_o0pPx | CommAI: Evaluating the first steps towards a useful general AI | [
"Marco Baroni",
"Armand Joulin",
"Allan Jabri",
"Germàn Kruszewski",
"Angeliki Lazaridou",
"Klemen Simonic",
"Tomas Mikolov"
] | With machine learning successfully applied to new daunting problems almost every day, general AI starts looking like an attainable goal. However, most current research focuses instead on important but narrow applications, such as image classification or machine translation. We believe this to be largely due to the lack of objective ways to measure progress towards broad machine intelligence. In order to fill this gap, we propose here a set of concrete desiderata for general AI, together with a platform to test machines on how well they satisfy such desiderata, while keeping all further complexities to a minimum. | [
"Theory",
"Natural language processing",
"Reinforcement Learning",
"Transfer Learning"
] | https://openreview.net/pdf?id=Syh_o0pPx | rkl9cjs9e | comment | 1,488,857,736,042 | BkDRV09qx | [
"everyone"
] | [
"~Marco_Baroni1"
] | title: Thanks for your thoughts
comment: We are proposing only one possible approach to General AI, and we would like to see other proposals that take alternative routes, such as the "bottom-up" one you are presenting (thanks for the pointer).
It's also true that CommAI is a "top down" approach, but based on an intuitive notion of useful AI, rather than general mathematical or psychological considerations.
|
Syh_o0pPx | CommAI: Evaluating the first steps towards a useful general AI | [
"Marco Baroni",
"Armand Joulin",
"Allan Jabri",
"Germàn Kruszewski",
"Angeliki Lazaridou",
"Klemen Simonic",
"Tomas Mikolov"
] | With machine learning successfully applied to new daunting problems almost every day, general AI starts looking like an attainable goal. However, most current research focuses instead on important but narrow applications, such as image classification or machine translation. We believe this to be largely due to the lack of objective ways to measure progress towards broad machine intelligence. In order to fill this gap, we propose here a set of concrete desiderata for general AI, together with a platform to test machines on how well they satisfy such desiderata, while keeping all further complexities to a minimum. | [
"Theory",
"Natural language processing",
"Reinforcement Learning",
"Transfer Learning"
] | https://openreview.net/pdf?id=Syh_o0pPx | H1pG2k-sg | comment | 1,489,202,196,764 | rJrqi2gig | [
"everyone"
] | [
"~Marco_Baroni1"
] | title: thanks for your review
comment: Thanks for your review and your open-mindedness regarding top-down vs bottom-up approaches.
We respectfully think that you are seriously underestimating the difficulty of our tasks. Consider that in our setup the learner is only getting one bit at a time, and thus simply discovering that there are recurrent, re-usable patterns of 8-bit sequences that are playing a meaningful role in the definition of the tasks is a big challenge for it. Moreover, the algorithm cannot learn the regexps by example, as it will be exposed to each regexp only once. What the learner needs to do, after it has discovered how to parse the environment strings into their component parts (regexp, order, target string...), is to learn a general way to "compile" the regular expression in order to analyze the string at hand (or to produce one or multiple strings). This is an enormously more difficult task than generalizing a stringset based on a number of examples. All this, with no explicit task segmentation, and very sparse reward, as the learner is only getting reward when it produces the right solution for a task. Note also that, unlike in the Weston paper, our learner is free to generate any sequence of bits (with thus a huge space to explore), rather than having to pick a fixed answer from a list.
Compositionality should play a crucial role in the solution. In order to solve what is essentially a continuous stream of 0-shot tasks, the learner must learn to re-use components such as the ability to parse bits into bytes, the ability to parse the instructions into parts, the ability to process and apply regular expressions, and the ability to parse an increasingly richer regexp syntax. Similar abilities would doubtlessly also be learned by, say, natural language parsing, in a setup in which the learner is provided no supervision about what it needs to do in order to get reward, but that would be even more complex.
We are currently experimenting with a set of tasks that are much simpler than the ones presented in the position abstract, using a RNN trained with RL. We are finding that this approach does not go anywhere even in the simplified scenario. We are however not reporting such results in the paper since it's hard to definitely prove a negative result.
|
Syh_o0pPx | CommAI: Evaluating the first steps towards a useful general AI | [
"Marco Baroni",
"Armand Joulin",
"Allan Jabri",
"Germàn Kruszewski",
"Angeliki Lazaridou",
"Klemen Simonic",
"Tomas Mikolov"
] | With machine learning successfully applied to new daunting problems almost every day, general AI starts looking like an attainable goal. However, most current research focuses instead on important but narrow applications, such as image classification or machine translation. We believe this to be largely due to the lack of objective ways to measure progress towards broad machine intelligence. In order to fill this gap, we propose here a set of concrete desiderata for general AI, together with a platform to test machines on how well they satisfy such desiderata, while keeping all further complexities to a minimum. | [
"Theory",
"Natural language processing",
"Reinforcement Learning",
"Transfer Learning"
] | https://openreview.net/pdf?id=Syh_o0pPx | HJJUXrwie | comment | 1,489,617,734,862 | SyP8eGwje | [
"everyone"
] | [
"~Marco_Baroni1"
] | title: language and general AI
comment: Thanks for your interesting comments. We welcome other views on what are the first skills to focus on in the development of general AI, and we hope our position paper, if published, will stir further discussion of this kind.
Our reason to focus on language is two-fold. First, we find it hard to conceive that an AI could be useful to human beings if we were not able to communicate with it (to give it instructions and teach it new skills). Language is by far the easiest and most powerful communication tool that humans can use. Second, while our tasks are superficially linguistic in nature, for a system to learn how to handle them from scratch would require very powerful learning to learn capabilities (discovering that certain recurrent sequences are meaningful and thus they should be memorized even in the absence of specific reward, the ability to combine skills learned in simpler tasks in order to address more advanced tasks, the ability to find systematic correspondences between signs--the regexps--and their denotation--the strings, etc.). The minimal setup we are considering should allow researchers to focus on such challenges, rather than on large-scale/noisy data processing issues.
We are definitely not claiming that a system trained on the specific set of CommAI-mini tasks would then be ready to go out in the world and tackle all sorts of advanced tasks, but we realistically think that a learner that was able to solve these tasks without any ad-hoc hand-coded knowledge would be so general that it should be possible to train it, e.g., to have more general conversations with humans. Next, the conversational and linguistic skills could be exploited to teach the machine about the domains of interest (e.g., by instructing the machine to study the Wikipedia), and so on and so forth. We recognize, of course, that we are not there, yet, but we believe that this is an avenue that is worth exploring.
GoodAI recently announced a challenge based on our CommAI-mini tasks. We will thus soon be able to ascertain whether there are systems that can solve them, and whether such systems can then scale up to tasks in other domains.
|
Syh_o0pPx | CommAI: Evaluating the first steps towards a useful general AI | [
"Marco Baroni",
"Armand Joulin",
"Allan Jabri",
"Germàn Kruszewski",
"Angeliki Lazaridou",
"Klemen Simonic",
"Tomas Mikolov"
] | With machine learning successfully applied to new daunting problems almost every day, general AI starts looking like an attainable goal. However, most current research focuses instead on important but narrow applications, such as image classification or machine translation. We believe this to be largely due to the lack of objective ways to measure progress towards broad machine intelligence. In order to fill this gap, we propose here a set of concrete desiderata for general AI, together with a platform to test machines on how well they satisfy such desiderata, while keeping all further complexities to a minimum. | [
"Theory",
"Natural language processing",
"Reinforcement Learning",
"Transfer Learning"
] | https://openreview.net/pdf?id=Syh_o0pPx | SyP8eGwje | official_review | 1,489,604,687,252 | Syh_o0pPx | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper2/AnonReviewer1"
] | title: Review
rating: 5: Marginally below acceptance threshold
review: The paper proposes a new evaluation platform for what they define 'useful' general AI and the desired characteristics for this kind of system.
Pro:
(Attempts to) Tackle a very important problem, that has yet to be properly formalized or agreed on by the general community.
It's in line with other efforts, such as openai.com/blog/universe, gvgai.net, github.com/deepmind/lab, to bring forward new and diverse tasks for the community to play with. This ultimately pushes us to develop more general learning algorithms that indeed need to "learn to learn" or learn to adapt to different, but related tasks. I think that's something that has been more and more important.
Cons:
The major problem I have with paper and framework, stems not necessarily from the tasks themselves, but from the identified desiderata. It seems to be <<very>> natural language/text focused. This is an on-going debate whether or not that is a crucial component in the development of general AI and how we will interact with AI. It seems to me, that most of the effort -- at least computationally -- would be spent modelling the particular structure present in text-like inputs/data and that automatically shifts the focus away from what we should be doing, or what the AIs should be trying to figure out which is more complex tasks, more planning and optimization challenging scenarios.
Which brings me to the second point. Say you believe in the desiderata outline, the tasks seem to match what was highlighted in the agenda, but the level of complexity it's relatively low. That's not to say that these are simple tasks for our learning algorithms to pick up, just the complexity doesn't lie in the task per se, but trying to model the language/syntax. I fail to see what succeeding on these tasks, say, in general, about now taking this system and applying it to optimizing energy consumption, or recognizing emotions, or even something domain-related like dialogue systems.
To sum: I think the paper addresses a very real problem and, as I said previously, I don't think we don't have the/a right answer or even a satisfactory answer at this point in time. I do think though the framework proposed is too limited in scope to claim generality. That being said, it might still be 'useful' -- if you agree with the proposed desiderata, it seems like a sensible set of tasks to try out.
confidence: 3: The reviewer is fairly confident that the evaluation is correct |
Syh_o0pPx | CommAI: Evaluating the first steps towards a useful general AI | [
"Marco Baroni",
"Armand Joulin",
"Allan Jabri",
"Germàn Kruszewski",
"Angeliki Lazaridou",
"Klemen Simonic",
"Tomas Mikolov"
] | With machine learning successfully applied to new daunting problems almost every day, general AI starts looking like an attainable goal. However, most current research focuses instead on important but narrow applications, such as image classification or machine translation. We believe this to be largely due to the lack of objective ways to measure progress towards broad machine intelligence. In order to fill this gap, we propose here a set of concrete desiderata for general AI, together with a platform to test machines on how well they satisfy such desiderata, while keeping all further complexities to a minimum. | [
"Theory",
"Natural language processing",
"Reinforcement Learning",
"Transfer Learning"
] | https://openreview.net/pdf?id=Syh_o0pPx | BkDRV09qx | comment | 1,488,803,022,763 | Syh_o0pPx | [
"everyone"
] | [
"~Marcus_Abundis1"
] | title: A Few Thoughts . . .
comment: I wish to offer thoughts on your CommAI proposal as a ‘top-down view, deriving their requirements from psychological or mathematical considerations’ (sec. 3). The gist of my note asserts your worthy project is not sufficient to describe or explain ‘general intelligence’. For example, a criticism is that I see no balanced consideration of bottoms-up facets. A purely *top-down* CommAI seems not only essentially anthropocentric, but also largely symbol based (semiotic), Anglocentric, and narrowly denotative (versus connotative). Each *qualifier* lessens the range of what might be seen as ‘general’ – which, you may then agree, is general only in a very narrow sense(?).
This leaves me wondering about ‘requirements derived from psychological or mathematical (or even ‘simple physics tasks’ [sec 2]) considerations’. This seems like a large leap from foundational traits (physics, etc.) to more semiotic roles (CommAI environment), entailing many unexamined assumptions/details. I see your effort to address some that innate ambiguity in the appendix, but it (again) seems largely semiotic in nature, and is thus unsatisfying. The key issue I see here lies in how narrowly or widely defined the ‘environment’ is, within which proposals are tested. For example, game environments are too narrow to compare with ‘general uncontrolled variables’ that typify much of our daily environs. Yes, practical limits are needed as a starting point (a bounded rationale), but is the essentially semiotic view you offer the best starting point?
Alternatively, I advocate for a functionalist foundation (true bottoms-up) as a needed complement, or even a precursor, to framing a true general intelligence. As I understand ICLR is not the correct venue for exploring general intelligence, I would appreciate thoughts you may wish to share (e.g., re appropriate venues) on this matter, and/or on the bottoms-up proposal offered in ‘A Priori Modeling of Information and Intelligence’. Regardless, I wish you the best with this worthy project! |
Syh_o0pPx | CommAI: Evaluating the first steps towards a useful general AI | [
"Marco Baroni",
"Armand Joulin",
"Allan Jabri",
"Germàn Kruszewski",
"Angeliki Lazaridou",
"Klemen Simonic",
"Tomas Mikolov"
] | With machine learning successfully applied to new daunting problems almost every day, general AI starts looking like an attainable goal. However, most current research focuses instead on important but narrow applications, such as image classification or machine translation. We believe this to be largely due to the lack of objective ways to measure progress towards broad machine intelligence. In order to fill this gap, we propose here a set of concrete desiderata for general AI, together with a platform to test machines on how well they satisfy such desiderata, while keeping all further complexities to a minimum. | [
"Theory",
"Natural language processing",
"Reinforcement Learning",
"Transfer Learning"
] | https://openreview.net/pdf?id=Syh_o0pPx | Hy4pclAce | comment | 1,489,009,339,621 | HkY_ubTqx | [
"everyone"
] | [
"~Marco_Baroni1"
] | title: further comments
comment: More semantic tasks: I'd say e.g., some of the association and navigation tasks already implemented in our CommAI-env environment might be more "semantic" in the sense you mean. However, for me the CommAI-mini tasks are already semantic, in the sense that there are symbols (the regexps) referring to sets (the string sets).
We try to stick to bits as they allow a maximally agnostic interface (and to define tasks with no added complexity whatsoever), but system developers could certainly implement a BIOS into their system. One thing is the input/output channel, another the constraints one might impose on the "perceptual system" of the learner, so to speak.
Finally, we do hope ICLR will accept papers addressing general intelligence, especially as they encourage position papers for the workshop track... Representation learning seems like a core component of any general AI, and, conversely, moving towards more general AIs is a core reason to develop better representation learning methods.
|
Syh_o0pPx | CommAI: Evaluating the first steps towards a useful general AI | [
"Marco Baroni",
"Armand Joulin",
"Allan Jabri",
"Germàn Kruszewski",
"Angeliki Lazaridou",
"Klemen Simonic",
"Tomas Mikolov"
] | With machine learning successfully applied to new daunting problems almost every day, general AI starts looking like an attainable goal. However, most current research focuses instead on important but narrow applications, such as image classification or machine translation. We believe this to be largely due to the lack of objective ways to measure progress towards broad machine intelligence. In order to fill this gap, we propose here a set of concrete desiderata for general AI, together with a platform to test machines on how well they satisfy such desiderata, while keeping all further complexities to a minimum. | [
"Theory",
"Natural language processing",
"Reinforcement Learning",
"Transfer Learning"
] | https://openreview.net/pdf?id=Syh_o0pPx | ryoRsTh9l | comment | 1,488,931,794,624 | Byewaz2cx | [
"everyone"
] | [
"~Marco_Baroni1"
] | title: thanks for the further comments
comment: I'll briefly answer to some of the further points you raise...
* More details on the leading intuition
We would like to develop an AI that could be helpful to humans by receiving instructions through natural language interaction, and being able to perform them, even if they require them new skills that they did not encounter before.
* Syntax vs semantics
We do not agree that the CommAI-mini tasks are purely syntactic. You can see the regular expressions as words denoting stringsets, and the corresponding stringsets as their denotations. It would also be interesting to extend a similar approach to other domains where there is more explicit grounding, e.g., reasoning about simple geometric shapes.
* Domains
We fully agree that domains should not be established in a top-down way but they should, if useful, implicitly learned by the machine (if this was your point).
* Bit-to-bit
We simply mean that, even if we graphically display ASCII characters as such, the real input to the machine will be one bit at the time, and the same for the machine output (with no assumption that the machine will already know about ASCII or other encoding systems).
|
Syh_o0pPx | CommAI: Evaluating the first steps towards a useful general AI | [
"Marco Baroni",
"Armand Joulin",
"Allan Jabri",
"Germàn Kruszewski",
"Angeliki Lazaridou",
"Klemen Simonic",
"Tomas Mikolov"
] | With machine learning successfully applied to new daunting problems almost every day, general AI starts looking like an attainable goal. However, most current research focuses instead on important but narrow applications, such as image classification or machine translation. We believe this to be largely due to the lack of objective ways to measure progress towards broad machine intelligence. In order to fill this gap, we propose here a set of concrete desiderata for general AI, together with a platform to test machines on how well they satisfy such desiderata, while keeping all further complexities to a minimum. | [
"Theory",
"Natural language processing",
"Reinforcement Learning",
"Transfer Learning"
] | https://openreview.net/pdf?id=Syh_o0pPx | SJjGPlbje | comment | 1,489,205,011,448 | H1pG2k-sg | [
"everyone"
] | [
"~Marco_Baroni1"
] | title: PS
comment: Interestingly, GoodAI has now implemented our tasks as part of their general AI challenge: https://www.general-ai-challenge.org/ This should soon tell us whether the tasks are indeed solvable with existing techniques. |
Syh_o0pPx | CommAI: Evaluating the first steps towards a useful general AI | [
"Marco Baroni",
"Armand Joulin",
"Allan Jabri",
"Germàn Kruszewski",
"Angeliki Lazaridou",
"Klemen Simonic",
"Tomas Mikolov"
] | With machine learning successfully applied to new daunting problems almost every day, general AI starts looking like an attainable goal. However, most current research focuses instead on important but narrow applications, such as image classification or machine translation. We believe this to be largely due to the lack of objective ways to measure progress towards broad machine intelligence. In order to fill this gap, we propose here a set of concrete desiderata for general AI, together with a platform to test machines on how well they satisfy such desiderata, while keeping all further complexities to a minimum. | [
"Theory",
"Natural language processing",
"Reinforcement Learning",
"Transfer Learning"
] | https://openreview.net/pdf?id=Syh_o0pPx | rJKbBmYsg | comment | 1,489,741,056,682 | Hy4pclAce | [
"everyone"
] | [
"~Marcus_Abundis1"
] | title: Semantics redux
comment: > . . . CommAI-mini tasks are already semantic [with} . . . symbols (the regexps) referring to sets (the string sets)<
• As *symbols* are involved, for me, this means meaningful interpretation of symbols is needed at some point – where the *interpreter* is, in fact, the intelligent agent (programmer/pre-ascribed values, in this case[?]). This means I would say your method is NOT innately semantic or conveying 'an intelligence' beyond what is programmed. This is a tricky area as the syntactic and semantic become entangled across simple-to-complex roles. Thus, trying to debate/parse this matter in a forum like this is pointless. Yes, *some* innately semantic aspects are always inherently entailed, but . . . (further depending on the level of analysis used and the project's ultimate aims . . .)
> . . . maximally agnostic interface <
• Yes, a worthy aim. But in any case, as you point out, always limited by the platform's innate architecture and capacities. (i.e., *not* truly general). As such, I take a more purely informational approach to minimize innate platform issues. But still, at some point such limits must always be seen as somehow present . . .
> . . . papers addressing general intelligence <
• For your information I came across this, with a May 5 submission deadline.
http://users.dsic.upv.es/~flip/EGPAI2017/#call4papers
> Representation learning seems like a core component of any general AI . . .<
• Easily agreed! In fact, it is THE central defining characteristic I think.
• I too saw the AI Roadmap project (and the CommAI inclusion) and thought it quite interesting. But the effort also seems very early in its formation. I will explore it further.
Best of luck with your project! |
Syh_o0pPx | CommAI: Evaluating the first steps towards a useful general AI | [
"Marco Baroni",
"Armand Joulin",
"Allan Jabri",
"Germàn Kruszewski",
"Angeliki Lazaridou",
"Klemen Simonic",
"Tomas Mikolov"
] | With machine learning successfully applied to new daunting problems almost every day, general AI starts looking like an attainable goal. However, most current research focuses instead on important but narrow applications, such as image classification or machine translation. We believe this to be largely due to the lack of objective ways to measure progress towards broad machine intelligence. In order to fill this gap, we propose here a set of concrete desiderata for general AI, together with a platform to test machines on how well they satisfy such desiderata, while keeping all further complexities to a minimum. | [
"Theory",
"Natural language processing",
"Reinforcement Learning",
"Transfer Learning"
] | https://openreview.net/pdf?id=Syh_o0pPx | rJrqi2gig | official_review | 1,489,189,773,265 | Syh_o0pPx | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper2/AnonReviewer2"
] | title: Review
rating: 6: Marginally above acceptance threshold
review: This paper proposes a method for evaluating the capacity of a learning algorithm to function as a "general AI". The proposal consists of two pieces:
- high-level desiderata for a general AI: namely, the ability to efficiently learn multiple tasks with shared structure from natural language guidance provided as a generic bit stream.
- a concrete task for investigating these desiderata: namely, membership and sampling queries for regular languages specified by regular expressions.
I'm honestly not sure what to make of this paper. It's very much in the same spirit as the earlier bAbI tasks, which, while well-intentioned, I think have done active harm to the AI research community by making it socially acceptable to develop methods for toy problems without ever verifying that they actually scale-up to real-world tasks. To use the dichotomy the authors introduce in section 3, the problem is that essentially all meaningful progress in the field has come from "bottom-up" approaches: "top-down" approaches have a poor track record of scaling up, while the set of challenging reasoning problems solved by "bottom-up" approaches continues to grow.
On the other hand, I recognize that mine may no longer be a mainstream position, and that in any case it's unfair to downgrade a position paper because I disagree with the position. It's probably healthy for the community to have this discussion, and the ICLR crowd perhaps needs it most of all.
But I do think the position could be better defended. In particular: we already know that:
1. It's easy for RNNs to sample from and query regular languages (learned by example---I don't actually know of work on starting from symbolic REs as done here)
2. It's certainly possible to learn from this mixed RL / text-based supervision condition (e.g. Weston 2016).
These two things together make up the whole task. So it's not obvious that we can't solve it by throwing generic RNN machinery at it, and the burden is on the authors of a new task to show that it can't already be solved using state-of-the-art methods. The paper's claim that "We hope the CommAI-mini challenge is at the right level of complexity to stimulate researchers to develop genuinely new models" would be much stronger if it already demonstrated that genuinely new models are required; right now that demonstration is missing.
Now suppose we do solve this task (with whatever model). What have we learned? Just that it's possible to quickly learn how to work with regular languages? What notion of compositionality is present in this task and not, e.g. natural language parsing? What have we learned about "learning to learn" that's different from learning good gradient descent algorithms, initializers, RL with natural language instructions, etc.?
confidence: 3: The reviewer is fairly confident that the evaluation is correct |
Syh_o0pPx | CommAI: Evaluating the first steps towards a useful general AI | [
"Marco Baroni",
"Armand Joulin",
"Allan Jabri",
"Germàn Kruszewski",
"Angeliki Lazaridou",
"Klemen Simonic",
"Tomas Mikolov"
] | With machine learning successfully applied to new daunting problems almost every day, general AI starts looking like an attainable goal. However, most current research focuses instead on important but narrow applications, such as image classification or machine translation. We believe this to be largely due to the lack of objective ways to measure progress towards broad machine intelligence. In order to fill this gap, we propose here a set of concrete desiderata for general AI, together with a platform to test machines on how well they satisfy such desiderata, while keeping all further complexities to a minimum. | [
"Theory",
"Natural language processing",
"Reinforcement Learning",
"Transfer Learning"
] | https://openreview.net/pdf?id=Syh_o0pPx | Hya-dKTsx | comment | 1,490,028,548,624 | Syh_o0pPx | [
"everyone"
] | [
"ICLR.cc/2017/pcs"
] | decision: Accept
title: ICLR committee final decision |
Syh_o0pPx | CommAI: Evaluating the first steps towards a useful general AI | [
"Marco Baroni",
"Armand Joulin",
"Allan Jabri",
"Germàn Kruszewski",
"Angeliki Lazaridou",
"Klemen Simonic",
"Tomas Mikolov"
] | With machine learning successfully applied to new daunting problems almost every day, general AI starts looking like an attainable goal. However, most current research focuses instead on important but narrow applications, such as image classification or machine translation. We believe this to be largely due to the lack of objective ways to measure progress towards broad machine intelligence. In order to fill this gap, we propose here a set of concrete desiderata for general AI, together with a platform to test machines on how well they satisfy such desiderata, while keeping all further complexities to a minimum. | [
"Theory",
"Natural language processing",
"Reinforcement Learning",
"Transfer Learning"
] | https://openreview.net/pdf?id=Syh_o0pPx | HkY_ubTqx | comment | 1,488,947,313,262 | ryoRsTh9l | [
"everyone"
] | [
"~Marcus_Abundis1"
] | title: syntax vs. semantics . . .
comment: 'reasoning about simple geometric shapes' – yes, this seems like a good initial base, also emphasized in my own modeling. Could you point to specific CommAI tasks that you feel are most plainly semantic in nature? I hope to better grasp your view of semantic tasks.
'no assumption that the machine will already know about ASCII or other encoding systems' – This implies a platform with no BIOS(?). Again, this is puzzling and I am unsure of how/why it applies to, or carries weight in, your larger project.
Lastly, few proposals here (ICLR) address *general intelligence*. I would appreciate your thoughts on more useful venues/forums for directly addressing this topic.
Thank you, |
rJeYrsEYg | Unsupervised Feature Learning for Audio Analysis | [
"Matthias Meyer",
"Jan Beutel",
"Lothar Thiele"
] | Identifying acoustic events from a continuously streaming audio source is of interest for many applications including environmental monitoring for basic research. In this scenario neither different event classes are known nor what distinguishes one class from another. Therefore, an unsupervised feature learning method for exploration of audio data is presented in this paper. It incorporates the two following novel contributions: First, an audio frame predictor based on a Convolutional LSTM autoencoder is demonstrated, which is used for unsupervised feature extraction. Second, a training method for autoencoders is presented, which leads to distinct features by amplifying event similarities. In comparison to standard approaches, the features extracted from the audio frame predictor trained with the novel approach show 13 % better results when used with a classifier and 36 % better results when used for clustering. | [] | https://openreview.net/pdf?id=rJeYrsEYg | rJ8yzarjg | comment | 1,489,519,069,696 | S1Lv_ZMjx | [
"everyone"
] | [
"~Matthias_Meyer1"
] | title: Response
comment: Thank you very much for the valuable feedback.
In contrast to images or videos, acoustic events are almost solely characterized by temporal changes. Considering this temporal change is necessary for a good classification (see references in the paper) whereas much information about a video can already be identified from still images. The predictive autoencoder was used instead of a normal autoencoder to exploit this time dependency and has shown better results. These experiments are not part of the current version of the paper due to the page limit.
The submitted paper reflects the state of the work and its core idea and thus has been submitted to the Workshop Track. Therefore the comparison to other methods is missing but indeed necessary for a full evaluation of the proposed method. This is being worked on at the moment. However, we see a fundamental difference to the mentioned approaches (VAE, ladder networks). Due to the pairwise loss an inter-sample comparison is achieved, while the mentioned methods only optimize for the current input sample. From the paper it can be seen that due to this inter-sample comparison we can not only extract features but can make them distinct, which helps for the intended exploration of a dataset. Having said this, a comparison to these other approaches will definitely strengthen the paper.
The dataset has been chosen to be close to the designated application. Therefore key aspects of the presented work rely on the specific application scenario (e.g. time-dependency, variety of sound sources). Available references like for the TIMIT dataset are not suitable for a fair comparison, since the applied algorithms are optimized for speech/phonetic classification while our proposed approach is designed to work for general audio analysis without prior knowledge.
Despite its relation to the application the used AED dataset has been chosen because it contains a large number of samples per category (around 20 minutes/category), which is beneficial to train the network, whereas ESC-50 ( https://github.com/karoldvl/ESC-50 ) and DCASE2016 ( http://www.cs.tut.fi/sgn/arg/dcase2016/ ) contain less training samples (3 minutes/category and <1 minute/category, respectively). Therefore a meaningful comparison between the different datasets with the settings from the current paper was not possible. However, the recently released AudioSet ( https://research.google.com/audioset/ ) can fill this gap. |
rJeYrsEYg | Unsupervised Feature Learning for Audio Analysis | [
"Matthias Meyer",
"Jan Beutel",
"Lothar Thiele"
] | Identifying acoustic events from a continuously streaming audio source is of interest for many applications including environmental monitoring for basic research. In this scenario neither different event classes are known nor what distinguishes one class from another. Therefore, an unsupervised feature learning method for exploration of audio data is presented in this paper. It incorporates the two following novel contributions: First, an audio frame predictor based on a Convolutional LSTM autoencoder is demonstrated, which is used for unsupervised feature extraction. Second, a training method for autoencoders is presented, which leads to distinct features by amplifying event similarities. In comparison to standard approaches, the features extracted from the audio frame predictor trained with the novel approach show 13 % better results when used with a classifier and 36 % better results when used for clustering. | [] | https://openreview.net/pdf?id=rJeYrsEYg | Hk2VdYTje | comment | 1,490,028,595,829 | rJeYrsEYg | [
"everyone"
] | [
"ICLR.cc/2017/pcs"
] | decision: Accept
title: ICLR committee final decision |
rJeYrsEYg | Unsupervised Feature Learning for Audio Analysis | [
"Matthias Meyer",
"Jan Beutel",
"Lothar Thiele"
] | Identifying acoustic events from a continuously streaming audio source is of interest for many applications including environmental monitoring for basic research. In this scenario neither different event classes are known nor what distinguishes one class from another. Therefore, an unsupervised feature learning method for exploration of audio data is presented in this paper. It incorporates the two following novel contributions: First, an audio frame predictor based on a Convolutional LSTM autoencoder is demonstrated, which is used for unsupervised feature extraction. Second, a training method for autoencoders is presented, which leads to distinct features by amplifying event similarities. In comparison to standard approaches, the features extracted from the audio frame predictor trained with the novel approach show 13 % better results when used with a classifier and 36 % better results when used for clustering. | [] | https://openreview.net/pdf?id=rJeYrsEYg | S1Lv_ZMjx | official_review | 1,489,274,974,046 | rJeYrsEYg | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper96/AnonReviewer1"
] | rating: 5: Marginally below acceptance threshold
review: This paper presents a convLSTM based audio frame prediction approach as a method of unsupervised learning of representative audio features. Proposed model is trained using a combination of mean squared error as well as a pair wise similarity measure. Model and the training approach are evaluated on the task of audio event classification.
While the combination of the ideas is novel, the individual elements model and training approach are previously known. It is also not intuitively clear why a predictive auto-encoding would be a good unsupervised feature learning approach for the task of audio event classification, thus I’d like to see comparisons with some well known basic unsupervised feature learning approaches (e.g. VAE, ladder networks, etc.).
Results are presented on an audio event detection dataset which is relatively new and not many reference comparisons are available. To make the paper stronger I’d also advise authors provide comparisons with other known results on this task, and also to apply their feature learning approach to other well established sound classification tasks (e.g. phone classification in TIMIT).
Overall I feel the paper is not strong enough in current shape for ICLR.
confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
rJeYrsEYg | Unsupervised Feature Learning for Audio Analysis | [
"Matthias Meyer",
"Jan Beutel",
"Lothar Thiele"
] | Identifying acoustic events from a continuously streaming audio source is of interest for many applications including environmental monitoring for basic research. In this scenario neither different event classes are known nor what distinguishes one class from another. Therefore, an unsupervised feature learning method for exploration of audio data is presented in this paper. It incorporates the two following novel contributions: First, an audio frame predictor based on a Convolutional LSTM autoencoder is demonstrated, which is used for unsupervised feature extraction. Second, a training method for autoencoders is presented, which leads to distinct features by amplifying event similarities. In comparison to standard approaches, the features extracted from the audio frame predictor trained with the novel approach show 13 % better results when used with a classifier and 36 % better results when used for clustering. | [] | https://openreview.net/pdf?id=rJeYrsEYg | ry_pSceie | official_review | 1,489,180,095,608 | rJeYrsEYg | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper96/AnonReviewer2"
] | title: review
rating: 5: Marginally below acceptance threshold
review: This paper combines several ideas, ConvLSTM autoencoders and a pairwise lose. The idea is to do sound classification/clustering.
I feel this paper is more suited towards the signal processing community (i.e., ICASSP/INTERSPEECH). The main problem I have with this paper/task it seems too specific and there isn't enough core-ML contributions for this round of ICLR workshop acceptance. Sequence autoencoders (see Dai et al.,) and ConvLSTM (as cited by authors Zhang et al.,) and pair wise losses (see SIGIR) are not new. Merging all these ideas together is a contribution, but I am not sure it would generate a lot of interest in the ICLR community.
Note:
This reviewer is unfamiliar w/ the "acoustic event dataset (AED) from Takahashi et al. (2016)" used in evaluation.
Citations Missing:
https://arxiv.org/pdf/1511.01432.pdf (for sequence autoencoders which this model is quite similar).
confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
rJeYrsEYg | Unsupervised Feature Learning for Audio Analysis | [
"Matthias Meyer",
"Jan Beutel",
"Lothar Thiele"
] | Identifying acoustic events from a continuously streaming audio source is of interest for many applications including environmental monitoring for basic research. In this scenario neither different event classes are known nor what distinguishes one class from another. Therefore, an unsupervised feature learning method for exploration of audio data is presented in this paper. It incorporates the two following novel contributions: First, an audio frame predictor based on a Convolutional LSTM autoencoder is demonstrated, which is used for unsupervised feature extraction. Second, a training method for autoencoders is presented, which leads to distinct features by amplifying event similarities. In comparison to standard approaches, the features extracted from the audio frame predictor trained with the novel approach show 13 % better results when used with a classifier and 36 % better results when used for clustering. | [] | https://openreview.net/pdf?id=rJeYrsEYg | S1gMz6rjl | comment | 1,489,519,111,930 | ry_pSceie | [
"everyone"
] | [
"~Matthias_Meyer1"
] | title: Response
comment: Thank you for your review and your feedback. The missing citation will be corrected in the next paper revision.
We understand your point that the paper is quite specific in its application, which might not be the preferred application for some people at ICLR, but we submitted the paper to ICLR Workshop Track because the conference topics in the Call for Abstracts include
+ Unsupervised, semi-supervised, and supervised representation learning
+ Applications in vision, audio, speech, natural language processing, robotics, neuroscience, or any other field |
rJNa3C4Yg | Performance guarantees for transferring representations | [
"Daniel McNamara",
"Maria-Florina Balcan"
] | A popular machine learning strategy is the transfer of a representation (i.e. a feature extraction function) learned on a source task to a target task. Examples include the re-use of neural network weights or word embeddings. Our work proposes novel and general sufficient conditions for the success of this approach. If the representation learned from the source task is fixed, we identify conditions on how the tasks relate to obtain an upper bound on target task risk via a VC dimension-based argument. We then consider using the representation from the source task to construct a prior, which is fine-tuned using target task data. We give a PAC-Bayes target task risk bound in this setting under suitable conditions. We show examples of our bounds using feedforward neural networks. Our results motivate a practical approach to weight sharing, which we validate with experiments. | [
"Theory",
"Transfer Learning"
] | https://openreview.net/pdf?id=rJNa3C4Yg | SJWBkqmie | official_review | 1,489,375,032,756 | rJNa3C4Yg | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper124/AnonReviewer1"
] | title: reasonable first step for an important problem
rating: 7: Good paper, accept
review: The paper provides generalization bounds for a common practice in transfer learning with deep neural nets, where the representation learned on a source task (having lot of labeled data) is transferred to a target task. It analyzes two settings: (i) when representation learned on the source is kept fixed and a new classifier for the target task is learned on top of it, (ii) when the representation is also fine-tuned for the target task. To the best of my knowledge it seems to be the first work to analyze this setting.
Pros:
- considers an important problem
- well-written paper
Cons:
- hard to say if the proof techniques used are novel -- not enough details
- doesn't give much intuition on when is fine tuning better than fixed representation
confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
rJNa3C4Yg | Performance guarantees for transferring representations | [
"Daniel McNamara",
"Maria-Florina Balcan"
] | A popular machine learning strategy is the transfer of a representation (i.e. a feature extraction function) learned on a source task to a target task. Examples include the re-use of neural network weights or word embeddings. Our work proposes novel and general sufficient conditions for the success of this approach. If the representation learned from the source task is fixed, we identify conditions on how the tasks relate to obtain an upper bound on target task risk via a VC dimension-based argument. We then consider using the representation from the source task to construct a prior, which is fine-tuned using target task data. We give a PAC-Bayes target task risk bound in this setting under suitable conditions. We show examples of our bounds using feedforward neural networks. Our results motivate a practical approach to weight sharing, which we validate with experiments. | [
"Theory",
"Transfer Learning"
] | https://openreview.net/pdf?id=rJNa3C4Yg | r1RO6jBse | comment | 1,489,513,846,334 | ByVRBv7jl | [
"everyone"
] | [
"~Daniel_McNamara1"
] | title: Response to AnonReviewer2
comment: Thank you for the review and the thoughtful feedback. We have made amendments to the paper which address your suggestions.
We have made a few changes to the explanations given for each of the theorems to enhance the readability and clarity of the paper. We have also made a couple of refinements to the paper's notation.
While the function \omega in Theorem 1 is necessarily abstract, we have added wording describing the role it plays, explaining why it is a necessary assumption and pointing the reader to the example \omega in Theorem 2.
We have provided greater explanation in Section 4 about why assuming lower level features are more transferrable is reasonable in relevant applications.
We have tightened the language describing Theorem 1 to remove the "vague terms" that you mentioned. |
rJNa3C4Yg | Performance guarantees for transferring representations | [
"Daniel McNamara",
"Maria-Florina Balcan"
] | A popular machine learning strategy is the transfer of a representation (i.e. a feature extraction function) learned on a source task to a target task. Examples include the re-use of neural network weights or word embeddings. Our work proposes novel and general sufficient conditions for the success of this approach. If the representation learned from the source task is fixed, we identify conditions on how the tasks relate to obtain an upper bound on target task risk via a VC dimension-based argument. We then consider using the representation from the source task to construct a prior, which is fine-tuned using target task data. We give a PAC-Bayes target task risk bound in this setting under suitable conditions. We show examples of our bounds using feedforward neural networks. Our results motivate a practical approach to weight sharing, which we validate with experiments. | [
"Theory",
"Transfer Learning"
] | https://openreview.net/pdf?id=rJNa3C4Yg | rywlTsSol | comment | 1,489,513,710,972 | SJWBkqmie | [
"everyone"
] | [
"~Daniel_McNamara1"
] | title: Response to AnonReviewer1
comment: Thank you for the review and the thoughtful feedback. We have made amendments to the paper which address your suggestions.
We have made a few amendments to the explanations given for each of the theorems to provide additional insights about them. We have also highlighted the novelty of the work, in particular the generality of the sufficient conditions (now mentioned in the abstract) and the arguments used in the neural network example proofs (now mentioned before statement of Theorem 2). Separate to this submission, we have also written a longer paper which includes all proofs.
We have also added a sentence to the third paragraph of the introduction to provide more comparison of the pros and cons of a fixed representation vs fine-tuning. |
rJNa3C4Yg | Performance guarantees for transferring representations | [
"Daniel McNamara",
"Maria-Florina Balcan"
] | A popular machine learning strategy is the transfer of a representation (i.e. a feature extraction function) learned on a source task to a target task. Examples include the re-use of neural network weights or word embeddings. Our work proposes novel and general sufficient conditions for the success of this approach. If the representation learned from the source task is fixed, we identify conditions on how the tasks relate to obtain an upper bound on target task risk via a VC dimension-based argument. We then consider using the representation from the source task to construct a prior, which is fine-tuned using target task data. We give a PAC-Bayes target task risk bound in this setting under suitable conditions. We show examples of our bounds using feedforward neural networks. Our results motivate a practical approach to weight sharing, which we validate with experiments. | [
"Theory",
"Transfer Learning"
] | https://openreview.net/pdf?id=rJNa3C4Yg | ByVRBv7jl | official_review | 1,489,364,428,235 | rJNa3C4Yg | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper124/AnonReviewer2"
] | title: A new theoretical angle into transfer learning
rating: 6: Marginally above acceptance threshold
review: This paper proposes sufficient conditions for success of transfer learning
pros:
originality: To the best of my knowledge, this work is original
significance: transfer learning has lead to considerable improvement in deep learning and a theoretical approach for formulating when and how it succeeds is very important and much needed
cons.
clarity: the paper is not completely well-written and in places hard to follow
quality: overall, I like this paper due to the problem it considers and its approach, however, the paper would improve significantly with filling in the gaps mentioned below:
* The authors provide no intuition or insight into how the bound is derived and what the different terms mean, e.g., how does function w look in practice for different datasets? what are the ways to measure or approximate it?
* The assumptions are given without any justification of why they are needed and what are the cases that the hold. e.g., The property that is assumed in Theorem 1. Section 4: assuming that lower level features are more transferrable.
* There are some vague terms used right before Theorem 1 that are not appropriate for a theory paper: If w does not grow too "quickly", \hat{R} is "small", etc
confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
rJNa3C4Yg | Performance guarantees for transferring representations | [
"Daniel McNamara",
"Maria-Florina Balcan"
] | A popular machine learning strategy is the transfer of a representation (i.e. a feature extraction function) learned on a source task to a target task. Examples include the re-use of neural network weights or word embeddings. Our work proposes novel and general sufficient conditions for the success of this approach. If the representation learned from the source task is fixed, we identify conditions on how the tasks relate to obtain an upper bound on target task risk via a VC dimension-based argument. We then consider using the representation from the source task to construct a prior, which is fine-tuned using target task data. We give a PAC-Bayes target task risk bound in this setting under suitable conditions. We show examples of our bounds using feedforward neural networks. Our results motivate a practical approach to weight sharing, which we validate with experiments. | [
"Theory",
"Transfer Learning"
] | https://openreview.net/pdf?id=rJNa3C4Yg | SyRBdFTjx | comment | 1,490,028,613,912 | rJNa3C4Yg | [
"everyone"
] | [
"ICLR.cc/2017/pcs"
] | decision: Accept
title: ICLR committee final decision |
H1aKXVNKx | Predicting Surgery Duration with Neural Heteroscedastic Regression | [
"Nathan Ng",
"Rodney A Gabriel",
"Julian McAuley",
"Charles Elkan",
"Zachary C Lipton"
] | Scheduling surgeries is a challenging task due to the fundamental uncertainty of the clinical environment, as well as the risks and costs associated with under- and over-booking. We investigate neural regression algorithms to estimate the parameters of surgery case durations, focusing on the issue of heteroscedasticity. We seek to simultaneously estimate the duration of each surgery, as well as a surgery-specific notion of our uncertainty about its duration. Estimating this uncertainty can lead to more nuanced and effective scheduling strategies, as we are able to schedule surgeries more efficiently while allowing an informed and case-specific margin of error. Using surgery records from the UC San Diego Health System, we demonstrate potential improvements on the order of 18% (in terms of minutes overbooked) compared to current scheduling techniques, as well as strong baselines that do not account for heteroscedasticity. | [
"Deep learning",
"Supervised Learning"
] | https://openreview.net/pdf?id=H1aKXVNKx | rJ45F7usl | official_review | 1,489,676,684,515 | H1aKXVNKx | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper70/AnonReviewer3"
] | title: good motivation but incremental improvements
rating: 5: Marginally below acceptance threshold
review: This paper proposes the use of an MLP that predicts both mean and std for predicting the time of a surgical operation.
They extend the method also to Laplace distribution.
The method is simple, not novel, but the combination of the method and the application is novel.
What worries me is the marginal improvements reported in table 1. Most of the improvement comes from the use of an MLP, more than the prediction of the variance - see the difference between Gaussian and Gaussian HS, and Laplace and Laplace HS.
My conclusion is that the choice of the distribution/loss in conjunction with the use of an MLP is more important than anything else, and in particular, it is more important than predicting variance (which is the main point of the abstract).
confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
H1aKXVNKx | Predicting Surgery Duration with Neural Heteroscedastic Regression | [
"Nathan Ng",
"Rodney A Gabriel",
"Julian McAuley",
"Charles Elkan",
"Zachary C Lipton"
] | Scheduling surgeries is a challenging task due to the fundamental uncertainty of the clinical environment, as well as the risks and costs associated with under- and over-booking. We investigate neural regression algorithms to estimate the parameters of surgery case durations, focusing on the issue of heteroscedasticity. We seek to simultaneously estimate the duration of each surgery, as well as a surgery-specific notion of our uncertainty about its duration. Estimating this uncertainty can lead to more nuanced and effective scheduling strategies, as we are able to schedule surgeries more efficiently while allowing an informed and case-specific margin of error. Using surgery records from the UC San Diego Health System, we demonstrate potential improvements on the order of 18% (in terms of minutes overbooked) compared to current scheduling techniques, as well as strong baselines that do not account for heteroscedasticity. | [
"Deep learning",
"Supervised Learning"
] | https://openreview.net/pdf?id=H1aKXVNKx | rkpmOKpsg | comment | 1,490,028,580,875 | H1aKXVNKx | [
"everyone"
] | [
"ICLR.cc/2017/pcs"
] | decision: Reject
title: ICLR committee final decision |
H1aKXVNKx | Predicting Surgery Duration with Neural Heteroscedastic Regression | [
"Nathan Ng",
"Rodney A Gabriel",
"Julian McAuley",
"Charles Elkan",
"Zachary C Lipton"
] | Scheduling surgeries is a challenging task due to the fundamental uncertainty of the clinical environment, as well as the risks and costs associated with under- and over-booking. We investigate neural regression algorithms to estimate the parameters of surgery case durations, focusing on the issue of heteroscedasticity. We seek to simultaneously estimate the duration of each surgery, as well as a surgery-specific notion of our uncertainty about its duration. Estimating this uncertainty can lead to more nuanced and effective scheduling strategies, as we are able to schedule surgeries more efficiently while allowing an informed and case-specific margin of error. Using surgery records from the UC San Diego Health System, we demonstrate potential improvements on the order of 18% (in terms of minutes overbooked) compared to current scheduling techniques, as well as strong baselines that do not account for heteroscedasticity. | [
"Deep learning",
"Supervised Learning"
] | https://openreview.net/pdf?id=H1aKXVNKx | B1PJU9gix | official_review | 1,489,180,127,449 | H1aKXVNKx | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper70/AnonReviewer1"
] | title: Review
rating: 5: Marginally below acceptance threshold
review: Summary:
This work models the distribution of surgery durations using unimodal
parameteric distributions (viz., Gaussian and Laplace) by regressing their
parameters using multi-layer perceptrons based on patient and clinical
environment attributes.
Using the uncertainty (or standard-deviation) estimates, they report
improvements of 18% in scheduling surgeries.
This is the first application of heteroscedastic neural regression to clinical
medical data.
Comments:
1. In Table 1, it is not clear what "Current Method" corresponds to.
Assessment:
Clarity:
The method has been presented clearly, with all the details to reproduce the
results (although it is not clear if the medical data is publicly available).
Novelty & Significance:
The method presented uses multi-layer perceptrons (MLPs) for regressing
parameters of univariate unimodal parametric distributions, which is not quite
novel, and is a simple application MLP to this specific domain (clinical medical
data).
confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
H1aKXVNKx | Predicting Surgery Duration with Neural Heteroscedastic Regression | [
"Nathan Ng",
"Rodney A Gabriel",
"Julian McAuley",
"Charles Elkan",
"Zachary C Lipton"
] | Scheduling surgeries is a challenging task due to the fundamental uncertainty of the clinical environment, as well as the risks and costs associated with under- and over-booking. We investigate neural regression algorithms to estimate the parameters of surgery case durations, focusing on the issue of heteroscedasticity. We seek to simultaneously estimate the duration of each surgery, as well as a surgery-specific notion of our uncertainty about its duration. Estimating this uncertainty can lead to more nuanced and effective scheduling strategies, as we are able to schedule surgeries more efficiently while allowing an informed and case-specific margin of error. Using surgery records from the UC San Diego Health System, we demonstrate potential improvements on the order of 18% (in terms of minutes overbooked) compared to current scheduling techniques, as well as strong baselines that do not account for heteroscedasticity. | [
"Deep learning",
"Supervised Learning"
] | https://openreview.net/pdf?id=H1aKXVNKx | SkLCopvoe | comment | 1,489,652,686,146 | B1PJU9gix | [
"everyone"
] | [
"~Zachary_Chase_Lipton1"
] | title: Thanks for the feedback
comment: Dear reviewer,
Thanks for the thoughtful feedback. We'd like to offer the following responses and clarifications:
Good catch that we didn't define "current method" in the extended abstract. The current method is the ad-hoc times that are currently entered into the system to reserve the rooms. These are the actual "human-expert" times predicted by the surgeons and administrators.
We'd like to point out that this work is more novel than the reviewer acknowledges. While several papers have proposed neural heteroscedastic regression, we are to our knowledge one of only two papers to revisit the idea in the context of modern deep learning (multiple hidden layers, rectifier activations, dropout regularization). Moreover, our paper is the only one, to our knowledge, to demonstrate the efficacy of neural heteroscedastic regression on a dataset of real-world importance. The other paper only tested the idea on generic UCI dataset and the classic papers address synthetic & toy problems.
We'd also like to let the reviewer know that we've gone a step further and improved the results by using gamma distributions. These are especially suited to our problem because the distribution of surgery durations can only be long-tailed on only one side (no surgery can take less than 0 minutes). The gamma predictive distribution indeed gives lower NLL than both Gaussian and Laplace. We plan to update the draft with these numbers and the relevant empirical analysis in the next week.
|
H1aKXVNKx | Predicting Surgery Duration with Neural Heteroscedastic Regression | [
"Nathan Ng",
"Rodney A Gabriel",
"Julian McAuley",
"Charles Elkan",
"Zachary C Lipton"
] | Scheduling surgeries is a challenging task due to the fundamental uncertainty of the clinical environment, as well as the risks and costs associated with under- and over-booking. We investigate neural regression algorithms to estimate the parameters of surgery case durations, focusing on the issue of heteroscedasticity. We seek to simultaneously estimate the duration of each surgery, as well as a surgery-specific notion of our uncertainty about its duration. Estimating this uncertainty can lead to more nuanced and effective scheduling strategies, as we are able to schedule surgeries more efficiently while allowing an informed and case-specific margin of error. Using surgery records from the UC San Diego Health System, we demonstrate potential improvements on the order of 18% (in terms of minutes overbooked) compared to current scheduling techniques, as well as strong baselines that do not account for heteroscedasticity. | [
"Deep learning",
"Supervised Learning"
] | https://openreview.net/pdf?id=H1aKXVNKx | H1fdpGtoe | comment | 1,489,739,114,375 | rJ45F7usl | [
"everyone"
] | [
"~Zachary_Chase_Lipton1"
] | title: Fixed mistake in NLL reporting fixed, please re-evaluate the draft
comment: Dear reviewer,
Thanks for taking the time to review our paper. The purpose of estimating the conditional variance is precisely to get *good estimates of the variance*.
The superiority (in this respect) of the predictions of the heteroscedastic models is born out by figures 1 and 2 which show just how strongly the predicted standard deviations correlate with observed errors.
We realize that some amount of the confusion owes to a bug in our initial reporting. The initial table 1 had a scaling bug in calculating NLL numbers. This reporting bug came to light when we were adding late-breaking results on a gamma predictive distribution. Thus the original table 1 didn't make it clear just how much the heteroscedastic models improve over their homoscedastic counterparts.
We've updated the draft with the fixed numbers and it's obvious that the heteroscedastic modeling fits the observed errors dramatically better.
We also added a line to the results table 1 showing results using a gamma predictive distribution, which slightly outperforms even the best heteroscedastic laplacian regression model. We hope you'll take the chance to re-assess the review. |
BJiMcB4Kl | Training Triplet Networks with GAN | [
"Maciej Zieba",
"Lei Wang"
] | Triplet networks are widely used models that are characterized by good performance in classification and retrieval tasks. In this work we propose to train a triplet network by putting it as the discriminator in Generative Adversarial Nets (GANs). We make use of the good capability of representation learning of the discriminator to increase the predictive quality of the model. We evaluated our approach on Cifar10 and MNIST datasets and observed significant improvement on the classification performance using the simple k-nn method. | [] | https://openreview.net/pdf?id=BJiMcB4Kl | Hy_LWhvsx | comment | 1,489,645,903,864 | BJiMcB4Kl | [
"everyone"
] | [
"~Maciej_Mateusz_Zieba1"
] | title: Answers to the stated questions
comment: Dear Reviewers,
we would like to thank you for your commends.
Below we present answers to the stated questions.
Q1: Are your experimental results directly comparable to the semi-supervised experiments in (Salimans et al, 2016)? If the main point of this paper is to “incorporate discriminator in a metric learning task instead of involving it in classification”, we should have that direct comparison.
A1: A direction comparison of our method with the semi-supervised classification in (Salimans et al, 2016) shows the following result: In (Salimans et al, 2016) the classification accuracy for Cifar10 is 81.27% (4000 labeled examples) and 82.27% (8000 labeled examples). Our method takes the penultimate layer of our triplet network model and obtains the classification accuracy 81.59% (5000 labeled examples) with a 9-nearest-neighbour classifier.
However, the major benefit of developing this Triplet-based approach is that it does not need to access class labels (as in (Salimans et al, 2016)). Instead, this approach only needs to access the relationship (similar or dissimilar) between some portion of training examples. In this kind of applications classification-based learning models will not work. In addition, our approach will produce a metric that could be applied to search, compare and rank data. This cannot be done effectively through a classifier as that learned in (Salimans et al, 2016).
To highlight the benefits of using our approach, we compared its retrieval performance with two alternatives via the criterion of mean average precision (mAP). Specifically, for Cifar10 our approach achieves mAP=0.6353. If only using triplet (without incorporating GAN), the result is mAP=0.5367; and if only using GAN, the result is mAP=0.2003. This comparison clearly shows the advantage of our approach on learning a better metric for search or retrieval tasks. By the way, we intended to compare with the classification model in (Salimans et al, 2016) in terms of mAP value. However, such result is not available in that work because it focuses on classification. We are now re-training their model to make this comparison.
Q2: Is triplet network a well-defined term? “Triplet networks are one of the most commonly used techniques in deep learning metric (Yao et al., 2016; Zhuang et al., 2016). “ It is not used in these two reference papers.
A2: As far as we have observed, the “triplet network” term has been used by Hoffer & Ailon (2015) (See the title). In the revised version we will cite this paper immediately after the use of “triplet network” to avoid confusion. On the other hand, by referring to (Yao et al., 2016; Zhuang et al., 2016), we just want to express that currently this kind of models have been widely applied in practical metric learning tasks in computer vision.
Q3: I also would like to see how the accuracies are sensitive to #labeled examples and #features. It’d be desired to add more experimental results.
A3: We agree that this kind of additional evaluation would be beneficial. Below we present the classification and retrieval results obtained on MNIST data (m-number of features, N-number of examples, 9-NN is used as classification model). As seen, the performance of our approach is relatively stable, and it improves with the increasing number of labeled examples and features. In addition, we are working on additional experiments for Cifar10. However, it takes more time than on MNIST. We are going to report the results on Cifar10 (including the mAP result mentioned at the end of A1) in Appendix to the extended abstract.
N=100 N=200 N=500 N=1000
m=16 accuracy 97.61% 98.50% 98.59% 98.86%
mAP 0.8929 0.9244 0.9588 0.9700
m=16 m=32 m=64 m=128 m=256
N=100 accuracy 97.61% 98.26% 98.31% 98.69% 98.65%
mAP 0.8929 0.9118 0.9056 0.9321 0.9414 |
BJiMcB4Kl | Training Triplet Networks with GAN | [
"Maciej Zieba",
"Lei Wang"
] | Triplet networks are widely used models that are characterized by good performance in classification and retrieval tasks. In this work we propose to train a triplet network by putting it as the discriminator in Generative Adversarial Nets (GANs). We make use of the good capability of representation learning of the discriminator to increase the predictive quality of the model. We evaluated our approach on Cifar10 and MNIST datasets and observed significant improvement on the classification performance using the simple k-nn method. | [] | https://openreview.net/pdf?id=BJiMcB4Kl | BJHbOCmox | official_review | 1,489,393,660,845 | BJiMcB4Kl | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper78/AnonReviewer2"
] | title: review
rating: 6: Marginally above acceptance threshold
review: This paper proposes to adopt a triplet network as the discriminator in the GANs.
Semi-supervised experiments on MNIST and CIFAR show that this approach can outperform either Triplet Network on labeled data or GANs on unlabeled data.
I am probably not the best reviewer for this paper, but I think this proposed approach could be interesting to ICLR audience and incline to accept this paper.
Question: Are your experimental results directly comparable to the semi-supervised experiments in (Salimans et al, 2016)? If the main point of this paper is to “incorporate discriminator in a metric learning task instead of involving it in classification”, we should have that direct comparison.
Question: is triplet network a well-defined term? “Triplet networks are one of the most commonly used techniques in deep learning metric (Yao et al., 2016; Zhuang et al., 2016). “ It is not used in these two reference papers.
I also would like to see how the accuracies are sensitive to #labeled examples and #features. It’d be desired to add more experimental results.
confidence: 2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper |
BJiMcB4Kl | Training Triplet Networks with GAN | [
"Maciej Zieba",
"Lei Wang"
] | Triplet networks are widely used models that are characterized by good performance in classification and retrieval tasks. In this work we propose to train a triplet network by putting it as the discriminator in Generative Adversarial Nets (GANs). We make use of the good capability of representation learning of the discriminator to increase the predictive quality of the model. We evaluated our approach on Cifar10 and MNIST datasets and observed significant improvement on the classification performance using the simple k-nn method. | [] | https://openreview.net/pdf?id=BJiMcB4Kl | SyFks7Wig | official_review | 1,489,218,273,192 | BJiMcB4Kl | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper78/AnonReviewer1"
] | rating: 7: Good paper, accept
review: This paper describes using triplet loss to train GAN, and obtained better feature, compared to original GAN and original triplet loss.
This work can be viewed as extension of Semi-supervised training with GAN by using triplet loss.
There is a clean gain through the CIFAR-10 experiment. So I recommend to accept.
confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
BJiMcB4Kl | Training Triplet Networks with GAN | [
"Maciej Zieba",
"Lei Wang"
] | Triplet networks are widely used models that are characterized by good performance in classification and retrieval tasks. In this work we propose to train a triplet network by putting it as the discriminator in Generative Adversarial Nets (GANs). We make use of the good capability of representation learning of the discriminator to increase the predictive quality of the model. We evaluated our approach on Cifar10 and MNIST datasets and observed significant improvement on the classification performance using the simple k-nn method. | [] | https://openreview.net/pdf?id=BJiMcB4Kl | Sk3mayNpl | comment | 1,491,496,228,423 | Hy_LWhvsx | [
"everyone"
] | [
"~Maciej_Mateusz_Zieba1"
] | title: Revision
comment: The revised version of the extended abstract was uploaded. |
BJiMcB4Kl | Training Triplet Networks with GAN | [
"Maciej Zieba",
"Lei Wang"
] | Triplet networks are widely used models that are characterized by good performance in classification and retrieval tasks. In this work we propose to train a triplet network by putting it as the discriminator in Generative Adversarial Nets (GANs). We make use of the good capability of representation learning of the discriminator to increase the predictive quality of the model. We evaluated our approach on Cifar10 and MNIST datasets and observed significant improvement on the classification performance using the simple k-nn method. | [] | https://openreview.net/pdf?id=BJiMcB4Kl | ryxbVdFaig | comment | 1,490,028,585,455 | BJiMcB4Kl | [
"everyone"
] | [
"ICLR.cc/2017/pcs"
] | decision: Accept
title: ICLR committee final decision |
SkPxL0Vte | Deep Pyramidal Residual Networks with Stochastic Depth | [
"Yoshihiro Yamada",
"Masakazu Iwamura",
"Koichi Kise"
] | In generic object recognition tasks, ResNet and its improvements have broken the lowest error rate records.
ResNet enables us to make a network deeper by introducing residual learning.
Some ResNet improvements achieve higher accuracy by focusing on channels.
Thus, the network depth and channels are thought to be important for high accuracy.
In this paper, in addition to them, we pay attention to use of multiple models in data-parallel learning. We refer to it as data-parallel multi-model learning.
We observed that the accuracy increased as models concurrently used increased on some methods, particularly on the combination of PyramidNet and the stochastic depth proposed in the paper.
As a result, we confirmed that the methods outperformed the conventional methods;
on CIFAR-100, the proposed methods achieved error rates of 16.13\% and 16.18\% in contrast to PiramidNet achieving that of 18.29\% and the current state-of-the-art DenseNet-BC 17.18\%.
| [] | https://openreview.net/pdf?id=SkPxL0Vte | HJujaNkqx | official_review | 1,488,043,423,691 | SkPxL0Vte | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper118/AnonReviewer2"
] | title: Difficult to parse. Lacks details to be useful.
rating: 4: Ok but not good enough - rejection
review: This paper is unfortunately very difficult to parse due to the language. It combines two methods of the literature and shows an improvement, but in the absence of a standalone detailed description of the models and/or of an open-source implementation reproducing the results, it's not particularly useful as-is. I'd recommend the authors to put together a more complete manuscript detailing the model, preferably with code.
confidence: 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature |
SkPxL0Vte | Deep Pyramidal Residual Networks with Stochastic Depth | [
"Yoshihiro Yamada",
"Masakazu Iwamura",
"Koichi Kise"
] | In generic object recognition tasks, ResNet and its improvements have broken the lowest error rate records.
ResNet enables us to make a network deeper by introducing residual learning.
Some ResNet improvements achieve higher accuracy by focusing on channels.
Thus, the network depth and channels are thought to be important for high accuracy.
In this paper, in addition to them, we pay attention to use of multiple models in data-parallel learning. We refer to it as data-parallel multi-model learning.
We observed that the accuracy increased as models concurrently used increased on some methods, particularly on the combination of PyramidNet and the stochastic depth proposed in the paper.
As a result, we confirmed that the methods outperformed the conventional methods;
on CIFAR-100, the proposed methods achieved error rates of 16.13\% and 16.18\% in contrast to PiramidNet achieving that of 18.29\% and the current state-of-the-art DenseNet-BC 17.18\%.
| [] | https://openreview.net/pdf?id=SkPxL0Vte | HyiHuFTil | comment | 1,490,028,610,874 | SkPxL0Vte | [
"everyone"
] | [
"ICLR.cc/2017/pcs"
] | decision: Reject
title: ICLR committee final decision |
SkPxL0Vte | Deep Pyramidal Residual Networks with Stochastic Depth | [
"Yoshihiro Yamada",
"Masakazu Iwamura",
"Koichi Kise"
] | In generic object recognition tasks, ResNet and its improvements have broken the lowest error rate records.
ResNet enables us to make a network deeper by introducing residual learning.
Some ResNet improvements achieve higher accuracy by focusing on channels.
Thus, the network depth and channels are thought to be important for high accuracy.
In this paper, in addition to them, we pay attention to use of multiple models in data-parallel learning. We refer to it as data-parallel multi-model learning.
We observed that the accuracy increased as models concurrently used increased on some methods, particularly on the combination of PyramidNet and the stochastic depth proposed in the paper.
As a result, we confirmed that the methods outperformed the conventional methods;
on CIFAR-100, the proposed methods achieved error rates of 16.13\% and 16.18\% in contrast to PiramidNet achieving that of 18.29\% and the current state-of-the-art DenseNet-BC 17.18\%.
| [] | https://openreview.net/pdf?id=SkPxL0Vte | Bk7Nu3goe | official_review | 1,489,188,906,660 | SkPxL0Vte | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper118/AnonReviewer1"
] | title: Combine Pyramid Nets and Networks with Stochastic Depth
rating: 6: Marginally above acceptance threshold
review: Take two ideas that worked, combine them, see if it works better. If the results of the workshop submission are correct, the answer is yes. This paper is extremely light on details, but it is a workshop submission, and the workshop format is a poster, so there should be ample space to highlight the methodology and details of implementation.
confidence: 3: The reviewer is fairly confident that the evaluation is correct |
SkPxL0Vte | Deep Pyramidal Residual Networks with Stochastic Depth | [
"Yoshihiro Yamada",
"Masakazu Iwamura",
"Koichi Kise"
] | In generic object recognition tasks, ResNet and its improvements have broken the lowest error rate records.
ResNet enables us to make a network deeper by introducing residual learning.
Some ResNet improvements achieve higher accuracy by focusing on channels.
Thus, the network depth and channels are thought to be important for high accuracy.
In this paper, in addition to them, we pay attention to use of multiple models in data-parallel learning. We refer to it as data-parallel multi-model learning.
We observed that the accuracy increased as models concurrently used increased on some methods, particularly on the combination of PyramidNet and the stochastic depth proposed in the paper.
As a result, we confirmed that the methods outperformed the conventional methods;
on CIFAR-100, the proposed methods achieved error rates of 16.13\% and 16.18\% in contrast to PiramidNet achieving that of 18.29\% and the current state-of-the-art DenseNet-BC 17.18\%.
| [] | https://openreview.net/pdf?id=SkPxL0Vte | SkEHXoJcl | comment | 1,488,069,435,749 | SkPxL0Vte | [
"everyone"
] | [
"~Masakazu_Iwamura1"
] | title: Implementation of the proposed methods
comment: Implementation of the proposed methods are available here:
https://github.com/AkTgWrNsKnKPP/PyramidNet_with_Stochastic_Depth |
B1lpelBYl | Accelerating SGD for Distributed Deep-Learning Using an Approximted Hessian Matrix | [
"Sebastien Arnold",
"Chunming Wang"
] | We introduce a novel method to compute a rank $m$ approximation of the inverse of the Hessian matrix, in the distributed regime. By leveraging the differences in gradients and parameters of multiple Workers, we are able to efficiently implement a distributed approximation of the Newton-Raphson method. We also present preliminary results which underline advantages and challenges of second-order methods for large stochastic optimization problems. In particular, our work suggests that novel strategies for combining gradients will provide further information on the loss surface. | [
"Deep learning",
"Optimization"
] | https://openreview.net/pdf?id=B1lpelBYl | SJqyBefjx | official_review | 1,489,269,985,807 | B1lpelBYl | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper157/AnonReviewer2"
] | title: Interesting approach
rating: 7: Good paper, accept
review: Compared to the other reviewer I found the approach interesting. While I'm not so keen on exact time complexity, algorithmically the approach seems scalable. I agree that the experimental section is a bit disappointing, and that there might be real concerns on how this particular approximation of the curvature works in practice. But given that it is a workshop submission, I find the proposal very simple and elegant, and I wager that with a bit of care and dedication it could work surprisingly well in practice.
My score however rests heavily on the fact that this is just a workshop submission, I think a lot of work still needs to be done to convert this work in a proper paper.
confidence: 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature |
B1lpelBYl | Accelerating SGD for Distributed Deep-Learning Using an Approximted Hessian Matrix | [
"Sebastien Arnold",
"Chunming Wang"
] | We introduce a novel method to compute a rank $m$ approximation of the inverse of the Hessian matrix, in the distributed regime. By leveraging the differences in gradients and parameters of multiple Workers, we are able to efficiently implement a distributed approximation of the Newton-Raphson method. We also present preliminary results which underline advantages and challenges of second-order methods for large stochastic optimization problems. In particular, our work suggests that novel strategies for combining gradients will provide further information on the loss surface. | [
"Deep learning",
"Optimization"
] | https://openreview.net/pdf?id=B1lpelBYl | ryn5ZXlse | official_review | 1,489,150,355,927 | B1lpelBYl | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper157/AnonReviewer1"
] | title: Time complexity, baselines, hyperparameter selection
rating: 3: Clear rejection
review: I am not quite sure about the time complexity of O(m^3 + n).
" The algorithm does require computation of the eigenvalues and eigenvectors of the m × m matrix G^H × G". Should not G^H \times G first be computed? Given than G is in R^{m x n}, I would expect m x n somewhere in the complexity formula. To compute g as the average of gradients you would need m x n, right?
The experimental results are disappointing:
a) SGD as the only baseline and no comparison with second-order methods and their approximates/alternatives
b) little networks of 16k parameters which raises the question of scalability
c) weird hyperparameter selection "we keep most of our hyper-parameters constant, including learning rates (0.0003 and 0.01)" given that "several experiments diverged when using too large a learning rate, whereas this was beneficial to the convergence rate of SGD" suggesting that the selection of the learning rate was in favor of the proposed approach.
confidence: 3: The reviewer is fairly confident that the evaluation is correct |
B1lpelBYl | Accelerating SGD for Distributed Deep-Learning Using an Approximted Hessian Matrix | [
"Sebastien Arnold",
"Chunming Wang"
] | We introduce a novel method to compute a rank $m$ approximation of the inverse of the Hessian matrix, in the distributed regime. By leveraging the differences in gradients and parameters of multiple Workers, we are able to efficiently implement a distributed approximation of the Newton-Raphson method. We also present preliminary results which underline advantages and challenges of second-order methods for large stochastic optimization problems. In particular, our work suggests that novel strategies for combining gradients will provide further information on the loss surface. | [
"Deep learning",
"Optimization"
] | https://openreview.net/pdf?id=B1lpelBYl | SJIPOKpjl | comment | 1,490,028,638,407 | B1lpelBYl | [
"everyone"
] | [
"ICLR.cc/2017/pcs"
] | decision: Accept
title: ICLR committee final decision |
r1Cy5yrKx | Tactics of Adversarial Attack on Deep Reinforcement Learning Agents | [
"Yen-Chen Lin",
"Zhang-Wei Hong",
"Yuan-Hong Liao",
"Meng-Li Shih",
"Ming-Yu Liu",
"Min Sun"
] | We introduce two novel tactics for adversarial attack on deep reinforcement learning (RL) agents: strategically-timed and enchanting attack. For strategically- timed attack, our method selectively forces the deep RL agent to take the least likely action. For enchanting attack, our method lures the agent to a target state by staging a sequence of adversarial attacks. We show that both DQN and A3C agents are vulnerable to our proposed tactics of adversarial attack. | [
"Deep learning",
"Reinforcement Learning"
] | https://openreview.net/pdf?id=r1Cy5yrKx | ByuHsCkix | comment | 1,489,132,351,732 | SJ5o6kCqx | [
"everyone"
] | [
"~Yen-Chen_Lin1"
] | title: Re: Interesting and relevant topic, clearly work in progress
comment: We thank the reviewer for detail comments. Unfortunately, due to a strict 3 pages limit for ICLR workshop this year, we have to go straight to our method and results in this submission. Similarly due to space, we focus on the attack tactics for this submission.
Based on the reviews, we added the following ideas about defending in section C of our Appendix: (1) train RL agent with adversarial example, (2) detect adversarial example first and then try to mitigate the effect. We hope to have enough interesting results on defending attacks to share in the future.
We are indeed working on a 6-8 pages conference submission which will include proper introduction and motivation, and a summary of the related work. |
r1Cy5yrKx | Tactics of Adversarial Attack on Deep Reinforcement Learning Agents | [
"Yen-Chen Lin",
"Zhang-Wei Hong",
"Yuan-Hong Liao",
"Meng-Li Shih",
"Ming-Yu Liu",
"Min Sun"
] | We introduce two novel tactics for adversarial attack on deep reinforcement learning (RL) agents: strategically-timed and enchanting attack. For strategically- timed attack, our method selectively forces the deep RL agent to take the least likely action. For enchanting attack, our method lures the agent to a target state by staging a sequence of adversarial attacks. We show that both DQN and A3C agents are vulnerable to our proposed tactics of adversarial attack. | [
"Deep learning",
"Reinforcement Learning"
] | https://openreview.net/pdf?id=r1Cy5yrKx | H1KJ5eBix | official_review | 1,489,467,873,496 | r1Cy5yrKx | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper142/AnonReviewer1"
] | title: An application of existing NN attacks in an RL setting
rating: 7: Good paper, accept
review: This paper explains the adaptation of a (Carlini & Wagner, 2016) (mis)classification attack to making the agent choose it's worse (lowest Q score or lower prob for \pi) action instead of best. It also explains an extension of the single time step (s, a, r, s') version to a sequence version, through the use of a forward model (Oh et al., 2015). Side note: the \delta (attack vectors) seem quite significant (the difference in frames in perceptible, e.g. Figure 2).
It is an interesting application of a "classic" attack, the comparison (in terms of performance) to (Huang et al., 2017) is unclear. The experimental evaluation is weak, but sufficient for a workshop.
confidence: 2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper |
r1Cy5yrKx | Tactics of Adversarial Attack on Deep Reinforcement Learning Agents | [
"Yen-Chen Lin",
"Zhang-Wei Hong",
"Yuan-Hong Liao",
"Meng-Li Shih",
"Ming-Yu Liu",
"Min Sun"
] | We introduce two novel tactics for adversarial attack on deep reinforcement learning (RL) agents: strategically-timed and enchanting attack. For strategically- timed attack, our method selectively forces the deep RL agent to take the least likely action. For enchanting attack, our method lures the agent to a target state by staging a sequence of adversarial attacks. We show that both DQN and A3C agents are vulnerable to our proposed tactics of adversarial attack. | [
"Deep learning",
"Reinforcement Learning"
] | https://openreview.net/pdf?id=r1Cy5yrKx | SkSqiNbjg | official_comment | 1,489,222,541,379 | ByuHsCkix | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper142/AnonReviewer2"
] | title: Thank you for the clarification and update
comment: Sorry, I was not aware of the strict 3-page limit. I will update my review accordingly. |
r1Cy5yrKx | Tactics of Adversarial Attack on Deep Reinforcement Learning Agents | [
"Yen-Chen Lin",
"Zhang-Wei Hong",
"Yuan-Hong Liao",
"Meng-Li Shih",
"Ming-Yu Liu",
"Min Sun"
] | We introduce two novel tactics for adversarial attack on deep reinforcement learning (RL) agents: strategically-timed and enchanting attack. For strategically- timed attack, our method selectively forces the deep RL agent to take the least likely action. For enchanting attack, our method lures the agent to a target state by staging a sequence of adversarial attacks. We show that both DQN and A3C agents are vulnerable to our proposed tactics of adversarial attack. | [
"Deep learning",
"Reinforcement Learning"
] | https://openreview.net/pdf?id=r1Cy5yrKx | rJoUuY6jg | comment | 1,490,028,626,748 | r1Cy5yrKx | [
"everyone"
] | [
"ICLR.cc/2017/pcs"
] | decision: Accept
title: ICLR committee final decision |
r1Cy5yrKx | Tactics of Adversarial Attack on Deep Reinforcement Learning Agents | [
"Yen-Chen Lin",
"Zhang-Wei Hong",
"Yuan-Hong Liao",
"Meng-Li Shih",
"Ming-Yu Liu",
"Min Sun"
] | We introduce two novel tactics for adversarial attack on deep reinforcement learning (RL) agents: strategically-timed and enchanting attack. For strategically- timed attack, our method selectively forces the deep RL agent to take the least likely action. For enchanting attack, our method lures the agent to a target state by staging a sequence of adversarial attacks. We show that both DQN and A3C agents are vulnerable to our proposed tactics of adversarial attack. | [
"Deep learning",
"Reinforcement Learning"
] | https://openreview.net/pdf?id=r1Cy5yrKx | SJ5o6kCqx | official_review | 1,489,005,985,843 | r1Cy5yrKx | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper142/AnonReviewer2"
] | title: Interesting and relevant topic, clearly work in progress
rating: 7: Good paper, accept
review: Thank you for pointing me to this work; I was not aware of work in this area, and the topic is quite exciting.
The main problem with this paper is that it is still very clearly work in progress. The problem is not very well-motivated, and the authors rush right into the content without giving any context (it is almost as if they outright assume the reader has read Huang et al. 2016 immediately before reading the current paper or are very well familiar with it.) This work needs proper introduction and motivation, a summary of the related work it builds on, a smoother narrative, etc. I would reject this paper as a conference submission for these reasons: it is just not ready.
Despite this, the authors have results and the topic is very interesting. This is the kind of paper that I think makes an ideal workshop paper: the topic is worth considering and relevant, and results are preliminary but interesting; it could stimulate some discussion which could (a) influence the direction of the work (b) lead to a broader interest in this type of work. So for these reasons, I am recommending accept.
Possible discussion point: identifying the flaws/vulnerabilities with deep RL-trained policies is only the first step. How do we then modify our deep RL algorithms to produce policies that are robust to these type of attacks? Even some speculation on this point would be nice, as I expect it to be a major discussion point once work in this area matures.
------ Edit post-response from authors:
The authors told me about the strict 3-page limit, which I was not aware of. With this limit in mind, I think the authors did a fairly good job of compressing the main ideas and results into the space they had. The page limit does unfortunately still detract from the smoothness of the intro/setup, but the description is still clear enough to understand what follows.
confidence: 3: The reviewer is fairly confident that the evaluation is correct |
r1Cy5yrKx | Tactics of Adversarial Attack on Deep Reinforcement Learning Agents | [
"Yen-Chen Lin",
"Zhang-Wei Hong",
"Yuan-Hong Liao",
"Meng-Li Shih",
"Ming-Yu Liu",
"Min Sun"
] | We introduce two novel tactics for adversarial attack on deep reinforcement learning (RL) agents: strategically-timed and enchanting attack. For strategically- timed attack, our method selectively forces the deep RL agent to take the least likely action. For enchanting attack, our method lures the agent to a target state by staging a sequence of adversarial attacks. We show that both DQN and A3C agents are vulnerable to our proposed tactics of adversarial attack. | [
"Deep learning",
"Reinforcement Learning"
] | https://openreview.net/pdf?id=r1Cy5yrKx | HJgBOYvsl | comment | 1,489,635,384,162 | H1KJ5eBix | [
"everyone"
] | [
"~Yen-Chen_Lin1"
] | title: Re: An application of existing NN attacks in an RL setting
comment: Thanks a lot for your comments!
I would love to clarify that the “strategically-timed attack” we proposed in our paper also determines “when to attack”, i.e., it wants to reduce the total rewards gained by the agent by only attacking it at selective timesteps. Therefore, it goes beyond an adaption of misclassification attack to RL tasks.
The reason why the difference in frames is perceptible is that we enlarge the value of perturbation 250x for visualization, sorry for the confusion. We will clarify it in our future revision.
About the comparison, as we mentioned in our abstract and experiment conclusion, our strategically-timed attack (attacking on average only 25% of timesteps) can reach the same effect as attacking the agent at every timesteps (i.e., Huang’s strategy). |
rk4fr1HYx | Cosegmentation Loss: Enhancing segmentation with a Few Training Samples by Transferring Region Knowledge to Unlabeled Images | [
"Wataru Shimoda",
"Keiji Yanai"
] | We treat semantic segmentation where a few pixel-wise labeled samples
and a large number of unlabeled samples are available. For this
situation we propose cosegmentation loss which enables us to transfer
the knowledge of a few pixel-wise labeled samples to a large number of
unlabeled images. In the experiments, we used human-part segmentation
with a few pixel-wise labeled images and 1715 unlabeled images, and
proved that the proposed co-segmentation loss helped make effective use
of unlabeled images.
| [] | https://openreview.net/pdf?id=rk4fr1HYx | ByMbYo8se | official_review | 1,489,578,234,107 | rk4fr1HYx | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper136/AnonReviewer2"
] | title: interesting idea, but not properly evaluated
rating: 5: Marginally below acceptance threshold
review: The idea of using co-segmentation for semi-supervised segmentation training is potentially interesting - but the authors do not compare it to existing baselines for semi-supervised segmentation.
In particular, the authors claim:
" • We propose a semi-supervised method for semantic segmentation which requires no imagelevel
class labels for unlabeled samples."
This is misleading in my understanding - the authors do train on the pascal-parts datasets, where practically every image is known to contain a human (and potentially his parts) - so the class labels are practically there, but just do not need to be specified, since they are always the same.
It would not be too hard to apply to the same problem existing techniques for weakly- and semi-supervised learning:
Constrained Convolutional Neural Networks for Weakly Supervised Segmentation
Deepak Pathak, Philipp Krähenbühl and Trevor Darrell
ICCV 2015
Weakly- and Semi-Supervised Learning of a DCNN for Semantic Image Segmentation
George Papandreou, Liang-Chieh Chen, Kevin Murphy, Alan L. Yuille, ICCV 2015
The authors also mention: ". The evaluation protocol is based on a simple mean intersection over union (IOU). In evaluation, we do not take care of the background class."
I do not see why the authors deviate from a standard evaluation pipeline.
confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
rk4fr1HYx | Cosegmentation Loss: Enhancing segmentation with a Few Training Samples by Transferring Region Knowledge to Unlabeled Images | [
"Wataru Shimoda",
"Keiji Yanai"
] | We treat semantic segmentation where a few pixel-wise labeled samples
and a large number of unlabeled samples are available. For this
situation we propose cosegmentation loss which enables us to transfer
the knowledge of a few pixel-wise labeled samples to a large number of
unlabeled images. In the experiments, we used human-part segmentation
with a few pixel-wise labeled images and 1715 unlabeled images, and
proved that the proposed co-segmentation loss helped make effective use
of unlabeled images.
| [] | https://openreview.net/pdf?id=rk4fr1HYx | B1wQbtrol | official_review | 1,489,502,494,566 | rk4fr1HYx | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper136/AnonReviewer3"
] | title: Potentially interesting, but more work is needed. No "exciting new ideas".
rating: 5: Marginally below acceptance threshold
review: The abstract presents a co-segmentation approach that is trained in a semi-supervised manner. As one would expect, the semi-supervised model works better than a model trained just on the small set of fully supervised data and worse than the model that would be obtained if the unlabeled data were labeled, also. From the abstract, however, it is not entirely clear what ingredients are essential to make the approach work: the proposed approach seems like a straightforward combination of prior work on co-segmentation and producing segmentation masks with image-classification convnets (Oquab & Bottou; Zhou, ..., & Torralba). It performs roughly on par with a prior approach by Papandreou et al. (the image-level labels are unlikely to help much on the dataset that is studied in the abstract).
It is also unclear how well the results compare with other co-segmentation approaches (Joulin, Bach, & Ponce) or with generic object proposal algorithms such as SharpMask (Pinheiro et al.). More in general, human body part segmentation doesn't seem like the right task to be studying segmentation approaches with limited supervision on: there exist many datasets with human and / or body part segmentations, so why not use those annotations? The proposed method seems more suitable for segmentation of infrequent object classes for which few annotated examples (not even image-level annotations) are available.
Overall, the approach described here may be of interest, but a lot of additional work is needed to know for sure. Having said that, I think the submission does not meet the bar for the ICLR workshops, because it does not present any clear novel ideas --- it is mostly combining existing approaches in a slightly different learning setting. I would recommend the authors to submit a more detailed version of this study to a venue such as CVPR or ICCV.
Minor comment: Table 2 is very hard to read; the data should be presented in learning curves.
confidence: 3: The reviewer is fairly confident that the evaluation is correct |
rk4fr1HYx | Cosegmentation Loss: Enhancing segmentation with a Few Training Samples by Transferring Region Knowledge to Unlabeled Images | [
"Wataru Shimoda",
"Keiji Yanai"
] | We treat semantic segmentation where a few pixel-wise labeled samples
and a large number of unlabeled samples are available. For this
situation we propose cosegmentation loss which enables us to transfer
the knowledge of a few pixel-wise labeled samples to a large number of
unlabeled images. In the experiments, we used human-part segmentation
with a few pixel-wise labeled images and 1715 unlabeled images, and
proved that the proposed co-segmentation loss helped make effective use
of unlabeled images.
| [] | https://openreview.net/pdf?id=rk4fr1HYx | rJPIuFTjg | comment | 1,490,028,622,633 | rk4fr1HYx | [
"everyone"
] | [
"ICLR.cc/2017/pcs"
] | decision: Reject
title: ICLR committee final decision |
rkmU-pEFl | Disparity Map Prediction from Stereo Laparoscopic Images using a Parallel Deep Convolutional Neural Network | [
"Bálint Antal"
] | One of the main computational challenges in supporting minimally invasive surgery techniques is the efficient 3d reconstruction of stereo endoscopic or laparosocopic images. In this paper, a Convolutional Neural Network based approach is presented, which does not require any prior knowledge on the image acquisition technique. We have evaluated the approach on a publicly available dataset and compared to a previous deep neural network approach. The evaluation showed that the approach outperformed the previous method. | [
"Computer vision",
"Deep learning",
"Applications"
] | https://openreview.net/pdf?id=rkmU-pEFl | B1EBOt6se | comment | 1,490,028,603,819 | rkmU-pEFl | [
"everyone"
] | [
"ICLR.cc/2017/pcs"
] | decision: Reject
title: ICLR committee final decision |
rkmU-pEFl | Disparity Map Prediction from Stereo Laparoscopic Images using a Parallel Deep Convolutional Neural Network | [
"Bálint Antal"
] | One of the main computational challenges in supporting minimally invasive surgery techniques is the efficient 3d reconstruction of stereo endoscopic or laparosocopic images. In this paper, a Convolutional Neural Network based approach is presented, which does not require any prior knowledge on the image acquisition technique. We have evaluated the approach on a publicly available dataset and compared to a previous deep neural network approach. The evaluation showed that the approach outperformed the previous method. | [
"Computer vision",
"Deep learning",
"Applications"
] | https://openreview.net/pdf?id=rkmU-pEFl | B1UqyIlse | official_review | 1,489,162,125,875 | rkmU-pEFl | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper106/AnonReviewer2"
] | title: Novelty and evaluation lacking
rating: 2: Strong rejection
review: This manuscript presents a seemingly simple method for doing disparity map prediction but compares only to a previous publication of the author's own. There are a dozen papers about this problem in other application domains, and so the methodology from them should be a point of comparison. Instead there is one citation to an obscure conference proceedings paper of the author's previous work, which is likely not competitive with state-of-the-art on this sort of problem.
Little motivation is given, and the model selection strategy is not discussed (it sounds as if early stopping is performed on the test set which is very worrying). Table 2 is essentially vacuous.
confidence: 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature |
rkmU-pEFl | Disparity Map Prediction from Stereo Laparoscopic Images using a Parallel Deep Convolutional Neural Network | [
"Bálint Antal"
] | One of the main computational challenges in supporting minimally invasive surgery techniques is the efficient 3d reconstruction of stereo endoscopic or laparosocopic images. In this paper, a Convolutional Neural Network based approach is presented, which does not require any prior knowledge on the image acquisition technique. We have evaluated the approach on a publicly available dataset and compared to a previous deep neural network approach. The evaluation showed that the approach outperformed the previous method. | [
"Computer vision",
"Deep learning",
"Applications"
] | https://openreview.net/pdf?id=rkmU-pEFl | HJhCmdeig | official_review | 1,489,171,411,803 | rkmU-pEFl | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper106/AnonReviewer1"
] | title: No novelty
rating: 3: Clear rejection
review: I completely agree with Reviewer2. The method isn't novel, and compares only to author's previous work.
confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
H1ZaRZVKg | On Improving the Numerical Stability of Winograd Convolutions | [
"Kevin Vincent",
"Kevin Stephano",
"Michael Frumkin",
"Boris Ginsburg",
"Julien Demouth"
] | Deep convolutional neural networks rely on heavily optimized convolution algorithms. Winograd convolutions provide an efficient approach to performing such convolutions. Using larger Winograd convolution tiles, the convolution will become more efficient but less numerically accurate. Here we provide some approaches to mitigating this numerical inaccuracy. We will exemplify these approaches by working on a tile much larger than any previously documented: F(9x9, 5x5). Using these approaches, we will show that such a tile can be used to train modern networks and provide performance benefits. | [
"Deep learning"
] | https://openreview.net/pdf?id=H1ZaRZVKg | SyBUg-Moe | comment | 1,489,272,908,774 | BksAtQeie | [
"everyone"
] | [
"~Kevin_Vincent1"
] | title: Clarification of terms
comment: Thank you for the review.
To clarify, we do not claim a 1.4x speedup for all of Inception-v3. We claim a 1.4x speedup for the single 5x5 convolution layer in Inception-v3. Our proposed F(9x9, 5x5) does not affect any other convolutions or layers in Inception-v3.
You are correct on our use of "successfully trained", we observe practically identical final error rates when using F(9x9, 5x5); compared with both the published network results and our own tests using direct convolutions. |
H1ZaRZVKg | On Improving the Numerical Stability of Winograd Convolutions | [
"Kevin Vincent",
"Kevin Stephano",
"Michael Frumkin",
"Boris Ginsburg",
"Julien Demouth"
] | Deep convolutional neural networks rely on heavily optimized convolution algorithms. Winograd convolutions provide an efficient approach to performing such convolutions. Using larger Winograd convolution tiles, the convolution will become more efficient but less numerically accurate. Here we provide some approaches to mitigating this numerical inaccuracy. We will exemplify these approaches by working on a tile much larger than any previously documented: F(9x9, 5x5). Using these approaches, we will show that such a tile can be used to train modern networks and provide performance benefits. | [
"Deep learning"
] | https://openreview.net/pdf?id=H1ZaRZVKg | BkYmdY6ie | comment | 1,490,028,576,757 | H1ZaRZVKg | [
"everyone"
] | [
"ICLR.cc/2017/pcs"
] | decision: Accept
title: ICLR committee final decision |
H1ZaRZVKg | On Improving the Numerical Stability of Winograd Convolutions | [
"Kevin Vincent",
"Kevin Stephano",
"Michael Frumkin",
"Boris Ginsburg",
"Julien Demouth"
] | Deep convolutional neural networks rely on heavily optimized convolution algorithms. Winograd convolutions provide an efficient approach to performing such convolutions. Using larger Winograd convolution tiles, the convolution will become more efficient but less numerically accurate. Here we provide some approaches to mitigating this numerical inaccuracy. We will exemplify these approaches by working on a tile much larger than any previously documented: F(9x9, 5x5). Using these approaches, we will show that such a tile can be used to train modern networks and provide performance benefits. | [
"Deep learning"
] | https://openreview.net/pdf?id=H1ZaRZVKg | ry_ADBy9l | official_review | 1,488,046,031,958 | H1ZaRZVKg | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper63/AnonReviewer2"
] | title: Useful data point on the potential of Winograd convolutions for wider filters.
rating: 7: Good paper, accept
review: Good short note on how one might implement bigger support convolutions using the Winograd technique. The heuristics proposed might have wider applicability. It would be great if someone figured out more general principles for automatically designing these kernels in a numerically stable way.
confidence: 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature |
H1ZaRZVKg | On Improving the Numerical Stability of Winograd Convolutions | [
"Kevin Vincent",
"Kevin Stephano",
"Michael Frumkin",
"Boris Ginsburg",
"Julien Demouth"
] | Deep convolutional neural networks rely on heavily optimized convolution algorithms. Winograd convolutions provide an efficient approach to performing such convolutions. Using larger Winograd convolution tiles, the convolution will become more efficient but less numerically accurate. Here we provide some approaches to mitigating this numerical inaccuracy. We will exemplify these approaches by working on a tile much larger than any previously documented: F(9x9, 5x5). Using these approaches, we will show that such a tile can be used to train modern networks and provide performance benefits. | [
"Deep learning"
] | https://openreview.net/pdf?id=H1ZaRZVKg | BksAtQeie | official_review | 1,489,152,467,007 | H1ZaRZVKg | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper63/AnonReviewer1"
] | title: Improved stability and speed for Winograd convs
rating: 7: Good paper, accept
review: This paper shows how Winograd convolutions can be made more numerically stable for large tile-sizes, which are more efficient. The authors show significant reduction in numerical errors and a roughly 1.4x speed increase for inception-v3, which is quite meaningful.
It is stated that "we have been able to successfully train Alexnet and Inception v3" - does this mean that the final error rate is (almost) unchanged for the network using the new convolution routines?
Given the importance of efficient convolution routines for deep learning and the solid results, I think this paper should be accepted.
confidence: 3: The reviewer is fairly confident that the evaluation is correct |
ByKjYVEYl | Weak Adversarial Boosting | [
"Sreekalyan Deepakreddy",
"Raghav Kulkarni"
] | The "adversarial training" methods have recently been emerging as a promising avenue of research. Broadly speaking these methods achieve efficient training as well as boosted performance via an adversarial choice of data, features, or models. However, since the inception of the Generative Adversarial Nets (GAN),
much of the attention is focussed on adversarial "models", i.e., machines learning by pursuing competing goals.
In this note we investigate the
effectiveness of several (weak) sources of adversarial "data" and "features". In particular we demonstrate:
(a) low precision classifiers can be used as a source of adversarial data-sample closer to the decision boundary
(b) training on these adversarial data-sample can give significant boost to the precision and recall compared to the non-adversarial sample.
We also document the use of these methods for improving the performance of classifiers when only limited (and sometimes no) labeled data is available. | [
"Semi-Supervised Learning"
] | https://openreview.net/pdf?id=ByKjYVEYl | Hk0QuY6je | comment | 1,490,028,581,619 | ByKjYVEYl | [
"everyone"
] | [
"ICLR.cc/2017/pcs"
] | decision: Reject
title: ICLR committee final decision |
ByKjYVEYl | Weak Adversarial Boosting | [
"Sreekalyan Deepakreddy",
"Raghav Kulkarni"
] | The "adversarial training" methods have recently been emerging as a promising avenue of research. Broadly speaking these methods achieve efficient training as well as boosted performance via an adversarial choice of data, features, or models. However, since the inception of the Generative Adversarial Nets (GAN),
much of the attention is focussed on adversarial "models", i.e., machines learning by pursuing competing goals.
In this note we investigate the
effectiveness of several (weak) sources of adversarial "data" and "features". In particular we demonstrate:
(a) low precision classifiers can be used as a source of adversarial data-sample closer to the decision boundary
(b) training on these adversarial data-sample can give significant boost to the precision and recall compared to the non-adversarial sample.
We also document the use of these methods for improving the performance of classifiers when only limited (and sometimes no) labeled data is available. | [
"Semi-Supervised Learning"
] | https://openreview.net/pdf?id=ByKjYVEYl | HJ1hUXMcx | official_review | 1,488,234,150,730 | ByKjYVEYl | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper71/AnonReviewer2"
] | title: Official review For Weak Adversarial Boosting
rating: 2: Strong rejection
review: The authors describe recent work where employing adversarial sample data can give significant improvement.
The authors work is not a generative adversarial network (GAN) nor training with adversarial examples, hence I would have difficulty labeling this paper as 'weak adversarial boosting'. In particular, note that adversarial examples are examples generated via gradient propagation in the model that perceptually indistinguishable to humans but are misclassified by the machine learning system. The data the authors describe are instead more like 'hard negatives' as humans judge that the algorithm incorrectly classified these examples.
The authors show that by employing these hard negatives and committee of experts they could improve the quality of the classifier. Both techniques of employing hard negatives and a committee of classifiers are known to be useful for training all sorts of machine learning systems and I do not see what is new in this work.
Additional issues:
- Authors have almost no references to prior work including but not limited to GAN's, adversarial examples, boosting, security issues, hard negative mining.
- Results based on non-publicly available data so not reproducible.
- Results are minimal on one data set and two experiments.
confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
ByKjYVEYl | Weak Adversarial Boosting | [
"Sreekalyan Deepakreddy",
"Raghav Kulkarni"
] | The "adversarial training" methods have recently been emerging as a promising avenue of research. Broadly speaking these methods achieve efficient training as well as boosted performance via an adversarial choice of data, features, or models. However, since the inception of the Generative Adversarial Nets (GAN),
much of the attention is focussed on adversarial "models", i.e., machines learning by pursuing competing goals.
In this note we investigate the
effectiveness of several (weak) sources of adversarial "data" and "features". In particular we demonstrate:
(a) low precision classifiers can be used as a source of adversarial data-sample closer to the decision boundary
(b) training on these adversarial data-sample can give significant boost to the precision and recall compared to the non-adversarial sample.
We also document the use of these methods for improving the performance of classifiers when only limited (and sometimes no) labeled data is available. | [
"Semi-Supervised Learning"
] | https://openreview.net/pdf?id=ByKjYVEYl | ryxMdi7oe | official_review | 1,489,381,384,140 | ByKjYVEYl | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper71/AnonReviewer1"
] | title: Confusing description of methods and insufficient experiments
rating: 2: Strong rejection
review: This work proposes using a weak classifier to produce data to augment a supervised classifier. The way it does is poorly explained and seems similar to existing work on hard negative mining.
In section 3.1, does 'randomly sample outside B' mean sampling from A - B? Does A ∩ B mean the set of examples which were positively labeled by A and B? If this is the case I don't see how that would raise the precision of B to that of A, especially since the negatives that are being used to train B come from the positive set of A. This method of training seems very close to that described in section 3.2.
In particular in section 3.2, it is very unclear why using A ∩ B as the positive labels would increase the recall of B. Using the intersection would reduce the number of positive labels and seems like it would reduce the recall compared to the original.
The experimental results are also very weak. There is no description of 'correlated classifier' and no clear definition of what it means to be correlated. The authors also describe training on a random 1000 samples but 1000 samples of what? Since there are only 1000 labels, what are the results of training directly on the 1000 labeled examples? Are the 1000 positives from the low-precision classifier just examples from unlabelled data?
I also wouldn't describe these techniques as 'adversarial'. Adversarial is normally taken to mean something which intentionally exploits a weakness of the model. None of the described methods intentionally exploit any weakness.
confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
S1nFVFNYx | A Smooth Optimisation Perspective on Training Feedforward Neural Networks | [
"Hao Shen"
] | We present a smooth optimisation perspective on training multilayer Feedforward Neural Networks (FNNs) in the supervised learning setting. By characterising the critical point conditions of an FNN based optimisation problem, we identify the conditions to eliminate local optima of the cost function. By studying the Hessian structure of the cost function at the global minima, we develop an approximate Newton FNN algorithm, which demonstrates promising convergence properties. | [
"Theory",
"Supervised Learning",
"Optimization"
] | https://openreview.net/pdf?id=S1nFVFNYx | BJBOzPa5e | comment | 1,488,970,349,419 | HkBDqkp5g | [
"everyone"
] | [
"~Hao_Shen1"
] | title: Reply to AnonReviewer1
comment: 1) I'd like to thank the reviewer for his/her interest in the proposed approach, as well as these constructive comments. In what follows, I address these points accordingly.
2) The matrix P is a necessary condition to ensure local minima free in training FNNs. One simple case is the FNN architecture with only one hidden layer. If the number of processing units in the hidden layer is equal to the number of patterns, then the matrix P is guaranteed to be of full rank, i.e., $T \cdot n_{L}$. However, in a general scenario, the rank of matrix P is dependent on the properties of the Khatri-Rao product of identically partitioned matrices. It is worth noticing that a form of column-wise Kronecker product of two matrices is also called the Khatri–Rao product, which is not the case here. How to ensure matrix P to have a full rank in a general setting is still an open question.
3) The assumption of the global minimum being reachable is based on the universal approximation (UA) theorem of FNNs. The UA theorem only guarantees the existence of an FNN, but is unfortunately not constructive.
4) I'd be happy to provide more experiments in briefly addressing the reviewer's comments by reducing the introduction. |
S1nFVFNYx | A Smooth Optimisation Perspective on Training Feedforward Neural Networks | [
"Hao Shen"
] | We present a smooth optimisation perspective on training multilayer Feedforward Neural Networks (FNNs) in the supervised learning setting. By characterising the critical point conditions of an FNN based optimisation problem, we identify the conditions to eliminate local optima of the cost function. By studying the Hessian structure of the cost function at the global minima, we develop an approximate Newton FNN algorithm, which demonstrates promising convergence properties. | [
"Theory",
"Supervised Learning",
"Optimization"
] | https://openreview.net/pdf?id=S1nFVFNYx | ryBCmxfjg | official_review | 1,489,269,709,160 | S1nFVFNYx | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper89/AnonReviewer2"
] | title: Review for smooth optimization perspective paper
rating: 5: Marginally below acceptance threshold
review: The paper follows an interesting angle on optimizing neural networks. I think the write up can be improved considerably. I'm not sure if this is not the effect of having to restrict itself to 3 pages. E.g. after reading the paper, I'm not sure I know how to implement the proposed approximate Newton method in the paper.Section 3 provides some theoretical analysis of the optimization algorithm, but is not clear that those theorems sums to an algorithm that I could code up.
I think (and maybe it is a bit harsh) that even as a workshop submission the paper is not yet clear enough (at least the pdf) and more effort needs to be put in explaining what the different constraints in the theorems mean (and how to achieve them) and in particular how do you get the algorithm.
The other aspect that I feel uncomfortable with, regarding the approach taken by the authors, is that it seems to me that the constraint of rank(P) = T * n_L, where T is the number of examples, spells out that the network memorized the training set rather than learned. IMHO, while pursuing this quest of either proving that the error surface has no "bad" local minima, or removing them is very valuable, is only so if we can get insights of why this works in a case where you *learn* a good solution, i.e. one that generalizes. I would like if the result is somehow independent of the dataset size, and has more something to do with the underlying structure of the data and nature of the network.
confidence: 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature |
S1nFVFNYx | A Smooth Optimisation Perspective on Training Feedforward Neural Networks | [
"Hao Shen"
] | We present a smooth optimisation perspective on training multilayer Feedforward Neural Networks (FNNs) in the supervised learning setting. By characterising the critical point conditions of an FNN based optimisation problem, we identify the conditions to eliminate local optima of the cost function. By studying the Hessian structure of the cost function at the global minima, we develop an approximate Newton FNN algorithm, which demonstrates promising convergence properties. | [
"Theory",
"Supervised Learning",
"Optimization"
] | https://openreview.net/pdf?id=S1nFVFNYx | HkBDqkp5g | official_review | 1,488,939,612,857 | S1nFVFNYx | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper89/AnonReviewer1"
] | title: Interesting approach, needs more support
rating: 6: Marginally above acceptance threshold
review: The paper presents a smooth optimization perspective on feed forward neural networks and discusses a condition where the local optima does not exist in training. Next, by studying Hessian, it develops an approximate newton algorithm and provides an experiment showing the convergence attitude on the four regions classification benchmark.
Although the approach is interesting, the paper lacks some important pieces: Theorem 1 relies on the matrix P to be full rank but it does not provide any cases or sufficient conditions when this holds. also Theorem 1 assumes the global minimum w* is reachable but does not provide any insights into when this holds. even a couple of examples would be good. I understand that this is a short version but the author could easily fit this in by reducing the introduction.
confidence: 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature |
S1nFVFNYx | A Smooth Optimisation Perspective on Training Feedforward Neural Networks | [
"Hao Shen"
] | We present a smooth optimisation perspective on training multilayer Feedforward Neural Networks (FNNs) in the supervised learning setting. By characterising the critical point conditions of an FNN based optimisation problem, we identify the conditions to eliminate local optima of the cost function. By studying the Hessian structure of the cost function at the global minima, we develop an approximate Newton FNN algorithm, which demonstrates promising convergence properties. | [
"Theory",
"Supervised Learning",
"Optimization"
] | https://openreview.net/pdf?id=S1nFVFNYx | rk3vGCmse | comment | 1,489,392,228,169 | ryBCmxfjg | [
"everyone"
] | [
"~Hao_Shen1"
] | title: Reply to AnonReviewer2
comment: The author appreciates the comments from the reviewer and his/her understanding about the challenge in squeezing such a tedious but straightforward analysis in three pages. Apparently, the author didn't succeed it. Nevertheless, these constructive comments will significantly improve the quality of future submissions.
A brief technical reply to the reviewer's comments is that
1) the condition $rank(P) = T * n_L$ has direct implications about the architecture of the network, and that
2) the interpretation of the number T is more subtle than the sample size.
|
S1nFVFNYx | A Smooth Optimisation Perspective on Training Feedforward Neural Networks | [
"Hao Shen"
] | We present a smooth optimisation perspective on training multilayer Feedforward Neural Networks (FNNs) in the supervised learning setting. By characterising the critical point conditions of an FNN based optimisation problem, we identify the conditions to eliminate local optima of the cost function. By studying the Hessian structure of the cost function at the global minima, we develop an approximate Newton FNN algorithm, which demonstrates promising convergence properties. | [
"Theory",
"Supervised Learning",
"Optimization"
] | https://openreview.net/pdf?id=S1nFVFNYx | SkuVutTog | comment | 1,490,028,591,933 | S1nFVFNYx | [
"everyone"
] | [
"ICLR.cc/2017/pcs"
] | decision: Accept
title: ICLR committee final decision |
ryh9ZySFg | Memory Matching Networks for Genomic Sequence Classification | [
"Jack Lanchantin",
"Ritambhara Singh",
"Yanjun Qi"
] | When analyzing the genome, researchers have discovered that proteins bind to DNA based on certain patterns on the DNA sequence known as "motifs". However, it is difficult to manually construct motifs for protein binding location prediction due to their complexity. Recently, external learned memory models have proven to be effective methods for reasoning over inputs and supporting sets. In this work, we present memory matching networks (MMN) for classifying DNA sequences as protein binding sites. Our model learns a memory bank of encoded motifs, which are dynamic memory modules, and then matches a new test sequence to each of the motifs to classify the sequence as a binding or non-binding site. | [
"Deep learning",
"Supervised Learning",
"Applications"
] | https://openreview.net/pdf?id=ryh9ZySFg | rkT-S9Vog | comment | 1,489,442,052,736 | SkIbkxbol | [
"everyone"
] | [
"~Jack_Lanchantin1"
] | title: Comment 1
comment: Q1: The most substantial problem is the authors' suggestion that the memory in their model is "dynamically learned" without any description of how that aspect is implemented. Though the authors describe how the memory is used, I'm clueless as to how the memory is specified or "learned". This is a significant absence, and a dealbreaker for my recommendation as a reviewer.
A1: Thank you for pointing out this issue in our writing as it is certainly an important part of the paper. We have revised Section 2 of the manuscript by adding the following descriptions:
+ Each memory module is learned via a lookup table with a constant input at each position (e.g. 1 as input to the first position and t as input to position t).
+ To learn each of the L memory matrices, we use a separate lookup table.
+ Essentially each vector of a lookup table learns to encode the representation of that position with an embedding vector of dimension p, where p is a hyperparameter we specify (more details below in A3).
+ Each lookup table is progressively learned during training. However, it is indeed different from the writing module in traditional memory papers such as the Neural Turing Machine.
+ We encode the test input using another lookup table with the actual sequence as input to produce the input matrix S. Thus, this lookup table is of dimension p, with input size 4 (for A,C,G,T, which are represented as 1,2,3,4). |
ryh9ZySFg | Memory Matching Networks for Genomic Sequence Classification | [
"Jack Lanchantin",
"Ritambhara Singh",
"Yanjun Qi"
] | When analyzing the genome, researchers have discovered that proteins bind to DNA based on certain patterns on the DNA sequence known as "motifs". However, it is difficult to manually construct motifs for protein binding location prediction due to their complexity. Recently, external learned memory models have proven to be effective methods for reasoning over inputs and supporting sets. In this work, we present memory matching networks (MMN) for classifying DNA sequences as protein binding sites. Our model learns a memory bank of encoded motifs, which are dynamic memory modules, and then matches a new test sequence to each of the motifs to classify the sequence as a binding or non-binding site. | [
"Deep learning",
"Supervised Learning",
"Applications"
] | https://openreview.net/pdf?id=ryh9ZySFg | rkVIuKpox | comment | 1,490,028,620,232 | ryh9ZySFg | [
"everyone"
] | [
"ICLR.cc/2017/pcs"
] | decision: Accept
title: ICLR committee final decision |
ryh9ZySFg | Memory Matching Networks for Genomic Sequence Classification | [
"Jack Lanchantin",
"Ritambhara Singh",
"Yanjun Qi"
] | When analyzing the genome, researchers have discovered that proteins bind to DNA based on certain patterns on the DNA sequence known as "motifs". However, it is difficult to manually construct motifs for protein binding location prediction due to their complexity. Recently, external learned memory models have proven to be effective methods for reasoning over inputs and supporting sets. In this work, we present memory matching networks (MMN) for classifying DNA sequences as protein binding sites. Our model learns a memory bank of encoded motifs, which are dynamic memory modules, and then matches a new test sequence to each of the motifs to classify the sequence as a binding or non-binding site. | [
"Deep learning",
"Supervised Learning",
"Applications"
] | https://openreview.net/pdf?id=ryh9ZySFg | Bk2eBcEsg | comment | 1,489,442,036,310 | SkIbkxbol | [
"everyone"
] | [
"~Jack_Lanchantin1"
] | title: Comment 2
comment: Q2: Furthermore, the authors also decline to describe the second layer of LSTMs in the form of the g function. The authors state that the f function maps DNA sequences to a vector, and they state that the g' function shares the weights from f. The authors suggest that g is an LSTM, but fail to explain why or how the LSTM operates on a single vector.
A2: Thank you for helping us realize this issue. Our old writing was indeed confusing. We have revised Section 2 with a much better logic flow.
+ f() is a bidirectional attention LSTM on S
+ g’() is a bidirectional attention LSTM on each M_i to encode position dependencies of all t positions in M_i. g’() shares the same weights with f(). In other words, the same LSTM maps not only S into a vector, but each memory matrix M_i.
+ g() is another bidirectional LSTM (without attention) taking the outputs of g′(M_1), g′(M_2), …, g′(M_L) as inputs to encode dependencies among the memory motifs. The g’() output at each index 1, 2, .., L, produces the final memory vectors m_1, m_2, …, m_L. |
ryh9ZySFg | Memory Matching Networks for Genomic Sequence Classification | [
"Jack Lanchantin",
"Ritambhara Singh",
"Yanjun Qi"
] | When analyzing the genome, researchers have discovered that proteins bind to DNA based on certain patterns on the DNA sequence known as "motifs". However, it is difficult to manually construct motifs for protein binding location prediction due to their complexity. Recently, external learned memory models have proven to be effective methods for reasoning over inputs and supporting sets. In this work, we present memory matching networks (MMN) for classifying DNA sequences as protein binding sites. Our model learns a memory bank of encoded motifs, which are dynamic memory modules, and then matches a new test sequence to each of the motifs to classify the sequence as a binding or non-binding site. | [
"Deep learning",
"Supervised Learning",
"Applications"
] | https://openreview.net/pdf?id=ryh9ZySFg | rJHRNqEix | comment | 1,489,441,997,242 | SkIbkxbol | [
"everyone"
] | [
"~Jack_Lanchantin1"
] | title: Comment 4
comment: Q4: If the authors care about producing a useful tool for biologists, they will have to relax their assumption of a dataset balanced between positive and negative examples. TFBS in mammalian genomes is picking needles out of a haystack.
A4: Thank you for bringing up a very important aspect in this line of work.
+ The current datasets we use for comparing the proposed matching-network with baselines are from the Alipanahi et. al, Nature Biotech 2015 paper: Predicting the sequence specificities of DNA- and RNA-binding proteins by deep learning
+ Alipanahi et. al used balanced datasets for TFBS tasks. To compare with this state-of-the-art CNN baseline, we use the same datasets as they did.
+ We certainly agree with this view, and we have been working on creating a dataset which has a more realistic split of samples. |
ryh9ZySFg | Memory Matching Networks for Genomic Sequence Classification | [
"Jack Lanchantin",
"Ritambhara Singh",
"Yanjun Qi"
] | When analyzing the genome, researchers have discovered that proteins bind to DNA based on certain patterns on the DNA sequence known as "motifs". However, it is difficult to manually construct motifs for protein binding location prediction due to their complexity. Recently, external learned memory models have proven to be effective methods for reasoning over inputs and supporting sets. In this work, we present memory matching networks (MMN) for classifying DNA sequences as protein binding sites. Our model learns a memory bank of encoded motifs, which are dynamic memory modules, and then matches a new test sequence to each of the motifs to classify the sequence as a binding or non-binding site. | [
"Deep learning",
"Supervised Learning",
"Applications"
] | https://openreview.net/pdf?id=ryh9ZySFg | SkIbkxbol | official_review | 1,489,202,941,897 | ryh9ZySFg | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper133/AnonReviewer2"
] | title: Interesting direction, incomplete description
rating: 3: Clear rejection
review: The authors suggest a novel approach to classifying transcription factor (TF) binding to DNA sequences based on a neural network model that utilizes memory data structures. In the authors' experiments on a previously studied dataset, their technique exceeds the accuracy of convolutional and recurrent neural networks.
Pros:
-The authors have taken on an important problem--TF binding is critical to gene regulation and our present models are insufficient to precisely predict bound sites in large mammalian genomes. Advances in this space have great value, and the authors may have an advance here.
Major Cons:
-The most substantial problem is the authors' suggestion that the memory in their model is "dynamically learned" without any description of how that aspect is implemented. Though the authors describe how the memory is used, I'm clueless as to how the memory is specified or "learned". This is a significant absence, and a dealbreaker for my recommendation as a reviewer.
-Furthermore, the authors also decline to describe the second layer of LSTMs in the form of the g function. The authors state that the f function maps DNA sequences to a vector, and they state that the g' function shares the weights from f. The authors suggest that g is an LSTM, but fail to explain why or how the LSTM operates on a single vector.
Minor Cons:
-The authors suggest that the memory matrix column size p can take values other than 4. Doesn't this dimension refer to the 4 nucleotides? When p is set to different values, how are the authors representing the DNA sequences?
-If the authors care about producing a useful tool for biologists, they will have to relax their assumption of a dataset balanced between positive and negative examples. TFBS in mammalian genomes is picking needles out of a haystack.
-The language in the paper is sloppy in multiple places. E.g. current TFBS motifs aren't "manually" defined as the authors state; they are computationally defined using different methods. The authors also suggest that doctors may be reluctant to use their model, which is irrelevant; doctors do not examine TFBS.
confidence: 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature |
ryh9ZySFg | Memory Matching Networks for Genomic Sequence Classification | [
"Jack Lanchantin",
"Ritambhara Singh",
"Yanjun Qi"
] | When analyzing the genome, researchers have discovered that proteins bind to DNA based on certain patterns on the DNA sequence known as "motifs". However, it is difficult to manually construct motifs for protein binding location prediction due to their complexity. Recently, external learned memory models have proven to be effective methods for reasoning over inputs and supporting sets. In this work, we present memory matching networks (MMN) for classifying DNA sequences as protein binding sites. Our model learns a memory bank of encoded motifs, which are dynamic memory modules, and then matches a new test sequence to each of the motifs to classify the sequence as a binding or non-binding site. | [
"Deep learning",
"Supervised Learning",
"Applications"
] | https://openreview.net/pdf?id=ryh9ZySFg | ryql9ogol | official_review | 1,489,185,265,857 | ryh9ZySFg | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper133/AnonReviewer1"
] | title: simple model, interesting application
rating: 7: Good paper, accept
review: This work uses a softmax over a set of trained templates to predict whether or not a given protein will bind to a DNA sequence. Given that the templates are fixed, it seems a bit of a stretch to refer to the model as a "memory" model; but the task and the results are nice.
confidence: 3: The reviewer is fairly confident that the evaluation is correct |
ryh9ZySFg | Memory Matching Networks for Genomic Sequence Classification | [
"Jack Lanchantin",
"Ritambhara Singh",
"Yanjun Qi"
] | When analyzing the genome, researchers have discovered that proteins bind to DNA based on certain patterns on the DNA sequence known as "motifs". However, it is difficult to manually construct motifs for protein binding location prediction due to their complexity. Recently, external learned memory models have proven to be effective methods for reasoning over inputs and supporting sets. In this work, we present memory matching networks (MMN) for classifying DNA sequences as protein binding sites. Our model learns a memory bank of encoded motifs, which are dynamic memory modules, and then matches a new test sequence to each of the motifs to classify the sequence as a binding or non-binding site. | [
"Deep learning",
"Supervised Learning",
"Applications"
] | https://openreview.net/pdf?id=ryh9ZySFg | rkzrS94ol | comment | 1,489,442,106,044 | SkIbkxbol | [
"everyone"
] | [
"~Jack_Lanchantin1"
] | title: Response to comments of Reviewer 2
comment: We would like to thank Reviewer 2 for the thorough and important comments which made our manuscript unclear. We have revised the paper accordingly, and we hope that we have properly explained the missing details which were crucial to understanding our methods. Below we explain our responses (in separate comments since openreview won't allow them all to be in one post). |
ryh9ZySFg | Memory Matching Networks for Genomic Sequence Classification | [
"Jack Lanchantin",
"Ritambhara Singh",
"Yanjun Qi"
] | When analyzing the genome, researchers have discovered that proteins bind to DNA based on certain patterns on the DNA sequence known as "motifs". However, it is difficult to manually construct motifs for protein binding location prediction due to their complexity. Recently, external learned memory models have proven to be effective methods for reasoning over inputs and supporting sets. In this work, we present memory matching networks (MMN) for classifying DNA sequences as protein binding sites. Our model learns a memory bank of encoded motifs, which are dynamic memory modules, and then matches a new test sequence to each of the motifs to classify the sequence as a binding or non-binding site. | [
"Deep learning",
"Supervised Learning",
"Applications"
] | https://openreview.net/pdf?id=ryh9ZySFg | Syr1r9Nig | comment | 1,489,442,013,036 | SkIbkxbol | [
"everyone"
] | [
"~Jack_Lanchantin1"
] | title: Comment 3
comment: Q3: The authors suggest that the memory matrix column size p can take values other than 4. Doesn't this dimension refer to the 4 nucleotides? When p is set to different values, how are the authors representing the DNA sequences?
A3: Thank you for asking this since it is important to understand how our model encodes the DNA sequence both in the input and memory spaces.
+ Related to our answer to Q1, p is a hyperparameter we specify since it is simply the embedding dimension. We tried the case of setting p=2, 4, 8, 16. p=4 gave the best overall classification performance.
+ Since each position of both the input DNA sequence and memory units are represented in an embedding space of dimension p, we can vary the hidden dimension p to any size.
+ Interestingly, we also tried another memory-NN structure by (1) encoding the input sequence as one-hot vectors, (2) setting p=4 for the memory lookup tables, and (3) then adding a softmax operation on each column vector of M_i. The rest of the network are the same as the proposed. train the whole network. Training on the same datasets, this actually resulted in better accuracy (mean AUC of 0.94), but we implemented this after the submission so the results were not reported. We hypothesized that the softmax outputs from the memory units in this new structure can learn probability distributions of the 4 nucleotides (A,C,G,T) at each memory position. |
ryh9ZySFg | Memory Matching Networks for Genomic Sequence Classification | [
"Jack Lanchantin",
"Ritambhara Singh",
"Yanjun Qi"
] | When analyzing the genome, researchers have discovered that proteins bind to DNA based on certain patterns on the DNA sequence known as "motifs". However, it is difficult to manually construct motifs for protein binding location prediction due to their complexity. Recently, external learned memory models have proven to be effective methods for reasoning over inputs and supporting sets. In this work, we present memory matching networks (MMN) for classifying DNA sequences as protein binding sites. Our model learns a memory bank of encoded motifs, which are dynamic memory modules, and then matches a new test sequence to each of the motifs to classify the sequence as a binding or non-binding site. | [
"Deep learning",
"Supervised Learning",
"Applications"
] | https://openreview.net/pdf?id=ryh9ZySFg | ryfnE5Ejl | comment | 1,489,441,962,152 | SkIbkxbol | [
"everyone"
] | [
"~Jack_Lanchantin1"
] | title: Comment 5
comment: Q5: The language in the paper is sloppy in multiple places. E.g. current TFBS motifs aren't "manually" defined as the authors state; they are computationally defined using different methods. The authors also suggest that doctors may be reluctant to use their model, which is irrelevant; doctors do not examine TFBS.
A5: Thank you for pointing out the wording issues.
+ “Manually created” was a poor choice of words there. We have removed any reference of “manual creation” in the revised manuscript.
+ Additionally, we should have said “biomedical researchers” instead of the term “doctors”. We have updated this in our manuscript. |
ryh9ZySFg | Memory Matching Networks for Genomic Sequence Classification | [
"Jack Lanchantin",
"Ritambhara Singh",
"Yanjun Qi"
] | When analyzing the genome, researchers have discovered that proteins bind to DNA based on certain patterns on the DNA sequence known as "motifs". However, it is difficult to manually construct motifs for protein binding location prediction due to their complexity. Recently, external learned memory models have proven to be effective methods for reasoning over inputs and supporting sets. In this work, we present memory matching networks (MMN) for classifying DNA sequences as protein binding sites. Our model learns a memory bank of encoded motifs, which are dynamic memory modules, and then matches a new test sequence to each of the motifs to classify the sequence as a binding or non-binding site. | [
"Deep learning",
"Supervised Learning",
"Applications"
] | https://openreview.net/pdf?id=ryh9ZySFg | rktp-qEsl | comment | 1,489,441,216,840 | ryql9ogol | [
"everyone"
] | [
"~Jack_Lanchantin1"
] | title: Response to the comments of Reviewer 1
comment: We would like to thank Reviewer 1 for the helpful comments on improving the clarity of our paper.
+ We agree the memory modules we learned from training is a bit different than the memory module in the neural turing machine since there is no explicit “writing” scheme.
+ But since they are implicitly learned and written to via backprop using the training samples, we like to think of them as memory which we can read from.
+ We agree that the memory “units” are indeed “templates”, which is a better choice of wording. We have added this distinction into the manuscript and use the term “memory templates” instead. |
B1fUVMzKg | Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization | [
"Xun Huang",
"Serge Belongie"
] | Gatys et al. (2015) recently introduced a neural algorithm that renders a content image in the style of another image, achieving so-called \emph{style transfer}. However, their framework requires a slow iterative optimization process, which limits its practical application. Fast approximations with feed-forward neural networks have been proposed to speed up neural style transfer. Unfortunately, the speed improvement comes at a cost: the network is usually tied to a fixed set of styles and cannot adapt to arbitrary new styles. In this paper, we present a simple yet effective approach that for the first time enables arbitrary style transfer in real-time. At the heart of our method is a novel adaptive instance normalization~(AdaIN) layer that aligns the mean and variance of the content features with those of the style features. Our method achieves speed comparable to the fastest existing approach, without the restriction to a pre-defined set of styles. | [
"Computer vision",
"Unsupervised Learning",
"Applications"
] | https://openreview.net/pdf?id=B1fUVMzKg | H1RsTFssl | comment | 1,489,898,918,191 | B1fUVMzKg | [
"everyone"
] | [
"~Xun_Huang1"
] | title: Updated results
comment: We have updated new results with improved quality and qualitative comparisons with Ulyanov et al. 2017, Chen and Schmidt 2016, and Gatys et al. 2016. The main difference is to use relu4_1 instead of relu3_1 of the VGG network. |
Subsets and Splits