forum_id (string, 8-20 chars) | forum_title (string, 4-171 chars) | forum_authors (sequence, 0-25 items) | forum_abstract (string, 4-4.27k chars) | forum_keywords (sequence, 0-10 items) | forum_pdf_url (string, 38-50 chars) | note_id (string, 8-13 chars) | note_type (string, 6 classes) | note_created (int64, 1,360B-1,736B) | note_replyto (string, 8-20 chars) | note_readers (sequence, 1-5 items) | note_signatures (sequence, 1 item) | note_text (string, 10-16.6k chars) |
---|---|---|---|---|---|---|---|---|---|---|---|---
SklVpqHi | Temporal Convolutional Networks: A Unified Approach to Action Segmentation | [
"Colin Lea",
"Rene Vidal",
"Austin Reiter",
"Gregory D. Hager"
] | The dominant paradigm for video-based action segmentation is composed of two steps: first, for each frame, compute low-level features using Dense Trajectories or a Convolutional Neural Network that encode spatiotemporal information locally, and second, input these features into a classifier that captures high-level temporal relationships, such as a Recurrent Neural Network (RNN). While often effective, this decoupling requires specifying two separate models, each with their own complexities, and prevents capturing more nuanced long-range spatiotemporal relationships. We propose a unified approach, as demonstrated by our Temporal Convolutional Network (TCN), that hierarchically captures relationships at low-, intermediate-, and high-level time-scales. Our model achieves superior or competitive performance using video or sensor data on three public action segmentation datasets and can be trained in a fraction of the time it takes to train an RNN. | [] | https://openreview.net/pdf?id=SklVpqHi | HJY3sBTs | review | 1,473,235,888,788 | SklVpqHi | [
"everyone"
] | [
"(anonymous)"
] | title: Review
rating: 10: Top 5% of accepted papers, seminal paper
review: This is a very interesting paper that focuses on action segmentation in videos of arbitrary length. The main point that the paper tries to make is that almost all methods to date first extract some spatio-temporal local features from short video intervals, and then encode them using models that capture higher-order correlations. To this end, the paper proposes to unify the process by processing the video in its entirety at once. Namely, the network does not process frames or clips; instead it uses the whole video as input to the first layer, which convolves it and outputs a "latent video" of half the size (because of temporal max-pooling). These layers form the encoder part of the network, and similar, but mirrored, layers follow for the decoder part with up-pooling (or "deconvolutions", although this is technically not a correct term). The final result is a temporal action segmentation.
Some questions that would be interesting to have them answered are the following.
- In the experiments section it is mentioned that some of the convolutional filters learn spatio-temporal shifts. Would it be possible to somehow visualize this? Are these filters different from what one would get using a 3D-CNN?
- How could one use this network in case when the video is not already recorded, e.g. during live streaming?
- The datasets used are rather small and it is not very clear whether there is a train/test split. Are there train/test splits? Also, how does the network avoid the overfitting that might occur with so little data?
- It is quite interesting that the same network can also be used with accelerometers. An obvious question is: could you reuse these networks for guiding visual networks and/or for learning the temporal shifts between different actions?
- Perhaps it is worth checking the works "Rank Pooling for Action Recognition, PAMI 2016" and "Dynamic Image Networks for Action Recognition, CVPR 2016", which also propose mechanisms for pooling long-term temporal information from videos.
confidence: 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature |
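The encoder-decoder that the review above describes (1D convolutions with temporal max-pooling, mirrored by an upsampling decoder that emits per-frame action labels) can be sketched minimally as follows. This is not the authors' exact TCN: the layer widths, filter sizes, and the use of PyTorch are assumptions made purely for illustration.
```python
# Minimal sketch of a temporal convolutional encoder-decoder (illustrative only).
# Input: per-frame features of shape (batch, F0, T); output: per-frame class scores.
import torch
import torch.nn as nn

class ToyTCN(nn.Module):
    def __init__(self, in_channels=64, hidden=96, num_classes=10):
        super().__init__()
        # Encoder: 1D convolutions over time, each followed by temporal max-pooling
        # that halves the sequence length (the "latent video" of half the size).
        self.encoder = nn.Sequential(
            nn.Conv1d(in_channels, hidden, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(hidden, hidden, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # Decoder: mirrored layers with temporal upsampling back to full length.
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2),
            nn.Conv1d(hidden, hidden, kernel_size=5, padding=2), nn.ReLU(),
            nn.Upsample(scale_factor=2),
            nn.Conv1d(hidden, num_classes, kernel_size=5, padding=2),
        )

    def forward(self, x):                      # x: (batch, F0, T), T divisible by 4 here
        return self.decoder(self.encoder(x))   # (batch, num_classes, T)

scores = ToyTCN()(torch.randn(2, 64, 128))     # per-frame class scores for 2 toy videos
```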
SklVpqHi | Temporal Convolutional Networks: A Unified Approach to Action Segmentation | [
"Colin Lea",
"Rene Vidal",
"Austin Reiter",
"Gregory D. Hager"
] | The dominant paradigm for video-based action segmentation is composed of two steps: first, for each frame, compute low-level features using Dense Trajectories or a Convolutional Neural Network that encode spatiotemporal information locally, and second, input these features into a classifier that captures high-level temporal relationships, such as a Recurrent Neural Network (RNN). While often effective, this decoupling requires specifying two separate models, each with their own complexities, and prevents capturing more nuanced long-range spatiotemporal relationships. We propose a unified approach, as demonstrated by our Temporal Convolutional Network (TCN), that hierarchically captures relationships at low-, intermediate-, and high-level time-scales. Our model achieves superior or competitive performance using video or sensor data on three public action segmentation datasets and can be trained in a fraction of the time it takes to train an RNN. | [] | https://openreview.net/pdf?id=SklVpqHi | ryKRPZgn | review | 1,473,415,121,523 | SklVpqHi | [
"everyone"
] | [
"~Silvia_Laura_Pintea1"
] | title: Temporal Convolutional Networks: A Unified Approach to Action Segmentation
rating: 8: Top 50% of accepted papers, clear accept
review: 1. Paper and Review Summary.
The paper proposes a temporal convolutional network towards action segmentation problems.
The authors claim that the individual components involved in their temporal network (1D convolutions, pooling and channel-wise normalization) enable better performance for the action segmentation task.
Moreover, they propose to solve the problem of action segmentation in one optimization rather than decoupling the feature representation from the temporal modeling.
-------------------
Positive Points:
-------------------
+ The idea of using a temporal convolutional network for action segmentation seems novel.
+ The method achieves promising results in practice.
+ Claimed training efficiency for the proposed model.
-------------------
Negative Points:
-------------------
- The exposition is rather unclear (figures and formulas).
- Certain claims would benefit from more in-depth analysis.
- More emphasis is needed on the difference between the proposed model and existing models such as spatio-temporal CNNs and 3D convolutional networks (Shuiwang Ji et al., PAMI, 2013).
2. Paper strengths.
The paper proposes the idea of using temporal convolutions towards action segmentation problems which seems to be effective in practice.
The experimental results look promising on the considered datasets when compared to other existing methods, and there is training efficiency when compared to other temporal models such as RNNs/LSTMs.
3. Paper suggested improvements.
Related work could be better structured into categories: (i) temporal deep learning methods, (ii) methods focusing on action segmentation.
Also, the paper would benefit from a clearer distinction between the proposed model and existing works such as spatio-temporal CNNs and 3D convolutional networks.
Figure 1 is rather unclear, and neither the text nor the caption properly explains it.
A clearer diagram of the proposed temporal network, with more explicit layer dimensionality and input-output correspondences, would be welcome.
The formulas in section 2 would benefit from more clarification. There are many subscripts and indices that impede readability.
In the introduction the authors claim:
- "1D convolutions capture how features at lower levels change over time",
- "pooling enables efficient computation of long-range temporal patterns",
- "[channel-wise] normalization improves robustness towards various environmental conditions".
Gaining insight into why this is so would be of interest to the reader.
It would be wonderful if the authors could support each one of these claims with a small qualitative or quantitative evaluation.
In the experimental section, especially for the "JIGSAWS" dataset on the Vision-based data, the proposed method obtains considerably better results than the ST-CNN (which is generalized by the proposed model), especially for the "edit" measure.
An analysis of which part of the proposed model accounts for this difference in performance would be welcome.
-------------------
Detailed Comments:
-------------------
- In formula (3) D_t^(1) is never defined.
- In section 4, paragraph 3 the authors print numeric results in the text which makes it very difficult to visualize the difference.
Please put these results in a table.
- Section 2, end of page "These helped at times and hurt in others.", needs rephrasing.
- "The aforementioned solution was superior in aggregate.", unclear, what do you mean by: "superior in aggregate".
- Page numbering would also be welcomed.
confidence: 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature |
SklVpqHi | Temporal Convolutional Networks: A Unified Approach to Action Segmentation | [
"Colin Lea",
"Rene Vidal",
"Austin Reiter",
"Gregory D. Hager"
] | The dominant paradigm for video-based action segmentation is composed of two steps: first, for each frame, compute low-level features using Dense Trajectories or a Convolutional Neural Network that encode spatiotemporal information locally, and second, input these features into a classifier that captures high-level temporal relationships, such as a Recurrent Neural Network (RNN). While often effective, this decoupling requires specifying two separate models, each with their own complexities, and prevents capturing more nuanced long-range spatiotemporal relationships. We propose a unified approach, as demonstrated by our Temporal Convolutional Network (TCN), that hierarchically captures relationships at low-, intermediate-, and high-level time-scales. Our model achieves superior or competitive performance using video or sensor data on three public action segmentation datasets and can be trained in a fraction of the time it takes to train an RNN. | [] | https://openreview.net/pdf?id=SklVpqHi | rJ3tKsQn | review | 1,473,653,123,786 | SklVpqHi | [
"everyone"
] | [
"(anonymous)"
] | title: Temporal Convolutional Networks
rating: 7: Good paper, accept
review: This paper proposes an encoder-decoder neural network that takes as input the per-frame features generated by a CNN and produces as output the class labels associated with the video segments. The encoder-decoder framework consists of (de)-convolution, (de)-pooling and normalization layers, and the claim is that this network can learn long-range dynamics of actions as it can see a larger temporal receptive field. Experiments are provided on three datasets and demonstrate significant promise.
Pros:
1) An interesting way to train a CNN for temporal dynamics. As the per-frame features used are low-dimensional, the framework could be trained for sequences of (chosen) arbitrary lengths. Further, it could also incorporate other features including trajectories, or other data modalities such as accelerometer data (as described).
2) Experimental results on 3 datasets show significant promise over ST-CNN or RNN based schemes.
Criticisms:
1) It is not clear from the exposition why the proposed scheme is better than "correctly trained" LSTMs/RNNs.
2) As far as I can see, the proposal is to use a temporal pooling classifier on the output of a CNN. It is not clear why an encoder-decoder framework is useful. Why not directly train a classifier on the output of the encoder? In that case, how is the method better than a late/slow-fusion strategy (as described in papers such as Karpathy et al. CVPR 2014)?
3) It is not clear if the whole framework is trained end-to-end. The paper claims in the introduction to use a unified approach -- is it just that it proposes to use a neural network as a classifier rather than an SVM, or are the gradients from the encoder-decoder propagated back to the per-frame CNN?
4) Another important detail missing from the experiments is the actual number of input frames passed to the encoder-decoder setup. The paper says it is the average number of frames in the sequences. It would be good to describe what this number is for each dataset.
Overall, I think this paper has some interesting ideas and good experimental results. But the technical exposition is missing important details. It can be a good workshop paper.
confidence: 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature |
SklVpqHi | Temporal Convolutional Networks: A Unified Approach to Action Segmentation | [
"Colin Lea",
"Rene Vidal",
"Austin Reiter",
"Gregory D. Hager"
] | The dominant paradigm for video-based action segmentation is composed of two steps: first, for each frame, compute low-level features using Dense Trajectories or a Convolutional Neural Network that encode spatiotemporal information locally, and second, input these features into a classifier that captures high-level temporal relationships, such as a Recurrent Neural Network (RNN). While often effective, this decoupling requires specifying two separate models, each with their own complexities, and prevents capturing more nuanced long-range spatiotemporal relationships. We propose a unified approach, as demonstrated by our Temporal Convolutional Network (TCN), that hierarchically captures relationships at low-, intermediate-, and high-level time-scales. Our model achieves superior or competitive performance using video or sensor data on three public action segmentation datasets and can be trained in a fraction of the time it takes to train an RNN. | [] | https://openreview.net/pdf?id=SklVpqHi | HJroLBX2 | comment | 1,473,627,804,744 | SklVpqHi | [
"everyone"
] | [
"~Dinesh_Jayaraman1"
] | title: 1-D temporal convolutions to exploit long-range temporal associations for action segmentation
comment: This paper uses intuitions developed in recent work on semantic segmentation of static images, to develop a simple and efficient architecture to exploit long-range temporal associations in video, for the task of temporal segmentation of actions in video. The temporal convolutional formulation proposed here offers a natural way to aggregate information over longer and longer time window contexts in a hierarchy of layers before making decisions about individual time-steps.
Strengths:
> Intuitive and technically sound algorithm design
> Strong empirical results on 3 datasets
> Strong relevance to the workshop
Weaknesses:
> The paper sets up as a departure from the paradigm of computing temporally local features and then assimilating them into final judgements in two decoupled stages (see Abstract and Intro). In my reading, this contrast is not quite as sharp as first made out to be. The proposed model also uses 2 decoupled stages: one for extracting local features (in the extreme case of frame-wise features), and then a second that integrates those features in the proposed temporal CNN model. It thus appears quite compatible with the prior paradigm.
> The exposition can be clearer, e.g., (1) the writing around Eq. 1 is more complicated than necessary, and (2) Sec. 4, para. 2, claiming temporal offset learning etc., is hard to understand.
> The familiar convolution-pool-normalization pipeline (from spatial convolution work) is almost introduced as a new contribution in the paper (Sec 1, para 2), from reading the introduction. This claim can be weakened a little, to only claim the adaptation of this architecture to the temporal setting.
Questions, suggestions, clarifications:
> One technical choice in particular stuck out as slightly odd and warranting some explanation. e.g. why does the decoder (Sec 2, para 7) project back into F0 feature maps (where F0 is the feature dimension of the input)? This seems a somewhat arbitrary choice given that the final output is only 1-channel, and might unnecessarily increase the complexity of the model for high dimensional input features (large F0).
> It is a little unclear why external sensor signals such as accelerometers are introduced (Sec 2, para1 and Sec 3, Salad and Gesture datasets) in this task. Some clarification around this could aid readability.
> Implementation: "cross-entropy" loss is a little puzzling, where softmax seems appropriate. Are annotations in the form of individual labels per time-step, or distributions of labels?
Recommendation: Strong accept
Explanation: This submission has clear technical merit and strong results. While there are minor issues around exposition and some claims, these may be overlooked due to length restrictions and the fact that the paper is likely to be of high interest and offer discussion points for workshop attendees. |
rkPOKkrj | Integrated Variational And Nearest Neighbor field (IVANN) for Optical Flow | [
"Zhuoyuan Chen",
"Ying Wu",
"Hailin Jin",
"Zhe Lin",
"Scott Cohen"
] | It is a fundamental problem to construct accurate dense correspondences between two images. Despite the efforts and promising methods handling relatively small motion, one remaining challenge is induced by large and complex non-rigid motion. Aiming at this challenge, the new method proposed exploits the mutual boosting between the variational flow and the nearest-neighbor field (NNF). The proposed method “IVANN” gives a very effective solution under rather complex motion, and currently achieves state-of-the-art performance on both the Middlebury [3] and MPI-Sintel [7] benchmarks. | [] | https://openreview.net/pdf?id=rkPOKkrj | rkZcNDnj | review | 1,473,176,712,905 | rkPOKkrj | [
"everyone"
] | [
"(anonymous)"
] | title: Review 1
rating: 7: Good paper, accept
review: This paper focuses on generating optical flow fields for videos. More specifically, the desired goal is to provide more reliable optical flow estimates for videos where large and complex displacement is present. This paper proposes the integration of two different types of optical flow into a single framework via quadratic binary programming.
As the initialization is often crucial for computing good optical flow vector fields, the paper proposes to incorporate Nearest Neighbor Fields (NNF) into the pipeline. NNFs are computed using PatchMatch [Barnes et al., SIGGRAPH 2009], and Non-Rigid Dense Correspondence is then applied to aggregate the NNF. Last, the NNF is combined with variational flow to obtain the final optical flow field.
The paper is interesting in that it proposes this new initialization that could potentially make the model more robust. The following questions should be addressed:
- For texture-poor scenes, how well do PatchMatch and the resulting NNF perform? Are they accurate, or is the initialization more inconsistent than a 0-initialization?
- Where is the multi-scale component of the method, as presented in Figure 2?
- How does the binary programming solver work in eq. (2)? Does it combine the two vectors somehow, or does it select one of the two per vector?
- Since the advantage is on large and complex motions, the paper would become experimentally stronger if these conditions were created artificially and the proposed method were shown to work better than the baselines. For instance, one can generate more videos in which intermediate frames are removed, artificially resulting in larger displacements and more complex motions.
confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
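The reviewer's question about eq. (2) (combine the two candidate flows, or select one of the two per vector?) can be made concrete with a toy sketch of the simplest "select one per pixel" reading. Everything below (the brightness-constancy cost, the nearest-rounding warp, the random stand-in data) is an assumption for illustration; it is not the paper's actual QPBO formulation.
```python
# Toy illustration of per-pixel fusion of two candidate flow fields
# (variational flow vs. NNF) by choosing, at every pixel, the candidate
# with the lower brightness-constancy cost. Illustrative reading only;
# NOT the paper's QPBO solver.
import numpy as np

def fuse_flows(img1, img2, flow_var, flow_nnf):
    """img1, img2: (H, W) grayscale; flow_*: (H, W, 2) as (u, v) in pixels."""
    H, W = img1.shape
    ys, xs = np.mgrid[0:H, 0:W]

    def data_cost(flow):
        # Warp target coordinates by the candidate flow and clip to the image.
        x2 = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, W - 1)
        y2 = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, H - 1)
        return np.abs(img2[y2, x2] - img1)              # brightness-constancy error

    pick_nnf = data_cost(flow_nnf) < data_cost(flow_var)    # binary label per pixel
    return np.where(pick_nnf[..., None], flow_nnf, flow_var)

# Example with random stand-ins for real frames and flow estimates:
img1, img2 = np.random.rand(48, 64), np.random.rand(48, 64)
fv, fn = np.random.randn(48, 64, 2), np.random.randn(48, 64, 2)
fused = fuse_flows(img1, img2, fv, fn)                  # (48, 64, 2)
```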
rkPOKkrj | Integrated Variational And Nearest Neighbor field (IVANN) for Optical Flow | [
"Zhuoyuan Chen",
"Ying Wu",
"Hailin Jin",
"Zhe Lin",
"Scott Cohen"
] | It is a fundamental problem to construct accurate dense correspondences between two images. Despite the efforts and promising methods handling relatively small motion, one remaining challenge is induced by large and complex non-rigid motion. Aiming at this challenge, the new method proposed exploits the mutual boosting between the variational flow and the nearest-neighbor field (NNF). The proposed method “IVANN” gives a very effective solution under rather complex motion, and currently achieves state-of-the-art performance on both the Middlebury [3] and MPI-Sintel [7] benchmarks. | [] | https://openreview.net/pdf?id=rkPOKkrj | BJf4GeRo | review | 1,473,278,505,961 | rkPOKkrj | [
"everyone"
] | [
"~Jan_C_van_Gemert1"
] | title: Review
rating: 7: Good paper, accept
review: 1. Paper Summary.
The paper is on optical flow estimation. It offers a middle ground between the PatchMatch nearest-neighbor field and Horn-Schunck-like variational optical flow. Results on Middlebury are excellent, on Sintel they are good.
2. Paper Strengths.
+ Well written
+ Well versed in the literature
+ Best performer on the Middlebury dataset
3. Paper Weaknesses.
- Not excellent on Sintel (but still good)
4. Preliminary Rating.
Poster
5. Preliminary Evaluation.
Detailed comments:
Clarity: Fig 2, I find this figure hard to parse.
Clarity: Please spell out the QPBO abbreviation and maybe add a bit more detail about what happens, so the paper is self-contained.
Results: It would strengthen the paper to discuss its weak points. What are they?
Results: Why does the method not work as well on MPI-Sintel? I would have expected better results on Sintel than on Middlebury because Sintel has larger motions.
Clarity: In section 3, the "EPE matched" numbers for Sintel are not presented in the paper.
Suggestion: Will code become available?
confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
rkPOKkrj | Integrated Variational And Nearest Neighbor field (IVANN) for Optical Flow | [
"Zhuoyuan Chen",
"Ying Wu",
"Hailin Jin",
"Zhe Lin",
"Scott Cohen"
] | It is a fundamental problem to construct accurate dense correspondences between two images. Despite the efforts and promising methods handling relatively small motion, one remaining challenge is induced by large and complex non-rigid motion. Aiming at this challenge, the new method proposed exploits the mutual boosting between the variational flow and the nearest-neighbor field (NNF). The proposed method “IVANN” gives a very effective solution under rather complex motion, and currently achieves state-of-the-art performance on both the Middlebury [3] and MPI-Sintel [7] benchmarks. | [] | https://openreview.net/pdf?id=rkPOKkrj | Sy6juSZn | review | 1,473,497,253,569 | rkPOKkrj | [
"everyone"
] | [
"~Jose_Oramas1"
] | rating: 10: Top 5% of accepted papers, seminal paper
review: The paper proposes IVANN, a method to deal with the large and complex motion when computing optical flow.
IVANN consists of matching similar patches between neighboring frames via a denoised nearest-neighbor field (NNF). Moreover, non-rigid dense correspondence is applied to increase the robustness achieved by the NNF.
Finally, an adaptive fusion between the output of traditional variational flow and the output of the NNF stage is performed in order to compensate for known weaknesses of NNF.
The proposed method is evaluated on the Middlebury and MPI-Sintel benchmarks.
Strong Points
- The content of the paper is clear and easy to follow.
- The proposed method achieves competitive results in various benchmarks.
Weak Points
- Even though the authors refer to the computational complexity of the method at the end of the introduction, no indication is given regarding the computation times of the current implementation.
- A discussion of the limitations of the proposed method is not present.
confidence: 3: The reviewer is fairly confident that the evaluation is correct |
SJGfklStl | Exploring the role of deep learning for particle tracking in high energy physics | [
"Mayur Mudigonda",
"Dustin Anderson",
"Jean-Roch Vilmant",
"Josh Bendavid",
"Maria Spiropoulou",
"Stephan Zheng",
"Aristeidis Tsaris",
"Giuseppe Cerati",
"Jim Kowalkowski",
"Lindsey Gray",
"Panagiotis Spentzouris",
"Steve Farrell",
"Jesse Livezey",
"Prabhat",
"Paolo Calafiura"
] | Tracking particles in a collider is a challenging problem due to collisions, imperfections in sensors and the nonlinear trajectories of particles in a magnetic field. Presently, the algorithms employed to track particles are best suited to capture linear dynamics. We believe that incremental optimization of current LHC (Large Hadron Collider) tracking algorithms has reached the point of diminishing returns. These algorithms will not be able to cope with the 10-100x increase in HL-LHC (high luminosity) data rates anticipated to exceed O(100) GB/s by 2025, without large investments in computing hardware and software development or without severely curtailing the physics reach of HL-LHC experiments. An optimized particle tracking algorithm that scales linearly with LHC luminosity (or events detected), rather than quadratically or worse, may lead by itself to an order of magnitude improvement in the track processing throughput without affecting the track identification performance, hence maintaining the physics performance intact. Here, we present preliminary results comparing traditional Kalman filtering based methods for tracking versus an LSTM approach. We find that an LSTM based solution does not outperform a Kalman filter based solution, arguing for exploring ways to encode a priori information. | [] | https://openreview.net/pdf?id=SJGfklStl | HkzPutpog | comment | 1,490,028,633,741 | SJGfklStl | [
"everyone"
] | [
"ICLR.cc/2017/pcs"
] | decision: Reject
title: ICLR committee final decision |
SJGfklStl | Exploring the role of deep learning for particle tracking in high energy physics | [
"Mayur Mudigonda",
"Dustin Anderson",
"Jean-Roch Vilmant",
"Josh Bendavid",
"Maria Spiropoulou",
"Stephan Zheng",
"Aristeidis Tsaris",
"Giuseppe Cerati",
"Jim Kowalkowski",
"Lindsey Gray",
"Panagiotis Spentzouris",
"Steve Farrell",
"Jesse Livezey",
"Prabhat",
"Paolo Calafiura"
] | Tracking particles in a collider is a challenging problem due to collisions, imperfections in sensors and the nonlinear trajectories of particles in a magnetic field. Presently, the algorithms employed to track particles are best suited to capture linear dynamics. We believe that incremental optimization of current LHC (Large Hadron Collider) tracking algorithms has reached the point of diminishing returns. These algorithms will not be able to cope with the 10-100x increase in HL-LHC (high luminosity) data rates anticipated to exceed O(100) GB/s by 2025, without large investments in computing hardware and software development or without severely curtailing the physics reach of HL-LHC experiments. An optimized particle tracking algorithm that scales linearly with LHC luminosity (or events detected), rather than quadratically or worse, may lead by itself to an order of magnitude improvement in the track processing throughput without affecting the track identification performance, hence maintaining the physics performance intact. Here, we present preliminary results comparing traditional Kalman filtering based methods for tracking versus an LSTM approach. We find that an LSTM based solution does not outperform a Kalman filter based solution, arguing for exploring ways to encode a priori information. | [] | https://openreview.net/pdf?id=SJGfklStl | Bk-2giLoe | official_review | 1,489,576,104,749 | SJGfklStl | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper151/AnonReviewer2"
] | title: marginal topic, partial results
rating: 4: Ok but not good enough - rejection
review: The particular topic of high energy physics would likely be of marginal interest to most attendees, though I might be wrong about this. What's more, the paper does not present any results improving on the current state of the art based on analytical techniques.
Since the authors' stated goal is to seek advice and input from the learning community at large, it would seem that they would be better served by attending ICLR and striking up conversations with researchers with relevant experience rather than organising a topical workshop.
confidence: 1: The reviewer's evaluation is an educated guess |
SJGfklStl | Exploring the role of deep learning for particle tracking in high energy physics | [
"Mayur Mudigonda",
"Dustin Anderson",
"Jean-Roch Vilmant",
"Josh Bendavid",
"Maria Spiropoulou",
"Stephan Zheng",
"Aristeidis Tsaris",
"Giuseppe Cerati",
"Jim Kowalkowski",
"Lindsey Gray",
"Panagiotis Spentzouris",
"Steve Farrell",
"Jesse Livezey",
"Prabhat",
"Paolo Calafiura"
] | Tracking particles in a collider is a challenging problem due to collisions, imperfections in sensors and the nonlinear trajectories of particles in a magnetic field. Presently, the algorithms employed to track particles are best suited to capture linear dynamics. We believe that incremental optimization of current LHC (Large Hadron Collider) tracking algorithms has reached the point of diminishing returns. These algorithms will not be able to cope with the 10-100x increase in HL-LHC (high luminosity) data rates anticipated to exceed O(100) GB/s by 2025, without large investments in computing hardware and software development or without severely curtailing the physics reach of HL-LHC experiments. An optimized particle tracking algorithm that scales linearly with LHC luminosity (or events detected), rather than quadratically or worse, may lead by itself to an order of magnitude improvement in the track processing throughput without affecting the track identification performance, hence maintaining the physics performance intact. Here, we present preliminary results comparing traditional Kalman filtering based methods for tracking versus an LSTM approach. We find that an LSTM based solution does not outperform a Kalman filter based solution, arguing for exploring ways to encode a priori information. | [] | https://openreview.net/pdf?id=SJGfklStl | SkeHkD1sg | official_review | 1,489,100,599,764 | SJGfklStl | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper151/AnonReviewer1"
] | title: Preliminary work
rating: 4: Ok but not good enough - rejection
review: * The authors seem to acknowledge the premature nature of this submission in the last sentence of the discussion, and focus on seeking advice. I scored the paper in the standard way.
The paper evaluates an LSTM vs. a Kalman filter on the particle tracking problem in high-energy physics, using simulated data. The paper presents a well-motivated problem, but provides no significant results/novel insights.
Pros:
-Well-motivated problem
-Part of interesting open project: https://heptrkx.github.io/
Cons:
-Experimental results are neither surprising nor conclusive. Imperfect training, insufficient network capacity or expressivity, overfitting, limited data, etc. can easily cause an LSTM to underperform the best Kalman filter built with expert knowledge. The experimental descriptions do not show sufficient depth in evaluation.
-The motivating problem/solution is not novel. There are a number of prior works that can be cited for nonlinear state estimation with neural networks or for combining priors with neural nets. Structured VAE (Johnson et al., 2016) and deep KF (Krishnan et al., 2015), for example, explored incorporating structured priors with rich neural-network-parametrized observation models. Backprop KF (Haarnoja et al., 2016) brought discriminative training into state estimation and avoided some problems of these generative-model papers.
-The run-time of the different implementations should be detailed with a scalability analysis, as that is one of the main motivations.
confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
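For readers unfamiliar with the baseline discussed in the reviews above, a linear Kalman filter's predict/update cycle looks as follows. This is a generic textbook sketch (a constant-velocity toy model in numpy), not the tracking code used in the paper.
```python
# Generic linear Kalman filter step (textbook form), shown only to make the
# baseline in the reviews concrete; not the paper's tracking implementation.
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle.
    x: state estimate, P: state covariance, z: new measurement,
    F: transition model, H: measurement model, Q/R: process/measurement noise."""
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update
    S = H @ P_pred @ H.T + R                  # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy constant-velocity track in 1D: state = [position, velocity].
F = np.array([[1.0, 1.0], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q, R = 1e-3 * np.eye(2), 1e-1 * np.eye(1)
x, P = np.zeros(2), np.eye(2)
for z in np.array([[0.9], [2.1], [2.9], [4.2]]):   # noisy position measurements
    x, P = kalman_step(x, P, z, F, H, Q, R)
```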
BJNXJgVKg | Similarity preserving compressions of high dimensional sparse data | [
"Raghav Kulkarni",
"Rameshwar Pratap"
] | The rise of the internet has resulted in an explosion of data consisting of millions of
articles, images, songs, and videos. Most of this data is high dimensional and
sparse, where the standard compression schemes, such as LSH, become inefficient
due to at least one of the following reasons: 1. Compression length is
nearly linear in the dimension and grows inversely with the sparsity 2. Randomness
used grows linearly with the product of dimension and compression length.
We propose an efficient compression scheme mapping binary vectors into binary
vectors and simultaneously preserving Hamming distance and Inner Product. Our
schemes avoid all the above mentioned drawbacks for high dimensional sparse
data. The length of our compression depends only on the sparsity and is independent
of the dimension of the data, and our schemes work in the streaming setting
as well. We generalize our scheme for real-valued data and obtain compressions
for Euclidean distance, Inner Product, and k-way Inner Product. | [
"Theory"
] | https://openreview.net/pdf?id=BJNXJgVKg | B1sbDREix | official_review | 1,489,458,947,025 | BJNXJgVKg | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper60/AnonReviewer2"
] | title: Lack of novelty
rating: 3: Clear rejection
review: I feel the proposed approach is just a special case of the well-known FJLT, where a sparse +-1 random matrix is used to multiply a signal efficiently while preserving inner products of signal vectors.
The only difference is that in the proposed approach the sampling is without replacement (i.e., one entry can only contribute to one bucket). I don't think this is an important difference. The theoretical results don't show why sampling without replacement matters, either.
confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
BJNXJgVKg | Similarity preserving compressions of high dimensional sparse data | [
"Raghav Kulkarni",
"Rameshwar Pratap"
] | The rise of the internet has resulted in an explosion of data consisting of millions of
articles, images, songs, and videos. Most of this data is high dimensional and
sparse, where the standard compression schemes, such as LSH, become inefficient
due to at least one of the following reasons: 1. Compression length is
nearly linear in the dimension and grows inversely with the sparsity 2. Randomness
used grows linearly with the product of dimension and compression length.
We propose an efficient compression scheme mapping binary vectors into binary
vectors and simultaneously preserving Hamming distance and Inner Product. Our
schemes avoid all the above mentioned drawbacks for high dimensional sparse
data. The length of our compression depends only on the sparsity and is independent
of the dimension of the data, and our schemes work in the streaming setting
as well. We generalize our scheme for real-valued data and obtain compressions
for Euclidean distance, Inner Product, and k-way Inner Product. | [
"Theory"
] | https://openreview.net/pdf?id=BJNXJgVKg | Syxba3Sjx | official_review | 1,489,517,815,704 | BJNXJgVKg | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper60/AnonReviewer1"
] | title: Clearly written but unsure of novelty
rating: 5: Marginally below acceptance threshold
review: The paper proposes a technique to compress high-dimensional sparse vectors while preserving Hamming distance and inner products. The approach amounts to multiplying the sparse (say, column) vector by a sparse matrix with mutually orthogonal binary or +/-1-valued rows.
The work reads well and is clearly presented. However, the work fails to mention directly related approaches such as the Sparse JL transform or the Fast JL transform. From my understanding, these approaches share most (all?) of the benefits of the proposed approach, so I have concerns about the novelty. At a minimum, a discussion of the differences/tradeoffs compared to these prior techniques is required.
I must say though that I am not very familiar with this area or the mentioned approaches, so it is difficult for me to fully evaluate novelty.
confidence: 3: The reviewer is fairly confident that the evaluation is correct |
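To make concrete the kind of scheme both reviews refer to (each coordinate hashed to one bucket with a random +/-1 sign so that inner products are preserved in expectation), here is a minimal feature-hashing-style sketch. The hash construction and parameters are illustrative assumptions, not the paper's exact compression or its analysis.
```python
# Minimal sketch of a sparse random-sign bucketing compression (feature-hashing
# style): each coordinate lands in exactly one bucket with a random +/-1 sign,
# so inner products are preserved in expectation. Illustrative only.
import numpy as np

def make_sketcher(dim, num_buckets, seed=0):
    rng = np.random.default_rng(seed)
    bucket = rng.integers(0, num_buckets, size=dim)   # one bucket per coordinate
    sign = rng.choice([-1.0, 1.0], size=dim)          # random +/-1 per coordinate
    def sketch(x):
        out = np.zeros(num_buckets)
        np.add.at(out, bucket, sign * x)              # scatter-add into buckets
        return out
    return sketch

dim, k = 10_000, 256
sketch = make_sketcher(dim, k)
x, y = np.zeros(dim), np.zeros(dim)
x[np.random.choice(dim, 30, replace=False)] = 1.0     # sparse binary vectors
y[np.random.choice(dim, 30, replace=False)] = 1.0
print(x @ y, sketch(x) @ sketch(y))                   # similar in expectation
```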
BJNXJgVKg | Similarity preserving compressions of high dimensional sparse data | [
"Raghav Kulkarni",
"Rameshwar Pratap"
] | The rise of the internet has resulted in an explosion of data consisting of millions of
articles, images, songs, and videos. Most of this data is high dimensional and
sparse, where the standard compression schemes, such as LSH, become inefficient
due to at least one of the following reasons: 1. Compression length is
nearly linear in the dimension and grows inversely with the sparsity 2. Randomness
used grows linearly with the product of dimension and compression length.
We propose an efficient compression scheme mapping binary vectors into binary
vectors and simultaneously preserving Hamming distance and Inner Product. Our
schemes avoid all the above mentioned drawbacks for high dimensional sparse
data. The length of our compression depends only on the sparsity and is independent
of the dimension of the data, and our schemes work in the streaming setting
as well. We generalize our scheme for real-valued data and obtain compressions
for Euclidean distance, Inner Product, and k-way Inner Product. | [
"Theory"
] | https://openreview.net/pdf?id=BJNXJgVKg | ByvQdKTix | comment | 1,490,028,574,938 | BJNXJgVKg | [
"everyone"
] | [
"ICLR.cc/2017/pcs"
] | decision: Reject
title: ICLR committee final decision |
r1FbV6NYe | A Priori Modeling of Information and Intelligence | [
"Marcus Abundis"
] | This workshop explores primitive structural fundaments in information, and then intelligence, as a model of ‘thinking like nature’ (natural informatics). It examines the task of designing a general adaptive intelligence from a low-order (non- anthropic) perspective, to arrive at a least-ambiguous and most-general computa- tional/developmental foundation. | [] | https://openreview.net/pdf?id=r1FbV6NYe | S1qqvq55x | comment | 1,488,787,346,259 | B1a3jcKqg | [
"everyone"
] | [
"~Marcus_Abundis1"
] | title: Who then, works on *general intelligence*?
comment: Thank you for your comment. As my work stresses the modeling of *general* intelligence, it is necessarily 'broad' in its presentation. Still, I take your note to indicate ICLR does not cover general intelligence, despite obvious 'learning representation' issues. If the reviewer knows of more appropriate venues for submitting work on *general intelligence* I would be truly grateful to hear of them. Thank you for your consideration! |
r1FbV6NYe | A Priori Modeling of Information and Intelligence | [
"Marcus Abundis"
] | This workshop explores primitive structural fundaments in information, and then intelligence, as a model of ‘thinking like nature’ (natural informatics). It examines the task of designing a general adaptive intelligence from a low-order (non- anthropic) perspective, to arrive at a least-ambiguous and most-general computa- tional/developmental foundation. | [] | https://openreview.net/pdf?id=r1FbV6NYe | HyQv2clse | official_review | 1,489,181,787,005 | r1FbV6NYe | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper108/AnonReviewer3"
] | title: Not a good fit for ICLR & lacking references
rating: 3: Clear rejection
review: This paper explores the idea of artificial general intelligence and how it can broadly be achieved.
While interesting, the contribution seems to be too broad and vague to fit into the ICLR program on representation learning, since ICLR typically involves concrete challenges in and methods for learning representations. It seems that this work would be of greater interest either to cognitive science conferences or AI conferences (e.g. AAAI or IJCAI).
Perhaps most importantly, the paper does not include any references to prior work in this area and how the ideas in the paper fit with such existing work. This is crucial for evaluating the usefulness of the ideas and placing the work among existing related literature.
confidence: 3: The reviewer is fairly confident that the evaluation is correct |
r1FbV6NYe | A Priori Modeling of Information and Intelligence | [
"Marcus Abundis"
] | This workshop explores primitive structural fundaments in information, and then intelligence, as a model of ‘thinking like nature’ (natural informatics). It examines the task of designing a general adaptive intelligence from a low-order (non- anthropic) perspective, to arrive at a least-ambiguous and most-general computa- tional/developmental foundation. | [] | https://openreview.net/pdf?id=r1FbV6NYe | rytdAkjqx | official_comment | 1,488,809,585,344 | S1qqvq55x | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper108/AnonReviewer2"
] | title: Reply
comment: Representation learning here refers to a fairly specific set of technological issues. More suitable venues would be, depending on how you frame your work, either cognitive science venues like the Cognitive Science Society or applied AI venues like AAAI or IJCAI.
There's another issue here, though. You're not the first (or the ten-thousandth) person to write about issues like these. Unless your work is self-evidently novel and important in a way that's almost never the case, you *need* to situate your work in the context of specific open questions that are being actively studied within the communities that you're submitting your work to. Any academic conference will take a lack of recent citations as a big red flag. |
r1FbV6NYe | A Priori Modeling of Information and Intelligence | [
"Marcus Abundis"
] | This workshop explores primitive structural fundaments in information, and then intelligence, as a model of ‘thinking like nature’ (natural informatics). It examines the task of designing a general adaptive intelligence from a low-order (non- anthropic) perspective, to arrive at a least-ambiguous and most-general computa- tional/developmental foundation. | [] | https://openreview.net/pdf?id=r1FbV6NYe | HyeHBOYail | comment | 1,490,028,605,336 | r1FbV6NYe | [
"everyone"
] | [
"ICLR.cc/2017/pcs"
] | decision: Reject
title: ICLR committee final decision |
r1FbV6NYe | A Priori Modeling of Information and Intelligence | [
"Marcus Abundis"
] | This workshop explores primitive structural fundaments in information, and then intelligence, as a model of ‘thinking like nature’ (natural informatics). It examines the task of designing a general adaptive intelligence from a low-order (non- anthropic) perspective, to arrive at a least-ambiguous and most-general computa- tional/developmental foundation. | [] | https://openreview.net/pdf?id=r1FbV6NYe | HyJOXJ4sg | comment | 1,489,396,582,735 | HyQv2clse | [
"everyone"
] | [
"~Marcus_Abundis1"
] | title: References Added . . .
comment: With the understanding that the paper remains too broad for ICLR, I have none-the-less added the missing references and a short section that discusses the current literature. Thank you for your consideration! |
r1FbV6NYe | A Priori Modeling of Information and Intelligence | [
"Marcus Abundis"
] | This workshop explores primitive structural fundaments in information, and then intelligence, as a model of ‘thinking like nature’ (natural informatics). It examines the task of designing a general adaptive intelligence from a low-order (non- anthropic) perspective, to arrive at a least-ambiguous and most-general computa- tional/developmental foundation. | [] | https://openreview.net/pdf?id=r1FbV6NYe | B1a3jcKqg | official_review | 1,488,722,869,160 | r1FbV6NYe | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper108/AnonReviewer2"
] | title: Way outside the scope of ICLR
rating: 2: Strong rejection
review: This paper proposes *a workshop* on information, intelligence, evolution, subjectivity and their relationship. It's coming from an unaffiliated researcher.
I don't see any specific proposals that I disagree with, but I think this is straightforwardly inappropriate for ICLR. The paper doesn't discuss any concrete issues involving representation learning in the ICLR sense, and is too broad to be meaningfully evaluated or used by the community.
confidence: 3: The reviewer is fairly confident that the evaluation is correct |
SyuncaEKx | Adversarial Autoencoders for Novelty Detection | [
"Valentin Leveau",
"Alexis Joly"
] | In this paper, we address the problem of novelty detection, \textit{i.e} recognizing at test time if a data item comes from the training data distribution or not. We focus on Adversarial autoencoders (AAE) that have the advantage to explicitly control the distribution of the known data in the feature space. We show that when they are trained in a (semi-)supervised way, they provide consistent novelty detection improvements compared to a classical autoencoder. We further improve their performance by introducing an explicit rejection class in the prior distribution coupled with random input images to the autoencoder. | [
"Deep learning",
"Unsupervised Learning",
"Semi-Supervised Learning"
] | https://openreview.net/pdf?id=SyuncaEKx | B19SegZje | official_review | 1,489,203,266,544 | SyuncaEKx | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper112/AnonReviewer2"
] | rating: 5: Marginally below acceptance threshold
review: This paper uses adversarial auto-encoders for the purposes of novelty detection - detecting outliers that do not belong to the training data distribution. Three criteria are developed for determining novelty: reconstruction error, 1 - probability under the latent prior, and probability of belonging to an explicit rejection class. The idea is interesting, but there are no baselines comparing to other methods in the literature, so it's unclear exactly how good the proposed approach is. What about simply training a generative model like a VAE and evaluating the approximate log-likelihood with some threshold? Or perhaps using the discriminator in a GAN to determine if a test point is real or fake data? I'm sure there are other good baselines in the literature, some of which are cited in the introduction.
I would also recommend applying this to other datasets than MNIST, or even a synthetic dataset.
There is a typo in the paragraph at the end of section 2: P(f(x) | y(x = 0)) => P(f(x) | y(x) = 0).
In the Gaussian mixture, does C_i refer to component i? I think this doesn't refer to class i in the sense of supervision, but it's not entirely clear.
confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
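The three novelty criteria summarized at the start of the review above (reconstruction error, low density under the latent prior, and probability of an explicit rejection class) can be sketched as follows for a trained encoder/decoder with a Gaussian latent prior. The function names, the standard normal prior, and the way the rejection-class probability is exposed are assumptions for illustration, not the paper's implementation.
```python
# Illustrative novelty scores for an (adversarial) autoencoder with a known
# latent prior. encode/decode/reject_prob are assumed, hypothetical callables.
import numpy as np
from scipy.stats import multivariate_normal

def novelty_scores(x, encode, decode, reject_prob=None, latent_dim=2):
    z = encode(x)                                    # latent code, shape (latent_dim,)
    x_rec = decode(z)                                # reconstruction of x
    recon_error = np.mean((x - x_rec) ** 2)          # criterion 1: reconstruction error
    prior = multivariate_normal(mean=np.zeros(latent_dim))
    # Criterion 2: low density under the latent prior => high novelty.
    # (The pdf is normalized by its mode so the "1 -" reading gives a score in [0, 1].)
    prior_score = 1.0 - prior.pdf(z) / prior.pdf(np.zeros(latent_dim))
    scores = {"recon": recon_error, "prior": prior_score}
    if reject_prob is not None:                      # criterion 3: explicit rejection class
        scores["reject"] = reject_prob(z)            # p(y = rejection class | z)
    return scores
```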
SyuncaEKx | Adversarial Autoencoders for Novelty Detection | [
"Valentin Leveau",
"Alexis Joly"
] | In this paper, we address the problem of novelty detection, \textit{i.e} recognizing at test time if a data item comes from the training data distribution or not. We focus on Adversarial autoencoders (AAE) that have the advantage to explicitly control the distribution of the known data in the feature space. We show that when they are trained in a (semi-)supervised way, they provide consistent novelty detection improvements compared to a classical autoencoder. We further improve their performance by introducing an explicit rejection class in the prior distribution coupled with random input images to the autoencoder. | [
"Deep learning",
"Unsupervised Learning",
"Semi-Supervised Learning"
] | https://openreview.net/pdf?id=SyuncaEKx | Skolqvtig | comment | 1,489,758,707,402 | Hkor7qlil | [
"everyone"
] | [
"~Valentin_Leveau2"
] | title: Answers to Reviewer 2
comment: Thank you very much for your feedback and recommendations. Here are a few comments and answers to them:
“Although this is the first occurrence of adversarial auto-encoder used for novelty detection, using auto-encoder based approaches for this application is not novel.”
→ You are absolutely right. This is precisely the purpose of the paper: showing how the adversarial training can improve the baseline auto-encoder by enforcing a known prior distribution in the latent space. And we did show in this preliminary study that it can potentially improve a lot.
“The description of the experiments and the model seems clear, except some part like defining the novelty rate when using explicit rejection class.”
→ You are right. We forgot to give that detail in the paper. The novelty rate is defined according to the proportion of noise images added to the training set. In our experiments, we used as many noisy images as the initial number of images in the training set (60K images), so that, in Equation 3, we used p(y=0)=0.5.
“Claims like "This might be related to the fact that, whatever the used prior distribution, randomly generated images are distributed according to a normal distribution at the center of the feature space (because of the central limit theorem)" needs to be explained if accurate or relevant.”
→ Let x be the random input image(s) and f_w(x) the activation function of a neuron in the latent space of the auto-encoder. Since the last layer of the encoder is a linear layer, f_w(x) can be re-written as a sum of random variables (the activation values of the previous layer multiplied by a weight). As these random variables are obtained through a deterministic function of the i.i.d. random images x given as input of the network, they are themselves i.i.d. Consequently, the central limit theorem applies and the f_w(x)’s are independently and approximately normally distributed. Now, we agree that the notion of “center of the feature space” is more debatable. A way to see it is to consider that the set of the real images of the training set is a specific sampling of the random images distribution. In that case, the mean of the f_w(x)’s for the real images of the training set can be considered as an estimator of the mean of the normal distribution. Consequently, the mean of the normal distribution is approximately equal to the mean of the prior distribution (what we call the center of the feature space). Note that, empirically, if you plot the features of the random images in the 2D latent space, you observe that phenomenon.
Since we don’t have enough space to discuss that point in the paper, we suggest simply removing the sentence and keeping this for future work.
“The experimental procedure does not compare to simple baseline like mixture of Gaussians.”
→ Our goal was to show how the adversarial training can improve the baseline auto-encoder by enforcing a known prior distribution in the latent space. For a full paper submission (and not a 3 page workshop paper), we would surely have explored other baselines (e.g. GMM but also GAN, VAE, DAE, CAE, etc.). Our opinion is that a workshop track is well suited to hosting preliminary studies, in contrast to conference tracks, for which one can be more demanding in terms of experimental load.
“Moreover, apart from visualization purpose, restricting the models to a 2D latent space is not well justified for the purpose of novelty detection.”
→ Sure. Our goal was to gain knowledge on the contribution of adversarial learning over a baseline autoencoder, not to win the performance race.
“Using accuracy as performance is not fully informative and the choice of thresholding remains arbitrary. Using confusion matrices and precision-recall curves might help understand more what is going on”
→ Actually, we did not use accuracy but Mean Average Precision (as explained in the section “Protocol and settings”). The term “accuracy” does not even appear in the paper. Mean Average Precision does not involve any thresholding and is among the most informative metrics summarizing the precision-recall curve. Investigating other metrics or plots would not be possible in a 3-page paper. |
SyuncaEKx | Adversarial Autoencoders for Novelty Detection | [
"Valentin Leveau",
"Alexis Joly"
] | In this paper, we address the problem of novelty detection, \textit{i.e} recognizing at test time if a data item comes from the training data distribution or not. We focus on Adversarial autoencoders (AAE) that have the advantage to explicitly control the distribution of the known data in the feature space. We show that when they are trained in a (semi-)supervised way, they provide consistent novelty detection improvements compared to a classical autoencoder. We further improve their performance by introducing an explicit rejection class in the prior distribution coupled with random input images to the autoencoder. | [
"Deep learning",
"Unsupervised Learning",
"Semi-Supervised Learning"
] | https://openreview.net/pdf?id=SyuncaEKx | H1KCtvFjg | comment | 1,489,758,672,887 | B19SegZje | [
"everyone"
] | [
"~Valentin_Leveau2"
] | title: Answers to Reviewer 1
comment: Thank you very much for your feedback and recommendations. Here are a few comments and answers to them:
"The idea is interesting, but there are no baselines comparing to other methods in the literature, so it's unclear exactly how good the proposed approach is."
→ Actually, there is a baseline: the reconstruction error of the (non-adversarial) auto-encoder. The objective of the paper was precisely to show how the adversarial training can improve this baseline by enforcing a known prior distribution in the latent space.
"What about simply training a generative model like a VAE and evaluating the approximate log-likelihood with some threshold?"
→ We focused on adversarial auto-encoders because it has been shown in the paper of Makhzani et al. that they outperform VAE in the context of semi-supervised learning. We agree that it would be relevant to also make that comparison in the specific case of novelty detection. We will do that in the next few weeks.
“Or perhaps using the discriminator in a GAN to determine if a test point is real or fake data?”
→ Actually, this was the first thing we tested before switching to variational auto-encoders (because it was not working well). After convergence, the discriminator of a GAN is not able to determine if a test point is real or fake data because the distribution of the fake data converges to the one of the real data. Thus, the discriminator strongly overfits and is near random on novelty detection. Using intermediate versions of the discriminator could be an option that we also experimented but that was theoretically and experimentally not convincing.
“I'm sure there are other good baselines in the literature, some of which are cited in the introduction.”
→ For a full paper submission and not a 3 page workshop paper, we would surely have explored such other baselines. |
SyuncaEKx | Adversarial Autoencoders for Novelty Detection | [
"Valentin Leveau",
"Alexis Joly"
] | In this paper, we address the problem of novelty detection, \textit{i.e} recognizing at test time if a data item comes from the training data distribution or not. We focus on Adversarial autoencoders (AAE) that have the advantage to explicitly control the distribution of the known data in the feature space. We show that when they are trained in a (semi-)supervised way, they provide consistent novelty detection improvements compared to a classical autoencoder. We further improve their performance by introducing an explicit rejection class in the prior distribution coupled with random input images to the autoencoder. | [
"Deep learning",
"Unsupervised Learning",
"Semi-Supervised Learning"
] | https://openreview.net/pdf?id=SyuncaEKx | Hkor7qlil | official_review | 1,489,179,459,036 | SyuncaEKx | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper112/AnonReviewer1"
] | rating: 4: Ok but not good enough - rejection
review: The paper proposes to use Adversarial Auto-Encoders in the context of anomaly/novelty detection. They explore the use of different priors, semi-supervised learning, and the use of an anomaly class.
Although this is the first use of adversarial auto-encoders for novelty detection, using auto-encoder-based approaches for this application is not novel.
The description of the experiments and the model seems clear, except for some parts, such as the definition of the novelty rate when using the explicit rejection class. Claims like "This might be related to the fact that, whatever the used prior distribution, randomly generated images are distributed according to a normal distribution at the center of the feature space (because of the central limit theorem)" need to be explained if accurate or relevant.
The experimental procedure does not compare to simple baselines like a mixture of Gaussians. Moreover, apart from visualization purposes, restricting the models to a 2D latent space is not well justified for the purpose of novelty detection. Using confusion matrices and precision-recall curves might help in understanding what is going on.
Although the use of adversarial auto-encoders for anomaly detection might be worth exploring, the experimental procedure needs to be more rigorous in order to draw any conclusions.
confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
SyuncaEKx | Adversarial Autoencoders for Novelty Detection | [
"Valentin Leveau",
"Alexis Joly"
] | In this paper, we address the problem of novelty detection, \textit{i.e} recognizing at test time if a data item comes from the training data distribution or not. We focus on Adversarial autoencoders (AAE) that have the advantage to explicitly control the distribution of the known data in the feature space. We show that when they are trained in a (semi-)supervised way, they provide consistent novelty detection improvements compared to a classical autoencoder. We further improve their performance by introducing an explicit rejection class in the prior distribution coupled with random input images to the autoencoder. | [
"Deep learning",
"Unsupervised Learning",
"Semi-Supervised Learning"
] | https://openreview.net/pdf?id=SyuncaEKx | BkPHOKaix | comment | 1,490,028,606,884 | SyuncaEKx | [
"everyone"
] | [
"ICLR.cc/2017/pcs"
] | decision: Reject
title: ICLR committee final decision |
Hkx-gCfYl | Coupling Distributed and Symbolic Execution for Natural Language Queries | [
"Lili Mou",
"Zhengdong Lu",
"Hang Li",
"Zhi Jin"
] | In this paper, we propose to combine neural execution and symbolic execution to query a table with natural languages. Our approach makes use the differentiability of neural networks and transfers (imperfect) knowledge to the symbolic executor before reinforcement learning. Experiments show our approach achieves high learning efficiency, high execution efficiency, high interpretability, as well as high performance. | [] | https://openreview.net/pdf?id=Hkx-gCfYl | BJEEAY-je | comment | 1,489,243,691,859 | BJ9QtLgoe | [
"everyone"
] | [
"~Lili_Mou1"
] | title: Thanks. Related work added.
comment: Thank you.
Special thanks for the recommendation of the paper (https://arxiv.org/pdf/1511.04586.pdf), in which the authors train neural attention with IBM Model 4. Our main idea works in the opposite direction: we first make use of fully differentiable neural networks to learn meaningful (although imperfect) intermediate execution steps, and then guide an external symbolic system, which is more natural in our semantic parsing scenario.
We revised the paper with a discussion at the end of Section 1. Due to the page limit, we have included more discussion in our extended version (Section 4 in https://arxiv.org/pdf/1612.02741.pdf); the suggested paper will also be discussed the next time we update the arXiv version (i.e., in mini-batch fashion).
|
Hkx-gCfYl | Coupling Distributed and Symbolic Execution for Natural Language Queries | [
"Lili Mou",
"Zhengdong Lu",
"Hang Li",
"Zhi Jin"
] | In this paper, we propose to combine neural execution and symbolic execution to query a table with natural languages. Our approach makes use the differentiability of neural networks and transfers (imperfect) knowledge to the symbolic executor before reinforcement learning. Experiments show our approach achieves high learning efficiency, high execution efficiency, high interpretability, as well as high performance. | [] | https://openreview.net/pdf?id=Hkx-gCfYl | BJ9QtLgoe | official_review | 1,489,164,578,056 | Hkx-gCfYl | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper37/AnonReviewer1"
] | title: official review
rating: 7: Good paper, accept
review: This paper proposes to combine distributed and symbolic execution for natural language queries. Based on the finding that the symbolic executor's column selection generally aligns with the field attention of the distributed enquirer, the authors incorporate the symbolic executor into the loss of the distributed enquirer by adding a field attention cross-entropy loss to the original loss. This information is also used to pre-train the policy for the REINFORCE algorithm. The experiments show that by combining distributed and symbolic execution in this way, the model achieves better performance.
I like the idea of incorporating the symbolic executor model into the neural model via attention. Similar ideas have been proposed in other papers too (for example https://arxiv.org/pdf/1511.04586.pdf -- section 2.6). It would be nice if the authors could refer more to the related work.
confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
Hkx-gCfYl | Coupling Distributed and Symbolic Execution for Natural Language Queries | [
"Lili Mou",
"Zhengdong Lu",
"Hang Li",
"Zhi Jin"
] | In this paper, we propose to combine neural execution and symbolic execution to query a table with natural languages. Our approach makes use the differentiability of neural networks and transfers (imperfect) knowledge to the symbolic executor before reinforcement learning. Experiments show our approach achieves high learning efficiency, high execution efficiency, high interpretability, as well as high performance. | [] | https://openreview.net/pdf?id=Hkx-gCfYl | BJwW6_Wog | official_review | 1,489,239,294,805 | Hkx-gCfYl | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper37/AnonReviewer2"
] | title: official review
rating: 6: Marginally above acceptance threshold
review: * Summary: the paper proposes to combine a distributed enquirer with a symbolic executor for the task of question answering. The idea is simple: the distributed enquirer is used for the policy initialization of the symbolic executor, which is trained using REINFORCE. The proposed method outperforms the baseline SEMPRE on a QA dataset.
* Discussion:
- The paper is quite difficult to read, not because the idea is complicated. Several details (e.g. math symbols) about the distributed enquirer could be safely omitted. Figure 1 hardly helps. It seems like the authors tried to shorten a long paper by "copy and paste".
- The experimental results are impressive. However, why didn't the authors choose Yin et al (2016b) as the baseline? Table 2e is unclear: are the results on the dev or test set? If they are on the dev set, I was surprised to see that the performance on the test set is even substantially higher than on the dev set. If they are on the test set, then I have no idea why the accuracy of 96.5 is not in Table 2a.
TL;DR: I think the idea and the experimental results are good enough, but the paper must be rewritten in a clearer way.
confidence: 3: The reviewer is fairly confident that the evaluation is correct |
Hkx-gCfYl | Coupling Distributed and Symbolic Execution for Natural Language Queries | [
"Lili Mou",
"Zhengdong Lu",
"Hang Li",
"Zhi Jin"
] | In this paper, we propose to combine neural execution and symbolic execution to query a table with natural languages. Our approach makes use the differentiability of neural networks and transfers (imperfect) knowledge to the symbolic executor before reinforcement learning. Experiments show our approach achieves high learning efficiency, high execution efficiency, high interpretability, as well as high performance. | [] | https://openreview.net/pdf?id=Hkx-gCfYl | Hyaf_Fpjg | comment | 1,490,028,564,588 | Hkx-gCfYl | [
"everyone"
] | [
"ICLR.cc/2017/pcs"
] | decision: Accept
title: ICLR committee final decision |
Hkx-gCfYl | Coupling Distributed and Symbolic Execution for Natural Language Queries | [
"Lili Mou",
"Zhengdong Lu",
"Hang Li",
"Zhi Jin"
] | In this paper, we propose to combine neural execution and symbolic execution to query a table with natural languages. Our approach makes use the differentiability of neural networks and transfers (imperfect) knowledge to the symbolic executor before reinforcement learning. Experiments show our approach achieves high learning efficiency, high execution efficiency, high interpretability, as well as high performance. | [] | https://openreview.net/pdf?id=Hkx-gCfYl | HJPhaKbjg | comment | 1,489,243,566,566 | BJwW6_Wog | [
"everyone"
] | [
"~Lili_Mou1"
] | title: Thanks. Paper Revised
comment: We thank the reviewer for constructive comments.
- Equations (1) and (2) are highlighted to better demonstrate how the neural and symbolic worlds can be coupled. We have now saved some space and clarified the points raised by the reviewer.
We retained most experimental results in the paper because we still hope our 3-page workshop submission can be as interesting as possible. Our extended version could be found at: https://arxiv.org/pdf/1612.02741.pdf
- We did use Yin et al. (2016b) as our baseline, and Tables 2a and 2e report test performance.
96.5% in Table 2e is not included in Table 2a because 96.5% is achieved by our proposed coupling approach (after one round of co-training), whereas Table 2a's Distributed and Symbolic columns refer to either single world alone.
Besides, the 96.4% performance is obtained with step-by-step supervision; therefore it is also not included in Table 2a (where only denotations are used for supervision). We have clarified these points in the revised paper.
Thanks again for the review. We're also happy to further clarify and improve our paper should there be any problem.
|
BJBkkaNYe | Training a Subsampling Mechanism in Expectation | [
"Colin Raffel",
"Dieterich Lawson"
] | We describe a mechanism for subsampling sequences and show how to compute its expected output so that it can be trained with standard backpropagation. We test this approach on a simple toy problem and discuss its shortcomings. | [
"Theory",
"Deep learning",
"Structured prediction"
] | https://openreview.net/pdf?id=BJBkkaNYe | BJikI5lsx | official_review | 1,489,180,131,221 | BJBkkaNYe | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper105/AnonReviewer3"
] | title: Reasonable start, but not there yet
rating: 5: Marginally below acceptance threshold
review: This paper seeks to build a neural network component for subsampling data. The idea is to build a layer that takes as input a discrete sequence s_1, ..., s_T along with a probability e_t of keeping each element t of the sequence. The layer is meant to independently choose whether or not to keep each element s_t based on the probability e_t, and then to assemble the kept inputs into a subsequence. Rather than execute the layer by sampling, the paper proposes to instead compute a marginal distribution over the outputs under this model. It proposes a dynamic program that runs in O(T^3) time and evaluates the method on a simple toy problem.
While the big idea seems reasonable and the paper is written clearly, I don't think it's developed enough to warrant publication at the workshop at this point. The main issues are as follows:
- I'm not convinced the algorithm is optimal:
-- I'm not convinced the O(T^3) cost is necessary. I would think that a matrix of (output position) x (input symbol) could be computed in O(T^2) time using dynamic programming, and then after having computed this, the expected output could be computed in O(T^2) time by summing over the (input symbol) dimension. Am I missing something?
-- The algorithm should be implemented in a numerically stable way using log-sum-exps (a minimal sketch follows this list).
- The experiment is very simple and there are no baselines.
- One motivation for subsampling is to shorten a sequence. However, under the marginalization approach, the sequence doesn't actually get shortened. Please discuss this.
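A minimal sketch of the numerically stable computation referred to above (generic illustration; scipy.special.logsumexp provides the same functionality):

```python
import numpy as np

def logsumexp(log_terms, axis=-1):
    """Numerically stable log(sum(exp(log_terms))): subtract the max first."""
    m = np.max(log_terms, axis=axis, keepdims=True)
    return np.squeeze(m, axis=axis) + np.log(np.sum(np.exp(log_terms - m), axis=axis))

# Naive summation of very small probabilities underflows to -inf,
# while the stable version recovers the correct value.
log_p = np.array([-2000.0, -2001.0])
print(np.log(np.sum(np.exp(log_p))))   # -inf (underflow)
print(logsumexp(log_p))                # ~ -1999.69
```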
Typos:
"extented"
confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
BJBkkaNYe | Training a Subsampling Mechanism in Expectation | [
"Colin Raffel",
"Dieterich Lawson"
] | We describe a mechanism for subsampling sequences and show how to compute its expected output so that it can be trained with standard backpropagation. We test this approach on a simple toy problem and discuss its shortcomings. | [
"Theory",
"Deep learning",
"Structured prediction"
] | https://openreview.net/pdf?id=BJBkkaNYe | HycScE4ie | official_comment | 1,489,418,818,256 | rJke6qlil | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper105/AnonReviewer3"
] | title: Ok, increasing score by 1
comment: Thanks for the reply. I'll bump up my score by 1. |
BJBkkaNYe | Training a Subsampling Mechanism in Expectation | [
"Colin Raffel",
"Dieterich Lawson"
] | We describe a mechanism for subsampling sequences and show how to compute its expected output so that it can be trained with standard backpropagation. We test this approach on a simple toy problem and discuss its shortcomings. | [
"Theory",
"Deep learning",
"Structured prediction"
] | https://openreview.net/pdf?id=BJBkkaNYe | ByMpgjljl | comment | 1,489,182,906,462 | Hk4ip5esl | [
"everyone"
] | [
"~Colin_Raffel1"
] | title: Response
comment: Thanks for your detailed review! To address your comments individually:
> The authors propose a dynamic programming algorithm with the computational complexity of O(T^3) and provide some results on toy task.
In fact, the dynamic program is O(T^2) - this was an error in the originally-posted version of the manuscript, which has been recently fixed. Sorry that it was not updated early enough for you to see this change.
> The writing of the paper needs some work and the ideas need to be presented more clearly, though I understand that the space limitation makes it difficult to explain everything coherently. Especially, the notation is vague.
Thanks for this feedback. I will add the changes you suggested, and also try to clean up the notation. I think with a bit more additional space the exposition could be made better.
> Actually, we have tried a very similar idea for character-level neural machine translation a few years ago in order to learn hierarchical alignments, but we never managed to make it work...
Very cool that you were trying a similar idea! We would be interested in discussing it further. When applying this approach and related ideas, we also ran into issues with non-monotonic alignments and vanishing gradients. We have ongoing work on mitigating both issues.
> In the first page, you say “U \le T”, but neither U nor T has not defined anywhere in the document.
They are listed in the definitions of the input and output sequence, as s = {s_0, s_1, ..., s_{T-1}}, y = {y_0, y_1, ..., y_{U-1}}, but you are right that this could be made clearer; we will address this.
> Please present a more precise probabilistic for e_t. You just say p(y_0=s_0) = e_0, but a more general formal definition would be useful.
Good idea. The basic idea is that e_t is the "probability of including element s_t in the output sequence". In practice, these probabilities are computed as a function of the network states. We will be sure this information is clear in the paper.
> A figure or the visualization of the automata/algorithm that generates the task would be useful(perhaps in the appendix).
We do in fact have such a diagram. We will add it in an appendix.
> This is more of a curious empirical question, but can this algorithm generalize to the sequences longer than the ones that it has been trained on?
In practice, we found that it was able to generalize. In particular, because of the curriculum learning strategy we employed, we found that it was able to learn the correct algorithm on short examples and apply them to longer examples.
Thank you again for your comments. We hope we have addressed your concerns. |
BJBkkaNYe | Training a Subsampling Mechanism in Expectation | [
"Colin Raffel",
"Dieterich Lawson"
] | We describe a mechanism for subsampling sequences and show how to compute its expected output so that it can be trained with standard backpropagation. We test this approach on a simple toy problem and discuss its shortcomings. | [
"Theory",
"Deep learning",
"Structured prediction"
] | https://openreview.net/pdf?id=BJBkkaNYe | rJke6qlil | comment | 1,489,181,927,459 | BJikI5lsx | [
"everyone"
] | [
"~Colin_Raffel1"
] | title: Responses
comment: Hi, thanks for your thorough review!
To immediately address your first concern, I believe you reviewed the initially posted version of this manuscript, not the most recent one: we recently updated it to correct the error of stating that the dynamic program had cubic complexity. We apologize for not having the corrected version online early enough for you to consider it.
To address your second concern, this is indeed how it is implemented - you can see here in the example code posted along with the paper:
http://nbviewer.jupyter.org/github/craffel/subsampling_in_expectation/blob/master/Subsampling%20in%20Expectation.ipynb#TensorFlow-example
If you think it's appropriate, we can include this information in the manuscript.
In terms of the experiment, we appreciate the criticism that it is overly simple and there are no baselines. The purpose of this abstract was solely to propose the approach and show a proof-of-concept that it works; unfortunately, there was not sufficient space for further experiments.
Finally, while marginalization does not actually shorten the sequence, as you say, it does have the effect of placing sequence elements closer together in the output sequence. For example, if the input sequence was
[a, b, c, d, e]
and the subsampling probabilities were
[1, 0, 0, 1, 0]
then the expected output would be
[a, d, 0, 0, 0]
As you say, this sequence is not shorter, but if there is an important dependency between a and d and the remaining symbols are distractors, the resulting time lag between them has been made substantially shorter. We tried to mention this effect in the bullet points at the beginning of the abstract, but we will try to make it clearer in later versions.
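For concreteness, the expected output described above can be computed with an O(T^2) dynamic program over "how many elements have been kept so far". The sketch below is a reconstruction from this description, not the authors' code:

```python
import numpy as np

def expected_subsample(s, e):
    """Expected output of the subsampling mechanism described above.

    s: (T, d) array of input vectors; e: (T,) inclusion probabilities.
    A[t, u] = P(exactly u of the first t elements are kept).
    Then P(s_t lands at output position u) = e[t] * A[t, u], and
    E[y_u] = sum_t e[t] * A[t, u] * s_t.  Total cost is O(T^2 d).
    """
    T, d = s.shape
    A = np.zeros((T + 1, T + 1))
    A[0, 0] = 1.0
    for t in range(T):
        A[t + 1, 1:] = A[t, 1:] * (1 - e[t]) + A[t, :-1] * e[t]
        A[t + 1, 0] = A[t, 0] * (1 - e[t])
    P = e[:, None] * A[:T, :T]     # P[t, u]: input t emitted at output position u
    return P.T @ s                 # (T, d) expected output sequence

# Reproduces the hard example above: e = [1, 0, 0, 1, 0] gives [a, d, 0, 0, 0].
s = np.eye(5)                      # stand-in one-hot "symbols" a..e
print(expected_subsample(s, np.array([1., 0., 0., 1., 0.])))
```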
Thank you again for the review, we hope we have addressed your concerns! |
BJBkkaNYe | Training a Subsampling Mechanism in Expectation | [
"Colin Raffel",
"Dieterich Lawson"
] | We describe a mechanism for subsampling sequences and show how to compute its expected output so that it can be trained with standard backpropagation. We test this approach on a simple toy problem and discuss its shortcomings. | [
"Theory",
"Deep learning",
"Structured prediction"
] | https://openreview.net/pdf?id=BJBkkaNYe | HJXHdKpsx | comment | 1,490,028,603,003 | BJBkkaNYe | [
"everyone"
] | [
"ICLR.cc/2017/pcs"
] | decision: Accept
title: ICLR committee final decision |
BJBkkaNYe | Training a Subsampling Mechanism in Expectation | [
"Colin Raffel",
"Dieterich Lawson"
] | We describe a mechanism for subsampling sequences and show how to compute its expected output so that it can be trained with standard backpropagation. We test this approach on a simple toy problem and discuss its shortcomings. | [
"Theory",
"Deep learning",
"Structured prediction"
] | https://openreview.net/pdf?id=BJBkkaNYe | Hk4ip5esl | official_review | 1,489,182,107,807 | BJBkkaNYe | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper105/AnonReviewer1"
] | title: Interesting and plausible idea but needs more work.
rating: 6: Marginally above acceptance threshold
review: Subsampling In Expectation
Summary:
This paper proposes a way to sample a shorter sequence y=(y_0, y_1, ..., y_t) from the input sequence x=(x_0, ..., x_k) according to the probabilities e=(e_0, ..., e_k). The authors propose a dynamic programming algorithm with a computational complexity of O(T^3) and provide some results on a toy task.
A General Comment: The writing of the paper needs some work and the ideas need to be presented more clearly, though I understand that the space limitation makes it difficult to explain everything coherently. In particular, the notation is vague. However, the idea makes sense and is correct in principle. Actually, we tried a very similar idea for character-level neural machine translation a few years ago in order to learn hierarchical alignments, but we never managed to make it work. One main limitation we observed at the time for char-level NMT was that this kind of algorithm can only generate monotonic alignments, and for language pairs such as Ch-En or Tr-En, where the alignments can be highly non-monotonic, we could not observe much improvement; the vanishing gradients arising from the products and the sigmoids were also crippling the training. Efficiency was another issue for us. But the authors of this paper show that in principle this idea works in toy cases; I guess the challenge remains to find the right architecture and a way to scale the algorithm to the right tasks.
More detailed comments:
In the first page, you say "U \le T", but neither U nor T has been defined anywhere in the document.
Please present a more precise probabilistic definition for e_t. You just say p(y_0=s_0) = e_0, but a more general formal definition would be useful.
A figure or a visualization of the automaton/algorithm that generates the task would be useful (perhaps in the appendix).
This is more of a curious empirical question, but can this algorithm generalize to the sequences longer than the ones that it has been trained on?
Conclusion,
Pros: - A simple algorithm to subsample the sequences.
- Interesting results on a toy task.
Cons:
- The proposed algorithm is O(T^3) which is quite difficult to scale for the long sequences and realistic tasks.
- The experiments are not convincing enough.
- The writing is not clear enough and needs some more work (this is a minor con).
confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
rkB_5hEKe | Classless Association using Neural Networks | [
"Federico Raue",
"Sebastian Palacio",
"Andreas Dengel",
"Marcus Liwicki"
] | In this paper, we propose a model for the classless association between two instances of the same unknown class. This scenario is inspired by the Symbol Grounding Problem and the association learning in infants. Our model has two parallel Multilayer Perceptrons (MLPs) and relies on two components. The first component is a EM-training rule that matches the output vectors of a MLP to a statistical distribution. The second component exploits the output classification of one MLP as target of the another MLP in order to learn the agreement of the unknown class. We generate four classless datasets (based on MNIST) with uniform distribution between the classes. Our model is evaluated against totally supervised and totally unsupervised scenarios. In the first scenario, our model reaches good performance in terms of accuracy and the classless constraint. In the second scenario, our model reaches better results against two clustering algorithms. | [] | https://openreview.net/pdf?id=rkB_5hEKe | SyDke2lje | official_review | 1,489,186,783,013 | rkB_5hEKe | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper104/AnonReviewer1"
] | title: Classless association using neural networks
rating: 3: Clear rejection
review: I can honestly say that despite several readings, I have no idea what this paper is actually about. I believe the problem is relating two objects, despite not having a label that classifies the two objects as being of the same class. From there, my comprehension goes downhill: EM algorithm mixed with pseudo-classes and a weighting scheme. Networks using the output from another network as the targets of other networks. Target uniform statistical distributions. Why a weighting scheme? What's going on?
I acknowledge that perhaps the workshop format is too small, and therefore limits too severely the required space to explain an idea. Perhaps. But I can safely say that almost nobody will glean any insight from this manuscript in the time that a reasonable person is willing to give a manuscript. I would say that if the authors are confident of this work, they should write up a longer manuscript (or return to a longer one) that takes the time and space necessary to more effectively motivate the problem, and introduce the parts of the architecture, again with motivation, so that the reader has a chance of understanding the manuscript.
confidence: 2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper |
rkB_5hEKe | Classless Association using Neural Networks | [
"Federico Raue",
"Sebastian Palacio",
"Andreas Dengel",
"Marcus Liwicki"
] | In this paper, we propose a model for the classless association between two instances of the same unknown class. This scenario is inspired by the Symbol Grounding Problem and the association learning in infants. Our model has two parallel Multilayer Perceptrons (MLPs) and relies on two components. The first component is a EM-training rule that matches the output vectors of a MLP to a statistical distribution. The second component exploits the output classification of one MLP as target of the another MLP in order to learn the agreement of the unknown class. We generate four classless datasets (based on MNIST) with uniform distribution between the classes. Our model is evaluated against totally supervised and totally unsupervised scenarios. In the first scenario, our model reaches good performance in terms of accuracy and the classless constraint. In the second scenario, our model reaches better results against two clustering algorithms. | [] | https://openreview.net/pdf?id=rkB_5hEKe | HkD59HVsl | comment | 1,489,422,990,917 | HJTkxXNsl | [
"everyone"
] | [
"~Federico_Raue1"
] | title: RE: classless association using Neural Networks
comment:
Thank you for your time. Hopefully, our responses have addressed your concerns.
>>> This is what I understand: let's assume a young child is playing with toys from 2 different brands. The toys include several pieces of different types (10 MNIST classes).
>>>The aim is to learn to put the same brand types into same buckets. We want a bucket to have the same time of toy of the same brand (purity, all block type t of brand j is in
>>>bucket b) also the object types are the same for 2 brands in the bucket (association, all block type t of both brands is in bucket b). The ultimate goal (future work) is to
>>>learn association between diffferent streams (e.g. what parents say when the child holds a lego).
The analogy is correct.
>>>This work models this problem with a MLP to first induce a feature vector z^{1,2} for 2 streams. A pseudo-class \hat{z^{1,2}} is predicted using these feature vectors. In the M-step the parameters are updated so that the distribution defined by \hat{z^{1,2}} matches the target distribution \phi.
We want to point out that the model has two MLPs.
>>>two issues I observed:
>>>>1) they do not provide any information about how they evaluated other clustering algorithms. If they are fed with raw pixels, I don't think the comparison would be fair
>>>because there is no featurization of raw fixels where the proposed model have this power. Comparison on a single layer MLP autoencoder's hidden features or output of PCA
>>>would be more fair.
The reported results of both clustering algorithms are based on raw pixels. We have evaluated the same datasets using PCA (64, 128, 256 components), and the results are quite similar to Table 1. Moreover, these results are consistent with Jenckel et al. [1], who did not find any improvement between raw pixels and PCA for character recognition in historical documents.
1) MNIST input 1, input 2
* pca - 64: 64.1 (std:1.8), 63.9 (std:3.2)
* pca - 128: 63.5 (std:2.3), 63.6 (std:2.1)
* pca - 256: 63.6 (std:2.4), 63.4 (std:3.3)
2) Rotated MNIST input 1, input 2
* pca - 64: 63.9 (std:2.2), 63.3 (std:3.2)
* pca - 128: 63.7 (std:3.8), 61.6 (std:2.8)
* pca - 256: 65.1 (std:2.4), 63.9 (std:1.6)
3) Inverted MNIST input 1, input 2
* pca - 64: 64.9 (std:2.8), 64.1 (std:3.3)
* pca - 128: 64.6 (std:2.0), 64.2 (std:3.3)
* pca - 256: 65.1 (std:1.7), 63.5 (std:2.8)
4) Random Rotated MNIST input 1, input 2
* pca - 64: 64.4 (std:1.7), 14.9 (std:0.4)
* pca - 128: 63.9 (std:1.9), 14.8 (std:0.3)
* pca - 256: 65.5 (std:2.2), 14.9 (std:0.5)
[1] Jenckel et al. (2016). Clustering Benchmark for Characters in Historical Documents. Workshop on Document Analysis Systems (DAS 2016).
>>2) The experiments are almost oracle type. The model knows the number of classes and the target distribution. I am not sure if other clustering algorithms make use of target
>>distribution information. In a real life scenario, none of these assumptions are true. An early attempt in that direction would make this work acceptable for workshop
>>publication.
We agree that the experiments represent the ideal case, where the number of classes and the statistical distribution are known. However, our model can be extended to tasks that are not constrained by the number of classes (which is defined by the language). For example, the classes in MNIST (one, two, three, ..., zero) constrain each input sample to fall into one of those ten buckets (as in supervised tasks). In contrast, our association task, inspired by the symbol grounding problem, is not constrained to a fixed number of classes because we are only interested in learning that two elements are the same based on their correlation.
With this in mind, our model only requires changing the size of the vectors z^{1}, z^{2}, and \phi for learning the association.
rkB_5hEKe | Classless Association using Neural Networks | [
"Federico Raue",
"Sebastian Palacio",
"Andreas Dengel",
"Marcus Liwicki"
] | In this paper, we propose a model for the classless association between two instances of the same unknown class. This scenario is inspired by the Symbol Grounding Problem and the association learning in infants. Our model has two parallel Multilayer Perceptrons (MLPs) and relies on two components. The first component is a EM-training rule that matches the output vectors of a MLP to a statistical distribution. The second component exploits the output classification of one MLP as target of the another MLP in order to learn the agreement of the unknown class. We generate four classless datasets (based on MNIST) with uniform distribution between the classes. Our model is evaluated against totally supervised and totally unsupervised scenarios. In the first scenario, our model reaches good performance in terms of accuracy and the classless constraint. In the second scenario, our model reaches better results against two clustering algorithms. | [] | https://openreview.net/pdf?id=rkB_5hEKe | S1fSuKTsg | comment | 1,490,028,602,119 | rkB_5hEKe | [
"everyone"
] | [
"ICLR.cc/2017/pcs"
] | decision: Reject
title: ICLR committee final decision |
rkB_5hEKe | Classless Association using Neural Networks | [
"Federico Raue",
"Sebastian Palacio",
"Andreas Dengel",
"Marcus Liwicki"
] | In this paper, we propose a model for the classless association between two instances of the same unknown class. This scenario is inspired by the Symbol Grounding Problem and the association learning in infants. Our model has two parallel Multilayer Perceptrons (MLPs) and relies on two components. The first component is a EM-training rule that matches the output vectors of a MLP to a statistical distribution. The second component exploits the output classification of one MLP as target of the another MLP in order to learn the agreement of the unknown class. We generate four classless datasets (based on MNIST) with uniform distribution between the classes. Our model is evaluated against totally supervised and totally unsupervised scenarios. In the first scenario, our model reaches good performance in terms of accuracy and the classless constraint. In the second scenario, our model reaches better results against two clustering algorithms. | [] | https://openreview.net/pdf?id=rkB_5hEKe | HJTkxXNsl | official_review | 1,489,412,069,411 | rkB_5hEKe | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper104/AnonReviewer2"
] | title: classless association using neural networks
rating: 4: Ok but not good enough - rejection
review: I agree with R1, the workshop format is too small to efficiently describe an idea.
This is what I understand: let's assume a young child is playing with toys from 2 different brands. The toys include several pieces of different types (10 MNIST classes). The aim is to learn to put the same brand types into the same buckets. We want a bucket to contain the same type of toy of the same brand (purity: all blocks of type t of brand j are in bucket b) and also that the object types are the same for the 2 brands in the bucket (association: all blocks of type t of both brands are in bucket b). The ultimate goal (future work) is to learn associations between different streams (e.g. what parents say when the child holds a lego).
This work models this problem with a MLP to first induce a feature vector z^{1,2} for 2 streams. A pseudo-class \hat{z^{1,2}} is predicted using these feature vectors. In the M-step the parameters are updated so that the distribution defined by \hat{z^{1,2}} matches the target distribution \phi.
two issues I observed:
1) they do not provide any information about how they evaluated the other clustering algorithms. If they are fed with raw pixels, I don't think the comparison would be fair, because there is no featurization of raw pixels, whereas the proposed model has this power. Comparison on a single-layer MLP autoencoder's hidden features or the output of PCA would be fairer.
2) The experiments are almost oracle type. The model knows the number of classes and the target distribution. I am not sure if other clustering algorithms make use of target distribution information. In a real life scenario, none of these assumptions are true. An early attempt in that direction would make this work acceptable for workshop publication.
confidence: 3: The reviewer is fairly confident that the evaluation is correct |
rkB_5hEKe | Classless Association using Neural Networks | [
"Federico Raue",
"Sebastian Palacio",
"Andreas Dengel",
"Marcus Liwicki"
] | In this paper, we propose a model for the classless association between two instances of the same unknown class. This scenario is inspired by the Symbol Grounding Problem and the association learning in infants. Our model has two parallel Multilayer Perceptrons (MLPs) and relies on two components. The first component is a EM-training rule that matches the output vectors of a MLP to a statistical distribution. The second component exploits the output classification of one MLP as target of the another MLP in order to learn the agreement of the unknown class. We generate four classless datasets (based on MNIST) with uniform distribution between the classes. Our model is evaluated against totally supervised and totally unsupervised scenarios. In the first scenario, our model reaches good performance in terms of accuracy and the classless constraint. In the second scenario, our model reaches better results against two clustering algorithms. | [] | https://openreview.net/pdf?id=rkB_5hEKe | SJsnwCMjl | comment | 1,489,328,050,953 | SyDke2lje | [
"everyone"
] | [
"~Federico_Raue1"
] | title: RE: Classless association using neural networks
comment: We thank the reviewer for the time. Unfortunately, given the strict limit of 3 pages, it is challenging to give more information about the motivation and the elements of our model. Hopefully, our responses have addressed your concerns.
* The presented task is to learn the association between two disjoint input streams where both streams represent the same unknown class. This task is motivated by the Symbol Grounding Problem, which is the binding of abstract concepts to the real world via sensory input, such as the visual system. More formally, our task is defined by two disjoint input streams x^(1) and x^(2) that represent the same unlabeled class. The goal is to learn the association by classifying both with the same pseudo-class c^(1) = c^(2).
* Our training rule relies on matching a statistical distribution to a mini-batch of output vectors of the MLPs, as an alternative loss function that does not require classes. With this in mind, we have introduced a new learning parameter (the weighting vectors) that modifies the raw output vectors (z) based on the statistical constraint (\phi). In addition, the weighting vectors help to classify the input samples. As a result, the pseudo-classes (obtained in the classification step in Equation 4) change during training and similar elements are grouped together (Figures 1, 2, and 3).
* Motivated by the association learning between both streams, we propose to use the pseudo-classes of one network as targets for the other network, and vice versa. As can be seen in Figures 1, 2 and 3, each row in the first and second columns (MLP^(1) and MLP^(2)) represents a pseudo-class (index) between 0-9. After the model is trained, both networks agree on classifying similar input samples (or digits) with the same index.
* In summary, the two previous components are used in an EM approach (a simplified sketch follows the steps below).
- Initial step: all input samples x^(1) and x^(2) are given random pseudo-classes c^(1) and c^(2), such that the histogram of pseudo-classes matches the desired statistical distribution.
- E-step: classifies the output vectors based on the weighting vectors (Equation 4) and approximates the current statistical distribution of the mini-batch (Equation 3). Note that the pseudo-classes are assigned to the samples after a number of iterations; in other words, the update of the pseudo-classes is not online.
- M-step updates the weighting vectors (\gamma^(1), \gamma^(2)) and the parameters of the networks (\theta^(1),\theta^(2))
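A heavily simplified sketch of the coupling described above, for illustration only: it assigns pseudo-classes online rather than after several iterations, replaces the weighting-vector machinery with a soft KL penalty toward \phi, and should not be read as the authors' exact E/M update rules.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

K = 10                                         # assumed number of pseudo-classes
mlp1 = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, K))
mlp2 = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, K))
opt = torch.optim.Adam(list(mlp1.parameters()) + list(mlp2.parameters()), lr=1e-3)
phi = torch.full((K,), 1.0 / K)                # assumed uniform target distribution

def train_step(x1, x2):
    """x1, x2: (B, 784) batches of paired inputs of the same unknown class."""
    z1, z2 = mlp1(x1), mlp2(x2)
    # "E-step" (schematic): hard pseudo-classes from the current outputs.
    c1, c2 = z1.argmax(dim=1), z2.argmax(dim=1)
    # Cross-network coupling: each MLP is trained on the other MLP's pseudo-classes.
    assoc = F.cross_entropy(z1, c2) + F.cross_entropy(z2, c1)
    # Soft stand-in for the statistical constraint: keep the batch-average
    # output distribution of each network close to phi (KL penalty).
    p1, p2 = F.softmax(z1, 1).mean(0), F.softmax(z2, 1).mean(0)
    match = (p1 * (p1 / phi).log()).sum() + (p2 * (p2 / phi).log()).sum()
    loss = assoc + match
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# toy call with random stand-ins for two paired 28x28 image batches
x1, x2 = torch.randn(32, 784), torch.randn(32, 784)
print(train_step(x1, x2))
```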
We have updated our paper in order to clarify more the model and still keeping the page limit.
|
r1bMV7Ntg | Episode-Based Active Learning with Bayesian Neural Networks | [
"Feras Dayoub",
"Niko Suenderhauf",
"Peter Corke"
] | We investigate different strategies for active learning with Bayesian deep neural networks. We focus our analysis on scenarios where new, unlabeled data is obtained episodically, such as commonly encountered in mobile robotics applications. An evaluation of different strategies for acquisition, updating, and final training on the CIFAR-10 dataset shows that incremental network updates with final training on the accumulated acquisition set are essential for best performance, while limiting the amount of required human labeling labor. | [
"Computer vision",
"Deep learning"
] | https://openreview.net/pdf?id=r1bMV7Ntg | BJ3muFasx | comment | 1,490,028,580,064 | r1bMV7Ntg | [
"everyone"
] | [
"ICLR.cc/2017/pcs"
] | decision: Reject
title: ICLR committee final decision |
r1bMV7Ntg | Episode-Based Active Learning with Bayesian Neural Networks | [
"Feras Dayoub",
"Niko Suenderhauf",
"Peter Corke"
] | We investigate different strategies for active learning with Bayesian deep neural networks. We focus our analysis on scenarios where new, unlabeled data is obtained episodically, such as commonly encountered in mobile robotics applications. An evaluation of different strategies for acquisition, updating, and final training on the CIFAR-10 dataset shows that incremental network updates with final training on the accumulated acquisition set are essential for best performance, while limiting the amount of required human labeling labor. | [
"Computer vision",
"Deep learning"
] | https://openreview.net/pdf?id=r1bMV7Ntg | BkW4eYfsg | comment | 1,489,305,640,949 | ByiUJ81se | [
"everyone"
] | [
"~Niko_Suenderhauf1"
] | title: Reply to Reviewer 2
comment: Thank you for your constructive feedback.
We added a citation to Gal et al. 2017, which appeared on arXiv after we submitted our paper. Note that we previously cited their NIPS 2016 workshop contribution, which was a poster that essentially covers the contents of their new arXiv submission. Gal et al. 2017 showed that the max entropy acquisition function yields results comparable to more complex acquisition functions. For that reason (and since the Gal et al. NIPS workshop poster was known to us), we selected the max entropy function for the experiments conducted in this paper.
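For reference, the max entropy acquisition function mentioned above can be sketched as follows (a generic illustration assuming MC-dropout samples of class probabilities; not the exact implementation used in the paper):

```python
import numpy as np

def max_entropy_acquire(prob_samples, k):
    """Select the k most uncertain unlabeled points.

    prob_samples: (n_mc, n_points, n_classes) class probabilities from
    n_mc stochastic forward passes (e.g. MC dropout) of a Bayesian network.
    Returns the indices of the k points with the highest predictive entropy.
    """
    p_mean = prob_samples.mean(axis=0)                       # (n_points, n_classes)
    entropy = -np.sum(p_mean * np.log(p_mean + 1e-12), axis=1)
    return np.argsort(-entropy)[:k]

# toy usage: 20 MC samples, 1000 candidate images, 10 classes
rng = np.random.RandomState(0)
logits = rng.randn(20, 1000, 10)
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
query_idx = max_entropy_acquire(probs, k=100)
```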
We furthermore updated the submission to include the random selection baseline as requested: Fig. 1 (right) now shows the performance of a network trained on 74% randomly selected images from the training dataset as a baseline. We show the averaged performance from 10 independent runs (as for all other experiments).
We hope these revisions make the paper a more valuable contribution. |
r1bMV7Ntg | Episode-Based Active Learning with Bayesian Neural Networks | [
"Feras Dayoub",
"Niko Suenderhauf",
"Peter Corke"
] | We investigate different strategies for active learning with Bayesian deep neural networks. We focus our analysis on scenarios where new, unlabeled data is obtained episodically, such as commonly encountered in mobile robotics applications. An evaluation of different strategies for acquisition, updating, and final training on the CIFAR-10 dataset shows that incremental network updates with final training on the accumulated acquisition set are essential for best performance, while limiting the amount of required human labeling labor. | [
"Computer vision",
"Deep learning"
] | https://openreview.net/pdf?id=r1bMV7Ntg | ByiUJ81se | official_review | 1,489,096,531,249 | r1bMV7Ntg | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper69/AnonReviewer2"
] | title: Limited novelty and significance
rating: 4: Ok but not good enough - rejection
review: The paper evaluates Bayesian Neural Networks for active learning on the CIFAR-10 dataset. It investigates incremental vs. full-batch training for network updates. It uses a more complex dataset (CIFAR-10 instead of MNIST) than prior work that compared only on MNIST (Gal et al., 2016).
Pros:
-Simple and clear presentation
-It presents episode-based active learning setting, which is closer to application scenarios in robotics
Cons:
-It does not compare against different acquisition functions and classifiers, which are important for evaluating good active learning techniques, and instead only compares simple and heuristic ways to pick incremental or full data to train on
-Improvements are small. In addition, it would be reasonable to show accuracy on 70% randomly selected data, etc.
-It has limited novelty, since Gal et al., 2016 already applied BNNs to active learning. They also have an updated paper with new results (https://arxiv.org/abs/1703.02910)
confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
r1bMV7Ntg | Episode-Based Active Learning with Bayesian Neural Networks | [
"Feras Dayoub",
"Niko Suenderhauf",
"Peter Corke"
] | We investigate different strategies for active learning with Bayesian deep neural networks. We focus our analysis on scenarios where new, unlabeled data is obtained episodically, such as commonly encountered in mobile robotics applications. An evaluation of different strategies for acquisition, updating, and final training on the CIFAR-10 dataset shows that incremental network updates with final training on the accumulated acquisition set are essential for best performance, while limiting the amount of required human labeling labor. | [
"Computer vision",
"Deep learning"
] | https://openreview.net/pdf?id=r1bMV7Ntg | rkxCf4gog | official_review | 1,489,154,760,415 | r1bMV7Ntg | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper69/AnonReviewer1"
] | title: review
rating: 4: Ok but not good enough - rejection
review: The submission investigates episodic active learning, meaning that the additional samples arrive sequentially in batches and ground-truth labels can be acquired from an oracle in an active learning fashion.
More specifically, the submission investigates Bayesian neural networks. The choice is not well motivated and a bit unclear. No comparison is provided.
Different strategies for step-wise training and label acquisition are evaluated.
The evaluation is diffuse as there are multiple competing goals, e.g.:
- queries to the oracle
- accuracy of the final model
- accuracy of intermediate models
- computation time (?)
For the first three, plots are shown, but the results are presented in a way that makes the strategies difficult to compare.
Main points of criticism:
- It is strange that the fully supervised case performs slightly worse than two of the incremental approaches. It might be noise or there might be a problem with the supervised baseline. There is no satisfying explanation of this observation in the submission.
- The submission seems too much out of context. No directly related work is cited for the problem; the theme of sequentially retrieving labels is common to most active learning and experimental design papers; performance is typically plotted over the number of samples.
- No baselines are computed, only the 5 strategies the authors came up with.
- No prior strategies were drawn from the related work.
The authors make a point in the conclusion that their setting doesn't allow re-observing samples. But from reading the setup, it seems this is only true for the active learning scheme, which is only allowed to pick from the current batch/pool (training is still performed on all selected samples by some strategies). If this is an important point, the submission fails to highlight its importance in the experiments. A baseline should be shown where the active learning picks from the whole set. The suspicion remains that there is not too much of a difference; therefore it would be important to show those numbers.
confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
r1IvyjVYl | Fast Adaptation in Generative Models with Generative Matching Networks | [
"Sergey Bartunov",
"Dmitry P. Vetrov"
] | We develop a new class of deep generative model called generative matching networks (GMNs) which is inspired by the recently proposed matching networks for one-shot learning in discriminative tasks.
By conditioning on the additional input dataset, generative matching networks may instantly learn new concepts that were not available during the training but conform to a similar generative process, without explicit limitations on the number of additional input objects or the number of concepts they represent.
Our experiments on the Omniglot dataset demonstrate that GMNs can significantly improve predictive performance on the fly as more additional data is available and generate examples of previously unseen handwritten characters once only a few images of them are provided. | [
"Deep learning",
"Unsupervised Learning"
] | https://openreview.net/pdf?id=r1IvyjVYl | SkkLueGjl | comment | 1,489,270,855,051 | HkO5a9lie | [
"everyone"
] | [
"~Sergey_Bartunov1"
] | title: Reply
comment: Thank you for the review.
We have uploaded a new version of the manuscript with a refined figure 1, which should make the generative process more clear.
> Does it mean the basic version uses standard normal to be the prior and further the model employs an inference network?
That's correct.
Also note that similarly to the generative model, our inference network is conditional, i.e. has the form of q(z | x, X).
> But in order to argue that the latent variable z brings the stochasticity and generalization ability, the current experiments are not sufficient enough and lack of baseline models.
Our baselines are the standard VAE and a conditional generative model that resembles in its structure the neural statistician model (which is another ICLR submission). It is unclear how to estimate the predictive log-likelihood in the original neural statistician model, hence we had to make an adaptation which is more tractable and still allows us to make the point about the usefulness of the proposed matching procedure.
To the best of our knowledge, there are no other variants of conditional VAEs that would be relevant for the comparison in a similar setting (fast generalization from multi-class and multi-object data).
> Even the comparison with VAE only shows marginal improvement.
When not conditioned on any additional data, our model indeed cannot perform significantly better than the VAE, because for both models the architecture and the amount of information available are the same.
However, we show that generative matching networks can perform much better in terms of predictive log-likelihood as we provide more conditioning objects.
In fact, nearly the same performance in the unconditioned regime is already a good result, since the proposed model can be safely used in the absence of new data.
r1IvyjVYl | Fast Adaptation in Generative Models with Generative Matching Networks | [
"Sergey Bartunov",
"Dmitry P. Vetrov"
] | We develop a new class of deep generative model called generative matching networks (GMNs) which is inspired by the recently proposed matching networks for one-shot learning in discriminative tasks.
By conditioning on the additional input dataset, generative matching networks may instantly learn new concepts that were not available during the training but conform to a similar generative process, without explicit limitations on the number of additional input objects or the number of concepts they represent.
Our experiments on the Omniglot dataset demonstrate that GMNs can significantly improve predictive performance on the fly as more additional data is available and generate examples of previously unseen handwritten characters once only a few images of them are provided. | [
"Deep learning",
"Unsupervised Learning"
] | https://openreview.net/pdf?id=r1IvyjVYl | rJs4dYaix | comment | 1,490,028,595,114 | r1IvyjVYl | [
"everyone"
] | [
"ICLR.cc/2017/pcs"
] | decision: Accept
title: ICLR committee final decision |
r1IvyjVYl | Fast Adaptation in Generative Models with Generative Matching Networks | [
"Sergey Bartunov",
"Dmitry P. Vetrov"
] | We develop a new class of deep generative model called generative matching networks (GMNs) which is inspired by the recently proposed matching networks for one-shot learning in discriminative tasks.
By conditioning on the additional input dataset, generative matching networks may instantly learn new concepts that were not available during the training but conform to a similar generative process, without explicit limitations on the number of additional input objects or the number of concepts they represent.
Our experiments on the Omniglot dataset demonstrate that GMNs can significantly improve predictive performance on the fly as more additional data is available and generate examples of previously unseen handwritten characters once only a few images of them are provided. | [
"Deep learning",
"Unsupervised Learning"
] | https://openreview.net/pdf?id=r1IvyjVYl | Hk1sOgzjg | comment | 1,489,270,935,347 | BJD37aesg | [
"everyone"
] | [
"~Sergey_Bartunov1"
] | title: Reply
comment: Thank you for your review.
We have uploaded a new version of the paper in which we changed figure 1 to hopefully make the generative process and the notation used clearer.
r1IvyjVYl | Fast Adaptation in Generative Models with Generative Matching Networks | [
"Sergey Bartunov",
"Dmitry P. Vetrov"
] | We develop a new class of deep generative model called generative matching networks (GMNs) which is inspired by the recently proposed matching networks for one-shot learning in discriminative tasks.
By conditioning on the additional input dataset, generative matching networks may instantly learn new concepts that were not available during the training but conform to a similar generative process, without explicit limitations on the number of additional input objects or the number of concepts they represent.
Our experiments on the Omniglot dataset demonstrate that GMNs can significantly improve predictive performance on the fly as more additional data is available and generate examples of previously unseen handwritten characters once only a few images of them are provided. | [
"Deep learning",
"Unsupervised Learning"
] | https://openreview.net/pdf?id=r1IvyjVYl | BJD37aesg | official_review | 1,489,191,855,333 | r1IvyjVYl | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper95/AnonReviewer1"
] | title: Interesting work
rating: 7: Good paper, accept
review: The paper reports on a conditional VAE that generates samples similar to few samples it is conditioned upon.
The conditioning samples in this work are taken from a few different classes. It is shown empirically that a vector summary of the conditioning dataset that simply averages the representations of the individual samples doesn't encode the information well. Instead, the authors propose to aggregate that information similarly to the method used in matching networks. This method seems to work better.
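To make the contrast concrete, the two aggregation schemes can be sketched roughly as follows (an illustration of the general idea, not the authors' exact architecture):

```python
import numpy as np

def softmax(x):
    x = x - x.max()
    return np.exp(x) / np.exp(x).sum()

def mean_summary(H):
    """Order-free average of conditioning-sample representations H: (n, d)."""
    return H.mean(axis=0)

def matching_summary(q, H):
    """Attention-style summary: weight each conditioning sample by its
    similarity to a query representation q (as in matching networks)."""
    weights = softmax(H @ q)          # (n,) similarity-based attention
    return weights @ H                # (d,) weighted combination

rng = np.random.RandomState(0)
H = rng.randn(8, 16)                  # 8 conditioning samples, 16-d features
q = rng.randn(16)
print(mean_summary(H).shape, matching_summary(q, H).shape)
```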
Overall it's an interesting idea. The model description could be clearer however.
confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
r1IvyjVYl | Fast Adaptation in Generative Models with Generative Matching Networks | [
"Sergey Bartunov",
"Dmitry P. Vetrov"
] | We develop a new class of deep generative model called generative matching networks (GMNs) which is inspired by the recently proposed matching networks for one-shot learning in discriminative tasks.
By conditioning on the additional input dataset, generative matching networks may instantly learn new concepts that were not available during the training but conform to a similar generative process, without explicit limitations on the number of additional input objects or the number of concepts they represent.
Our experiments on the Omniglot dataset demonstrate that GMNs can significantly improve predictive performance on the fly as more additional data is available and generate examples of previously unseen handwritten characters once only a few images of them are provided. | [
"Deep learning",
"Unsupervised Learning"
] | https://openreview.net/pdf?id=r1IvyjVYl | HkO5a9lie | official_review | 1,489,182,096,436 | r1IvyjVYl | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper95/AnonReviewer2"
] | title: interesting but needs further work to improve
rating: 5: Marginally below acceptance threshold
review: This paper proposes an interesting conditional generative model using generative matching networks, so that the model is able to carry out one-shot or few-shot learning.
However, the notation is very confusing and lacks clarity.
Figure 1 shows a neural structure, but there seems to be no corresponding text explaining it.
The authors introduce the model by telling the story from the basic version, but I couldn't follow the further modifications.
Does it mean that the basic version uses a standard normal as the prior, and that the further version employs an inference network?
I can understand the authors' attempt to train a conditional generative distribution to produce data with an intermediate latent variable.
But in order to argue that the latent variable z brings stochasticity and generalization ability, the current experiments are not sufficient and lack baseline models.
Even the comparison with the VAE only shows marginal improvement.
I think this paper needs a bit more work on the presentation and the experiments.
confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
Hk6dkJQFx | Revisiting Batch Normalization For Practical Domain Adaptation | [
"Yanghao Li",
"Naiyan Wang",
"Jianping Shi",
"Jiaying Liu",
"Xiaodi Hou"
] | Deep neural networks (DNN) have shown unprecedented success in various computer vision applications such as image classification and object detection. However, it is still a common annoyance during the training phase, that one has to prepare at least thousands of labeled images to fine-tune a network to a specific domain. Recent study shows that a DNN has strong dependency towards the training dataset, and the learned features cannot be easily transferred to a different but relevant task without fine-tuning. In this paper, we propose a simple yet powerful remedy, called Adaptive Batch Normalization (AdaBN) to increase the generalization ability of a DNN. By modulating the statistics from the source domain to the target domain in all Batch Normalization layers across the network, our approach achieves deep adaptation effect for domain adaptation tasks. In contrary to other deep learning domain adaptation methods, our method does not require additional components, and is parameter-free. It archives state-of-the-art performance despite its surprising simplicity. Furthermore, we demonstrate that our method is complementary with other existing methods. Combining AdaBN with existing domain adaptation treatments may further improve model performance. | [] | https://openreview.net/pdf?id=Hk6dkJQFx | r1U87aQje | official_review | 1,489,388,366,007 | Hk6dkJQFx | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper38/AnonReviewer2"
] | rating: 5: Marginally below acceptance threshold
review: I previously reviewed the paper and am attaching my review below.
I still have concerns regarding the incorrect arguments raised in the paper (such as the nature of matching the source/target domain distributions), and it seems that the authors simply moved those sections down to the appendix. As a result I am keeping my original recommendation, but will not argue if the AC decides to accept the paper.
*** Original review ***
Overall I think this is an interesting paper which shows empirical performance improvements over baselines. However, my main concern with the paper is its limited technical depth, as the gist of the paper can be summarized as follows: instead of keeping a single set of batch norm mean and variance estimates for the whole model, estimate them on a per-domain basis. I am not sure if this is novel, as it is a natural extension of the original batch normalization paper. Overall I think this paper is a better fit as a short workshop presentation than as a full conference paper.
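The reviewer's one-sentence summary corresponds roughly to the following sketch of per-domain batch-norm statistics (an illustrative reconstruction, not the authors' code):

```python
import numpy as np

def bn_forward(x, mean, var, gamma, beta, eps=1e-5):
    """Standard BN transform with externally supplied statistics."""
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

# gamma/beta are learned on the source domain and kept fixed;
# only the normalization statistics are re-estimated per domain.
def adapt_statistics(features):
    """Estimate BN mean/variance from (n, d) features of one domain."""
    return features.mean(axis=0), features.var(axis=0)

rng = np.random.RandomState(0)
gamma, beta = np.ones(64), np.zeros(64)
target_feats = rng.randn(500, 64) * 2.0 + 1.0        # shifted target-domain features

mu_t, var_t = adapt_statistics(target_feats)          # per-domain adaptation
out = bn_forward(target_feats, mu_t, var_t, gamma, beta)
```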
Detailed comments:
Section 3.1: I respectfully disagree that the core idea of BN is to align the distribution of training data. It does this as a side effect, but the major purpose of BN is to properly control the scale of the gradient so we can train very deep models without the problem of vanishing gradients. It is plausible that intermediate features from different datasets naturally show up as different groups in a t-SNE embedding. This is not a particular feature of batch normalization: visualize a set of intermediate features with AlexNet and one gets the same results. So the premise in section 3.1 is not accurate.
Section 3.3: I have the same concern as the other reviewer. It seems to be quite detatched from the general idea of AdaBN. Equation 2 presents an obvious argument that the combined BN-fully_connected layer forms a linear transform, which is true in the original BN case and in this case as well. I do not think it adds much theoretical depth to the paper. (In general the novelty of this paper seems low)
Experiments:
- Section 4.3.1 is not an accurate measure of the "effectiveness" of the proposed method, but a verification of a simple fact: previously, we normalize the source domain features into a Gaussian distribution; the proposed method explicitly normalizes the target domain features into the same Gaussian distribution as well. So, it is obvious that the KL divergence between these two distributions is smaller - in fact, one is *explicitly* making them close. However, this does not directly correlate with the effectiveness of the final classification performance.
- Section 4.3.2: the sensitivity analysis is a very interesting read, as it suggests that only a very small number of images is needed to account for the domain shift in the AdaBN parameter estimation. This seems to suggest that a single "whitening" operation is already good enough to offset the domain bias (in both cases shown, a single batch is sufficient to recover about 80% of the performance gain, although I cannot get data for even smaller numbers of examples from the figure). It would thus be useful to have a comparison between these approaches, and also a detailed analysis of the effect from each layer of the model - the current analysis seems a bit thin.
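For reference, a quick numerical check of the Equation 2 point above (with frozen statistics, a BN layer followed by a fully connected layer collapses into one affine map) could look as follows; the shapes and values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, n = 8, 4, 16
x = rng.standard_normal((n, d_in))

# Frozen BN statistics and affine parameters, followed by a fully connected layer.
mu, var = rng.standard_normal(d_in), rng.random(d_in) + 0.5
gamma, beta = rng.standard_normal(d_in), rng.standard_normal(d_in)
W, b = rng.standard_normal((d_out, d_in)), rng.standard_normal(d_out)

bn = gamma * (x - mu) / np.sqrt(var) + beta
y = bn @ W.T + b

# The same map written as a single affine transform W'x + b'.
W_prime = W * (gamma / np.sqrt(var))                  # rescales the columns of W
b_prime = W @ (beta - gamma * mu / np.sqrt(var)) + b
print(np.allclose(y, x @ W_prime.T + b_prime))        # True
```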
confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
Hk6dkJQFx | Revisiting Batch Normalization For Practical Domain Adaptation | [
"Yanghao Li",
"Naiyan Wang",
"Jianping Shi",
"Jiaying Liu",
"Xiaodi Hou"
] | Deep neural networks (DNN) have shown unprecedented success in various computer vision applications such as image classification and object detection. However, it is still a common annoyance during the training phase that one has to prepare at least thousands of labeled images to fine-tune a network to a specific domain. Recent studies show that a DNN has a strong dependency on the training dataset, and the learned features cannot be easily transferred to a different but relevant task without fine-tuning. In this paper, we propose a simple yet powerful remedy, called Adaptive Batch Normalization (AdaBN), to increase the generalization ability of a DNN. By modulating the statistics from the source domain to the target domain in all Batch Normalization layers across the network, our approach achieves a deep adaptation effect for domain adaptation tasks. In contrast to other deep learning domain adaptation methods, our method does not require additional components and is parameter-free. It achieves state-of-the-art performance despite its surprising simplicity. Furthermore, we demonstrate that our method is complementary to other existing methods. Combining AdaBN with existing domain adaptation treatments may further improve model performance. | [] | https://openreview.net/pdf?id=Hk6dkJQFx | HkKhiPysx | official_review | 1,489,103,792,909 | Hk6dkJQFx | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper38/AnonReviewer1"
] | title: simple but effective domain adaptation approach
rating: 7: Good paper, accept
review: (I previously reviewed this paper for the main conference. I will copy most of my comments here, removing criticisms that have since been addressed.)
Pros:
The method is very simple and easy to understand and apply.
The experiments demonstrate that the method compares favorably with existing methods on standard domain adaptation tasks.
The analysis in section 5.3.3 shows that only a small number of target domain samples are needed for adaptation of the network.
Good results for remote sensing domain adaptation included in appendix.
Cons:
There is little novelty -- the method is arguably too simple to be called a “method.” Rather, it’s the most straightforward/intuitive approach when using a network with batch normalization for domain adaptation. (The alternative -- using the BN statistics from the source domain for target domain examples -- is less natural, to me.)
Overall, there’s not much novelty here, but the paper includes sufficient experimentation and interesting analysis, and it’s hard to argue that simplicity is a bad thing when the method is clearly competitive with or outperforming prior work on the standard benchmarks (in a domain adaptation tradition that started with “Frustratingly Easy Domain Adaptation”).
confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
Hk6dkJQFx | Revisiting Batch Normalization For Practical Domain Adaptation | [
"Yanghao Li",
"Naiyan Wang",
"Jianping Shi",
"Jiaying Liu",
"Xiaodi Hou"
] | Deep neural networks (DNN) have shown unprecedented success in various computer vision applications such as image classification and object detection. However, it is still a common annoyance during the training phase that one has to prepare at least thousands of labeled images to fine-tune a network to a specific domain. Recent studies show that a DNN has a strong dependency on the training dataset, and the learned features cannot be easily transferred to a different but relevant task without fine-tuning. In this paper, we propose a simple yet powerful remedy, called Adaptive Batch Normalization (AdaBN), to increase the generalization ability of a DNN. By modulating the statistics from the source domain to the target domain in all Batch Normalization layers across the network, our approach achieves a deep adaptation effect for domain adaptation tasks. In contrast to other deep learning domain adaptation methods, our method does not require additional components and is parameter-free. It achieves state-of-the-art performance despite its surprising simplicity. Furthermore, we demonstrate that our method is complementary to other existing methods. Combining AdaBN with existing domain adaptation treatments may further improve model performance. | [] | https://openreview.net/pdf?id=Hk6dkJQFx | SJZ-nJDie | comment | 1,489,595,385,123 | r1U87aQje | [
"everyone"
] | [
"~Yanghao_Li1"
] | title: Response to Reviewer2
comment: Thanks a lot for your comments and suggestions on our work. Actually, we have already updated the paper in this workshop submission to address your comments. Our responses are shown as follows (some of them are the same as our previous comment):
1. About section 3.1 (section 2.1 in this version)
We have updated our writing. However, we still think aligning the distribution of training data is not just a side effect. It is the key means of achieving the purpose of BN, which is to avoid the problem of vanishing gradients and to help optimization. In the original BN paper, the authors’ motivation is to address the problem of “internal covariate shift”, which means “the change in the distributions of layers’ inputs”. Thus, BN is proposed to “reduce internal covariate shift” and make “the distribution of nonlinearity inputs more stable”.
(2) We also directly visualize the intermediate features with the Inception-BN network instead of our BN features. The figure can be seen at this link (https://s30.postimg.org/fdamc2l1t/a2d_feature_tsne.png). Red circles are features of samples from the training domain (Amazon) while blue ones are testing features (DSLR). They blend together much more than in Figure 1. This demonstrates that the statistics of the BN layers indeed contain the traits of the data domain. The intermediate CNN features themselves cannot be separated directly by domain.
2. About section 3.3, section 4.3.1
We have revised section 3.3 to make it clearer and we have removed the previous section 4.3.1.
3. About section 4.3.2 (section 5.3.3 in this version)
We have updated additional experimental results in the workshop submission.
(1) We ran experiments with smaller numbers of samples and found that the performance drops further (e.g., 0.652 with 16 samples, 0.661 with 32 samples). We have updated the results in the section “Sensitivity to target domain size”.
(2) In the section “Adaptation Effect for Different BN Layers” (section 5.3.4), we add a detailed analysis of the adaptation effect for different BN layers of our AdaBN method.
|
Hk6dkJQFx | Revisiting Batch Normalization For Practical Domain Adaptation | [
"Yanghao Li",
"Naiyan Wang",
"Jianping Shi",
"Jiaying Liu",
"Xiaodi Hou"
] | Deep neural networks (DNN) have shown unprecedented success in various computer vision applications such as image classification and object detection. However, it is still a common annoyance during the training phase that one has to prepare at least thousands of labeled images to fine-tune a network to a specific domain. Recent studies show that a DNN has a strong dependency on the training dataset, and the learned features cannot be easily transferred to a different but relevant task without fine-tuning. In this paper, we propose a simple yet powerful remedy, called Adaptive Batch Normalization (AdaBN), to increase the generalization ability of a DNN. By modulating the statistics from the source domain to the target domain in all Batch Normalization layers across the network, our approach achieves a deep adaptation effect for domain adaptation tasks. In contrast to other deep learning domain adaptation methods, our method does not require additional components and is parameter-free. It achieves state-of-the-art performance despite its surprising simplicity. Furthermore, we demonstrate that our method is complementary to other existing methods. Combining AdaBN with existing domain adaptation treatments may further improve model performance. | [] | https://openreview.net/pdf?id=Hk6dkJQFx | Hy6MuY6se | comment | 1,490,028,565,370 | Hk6dkJQFx | [
"everyone"
] | [
"ICLR.cc/2017/pcs"
] | decision: Accept
title: ICLR committee final decision |
ByXrfaGFe | Transferring Knowledge to Smaller Network with Class-Distance Loss | [
"Seung Wook Kim",
"Hyo-Eun Kim"
] | Training a network with small capacity that can perform as well as a larger capacity network is an important problem that needs to be solved in real life applications which require fast inference time and small memory requirement. Previous approaches that transfer knowledge from a bigger network to a smaller network show little benefit when applied to state-of-the-art convolutional neural network architectures such as Residual Network trained with batch normalization. We propose class-distance loss that helps teacher networks to form densely clustered vector space to make it easy for the student network to learn from it. We show that a small network with half the size of the original network trained with the proposed strategy can perform close to the original network on CIFAR-10 dataset. | [] | https://openreview.net/pdf?id=ByXrfaGFe | Byc9s1U5e | official_comment | 1,488,481,170,048 | S1Ph1Zr5l | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper36/AnonReviewer2"
] | title: thanks
comment: Thanks for the response. It might be worth having the cross-entropy results in there as well for reference.
The proposed method seems even better in light of the fact that the usual cross entropy knowledge distillation does not work in this case.
|
ByXrfaGFe | Transferring Knowledge to Smaller Network with Class-Distance Loss | [
"Seung Wook Kim",
"Hyo-Eun Kim"
] | Training a network with small capacity that can perform as well as a larger capacity network is an important problem that needs to be solved in real life applications which require fast inference time and small memory requirement. Previous approaches that transfer knowledge from a bigger network to a smaller network show little benefit when applied to state-of-the-art convolutional neural network architectures such as Residual Network trained with batch normalization. We propose class-distance loss that helps teacher networks to form densely clustered vector space to make it easy for the student network to learn from it. We show that a small network with half the size of the original network trained with the proposed strategy can perform close to the original network on CIFAR-10 dataset. | [] | https://openreview.net/pdf?id=ByXrfaGFe | S13f_tTjl | comment | 1,490,028,563,775 | ByXrfaGFe | [
"everyone"
] | [
"ICLR.cc/2017/pcs"
] | decision: Accept
title: ICLR committee final decision |
ByXrfaGFe | Transferring Knowledge to Smaller Network with Class-Distance Loss | [
"Seung Wook Kim",
"Hyo-Eun Kim"
] | Training a network with small capacity that can perform as well as a larger capacity network is an important problem that needs to be solved in real life applications which require fast inference time and small memory requirement. Previous approaches that transfer knowledge from a bigger network to a smaller network show little benefit when applied to state-of-the-art convolutional neural network architectures such as Residual Network trained with batch normalization. We propose class-distance loss that helps teacher networks to form densely clustered vector space to make it easy for the student network to learn from it. We show that a small network with half the size of the original network trained with the proposed strategy can perform close to the original network on CIFAR-10 dataset. | [] | https://openreview.net/pdf?id=ByXrfaGFe | H14hj1Zox | comment | 1,489,202,092,428 | BkvUXIlse | [
"everyone"
] | [
"~Seung_Wook_Kim1"
] | title: Reply to AnonReviewer1
comment: Thank you for your review.
Questions:
- What is the performance of the Teacher on CIFAR10?
The 110-layer 'Baseline' and 'Class-distance loss' ResNets in Table 1 refer to the performance of the teacher models.
- Did you compare with knowledge distillation baseline (that matches softmax logits of teacher and student networks) ?
Yes. We couldn't get the error rate to go below 9% by training a student model with traditional cross-entropy transfer.
- Do you use Batch Normalization in your residual networks?
Yes. All resnets are trained with batch normalization.
|
ByXrfaGFe | Transferring Knowledge to Smaller Network with Class-Distance Loss | [
"Seung Wook Kim",
"Hyo-Eun Kim"
] | Training a network with small capacity that can perform as well as a larger capacity network is an important problem that needs to be solved in real life applications which require fast inference time and small memory requirement. Previous approaches that transfer knowledge from a bigger network to a smaller network show little benefit when applied to state-of-the-art convolutional neural network architectures such as Residual Network trained with batch normalization. We propose class-distance loss that helps teacher networks to form densely clustered vector space to make it easy for the student network to learn from it. We show that a small network with half the size of the original network trained with the proposed strategy can perform close to the original network on CIFAR-10 dataset. | [] | https://openreview.net/pdf?id=ByXrfaGFe | S1Ph1Zr5l | comment | 1,488,420,783,031 | H1lqJpgqg | [
"everyone"
] | [
"~Seung_Wook_Kim1"
] | title: reply to AnonReviewer2
comment: Thank you for your comment.
Regarding your question, TF-baseline refers to the student model trained with feature-vector transfer. We couldn't get the error rate to go below 9% by training a student model with traditional cross-entropy transfer (as stated in your comment). This is expected, as previous works (Srivastava et al. (2015) and Chen et al. (2016)) indicated that the cross-entropy transfer strategy did not outperform baseline networks trained from scratch when the baselines are sufficiently deep neural networks with strong regularizers such as batch norm.
We'll add suggested citations as well. |
ByXrfaGFe | Transferring Knowledge to Smaller Network with Class-Distance Loss | [
"Seung Wook Kim",
"Hyo-Eun Kim"
] | Training a network with small capacity that can perform as well as a larger capacity network is an important problem that needs to be solved in real life applications which require fast inference time and small memory requirement. Previous approaches that transfer knowledge from a bigger network to a smaller network show little benefit when applied to state-of-the-art convolutional neural network architectures such as Residual Network trained with batch normalization. We propose class-distance loss that helps teacher networks to form densely clustered vector space to make it easy for the student network to learn from it. We show that a small network with half the size of the original network trained with the proposed strategy can perform close to the original network on CIFAR-10 dataset. | [] | https://openreview.net/pdf?id=ByXrfaGFe | BkvUXIlse | official_review | 1,489,163,086,754 | ByXrfaGFe | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper36/AnonReviewer1"
] | title: Promising Work
rating: 7: Good paper, accept
review: This paper investigates Knowledge Distillation for network compression. In their approach, the authors propose to match the feature vector of the softmax preactivation. In addition, they introduce a new loss function for training the teacher, i.e. they add a regularisation term so that class-wise clusters of feature vectors are denser.
The authors evaluate their approach on the CIFAR-10 dataset using ResNets for both teacher and student. In contrast to previous approaches using Knowledge Distillation, they show that their approach is able to leverage the teacher to improve the student's performance with such network architectures.
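As a rough illustration of the two ingredients described above, a hedged PyTorch sketch might look like the following; the transfer term matches pre-softmax feature vectors, while the class-distance term is written here as a pull toward class centroids, which is one plausible reading of the paper's description rather than its exact formula:

```python
import torch
import torch.nn.functional as F

def transfer_loss(student_feat, teacher_feat):
    # Match the student's pre-softmax feature vector to the frozen teacher's.
    return F.mse_loss(student_feat, teacher_feat.detach())

def class_distance_loss(feat, labels, num_classes):
    # Assumed form of the regularizer: pull each teacher feature toward its
    # class centroid so that class-wise clusters become denser.
    loss = feat.new_zeros(())
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            centroid = feat[mask].mean(dim=0)
            loss = loss + ((feat[mask] - centroid) ** 2).sum(dim=1).mean()
    return loss / num_classes
```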
Questions:
- What is the performance of the Teacher on CIFAR10?
- Did you compare with knowledge distillation baseline (that matches softmax logits of teacher and student networks) ?
- Do you use Batch Normalization in your residual networks?
Pros:
- The paper is clear and easy to follow
- The authors show that Knowledge Distillation is useful for recent network architectures (ResNets).
Con:
- Experiments on only one dataset.
I recommend acceptance.
confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
ByXrfaGFe | Transferring Knowledge to Smaller Network with Class-Distance Loss | [
"Seung Wook Kim",
"Hyo-Eun Kim"
] | Training a network with small capacity that can perform as well as a larger capacity network is an important problem that needs to be solved in real life applications which require fast inference time and small memory requirement. Previous approaches that transfer knowledge from a bigger network to a smaller network show little benefit when applied to state-of-the-art convolutional neural network architectures such as Residual Network trained with batch normalization. We propose class-distance loss that helps teacher networks to form densely clustered vector space to make it easy for the student network to learn from it. We show that a small network with half the size of the original network trained with the proposed strategy can perform close to the original network on CIFAR-10 dataset. | [] | https://openreview.net/pdf?id=ByXrfaGFe | H1lqJpgqg | official_review | 1,488,142,216,259 | ByXrfaGFe | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper36/AnonReviewer2"
] | title: good paper
rating: 8: Top 50% of accepted papers, clear accept
review: In this work, the authors propose to transfer knowledge from a teacher model to a smaller student model with two variations:
1) knowledge is transferred by matching the feature vector before the softmax
2) the teacher is trained with an additional regularization term to make the feature vectors more dense within the same class.
This is a solid piece of work that should be accepted. One question:
- Does the TF-baseline refer to student model trained with traditional cross-entropy knowledge transfer? or the feature vector transfer? If the latter, can you please have additional baseline numbers for student models trained with (standard) cross-entropy loss transfer?
Minor comments:
- Citations: Should really cite Bucila et al. 2006 for knowledge distillation and LeCun et al. 1990 (Optimal Brain Damage) for model compression, as these predate some of the more recent work (Hinton 2015, Han 2016, Jaderberg 2014, etc.)
- "Mimic learning": probably best just to stick to "knowledge distillation"
confidence: 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature |
HyDt5XMKg | Non-Associative Learning Representation in the Nervous System of the Nematode Caenorhabditis elegans | [
"Ramin M. Hasani",
"Magdalena Fuchs",
"Victoria Beneder",
"Radu Grosu"
] | Caenorhabditis elegans (C. elegans) illustrated remarkable behavioral plasticities including complex non-associative and associative learning representations. Understanding the principles of such mechanisms presumably leads to constructive inspirations for the design of efficient learning algorithms. In the present study, we postulate a novel approach on modeling single neurons and synapses to study the mechanisms underlying learning in the C. elegans nervous system. In this regard, we construct a precise mathematical model of sensory neurons where we include multi-scale details from genes, ion channels and ion pumps, together with a dynamic model of synapses comprised of neurotransmitters and receptors kinetics. We recapitulate mechanosensory habituation mechanism, a non-associative learning process, in which elements of the neural network tune their parameters as a result of repeated input stimuli. Accordingly, we quantitatively demonstrate the roots of such plasticity in the neuronal and synaptic-level representations. Our findings can potentially give rise to the development of new bio-inspired learning algorithms. | [
"Theory"
] | https://openreview.net/pdf?id=HyDt5XMKg | SyKtffn9g | official_review | 1,488,884,353,150 | HyDt5XMKg | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper30/AnonReviewer1"
] | title: Difficult to understand the primary contribution here
rating: 3: Clear rejection
review: This submission is difficult to review, since it leaves the reader in suspense as to what its specific contributions are.
The authors model the circuitry involved in C. elegans mechanosensory habituation. However, they don't appear to provide many specifics of their model; rather, almost all the space is taken up with background information. The key findings do not appear to be novel (neurons can have state that is modified by the history of the neuron's experience) and are assumptions of their model (based on known experimental results), so it is unclear that they are significant new contributions (given the paucity of details about their specific approach, it is difficult to judge).
Reasons to accept:
- The authors promise that their approach may give rise to new "bio-inspired learning algorithms."
- They provide a good background regarding C elegans and habituation.
Reasons to reject:
- Submission is unclear about their model or specific contributions.
- Although the authors make a reference to this work leading to "better learning algorithms," no specifics are provided. This paper doesn't appear to have much connection with representation learning and may be more suited for a different venue.
- Abstract advertises insights that give rise to "new bio-inspired learning algorithms" but doesn't appear to provide any general insights into learning.
Minor issue: there are a number of grammatical errors.
confidence: 2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper |
HyDt5XMKg | Non-Associative Learning Representation in the Nervous System of the Nematode Caenorhabditis elegans | [
"Ramin M. Hasani",
"Magdalena Fuchs",
"Victoria Beneder",
"Radu Grosu"
] | Caenorhabditis elegans (C. elegans) illustrated remarkable behavioral plasticities including complex non-associative and associative learning representations. Understanding the principles of such mechanisms presumably leads to constructive inspirations for the design of efficient learning algorithms. In the present study, we postulate a novel approach on modeling single neurons and synapses to study the mechanisms underlying learning in the C. elegans nervous system. In this regard, we construct a precise mathematical model of sensory neurons where we include multi-scale details from genes, ion channels and ion pumps, together with a dynamic model of synapses comprised of neurotransmitters and receptors kinetics. We recapitulate mechanosensory habituation mechanism, a non-associative learning process, in which elements of the neural network tune their parameters as a result of repeated input stimuli. Accordingly, we quantitatively demonstrate the roots of such plasticity in the neuronal and synaptic-level representations. Our findings can potentially give rise to the development of new bio-inspired learning algorithms. | [
"Theory"
] | https://openreview.net/pdf?id=HyDt5XMKg | ByM0KwK5g | comment | 1,488,710,089,662 | Bk0nOF45e | [
"everyone"
] | [
"~Ramin_M._Hasani1"
] | title: Biological Learning Principles Abstract
comment: Thank you for your review and comments.
I would like to add some comments:
- The reviewer is surely aware that explaining the details of the equations used for synaptic connectivity and neural dynamics requires far more than 3 single-column pages. The authors attempted to provide a compact, clear picture of the sources of biological learning, and simultaneously pointed out several notes and findings that can build a well-founded understanding for readers from any area of expertise.
- The authors intentionally structured the paper so that ICLR readers, who are mostly computer scientists, can connect to the overall picture of the non-associative learning mechanism in C. elegans within the extended abstract, while the details of the equations are provided in the poster and can be discussed interactively during the workshop.
- We believe that the reviewer is totally right about including the equations. Accordingly, I mention some of them here and would ask for the reviewer's opinion on how to integrate them into the text with a proper explanation.
We have mentioned within the text of the abstract what we can modify in the neuron model in order to create the effect of gene modifications on sensory neuron habituation. Examples include:
1) The conductance of the K-channel decreases over time, due to the gene functions described in [1]. We therefore proposed the following expression for the maximum conductance of the potassium channel, G_K:
G_K is set to a dynamic variable expressed as follows:
G_K = 10 exp(-0.02 t) + 3,
where parameters are determined empirically.
We also hypothesised that the calcium pump plays a key role in the suppression of the calcium level in the sensory neuron. For the calcium pump, its maximum conductance is set to a dynamic variable following a sigmoid-like function:
G_pump = 10 / (exp(-0.01(t+200)) + 1)
Furthermore, we hypothesised that an inactivation-calcium gate should play a role in the learning mechanism.
Therefore, we design an inactivation gate, h, as follows:
dh/dt = (h_inf – h) / tau_h,
where h_inf = 1 / (1 + H * exp((v - v_half)/k_h)),
where the h_inf is the steady state value of the inactivation gate, with gate rate parameters H, v_half and k_h.
v_half = -45 mV, H = 1/, tau_h = 2 s, k_h =1 1/mV.
Figure 1C in the paper is generated by including all the three dynamics described above, within the model of the neuron.
2) We have also mentioned that considering S(t) and G_max of a synapse, and modifying them, results in habituation and dishabituation behavior of the postsynaptic cell similar to that observed in the experimental results:
The overall synaptic current: I_syn = G_max G(V_pre) S(t) (E_Syn – V_Post)
Where G(V_pre) = m(t)
And dm/dt = (m_inf – m) / tau_m
And m_inf = 1/(exp((V_shift – V_pre)/V_range) +1)
And S(t) = n(t) · s(t)
And dn/dt = (n_inf – n) · k_n – n · k_r – n · m(t)
n(t) describes the amount of available neurotransmitter vesicles. With each firing of the neuron, n · m vesicles are removed. Vesicles are refilled from a reserve pool with a rate k_n and move to the reserve pool with a rate k_r. This type of model is described in [2]. With the right choice of parameters (below), this leads to a decrease in the postsynaptic signal after a series of pulses. Without stimulation, the signal strength recovers over time.
s(t) is modelled as follows and gives the probability of the neurotransmitters arriving at the postsynaptic receptors [3]:
ds/dt = -s/tau_F + h
dh/dt = -h/tau_R – h0 . delta(t-t0)
where t0 is the time of the beginning of neurotransmitter release.
• Parameters of m(t): tau_m = 5 ms, V_range = 4 mV, V_shift = -30 mV.
• Parameters of S(t): n_inf = 10000, k_r = 0.01 1/ms and k_n = 0.08 1/ms, tau_R = 2.5 ms, tau_F = 5ms, h0 = 10, t0 = recorded at V_pre > 59mV,
This is part of the analyses we have conducted fully quantitatively. We kept the paper at a high-level description for readability. Accordingly, we plan to include all of these mathematical descriptions in the poster we will present at the workshop.
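For illustration only, the vesicle-pool equation above can be integrated with a simple Euler scheme; the parameter values are those quoted in this comment, while the pulse train m(t) is an assumed stand-in for repeated presynaptic activation, not the authors' input:

```python
import numpy as np

# Parameters quoted above (time in ms): vesicle pool size, refill rate, loss rate.
n_inf, k_n, k_r = 10000.0, 0.08, 0.01
dt, T = 0.1, 2000.0
t = np.arange(0.0, T, dt)
m = (((t % 20.0) < 2.0) & (t < 1000.0)).astype(float)   # 2 ms pulses during the first second

n = np.empty_like(t)
n[0] = n_inf * k_n / (k_n + k_r)            # resting level of the vesicle pool
for i in range(1, len(t)):
    dn = (n_inf - n[i - 1]) * k_n - n[i - 1] * k_r - n[i - 1] * m[i - 1]
    n[i] = n[i - 1] + dt * dn

# n(t) is depleted while the pulse train is on (the depression behind the reduced
# postsynaptic response) and recovers toward rest once stimulation stops.
print(round(n[0]), round(n.min()), round(n[-1]))
```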
I would sincerely ask the reviewer to reevaluate our work, given our recent comments.
Thank you very much for your kind consideration.
References
[1] Shi-Qing Cai, Yi Wang, Ki Ho Park, Xin Tong, Zui Pan, and Federico Sesti. Auto-phosphorylation of a voltage-gated k+ channel controls non-associative learning. The EMBO journal, 28(11):1601–1611, 2009.
[2] David Sterratt, Bruce Graham, Andrew Gillies, and David Willshaw. Principles of computational modelling in neuroscience. Cambridge University Press, 2011.
[3] Erik De Schutter. Computational modeling methods for neuroscientists. The MIT Press, 2009.
|
HyDt5XMKg | Non-Associative Learning Representation in the Nervous System of the Nematode Caenorhabditis elegans | [
"Ramin M. Hasani",
"Magdalena Fuchs",
"Victoria Beneder",
"Radu Grosu"
] | Caenorhabditis elegans (C. elegans) illustrated remarkable behavioral plasticities including complex non-associative and associative learning representations. Understanding the principles of such mechanisms presumably leads to constructive inspirations for the design of efficient learning algorithms. In the present study, we postulate a novel approach on modeling single neurons and synapses to study the mechanisms underlying learning in the C. elegans nervous system. In this regard, we construct a precise mathematical model of sensory neurons where we include multi-scale details from genes, ion channels and ion pumps, together with a dynamic model of synapses comprised of neurotransmitters and receptors kinetics. We recapitulate mechanosensory habituation mechanism, a non-associative learning process, in which elements of the neural network tune their parameters as a result of repeated input stimuli. Accordingly, we quantitatively demonstrate the roots of such plasticity in the neuronal and synaptic-level representations. Our findings can potentially give rise to the development of new bio-inspired learning algorithms. | [
"Theory"
] | https://openreview.net/pdf?id=HyDt5XMKg | S19b8vT5g | comment | 1,488,971,266,488 | SyKtffn9g | [
"everyone"
] | [
"~Ramin_M._Hasani1"
] | title: Re: Difficult to understand the primary contribution here
comment: Thank you very much for your comments and review.
I would like to add some key comments in defense of our work:
- I will highlight the key contributions of our work and stress that our presentation suits the ICLR venue well:
- We provided a clear, compact overview of the mechanisms of non-associative learning within the nervous system of C. elegans. Within the first part of the paper, we built up several insights towards understanding the mechanism of learning by providing key notes on the structure and dynamics of the nervous system during the learning process.
- We have constructed a detailed mathematical simulation platform for our analyses and tried to draw a general overview of the principles we found using our simulator in such a compact report. However, as the reviewer is fully aware, explaining the details of a neuronal model and synaptic connectivity models is extremely difficult in such a compact format. We therefore structured our work to be understandable for a larger audience not very familiar with the field. We also planned to include the details of the equations and analyses within the poster session at the ICLR workshop in order to establish a clear picture of our novel work interactively.
- The first key finding states the novel fact that an additional layer of input neurons can be placed in a network whose properties and state can depend on the structure of the input features and data. The second key finding states that synapses can have states, and that this can significantly change the behavior of the entire network. We have precisely followed the effects of the gene modifications on the global behavior of the worm in several biological experiments and correspondingly added suitable dynamic variables to the model (as noted within the text). The figures illustrate a small example of such experiments and comparisons.
- Within the paper, we stated that our findings may lead to new learning algorithms. This is explained in the text through the concepts introduced in the first part of the paper as well as the key findings. The reader can potentially be inspired to include our findings in existing learning algorithms and correspondingly improve the quality of learning. This is of course at the top of the priority list for our future work.
- The workshop track of ICLR this year “will focus and favor late-breaking developments and very novel ideas.” I believe that designing a simulation platform with which one can easily turn attractive behavioural features, such as learning various representations, into mathematical equations and useful conclusions can be extremely interesting for the ICLR audience.
Furthermore, our extended abstract tries to introduce, in a considerably compact form, a novel principle for modelling the sources of learning in the brain of C. elegans. The topic can easily become difficult to comprehend for the majority of the ICLR audience without proper background information. We therefore attempted to provide a high-level background description of the topic, while including novel findings even within the introductory part. Examples include:
"Sensory (input) neurons within the network are subjected to a mediation during the non- associative training process (repeated tap stimulation) (Kindt et al., 2007). "
This indicates that one can set up a layer of input neurons whose activation threshold is tunable depending on the type of the input features to be learned.
“Within a neural circuit, only some of the interneurons are proposed to be the substrate of memory (Sugi et al., 2014).”
This implies that only some neurons within the nervous system have states while the others are stateless. That makes the process of learning faster and more efficient. Like other biological examples of optimal networks, such as beta cell hubs in islet functional architecture [1], the C. elegans brain network consists of hubs (neurons with states) which are actively involved in learning.
I would like to sincerely ask the reviewer to reevaluate our short paper, given our recent comments.
Thank you very much for your consideration.
References:
[1] Johnston, Natalie R., et al. "Beta cell hubs dictate pancreatic islet responses to glucose." Cell Metabolism 24.3 (2016): 389-401.
|
HyDt5XMKg | Non-Associative Learning Representation in the Nervous System of the Nematode Caenorhabditis elegans | [
"Ramin M. Hasani",
"Magdalena Fuchs",
"Victoria Beneder",
"Radu Grosu"
] | Caenorhabditis elegans (C. elegans) illustrated remarkable behavioral plasticities including complex non-associative and associative learning representations. Understanding the principles of such mechanisms presumably leads to constructive inspirations for the design of efficient learning algorithms. In the present study, we postulate a novel approach on modeling single neurons and synapses to study the mechanisms underlying learning in the C. elegans nervous system. In this regard, we construct a precise mathematical model of sensory neurons where we include multi-scale details from genes, ion channels and ion pumps, together with a dynamic model of synapses comprised of neurotransmitters and receptors kinetics. We recapitulate mechanosensory habituation mechanism, a non-associative learning process, in which elements of the neural network tune their parameters as a result of repeated input stimuli. Accordingly, we quantitatively demonstrate the roots of such plasticity in the neuronal and synaptic-level representations. Our findings can potentially give rise to the development of new bio-inspired learning algorithms. | [
"Theory"
] | https://openreview.net/pdf?id=HyDt5XMKg | rk_fOYTjl | comment | 1,490,028,559,795 | HyDt5XMKg | [
"everyone"
] | [
"ICLR.cc/2017/pcs"
] | decision: Reject
title: ICLR committee final decision |
HyDt5XMKg | Non-Associative Learning Representation in the Nervous System of the Nematode Caenorhabditis elegans | [
"Ramin M. Hasani",
"Magdalena Fuchs",
"Victoria Beneder",
"Radu Grosu"
] | Caenorhabditis elegans (C. elegans) illustrated remarkable behavioral plasticities including complex non-associative and associative learning representations. Understanding the principles of such mechanisms presumably leads to constructive inspirations for the design of efficient learning algorithms. In the present study, we postulate a novel approach on modeling single neurons and synapses to study the mechanisms underlying learning in the C. elegans nervous system. In this regard, we construct a precise mathematical model of sensory neurons where we include multi-scale details from genes, ion channels and ion pumps, together with a dynamic model of synapses comprised of neurotransmitters and receptors kinetics. We recapitulate mechanosensory habituation mechanism, a non-associative learning process, in which elements of the neural network tune their parameters as a result of repeated input stimuli. Accordingly, we quantitatively demonstrate the roots of such plasticity in the neuronal and synaptic-level representations. Our findings can potentially give rise to the development of new bio-inspired learning algorithms. | [
"Theory"
] | https://openreview.net/pdf?id=HyDt5XMKg | Bk0nOF45e | official_review | 1,488,390,325,871 | HyDt5XMKg | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper30/AnonReviewer2"
] | title: Learning without learning equation
rating: 3: Clear rejection
review: It is nice to see some discussion of the biological underpinning of learning, and C. elegans is indeed a great model system. It is also very tempting to hear about modeling genetic mechanisms, and I was really intrigued by the abstract. Unfortunately, the most basic thing, explaining how S(t) depends on the experiments, is completely omitted. It is clear that if the synaptic currents are modified by a variable, the resulting behaviour of the postsynaptic neuron can be modified as well, but not showing this model, and instead showing the well-known conductance model itself, is disappointing.
confidence: 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature |
rJT7bB4Kx | Multimodal Compact Bilinear Pooling for Multimodal Neural Machine Translation | [
"Jean-Benoit Delbrouck",
"Stephane Dupont"
] | In state-of-the-art Neural Machine Translation, an attention mechanism is used during decoding to enhance the translation. At every step, the decoder uses this mechanism to focus on different parts of the source sentence to gather the most useful information before outputting its target word. Recently, the effectiveness of the attention mechanism has also been explored for multimodal tasks, where it becomes possible to focus both on sentence parts and image regions. Approaches to pool two modalities usually include element-wise product, sum or concatenation.
In this paper, we evaluate the more advanced Multimodal Compact Bilinear pooling method, which takes the outer product of two vectors to combine the attention features for the two modalities. This has been previously investigated for visual question answering. We try out this approach for multimodal image caption translation and show improvements compared to basic combination methods. | [
"Natural language processing"
] | https://openreview.net/pdf?id=rJT7bB4Kx | ByJvq0kcx | official_review | 1,488,083,542,614 | rJT7bB4Kx | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper75/AnonReviewer2"
] | title: Sensible idea, but comparison to related work insufficient
rating: 4: Ok but not good enough - rejection
review: For the task of translating a sentence describing an image from one language to another, with the image as additional input, the paper uses multimodal attention. For the multimodal attention, the paper explores the use of Multimodal Compact Bilinear Pooling (MCB) [Fukui 2016].
Strength:
- Using MCB for this task seems to make sense, has not been previously explored to my knowledge, and slightly improves performance.
- The paper evaluates the task and ablations on the Multi30k dataset.
Main Weaknesses:
Discussion and comparison to related work:
1. There has been a large number of works looking at the multimodal translation problem, e.g. [Elliott 2015], [Iacer 2016], but the paper reads as if it is the first work looking at this problem. Specifically, the model from [Iacer 2016] is very similar to this work, apart from MCB.
2. Please cite prior work more precisely: the work misses the citation for the tensor sketch algorithm from [Pham and Pagh 2013]; specifically also in Figure 1, where the visualization and algorithm seem to be based on [Fukui 2016].
3. [Iacer 2016] also reports the results of using Moses, a statistical machine translation pipeline, which does not use the image and achieves 52 METEOR, higher than any result reported in this work.
4. See https://staff.fnwi.uva.nl/s.c.frank/mmt_wmt_slides.pdf for many more results on the same dataset and task, with many approaches achieving > 50 METEOR.
Further Weaknesses:
1. Please cite the actual publications, not the arXiv versions, whenever available.
While the paper integrates MCB [Fukui] in multimodal translation, which has not been done before to my knowledge, the paper significantly lacks coverage and comparison to related work, making it not acceptable in this form. Most notably, the approach is very similar to [Iacer 2016], apart from using MCB, but the paper does not cite [Iacer 2016].
References:
[Pham and Pagh 2013] Ninh Pham and Rasmus Pagh. 2013. Fast and scalable polynomial kernels via explicit feature maps. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’13, pages 239–247, New York, NY, USA. ACM.
[Elliott 2015] Elliott, Desmond, Stella Frank, and Eva Hasler. "Multilingual Image Description with Neural Sequence Models." arXiv preprint arXiv:1510.04709 (2015).
[Iacer 2016] Calixto, Iacer, Desmond Elliott, and Stella Frank. "Dcu-uva multimodal mt system report." Proceedings of the First Conference on Machine Translation, Berlin, Germany. 2016.
confidence: 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature |
rJT7bB4Kx | Multimodal Compact Bilinear Pooling for Multimodal Neural Machine Translation | [
"Jean-Benoit Delbrouck",
"Stephane Dupont"
] | In state-of-the-art Neural Machine Translation, an attention mechanism is used during decoding to enhance the translation. At every step, the decoder uses this mechanism to focus on different parts of the source sentence to gather the most useful information before outputting its target word. Recently, the effectiveness of the attention mechanism has also been explored for multimodal tasks, where it becomes possible to focus both on sentence parts and image regions. Approaches to pool two modalities usually include element-wise product, sum or concatenation.
In this paper, we evaluate the more advanced Multimodal Compact Bilinear pooling method, which takes the outer product of two vectors to combine the attention features for the two modalities. This has been previously investigated for visual question answering. We try out this approach for multimodal image caption translation and show improvements compared to basic combination methods. | [
"Natural language processing"
] | https://openreview.net/pdf?id=rJT7bB4Kx | r1xVdF6je | comment | 1,490,028,583,877 | rJT7bB4Kx | [
"everyone"
] | [
"ICLR.cc/2017/pcs"
] | decision: Reject
title: ICLR committee final decision |
rJT7bB4Kx | Multimodal Compact Bilinear Pooling for Multimodal Neural Machine Translation | [
"Jean-Benoit Delbrouck",
"Stephane Dupont"
] | In state-of-the-art Neural Machine Translation, an attention mechanism is used during decoding to enhance the translation. At every step, the decoder uses this mechanism to focus on different parts of the source sentence to gather the most useful information before outputting its target word. Recently, the effectiveness of the attention mechanism has also been explored for multimodal tasks, where it becomes possible to focus both on sentence parts and image regions. Approaches to pool two modalities usually include element-wise product, sum or concatenation.
In this paper, we evaluate the more advanced Multimodal Compact Bilinear pooling method, which takes the outer product of two vectors to combine the attention features for the two modalities. This has been previously investigated for visual question answering. We try out this approach for multimodal image caption translation and show improvements compared to basic combination methods. | [
"Natural language processing"
] | https://openreview.net/pdf?id=rJT7bB4Kx | SJN0KBKox | comment | 1,489,750,476,056 | B17g-Uesg | [
"everyone"
] | [
"~Jean-Benoit_Delbrouck1"
] | title: No title
comment: Dear reviewer,
Thank you for your comment.
We agree that the statement you quote is incorrect, and it has been taken out of our draft. The "previous work" section has also been updated according to your comments.
As stated in our response to the first review (see below), our main focus wasn't to compare our work to any monomodal or multimodal baseline, but rather to show that more complex combination techniques help a system translate better. Yet, we agree that to compare our work to others, and therefore to give it a more significant impact, we should have used state-of-the-art models like [Iacer 2016].
Also, we'll try to enhance the explanation of the proposed attention models in this paper by reducing the basic model section, which could be easily shortened.
Best, |
rJT7bB4Kx | Multimodal Compact Bilinear Pooling for Multimodal Neural Machine Translation | [
"Jean-Benoit Delbrouck",
"Stephane Dupont"
] | In state-of-the-art Neural Machine Translation, an attention mechanism is used during decoding to enhance the translation. At every step, the decoder uses this mechanism to focus on different parts of the source sentence to gather the most useful information before outputting its target word. Recently, the effectiveness of the attention mechanism has also been explored for multimodal tasks, where it becomes possible to focus both on sentence parts and image regions. Approaches to pool two modalities usually include element-wise product, sum or concatenation.
In this paper, we evaluate the more advanced Multimodal Compact Bilinear pooling method, which takes the outer product of two vectors to combine the attention features for the two modalities. This has been previously investigated for visual question answering. We try out this approach for multimodal image caption translation and show improvements compared to basic combination methods. | [
"Natural language processing"
] | https://openreview.net/pdf?id=rJT7bB4Kx | SJu_HBtox | comment | 1,489,749,360,098 | ByJvq0kcx | [
"everyone"
] | [
"~Jean-Benoit_Delbrouck1"
] | title: No title
comment: Dear reviewer,
Thank you for your helpful comment.
The missing references you pointed out has been added to the paper.
I understand that a weakness is the low performance (BLEU scores) reported in our work. The main difference is that we don't use dropout, which seems to significantly improve the translations.
Originally, our main focus wasn't to propose a state-of-the-art model, but rather to show that combining multimodal attention vectors with more complex techniques actually improves the scores, whether or not they are state of the art. Yet, we agree that to compare to previous work, a similar model like [Iacer 2016] should have been used.
All your further comments, such as the lack of precision in citing previous work, have been taken into account. Our workshop draft has been updated to address these points.
rJT7bB4Kx | Multimodal Compact Bilinear Pooling for Multimodal Neural Machine Translation | [
"Jean-Benoit Delbrouck",
"Stephane Dupont"
] | In state-of-the-art Neural Machine Translation, an attention mechanism is used during decoding to enhance the translation. At every step, the decoder uses this mechanism to focus on different parts of the source sentence to gather the most useful information before outputting its target word. Recently, the effectiveness of the attention mechanism has also been explored for multimodal tasks, where it becomes possible to focus both on sentence parts and image regions. Approaches to pool two modalities usually include element-wise product, sum or concatenation.
In this paper, we evaluate the more advanced Multimodal Compact Bilinear pooling method, which takes the outer product of two vectors to combine the attention features for the two modalities. This has been previously investigated for visual question answering. We try out this approach for multimodal image caption translation and show improvements compared to basic combination methods. | [
"Natural language processing"
] | https://openreview.net/pdf?id=rJT7bB4Kx | rkNm6SEsl | comment | 1,489,423,643,650 | rJT7bB4Kx | [
"everyone"
] | [
"~Desmond_Elliott1"
] | title: Experimental protocol and a suggestion
comment: I like this approach to training a multimodal translation model but the results are difficult to interpret, given the details in the paper.
I encourage you to follow the Shared Task evaluation procedure for measuring the BLEU scores on the test data. This procedure is described on the Shared Task web page (http://www.statmt.org/wmt17/multimodal-task.html) with hyperlinks to the processing scripts. If you follow this procedure, it will make it easier to compare your results against other papers.
I also have a suggestion: you may want to use a decompounder on the German vocabulary. 19,000 types is quite high for the German dataset, and this could be reduced to ~ 15,000 by following the exact preprocessing steps described in Caglayan et al. (WMT 2016). A reduced German vocabulary should give you better BLEU scores because the model will be easier to train. You could also think about using the Moses compounder (Koehn et al. (2007)), Byte Pair Encodings (Sennrich et al. (2016)), or a pretrained neural decompounder (Daiber et al. (2016)).
Daiber et al. (2015) http://jodaiber.github.io/doc/compound_analogy.pdf
Koehn et al .(2007) http://www.aclweb.org/anthology/P07-2045
Caglayan et al. (2016) https://arxiv.org/abs/1605.09186
Sennrich et al. (2016) https://arxiv.org/abs/1508.07909
|
rJT7bB4Kx | Multimodal Compact Bilinear Pooling for Multimodal Neural Machine Translation | [
"Jean-Benoit Delbrouck",
"Stephane Dupont"
] | In state-of-the-art Neural Machine Translation, an attention mechanism is used during decoding to enhance the translation. At every step, the decoder uses this mechanism to focus on different parts of the source sentence to gather the most useful information before outputting its target word. Recently, the effectiveness of the attention mechanism has also been explored for multimodal tasks, where it becomes possible to focus both on sentence parts and image regions. Approaches to pool two modalities usually include element-wise product, sum or concatenation.
In this paper, we evaluate the more advanced Multimodal Compact Bilinear pooling method, which takes the outer product of two vectors to combine the attention features for the two modalities. This has been previously investigated for visual question answering. We try out this approach for multimodal image caption translation and show improvements compared to basic combination methods. | [
"Natural language processing"
] | https://openreview.net/pdf?id=rJT7bB4Kx | B17g-Uesg | official_review | 1,489,162,474,784 | rJT7bB4Kx | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper75/AnonReviewer1"
] | title: results are not consistent with prior work
rating: 5: Marginally below acceptance threshold
review: The paper investigates the problem of combining variable-length information from two different modalities. The specific method considered in the paper is compact bilinear pooling, which is compared against simpler methods in the context of multimodal machine translation. Two versions of the algorithm are considered, which differ in whether the information extracted from text influences the computation of attention weights for the elements of the representation of the image.
As mentioned in another review, a major issue of the paper is that the prior work by [Calixto 2016] is not mentioned. Besides, the statement "To our knowledge, there is currently no multimodal translation architecture that convincingly surpass [sic] a monomodal attention baseline" contradicts the results reported in [Calixto 2016]. They do report an improvement over the text-only NMT. This makes it hard to trust the results of this paper.
The writing of the paper could be improved. A lot of space is used to explain the basic model, but the proposed methods are explained extremely briefly. A few sentences explaining the compact bilinear pooling could help. Algorithm 1 is not very helpful without any explanation. Most importantly, the explanation of the pre-attention mechanism, which is perhaps the main novelty, is very vague.
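For readers unfamiliar with the operation, a minimal NumPy sketch of compact bilinear pooling (a count sketch of each modality, multiplied in the Fourier domain, following [Pham and Pagh 2013] and [Fukui 2016]) might look like this; the dimensions are illustrative, and in practice the hash and sign parameters are sampled once and then kept fixed:

```python
import numpy as np

rng = np.random.default_rng(0)

def count_sketch_params(n, d):
    h = rng.integers(0, d, size=n)          # random output bucket per input dimension
    s = rng.choice([-1.0, 1.0], size=n)     # random sign per input dimension
    return h, s

def count_sketch(x, h, s, d):
    out = np.zeros(d)
    np.add.at(out, h, s * x)                # scatter-add the signed inputs
    return out

def mcb(x, y, d=1024):
    # The count sketch of the outer product x y^T equals the circular convolution
    # of the individual sketches, computed here via FFT.
    hx, sx = count_sketch_params(len(x), d)
    hy, sy = count_sketch_params(len(y), d)
    fx = np.fft.rfft(count_sketch(x, hx, sx, d))
    fy = np.fft.rfft(count_sketch(y, hy, sy, d))
    return np.fft.irfft(fx * fy, n=d)

# e.g. pooling a 2048-d visual attention vector with a 512-d textual one
z = mcb(rng.standard_normal(2048), rng.standard_normal(512))
print(z.shape)   # (1024,)
```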
Typos and minor writing issues:
- bottom of page 2: c_t^t is rather confusing notation
- bottom of page 2: "in a multiplicative but" - a word is missing
- beginning of Section 3: I believe it should be \alpha instead of \epsilon, and it makes sense to say "learning rate \alpha" to prevent confusion
Pros: the idea of pre-attention seems novel
Cons: results are not consistent with the prior work (which has not been mentioned), writing is not clear
[Calixto 2016] Calixto, Iacer, Desmond Elliott, and Stella Frank. "Dcu-uva multimodal mt system report." Proceedings of the First Conference on Machine Translation, Berlin, Germany. 2016.
confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
Syw2ZgrFx | Reinterpreting Importance-Weighted Autoencoders | [
"Chris Cremer",
"Quaid Morris",
"David Duvenaud"
] | The standard interpretation of importance-weighted autoencoders is that they maximize a tighter lower bound on the marginal likelihood. We give an alternate interpretation of this procedure: that it optimizes the standard variational lower bound, but using a more complex distribution. We formally derive this result, and visualize the implicit importance-weighted approximate posterior. | [
"Unsupervised Learning"
] | https://openreview.net/pdf?id=Syw2ZgrFx | r1Smkgfoe | official_review | 1,489,268,509,028 | Syw2ZgrFx | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper160/AnonReviewer2"
] | title: Simple and clear result, useful insights
rating: 7: Good paper, accept
review: The authors describe a simple and clear reinterpretation of importance
weighted autoencoders (Burda et al., 2016). I recommend this paper for
acceptance. It connects to much recent work on expressive variational
approximations, especially in those leveraging truncated Markov chains
as variational approximations. Further, it brings interesting ideas to
the table following this simple derivation.
A cool result is that this interpretation relaxes the idea of IWAEs to
be more broadly applicable to any divergence measure. Perhaps a key
experiment would not be so much comparing IWAEs with itself, but in
what this perspective allows, such as IWAE-based variational families
with alpha-divergences or operator variational objectives. Or alternatively,
combining the SIR approach with other rich posterior approximations.
With this perspective in mind, it's not necessarily clear if one
should even use IWAEs over other expressive variational
approximations. From my understanding of the field, most benchmarks
display IWAEs performing worse (in terms of held-out log-likelihood)
than others such as the variational Gaussian process (Tran et al.,
2016) and inverse autoregressive flows (Kingma et al., 2016).
This isn't a fault of this paper—it's great that the casting brings
these questions to bear—but I think it's something the paper should
certainly address if it aims to be more substantial in an extended
paper in the future.
The notation is not described in the paper; while experts in the field
can understand this, the work would benefit from properly laying out
definitions and properties.
References
+ Dinh, L., Sohl-Dickstein, J., & Bengio, S. (2017). Density estimation using Real NVP. Presented at the International Conference on Learning Representations.
+ Kingma, D. P., Salimans, T., & Welling, M. (2016). Improving Variational Inference with Inverse Autoregressive Flow. Presented at the Neural Information Processing Systems.
+ Li, Y., & Turner, R. E. (2016). Rényi Divergence Variational Inference. Presented at the Neural Information Processing Systems.
+ Ranganath, R., Altosaar, J., Tran, D., & Blei, D. M. (2016). Operator Variational Inference. Presented at the Neural Information Processing Systems.
+ Tran, D., Ranganath, R., & Blei, D. M. (2016). The Variational Gaussian Process. Presented at the International Conference on Learning Representations.
confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
Syw2ZgrFx | Reinterpreting Importance-Weighted Autoencoders | [
"Chris Cremer",
"Quaid Morris",
"David Duvenaud"
] | The standard interpretation of importance-weighted autoencoders is that they maximize a tighter lower bound on the marginal likelihood. We give an alternate interpretation of this procedure: that it optimizes the standard variational lower bound, but using a more complex distribution. We formally derive this result, and visualize the implicit importance-weighted approximate posterior. | [
"Unsupervised Learning"
] | https://openreview.net/pdf?id=Syw2ZgrFx | SytDdtpjl | comment | 1,490,028,640,789 | Syw2ZgrFx | [
"everyone"
] | [
"ICLR.cc/2017/pcs"
] | decision: Accept
title: ICLR committee final decision |
Syw2ZgrFx | Reinterpreting Importance-Weighted Autoencoders | [
"Chris Cremer",
"Quaid Morris",
"David Duvenaud"
] | The standard interpretation of importance-weighted autoencoders is that they maximize a tighter lower bound on the marginal likelihood. We give an alternate interpretation of this procedure: that it optimizes the standard variational lower bound, but using a more complex distribution. We formally derive this result, and visualize the implicit importance-weighted approximate posterior. | [
"Unsupervised Learning"
] | https://openreview.net/pdf?id=Syw2ZgrFx | rJ77QL8ix | official_review | 1,489,556,250,633 | Syw2ZgrFx | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper160/AnonReviewer1"
] | title: IWAE bound derived as VAE bound with particular implicit distribution q_IW
rating: 8: Top 50% of accepted papers, clear accept
review: This paper introduces a new perspective on IWAE.
It is shown that the IWAE bound can be interpreted as a VAE bound with a particular implicit inference model q_IW. This implicit posterior distribution is a function of both the variational parameters, and the generative model parameters.
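For concreteness (using the notation of Burda et al. (2016) rather than anything defined in the abstract itself), the bound in question is
$\mathcal{L}_K = \mathbb{E}_{z_1,\dots,z_K \sim q(z|x)}\left[\log \frac{1}{K}\sum_{k=1}^{K}\frac{p(x,z_k)}{q(z_k|x)}\right]$,
and the reinterpretation reads it as the standard single-sample bound $\mathbb{E}_{q_{IW}}[\log p(x,z) - \log q_{IW}(z|x)]$, evaluated with an implicit $q_{IW}$ obtained by drawing $K$ candidates from $q$ and resampling one in proportion to its importance weight (sampling-importance-resampling), up to an expectation over the auxiliary samples.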
The derivation seems novel, and adds an interesting new link to IWAE and VAE objectives.
A potential drawback of the IWAE posterior, in comparison to alternative methods for building complex posteriors, is that it is relatively expensive; you may require a large number of samples to converge to the true distribution, probably especially so in high dimensional space. However, that's beside the point of this paper, and I think it still proposes a valid and interesting perspective on IWAE.
confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
BkyScySKl | Joint Embeddings of Scene Graphs and Images | [
"Eugene Belilovsky",
"Matthew Blaschko",
"Jamie Ryan Kiros",
"Raquel Urtasun",
"Richard Zemel"
] | Multimodal representations of text and images have become popular in recent years. Text however has inherent ambiguities when describing visual scenes, leading to the recent development of datasets with detailed graphical descriptions in the form of scene graphs. We consider the task of joint representation of semantically precise scene graphs and images. We propose models for representing scene graphs and aligning them with images. We investigate methods based on bag-of-words, subpath representations, as well as neural networks. Our investigation proposes and contrasts several models which can address this task and highlights some unique challenges in both designing models and evaluation. | [] | https://openreview.net/pdf?id=BkyScySKl | S1y76deie | official_review | 1,489,173,782,988 | BkyScySKl | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper143/AnonReviewer1"
] | title: Review
rating: 7: Good paper, accept
review: This paper investigates a set of simple models for generating scene and image graph embeddings. Scene graphs are represented either with count features on their constituent nodes, count features on their constituent nodes and short paths, or convolutionally. A margin objective is then used to learn projections from the space of image features and graph representations into a joint embedding space. This paper finds that on a dataset of scene graphs, the representation based on path counts outperforms the other two approaches both in identifying similar images to the one for which the graph was annotated, and in retrieving the target image.
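For completeness, the margin objective referred to above is, as far as I can tell, the standard bidirectional ranking loss over a batch of matching (image, graph) pairs; a generic numpy sketch with hypothetical names and shapes, not the authors' code:

```python
import numpy as np

def joint_embedding_ranking_loss(img_emb, graph_emb, margin=0.1):
    # img_emb, graph_emb: (batch, dim) projections into the shared space,
    # assumed L2-normalised; row i of each matrix is a matching pair.
    scores = img_emb @ graph_emb.T          # pairwise cosine similarities
    pos = np.diag(scores)                   # scores of the matching pairs
    cost_graphs = np.maximum(0.0, margin + scores - pos[:, None])  # wrong graphs per image
    cost_images = np.maximum(0.0, margin + scores - pos[None, :])  # wrong images per graph
    np.fill_diagonal(cost_graphs, 0.0)
    np.fill_diagonal(cost_images, 0.0)
    return cost_graphs.sum() + cost_images.sum()
```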
This is a clean, focused, and well presented contribution. It's an interesting result that the approach based on path counts outperforms the convolutional / GraphNN approach---it seems like count-based models have generally been on the way out (at least in machine translation and language modeling). Presumably it's the relatively small size of the training data that makes them still useful here. It might be useful to mention how you think this approach might scale to larger datasets. How big is the object vocabulary? How many subpaths occur in the test set but not the training set? Are you doing anything else (e.g. smoothing, backoff) to deal with count sparsity?
confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
BkyScySKl | Joint Embeddings of Scene Graphs and Images | [
"Eugene Belilovsky",
"Matthew Blaschko",
"Jamie Ryan Kiros",
"Raquel Urtasun",
"Richard Zemel"
] | Multimodal representations of text and images have become popular in recent years. Text however has inherent ambiguities when describing visual scenes, leading to the recent development of datasets with detailed graphical descriptions in the form of scene graphs. We consider the task of joint representation of semantically precise scene graphs and images. We propose models for representing scene graphs and aligning them with images. We investigate methods based on bag-of-words, subpath representations, as well as neural networks. Our investigation proposes and contrasts several models which can address this task and highlights some unique challenges in both designing models and evaluation. | [] | https://openreview.net/pdf?id=BkyScySKl | BynU_tTog | comment | 1,490,028,627,677 | BkyScySKl | [
"everyone"
] | [
"ICLR.cc/2017/pcs"
] | decision: Accept
title: ICLR committee final decision |
BkyScySKl | Joint Embeddings of Scene Graphs and Images | [
"Eugene Belilovsky",
"Matthew Blaschko",
"Jamie Ryan Kiros",
"Raquel Urtasun",
"Richard Zemel"
] | Multimodal representations of text and images have become popular in recent years. Text however has inherent ambiguities when describing visual scenes, leading to the recent development of datasets with detailed graphical descriptions in the form of scene graphs. We consider the task of joint representation of semantically precise scene graphs and images. We propose models for representing scene graphs and aligning them with images. We investigate methods based on bag-of-words, subpath representations, as well as neural networks. Our investigation proposes and contrasts several models which can address this task and highlights some unique challenges in both designing models and evaluation. | [] | https://openreview.net/pdf?id=BkyScySKl | S1VYt7lcl | official_review | 1,488,103,804,323 | BkyScySKl | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper143/AnonReviewer2"
] | title: Strong baselines for scene graph prediction
rating: 8: Top 50% of accepted papers, clear accept
review: The submission studies scene graph prediction via joint embeddings of images and graphs. It evaluates two different embeddings (one of which is a simple baseline). The graph embeddings are evaluated in ranking experiments. Interestingly, a representation that is essentially a "bag of small subgraphs" performs very competitively; it substantially outperforms graph network representations.
Scene graph prediction will likely become an increasingly important topic in computer vision, and this submission presents some strong baselines for the problem. Having such baselines is extremely important; so, even though the paper does not introduce new algorithms, I would recommend that this submission be accepted.
It would be interesting to see to what extent these results generalize to larger datasets that have a long tail of relationship and node types, such as VisualGenome; I encourage the authors to perform such experiments for a future full version of this paper.
The submission should probably cite this related work: https://arxiv.org/pdf/1701.02426.pdf
confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
BkDDM04Ke | Conditional Image Synthesis With Auxiliary Classifier GANs | [
"Augustus Odena",
"Christopher Olah & Jonathon Shlens"
] | Synthesizing high resolution photorealistic images has been a long-standing challenge
in machine learning. In this paper we introduce new methods for the improved
training of generative adversarial networks (GANs) for image synthesis.
We construct a variant of GANs employing label conditioning that results in
128 × 128 resolution image samples exhibiting global coherence. We expand
on previous work for image quality assessment to provide two new analyses for
assessing the discriminability and diversity of samples from class-conditional image
synthesis models. These analyses demonstrate that high resolution samples
provide class information not present in low resolution samples. Across 1000
ImageNet classes, 128 × 128 samples are more than twice as discriminable as artificially
resized 32 × 32 samples. In addition, 84.7% of the classes have samples
exhibiting diversity comparable to real ImageNet data. | [] | https://openreview.net/pdf?id=BkDDM04Ke | rkafBhgsg | official_review | 1,489,188,117,339 | BkDDM04Ke | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper116/AnonReviewer1"
] | title: Not much different
rating: 5: Marginally below acceptance threshold
review: This work proposes to add a class label to both the generator and discriminator of the GAN network. This is intuitive, but is NOT novel. Conditioning the posterior distribution on the class label is an old idea. I also agree with the other reviewer that filling the appendix with a lot of new and relevant content is poor form.
The presentation is a bit sloppy. The curves in Figure 4 are missing a legend.
confidence: 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature |
BkDDM04Ke | Conditional Image Synthesis With Auxiliary Classifier GANs | [
"Augustus Odena",
"Christopher Olah & Jonathon Shlens"
] | Synthesizing high resolution photorealistic images has been a long-standing challenge
in machine learning. In this paper we introduce new methods for the improved
training of generative adversarial networks (GANs) for image synthesis.
We construct a variant of GANs employing label conditioning that results in
128 × 128 resolution image samples exhibiting global coherence. We expand
on previous work for image quality assessment to provide two new analyses for
assessing the discriminability and diversity of samples from class-conditional image
synthesis models. These analyses demonstrate that high resolution samples
provide class information not present in low resolution samples. Across 1000
ImageNet classes, 128 × 128 samples are more than twice as discriminable as artificially
resized 32 × 32 samples. In addition, 84.7% of the classes have samples
exhibiting diversity comparable to real ImageNet data. | [] | https://openreview.net/pdf?id=BkDDM04Ke | BykxQ0C9g | official_review | 1,489,064,678,637 | BkDDM04Ke | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper116/AnonReviewer2"
] | title: Uninsightful and in need of much more work
rating: 2: Strong rejection
review: Let me preface my review by saying that I didn’t read the appendix because I think it is bad form to add a paper’s worth of additional material to what is supposed to be an extended abstract, and the main text unfortunately did not inspire me to read further either.
The authors propose to combine two ideas for improving generative modeling with GANs: conditioning the generator on class labels and training the discriminator to reconstruct the labels.
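To spell out the combination being discussed (my own generic summary, not the authors' notation): the discriminator carries both a real/fake head and a class head, with
$L_S = \mathbb{E}[\log P(S=\mathrm{real}\mid X_{\mathrm{real}})] + \mathbb{E}[\log P(S=\mathrm{fake}\mid X_{\mathrm{fake}})]$ and $L_C = \mathbb{E}[\log P(C=c\mid X_{\mathrm{real}})] + \mathbb{E}[\log P(C=c\mid X_{\mathrm{fake}})]$,
the discriminator being trained to maximise $L_S + L_C$ and the class-conditioned generator to maximise $L_C - L_S$.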
Given that both ideas are simple and have been used in isolation, the project has little to offer conceptually. This could still be an interesting abstract if it carefully evaluated the effect of combining both ideas. Unfortunately, this does not seem to be the case.
Any evaluation based on samples is necessarily very limited, as a model which simply stores the training data will score as well as the true distribution of natural images. A more useful comparison would have been to compare samples of two generators with the same architecture and trained on the same data, one trained with the proposed changes and one without.
The value of the analysis in Figure 2 is not at all clear to me. Showing the effect of throwing away high-spatial frequency information tells me that the classifier is using that information, and that the generator is not merely interpolating low-resolution images. But it tells me very little about the effectiveness of the proposed changes to GAN training.
The paper also seems sloppily written. E.g., in the introduction the authors claim that Balle et al. (2015) describe an advance in the state of the art in image denoising. Looking at the paper I couldn’t find such a claim or a comparison to the state of the art (non-parametric methods such as BM3D and discriminative methods such as feed-forwardly trained neural nets). The authors cite Toderici et al. (2016) as an example of image models advancing compression, but to my knowledge this paper uses binarized hidden states of a recurrent neural network and no image model.
confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
BkDDM04Ke | Conditional Image Synthesis With Auxiliary Classifier GANs | [
"Augustus Odena",
"Christopher Olah & Jonathon Shlens"
] | Synthesizing high resolution photorealistic images has been a long-standing challenge
in machine learning. In this paper we introduce new methods for the improved
training of generative adversarial networks (GANs) for image synthesis.
We construct a variant of GANs employing label conditioning that results in
128 × 128 resolution image samples exhibiting global coherence. We expand
on previous work for image quality assessment to provide two new analyses for
assessing the discriminability and diversity of samples from class-conditional image
synthesis models. These analyses demonstrate that high resolution samples
provide class information not present in low resolution samples. Across 1000
ImageNet classes, 128 × 128 samples are more than twice as discriminable as artificially
resized 32 × 32 samples. In addition, 84.7% of the classes have samples
exhibiting diversity comparable to real ImageNet data. | [] | https://openreview.net/pdf?id=BkDDM04Ke | r1tHOKpje | comment | 1,490,028,609,241 | BkDDM04Ke | [
"everyone"
] | [
"ICLR.cc/2017/pcs"
] | decision: Reject
title: ICLR committee final decision |
BJdmMd4Yg | Who Said What: Modeling individual labelers improves classification | [
"Melody Y. Guan",
"Varun Gulshan",
"Andrew M. Dai",
"Geoffrey E. Hinton"
] | Data are often labeled by many different experts, with each expert labeling a small fraction of the data and each sample receiving multiple labels. When experts disagree, the standard approaches are to treat the majority opinion as the truth or to model the truth as a distribution, but these do not make any use of potentially valuable information about which expert produced which label. We propose modeling the experts individually and then learning averaging weights for combining them, possibly in sample-specific ways. This allows us to give more weight to more reliable experts and take advantage of the unique strengths of individual experts at classifying certain types of data. We show that our approach performs better than three competing methods in computer-aided diagnosis of diabetic retinopathy. | [
"Computer vision",
"Deep learning",
"Supervised Learning"
] | https://openreview.net/pdf?id=BJdmMd4Yg | B1sFvSbie | comment | 1,489,225,603,147 | rkhWNJ-jg | [
"everyone"
] | [
"~Melody_Yun_Jia_Guan1"
] | title: Response to AnonReviewer1
comment: Thanks so much to the reviewer for their time and comments! Responses to the three details you mentioned are below:
1.
"how could this be detected, since they may have provided a disproportionate number of labels in the test set as well?"
They do not provide a disproportionate number of labels in the test set because the doctors used in training/validation are disjoint from the doctors used in the test set.
- "3 retina specialists graded all images in the test dataset, and any disagreements were discussed until a consensus label was obtained" (Appendix C)
- "we remove grades of doctors who graded test set images from training and validation sets to reduce the chance that the model is overfitting on certain experts." (Appendix E)
For additional clarity we updated the paper to include this second point from Appendix in page 2 paragraph 1 as well (see revision).
"Is there any rebalancing between doctors (as opposed to classes)?"
We do not rebalance between doctors. In a sense this is an implementation choice (i.e. it is reasonable to try rebalancing the doctors) but we also felt that it was better to allow doctors who labelled more examples to have more say. This is because we can create better models for doctors with more data, which means that a) all else being equal, their models will have better predictions, and b) their models' reliabilities can be more confidently estimated so if a doctor is bad this will be reflected in its weight. Also note that the baseline of using the average labeler opinion favors more frequent labelers as well so this is not a phenomenon limited to our approach.
2.
Please note that the test distribution is not assumed to be known (it would indeed be a questionable assumption)! Rather, "Our assumed test class distribution for computing the log prior correction was the mean distribution of all known images (those of the training and validation sets)" (page 8, paragraph 2). Also yes, all baselines in comparisons use this adjustment.
3.
We have updated page 3 paragraph 1 to include the formula for additional clarity (see revision).
Thanks again! We hope that this resolves all your concerns! |
BJdmMd4Yg | Who Said What: Modeling individual labelers improves classification | [
"Melody Y. Guan",
"Varun Gulshan",
"Andrew M. Dai",
"Geoffrey E. Hinton"
] | Data are often labeled by many different experts, with each expert labeling a small fraction of the data and each sample receiving multiple labels. When experts disagree, the standard approaches are to treat the majority opinion as the truth or to model the truth as a distribution, but these do not make any use of potentially valuable information about which expert produced which label. We propose modeling the experts individually and then learning averaging weights for combining them, possibly in sample-specific ways. This allows us to give more weight to more reliable experts and take advantage of the unique strengths of individual experts at classifying certain types of data. We show that our approach performs better than three competing methods in computer-aided diagnosis of diabetic retinopathy. | [
"Computer vision",
"Deep learning",
"Supervised Learning"
] | https://openreview.net/pdf?id=BJdmMd4Yg | ByBNOYase | comment | 1,490,028,588,700 | BJdmMd4Yg | [
"everyone"
] | [
"ICLR.cc/2017/pcs"
] | decision: Reject
title: ICLR committee final decision |
BJdmMd4Yg | Who Said What: Modeling individual labelers improves classification | [
"Melody Y. Guan",
"Varun Gulshan",
"Andrew M. Dai",
"Geoffrey E. Hinton"
] | Data are often labeled by many different experts, with each expert labeling a small fraction of the data and each sample receiving multiple labels. When experts disagree, the standard approaches are to treat the majority opinion as the truth or to model the truth as a distribution, but these do not make any use of potentially valuable information about which expert produced which label. We propose modeling the experts individually and then learning averaging weights for combining them, possibly in sample-specific ways. This allows us to give more weight to more reliable experts and take advantage of the unique strengths of individual experts at classifying certain types of data. We show that our approach performs better than three competing methods in computer-aided diagnosis of diabetic retinopathy. | [
"Computer vision",
"Deep learning",
"Supervised Learning"
] | https://openreview.net/pdf?id=BJdmMd4Yg | S17AN1Vil | official_review | 1,489,396,939,463 | BJdmMd4Yg | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper83/AnonReviewer2"
] | title: Empirical results on an unique dataset using mixing of experts
rating: 5: Marginally below acceptance threshold
review: Despite the interesting results, this is a very empirical paper on mixing of experts. Mixing of experts is a very old issue. The authors should cite some references on mixing of experts that would help the reader understand the problem at hand and assess the contributions.
Honestly, I do not understand all the nets proposed (Figure 4 doesn't help me). In general, section 3 should be improved.
Around the idea of mixing of experts, I remember a paper that was published in Nature (I think) where the authors proposed a very interesting idea for mixing experts beyond the typical weight associated with an expert's reliability. The authors propose to ask the experts an additional question about what they think the other experts are going to answer, and to use this additional question to detect when an expert is a good expert. For instance, when one expert is sure about his decision but at the same time knows that the problem is hard, he thinks that the other experts (or some group of them) are going to fail, and so he will claim that his answer is A but that the other experts' answer is going to be B.
In my opinion, these are the things that would be interesting to explore, beyond weighting opinions. Perhaps a neural network could help solve this problem without that additional question. Perhaps this paper is in that direction, but sorry, I couldn't understand it.
confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
BJdmMd4Yg | Who Said What: Modeling individual labelers improves classification | [
"Melody Y. Guan",
"Varun Gulshan",
"Andrew M. Dai",
"Geoffrey E. Hinton"
] | Data are often labeled by many different experts, with each expert labeling a small fraction of the data and each sample receiving multiple labels. When experts disagree, the standard approaches are to treat the majority opinion as the truth or to model the truth as a distribution, but these do not make any use of potentially valuable information about which expert produced which label. We propose modeling the experts individually and then learning averaging weights for combining them, possibly in sample-specific ways. This allows us to give more weight to more reliable experts and take advantage of the unique strengths of individual experts at classifying certain types of data. We show that our approach performs better than three competing methods in computer-aided diagnosis of diabetic retinopathy. | [
"Computer vision",
"Deep learning",
"Supervised Learning"
] | https://openreview.net/pdf?id=BJdmMd4Yg | HkotYbjil | comment | 1,489,865,090,557 | S17AN1Vil | [
"everyone"
] | [
"~Melody_Yun_Jia_Guan1"
] | title: Response to AnonReviewer2
comment: Thank you very much for your time and review! We have tried our best to incorporate your feedback and clarify things for you.
We do think it would be a good idea to add references; we have revised our paper to discuss prior literature in crowdsourcing which deals with the same problem space tackled by our paper. Refer to the "Estimating doctor reliability with EM" paragraph of section 3.
We would also like to clarify that our work is distinct from the usual "mixture of experts" papers (MoE concept introduced by [1,2]). **Usual "mixture of experts" models are not about modeling individual experts, but rather training latent experts on the same data.** (In more detail: These latent experts are trained using a training set where each data point has a single label and there is also no information on the origin of the label. Our paper concerns datasets labelled by multiple *observed* experts where each data point has multiple overlapping labels from a subset of the experts. In this context we are combining experts in a way not explored before, learning from the identity of individual experts by modeling them (with each modeled expert trained on a restricted subset of the data), learning their specialties, and learning how to combine them.)
The Nature paper referenced is probably "A solution to the single-question crowd wisdom problem" [3]. We find this paper extremely interesting as well, but like the reviewer pointed out, it involves asking extra questions (what each expert thinks the popular opinion would be) and that is not feasible for existing large datasets which have already been labeled by several experts (often with huge expenses). Our goal was to develop a method that could be applied to existing labeled datasets, as is the case with the vast majority of real world datasets.
To help readers better understand the nets, we rewrote section 3. We also moved the paragraph on binary loss in section 4 to the appendix (Appendix J) in order to provide more space for section 3. But due to the 3-page limit for workshop papers, there was only so much more we could add, so in Appendix D we also added 3 additional paragraphs of detailed explanation of the net with references to parts of Figure 4. We also provided the loss inputs in tabular form (previously this information was only provided in text form). We hope that these changes are helpful, and if the reviewer has any specific points of confusion we would be very happy to address those in further comments!
Hopefully our response helps the reviewer better understand the context and content of the paper. We believe our approach to be novel, simple and useful, and thank the reviewer for their helpful comments.
[1] R. A. Jacobs, M. I. Jordan, S. J. Nowlan, and G. E. Hinton. 1991. Adaptive mixtures of local experts. Neural Computation. 3, 1 (February 1991), 79-87
[2] M. I. Jordan and R. A. Jacobs. 1994. Hierarchical mixtures of experts and the EM algorithm. Neural Computation. 6, 2 (March 1994), 181-214
[3] D. Prelec, H. S. Seung, and J. McCoy. 2017. A solution to the single-question crowd wisdom problem. Nature. 541 (January 2017), 532–535 |
BJdmMd4Yg | Who Said What: Modeling individual labelers improves classification | [
"Melody Y. Guan",
"Varun Gulshan",
"Andrew M. Dai",
"Geoffrey E. Hinton"
] | Data are often labeled by many different experts, with each expert labeling a small fraction of the data and each sample receiving multiple labels. When experts disagree, the standard approaches are to treat the majority opinion as the truth or to model the truth as a distribution, but these do not make any use of potentially valuable information about which expert produced which label. We propose modeling the experts individually and then learning averaging weights for combining them, possibly in sample-specific ways. This allows us to give more weight to more reliable experts and take advantage of the unique strengths of individual experts at classifying certain types of data. We show that our approach performs better than three competing methods in computer-aided diagnosis of diabetic retinopathy. | [
"Computer vision",
"Deep learning",
"Supervised Learning"
] | https://openreview.net/pdf?id=BJdmMd4Yg | rkhWNJ-jg | official_review | 1,489,200,131,965 | BJdmMd4Yg | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper83/AnonReviewer1"
] | rating: 6: Marginally above acceptance threshold
review: This work aims to improve classification accuracy in cases where there is high disagreement among labelers, some of which may be due to systemic differences between labelers. The general approach is simple and interesting, making separate predictions for each labeler individually, and averaging at test time. Weights to make this a weighted average are also learned, and two additional conditionings for the model are explored. A single dataset, to classify diabetic retinopathy, is explored.
Overall I feel this is an interesting approach, though a few details could be better explained and justified, in my opinion:
- If a single doctor does more labeling than any other doctor, the majority vote may tend to favor this labeler (they have more chances to be in the majority). Would the learned weights then mostly just favor the most frequent doctor, and how could this be detected, since they may have provided a disproportionate number of labels in the test set as well? Is there any rebalancing between doctors (as opposed to classes)?
- The appendix mentions a step where the biases are adjusted to account for class frequencies in the test set. IMO this is a slightly questionable step, assuming that the test distribution is known, but this indeed may be the case in many situations. Also I'm supposing that all baselines in comparisons also used this adjustment -- is this the case?
- I feel the summary of the loss theta_ll' could be a bit clearer: What is the final loss exactly?
confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
ByWhxeHtx | Bottom Up or Top Down? Dynamics of Deep Representations via Canonical Correlation Analysis | [
"Maithra Raghu",
"Jason Yosinski",
"Jascha Sohl-Dickstein"
] | We present a versatile quantitative framework for comparing representations in deep neural networks, based on Canonical Correlation Analysis, and use it to analyze the dynamics of representation learning during the training process of a deep network. We find that layers converge to their final representation from the bottom-up, but that the representations themselves migrate downwards in the net-work over the course of learning. | [
"Theory",
"Deep learning"
] | https://openreview.net/pdf?id=ByWhxeHtx | HJRpSdese | official_review | 1,489,171,909,924 | ByWhxeHtx | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper155/AnonReviewer2"
] | title: Interesting approach to study dynamics of DNNs with a rather incomplete presentation
rating: 5: Marginally below acceptance threshold
review: This work studies similarities between data representations of DNNs during training using canonical correlation analysis (CCA). The authors present two conclusions based on this analysis framework. First, during training, the lower layers converge to the final distribution (up to affine transformation) faster than the upper layers. Second, the authors observe that the final layer correlates more with the lower layers during the early stages of training than during the final stages.
The observed properties are rather interesting, and the second observation in particular would be quite surprising. It is known that neural networks need to be over-parametrised for DNN training to succeed, but little is known about the reasons why. The proposed explanation of low-level image representations crawling down from the upper layers is intriguing; however, it is not clear whether the observed effect is anything more than an artifact of the non-linear operation of the logit layer (as it seems from Figure 1).
From the technical perspective, the paper is really brief and unfortunately is missing some important details (which final layer is used in the Figure 3 experiment, how convolutional features are handled, the reason for the non-symmetry of the tensors in Figure 1). The structure of the manuscript is also rather unusual, as it does not contain a final discussion/conclusions section.
In general, it is a quite interesting idea, however feels a bit unfinished. Furthermore, considering the goals of the ICLR Workshop, it does not seem to fall to any of the "late-breaking developments, very novel ideas and position papers" categories. If these requirements were relaxed and the work was a bit extended, I believe it would be an interesting workshop submission paper.
Pros:
- Neat and simple idea how to study properties of image representations during training
- Interesting perspective on the hidden units as vectors in function space which nicely fits to the CCA analysis
Cons:
- Seems to be unfinished, missing some important details
- Unfortunately, does not fit the requirements of the ICLR 2017 Workshops
confidence: 3: The reviewer is fairly confident that the evaluation is correct |
ByWhxeHtx | Bottom Up or Top Down? Dynamics of Deep Representations via Canonical Correlation Analysis | [
"Maithra Raghu",
"Jason Yosinski",
"Jascha Sohl-Dickstein"
] | We present a versatile quantitative framework for comparing representations in deep neural networks, based on Canonical Correlation Analysis, and use it to analyze the dynamics of representation learning during the training process of a deep network. We find that layers converge to their final representation from the bottom-up, but that the representations themselves migrate downwards in the net-work over the course of learning. | [
"Theory",
"Deep learning"
] | https://openreview.net/pdf?id=ByWhxeHtx | HyR_KUgjx | official_review | 1,489,164,662,448 | ByWhxeHtx | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper155/AnonReviewer3"
] | title: Interesting direction for future research, currently too preliminary for ICLR workshop focus areas
rating: 3: Clear rejection
review: Thanks to the authors for sharing this technique and the direction they're heading with their research.
This work applies canonical-correlation analysis (CCA) to compare the intermediate layers of a deep neural network with one another, in an FCN and a CNN setting. Through this visualization, the authors observe a bottom-up convergence pattern in two networks trained for classification, an FCN for MNIST and a CNN for CIFAR-10. This is interpreted as the network converging to low-level representations quickly, and building upwards toward higher-level representations more slowly during training.
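For readers unfamiliar with the tool, the per-layer-pair comparison boils down to something like the following generic computation (a scikit-learn sketch of CCA similarity between two activation matrices, not the authors' code):

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def cca_similarity(acts_a, acts_b, n_components=10):
    # acts_a: (n_examples, units_a), acts_b: (n_examples, units_b);
    # n_components must not exceed the smaller layer width.
    cca = CCA(n_components=n_components, max_iter=1000)
    cca.fit(acts_a, acts_b)
    a_c, b_c = cca.transform(acts_a, acts_b)
    corrs = [np.corrcoef(a_c[:, i], b_c[:, i])[0, 1] for i in range(n_components)]
    return float(np.mean(corrs))  # one similarity number per layer pair
```

Averaging the canonical correlations gives a single, affine-invariant similarity score per pair of layers, which is what makes the "up to affine transformation" comparison possible.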
The authors also make an observation about what they describe as the "1% rows / higher layers of the network" being similar to their final representations. This is interpreted as the network learning final representations most quickly which are then "squeezed from the top down" to fit into lower layers through training.
This point is unclear, as there is no label corresponding to 1% rows on the diagrams, but it likely refers to the stage at 3% in the training where the "out" layer has correlation between 0.7 and 0.9 with all layers for the MNIST example, and 0.1- 0.65 in the CIFAR-10 example.
Since the gradient signal is strongest at the top layer, the phenomenon may be simply a characteristic of gradient descent rather than a feature of representation learning by deep networks. Moreover, initialization and training algorithm will heavily influence this pattern in the visualization. These points are not explored in the current version of the paper, weakening the conjectures about representation learning by the network.
CCA as a method of studying correlation patterns among layers in a deep network is interesting, and I look forward to seeing more work from the authors in this area. For the purposes of the ICLR workshop track, which seeks to emphasize late-breaking developments, very novel ideas and position papers, I assess this as not appropriate.
confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
ByWhxeHtx | Bottom Up or Top Down? Dynamics of Deep Representations via Canonical Correlation Analysis | [
"Maithra Raghu",
"Jason Yosinski",
"Jascha Sohl-Dickstein"
] | We present a versatile quantitative framework for comparing representations in deep neural networks, based on Canonical Correlation Analysis, and use it to analyze the dynamics of representation learning during the training process of a deep network. We find that layers converge to their final representation from the bottom-up, but that the representations themselves migrate downwards in the net-work over the course of learning. | [
"Theory",
"Deep learning"
] | https://openreview.net/pdf?id=ByWhxeHtx | Skrvdtpog | comment | 1,490,028,636,780 | ByWhxeHtx | [
"everyone"
] | [
"ICLR.cc/2017/pcs"
] | decision: Reject
title: ICLR committee final decision |
rJnjwsYde | Variational Reference Priors | [
"Eric Nalisnick",
"Padhraic Smyth"
] | In modern probabilistic learning, we often wish to perform automatic inference for Bayesian models. However, informative priors are often costly to elicit, and in consequence, flat priors are chosen with the hopes that they are reasonably uninformative. Yet, objective priors such as the Jeffreys and Reference would often be preferred over flat priors if deriving them was generally tractable. We overcome this problem by proposing a black-box learning algorithm for Reference prior approximations. We derive a lower bound on the mutual information between data and parameters and describe how its optimization can be made derivation-free and scalable via differentiable Monte Carlo expectations. We experimentally demonstrate the method's effectiveness by recovering Jeffreys priors and learning the Variational Autoencoder's Reference prior. | [] | https://openreview.net/pdf?id=rJnjwsYde | r1ZMuFaog | comment | 1,490,028,552,615 | rJnjwsYde | [
"everyone"
] | [
"ICLR.cc/2017/pcs"
] | decision: Accept
title: ICLR committee final decision |
rJnjwsYde | Variational Reference Priors | [
"Eric Nalisnick",
"Padhraic Smyth"
] | In modern probabilistic learning, we often wish to perform automatic inference for Bayesian models. However, informative priors are often costly to elicit, and in consequence, flat priors are chosen with the hopes that they are reasonably uninformative. Yet, objective priors such as the Jeffreys and Reference would often be preferred over flat priors if deriving them was generally tractable. We overcome this problem by proposing a black-box learning algorithm for Reference prior approximations. We derive a lower bound on the mutual information between data and parameters and describe how its optimization can be made derivation-free and scalable via differentiable Monte Carlo expectations. We experimentally demonstrate the method's effectiveness by recovering Jeffreys priors and learning the Variational Autoencoder's Reference prior. | [] | https://openreview.net/pdf?id=rJnjwsYde | rk0lV27il | comment | 1,489,384,437,793 | rkEy9ugix | [
"everyone"
] | [
"~Eric_Nalisnick1"
] | title: Author Response
comment: Thanks for your thoughtful comments, Reviewer #1. We, in general, agree with your assessment. Below are a few responses and comments.
1. Indeed, the step from the 1d models to the VAE is large. We left out discussion of some intermediate models (ex: Gaussian mixtures) because we wanted to include the VAE result, which we thought would be of more interest to the ICLR community.
2. On whether the VAE result is trustworthy: assuming a euclidean latent space, the VAE's true reference prior is a function that approaches infinity at the domain's extremes. Our reference prior approximation clearly exhibits these characteristics, and therefore we think it's extremely unlikely that optimization is finding some pathological or unrepresentative solution.
3. On scale-invariance: reference priors are usually identifiable only up to proportionality; so yes, they are scale invariant. Actually, our method allows the user to sidestep these problems with the reference prior (ex: inability to be normalized) because we can learn an approximation that is well-behaved, a proper distribution, etc.
Thanks again,
Eric |
rJnjwsYde | Variational Reference Priors | [
"Eric Nalisnick",
"Padhraic Smyth"
] | In modern probabilistic learning, we often wish to perform automatic inference for Bayesian models. However, informative priors are often costly to elicit, and in consequence, flat priors are chosen with the hopes that they are reasonably uninformative. Yet, objective priors such as the Jeffreys and Reference would often be preferred over flat priors if deriving them was generally tractable. We overcome this problem by proposing a black-box learning algorithm for Reference prior approximations. We derive a lower bound on the mutual information between data and parameters and describe how its optimization can be made derivation-free and scalable via differentiable Monte Carlo expectations. We experimentally demonstrate the method's effectiveness by recovering Jeffreys priors and learning the Variational Autoencoder's Reference prior. | [] | https://openreview.net/pdf?id=rJnjwsYde | HJkyXKNjx | comment | 1,489,437,398,715 | SJG51Nfsl | [
"everyone"
] | [
"~Eric_Nalisnick1"
] | title: Author Response
comment: Thanks for your attentive comments, Reviewer #2.
On estimator variance: the estimator does have high variance, but it is not as bad as the harmonic mean estimator's, to which I believe you're referring. When using a finite negative value for alpha, the estimator becomes very similar to the harmonic mean (but exponentiated), and this is why we use the VR-max estimate instead. We found learning to be stable in fewer than 20 dimensions.
Does the reference prior yield a better density model than the spherical Gaussian one?: preliminary experiments were inconclusive. The reference prior resulted in a better model for 25d but worse in 2d, but in each case the difference was slight, <0.1 .
In the VAE experiment, what keeps the prior from expanding to be infinitely broad?: firstly, the neural network sampler must have finite weights, resulting in the prior having finite domain. Secondly, if the decoder network uses units that can saturate, the prior will stop expanding once the downstream activations becomes sufficiently large.
Is the true reference prior guaranteed to be proper? What happens if it is improper?: it most likely won't be proper, which is a benefit of our methodology since it allows us to find an approximation that is proper (or has other properties the user desires). Yet, MCMC usually still works for improper posteriors: http://stats.stackexchange.com/questions/211917/sampling-from-an-improper-distribution-using-mcmc-and-otherwise
Thanks again,
Eric |
rJnjwsYde | Variational Reference Priors | [
"Eric Nalisnick",
"Padhraic Smyth"
] | In modern probabilistic learning, we often wish to perform automatic inference for Bayesian models. However, informative priors are often costly to elicit, and in consequence, flat priors are chosen with the hopes that they are reasonably uninformative. Yet, objective priors such as the Jeffreys and Reference would often be preferred over flat priors if deriving them was generally tractable. We overcome this problem by proposing a black-box learning algorithm for Reference prior approximations. We derive a lower bound on the mutual information between data and parameters and describe how its optimization can be made derivation-free and scalable via differentiable Monte Carlo expectations. We experimentally demonstrate the method's effectiveness by recovering Jeffreys priors and learning the Variational Autoencoder's Reference prior. | [] | https://openreview.net/pdf?id=rJnjwsYde | SJG51Nfsl | official_review | 1,489,285,002,328 | rJnjwsYde | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper12/AnonReviewer2"
] | title: Interesting approach to learning priors for generative models
rating: 7: Good paper, accept
review: This extended abstract proposes an interesting method to learn a reference prior distribution using a variational formulation with the reparameterization trick. There is a need for this sort of work, since the generic prior distributions commonly used in VAEs and GANs are somewhat unsatisfying. The idea of learning a reference prior is interesting, and I haven’t seen it discussed in the context of deep generative models.
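For context, the standard (Bernardo-style) definition the abstract builds on is that the reference prior maximises the mutual information between parameters and data,
$p^*(\theta) = \arg\max_{p(\theta)} I(\theta;\mathcal{D}) = \arg\max_{p(\theta)} \mathbb{E}_{p(\mathcal{D})}\left[\mathrm{KL}\big(p(\theta\mid\mathcal{D})\,\|\,p(\theta)\big)\right]$,
which is intractable in general; as I read it, the contribution is a reparameterised lower bound on this mutual information that can be optimised with stochastic gradients.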
The contributions and experiments seem sufficient for a workshop paper, so I would recommend acceptance.
I’m a little concerned about the variance of the estimates in Eqn. (3); this resembles the likelihood weighting based estimate of p(D), which can have extremely large, or even infinite variance. Are the estimates stable?
The VAE example is interesting. What can we learn from the shape of the learned prior? Does the bimodal structure imply the distribution is multimodal? Does the reference prior yield a better density model than the spherical Gaussian one?
In the VAE experiment, what keeps the prior from expanding to be infinitely broad? Is the true reference prior guaranteed to be proper? What happens if it is improper?
confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
rJnjwsYde | Variational Reference Priors | [
"Eric Nalisnick",
"Padhraic Smyth"
] | In modern probabilistic learning, we often wish to perform automatic inference for Bayesian models. However, informative priors are often costly to elicit, and in consequence, flat priors are chosen with the hopes that they are reasonably uninformative. Yet, objective priors such as the Jeffreys and Reference would often be preferred over flat priors if deriving them was generally tractable. We overcome this problem by proposing a black-box learning algorithm for Reference prior approximations. We derive a lower bound on the mutual information between data and parameters and describe how its optimization can be made derivation-free and scalable via differentiable Monte Carlo expectations. We experimentally demonstrate the method's effectiveness by recovering Jeffreys priors and learning the Variational Autoencoder's Reference prior. | [] | https://openreview.net/pdf?id=rJnjwsYde | rkEy9ugix | official_review | 1,489,172,956,366 | rJnjwsYde | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper12/AnonReviewer1"
] | title: An interesting and original proposal to bring (uninformative) reference priors to deep generative models. This work might contribute an important and original argument for the discussion of how to choose priors for deep generative models.
rating: 7: Good paper, accept
review: The paper is well written and presents a novel variational approach to find approximate reference priors for arbitrary models. The derivation of their method is clear and easy to follow. In the experimental section, the authors first show that their method recovers the well known Jeffreys prior for 1-dimensional toy models with high accuracy. They then show that they can also find an approximate reference prior for a VAE model with 2-dimensional latent space: this prior is significantly different from the widely used isotropic Gaussian, e.g., it is multimodal. This could be a significant result and might contribute important arguments for the discussion of how to choose priors and how to choose the model structure for deep generative models. Unfortunately, and probably due to the 3 page constraint for this workshop, I’m not convinced that these results are 100% trustworthy: The step from 1d models, where the proposed method works as expected, to VAE-style latent variable models seems rather big and I can imagine various ways in which the optimization might fail and produce misleading results. Additional results for models of intermediate complexity and more details/diagnostics could greatly enhance this paper (but would probably break the 3 page limit). I’m also wondering whether there is a scale-invariance / degeneracy in the model: Scaling the mean/stddev. of the prior and posterior by a constant factor should result in an equivalent model.
Nevertheless, I think this is very interesting work which has the potential to initiate a new discussion about priors for generative models.
Pro:
- original approach; well motivated
- experiments show the method works on 1d toy models
- the result for the 2-dimensional VAE is surprising and might form some kind of argument for future work on latent variable models -> potential high impact.
Con:
- weak experimental section: I’m not convinced that the result for the 2d VAE is trustworthy.
confidence: 3: The reviewer is fairly confident that the evaluation is correct |
SJlj8CNYl | Unsupervised Motion Flow estimation by Generative Adversarial Networks | [
"Stefano Alletto",
"Luca Rigazio"
] | In this paper we address the challenging problem of unsupervised motion flow estimation. Under the assumption that image reconstruction is a super-set of the motion flow estimation problem, we train a convolutional neural network to interpolate adjacent video frames and then compute the motion flow via region-based sensitivity analysis by backpropagation. We postulate that better interpolations should result in better motion flow estimation. We then leverage the modeling power of energy-based generative adversarial networks (EbGAN's) to improve interpolations over standard L2 loss. Preliminary experiments on the KITTI database confirm that better interpolations from EbGAN's significantly improve motion flow estimation compared to both hand-crafted features and deep networks relying on standard losses such as L2. | [
"Computer vision",
"Deep learning",
"Unsupervised Learning"
] | https://openreview.net/pdf?id=SJlj8CNYl | ryKBGigsg | official_review | 1,489,183,297,015 | SJlj8CNYl | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper121/AnonReviewer1"
] | title: Official review
rating: 5: Marginally below acceptance threshold
review: The paper proposes an unsupervised learning approach to image matching. The authors train a deep network for video frame interpolation, and use the trained model to infer the correspondences between frames with backpropagation-based sensitivity analysis. The authors show that adversarial training of the interpolation network improves the accuracy of the predicted matches.
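For concreteness, the sensitivity step appears to be plain input-gradient computation; a generic PyTorch sketch, where the model, shapes, and region-selection scheme are illustrative and not taken from the paper:

```python
import torch

def region_sensitivity(model, frame_t0, frame_t2, y0, x0, size=8):
    # frame_t0, frame_t2: (1, 3, H, W) tensors; `model` predicts the middle frame.
    inputs = torch.cat([frame_t0, frame_t2], dim=1).requires_grad_(True)
    pred_t1 = model(inputs)                                   # (1, 3, H, W)
    region = pred_t1[:, :, y0:y0 + size, x0:x0 + size].sum()  # one output region
    grad, = torch.autograd.grad(region, inputs)
    # Gradient magnitude w.r.t. the first frame tells us which input pixels the
    # region depends on, i.e. a coarse correspondence / motion estimate.
    saliency_t0 = grad[:, :3].abs().sum(dim=1)                # (1, H, W)
    idx = int(saliency_t0.flatten(1).argmax(dim=1))
    return divmod(idx, saliency_t0.shape[-1])                 # (row, col) in frame t0
```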
This general approach to learning to match images has been introduced by (Long et al., ECCV 2016). The contribution of the paper is in adding adversarial loss to the method and showing it improves the quality of the predicted matches.
The paper is written clearly, contains novel and fairly interesting results.
Pros:
- The fact that adversarial training on image interpolation indirectly improves the quality of the matches (~10% relative improvement in accuracy@5, ~20% relative decrease in EPE) is interesting.
- The method is using a somewhat non-standard GAN formulation based on EbGAN. It is not clear if this formulation is advantageous, though
- The method is compared to relevant baselines
Cons:
- Limited novelty: "take an existing method and add a GAN" is not a very original approach
- The results of the method are on par with (Long et al., ECCV 2016) and worse than another unsupervised method by (Yu et al., arxiv 2016)
I am not sure how to calibrate my score for the workshop track, so please take the rating with a grain of salt. This is not a bad paper, but I don't see "very novel ideas" in it.
confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
SJlj8CNYl | Unsupervised Motion Flow estimation by Generative Adversarial Networks | [
"Stefano Alletto",
"Luca Rigazio"
] | In this paper we address the challenging problem of unsupervised motion flow estimation. Under the assumption that image reconstruction is a super-set of the motion flow estimation problem, we train a convolutional neural network to interpolate adjacent video frames and then compute the motion flow via region-based sensitivity analysis by backpropagation. We postulate that better interpolations should result in better motion flow estimation. We then leverage the modeling power of energy-based generative adversarial networks (EbGAN's) to improve interpolations over standard L2 loss. Preliminary experiments on the KITTI database confirm that better interpolations from EbGAN's significantly improve motion flow estimation compared to both hand-crafted features and deep networks relying on standard losses such as L2. | [
"Computer vision",
"Deep learning",
"Unsupervised Learning"
] | https://openreview.net/pdf?id=SJlj8CNYl | H1yB628ig | official_review | 1,489,583,414,714 | SJlj8CNYl | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper121/AnonReviewer2"
] | rating: 6: Marginally above acceptance threshold
review: The paper proposes a GAN architecture that, given frames t and t+2, interpolates to find frame t+1, building upon the method of Long et al. for optical flow estimation through frame interpolation by adding a discriminator on the output image. However, it does not compare against Long et al., so at the end we do not know if adding the adversarial network helps. If the authors could clarify that, it would be important for the paper. My other note would be for them to provide a paragraph describing the method of Long et al. in a bit more detail, as currently one needs to read Long's paper to get the full picture.
confidence: 3: The reviewer is fairly confident that the evaluation is correct |
SJlj8CNYl | Unsupervised Motion Flow estimation by Generative Adversarial Networks | [
"Stefano Alletto",
"Luca Rigazio"
] | In this paper we address the challenging problem of unsupervised motion flow estimation. Under the assumption that image reconstruction is a super-set of the motion flow estimation problem, we train a convolutional neural network to interpolate adjacent video frames and then compute the motion flow via region-based sensitivity analysis by backpropagation. We postulate that better interpolations should result in better motion flow estimation. We then leverage the modeling power of energy-based generative adversarial networks (EbGAN's) to improve interpolations over standard L2 loss. Preliminary experiments on the KITTI database confirm that better interpolations from EbGAN's significantly improve motion flow estimation compared to both hand-crafted features and deep networks relying on standard losses such as L2. | [
"Computer vision",
"Deep learning",
"Unsupervised Learning"
] | https://openreview.net/pdf?id=SJlj8CNYl | r1g3rdY6sx | comment | 1,490,028,612,379 | SJlj8CNYl | [
"everyone"
] | [
"ICLR.cc/2017/pcs"
] | decision: Reject
title: ICLR committee final decision |
rknkNR7Ke | Trace Norm Regularised Deep Multi-Task Learning | [
"Yongxin Yang",
"Timothy M. Hospedales"
] | We propose a framework for training multiple neural networks simultaneously. The parameters from all models are regularised by the tensor trace norm, so that each neural network is encouraged to reuse others' parameters if possible -- this is the main motivation behind multi-task learning. In contrast to many deep multi-task learning models, we do not predefine a parameter sharing strategy by specifying which layers have tied parameters. Instead, our framework considers sharing for all shareable layers, and the sharing strategy is learned in a data-driven way. | [] | https://openreview.net/pdf?id=rknkNR7Ke | rJSX_K6ox | comment | 1,490,028,572,685 | rknkNR7Ke | [
"everyone"
] | [
"ICLR.cc/2017/pcs"
] | decision: Accept
title: ICLR committee final decision |
rknkNR7Ke | Trace Norm Regularised Deep Multi-Task Learning | [
"Yongxin Yang",
"Timothy M. Hospedales"
] | We propose a framework for training multiple neural networks simultaneously. The parameters from all models are regularised by the tensor trace norm, so that each neural network is encouraged to reuse others' parameters if possible -- this is the main motivation behind multi-task learning. In contrast to many deep multi-task learning models, we do not predefine a parameter sharing strategy by specifying which layers have tied parameters. Instead, our framework considers sharing for all shareable layers, and the sharing strategy is learned in a data-driven way. | [] | https://openreview.net/pdf?id=rknkNR7Ke | S14afRlix | official_review | 1,489,195,707,882 | rknkNR7Ke | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper54/AnonReviewer1"
] | title: Interesting idea but weak results (for now)
rating: 7: Good paper, accept
review: The authors introduce the idea of training multi-task networks by constructing separate networks for different tasks and then putting a limit on the tensor trace norm of the shareable layers. In this way it is not necessary to explicitly design the sharing (although different tasks still need to share the architecture). The proposed tensor losses are not differentiable, so to optimize them with SGD during training the authors use sub-gradient descent. These are certainly interesting ideas which warrant acceptance. The presented results, improving accuracy on Omniglot from about 34% to about 36%, are very weak though, considering that the deep-learning state of the art (albeit using metric learning) is above 90% (e.g. from matching networks). Or is this not a fair comparison? In any case, the paper certainly warrants workshop acceptance.
confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
rknkNR7Ke | Trace Norm Regularised Deep Multi-Task Learning | [
"Yongxin Yang",
"Timothy M. Hospedales"
] | We propose a framework for training multiple neural networks simultaneously. The parameters from all models are regularised by the tensor trace norm, so that each neural network is encouraged to reuse others' parameters if possible -- this is the main motivation behind multi-task learning. In contrast to many deep multi-task learning models, we do not predefine a parameter sharing strategy by specifying which layers have tied parameters. Instead, our framework considers sharing for all shareable layers, and the sharing strategy is learned in a data-driven way. | [] | https://openreview.net/pdf?id=rknkNR7Ke | rkJy3vwsx | official_review | 1,489,628,118,951 | rknkNR7Ke | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper54/AnonReviewer2"
] | title: Review
rating: 6: Marginally above acceptance threshold
review: ## Quality:
This is an interesting paper presenting a cute idea. It reads a bit like a "this didn't work as well" alternative to its sister paper implementing the same idea with Tensor Factorisation, which is in the ICLR main conference: https://openreview.net/pdf?id=SkhU2fcll
## Clarity:
Very clear, well-written.
## Significance:
This seems the biggest problem. The results are quite weak, clearly inferior to the sister paper.
Most importantly, why is the baseline Omniglot STL accuracy around 0.34, while in the sister paper the accuracy for the same baseline appears to be around 0.65 in Fig. 4, top left? Am I missing something here?
In any case I believe that there should be a comparison against normal explicit sharing of the weights as baseline, which is easy to add in the plots.
Apart from that, there are some smaller remarks that impact significance:
+ There must be a lot of computational overhead in computing an SVD on each weight layer, which I assume needs to be done after every weight update? What was the additional compute time?
+ The number of parameters is still the same as if these networks were trained independently, so parameter reduction, one advantage of hard explicit sharing, falls away here.
## Other remarks:
A relevant application of MTL is multilingual acoustic model training in speech.
See e.g. Scanzio et al. 2008 (https://scholar.google.com/scholar?cites=2941155962830961778), which has all but the last layers shared, and Sercu et al. 2015 (https://arxiv.org/abs/1509.08967), which is a CNN-based model and has multiple FC layers split.
Overall, PRO: cute idea, novel, well-written paper. CON: a bit too similar to sister paper on main track, weak results (and please clarify the difference in baseline?)
confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
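To make the regulariser discussed in these two reviews concrete, below is a minimal sketch of trace-norm regularisation over stacked per-task weights. The layer sizes, the regularisation weight, and the particular unfolding are assumptions; the paper may combine several unfoldings and covers all shareable layers, not just one.

```python
# Minimal sketch: encourage low-rank (shared) structure across per-task weights
# by penalising the nuclear (trace) norm of their stacked unfolding.
import torch
import torch.nn as nn

n_tasks, d_in, d_out, lam = 3, 128, 64, 1e-3
# One "shareable" layer per task; every shareable layer would be treated this way.
layers = nn.ModuleList([nn.Linear(d_in, d_out) for _ in range(n_tasks)])
opt = torch.optim.SGD(layers.parameters(), lr=0.1)

def trace_norm_reg(layers):
    # Stack the T weight matrices into a (T, d_out, d_in) tensor and take the
    # nuclear norm of one unfolding: tasks x (flattened weights).
    W = torch.stack([l.weight for l in layers])      # (T, d_out, d_in)
    unfolding = W.reshape(len(layers), -1)           # (T, d_out * d_in)
    return torch.linalg.svdvals(unfolding).sum()     # sum of singular values

# One (fake) training step: per-task losses plus the shared regulariser.
x = torch.randn(n_tasks, 8, d_in)
task_losses = sum(layers[t](x[t]).pow(2).mean() for t in range(n_tasks))
loss = task_losses + lam * trace_norm_reg(layers)
loss.backward()   # autograd gives a (sub)gradient through the SVD
opt.step()
```

The per-step SVD in this sketch is exactly the overhead the second reviewer asks about; here it is taken on the small tasks-by-parameters unfolding, which keeps it cheap relative to the forward/backward pass.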
r1QXQkSYg | Out-of-class novelty generation: an experimental foundation | [
"Mehdi Cherti",
"Balázs Kégl",
"Akın Kazakçı"
] | Recent advances in machine learning have brought the field closer to computational creativity research. From a creativity research point of view, this offers the potential to study creativity in relationship with knowledge acquisition. From a machine learning perspective, however, several aspects of creativity need to be better defined to allow the machine learning community to develop and test hypotheses in a systematic way. We propose an actionable definition of creativity as the generation of out-of-distribution novelty. We assess several metrics designed for evaluating the quality of generative models on this new task. We also propose a new experimental setup. Inspired by the usual held-out validation, we hold out entire classes for evaluating the generative potential of models. The goal of the novelty generator is then to use training classes to build a model that can generate objects from future (hold-out) classes, unknown at training time - and thus, are novel with respect to the knowledge the model incorporates. Through extensive experiments on various types of generative models, we are able to find architectures and hyperparameter combinations which lead to out-of-distribution novelty. | [
"Deep learning",
"Unsupervised Learning"
] | https://openreview.net/pdf?id=r1QXQkSYg | SkUcM-bjl | official_review | 1,489,207,949,881 | r1QXQkSYg | [
"everyone"
] | [
"ICLR.cc/2017/workshop/paper134/AnonReviewer1"
] | rating: 7: Good paper, accept
review: This paper attempts to formalize a notion of creativity in generative models. The idea is to see if a generative model trained on one dataset can be used to generate novel samples that resemble elements of another dataset. In this case, it is examined whether a generative model trained on digits could be used to generate samples that look like alphabetical characters. Several metrics for determining the alphabetical nature of the generated samples are given; this is used as a proxy for novelty. It is shown that these can be useful in choosing models that generate novel samples outside of the classes the model was initially trained on.
I can agree with the premise that when it comes to out-of-class novelty, likelihood is probably not a good measure since it will penalize models that generate samples that are too far outside of the data distribution. However, I'm not yet convinced that the conclusions drawn here would generalize beyond the specific examples given in the paper. It would be good in a future iteration to see this same analysis on another dataset, or perhaps even to reverse the existing experiment (train on alphabetical characters, evaluate on digits). Another possibility would be to test on several different alphabets, like those found in Omniglot.
Although I think this particular analysis is limited (it is a workshop submission), I do think it proposes an interesting direction for measuring the novelty of samples from a generative model. I could see this being a potentially useful direction for measuring interesting properties of generative models in terms of creativity.
How are the pangrams (a)-(d) generated? Are letters chosen based on Euclidean distance to some reference characters?
confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
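As a concrete reading of the held-out-class protocol this record describes, the sketch below uses a simple scorer that separates training-class data from held-out-class data and reports what fraction of generated samples it assigns to the held-out side. The function name, the logistic-regression scorer, and the random placeholder arrays are illustrative assumptions, not the metrics evaluated in the paper.

```python
# Hypothetical sketch of a held-out-class novelty score: a generator is trained
# only on the training classes (e.g. digits); its samples are then scored against
# classes it never saw (e.g. letters).
import numpy as np
from sklearn.linear_model import LogisticRegression

def out_of_class_score(generated, train_class_data, heldout_class_data):
    """Fraction of generated samples that look like the held-out classes.

    generated:          (n, d) samples from a generator trained on train classes only
    train_class_data:   (m, d) real samples from the training classes
    heldout_class_data: (k, d) real samples from classes unseen during training
    """
    X = np.vstack([train_class_data, heldout_class_data])
    y = np.concatenate([np.zeros(len(train_class_data)), np.ones(len(heldout_class_data))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    # Fraction of generated samples the scorer assigns to the held-out classes.
    return clf.predict(generated).mean()

# Usage with random placeholders standing in for flattened 28x28 images.
rng = np.random.default_rng(0)
score = out_of_class_score(rng.random((100, 784)),
                           rng.random((500, 784)),
                           rng.random((500, 784)))
```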
r1QXQkSYg | Out-of-class novelty generation: an experimental foundation | [
"Mehdi Cherti",
"Balázs Kégl",
"Akın Kazakçı"
] | Recent advances in machine learning have brought the field closer to computational creativity research. From a creativity research point of view, this offers the potential to study creativity in relationship with knowledge acquisition. From a machine learning perspective, however, several aspects of creativity need to be better defined to allow the machine learning community to develop and test hypotheses in a systematic way. We propose an actionable definition of creativity as the generation of out-of-distribution novelty. We assess several metrics designed for evaluating the quality of generative models on this new task. We also propose a new experimental setup. Inspired by the usual held-out validation, we hold out entire classes for evaluating the generative potential of models. The goal of the novelty generator is then to use training classes to build a model that can generate objects from future (hold-out) classes, unknown at training time - and thus, are novel with respect to the knowledge the model incorporates. Through extensive experiments on various types of generative models, we are able to find architectures and hyperparameter combinations which lead to out-of-distribution novelty. | [
"Deep learning",
"Unsupervised Learning"
] | https://openreview.net/pdf?id=r1QXQkSYg | HJIEbENog | comment | 1,489,416,494,485 | rJf7vaeie | [
"everyone"
] | [
"~mehdi_cherti1"
] | title: Answer
comment: Thank you for your comments and suggestions.
We are working on a more detailed analysis to understand under which conditions we obtain a model that generates novelty. |