| venue (string, 11 values) | review_openreview_id (string, 8-13 chars) | replyto_openreview_id (string, 9-13 chars) | writer (string, 2-110 chars) | title (string, 14-49 chars) | content (string, 29-44.2k chars) | time (date, 2013-02-06 08:34:00 to 2025-08-01 01:24:28) |
|---|---|---|---|---|---|---|
ICLR.cc/2017/conference
|
BkxfBmS4l
|
rkslf-rVl
|
~Minmin_Chen1
|
Response by Reviewer
|
{"title": "Thanks for your feedback", "comment": "Dear reviewer, thank you for your constructive feedback. Indeed our main goal is to come up with a simple and efficient framework for generating document representations. I would like to argue that simplicity does not deny originality. The reason we can simply average word embeddings at test time to form document representation is because of the new model architecture proposed, which represents documents with corrupted average of word embeddings at learning time, and learns the document embedding with word embeddings together. The corruption at learning time enables fast learning (comparing to [2][3]), as well as a data-dependent regularization. As far as I know, it is the first do so. I believe the paper contains quite thorough analyses of the proposed work on the sentiment analysis and document classification tasks. I would like to see the community start exploring and benefiting from this simple idea, while we work on testing it on more tasks. For RNN-LM, we used the implementation provided by the author, which was tested on the same dataset in their 2015 ICLR submission [1]. It builds two language models, one for the positive class and one for negative. It then computes the probability of each LM generating the document and assigns the one with higher score as the prediction. I included skip-thought vectors as another baseline in the manuscript thanks to the feedback from another reviewer. The encoder and decoder in the method are constructed from gated RNN. The method produces two models, uni-skip and bi-skip. Among them, the bi-skip is a bi-directional model that generates one forward and one backward encoding of the document. Its performance is not satisfactory on this dataset, and it takes long time to test due to the high-dimensional encoders used. Again thanks for your feedback and please let me know if you have other questions. [1] Mesnil, Gr\u00e9goire, et al. \"Ensemble of generative and discriminative techniques for sentiment analysis of movie reviews.\" arXiv preprint arXiv:1412.5335 (2014).[2] Eric H Huang, Richard Socher, Christopher D Manning, and Andrew Y Ng. Improving word representations via global context and multiple word prototypes. In ACL, pp. 873\u2013882, 2012.[3] Lebret, R\u00e9mi, and Ronan Collobert. \"\" The Sum of Its Parts\": Joint Learning of Word and Phrase Representations with Autoencoders.\" arXiv preprint arXiv:1506.05703 (2015)."}
|
2016-12-19 18:00:38
|
ICLR.cc/2017/conference
|
SkFcq0B0x
|
BJz3FYxnl
|
~Minmin_Chen1
|
Response by Reviewer
|
{"title": "On data-dependent regularization", "comment": "Thank you for your interest! For this experiment, I set a cut-off of 100 to remove words that appear less than 100 times throughout the document. These words will have very small norm as they are rare. It was explained in page 7 of the paper. You can also see that in table 3 the words of smallest norms learned by the other methods are the ones appearing around 100 times in the corpus. "}
|
2017-04-20 06:55:02
|
ICLR.cc/2017/conference
|
rkRf7aJ7x
|
B1GOWV5eg
|
AnonReviewer4
|
Response by Reviewer
|
{"title": "On data-dependent regularization", "comment": "Thank you for your interest! For this experiment, I set a cut-off of 100 to remove words that appear less than 100 times throughout the document. These words will have very small norm as they are rare. It was explained in page 7 of the paper. You can also see that in table 3 the words of smallest norms learned by the other methods are the ones appearing around 100 times in the corpus. "}
|
2016-12-03 03:59:18
|
ICLR.cc/2017/conference
|
S1YAMSN4e
|
B1GOWV5eg
|
AnonReviewer3
|
Official Review by AnonReviewer3
|
{"title": "Simple but effective idea with a very thorough evaluation", "rating": "8: Top 50% of accepted papers, clear accept", "review": "This paper shows that extending deep RL algorithms to decide which action to take as well as how many times to repeat it leads to improved performance on a number of domains. The evaluation is very thorough and shows that this simple idea works well in both discrete and continuous actions spaces.A few comments/questions:- Table 1 could be easier to interpret as a figure of histograms.- Figure 3 could be easier to interpret as a table.- How was the subset of Atari games selected?- The Atari evaluation does show convincing improvements over A3C on games requiring extended exploration (e.g. Freeway and Seaquest), but it would be nice to see a full evaluation on 57 games. This has become quite standard and would make it possible to compare overall performance using mean and median scores.- It would also be nice to see a more direct comparison to the STRAW model of Vezhnevets et al., which aims to solve some of the same problems as FiGAR.- FiGAR currently discards frames between action decisions. There might be a tradeoff between repeating an action more times and throwing away more information. Have you thought about separating these effects? You could train a model that does process intermediate frames. Just a thought.Overall, this is a nice simple addition to deep RL algorithms that many people will probably start using.--------------------I'm increasing my score to 8 based on the rebuttal and the revised paper.", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}
|
2017-01-20 11:30:07
|
ICLR.cc/2017/conference
|
r1T5EsJPx
|
rklD4OJPx
|
~Sahil_Sharma1
|
Response by Reviewer
|
{"title": "Citation: Thanks for pointing this out", "comment": "Thanks for pointing this out. In fact a few of the citations in our paper are outdated (we cite the arxiv versions of the paper and not the conference versions). We will definitely correct this (including citations for STRAW and A3C) in the next revision."}
|
2017-01-20 14:54:12
|
ICLR.cc/2017/conference
|
H1GdCWzEx
|
B1GOWV5eg
|
AnonReviewer4
|
Official Review by AnonReviewer4
|
{"title": "review", "rating": "8: Top 50% of accepted papers, clear accept", "review": "This paper proposes a simple but effective extension to reinforcement learning algorithms, by adding a temporal repetition component as part of the action space, enabling the policy to select how long to repeat the chosen action for. The extension applies to all reinforcement learning algorithms, including both discrete and continuous domains, as it is primarily changing the action parametrization. The paper is well-written, and the experiments extensively evaluate the approach with 3 different RL algorithms in 3 different domains (Atari, MuJoCo, and TORCS).Here are some comments and questions, for improving the paper:The introduction states that \"all DRL algorithms repeatedly execute a chosen action for a fixed number of time steps k\". This statement is too strong, and is actually disproved in the experiments \u2014 repeating an action is helpful in many tasks, but not in all tasks. The sentence should be rephrased to be more precise.In the related work, a discussion of the relation to semi-MDPs would be useful to help the reader better understand the approach and how it compares and differs (e.g. the response from the pre-review questions)Experiments:Can you provide error bars on the experimental results? (from running multiple random seeds)It would be useful to see experiments with parameter sharing in the TRPO experiments, to be more consistent with the other domains, especially since it seems that the improvement in the TRPO experiments is smaller than that of the other two domains. Right now, it is hard to tell if the smaller improvement is because of the nature of the task, because of the lack of parameter sharing, or something else.The TRPO evaluation is different from the results reported in Duan et al. ICML \u201916. Why not use the same benchmark?Videos only show the policies learned with FiGAR, which are uninformative without also seeing the policies learned without FiGAR. Can you also include videos of the policies learned without FiGAR, as a comparison point?How many laps does DDPG complete without FiGAR? The difference in reward achieved seems quite substantial (557K vs. 59K).Can the tables be visualized as histograms? This seems like it would more effectively and efficiently communicate the results.Minor comments:-- On the plot in Figure 2, the label for the first bar should be changed from 1000 to 3500.-- \u201cidea of deciding when necessary\u201d - seems like it would be better to say \u201cidea of only deciding when necessary\"-- \"spaces.Durugkar et al.\u201d \u2014 missing a space.-- \u201cR={4}\u201d \u2014 why 4? Could you use a letter to indicate a constant instead? (or a different notation)", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
|
2016-12-17 01:01:30
|
ICLR.cc/2017/conference
|
HknP87kQg
|
H1pnGXJmx
|
~Sahil_Sharma1
|
Response by Reviewer
|
{"title": "Response to AnonReviewer3: How intermediate frames are handled", "comment": "Thanks for reviewing the paper.In what follows | denotes the concatenation operator.Suppose action decisions were taken at time steps (..., 20, 26, 39) The decision at time step 39 was to repeat the action a_{39} for 3 time steps. At time step 42 the observation presented to the agent is concatenation of the frame for current time step (42) with the last three action decision time steps's frames, that is f_{20}|f_{26}|f_{39}|f_{42}. Hence it is the action decision points which decide the frames which are concatenated and presented as input. If the next action decision is to be taken at time step 50 (action repetition of 8 was chosen at time step 42), then the next input is f_{26}|f_{39}|f_{42}|f_{50}. In short, the intermediate frames are indeed discarded. In the case of this example, the network just gets to see the frames on which it has either already made action decisions (frames 20, 26, 39), or those on which it wants to make action decisions currently (frame 42).Please let us know if any further clarifications are required."}
|
2016-12-02 16:53:11
|
ICLR.cc/2017/conference
|
ryhHnCDng
|
rywBok98x
|
~Aravind_Lakshminarayanan1
|
Response by Reviewer
|
{"title": "An additional comment regarding not-repeating", "comment": "One can think about the suggestion of not having to execute the repetitive plan but rather find the optimal action at each time step similar to Model Predictive Control. In MPC, at every time step, the optimal control plans for the entire trajectory using the current state as the initial state but executes only the first action thereby discarding the remaining part of the planned trajectory. A plan is re-computed again with the next state as the initial state. This sort of a dynamic planning is useful to account for unexpected perturbations.However, in our experiments, the Atari simulator is close to deterministic for most of the games. Therefore, close to nil unexpected perturbations are potentially encountered in a test-run of the agent and a computed plan can be executed without having to re-plan (Here, a plan is just a \"local\" repetitive plan around the current state rather than an entire trajectory). Going by papers that have pointed out the weaknesses of DQN / A3C (Reactive policies learned with a specific simulator) to random unexpected perturbations at test-time, FiGAR (or for that matter, STRAW) is expected to suffer from such unexpected perturbations and a re-planning is necessary for such cases similar to MPC. It is a good avenue for future work to explore robust planning algorithms for non-deterministic simulators with discrete action spaces. "}
|
2017-03-28 12:51:16
|
ICLR.cc/2017/conference
|
SyJIkklQx
|
rkRf7aJ7x
|
~Aravind_Lakshminarayanan1
|
Response by Reviewer
|
{"title": "Response to AnonReviewer4: A few questions", "comment": "Thanks for the questions.Q1. For the paradigm to work, no assumptions are made about whether the policy for action selection ($\\pi_{a}$) and that for action repetition share any parameters or not.In practice, for the figar-a3c setup all but the final output layer are shared between the two policies and the critic (details on the architecture are in appendix A).For figar-trpo no parameters are shared between the two policies.For figar-ddpg all but final layers of the two policies are shared.Q2. Semi- MDPs (SMDPs) :SMDPs are MDPs with durative actions. The assumption in SMDPs is that actions take some \"holding time\" to complete. Typically, they are modeled with two distributions, one corresponding to the next state transition and the other corresponding to the holding time, which denotes the number of time steps between the current action from the policy until the next action from the policy. The rewards over the entire holding time of an action is the credit assigned to picking the action. Relation to FiGAR:In our framework, we naturally have durative actions due to the policy structure where the decision consists of both the choice of the action and the time scale of its execution. Therefore, we convert the MDP to a semi-MDP trivially. In fact, we give more structure to semi-MDP because we are clear that during the holding time, we \"repeat\" the chosen action, while in a semi-MDP, what happens during the holding time is not specified. One can think of the part of the policy that outputs the probability distribution over the time scales as a holding time distribution, and thereby, our framework naturally fits into semi-MDPs with the action repetition characterizing the holding time. As an additional note, we also ensure appropriate discounting factor exponent and sum of rewards over the holding time, both of which obey the semi-MDP framework. Q3. The average action repetition was 2. Although the average action repetition in this case is small, we believe that deep exploration in the form of the choice to execute larger action repetitions allows the agent to learn smoother policies."}
|
2016-12-03 09:01:57
|
ICLR.cc/2017/conference
|
r1UiDTNVg
|
HJso2T14e
|
~Sahil_Sharma1
|
Response by Reviewer
|
{"title": "Thanks for your comments and questions", "comment": "Q1: After learning is complete, did you try forward propagating through the network to find actions for every time-step as opposed to repeating actions? Concretely, if at t=5, action suggested by the network is a_3 with a repetition of 4, instead of sticking with a_3 for times t={5,6,7,8} perform action a_3 for just t=5, and forward prop through the policy again at t=6. I understand that the goal is to explore temporal abstractions, but for all the problems considered in this paper, a forward prop is not expensive at all. Hence, there is no computational bottle neck forcing action repetition during test-time. It is understandable that repeating actions speeds up training. However, at test time, the performance can potentially improve by not repeating. This idea is quite popular in variants of Receding Horizon Control and MCTS.We haven\u2019t tried this yet. We will try out these experiments and possibly include them in the final manuscript if they appear to be significant. However, the reason for wanting to learn temporal abstractions is not only the computational speedup. The reason is that such abstractions help the agent learn more-human like policies which in-turn lead to improvement in performance. My intuition is that such a change in the way the final policy is used would lead to a drop in performance. However, it is certainly worth trying out.Q2: Can you share hyper-parameter settings of section 5.2? How many iterations of TRPO was run, and how many trajectory samples per iteration? The performance on Ant-v1 task is too low for both TRPO and FIGAR. Running for more iterations and initializing the network better (smaller weights) might improve performance significantly for both. It might be informative to share the learning curves comparing FIGAR with TRPO. With the current results, it is a stretch to say that FIGAR \"outperforms\" TRPO.As mentioned in Section 5.2, Appendix C contains all experimental details including the number of iterations we ran the networks for. Q3: Considering that the action repetition is 1 for a majority of MuJoCo tasks (discounting Ant) and TORCS, why do you expect FIGAR to perform better? Also, the FIGAR policies seem to have more parameters than baselines they are compared against -- is this true? Have you compared to baselines with equal number of parameters?We expect FiGAR to perform better in this case because we believe that deep exploration in the form of the choice to execute larger action repetitions in the initial phase forces the agent to pick the better actions (since at the beginning of training, it\u2019d have to repeat these actions for a reasonably large number of time steps) and we believe this would lead to better policies. FiGAR policies do have more number of parameters. We have not compared to baselines with equal number of parameters. We found larger hidden layer sizes to not improve performance for either TRPO or FiGAR. "}
|
2016-12-19 02:36:13
|
ICLR.cc/2017/conference
|
HJso2T14e
|
B1GOWV5eg
|
~Aravind_Rajeswaran1
|
Response by Reviewer
|
{"title": "some questions and comments", "comment": "Hi, the main idea is quite interesting. I was curious about the following. My primary question is Q1, and others are predominantly comments.Q1: After learning is complete, did you try forward propagating through the network to find actions for every time-step as opposed to repeating actions? Concretely, if at t=5, action suggested by the network is a_3 with a repetition of 4, instead of sticking with a_3 for times t={5,6,7,8} perform action a_3 for just t=5, and forward prop through the policy again at t=6.I understand that the goal is to explore temporal abstractions, but for all the problems considered in this paper, a forward prop is not expensive at all. Hence, there is no computational bottle neck forcing action repetition during test-time. It is understandable that repeating actions speeds up training. However, at test time, the performance can potentially improve by not repeating. This idea is quite popular in variants of Receding Horizon Control and MCTS.2: Can you share hyper-parameter settings of section 5.2? How many iterations of TRPO was run, and how many trajectory samples per iteration? The performance on Ant-v1 task is too low for both TRPO and FIGAR. Running for more iterations and initializing the network better (smaller weights) might improve performance significantly for both. It might be informative to share the learning curves comparing FIGAR with TRPO. With the current results, it is a stretch to say that FIGAR \"outperforms\" TRPO.3: Considering that the action repetition is 1 for a majority of MuJoCo tasks (discounting Ant) and TORCS, why do you expect FIGAR to perform better? Also, the FIGAR policies seem to have more parameters than baselines they are compared against -- is this true? Have you compared to baselines with equal number of parameters?"}
|
2016-12-16 19:44:53
|
ICLR.cc/2017/conference
|
ryanipVNl
|
SkazJMfVg
|
~Sahil_Sharma1
|
Response by Reviewer
|
{"title": "Response: AnonReviewer2", "comment": "Thanks for reviewing the paper, the comments and questions! We believe addressing these questions will increase the quality of the work, and we will definitely do that.- The scores reported on A3C in this paper and in the Mnih et al. publication (table S3) differ significantly. Where does this discrepancy come from? If it's from a different training regime (fewer iterations, for instance), did the authors confirm that running their replication to the same settings as Mnih et al provide similar results?The reason why the scores differ significantly is because of 3 reasons:1. Mnih et al. publication [1] reports average scores on best 5 replicas out of 50 replicas that they started with. We did not mimic this setup because we do not possess the compute resources to run 50 different replicas for each game.2. The evaluation method used was very different. They used human starts evaluation metric. However, in the absence of the same human trajectories it would be very difficult to ensure a fair or repeatable evaluation setup. 3. In fact we have found that the scores in general and evaluation setup in specific reported by Mnih et al. [1] are difficult to reproduce not only for us, but also researchers at Deepmind. In Unifying Count-Based Exploration and Intrinsic Motivation [2] the scores reported for A3C differ very drastically from those reported by the original A3C publication [1], even though the same evaluation metric (human starts) was followed, and hopefully the same set of human start trajectories was used (we do not know this for sure). The scores for many games are orders of magnitude lower for A3C in [2].In conclusion we\u2019d like to say that the training as well as testing setup of [1] are difficult to reproduce, which in turn makes it difficult to replicate the scores. - It is intriguing that the best results of FiGAR are reported on games where few actions repeat dominate. This seems to imply that for those, the performance overhead of FiGAR over A3C is high since A3C uses an action repeat of 4 (and therefore has 4 times fewer gradient updates). A3C could be run for a comparable computation cost with a lower action repeat, which would probably result in increased performance of A3C. Nevertheless, the automatic determination of the appropriate action repeat is interesting, even if the overall message seems to be to not repeat actions too often.It is true that for many games the lower action repetitions dominate in the sense that they are chosen for a large fraction of time. However, the average action repetition (ARR) is a fairer metric to compare the computation cost since FiGAR still makes up by choosing large action repetition at other points in time. Table 5,6,7 in Appendix B seek to demonstrate the action repetition distribution and the ARR for all the games. It can be seen that for 28 out of 31 games, the average action repetition for FiGAR is greater than 4 (which is the ARR for A3C). Concretely for the best 4 games by gameplay performance, the average action repetitions are (numbers taken from Table 7, page 18, Appendix B):Atlantis: 7.2Seaquest: 5.33Asterix: 4.22Wizard of wor: 9.87- Slightly problematic notation, where r sometimes denotes rewards, sometimes denotes elements of the repetition set R (top of page 5)Thanks for pointing this out. 
We will change this in the next revision.- In the equation at the bottom of page 5 - since the sum is indexed over decision steps, not time steps, shouldn't the rewards r_k be modified to be the sum of rewards (appropriately discounted) between those time steps?Thanks for pointing this out. The question is how should the reward for a macro action m = (a,x) be constructed. Should it be the discounted sum of intermediate rewards encountered during the execution of m or should it be the cumulative undiscounted sum of rewards? We went with the second formalism since we did not want to penalize the agent for choosing larger action repetitions. This we believe would encourage the agent to pick larger action repetitions. - The section on DDPG is confusingly written. \"Concatenating\" loss is a strange operation; doesn't FiGAR correspond to a loss to roughly looks like Q(x,mu(x)) + R log p(x) (with separate loss for learning the critic)? It feels that REINFORCE should be applied for the repetition variable x (second term of the sum) and reparametrization for the action a (first term)? Sorry for this. In DDPG, there is only a single loss function, the critic loss function. There is no loss function for the actor. The actor simply receives gradients from the critic. This is because the actor\u2019s proposed policy is directly fed to the critic and the critic provides the actor with gradients which the proposed policy follows for improvement. Hence, the actor does not really have a loss function per se, but only gradients provided by the critic. In FiGAR the total policy \\pi is a concatenation of vectors \\pi_{a} and \\pi_{x}. Hence the gradients for the total policy are also simply the concatenation of the gradients for the policies \\pi_{a} and \\pi_{x}. This is what we meant by the concatenation operator. We will make the section clearer in the next revision.- Is the 'name_this_game' name in the tables intentional?It is. This is the name of a game in the Atari 2600 domain. Here is a video which shows gameplay in this game: https://www.youtube.com/watch?v=7obD1q85_kw. Note that this video is in no way related to FiGAR and only demonstrates general gameplay in this game.- A potential weakness of the method is that the agent must decide to commit to an action for a fixed number of steps, independently of what happens next. Have the authors considered a scheme in which, at each time step, the agent decides to stick with the current decision or not? (It feels like it might be a relatively simple modification of FiGAR).We agree with the reviewer and explain the need for and a possible solution for stopping macro-actions.Atari, TORCS and MuJoCo represent environments which are largely deterministic with a minimal degree of stochasticity in environment dynamics. In such highly deterministic environments we would expect FiGAR agents to build a latent model of the environment dynamics and hence be able to execute large action repetitions without dying. This is exactly what we see in a highly deterministic environment like the game \u201cFreeway\u201d. Figure 1 (a) demonstrates that the chicken is able to judge the speed of the approaching cars appropriately and cross the road in a manner which takes it to the goal without colliding with the cars and at the same time avoiding them narrowly.Having said that, certainly the ability to stop an action repetition (or a macro-action) in general would be very important, especially in stochastic environments. 
In our setup, we do not consider the ability to stop executing a macro-action that the agent has committed to. However, this is a necessary skill in the event of unexpected changes in the environment while executing a chosen macro-action. Thus, stop and start actions for stopping and committing to macro-actions can be added to the basic dynamic time scale setup for more robust policies. We believe the modification could work for more general stochastic worlds like Minecraft and leave it for future work.We will also add this discussion to the conclusion section to reflect possible shortcomings of FiGAR.[1] - Asynchronous Method for Deep Reinforcement Learning, Mnih et al, ICML 2016[2] - Unifying Count-Based Exploration and Intrinsic Motivation, Bellemare et al, NIPS 2016-- Sahil & Aravind"}
|
2016-12-19 02:57:02
|
ICLR.cc/2017/conference
|
S1v-fAE4e
|
S1YAMSN4e
|
~Sahil_Sharma1
|
Response by Reviewer
|
{"title": "Response: AnonReviewer3", "comment": "Thanks for reviewing the paper, the comments and questions! We believe addressing these questions will increase the quality of the work, and we will definitely do that.- Table 1 could be easier to interpret as a figure of histograms.Thanks for pointing this out. We will definitely add a histogram to the final version of the paper corresponding to Table 1.- Figure 3 could be easier to interpret as a table.We added a histogram version of this data (Figure3) because we wanted to illustrate that regardless of the action repetition set chosen, the rough \"magnitude\" of improvement is still the same. That all FiGAR variants continue to significantly outperform the baseline in the chosen games. We will definitely add a corresponding table of raw data in the appendix so that it can be looked up.- How was the subset of Atari games selected?The subset was chosen arbitrarily.- The Atari evaluation does show convincing improvements over A3C on games requiring extended exploration (e.g. Freeway and Seaquest), but it would be nice to see a full evaluation on 57 games. This has become quite standard and would make it possible to compare overall performance using mean and median scores.The results we have reported on 31 games was possible only after 3 months of computing. It might be difficult to report numbers on all 57 games, however we will definitely try to make the number of games on which we report results as large as possible (We already have results on 2 more games that we will add to the final version of the paper).- It would also be nice to see a more direct comparison to the STRAW model of Vezhnevets et al., which aims to solve some of the same problems as FiGAR.STRAW was run on a very small subset of games, namely 8 Atari games. The intersection of games on which both our work and STRAW report results is even smaller at 5 games. Such a comparison is likely to be skewed. Having said that, we could definitely add scores reported by STRAW on the 5 games that we have evaluated on to Table 4.- FiGAR currently discards frames between action decisions. There might be a tradeoff between repeating an action more times and throwing away more information. Have you thought about separating these effects? You could train a model that does process intermediate frames. Just a thought.As pointed out in one of the comments:\"After learning is complete, did you try forward propagating through the network to find actions for every time-step as opposed to repeating actions? Concretely, if at t=5, action suggested by the network is a_3 with a repetition of 4, instead of sticking with a_3 for times t={5,6,7,8} perform action a_3 for just t=5, and forward prop through the policy again at t=6.\"This is definitely an experiment worth trying out and we intend to do that and include results if they turn out to be significant. Having said that, this only makes use of the discarded frames in the testing phase, not in the training phase.A possible way to trade-off between discarding frames and action repeats is to construct a separate, second network (consisting of a convnet followed by an LSTM) which processes every kth frame, much like A3C, and concatenate representations learnt by this network to those learnt by the usual A3C network, while making decisions on action selection. 
The reason one would like to do this is because as you rightly pointed out, skipped frames might also contain crucial information needed for finding out the optimal action as well as action repetition in the next action decision step. We will definitely explore this direction of work as future research. Thanks for the idea!"}
|
2016-12-19 04:14:34
|
ICLR.cc/2017/conference
|
SkazJMfVg
|
B1GOWV5eg
|
AnonReviewer2
|
Official Review by AnonReviewer2
|
{"title": "Review", "rating": "7: Good paper, accept", "review": "This paper provides a simple method to handle action repetitions. They make the action a tuple (a,x), where a is the action chosen, and x the number of repetitions. Overall they report some improvements over A3C/DDPG, dramatic in some games, moderate in other. The idea seems natural and there is a wealth of experiment to support it.Comments:- The scores reported on A3C in this paper and in the Mnih et al. publication (table S3) differ significantly. Where does this discrepancy come from? If it's from a different training regime (fewer iterations, for instance), did the authors confirm that running their replication to the same settings as Mnih et al provide similar results?- It is intriguing that the best results of FiGAR are reported on games where few actions repeat dominate. This seems to imply that for those, the performance overhead of FiGAR over A3C is high since A3C uses an action repeat of 4 (and therefore has 4 times fewer gradient updates). A3C could be run for a comparable computation cost with a lower action repeat, which would probably result in increased performance of A3C. Nevertheless, the automatic determination of the appropriate action repeat is interesting, even if the overall message seems to be to not repeat actions too often.- Slightly problematic notation, where r sometimes denotes rewards, sometimes denotes elements of the repetition set R (top of page 5)- In the equation at the bottom of page 5 - since the sum is not indexed over decision steps, not time steps, shouldn't the rewards r_k be modified to be the sum of rewards (appropriately discounted) between those time steps?- The section on DDPG is confusingly written. \"Concatenating\" loss is a strange operation; doesn't FiGAR correspond to a loss to roughly looks like Q(x,mu(x)) + R log p(x) (with separate loss for learning the critic)? It feels that REINFORCE should be applied for the repetition variable x (second term of the sum) and reparametrization for the action a (first term)? - Is the 'name_this_game' name in the tables intentional?- A potential weakness of the method is that the agent must decide to commit to an action for a fixed number of steps, independently of what happens next. Have the authors considered a scheme in which, at each time step, the agent decides to stick with the current decision or not? (It feels like it might be a relatively simple modification of FiGAR).", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
|
2016-12-17 01:04:50
|
ICLR.cc/2017/conference
|
HywXXFCQg
|
rJ6M9dA7x
|
~Sahil_Sharma1
|
Response by Reviewer
|
{"title": "Re: Parameter Sharing", "comment": "Thanks for the additional questions. The extent of parameter-sharing in the setups was not tuned. We merely followed one possible setup in each of the 3 sets of experiments. Other setups could potentially perform better or worse. For TRPO, the decision to not sharing parameters (layers) was due to 2 factors:i) In TRPO, the neural networks we used were rather shallow at only two hidden layers deep. Hence, we believe that sharing of layers could potentially lead to negligible gains in terms of optimality of policy learnt.ii) At the same time, we did want to experiment with what happens when different extents of weight sharing are enforced in the FiGAR setup. the FiGAR-TRPO experiments are also meant to demonstrate that in a zero-sharing setup, FIGAR still manages to learn sensible policies, outperforming the baselines in many tasks.We can later add more experiments with layer-sharing for FIGAR-TRPO to be consistent in the comparisons to the other two setups. "}
|
2016-12-14 08:30:22
|
ICLR.cc/2017/conference
|
Byusd6V4g
|
H1GdCWzEx
|
~Sahil_Sharma1
|
Response by Reviewer
|
{"title": "Responses: AnonReviewer4", "comment": "Thanks for reviewing the paper, the comments and questions! We believe addressing these questions will increase the quality of the work, and we will certainly do that.-The introduction states that \"all DRL algorithms repeatedly execute a chosen action for a fixed number of time steps k\". This statement is too strong, and is actually disproved in the experiments \u2014 repeating an action is helpful in many tasks, but not in all tasks. The sentence should be rephrased to be more precise.We agree, the statement in its current form is incorrect. We will change it to \u201cmany DRL algorithms execute a chosen action for fixed number of time steps k\u201d in the next revision.-In the related work, a discussion of the relation to semi-MDPs would be useful to help the reader better understand the approach and how it compares and differs (e.g. the response from the pre-review questions)Definitely. We will add this to the next revision.-Experiments:-Can you provide error bars on the experimental results? (from running multiple random seeds)The current set of experiments took us nearly 3 months to run. Running them for a significantly large number of random seeds (say 3 or 5) would be very difficult due to the limited nature of compute resources available to us.-It would be useful to see experiments with parameter sharing in the TRPO experiments, to be more consistent with the other domains, especially since it seems that the improvement in the TRPO experiments is smaller than that of the other two domains. Right now, it is hard to tell if the smaller improvement is because of the nature of the task, because of the lack of parameter sharing, or something else.We agree. It might take us some time to add those results since we do not have access to the compute resources right now. We will definitely try to add these to the final version.-The TRPO evaluation is different from the results reported in Duan et al. ICML \u201916. Why not use the same benchmark?The evaluation procedure we have used is very similar to that used by Duan et al. ICML \u201816. The only difference is that instead of reporting average performance on training trajectories (the number of these trajectories used varies across training epochs), we report performance on a testing epoch consisting of a fixed number of trajectories, which has been inserted between every two consecutive training epochs. Note that this is to be consistent with the notion of \u201csolving a task\u201d as introduced by openai.com. We test for 100 episodes between every 2 training epochs.-Videos only show the policies learned with FiGAR, which are uninformative without also seeing the policies learned without FiGAR. Can you also include videos of the policies learned without FiGAR, as a comparison point?We have already included the videos for baseline as well. Probably youtube\u2019s default player did not suggest the correct order for the videos. We will make sure that the next revision has a link to a playlist which contains all the videos.We will additionally also add videos of Atari gameplay in the final version of the paper.-How many laps does DDPG complete without FiGAR? The difference in reward achieved seems quite substantial (557K vs. 59K).DDPG completes 2 laps without FiGAR. The complete task consists of 20 laps.-Can the tables be visualized as histograms? This seems like it would more effectively and efficiently communicate the results.Definitely. 
We will add the histograms in the main paper and shift the tables to the appendix in the final version.-- Sahil & Aravind"}
|
2016-12-19 03:21:22
|
ICLR.cc/2017/conference
|
H1pnGXJmx
|
B1GOWV5eg
|
AnonReviewer3
|
Response by Reviewer
|
{"title": "Responses: AnonReviewer4", "comment": "Thanks for reviewing the paper, the comments and questions! We believe addressing these questions will increase the quality of the work, and we will certainly do that.-The introduction states that \"all DRL algorithms repeatedly execute a chosen action for a fixed number of time steps k\". This statement is too strong, and is actually disproved in the experiments \u2014 repeating an action is helpful in many tasks, but not in all tasks. The sentence should be rephrased to be more precise.We agree, the statement in its current form is incorrect. We will change it to \u201cmany DRL algorithms execute a chosen action for fixed number of time steps k\u201d in the next revision.-In the related work, a discussion of the relation to semi-MDPs would be useful to help the reader better understand the approach and how it compares and differs (e.g. the response from the pre-review questions)Definitely. We will add this to the next revision.-Experiments:-Can you provide error bars on the experimental results? (from running multiple random seeds)The current set of experiments took us nearly 3 months to run. Running them for a significantly large number of random seeds (say 3 or 5) would be very difficult due to the limited nature of compute resources available to us.-It would be useful to see experiments with parameter sharing in the TRPO experiments, to be more consistent with the other domains, especially since it seems that the improvement in the TRPO experiments is smaller than that of the other two domains. Right now, it is hard to tell if the smaller improvement is because of the nature of the task, because of the lack of parameter sharing, or something else.We agree. It might take us some time to add those results since we do not have access to the compute resources right now. We will definitely try to add these to the final version.-The TRPO evaluation is different from the results reported in Duan et al. ICML \u201916. Why not use the same benchmark?The evaluation procedure we have used is very similar to that used by Duan et al. ICML \u201816. The only difference is that instead of reporting average performance on training trajectories (the number of these trajectories used varies across training epochs), we report performance on a testing epoch consisting of a fixed number of trajectories, which has been inserted between every two consecutive training epochs. Note that this is to be consistent with the notion of \u201csolving a task\u201d as introduced by openai.com. We test for 100 episodes between every 2 training epochs.-Videos only show the policies learned with FiGAR, which are uninformative without also seeing the policies learned without FiGAR. Can you also include videos of the policies learned without FiGAR, as a comparison point?We have already included the videos for baseline as well. Probably youtube\u2019s default player did not suggest the correct order for the videos. We will make sure that the next revision has a link to a playlist which contains all the videos.We will additionally also add videos of Atari gameplay in the final version of the paper.-How many laps does DDPG complete without FiGAR? The difference in reward achieved seems quite substantial (557K vs. 59K).DDPG completes 2 laps without FiGAR. The complete task consists of 20 laps.-Can the tables be visualized as histograms? This seems like it would more effectively and efficiently communicate the results.Definitely. 
We will add the histograms in the main paper and shift the tables to the appendix in the final version.-- Sahil & Aravind"}
|
2016-12-02 16:35:00
|
ICLR.cc/2017/conference
|
rklD4OJPx
|
S1v-fAE4e
|
AnonReviewer3
|
Response by Reviewer
|
{"title": "Citation", "comment": "Thank you for the detailed response and the resulting corrections. I noticed that your reference to the STRAW paper is missing the first author of that work. The correct citation is actually:\"Strategic Attentive Writer for Learning Macro-Actions\"Alexander Vezhnevets, Volodymyr Mnih, John Agapiou, Simon Osindero, Alex Graves, Oriol Vinyals, Koray Kavukcuoglu"}
|
2017-01-20 11:28:23
|
ICLR.cc/2017/conference
|
rJ6M9dA7x
|
SyJIkklQx
|
AnonReviewer4
|
Response by Reviewer
|
{"title": "Parameter sharing", "comment": "Thanks for your replies. Regarding parameter sharing, were these set ups tuned? Why not share any parameters for the TRPO policies?"}
|
2016-12-14 07:51:48
|
ICLR.cc/2017/conference
|
rywBok98x
|
B1GOWV5eg
|
~Sahil_Sharma1
|
Response by Reviewer
|
{"title": "Revision in response to reviewer comments and questions", "comment": "We thank all the reviewers for asking interesting questions and pointing out important flaws in the paper. We have uploaded a revised version of the paper that we believe addresses the questions raised. Major features of the revision are:1. We have added results on 2 more Atari 2600 games: Enduro and Q-bert. FiGAR seems to improve performance rather dramatically on Enduro with the FiGAR agent being close to 100 times better than the baseline A3C agent. (Note that the baseline agent performs very poorly according to the published results as well)2. In response to AnonReviewer3\u2019s comment about skipping intermediate frames, we have added Appendix F (page 23) by conducting experiments on what happens when FiGAR does not discard any intermediate frames (during evaluation phase). The general pattern seems to be that for games wherein lower action repetition is preferred, gains are made in terms of improved gameplay performance. However, for 24 out of 33 games the performance becomes worse, which depicts the importance of the temporal abstractions learnt by the action repetition part of the policy (\\pi_{\\theta_{x}}). This does not address the reviewer\u2019s question completely since at train time we still skip all the frames, as suggested by the action repetition policy. We have added a small discussion on future works section (section 6, page 10) which could potentially address this comment.3. In response to AnonReviewer3\u2019s suggestion to turn table1 into a bar graph we have done so (Figure 3, page 8) and it indeed does look much better.4. In response to AnonReviewer3\u2019s suggestion to compare directly to STRAW we have added Table 5 (Appendix A, page 14) which contains performance of STRAW models on all games which we have also experimented with. The general conclusion seems to be that in some games STRAW does better and in some games FiGAR does better.5. In response to AnonReviewer4\u2019s comment, we conducted experiments on shared representations for the FiGAR-TRPO agent. Appendix G (page 24) contains the results of the experiments. In general we observe that FiGAR-TRPO with shared representations does marginally better than FiGAR-TRPO, but not much better. The performance goes down on some tasks and improves on others. The average action repetition rate of the best policies learnt improves.6. In response to AnonReviewer4\u2019s comment on SMDPs we have added the relevant discussion to related works section (page 3).7. In response to AnonReviewer2\u2019s comment on the confusing nature of FiGAR-DDPG section, we have rewritten the section. It is hopefully clearer now. 8. In response to AnonReviewer2\u2019s comment on the confusing notation \u2018r\u2019 for action repetition we have completely changed the notation for action repetition to the letter \u2018w\u2019.9. In response to AnonReviewer2\u2019s comment on the potential weakness of the FiGAR framework, we have added a discussion on the shortcomings of the FiGAR in section 6 (page 10).10.We have corrected several typos as pointed out by the reviewers. "}
|
2017-01-16 06:42:06
|
ICLR.cc/2017/conference
|
B1PMhG8_x
|
B1GOWV5eg
|
pcs
|
ICLR committee final decision
|
{"title": "ICLR committee final decision", "comment": "The basic idea of this paper is simple: run RL over an action space that models both the actions and the number of times they are repeated. It's a simple idea, but seems to work really well on a pretty substantial variety of domains, and it can be easily adapted to many different settings. In several settings, the improvement using this approach are dramatic. I think this is an obvious accept: a simple addition to existing RL algorithms that can often perform much better. Pros: + Simple and intuitive approach, easy to implement + Extensive evaluation, showing very good performance Cons: - Sometimes unclear _why_ certain domains benefit so much from this", "decision": "Accept (Poster)"}
|
2017-02-06 15:53:51
|
ICLR.cc/2017/conference
|
Hk0_CEbQe
|
BJUyn9gmg
|
~Joji_Toyama1
|
Response by Reviewer
|
{"title": "Thanks for your comment.", "comment": "According to 1, we know this paper and it came about a week after we submitted our paper in openreview.According to 2, we should correct the reference as you commented. We will revise the paper. Thanks for advice.According to 3, we denote NMT as the monomodal translation using dl4mt baseline and we trained it by ourself. We are sure the score reported by Huang et al is with -norm. There is also their scores reported in (http://www.statmt.org/wmt16/pdf/W16-2346.pdf), which is without norm and the scores reported in Huang et al are higher than those reported in (http://www.statmt.org/wmt16/pdf/W16-2346.pdf), which indicates that the report in Huang et al is with -norm. The reason why we did not put the score without norm is that we were not sure what \"CMU_1 MNMT_C\" denotes (We were not sure what \"C\" indicates. We may be missing the part explaining about it though). About your claim that we should put other competitors score such as Moses, what we compare our model to is limited to only end-to-end neural machine models because we want to show that the neural machine translation can actually benefit from images in our way, but as you told, we should have put other competitor's score for richer comparison.Thanks for your valuable comment!!"}
|
2016-12-04 06:57:58
|
ICLR.cc/2017/conference
|
BJ2wTaWNx
|
B1G9tvcgx
|
AnonReviewer3
|
Official Review by AnonReviewer3
|
{"title": "Unclear motivation & unconvincing results", "rating": "3: Clear rejection", "review": "I have problems understanding the motivation of this paper. The authors claimed to have captured a latent representation of text and image during training and can translate better without images at test time, but didn't demonstrate convincingly that images help (not to mention the setup is a bit strange when there are no images at test time). What I see are only speculative comments: \"we observed some gains, so these should come from our image models\". The qualitative analysis doesn't convince me that the models have learned latent representations; I am guessing the gains are due to less overfitting because of the participation of images during training. The dataset is too small to experiment with NMT. I'm not sure if it's fair to compare their models with NMT and VNMT given the following description in Section 4.1 \"VNMT is fine-tuned by NMT and our models are fine-tuned with VNMT\". There should be more explanation on this.Besides, I have problems with the presentation of this paper.(a) There are many symbols being used unnecessary. For example: f & g are used for x (source) and y (target) in Section 3.1. (b) The ' symbol is not being used in a consistent manner, making it sometimes hard to follow the paper. For example, in section 3.1.2, there are references about h'_\\pi obtained from Eq. (3) which is about h_\\pi (yes, I understand what the authors mean, but there can be better ways to present that).(c) I'm not sure if it's correct in Section 3.2.2 h'_z is computed from \\mu and \\sigma. So how \\mu' and \\sigma' are being used ?(d) G+O-AVG should be something like G+O_{AVG}. The minus sign makes it looks like there's an ablation test there. Similarly for other symbols.Other things: no explanations for Figure 2 & 3. There's a missing \\pi symbol in Appendix A before the KL derivation.", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}
|
2016-12-16 20:26:48
|
ICLR.cc/2017/conference
|
rkklCqe7g
|
Syh3nLemx
|
(anonymous)
|
Response by Reviewer
|
{"title": "Small addition", "comment": "Regarding this,If you were participating to the challenge where the test set ground-truth sentences were not accessible except for official evaluators, you would have selected G+O-TXT for your best system and not G. So I think it is not good to further base your qualitative analysis on G vs VNMT and not G+O-TXT vs VNMT. Again if that was a competition submission, you wouldn't have access to your test scores and you wouldn't do your qualitative analysis using 'G'.."}
|
2016-12-03 19:32:54
|
ICLR.cc/2017/conference
|
SJNO3xMVg
|
B1G9tvcgx
|
AnonReviewer2
|
Official Review by AnonReviewer2
|
{"title": "Promising research direction but not quite there", "rating": "3: Clear rejection", "review": "This paper proposes a multimodal neural machine translation that is based upon previous work using variational methods but attempts to ground semantics with images. Considering way to improve translation with visual information seems like a sensible thing to do when such data is available. As pointed out by a previous reviewer, it is not actually correct to do model selection in the way it was done in the paper. This makes the gains reported by the authors very marginal. In addition, as the author's also said in their question response, it is not clear if the model is really learning to capture useful image semantics. As such, it is unfortunately hard to conclude that this paper contributes to the direction that originally motivated it.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
|
2016-12-16 23:44:43
|
ICLR.cc/2017/conference
|
BJUyn9gmg
|
B1G9tvcgx
|
(anonymous)
|
Response by Reviewer
|
{"title": "General comments", "comment": "1. Just for information, regarding your claim \"We also present the first translation task with which one uses a parallel corpus and images in training, while using a source corpus in translating\", there's a recent paper called \"Zero-resource Machine Translation by Multimodal Encoder-decoder Network with Multimedia Pivot\" which also proposes the very same idea (https://arxiv.org/pdf/1611.04503v1.pdf) but I don't know which one comes earlier.2. It would be nice to clarify that when you refer to Multi30k and the WMT16 challenge results, you actually refer to multimodal machine translation task (Task 1) where you have 1 English and 1 German descriptions and not the second one where you have 5 for each.3. Your results table is not very clear. What is NMT? Is it a monomodal(textual) dl4mt baseline that you trained yourself? You report the METEOR scores with '-norm' parameter but are you sure that Huang et al' reported with -norm as well? -norm gives substantially higher METEOR scores and the primary competition metric was METEOR without -norm. Also you may want to include the competition winner and Moses baseline in your results for a richer comparison.4. At the end of section 4.1, BLUE -> BLEU"}
|
2016-12-03 19:24:13
|
ICLR.cc/2017/conference
|
HkrDj-1Xe
|
B1G9tvcgx
|
AnonReviewer2
|
Response by Reviewer
|
{"title": "General comments", "comment": "1. Just for information, regarding your claim \"We also present the first translation task with which one uses a parallel corpus and images in training, while using a source corpus in translating\", there's a recent paper called \"Zero-resource Machine Translation by Multimodal Encoder-decoder Network with Multimedia Pivot\" which also proposes the very same idea (https://arxiv.org/pdf/1611.04503v1.pdf) but I don't know which one comes earlier.2. It would be nice to clarify that when you refer to Multi30k and the WMT16 challenge results, you actually refer to multimodal machine translation task (Task 1) where you have 1 English and 1 German descriptions and not the second one where you have 5 for each.3. Your results table is not very clear. What is NMT? Is it a monomodal(textual) dl4mt baseline that you trained yourself? You report the METEOR scores with '-norm' parameter but are you sure that Huang et al' reported with -norm as well? -norm gives substantially higher METEOR scores and the primary competition metric was METEOR without -norm. Also you may want to include the competition winner and Moses baseline in your results for a richer comparison.4. At the end of section 4.1, BLUE -> BLEU"}
|
2016-12-02 14:55:25
|
ICLR.cc/2017/conference
|
SJJs3z8Oe
|
B1G9tvcgx
|
pcs
|
ICLR committee final decision
|
{"title": "ICLR committee final decision", "comment": "The area chair agrees with the reviewers that this paper is not of sufficient quality for ICLR. The experimental results are weak (there might be even be some issues with the experimental methodology) and it is not at all clear whether the translation model benefits from the image data. The authors did not address the final reviews.", "decision": "Reject"}
|
2017-02-06 15:56:07
|
ICLR.cc/2017/conference
|
SyB7NBW7e
|
HkrDj-1Xe
|
~Joji_Toyama1
|
Response by Reviewer
|
{"title": "Thanks for your comment.", "comment": "Thanks for your question. We do not have the obvious evidence which shows our model is actually capturing useful semantics but we have some clues. We conclude that our model is capturing the semantic of the sentence because of the fact that our model scores the higher METEOR score compared to baselines, and some translation examples in qualitative analysis (such as Figure 6). "}
|
2016-12-04 07:22:05
|
ICLR.cc/2017/conference
|
BkiFO-Gml
|
Hk0_CEbQe
|
(anonymous)
|
Response by Reviewer
|
{"title": "C is for Constrained", "comment": "C means \"Constrained\", i.e. the model was only trained with the given training corpus and not without some other resources (that would be a U: Unconstrained)"}
|
2016-12-04 21:20:03
|
ICLR.cc/2017/conference
|
Syh3nLemx
|
Sy4luWyQx
|
~Joji_Toyama1
|
Response by Reviewer
|
{"title": "Thanks for your comment.", "comment": "In our paper, we propose 4 different architectures, G, G+O-AVG, G+O-RNN, and G+O-TXT. Their difference is how the image information is integrated into a latent variable. In the validation, we tune the best hyper parameters for each architecture with the validation dataset. The selection of architecture is not a part of hyper parameters setting.Then, we evaluate each architecture with test data.According to your comment that we should select only G+O-TXT for test, we thought it is not a problem to show all architecture's results since model selection is not part of the parameter tuning. Figure 4 suggests that G model's validation scores during the training iteration is higher than others in most cases. It can be the explanation why G scores higher than G+O-TXT in the test.I hope this can be the answer for your question. If you have further questions, it is welcomed."}
|
2016-12-03 14:54:44
|
ICLR.cc/2017/conference
|
Bkgf7ZeM4e
|
B1G9tvcgx
|
AnonReviewer1
|
Official Review by AnonReviewer1
|
{"title": "There are major issues", "rating": "4: Ok but not good enough - rejection", "review": "The paper proposes an approach to the task of multimodal machine translation, namely to the case when an image is available that corresponds to both source and target sentences. The idea seems to be to use a latent variable model and condition it on the image. In practice from Equation 3 and Figure 3 one can see that the image is only used during training to do inference. That said, the approach appears flawed, because the image is not really used for translation.Experimental results are weak. If the model selection was done properly, that is using the validation set, the considered model would only bring 0.6 METEOR and 0.2 BLEU advantage over the baseline. In the view of the overall variance of the results, these improvements can not be considered significant. The qualitative analysis in Subsection 4.4 appears inconclusive and unconvincing.Overall, there are major issues with both the approach and the execution of the paper.", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}
|
2016-12-16 22:56:26
|
ICLR.cc/2017/conference
|
rJLKBSbmg
|
rkklCqe7g
|
~Joji_Toyama1
|
Response by Reviewer
|
{"title": "Thanks for your further comment.", "comment": "We are not participating the competition, therefore it is not a competition submission of course, and we just evaluate our models with published dataset. So we thought it is not a problem to do further analysis to the model which scores the best in the test dataset.I hope this can be the answer for your question. "}
|
2016-12-04 07:27:57
|
ICLR.cc/2017/conference
|
Sy4luWyQx
|
B1G9tvcgx
|
AnonReviewer1
|
Response by Reviewer
|
{"title": "Thanks for your further comment.", "comment": "We are not participating the competition, therefore it is not a competition submission of course, and we just evaluate our models with published dataset. So we thought it is not a problem to do further analysis to the model which scores the best in the test dataset.I hope this can be the answer for your question. "}
|
2016-12-02 14:40:44
|
ICLR.cc/2017/conference
|
HJpxUk8Eg
|
B1ElR4cgg
|
AnonReviewer2
|
Official Review by AnonReviewer2
|
{"title": "", "rating": "7: Good paper, accept", "review": "This is a parallel work with BiGAN. The idea is using auto encoder to provide extra information for discriminator. This approach seems is promising from reported result.", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
|
2016-12-19 22:57:56
|
ICLR.cc/2017/conference
|
r1zt4GhNx
|
H1d32I-Ee
|
(anonymous)
|
Response by Reviewer
|
{"title": "Comparison with Salismans et al.", "comment": "Great paper! I really enjoyed reading it. The comparison with Salismans et al. might be a little unfair. ALI [1] was trained for 6475 epochs (which is a pretty large number I believe) whereas the Salisman et al. model [2] was trained for 1200 epochs only. I'm curious about how much of the improvement in the sample quality for cifar10 is because of the model being better and how much of it is due to the longer training regime.Would it be possible for the authors to share a graph of the Discriminator and Generator losses versus training progress? It would probably help us understand how important it was to run the training procedure for 6475 epochs (which seems like a rather large number). [1]: https://github.com/IshmaelBelghazi/ALI/blob/master/experiments/ali_cifar10.py#L28[2]: https://github.com/openai/improved-gan/blob/master/mnist_svhn_cifar10/train_cifar_feature_matching.py#L130"}
|
2016-12-24 15:30:01
|
ICLR.cc/2017/conference
|
HkD8tKxfe
|
B1ElR4cgg
|
(anonymous)
|
Response by Reviewer
|
{"title": "details of semi-supervised learning experiments", "comment": "What is the specific setting of the semi-supervised learning experiments? Do you use the feature from encoder or from the discriminator? And how do you add label information to the training process? In the arXiv version it is said that a L2-SVM is trained on the last few layers of the encoder, but the performance is much worse than reported in openreview version of the paper. "}
|
2016-11-21 14:48:46
|
ICLR.cc/2017/conference
|
BkaCFtnGe
|
HkD8tKxfe
|
~Mohamed_Ishmael_Belghazi1
|
Response by Reviewer
|
{"title": "Reply to details of semi-supervised learning experiments comment by Anonymous", "comment": "In the OpenReview version, we incorporate semi-supervised learning into the training of ALI by following Salimans et al.'s setup of having the discriminator output a distribution over K + 1 labels. The first K labels correspond to the classes found in the labeled dataset. The K + 1th label is that of the samples. Given this output distribution, the discriminator's value function can be broken down into supervised and unsupervised parts. The supervised part receives labeled data and their encodings. The unsupervised part receives unlabeled data and samples as well as their respective encodings. The main difference between our setup and Salimans et al.'s is that the latter does not have encodings to be fed to the discriminator."}
|
2016-11-30 17:18:12
|
ICLR.cc/2017/conference
|
Sk5zkq5Ul
|
B1ElR4cgg
|
~Vincent_Dumoulin1
|
Response by Reviewer
|
{"title": "Submission updated", "comment": "We recently updated the submission taking the reviewers' comments into account. The main change is the addition of a section discussing alternative approaches to feedforward inference in GANs and section with a toy experiment comparing ALI's behaviour with those approaches."}
|
2017-01-16 18:21:06
|
ICLR.cc/2017/conference
|
BkwcJR6mx
|
SJZmHKuQe
|
~Vincent_Dumoulin1
|
Response by Reviewer
|
{"title": "Clarifications", "comment": "Thank you for your questions.1) The adaptation of the Salimans et al. (2016) method is what is responsible for the misclassification rate reduction. Note that the semi-supervised method used in the first version of the ALI arXiv paper differs from the one used in this submission in one key aspect. In the former, semi-supervised learning is done after ALI has been trained. In the latter, semi-supervised learning is integrated into ALI's training procedure. In that sense, it is unsurprising that the Salimans et al. (2016) method performs better than a linear SVM trained on features extracted from ALI's encoder.2) The inference network is used for both methods mentioned in 1). In the first version of the arXiv paper, the encoder is used after unsupervised training as a feature extractor. We show that it outperforms the Radford et al. (2015) semi-supervised result with GANs. In this submission, the discriminator adapted for the semi-supervised learning task receives the encoder's output as input. We show that doing so allows us to match the Salimans et al. (2016) semi-supervised results without having to do feature matching.3) This paper does not claim that the conditional image generation results are better than previous work. The results are presented to support the claim that ALI is also amenable to class-conditional generative modeling.We updated the submission to highlight and clarify the difference between the semi-supervised learning results presented in the first arXiv version of the paper and this version."}
|
2016-12-13 19:44:14
|
ICLR.cc/2017/conference
|
H1d32I-Ee
|
B1ElR4cgg
|
AnonReviewer3
|
Official Review by AnonReviewer3
|
{"title": "interesting extension of GANs, promising results", "rating": "8: Top 50% of accepted papers, clear accept", "review": "This paper extends the GAN framework to allow for latent variables. The observed data set is expanded by drawing latent variables z from a conditional distribution q(z|x). The joint distribution on x,z is then modeled using a joint generator model p(x,z)=p(z)p(x|z). Both q and p are then trained by trying to fool a discriminator. This constitutes a worthwhile extension of GANs: giving GANs the ability to do inference opens up many applications that could previously only be addressed by e.g. VAEs.The results are very promising. The CIFAR-10 samples are the best I've seen so far (not counting methods that use class labels). Matching the semi-supervised results from Salimans et al. without feature matching also indicates the proposed method may improve the stability of training GANs.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
|
2016-12-16 12:23:12
|
ICLR.cc/2017/conference
|
rJ_4JqL8x
|
r1zt4GhNx
|
~Olivier_Mastropietro1
|
Response by Reviewer
|
{"title": "Response to (anonymous)", "comment": "Good point! It might not be clearly specified, but the architectures in the appendix were not used for semi-supervised learning. For comparison with Salisman et al. model, we used an architecture very close to theirs. The differences are that we need to accomodate the encoder. This results in a ConvNet which has exactly the same structure (reversed order of feature maps, nonlinearities, etc) as their generator and with convolution instead of deconvolution. There is also a very small amount of extra parameters added into their discriminator to take this into account. Also, we did stop training at 1200 epochs and the 6475 mentionned in the appendix is for the samples in the purely unsupervised setting."}
|
2017-01-13 17:32:32
|
ICLR.cc/2017/conference
|
Bk-h2liEl
|
HJpxUk8Eg
|
~Vincent_Dumoulin1
|
Response by Reviewer
|
{"title": "Response to AnonReviewer2", "comment": "We thank you for your review."}
|
2016-12-23 19:36:09
|
ICLR.cc/2017/conference
|
SJZmHKuQe
|
B1ElR4cgg
|
AnonReviewer1
|
Response by Reviewer
|
{"title": "Response to AnonReviewer2", "comment": "We thank you for your review."}
|
2016-12-09 19:25:12
|
ICLR.cc/2017/conference
|
Sy7Qhz8_x
|
B1ElR4cgg
|
pcs
|
ICLR committee final decision
|
{"title": "ICLR committee final decision", "comment": "The reviewers were positive about this paper and agree that it will make a contribution to the community.", "decision": "Accept (Poster)"}
|
2017-02-06 15:54:02
|
ICLR.cc/2017/conference
|
BkfkpgoNe
|
H1d32I-Ee
|
~Vincent_Dumoulin1
|
Response by Reviewer
|
{"title": "Response to AnonReviewer3", "comment": "We thank you for your feedback."}
|
2016-12-23 19:36:58
|
ICLR.cc/2017/conference
|
HkjImAJ7e
|
B1ElR4cgg
|
AnonReviewer2
|
Response by Reviewer
|
{"title": "Response to AnonReviewer3", "comment": "We thank you for your feedback."}
|
2016-12-03 05:08:34
|
ICLR.cc/2017/conference
|
SJ4HkJ8Vl
|
B1ElR4cgg
|
AnonReviewer1
|
Official Review by AnonReviewer1
|
{"title": "official review", "rating": "7: Good paper, accept", "review": "After reading the rebuttal, I decided to increase my score. I think ALI somehow stabilizes the GAN training as demonstrated in Fig. 8 and learns a reasonable inference network.---------------Initial Review:This paper proposes a new method for learning an inference network in the GAN framework. ALI's objective is to match the joint distribution of hidden and visible units imposed by an encoder and decoder network. ALI is trained on multiple datasets, and it seems to have a good reconstruction even though it does not have an explicit reconstruction term in the cost function. This shows it is learning a decent inference network for GAN.There are currently many ways to learn an inference network for GANs: One can learn an inference network after training the GAN by sampling from the GAN and learning a separate network to map X to Z. There is also the infoGAN approach (not cited) which trains the inference network at the same time with the generative path. I think this paper should have an extensive comparison with these other methods and have a discussion for why ALI's inference network is superior to previous works.Since ALI's inference network is stochastic, it would be great if different reconstructions of a same image is included. I believe the inference network of the BiGAN paper is deterministic which is the main difference with this work. So maybe it is worth highlighting this difference.The quality of samples is very good, but there is no quantitative experiment to compare ALI's samples with other GAN variants. So I am not sure if learning an inference network has contributed to better generative samples. Maybe including an inception score for comparison can help.There are two sets of semi-supervised results: The first one concatenate the hidden layers of the inference network and uses an L2-SVM afterwards. Ideally, concatenating feature maps is not the best way for semi-supervised learning and one would want to train the semi-supervised path at the same time with the generative path. It would have been much more interesting if part of the hidden code was a categorical distribution and another part of it was a continuous distribution like Gaussian, and the inference network on the categorical latent variable was used directly for classification (like semi-supervised VAE). In this case, the inference network would be trained at the same time with the generative path. Also if the authors can show that ALI can disentangle factors of variations with a discrete latent variable like infoGAN, it will significantly improve the quality of the paper.The second semi-supervised learning results show that ALI can match the state-of-the-art. But my impression is that the significant gain is mainly coming from the adaptation of Salimans et al. (2016) in which the discriminator is used for classification. It is unclear to me why learning an inference network help the discriminator do a better job in classification. How do we know the proposed method is improving the stability of the GAN? My understanding is that one of the main points of learning an inference network is to learn a mapping from the image to the high-level features such as class labels. So it would have been more interesting if the inference path was directly used for semi-supervised learning as I explained above.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
|
2017-01-20 19:36:02
|
ICLR.cc/2017/conference
|
rkLwVWjEx
|
SJ4HkJ8Vl
|
~Vincent_Dumoulin1
|
Response by Reviewer
|
{"title": "Response to AnonReviewer1", "comment": "We thank you for you feedback.REVIEWER POINT\u201cI think this paper should have an extensive comparison with these other methods and have a discussion for why ALI's inference network is superior to previous works.\u201dRESPONSEThe reviewer raises an important and salient point that, while we have shown that ALI does learn to do inference reasonably well, the paper doesn\u2019t do enough to directly compare with alternative ways of doing feedforward inference in a GAN setting. To address these concerns, we will shortly add two new sections to the paper: (1) a review of alternative approaches and (2) a new experiment to highlight the role of the inference network during learning. These additions are summarized below.Here is a list of alternative approaches and why they may or may not be fit for comparison with ALI:* Learning the inverse mapping from GAN samples: This corresponds to learning an encoder to reconstruct Z, i.e. encode(decode(Z ~ p(Z))) ~= Z. We are not aware of any work that reports results for this approach. Could you point out to such work if it exists?* InfoGAN: While InfoGAN should be cited as related work, InfoGAN actually does not do inference, it only estimates the discrete latent code which describes specific aspects of the image. This is why the InfoGAN paper doesn\u2019t show any reconstructions, rather it shows generated samples where they vary the latent code. Additionally, InfoGAN uses a fixed reconstruction cost for the latent code C and requires a tractable approximate posterior, q(C|X), that can be sampled from and evaluated. ALI only requires that inference networks can be sampled from, allowing it to represent arbitrarily complex posterior distributions. Combining InfoGAN and ALI could be an exciting area for future work.* Post-hoc learned inference: As verification that learning inference jointly with generation is beneficial, one can first train a GAN and then freeze the decoder and learn the encoder using the procedure proposed by ALI. In this setting, the encoder and the decoder cannot interact together during training and the encoder must work with whatever the decoder has learned during GAN training.To address this point we performed an experiment on a toy dataset for which q(X) is a 2D gaussian mixture with 25 mixture components laid out on a grid. The covariance matrices and centroids have been chosen such that the distribution exhibits lots of modes separated by large low-probability regions, which makes it a decently hard task despite the 2D nature of the dataset.We trained ALI and GAN on 100,000 q(X) samples. The decoder and discriminator architectures are identical between ALI and GAN. Each model was trained 10 times using Adam with random learning rate and beta_1 values, and the weights were initialized by drawing from a gaussian distribution with a random standard deviation.We measured the extent to which the trained models covered all 25 modes by drawing 10,000 samples from their p(X) distribution and assigning each sample to a q(X) mixture component according to the mixture responsibilities. We defined a dropped mode as one that wasn\u2019t assigned to *any* sample (which is a generous definition). 
Using this definition, we found that ALI models covered 13.4 \u00b1 5.8 modes on average (min: 8, max: 25) while GAN models covered 10.4 \u00b1 9.2 modes on average (min: 1, max: 22).We then selected the best-covering ALI and GAN models, and the GAN model was augmented with the following inference mechanisms:* Learned inverse mapping* Post-hoc learned inferenceThe encoders learned for GAN inference have the same architecture as ALI\u2019s encoder. We then compared each model\u2019s inference capabilities by reconstructing 10,000 held-out samples from q(X).A figure summarizing the experiment can be found at https://raw.githubusercontent.com/IshmaelBelghazi/ALI/master/paper/mixture_plot.png.The three columns correspond to the three different strategies for learning inference:1. ALI (our proposed strategy).2. Learning an inverse mapping from GAN samples.3. Post-hoc learned inferenceThe five rows correspond to:1. X ~ q(X) samples, i.e. test set examples, colour-coded by mixture component. They're the same for all three columns.2. Z_hat ~ q(Z | X) samples, i.e. the latent codes, also colour-coded.3. X_hat ~ p(X | Z = Z_hat) samples, i.e. the reconstructions, also colour-coded.4. Z ~ p(Z) samples, i.e. prior samples. They're the same for all three columns.5. X_tilde ~ p(X | Z) samples, i.e. generator samples.Here is what we observe:* The ALI encoder models a marginal distribution q(Z) that matches p(Z) fairly well (row 2, column 1). The learned representation does a decent job at clustering and organizing the different mixture components.* The GAN generator (row 5, columns 2-3) has more trouble reaching all the modes than the ALI generator (row 5, column 1), even over 10 runs of hyperparameter search.* Learning an inverse mapping from GAN samples does not work very well: the encoder has trouble covering the prior marginally and the way it clusters mixture components is not very well organized (row 2, column 2).* Learning inference post-hoc doesn't work as well as training the encoder and the decoder jointly. As had been discussed above, it appears that adversarial training benefits from learning inference at training time in terms of mode coverage. This also negatively impacts how the latent space is organized (row 2, column 3). However, it appears to be better at matching q(Z) and p(Z) than when inference is learned through inverse mapping from GAN samples.To summarize, this experiment provides evidence that adversarial training benefits from learning an inference mechanism jointly with the decoder. Furthermore, it shows that our proposed approach for learning inference in an adversarial setting is superior to the other approaches investigated.We will update the manuscript shortly to incorporate these additional results and cite the appropriate relevant work.REVIEWER POINT\u201cSince ALI's inference network is stochastic, it would be great if different reconstructions of a same image is included. I believe the inference network of the BiGAN paper is deterministic which is the main difference with this work. So maybe it is worth highlighting this difference.\u201dRESPONSEOur experience is that the added stochasticity does not make much of a difference: at the end of training, very little noise ends up being injected at the encoder\u2019s output. This falls in line with the invertibility results derived by Donahue et al. (2016). 
We agree that left unexplained, this difference may confuse readers and we will update the manuscript to address this question.REVIEWER POINT\u201cThe quality of samples is very good, but there is no quantitative experiment to compare ALI's samples with other GAN variants. So I am not sure if learning an inference network has contributed to better generative samples. Maybe including an inception score for comparison can help.\u201dRESPONSEWe do not claim that learning an inference network contributes to better generative samples, simply that doing so does not come at the expense of sample quality. However, as explained above, experiments on a toy dataset suggest that ALI is better at mode coverage than GAN.Regarding the use of the Inception score, we are hesitant to use it, as Odena et al. (2016) [1] found that \u201c[the] Inception accuracy can not measure whether a model has collapsed. A model that simply memorized one example from each ImageNet class would do very well by this metric.\u201dREVIEWER POINT\u201cThere are two sets of semi-supervised results [...]\u201dRESPONSEThe point which we try to make with the semi-supervised results is that: * In the first method, one can train a shallow classifier using learned features from an unsupervised model. We used the *exact same* procedure as DCGAN to build an L2-SVM on top of existing features, with the exception that the features were taken from the inference network\u2019s high-level representation as opposed to the discriminator. In this case ALI outperformed the results reported in the DCGAN paper, which suggests that for this procedure the inference network buys us something over the discriminator. To ensure that the comparison is fair, we are currently running an experiment in which DCGAN and ALI share the same architecture. We will update the manuscript when it is complete.* In the second method, one co-trains the discriminator for label classification. With this procedure ALI matches the more recent \u201cImproved methods for training GANs\u201d results while using a simpler architecture (no feature matching). In this setting, the inference network *is* used, as it provides one of the two inputs which the discriminator uses to produce its prediction.While an inference network on categorical variables like in semi-supervised VAEs does sound elegant and straightforward, it is not directly applicable to ALI. Like GANs, ALI requires that the conditional distributions p(x | z) and q(z | x) can be sampled from in a way that allows gradient backpropagation, which cannot be achieved in a straightforward manner using discrete random variables.[1] Odena, A., Olah, C., & Shlens, J. (2016). Conditional Image Synthesis With Auxiliary Classifier GANs. arXiv preprint arXiv:1610.09585."}
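The mode-coverage measurement described in the toy-mixture experiment above is easy to reproduce on synthetic data. Below is a small, self-contained sketch (not the authors' code; the grid spacing, standard deviation, and sample counts are assumptions) that builds a 5x5 Gaussian mixture, assigns samples to mixture components, and counts the covered modes.

```python
import numpy as np

rng = np.random.default_rng(0)

# 5x5 grid of Gaussian mixture components with a small, shared isotropic covariance.
centroids = np.array([[i, j] for i in range(5) for j in range(5)], dtype=float)
sigma = 0.05

def sample_mixture(n):
    idx = rng.integers(len(centroids), size=n)
    return centroids[idx] + sigma * rng.normal(size=(n, 2))

def covered_modes(samples):
    # With identical isotropic covariances, the maximum-responsibility assignment
    # reduces to nearest-centroid assignment.
    d = np.linalg.norm(samples[:, None, :] - centroids[None, :, :], axis=-1)
    assignment = d.argmin(axis=1)
    return len(np.unique(assignment))

# Samples drawn from the true mixture should cover all 25 modes;
# a collapsed generator would cover far fewer.
print(covered_modes(sample_mixture(10_000)))   # expected: 25
```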
|
2016-12-23 20:09:01
|
ICLR.cc/2017/conference
|
rktjgoeXl
|
HkjImAJ7e
|
~Alex_Lamb1
|
Response by Reviewer
|
{"title": "Connection to BiGAN", "comment": "Both methods were developed independently and published at roughly the same time (BiGAN released May 31 2016, ALI released 2 days later on June 2 2016). Both papers acknowledge that the methods were developed independently and are very similar. There are differences in the experiments and ALI considered stochastic encoders, but the two proposed models are essentially the same. "}
|
2016-12-03 19:44:33
|
ICLR.cc/2017/conference
|
Hk5_yKIVl
|
B1E7Pwqgl
|
AnonReviewer1
|
Response by Reviewer
|
{"title": "Connection to BiGAN", "comment": "Both methods were developed independently and published at roughly the same time (BiGAN released May 31 2016, ALI released 2 days later on June 2 2016). Both papers acknowledge that the methods were developed independently and are very similar. There are differences in the experiments and ALI considered stochastic encoders, but the two proposed models are essentially the same. "}
|
2016-12-20 09:52:50
|
ICLR.cc/2017/conference
|
rJPZZKVEx
|
B1E7Pwqgl
|
AnonReviewer2
|
Official Review by AnonReviewer2
|
{"title": "Interesting idea, but improperly evaluated", "rating": "3: Clear rejection", "review": "This paper introduces CoopNets, an algorithm which trains a Deep-Energy Model (DEM, the \u201cdescriptor\u201d) with the help of an auxiliary directed bayes net, e.g. \u201cthe generator\u201d. The descriptor is trained via standard maximum likelihood, with Langevin MCMC for sampling. The generator is trained to generate likely samples under the DEM in a single, feed-forward ancestral sampling step. It can thus be used to shortcut expensive MCMC sampling, hence the reference to \u201ccooperative training\u201d.The above idea is interesting and novel, but unfortunately is not sufficiently validated by the experimental results. First and foremost, two out of the three experiments do not feature a train /test split, and ignore standard training and evaluation protocols for texture generation (see [R1]). Datasets are also much too small. As such these experiments only seem to confirm the ability of the model to overfit. On the third in-painting tasks, baselines are almost non-existent: no VAEs, RBMs, DEM, etc which makes it difficult to evaluate the benefits of the proposed approach.In a future revision, I would also encourage the authors to answer the following questions experimentally. What is the impact of the missing rejection step in Langevin MCMC (train with, without ?). What is the impact of the generator on the burn-in process of the Markov chain (show sample auto-correlation). How bad is approximation of training the generator from ({\\tilde{Y}, \\hat{X}) instead of ({\\tilde{Y}, \\tilde{X}) ? Run comparative experiments.The paper would also greatly benefit from a rewrite focusing on clarity, instead of hyperbole (\u201cpioneering work\u201d in reference to closely related, but non-peer reviewed work) and prose (\u201ctale of two nets\u201d). For example, the authors fail to specify the exact form of the energy function: this seems like a glaring omission.PROS:+ Interesting and novel ideaCONS:- Improper experimental protocols- Missing baselines- Missing diagnostic experiments[R1] Heess, N., Williams, C. K. I., and Hinton, G. E. (2009). Learning generative texture models with extended fields of-experts.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
|
2016-12-18 21:34:54
|
ICLR.cc/2017/conference
|
H1m27Uevx
|
B1E7Pwqgl
|
~Yang_Lu1
|
Response by Reviewer
|
{"title": "Second Revision", "comment": "We have added three new results: (1) Synthesis results by GAN for comparison. (Figure 6(b))(2) Synthesis results by algorithm G alone for comparison. (Figure 6(a))(3) Synthesis results on 224x224 resolution. (Figure 8)"}
|
2017-01-21 03:21:14
|
ICLR.cc/2017/conference
|
SJRnkerNl
|
B1E7Pwqgl
|
AnonReviewer1
|
Official Review by AnonReviewer1
|
{"title": "Official Review", "rating": "4: Ok but not good enough - rejection", "review": "The authors proposes an interesting idea of connecting the energy-based model (descriptor) and the generator network to help each other. The samples from the generator are used as the initialization of the descriptor inference. And the revised samples from the descriptor is in turn used to updatethe generator as the target image. The proposed idea is interesting. However, I think the main flaw is that the advantages of having that architecture are not convincingly demonstrated in the experiments. For example, readers will expect quantative analysis on how initializing with the samples from the generator helps? Also, the only quantative experiment on the reconstruction is also compared to quite old models. Considering that the model is quite close to the model of Kim & Bengio 2016, readers would also expect a comparison to that model. ** Minor- I'm wondering if the analysis on the convergence is sound when considering the fact that samples from SGLD are biased samples (with fixed step size). - Can you explain a bit more on how you get Eqn 8? when p(x|y) is also dependent on W_G?", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
|
2016-12-19 05:27:17
|
ICLR.cc/2017/conference
|
HkL92MUOl
|
B1E7Pwqgl
|
pcs
|
ICLR committee final decision
|
{"title": "ICLR committee final decision", "comment": "While the paper may have an interesting theoretical contribution, it seems to greatly suffer from problems in the presentation: the basic motivation is of the system is hardly mentioned in the introduction, and the conclusion does not explain much either. I think the paper should be rewritten, and, as some of the reviewers point out, the experiments strengthened before it can be accepted for publication. (I appreciate the last-minute revisions by the authors, but I think it really came too late, 14th/21st/23rd Jan, to be taken seriously into account in the review process.)", "decision": "Reject"}
|
2017-02-06 15:55:57
|
ICLR.cc/2017/conference
|
SJeGkQvUg
|
B1E7Pwqgl
|
~Yang_Lu1
|
Response by Reviewer
|
{"title": "Reply to the Reviewers", "comment": "Dear Reviewers, Thank you for reviewing our paper and thank you for your comments! We have uploaded a revision for your consideration. Because the reviewers questioned the small training sizes in our experiments on textures and objects, we have opted to replace these experiments by a new experiment on 14 categories from standard datasets such as ImageNet and MIT place, where each training set consists of 1000 images randomly sampled from the category. Please see the experiment section as well as the appendix for the synthesis results. These are all we have got, without cheery picking. As can be seen, our method can generate meaningful and varied images. We haven\u2019t had time to tune the code. In fact, we had to recruit a new author (Ms. Ruiqi Gao) to help us run the code due to our various time constraints. With more careful tuning (including increasing image resolution), we expect to further improve the quality of synthesis. About the comparison with separate training method by either Algorithm D for descriptor or Algorithm G for generator individually, the separate training methods currently cannot produce synthesis results that are comparable to those produced by the cooperative training. This illustrates the advantage of cooperative training over separate training. In fact, the main motivation for this work is to overcome the difficulty with separate training by cooperative training. We have added a quantitative comparison with GAN for the face completion experiment, because our method is intended as an alternative to GAN. Our original code was written in MatConvNet. We moved to TensorFlow in order to use existing code of GAN. We then rewrote Algorithm G in TensorFlow for image completion. GAN did not do well in this experiment. We are still checking and tuning our code to improve GAN performance. We want to emphasize that we are treating the following two issues separately:(1) Train-test split and quantitative evaluation of generalizability. (2) Image synthesis judged qualitatively. While the face completion experiment is intended to address (1), the synthesis experiment is intended to address (2). In fact, the generator network captures people\u2019s imagination mainly because of (2) (at least this is the case with ourselves), and some GAN papers are more qualitative than quantitative. We will continue to work on experiments, to further address the questions raised by the reviewers and to continue to strengthen the quantitative side. We have also made some minor changes to incorporate the reviewers\u2019 suggestions on wording and additional references. As to the energy function, in particular, f(Y; W), for the descriptor, it is defined by a bottom-up ConvNet that maps the image Y to a score (very much like a discriminator), and we give the details of this ConvNet in the experiment section. We feel we made this clear in the original version. As to equation (8), we have expanded the derivation. Equations (16) and (17) are about finite step Langevin dynamics. Finally please allow us to make some general comments regarding our paper. Our paper addresses the core issue of this conference, i.e., learning representations in the form of probabilistic generative models. There are two types of such papers: (1) Build on the successes of GAN. (2) Explore new connections and new routes. We believe that papers in these two categories should be judged differently. Our paper belongs to category (2). 
It explores the connection between undirected model (descriptor) and directed model (generator). It also explores the connection between MCMC sampling (descriptor) and ancestral sampling (generator). Furthermore, it explores the new ground where two models interact with each other via synthesized data. We have also tried hard to gain a theoretical understanding of our method in appendix. There have been a lot of papers in category (1) recently. We hope that the conference will be more open to the relatively fewer papers in category (2). In fact we are heartened that all three reviewers find our work interesting, and we can continue to improve our experiments. Thanks for your consideration, and thanks for your comments that have helped us improve our work. "}
|
2017-01-14 05:39:09
|
ICLR.cc/2017/conference
|
SyI6m6f4e
|
B1E7Pwqgl
|
AnonReviewer3
|
Official Review by AnonReviewer3
|
{"title": "", "rating": "6: Marginally above acceptance threshold", "review": "This paper proposed a new joint training scheme for two probabilistic models of signals (e.g. images) which are both deep neural network based and are termed generator and descriptor networks. In the new scheme, termed cooperative training, the two networks train together and assist each other: the generator network provides samples that work as initial samples for the descriptor network, and the descriptor network updates those samples to help guide training of the generator network.This is an interesting approach for coupling the training of these two models. The paper however is quite weak on the empirical studies. In particular:- The training datasets are tiny, from sets of 1 image to 5-6. What is the reason for not using larger sets? I think the small datasets are leading to over training and are really masking the true value of the proposed cooperative training approach.- For most of the experiments presented in the paper it is hard to assess the specific value brought by the proposed cooperative training approach because baseline results are missing. There are comparisons provided for face completion experiments - but even there comparisons with descriptor or generator network trained separately or with other deep auto-encoders are missing. Thus it is hard to conclude if and how much gain is obtained by cooperative training over say individually training the descriptor and generator networks.Another comment is that in the \u201crelated work\u201d section, I think relation with variational auto encoders (Kingma and Welling 2013) should be included.Despite limitations mentioned above, I think the ideas presented in the paper are intuitively appealing and worth discussing at ICLR. Paper would be considerably strengthened by adding more relevant baselines and addressing the training data size issues.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
|
2016-12-17 13:55:09
|
ICLR.cc/2017/conference
|
SyKz5JJQl
|
B1E7Pwqgl
|
AnonReviewer2
|
Official Review by AnonReviewer3
|
{"title": "", "rating": "6: Marginally above acceptance threshold", "review": "This paper proposed a new joint training scheme for two probabilistic models of signals (e.g. images) which are both deep neural network based and are termed generator and descriptor networks. In the new scheme, termed cooperative training, the two networks train together and assist each other: the generator network provides samples that work as initial samples for the descriptor network, and the descriptor network updates those samples to help guide training of the generator network.This is an interesting approach for coupling the training of these two models. The paper however is quite weak on the empirical studies. In particular:- The training datasets are tiny, from sets of 1 image to 5-6. What is the reason for not using larger sets? I think the small datasets are leading to over training and are really masking the true value of the proposed cooperative training approach.- For most of the experiments presented in the paper it is hard to assess the specific value brought by the proposed cooperative training approach because baseline results are missing. There are comparisons provided for face completion experiments - but even there comparisons with descriptor or generator network trained separately or with other deep auto-encoders are missing. Thus it is hard to conclude if and how much gain is obtained by cooperative training over say individually training the descriptor and generator networks.Another comment is that in the \u201crelated work\u201d section, I think relation with variational auto encoders (Kingma and Welling 2013) should be included.Despite limitations mentioned above, I think the ideas presented in the paper are intuitively appealing and worth discussing at ICLR. Paper would be considerably strengthened by adding more relevant baselines and addressing the training data size issues.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
|
2016-12-02 12:33:21
|
ICLR.cc/2017/conference
|
HJFPH0o7g
|
H1AOHfOQx
|
~Jianwen_Xie1
|
Response by Reviewer
|
{"title": "Reply to AnonReviewer3", "comment": "Dear Reviewer, Thanks for reviewing our paper and thanks for your good question!You are definitely correct that we may take more samples from each chain of Langevin dynamics. We take only one sample after running a finite number of Langevin steps within each learning iteration, mainly because of the auto-correlation between consecutive steps. During the learning algorithm, the parameters change gradually, so that the target distribution of the Langevin dynamics also changes gradually. We therefore only take one sample from the last Langevin step within each learning iteration as a matter of burn-in. Of course this may be overly cautious, and we may instead average over the second half of the Langevin chain or take more samples as you suggested. We can do experiments to study this issue. About using one Monte Carlo sample in each learning iteration versus averaging many samples, the learning algorithms for both the descriptor and the generator are stochastic gradient or stochastic approximation of Robbins-Monro, where the expectation is replaced by a single Monte Carlo sample in each iteration. The algorithm converges because it essentially accumulates the effects of the Monte Carlo samples over the iterations, so that the Monte Carlo variance is gradually eliminated. The basic idea is that instead of generating many Monte Carlo samples with fixed parameters, we may keep updating the parameters while generating these samples with the gradually changing parameters. Of course the contrast between one sample versus many samples is not mutually exclusive. We may use a small number of samples as a middle ground, and we can do experiments to study this issue. This is similar to the mini-batch training algorithm in supervised learning, which is also of Robbins-Monro type. The cooperative training is beneficial to both the descriptor and generator. For the descriptor, each Langevin chain is initialized from a fresh independent sample provided by the generator, thus eliminating auto-correlation between samples of different learning iterations. Without the generator, we may have to run persistent chains, which may take a long time to explore the sample space due to auto-correlation between different learning iterations. For the generator, it learns from synthesized samples where the latent factors are known, so that the learning is effectively supervised. It appears that we may insert the generator in the loop of any MCMC for any target distribution. The generator helps rejuvenate the MCMC by supplying fresh independent samples in each iteration, while the MCMC guides the generator towards the target distribution. In the end, we can generate independent samples from the target distribution using the learned generator directly without MCMC. Thanks again for your question, which helps us clarify our presentation. "}
|
2016-12-12 07:44:33
|
ICLR.cc/2017/conference
|
H1AOHfOQx
|
B1E7Pwqgl
|
AnonReviewer3
|
Response by Reviewer
|
{"title": "Reply to AnonReviewer3", "comment": "Dear Reviewer, Thanks for reviewing our paper and thanks for your good question!You are definitely correct that we may take more samples from each chain of Langevin dynamics. We take only one sample after running a finite number of Langevin steps within each learning iteration, mainly because of the auto-correlation between consecutive steps. During the learning algorithm, the parameters change gradually, so that the target distribution of the Langevin dynamics also changes gradually. We therefore only take one sample from the last Langevin step within each learning iteration as a matter of burn-in. Of course this may be overly cautious, and we may instead average over the second half of the Langevin chain or take more samples as you suggested. We can do experiments to study this issue. About using one Monte Carlo sample in each learning iteration versus averaging many samples, the learning algorithms for both the descriptor and the generator are stochastic gradient or stochastic approximation of Robbins-Monro, where the expectation is replaced by a single Monte Carlo sample in each iteration. The algorithm converges because it essentially accumulates the effects of the Monte Carlo samples over the iterations, so that the Monte Carlo variance is gradually eliminated. The basic idea is that instead of generating many Monte Carlo samples with fixed parameters, we may keep updating the parameters while generating these samples with the gradually changing parameters. Of course the contrast between one sample versus many samples is not mutually exclusive. We may use a small number of samples as a middle ground, and we can do experiments to study this issue. This is similar to the mini-batch training algorithm in supervised learning, which is also of Robbins-Monro type. The cooperative training is beneficial to both the descriptor and generator. For the descriptor, each Langevin chain is initialized from a fresh independent sample provided by the generator, thus eliminating auto-correlation between samples of different learning iterations. Without the generator, we may have to run persistent chains, which may take a long time to explore the sample space due to auto-correlation between different learning iterations. For the generator, it learns from synthesized samples where the latent factors are known, so that the learning is effectively supervised. It appears that we may insert the generator in the loop of any MCMC for any target distribution. The generator helps rejuvenate the MCMC by supplying fresh independent samples in each iteration, while the MCMC guides the generator towards the target distribution. In the end, we can generate independent samples from the target distribution using the learned generator directly without MCMC. Thanks again for your question, which helps us clarify our presentation. "}
|
2016-12-09 11:28:53
|
ICLR.cc/2017/conference
|
SktJh1fXe
|
SyKz5JJQl
|
~Yang_Lu1
|
Response by Reviewer
|
{"title": "Reply to AnonReviewer2", "comment": "Dear Reviewer, Thank you for spending time reviewing our paper!We totally agree with you that we should pay attention to the issue of overfitting and generalizability in terms of train/test split. This is exactly the reason we have conducted the face completion experiment on the testing data. We can also conduct similar experiments on texture examples. Thank you for the reference. We will study it, and will study the issue of generalizability in texture experiments quantitatively. Meanwhile we do wish to point out that the main purpose of our texture experiments is to show qualitatively that our method is capable of generating realistic and varied high-resolution images (224x224). Because of the learned generator, the image generation is very efficient. About the image completion experiment and Table 1, as mentioned above, our goal is to test whether the learned model can generalize to the testing images. We have not implemented more recent methods on image completion. We can study them and implement them. Our quantitative experiment shows that the learned model can indeed generalize to the testing images. Another feature of our method is that even if the recovered image may be different from the original image (in terms of reconstruction error) when the occluding mask is big (e.g., a man\u2019s face is recovered into a woman\u2019s face in one of the examples shown in the paper), the recovered image is still perceptually plausible. Thanks for your good questions. "}
|
2016-12-04 19:17:52
|
ICLR.cc/2017/conference
|
BJaJDsH4x
|
B186cP9gx
|
AnonReviewer3
|
Official Review by AnonReviewer3
|
{"title": "interesting insights, but lack of control experiments", "rating": "4: Ok but not good enough - rejection", "review": "The paper analyzes the properties of the Hessian of the training objective for various neural networks and data distributions. The authors study in particular, the eigenspectrum of the Hessian, which relates to the difficulty and the local convexity of the optimization problem.While there are several interesting insights discussed in this paper such as the local flatness of the objective function, as well as the study of the relation between data distribution and Hessian, a somewhat lacking aspect of the paper is that most described effects are presented as general, while tested only in a specific setting, without control experiments, or mathematical analysis.For example, regarding the concentration of eigenvalues to zero in Figure 6, it is unclear whether the concentration effect is really caused by training (e.g. increasing insensitivity to local perturbations), or the consequence of a specific choice of scale for the initial parameters.In Figure 8, the complexity of the data is not defined. It is not clear whether two fully overlapping distributions (the Hessian would then become zero?) is considered as complex or simple data.Some of the plots legends (Fig. 1 and 2) and labels are unreadable in printed format. Plots of Figure 3 don't have the same range for the x-axis. The image of Hessian matrix of Figure 1 does not render properly in printed format.", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}
|
2016-12-19 18:28:52
|
ICLR.cc/2017/conference
|
HJQ_Mb1vl
|
rywRYhZVl
|
~Levent_Sagun1
|
Response by Reviewer
|
{"title": "Interpretation and response/revisions to the questions", "comment": "Clearly, we can't say that the Hessian of the loss function is degenerate everywhere in this empirical study in which we don't attempt to check all the points in theoretical or empirical ways. What we can say, however, is that the Hessian of the loss function is degenerate at the point where training begins and ends. This is indeed the body of our work. Moreover, we can presume the statement will be valid for the intermediate points over the course of the training, as well. We clarified this further in the text.1) The actual one that is computed through the Hessian vector product using Lop: Barak A. Pearlmutter, \u201cFast Exact Multiplication by the Hessian\u201d, Neural Computation, 1994, -we added the reference in the text. (Also the approximate Hessian gives similar results.)2) We use gradient descent (the batch method, when the minibatch size is equal to the number of examples in the dataset) in our main expeirments. We revised the text to clarify this. The only exception is the line interpolation that is presented in the conclusion which compares GD and SGD. However, we would like to note that the network is *not* trained to its local minimum. We train the network for a long time even after the training cost is stabilized. Yet, the norm of the gradient is typically at the order of 10^{-3}. It is pretty laborious to get a network that uses a log-loss to a level where the norm of the gradient is at the order of say 10^{-10} or lower. The basin that the point is in at the stopping time is, for all practical purposes, a local minimum (see ICLR 2015 Workshop, Explorations on high dimensional landscapes, Sagun et al.).3) Thank you for the suggestion! Preliminary results with the mean square error also gives singular Hessian, pretty much in line with the observations laid out in our work. We added this new experiment in our work, as well.4) The short answer is (b). We included this in the previous bullet point, and we attempted to clarify it further. This point is actually one of the practical challenges in modern day deep learning. Pretty much none of the current models *converge* in the sense that they find a local minimum, the models stop way before it finds one, and for all practical purposes the point that did *not* converge doesn't perform any worse when one keeps training the model until it finds a point where the norm of the gradient is zero within the numerical accuracy. As references in the paper, there are reasons for GD to converge to a point where its Hessian has only non-negative eigenvalues. From a practical point of view, reaching stability in this dynamics would take a long time. In our work, we focus on the points we can find in practice, the ones whose grad_norm is relatively small, so that the gradient doesn't give any substantial signal for the gradient based model to move in the weight space. "}
|
2017-01-20 03:22:19
|
ICLR.cc/2017/conference
|
rkuW-dvXe
|
B186cP9gx
|
AnonReviewer3
|
Response by Reviewer
|
{"title": "Interpretation and response/revisions to the questions", "comment": "Clearly, we can't say that the Hessian of the loss function is degenerate everywhere in this empirical study in which we don't attempt to check all the points in theoretical or empirical ways. What we can say, however, is that the Hessian of the loss function is degenerate at the point where training begins and ends. This is indeed the body of our work. Moreover, we can presume the statement will be valid for the intermediate points over the course of the training, as well. We clarified this further in the text.1) The actual one that is computed through the Hessian vector product using Lop: Barak A. Pearlmutter, \u201cFast Exact Multiplication by the Hessian\u201d, Neural Computation, 1994, -we added the reference in the text. (Also the approximate Hessian gives similar results.)2) We use gradient descent (the batch method, when the minibatch size is equal to the number of examples in the dataset) in our main expeirments. We revised the text to clarify this. The only exception is the line interpolation that is presented in the conclusion which compares GD and SGD. However, we would like to note that the network is *not* trained to its local minimum. We train the network for a long time even after the training cost is stabilized. Yet, the norm of the gradient is typically at the order of 10^{-3}. It is pretty laborious to get a network that uses a log-loss to a level where the norm of the gradient is at the order of say 10^{-10} or lower. The basin that the point is in at the stopping time is, for all practical purposes, a local minimum (see ICLR 2015 Workshop, Explorations on high dimensional landscapes, Sagun et al.).3) Thank you for the suggestion! Preliminary results with the mean square error also gives singular Hessian, pretty much in line with the observations laid out in our work. We added this new experiment in our work, as well.4) The short answer is (b). We included this in the previous bullet point, and we attempted to clarify it further. This point is actually one of the practical challenges in modern day deep learning. Pretty much none of the current models *converge* in the sense that they find a local minimum, the models stop way before it finds one, and for all practical purposes the point that did *not* converge doesn't perform any worse when one keeps training the model until it finds a point where the norm of the gradient is zero within the numerical accuracy. As references in the paper, there are reasons for GD to converge to a point where its Hessian has only non-negative eigenvalues. From a practical point of view, reaching stability in this dynamics would take a long time. In our work, we focus on the points we can find in practice, the ones whose grad_norm is relatively small, so that the gradient doesn't give any substantial signal for the gradient based model to move in the weight space. "}
|
2016-12-08 23:47:11
|
ICLR.cc/2017/conference
|
BJNQN-vXe
|
B186cP9gx
|
AnonReviewer1
|
Response by Reviewer
|
{"title": "Interpretation and response/revisions to the questions", "comment": "Clearly, we can't say that the Hessian of the loss function is degenerate everywhere in this empirical study in which we don't attempt to check all the points in theoretical or empirical ways. What we can say, however, is that the Hessian of the loss function is degenerate at the point where training begins and ends. This is indeed the body of our work. Moreover, we can presume the statement will be valid for the intermediate points over the course of the training, as well. We clarified this further in the text.1) The actual one that is computed through the Hessian vector product using Lop: Barak A. Pearlmutter, \u201cFast Exact Multiplication by the Hessian\u201d, Neural Computation, 1994, -we added the reference in the text. (Also the approximate Hessian gives similar results.)2) We use gradient descent (the batch method, when the minibatch size is equal to the number of examples in the dataset) in our main expeirments. We revised the text to clarify this. The only exception is the line interpolation that is presented in the conclusion which compares GD and SGD. However, we would like to note that the network is *not* trained to its local minimum. We train the network for a long time even after the training cost is stabilized. Yet, the norm of the gradient is typically at the order of 10^{-3}. It is pretty laborious to get a network that uses a log-loss to a level where the norm of the gradient is at the order of say 10^{-10} or lower. The basin that the point is in at the stopping time is, for all practical purposes, a local minimum (see ICLR 2015 Workshop, Explorations on high dimensional landscapes, Sagun et al.).3) Thank you for the suggestion! Preliminary results with the mean square error also gives singular Hessian, pretty much in line with the observations laid out in our work. We added this new experiment in our work, as well.4) The short answer is (b). We included this in the previous bullet point, and we attempted to clarify it further. This point is actually one of the practical challenges in modern day deep learning. Pretty much none of the current models *converge* in the sense that they find a local minimum, the models stop way before it finds one, and for all practical purposes the point that did *not* converge doesn't perform any worse when one keeps training the model until it finds a point where the norm of the gradient is zero within the numerical accuracy. As references in the paper, there are reasons for GD to converge to a point where its Hessian has only non-negative eigenvalues. From a practical point of view, reaching stability in this dynamics would take a long time. In our work, we focus on the points we can find in practice, the ones whose grad_norm is relatively small, so that the gradient doesn't give any substantial signal for the gradient based model to move in the weight space. "}
|
2016-12-08 16:02:35
|
ICLR.cc/2017/conference
|
rkJ0ADgEx
|
rkuW-dvXe
|
~Levent_Sagun1
|
Response by Reviewer
|
{"title": "Clarification for the model", "comment": "- Referring to the previous comment: If the number of hidden units is k then the number of eigenvalues in figure 1 (right) is (784 + 1)*k + (k + 1)*k + (k + 1)*10- The x-axis is the numerical values for the eigenvalues, the range is linear and includes all the eigenvalues so there are no further eigenvalues above or below the limits in the histograms.- Figure 4 is after convergence for all 5 systems, and the models are *not* regularized. (We will add clarifications for all of these points in the text)"}
|
2016-12-15 19:28:06
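A quick arithmetic check of the eigenvalue-count formula quoted in this reply (a sketch; the helper name is ours). For k = 2 it reproduces the 1606 quoted later in the thread.

```python
def n_params(k, n_in=784, n_out=10):
    # (784 + 1)*k + (k + 1)*k + (k + 1)*10: weights plus biases, one count per Hessian row/column.
    return (n_in + 1) * k + (k + 1) * k + (k + 1) * n_out

print(n_params(2))   # 1606, matching the Hessian side length quoted for the k = 2 network
print(n_params(10))  # 8070 eigenvalues for the 10-hidden-unit model
```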
|
ICLR.cc/2017/conference
|
S1nUxb0me
|
B186cP9gx
|
AnonReviewer2
|
Official Review by AnonReviewer2
|
{"title": "This paper presents empirical evidence for the singularity of the Hessian. This works has interesting experiments and observations but the paper needs more work and it is not a complete work.", "rating": "4: Ok but not good enough - rejection", "review": "Studying the Hessian in deep learning, the experiments in this paper suggest that the eigenvalue distribution is concentrated around zero and the non zero eigenvalues are related to the complexity of the input data. I find most of the discussions and experiments to be interesting and insightful. However, the current paper could be significantly improved.Quality:It seems that the arguments in the paper could be enhanced by more effort and more comprehensive experiments. Performing some of the experiments discussed in the conclusion could certainly help a lot. Some other suggestions:1- It would be very helpful to add other plots showing the distribution of eigenvalues for some other machine learning method for the purpose of comparison to deep learning.2- There are some issues about the scaling of the weights and it make sense to normalize the weights each time before calculating the Hessian otherwise the result might be misleading.3- It might worth trying to find a quantity that measures the singularity of Hessian because it is difficult to visually conclude something from the plots.4- Adding some plots for the Hessian during the optimization is definitely needed because we mostly care about the Hessian during the optimization not after the convergence.Clarity:1- There is no reference to figures in the main text which makes it confusing for the reading to know the context for each figure. For example, when looking at Figure 1, it is not clear that the Hessian is calculated at the beginning of optimization or after convergence.2- The texts in the figures are very small and hard to read.", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}
|
2016-12-13 23:12:20
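Point 3 of this review asks for a quantity that measures the singularity of the Hessian. One simple candidate (our illustration, not something proposed in the paper or by the reviewer) is the fraction of eigenvalues whose magnitude falls below a tolerance:

```python
import numpy as np

def spectrum_summary(eigenvalues, tol=1e-6):
    # Crude scalar summaries of how singular the Hessian is; tol is an arbitrary illustrative choice.
    ev = np.asarray(eigenvalues)
    return {
        "frac_near_zero": float(np.mean(np.abs(ev) < tol)),
        "frac_negative": float(np.mean(ev < -tol)),
        "largest_eigenvalue": float(ev.max()),
    }
```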
|
ICLR.cc/2017/conference
|
SkSsnfU_l
|
B186cP9gx
|
pcs
|
ICLR committee final decision
|
{"title": "ICLR committee final decision", "comment": "This is quite an important topic to understand, and I think the spectrum of the Hessian in deep learning deserves more attention. However, all 3 official reviewers (and the public reviewer) comment that the paper needs more work. In particular, there are some concerns that the experiments are too preliminary/controlled and about whether the algorithm has actually converged. One reviewer also comments that the work is lacking a key insight/conclusion. I like the topic of the paper and would encourage the authors to pursue it more deeply, but at this time all reviewers have recommended rejection.", "decision": "Reject"}
|
2017-02-06 15:56:12
|
ICLR.cc/2017/conference
|
rywRYhZVl
|
B186cP9gx
|
(anonymous)
|
Official Review by (anonymous)
|
{"title": "Interesting empirical observations, but unfortunately no theory", "rating": "4: Ok but not good enough - rejection", "review": "The work presents some empirical observations to support the statement that \u201cthe Hessian of the loss functions in deep learning is degenerate\u201d. But what does this statement refer to? To my understanding, there are at least three interpretations:(i) The Hessian of the loss functions in deep learning is degenerate at any point in the parameter space, i.e., any network weight matrices.(ii) The Hessian of the loss functions in deep learning is degenerate at any critical point.(iii) The Hessian of the loss functions in deep learning is degenerate at any local minimum, or any global minimum.None of these interpretations is solidly supported by the observations provided in the paper.More comments are as follows:1) The authors state that \u201cwe don\u2019t have much information on what the actual Hessian looks like.\u201d Then I just wonder what Hessian is investigated. Is it the actual one or approximate one? Please clarify and provide the references for computing the actual Hessian.2) It is not clear whether the optimization was done by a batch gradient descent algorithm, i.e., batch back propagation (BP) algorithm, or a stochastic BP algorithm. If the training was done via a stochastic BP algorithm, it is hard to conclude that the the Neural Network has been trained to its local minimum. When it was done by a full-batch BP algorithm, what was the accumulating point? Was it local minimum or global minimum?3) Since the negative log likelihood function was used as at the end of training, it is essentially a joint learning approach in both the Newton weight matrices and the negative log likelihood vector. Certainly, the whole loss function is not convex in these two parameters. But if least squares error function is used at the end, would it make any difference in claiming the degeneracy of the Hessian?4) Finally, the statement \u201cThere are still negative eigenvalues even when they are small in magnitude\u201d is very puzzling. Potential reasons are:(a) If the training algorithm did converge, the accumulating points were not local minima, i.e., they were saddle points.(b) Training algorithms did not converge, or have not converged yet.(c) The calculation of the actual Hessian might be inaccurate.", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}
|
2016-12-17 08:27:11
|
ICLR.cc/2017/conference
|
HyAY3ZUEx
|
B186cP9gx
|
AnonReviewer1
|
Official Review by AnonReviewer1
|
{"title": "Important problem, but should be better situated in related work", "rating": "3: Clear rejection", "review": "This paper investigates the hessian of small deep networks near the end of training. The main result is that many eigenvalues are approximately zero, such that the Hessian is highly singular, which means that a wide amount of theory does not apply.The overall point that deep learning algorithms are singular, and that this undercuts many theoretical results, is important but it has already been made: Watanabe. \u201cAlmost All Learning Machines are Singular\u201d, FOCI 2007. This is one paper in a growing body of work investigating this phenomenon. In general, the references for this paper could be fleshed out much further\u2014a variety of prior work has examined the Hessian in deep learning, e.g., Dauphin et al. \u201cIdentifying and attacking the saddle point problem in high dimensional non-convex optimization\u201d NIPS 2014 or the work of Amari and others.Experimentally, it is hard to tell how results from the small sized networks considered here might translate to much larger networks. It seems likely that the behavior for much larger networks would be different. A reason for optimism, though, is the fact that a clear bulk/outlier behavior emerges even in these networks. Characterizing this behavior for simple systems is valuable. Overall, the results feel preliminary but likely to be of interest when further fleshed out.This paper is attacking an important problem, but should do a better job situating itself in the related literature and undertaking experiments of sufficient size to reveal large-scale behavior relevant to practice.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
|
2016-12-20 01:42:29
|
ICLR.cc/2017/conference
|
BJncUT0Ll
|
BJaJDsH4x
|
~Levent_Sagun1
|
Response by Reviewer
|
{"title": "Regarding the controlled experiments", "comment": "We realized that the text hasn't been clear in the experiments that we performed. We clarified the text. Figure 1 and figure 2 (left) show the beginning and the end of the model with 10-hidden units which is consistent with figure 6. In both cases of the simple data and MNIST the initial point is chosen randomly on the surface of a sphere centered at zero with fixed radius (the radius is depends on the number of hidden units).The complexity of data can be tricky to describe, a data can be more complex for a certain model but less so for another one. Thank you for pointing this out, we clarified this point on the data complexity by the ease of separability in the text. We also updated the axes and labels in fig 1 and 2. We changed the rendering format for the Hessian matrix increasing the sharpness and resolution. For figure 4 we would like to emphesize the two components of the spectrum, therefore we picked the widest possible representation for each case separately. However, we would also like to note that another series of experiments will explore the scale of eigenvalues depending on the iteration number during the training."}
|
2017-01-19 23:06:59
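The initialization described in this reply (a random point on a sphere of fixed radius centered at zero) can be written in a couple of lines; the dimension and radius below are placeholders, not the values used in the paper.

```python
import numpy as np

def init_on_sphere(dim, radius, rng=None):
    # Draw a Gaussian vector, normalize it to the unit sphere, then scale to the given radius.
    rng = np.random.default_rng() if rng is None else rng
    v = rng.standard_normal(dim)
    return radius * v / np.linalg.norm(v)

w0 = init_on_sphere(dim=1606, radius=1.0)  # illustrative values only
```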
|
ICLR.cc/2017/conference
|
B1XV6ryQx
|
B186cP9gx
|
AnonReviewer2
|
Response by Reviewer
|
{"title": "Regarding the controlled experiments", "comment": "We realized that the text hasn't been clear in the experiments that we performed. We clarified the text. Figure 1 and figure 2 (left) show the beginning and the end of the model with 10-hidden units which is consistent with figure 6. In both cases of the simple data and MNIST the initial point is chosen randomly on the surface of a sphere centered at zero with fixed radius (the radius is depends on the number of hidden units).The complexity of data can be tricky to describe, a data can be more complex for a certain model but less so for another one. Thank you for pointing this out, we clarified this point on the data complexity by the ease of separability in the text. We also updated the axes and labels in fig 1 and 2. We changed the rendering format for the Hessian matrix increasing the sharpness and resolution. For figure 4 we would like to emphesize the two components of the spectrum, therefore we picked the widest possible representation for each case separately. However, we would also like to note that another series of experiments will explore the scale of eigenvalues depending on the iteration number during the training."}
|
2016-12-02 19:36:10
|
ICLR.cc/2017/conference
|
BkVnDDg4e
|
BJNQN-vXe
|
~Levent_Sagun1
|
Response by Reviewer
|
{"title": "Response to questions for details", "comment": "- Yes, Figure 1 is at the end of training.- We are counting the number of weights.- They are per layer, but the ones in figure 1 have one hidden layer, not two, we apologize for the confusion. If the number of hidden units is k then the number of eigenvalues in figure 1 (right) is (784 + 1)*k + (k + 1)*k + (k + 1)*10- The matrix in figure 1 (left) is much smaller due to the initial space constraints, it's when k = 2, so there are (784 + 1)*2 + (2 + 1)*2 + (2 + 1)*10 = 1606 in each axis, with a total of 1606^2 values. It would be a good idea to make this bigger by saving the entries of the Hessian separately for each column.- The geometry of the bottom has been explored in another ICLR submission at https://arxiv.org/abs/1611.01540 in which the authors find paths between solutions which is intimately related to our work through the flatness of the landsacape at the bottom. Another work that makes use of the ideas developed here is https://arxiv.org/abs/1611.01838. Even though the main goal of that work is very different (convolve the loss function to obtain a smoother one with nicer training properties), the degenerate structure seems to have been inspirational."}
|
2016-12-15 18:57:48
|
ICLR.cc/2017/conference
|
S14tvRl4g
|
S1nUxb0me
|
~Levent_Sagun1
|
Response by Reviewer
|
{"title": "Update", "comment": "Thank you for the comments and the review. Quantification of singularity is crucial but it may be tricky and even misleading given the degenerate structure, therefore we thought it would be equally important to lay out the observations first, and then as a separate work consider the quantification. We revised the paper in an attempt to address most of the issues mentioned above and in other comments. Also, we will add a comparison with another ML method, very soon. "}
|
2016-12-16 02:54:52
|
ICLR.cc/2017/conference
|
rkKCtIxEl
|
B1XV6ryQx
|
~Levent_Sagun1
|
Response by Reviewer
|
{"title": "Eigenvalues of the Hessian at the beginning", "comment": "We looked at the eigenvalues at the end of training to see the kind of critical points the training finds. In other words, when we follow a gradient based method to locate a point with the norm of the gradient zero, in an ideal case where all eigenvalues are non-zero, the question would be to look at the Hessian there to describe the nature of the critical point judging by the number of negative eigenvalues it has. It turns out, in the case of neural networks most of them are zero or very small. This leads to a drastically different geometry for the bottom of the landscape. We completely agree that the Hessian throughout the optimization is also important from the point of view of the training, however, we believe that this is a different question than just wondering about the shape of the bottom of the landscape. We will gladly work on this aspect, as well.In Figure 7, the norm of the weights of the parameters are, in fact, very similar to each other. We believe that the change in the sizes of eigenvalues should depend on a quantity other than the norm of the weights. However, it's a great idea to perform a controlled experiment in which the norm of the weights for separate layers are monitored to show that this is indeed not caused by the norm of the weights."}
|
2016-12-15 17:58:41
|
ICLR.cc/2017/conference
|
B1_lfv0Ig
|
HyAY3ZUEx
|
~Levent_Sagun1
|
Response by Reviewer
|
{"title": "Regarding the references", "comment": "Thank you very much for pointing out at the FOCI 2007 paper, it is certainly relevant, even though the main object is the Fisher information matrix rather than the Hessian of the cost function. Dauphin et al. is interested only in saddle points near the path of training. These two works are indeed remotely related to the line of research that our work is invested in. However, their objective and results are different. We have added more references and clarified our contributions."}
|
2017-01-19 15:57:35
|
ICLR.cc/2017/conference
|
BJXGr-1De
|
B186cP9gx
|
~Levent_Sagun1
|
Response by Reviewer
|
{"title": "Revisions", "comment": "In light of the constructive feedback, we revised the paper with multiple edits and figure revisions. We also included a new experiment showing that the phenomena that we observe is not specific to the log-loss, that it holds in the case of MSE loss, as well. We also noticed that the title can be misleading in that it may suggest that our work's focus is on the singularity of Hessian only. Indeed, the good part of the work is related to the singularity, however, we have a second main message regarding the eigenvalue spectrum: the discrete part depends on data. We revised the title and parts of the body of the text to emphasize this point."}
|
2017-01-20 03:33:30
|
ICLR.cc/2017/conference
|
Syj3I86Xe
|
rkTpYikXx
|
~Edouard_Grave1
|
Response by Reviewer
|
{"title": "re: Cache size on PTB, WikiText vocabulary differences, and Lambada question", "comment": "Thank you for your comment.In Table 1, we use a cache of size 500 (we report validation perplexity in Figure 2 and test perplexity in Table 1). A cache of size 100 gets a test perplexity of 74.2 on the Penn TreeBank dataset.We do not really have further insights beyond dataset sizes as the difference in performance improvements. We believe that with more training data, the difference between models tends to be less important. It was already observed by Goodman (2001) that the improvement due to cache models is decreasing when the training set size increases.When training on the WikiText-103 dataset, we use the hierarchical softmax because of the large vocabulary. As of now, our implementation only support linear interpolation when using the hierarchical softmax. Hence, we could only generate the left subfigure of Fig. 3 for WikiText-103, which unfortunately does not provide interesting insight. It should be noted that the hyper-parameters theta and lambda were chosen on the validation set."}
|
2016-12-13 12:22:25
|
ICLR.cc/2017/conference
|
ryJvm8a7g
|
SJuL9N1me
|
~Edouard_Grave1
|
Response by Reviewer
|
{"title": "re: comments", "comment": "Thank you for your comment.(1) Early experiments with training the cache component showed little improvement over only applying it at test time. It also makes the learning more complicated (because gradients must be back-propagated through the cache), especially for very large caches. We thus decided to only apply it at test time.(2) On the PTB dataset, the best results are obtained for a cache of size 500. This is probably the case because articles from the Wall Street Journal are shorter than Wikipedia articles (and thus, a shorter cache is enough to store full articles).(3) We performed early experiments and obtained around 17% accuracy on the LAMBADA dataset. As far as we know, this is the state-of-the-art for language models (which only read the passage from left to right, and not from right to left). We focused on perplexity in the paper, as we are mostly interested in language modeling applications."}
|
2016-12-13 12:23:04
|
ICLR.cc/2017/conference
|
rJhNxHy7l
|
B184E5qee
|
AnonReviewer2
|
Response by Reviewer
|
{"title": "re: comments", "comment": "Thank you for your comment.(1) Early experiments with training the cache component showed little improvement over only applying it at test time. It also makes the learning more complicated (because gradients must be back-propagated through the cache), especially for very large caches. We thus decided to only apply it at test time.(2) On the PTB dataset, the best results are obtained for a cache of size 500. This is probably the case because articles from the Wall Street Journal are shorter than Wikipedia articles (and thus, a shorter cache is enough to store full articles).(3) We performed early experiments and obtained around 17% accuracy on the LAMBADA dataset. As far as we know, this is the state-of-the-art for language models (which only read the passage from left to right, and not from right to left). We focused on perplexity in the paper, as we are mostly interested in language modeling applications."}
|
2016-12-02 18:40:52
|
ICLR.cc/2017/conference
|
rJIKOwrNl
|
SJv48zBNl
|
~Edouard_Grave1
|
Response by Reviewer
|
{"title": "re: review", "comment": "Thank you for your review and questions.The main message of this paper is to show that a simple method for augmenting RNN with memory is very competitive with more complex approaches, on the task of language modeling. We do not claim that it is the best way to do so (compared e.g. to pointer networks), but that it is the most efficient. In particular, our method is drastically faster at train time (no overhead compared to training a model without memory), or can even be applied to pre-trained models, for free. This allows our method to scale to much larger cache sizes and datasets, leading to much better performance than using more complicated models.Regarding comparisons to memory augmented models, we do compare to Pointer Networks (c.f. Table 2) and Memory Networks (c.f. Table 3), which were tailored for language modeling. More precisely, on the WikiText-2 dataset (c.f. Table 2), our approach outperforms pointer networks (Merity et al., 2016), with a 14.7% reduction in perplexity. We were also able to apply our model on the larger WikiText-103 dataset (we believe training pointer networks cannot scale to this dataset), leading to further reduction in perplexity (WikiText-2 and WikiText-103 share the same validation & test sets). We also compare our method to Memory Networks (Sukhbaatar et al., 2015) on the text8 dataset (c.f. Table 3), where we observe a 32% reduction in perplexity. We thus believe that our claim that our simple approach is competitive with more complicated models is well supported by results reported in the paper.- \"In the experiment results, for your neural cache model, are those results with linear interpolation or global normalization, or the best model? Can you show results for both?\"As stated in the paragraph \"Results\" of section 5.1, we report results with linear interpolation (except in Figure 2 & 3, where it is shown that both methods obtain similar results on PTB and WikiText-2, with linear interpolation being easier to apply).- \"Why is the neural cache model worse than LSTM on Ctrl (Lambada dataset)? Please also show accuracy on this dataset.\"As explained in section 5.3 (& Figure 5), performance on the control set of Lambada degrades when increasing the value of the interpolation parameter of the cache. This is because only a small number of examples of the control set contain the target word in the context (and therefore, the cache model in not useful for these examples).- \"It is also interesting that the authors mentioned that training the cache component instead of only using it at test time gives little improvements. Are the results about the same or worse? \"The results were very similar, while it is much easier to train without the cache component."}
|
2016-12-19 14:02:38
|
ICLR.cc/2017/conference
|
BkHDNXcfe
|
B184E5qee
|
~Dzmitry_Bahdanau1
|
Response by Reviewer
|
{"title": "Accuracy on LAMBADA", "comment": "Hi, great paper! Could you please also report the accuracy on LAMBADA dataset, not just perplexity?"}
|
2016-11-28 21:40:44
|
ICLR.cc/2017/conference
|
SJv48zBNl
|
B184E5qee
|
AnonReviewer1
|
Official Review by AnonReviewer1
|
{"title": "review", "rating": "5: Marginally below acceptance threshold", "review": "This paper proposes a simple extension to a neural network language model by adding a cache component. The model stores <previous hidden state, word> pairs in memory cells and uses the current hidden state to control the lookup. The final probability of a word is a linear interpolation between a standard language model and the cache language model. Additionally, an alternative that uses global normalization instead of linear interpolation is also presented. Experiments on PTB, Wikitext, and LAMBADA datasets show that the cache model improves over standard LSTM language model.There is a lot of similar work on memory-augmented/pointer neural language models, and the main difference is that the proposed method is simple and scales to a large cache size.However, since the technical contribution is rather limited, the experiments need to be more thorough and conclusive. While it is obvious from the results that adding a cache component improves over language models without memory, it is still unclear that this is the best way to do it (instead of, e.g., using pointer networks). A side-by-side comparison of models with pointer networks vs. models with cache with roughly the same number of parameters is needed to convincingly argue that the proposed method is a better alternative (either because it achieves lower perplexity, faster to train but similar test perplexity, faster at test time, etc.)Some questions:- In the experiment results, for your neural cache model, are those results with linear interpolation or global normalization, or the best model? Can you show results for both? - Why is the neural cache model worse than LSTM on Ctrl (Lambada dataset)? Please also show accuracy on this dataset. - It is also interesting that the authors mentioned that training the cache component instead of only using it at test time gives little improvements. Are the results about the same or worse?", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
|
2016-12-19 08:11:27
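For reference, the cache mechanism this review summarizes (stored <hidden state, next word> pairs queried by the current hidden state, then linearly interpolated with the base LM) can be sketched as follows; theta and lam are hyperparameters whose values here are purely illustrative, and the function names are ours.

```python
import numpy as np

def cache_distribution(h_t, cache_states, cache_words, vocab_size, theta=0.3):
    # cache_states: (L, d) stored hidden states h_i; cache_words: (L,) the word that followed each h_i.
    scores = np.exp(theta * cache_states @ h_t)   # similarity of the query h_t to each stored key
    p = np.zeros(vocab_size)
    np.add.at(p, cache_words, scores)             # accumulate unnormalized mass on the cached words
    return p / p.sum()

def neural_cache_probs(p_lm, p_cache, lam=0.1):
    # Final prediction: linear interpolation between the base language model and the cache.
    return (1.0 - lam) * p_lm + lam * p_cache
```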
|
ICLR.cc/2017/conference
|
rkTpYikXx
|
B184E5qee
|
AnonReviewer3
|
Official Review by AnonReviewer1
|
{"title": "review", "rating": "5: Marginally below acceptance threshold", "review": "This paper proposes a simple extension to a neural network language model by adding a cache component. The model stores <previous hidden state, word> pairs in memory cells and uses the current hidden state to control the lookup. The final probability of a word is a linear interpolation between a standard language model and the cache language model. Additionally, an alternative that uses global normalization instead of linear interpolation is also presented. Experiments on PTB, Wikitext, and LAMBADA datasets show that the cache model improves over standard LSTM language model.There is a lot of similar work on memory-augmented/pointer neural language models, and the main difference is that the proposed method is simple and scales to a large cache size.However, since the technical contribution is rather limited, the experiments need to be more thorough and conclusive. While it is obvious from the results that adding a cache component improves over language models without memory, it is still unclear that this is the best way to do it (instead of, e.g., using pointer networks). A side-by-side comparison of models with pointer networks vs. models with cache with roughly the same number of parameters is needed to convincingly argue that the proposed method is a better alternative (either because it achieves lower perplexity, faster to train but similar test perplexity, faster at test time, etc.)Some questions:- In the experiment results, for your neural cache model, are those results with linear interpolation or global normalization, or the best model? Can you show results for both? - Why is the neural cache model worse than LSTM on Ctrl (Lambada dataset)? Please also show accuracy on this dataset. - It is also interesting that the authors mentioned that training the cache component instead of only using it at test time gives little improvements. Are the results about the same or worse?", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
|
2016-12-03 02:11:17
|
ICLR.cc/2017/conference
|
BJCdudaQx
|
SJE4wLpme
|
(anonymous)
|
Response by Reviewer
|
{"title": "re: re: Accuracy on LAMBADA", "comment": "Thank you for your answer. I think it's important to report the accuracy, because this is the metric that the dataset authors had in mind, and also because this number can be referred to in subsequent work. You should not shy the seemingly low number 17% because the dataset is challenging. Also, as far as I know, your performance is SOTA, because all the better results I am aware of were obtained using a trick, whereby the model was forced to select a word from the context. If you achieved 17% without this trick, this is definitely worth reporting."}
|
2016-12-13 13:32:38
|
ICLR.cc/2017/conference
|
H1YGZUMNx
|
B184E5qee
|
AnonReviewer3
|
Official Review by AnonReviewer3
|
{"title": "Review", "rating": "9: Top 15% of accepted papers, strong accept", "review": "This paper not only shows that a cache model on top of a pre-trained RNN can improve language modeling, but also illustrates a shortcoming of standard RNN models in that they are unable to capture this information themselves. Regardless of whether this is due to the small BPTT window (35 is standard) or an issue with the capability of the RNN itself, this is a useful insight. This technique is an interesting variation of memory augmented neural networks with a number of advantages to many of the standard memory augmented architectures.They illustrate the neural cache model on not just the Penn Treebank but also WikiText-2 and WikiText-103, two datasets specifically tailored to illustrating long term dependencies with a more realistic vocabulary size. I have not seen the ability to refer up to 2000 words back previously.I recommend this paper be accepted. There is additionally extensive analysis of the hyperparameters on these datasets, providing further insight.I recommend this interesting and well analyzed paper be accepted.", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}
|
2016-12-17 06:01:53
|
ICLR.cc/2017/conference
|
Syy-TzIOe
|
B184E5qee
|
pcs
|
ICLR committee final decision
|
{"title": "ICLR committee final decision", "comment": "Reviewers agree that this paper is based on a \"trick\" to build memory without requiring long-distance backprop. This method allows the model to utilize a cache-like mechanism, simply by storing previous states. Everyone agrees that this roughly works (although there could be stronger experimental evidence), and provides long-term memory to simple models. Reviewers/authors also agree that it might not work as well as other pointer-network like method, but there is controversy over whether that is necessary. - Further discussion indicated a sense by some reviewers that this method could be quite impactful, even if it was not a huge technical contribution, due to its speed and computational benefits over pointer methods. - The clarity of the writing and good use of references was appreciated - This paper is a nice complement/rebuttal to \"Frustratingly Short Attention Spans in Neural Language Modeling\". Including the discussion about this paper as it might be helpful as it was controversial: \"\"\" The technical contribution may appear \"limited\" but I feel part of that is necessary to ensure the method can scale to both large datasets and long term dependencies. For me, this is similar to simpler machine learning methods being able to scale to more data (though replacing \"data\" with \"timesteps\"). More complex methods may do better with a small number of data/timesteps but they won't be able to scale, where other specific advantages may come in to play. (Timesteps) Looking back 2000 timesteps is something I've not seen done and speaks to a broader aspect of language modeling - properly capturing recent article level context. Most language models limit BPTT to around 35 timesteps, with some even arguing we don't need that much (i.e. \"Frustratingly Short Attention Spans in Neural Language Modeling\" that's under review for ICLR). From a general perspective, this is vaguely mad given many sentences are longer than 35 timesteps, yet we know both intuitively and from the evidence they present that the rest of an article is very likely to help modeling the following words, especially for PTB or WikiText. This paper introduces a technique that not only allows for utilizing dependencies far further back than 35 timesteps but shows it consistently helps, even when thrown against a larger number of timesteps, a larger dataset, or a larger vocabulary. Given it is also a post-processing step that can be applied to any vaguely RNN type model, it's widely applicable and trivial to train in comparison to any more complicated models. (Data) Speaking to AnonReviewer1's comment, \"A side-by-side comparison of models with pointer networks vs. models with cache with roughly the same number of parameters is needed to convincingly argue that the proposed method is a better alternative (either because it achieves lower perplexity, faster to train but similar test perplexity, faster at test time, etc.)\" Existing pointer network approaches for language modeling are very slow to train - or at least more optimal methods are yet to be discovered - and has such limited the BPTT length they tackle. Merity et al. use 100 at most and that's the only pointer method for language modeling attending to article style text that I am aware of. Merity et al. also have a section of their paper specifically discussing the training speed complications that come from integrating the pointer network. There is a comparison to Merity et al. in Table 1 and Table 2. 
The scaling becomes more obvious on the WikiText datasets which have a more realistic long tail vocabulary than PTB's 10k. For WikiText-2, at a cache size of 100, Merity et al. get 80.8 with their pointer network method while the neural cache model gets 81.6. Increasing the neural model cache size to 2000 however gives quite a substantial drop to 68.9. They're also able to apply their method to WikiText-103, a far larger dataset than PTB or WikiText-2, and show that it still provides improvements even when there is more data and a larger vocabulary. Scaling to this dataset is only sanely possible as the neural cache model doesn't add to the training time of the base neural model at all - it's equivalent to training a standard LSTM. \"\"\"", "decision": "Accept (Poster)"}
|
2017-02-06 15:57:43
|
ICLR.cc/2017/conference
|
r1fa4Lpmg
|
rJhNxHy7l
|
~Edouard_Grave1
|
Response by Reviewer
|
{"title": "re: Questions", "comment": "Thank you for your comment.Contrary to our model which uses h_t as a representation for x_{t+1}, the model of Merity et al. uses h_t as a representation for x_t. This requires to learn an additional transformation between the current activation and those in the cache. This transformation is then trained jointly with the rest of the model, limiting the size of cache (because the BPTT algorithm is performed for L time-steps, where L is the cache size). Contrary to our approach, Merity et al. uses dynamic interpolation (through the sentinel vector in the pointer softmax).Our method is more scalable at train time, since it does not require to perform BPTT over L time-steps, where L is the size of the cache (contrary to Merity et al.). We can thus scale to much larger cache size easily.Merity et al. only reported results for cache of size 100."}
|
2016-12-13 12:22:47
|
ICLR.cc/2017/conference
|
SJE4wLpme
|
BkHDNXcfe
|
~Edouard_Grave1
|
Response by Reviewer
|
{"title": "re: Accuracy on LAMBADA", "comment": "Thank you for your comment.We performed early experiments and obtained around 17% accuracy on the LAMBADA dataset. As far as we know, this is the state-of-the-art for language models (which only read the passage from left to right, and not from right to left). We focused on perplexity in the paper, as we are mostly interested in language modeling applications. We need to run additional experiments to get a definitive accuracy number on the LAMBADA dataset."}
|
2016-12-13 12:23:25
|
ICLR.cc/2017/conference
|
SJuL9N1me
|
B184E5qee
|
AnonReviewer1
|
Response by Reviewer
|
{"title": "re: Accuracy on LAMBADA", "comment": "Thank you for your comment.We performed early experiments and obtained around 17% accuracy on the LAMBADA dataset. As far as we know, this is the state-of-the-art for language models (which only read the passage from left to right, and not from right to left). We focused on perplexity in the paper, as we are mostly interested in language modeling applications. We need to run additional experiments to get a definitive accuracy number on the LAMBADA dataset."}
|
2016-12-02 18:15:44
|
ICLR.cc/2017/conference
|
BkjpniLEg
|
B184E5qee
|
AnonReviewer2
|
Official Review by AnonReviewer2
|
{"title": "Review", "rating": "7: Good paper, accept", "review": "The authors present a simple method to affix a cache to neural language models, which provides in effect a copying mechanism from recently used words. Unlike much related work in neural networks with copying mechanisms, this mechanism need not be trained with long-term backpropagation, which makes it efficient and scalable to much larger cache sizes. They demonstrate good improvements on language modeling by adding this cache to RNN baselines.The main contribution of this paper is the observation that simply using the hidden states h_i as keys for words x_i, and h_t as the query vector, naturally gives a lookup mechanism that works fine without tuning by backprop. This is a simple observation and might already exist as folk knowledge among some people, but it has nice implications for scalability and the experiments are convincing.The basic idea of repurposing locally-learned representations for large-scale attention where backprop would normally be prohibitively expensive is an interesting one, and could probably be used to improve other types of memory networks.My main criticism of this work is its simplicity and incrementality when compared to previously existing literature. As a simple modification of existing NLP models, but with good empirical success, simplicity and practicality, it is probably more suitable for an NLP-specific conference. However, I think that approaches that distill recent work into a simple, efficient, applicable form should be rewarded and that this tool will be useful to a large enough portion of the ICLR community to recommend its publication.", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}
|
2016-12-20 13:06:11
|
ICLR.cc/2017/conference
|
B1uj8o-Ee
|
B16dGcqlx
|
AnonReviewer2
|
Official Review by AnonReviewer2
|
{"title": "Review", "rating": "6: Marginally above acceptance threshold", "review": "The paper extends the imitation learning paradigm to the case where the demonstrator and learner have different points of view. This is an important contribution, with several good applications. The main insight is to use adversarial training to learn a policy that is robust to this difference in perspective. This problem formulation is quite novel compared to the standard imitation learning literature (usually first-order perspective), though has close links to the literature on transfer learning (as explained in Sec.2).The basic approach is clearly explained, and follows quite readily from recent literature on imitation learning and adversarial training.I would have expected to see comparison to the following methods added to Figure 3:1) Standard 1st person imitation learning using agent A data, and apply the policy on agent A. This is an upper-bound on how well you can expect to do, since you have the correct perspective.2) Standard 1st person imitation learning using agent A data, then apply the policy on agent B. Here, I expect it might do less well than 3rd person learning, but worth checking to be sure, and showing what is the gap in performance.3) Reinforcement learning using agent A data, and apply the policy on agent A. I expect this might do better than 3rd person imitation learning but it might depend on the scenario (e.g. difficulty of imitation vs exploration; how different are the points of view between the agents). I understand this is how the expert data is collected for the demonstrator, but I don\u2019t see the performance results from just using this procedure on the learner (to compare to Fig.3 results).Including these results would in my view significantly enhance the impact of the paper.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
|
2016-12-16 17:38:40
|
ICLR.cc/2017/conference
|
Hyv9khVLx
|
B1uj8o-Ee
|
~Bradly_C_Stadie1
|
Response by Reviewer
|
{"title": "Added these three experiments in Appendix A ", "comment": "Question: I would have expected to see comparison to the following methods added to Figure 3:1) Standard 1st person imitation learning using agent A data, and apply the policy on agent A. This is an upper-bound on how well you can expect to do, since you have the correct perspective.2) Standard 1st person imitation learning using agent A data, then apply the policy on agent B. Here, I expect it might do less well than 3rd person learning, but worth checking to be sure, and showing what is the gap in performance.3) Reinforcement learning using agent A data, and apply the policy on agent A. I expect this might do better than 3rd person imitation learning but it might depend on the scenario (e.g. difficulty of imitation vs exploration; how different are the points of view between the agents). I understand this is how the expert data is collected for the demonstrator, but I don\u2019t see the performance results from just using this procedure on the learner (to compare to Fig.3 results).Answer: Thank you for suggesting these experiment. We have added all of them to Appendix A. We feel like this suggestion significantly improved the quality of the paper. Please see earlier responses/the paper itself for a discussion of these experiments. "}
|
2017-01-12 07:26:21
|
ICLR.cc/2017/conference
|
SkpPEJzEx
|
B16dGcqlx
|
AnonReviewer5
|
Response by Reviewer
|
{"title": "Added these three experiments in Appendix A ", "comment": "Question: I would have expected to see comparison to the following methods added to Figure 3:1) Standard 1st person imitation learning using agent A data, and apply the policy on agent A. This is an upper-bound on how well you can expect to do, since you have the correct perspective.2) Standard 1st person imitation learning using agent A data, then apply the policy on agent B. Here, I expect it might do less well than 3rd person learning, but worth checking to be sure, and showing what is the gap in performance.3) Reinforcement learning using agent A data, and apply the policy on agent A. I expect this might do better than 3rd person imitation learning but it might depend on the scenario (e.g. difficulty of imitation vs exploration; how different are the points of view between the agents). I understand this is how the expert data is collected for the demonstrator, but I don\u2019t see the performance results from just using this procedure on the learner (to compare to Fig.3 results).Answer: Thank you for suggesting these experiment. We have added all of them to Appendix A. We feel like this suggestion significantly improved the quality of the paper. Please see earlier responses/the paper itself for a discussion of these experiments. "}
|
2016-12-16 22:02:13
|
ICLR.cc/2017/conference
|
SJ1rDje4g
|
Hku195k7l
|
~Bradly_C_Stadie1
|
Response by Reviewer
|
{"title": "Reply to reviewer comments ", "comment": "Question: On p.5 you choose to instantiate the mutual information by introducing another classifier. Is this common? Can you add a reference? Or is it novel? What are the impacts of this choice? Answer: This is not novel. A similar idea is proposed in for instance J. S. Bridle, A. J. Heading, and D. J. MacKay, \u201cUnsupervised classifiers, mutual information and \u2019phantom targets\u2019,\u201d in NIPS, 1992.D. Barber and F. V. Agakov, \u201cKernelized infomax clustering,\u201d in NIPS, 2005, pp. 17\u201324.A. Krause, P. Perona, and R. G. Gomes, \u201cDiscriminative clustering by regularized information maximization,\u201d in NIPS, 2010, pp. 775\u2013783.Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. Infogan: Interpretable representation learning by information maximizing generative adversarial nets. CoRR, abs/1606.03657, 2016.We have updated the paper to explicitly mention these references in section 5.1.Question: If I understand correctly, the imitation task is reduced to predicting whether the demonstrations come from Expert or Non-expert. Then the label is associated with a (known/stored) reward function, and that reward function is used in TRPO. Is that correct? It seems that is a lot of information, compared to the standard imitation learning setup, where the policy must be fully learned from demonstrations. Am I misunderstanding something?Answer: This is incorrect. TRPO receives as a reward the (learned) log probability that a set of demonstration frames belongs to the expert class, and attempts to maximize this number. In doing so, it is trying to learn a policy that generates more frames that look like they belong to the expert class. TRPO never has access to the raw reward of the system. This problem setup is identical to that presented by Ho in Generative Adversarial Imitation Learning. Line 29 of Algorithm 1 makes this explicit. It is also shown and specified in figure 2. Question: With so many terms being simultaneously optimized, it is not so surprising that it is difficult to get stable learning. How do you optimize hyper-parameters in this setting? Presumably you cannot use a hold-out set. Fig.6 & 7 show some sensitivity analysis, but nothing is shown about the learning rates.Answer: Hyperparameters were optimized with a standard hold-out procedure, where the same hyperparameter choices were required across all problem instances. For the discriminator, we use ADAM and a learning rate of 0.001. This worked out of the box. For the RL generator, we used the standard TRPO implementation from RLLab and made no changes. Question: I would have expected to see comparison to the following methods added to Figure 3:1) Standard 1st person imitation learning using agent A data, and apply the policy on agent A. This is an upper-bound on how well you can expect to do, since you have the correct perspective.2) Standard 1st person imitation learning using agent A data, then apply the policy on agent B. Here, I expect it might do less well than 3rd person learning, but worth checking to be sure, and showing what is the gap in performance.3) Reinforcement learning using agent A data, and apply the policy on agent A. I expect this might do better than 3rd person imitation learning but it might depend on the scenario (e.g. difficulty of imitation vs exploration; how different are the points of view between the agents). 
It would be nice to show this.Answer: 1) and 2) are great suggestions for further calibrating the results, thank you, we plan to incorporate them in the next revision of the paper.3) is actually already how the expert demonstration data is generated."}
|
2016-12-15 23:29:34
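The reward signal described in this answer (the learned log-probability that frames belong to the expert class, which TRPO then maximizes) can be sketched as below; the [novice, expert] class ordering and the function name are assumptions made only for this illustration.

```python
import numpy as np

def imitation_reward(discriminator_logits):
    # discriminator_logits: (batch, 2) raw scores over [novice, expert] frame classes (assumed order).
    # Returns log p(expert | frame), which the policy optimizer (e.g. TRPO) treats as the reward.
    z = discriminator_logits - discriminator_logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return np.log(probs[:, 1] + 1e-8)
```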
|
ICLR.cc/2017/conference
|
Hku195k7l
|
B16dGcqlx
|
AnonReviewer2
|
Response by Reviewer
|
{"title": "Reply to reviewer comments ", "comment": "Question: On p.5 you choose to instantiate the mutual information by introducing another classifier. Is this common? Can you add a reference? Or is it novel? What are the impacts of this choice? Answer: This is not novel. A similar idea is proposed in for instance J. S. Bridle, A. J. Heading, and D. J. MacKay, \u201cUnsupervised classifiers, mutual information and \u2019phantom targets\u2019,\u201d in NIPS, 1992.D. Barber and F. V. Agakov, \u201cKernelized infomax clustering,\u201d in NIPS, 2005, pp. 17\u201324.A. Krause, P. Perona, and R. G. Gomes, \u201cDiscriminative clustering by regularized information maximization,\u201d in NIPS, 2010, pp. 775\u2013783.Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. Infogan: Interpretable representation learning by information maximizing generative adversarial nets. CoRR, abs/1606.03657, 2016.We have updated the paper to explicitly mention these references in section 5.1.Question: If I understand correctly, the imitation task is reduced to predicting whether the demonstrations come from Expert or Non-expert. Then the label is associated with a (known/stored) reward function, and that reward function is used in TRPO. Is that correct? It seems that is a lot of information, compared to the standard imitation learning setup, where the policy must be fully learned from demonstrations. Am I misunderstanding something?Answer: This is incorrect. TRPO receives as a reward the (learned) log probability that a set of demonstration frames belongs to the expert class, and attempts to maximize this number. In doing so, it is trying to learn a policy that generates more frames that look like they belong to the expert class. TRPO never has access to the raw reward of the system. This problem setup is identical to that presented by Ho in Generative Adversarial Imitation Learning. Line 29 of Algorithm 1 makes this explicit. It is also shown and specified in figure 2. Question: With so many terms being simultaneously optimized, it is not so surprising that it is difficult to get stable learning. How do you optimize hyper-parameters in this setting? Presumably you cannot use a hold-out set. Fig.6 & 7 show some sensitivity analysis, but nothing is shown about the learning rates.Answer: Hyperparameters were optimized with a standard hold-out procedure, where the same hyperparameter choices were required across all problem instances. For the discriminator, we use ADAM and a learning rate of 0.001. This worked out of the box. For the RL generator, we used the standard TRPO implementation from RLLab and made no changes. Question: I would have expected to see comparison to the following methods added to Figure 3:1) Standard 1st person imitation learning using agent A data, and apply the policy on agent A. This is an upper-bound on how well you can expect to do, since you have the correct perspective.2) Standard 1st person imitation learning using agent A data, then apply the policy on agent B. Here, I expect it might do less well than 3rd person learning, but worth checking to be sure, and showing what is the gap in performance.3) Reinforcement learning using agent A data, and apply the policy on agent A. I expect this might do better than 3rd person imitation learning but it might depend on the scenario (e.g. difficulty of imitation vs exploration; how different are the points of view between the agents). 
It would be nice to show this.Answer: 1) and 2) are great suggestions for further calibrating the results, thank you, we plan to incorporate them in the next revision of the paper.3) is actually already how the expert demonstration data is generated."}
|
2016-12-03 01:03:28
|
ICLR.cc/2017/conference
|
SJezwxzEg
|
B16dGcqlx
|
AnonReviewer3
|
Official Review by AnonReviewer3
|
{"title": "Interesting idea but need more experiments", "rating": "5: Marginally below acceptance threshold", "review": "This paper proposed a novel adversarial framework to train a model from demonstrations in a third-person perspective, to perform the task in the first-person view. Here the adversarial training is used to extract a novice-expert (or third-person/first-person) independent feature so that the agent can use to perform the same policy in a different view point.While the idea is quite elegant and novel (I enjoy reading it), more experiments are needed to justify the approach. Probably the most important issue is that there is no baseline, e.g., what if we train the model with the image from the same viewpoint? It should be better than the proposed approach but how close are they? How the performance changes when we gradually change the viewpoint from third-person to first-person? Another important question is that maybe the network just blindly remembers the policy, in this case, the extracted feature could be artifacts of the input image that implicitly counts the time tick in some way (and thus domain-agonistic), but can still perform reasonable policy. Since the experiments are conduct in a synthetic environment, this might happen. An easy check is to run the algorithm on multiple viewpoint and/or with blurred/differently rendered images, and/or with random initial conditions.Other ablation analysis is also needed. For example, I am not fully convinced by the gradient flipping trick used in Eqn. 5, and in the experiments there is no ablation analysis for that (GAN/EM style training versus gradient flipping trick). For the experiments, Fig. 4,5,6 does not have error bars and is not very convincing.", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
|
2016-12-16 23:28:05
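The "gradient flipping trick" this review asks about is commonly implemented as a gradient-reversal layer (identity in the forward pass, negated and scaled gradient in the backward pass). A generic PyTorch sketch of that idea, not the authors' implementation:

```python
import torch

class GradReverse(torch.autograd.Function):
    # Identity on the forward pass; multiplies the incoming gradient by -lam on the backward pass,
    # so the shared feature extractor is pushed to confuse the domain classifier stacked on top of it.
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)
```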
|
ICLR.cc/2017/conference
|
SJu-_seVe
|
HyfOgMJmg
|
~Bradly_C_Stadie1
|
Response by Reviewer
|
{"title": "Answer to questions", "comment": "Question: How is the expert data generated? Answer: The expert data is generated via an RL algorithm (TRPO) that has access to full state information and reward signal. In fact, this RL algorithm is just the out of the box TRPO from RLLab being used to solve the reacher, point, and pendulum environments from RLLab. Question: Also, there is no detailed specification of network architecture (e.g., what layers did you use? How complicated is D_F, D_R and D_D) in the paper besides Fig. 2. Please elaborate.Answer: The joint feature extractor is 2 convolutional layers with 5 filters of size 3 each. Each layer is followed by a max-polling layer with size 2. The input images are size 50 x 50 with 3 channels, RGB. Both the domain classifier and the class discriminator take as input the domain-agnostic image features and pass them through 2 MLP layers of size 128 before going through the final MLP layer of size 2 and a softmax layer. We have clarified this in the updated paper in Appendix A.In addition, if any further architecture questions should arise, the discriminator and generator code is available here: https://github.com/bstadie/third_person_im/blob/master/sandbox/bradly/third_person/discriminators/discriminator.py"}
|
2016-12-15 23:32:16
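A rough transcription of the architecture described in this answer, as a PyTorch sketch; the ReLU nonlinearities, the absence of padding, and the resulting 605-dimensional flattened feature size are assumptions filled in for illustration (the actual code is linked in the answer).

```python
import torch.nn as nn

class FeatureExtractor(nn.Module):
    # Two conv layers with 5 filters of size 3, each followed by 2x2 max-pooling, on 50x50 RGB input.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 5, kernel_size=3), nn.ReLU(), nn.MaxPool2d(2),  # 50 -> 48 -> 24
            nn.Conv2d(5, 5, kernel_size=3), nn.ReLU(), nn.MaxPool2d(2),  # 24 -> 22 -> 11
            nn.Flatten(),                                                # 5 * 11 * 11 = 605 features
        )

    def forward(self, x):
        return self.net(x)

def classifier_head(in_features=605, n_classes=2):
    # Shape shared by the domain classifier and the expert/novice discriminator:
    # two 128-unit MLP layers, then a 2-way output (softmax applied in the loss).
    return nn.Sequential(
        nn.Linear(in_features, 128), nn.ReLU(),
        nn.Linear(128, 128), nn.ReLU(),
        nn.Linear(128, n_classes),
    )
```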
|
ICLR.cc/2017/conference
|
B1N4238Le
|
SJezwxzEg
|
~Bradly_C_Stadie1
|
Response by Reviewer
|
{"title": "significantly improved experiments", "comment": "We thank you for your constructive feedback, which significantly improved the quality of this paper. We were able to incorporate 16 new experiments and other improvements. We believe this has further strengthened the work and the paper (beyond its state at initial submission) and hope that you will potentially consider updating your score with this in mind. "}
|
2017-01-13 20:44:28
|
ICLR.cc/2017/conference
|
ryctCj4Ux
|
SJezwxzEg
|
~Bradly_C_Stadie1
|
Response by Reviewer
|
{"title": "Added more experiments ", "comment": "Question: While the idea is quite elegant and novel (I enjoy reading it), more experiments are needed to justify the approach. Probably the most important issue is that there is no baseline, e.g., what if we train the model with the image from the same viewpoint? Answer: We have added new experiments, including training the model with the image from the same viewpoint. We have also added RL in the test domain and standard first person imitation (i.e. our approach, but without the domain confusion). All discussed in Appendix A. As we see from these graphs, simply using the cost/policy recovered from first person imitation on the third person agent is not sufficient to learn the task presented. We see that generally RL and first person imitation learning do perform better on these tasks (in terms of sample efficiency and overall performance). However, we feel that third person imitation performs comparably, and this fact is significant enough to warrant strong consideration for accepting this paper. Question: How the performance changes when we gradually change the viewpoint from third-person to first-person? Answer: We have added this experiment to appendix A. We see that for the point env, performance linearly declines with the difference in camera angle between the expert and the novice. For reacher, the story is more complex and the behavior is more step like. Thank you for suggesting this experiment! Question: Another important question is that maybe the network just blindly remembers the policy, in this case, the extracted feature could be artifacts of the input image that implicitly counts the time tick in some way (and thus domain-agonistic), but can still perform reasonable policy. Since the experiments are conduct in a synthetic environment, this might happen. An easy check is to run the algorithm on multiple viewpoint and/or with blurred/differently rendered images, and/or with random initial conditions.Answer: We sample different initial conditions as in the underlying RL Lab environments. We see that the controls generated by different initial conditions are quantitatively different. Further, when we cross-apply the controls generated from one set of initial conditions to another set of initial conditions, we see that performance is generally poor. We are hesitant to add these graphs to the paper, as there are already 16 additional graphs as a result of this rebuttal (on top of the 15 graphs in the original paper for a total of 31). Question: Other ablation analysis is also needed. For example, I am not fully convinced by the gradient flipping trick used in Eqn. 5, and in the experiments there is no ablation analysis for that (GAN/EM style training versus gradient flipping trick). Answer: Figure 5 contains an ablation analysis for the gradient trick, concretely, it compares what happen with and without domain confusion, and with and without the velocity information. The experiments show that having both domain confusion (i.e. gradient flipping trick) and velocity information outperforms ablated variants. Figure 6 analyzes performance as a function of the parameter lambda, which weighs the domain confusion loss (against the other losses). 
It shows the approach is robust to choices of lambda, however, extreme choices don\u2019t perform well: lambda too small results in domain confusion loss largely ignored and poor performance; lambda too large results in domain confusion loss dominating too much (at the expense of the other losses).Question: For the experiments, Fig. 4,5,6 does not have error bars and is not very convincing.Answer: We ran these experiments multiple times, and results were consistent, as in the reported graphs. We felt that the error bars were distracting for these experiments because they cluttered the graphs. For the final, we will add error bars, and increase the size of the graphs to maintain readability. Our additional experiments (added in rebuttal phase into Appendix A) have error bars, which further suggest the robustness of the algorithm.Note that there are now 31 experiments detailed in this paper. We feel that adding any more may cause it to burst at the seams! :) "}
|
2017-01-12 07:22:04
|