Question about TransformerEncoderDecoder layer interaction
First, thank you for your great work on this implementation! It's been very helpful for understanding Transformer details.
I noticed a potential issue in the implementation of `TransformerEncoderDecoder.forward`:
```python
for (encoder, decoder) in zip(...):
    encoder_output = encoder(...)
    decoder_output = decoder(..., cross_input=encoder_output)
```
This uses layer-wise alternation where each decoder layer immediately consumes its corresponding encoder layer's output.
According to the original Attention Is All You Need paper (Figure 1) and common implementations (HuggingFace, Fairseq):
- All encoder layers should complete first
- The final encoder output becomes the memory
- All decoder layers should use this same memory
Suggested modification:
```python
# Complete all encoder layers first
for encoder in self.encoder_stack:
    encoder_output = encoder(...)

# Then run the decoders using the final encoder output
for decoder in self.decoder_stack:
    decoder_output = decoder(..., cross_input=encoder_output)
```
This would:
✅ Match the original paper's architecture
✅ Maintain representation consistency
✅ Align with standard implementations
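For reference, PyTorch's built-in transformer modules follow the same pattern: the decoder stack receives a single `memory` tensor produced by running the full encoder first. A minimal sketch (the sizes below are just placeholders):

```python
import torch
import torch.nn as nn

d_model, nhead, num_layers = 512, 8, 6  # placeholder sizes

enc_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead)
encoder = nn.TransformerEncoder(enc_layer, num_layers=num_layers)
dec_layer = nn.TransformerDecoderLayer(d_model=d_model, nhead=nhead)
decoder = nn.TransformerDecoder(dec_layer, num_layers=num_layers)

src = torch.randn(10, 32, d_model)  # (src_len, batch, d_model); batch_first defaults to False
tgt = torch.randn(20, 32, d_model)  # (tgt_len, batch, d_model)

memory = encoder(src)       # run the full encoder stack first
out = decoder(tgt, memory)  # every decoder layer cross-attends to the same memory
```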
Was this an intentional design choice? I'd appreciate your perspective on this. Thanks again for sharing your excellent work!
Thank you so much for this excellent observation and correction! You're absolutely right.
I initially interpreted Figure 1 as showing a layer-by-layer correspondence between the encoder and decoder stacks, and I thought it would be interesting to have each decoder layer attend to a different encoder layer's output; I didn't realize that this isn't the standard approach. After checking the Annotated Transformer implementation (http://nlp.seas.harvard.edu/2018/04/03/attention.html), I can see that the standard approach is exactly what you described:
- Complete all encoder layers sequentially first
- Use the final encoder output as "memory"
- Pass this memory to all decoder layers for cross-attention
Your suggested implementation is spot-on:
```python
def forward(self, embed_encoder_input, embed_decoder_input, padding_mask=None):
    # Complete all encoder layers first
    encoder_output = embed_encoder_input
    for encoder in self.encoder_stack:
        encoder_output = encoder(encoder_output, padding_mask)

    # Then all decoder layers use the final encoder output
    decoder_output = embed_decoder_input
    for decoder in self.decoder_stack:
        decoder_output = decoder(decoder_output, encoder_output, padding_mask)

    return decoder_output
```
I've updated the implementation to match this standard approach. Thank you for the correction.
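As a quick sanity check on the difference, here is a small self-contained toy sketch. `ToyLayer` is a hypothetical stand-in for the real encoder/decoder blocks (a linear projection plus an additive "cross" term in place of cross-attention); it only demonstrates that the two wirings are not equivalent in general.

```python
import torch
import torch.nn as nn

class ToyLayer(nn.Module):
    """Stand-in layer: a projection plus an optional additive cross term."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, x, cross=None):
        out = torch.tanh(self.proj(x))
        if cross is not None:
            out = out + cross  # crude stand-in for cross-attention
        return out

dim, n_layers = 4, 3
torch.manual_seed(0)
enc_stack = nn.ModuleList([ToyLayer(dim) for _ in range(n_layers)])
dec_stack = nn.ModuleList([ToyLayer(dim) for _ in range(n_layers)])
src = torch.randn(2, 5, dim)
tgt = torch.randn(2, 5, dim)

# Layer-wise alternation: each decoder layer consumes its paired encoder layer's output.
e, d = src, tgt
for enc, dec in zip(enc_stack, dec_stack):
    e = enc(e)
    d = dec(d, cross=e)
alternating_out = d

# Standard wiring: run the whole encoder first, reuse its final output as memory.
e = src
for enc in enc_stack:
    e = enc(e)
d = tgt
for dec in dec_stack:
    d = dec(d, cross=e)
memory_out = d

print(torch.allclose(alternating_out, memory_out))  # expected: False in general
```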