Does transformers utilize PyTorch SDPA's flash_attention for openai/gpt-oss-20b?

#89
by NooBaymax

I'm investigating whether the flash_attention backend of PyTorch's scaled_dot_product_attention (SDPA) is actually used when running the openai/gpt-oss-20b model through the transformers library. How can we verify this? I'm looking for methods, code snippets, or official documentation that confirm whether this optimization is active by default, or whether specific configuration is required to enable it.
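One way I've been trying to check this, as a rough sketch: request the SDPA code path explicitly via `attn_implementation="sdpa"` when loading, then restrict SDPA to the flash backend with `torch.nn.attention.sdpa_kernel` during a forward pass. This assumes the gpt-oss attention classes actually support the `"sdpa"` implementation (I'm not certain they do); if they don't, `from_pretrained` should raise an error, which would itself answer the question.

```python
import torch
from torch.nn.attention import SDPBackend, sdpa_kernel
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openai/gpt-oss-20b"

# Explicitly ask transformers for the SDPA attention path. If the model's
# attention classes do not support it, this call raises instead of silently
# falling back to eager attention.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
    attn_implementation="sdpa",
)
print("attn implementation:", model.config._attn_implementation)

tokenizer = AutoTokenizer.from_pretrained(model_id)
inputs = tokenizer("Hello", return_tensors="pt").to(model.device)

# Restrict SDPA to the flash backend only. If the flash kernel cannot be
# dispatched for this model and these inputs, PyTorch raises a RuntimeError
# here, which tells us flash attention was not actually being used.
with sdpa_kernel(SDPBackend.FLASH_ATTENTION):
    with torch.no_grad():
        model(**inputs)
print("flash attention backend handled this forward pass")
```

Alternatively, wrapping the forward pass in `torch.profiler.profile` and looking for flash-attention kernel names in the trace would give kernel-level confirmation independent of the transformers config, but I'd still like an authoritative answer on the default behavior.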
