Cannot use listed and ready TPU from VM for pretraining

I signed up for the TRC program for the third time in two years. This time I was barely able to create a single preemptible v3-8 TPU, whereas in previous rounds I could easily allocate five non-preemptible v3-8 TPUs. Even so, the TPU I did create is listed as READY and HEALTHY. However, when I try to access it from my pretraining script, I run into an error I have never encountered before:
Failed to connect to the Tensorflow master. The TPU worker may not be ready (still scheduling), or the Tensorflow master address is incorrect
The TPU is accessible, ready, and healthy, and the master URL is correct (it is automatically retrieved from the TPU_NAME, which I also double-checked).
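
To narrow down whether the problem is in resolving the master address or in actually reaching the TPU worker, a minimal sketch like the one below can help. It assumes TPU_NAME is set as an environment variable on the VM and uses the TF 2.x cluster-resolver API, which may differ from the code path in the actual pretraining script; the error quoted above is raised at the connect/initialize step even when the resolver returns a valid-looking address.

    import os

    import tensorflow as tf

    # Resolve the TPU master address from the TPU name, as the
    # pretraining script does (assumes TPU_NAME is set on the VM).
    tpu_name = os.environ.get("TPU_NAME")
    resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu=tpu_name)
    print("Resolved master address:", resolver.master())

    # Try to reach the TPU worker and initialize the TPU system; this is
    # where "Failed to connect to the Tensorflow master" appears when the
    # worker is unreachable even though the TPU is listed as READY.
    tf.config.experimental_connect_to_cluster(resolver)
    tf.tpu.experimental.initialize_tpu_system(resolver)
    print("TPU devices:", tf.config.list_logical_devices("TPU"))

If the resolver prints a grpc://<internal-ip>:8470 address but the connect step still times out, the issue is usually network reachability between the VM and the TPU node (for example, VM and TPU in different zones or networks) rather than the TPU's reported state.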

I also get these messages:

Attempting refresh to obtain initial access_token
Refreshing access_token

Reply: Since this would be hard to reproduce on my end, you can try the workaround suggested here.