Increase Filestream rate limit?


I am using wandb with PyTorch DDP. I've made sure that I only log on the first GPU.

    def _run_train_batch(self, step, source, targets):
        _, loss = self.model(source, labels=targets)
        ppl = torch.exp(loss)

        # log to wandb from rank 0 only
        if self.gpu_id == 0:
            wandb.log({"train/loss": loss.item(), "train/ppl": ppl.item()})
            if step % 100 == 0:
                print(f"[GPU{self.gpu_id}] | Step {step} | Loss: {loss.item():.2f} | Perplexity: {ppl.item():.2f}")

        return loss, ppl

It runs into:

wandb: 429 encountered (Filestream rate limit exceeded, retrying in 2.3 seconds.), retrying request

Is it possible to have my rate limit increased?

Hi @demistry, thanks for writing in! If you are still facing rate limit errors, would you mind sharing the name of the affected entity so I can take a look?

Hi @luis_bergua1, the entity name is iu-cogai.

Thanks @demistry! We just doubled the limits for your entity.
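As a general note for anyone hitting the same 429, you can also reduce the request rate on the client side by aggregating metrics locally and only calling `wandb.log` every N steps. Here is a minimal sketch, assuming a simple accumulator wrapped around the logging call; the `MetricAccumulator` class, the `sink` callable, and the `every=100` interval are illustrative choices, not part of the wandb API (the stub `sink` stands in for `wandb.log`):

```python
# Sketch: accumulate metrics locally and flush an average every N steps,
# cutting the number of network-bound log calls by a factor of N.
# `sink` stands in for wandb.log; the class itself is illustrative.

class MetricAccumulator:
    def __init__(self, sink, every=100):
        self.sink = sink      # e.g. wandb.log in real training code
        self.every = every    # flush once per `every` calls to log()
        self.sums = {}
        self.count = 0

    def log(self, metrics):
        # Add this step's values into the running sums.
        for key, value in metrics.items():
            self.sums[key] = self.sums.get(key, 0.0) + value
        self.count += 1
        if self.count >= self.every:
            self.flush()

    def flush(self):
        # Emit the mean of each accumulated metric, then reset.
        if self.count:
            self.sink({k: s / self.count for k, s in self.sums.items()})
            self.sums.clear()
            self.count = 0

if __name__ == "__main__":
    calls = []
    acc = MetricAccumulator(sink=calls.append, every=100)
    for step in range(1000):
        acc.log({"train/loss": 2.0})
    print(len(calls))  # 1000 steps -> 10 flushed log calls
```

Relatedly, `wandb.log(..., commit=False)` can be used to stage several metric dicts into a single step before committing, which also reduces how often the client has to talk to the filestream endpoint.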

Thanks a lot @luis_bergua1!