Tune logs

I use tune.Tuner to train my custom parallel PettingZoo environment, but there is data inside this environment that I want to record as logs in wandb. Can anyone help me configure it?

Hi @alyssonpereira41, here are our docs on using wandb with Ray Tune. Are you using the WandbLoggerCallback currently?
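For reference, a minimal sketch of wiring it up looks roughly like this (import paths can vary a bit between Ray versions, and the trainable, env name, and wandb project are placeholders for your own setup):

```python
from ray import air, tune
from ray.air.integrations.wandb import WandbLoggerCallback

tuner = tune.Tuner(
    "PPO",  # or your own registered trainable / AlgorithmConfig
    param_space={"env": "my_pettingzoo_env"},  # placeholder env name
    run_config=air.RunConfig(
        callbacks=[
            WandbLoggerCallback(
                project="my-project",  # placeholder wandb project
                log_config=True,
            )
        ],
    ),
)
results = tuner.fit()
```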

Yes, I had already found the documentation and applied it, but I have another problem: the values coming back from my environment are not consistent with the ones RLlib reports by default. For example, when I record the reward metrics, shouldn't they be the same as the rewards RLlib returns?
See the images below to get an idea of what I'm talking about.


I collect data as follows
```python
from ray.rllib.algorithms.callbacks import DefaultCallbacks
from ray.rllib.evaluation import RolloutWorker
from ray.rllib.evaluation.episode import Episode


class MyCallbacks(DefaultCallbacks):

    def on_episode_step(self, *, worker: RolloutWorker, base_env, policies=None,
                        episode: Episode, env_index=None, **kwargs):
        for agent_id in episode.get_agents():
            # Last info dict the environment returned for this agent.
            agent_info = episode.last_info_for(agent_id)

            # episode.custom_metrics[f"reward agente {agent_id} pela função"] = episode.agent_rewards[agent_id]
            episode.custom_metrics[f"Reward agente {agent_id}"] = agent_info["rw"]
            episode.custom_metrics[f"Lucro agente {agent_id}"] = agent_info["rw_pr"]
            episode.custom_metrics[f"Variabilidade agente {agent_id}"] = agent_info["rw_va"]
            episode.custom_metrics[f"Sustentabilidade agente {agent_id}"] = agent_info["rw_su"]
            episode.custom_metrics[f"Ocupacao Yard agente {agent_id}"] = agent_info["yard"]
```
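For context, the callback is attached to the algorithm config roughly like this (a minimal sketch; the PPO config and env name are placeholders for my actual setup):

```python
from ray.rllib.algorithms.ppo import PPOConfig

config = (
    PPOConfig()
    .environment(env="my_pettingzoo_env")  # placeholder for my registered env
    .callbacks(MyCallbacks)
)
```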
and the per-agent info is saved in my environment as follows:

```python
info[f"{agent}"] = {"rw": self.reward[f"{agent}"],
                    "rw_pr": self.rw_pr,
                    "rw_va": self.rw_va,
                    "rw_su": self.rw_su,
                    "VA": self.variabilidade,
                    "SU": self.sustentabilidade,
                    "F": self.F,
                    "acoes": self.acoes,
                    "atrasos_reais": self.atrasos_reais,
                    "acao_on_state_plan": self.acao_on_state_plan,
                    "carga_on_state_plan": self.carga_on_state_plan,
                    "patio_on_state_plan": self.patio_on_state_plan,
                    "yard": (self.YA[agent].cont / self.YA[agent].Y) * 100
                    }
```
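That `info` dict is what my parallel environment returns from `step()` for each agent, which is what `episode.last_info_for(agent_id)` reads in the callback. Roughly (a sketch with the rest of my step logic omitted; the exact return signature depends on the PettingZoo version):

```python
def step(self, actions):
    # ... environment transition logic ...
    info = {}
    for agent in self.agents:
        info[f"{agent}"] = {...}  # built as shown above
    return observations, rewards, terminations, truncations, info
```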

And here is the second graph, since you can't put two images in one post.

Hi @alyssonpereira41 sorry for the delayed response. Is this still an issue?

I’m not sure I’m following the issue. Is the plot incorrect or is what you are logging incorrect? Do you mind helping clarify this a bit?

Thank you,
Nate