Infinite loop, and the accuracy of the model is not increasing

import tensorflow as tf
import wandb

specific_config = {
    'learning_rate': 0.0001,
    'batch_size': 32,
    'epochs': 50,
    'layer_1': 256,
    'layer_2': 256,
    'layer_3': 128,
    'layer_4': 128,
    'nestrov': True,  # (sic) presumably 'nesterov'
    'optimizer': 'adam',
    'activation': 'relu',
    'dropout': 0.3,
    'layer_multiplier': 1,
}

def get_activation(conf):
    # Map the configured name to the matching tf.nn function
    # (the original branches returned tanh for 'elu' and elu for 'tanh')
    if conf.activation == 'relu':
        return tf.nn.relu
    elif conf.activation == 'elu':
        return tf.nn.elu
    elif conf.activation == 'tanh':
        return tf.nn.tanh
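As an aside, a dictionary lookup makes the swapped-branch mistake in an if/elif chain like the one above harder to make. A minimal pure-Python sketch of the pattern, using the textbook scalar definitions in place of the tf.nn functions:

```python
import math

# Scalar versions of the three activations, keyed by name.
# In the real model these values would be tf.nn.relu / tf.nn.elu / tf.nn.tanh.
ACTIVATIONS = {
    'relu': lambda x: max(0.0, x),
    'elu':  lambda x: x if x > 0 else math.exp(x) - 1.0,
    'tanh': math.tanh,
}

def get_activation(name):
    # Raises KeyError on an unknown name instead of silently returning None,
    # which the original if/elif chain would do.
    return ACTIVATIONS[name]
```

For example, `get_activation('relu')(-2.0)` returns `0.0`, and an unrecognized name fails loudly instead of producing `activation=None`.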

def get_optimizer(config):
    # The argument lists were cut off in the original post; the closing
    # parentheses below are a minimal reconstruction.
    if config.optimizer == 'adam':
        opt = tf.keras.optimizers.Adam(learning_rate=config.learning_rate)
    elif config.optimizer == 'sgd':
        # nesterov also needs a non-zero momentum to have any effect
        opt = tf.keras.optimizers.SGD(learning_rate=config.learning_rate,
                                      nesterov=config.nestrov)
    return opt

run = wandb.init(project='icr-competition', config=specific_config)
config = wandb.config

model = tf.keras.Sequential()
for i in range(1, 5):
    nodes = config['layer_'+str(i)] * config.layer_multiplier
    model.add(tf.keras.layers.Dense(nodes, activation=get_activation(config)))
# NOTE: softmax over a single unit always outputs 1.0; sigmoid is the
# usual activation for a single binary output.
model.add(tf.keras.layers.Dense(1, activation=tf.nn.softmax))

model.compile(optimizer=get_optimizer(config),
              # the loss and metrics were cut off in the original post;
              # binary cross-entropy is assumed for the single-output model
              loss='binary_crossentropy',
              metrics=['accuracy'])

# inputs and output are not defined anywhere in the posted snippet
history = model.fit(inputs, output,
                    batch_size=config.batch_size,
                    epochs=config.epochs)
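One thing worth checking in the snippet above: tf.nn.softmax applied to a single output unit always returns 1.0 (the softmax of any one-element vector is [1.0]), so the prediction never changes and accuracy stays flat; a sigmoid is the usual choice for one binary output. A quick plain-Python check of the math:

```python
import math

def softmax(logits):
    # Standard softmax: exponentiate, then normalise so the outputs sum to 1.
    exps = [math.exp(z) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Softmax of a single logit is exactly 1.0 regardless of the logit's value...
for z in (-5.0, 0.0, 7.3):
    assert softmax([z]) == [1.0]

# ...while sigmoid actually varies with the logit.
assert sigmoid(0.0) == 0.5
assert sigmoid(5.0) > 0.99
```

With the one-unit softmax, the model's output is constant, so neither the loss gradient through that layer nor the accuracy can move, which matches the "accuracy not increasing" symptom.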

I am getting this issue when I run the cell. It would be very helpful if someone could tell me what I am doing wrong here.

Hi @nataltiger26, thanks for writing in! The inputs from the code are missing, so I wasn't able to reproduce it. There doesn't seem to be an error related to wandb, though.

Could you please run the command wandb disabled in the CLI to make the calls to wandb no-ops, and check whether you still get the same convergence? You can activate wandb again by executing the wandb enabled command.

Hi @nataltiger26, just checking in here to see if the issue still persists for you, and whether you tried disabling wandb to see if you get the same accuracy in your training? Thanks!

Yeah, that issue still persists (getting the same accuracy) even after disabling Wandb.

Thank you @nataltiger26 for confirming this; the issue doesn't seem to be related to the use of wandb, then. I will close this support ticket on our side.

However, if you could provide fully reproducible code, hopefully someone from our Community here might be able to help you further. You may also try raising this issue in another forum related to Keras, TensorFlow, or ML in general.

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.