Hi, I am training the 3D StarDist model using 20 volumes of size 128x128x128. Each volume contains about 100 cells, and the mean bounding box size for a cell is about 30. Here is an example:

I trained the model for 400 epochs, and the loss functions reach a plateau by the end of training. Here are the scores for the training data, which are close to the scores I get for a small validation set. To get this score I had to use a (1,1,1) subsample grid; with (2,2,2) the scores were near zero, which makes sense because the dataset has a low resolution to begin with. I would like to optimize the parameters to improve the scores on the training data first. For now I just want to explore the fitting power of the model and tame overfitting later. Any suggestions for parameters to explore?
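For reference, here is the one-at-a-time sweep I have in mind. The key names come from the config dump below; the candidate values are only illustrative starting points, not recommendations:

```python
# Hypothetical capacity/optimization knobs to sweep one at a time.
# Key names match the StarDist Config3D dump; values are guesses.
baseline = {
    'resnet_n_blocks': 4,               # backbone depth
    'resnet_n_filter_base': 32,         # backbone width
    'train_patch_size': (64, 64, 64),   # context per training patch
    'train_steps_per_epoch': 100,
    'train_learning_rate': 3e-4,
}

candidates = {
    'resnet_n_blocks': [4, 5, 6],
    'resnet_n_filter_base': [32, 64],
    'train_patch_size': [(64, 64, 64), (96, 96, 96)],
    'train_steps_per_epoch': [100, 200],
    'train_learning_rate': [3e-4, 1e-4],
}

# Build one config per single-parameter change from the baseline.
sweeps = [
    {**baseline, key: value}
    for key, values in candidates.items()
    for value in values
    if value != baseline[key]
]
```

Each entry in `sweeps` is a full parameter set differing from the baseline in exactly one knob, so any score change can be attributed to that knob.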

Thanks,

Abbas

I didn't use augmentation yet. Here is the full list of params:

```
{'n_dim': 3,
'axes': 'ZYXC',
'n_channel_in': 1,
'n_channel_out': 97,
'train_checkpoint': 'weights_best.h5',
'train_checkpoint_last': 'weights_last.h5',
'train_checkpoint_epoch': 'weights_now.h5',
'n_rays': 96,
'grid': (1, 1, 1),
'anisotropy': (1.0, 1.0, 1.0175438596491229),
'backbone': 'resnet',
'rays_json': {'name': 'Rays_GoldenSpiral',
'kwargs': {'n': 96, 'anisotropy': (1.0, 1.0, 1.0175438596491229)}},
'resnet_n_blocks': 4,
'resnet_kernel_size': (3, 3, 3),
'resnet_kernel_init': 'he_normal',
'resnet_n_filter_base': 32,
'resnet_n_conv_per_block': 3,
'resnet_activation': 'relu',
'resnet_batch_norm': False,
'net_conv_after_resnet': 128,
'net_input_shape': (None, None, None, 1),
'net_mask_shape': (None, None, None, 1),
'train_patch_size': (64, 64, 64),
'train_background_reg': 0.0001,
'train_foreground_only': 0.9,
'train_dist_loss': 'mae',
'train_loss_weights': (1, 0.2),
'train_epochs': 400,
'train_steps_per_epoch': 100,
'train_learning_rate': 0.0003,
'train_batch_size': 6,
'train_n_val_patches': None,
'train_tensorboard': True,
'train_reduce_lr': {'factor': 0.5, 'patience': 40, 'min_delta': 0},
'use_gpu': True}
```
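When I do add augmentation, my plan is a minimal flip/rotate augmenter in the `(img, mask) -> (img, mask)` callable form that `StarDist3D.train(..., augmenter=...)` accepts. This is just a sketch assuming cubic, near-isotropic patches (mine are 64x64x64 with anisotropy ~1.02); intensity jitter could be added later:

```python
import numpy as np

def augmenter(img, mask):
    """Random axis flips plus a random 90-degree YX rotation.

    img:  3D patch, optionally with a trailing channel axis (ZYXC).
    mask: 3D label patch (ZYX), transformed identically.
    """
    # Random flip along each of the three spatial axes.
    for ax in range(3):
        if np.random.rand() < 0.5:
            img = np.flip(img, axis=ax)
            mask = np.flip(mask, axis=ax)
    # Random 90-degree rotation in the YX plane (spatial axes 1 and 2);
    # requires square YX patch dimensions.
    k = np.random.randint(4)
    img = np.rot90(img, k, axes=(1, 2))
    mask = np.rot90(mask, k, axes=(1, 2))
    # Copy so downstream code gets contiguous arrays, not flipped views.
    return img.copy(), mask.copy()
```

Flips and 90-degree rotations preserve label geometry exactly (no interpolation), so the star-convex distances stay consistent with the mask.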