Continuing the discussion from Automated uploader to OMERO in python:
As @will-moore suggested, I continue the discussion in a new topic, with some more context.
I’m tasked with setting up OMERO across a rather large scope (multiple institutes), where people previously decided not to use a database for their microscopy data. To drive adoption, I’d like to make the import of data as seamless as possible, while gathering as much metadata as possible.
The solution I want to implement
In the institutes, the data are stored on a Windows network file system (Samba), usually with per-research-group granularity.
An experimental form-filling strategy has already been tested here to help researchers set up tags / labels for their microscopy data. I plan on using this to generate a per-user configuration file that would also sit on the network drive (maybe on a per-project basis?).
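To make the idea concrete, here is a minimal sketch of what such a per-user configuration file could look like. The JSON format, file schema, and all key names are assumptions for illustration, not an existing OMERO convention:

```python
import json

# Hypothetical per-user configuration, as it might sit next to the data
# on the network drive (schema and values are assumptions).
EXAMPLE_CONFIG = """
{
    "omero_user": "jdoe",
    "omero_group": "institute-a",
    "project": "membrane-dynamics",
    "tags": ["confocal", "live-cell"]
}
"""

config = json.loads(EXAMPLE_CONFIG)
# Tags the import daemon would apply to every image imported for this user
print(config["tags"])
```

The form-filling step would then only need to serialise its answers into this file, and the importer picks it up when it walks the share.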
I want to task a bot / daemon / cron job with parsing these disks and automatically importing the data into the DB. Note that I don’t want to treat those drives as DropBoxes, but actually import the images into a ManagedRepository.
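One way a cron job could do this is by shelling out to the `omero import` CLI, which does write into the ManagedRepository; its `-T`/`--target` option can create or reuse a Dataset by name. The helper below only builds the command (the directory layout is an assumption, and credentials / server flags are left to the daemon's session handling):

```python
from pathlib import Path

def build_import_command(image_path: Path, dataset: str) -> list[str]:
    """Build an ``omero import`` CLI invocation for one file.

    Uses the CLI's ``-T``/``--target`` option so the file lands in a
    Dataset of the given name (created if it does not exist yet).
    Login/server arguments are intentionally omitted here.
    """
    return [
        "omero", "import",
        "-T", f"Dataset:name:{dataset}",
        str(image_path),
    ]

cmd = build_import_command(
    Path("/mnt/samba/groupA/projectX/exp1/img.tif"), "exp1"
)
print(cmd)
# The daemon would then run it, e.g.: subprocess.run(cmd, check=True)
```

Since you are comfortable in Python, omero-py also lets you drive the same import programmatically instead of via subprocess, but the CLI form is the easiest to put behind cron.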
My main concern is robustness with regard to nested sub-directories on the source drive: anything deeper than the group / user / project / dataset hierarchy could perhaps be translated into tags (?)
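That translation can be sketched as a pure path-manipulation step: fix the first components of the path (relative to the group-level share root) as project and dataset, and turn every directory below that into a tag name. The layout below is an assumption about how the Samba share is organised:

```python
from pathlib import Path

def path_to_target(root: Path, image_path: Path):
    """Split a file path (relative to the share root) into an import
    target and tags: the first two components are taken as project and
    dataset, and any deeper directories become tag names.
    """
    parts = image_path.relative_to(root).parts
    project, dataset = parts[0], parts[1]
    tags = list(parts[2:-1])  # everything between dataset and file name
    return project, dataset, tags

# Example: root corresponds to one research group's share
root = Path("/mnt/samba/groupA")
img = root / "projectX" / "exp1" / "rep2" / "timelapse" / "img.tif"
print(path_to_target(root, img))
# ('projectX', 'exp1', ['rep2', 'timelapse'])
```

The daemon would then attach those tag names as TagAnnotations after the import, so no information from the nesting is lost even though the container hierarchy stays flat.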
I’m quite proficient in Python, less so in bash (though I can read it), and even less in Java.
Are there similar use cases or code snippets out there? Any pointers?
Thanks a lot!