Error when exporting to database in HPC cluster

cellprofiler

#1

Hi,

I keep getting an error when running the ExportToDatabase module on our institution's high-performance computing cluster. The main error it throws is:

Error detected during run of module ExportToDatabase
Traceback (most recent call last):
  File "/home/nel56/.conda/envs/cellprofiler/lib/python2.7/site-packages/cellprofiler/pipeline.py", line 1779, in run_with_yield
    self.run_module(module, workspace)
  File "/home/nel56/.conda/envs/cellprofiler/lib/python2.7/site-packages/cellprofiler/pipeline.py", line 2031, in run_module
    module.run(workspace)
  File "/home/nel56/.conda/envs/cellprofiler/lib/python2.7/site-packages/cellprofiler/modules/exporttodatabase.py", line 2161, in run
    self.connection.get_state())
  File "/home/nel56/.conda/envs/cellprofiler/lib/python2.7/site-packages/cellprofiler/modules/exporttodatabase.py", line 2174, in handle_interaction
    return self.handle_interaction_execute(*args, **kwargs)
  File "/home/nel56/.conda/envs/cellprofiler/lib/python2.7/site-packages/cellprofiler/modules/exporttodatabase.py", line 2188, in handle_interaction_execute
    commands.execute_all(cursor)
  File "/home/nel56/.conda/envs/cellprofiler/lib/python2.7/site-packages/cellprofiler/modules/exporttodatabase.py", line 4715, in execute_all
    execute(cursor, query, binding)
  File "/home/nel56/.conda/envs/cellprofiler/lib/python2.7/site-packages/cellprofiler/modules/exporttodatabase.py", line 275, in execute
    cursor.execute(query)
OperationalError: disk I/O error

So I think the root of the problem is the OperationalError: disk I/O error. I have no idea why it comes up, but the result is an empty database and no CellProfiler Analyst properties file, which is what I really need.

Does anyone have any ideas? I really need this to work. I have attached the pipeline from the CellProfiler GUI and the full output from the cluster.

Cheers,

Ed

Lynseydata.cpproj (159.4 KB)
FullOutput from CellProfiler Ed 1.pdf (62.9 KB)


#2

Hi Ed,
I'd check that your temp directory (e.g. the one below) has enough space, since the Cpmeasurements HDF5 files can be voluminous, and that you have write permission to it. Either could be the cause of the disk I/O error.

/localscratch/user_scratch_files/1377246.1.bigmem.q/CpmeasurementsgftGVH.hdf5
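If it helps, here's a minimal sketch of those two checks. TMPDIR is a stand-in for whatever scratch path appears in your log; substitute your job's actual directory.

```shell
# Two quick checks for a disk I/O error: free space and write permission
# on the temp directory. TMPDIR below is a placeholder for the cluster
# scratch path from the log, e.g. /localscratch/user_scratch_files/<job>/
TMPDIR="${TMPDIR:-/tmp}"

# Free space on the filesystem holding the temp directory
df -h "$TMPDIR"

# Can we actually create a file there?
if touch "$TMPDIR/.cp_write_test" 2>/dev/null; then
    rm -f "$TMPDIR/.cp_write_test"
    echo "writable"
else
    echo "not writable"
fi
```

If the filesystem is nearly full or the touch fails, that points straight at the cause.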

David


#3

Hi David,

Thanks for this. It turned out to be an error between the shared Lustre filesystem and the NFS, which meant CellProfiler couldn't write to the temp directory. Our IT administrators found a workaround using the -o command switch and copying the output back to my home directory.
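For anyone who hits this later, the workaround roughly looks like the sketch below: run CellProfiler headless with its -o output-directory switch pointed at node-local scratch, then copy the results back afterwards. All paths are placeholders (on our cluster SCRATCH would be under /localscratch), and the guard makes it a no-op where CellProfiler isn't installed.

```shell
# Placeholders: on the cluster, SCRATCH would be node-local scratch
# (e.g. under /localscratch) and DEST a directory in the home filesystem.
SCRATCH="${SCRATCH:-/tmp/cp_out}"
DEST="${DEST:-/tmp/cp_results}"
mkdir -p "$SCRATCH" "$DEST"

# Headless run: -c no GUI, -r run pipeline, -p project file, -o output dir.
# Guarded so this sketch does nothing where CellProfiler isn't installed.
if command -v cellprofiler >/dev/null 2>&1; then
    cellprofiler -c -r -p "$HOME/Lynseydata.cpproj" -o "$SCRATCH"
fi

# Copy the database and properties file back from scratch
cp -r "$SCRATCH"/. "$DEST"/
```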

Cheers!

Ed