Message boards : News : BHDB application (Message 2755)
Posted 27 Mar 2018 by [DPC]Blah.NET
Post:
I've been having the "exceeded disk limit" problem as well, so I decided to look a little deeper into what the application actually does while it runs. You probably already know this, but I was surprised to find that the app seems to write duplicate copies of several result files, including one set of files that can easily grow to 800-900 MB+ (data1.dat2 and data1.dat3).

Example of duplicate files (in the "slots" directory of a BHDB3 task in progress):

# sha256sum -b data*.dat*
9e4f9639efb55652dd12704758fca1e5ff84f36b1f053fd7deefdc75cd3f9fb9 *data0.dat
13fa73e139c7516e11c061fba4ce5120ee5a89e48d9f311de770aa52ee4a3d80 *data0.dat2
13fa73e139c7516e11c061fba4ce5120ee5a89e48d9f311de770aa52ee4a3d80 *data0.dat3
ee8ebfbc970f3b22336e87fc05c83166d795a7707a54bbb6d39774a89c12e5ec *data1.dat
d0af02bb3d08ae0042ce3854794bcf5f6592775cb9d9fe702c32c5ca5fa62a7f *data1.dat2
d0af02bb3d08ae0042ce3854794bcf5f6592775cb9d9fe702c32c5ca5fa62a7f *data1.dat3
cbf374f3985ddaa108b17a4f3dc7d69678f6616a64f04d7424886c2d2cf81911 *data2.dat
6c1d6146828c22d3038ee65542e9d50fb5d5ce3b2f9ad33a87cd77cbb8cdba3b *data2.dat2
6c1d6146828c22d3038ee65542e9d50fb5d5ce3b2f9ad33a87cd77cbb8cdba3b *data2.dat3

# ls -1sh data*.dat*
4.0K data0.dat
1.6M data0.dat2
1.6M data0.dat3
364K data1.dat
362M data1.dat2
362M data1.dat3
4.0K data2.dat
1.5M data2.dat2
1.5M data2.dat3
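
Just to quantify the waste: the *.dat3 files are byte-for-byte copies of the *.dat2 files (identical checksums above), so the extra disk usage is simply their combined size, which can be checked directly in the slot directory:

# du -ch data*.dat3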

I don't want to be rude, but is it really necessary to write these duplicate *.dat3 files? If these duplicates are not actually needed, I think it should be fairly simple to prevent a lot of the WU failures by fixing the code not to generate these big duplicate files.
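
For what it's worth, if both file names really are needed (for example by the validator), a hard link would keep the second name around without consuming any extra disk space. Just as an illustration of the idea (the real fix would of course belong in the application code), the filesystem-level equivalent would be:

# rm data1.dat3
# ln data1.dat2 data1.dat3

After that, data1.dat2 and data1.dat3 refer to the same data on disk, so the ~362 MB is only stored once.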
