Message boards : Number crunching : Large upload Files
Joined: 4 Feb 15, Posts: 17, Credit: 133,036,975, RAC: 0
Are the upload files always going to be so large? They are running between 25 and 50 MB, which is huge when most projects' uploads are well under 1 MB.
Proud Founder and member of BOINC@AUSTRALIA
Joined: 4 Feb 15, Posts: 847, Credit: 144,180,465, RAC: 0
No. This application produces large result files, but the size changes quite often depending on the starting parameters. For the current batch, and probably a few more, it will stay the same; then we will start testing other settings.
Krzysztof 'krzyszp' Piszczek - Member of Radioactive@Home team - My Patreon profile - Universe@Home on YT
Joined: 21 Feb 15, Posts: 4, Credit: 1,267,748, RAC: 0
Still, couldn't the upload files be optimized somehow? 50 MB is far too large for a normal internet connection.
Joined: 4 Feb 15, Posts: 24, Credit: 7,035,527, RAC: 0
He has already answered that: "I know that the result files are large, but there is nothing I can do about it at the moment (the files are compressed; before compression they are up to 400 MB together)."
Joined: 23 Feb 15, Posts: 11, Credit: 183,472,470, RAC: 0
I have set all machines to No New Work. I have over 500 files uploading that have passed the deadline, even though they finished crunching days before the deadline. If my i7-4930 machines on Windows 7, with unlimited data on a cable connection, can't handle this, then this project is not for me.
Joined: 23 Feb 15, Posts: 11, Credit: 183,472,470, RAC: 0
Krzysztof, please talk to your users and tell them in the News section that your project produces large upload files that might not finish uploading before the deadline. Better still, suspend issuing new work units until this can be fixed. Use your test server, not the production server, to try out solutions, because the current setup will not work for most people. No other project has this problem. Good luck with it. The production project is relatively new and problems are expected, but fixing them is what the test server is for.
Joined: 23 Feb 15, Posts: 5, Credit: 10,017,978, RAC: 0
I would recommend that crunchers with slower internet speeds or limited bandwidth download fewer tasks. I have noticed that a small portion of users are downloading a disproportionate number of WUs; anecdotally, I came across a computer that had downloaded about 1,000 WUs out of a 16,000-WU batch! Krzysztof, it should be possible to limit the number of WUs sent to individual computers, for example to 2x or 3x the number of cores (see the sketch below). This would keep computers from piling up WUs while waiting to send result data back to the server.
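For reference, a minimal sketch of the kind of scheduler limits a standard BOINC server exposes in its config.xml. The values below are illustrative only, and the exact option names and semantics should be checked against the BOINC server documentation for the version the project runs:

```xml
<boinc>
  <config>
    <!-- Cap how many results a host may have in progress at once
         (the scheduler scales this by the number of CPUs). -->
    <max_wus_in_progress>3</max_wus_in_progress>

    <!-- Cap how many results are handed out per scheduler request. -->
    <max_wus_to_send>10</max_wus_to_send>

    <!-- Cap how many results a host may receive per day. -->
    <daily_result_quota>16</daily_result_quota>
  </config>
</boinc>
```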
Joined: 4 Feb 15, Posts: 24, Credit: 7,035,527, RAC: 0
> I would recommend that crunchers with slower internet speeds or limited bandwidth download fewer tasks. I have noticed that a small portion of users are downloading a disproportionate number of WUs; anecdotally, I came across a computer that had downloaded about 1,000 WUs out of a 16,000-WU batch!

I agree with that; it is obviously the underlying cause of some of the problems described above. Calculation time per WU lies between 3 and 5 hours, with a four-day deadline. For uploading a single result to take longer than the deadline would require nearly impossibly low upload rates (a rough calculation follows), impossible because such connections would simply be dropped, as I have already tested.
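To make that argument concrete, a rough back-of-the-envelope calculation using only numbers mentioned in this thread (about 50 MB per result, a 4-day deadline, over 500 queued results, a 1.4 Mbit/s upstream), treated here as assumptions: a single 50 MB result spread over 4 days (345,600 s) needs only about 145 bytes/s, roughly 1.2 kbit/s sustained, so one result alone can hardly miss its deadline on upload speed. But 500 queued results at 50 MB each are about 25 GB, which over the same 4 days needs roughly 72 kB/s, close to 580 kbit/s sustained, a large share of a 1.4 Mbit/s upstream even before any server-side slowdown. The problem scales with the size of the backlog, not with a single file.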
Joined: 21 Feb 15, Posts: 64, Credit: 65,733,511, RAC: 0
On the other hand: with a project that rarely has WUs available (typically 2 days of work followed by 2 days of none), volunteers tend to make use of the buffering possibilities. It's a very young project; admins, devs and volunteers all need to learn how to handle this in the best possible way. Keep in mind that there is always the possibility of running a backup project.
Joined: 4 Feb 15, Posts: 847, Credit: 144,180,465, RAC: 0
Firstly, I have to mention again that the size of the result files isn't a bug, and it is the reason why I limit the quantity of new WUs - just to give the server the chance to handle uploads correctly. If somebody 'bunkers' work units, there is really nothing I can do about it. I will post information about the file sizes on the main website now.
Krzysztof 'krzyszp' Piszczek - Member of Radioactive@Home team - My Patreon profile - Universe@Home on YT
Joined: 24 Feb 15, Posts: 32, Credit: 609,507,165, RAC: 0
As a big supporter of the test project, I was very disappointed to see that the unmanageable file sizes are not going to change in the foreseeable future. I have removed this project from my servers and moved them to Einstein. Good luck with the project, but I will no longer participate in it.
Joined: 23 Feb 15, Posts: 11, Credit: 183,472,470, RAC: 0
Users can only do two things: 1. point out an apparent problem, and 2. switch to another project. A number of us have now pointed out that the size of the upload files is unmanageable at the user's end.

In my case, my cache is set at two days, although the maximum is 10. The BOINC software manages that quite well and only downloads more work when it thinks it can handle that work in the allotted time. Please note that the BOINC project software also gives you the option of limiting the number of units sent, beyond which the user gets a message that the request exceeds the daily limit and receives no new work units.

As regards the slowness of the upload: I am on cable, and the techs at Optus, my cable provider, tell me that my upload speed is 1.4 Mbps and that the problem is at your end, as can be shown by pinging your server. It is not capable of receiving such large uploads from many sources at once; this can be likened to a DoS attack. Please note also that Universe is the only project that has this problem; I can upload to any of the other 50+ active projects in a matter of minutes, not hours.

Finally, since you have indicated that you cannot do anything about this in the foreseeable future, I am also setting my machines to No New Work and will crunch some other, more suitable project. If and when the project is fixed I might return, for I like the science behind it. Good luck. Keep on crunching and have a nice day...
Joined: 24 Feb 15, Posts: 32, Credit: 609,507,165, RAC: 0
> Users can only do two things: 1. point out an apparent problem, and 2. switch to another project. A number of us have now pointed out that the size of the upload files is unmanageable at the user's end. In my case, my cache is set at two days, although the maximum is 10. The BOINC software manages that quite well and only downloads more work when it thinks it can handle that work in the allotted time. Please note that the BOINC project software also gives you the option of limiting the number of units sent, beyond which the user gets a message that the request exceeds the daily limit and receives no new work units. As regards the slowness of the upload: I am on cable, and the techs at Optus, my cable provider, tell me that my upload speed is 1.4 Mbps and that the problem is at your end, as can be shown by pinging your server. It is not capable of receiving such large uploads from many sources at once; this can be likened to a DoS attack. Please note also that Universe is the only project that has this problem; I can upload to any of the other 50+ active projects in a matter of minutes, not hours.

Thanks Pete. In my case I have the maximum bandwidth AT&T will give a residential customer, and my cache was set to 0.01, and still it choked my network. I will be interested to see what others think. I too like the science and will see what happens here, but I cannot participate.
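As an aside for anyone whose uploads saturate their connection: the BOINC client's cc_config.xml can limit how many file transfers run at once, which is sometimes enough to keep a link usable while large results drain out. A minimal sketch with illustrative values (option names per the standard BOINC client; double-check against the documentation for your client version):

```xml
<cc_config>
  <options>
    <!-- Total simultaneous file transfers across all projects. -->
    <max_file_xfers>4</max_file_xfers>
    <!-- Simultaneous transfers per project, so one project with
         large result files cannot monopolize the upstream link. -->
    <max_file_xfers_per_project>1</max_file_xfers_per_project>
  </options>
</cc_config>
```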
Joined: 4 Feb 15, Posts: 24, Credit: 7,035,527, RAC: 0
Hey friends, I am beginning to suspect that the problem is somehow related to some of the hops between your location and the destination server, and that it only became noticeable because of the size of the files. At my location - Germany - I can use the maximum of my available upload rate, and I have had none of the problems you describe above, which leads me to this conclusion. However, Pete has already pointed out the right thing: as long as the project requires uploading files like these, they have to be crunched by people who are in a lucky upload situation like mine, which means there are other projects that those people's cores are then not working for. There is enough work for everyone.
Joined: 23 Feb 15, Posts: 5, Credit: 10,017,978, RAC: 0
I am in California using a low-end U-verse (AT&T) connection. I have my BOINC buffer set to 0.3 minimum days and 0.1 additional days. I see that the uploads take around 7 minutes for the largest file, but I am not seeing any overall throughput problem, even when running 8 threads. Cruncher Pete, a possible hurdle may be that you are using 9 computers over the same ISP connection.

The large uploads make this project more of a turtle race than a hare race. All projects have their quirks and need to be fiddled with to run well on one's system. For instance, PrimeGrid's LLR runs hot, so I limit the number of active threads; GPUGrid has a 24-hour bonus, so I use 0.01/0.01 buffers, giving me only one task at a time and no idle tasks.
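For anyone who wants to pin down a small buffer like that locally rather than through the website preferences, the standard BOINC client reads the same two values from global_prefs_override.xml, and an upload rate cap can also help keep the link responsive. A minimal sketch with the numbers mentioned above (the bandwidth value is purely illustrative; check the option names for your client version):

```xml
<global_preferences>
  <!-- Keep roughly 0.3 days of work on hand... -->
  <work_buf_min_days>0.3</work_buf_min_days>
  <!-- ...plus at most 0.1 additional days of cache. -->
  <work_buf_additional_days>0.1</work_buf_additional_days>
  <!-- Optional: cap upload bandwidth (bytes per second) so large
       result files do not saturate the connection. -->
  <max_bytes_sec_up>100000</max_bytes_sec_up>
</global_preferences>
```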
Joined: 21 Feb 15, Posts: 4, Credit: 1,267,748, RAC: 0
... Krzysztof, that is not really true. Many projects limit the number of units a cruncher can download; of those that I currently crunch, SRBase for example. So that option must be available to you as well. Limit each cruncher to a reasonable number of WUs that can be downloaded and crunched concurrently, and that will take care of the hoarding and, subsequently, of the upload troubles too.
Joined: 4 Feb 15, Posts: 24, Credit: 7,035,527, RAC: 0
That's true, there must be such an option; AFAIK the Gerasim@home crew uses it.
Joined: 4 Feb 15, Posts: 12, Credit: 73,458,501, RAC: 0
> That's true, there must be such an option; AFAIK the Gerasim@home crew uses it.

Gerasim uses a custom BOINC on the server side. They rewrote it to run on Windows for some reason, and in doing so they changed many features.
Dublin, California - Team: SETI.USA
Joined: 4 Feb 15, Posts: 847, Credit: 144,180,465, RAC: 0
> That's true, there must be such an option; AFAIK the Gerasim@home crew uses it.

Yes, there is, but not per user - per host and per thread... Unfortunately, too many restrictions of that kind cause problems with keeping computers fed with work.

There is also another problem: the server is in Poland, and I can confirm that trans-Atlantic connections can sometimes be really slow. I have had a few VPSes in the USA, and on two of them (with 100 Mb/s lines) the average transfer was about 100 kB/s, while the other got full line speed. Maybe that is where the problem lies?

Anyway, as you know from the test server, the result file sizes vary depending on the application version, and every new app (which is based on the previous one and its results) has different size requirements. I can't predict how it will be with the next app, but I know that we will try to minimise the file sizes (without losing any important data).
Krzysztof 'krzyszp' Piszczek - Member of Radioactive@Home team - My Patreon profile - Universe@Home on YT