Message boards : Number crunching : Too many WUs
Joined: 21 Feb 15 Posts: 9 Credit: 1,503,908 RAC: 0
Hi! When I tried to get new work for the ULX app this morning there wasn't any available yet. I forgot to set the project back to "no new work" and left for a few hours. Now I've discovered that over 1,000 WUs were downloaded to this host in the last few hours... :-O That's far too many, of course; I will have to cancel most of them. Please consider introducing a limit per core, per day, or something similar.
Life is Science, and Science rules. To the universe and beyond. Proud member of BOINC@Heidelberg - My BOINC-Stats
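For reference, the stock BOINC scheduler already has server-side knobs for exactly this kind of limit in the project's config.xml. A minimal sketch, using option names from the standard BOINC server (the values are purely illustrative and the exact semantics can vary between server versions):

    <config>
        <!-- jobs a host may have in progress at once, scaled by its CPU count -->
        <max_wus_in_progress>4</max_wus_in_progress>
        <!-- jobs sent to a host per day, also scaled by CPU count -->
        <daily_result_quota>50</daily_result_quota>
        <!-- jobs handed out per scheduler request -->
        <max_wus_to_send>10</max_wus_to_send>
    </config>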
Joined: 4 Feb 15 Posts: 847 Credit: 144,180,465 RAC: 0
OK, I will. Can you let me know what the Manager says about time left (predicted time) for tasks that haven't started yet, please? I've had a real problem with setting a proper value for the rsc_fpops_est variable recently...
Krzysztof 'krzyszp' Piszczek - Member of Radioactive@Home team - My Patreon profile - Universe@Home on YT
Joined: 21 Feb 15 Posts: 9 Credit: 1,503,908 RAC: 0
"Can you let me know what the Manager says about time left (predicted time) for tasks that haven't started yet, please?" Well, the estimated time shown here for my tasks is only 1 minute 28 seconds. Far too little, of course, and since I have a cache setting of 10 days it's logical that the Manager downloaded so many. But the tasks certainly won't finish that quickly - I just started some of the new WUs, and after a few minutes I'd estimate the run time at around 3-4 hours on my host.
Life is Science, and Science rules. To the universe and beyond. Proud member of BOINC@Heidelberg - My BOINC-Stats
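To put numbers on that: for an app the client has no runtime history for, the initial estimate is roughly rsc_fpops_est divided by the host's effective FLOPS, so an 88-second estimate against a real runtime of 3-4 hours (about 12,600 seconds) suggests rsc_fpops_est is low by a factor of roughly 140. A minimal sketch of where that value lives in a BOINC workunit input template, with placeholder numbers rather than the project's real settings:

    <workunit>
        <!-- estimated FP operations per task; drives the client's initial runtime estimate -->
        <rsc_fpops_est>4.5e13</rsc_fpops_est>
        <!-- hard upper bound; tasks exceeding it are aborted, so leave a generous margin -->
        <rsc_fpops_bound>4.5e14</rsc_fpops_bound>
    </workunit>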
Joined: 10 Mar 15 Posts: 11 Credit: 309,361 RAC: 0
According to the BOINC Manager, my BHspin tasks are estimated at 3:25:51 and my ULX tasks at 3:02:51. My last 20 BHspin tasks have actually averaged 3:05:00, and my last 18 ULX tasks 1:48:00. Not too bad. The problem is that projects don't seem to have the option of setting FPOPS on a per-application basis, which would be useful. Hence on PrimeGrid I get estimates of 400+ hours for GPU tasks that actually take about 54 hours. And now I can't get any more work because I've reached a limit, so I guess you did set one.
Joined: 4 Feb 15 Posts: 847 Credit: 144,180,465 RAC: 0
Our project does use per-app FPOPS settings. My problem is that all four of my hosts show completely nonsensical estimated times...
Krzysztof 'krzyszp' Piszczek - Member of Radioactive@Home team - My Patreon profile - Universe@Home on YT
Joined: 21 Feb 15 Posts: 12 Credit: 307,800,598 RAC: 0
All my machines are now running in high-priority mode because the cached WUs show 30 days required to complete a single WU. You need to fix the estimated time. For example, I have one Linux machine with 12 WUs in progress, all started at the same time. Eleven of them say they will finish in 40 minutes, while the twelfth says it needs 30 days and 3 hours. In reality, all of them should finish in about 1 1/2 hours total. The 11 WUs still in the cache are estimated to require 339 days to complete. While the machines are in this state I cannot run other projects.
Joined: 21 Feb 15 Posts: 9 Credit: 1,503,908 RAC: 0
Meanwhile I let one of my hosts get new work and checked how much it received. I got 56 WUs in total, almost always 7 at a time. Since this is an 8-core machine, I assume you set the limit to 7 per core - is that right? I think that's still too much. What about 2, 3 or at most 4 per core? That would fit better.
Life is Science, and Science rules. To the universe and beyond. Proud member of BOINC@Heidelberg - My BOINC-Stats
Joined: 21 Feb 15 Posts: 12 Credit: 307,800,598 RAC: 0
"Meanwhile I let one of my hosts get new work and checked how much it received." No, 7 WUs per thread is nowhere near enough. On my machines, 7 WUs last only 10.5 hours on a single thread. If your processor can't handle the load, then control the number of WUs you get with the "additional work buffer" setting in the BOINC Manager. You said you have it set to 10 days, so you aren't even getting the amount of work you are asking for.
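For a sense of scale: with per-task estimates of about 88 seconds, a 10-day buffer on an 8-core host translates into roughly 10 x 86,400 x 8 / 88 ≈ 78,000 tasks' worth of requested work (ignoring resource share and on-fraction), so the client will keep asking until a server-side limit, not the buffer setting, stops it.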
Joined: 21 Feb 15 Posts: 9 Credit: 1,503,908 RAC: 0
I have to admit that I withdraw my suggestion of lowering the limit, since my 8-core host is relatively new and I'm only now developing a feeling for how the work flows. 7 WUs per core is indeed acceptable (for me). "You said you have it set to 10 days, so you aren't even getting the amount of work you are asking for." Well, I never said I wanted a full cache. Also, I'm not running this project exclusively, and the 10-day setting is simply a value, based on years of experience, that gets me enough work from every project I currently crunch on.
Life is Science, and Science rules. To the universe and beyond. Proud member of BOINC@Heidelberg - My BOINC-Stats
Joined: 21 Feb 15 Posts: 12 Credit: 307,800,598 RAC: 0
Then don't ask for one. The project was giving you what you asked for and what others can use. It isn't the responsibility of the project to "protect you from yourself". If I want a full cache, then I should be able to get one. If I don't want it full, then I reduce the size of my request.
Joined: 28 Feb 15 Posts: 253 Credit: 200,562,581 RAC: 0
I had quite a different problem with three ULX tasks that I downloaded yesterday, but it is now fixed. The initial estimate was 44 days on my Haswell machine (Win7 64-bit with a fresh install of BOINC 7.6.16). But that quickly dropped to a more accurate value once the first two of them were running. In fact, the first two completed in 7 hours 50 minutes, and the estimates for the remaining one (and three more downloaded early this morning) are 5 hours 21 minutes, which is much more reasonable. So I don't think you need to change anything on ULX for me. But I have not gotten any BHSpin on this machine yet, though I did on another machine a few weeks ago; the estimates were too short, as usual, so I downloaded too many. Hopefully that won't happen this time. EDIT: I just downloaded nine BHSpin 0.03 tasks, and the estimates are 3 hours 13 minutes, which looks very good to me.