1) Message boards : News : No tasks (Message 4552)
Posted 28 Nov 2020 by VietOZ
Post:
Glad to hear you're OK and home, Krzysztof! Please get some rest and get well soon! That's the most important thing right now. Other things can wait, no problem.
2) Message boards : Number crunching : Block ip for bunkering? (Message 4195)
Posted 6 May 2020 by VietOZ
Post:
I have mine uploaded already :) So no stress for me ... at least when the event starts
3) Message boards : Number crunching : I abort all the WU ! non sense to let crunch if updonload is impossibel (Message 4130)
Posted 23 Mar 2020 by VietOZ
Post:
Reading your posts, I really want the stuff you're smoking
4) Message boards : Number crunching : Upload/download problems and SETIBZH/Sprint (Message 4124)
Posted 22 Mar 2020 by VietOZ
Post:
Kudos to you, Krzysztof!!
Communication is there, action is there. What's not to like?
I'll be doing some other stuff, but I'll definitely be coming back to this project.
Thank you!
5) Message boards : Number crunching : Upload/download problems and SETIBZH/Sprint (Message 4114)
Posted 21 Mar 2020 by VietOZ
Post:
Firstly, I apologise for the recent problems. There were two of them:
1. I released a new batch of ULX work units which generate large result files AND large temporary files. At first I noticed only the first issue and cancelled the tasks when I saw what was happening. Then I made changes to the outgoing template, generated a second batch and was happy... Wrong. The second issue hit loads of crunchers, leaving them unable to send back their results.
I had been told that this application generates about 25MB of result data, so I created a result template allowing up to 50MB; unfortunately some tasks produced 66-70MB of data...
Finally, I cancelled most of the tasks.

2. The second problem was the size of the results in terms of bandwidth. We have a 1GB/s symmetric line (approx. 250MB/s) used for the server and the NFS connection to the storage machine. This leaves about 100MB/s of usable bandwidth for results. After a few simple calculations I realised that this is not enough to serve a large number of connections with data of this size.

Finally:
Next week I will grant credit to all users who finished tasks but did not receive credit due to the task cancellations or to running out of disk space on their machines. I just need to deal first with some tasks on the server that are already started.
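
The "result template allowing up to 50MB" above is presumably BOINC's max_nbytes cap in the output template; here is a minimal sketch of that failure mode, with the file sizes taken from the quoted post and everything else made up for illustration:

    # Illustrative sketch of the size-cap failure described above.
    # The 50MB limit is assumed to be the max_nbytes field of the BOINC
    # output template; the sizes are the ones quoted in the post.
    MAX_NBYTES = 50 * 1024 * 1024        # template cap: 50MB

    for size_mb in (25, 66, 70):         # result sizes mentioned in the post (MB)
        size_bytes = size_mb * 1024 * 1024
        if size_bytes <= MAX_NBYTES:
            print(f"{size_mb} MB result: fits under the cap, uploads fine")
        else:
            print(f"{size_mb} MB result: over the cap, can't be sent back")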


No worries Krzysztof! Sometimes sheit happens.
I appreciate the explanation, and for me that's what matters. I wasn't gonna post anything and just crunch on. But if someone frames sheit on me and my friends, I ain't gonna stay quiet.
6) Message boards : Number crunching : Upload/download problems and SETIBZH/Sprint (Message 4111)
Posted 21 Mar 2020 by VietOZ
Post:
Some of the files are only 20 bytes, so it's not the bandwidth that is the issue here but the # of connections and database transactions going on in the background. The website is loading fine, another clue that it's not a bandwidth issue.


https://universeathome.pl/universe/server_status.php

Look at that page and then look at what you've just said. Tell me, what's wrong?
    Download server
    Upload server
    Scheduler
    feeder


Also, explain why the server is still slow at this point, when all the "hoarded" tasks were credited almost 24hrs ago.

7) Message boards : Number crunching : Upload/download problems and SETIBZH/Sprint (Message 4108)
Posted 21 Mar 2020 by VietOZ
Post:
Wouldn't it be a good idea to limit the number of WUs per CPU core and per host?
e.g. SETI@Home only gives out 100/150 WUs per CPU or GPU.
For people running 24/7 this makes no difference, but it could reduce extremely large bunkers.


There are still ways around it. The top SETI guys can get up to 6k WUs per instance, and they all had to do it to get through the downtime every Tuesday.
A WU limit only impacts the regular users, not the bunker guys (note: they also run 24/7). Why?
If the server has a problem or needs downtime for maintenance, the regular users won't have enough WUs, while the bunker guys are still working through their cache. Bunkerers always find ways to bunker. A limit only hurts the regular users and makes things more inconvenient. Some rough numbers below illustrate the point.
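
A back-of-the-envelope sketch of why a per-host cap mostly bites the single-client user; every number here (the 150-task cap, core count, task length, outage length, instance count) is made up purely for illustration:

    # Hypothetical numbers only: how long a cache lasts under a per-host WU cap.
    def cache_hours(tasks_cached, cores, hours_per_task):
        """Hours a host can keep crunching without contacting the server."""
        return tasks_cached * hours_per_task / cores

    CAP = 150             # assumed per-host limit, like the 100/150 quoted above
    CORES = 16            # assumed host size
    HOURS_PER_TASK = 1.0  # assumed average task runtime
    OUTAGE = 24           # assumed maintenance outage, in hours

    regular = cache_hours(CAP, CORES, HOURS_PER_TASK)
    # Running several client instances on the same box multiplies the cap,
    # so the limit barely constrains a determined bunkerer.
    bunkerer = cache_hours(CAP * 5, CORES, HOURS_PER_TASK)

    print(f"regular host runs dry after ~{regular:.0f}h of a {OUTAGE}h outage")
    print(f"5-instance host runs dry after ~{bunkerer:.0f}h")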
8) Message boards : Number crunching : Upload/download problems and SETIBZH/Sprint (Message 4103)
Posted 21 Mar 2020 by VietOZ
Post:
People hoarding thousands of tasks for weeks increase the load on the servers.
https://stats3.free-dc.org/stats.php?page=user&proj=uni&name=56939
https://stats3.free-dc.org/stats.php?page=user&proj=uni&name=48820
https://stats3.free-dc.org/stats.php?page=user&proj=uni&name=49710


I thought you were more knowledgeable than that. What does that have to do with the server load?
FYI, the bunkers we "hoarded" had been uploaded to the server long before the Sprint started. We just waited for the right time to report them and collect the credit. There was no uploading from us whatsoever when all this mess happened.
If you read this thread https://universeathome.pl/universe/forum_thread.php?id=507
The admin released a new batch of ULX work; it didn't go right, so he decided to cancel it. Somehow, that also cancelled the current batch of BHSpin, which is why we got a lot of invalids ... "Completed, can't validate" ... because those WUs never made it to the wingmen to run.
Then he put out a new batch of ULX again without generating BHSpin. That left the server with only ULX work, so users were forced to download ULX tasks, which have big result files (5MB - 50MB).
When users started to return those results, that's when the server couldn't handle it and became sluggish.
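
To put rough numbers on that, using the ~100MB/s usable bandwidth and ~25MB average ULX result size from Krzysztof's post above (the batch size and the concurrent-upload count are just guesses):

    # Rough sketch of why a wave of large ULX results clogs the upload pipe.
    # 100 MB/s and 25 MB come from the admin's post; the rest is assumed.
    BANDWIDTH_MBPS = 100       # usable MB/s for result uploads
    AVG_RESULT_MB = 25         # typical ULX result size
    BATCH_RESULTS = 100_000    # hypothetical number of results coming back
    CONCURRENT = 500           # hypothetical simultaneous uploads

    total_gb = AVG_RESULT_MB * BATCH_RESULTS / 1024
    drain_hours = AVG_RESULT_MB * BATCH_RESULTS / BANDWIDTH_MBPS / 3600
    per_client_kbs = BANDWIDTH_MBPS * 1024 / CONCURRENT

    print(f"~{total_gb:.0f} GB to move, ~{drain_hours:.1f} h at full line speed")
    print(f"each of {CONCURRENT} uploads gets only ~{per_client_kbs:.0f} KB/s")

At a couple of hundred KB/s, a 25MB file takes around two minutes, so transfers pile up and retry, which is roughly what "sluggish" looks like from the client side.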

Don't create more hate towards people who bunker. Bunkers have been widely used in competitions, you know it ... I know it. Not everyone likes it, and I respect that. But to blame this sheit on us is kinda low. There are ways to do it, and if you do it right ... the server hardly feels any kind of hit at all.
So we've already released our bunkers, why is the server still sluggish? Explain.
9) Message boards : Number crunching : Few tasks downloading and no connections? (Message 4098)
Posted 20 Mar 2020 by VietOZ
Post:
"Hamsters" means the server, not the FB participants. Please don't take it out of context and create unnecessary misunderstanding.






