Message boards : Number crunching : Double your task throughput on Linux

Profile Tom M

Send message
Joined: 18 Jul 17
Posts: 138
Credit: 1,379,173,617
RAC: 0
Message 5442 - Posted: 15 May 2022, 23:10:03 UTC - in response to Message 5438.  

Yes, very much so :(

I hope this contest will end soon...

Me too.
Try this in your cc_config.xml file.
<max_file_xfers_per_project>8</max_file_xfers_per_project>
That goes in the <options> section.
Then shut down BOINC, wait 20-30 seconds, and restart it.

I have been seeing a slow trickle of uploads without manually hitting Update. If it works for you, you might get more downloads...
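For anyone unsure where that line goes, here is a minimal cc_config.xml sketch (only the option from this post is shown; on a typical Linux install the file lives in the BOINC data directory, e.g. /var/lib/boinc-client):

<!-- cc_config.xml: only the transfer option discussed above -->
<cc_config>
  <options>
    <!-- allow up to 8 simultaneous file transfers per project -->
    <max_file_xfers_per_project>8</max_file_xfers_per_project>
  </options>
</cc_config>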
Tom M
A proud member of the OFA (Old Farts Assoc.)
ID: 5442
arcturus

Send message
Joined: 24 Mar 22
Posts: 17
Credit: 137,338,000
RAC: 0
Message 5589 - Posted: 20 May 2022, 14:48:34 UTC
Last modified: 20 May 2022, 14:56:21 UTC

If you will, please look at this work unit:

https://universeathome.pl/universe/workunit.php?wuid=102103724

which shows that my computer is slower than one five generations older with a stock speed 2 GHz lower, despite both having the same OS and libraries. Could it be because mine is running Linux as a VirtualBox guest under Windows (the host) while the other is running Linux standalone? My machine is running 4.6 GHz all-core. I have a hard time believing the other machine is overclocked enough to explain it.

Edit: Windows 11 BTW
ID: 5589
Jim1348

Send message
Joined: 28 Feb 15
Posts: 253
Credit: 200,562,581
RAC: 0
Message 5596 - Posted: 20 May 2022, 17:23:09 UTC - in response to Message 5589.  

If you will, please look at this work unit:

https://universeathome.pl/universe/workunit.php?wuid=102103724

Did you cite the right one, or did you change it? It looks OK to me now.
ID: 5596
rsNeutrino

Send message
Joined: 1 Nov 17
Posts: 29
Credit: 291,940,933
RAC: 0
Message 5597 - Posted: 20 May 2022, 18:00:01 UTC - in response to Message 5589.  
Last modified: 20 May 2022, 18:01:57 UTC

Could it be mine is running VirtualBox in Windows (host) with Linux as guest while the other is running Linux standalone?

That would be my guess. Last I heard, VirtualBox is a type 2 hypervisor and slows things down because the guest OS does not run on bare metal. https://en.wikipedia.org/wiki/Hypervisor
Hyper-V is a type 1 hypervisor: it runs VMs using the CPU's virtualization extensions at near-native performance, and it is built into Windows. Windows itself (unnoticeably) becomes a VM after Hyper-V is activated.
I think the system architecture of Win 11 changed so that it has to be active by default now, and every Win 11 installation today is like a VM in that regard ("for security", like the TPM thing).

In Win 10 you needed the Pro edition to run your own Hyper-V VMs, or some tricks to force it onto Home.

My system runs BOINC on Ubuntu on Hyper-V in parallel to Win 10.
Details have been written here a while ago: https://universeathome.pl/universe/forum_thread.php?id=550
ID: 5597
arcturus

Send message
Joined: 24 Mar 22
Posts: 17
Credit: 137,338,000
RAC: 0
Message 5598 - Posted: 20 May 2022, 18:01:10 UTC - in response to Message 5596.  
Last modified: 20 May 2022, 18:04:40 UTC

Wow hard to believe it makes that much of a difference!
ID: 5598
rsNeutrino

Send message
Joined: 1 Nov 17
Posts: 29
Credit: 291,940,933
RAC: 0
Message 5599 - Posted: 20 May 2022, 18:14:31 UTC

ID: 5599
Jim1348

Send message
Joined: 28 Feb 15
Posts: 253
Credit: 200,562,581
RAC: 0
Message 5600 - Posted: 20 May 2022, 19:01:35 UTC

Maybe I mentioned this before, but I have written up my experience on running Ubuntu with WSL on Win10.
https://www.cpdn.org/forum_thread.php?id=9025#63462
It works very well now, using BoincTasks to manage it. You just assign different ports to the Linux side and the Windows side, and you can run some cores on each.
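For anyone trying the same setup, a minimal sketch of the remote-management side, assuming the standard BOINC GUI RPC mechanism (the distinct port per client is set when the client is started, e.g. with its --gui_rpc_port flag, not in this file):

<!-- cc_config.xml on each client (the WSL/Linux side and the Windows side)
     so a single BoincTasks instance can connect to both -->
<cc_config>
  <options>
    <!-- accept GUI RPC connections from other hosts; allowed hosts go in
         remote_hosts.cfg and the RPC password in gui_rpc_auth.cfg -->
    <allow_remote_gui_rpc>1</allow_remote_gui_rpc>
  </options>
</cc_config>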

I have compared the speed on my Ryzen 3600 Win10 machine to the speed on another Ryzen 3600 native Ubuntu 20.04.4 machine, and they are identical insofar as I can see. But I will mention that the WSL machine, after running flawlessly for a week or two, will suddenly quit, and I have to reboot (I run it 24/7).
Maybe it is a Windows update? I don't see the cause.
ID: 5600
arcturus

Send message
Joined: 24 Mar 22
Posts: 17
Credit: 137,338,000
RAC: 0
Message 5603 - Posted: 20 May 2022, 22:19:05 UTC

Time permitting, I think I'll test on a few other machines running Windows 10 and install Linux clean on a separate drive, using the BIOS to control the boot sequence. Years ago I made sure the installation was done with the primary drive holding Windows unplugged first, just to be safe.
ID: 5603
Profile Tom M

Send message
Joined: 18 Jul 17
Posts: 138
Credit: 1,379,173,617
RAC: 0
Message 5754 - Posted: 13 Aug 2022, 11:27:22 UTC

Context: a system running both CPU and GPU tasks.

At what ratio (or percentage) are you running each of your systems, in terms of the BOINC Manager thread limit vs. the available threads/CPUs?

For instance, a 3950X has 32 threads, aka CPUs in the BOINC Manager. If you are running 30 of them, that is 30/32 = 93.75%.

And what is the maximum number of U@H project threads set in your app_config.xml file? For instance, "32".
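For concreteness, a minimal app_config.xml sketch of what I mean by the project limit, assuming it sits in the project's directory (something like projects/universeathome.pl_universe under the BOINC data directory); the 32 is just the example value above, and the client picks it up after re-reading config files or a restart:

<!-- app_config.xml: cap how many U@H tasks run at once -->
<app_config>
  <project_max_concurrent>32</project_max_concurrent>
</app_config>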

What I am looking for is additional examples of where you are finding your maximum production point in U@H.

Thank you.
Tom M
A proud member of the OFA (Old Farts Assoc.)
ID: 5754
Profile Keith Myers
Avatar

Send message
Joined: 10 May 20
Posts: 310
Credit: 4,733,484,700
RAC: 0
Message 5755 - Posted: 13 Aug 2022, 15:30:47 UTC - in response to Message 5754.  

I run all my hosts at 95% in the Manager. On the 5950X I use a 28-thread limit in app_config. I also have 2-3 GPUs running in each host, taking available threads away from CPU computation.

A proud member of the OFA (Old Farts Association)
ID: 5755
Grant (SSSF)

Send message
Joined: 23 Apr 22
Posts: 167
Credit: 69,772,000
RAC: 0
Message 5756 - Posted: 13 Aug 2022, 21:24:40 UTC

Even on my general use system, I've always used all cores & threads for BOINC processing, as it doesn't get used heavily (just email, web browsing & the odd video). If the GPU application requires significant CPU support, then I would make use of an app_config.xml file with gpu_usage & cpu_usage values.
If the system ran out of GPU work, the released CPU cores/threads would start doing CPU work again instead of sitting there idle. When more GPU work came along, the CPU cores/threads would go back to supporting the GPU work.
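For reference, those values live in a per-application block; a minimal sketch, where the application name is a placeholder for whichever project's GPU app you run, and 1.0/1.0 simply means one task per GPU with a full CPU thread reserved to support it:

<app_config>
  <app>
    <!-- placeholder name: use the project's actual GPU application name -->
    <name>example_gpu_app</name>
    <gpu_versions>
      <gpu_usage>1.0</gpu_usage>   <!-- one task per GPU -->
      <cpu_usage>1.0</cpu_usage>   <!-- reserve a full CPU thread per GPU task -->
    </gpu_versions>
  </app>
</app_config>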


What matters is your Run time vs. CPU time for CPU tasks.
For my dedicated cruncher the difference between them over 5 hours of work is 20 seconds or so. For my daily use machine it's less than a minute.

If there's 5 or more minutes of difference over 5 or so hours, then it might be worth freeing up a core/thread. When you see systems taking 8+ hours to do 5 or fewer hours of work, that's a sign of an extremely overcommitted system: either a CPU core/thread is trying to do CPU work while also being needed to support a GPU, or some other programme is sucking up CPU time (e.g. people running BOINC on all cores/threads while also running Folding@home), or there's even some mining malware or similar on the system.
Or they're just running some CPU-intensive software on the system (e.g. video transcoding); that's when you need to leave a core/thread or more free from BOINC.
Grant
Darwin NT
ID: 5756
Profile Tom M

Send message
Joined: 18 Jul 17
Posts: 138
Credit: 1,379,173,617
RAC: 0
Message 5757 - Posted: 14 Aug 2022, 2:16:52 UTC - in response to Message 5756.  
Last modified: 14 Aug 2022, 2:18:54 UTC


==============
What matters is your Run time v CPU time for CPU Tasks.
For my dedicated cruncher the difference between them for 5 hours work is 20 seconds or so. For my daily use machine it's less than a minute.
==============
If there's 5 or more minutes difference over 5 or so hours, then it might be worth freeing up a core/thread.


On the EPYC system.
I am currently running 96.87% of 64 cores (~62) on the BOINC Manager.

I am currently running 4 GPU tasks, 1 PrimeGrid thread, 2 TN-Grid threads, and have a project limit of 50 on the U@H threads. It currently looks like the wall-clock time is about 2.5 minutes longer than the CPU time.

According to Psensor I am running at ~90% CPU load. System Monitor is also showing threads running below 50% on occasion. I can drive the CPU load up to 99%, but I haven't tried that while paying attention to wall-clock vs. CPU time.

Sounds like another experiment is coming.

I have just set the cTDP from "auto" to 180 W, which is the "nominal" maximum cTDP.
I have also turned off the Virtual Machine setting in the BIOS.
The determinism slider is still on "power".

Once the CPU temperature shown by Psensor went back up to 81°C, the CPU slowed back down to under 2.6 GHz instead of slightly over it.

Tom M
A proud member of the OFA (Old Farts Assoc.)
ID: 5757
Profile Tom M

Send message
Joined: 18 Jul 17
Posts: 138
Credit: 1,379,173,617
RAC: 0
Message 5762 - Posted: 14 Aug 2022, 13:05:09 UTC - in response to Message 5757.  


==============
What matters is your Run time v CPU time for CPU Tasks.
For my dedicated cruncher the difference between them for 5 hours work is 20 seconds or so. For my daily use machine it's less than a minute.
==============
If there's 5 or more minutes difference over 5 or so hours, then it might be worth freeing up a core/thread.


On the EPYC system.

Sounds like another experiment is coming.


Since it appears production went down overnight when I set the cTDP to the nominal 180 watts, I have reset it to "auto".

I just set the U@H project limit in its app_config.xml to 64. It appears it is now running 60 (I think I counted correctly), because I also set the GPUs to reserve half a thread per GPU task (4 GPU tasks).

As predicted, the CPU load went to 99% under Psensor, and System Monitor is showing 62 CPUs running at over 95%. I am still running two CPU threads "idle", so they are getting as low as 87%.

Should have some results by the end of the day.

Tom M
A proud member of the OFA (Old Farts Assoc.)
ID: 5762
Profile Tom M

Send message
Joined: 18 Jul 17
Posts: 138
Credit: 1,379,173,617
RAC: 0
Message 5765 - Posted: 15 Aug 2022, 0:33:23 UTC - in response to Message 5762.  
Last modified: 15 Aug 2022, 0:34:12 UTC



As predicted, the CPU load went to 99% under Psensor, and System Monitor is showing 62 CPUs running at over 95%. I am still running two CPU threads "idle", so they are getting as low as 87%.

Should have some results by the end of the day.



First results are not encouraging. The production graph/RAC is showing a slight drop, and some of the differences between wall-clock and CPU time have increased to nearly 4 minutes from 2.5 minutes. But I still need a longer experiment to confirm the trend.

On theoretical grounds I have been running with the Virtual CPU setting off in the bios.

Just turned that on. Curious to see if any production changes.

Check back in, in the morning.

Respectfully,
Tom M
A proud member of the OFA (Old Farts Assoc.)
ID: 5765
Profile Keith Myers
Avatar

Send message
Joined: 10 May 20
Posts: 310
Credit: 4,733,484,700
RAC: 0
Message 5766 - Posted: 15 Aug 2022, 1:55:17 UTC - in response to Message 5765.  


On theoretical grounds I have been running with the Virtual CPU setting off in the bios.

Just turned that on. Curious to see if any production changes.

Check back in, in the morning.

Respectfully,
Tom M

It won't.

A proud member of the OFA (Old Farts Association)
ID: 5766
Profile Tom M

Send message
Joined: 18 Jul 17
Posts: 138
Credit: 1,379,173,617
RAC: 0
Message 5771 - Posted: 17 Aug 2022, 12:12:46 UTC - in response to Message 5766.  


On theoretical grounds I have been running with the Virtual CPU setting off in the bios.

Just turned that on. Curious to see if any production changes.

Check back in, in the morning.


It won't.


It certainly looks like no change in production.

In other news:
It looks like I can reduce the gap between wall-clock time and CPU processing time to under 1 minute by requiring a full thread per GPU task. To be more confident of this finding I have switched the GPU processing back to 0.5 CPU thread per GPU task. Both CPU load indicators show a slight increase in the CPU load (from 95-96% to 97-98%).

The actual change in production, as shown by the GUI graph/RAC scores, has been nearly flat so far. This should give me an excellent way to see if there is an effect when I drop the CPU processing load to 90%.

My impression is you get more CPU production at 90% load than you get at 96-99% CPU load.

It is looking like I will be able to validate that impression one way or the other.

Tom M





A proud member of the OFA (Old Farts Assoc.)
ID: 5771
Profile Tom M

Send message
Joined: 18 Jul 17
Posts: 138
Credit: 1,379,173,617
RAC: 0
Message 5772 - Posted: 18 Aug 2022, 21:39:51 UTC - in response to Message 5771.  


My impression is you get more CPU production at 90% load than you get at 96-99% CPU load.


I had to drop the U@H CPU processing count down to 40 (from 53 for the 90% load) before the reported clock speed on the CPU got up to near the 2.7 GHz "all-core" speed the specifications for the EPYC 7601 claim.

Somehow I don't think that would produce a net "gain" in total production :)

Tom M
A proud member of the OFA (Old Farts Assoc.)
ID: 5772
Profile Keith Myers
Avatar

Send message
Joined: 10 May 20
Posts: 310
Credit: 4,733,484,700
RAC: 0
Message 5773 - Posted: 19 Aug 2022, 7:07:32 UTC - in response to Message 5772.  

It depends on the generation of the EPYC processor. It seems Naples and Milan respond similarly, in that you need to drop the thread count down to 50-75% of the total available threads in the CPU before the remaining loaded cores approach the maximum turbo boost clock speed.

Whether it's a fluke of the Rome generation processors or possibly just the 7402P model that Ian and I both have, I can load the cores up to 90-95% of the total threads and still have them run at their full boosted clock speed.

Running with Power determinism and both cTDP and PPT set to the maximum allowed 200 W makes all the cores lock to the max turbo speed of 3.35 GHz when loaded with Universe tasks.

Can't say the same for the 7443P Milan CPU though. With the same 95% of threads loaded, the cores run at ~3.60-3.65 GHz; I have to drop down to 60-75% of total cores for the clocks to approach the max 4 GHz turbo speed, with the same Power determinism at 200 W.

But the better IPC of the Milan processor makes up for the lower realized clocks compared to the Rome processor.

A proud member of the OFA (Old Farts Association)
ID: 5773
Profile Tom M

Send message
Joined: 18 Jul 17
Posts: 138
Credit: 1,379,173,617
RAC: 0
Message 5774 - Posted: 19 Aug 2022, 15:18:04 UTC - in response to Message 5773.  

[quote]
Running on Power determinism and both cTDP and PPT set to the max 200W allowed makes all the cores lock to the max turbo speed of 3.35Ghz when loaded with Universe tasks.
[/quote]

I need to see if I can locate a "PPT" setting for the 7601 CPU. It sounds like that would be an excellent experiment after I get done with the 90% load experiment.

Early results look encouraging. Within a day of balancing the CPU to 90% load (which meant setting the project max in the app_config.xml file in the U@H folder to 53 threads/CPUs), the RAC passed the 600,000 mark.

Tom M
A proud member of the OFA (Old Farts Assoc.)
ID: 5774
Profile Keith Myers
Avatar

Send message
Joined: 10 May 20
Posts: 310
Credit: 4,733,484,700
RAC: 0
Message 5775 - Posted: 19 Aug 2022, 17:12:09 UTC

Should be in the same place as the cTDP setting.
Advanced Settings>>AMD CBS>>NBIO>>SMU Options

A proud member of the OFA (Old Farts Association)
ID: 5775