Zogo
Jul 29, 2003

mdxi posted:

The SA FAH team is getting very close to 5M WUs. Also, user MaxxBot has bumped Phuzun out of first place within the team.

Yeah, that'll be a big milestone in a few months.

Only 34 teams have done that.

Zarin
Nov 11, 2008

I SEE YOU
Fall temps need to get here so I can fold again :negative:

Quaint Quail Quilt
Jun 19, 2006


Ask me about that time I told people mixing bleach and vinegar is okay
I went from ~300th to ~200th in 1.5 days in Folding@home

I noticed my 5800X3D was throttling, so I changed some settings and now it stays 6°C lower than before

CPPC: Enabled
CPPC Preferred Cores: Disabled
Global C-States: Enabled/Auto (don't disable)

W11 power plan: Balanced.
W11 power mode: Balanced.

Undervolting my Nvidia 3080 FE helps my CPU stay cooler as well.

mdxi
Mar 13, 2006

to JERK OFF is to be close to GOD... only with SPURTING

Latest news from WCG is that things are increasingly good on their end, but now their data center is having networking issues, and everyone there has been on vacation for 4 days.

Having worked in the datacenter world for several years, and adjacent to it for the past decade, I don't know what to say about this except "apply Hanlon's razor and check the WCG forums next week."

Fake comedy edit: did radium move to Toronto and start a home-based "datacenter"?

mdxi
Mar 13, 2006

to JERK OFF is to be close to GOD... only with SPURTING

I haven't felt like delving into the multiple active threads on the WCG forums to find out exactly what's happening, but over the past 4 days my machines crunched close to a thousand WUs, after having seen none for about three weeks. I read one comment last night (before deciding that I didn't care to investigate further) which said it was a larger burst test run over the weekend. I guess that's progress, and progress is good?

The SAGoons team continues to work toward 5M WUs crunched for FAH. Less than 40k to go now :commissar:

Hilariously, since DENIS@Home ranks teams by RAC instead of total credit, the Somethingawful team (currently just myself) is in 8th place. That's going to drop some, since I added Milkyway@Home as a project, and will drop a lot whenever WCG manages to get back into production.

Speaking of, I've added myself to the Milkyway goon squad. Meant to do that last week before buggering off for the weekend, but it's done now.

mdxi fucked around with this message at 16:17 on Aug 22, 2022

mdxi
Mar 13, 2006

to JERK OFF is to be close to GOD... only with SPURTING

WCG posted:

We have taken additional measures to increase the quantity of WUs we can send out, and we have been able to increase the quantity of WUs in flight at any given time. Volunteers should see this reflected on their devices now, and perhaps even over this past week.

We are also relieved to share that the hosting data centre has assigned additional personnel on site to resolve our networking issues, meaning a fix is imminent. We will share with you any further updates we receive from the data centre. The network fix will allow us to bring our remaining servers online, stabilizing and further increasing the WU supply.

Thus, until we are able to deploy all dedicated servers, we must continuously adjust and monitor tasks scheduled in Aurora/Mesos to keep the tasks balanced and the workunits flowing, and so far this process is unduly intensive and sporadic. For example, a recurring job may saturate the scheduler by creating a large number of downstream jobs. This flood of new jobs might then throttle the processing rate of other waiting jobs and thereby interrupt the supply of work. To fix the problem, we would need to temporarily deschedule the parent job, decrease its frequency, or decrease the priority of its children in such a way that does not starve other stages of the pipeline.

Last week, we mentioned that we have begun to investigate concerns over statistics, credit, streaks, and database dumps raised by volunteers. We will have an update on some of these issues next week. We also plan to release a more structured breakdown from the tech team similar to a CHANGELOG starting next week or the week after so that we can increase the frequency and clarity of updates.

Future Plans for Aurora/Mesos Replacement by SLURM at the WCG
With the above in mind, although we should be able to immediately deploy additional server resources for Aurora/Mesos job scheduling once networking issues are resolved, our team has greater familiarity and experience with the SLURM scheduler, an alternative to Aurora/Mesos. SLURM is a mature technology currently in use at many of the world's foremost supercomputing centres, and we intend a full transition to SLURM soon after WCG's full restart.

Pending some investigation, we may also look to expand our message-passing layer and implement a publisher/subscriber model and some notion of back-pressure to dictate the chain of downloading data from researchers and creating workunits with which to stock the feeder. From what we have observed, we can expect the move to SLURM will distribute our internal server resources more efficiently than Aurora/Mesos currently does, while losing no functionality. This should be relatively straightforward to port since it overlaps with the existing skill-set of the team.

However, this work is not a higher priority than addressing long-standing concerns of volunteers, which we are finally carving out the bandwidth to address.

Thanks for your patience and have a great weekend!

SLURM: https://en.wikipedia.org/wiki/Slurm_Workload_Manager
Apache Aurora (on Mesos): https://aurora.apache.org/

Here's an interesting tidbit we weren't previously aware of: WCG was/is depending on a dead Apache project.

mdxi fucked around with this message at 19:11 on Aug 27, 2022

Lobsterboy
Aug 18, 2003

start smoking (what's up, gold?)
old pc actually picked up WCG workunits about 2 weeks ago and has been chugging along slowly but surely, so at least that's a thing now.

mdxi
Mar 13, 2006

to JERK OFF is to be close to GOD... only with SPURTING

Lobsterboy posted:

old pc actually picked up WCG workunits about 2 weeks ago and has been chugging along slowly but surely, so at least that's a thing now.

Yeah, a week ago things were really looking up. WUs were coming by the hundreds and the spate of network issues appeared to have subsided. By yesterday that had slowed to a trickle of WU retries and some people were reporting networking issues again.

Today, I'm personally seeing more WUs again, on some of my machines but not all. Things are clearly not quite stable yet, and the WCG forums are awash in doomposting because staff aren't making updates. Ugh.

mdxi
Mar 13, 2006

to JERK OFF is to be close to GOD... only with SPURTING

Welp, WCG has let their TLS cert lapse, so now you can't get to their website (at least via browsers that enforce HSTS) and clients are refusing to talk to their backend servers:
code:
Sep 09 08:06:52 node01 boinc[697]: 09-Sep-2022 08:06:52 [World Community Grid] Reporting 2 completed tasks
Sep 09 08:06:52 node01 boinc[697]: 09-Sep-2022 08:06:52 [World Community Grid] Not requesting tasks: some download is stalled
Sep 09 08:06:54 node01 boinc[697]: 09-Sep-2022 08:06:54 [World Community Grid] Scheduler request failed: SSL peer certificate or SSH remote key was not OK
Oh yeah, and their download problems are still a thing :gonk:
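
If you want to check the cert dates yourself, a one-liner along these lines does it (assuming you have openssl installed; the hostname is just their main site):
code:
echo | openssl s_client -connect www.worldcommunitygrid.org:443 -servername www.worldcommunitygrid.org 2>/dev/null | openssl x509 -noout -dates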

Lobsterboy
Aug 18, 2003

start smoking (what's up, gold?)
not a good hand-off for WCG huh

Sneeze Party
Apr 26, 2002

These are, by far, the most brilliant photographs that I have ever seen, and you are a GOD AMONG MEN.
Toilet Rascal
SheepIt is a distributed Blender renderfarm with participants from all over the world. I just started rendering yesterday. It's kinda neat, because you can see thumbnails of what you're rendering. It's not for :science: science :science:, but it's neat. If you want to help people around the world render their artwork, you can join this SAGoons team that I just put together.

I'm the only member :(

edit: WE HAVE HIT TWO MEMBERS.

edit2: three members now :)

edit3: aaaand I accidentally removed myself from my own team, omg

And I'm back in.

https://www.sheepit-renderfarm.com/team/1918

Sneeze Party fucked around with this message at 18:54 on Sep 23, 2022

unpronounceable
Apr 4, 2010

You mean we still have another game to go through?!
Fallen Rib
It's the coldest time of the year in my apartment, and this thread reminded me I used to run this stuff some 10 years ago, so I'm at it again. I've been running WCG and SiDock@home for a few days, and just signed up for GPUGrid.net.

We’ll see how well my 6-7 year old hardware fares.

mdxi
Mar 13, 2006

to JERK OFF is to be close to GOD... only with SPURTING

WCG has declared themselves back up and out of testing. They say that everything should be operationally solid now, with the only wrinkles being around catching up on user stats:

WCG posted:

There is work that remains to be done. In particular, while we were able to restore the My Contribution page functionality (you may have noticed that results over the past 2 days are now reflected), we must now carefully iterate through a modified version of the stats update procedure to add back each day that was missed. The results tab of the My Contribution page does accurately reflect the validation status and assigned credit of your workunits.

When complete stats are available we will begin a one-month grace period for streaks, extend all streaks that were active before the transition, and finally restore the normal calculation of streaks when the grace period ends.

Finally, we are preparing a well deserved Badge of Honor for all the volunteers who submitted a valid result during the transition and testing phase, yourself included. We are also preparing yet another badge for all citizen scientists who join or return to the grid before the New Year.

Good if true.

Edit, a few hours later: :lol: the entire site, including forums, is returning a variety of errors now. No updates on social media. The last thing I saw on the forums was people wondering why in the world they'd make an announcement like that on a Saturday :smithcloud:

mdxi fucked around with this message at 06:09 on Oct 2, 2022

unpronounceable
Apr 4, 2010

You mean we still have another game to go through?!
Fallen Rib
How common is it for jobs to just get stuck? I got a couple of jobs for SiDock@home that have been stuck at the same % completion all day, with elapsed time at almost 1.5 days, when others finished in ~20 hours.

mdxi posted:

Edit, a few hours later: :lol: the entire site, including forums, is returning a variety of errors now. No updates on social media. The last thing I saw on the forums was people wondering why in the world they'd make an announcement like that on a Saturday :smithcloud:

I did download and complete a bunch of jobs, but right now all of my WCG downloads are stuck. I guess this explains it.

mdxi
Mar 13, 2006

to JERK OFF is to be close to GOD... only with SPURTING

Less than 30k WUs to go before goons hit 5M units crunched for FAH.

mdxi
Mar 13, 2006

to JERK OFF is to be close to GOD... only with SPURTING

DENIS@Home is on hiatus due to the beginning of the fall semester (it seems their project leads are also active professors) and a need to make adjustments to their simulation code.

quote:

[...] after 3 iterations of the algorithm, the model is gradually improving, but one of the markers is greatly limiting the improvement. We don't fully understand why this is happening, so we need to stop and analyze it in detail before continuing.

Thanks to all the simulations you have done, we have a wide variety of situations to study, and we hope to resume the simulations shortly (ideally one week, but it could easily be two).

On the other hand, the applications that we had in Beta mode have already been sufficiently analyzed to be considered stable, and until we modify the application again, the beta tasks will be stopped.
This happened 9 days ago, so work there may be restarting soon.

mdxi
Mar 13, 2006

to JERK OFF is to be close to GOD... only with SPURTING

  • DENIS has not yet restarted, nor posted any new info
  • Milkyway is currently down for maintenance. They're working on known problems and have a plan for migrating to new hardware, but their tech lead is currently travelling and is somewhat limited in what they can do at the moment
  • WCG has been sucking less, but communication has still been poo poo and things are definitely not back to normal. User and team stats are once again updating daily. The backlog of testing WUs has not been worked through for stats purposes.
  • FAH is crunching as usual

That's the updates from the projects I'm on.

Rexxed
May 1, 2010

Dis is amazing!
I gotta try dis!

I've left a VM with six cores on Rosetta@home and it was idle for a lot of the summer (not that I'm complaining when it's hot). It seems to have picked up work in the last few weeks and is going again, though. I'm sure a lot of people switched off when it wasn't doing anything for so long.

unpronounceable
Apr 4, 2010

You mean we still have another game to go through?!
Fallen Rib
A couple days ago I was looking into new projects to run for when WCG doesn't work, and saw that there were some GPU tasks I could definitely run from PrimeGrid. So my current projects are WCG and SiDock, both CPU, and now PrimeGrid for GPU.

mdxi
Mar 13, 2006

to JERK OFF is to be close to GOD... only with SPURTING

Learned an interesting one today:

If you are attached to a project which has a protracted pause in available WUs, the BOINC scheduler will eventually come to see it as so high priority that other attached projects may not be checked for work until all cached WUs for all other projects are complete (leaving BOINC with no work to do and forcing an update on lower-priority projects).

Obviously this is only a problem if you're attached to multiple projects and want to ensure that as much work as possible is in-flight for all projects.

The solution is to set idle projects to nomorework or suspended, and optionally run a manual update. The downside is that now you have to pay attention to when projects become un-idle and change their state again.
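
If you'd rather not click around the manager every time, boinccmd (the CLI that ships with the client) can flip those states from a script. A rough sketch, using WCG's URL purely as an example:
code:
# tell the client to stop fetching work from an idle project, then force a scheduler RPC
boinccmd --project https://www.worldcommunitygrid.org/ nomorework
boinccmd --project https://www.worldcommunitygrid.org/ update

# when the project has work again
boinccmd --project https://www.worldcommunitygrid.org/ allowmorework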

mdxi
Mar 13, 2006

to JERK OFF is to be close to GOD... only with SPURTING

WCG posted:

We recently increased our DB2 storage pool and switched to a more coarse-grained scheduling method for creating and packaging new workunits for each project. This change may have temporarily disrupted WU scheduling, but we will need to monitor further and likely explore additional possible causes before we can consider the issue resolved.

Another (less optimistic) theory is that other tasks, specifically OPNG, were the cause of our recent storage issues and database-wide system errors. We have no solid evidence yet, only an observation that there is typically a decline in available OPNG work around the same time the download issues are less prevalent. A high load on the storage server and scheduler coincides with the database crashes, and with a phenomenon whereby the download/upload server groups intermittently register as down from the perspective of our load balancer.

We continue to monitor the system to determine what the best course of action is to stabilize our internal network.

Things have been much better for me over the past 2 days. Machines are crunching WCG continually, and maintaining full work queues. I'm very cautiously optimistic that they may finally be ironing out more problems.

mdxi
Mar 13, 2006

to JERK OFF is to be close to GOD... only with SPURTING

Since WCG has restricted the flow of OPNG WUs and halted ARP1 WUs, things have been very stable. Today is my 8th day of nonstop crunching while all machines are maintaining full queues. Some people (especially those interested in OPNG) are still reporting issues getting WUs, but from my point of view there are plenty of OPN1 and MCM1 tasks for everybody.

Also, according to a new update, the ARP1 halt may not have been on the WCG side, and two other projects are getting ready to restart:

WCG posted:

SCC and HSTB projects are busy with validation and preparing for the new restart. We are happy to report that the ARP project is finalizing storage and network setup to enable restart. We will provide a more detailed account of the situation directly from the ARP team soon.
I'm particularly looking forward to resuming crunching for SCC.

Edit: less than 25k WUs to go until the FAH goons hit 5 million

mdxi
Mar 13, 2006

to JERK OFF is to be close to GOD... only with SPURTING

Like clockwork, if I opine that WCG is getting better, then it will fall over.

At least this time we have enough info on the issues the team has been dealing with to see a clear correlation between the resumption of ARP1 WUs and the service going down. So it seems that large WUs like ARP, which are 50-100MB each, cause some sort of congestion/oversubscription issue for their... situation. There are too many possibilities that fit this pattern for me to hazard a guess at a root cause (not that it stops several people on the WCG forums), but the correlation and/or proximate cause now seems pretty clear.

The really awesome question at this point is: why would they turn ARP back on, on a Friday afternoon, and then walk off? Just goddamn.

unpronounceable
Apr 4, 2010

You mean we still have another game to go through?!
Fallen Rib
In the past week or so, I haven't had issues with WCG WUs, though I was getting Open Pandemics (OPN1?) tasks when I'd set my projects to just MCM and ARP. Just yesterday, though, I got a bunch of stuck downloads for GPU Open Pandemics tasks (OPNG?). I did get some ARP tasks, but yeah, lots is just not working.

unpronounceable
Apr 4, 2010

You mean we still have another game to go through?!
Fallen Rib

unpronounceable posted:

In the past week or so, I haven't had issues with WCG WUs, though I was getting Open Pandemics (OPN1?) tasks when I'd set my projects to just MCM and ARP. Just yesterday, though, I got a bunch of stuck downloads for GPU Open Pandemics tasks (OPNG?). I did get some ARP tasks, but yeah, lots is just not working.

Something must have changed, because today I noticed that my OPNG tasks started downloading without issue. I'm not sure what's happened to my CPU WCG tasks, but I'm happy that something is running smoothly for it.

EDIT: Soon after I said this, I downloaded a bunch of MCM1 WUs :toot:

unpronounceable fucked around with this message at 07:41 on Nov 12, 2022

mdxi
Mar 13, 2006

to JERK OFF is to be close to GOD... only with SPURTING

unpronounceable posted:

Something must have changed

They've updated their storage backend to an SSD array.

mdxi
Mar 13, 2006

to JERK OFF is to be close to GOD... only with SPURTING

Update from the Smash Childhood Cancer team

quote:

Making Chemotherapy Work Better: PAX-FOXO1 inhibitors for childhood muscle cancer
In work supported by the Megan's Mission Foundation, the Children's Cancer Therapy Development Institute (cc-TDI.org; under leadership of Dr. C. Keller) has collaborated with Dr. Tyuji Hoshino at Chiba University and the World Community Grid (WCG) to develop an unlikely drug: an inhibitor of a 'transcription factor'. In the childhood muscle cancer rhabdomyosarcoma, the two normal transcription factors PAX3 and FOXO1 'break' and then fuse to one another. The resulting PAX-FOXO1 transcription factor switches on a program of other genes that leads to chemotherapy resistance, relapse, and sometimes death. Through the WCG, 8 million compounds were screened and a mere 24 compounds were identified. Thus far, 5 have been validated to stop the action of the PAX-FOXO1 transcription factor. The cc-TDI project lead is Kiyo Nagamori.

Stopping Metastasis of Childhood Sarcomas
In work supported by the Phoenix Spangler Foundation, the Children's Cancer Therapy Development Institute (cc-TDI.org) has collaborated with Dr. Tyuji Hoshino at Chiba University and the World Community Grid (WCG) to develop a small molecule inhibitor of the Osteopontin protein. Osteopontin is a protein made by cancer cells as a way to invite blood vessels to grow nearer. These vessels are then a pathway to spread throughout the body. In work evaluating computationally-modeled chemicals and experimentally identified compounds from collaborator Aykut Uren at Georgetown University, we have identified compounds that bind Osteopontin. The next step is to determine if these compounds stop the blood vessel formation and metastasis that occur as a result of Osteopontin. Genetically-engineered mice with normal levels or absence of Osteopontin make this work possible. The cc-TDI project lead is Shefali Chauhan.

Stopping the Driver of TrkB Neuroblastoma
In studies honoring Alyssa, the Children's Cancer Therapy Development Institute (cc-TDI.org) has collaborated with Dr. Tyuji Hoshino and Dr. Akira Nakagawara at Chiba University and the World Community Grid (WCG) to develop a next-generation, selective inhibitor of the TrkB protein. TrkB drives the growth and progression of the childhood nerve-cell cancer neuroblastoma. TrkB is the sister protein to TrkA, which has become of great pharmaceutical interest in sarcomas and lung cancers. The TrkB inhibitors, developed by evolving chemicals derived from computational modeling, have increased solubility but retain activity against TrkB. The cc-TDI project lead is Xiaolei Lian.

mdxi
Mar 13, 2006

to JERK OFF is to be close to GOD... only with SPURTING

And an off-WCG update from cc-TDI, the research group who are the US side of Smash Childhood Cancer:

quote:

cc-TDI has developed the world's first genetically-engineered mice that contain the Pax7-Foxo1 gene fusion found in some forms of alveolar rhabdomyosarcoma. The advantage of this particular mouse model is that it will allow researchers to turn on the cancer-causing gene then test tumors against targeted drugs, an important experimental feature to now be used in international collaboration!

The $53,000 cost to produce this mouse was made possible by a crowdfunding campaign in honor of Suede, Tina, Gary, Leo and Kathryn that raised $18,196 (including $13,000 from David Sullivan via Golf Fights Cancer in honor of Princess Kiley) and was significantly enabled most recently by the Megan's Mission Foundation and its community of Team Megan Bugg supporters with a contribution of $10,000. To help with the last $24,804 needed to complete this project you can join via the crowdfunding campaign.

~Coxy
Dec 9, 2003

R.I.P. Inter-OS Sass - b.2000AD d.2003AD
SH/SC > Should I buy a $53,000 mouse???

mdxi
Mar 13, 2006

to JERK OFF is to be close to GOD... only with SPURTING

FAH team SAGoons now has less than 20k WUs to go before hitting 5 million, and less than 9B points until hitting 50B.

The team is also up to #63 in the overall rankings.

Edit: and #30 in the monthly rankings, up 14 spots from last month (so far). We're right behind Gamers Nexus and Microcenter on the monthly board. Who got new GPUs aside from me and MaxxBot?

mdxi fucked around with this message at 18:20 on Nov 22, 2022

Zogo
Jul 29, 2003

mdxi posted:

FAH team SAGoons now has less than 20k WUs to go before hitting 5 million, and less than 9B points until hitting 50B.

The team is also up to #63 in the overall rankings.

Edit: and #30 in the monthly rankings, up 14 spots from last month (so far). We're right behind Gamers Nexus and Microcenter on the monthly board. Who got new GPUs aside from me and MaxxBot?

We've been passing a lot of teams recently. MaxxBot just passed 3 billion points. It looks like this could be our most productive month ever. Maybe 1.6 billion points.

mdxi
Mar 13, 2006

to JERK OFF is to be close to GOD... only with SPURTING

DENIS has announced that they will resume crunching in January.

skybolt_1
Oct 21, 2010
Fun Shoe
Posting in here in the hope that some of you might be able to help me out with some issues I've been running into with both BOINC and FAH. I have been flipping back and forth between BOINC and FAH trying to figure out the best way that I can utilize my new-to-me Geforce RTX 2060 when I am not using my computer. This seems to be a lot to ask, because what I have found with both FAH and BOINC is that neither platform seems to understand the concept of "run only when I am not using my computer, and don't permanently prevent the machine from entering sleep mode after 4 hours of inactivity." I am unable to get BOINC to honor the first requirement; it will run GPUGRID tasks regardless of whether or not the machine is in use, even though "Suspend GPU computing when the computer is in use" is selected. I was unable to get FAH to honor the second requirement; the machine would never, EVER enter sleep mode.

Most of the forums for things like GPUgrid are deader than this forum so I figured that I would try here first. Any ideas?

Dead Goon
Dec 13, 2002

No Obvious Flaws



Zogo posted:

We've been passing a lot of teams recently. MaxxBot just passed 3 billion points. It looks like this could be our most productive month ever. Maybe 1.6 billion points.

I'd like to think it is the addition of my R5 5600 and 1050Ti :D

mdxi
Mar 13, 2006

to JERK OFF is to be close to GOD... only with SPURTING

~Coxy posted:

SH/SC > Should I buy a $53,000 mouse???

I was busy last week with work and holiday stuff, and my brain completely missed the joke. This morning I got the joke.

Well done :golfclap:

unpronounceable
Apr 4, 2010

You mean we still have another game to go through?!
Fallen Rib

skybolt_1 posted:

I am unable to get BOINC to honor the first requirement; it will run GPUGRID tasks regardless of whether or not the machine is in use, even though "Suspend GPU computing when the computer is in use" is selected.

Did you set the client to always run on the gpu vs. running based on preferences? That's the only thing I can think of offhand, if you've already set the "in use" option and timeframe.
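
If the GUI setting isn't being honored, it might also be worth forcing the mode from the command line via boinccmd. As I understand it, this tells the client to run GPU work according to your "in use" preferences, with the 0 meaning no time limit:
code:
boinccmd --set_gpu_mode auto 0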

skybolt_1
Oct 21, 2010
Fun Shoe

unpronounceable posted:

Did you set the client to always run on the gpu vs. running based on preferences? That's the only thing I can think of offhand, if you've already set the "in use" option and timeframe.

Nope, it's set to run based on preferences.

I have heard that for a while it wasn't recommended to use a cc_config.xml to control the behavior of BOINC but that might have changed? I don't use one today, just the GUI configuration tools.
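
From what I've read, a cc_config.xml these days is mostly useful for things the GUI doesn't expose, like extra log flags. A minimal one that just turns on suspend/resume logging would look roughly like this (it lives in the BOINC data directory):
code:
<cc_config>
  <log_flags>
    <suspend_debug>1</suspend_debug>
  </log_flags>
</cc_config>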

skybolt_1
Oct 21, 2010
Fun Shoe
I added the suspend_debug flag to my cc_config.xml and got a bit more data. Of particular interest, this is what popped up in my log:

code:
28-Nov-2022 11:07:50 [---] Windows is suspending operations
28-Nov-2022 11:07:50 [---] Suspending computation - requested by operating system
28-Nov-2022 11:07:50 [---] [suspend] net_susp: yes; file_xfer_susp: no; reason: requested by operating system
28-Nov-2022 11:07:50 [---] Suspending network activity - requested by operating system
28-Nov-2022 11:07:51 [---] [suspend] net_susp: yes; file_xfer_susp: no; reason: requested by operating system
28-Nov-2022 11:07:51 [---] [suspend] net_susp: yes; file_xfer_susp: no; reason: requested by operating system
28-Nov-2022 11:07:52 [---] [suspend] net_susp: yes; file_xfer_susp: no; reason: requested by operating system
28-Nov-2022 11:33:55 [---] Resuming after OS suspension
28-Nov-2022 11:33:55 [---] Resuming computation
28-Nov-2022 11:33:55 [---] [suspend] net_susp: no; file_xfer_susp: no; reason: unknown reason
28-Nov-2022 11:33:55 [---] Resuming network activity
28-Nov-2022 11:33:59 [GPUGRID] Sending scheduler request: Requested by project.
28-Nov-2022 11:33:59 [GPUGRID] Not requesting tasks: don't need (CPU: not highest priority project; NVIDIA GPU: job cache full)
28-Nov-2022 11:33:59 [---] Windows is resuming operations
28-Nov-2022 11:33:59 [---] Suspending computation - computer is in use
What is interesting is that in the BOINC manager, GPUGRID still shows as running even after I suspend it through the manager, but it does not appear to be completing any work, and the elapsed time doesn't increment as would be expected. Also, the usual Python processes are still running in Task Manager, consuming CPU and GPU. Once I kill the primary Python process, GPUGRID shows as "suspended" as expected.

Sounds like this is a GPUGRID-specific problem, so I should probably go over to those forums and see what their thoughts are...

mdxi
Mar 13, 2006

to JERK OFF is to be close to GOD... only with SPURTING

Also, be sure to check your global compute prefs and the project-specific preferences, both on the GPUGrid site. I think that compute prefs pulled as part of an update (i.e. at runtime) override prefs set through a manager app (which are applied at startup).

mdxi
Mar 13, 2006

to JERK OFF is to be close to GOD... only with SPURTING

It's December 1st, and F@H team SAGoons is up 6 places on the monthly scoreboard, to #24.

We've overtaken GamersNexus and some other teams I don't recognize, but somehow Microcenter is still ahead of us. I had no idea so many people were repping MC while crunching.

We should also be up to #62 in the overall rankings by tomorrow. And shockingly, in about 3 months, we may overtake the Dutch Power Cows, who have been part of pretty much every grid computing project ever.
