Gothmog1065
May 14, 2009
I've been asked to take a look and see if I can figure out why a small doctor's office is having some fairly bad latency and slowness when accessing their EMR. I don't have a whole lot yet as I only got about an hour to look at some of the physical stuff.

A few stats on the clinic:

Providers: 5-6
Patients per day: 125-170
Nurses: 4
Support staff: 10-15

So they use Allscripts as their EMR. There is an MDF that houses the demarc and two switches that seem to carry most of the hardline connections. However, where the servers are, there's an older Netgear JGS524 v2, a gigabit unmanaged switch.

For the servers, it looks like they have a Windows Server box (2012 R2 or 2016 by the look of it) that they pull remote desktops off of. This is a blade server with typical redundancy setups. There is a database server and 3 others; I'm not sure what they are, the tags weren't descriptive enough. All of these are behind the Netgear switch. Most of the employees (all physicians, nurses, and most of the support staff) use remote desktops to access Allscripts.

From the descriptions and what I've talked to the employees about, the slowness isn't bad in the morning but gets worse as the nurses and providers log in, then gets slightly better later in the afternoon. The descriptions I've been given point to latency between mouse/keyboard input and the response on screen. This started after a server upgrade to the main server.
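
For what it's worth, a cheap way to put a number on that lag is timing TCP connects to the RDP port from a workstation while it feels slow (minimal sketch; the server address below is a placeholder):

code:
# rdp_latency.py -- time TCP connects to the RDP host while things feel slow.
import socket
import time

HOST = "192.168.1.10"   # placeholder LAN address of the RDP server
PORT = 3389             # default RDP port
SAMPLES = 20

results = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    try:
        with socket.create_connection((HOST, PORT), timeout=2):
            results.append((time.perf_counter() - start) * 1000.0)
    except OSError:
        results.append(None)    # connect failed or timed out
    time.sleep(0.5)

ok = [t for t in results if t is not None]
print(f"{len(ok)}/{SAMPLES} connects succeeded")
if ok:
    print(f"min/avg/max: {min(ok):.1f}/{sum(ok)/len(ok):.1f}/{max(ok):.1f} ms")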

The primary bottleneck to me seems to be that Netgear. My first thought would be to either upgrade it to a newer gigabit model or go to a 10G switch. However, from an extremely basic search, there don't seem to be any 10G unmanaged switches. I'm not sure if I'm going to be their ongoing tech support yet, so I'm not sure I want to slap in a managed switch.

Should I be looking at something else first?


manofsloth
Sep 2, 2011
Any chance you can get someone to log in at the console when it gets slow to do some testing locally? Maybe it's not a network issue at all. I would however be curious to know if power-cycling the switch(es) when it slows down has any effect. Another easy thing to check would be the negotiated link speed on the server NICs to make sure everything is actually running at gigabit.
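
Something like this run on the server itself would show it (sketch; assumes Python and psutil can go on the box):

code:
# nic_check.py -- print negotiated link speed/duplex for every interface.
# Requires: pip install psutil
import psutil

DUPLEX = {psutil.NIC_DUPLEX_FULL: "full",
          psutil.NIC_DUPLEX_HALF: "half",
          psutil.NIC_DUPLEX_UNKNOWN: "unknown"}

for name, st in psutil.net_if_stats().items():
    # speed is the negotiated rate in Mbit/s; 100 on a gigabit NIC is the smoking gun
    print(f"{name:30s} up={st.isup} speed={st.speed} Mbit/s duplex={DUPLEX[st.duplex]}")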

Gothmog1065
May 14, 2009

manofsloth posted:

Any chance you can get someone to log in at the console when it gets slow to do some testing locally? Maybe it's not a network issue at all. I would however be curious to know if power-cycling the switch(es) when it slows down has any effect. Another easy thing to check would be the negotiated link speed on the server NICs to make sure everything is actually running at gigabit.

Yeah, I'm going to sit down with the server for a bit and see what they have specc'd on it, and see what kind of NIC is in the server itself. I might have them power cycle the switch at the end of the day/week so as not to affect patient care.

The way they made it sound, Allscripts said they had something pointing to the C drive, which is dumb on multiple levels. However, it will be nice to do some monitoring on the main CPU to see just how much of the hardware is actually being used.

namol
Mar 21, 2007
Which version of Allscripts is this? I ask because for a few years now Allscripts has been pushing folks to their SaaS version, which is delivered via RDP. Since about September some of our clinicians have been reporting similar issues with the SaaS version. We've gone through a few data center migrations on the Allscripts side, but they continue to have issues with over-provisioning and other slowness when users kick off reports.

Khorne
May 1, 2002
Look at actual statistics while the problem is happening to figure out what is going on. I'd pay specific attention to CPU usage, Memory usage, page file usage, latency to the local server, latency to the outside world, and total throughput.

What was the server upgrade? Hardware, software, or what?

It sounds like:
  • The local hardware/software might be hitting memory limits and hitting disk hard. Disk is probably spinning rust with poor performance given the rest of the stack mentioned.
  • Consumer Netgear switches have issues under load. Very common. They'll crap out well before 1G depending on traffic patterns.
  • If the software was updated, it could have to do with bandwidth to the outside world or external factors. You're talking about increased traffic at high-load times, and "throughout the day" is vague when some people might start working earlier than others. Does it peak at some time and then perform normally?
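
If you can't sit and watch it, log those counters over the day and look for the midday spike (minimal sketch; needs psutil, and the pagefile number is psutil's swap metric, which is only approximate on Windows):

code:
# stat_sampler.py -- sample CPU/memory/pagefile/disk/network once a minute.
# Requires: pip install psutil
import csv
import time

import psutil

with open("stats.csv", "w", newline="") as f:
    out = csv.writer(f)
    out.writerow(["time", "cpu%", "mem%", "pagefile%", "disk_MB_s", "net_MB_s"])
    disk0, net0 = psutil.disk_io_counters(), psutil.net_io_counters()
    while True:
        time.sleep(60)
        disk1, net1 = psutil.disk_io_counters(), psutil.net_io_counters()
        disk_mb = (disk1.read_bytes + disk1.write_bytes
                   - disk0.read_bytes - disk0.write_bytes) / 60 / 1e6
        net_mb = (net1.bytes_sent + net1.bytes_recv
                  - net0.bytes_sent - net0.bytes_recv) / 60 / 1e6
        out.writerow([time.strftime("%H:%M:%S"),
                      psutil.cpu_percent(),             # average since last sample
                      psutil.virtual_memory().percent,
                      psutil.swap_memory().percent,
                      round(disk_mb, 2), round(net_mb, 2)])
        f.flush()
        disk0, net0 = disk1, net1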

Gothmog1065
May 14, 2009

namol posted:

Which version of Allscripts is this? I ask because for a few years now Allscripts has been pushing folks to their SaaS version, which is delivered via RDP. Since about September some of our clinicians have been reporting similar issues with the SaaS version. We've gone through a few data center migrations on the Allscripts side, but they continue to have issues with over-provisioning and other slowness when users kick off reports.

I know for a fact that it's not cloud based, the server is in-house. They DID upgrade around that time. I'll get the version number here shortly.


Khorne posted:

Look at actual statistics while the problem is happening to figure out what is going on. I'd pay specific attention to CPU usage, Memory usage, page file usage, latency to the local server, latency to the outside world, and total throughput.

What was the server upgrade? Hardware, software, or what?

Both. Not sure what actual hardware they upgraded to. I'm sitting down next Friday to do an in-depth inspection of the equipment: find out what they have, what the statistics look like, etc.

quote:

It sounds like:
  • The local hardware/software might be hitting memory limits and hitting disk hard. Disk is probably spinning rust with poor performance given the rest of the stack mentioned.
  • Consumer Netgear switches have issues under load. Very common. They'll crap out well before 1G depending on traffic patterns.
  • If the software was updated, it could have to do with bandwidth to the outside world or external factors. You're talking about increased traffic at high-load times, and "throughout the day" is vague when some people might start working earlier than others. Does it peak at some time and then perform normally?

  • I don't think they'd be hitting hardware limits, but I'm not even sure what they have yet. I just did a very, very broad walk-around, and got the full consult green-lit today, so I'll be sitting down and doing various monitoring and checking out hardware specs.
  • This is my biggest concern, and it would not surprise me in the slightest if this is the issue. I don't know how long that switch has been there, but it's been a while.
  • This is the only thing that doesn't really hit my radar, since they're still an in-house clinic. They don't use the cloud-based SaaS solution from Allscripts, but there's no telling what kind of garbage they put in the servers to try and 'force' people into their SaaS solutions. From what I've been told, it is much more responsive in the morning, but as the clinic picks up it slows down well into the afternoon, as the providers don't really seem to stop until near the end of the day. Then again, as you said, I'm getting vague 'all day' stuff from very non-technical people.

Edit: Professional Version 18.3 is the version they are on.

Gothmog1065 fucked around with this message at 23:06 on Nov 25, 2019

Gothmog1065
May 14, 2009
Hey guys, thanks for all the help. Basically the issue comes down to lovely database management.

CPU is a pair of Xeon E5540s.
RAM is 96 GB (which seemed an odd amount to me, but these are triple-channel CPUs, so a 3x bank layout would explain it).
Running Windows Server 2016.

There are 2x2 bays with 2TB accessible on each hardware RAID. I'm assuming mirrored (so there are 3 logical drives: C on one set, D and E split on the other). Unknown if HDD or SSD; hopefully the latter.
There is a 4TB USB drive which I assume they use for "backups".

D and E both seem to be shared drives that incoming scans and faxes get pushed to.

I went in and did some basic monitoring over lunch. The CPU averaged 15-25% usage. Network would occasionally spike to a full 1 Gbps, but it usually stayed below 1 Mbps. RAM utilization was at 80% (66 GB of which was MSSQL :barf:).

However, what's causing the slowdown is the fact that they're doing all their database writes to the C drive. It's maxing out that drive constantly; I didn't get a good reading on the actual throughput, but it was slammed.
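
Next visit I'll probably leave something like this running so I have an actual busy/throughput number instead of eyeballing Task Manager (rough sketch; needs psutil, and the busy% derived from read_time/write_time is approximate):

code:
# disk_busy.py -- rough per-physical-disk busy% and throughput.
# Requires: pip install psutil
import time

import psutil

INTERVAL = 5  # seconds between samples
prev = psutil.disk_io_counters(perdisk=True)
while True:
    time.sleep(INTERVAL)
    cur = psutil.disk_io_counters(perdisk=True)
    for disk, c in cur.items():
        p = prev[disk]
        busy_ms = (c.read_time - p.read_time) + (c.write_time - p.write_time)
        mb = (c.read_bytes + c.write_bytes - p.read_bytes - p.write_bytes) / 1e6
        print(f"{disk}: ~{100 * busy_ms / (INTERVAL * 1000):.0f}% busy, "
              f"{mb / INTERVAL:.1f} MB/s")
    prev = cur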

I spoke with a coworker who is our Sr. Integration guy and does a lot with Oracle databases. A few of the problems he said they were probably having: they're writing the database to the C drive (:hurr:), they're probably writing their transaction logs to the same place, and those transaction logs are probably filling up if they aren't backing the database up properly (if at all). The "core" solution would be to move the databases to either D or E so they aren't choking the primary drive with all the data writing. Other things he mentioned were reindexing the tables, which I'm not sure has ever been done.
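
If they ever let me do the move, I'm picturing something along these lines (very rough sketch, to be run against a tested backup only; the database name and logical file names are placeholders I made up, and the real logical names have to be looked up in sys.master_files first):

code:
# move_db_files.py -- sketch of relocating the SQL Server data/log files off C:.
# Requires: pip install pyodbc, plus the MS ODBC driver. Placeholder names throughout.
import pyodbc

DB = "allscripts_db"                        # placeholder database name
NEW_DATA = r"E:\SQLData\allscripts.mdf"     # new data file location
NEW_LOG = r"E:\SQLLogs\allscripts_log.ldf"  # new log file location

cn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};"
                    "SERVER=localhost;Trusted_Connection=yes", autocommit=True)
cur = cn.cursor()

# 1) Repoint the catalog entries (logical names below are guesses; check
#    sys.master_files for the real ones).
cur.execute(f"ALTER DATABASE [{DB}] MODIFY FILE "
            f"(NAME = '{DB}', FILENAME = '{NEW_DATA}')")
cur.execute(f"ALTER DATABASE [{DB}] MODIFY FILE "
            f"(NAME = '{DB}_log', FILENAME = '{NEW_LOG}')")

# 2) Offline the DB, copy the .mdf/.ldf to the new paths by hand, online it.
cur.execute(f"ALTER DATABASE [{DB}] SET OFFLINE WITH ROLLBACK IMMEDIATE")
input("Copy the files to the new locations, then press Enter... ")
cur.execute(f"ALTER DATABASE [{DB}] SET ONLINE")

# (Regular log backups are the fix for the transaction-log growth he mentioned.)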

I don't know the RAID layout, so I'm not sure if they're mirroring or striping; I hope to god it's mirrored, as I'm pretty drat sure it's only 2 disks.

However, the alleged hard stop on this is that Allscripts is throwing a temper tantrum: they want the DB stored on the same logical drive as the server, which is... illogical. I'm wondering if they're actually objecting to a separate SERVER rather than a separate logical drive, which would make more sense. From what I can tell they're saying something about it breaking their maintenance contract; if they're that precious about it, they should host it in the cloud.

So I'm at a point where I don't think it would be wise to go prodding into their settings, but does this align with things you've seen from Allscripts?

Numinous
May 20, 2001

Just did a quick glance - E5540s were released in Jan 2009. Is this database server from 2009? If that's the case, and based on what you are saying it's pretty unlikely there is any SSD based storage going on here.

I think you're going down a reasonable path here. If they have two RAID arrays on the database server and the database and logs sit on one then you can likely improve performance by moving logs to the other. C drive D drive whatever.... I might have words with the original designer about it if I knew them but for a small office like that with likely an equally small IT budget I wouldn't necessarily say it's wrong or bad to use the C drive as a viable place for stuff.

Also, server from 2009??? I think you sorta need to at least broach the subject of an overhaul of their tech. You could likely replace all of those old servers with 2 new ones built exactly the same and use Hyper-V to mirror one to the other. Their entire environment running on one physical server basically and they would have a hell of a lot more fault tolerance than they have today. Or, ya know, skip the cap-ex and just go cloud... more expensive in the long run but easier to swallow.

Gothmog1065
May 14, 2009

Numinous posted:

Just did a quick glance - E5540s were released in Jan 2009. Is this database server from 2009? If that's the case, and based on what you are saying it's pretty unlikely there is any SSD based storage going on here.

Yeah, that was their "upgrade" ~ 6 months ago.

quote:

I think you're going down a reasonable path here. If they have two RAID arrays on the database server and the database and logs sit on one then you can likely improve performance by moving logs to the other. C drive D drive whatever.... I might have words with the original designer about it if I knew them but for a small office like that with likely an equally small IT budget I wouldn't necessarily say it's wrong or bad to use the C drive as a viable place for stuff.

Also, server from 2009??? I think you sorta need to at least broach the subject of an overhaul of their tech. You could likely replace all of those old servers with 2 new ones built exactly the same and use Hyper-V to mirror one to the other. Their entire environment running on one physical server basically and they would have a hell of a lot more fault tolerance than they have today. Or, ya know, skip the cap-ex and just go cloud... more expensive in the long run but easier to swallow.

So yeah. Kind of at the "they're tired of spending money" deal. I would add some drives in the extra bays, do RAID 5 on 2 sets of 3 SSDs (keeping each column its own RAID), pool the storage, put a logical drive over those, and slap the DB over there. I probably wouldn't migrate the server off the current arrays. That would offload 100% of the database reads/writes there and leave the rest of the drives for servery stuff.

I'm not sure if we could migrate the existing server to those (if I got them up to 2TB per RAID 5) with minimal downtime, upgrade the other two columns, migrate back, and end up with one big pool over all the drives to split up from there, but that may be getting in too deep.

Gothmog1065
May 14, 2009
So, a question on RAID (for anyone who reads this and knows: yes, the thread tag and title are now completely off).

So I confirmed they have 2x mirrored RAIDs with 3 logical drives: C is on the first RAID, D/E on the second. They're both HDDs, and from what I've seen they're generic slow archival-style drives (WD Green/Red, etc.).

I have two options that I'm looking at. Both involve buying SSDs to upgrade to.

A> Use the existing RAID array: pull out Drive 0, put in an SSD, let it rebuild the mirror, pull out Drive 1, and repeat for all 4 drives.

B> Just set up a RAID 5 with 4 SSDs (maybe smaller ones, since 2TB enterprise SAS drives are expensive as gently caress).

My main question with A is: will swapping like this work? After I swap the second drive in the array and it rebuilds, will it see the increased speeds and then go on as normal? I'd make sure the server is backed up properly before doing this, of course; I just want to make sure I'm not going to smash into a brick wall doing it this way.

The controller is an LSI RAID controller, SAS 2008 "Falcon" (I think; the name might be slightly off). It does support RAID 5 for option B, but are there going to be issues with option A on it as well?

Also, is there a Windows manager for this controller? From my limited searches it seems to be CLI-only, but am I going to have to install drivers? Do I just need to install the LSI MegaRAID driver from Dell (it's a Dell PowerEdge C6100)?

And lastly, is there a recommendation on enterprise-grade SSDs? I know the Samsungs run about $500-600 for the 2TB drives I want; is there another good cheap brand, or does it really matter for enterprise stuff?

Rudager
Apr 29, 2008

Gothmog1065 posted:

Yeah, that was their "upgrade" ~ 6 months ago.

So yeah. Kind of at the "they're tired of spending money" deal.

They're sick of spending money because they're spending money on a server that's so old that a 10-year-old CPU was considered an OK upgrade. I'm going to go out on a limb, but I bet someone also sold them on moving from Server 2008 R2 to 2016 since 2008 R2 is coming up on its end of life. Neither of those, nor your new proposal, addresses the core issue, which is that it's 10+ years old.

Any money going towards "fixing" it needs to be put towards replacing it.

You spend a few grand putting all these new SSDs in there, but the 10+ year old motherboard will die in 6 months and you'll be tearing your hair out trying to find a second-hand replacement of unknown condition while everyone's screaming at you that it needs to be fixed because it's costing them so much money!

Inept
Jul 8, 2003

This all sounds like a mess. If their system crashed and lost all patient data, what would they do? Would they go out of business? If they need to be HIPAA compliant, do they have a BAA with your company since you'll have access to PHI while doing this? I'd try to get them to seriously consider the SaaS option instead of relying on all of this outdated junk where they likely don't have maintenance contracts or support for anything.

Gothmog1065
May 14, 2009

Rudager posted:

They're sick of spending money because they're spending money on a server that's so old that a 10-year-old CPU was considered an OK upgrade. I'm going to go out on a limb, but I bet someone also sold them on moving from Server 2008 R2 to 2016 since 2008 R2 is coming up on its end of life. Neither of those, nor your new proposal, addresses the core issue, which is that it's 10+ years old.

Any money going towards "fixing" it needs to be put towards replacing it.

You spend a few grand putting all these new SSDs in there, but the 10+ year old motherboard will die in 6 months and you'll be tearing your hair out trying to find a second-hand replacement of unknown condition while everyone's screaming at you that it needs to be fixed because it's costing them so much money!

TBH, I'm not sure what they had before. I could power up the computer and find out but :effort:

I will agree with the "they need newer" idea. I'm probably going to write up a proposal with multiple options (full replacement and migration, upgrading the HDDs, SaaS, etc.). I hate to drive the bus over the other guy, but he's put them in a pretty nasty predicament with what he's done. I'm going to try to talk them into newer hardware or the SaaS; for their size I think that would be better.


Inept posted:

This all sounds like a mess. If their system crashed and lost all patient data, what would they do? Would they go out of business? If they need to be HIPAA compliant, do they have a BAA with your company since you'll have access to PHI while doing this? I'd try to get them to seriously consider the SaaS option instead of relying on all of this outdated junk where they likely don't have maintenance contracts or support for anything.

They probably wouldn't go out of business, just go back to paper again. I'm not worried about a BAA or HIPAA (I work for the hospital next to them, so I have LOADS of HIPAA fun all day every day). As for the SaaS, compared with what they're paying in monthly maintenance fees, it would probably be cheaper.

Gothmog1065
May 14, 2009
Would this be an acceptable upgrade?

Not a huge fan of the SSD but that would get them by and allow replacements as they fail...

https://www.newegg.com/p/2NS-0008-4ZX52

I would have them RAID 5 or 10 it, maybe over the entire 8 drives, and split logical drives from that. The PERC H330 can do RAID 50 if that's preferable.

Gothmog1065 fucked around with this message at 19:59 on Dec 11, 2019

Gothmog1065
May 14, 2009
I'm back, much to everyone's chagrin and amidst the groans. I can now be quite a bit more forceful since the physicians are seeing actual results.


Inept posted:

If they need to be HIPAA compliant, do they have a BAA with your company since you'll have access to PHI while doing this?

They do now! As for the rest of your questions now that I have proper answers:

quote:

This all sounds like a mess. If their system crashed and lost all patient data, what would they do? Would they go out of business? I'd try to get them to seriously consider the SaaS option instead of relying on all of this outdated junk where they likely don't have maintenance contracts or support for anything.

They have an offsite backup as well as their local (garbage) backup, so no, they'd not go out of business, just have to spend poo poo loads of money to get back on track. They are pretty adamant against cloud because of reasons. Baby steps.

TL;DR: I want to fix their lovely local backup first. What is the best way to do this for ~4TB? NAS? Something like the WD MyCloud or are there better options?
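
In the meantime I may schedule something dumb like this against a spare drive just so there's a second copy while I shop (rough sketch; paths are made up, and a mirror copy is not a real versioned backup):

code:
# mirror_backup.py -- stopgap incremental mirror of the share drives.
# Not a substitute for versioned, tested backups.
import os
import shutil

SRC = r"D:\Shares"              # placeholder share root
DST = r"F:\Backup\Shares"       # placeholder backup target

copied = 0
for root, _dirs, files in os.walk(SRC):
    rel = os.path.relpath(root, SRC)
    out = os.path.join(DST, rel)
    os.makedirs(out, exist_ok=True)
    for name in files:
        src = os.path.join(root, name)
        dst = os.path.join(out, name)
        # only copy new/changed files so the slow SMR target isn't rewritten
        if not os.path.exists(dst) or os.path.getmtime(src) > os.path.getmtime(dst):
            shutil.copy2(src, dst)
            copied += 1
print(f"copied {copied} files")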

So the basic gist of the whole deal is the previous tech got complacent as gently caress and basically stopped really doing poo poo. He half-assed a lot of things (think putting WAPs on the floor, tiny switches in all the offices instead of running lines), bought meh equipment (TP-Link WAPs; I'm a Ubiquiti guy, but they do work), and made really questionable decisions (5 APs in about a 50ft radius, one being a Wi-Fi router that still routed, yay double NATting!), plus lots of patchwork items. Basically what broke the camel's back with the clinic is when they told him to swap out the storage drives where the DB sat with new SSDs to help throughput, he proceeded to not do it, billed them for hours of work, and then didn't tell them he hadn't done it.

So now I'm the tech guy; I have a BAA for HIPAA and a CYA contract. So, on to the server!

As stated before, it's a Dell C6100, and has 4 blades/modular units in it.
1 - DB server. Has the old as gently caress Xeon processors and 96 GB of RAM. It uses 2 RAIDs: one for the C drive, one for the D/E drives. Both RAID 1. The D/E set now has a pair of SSDs, which has helped tremendously with the read/writes.
2 & 3 - Hyper-V servers with remote desktops. As far as I can tell they're just servers that serve up remote desktops for the applications, for reasons (potentially licensing of fat clients, I dunno). I also don't know why he has two.
4 - The domain/DNS server. Pretty self-explanatory, low usage.

I believe all of them run Server 2016.

So on the floor on a roll cart they have the "old" servers: old DB, old ADDS server, and another two that did... stuff, plus the UPS. Going to try to have all of these decommissioned and the HDDs pulled and destroyed.

So here's what I wrote the office manager in an email (very stripped down version):

HIGH PRIORITY:
  • BACKUP: Holy poo poo do they need a new backup solution (which is going to be my first real question). Right now they're using a garbage rear end Seagate SRD0NF2, and from what I looked up it's a shingled (SMR) drive, which makes writing to it a nasty endeavor: it takes 10 hours to back up the 4TB it's responsible for, and it shouldn't take 6 hours to restore 500GB of data. That was loving ridiculous.
  • Switch removal/replacement: Mostly because it's cheap. Try to eliminate/replace all those 10/100 switches. One room I think I can eliminate entirely when I move the second WAP out of it (the one that was sitting right next to another one... for speed!)
Non Urgent items (Basically do as I get free time):
  • Wireless optimization. Reduce the WAPs the guy crammed into one area, rename them, and streamline them so the computers won't go spazzy trying to connect to the "best" connection. Potentially create a guest network once I get deeper into the firewall, so I can segregate the networks and put in the typical disclaimer/accept pages.
  • Decommission legacy servers.
  • Upgrade the switch next to the servers, maybe create a mini-IDF.
  • Clean up demarc room and the random wiring and the switches in there. This room is at least locked.
Budgeted because it needs to be done, but not right now:
  • Get a rack that can be locked, and secure the server. Honestly, thinking about it, I'm probably going to move this to high priority, as racks can be cheap as gently caress off eBay/Craigslist.
  • Upgrade the server

So I think now my biggest question is the backup, and the next is the server.

If I set up a new server, is it better to just run RAID 10? I'm going to talk to the hardware guys at my job to see how they set up their RAID arrays and how many disks go in each set (i.e., RAID 10 with 24 drives as 6+6 / 6+6, or a RAID 60 with the same layout). I know 5 and 50 aren't good for DB writes; however, having the parity can be important. I know the basics of RAID, but past the bare samples it gets interesting. A lot of this will probably be answered by the guys where I work, since we do tons of DBs and almost all of it virtualized. But is it better to have two separate servers (one for the DB, one for the rest), or would it be just as well to have it all on one bare-metal device with a RAID split out into logical drives?

A lot of the new server will be up in the air for some time. I feel like I'm getting on track by starting with their DB (and probably securing all of that first).

MF_James
May 8, 2008

RAID10 is possibly overkill for what they need, but that's hard to tell you.

The questions you definitely need to answer before getting into what RAID to use are the following:

1) How much USABLE space do they need, right now, just for current data you plan on putting onto the array?

2) How many drives do you have?

3) What size are they?

4) Is there room for additional drives?

5) How much space do you expect them to need over the next 1-3-5 years?

6) Do their application vendor(s) recommend a specific RAID? (this is important because rather than guessing you have the vendor to say we need X to ensure performance)

Look at this to understand different RAID levels and what they offer: https://www.prepressure.com/library/technology/raid

Now, see if what you want/need are possible with what you have or if you need to buy more/different stuff with this: http://www.raid-calculator.com/
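
The capacity half of that calculator is just arithmetic anyway (sketch; assumes equal-size drives and ignores formatting overhead):

code:
# raid_capacity.py -- usable space for the common RAID levels.
def usable_tb(level: str, drives: int, size_tb: float) -> float:
    if level == "0":
        return drives * size_tb
    if level == "1":
        return size_tb                  # mirror: one drive's worth
    if level == "5":
        return (drives - 1) * size_tb   # one drive of parity
    if level == "6":
        return (drives - 2) * size_tb   # two drives of parity
    if level == "10":
        return drives // 2 * size_tb    # striped mirrors
    raise ValueError(f"unhandled level: {level}")

for level in ("5", "6", "10"):
    print(f"RAID {level}, 8x 2TB: {usable_tb(level, 8, 2.0):.0f} TB usable")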

Gothmog1065
May 14, 2009

MF_James posted:

RAID10 is possibly overkill for what they need, but that's hard to tell you.

The questions you definitely need to answer before getting into what RAID to use are the following:

1) How much USABLE space do they need, right now, just for current data you plan on putting onto the array?

2) How many drives do you have?

3) What size are they?

4) Is there room for additional drives?

5) How much space do you expect them to need over the next 1-3-5 years?

6) Do their application vendor(s) recommend a specific RAID? (this is important because rather than guessing you have the vendor to say we need X to ensure performance)

Look at this to understand different RAID levels and what they offer: https://www.prepressure.com/library/technology/raid

Now, see if what you want/need are possible with what you have or if you need to buy more/different stuff with this: http://www.raid-calculator.com/

RAID 10 I was looking at because of the database server; however, it may be easier to split that off separately. This is also considering I'm coming from old as poo poo 7200 RPM drives in RAID 1, so yes, 10 may very well be overkill.

1) Currently they have 4TB total between the 3 volumes on this one server. I'd say not much more than an additional ~2TB for the other 3 servers (two of which, the virtual desktop hosts, can be combined down into one).
2) They currently have 10 drives total, in 5 RAID 1 sets. The DB uses 2 sets (one for C and one for D and E, where the DB resides). This is not including the backup (which I want to put into a Synology-type enclosure; just debating whether the 1618+ would be too much, etc.).
3) Most of this is in the OP. Medium-sized clinic, 6 providers, 27 employees total. Their workspace is kind of full, so unless they move they won't be getting much bigger.
4) In their current box, not really. It's a 12-bay 3.5" server, and 10 bays are used.
5) Probably not much more, honestly. Most of the space they'll be using will be for scanned/faxed referrals, so doubled at most.
6) Allscripts says RAID 10, but they also say the server should never go more than 14 days without a reboot because... reasons.

Again, my biggest consideration is making sure that if I do put something like ESX on the bare metal and run their servers off the same RAID, the DB writes aren't going to kill it. From what I'm reading, the "deeper" a RAID is (i.e., having more disks in a RAID 6), the better the write speeds, since there are more drives to write to. I'm just looking at potential configurations for a new server so I can get them the best they can get, with some future expandability as well.

That said, is it better to have a 4-disk RAID 6 with 4TB drives, or 10 1TB disks? Questions like that are what's plaguing me now.

Methylethylaldehyde
Oct 23, 2004


Gothmog1065 posted:

Again, my biggest consideration is making sure that if I do put something like ESX on the bare metal and run their servers off the same RAID, the DB writes aren't going to kill it. From what I'm reading, the "deeper" a RAID is (i.e., having more disks in a RAID 6), the better the write speeds, since there are more drives to write to. I'm just looking at potential configurations for a new server so I can get them the best they can get, with some future expandability as well.

Depending on the specific implementation, a RAID5/6 will have the read performance of the disks in aggregate, but only the write IOPS of one disk. The reason DB stores were always on a raid 10 or something similar was due to the fact that you could get more IOPS out of the system with a reasonable amount of overhead and chance of failure.

These days databases go on SSDs, anything media, office file, or pdf goes on HDDs.

Assuming the RAID card can handle it, and the drive bays are there:
RAID 1/10 of SSDs, 3-4x as large as the database and transaction logs are now.
RAID 6 of 4-8TB nearline SAS/SATA drives for basically everything else.
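
If you want rough numbers for the IOPS side, the usual write-penalty rule of thumb (RAID 1/10 = 2, RAID 5 = 4, RAID 6 = 6) gets you in the ballpark (sketch; the per-drive IOPS figures are ballpark guesses too):

code:
# raid_iops.py -- back-of-envelope array IOPS for a given read/write mix.
PENALTY = {"0": 1, "1": 2, "10": 2, "5": 4, "6": 6}

def array_iops(level: str, drives: int, per_drive: int, write_frac: float) -> float:
    raw = drives * per_drive
    return raw / (write_frac * PENALTY[level] + (1 - write_frac))

# 70% writes, DB-ish workload; ~80 IOPS per 7.2k HDD, ~30k per SATA SSD
print(f"RAID 6,  8x HDD: {array_iops('6', 8, 80, 0.7):8,.0f} IOPS")
print(f"RAID 10, 8x HDD: {array_iops('10', 8, 80, 0.7):8,.0f} IOPS")
print(f"RAID 10, 4x SSD: {array_iops('10', 4, 30000, 0.7):8,.0f} IOPS")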

Gothmog1065
May 14, 2009

Methylethylaldehyde posted:

Depending on the specific implementation, a RAID5/6 will have the read performance of the disks in aggregate, but only the write IOPS of one disk. The reason DB stores were always on a raid 10 or something similar was due to the fact that you could get more IOPS out of the system with a reasonable amount of overhead and chance of failure.

These days databases go on SSDs, anything media, office file, or pdf goes on HDDs.

Assuming the RAID card can handle it, and the drive bays are there:
RAID 1/10 of SSDs, 3-4x as large as the database and transaction logs are now.
RAID 6 of 4-8TB nearline SAS/SATA drives for basically everything else.

Thanks, it was the read then, not the write. That clarifies a lot.

When I talk them into a new server box itself, I'll be able to get what I need, so there shouldn't be any "ifs". That really puts what I'm going to need in perspective. Now to shove a backup down their throats first.


Methylethylaldehyde
Oct 23, 2004


Gothmog1065 posted:

Thanks, it was the read then, not the write. That clarifies a lot.

When I talk them into a new server box itself, I'll be able to get what I need, so there shouldn't be any "ifs". That really puts what I'm going to need in perspective. Now to shove a backup down their throats first.

Backups are a must. Most of the synology units are pretty great, the DS918+ is probably more or less ideal as a dedicated backup target. Cheap enough to probably get away with, robust enough that it should work well enough longer term.
