skipdogg
Nov 29, 2004
Resident SRT-4 Expert

theperminator posted:

We've lost faith in Dell and their Equallogic line, and we're thinking of switching over to something else.

What are the small-medium business options from HP like?

HP Storage is a loving clusterfuck, much like the rest of the company. I love our EMC VNXe units for small/mid deployments. I would check them out if they meet your needs. We've standardized as a company on EMC storage though. The big project next year is to retire an IBM V7000 setup and move it over to a VNX at one of our data centers. The other 2 main data centers are already running VNX systems.


wolrah
May 8, 2006
what?
Does anyone have experience with the Intel Modular Server's storage system, and particularly its "Shared LUN" feature? I'm under the impression that this would be a better choice for providing shared storage to the Hyper-V blades housed within, compared to running a *nix storage distro of some sort on one of the blades and using iSCSI from there.

The previous administrators of this machine built a whole bunch of small LUNs, one per VHD, and attached them to the individual blades so changing anything is a real pain in the rear end right now and any kind of failover or migration between blades is just a dream.

I'm usually the one supporting using random *nix appliances instead of licensed features, but since we already have the hardware and it's only a few hundred bucks to license the feature it seems like the obvious solution unless it has some fatal flaw.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

So we've got a Pure FA420, Tintri T540, and an all flash NetApp 8060 in our lab if anyone is curious about any of those arrays. I've spent the past day getting the NetApp set up and running some benchmarks against it, but I haven't played with the others too much yet.

JockstrapManthrust
Apr 30, 2013
Hopefully you can get cDOT 8.3 RC1 on that 8060, as the boost to AFF performance looks great.

devmd01
Mar 7, 2006

Elektronik
Supersonik

bigmandan posted:

We're pretty comfortable with racking and cabling the equipment, but the setup services include config of the storage units on 4 new hosts we are getting as well. This will be our first SAN in our environment, so having the setup and configuration done for us will be good.

On another note, has anyone here had experience with Storage Centre Live Volumes? I've read the documentation on it and watched the video Dell put out on it. It seems pretty interesting on paper, but I'd like to hear what it's like to use in a production environment.

How much of each tier did you buy? Make sure you have them explain auto-tiering and storage profiles; it's pretty straightforward.

KennyG
Oct 22, 2002
Here to blow my own horn.
No one likes to talk actual costs paid, but I'm trying to figure out how far I can push our VAR, or if it's even worth the effort. In the realm of production NFS/CIFS appliances at a 500+TB scale, what is a reasonable cost per gig? 40-50 apps driving 100-200 IOPS each - call it 10k total, tops. I have some wildly different quotes. Last year we did a deal at $1.15/GB at about 120TB. I have a quote from Dell for a new deployment that's $0.25, yet EMC's is $2.02. I have talked to EMC about Dell's quote and they seem uninterested in changing the pricing to meet the market. I like EMC more than Dell, but not 8x.

Where I sit, looking at the market, disks in the capacities I'm looking at should be ~$0.50 for hdd and approaching $2.50 per effective flash gig. I know everyone has strengths and weaknesses but I can't help but think they are trying to exploit what they incorrectly feel is imperfect market information.

Anyone run Isilon or .75+pb Compellent arrays behind fluidfs?
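For reference, here's the back-of-envelope math behind that "8x" (all figures are the numbers quoted above at the 500 TB scale I mentioned, not vendor pricing):

```python
# Rough totals for the quotes discussed above, at a 500 TB deployment.
# Decimal TB (1 TB = 1000 GB), since that's how vendors quote capacity.

capacity_gb = 500 * 1000

quotes_per_gb = {
    "last year's deal (~120 TB)": 1.15,
    "Dell (new deployment)": 0.25,
    "EMC (new deployment)": 2.02,
}

for vendor, per_gb in quotes_per_gb.items():
    total_musd = per_gb * capacity_gb / 1e6
    print(f"{vendor}: ${per_gb:.2f}/GB -> ${total_musd:.2f}M at 500 TB")

# The "8x" gap between the two new-deployment quotes:
ratio = quotes_per_gb["EMC (new deployment)"] / quotes_per_gb["Dell (new deployment)"]
print(f"EMC is {ratio:.1f}x Dell's quote")
```

At this scale a $1.77/GB spread is the better part of a million dollars, which is why it's worth pushing the VAR at all.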

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
Anyone here run a vSAN in production? I find it funny when people don't understand distributed RAID and the underlying architecture of vSANs. It's like, wtf did you think it was?

They are all hyped for it until I explain what a vSAN is and its limitations.

evol262
Nov 30, 2010
#!/usr/bin/perl

Dilbert As gently caress posted:

Anyone here run a vSAN in production? I find it funny when people don't understand distributed RAID and the underlying architecture of vSANs. It's like, wtf did you think it was?

They are all hyped for it until I explain what a vSAN is and its limitations.

If by vSAN you mean a distributed block device and not vSAN®, then yes. We run Ceph RBD, plus Swift and Cinder, though those are less comparable to vSAN than Ceph is. We probably also have Nutanix in production somewhere, but I haven't seen it.

People here mostly don't care where their storage comes from as long as it's fast and it doesn't poo poo the bed. I'm not sure I'd even try to explain loosely-coupled scale-out storage to non-technical people. Even then, I imagine it's the same hype as :cloud:. People get excited about implementing it, but have no idea how to make it do useful stuff once they have it.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Dilbert As gently caress posted:

Anyone here run a vSAN in production? I find it funny when people don't understand distributed RAID and the underlying architecture of vSANs. It's like, wtf did you think it was?

They are all hyped for it until I explain what a vSAN is and its limitations.

I'm not sure what you'd say about vSAN specifically that doesn't apply to a bunch of scale-out SAN solutions like Isilon, XtremIO, Nimble, Equallogic, Nutanix, Simplivity, etc...

The biggest problem with vSAN is immaturity, but there's nothing inherently wrong with the architecture.

Rated PG-34
Jul 1, 2004




Our cluster uses the Isilon filesystem, and we sometimes see a slight delay before files become fully accessible to other nodes in the cluster. The files appear truncated in the intervening period. Is anyone aware of a fix for this sort of behavior?
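In the meantime we've considered making consumers refuse to read a file until its size stops changing. A rough sketch of that workaround (the polling parameters are arbitrary, and this only papers over the visibility lag rather than fixing it):

```python
import os
import time

def wait_until_stable(path, checks=3, interval=1.0, timeout=60.0):
    """Poll a file until its size is unchanged for `checks` consecutive
    intervals, as a crude guard against cross-node visibility lag.
    Returns the final size, or raises TimeoutError."""
    deadline = time.monotonic() + timeout
    last_size, stable = -1, 0
    while time.monotonic() < deadline:
        size = os.stat(path).st_size
        if size == last_size:
            stable += 1
            if stable >= checks:
                return size
        else:
            last_size, stable = size, 0
        time.sleep(interval)
    raise TimeoutError(f"{path} did not stabilize within {timeout}s")
```

It doesn't help if the writer pauses mid-write for longer than the window, so a real fix on the cluster side would still be welcome.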

KS
Jun 10, 2003
Outrageous Lumpwad

KennyG posted:

No one likes to talk actual costs paid, but I'm trying to figure out how far I can push our VAR, or if it's even worth the effort. In the realm of production NFS/CIFS appliances at a 500+TB scale, what is a reasonable cost per gig? 40-50 apps driving 100-200 IOPS each - call it 10k total, tops. I have some wildly different quotes. Last year we did a deal at $1.15/GB at about 120TB. I have a quote from Dell for a new deployment that's $0.25, yet EMC's is $2.02. I have talked to EMC about Dell's quote and they seem uninterested in changing the pricing to meet the market. I like EMC more than Dell, but not 8x.

Where I sit, looking at the market, disks in the capacities I'm looking at should be ~$0.50 for hdd and approaching $2.50 per effective flash gig. I know everyone has strengths and weaknesses but I can't help but think they are trying to exploit what they incorrectly feel is imperfect market information.

Anyone run Isilon or .75+pb Compellent arrays behind fluidfs?

I definitely like to talk actual costs paid, because it shifts the power away from the vendors and towards the consumers. We're not signing NDAs here.

Compellent is decent block storage, but layering a NAS head in front of it doesn't put it in the same class as Isilon or Netapp for scale out NAS. That's probably a big reason for the price gap. That said, it doesn't look like you have high IO requirements or the need to scale, and you might be fine with the cheaper solution.

I've never seen Dell's NAS head in the wild. I'd guess at 500+ TB you'd be one of their bigger customers -- might want to set up reference calls with other installations that size.

bigmandan
Sep 11, 2001

lol internet
College Slice

devmd01 posted:

How much of each tier did you buy? Make sure you have them explain auto-tiering and storage profiles; it's pretty straightforward.

Dell Compellent SC4020
6X 400GB SLC Wi (One Hot Spare) 1TB Usable R10
6X 1.6TB eMLC Ri (One Hot Spare) 6.4TB Usable R5-5 (7.4TB Flash, 29.60% Capacity)
24X 1TB 7.2K NLS (Two Hot Spare) 17.6TB Usable R5-9

The auto-tiering and storage profiles are pretty straightforward. The thing I was asking about was Live Volumes (replication); specifically, I'm interested in HA synchronous Live Volumes.
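For anyone following along, the usable numbers in that config check out; a quick reproduction of the math (tier labels abbreviated from the order above):

```python
# Usable capacity per tier from the SC4020 order above, in TB.
tiers_tb = {
    "SLC write tier (R10)": 1.0,
    "eMLC read tier (R5-5)": 6.4,
    "7.2K NL-SAS (R5-9)": 17.6,
}

flash_tb = tiers_tb["SLC write tier (R10)"] + tiers_tb["eMLC read tier (R5-5)"]
total_tb = sum(tiers_tb.values())
print(f"Flash: {flash_tb:g} TB of {total_tb:g} TB usable = {flash_tb / total_tb:.2%}")
```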

Erwin
Feb 17, 2006

Signed off on our Nimble purchase last week. One of the big reasons I didn't want another EMC is that their whole support experience is awful. Today I have a faulty drive on the old EMC array, and getting the SR in and taken care of has made me so happy that I made the choice I did. To start with, the loving 'attach' button does nothing on their site, in any browser. Even that used to work.

Something Awesome
Feb 14, 2007
i mean awful
I know there have been some book recommendations in this thread, but frankly it is long and I can't seem to find the exact titles. I currently work in a T2-ish role at an F200 and would like to specialize in storage because frankly I think it's the tits. My real-world experience is fairly limited, but I need to start somewhere. I would ultimately like to land some EMC certs and start job hunting for a junior storage role. Any book recommendations would be appreciated, as would any general advice in this particular specialty.

evol262
Nov 30, 2010
#!/usr/bin/perl

Something Awesome posted:

I know there have been some book recommendations in this thread, but frankly it is long and I can't seem to find the exact titles. I currently work in a T2-ish role at an F200 and would like to specialize in storage because frankly I think it's the tits. My real-world experience is fairly limited, but I need to start somewhere. I would ultimately like to land some EMC certs and start job hunting for a junior storage role. Any book recommendations would be appreciated, as would any general advice in this particular specialty.

My general advice would be to spend a year in a junior admin position as a generalist who gets to do a little of everything (DBA, storage, network, virt, sysadmin, scripting) before you specialize. Failing that, it's hard to beat the EMC education stuff, as long as you have a general overview of NFS, LUNs, fabrics, etc.

OldPueblo
May 2, 2007

Likes to argue. Wins arguments with ignorant people. Not usually against educated people, just ignorant posters. Bing it.

Something Awesome posted:

I know there have been some book recommendations in this thread, but frankly it is long and I can't seem to find the exact titles. I currently work in a T2-ish role at an F200 and would like to specialize in storage because frankly I think it's the tits. My real-world experience is fairly limited, but I need to start somewhere. I would ultimately like to land some EMC certs and start job hunting for a junior storage role. Any book recommendations would be appreciated, as would any general advice in this particular specialty.

If you give me an e-mail address I can send some free online NetApp training your way, that's all I got.

Vanilla
Feb 24, 2002

Hay guys what's going on in th
I see a lot of Nimble users here. I have an opportunity to work for them.

How do you find the kit? The people? Any bad things you've seen as a customer?

Who would you consider their competitors? Did you look at Pure?

Thanks

Moey
Oct 22, 2010

I LIKE TO MOVE IT

Vanilla posted:

I see a lot of Nimble users here. I have an opportunity to work for them.

How do you find the kit? The people? Any bad things you've seen as a customer?

Who would you consider their competitors? Did you look at Pure?

Thanks

People have been super nice. Their techs have been responsive and proactive. We hear from our account rep all the time; she uses us a lot to answer questions for new clients. Nothing bad from a customer perspective.

Nimble was a lot cheaper for us to get into. We also only require iSCSI block storage, Pure's flexibility was overkill. I also see Nimble sticking around more than Pure, as they seem to be gaining traction a lot faster.

Out of curiosity, what state/area, and what role are you looking at taking?

Erwin
Feb 17, 2006

I haven't received my array yet, but I'll answer the applicable questions.

Vanilla posted:

The people?
One of the local sales reps is the type that will just send you calendar invites out of nowhere saying "is this day a good day to spend 30 minutes learning about Nimble?" I went through CDW who put me in touch directly with a sales engineer who handled the whole process. He was very knowledgeable and easy to get a hold of, and is also handling the install. I'm not even sure that sales rep knows that I've now bought a Nimble array.

quote:

Any bad things you've seen as a customer?
If you'll be in sales, I'd encourage prospects to buy through someone besides CDW. Purchasing something this expensive through them was painful and unnecessarily complicated.

quote:

Who would you consider their competitors?
Feature parity seems to be in the other hybrid startups - Tegile was the other quote I got with similar features*. Tegile supports iSCSI, FC, NFS, and CIFS, and they use that as a selling point, but it didn't matter to me. They also use the fact that they use eMLCs as a selling point. Again, eh. Tintri seems to sit in the same space as well. EMC is a competitor in that they sell "hybrid arrays" and will undercut their own mother to sell an array in the sub-50TB range. For ~Big Arrays~ EMC is actually a competitor, but that's in a space that Nimble probably doesn't fit as well.

quote:

Did you look at Pure?
I didn't get a quote, but I looked at them a little. I stuck with hybrid though because it's for primary storage. If I was doing a VDI project or something for a single special dataset, they would make more sense. Their sales guys didn't hound me and understood my standpoint. Seems like a good choice for specific use cases.

*edit: To be clear, by feature parity I mean what they do, not how they do it. Both have high IOPS, some amount of capacity, thin snapshots, compression, replication, etc. Obviously how they accomplish these things can be wildly different.

Erwin fucked around with this message at 20:23 on Nov 5, 2014

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

I'm working with Nimble on putting together a deal for a customer and they seem pretty good so far. Easy enough to deal with and fairly quick turnaround on getting quotes back. It's a pretty small deal, but they're still putting in the effort to win it. The VAR I'm working with now has traditionally partnered exclusively with NetApp and really only sold other storage if the customer was adamant about getting it. So we've sold some Pure, and Tintri, and Tegile, but NetApp is really what we pitch. We've recently added Nimble as a second preferred storage partner because they are easy to work with, the product is easy to use, and you get very good performance for the price. They've carved out a pretty solid niche for themselves, and I'm still not sold on their financials yet, but they seem to be growing quickly enough that it won't be a problem.

Pure isn't really competitive due to the extreme cost difference. I also have my doubts about Pure's longevity, after talking to some of their people recently, and seeing how hard it has been for them to land accounts, at least on the west coast. Tintri is good, easy, fairly fast VMware only storage. It's even simpler than Nimble, and has some nice integration with VMware. It's probably a good choice if you want VM only storage that is dead simple to manage, though I wonder about their longevity as well. Their features roadmap doesn't have anything very compelling on it. Tegile is just sort of there.

If you're in the PacNW and looking for a good partner to work with on a Nimble purchase, I can point you to one of our sale reps.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

NippleFloss posted:

I'm not sure what you'd say about vSAN specifically that doesn't apply to a bunch of scale-out SAN solutions like Isilon, XtremIO, Nimble, Equallogic, Nutanix, Simplivity, etc...

The biggest problem with vSAN is immaturity, but there's nothing inherently wrong with the architecture.

vSAN has a long way to go, but it's just funny when you go to a meeting and explain the underlying architecture to your director and he goes "oh".

I get that there is a lot more QA, support, and assurance for a business when purchasing a name, but it is kinda funny when people go "wait, it's not black magic!?!"

Amandyke
Nov 27, 2004

A wha?

Rated PG-34 posted:

Our cluster use the Isilon filesystem and we sometimes get a slight delay for files to be fully accessible to other nodes in the cluster. The files appear as truncated in the intervening period. Is anyone aware of a fix for this sort of behavior?

What version of OneFS are you running?

bigmandan
Sep 11, 2001

lol internet
College Slice
The rest of the items for our SC4020s were received today! I decided to check everything out before we bring it over to our data centre and was quite surprised at how heavy the SSDs are compared to consumer drives. One thing I found interesting was that the platter drives in the disk shelf came pre-installed, but the SSDs for the controller head did not.

Rated PG-34
Jul 1, 2004




Amandyke posted:

What version of OneFS are you running?

Is there a way to determine this without direct access to the Isilon cluster? I'm not a system admin.

KennyG
Oct 22, 2002
Here to blow my own horn.
On the subject of costs: in Q4, $3.7M will buy you 4 VPLEX engines, 4 RecoverPoints, 4 XtremIO 20TB bricks, 12 Isilon X nodes @ 134TB with 128GB of RAM and 1.6TB of SSD each, a pair of maxed-out VNX 5400s @ ~200TB, 4 FC switches, and a crapload of licensing, and everything is encrypted. This is gear for 2 sites...

Xmas is going to loving rock. :ohyeah:
Pics next January of the install fest.

Thanks Ants
May 21, 2004

#essereFerrari


:fap:

orange sky
May 7, 2007

KennyG posted:

On the subject of costs: in Q4, $3.7M will buy you 4 VPLEX engines, 4 RecoverPoints, 4 XtremIO 20TB bricks, 12 Isilon X nodes @ 134TB with 128GB of RAM and 1.6TB of SSD each, a pair of maxed-out VNX 5400s @ ~200TB, 4 FC switches, and a crapload of licensing, and everything is encrypted. This is gear for 2 sites...

Xmas is going to loving rock. :ohyeah:
Pics next January of the install fest.

Holy poo poo, nice. I wish my company was selling you that :10bux:

Amandyke
Nov 27, 2004

A wha?

Rated PG-34 posted:

Is there a way to determine this without direct access to the isilon server? I'm not a system admin.

You'd likely have to ask a storage admin then.

Wicaeed
Feb 8, 2005
Holy mother of gently caress EMC licensing :psypop:

We bought a product a month ago and got an activation email, had to go to their LAC website to activate our entitlements, got sent an email with a loving certificate saying we can use their software, and then had to call their licensing support rep to be told it's a 48-hour turnaround to actually claim the license.

:rant:

Internet Explorer
Jun 1, 2005





Wicaeed posted:

Holy mother of gently caress EMC licensing :psypop:

We bought a product a month ago and got an activation email, had to go to their LAC website to activate our entitlements, got sent an email with a loving certificate saying we can use their software, and then had to call their licensing support rep to be told it's a 48-hour turnaround to actually claim the license.

:rant:

Welcome to doing anything with EMC. Good luck with their support. Look on the bright side, they used to snail mail you the certificate.

Rated PG-34
Jul 1, 2004




Amandyke posted:

You'd likely have to ask a storage admin then.

Okay, I asked a sysadmin and it's version 7.0.1.10.

$ isi uname -a
Isilon OneFS v7.0.1.10 Isilon OneFS v7.0.1.10 B_7_0_1_233(RELEASE): 0x700015000A000E9:Tue

devmd01
Mar 7, 2006

Elektronik
Supersonik
For those of you involved in a new SAN evaluation, selection, and migration, how did that process work for you? I'm going to an interview on Monday where my key role in the first year would be doing just that and migrating them away from a 5+ year old EMC, and I want to be able to talk intelligently about going through that process. I have plenty of hands-on experience with Compellent and Fibre Channel administration, but always on existing systems where I wasn't involved in the selection/migration process.

First step in my head is data, data, data: how many servers, expected growth, aggregate disk space needed, 95th percentile/average IOPS per LUN, what TYPE of IOPS/applications are using the SAN, an end-to-end evaluation of the existing storage fabric, as well as 3-5 year business initiatives such as DR or 100% virtualization and expected natural growth.

Next step is compiling it into a usable RFP that we could hand to vendors. Sure, any SAN vendor would be happy to assist you with gathering the data and telling you what you need, but I think it's important to have an idea of what those numbers should look like ahead of time. Once that's compiled, start researching not only the incumbent's existing offerings but also key competitors. Once you have an idea of what the market looks like, start the vendor contact game and the process of getting as many free lunches/sports tickets as possible.

Get the proposals, evaluate side by side, narrow to 2-3, ask for demos and start the bidding wars, professional services for the install included of course. Once the SAN is in, hook up a test server and hammer away and play with it for a bit before beginning migration.

Anyone have additional thoughts, ideas on how to go about this?
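For the "95th percentile IOPS per LUN" step, something tiny is enough once you've exported samples from perfmon/esxtop or the array (the LUN names and sample values below are made up for illustration):

```python
import statistics

def p95(samples):
    """95th percentile using the inclusive method (cut points span the data)."""
    return statistics.quantiles(samples, n=20, method="inclusive")[-1]

# Hypothetical per-LUN IOPS samples, e.g. one value per 5-minute interval.
lun_iops = {
    "sql-data": [1200, 900, 1500, 1100, 2400, 1300],
    "fileserver": [200, 180, 220, 800, 210, 190],
}

for lun, samples in lun_iops.items():
    print(f"{lun}: avg {statistics.mean(samples):.0f}, p95 {p95(samples):.0f} IOPS")
```

The point of p95 over average is exactly the "what TYPE of IOPS" question: a LUN that averages 250 IOPS but bursts to 800 sizes very differently from a flat 250.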

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

devmd01 posted:

Storage refresh

When sizing for performance, break things down into application groups and give that to the vendors. They will generally have tools for sizing things like SQL, Exchange, file services, Oracle, SAP, etc. Knowing what type of application it is will give them a good idea of how the workload breaks down beyond just raw IOPS numbers, and will also allow them to estimate things like compression or deduplication savings. It's generally more important to size performance correctly, because capacity is fairly easy to get with large disk sizes, but unless you're heavily leveraging flash you can end up without enough spindles to support the workload.
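To put rough numbers on that spindle math (the per-disk IOPS figure and RAID write penalties below are the usual rules of thumb, not any vendor's sizing data, and this deliberately ignores cache and flash):

```python
# Back-of-envelope spindle count from front-end IOPS.
RAID_WRITE_PENALTY = {"raid10": 2, "raid5": 4, "raid6": 6}

def spindles_needed(frontend_iops, read_fraction, per_disk_iops, raid="raid5"):
    """Apply the RAID write penalty to the write portion to get back-end
    IOPS, then divide by what one spindle sustains (rounding up)."""
    reads = frontend_iops * read_fraction
    writes = frontend_iops * (1 - read_fraction)
    backend = reads + writes * RAID_WRITE_PENALTY[raid]
    return int(-(-backend // per_disk_iops))  # ceiling division

# 10k front-end IOPS at 70% read on ~75 IOPS 7.2K NL-SAS:
print(spindles_needed(10_000, 0.70, 75, "raid5"))
print(spindles_needed(10_000, 0.70, 75, "raid10"))
```

Run it and you get a disk count far beyond what the capacity requirement alone would buy, which is the trap with big cheap drives.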

But generally the vendor will be pretty good at solving the performance and capacity equations to get what you need. You should focus on how you want to do things. Do you want to leverage array-based snapshots for backup? How easy is it to recover the data? Do you want things like zero-space cloning? If so, is getting a clone up a very manual process, or can it be done easily? Can it be handed off to someone like a DBA so that they can clone their own test databases?

If you aren't using snapshots for backup, how will you manage it? If it's something like Veeam or Commvault or NetBackup, you should investigate how the different competitors integrate with those tools.

Do you want de-duplication? Compression? Are there benefits to either/both for your workloads? Does your environment have off-peak hours when post-process scans can run, or would inline be a better fit?

Will you be leveraging array based replication for DR? If so, how much work would be involved to stand up your DR site? Do they have an SRA for SRM? Are you leveraging things like Exchange 2010 DAGs and SQL 2012 Always-On availability groups that obviate the need for smarter storage?

What's the management overhead like? Will you be creating a bunch of different raid groups of different types to house different classes of data, or will you just get one pool of storage for everything? Do you prefer the flexibility or the simplicity? How many off box tools are available to manage the array and automate tasks? Powershell integration? Single-pane-of-glass monitoring for the whole storage environment? Alerting and reporting?

If you're leveraging VMware, what does their integration look like? What VAAI primitives do they support? Will they have VVOL support? VASA provider? Do they have a vCenter plugin, and, if so, does it do anything useful?

How easy is it to add capacity? How easy is it to add performance? What types of things can be done non-disruptively? What types of things require an outage? What does your refresh process 3-5 years down the road look like? Rip and replace, or can they rotate new stuff in/out easily by, say, just replacing controllers live? How easy will it be to evacuate data when it's time for refresh?

There are a lot of questions here and they really tie in to how you want your infrastructure to look generally. You can just treat storage as dumb blocks of IOPS and capacity, but most vendors have a lot of value add on top of that and you need to decide which of those things are important to you.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

Internet Explorer posted:

Welcome to doing anything with EMC. Good luck with their support. Look on the bright side, they used to snail mail you the certificate.

In a box the size of a TV

devmd01
Mar 7, 2006

Elektronik
Supersonik

NippleFloss posted:

right click save as

Perfect, this is exactly what I needed, thanks!

GrandMaster
Aug 15, 2004
laidback

Vanilla posted:

In a box the size of a TV

Haha yes! EMC sent me a physical entitlement certificate for a classroom training course in a giant box, padded with bubble wrap. I then had to jump online and use the code on the certificate to register.
Ridiculous.

goobernoodles
May 28, 2011

Wayne Leonard Kirby.

Orioles Magician.
Can someone recommend a 24-port switch that will be dedicated to iSCSI traffic? Pretty small VMware environment of 3 IBM hosts (QLogic HBAs) and an IBM DS3300 SAN running about 20 VMs.

Moey
Oct 22, 2010

I LIKE TO MOVE IT
Are your hosts just connecting at 1 gig? What kind of switches do you currently use? You will probably want to keep these a similar brand just so working on them is familiar. Make sure to budget for two switches so you have redundancy as well.

You will be able to find something from each vendor, so it really boils down to personal preference. I have really liked working with Juniper's stuff for the past year and a half.

A pair of the 24-port Juniper EX3300s would probably fit the bill (with 1GbE connections). Throw them into a Virtual Chassis and then you can manage both units as one logical switch.

goobernoodles
May 28, 2011

Wayne Leonard Kirby.

Orioles Magician.

Moey posted:

Are your hosts just connecting at 1 gig? What kind of switches do you currently use? You will probably want to keep these a similar brand just so working on them is familiar. Make sure to budget for two switches so you have redundancy as well.

You will be able to find something from each vendor, so it really boils down to personal preference. I have really liked working with Juniper's stuff for the past year and a half.

A pair of the 24-port Juniper EX3300s would probably fit the bill (with 1GbE connections). Throw them into a Virtual Chassis and then you can manage both units as one logical switch.
Yeah, just 1Gb. I currently have Netgear GSM7248 and GS724AT switches on the rack that the hosts, iSCSI, and other things plug into. I'm hoping to get a full infrastructure refresh (servers, switches, SAN) into the budget for next year, but I want to go through our existing setup and redo the iSCSI networking as a stopgap until then. We've had occasional, ongoing issues with this setup and I really want to get the iSCSI traffic segregated.

On another note, are there any relatively cost effective SANs that allow for mixing flash and mechanical drives that I should look into? I'd like to be able to put the SQL databases for an application server or two and an RDS server on flash and put the rest on cheaper disks.

goobernoodles fucked around with this message at 23:28 on Nov 14, 2014


Thanks Ants
May 21, 2004

#essereFerrari


Dell's MD3 series can do that, but I'm not sure if it has any of the tiering features that generally make SSD worth having.
