KS
Jun 10, 2003


Outrageous Lumpwad

I'm so glad you posted this. We've been going through a SAN nightmare for the last month and I wanted to make a thread, but the audience in SH/SC for the enterprise level stuff seems to be very limited.

The production SAN I'm dealing with comprises two HP EVA8100s with 168 disks each, two Cisco MDS9509 switches, and a bunch of HP blades with Qlogic cards. It should be massive performance overkill for our needs. One EVA houses a 5 node SLES file server cluster, while the other houses a 6-server ESX cluster, ~10 HP-UX servers, and ~20 Windows servers on 3 distinct and relatively equal-sized disk groups.

Performance is terrible. On servers with dual 2gig HBAs, sequential read performance maxes out between 70-120MB/sec. The striking thing is that single path performance is equal to or better than the multipath performance. It just seems to be a per-vdisk speed limit.

We hooked up a server using a spare brocade switch to a spare EVA3000 and saw an identical pattern, but much better performance. Linux servers get 200MB/sec multipathing, but a single path transfer is the same speed, and two single path transfers down each HBA can saturate the fiber at 400MB/sec. I've verified using iostat that multipathing is actually working correctly -- each path seems to be capped at exactly 100MB/sec. A windows server using MPIO gets 300MB/sec.
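To make the per-path check concrete -- the claim being that each path pins at almost exactly 100MB/sec -- here's a rough sketch of summing per-device read throughput from an iostat interval. Device names and the captured numbers below are illustrative, not actual output from this array:

```shell
# Sum read MB/s across the sd devices behind an mpath device from one
# `iostat -xm 1` interval. If each path shows ~100 MB/s and the mpath
# aggregate is no higher than a single path, that points at a per-path
# cap rather than broken multipathing.
sum_paths() {
    # $1 = newline-separated "device rMB/s" pairs pulled from iostat output
    echo "$1" | awk '{ total += $2 } END { printf "%.0f\n", total }'
}

# Captured sample (made-up numbers, two paths at ~100 MB/s each):
sample="sda 99.8
sdb 100.1"
sum_paths "$sample"    # ~200 aggregate across both paths
```

In practice you'd feed this from `iostat -xm 1 | awk '/^sd/ { print $1, $6 }'` or similar, depending on your sysstat version's column layout.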

Stuff we've tried:
updated HBA drivers
updated HBA firmware
Emulex HBA
Different SAN switch
Different EVA
Dell server with qlogic 2460s

This EVA is supposed to provide around 2.4GB/sec sustained reads as configured, and we're struggling along with basically single spindle speeds.

The Linux servers are using multipathd, and the path checker is getting errors like SCSI error : <1 0 0 1> return code = 0x20000. I've started to read that this means the fabric is sending RSCNs, and that they could hurt performance fabric-wide. Anyone have more info? Are we messing up our switch config or something? Is there a way to trace what's causing them?
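For anyone chasing the same 0x20000 errors: a first step that doesn't need switch access is counting and timestamping them host-side, then handing those times to the fabric admin to line up against RSCN counters on the MDS (`show rscn statistics vsan <n>` is the switch-side command I've seen referenced -- treat that as an assumption). A trivial sketch against a captured log, with sample lines made up to match the error above:

```shell
# Count kernel-side SCSI errors in a syslog file so they can be
# correlated with fabric events. Log path and contents are illustrative.
count_scsi_errors() {
    grep -c 'SCSI error' "$1"
}

# Dry run against a captured snippet instead of the live /var/log/messages:
cat > /tmp/messages.sample <<'EOF'
kernel: SCSI error : <1 0 0 1> return code = 0x20000
kernel: SCSI error : <1 0 0 2> return code = 0x20000
kernel: qla2xxx: LOOP UP detected
EOF
count_scsi_errors /tmp/messages.sample    # 2
```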

The Cisco switches are using WWPN zoning and have an ISL trunk between them that will be removed soon, as each switch serves a separate fabric now.

I am not the SAN admin, just the guy who discovered the problem. Advice is appreciated!

(1000101, small wonder I was complaining about our ESX performance, huh?)

KS fucked around with this message at 04:36 on Aug 30, 2008


KS
Jun 10, 2003


Outrageous Lumpwad

JollyRancher posted:

You need to get some professional services people in there to troubleshoot. With that kind of hardware it's pretty clear you should have the budget to make that problem go away.

It sounds like you have not experienced the joys of working for the government. I'm sure it's in the budget for CY09. A few of us are interested in a quicker fix, however. It's a long shot, I know.

KS
Jun 10, 2003


Outrageous Lumpwad

Update:

We shut down our entire SAN last weekend and brought up one Linux server, then one 2k3 server. Performance was identical to what we get in the middle of the day at peak load.

We found some benchmarks here. These are reads in MB/sec by number of I/O streams.



This jibes with what we're seeing. How is that single stream performance anywhere near acceptable? I can throw 4 SATA disks in a software RAID-5 and beat that read performance.

What are the strategies, if any, we should be implementing here? Striping volumes across multiple vdisks? Tweaks to increase the number of i/o "streams" per server? How will we ever get acceptable ESX performance?

KS fucked around with this message at 15:57 on Sep 2, 2008

KS
Jun 10, 2003


Outrageous Lumpwad

rage-saq posted:

What is your maximum queue depth and/or execution throttle? You might want to try messing with that figure to see if you can improve your single server scenario.

Messing around with this a bit today. We're using QLogic mezzanine cards in a mix of 20P G3s and 25Ps.

BL20P default was queue depth 16 and execution throttle 16.
Dell server with QLA2640s was queue depth 16 and execution throttle 255.

I've been tweaking queue depth a bunch but not execution throttle. I'll have to try more settings tomorrow.

I started messing with I/O schedulers too. Out of the box it was using CFQ on the 8 paths underlying the mpath device. I think there's some performance to be gained here.
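Here's roughly what that scheduler change looks like scripted -- generating the commands rather than firing them straight at /sys, so they can be eyeballed first. Device names are hypothetical, and noop vs deadline is worth benchmarking both ways; CFQ's idling tends to work against an array that wants deep, continuous queues:

```shell
# Emit the scheduler-change commands for each sdX path device sitting
# under the dm-multipath device. Review the output, then pipe it
# through `sh` as root to apply.
gen_sched_cmds() {
    # $1 = space-separated list of path devices (hypothetical names)
    for dev in $1; do
        echo "echo noop > /sys/block/$dev/queue/scheduler"
    done
}

gen_sched_cmds "sda sdb sdc sdd"
```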

This thread, especially jcstroo's post, drives me crazy.

I'll hang out in #SHSC, thanks.

KS
Jun 10, 2003


Outrageous Lumpwad

Misogynist posted:

If I can find some sane way to replicate this to another Thumper, I will be a very, very happy man.

I talked to a guy at a USENIX conference last year whose company was using dozens of X4500s as the storage backend for one of the biggest ESX deployments I've ever heard of (300k+ IOPS). At the time I was dealing with HP SAN issues and was very jealous. Now not so much, but NFS definitely seems to scale better than fiber channel for big clusters. 4540s are probably the best storage possible for ESX.

KS
Jun 10, 2003


Outrageous Lumpwad

optikalus posted:

Don't get me wrong, I think it is a great idea, just poor execution. Instead of 45 drives per chassis, I'd stick to 30 or so. That'd give about 3/4" clearance between each drive, which would allow sufficient air flow and reduce radiant transfer.

There are a half dozen vendors with an identical layout including Sun, HP, Overland, and Nexsan. Jamming 48 1.5tb drives in 4u is kinda the next big thing and a centerpiece of d2d backup strategies. Density is important and a temperature variance between drives is not important at all.

KS
Jun 10, 2003


Outrageous Lumpwad

I'm glad to hear that everyone else is having good experiences with Compellent too. We bought a pair of them about six months ago to replace some Hitachi arrays and they have been nearly perfect. We are incredibly happy with the performance we're getting, which is good, because I recommended them.

We did have an array go down last month, which was scary, and I might as well throw this out there in case it helps anyone else: if your controllers have 8gig emulex cards in them and you're using brocade switches with 6.3+ firmware, there is a TSB out on Brocade's site warning of incompatibility. It was a very tense Sunday as the array gradually lost connectivity to 50+ servers. The problem appears after a controller reboot, and it took Compellent swapping the cards for Qlogics (in <3 hours) to get us back on our feet.

KS
Jun 10, 2003


Outrageous Lumpwad

Intraveinous posted:

Do you have a link to the TSB? I recently upgraded my brocade switch firmwares, and since then, I'll randomly lose one MPIO link at a time.

Since we're not fully in production on it yet, that hasn't been a problem as of yet, but I'm fairly certain we've got four 4port 8gb Emulex cards.

I don't have a link. You need brocade support THROUGH BROCADE to have access to them, and ours is through Hitachi.

(if you get the tsb, send it to me please!)

KS fucked around with this message at 21:45 on Jun 13, 2011

KS
Jun 10, 2003


Outrageous Lumpwad

Before we moved to Compellent we used Hitachi AMS arrays. I'm trying to use one of them for some temporary d2d backup storage and the HSNM2 management software seems to completely suck. We're talking minutes to open an individual window, such as editing a host group or adding a LUN. It's ridiculous.

Wondering if anyone else has these units and has seen this problem, or hasn't? We've tried installing it on several boxes and it's currently on an 8-core server with a gigabit connection to the array -- no change.

The crappy management software was one of the big reasons we didn't even consider Hitachi storage this time around.

KS
Jun 10, 2003


Outrageous Lumpwad

ozmunkeh posted:

Does anyone have any idea when Dell might be refreshing the Compellent controller offerings? I'm sure those with larger server rooms or datacenters can be a little more flexible about physical space but having to give up 6U before even attaching any disk shelves is a little much for those of us with limited rack space.

Even if they refresh it I can't see them getting much smaller -- they have 7 expansion slots, which is probably too much for a 2u design.

KS
Jun 10, 2003


Outrageous Lumpwad

If you're using Compellent's default recommended storage profile in a system with SSD as tier 1, it's going to act as a write cache. All writes go to the top tier of storage in RAID 0+1. They get progressed down to lower tiers and rewritten as RAID-5 in the background. It does not, however, act like a read cache, except in the sense that recently written data will be on SSD...

Having experience now working with EMC VNXe, Hitachi AMS, HP EVA, and Compellent in large ESX environments, I don't think you can go wrong with the Compellent. I now have three of them. Replay Manager 6 killed my last few complaints about the system. I am excited to see where they go with dedupe and the 64-bit series 50 controllers, but even today it is fast and reliable.

I like to say that Compellent is 3PAR for people who care about costs. It's not quite there, but it's close.

KS fucked around with this message at 00:27 on Nov 13, 2011

KS
Jun 10, 2003


Outrageous Lumpwad

Serfer posted:

I don't know what Dell's Compellent support is like yet, but I'm willing to risk it rather than deal with EMC support again.

Copilot is the best support I have ever had to deal with and Dell has not hosed it up (yet?).

KS
Jun 10, 2003


Outrageous Lumpwad

Misogynist posted:

Just walked into an environment where the management is considering a substantial investment in an HDS stack. I've never worked with or really heard much from them, besides the BlueArc stuff we've had that long predates the Hitachi acquisition. Can anyone give me their impressions on the technology?

My current employer "upgraded" from an AMS 500 to an AMS 2300 shortly before I started. While the 500 management is OK, the java UI on the 2300 is so horrendously godawful that I want to murder the team responsible. We're talking 60+ second wait times just to hit the "edit host group" button and have the info appear.

Feature-wise, my impression coming from a mid range HP shop was it seemed horribly outdated compared to EVAs, but maybe we didn't have the right licenses. I tried using them as D2D backup targets after retirement and just could not get the performance where it should be, even on big sequential writes.


Spamtron7000 posted:

Just to spite them I'm going to replace the DataDomains with Quantum DXi's.

Got a pair of 6702s about 2 months ago and they are loving awesome. I like backup targets that can saturate 10gb connections.

KS
Jun 10, 2003


Outrageous Lumpwad

Ragzilla posted the biggest Compellent noob pitfall, but I wanted to draw attention to it: you must do snapshots of some sort on all volumes in order for data progression to work properly. Without it, data will not be paged down to lower tiers or even RAID-5 on Tier 1.

If you're space constrained and have a volume that has a 100% change rate every day (Exchange and SQL come to mind) a daily snapshot with 26 hour retention seems to be about the minimum you can get away with. You will still come out ahead on storage over no snapshots and the data sitting in RAID-10.

My only other advice is to use the recommended storage profile for nearly everything. The system is pretty smart, but you can outsmart it if you try. Treat custom storage profiles like VMWare memory reservations/limits: something to be avoided unless you know you need it.

KS fucked around with this message at 22:14 on Dec 22, 2011

KS
Jun 10, 2003


Outrageous Lumpwad

Mierdaan posted:

A bloo bloo bloo, this is my life.

Thankfully we get to buy some Compellent this year.

I've worked with a bunch of vendors and I really like our Compellents, but Netapp and 3PAR are the other vendors I'd recommend without hesitation in the entry enterprise space. I'm sorta surprised to hear you're moving away from Netapp. I doubt Compellent really picks up a lot of their customers.

Be sure you buy series 40 controllers. They will last you a lot longer.

KS
Jun 10, 2003


Outrageous Lumpwad

szlevi posted:

I think it's a bigger deal that they finally announced Storage Center 6.0, their first 64-bit OS: they doubled the cache (RAM, that is) in the controllers, effective immediately as I heard and, more importantly, they will be able to use smaller block size for data progression (current is 512k I think.)

It is definitely a bigger deal, but it requires series 40 or better controllers at the moment, hence my recommendation. 512k is the smallest page size -- the default is 2mb.

6.0 also adds full VAAI support which is nice.

KS
Jun 10, 2003


Outrageous Lumpwad

Wicaeed posted:

Ah, I figured it out. I had to disconnect all the current sessions and reconnect, then rescan for disks to see the newly created LUN

You should definitely not have to disconnect the current sessions to see a new LUN. That's not a limitation of the software iSCSI initiator, for sure.

KS
Jun 10, 2003


Outrageous Lumpwad

Just curious, do they fit in the 2.5" bays on DL-series servers?

KS
Jun 10, 2003


Outrageous Lumpwad

Get SC 6. It's been out for months. That's admittedly about a year later than it should be, but it supports the full VAAI feature set.

A bit surprised to hear it's been so bad -- the only previous hate in the thread was from the guy who works for EMC. Currently have 120 TB of Compellent over 3 arrays and 80+ of it is VMware. While I'm sad they've fallen behind on a few things (flash cache), it has been really solid for the last year and a half.


Intraveinous posted:

(their lab had identified a problem in some limited cases with Emulex 8Gb FC cards, so they proactively replaced them with Qlogic 8Gb FC cards)

Their lab didn't identify poo poo -- we discovered it. It caused a rather long outage here due to an obscure Emulex TSB and a conflict with Brocade FW 6.3+. Small world. But Copilot was awesome (same engineer was leading the effort for 16+ hours) and the part was there within 2 hours of dispatch once they figured out the cause.

KS fucked around with this message at 01:00 on May 3, 2012

KS
Jun 10, 2003


Outrageous Lumpwad

I'm looking to add some secondary storage for D2D backups -- about 25TB, and I don't want to take more than 2-3U and ~$30k. Not opposed to roll-your-own for this, but only if it saves a bunch of money. Performance needs to not suck. It needs to export as either NFS or iscsi.

I know I can get a 380 G8 with 12x 3TB drives in it, but what would I run on it? Nexenta adds $10k to the bill, and that's a hard pill to swallow. I don't know enough about the collection of OpenSolaris forks to know if they're at a point where they're usable for something like this with ZFS, or if I should just go with something I know better.

Also looking at the Nexsan E18, and if anyone has other suggestions I'd love to hear them.

KS
Jun 10, 2003


Outrageous Lumpwad

We moved from FC to 10g ISCSI to support a converged network/storage fabric. When we bought UCS, support for FCOE in the Nexus 5k series was basically nonexistent -- you could present storage to ports locally on the 5k, but you could not trunk into a 6140. Updates have made it better now, but that ship has sailed.

It is also considerably cheaper. Switchport cost is relatively equal, but on the HBA side it's not really close. For my DC that isn't UCS, I have 19 ESX hosts. The first 7 we bought with dual 8gb FC HBAs for $1700 each including cables. The next 12 used 10g CNAs for $900, which eliminated 4 1gig network ports per host as well. The HBA cost savings paid for one of the 10g switches.
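The switch math works out, for what it's worth. Sketching the arithmetic with the prices from the post -- the per-host delta is my calculation, and "pays for a switch" assumes a roughly $10k 10G switch, which is my number, not a quoted figure:

```shell
# Back-of-envelope on the HBA-vs-CNA savings across the 12 newer hosts.
hosts=12
fc_hba_cost=1700      # dual 8Gb FC HBAs per host, incl. cables (from post)
cna_cost=900          # 10G CNA per host (from post)
savings=$(( hosts * (fc_hba_cost - cna_cost) ))
echo "$savings"       # 9600 -- in the ballpark of one 10G switch
```

And that's before counting the 4x 1GbE ports per host the CNAs eliminated.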

Last, you can present storage direct to VMs, which lets you do all sorts of tricks with snapshotting. VMware's NPIV support sucks. The MS iscsi initiator does not.

KS fucked around with this message at 19:21 on Aug 17, 2012

KS
Jun 10, 2003


Outrageous Lumpwad

Compellent arrays do all writes to 10 and rewrite to RAID-5 in the background. They should be using RAID10/RAID-5 for 15k disks and RAID10-DM/RAID-6 for the bigger 7.2k disks per Compellent's best practices. You can't even specify just RAID-10 without turning on advanced mode, I believe.

PM me if you'd like the doc, but they're right.

KS fucked around with this message at 19:51 on Aug 17, 2012

KS
Jun 10, 2003


Outrageous Lumpwad

Local replication is not often used for this kind of scenario. The common method used to solve the problem is to buy enterprise-grade storage that is highly available internally -- SAS drives with two paths, dual controllers with redundant power supplies, dual switches, etc. You will not get that in your budget, but that is what you should probably aim for if you want to improve reliability.

Replication involves a manual failover process and generally some data loss until you get up into the arrays that can do synchronous replication, which is what you'd want for the local replication situation you're proposing. It is not really the solution I'd recommend for a single cluster.

KS
Jun 10, 2003


Outrageous Lumpwad

I have a dedicated VM with Java 6 Update 29 just for that. My desktop kept getting infected when I tried to use 6U29+firefox to browse the web, and it doesn't play well in Chrome.

Still better than the Hitachi or HP interfaces I came from.

KS
Jun 10, 2003


Outrageous Lumpwad

I am pretty sure you should be seeing 4 paths, but it's not a valid MPIO setup. I still maintain you have the vswitch stuff set up wrong. Reference this article:

quote:

There is another important point to note when it comes to the configuration of iSCSI port bindings. On vSwitches which contain multiple vmnic uplinks, each VMkernel (vmk) port used for iSCSI bindings must be associated with a single vmnic uplink. The other uplink(s) on the vSwitch must be placed into an unused state.


This doesn't match the pic you posted in the virtualization thread.


Your setup is showing 3 paths each to .110, .111, .112, and .113, all from one vmhba, which makes no sense. It's like you have a 3rd vmknic defined somewhere.

Is your switch VLAN-capable? You could set up two VLANs and two fault domains and be ready to migrate to a second switch when you get it without host reconfiguration. It's probably a lot easier if you're just getting this into production, because conversion to two fault domains requires storage interruption.



edit: here's what it looks like for me on a hardware iscsi setup with dual controllers. All paths show to the control ports. Ignore the warnings on the iscsi HBAs -- they have two IP addresses each because of a transition from 1g to 10g, so they show unbalanced until I add a 2nd 10g card.
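For reference, the port-binding layout the quoted article describes looks roughly like this in vSphere 5-era esxcli. Portgroup, vmk, and vmnic names are examples, and exact flags can vary by ESXi build -- treat this as a sketch, not a runbook:

```
# One active uplink per iSCSI portgroup; the other uplink goes unused
esxcli network vswitch standard portgroup policy failover set \
    -p iSCSI-1 --active-uplinks vmnic2
esxcli network vswitch standard portgroup policy failover set \
    -p iSCSI-2 --active-uplinks vmnic3

# Bind each vmk port to the software iSCSI adapter, then rescan
esxcli iscsi networkportal add -A vmhba33 -n vmk1
esxcli iscsi networkportal add -A vmhba33 -n vmk2
esxcli storage core adapter rescan -A vmhba33
```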



KS fucked around with this message at 16:07 on Sep 4, 2012

KS
Jun 10, 2003


Outrageous Lumpwad

Cool. I completely forgot about the ability to override the vswitch failover settings on a per-interface basis, so that looks reasonable.

KS
Jun 10, 2003


Outrageous Lumpwad

Compellent has a really cool copy tool out of the box -- you zone the two arrays together, present LUNs to the controller ports on the Compellent, and the array has a point and click process to claim the external LUN and do a block-by-block copy to a Compellent volume. I used it for a ~40 TB migration off a Hitachi AMS2300 2 years ago over just 3-4 Sundays and I can't imagine having to migrate without it. I guess it would be easier now that I'm 100% virtualized.

Dell needs to steal this feature and give it to the EQL arrays.

KS
Jun 10, 2003


Outrageous Lumpwad

wyoak posted:

My impression of the Compellent migration tool is that it took the source LUN offline during the migration - am I wrong about that, and you can leave the source online while it's migrating? That'd be really nice for us, we're about to do something similar.

It does a one-pass read of the LUN, so if your app is up and you're making changes it won't capture them and you won't get a consistent copy. You're going to need some downtime, but not a ton -- it saturated 8gb FC when I was doing it, so it happened pretty quick.

KS
Jun 10, 2003


Outrageous Lumpwad

The Compellent Fault Domain concept follows your physical infrastructure. You will have one fault domain per physical switch. The two fault domains should have separate subnets. Whether those physical switches are dedicated to storage or are carrying network traffic as well doesn't really matter -- you should have a dedicated VLAN on each switch.

At least one port on each controller goes to each switch -- best practice would be 2+ from each controller to each switch. Virtual ports fail IPs from one port to another within the same fault domain to protect against controller failure. Any IP associated with a fault domain can live on any controller port within that fault domain, which should always be on the same switch/in the same VLAN.

Look at page 36 of "Storage Center 5.5 Connectivity Guide" on the KC.

Some operating systems (Win 2003 software iscsi at least) don't MPIO properly when all interfaces are in the same subnet, so just don't do it.

KS fucked around with this message at 02:41 on Sep 18, 2012

KS
Jun 10, 2003


Outrageous Lumpwad

Well, I built a Supermicro SC847-based system based on some pointers I got here. Stood it up today and I'm quite impressed.

Hardware is 2x E5620, 96 GB, 10gbit Intel x520-DA2 NIC, 32x 3TB 7200 RPM hard drives, and a ZeusRAM for ZIL. The drives are in 4x 8-disk RAID-Z2s, so 60.9 TB usable in 4u. Running NAS4Free 9.0.0.1.

Iometer results from a server using the MS software initiator on a single path:
Sequential read: 505 MB/sec
Sequential write: 290 MB/sec
4k random read: 25k IOPS, 104 MB/sec, 1.2 ms avg latency.
4k random write: ~28k IOPS, 116 MB/sec, 1.1 ms avg latency.

Around 20% CPU util during these tests.

Still have a bunch more benchmarking to do before I decide on a final config and roll it into production. LACP, Jumbo Frames, different drive configs, and a hardware initiator are still on the table. I am also going to do a round without the ZIL to quantify the benefits of a $2500 8GB SSD. I am guessing my sequential write speeds are limited by the pool setup, and I want to get that number up since it is a backup server.
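Sanity-checking those numbers against the config (all arithmetic mine, derived from the figures above):

```shell
# 4 x 8-disk raidz2 = 24 data disks; 3 TB drives are ~2.72 TiB each, so
# usable space before ZFS overhead is ~65 TiB, consistent with the
# 60.9 TB the pool reports.
vdevs=4; disks_per_vdev=8; parity=2; tb_per_disk=3
data_disks=$(( vdevs * (disks_per_vdev - parity) ))
echo "$data_disks"                      # 24 data disks
awk -v d="$data_disks" -v t="$tb_per_disk" \
    'BEGIN { printf "%.1f TiB\n", d * t * 1e12 / (1024^4) }'

# The 4k random read figures are self-consistent too:
# 25000 IOPS x 4 KiB blocks = ~102 MB/s, matching the ~104 MB/s measured.
awk 'BEGIN { printf "%.1f MB/s\n", 25000 * 4096 / 1e6 }'
```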

I still can't get over those random IO numbers. It makes me want to put a database on it. Or my entire Dev VMware cluster. The ZeusRAM is the coolest toy I've had to play with in a while.

Our $400k prod SAN can't touch those numbers, but it's also rock solid and does online updates. It's impressive what you can put together for <$15k, but I'd never use it for primary storage.

KS fucked around with this message at 04:36 on Oct 5, 2012

KS
Jun 10, 2003


Outrageous Lumpwad

I read the negative descriptions of tiered disk in this thread and wonder, because the reality for us has been so different over the last 2 years we've had Compellent arrays. We bought in just a few weeks before Dell bought the company.

Our biggest system is around 64TB usable, made up of 60 15k 3.5" disks and only 24 7k 3.5" disks. It's essentially sized so that live data stays on 15k and 7k is used for replays. Our peak period is about 2 hours of 475 MB/sec transfers and 9000 IOPS. The array handles the load just fine.

Maybe there are partners out there that do sizing differently, and I could definitely see a disaster if sized improperly, but it's worked out well for us. All writes go to the fast disk, period. The tiering works outside of peak hours, and the system has been solid. The ecosystem (Replay Manager, powershell scripting, etc) is the best I've worked with.

At the same time, I'm a NetApp fan from way back. In 2012 I tried really hard to replace both of our arrays with NetApps during a planned DC move. Netapp has the advantage of a flash cache, but the lack of tiering seems like a disadvantage -- they quoted a system with 144 15k drives to meet the combination of size and throughput requirement, and obviously couldn't come close on price.

From my experience, tiering has few drawbacks if sized right. It's cheaper than an all-15k system, and it doesn't require any additional management. That's not a bad thing.

Now would I get a Compellent again today? Absolutely not. The fact that a system they were selling in 2011 doesn't have VAAI support is either an embarrassment or a bad joke. The benefits of flash from a latency perspective are too big to ignore, and Compellent has lagged behind in that department.

KS fucked around with this message at 21:55 on Apr 11, 2013

KS
Jun 10, 2003


Outrageous Lumpwad

Right, but only in the 6.x code branch, which is only on the series 40 and 8000 controllers. Series 30 wasn't EOL until like mid-2011. They've been promising a 5.6 branch with VAAI for the series 30s for at least a year.

KS
Jun 10, 2003


Outrageous Lumpwad

NippleFloss posted:

You could write it inline as it's being read from the lower tier, but presumably you're also busy destaging stuff from the higher tier to make room, since space on the higher tier must be constrained, otherwise you would run everything there all the time.

At least in the Compellent world, if you fill your top tier to 100% you're either sized very wrong or have <5% free space left on the whole array. Data progression works to keep enough space available on top tier to handle writes -- the lower tier disks fill first. A healthy array has zero space allocated on the lower tier disks to writes. Not sure if this holds with other vendors.

KS
Jun 10, 2003


Outrageous Lumpwad

That's funny, because in Dell's Compellent line you get a popup when doing the second assignment asking you to override it if you DON'T want read-only.

One thing that vmware recommends is turning off automount -- start up diskpart and type automount disable.

KS
Jun 10, 2003


Outrageous Lumpwad

I have a fax server (running Windows) with a 1TB storage volume that's producing 70GB snapshots per day, despite receiving only about 1GB of faxes per day. Any suggestions for tools at the OS level to track down the source of the change rate? There are 8M+ files involved here. I can't replicate it for DR until I can find it and fix it.

KS
Jun 10, 2003


Outrageous Lumpwad

Array snapshots. So that represents 70GB of block changes across the 1TB volume.

KS
Jun 10, 2003


Outrageous Lumpwad

Internet Explorer posted:

What version of Windows is writing to that volume? If it is 2003 or earlier consider disabling the last accessed timestamp on files. NTFSDisableLastAccessUpdate
HKLM\System\CurrentControlSet\Control\FileSystem\ (REG_DWORD)

Thanks, going to try this. It is indeed 2003.

Thankfully no defrag. We're not that bad -- just bad enough to still be using Server 2003 for a hugely important app. But I'm sure I'm not alone there.
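For anyone else landing here, the change from the quote looks like this on a 2003 box (apply in a change window; 2003 SP1 and later also expose the same switch through fsutil):

```
:: Disable last-access timestamp updates on NTFS (cmd syntax)
reg add "HKLM\System\CurrentControlSet\Control\FileSystem" ^
    /v NTFSDisableLastAccessUpdate /t REG_DWORD /d 1 /f

:: Equivalent on 2003 SP1+:
fsutil behavior set disablelastaccess 1
```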

KS fucked around with this message at 03:58 on May 2, 2013

KS
Jun 10, 2003


Outrageous Lumpwad

Already looked at changed files, nothing obvious. It's a Compellent array, so 2MB blocks for snapshots. I think the access time thing is a strong possibility given the profile of the server -- we're going to try it on Sunday.

KS
Jun 10, 2003


Outrageous Lumpwad

There are <8,000 changed files on a daily basis when searching by date modified, totaling less than 1GB.

However, a service constantly scans directories containing 700k+ files. It's read-only -- it looks for faxes containing a barcode and links them to a web app. Access time seems like a really good explanation. I'll update after Sunday.
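The by-date search above is easy to script; here's a sketch with GNU find (the path and file names are made up for the dry run):

```shell
# List files modified in the last 24 hours under a tree -- the same
# "search by date modified" check, minus the Explorer wait.
changed_in_last_day() {
    find "$1" -type f -mtime -1
}

# Dry run against a throwaway directory with one backdated file:
mkdir -p /tmp/faxdemo
touch /tmp/faxdemo/old.tif /tmp/faxdemo/new.tif
touch -d '3 days ago' /tmp/faxdemo/old.tif
changed_in_last_day /tmp/faxdemo    # only new.tif
```

Piping that through `xargs du -ch` gives the total size of the daily churn to compare against the snapshot delta.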


KS
Jun 10, 2003


Outrageous Lumpwad

Here's an update as promised. Sunday and Monday's snapshots were 2 and 9 GB respectively. Thanks to Internet Explorer for the fix. That helps quite a bit.
