|
Evilkiksass posted:I have been working on designing a case using an Atom motherboard and two 5-drive RAID cages as the base for it. Right now I am prototyping this in Google SketchUp. I was interested in getting you guys' opinions:
|
# ¿ Aug 1, 2008 06:26 |
|
|
Just thought I'd let people know that the Promise SmartStor NS4300N is on sale at Fry's for $300. It is almost $400 everywhere else. Not a bad deal for a 4-drive SATA 3.0Gb/s NAS with tons of options. My friend has two and highly recommends it. http://shop2.frys.com/product/5351488
|
# ¿ Aug 16, 2008 05:56 |
|
Alowishus posted:Yeah I have a NS4300N (got it last time it was on sale at Fry's), and it is pretty decent. Promise has been pretty good with OS upgrades, and they have plugin modules to turn it into a DLNA, iTunes and Bittorrent server. The CPU in it is a little weak so you're not going to be able to saturate gigabit with it, but it'll do standard file serving and video streaming duty just fine.

I'm hoping the streaming issue doesn't affect normal Windows video playback?
|
# ¿ Aug 16, 2008 07:29 |
|
I think I finally found a place that has the Chenbro ES34069 case in stock. It is sold out EVERYWHERE. Looking forward to building a system in it; it looks like a really neat box. DLCinferno fucked around with this message at 18:26 on Sep 27, 2008 |
# ¿ Sep 24, 2008 19:21 |
|
That looks like a sweet case, but goddamn is that site sketchy as hell. Still, for $95 it's definitely worth the risk. I'm still waiting for a few parts for the Chenbro, but once they arrive I'll post a trip report. Word to the wise: it ended up being way more cost and hassle once I had to get a PCI Express x1 riser and a SATA card to match (my motherboard only has 4 SATA connectors, and I wanted a decent system drive - which will be a 160GB 2.5"). Here's hoping it fits without problems. That said, the case is loving sexy as hell, even though it's more than I'd ever pay for a regular case.
|
# ¿ Oct 7, 2008 06:04 |
|
I'm running a RAID 5 through mdadm with four disks. I created an ext3 partition on each of the drives before building an array from the partitions. Should/could I have created the array from the raw disks without partitioning them first, and then partitioned the entire array afterward? That didn't make sense to me at the time, but I want to make sure I'm choosing the best option (and I don't even know if that is possible).
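In mdadm terms, the two options being weighed look like this - a sketch with hypothetical device names (/dev/sd[b-e]); both are possible, though partition-based arrays are the more common setup:

```shell
# Option 1: build the array from one partition per disk,
# then put a filesystem directly on the md device
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
mkfs.ext3 /dev/md0

# Option 2: build the array from the whole disks,
# with no partition tables on the members at all
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde
mkfs.ext3 /dev/md0
```

Either way the filesystem usually goes straight onto /dev/md0; partitioning the array itself is possible but rarely needed.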
|
# ¿ Nov 3, 2008 03:14 |
|
Well, now I'm confused. After creating the ext3 partitions on each of the drives and building an array across the partitions, I formatted the array (at /dev/md0) with mkfs as ext3, although I didn't create a partition on it first (fdisk sees no partitions at /dev/md0). Everything seems to be working just fine - I've been copying stuff to it over samba all day - but am I going to run into problems in the future? What if my system goes down and I need to assemble the array on a different box? Can I do that without problems? I'm at about 99% utilization on my existing drives, so if I have to copy everything back in order to rebuild the array, I need to do it soon or I'm going to lose data for lack of disk space (or spend a troublesome amount of time ferrying the data onto different removable drives).
|
# ¿ Nov 3, 2008 04:30 |
|
Halibut Barn posted:It sounds like you created the array properly, but like Stonefish said, the individual partitions should be marked as the "Linux RAID Autodetect" type, not ext3. That's easy enough to fix though, and I don't think it's really that critical. They just wouldn't be counted by mdadm when it's autodetecting arrays.

necrobobsledder posted:Someone posted their experience with a Chenbro case that would meet your needs.

Here's the specs:
- 4x 1TB Barracuda ES.2
- DG45FC Mini-ITX motherboard w/4 SATA ports
- 2GB DDR2 800 RAM
- Celeron 2.0GHz
- 160GB 2.5" WD Scorpio SATA
- PCIe x1 right-angle riser
- Rosewill RC-213 PCIe x1 SATA controller

The case comes with a power supply that has an external brick so you don't crowd the case. So far it has seemed very effective. I chose to add the SATA controller for the system drive, but I had a lot of problems getting an OS working with it. The Windows install kept crashing; openSUSE would install just fine but sat there on boot and never got into the OS (it did once while the CD was still in the drive, for some reason). I think those problems were due to the controller card expecting drivers to be loaded, but since I didn't have a USB floppy I couldn't test that. Eventually I decided to try Ubuntu 8.10 and it installed perfectly and booted without a hitch. So far I've been very happy.

The heat issues were because I tried running the system without a case fan in the motherboard space (the motherboard and components are physically separated from the 4 drive bays, which have their own two fans). Turns out this motherboard runs kinda hot. The CPU was fine, but the board was hitting 96C on the graphics/memory hub controller. I dropped a spare fan I had kicking around on it and it fell to 65C, which is fine for me, but I do still need to get some airflow moving. I'll probably buy two fans: one for the hub, which I'll just screw into a heatsink on the motherboard, and one for exhaust in the motherboard space. There's not much room once everything is in there, and it probably won't hold more than a 40mm, but it should be fine in the end. I'm also going to have to get a new splitter for the 3v power cable...the one spare they give you has a SATA connector, but a 4-pin floppy connector. There is a 3-pin connector for a fan on the motherboard (apart from the CPU fan, of course).

The SATA controller for the system drive BARELY fit, and it definitely needed the right-angle riser to do so, which was kinda hard to find. It isn't very secure. There's also a PCI riser that Chenbro makes specifically for the case (I have one if you would like it), but obviously you'll need a motherboard with the slot for it; the Intel one I bought only had one expansion slot. The expansion card kinda blocks the minimal rear airflow...I wish I had Dremeled it a bit to open up the flow, although there are side ports in the case right on top of the CPU, which helps I'm sure. There is a slimline DVD drive slot, a memory card reader slot (gotta buy the Chenbro-sanctioned one, I think), and a little 2.5" drive holder that fits very nicely under the DVD slot and holds the drive securely. The biggest problem is getting enough ports on your motherboard to hook up everything, although the memory card reader runs off USB.

Overall thoughts: with the case all buttoned up, it looks awesome. The 4 hot-swappable drives are sexy looking, and the whole thing is very solid and well-built. No lovely plastic holding your drives; they are satisfyingly secure. The fact that the drives have plenty of cooling and are completely segregated from the rest of the system heat is very nice. The only downside is finding the right mix of components to support everything you want in such a small space (mini-ITX), which can affect cooling as well. I'll post some pictures tomorrow evening, and feel free to ask me any questions.

Definitely the most I've ever spent on a case (~$200, it was drat hard to find one too), but well worth it for something that is going to be on 24x7 acting as a DHCP/DNS/file/torrent server. I chose to use software RAID even though the Intel motherboard supports RAID (not sure if it is true hardware or fakeraid) because I wanted to learn and control the system more.
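The partition-type fix Halibut Barn describes is one command per member disk; a sketch with a hypothetical device, assuming MBR-style partition tables (type fd is "Linux RAID autodetect"):

```shell
# change partition 1 on /dev/sdb from ext3 (83) to Linux RAID
# autodetect (fd), then verify; repeat for each member disk
sfdisk --change-id /dev/sdb 1 fd
sfdisk --print-id /dev/sdb 1    # should now print: fd
```

The type byte is just metadata, so this doesn't touch the data already in the array.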
|
# ¿ Nov 3, 2008 06:58 |
|
Fibre? Hope you have a good reason for it, though, because it will cost a pretty penny
|
# ¿ Feb 12, 2009 05:30 |
|
Lobbyist posted:So I just purchased 5 1TB SATA drives. Am I doomed to failure if I run them in Linux software RAID5? ZFS still only runs on Solaris? What are my options?

I decided against hardware RAID because I didn't want to be tied to a specific RAID controller, and frankly I couldn't be happier about that decision. Someone spilled a beer in my office on the filing desk next to the server; it ran down into the top of the case and fried the motherboard - black melted plastic and everything. I was worried about the drives, but I pulled them out, plugged them into my main computer, typed a single mdadm --assemble command, and immediately I could mount and use the entire array. Otherwise I would have been waiting for new hardware to arrive and hoping the specific controller was still manufactured. Not to mention how easy it is to grow the array when I need to. Lots of people I talked to were ambivalent about software RAID, but no one could really say why beyond the fact that they didn't trust it. I've had only good experiences and would happily recommend it. When I rebuild my file server I'll probably look at ZFS, just because it seems cool and accounts for the RAID write hole.
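That whole recovery is about two commands (device names hypothetical; mdadm identifies members by their superblocks, so it doesn't matter which ports the drives land on in the new box):

```shell
# reassemble the existing RAID5 from its member partitions,
# then mount it - no controller-specific state involved
mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
mount /dev/md0 /mnt/array
```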
|
# ¿ Jun 5, 2009 00:58 |
|
This is disappointing: Snow Leopard kisses ZFS bye-bye Although I don't use anything Mac, I'd like to see more usage of ZFS. On a side note, does anyone have suggestions for a case that can house at least 8 3.5" internal drives with good cooling for the drives and a decent layout?
|
# ¿ Jun 11, 2009 00:38 |
|
Interlude posted:What's the going opinion of using the Western Digital "Raid Edition" drives as opposed to the Black versions? Worth the $55 price increase for a RAID-5 array?
|
# ¿ Jun 14, 2009 04:40 |
|
necrobobsledder posted:Psst, you can use a software utility from WD to enable TLER on even the Green drives. The primary reason to buy the Black series of drives is for the warranty and the far superior hardware (extra drive head motors apparently), not for the firmware features.

I think of the extra warranty as insurance, and for maybe $15 more I can get nearly double the coverage, so it's great as an external backup drive for me.
|
# ¿ Jun 14, 2009 18:03 |
|
cypherks posted:How many disks do you have in your R5 setup? I'm confused as to why you'd run backups against an R5 array... (p.s. - also, he's talking about backing up to his server)
|
# ¿ Jul 7, 2009 02:27 |
|
Has anyone tried or is familiar with GlusterFS? Sounds like an interesting excursion. quote:GlusterFS is a clustered file-system capable of scaling to several petabytes. It aggregates various storage bricks over Infiniband RDMA or TCP/IP interconnect into one large parallel network file system. Storage bricks can be made of any commodity hardware such as an x86-64 server with SATA-II RAID and Infiniband or GigE interconnect.
|
# ¿ Jul 14, 2009 03:41 |
|
KennyG posted:I rewrote the thing in .NET and modeled raid arrays from 2-64 disks 1000 times each. I just optimized the code and am working on a 1,000,000 rep sim to try and improve the reliability of some of the averages. Wow, that is great. Very nice job. Looking forward to the 1m rep.
|
# ¿ Jul 29, 2009 19:55 |
|
So I built an OpenSolaris server and I'm absolutely loving ZFS so far. I was running mdadm on Ubuntu before (and have had great success with OS software RAID), and it's been an interesting venture moving to ZFS. Want to share a file system with a Windows machine? Single command: zfs set sharesmb=on pool/media. Done! No Samba configuration; it's all automatic, and the share is mounted and ready after each reboot without any config editing. Couldn't be easier. I'd be happy to go into more detail on the setup if anyone is interested (including some challenges getting rtorrent working on Solaris), but what I'm really interested in is some feedback on disk spindown.

Basically, what I'd like to know is what people think about MTBF and its relationship to disk spin up/down. Right now I've got the 4 drives in my array set to spin down after 4 hours of inactivity, which reduces power consumption from 82 watts to 54 watts. However, I've also been reading that each spinup event is roughly equivalent to 10 hours of activity on the disk. The electricity gains are negligible (about $22 a year if all disks were always spun down), so I guess I'm asking: is it harder on the disks to leave them spinning 24x7, or to have them spin up at least once a day for at least 4 hours (I usually turn on music/video when I get home from work)? I use the server as a torrent box so it is definitely on 24x7, but the torrents are served from the system drive, so the array isn't touched unless I'm hitting the music/video shares. Should I set the spindown time to something much longer, like 24 hours? What I'd like is to maximize disk lifetime. I've got a massive Chieftec case with 3 92mm side fans pointed directly at the drives, so the temps are pretty good no matter how hard they're working. Thoughts?
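For what it's worth, those numbers check out on the back of an envelope, assuming roughly $0.09/kWh (my assumption; the post doesn't give a rate) and taking the 10-hours-per-spinup wear figure at face value:

```shell
# electricity: savings from spinning down, 82W -> 54W
watts_saved=$((82 - 54))                       # 28 W
kwh_per_year=$((watts_saved * 8760 / 1000))    # ~245 kWh/year
dollars_per_year=$((kwh_per_year * 9 / 100))   # ~$22 at $0.09/kWh

# wear: one spinup/day plus 4h of use vs. spinning 24x7,
# counting each spinup as ~10 hours of equivalent activity
spun_down_hours=$((365 * (10 + 4)))            # 5110 hour-equivalents
always_on_hours=8760                           # hours in a year
echo "$kwh_per_year kWh, \$$dollars_per_year/yr saved"
echo "daily spinup: $spun_down_hours vs always-on: $always_on_hours"
```

By that rough metric, one spinup per day plus a few hours of use still comes out well under 24x7 spinning, so daily spindown shouldn't be the worse option for drive life.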
|
# ¿ Sep 6, 2009 08:23 |
|
moep posted:+1

Yeah, I used those first, but wanted the new version. I found this great link that documented the process; it was also for 0.8.2, but I cribbed some of it for 0.8.5, especially the use of gmake: http://www.jcmartin.org/compiling-rtorrent-on-opensolaris/

I also found this bug ticket filed for libtorrent: http://libtorrent.rakshasa.no/ticket/1003 It had a link to two patches, one for libtorrent: http://xnet.fi/software/memory_chunk.patch ...and the other for rtorrent: http://xnet.fi/software/scgi.patch

The patch made it so that I could compile the newest version of libtorrent, but rtorrent still wouldn't compile. I think I may have screwed up some system setting, not really sure, but at this point I called one of my friends who has a lot more experience, and he was finally able to get it to compile. I'm not sure what exactly he did, but I'll see if he still remembers. There's also some discussion of the problem on the Solaris mailing list here: http://mail-index4.netbsd.org/pkgsrc-bugs/2009/05/02/msg031938.html ...it looks like they fixed it for 0.8.2, so maybe once the update makes it into release rtorrent will compile normally.

movax posted:How do you have your drives set to spin down? I'm running OpenSolaris Nevada (the new/beta branch thing), and I haven't figured that out yet...

I added the following lines to /etc/power.conf and then ran /usr/sbin/pmconfig to reload the new config. I'm sure it's obvious - set the four drives to 120 minutes and the system to always stay running. code:
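The listing itself was lost from the archive; based on the description - four drives at 120 minutes, system always on - a sketch of what those /etc/power.conf lines would have looked like, with hypothetical Solaris device paths:

```
device-thresholds    /dev/dsk/c8t0d0    120m
device-thresholds    /dev/dsk/c8t1d0    120m
device-thresholds    /dev/dsk/c8t2d0    120m
device-thresholds    /dev/dsk/c8t3d0    120m
system-threshold     always-on
```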
|
# ¿ Sep 7, 2009 00:57 |
|
moep posted:Thanks for that. I managed to get libtorrent-0.12.5 compiled but like you I’m stuck on rtorrent-0.8.5. I’ll just wait for the packages to get updated someday, until then I’ll have to settle with 0.8.2.

I haven't been able to get ahold of my friend, but I should have been clearer: I didn't install the patches from the guide, although I did use the newest versions of the packages.
|
# ¿ Sep 9, 2009 04:51 |
|
adorai posted:Nope, ZFS is not GPL and will never be integrated into the kernel. It can run in userland, but will not perform well. Not necessarily true: http://www.sun.com/emrkt/campaign_docs/expertexchange/knowledge/solaris_zfs_gen.html#10 Wouldn't expect it soon though. On another note - I'm curious if anyone has experience running MogileFS? http://www.danga.com/mogilefs/
|
# ¿ Sep 26, 2009 04:52 |
|
edit: I'll respond tomorrow with more details
|
# ¿ Oct 26, 2009 07:59 |
|
Debrain posted:Anyone have any experience updating the firmware on the Seagate Barracuda 7200.11 drives to SD1A and running them in RAID? I have one of the 500GB hard drives laying around and figured I could learn how to set up a RAID. Seems like a lot of people are complaining about their hard drives not showing up in the BIOS.

I have four of the 1TB drives that had the firmware issue. Never had any problems with them, but I decided not to risk it and upgraded all the drives. Everything went smoothly; just make sure you read up on it and get the right version of the firmware for your drive. It's a pretty simple process in the end.
|
# ¿ Feb 4, 2010 08:13 |
|
How cool does it keep the drives? And is the fan pretty loud?
|
# ¿ Feb 28, 2010 04:07 |
|
I've got that case (it's roughly the size of an endtable) and with 90 degree cables the door fits just fine.
|
# ¿ Apr 14, 2010 01:51 |
|
soj89 posted:Cost is a big issue. This one little sentence will completely define the end result. Literally, the scale could be from $800 spent stuffing a whitebox with some TB drives to $25k for a lower-end enterprise solution. What is your budget?
|
# ¿ Jun 1, 2010 02:05 |
|
I'm also shopping for an OpenSolaris replacement (Nexenta might be it), but fyi: http://github.com/behlendorf/zfs/wiki
|
# ¿ Sep 12, 2010 23:36 |
|
FISHMANPET posted:I wish my OpenSolaris server had dual NICs so I could run my VM off of one and the actual OS on another. I've had terrible luck with both Xen and VirtualBox and my NICs where after a week or two of uptime the connection will stop for a few seconds up to a few minutes. Makes streaming things and working on the machine pretty unbearable. Why not try ESXi?
|
# ¿ Oct 23, 2010 04:48 |
|
Thermopyle posted:I'm thinking about moving on from WHS.

You should be fine with almost all your assumptions, except potentially the actual RAIDing of your drives. Be aware that unlike WHS, which distributes data across any combination of drive sizes, mdadm will take the smallest drive in the array as the size to use for each of the devices the array is built from. This means if you have one 500GB drive and 15 2TB drives, you'll waste 1.5TB on all 15 of them. The way to use all your disk space is to create separate arrays for each drive size, but in order to support at least RAID 5 on all your data you'll need at least 3 drives of each size. Assuming this isn't too much of a burden, you can proceed with the rest of your plan, and if you use LVM you'll be able to combine all your mdadm arrays into a single big pool.

Your computer should be plenty powerful enough to handle this and will probably get fairly close to saturating your gig network during reads, especially with that many spindles. Ubuntu is a fine choice for an OS. For reference, one of my servers is running Ubuntu Server edition with a 4-disk mdadm array of 7200rpm 1TB Seagates, and I get about 80-85MB/s transfer, with maybe a 20% CPU hit on the Core 2 Duo 2.16GHz (I think that's the CPU, if I remember right). Once you set it up and configure Samba or whatever you're going to use to access the data, you can pretty much ignore it and it will just work. Make sure to set up mdadm notifications though; if you lose a disk you'll want an email or something right away so you can replace it.
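Setting up those notifications is only a couple of lines on Ubuntu; a sketch (the mail address is obviously a placeholder, and local mail delivery has to be working):

```shell
# tell mdadm where to send failure alerts; Ubuntu's init script
# runs the monitor daemon from this config
echo "MAILADDR you@example.com" >> /etc/mdadm/mdadm.conf
/etc/init.d/mdadm restart

# optional: send a test alert for each array to confirm mail works
mdadm --monitor --scan --oneshot --test
```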
|
# ¿ Nov 7, 2010 07:15 |
|
Saukkis posted:Another way to accomplish the same is to split all the drives into suitably sized partitions, create arrays from the partitions and then combine them with LVM. I'm using an extreme version of this scenario with all my drives split into 10+ partitions. I did it for flexibility when changing and adding drives, before RAID expansion was a practical option in Linux.

True, but I didn't recommend that because you need to be very careful about how you choose RAID levels for the partition arrays and which partitions go into the same array; otherwise a single drive failing could wipe out an entire array. As a simple example, assume two 500GB drives and one 1TB drive. Partition the 1TB in half and create a RAID5 array across the four partitions. Unfortunately, if that 1TB drive goes down, it effectively kills two devices in the array and renders it useless. I'd be curious to see what your partition/array map looks like - it must have taken a while to set up properly if you have over ten partitions on some disks?
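The safe version of that partitions-then-arrays-then-LVM layering - never two partitions from the same physical disk in one array - looks roughly like this (hypothetical devices, three disks and two partition "rows" for brevity):

```shell
# one array per row of partitions; each row touches each disk once,
# so losing a disk costs each array exactly one member
mdadm --create /dev/md0 --level=5 --raid-devices=3 \
    /dev/sdb1 /dev/sdc1 /dev/sdd1
mdadm --create /dev/md1 --level=5 --raid-devices=3 \
    /dev/sdb2 /dev/sdc2 /dev/sdd2

# glue the arrays together into one big LVM pool
pvcreate /dev/md0 /dev/md1
vgcreate storage /dev/md0 /dev/md1
lvcreate -l 100%FREE -n data storage
mkfs.ext3 /dev/storage/data
```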
|
# ¿ Nov 7, 2010 20:18 |
|
Thermopyle posted:Thanks. This is helpful. I've got enough 2TB and 1.5TB drives, but I'm going to have a problem with only having 2x 1TB, 2x 750GB, 1x 500GB, and 1x 400GB.

In that case, you actually do have enough drives to safely do what Saukkis suggested. For example, if you didn't mind losing 150GB, you could create 250GB partitions on each of the drives and build arrays from those partitions (the last row of partitions only lands on the two 1TB drives, so that one would be a RAID1 rather than RAID5). Each array would have only one partition per drive, so you could lose an entire disk without losing any data. A little more complex to set up, but it would work.
|
# ¿ Nov 8, 2010 03:22 |
|
Thermopyle posted:Are the arrays I create on one machine easily transferable to another machine with different hardware?

Sure are. Literally: unplug from one machine, plug into the new one, and run one mdadm --assemble command per array. As long as the computer can see the same physical drives/partitions, it doesn't matter what hardware it's running. That's one of the main reasons I like ZFS/mdadm at home - no need to buy pricey hardware controllers, but you get most of the same benefits.
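On the new machine you don't even need to remember which drive was which; a sketch of the scan-based route, plus an optional step that makes the arrays come up on their own at boot:

```shell
# read the md superblocks on all attached drives and report
# which arrays they belong to
mdadm --examine --scan

# assemble everything that was found, then record the result
# so the arrays assemble automatically on future boots
mdadm --assemble --scan
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```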
|
# ¿ Nov 8, 2010 04:35 |
|
For anyone looking at or owning any of the 4KB-sector drives, here's a pretty good article on how to compensate for potential performance issues, as well as some discussion about what to expect in the future from drive manufacturers (i.e. more of the same): http://www.ibm.com/developerworks/linux/library/l-4kb-sector-disks/index.html?ca=dgr-lnxw074KB-Disksdth-LX
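A quick way to check whether an existing partition is aligned for those 4KB sectors, sketched with a hypothetical device and a reasonably recent parted:

```shell
# show the drive's logical/physical sector sizes, then check
# partition 1 against the drive's optimal alignment
parted /dev/sdb print
parted /dev/sdb align-check opt 1
```

If the partition starts on a 4KB boundary the align-check reports it as aligned; misaligned partitions are where the read-modify-write penalty the article describes comes from.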
|
# ¿ Nov 8, 2010 23:09 |
|
Factory Factory posted:Regarding VMware vSphere (ESXi), Christ on a cracker; for a piece of software I understand conceptually and have used before, it looks completely impenetrable as far as setting it up goes. Am I right that if I want RAID on a VMware box, it has to be hardware RAID or nothing? It won't put something ZFS-like together and I couldn't do that on a guest unless I did direct access to drives that weren't already storing the hypervisor itself?

VirtualBox on Ubuntu Server will work just fine, although you won't get a nice UI to manage your VMs (at least, I don't know of a remote management UI for headless VBox). Depending on your hardware, ESXi might actually be pretty easy to set up and use. You can provide guests direct access to physical drives, which is what you'd want if you were running OpenIndiana or something else with ZFS support. Here are some links to help you do so: http://www.vm-help.com/forum/viewtopic.php?f=14&t=1025 http://www.vm-help.com/esx40i/SATA_RDMs.php

Before attempting ESXi you should investigate your hardware to ensure it is compatible. Besides the mostly enterprise-focused officially supported list, this is a great resource for home builders. Be sure to read the details - for example, my motherboard is supported but the on-board NIC is not, so I get crazy errors when trying to install ESXi unless I also drop a supported NIC into an empty slot: http://www.vm-help.com/esx40i/esx40_whitebox_HCL.php

For what it's worth, I'm currently running Ubuntu Server with VirtualBox and it's just fine. I really like VMware and I'm used to it from work, and although comparisons are difficult across different hardware, ESXi seems slightly faster - but then again people report VBox faster on some OSes, so who knows. Is your webserver Windows or Linux? I'd rather run Windows in VMware (for no good reason besides the fact that I know it works well already).
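The lack of a management UI for headless VirtualBox is less painful than it sounds, since everything is scriptable from the CLI; a sketch with a hypothetical VM name:

```shell
# start a VM with no console window attached, check what's
# running, then ask the guest to shut down cleanly
VBoxManage startvm "webserver" --type headless
VBoxManage list runningvms
VBoxManage controlvm "webserver" acpipowerbutton
```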
|
# ¿ Nov 13, 2010 03:24 |
|
adorai posted:If you are going to run linux, you may as well use KVM. If you are going to run virtualbox, why wouldn't you run openindiana?

I don't quite understand your post. KVM vs. ESXi is a reasonable comparison, but not KVM + Linux vs. ??? Why is running VirtualBox a given for OpenIndiana? There are lots of reasons NOT to run OI; for example, it is built on the remnants of OpenSolaris but doesn't have the same major organizational support. I ran OpenSolaris for a year and a half, and I love ZFS, but I'm cautious of OI (not the only article along those lines).

Also, why would you buy a RAID card to run two drives in RAID 1 and have the rest passed through? If his motherboard supports it, why not provide direct access for the drives he wants in the guest OS? Running the host in RAID 1 isn't so critical; you should always be able to rebuild elsewhere. A major reason good software RAID is so great is that it's not tied to any particular hardware. Are you advocating a cheap LSI Logic card for expandability, which might not matter at all if his motherboard has everything he needs right now? Or are you thinking the host should always be RAID 1'ed?
|
# ¿ Nov 13, 2010 04:32 |
|
Methylethylaldehyde posted:OI is basically the last publicly available Sun/Oracle ON build, plus some bugfixes to that code. It's also binary compatible with all the regular solaris crap.

Yeah, I realize that. I'm still running an instance of OI for all the data that was on ZFS when I was running OpenSolaris. My point is that I wouldn't necessarily recommend OI to someone just starting a server when its future is anything but concrete. I know some people will disagree with my approach, but I'm kinda just treading water in mdadm until btrfs is ready for prime time or a really viable ZFS alternative shows up. I expect to deprecate my OI install for ZFS-FUSE at some point, although maybe not for my regular datastore. Meanwhile, Factory has a bunch of options on what to run, but my recommendation is still to strongly research OI and what he needs from it first...which virtualization base to run is almost a separate question.
|
# ¿ Nov 13, 2010 05:55 |
|
|
Saukkis posted:It really doesn't require much cleverness, simply remember to build the arrays from separate sdX devices.

Cool, makes sense.

Saukkis posted:I think if you use partitions set to the RAID autodetect type you don't even need the assemble command. During boot up the Linux kernel will see a bunch of partitions that seem to belong to an array and then figure out which of them go together.

Didn't work for me when my server failed and I had to move the drives to a new machine, but either way, it's easy to move drives.
|
# ¿ Nov 13, 2010 22:31 |
|
Factory Factory posted:After using the suggestions ITT to look around more, I've decided to roll my own NAS but not reuse hardware (just because it's such power-hungry stuff). Instead I'm building a Mini-ITX server based off a Chenbro server case with hot-swap bays. Of course, the only Mini-ITX board I could find that had both USB 3.0 and at least 4 SATA ports also takes a nice big 73W TDP Core i3 (I will likely underclock it a bit), but hey, could be worse. I can run a GUI so I can click things and not care about using precious CPU cycles. Chenbro also does a version of the case with a 120W power supply if you want to do an Atom build.

I'm not tracking on your proposed installation plan. You are installing the OS onto an mdadm array? What is going to run mdadm?

FWIW, I've got that exact same case for my backup server, and while I went with the Intel DG45FC board (no USB 3.0), I tried a couple different things to get a 5th drive into the case, which also has a dedicated spot for a 2.5" drive. A tiny single-drive SATA controller on a right-angle PCIe x1 riser worked for a while, but was WAY too tight in the case - there's almost no room and it got pretty warm. I thought about wrapping an eSATA cable back into the case from the external port on the motherboard, but that seemed sloppy. I ended up with a USB CF card reader plugged into one of the internal USB ports. Works great, and 8GB is more than enough for Ubuntu Server. If your motherboard supports booting from USB, I'd recommend it.
|
# ¿ Nov 15, 2010 09:39 |
|
Thermopyle posted:As mentioned earlier I'm experimenting with different configurations of mdadm RAID5 arrays and LVM. Honestly, you probably won't notice the difference between the two with normal usage. The wiki page gives a pretty clear comparison though. Personally, I use ext4, but since I didn't want to revert to ZFS at this time I'm really just holding out for BTRFS to go stable.
|
# ¿ Nov 24, 2010 09:07 |
|
|
Thermopyle posted:Thanks to advice given earlier in the thread by DLCInferno and others, I've now moved all my data from WHS to an Ubuntu machine with mdadm+LVM. Really good to hear it was successful. Cheers.
|
# ¿ Dec 31, 2010 06:00 |