Displaying 20 results from an estimated 176 matches for "200gb".
2005 Nov 13
3
Adding Nested Partitions To A Mount Point
...e point and mount an additional drive to
that. I would like to be able to have this second drive treated as an
extension of the original disk, under Samba. This would mean that the
capacity of the second drive would be pooled with the first drive, on
which it is mounted.
For example:
I have a 200GB hard drive mounted to the directory "/pub". I would like
to add an additional 60GB drive to this. Can I create a directory on
the 200GB drive called "/pub/temp" and mount the 60GB drive to that
mount point? And then have Samba consider the whole mess as a
single 26...
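A minimal sketch of the setup being asked about, with a made-up device name; note that Samba serves whatever is visible under the share path, but the free-space figure it reports normally comes from the single filesystem backing that path, so the two drives' capacities are not truly pooled without something like LVM or a union filesystem:

# mount the 60GB drive inside the 200GB filesystem (device name is an example)
mount /dev/sdb1 /pub/temp

# smb.conf share exporting the combined tree
[pub]
    path = /pub
    read only = no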
2006 Mar 10
3
pool space reservation
What is a use case of setting a reservation on the base pool object?
Say I have a pool of 3 100GB drives dynamically striped (pool size of 300GB), and I set the reservation to 200GB. I don't see any commands that let me ever reduce a pool's size, so how is the 200GB reservation used?
Related question: is there a plan in the future to allow me to replace the 3 100GB drives with 2 200GB drives?
--Naveen
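For reference, a rough sketch of how such a reservation is set, assuming a striped pool named tank built from three example devices:

# dynamically striped pool of three 100GB devices (device names are examples)
zpool create tank c1t0d0 c1t1d0 c1t2d0

# guarantee 200GB of the pool's space to the top-level dataset and its descendants
zfs set reservation=200G tank
zfs get reservation tank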
2017 Nov 09
2
GlusterFS healing questions
Hi,
We ran a test on GlusterFS 3.12.1 with erasure-coded volumes (8+2) with 10
bricks (default config, tested with 100GB, 200GB, 400GB brick sizes, 10Gbit
NICs)
1.
Tests show that healing takes about double the time for 200GB vs
100GB brick sizes, and a bit under double for 400GB vs 200GB. Is this
expected behaviour? In light of this, 6.4TB brick sizes would take ~377
hours to heal.
100GB brick heal: 18 hours (8+2...
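For context, a rough sketch of how a comparable 8+2 dispersed volume is created and how heal progress is typically checked (hostnames and brick paths are made up):

# 8+2 erasure-coded (dispersed) volume across 10 bricks on 10 servers
gluster volume create testvol disperse 10 redundancy 2 \
    server{1..10}:/bricks/brick1/testvol

# after a brick is replaced or wiped, trigger and watch the heal
gluster volume heal testvol full
gluster volume heal testvol info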
2017 Nov 09
0
GlusterFS healing questions
Hi Rolf,
answers follow inline...
On Thu, Nov 9, 2017 at 3:20 PM, Rolf Larsen <rolf at jotta.no> wrote:
> Hi,
>
> We ran a test on GlusterFS 3.12.1 with erasure-coded volumes (8+2) with 10
> bricks (default config, tested with 100GB, 200GB, 400GB brick sizes, 10Gbit
> NICs)
>
> 1.
> Tests show that healing takes about double the time for 200GB vs
> 100GB brick sizes, and a bit under double for 400GB vs 200GB. Is this
> expected behaviour? In light of this, 6.4TB brick sizes would take ~377
> hours to heal...
2006 Dec 12
1
ZFS Storage Pool advice
...Now how would ZFS be used for the best performance?
What I'm trying to ask is: if you have 3 LUNs and you want to create a ZFS storage pool, would it be better to have a storage pool per LUN, or to combine the 3 LUNs as one big disk under ZFS and create one huge ZFS storage pool?
Example:
LUN1 200GB ZFS Storage Pool "pooldata1"
LUN2 200GB ZFS Storage Pool "pooldata2"
LUN3 200GB ZFS Storage Pool "pooldata3"
or
LUN 600GB ZFS Storage Pool "alldata"
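Expressed as commands (the LUN device names below are placeholders), the two layouts being compared would look roughly like:

# one pool per LUN
zpool create pooldata1 c2t0d0
zpool create pooldata2 c2t1d0
zpool create pooldata3 c2t2d0

# or a single pool dynamically striped across all three LUNs
zpool create alldata c2t0d0 c2t1d0 c2t2d0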
2017 Nov 09
2
GlusterFS healing questions
...:
> Hi Rolf,
>
> answers follow inline...
>
> On Thu, Nov 9, 2017 at 3:20 PM, Rolf Larsen <rolf at jotta.no> wrote:
>>
>> Hi,
>>
>> We ran a test on GlusterFS 3.12.1 with erasure-coded volumes (8+2) with 10
>> bricks (default config, tested with 100GB, 200GB, 400GB brick sizes, 10Gbit
>> NICs)
>>
>> 1.
>> Tests show that healing takes about double the time for 200GB vs
>> 100GB brick sizes, and a bit under double for 400GB vs 200GB. Is this
>> expected behaviour? In light of this, 6.4TB brick sizes would use...
2009 Mar 27
18
Growing a zpool mirror breaks on Adaptec 1205sa PCI
Setup: Osol.11 build 109
Athlon64 3400+ Aopen AK-86L mobo
Adaptec 1205sa SATA PCI controller card
[re-posted from an accidental post to the osol 'general' group]
I'm having trouble with an Adaptec 1205sa (non-RAID) SATA PCI card.
It was all working fine when I plugged 2 used SATA 200GB disks from a
Windows XP machine into it. I booted my osol server and added a zpool
mirror using those 2 drives and the Adaptec 1205sa SATA PCI card.
So I know it works.
On every boot I see a message like this
Press F3 to enter configuration utility
Primary channel: WDC WD200-blah-blah...
2017 Nov 09
0
GlusterFS healing questions
...ollow inline...
>>
>>> On Thu, Nov 9, 2017 at 3:20 PM, Rolf Larsen <rolf at jotta.no> wrote:
>>>
>>> Hi,
>>>
>>> We ran a test on GlusterFS 3.12.1 with erasure-coded volumes (8+2) with 10
>>> bricks (default config, tested with 100GB, 200GB, 400GB brick sizes, 10Gbit
>>> NICs)
>>>
>>> 1.
>>> Tests show that healing takes about double the time for 200GB vs
>>> 100GB brick sizes, and a bit under double for 400GB vs 200GB. Is this
>>> expected behaviour? In light of this would ma...
2016 Jun 04
4
remote backup
Hi list,
I need to back up a partition of ~200GB, and the local server's connection
is 8/2 Mbps.
Tools like Bacula and Amanda can't help me due to the low bandwidth at the
local server.
I'm thinking rsync would be a good choice.
What do you think?
Thanks in advance.
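For what it's worth, a sketch of the kind of rsync invocation that tends to work over a thin link, with made-up paths and hostname; --partial keeps interrupted transfers resumable and --bwlimit (in KB/s) stops the 2 Mbps uplink from being saturated:

rsync -aHvz --partial --delete --bwlimit=200 \
    /data/ backupuser@backuphost:/backups/data/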
2007 Apr 14
4
Wiping USB drives
Hi,
I have a dozen drives, ranging from 10GB to 200GB. I want to
wipe them clean before donating them. I have an IDE/SATA to USB
converter that works, and I can see the drives properly.
DBAN does not currently support external USB drives. Any other
alternatives?
--
Thanks
http://www.911networks.com
When the network has to work
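Since the drives appear as ordinary block devices through the USB bridge, plain userland tools can do the wipe; a rough sketch, where /dev/sdX stands in for each drive (double-check the device name first):

# one random pass followed by a final pass of zeros
shred -v -n 1 -z /dev/sdX

# or simply overwrite with zeros
dd if=/dev/zero of=/dev/sdX bs=1M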
2009 Mar 30
3
Data corruption during resilver operation
...oo many errors
This is the last thing, and apparently the result of a series of steps
I've taken to increase a zpool mirror's size.
There was quite a lot of huffing and puffing with the SATA controller
that holds this mirror, but the short version is:
zpool z3 was created as a mirror on 2 older 200GB SATA I disks, on an
Adaptec 1205sa PCI controller.
After deciding I wanted to increase the size of this pool, I detached 1
disk, then pulled it out. I replaced it with a newer, bigger SATA II
WD 750GB disk. When I attempted to start up and attach this disk, I
didn't get past the boot process,...
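For reference, the usual attach/detach sequence for growing a mirror looks roughly like this (the pool name follows the post, the device names are guesses); the pool only grows once both sides have been replaced with larger disks:

# drop one half of the mirror, swap in the bigger disk, re-attach it
zpool detach z3 c2d1
zpool attach z3 c2d0 c3d0    # c3d0 = the new, larger disk

# wait for the resilver to finish before touching the other half
zpool status z3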
2008 Dec 20
2
General question about ZFS and RAIDZ
Hello to the forum,
with my general question about ZFS and RAIDZ, I would like to know the following:
must all hard disks in the storage pool have the same capacity, or is it possible to use hard disks with different capacities?
Many thanks for the answers.
Best regards
JueDan
2008 Jan 15
4
Moving zfs to an iscsci equallogic LUN
We have a mirror set up in ZFS that's 73GB (two internal disks on a Sun Fire V440). We are going to attach this system to an EqualLogic box, and will attach an iSCSI LUN of about 200GB from the EqualLogic box to the V440. The EqualLogic box is configured as hardware RAID 50 (two hot spares for redundancy).
My question is: what's the best approach to moving the ZFS data off the ZFS mirror to this LUN, or to joining this LUN to ZFS but not having a ZFS mirror setup anymore, because of the disk waste with the m...
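One common approach (pool and device names below are placeholders) is to build a new pool on the iSCSI LUN and copy the data across with a snapshot, then retire the old mirror:

# new, unmirrored pool on the 200GB iSCSI LUN
zpool create lunpool c3t0d0

# snapshot the existing data and stream it over
zfs snapshot mirrorpool/data@migrate
zfs send mirrorpool/data@migrate | zfs recv lunpool/data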
2006 Feb 10
1
4.2 install w/250GB raid arrays won't boot
hi!
RAID 1 arrays: I already have 2 systems running this same RAID 1 config.
One system has two 120GB drives.
One system has one 120GB and one 200GB drive, but with matching RAID partitions.
This system that is giving me fits right now has one 250GB and one
200GB drive. I tried it with a new 250GB as the 2nd drive, but got the same
result: it will not boot.
In Disk Druid, when I am configuring the RAID arrays, I always create the
boot partitions on hda1 and hdc1 and make...
2015 Nov 18
5
Intel SSD
Hi
I have a Supermicro server whose motherboard has the C612 chipset and,
besides that, an integrated LSI 3108 RAID controller.
Two Intel SSD DC S3710 200GB drives.
OS: CentOS 7.1, up to date.
My problem is that the Intel SSD Data Center Tool (ISDCT) does not
recognize the SSD drives when they are connected to the standard SATA ports
on the motherboard, but through the LSI RAID controller it works.
Does somebody know what the problem could be? I talked to...
2005 Dec 13
2
Seagate NCQ + Sil3112 (sata_sil)
.... The problem occurs when I go to install CentOS: I insert the CD and
start the boot-up process, and when it gets to the section where it detects
the hard disks, the driver seems to crash; the message it produces complains
about nobody caring about an interrupt on IRQ 11. The hard disk is a Seagate
200GB 8MB SATA150 NCQ drive on a SiliconImage 3112 PCI adapter card. Does
anyone have any ideas?
Regards,
Peter
2008 Mar 31
3
Samba Restrictions
...server running on
AIX 5.3. Before we can think about implementing this, we need to know if Samba
has any limitations on the number of folders, files and shares. The current file
storage system is running on Windows 2003 Server and has somewhere in the
region of 51,000 folders and 450,000 files taking up 200GB; would Samba be
able to cope with this?
Your feedback would be appreciated.
Thanks
Tim
2018 Sep 20
2
4.8.5 + TimeMachine = Disk identity changed on every connect, cannot backup
Hi,
I configured Samba 4.8.5 on Debian (Buster) with vfs_fruit as a Time Machine destination, and while it detects it and does the initial backup to some extent (30GB out of 200GB),
Time Machine then fails with a message about the disk identity having changed.
The options are “don’t backup” and “backup anyway”. When using “backup anyway”, the backup creates a secondary sparse image and starts from scratch, and won’t even touch the existing sparse image.
My Mac is running in Vir...
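For comparison, a minimal smb.conf fragment for a fruit/Time Machine share on 4.8 looks roughly like this (share name and path are examples); whether it avoids the disk-identity message is a separate question:

[global]
    fruit:aapl = yes

[timemachine]
    path = /srv/timemachine
    read only = no
    vfs objects = catia fruit streams_xattr
    fruit:time machine = yes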
2012 Apr 23
5
'filesystem resize max' tries to use devid 1
Back story:
I started my pool with a 200GB partition at the end of my drive (sdc5),
until I was able to clear out the data at the beginning of my drive.
When I was ready, I ran `btrfs dev add /dev/sdc4 /` and then `btrfs dev
del /dev/sdc5 /`.
$ sudo btrfs fi resize max /
Resize '/' of 'max'
ERROR: unable t...
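As the subject suggests, resize defaults to devid 1 (the now-deleted sdc5 here), so the remaining device has to be named explicitly; a sketch, assuming the surviving partition ended up as devid 2 (check with filesystem show):

# find the devid of /dev/sdc4
sudo btrfs filesystem show /

# resize that specific device to its maximum size
sudo btrfs filesystem resize 2:max /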
2011 Dec 28
3
Btrfs: blocked for more than 120 seconds, made worse by 3.2 rc7
....
It always happens when I write many files with rsync over the network. When
I used 3.2-rc6 it happened randomly on both machines after 50-500GB of
writes. With rc7 it happens after far fewer writes, probably 10GB or so,
but only on machine 1 for the time being. Machine 2 has not crashed yet
after 200GB of writes, and I am still testing that.
Machine 1: btrfs on a 6TB sparse file, mounted as a loop device, on an XFS
filesystem that lies on a 10TB md RAID 5. Mount options:
compress=zlib,compress-force
Machine 2: btrfs over md RAID 5 (4x2TB) = 5.5TB filesystem. Mount options:
compress=zlib,compress-force
past...