search for: 1.2tb

Displaying 20 results from an estimated 29 matches for "1.2tb".

2011 May 18
1
[Qemu-devel] Qcow2
On Wed, May 18, 2011 at 5:18 PM, <arun.sasi1 at wipro.com> wrote: > Hello Stefan, > Thank you very much for considering my issue... > Here is my problem... > 1) I have 4 VMs running on the base server. > 2) The base server has 15GB of RAM. > 3) I can start all VMs apart from my file server. > 4) The file server is
2016 Jun 22
3
Mailboxes on NFS or iSCSI
Hello, we are running Dovecot (2.2.13-12~deb8u1) on Debian stable, configured with Maildir++, IMAP, POP3, LMTP, ManageSieve, and ACL. Mailboxes are on a local 1.2TB RAID, with about 5310 accounts. We are slowly running out of space and are considering moving the mailboxes onto a NetApp disk array with two independent network connections. Are there any pitfalls? Not sure whether we should use NFS or
2008 Jun 03
2
FW: Partitioning help
Hi, I have a CentOS 4.5 server with a 3.3TB RAID disk on a 3ware controller. The problem is that I cannot see the full partition: it shows only 1.2TB. I created the partition with Parted (GNU Parted), and there it shows 3.3TB. However, I have to format this partition as ext3 with mke2fs -j /dev/sda1, and after formatting it went down to 1.2TB. Could someone
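A plausible cause (an assumption on my part; the thread itself does not confirm it) is 32-bit sector addressing: an msdos partition table, and tools of that era, can address at most 2^32 512-byte sectors, i.e. 2TiB, and sizes beyond that can silently wrap around. A quick sketch of the arithmetic:

```python
# Hypothetical illustration: 32-bit LBA wrap-around on an msdos-labelled disk.
SECTOR = 512
LIMIT = (2 ** 32) * SECTOR        # 2 TiB, the largest size a 32-bit sector count can express

disk_bytes = 3_300_000_000_000    # the ~3.3TB the controller exposes
wrapped = disk_bytes % LIMIT      # what a 32-bit-limited tool would see instead
print(wrapped / 1e12)             # roughly 1.1 TB -- in the ballpark of the reported 1.2TB
```

The usual fix in that situation is a GPT label rather than msdos, but again, the snippet is too short to be sure this is the poster's actual problem.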
2006 Dec 10
1
Help with Samba+JFS
I have a network server running FC5, with a hardware RAID-3 card using 5 drives as one large (1.2TB) JFS partition. I chose JFS because a MythTV tutorial recommended it for performance, but I don't really know much about file systems and suspect JFS is causing my problems. I run Samba, Apache and MythTV on this machine, and there is essentially only one problem as far
2005 Nov 06
4
Size of /var/log/lastlog
Hi, Can any of you explain this weirdness: [root at machine log]# cd /var/log/ [root at machine log]# ls -la|grep last -r-------- 1 root root 1254130450140 Nov 6 21:44 lastlog [root at machine log]# du -hs lastlog 52K lastlog What's up with the output of ls? This is x86_64. Thanks, Morten
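The huge size from ls next to a tiny du is the signature of a sparse file: lastlog holds one record per uid, so a very large uid forces a huge apparent size while almost no disk blocks are actually allocated. The effect is easy to reproduce (illustrative only, paths are arbitrary):

```shell
# Create a sparse file: seek ~1 GB into the file and write a single byte.
dd if=/dev/zero of=/tmp/sparse.img bs=1 count=1 seek=1073741823 2>/dev/null
ls -l /tmp/sparse.img    # apparent size: 1073741824 bytes
du -h /tmp/sparse.img    # allocated blocks: a few KB at most
```

Tools that read the file linearly (grep, naive backups) see the full apparent size, which is why such files cause trouble downstream.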
2004 Sep 22
2
Filename problem (filenames containing slashes aka.\and /)
We _cannot_ go about renaming files, for a number of reasons, the primary being that some of the files are technically not ours to alter without a client's permission (names and all), and you don't understand just how many files there are. There are quite probably thousands of files spread across _hundreds_ of CDs and DVDs, which comes to a total of roughly 400GB. As to a Mac server, I'm not
2008 Aug 27
1
Finding which GEOM provider is generating errors in a graid3
I have a FreeBSD 6.2-based server running a 1.2TB graid3 volume, which consists of 5x 320GB SATA hard drives. I've been getting errors in /var/log/messages from the graid3 volume, which I suspect means an underlying fault with one of the disks, but is there any way to decipher which of these drives is throwing the errors? I've checked smartctl -a /dev/adXX but nothing shows up there..
2013 Nov 21
3
Sync data
Hi guys! I have 2 servers in replicate mode; node 1 has all the data and node 2 is empty. I created a volume (gv0) and started it. Now, how can I synchronize all files from node 1 to node 2? Steps that I followed: gluster peer probe node1 gluster volume create gv0 replica 2 node1:/data node2:/data gluster volume start gv0 thanks!
2019 Apr 04
2
[RFC] NEC SX-Aurora VE backend
Hello, we'd like to propose the integration of a new backend into LLVM: the NEC SX-Aurora TSUBASA Vector Engine (VE). We hope to get some feedback here and at EuroLLVM about the path and proper procedure for merging. The SX-Aurora VE is a vector CPU on a PCIe accelerator card. It has 48GB of memory from six HBM2 stacks, accessible with 1.2TB/s of bandwidth, and 8 cores, each with vector and scalar units.
2005 Oct 21
5
Migration to Samba using external LDAP server
Hello, we are in the process of implementing a Samba server running 3.0.14 with an external LDAP server running Microsoft ADAM. We also have it running with OpenLDAP for UNIX under Red Hat. It works fine for every user account that accesses the Samba instance; the user mapping is done and all works fine. Now we have the major problem of the migration, and I would need some guidance here
2023 Nov 06
1
Verify limit-objects from clients in Gluster9 ?
Hello all. Is there a way to check the inode limit from clients? df -i /path/to/dir seems to report values for the whole volume, not just the dir. For space it works as expected: # gluster v quota cluster_data list Path Hard-limit Soft-limit Used Available Soft-limit exceeded? Hard-limit exceeded?
2009 Oct 08
0
zfs send/receive performance concern
I am running zfs send/receive on a ~1.2TB zfs pool spread across 10x 200GB LUNs. It has copied only 650GB in ~42 hours. The source pool and destination pool are from the same storage subsystem. Last time it ran, it took ~20 hours. Something is terribly wrong here. What do I need to look at to figure out the reason? I ran zpool iostat and iostat on the given pool for some clue, but am still in a state of confusion.
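For perspective, the figures quoted in the post work out to a very low effective transfer rate; a quick back-of-the-envelope check (using only the numbers given above):

```python
# Rough throughput from the figures in the post: 650GB in ~42 hours.
copied_bytes = 650e9
elapsed_s = 42 * 3600
rate = copied_bytes / elapsed_s
print(round(rate / 1e6, 1))     # ~4.3 MB/s -- far below what same-array storage should sustain
```

A rate that low usually points at something other than raw disk speed (a failing LUN, tiny record sizes, or contention), which is consistent with the poster's suspicion that something is wrong.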
2005 Nov 07
0
vsftpd resource hogging on 4.2
I've just noticed an odd bit of behaviour with 4.2. I'm using FTP to send a number of large binary files from a Windows 2000 machine to a CentOS 4.2 system. Both systems are similarly configured (Athlon64 3200+, 1GB RAM, 80GB system disk, 1.2TB scratch RAID-0 array, gigabit Ethernet to a dumb switch on which they are the only hosts). On the very first file, about 17.5GB, I got
2002 Jan 03
2
Addendum to previous email re: "Wasted Space"
Also, it's important to note that 'du -h' reports the appropriate amount of space used and that the drive appears, in all other regards, to be properly using the space. Is this perhaps a bug with how Windows reads the available space left on SMB shares? (Likely a Windows problem) -Tal
2008 May 12
2
broken GFS
This is the 2nd time this has happened to me. There was a kernel release over the weekend to .67.0.15, yet, they did not release the updated GFS to go along with it, so when the machine rebooted, there was no gfs file system in the new running kernel which in turn wreaked havoc on my cluster. I truly wish they would not do that :). I guess I shall have to not allow automatic yum updates from
2008 Mar 27
1
overflow: linkname_len
Good morning all, I'm using rsync v3.0.0 on both ends. The source is Fedora Core 4 and the destination is Mac OS X 10.5.2. I'm trying to sync a file system with roughly 5,000,000 files checking in at about 1.2TB. I use the command rsync -avP -e ssh root@mustang:/filevault . from my Mac OS X machine, and I get the following after about 400GB has been copied:
2007 May 02
0
Can't delete files via FTP
I am getting an error when trying to delete files via FTP. I can upload, download, and rename files, and even delete empty folders. But when I try to delete a file, I get the error below, and the filename changes to '.pureftpd-rename.<alphanumeric string>'. Command: DELE /path/to/file Response: 550 Could not delete /path/to/file: Invalid argument A little background... I have a small
2010 Jan 13
1
Problems with rsync between NAS mounted filesystems
Hi All, Here is my setup. I have two NAS filesystems mounted on a SunFire V490, at /rsync/ieeprodhome/ECF and /rsync/ieeprodhome2/ECF. The ieeprodhome filesystem is 1.008TB with 812GB used, and the ieeprodhome2 filesystem is 1.2TB with 813GB used. One other thing to add that might be of importance is that the source filesystem has over 100k files in it.
2011 Apr 09
16
wrong values in "df" and "btrfs filesystem df"
Hello, linux-btrfs, First I create an array of 2 disks with mkfs.btrfs -d raid0 -m raid1 /dev/sdb1 /dev/sdd1 and mount it at /srv/MM. Then I fill it with about 1.6 TB. Then I add /dev/sde1 via btrfs device add /dev/sde1 /srv/MM btrfs filesystem balance /srv/MM (it ran for about 20 hours). Then I work on it, copy some new files, delete some old files, and all works well. Only df
2005 Sep 06
1
/var/log/lastlog on x86_64
Hi list, this problem is already known, and I'm sorry to bother you if an acceptable workaround was already debated on the list. I was having trouble with a 'grep something /var/log*', which caused a "Memory exhausted" message. Digging a little deeper, I found the lastlog file in /var/log/ to be 1.2TB in size. This seems to come from nfsnobody's uid being 4294967294 on
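The arithmetic checks out: lastlog stores one fixed-size record per uid (292 bytes on x86_64, assuming the usual struct lastlog layout of a 32-bit time field plus 32-byte line and 256-byte host buffers), so a uid of 4294967294 forces an apparent size that matches the 1254130450140 bytes shown by ls in the earlier /var/log/lastlog thread:

```python
# One struct lastlog per uid: ll_time (4) + ll_line[32] + ll_host[256] = 292 bytes on x86_64.
RECORD = 4 + 32 + 256
uid = 4294967294                # nfsnobody on the poster's system
size = (uid + 1) * RECORD       # the file must extend through this uid's record
print(size)                     # 1254130450140 -- matches the ls -la figure above
```

Since the file is sparse, du still reports only a few KB; the problem is tools like grep that read it linearly.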