Worked fine for me actually.
# md5sum lastlog
ab7557d582484a068c3478e342069326 lastlog
# rsync -avH lastlog /mnt/
sending incremental file list
lastlog
sent 364,001,522 bytes received 35 bytes 48,533,540.93 bytes/sec
total size is 363,912,592 speedup is 1.00
# cd /mnt
# md5sum lastlog
ab7557d582484a068c3478e342069326 lastlog
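If it helps to double-check, the pieces of a sharded file beyond the first block live under the hidden .shard directory at the root of each brick, so something like the following confirms the file was actually sharded (the brick path here is only an example, not from my setup):

# ls /data/brick1/.shard | head
# getfattr -d -m. -e hex /data/brick1/lastlog | grep shard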
-Krutika
On Wed, Sep 28, 2016 at 8:21 AM, Krutika Dhananjay <kdhananj at redhat.com> wrote:
> Hi,
>
> What version of gluster are you using?
> Also, could you share your volume configuration (`gluster volume info`)?
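> For reference, both can be gathered with the commands below (the volume name is a guess based on the /sharevol1 mount in your earlier mail; substitute your own):
>
> # gluster --version
> # gluster volume info sharevol1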
>
> -Krutika
>
> On Wed, Sep 28, 2016 at 6:58 AM, Ravishankar N <ravishankar at redhat.com> wrote:
>
>> On 09/28/2016 12:16 AM, ML Wong wrote:
>>
>> Hello Ravishankar,
>> Thanks for introducing the sharding feature to me.
>> It does seem to resolve the problem I was encountering earlier. But I
>> have one question: do we expect the checksum of the file to be different
>> if I copy it from directory A to a shard-enabled volume?
>>
>>
>> No, the checksums must match. Perhaps Krutika, who works on sharding
>> (CC'ed), can help you figure out why that isn't the case here.
>> -Ravi
>>
>>
>> [xxxxx at ip-172-31-1-72 ~]$ sudo sha1sum /var/tmp/oVirt-Live-4.0.4.iso
>> ea8472f6408163fa9a315d878c651a519fc3f438 /var/tmp/oVirt-Live-4.0.4.iso
>> [xxxxx at ip-172-31-1-72 ~]$ sudo rsync -avH /var/tmp/oVirt-Live-4.0.4.iso /mnt/
>> sending incremental file list
>> oVirt-Live-4.0.4.iso
>>
>> sent 1373802342 bytes received 31 bytes 30871963.44 bytes/sec
>> total size is 1373634560 speedup is 1.00
>> [xxxxx at ip-172-31-1-72 ~]$ sudo sha1sum /mnt/oVirt-Live-4.0.4.iso
>> 14e9064857b40face90c91750d79c4d8665b9cab /mnt/oVirt-Live-4.0.4.iso
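>> A couple of generic checks that may help narrow this down (same paths as above): confirm the two files are the same size, and rule out stale client-side caching before re-hashing.
>>
>> $ stat -c %s /var/tmp/oVirt-Live-4.0.4.iso /mnt/oVirt-Live-4.0.4.iso
>> $ sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'
>> $ sudo sha1sum /mnt/oVirt-Live-4.0.4.iso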
>>
>> On Mon, Sep 26, 2016 at 6:42 PM, Ravishankar N <ravishankar at redhat.com> wrote:
>>
>>> On 09/27/2016 05:15 AM, ML Wong wrote:
>>>
>>> Has anyone on the list tried copying a file which is bigger than
>>> the individual brick/replica size?
>>> Test scenario:
>>> Distributed-replicated volume, 2GB total size, 2x2 = 4 bricks, replica 2.
>>> Each replica set is 1GB.
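>>> (For reference, a layout like the one above would typically be created with something along these lines; the hostnames are placeholders:)
>>>
>>> # gluster volume create sharevol1 replica 2 \
>>>     host1:/data/brick1 host2:/data/brick1 \
>>>     host1:/data/brick2 host2:/data/brick2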
>>>
>>> When I try to copy a file to this volume, over both FUSE and NFS
>>> mounts, I get an I/O error.
>>> Filesystem Size Used Avail Use% Mounted on
>>> /dev/mapper/vg0-brick1 1017M 33M 985M 4% /data/brick1
>>> /dev/mapper/vg0-brick2 1017M 109M 909M 11% /data/brick2
>>> lbre-cloud-dev1:/sharevol1 2.0G 141M 1.9G 7% /sharevol1
>>>
>>> [xxxxxx at cloud-dev1 ~]$ du -sh /var/tmp/ovirt-live-el7-3.6.2.iso
>>> 1.3G /var/tmp/ovirt-live-el7-3.6.2.iso
>>>
>>> [melvinw at lbre-cloud-dev1 ~]$ sudo cp /var/tmp/ovirt-live-el7-3.6.2.iso /sharevol1/
>>> cp: error writing ‘/sharevol1/ovirt-live-el7-3.6.2.iso’: Input/output error
>>> cp: failed to extend ‘/sharevol1/ovirt-live-el7-3.6.2.iso’: Input/output error
>>> cp: failed to close ‘/sharevol1/ovirt-live-el7-3.6.2.iso’: Input/output error
>>>
>>>
>>> Does the mount log give you more information? If it was a disk-full
>>> issue, the error you would get is ENOSPC and not EIO. This looks like
>>> something else.
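>>> The FUSE client log is usually named after the mount point, e.g.
>>> /var/log/glusterfs/sharevol1.log for a mount on /sharevol1; grepping for
>>> error-level entries around the time of the failed cp should show which
>>> translator returned EIO:
>>>
>>> # grep ' E ' /var/log/glusterfs/sharevol1.log | tail -20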
>>>
>>>
>>> I know we have experts on this mailing list, and I assume this is a
>>> common situation that many Gluster users may have encountered. My worry
>>> is: what if you have a big VM file sitting on top of a Gluster volume ...?
>>>
>>> It is recommended to use sharding (http://blog.gluster.org/2015/12/introducing-shard-translator/)
>>> for VM workloads to alleviate these kinds of issues.
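>>> Roughly, enabling it looks like this (note that only files created after
>>> the option is turned on are sharded; existing files are left as-is):
>>>
>>> # gluster volume set sharevol1 features.shard on
>>> # gluster volume set sharevol1 features.shard-block-size 64MB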
>>> -Ravi
>>>
>>> Any insights will be much appreciated.
>>>
>>>
>>>
>>> _______________________________________________
>>> Gluster-users mailing list
>>> Gluster-users at gluster.org
>>> http://www.gluster.org/mailman/listinfo/gluster-users
>>>
>>>
>