similar to: Dedupping Has_many through, :unique=>true

Displaying 20 results from an estimated 11000 matches similar to: "Dedupping Has_many through, :unique=>true"

2009 Dec 15
7
ZFS Dedupe reporting incorrect savings
Hi, Created a zpool with 64k recordsize and enabled dedup on it:

    zpool create -O recordsize=64k TestPool device1
    zfs set dedup=on TestPool

I copied files onto this pool over NFS from a Windows client. Here is the output of zpool list:

    Prompt:~# zpool list
    NAME      SIZE  ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
    TestPool  696G  19.1G  677G   2%  1.13x  ONLINE  -

When I ran a
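To sanity-check the reported savings, the dedup table statistics can be dumped directly; a hedged sketch (TestPool is the pool name from the post, and zdb output formats vary by build):

    # pool-wide ratio as zpool computes it
    zpool list -o name,size,alloc,dedupratio TestPool
    # DDT summary plus a histogram of how many blocks are
    # referenced once, twice, four times, ...
    zdb -DD TestPool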
2010 Jun 18
1
Question : Sun Storage 7000 dedup ratio per share
Dear All: On a Sun Storage 7000 system, can we see the per-share dedup ratio after enabling the dedup function? We would like to see each share's dedup ratio in detail. The Web GUI only shows the dedup ratio for the entire storage pool. Thanks a lot, -- Rex
2009 Jun 22
5
has_many through , or habtm , using form
I think there are two ways to relate products and categories. Basically I want to fix one product (e.g. hp dv7....) to some categories (notebook, 17"notebooks...). I made a table named categorization (including category_id and product_id fields), then in the models I wrote the code below:

    class Product < ActiveRecord::Base
      has_many :categories, :through => :categorizations
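For reference, a minimal sketch of the full join-model setup the post describes (Rails 2.x-era syntax; note the has_many :categorizations line is required for :through => :categorizations to resolve, and the conventional table name is the plural categorizations):

    class Product < ActiveRecord::Base
      has_many :categorizations
      has_many :categories, :through => :categorizations
    end

    class Category < ActiveRecord::Base
      has_many :categorizations
      has_many :products, :through => :categorizations
    end

    class Categorization < ActiveRecord::Base
      belongs_to :product
      belongs_to :category
    end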
2011 Apr 28
4
Finding where dedup'd files are
Is there an easy way to find out which datasets have dedup'd data in them? Even better would be to discover which files in a particular dataset are dedup'd. I ran

    # zdb -DDDD

which gave output like:

    index 1055c9f21af63 refcnt 2 single DVA[0]=<0:1e274ec3000:2ac00:STD:1> [L0 deduplicated block]
    sha256 uncompressed LE contiguous unique unencrypted 1-copy size=20000L/20000P
2011 Jan 28
8
ZFS Dedup question
I created a zfs pool with dedup with the following settings:

    zpool create data c8t1d0
    zfs create data/shared
    zfs set dedup=on data/shared

The thing I was wondering about is that it seems like ZFS only dedups at the file level and not the block level. When I make multiple copies of a file to the store I see an increase in the dedup ratio, but when I copy similar files the ratio stays at 1.00x.
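ZFS dedup is in fact block-level: only blocks that are byte-identical at recordsize granularity dedup against each other, which is why whole-file copies dedup while merely similar files do not. A hedged sketch of a quick check, reusing the pool from the post:

    # identical copies dedup fully, block by block
    cp /data/shared/file1 /data/shared/file2
    zpool get dedupratio data
    # "similar" files rarely share whole, aligned, byte-identical
    # blocks, so the ratio stays at 1.00x despite overlapping content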
2009 Feb 21
3
belongs_to or has_many
Two tables, Items and Categories:

    Categories (id, name)
    Items (id, name, category_id)

category_id can be null, and there are Categories that have no Items.
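The conventional mapping puts belongs_to on the side holding the foreign key; a minimal sketch in Rails 2.x-era syntax:

    class Category < ActiveRecord::Base
      has_many :items    # a category with no items simply returns []
    end

    class Item < ActiveRecord::Base
      belongs_to :category    # category_id may be NULL, so item.category can be nil
    end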
2009 Dec 30
3
What happens to the dedup table (DDT) when you set dedup=off?
I tried the deduplication feature but the performance of my fileserver dived from writing 50MB/s via CIFS to 4MB/s. What happens to the deduped blocks when you set dedup=off? Are they written back to disk? Is the dedup table deleted or is it still there? Thanks
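For what it's worth, dedup=off only affects new writes: already-deduped blocks stay deduplicated, and their DDT entries remain until those blocks are freed or rewritten. A hedged sketch of what this looks like (tank is a hypothetical pool name):

    zfs set dedup=off tank         # new writes bypass the DDT
    zpool get dedupratio tank      # ratio persists while old deduped blocks remain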
2013 Apr 01
5
[RFC] Online dedup for Btrfs
Hello, I was bored this weekend so I hacked up online dedup for Btrfs. It's working quite well so I think it can be more widely tested. There are two ways to use it: 1) Compatible mode - this is a bit slower but will handle being used by older kernels. We use the csum tree to find duplicate blocks. Since it is relatively easy to have crc32c collisions this also involves reading the
2010 Sep 25
4
dedup testing?
Hi all. Has anyone done any testing with dedup with OI? On opensolaris there is a nifty "feature" that allows the system to hang for hours or days if attempting to delete a dataset on a deduped pool. This is said to be fixed, but I haven't seen that myself, so I'm just wondering... I'll get a 10TB test box released for testing OI in a few weeks, but before
2011 Feb 12
1
existing performance data for on-disk dedup?
Hello. I am looking to see if performance data exists for on-disk dedup. I am currently in the process of setting up some tests based on input from Roch, but before I get started, thought I'd ask here. Thanks for the help, Janice
2009 Aug 18
1
How to Dedup a Spatial Points Data Set
I'm new to spatial analysis and am exploring numerous packages, mostly enjoying sp, gstat, and spBayes. Is there a function that allows the user to dedup a data set with multiple values at the same coordinates and replace those duplicated values with the mean at those coordinates? I've written some cumbersome code that works, but would prefer an efficient R function if it exists.
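Base R covers this without extra packages; a minimal sketch, assuming a data.frame pts with coordinate columns x and y and a measurement column z (all names hypothetical):

    # collapse rows with duplicated coordinates into one,
    # replacing the measurement with its mean at each location
    deduped <- aggregate(z ~ x + y, data = pts, FUN = mean)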
2009 Jun 16
2
dedup in dovecot?
Hi all. Deduplicating data is not really a new thing, but it is quite efficient in mail systems, where an email with an n-MB attachment may be sent to multiple recipients. This might call for deduplicating data. Is there a way to do this, or is it far off? If I understand the system correctly, usually an MTA is calling dovecot on every single message, meaning the message itself won't
2009 Nov 12
0
Problem with has_many :through, :uniq => true with polymorph
Didn't have quite enough space to describe it there... basically I'm having a problem with the :uniq option in my tags. I'm using acts_as_taggable_on_steroids, which adds these associations to my Resource class:

    Resource
      has_many :taggings, :as => :taggable, :dependent => :destroy, :include => :tag
      has_many :tags, :through => :taggings, :uniq => true
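With those associations, :uniq => true is what makes the through collection return each tag once even when duplicate taggings exist; a hedged usage sketch:

    resource = Resource.first
    resource.taggings.size   # duplicate taggings are counted
    resource.tags.size       # each distinct tag appears only once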
2010 Aug 18
10
Networker & Dedup @ ZFS
Hi, We are considering using a ZFS based storage as a staging disk for Networker. We're aiming at providing enough storage to be able to keep 3 months' worth of backups on disk before it's moved to tape. To provide storage for 3 months of backups, we want to utilize the dedup functionality in ZFS. I've searched around for these topics and found no success stories,
2009 Nov 02
24
dedupe is in
Deduplication was committed last night by Mr. Bonwick:

    > Log message:
    > PSARC 2009/571 ZFS Deduplication Properties
    > 6677093 zfs should have dedup capability

http://mail.opensolaris.org/pipermail/onnv-notify/2009-November/010683.html Via c0t0d0s0.org.
2007 Dec 11
2
Patch 10463: has_many through using uniq does not honor order
Hi, I've just submitted a patch for ActiveRecord: http://dev.rubyonrails.org/ticket/10463 The patch includes new fixtures because I could not find an applicable combination among the existing fixtures. I hope that's okay. Please +1 or comment on it. Thanks, Remco
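For context, the combination the patch addresses looks like this (hypothetical model names, Rails 2.x-era syntax):

    class Author < ActiveRecord::Base
      has_many :authorships
      # without the fix, the :uniq => true through association could
      # return its rows ignoring the :order clause
      has_many :books, :through => :authorships, :uniq => true,
               :order => "books.title"
    end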
2006 May 10
7
has_many :through scope on join attribute
Hi, I have a has_many :through. It's a basic mapping of:

    Project
      id
      ....
    User
      id
      ....
    TeamMembers
      project_id
      user_id
      role

What I would like to do is have different roles, so that in the project model I can have:

    has_many :core_members, :through => :team_members, :source => :user

but I would like to limit this to only those with the "core" role in the team members table for
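One common way to express that restriction in Rails 2.x-era syntax is a :conditions clause against the join table; a hedged sketch using the tables from the post:

    class Project < ActiveRecord::Base
      has_many :team_members
      has_many :users, :through => :team_members
      # only users whose team_members row carries the "core" role
      has_many :core_members, :through => :team_members, :source => :user,
               :conditions => "team_members.role = 'core'"
    end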
2009 Nov 16
2
ZFS Deduplication Replication
Hello, Dedup on ZFS is an absolutely wonderful feature! Is there a way to conduct dedup replication across boxes, from one deduped ZFS data set to another? Warmest Regards, Steven Sim
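There is no mode that replicates the on-disk dedup state itself: a plain send rehydrates blocks, and the target dedups independently if dedup=on is set there. On builds whose zfs send supports the -D flag, the stream itself is deduplicated in transit; a hedged sketch with hypothetical pool and host names:

    zfs snapshot tank/data@repl
    zfs send -D tank/data@repl | ssh otherbox zfs recv -d backup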
2010 Jan 22
3
mailbox format w/ separate headers/data
In the future, it would be cool if there were a mailbox format (dbox2?) where mail headers and each MIME part were stored in separate files. This would enable the ZFS dedup feature to be used to maximum benefit. In the ZFS filesystem, there is a dedup feature which stores only 1 copy of duplicate blocks. In a normal mail file, the headers will be different for each recipient and the chances of
2009 Dec 10
6
Confusion regarding 'zfs send'
I'm playing around with snv_128 on one of my systems, and trying to see what kind of benefits enabling dedup will give me. The standard practice for reprocessing data that's already stored, to add compression and now dedup, seems to be a send / receive pipe similar to:

    zfs send -R <old fs>@snap | zfs recv -d <new fs>

However, according to the man page,