Hi all,

I just tested dedup on this test box running OpenIndiana (147) storing Bacula backups, and did some more testing on some datasets with ISO images. The results so far show that removing 30GB deduped datasets takes a matter of minutes, which is not the case with 134 (where it may take hours). The tests also show that the write speed to the pool is low, very low, if dedup is enabled. This is a box with a 3GHz Core 2 Duo, 8 gigs of RAM, eight 2TB drives and an 80GB X25-M for the SLOG (4 gigs) and L2ARC (the rest of it).

So far I would conclude that dedup is useful if storage capacity is crucial, but not if performance is taken into consideration. Mind you, this is not a high-end box, but still, I think the numbers show something.

Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
roy at karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly. It is an elementary imperative for all pedagogues to avoid excessive use of idioms of foreign origin. In most cases adequate and relevant synonyms exist in Norwegian.
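[For readers unfamiliar with the kind of layout described above, a minimal sketch follows. The pool name, dataset name, device names and raidz level are assumptions for illustration, not Roy's actual configuration.]

# Sketch of a pool like the one described: eight 2TB drives plus an X25-M
# split into a small SLOG slice and a larger L2ARC slice (all names assumed).
zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0 \
    log c1t0d0s0 \
    cache c1t0d0s1

# Enable dedup only on the dataset holding the backup volumes:
zfs create tank/bacula
zfs set dedup=on tank/bacula

# Check how much is actually being deduplicated:
zpool list tank     # the DEDUP column shows the pool-wide dedup ratio
zdb -DD tank        # DDT histogram and per-entry statistics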
Can you provide some specifics to see how bad the writes are?
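[One way such specifics could be gathered; the pool/dataset names and sizes below are placeholders. Note that /dev/zero dedups and compresses to almost nothing, so write from pre-generated random data and write more than RAM to keep the ARC from hiding pool throughput.]

# Watch per-vdev throughput while a write is in progress:
zpool iostat -v tank 5

# Generate ~10GB of incompressible data on a non-deduped dataset,
# then time a copy into the deduped dataset:
dd if=/dev/urandom of=/tank/testdata bs=1024k count=10240
ptime cp /tank/testdata /tank/bacula/testdata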
Hi!

> So far I would conclude that dedup is useful if storage capacity is crucial, but not if performance is taken into consideration.
> Mind you, this is not a high-end box, but still, I think the numbers show something.

It is probably because you have quite a low amount of RAM. I have a similar setup, a 10TB dataset that can handle 100MB/s writes easily; the system has 24GB of RAM.

Yours
Markus Kovero
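[Markus's point about RAM can be made concrete with a back-of-the-envelope calculation. The per-entry figure and block counts below are the commonly quoted rules of thumb from that era, used here as illustrative assumptions; zdb reports the real numbers for a given pool.]

# Rough DDT sizing (illustrative figures, not measured values):
#   with a 128K recordsize, 2TB of unique data is about
#     2 TB / 128 KB  = ~16 million unique blocks
#   at a commonly quoted ~320 bytes of in-core DDT per entry:
#     16M * 320 B    = ~5 GB of dedup table
# That does not fit comfortably in the ARC of an 8GB box once metadata and
# data caching are accounted for, so DDT lookups spill to L2ARC or the disks,
# and every write then pays extra random reads.

# The actual DDT size for a pool can be read from zdb:
zdb -D tank        # summary: entry counts, bytes on disk and in core
zdb -DD tank       # adds the full DDT histogram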