Steve Wilhelm
2010-Jul-15 06:00 UTC
Writing ActiveRecord find result to S3 as a compressed file
On Heroku, using a delayed_job, I am trying to periodically write the
result of an ActiveRecord find, converted to JSON, to an Amazon S3
bucket as a compressed file.
I have done this successfully by writing the JSON results to a temporary
file using File.open and Zlib::GzipWriter#write, and then using
AWS::S3::S3Object.store to copy the resulting file to S3. The code
fragment is below.
The problem is that the operation aborts, I think because it exceeds
Heroku's dyno memory or file size constraints, when the find returns a
large number of rows.
Any suggestions on how to use streams or some other approach so that
large find results can be converted to JSON and then stored on S3 as a
compressed file?
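The kind of thing I have in mind (just a rough sketch, not tested against
the dyno limits; it assumes a hypothetical Widget model and writes one
JSON document per line rather than a single JSON array, using
ActiveRecord's find_each to batch the query so the full result set never
sits in memory) would be something like:

require 'zlib'

tmp_file = "./tmp/#{file_name}"

# Stream rows in batches through GzipWriter instead of building one big
# JSON string in memory. Widget and the batch size are placeholders.
Zlib::GzipWriter.open(tmp_file) do |gz|
  Widget.find_each(:batch_size => 500) do |record|
    gz.puts record.to_json
  end
end

AWS::S3::S3Object.store(file_name + ".gz", open(tmp_file), bucket_name)

Even so, the temp file itself could still grow large, so I am not sure
this alone avoids the file size constraint.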
Thanks in advance for any advice.
- Steve W.
--------- code fragment ------
tmp_file = "./tmp/#{file_name}"

# first compress the content in a file
File.open(tmp_file, 'w') do |f|
  gz = Zlib::GzipWriter.new(f)
  gz.write content
  gz.close
end

# then copy the compressed file to S3
AWS::S3::Base.establish_connection!(
  :access_key_id     => S3_CONFIG['access_key_id'],
  :secret_access_key => S3_CONFIG['secret_access_key'],
  :use_ssl           => true
)
AWS::S3::S3Object.store file_name + ".gz", open(tmp_file), bucket_name
stored = AWS::S3::Service.response.success?
AWS::S3::Base.disconnect!
File.delete tmp_file