similar to: Design Dilemma - Please Help

Displaying 20 results from an estimated 7000 matches similar to: "Design Dilemma - Please Help"

2010 Oct 02
2
[LLVMdev] Function inlining creates uninitialized stack roots
I'm still putting the final touches on my stack crawler, and I've run into a problem having to do with function inlining and local stack roots. As you know, all local roots must be initialized before you can make any call to a function which might crawl the stack. My compiler ensures that all local variables of a function are allocated, declared as root, and initialized in the first
2007 Apr 03
4
Replacing ERB with Erubis
Hey guys, I've been hearing a lot about Erubis (http://www.kuwata-lab.com/erubis/), especially about how much faster it is than straight ERB. Its Ruby on Rails support docs (http://www.kuwata-lab.com/erubis/users-guide.05.html#topics-rails) state that with a few added lines to your environment.rb it will replace ERB completely. I'm wondering if anyone has done this in
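
For reference, the users guide linked above does the swap with a single require in environment.rb. A minimal sketch, assuming the helper path documented there for Erubis 2.x (check it against your installed version):

    # config/environment.rb (Rails 1.x/2.x era)
    # Assumption: the erubis gem is installed and this helper path matches
    # the one in the linked users guide for your Erubis version.
    require 'erubis/helpers/rails_helper'

After that require, Rails should render its .rhtml/.erb templates through Erubis instead of ERB, per the guide.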
2006 Mar 29
1
htdig with omega for multiple URLs (websites)
Olly, many thanks for suggesting htdig, you saved me a lot of time. Htdig looks better than my original idea, wget; you were right. Using htdig, I can crawl and search a single website, but I need to integrate search across pages spread over 100+ sites. Learning, learning.... Htdig uses a separate document database for every website (one database per URL used to initiate crawling). Htdig also can merge
2010 Oct 02
2
[LLVMdev] Function inlining creates uninitialized stack roots
On Sat, Oct 2, 2010 at 12:59 PM, nicolas geoffray <nicolas.geoffray at gmail.com> wrote: > Hi Talin, > > You are not doing anything wrong; it is just that the LLVM optimizers > treat llvm.gcroot as a regular function call. The alloca is moved into > the first block most probably because the inliner anticipates another > optimization pass (mem2reg). > OK, well,
2010 Oct 02
0
[LLVMdev] Function inlining creates uninitialized stack roots
Hi Talin, You are not doing anything wrong; it is just that the LLVM optimizers treat llvm.gcroot as a regular function call. The alloca is moved into the first block most probably because the inliner anticipates another optimization pass (mem2reg). Cheers, Nicolas On Sat, Oct 2, 2010 at 8:28 PM, Talin <viridia at gmail.com> wrote: > I'm still putting the final touches on
2007 Dec 06
3
anybody use OPEN_ID to authenticate?
How did it go? Here is the link if you are interested: http://openid.net/what/ -- Posted via http://www.ruby-forum.com/.
2010 Oct 02
0
[LLVMdev] Function inlining creates uninitialized stack roots
Sure. I think we can change the GC lowering pass to recognize all llvm.gcroot (not only the ones in the first block), and move them to the first block so that they are initialized by the pass later on. On Sat, Oct 2, 2010 at 10:58 PM, Talin <viridia at gmail.com> wrote: > On Sat, Oct 2, 2010 at 12:59 PM, nicolas geoffray < > nicolas.geoffray at gmail.com> wrote: > >>
2007 Jan 23
3
Someone getting RDig work for Linux?
I got this root at linux:~# rdig -c configfile RDig version 0.3.4 using Ferret 0.10.14 added url file:///home/myaccount/documents/ waiting for threads to finish... root at linux:~# rdig -c configfile -q "Ruby" RDig version 0.3.4 using Ferret 0.10.14 executing query >Ruby< Query: total results: 0 root at linux:~# my configfile I changed from config to cfg, because of maybe
2008 Mar 25
0
Questions about backgroundrb
Cc'ing to the list for archival purposes: On Tue, Mar 25, 2008 at 7:55 PM, Brian Noguchi <brian.noguchi at gmail.com> wrote: > Hi Hemant, > > I'm Brian Noguchi, a developer in the Bay Area. I have some questions about > backgroundrb, and I found your contact info on a forum. I figured it's > probably best to get answers straight from the source. > >
2007 Mar 10
6
ActiveResources 0.1.0 Released
See the blog post at http://blog.lonestarsoftware.net/2007/03/09/active_resources-010-released/ Reading through the Rails blogosphere last week, I read a post (which I cannot find again) that suggested a completely different approach to AJAX use in Rails apps. The idea was to create a JavaScript proxy to the ActiveRecord models and allow AR operations to be called from the client. I see this
2007 Sep 18
4
basic rdig setup
I'm developing locally on Windows and I have a remote dev box that runs Linux. I'm trying to use RDig just to index using URLs, no files. Both use acts_as_ferret for an administrative search that works fine. On the Windows machine, I get no errors, but get no results. On the Linux machine, I get: File Not Found Error occured at <except.c>:93 in xraise Error occured in
2010 Oct 14
1
[LLVMdev] llvm.org robots.txt prevents crawling by Google code search?
On Wed, Oct 13, 2010 at 11:10 PM, Anton Korobeynikov <anton at korobeynikov.info> wrote: > > indexing the llvm.org svn archive. This means that when you search for an > > LLVM-related symbol in code search, you get one of the many (possibly > > out-of-date) mirrors, rather than the up-to-date llvm.org version. This is > > sad. > This is intentional. The
2019 Aug 29
2
404s within LLVM documentation
Patrick, how long does the crawl take? I suspect if we fixed internal documentation links so that they point to local copies of documentation when building locally it would be quite quick (no actual idea though). That in turn would probably make it feasible to add to the existing documentation build bots, I think. James On Thu, 29 Aug 2019 at 03:47, Neil Nelson via llvm-dev < llvm-dev at
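
For scale, a single-host 404 check is just a breadth-first fetch that stays on one host; a minimal sketch, with the start URL and page limit as placeholders (no redirect handling, and the href regex is deliberately crude):

    require 'net/http'
    require 'uri'

    # Breadth-first crawl of one host, reporting links that return 404.
    def check_links(start, limit: 200)
      seen, queue = {}, [start]
      while (url = queue.shift) && seen.size < limit
        next if seen[url]
        seen[url] = true
        uri = URI(url)
        res = Net::HTTP.get_response(uri)
        puts "404: #{url}" if res.code == '404'
        next unless res['content-type'].to_s.include?('html')
        res.body.scan(/href="([^"#]+)"/).flatten.each do |href|
          link = (URI.join(url, href).to_s rescue next)
          queue << link if URI(link).host == uri.host   # stay on this host
        end
      end
    end

    check_links('https://llvm.org/docs/')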
2006 Mar 17
1
omega crawler: ht://dig or wget?
At the wiki page http://wiki.xapian.org/Omega I added a comment that ht://Dig looks dead. Does anybody really use it? From a brief glance at the docs, I had a feeling it is not easy to configure. Maybe a better crawler is GNU wget? Mature, stable, maintained? -- Peter Masiar
2006 Jul 25
1
RDig document processing error
Hi all, Am having problems using RDig: With this rdig config... cfg.crawler.start_urls = ['http://www.defensetech.org'] cfg.crawler.include_hosts = ['www.defensetech.org'] cfg.index.path = '/my/path/to/index' cfg.verbose = true ...I get this output: $ rdig -c config/rdig_config.rb /usr/local/lib/site_ruby/1.8/ferret/index/term.rb:45:
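
For reference, the settings quoted in that snippet, laid out as a config file. The hostname and index path are the poster's; the RDig.configuration block wrapper is the usual RDig convention and is assumed here:

    # rdig_config.rb -- the poster's settings, reassembled
    RDig.configuration do |cfg|
      cfg.crawler.start_urls    = ['http://www.defensetech.org']
      cfg.crawler.include_hosts = ['www.defensetech.org']
      cfg.index.path            = '/my/path/to/index'
      cfg.verbose               = true
    end

Run it the same way as in the post: rdig -c rdig_config.rb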
2007 Apr 18
2
Checking validity of NHS numbers
Hello I need to check that the NHS numbers in my database are valid. I'm storing them as ten-digit strings: the first nine are the identifier and the tenth is a check digit. There are four steps to calculating the check digit (from http://www.connectingforhealth.nhs.uk/systemsandservices/nsts/docs/tech_nn_check_digit.pdf): 1. multiply each of the first nine digits by a weighting factor
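
For reference, the Modulus 11 check the linked PDF describes, sketched in Ruby. The weights (10 down to 2), the rule that a result of 11 maps to 0, and the rule that a computed check digit of 10 is invalid are the standard NHS scheme; the example number below is made up:

    # Validate a 10-digit NHS number string via its Modulus 11 check digit.
    def valid_nhs_number?(nhs)
      return false unless nhs =~ /\A\d{10}\z/
      digits = nhs.chars.map(&:to_i)
      sum = 0
      digits[0, 9].each_with_index { |d, i| sum += d * (10 - i) }  # weights 10..2
      check = 11 - (sum % 11)
      check = 0 if check == 11
      return false if check == 10     # 10 can never be a valid check digit
      check == digits[9]
    end

    valid_nhs_number?('4505577104')   # => true (sum 216; 216 % 11 = 7; 11 - 7 = 4)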
2012 Nov 17
1
fast parallel crawling of file systems
Hi, I use a disk space inventory tool called TreeSizePro to scan filesystems on Windows and Linux boxes. On Linux systems I export these shares via Samba to scan them. TreeSizePro is multi-threaded (32 crawlers) and I run it on Windows 7. I am scanning filesystems that are local to the Linux servers and also NFS mounts that are re-exported via Samba. If I scan a Windows 2008 server I can
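
Not TreeSizePro itself, but a minimal sketch of the same shape, N crawler threads pulling directories off a shared queue, which is roughly what a 32-crawler scan of a mount is doing. Thread count and root path are placeholders:

    require 'thread'   # Queue/Mutex; needed only on older Rubies

    # Sum file sizes under +root+ with N worker threads sharing a queue of
    # directories; each worker re-queues the subdirectories it finds.
    def parallel_scan(root, workers: 8)
      queue = Queue.new
      queue << root
      mutex, bytes, pending = Mutex.new, 0, 1

      threads = Array.new(workers) do
        Thread.new do
          while (dir = queue.pop)                  # nil is the stop signal
            local = 0
            entries = (Dir.children(dir) rescue [])  # skip unreadable dirs
            entries.each do |name|
              path = File.join(dir, name)
              if File.directory?(path) && !File.symlink?(path)
                mutex.synchronize { pending += 1 }
                queue << path
              else
                local += (File.size(path) rescue 0)
              end
            end
            mutex.synchronize do
              bytes   += local
              pending -= 1
              workers.times { queue << nil } if pending.zero?  # wake all to exit
            end
          end
        end
      end
      threads.each(&:join)
      bytes
    end

    puts parallel_scan('/mnt/share', workers: 32)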
2000 Jun 27
0
FemFind - search engine for SMB/FTP shares
What is FemFind? FemFind is a crawler/search engine for SMB shares. FemFind also crawls FTP servers and provides a web interface and a Windows client as frontends for searching. What do I need to run it? The FemFind crawler runs on a Unix platform (currently only Linux has been tested). It utilizes a MySQL database. The web interface requires a webserver. In addition some Perl modules
2011 Mar 03
6
Developing a web crawler
Hi, I wish to develop a web crawler in R. I have been using the functionality available in the RCurl package. I am able to extract the HTML content of the site, but I don't know how to go about analyzing the HTML-formatted document. I wish to know the frequency of a word in the document. I am only acquainted with analyzing data sets. So how should I go about analyzing data that is not
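
The thread wants R (RCurl's getURL to fetch, then base-R string functions such as gregexpr to count), but the fetch-strip-count pipeline itself is tiny. A sketch of the same steps, shown in Ruby to match the other examples here; the URL and word are placeholders:

    require 'net/http'
    require 'uri'

    # Fetch a page, strip the markup, and count occurrences of one word.
    def word_frequency(url, word)
      html = Net::HTTP.get(URI(url))
      text = html.gsub(%r{<script.*?</script>}mi, ' ')  # drop script bodies
      text = text.gsub(/<[^>]+>/, ' ')                  # strip remaining tags
      text.scan(/\b#{Regexp.escape(word)}\b/i).size
    end

    puts word_frequency('https://www.r-project.org/', 'statistical')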
2008 Jan 31
1
Newbie: Using R to analyse Apache logs
Hi, I have a requirement to scan Apache logs and discover "exceptions". Exceptions can be of two types: 1. A single IP generating a large amount of traffic within a given time frame (for definable values of "large" and "time frame"). 2. A single IP hitting a wide set of URLs on the server (indicates
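
Both exception types reduce to grouping the log by client IP. A sketch over Apache common-log-format lines; the window and both thresholds are placeholders for the "definable values" the post mentions:

    require 'time'
    require 'set'

    WINDOW   = 600    # seconds -- the "time frame"
    MAX_HITS = 500    # placeholder for "a large amount of traffic"
    MAX_URLS = 100    # placeholder for "a wide set of URLs"

    times = Hash.new { |h, k| h[k] = [] }
    paths = Hash.new { |h, k| h[k] = Set.new }

    # common log format: 1.2.3.4 - - [10/Oct/2000:13:55:36 -0700] "GET /x HTTP/1.0" 200 2326
    File.foreach(ARGV[0]) do |line|
      next unless line =~ /\A(\S+) \S+ \S+ \[([^\]]+)\] "\S+ (\S+)/
      times[$1] << Time.strptime($2, '%d/%b/%Y:%H:%M:%S %z')
      paths[$1] << $3
    end

    # Type 1: any IP with more than MAX_HITS requests inside one WINDOW
    # (sliding window over that IP's sorted timestamps).
    times.each do |ip, ts|
      ts.sort!
      lo = 0
      ts.each_with_index do |t, hi|
        lo += 1 while t - ts[lo] > WINDOW
        if hi - lo + 1 > MAX_HITS
          puts "traffic spike: #{ip}"
          break
        end
      end
    end

    # Type 2: any IP that touched more than MAX_URLS distinct URLs.
    paths.each { |ip, s| puts "url sweep: #{ip} (#{s.size} urls)" if s.size > MAX_URLS }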