similar to: Re: range query for terms

Displaying 20 results from an estimated 1000 matches similar to: "Re: range query for terms"

2015 Mar 29
1
range query for terms
Thank you, Olly! I tried to figure out a picture of how index/query relates to B-tree block access on disk, but I think I got it all messed up and failed. Now I am trying to index docs in JSON format, and came to a question about prefix mapping. For a JSON doc like {"starttime":1111,"endtime":2222}, I am considering mapping the prefix to a slot number in two ways:
2008 Dec 11
1
Meetme realtime table structure
Hi guys, Sorry if this is a very stupid question, but this is my first time writing to this list. I have problems with the configuration of app_meetme in a realtime environment. I use the last stable release of asterisk, 1.6.0.3. The situation is as follows: I create a database and a table in it. The table is CREATE TABLE IF NOT EXISTS `booking` ( `bookId` int(10) unsigned NOT NULL auto_increment, `clientId` int(10)
2015 Mar 14
2
range query for terms
First, thank you, Xapian! I'd like to ask if it is possible to do a range query on terms (like the range query on values), or if it is just a wildcard (right truncation) match. The case is searching IP addresses between "10.10.0.0" and "10.10.255.255". The user wants: 1. query "10.10.10.10" < ip < "10.10.10.12" gives "10.10.10.11" 2. query
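A common workaround for term-based range matching (sketched below in Python; this is an illustration of the general trick, not Xapian's API) is to index each address in a fixed-width, zero-padded form, so that plain lexicographic comparison — which is all a term range gives you — agrees with numeric IP order:

```python
def pad_ip(ip):
    """'10.10.10.9' -> '010.010.010.009': zero-pad each octet to width 3."""
    return ".".join(octet.zfill(3) for octet in ip.split("."))

def in_range(ip, low, high):
    # With fixed-width keys, ordinary string comparison is enough,
    # which is effectively what a term/value range in a search engine does.
    return pad_ip(low) < pad_ip(ip) < pad_ip(high)

print(in_range("10.10.10.11", "10.10.10.10", "10.10.10.12"))   # True
print(in_range("10.10.9.200", "10.10.10.0", "10.10.255.255"))  # False
```

Without the padding, lexicographic order breaks ("10.10.9.x" sorts after "10.10.10.x"), which is exactly why a plain right-truncation wildcard cannot express this range.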
2023 Aug 21
2
Increase data length for SMB2 write and read requests for Windows 10 clients
Hello Jeremy, > OH - that's *really* interesting ! I wonder how it is > changing the SMB3+ redirector to do this ? It looks like applications could do something to give a hint to the SMB3+ redirector; so far I am not quite sure how to do it. Process Monitor (procmon) shows that the write I/O size seems to be passed down from the application layers,
2003 Sep 02
1
convert character to POSIXct
Dear list-members, I would like to calculate the difference between two points in time. To convert a 'time (GMT)' character with the format "1/1/1999 01:01:01" into an object of class "POSIXct", I first use strptime() as suggested in the details of help(as.POSIXct), e.g. starttime<-strptime("1/1/1999 01:01:01",format="%d/%m/%Y %H:%M:%S")
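The parse-then-subtract step can be sketched in Python with the same format string (the endtime value here is a made-up example, not from the post):

```python
from datetime import datetime

FMT = "%d/%m/%Y %H:%M:%S"  # same format string as the R strptime() call

starttime = datetime.strptime("1/1/1999 01:01:01", FMT)
endtime = datetime.strptime("2/1/1999 03:01:01", FMT)

# Subtracting two parsed timestamps yields an exact duration.
elapsed = endtime - starttime
print(elapsed.total_seconds())  # 93600.0 (26 hours)
```

In R the equivalent subtraction of two POSIXct values returns a difftime object; in both languages the key point is to parse into a real date-time type before doing arithmetic, rather than comparing strings.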
2024 Apr 08
1
Exceptional slowness with read.csv
No idea, but have you tried using ?scan to read those next 5 rows? It might give you a better idea of the pathologies that are causing problems. For example, an unmatched quote might result in some huge number of characters trying to be read into a single element of a character variable. As your previous respondent said, resolving such problems can be a challenge. Cheers, Bert On Mon, Apr 8,
2013 May 01
0
slow automounted cifs
Samba 4.0.6 git both DC and fileserver with openSUSE 12.3 clients Hi I'm trying to debug why logins to Linux clients are sometimes slow. Here is a login with the user steve2 requesting his (automounted) home folder: ] Kerberos: TGS-REQ authtime: 2013-05-01T20:57:27 starttime: 2013-05-01T20:57:27 endtime: 2013-05-02T06:57:27 renew till: 2013-05-02T20:57:25 Kerberos: AS-REQ steve2 at HH3.SITE
2024 Apr 08
1
Exceptional slowness with read.csv
data.table's fread is also fast. Not sure about error handling. But I can merge 300 csvs with a total of 0.5m lines and 50 columns in a couple of minutes versus a lifetime with read.csv or readr::read_csv On Mon, 8 Apr 2024, 16:19 Stevie Pederson, <stephen.pederson.au at gmail.com> wrote: > Hi Dave, > > That's rather frustrating. I've found vroom (from the package
2011 Dec 10
1
ActiveRecord time and datetime
Hi, Suppose I have a model class which has a time field: class CreateAppointments < ActiveRecord::Migration def change create_table :appointments do |t| t.string :name t.datetime :startTime t.datetime :endTime t.string :description t.timestamps end end end When I test drive it in the rails console, I can input any value in the startTime and endTime such
2024 Apr 08
4
Exceptional slowness with read.csv
Greetings, I have a csv file of 76 fields and about 4 million records. I know that some of the records have errors - unmatched quotes, specifically. Reading the file with readLines and parsing the lines with read.csv(text = ...) is really slow. I know that the first 2459465 records are good. So I try this: > startTime <- Sys.time() > first_records <- read.csv(file_name, nrows
2024 Apr 08
2
Exceptional slowness with read.csv
Hi Dave, That's rather frustrating. I've found vroom (from the package vroom) to be helpful with large files like this. Does the following give you any better luck? vroom(file_name, delim = ",", skip = 2459465, n_max = 5) Of course, when you know you've got errors & the files are big like that it can take a bit of work resolving things. The command line tools awk
2003 Dec 08
3
Strange variable chopping from AGI's
AGIs are resulting in unusual behaviors. Can someone please tell me if this is my inappropriate use of AGIs, inappropriate use of Time::HiRes, or a bug with *: I call this script twice: #!/usr/bin/perl use Time::HiRes qw( gettimeofday ); ($seconds, $microseconds) = gettimeofday; $hirestime = sprintf("%s","$seconds$microseconds"); print "SET VARIABLE
2024 Apr 08
2
Exceptional slowness with read.csv
I solved the mystery, but not the problem. The problem is that there's an unclosed quote somewhere in those 5 additional records I'm trying to access, so read.csv is reading million-character fields, and it's slow at that. That mystery is solved. However, the problem persists: how to fix what is obvious to the naked eye - a quote not adjacent to a comma - but that read.csv can't
2006 Jun 30
0
sync reads or big files problem
Hello, friends! I have a problem using Prototype 1.4 in IE6. I am trying to create a bandwidth speed test tool. The idea is to download the same 1 MB ASCII file X times (e.g. 10), one by one. I would like to measure the time and speed of each run, display the intermediate results after each run, and finally, after all X runs, display the total result as the average of all runs. But when I
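The measure-each-run-then-average idea is language-independent; a minimal Python sketch (with a hypothetical in-memory stand-in for the actual 1 MB download) might look like:

```python
import time

def measure_runs(task, runs=10):
    """Time each run of `task`, returning the per-run times and their average."""
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        task()  # in the real tool, this would fetch the test file
        times.append(time.perf_counter() - start)
    return times, sum(times) / len(times)

# Stand-in for downloading a 1 MB ASCII file:
times, avg = measure_runs(lambda: bytes(1_000_000), runs=3)
print(len(times), avg >= 0)
```

In a browser-based test the usual extra wrinkle is appending a cache-busting query parameter to each request, so every run actually transfers the file instead of hitting the cache.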
2007 Feb 12
1
Fwd: Joining a SAMBA 4 TP4 Active Directory with WinXP
On Monday, 12 February 2007 14:43, paul wrote: > Mag. Leonhard Landrock wrote: > > *) Start a virtual machine with WinXP SP2 and try to join the domain > > LEOSENDE.FUN. > > > > The last point (joining the domain) doesn't work. I try the username > > Administrator and the password as set with "./setup/provision" but it > > doesn't
2024 Apr 10
2
Exceptional slowness with read.csv
At 06:47 on 08/04/2024, Dave Dixon wrote: > Greetings, > > I have a csv file of 76 fields and about 4 million records. I know that > some of the records have errors - unmatched quotes, specifically. > Reading the file with readLines and parsing the lines with read.csv(text > = ...) is really slow. I know that the first 2459465 records are good. > So I try this: >
2014 Apr 11
1
4.0 stopped working after updating xubuntu 13.04
Hi I got some strange issues on my samba4.0.1 install yesterday. It happened a while after updating my xubuntu server 13.04 not 13.10. Everything seems to be working fine except shares. Kerberos authentication seem to function properly, also DNS works fine but shares seem semi-broken. I can't mount any shares on my Windows box, including netlogon, profiles. I have one share that is
2012 Oct 18
1
mount.cifs: regular freezes with s3fs
cifs-utils-5.6 samba Version 4.0.0rc3 openSUSE 12.2 LAN of XP, w7 and Linux clients under Samba4 DC and s3fs fileserver Hi I am testing the possibility of migrating from nfs to cifs to serve our Linux clients. Currently we mount the samba shares, e.g. the home directory, using nfs. The test setup is that instead of: mount -t nfs hh1:/home2 /home2 -osec=rw,krb5 I changed to: mount -t cifs
2005 Aug 24
0
(Fwd) Re: priority of operators in the FOR ( ) statement
Hi On 23 Aug 2005 at 12:03, Ravi.Vishnu at outokumpu.com wrote: > Dear All, > I spent an entire evening in debugging a small, fairly simple program > in R - without success. It was my Guru in Bayesian Analysis, Thomas > Fridtjof, who was able to diagnose the problem. He said that it took > a long time for him also to locate the problem. This program > illustrates in
2024 Apr 10
1
Exceptional slowness with read.csv
That's basically what I did: 1. Get text lines using readLines 2. use tryCatch to parse each line using read.csv(text=...) 3. in the catch, use gregexpr to find any quotes not adjacent to a comma (gregexpr("[^,]\"[^,]", ...)) 4. escape any quotes found by adding a second quote (using str_sub from stringr) 5. parse the patched text using read.csv(text=...) 6. write out the parsed
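A minimal sketch of the find-and-double-quotes step, translated to Python for illustration (the regex mirrors the gregexpr pattern from the post; the sample line is hypothetical):

```python
import csv
import io
import re

# A quote with a non-comma character on both sides is assumed to be a
# stray quote inside a field; doubling it applies CSV's quote-escape rule.
stray_quote = re.compile(r'(?<=[^,])"(?=[^,])')

def patch_line(line):
    return stray_quote.sub('""', line)

broken = '123,"O"Brien",456'
patched = patch_line(broken)
print(patched)                                 # 123,"O""Brien",456
print(next(csv.reader(io.StringIO(patched))))  # ['123', 'O"Brien', '456']
```

The lookarounds leave field-opening and field-closing quotes (those adjacent to a comma or a line boundary) untouched, which is the same heuristic the gregexpr pattern encodes; it can still misfire on pathological data, so it is a repair heuristic rather than a general CSV fixer.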