Displaying 20 results from an estimated 600 matches similar to: "Reading stopwords from a csv file"
2012 Feb 26
2
tm_map help
Hi all,
I am trying to do some text mining with Twitter and I am getting the error:
Error in structure(names(sapply(possibleCompletions, "[", 1)), names = x) :
'names' attribute [1] must be the same length as the vector [0]
when I use tm_map. Has anyone seen this error before? The code I
have is shown below and this error only occurs with #qantas, hashtags
like #asx,
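For reference, a hedged sketch of the usual twitteR-to-tm pipeline this kind of report involves (the hashtag and the cleaning steps are illustrative assumptions, not the poster's code):

library(twitteR)
library(tm)

# Sketch only: fetch tweets for a hashtag and build a cleaned corpus from their text.
tweets <- searchTwitter("#qantas", n = 100)
txt    <- sapply(tweets, function(t) t$getText())

corp <- Corpus(VectorSource(txt))
corp <- tm_map(corp, content_transformer(tolower))
corp <- tm_map(corp, removePunctuation)
corp <- tm_map(corp, removeWords, stopwords("english"))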
2009 Nov 12
2
package "tm" fails to remove "the" with remove stopwords
I am using code that previously worked to remove stopwords using package
"tm". Even manually adding "the" to the list does not work to remove "the".
This package has undergone extensive redevelopment with changes to the
function syntax, so perhaps I am just missing something.
Please see my simple example, output, and sessionInfo() below.
Thanks!
Mark
require(tm)
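A minimal sketch of the usual fix (assuming a current tm, where plain transformations need content_transformer()): removeWords() is case-sensitive, so "The" survives unless the text is lower-cased first.

library(tm)

docs <- Corpus(VectorSource(c("The cat sat on the mat.", "The dog barked.")))
docs <- tm_map(docs, content_transformer(tolower))        # lower-case before removing
docs <- tm_map(docs, removeWords, stopwords("english"))   # "the" is in this list
inspect(DocumentTermMatrix(docs))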
2012 Oct 25
2
Text mining
Kind regards,
I am currently writing a function to plot a word cloud; the code I have is the following:
library(twitteR)
library(tm)
library(wordcloud)
library(RXKCD)
library(RColorBrewer)
tweets <- searchTwitter("@afflorezr", n = 1500)
generateCorpus <- function(tweets, my.stopwords = c(), min.freq) {
  # Install the text-mining library
  require(tm)
  require(wordcloud)
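For reference, a hedged sketch of what such a generateCorpus() helper often looks like; the body below is an assumption built around the snippet above, not the poster's actual function, and min.freq would typically be passed on to wordcloud() later.

generateCorpus <- function(tweets, my.stopwords = c(), min.freq = 1) {
  # Sketch only: assumes 'tweets' is a list of twitteR status objects.
  require(tm)
  txt <- sapply(tweets, function(t) t$getText())
  corpus <- Corpus(VectorSource(txt))
  corpus <- tm_map(corpus, content_transformer(tolower))
  corpus <- tm_map(corpus, removePunctuation)
  corpus <- tm_map(corpus, removeWords, c(stopwords("spanish"), my.stopwords))
  corpus
}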
2014 Jul 22
2
Help: Error in `colnames<-`(`*tmp*`, value = c(
Good afternoon, group.
I am trying to compare two files from the same organisation in order
to find the differences between its report on the subject from 2005
and the one from 2013:
All the commands go fine, except for the last one, "colnames", as can
be seen in the following sequence:
> pdf1<-"./PLAN de INSPECCIONES/05_seguridad_ciudadana.pdf"
>
2013 Sep 26
0
R hangs at NGramTokenizer
Hi:
I am trying to construct a document-term matrix from a corpus. The commands I used are:
> library(parallel)
> library(tm)
> library(RWeka)
> library(topicmodels)
> library(RTextTools)
> cl = makeCluster(detectCores())
> invisible(clusterEvalQ(cl, library(tm)))
> invisible(clusterEvalQ(cl, library(RWeka)))
> invisible(clusterEvalQ(cl, library(topicmodels)))
>
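A commonly suggested workaround for this hang (an assumption here, not a confirmed diagnosis of this exact setup) is to keep tm's internal apply single-threaded while using the Java-based RWeka tokenizer, and to skip the explicit cluster:

library(tm)
library(RWeka)

options(mc.cores = 1)   # keep tm_map/DocumentTermMatrix single-threaded

data("crude")
BigramTokenizer <- function(x) NGramTokenizer(x, Weka_control(min = 2, max = 2))
dtm <- DocumentTermMatrix(crude, control = list(tokenize = BigramTokenizer))
inspect(dtm[1:3, 1:5])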
2010 Mar 31
1
tm package - remove stopwords failing
Hi,
I just noticed, by inspecting the term matrix, that not all stopwords are
removed. Does someone know how to fix that?
library(tm)
data("crude")
d <- tm_map(crude, removeWords, stopwords(language = 'english'))
dt <- DocumentTermMatrix(d, control = list(minWordLength = 3, minDocFreq = 2))
inspect(dt)
I am using R version 2.10, tm package 0.5-3
cheers
Welma
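One way to see what is going on (a sketch against a current tm, not a verified fix for tm 0.5-3): lower-case the corpus before removeWords(), then check which stopwords still show up among the matrix terms.

library(tm)
data("crude")

d  <- tm_map(crude, content_transformer(tolower))   # removeWords is case-sensitive
d  <- tm_map(d, removeWords, stopwords("english"))
dt <- DocumentTermMatrix(d, control = list(wordLengths = c(3, Inf)))

intersect(Terms(dt), stopwords("english"))          # which stopwords survived?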
2012 Dec 13
2
Term-matrix size and memory. TM package
Hello everyone!
I am having some problems with the size of the term matrix I obtain. The commands I use are the following:
# load libraries
library(tm)
library(wordcloud)
library(Rstem)
library(Snowball)
# read the UTF-8 document and convert it to ASCII
txt <-
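When the term matrix is too large for memory, a common step (a sketch with an arbitrary sparsity threshold, not the poster's code) is to drop very sparse terms while the matrix is still in its sparse form, and only then convert it to a dense matrix:

library(tm)
data("crude")

tdm <- TermDocumentMatrix(crude)
dim(tdm)                                             # terms x documents, stored sparsely

tdm_small <- removeSparseTerms(tdm, sparse = 0.90)   # keep terms in >= ~10% of docs
dim(tdm_small)

m <- as.matrix(tdm_small)                            # dense conversion only after shrinking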
2017 Jun 12
0
count number of stop words in R
Thanks for your reply. I know that the command
data <- tm_map(data, removeWords, stopwords("english"))
removes English stop words, but I don't know how I should count the stop words in my string:
str="Mhm . Alright . There's um a young boy that's getting a cookie jar . And it he's uh in bad shape because uh the thing is falling over . And in the picture the mother is
2017 Jun 12
0
count number of stop words in R
Defining data as you mentioned in your reply causes the following error:
Error in UseMethod("tm_map", x) :
no applicable method for 'tm_map' applied to an object of class "character"
I can solve this error by using Corpus(VectorSource(my string)) and then using your command, but I cannot see the number of stop words in my string!
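A sketch of one way to count the stop words in a plain character string without building a corpus at all (it assumes simple \W+ tokenisation, which splits contractions such as "there's"):

library(tm)

my_string <- "Mhm . Alright . There's um a young boy that's getting a cookie jar ."

words <- tolower(unlist(strsplit(my_string, "\\W+")))   # crude tokenisation
words <- words[nzchar(words)]

sum(words %in% stopwords("english"))                    # how many tokens are stop words
words[words %in% stopwords("english")]                  # which ones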
2017 Jun 12
3
count number of stop words in R
define your string as whatever object you want:
data <- "Mhm . Alright . There's um a young boy that's getting a cookie jar . And it he's uh in bad shape because uh the thing is falling over . And in the picture the mother is washing dishes and doesn't see it . And so is the the water is overflowing in the sink . And the dishes might get falled over if you don't fell
2014 Jul 29
2
wordcloud and table of words [Making progress]
Good afternoon, group. Kind regards, Carlos J., and many thanks for
your guidance. Indeed, I had realised that the reason colnames was
not being applied was that the matrix had no columns. The trouble is
that I cannot quite see in which part of the corpus-creation process
this can be done.
However, following the example of
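For context, a hedged sketch of the step under discussion: if the two reports are read in as two documents of one corpus, the term-document matrix has two columns and colnames() can then be set. The objects texto_2005 and texto_2013 are placeholders for the extracted report texts.

library(tm)

docs <- Corpus(VectorSource(c(texto_2005, texto_2013)))   # placeholder objects
docs <- tm_map(docs, content_transformer(tolower))
docs <- tm_map(docs, removePunctuation)
docs <- tm_map(docs, removeWords, stopwords("spanish"))

tdm <- as.matrix(TermDocumentMatrix(docs))
colnames(tdm) <- c("2005", "2013")   # now there are columns to name
head(tdm)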
2012 Jan 27
2
tm package: handling contractions
I tried making a wordcloud of Obama's State of the Union address using
the tm package to process the text
sotu <- scan(file="c:/R/data/sotu2012.txt", what="character")
sotu <- tolower(sotu)
corp <- Corpus(VectorSource(paste(sotu, collapse=" ")))
corp <- tm_map(corp, removePunctuation)
corp <- tm_map(corp, stemDocument)
corp <- tm_map(corp,
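On the contractions point, removePunctuation() simply strips apostrophes, so "don't" becomes "dont". A sketch of one workaround (the replacement list is illustrative, not exhaustive) is to normalise contractions before removing punctuation:

library(tm)

fix_contractions <- function(x) {
  x <- gsub("won't", "will not", x, ignore.case = TRUE)
  x <- gsub("can't", "cannot",   x, ignore.case = TRUE)
  x <- gsub("n't",   " not",     x, ignore.case = TRUE)   # don't -> do not
  x <- gsub("'ll",   " will",    x, ignore.case = TRUE)
  x <- gsub("'re",   " are",     x, ignore.case = TRUE)
  x <- gsub("'ve",   " have",    x, ignore.case = TRUE)
  x <- gsub("'s",    "",         x, ignore.case = TRUE)   # crude: drops possessives too
  x
}

corp <- Corpus(VectorSource("We can't wait, and we won't quit."))
corp <- tm_map(corp, content_transformer(fix_contractions))
corp <- tm_map(corp, removePunctuation)
inspect(corp)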
2017 Jun 12
3
count number of stop words in R
You can define stop words as below.
data <- tm_map(data, removeWords, stopwords("english"))
Patrick Casimir, PhD
Health Analytics, Data Science, Big Data Expert & Independent Consultant
2014 Jul 25
3
wordcloud and table of words
Good evening, group. Kind regards.
I have kept looking for a way to compare two documents, from the
years 2005 and 2013, and finally to represent them with wordcloud and
with a table in which the columns are the years of each report,
"2005" and "2013", and the rows are the words with the frequency of
each one
2007 Nov 11
0
Stopwords in tm package
Hi to all,
I need to append/delete stopwords from the list that I can use from the
tm package. I use Portuguese stopwords.
When I see the list of stopwords using >stopwords("portuguese") I have
some words with special characters like this:
"verdadeiro" "voc??" "voc??s" "vos"
I try to change the portuguese.dat file from
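On the append/delete part, a sketch (the added and removed words are placeholders): stopwords() just returns a character vector, so it can be extended with c() and trimmed with setdiff() before being handed to removeWords(), without touching portuguese.dat.

library(tm)

pt <- stopwords("portuguese")

my_stopwords <- c(pt, "exemplo", "texto")       # append custom words (placeholders)
my_stopwords <- setdiff(my_stopwords, "vos")    # delete "vos" so it is kept as a term

docs <- Corpus(VectorSource("um exemplo de texto em portugues, vos sabeis"))
docs <- tm_map(docs, removeWords, my_stopwords)
inspect(docs)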
2014 Jul 28
2
wordcloud and table of words
Hello,
The reference you included (thanks for providing it) is quite clear
and can be followed.
Have you been able to apply the same logic to your two speeches?
The way to clear up doubts, to start with, would be for you to attach
the code you are using so we can see whether there is any obvious
error. Although the proper way for us to be able to help you is with
a reproducible example: code + data.
2011 Apr 18
0
Help with cleaning a corpus
Hi!
I created a corpus and started to clean it with this piece of code:
txt <- tm_map(txt, removeWords, stopwords("spanish"))
txt <- tm_map(txt, stripWhitespace)
txt <- tm_map(txt, tolower)
txt <- tm_map(txt, removeNumbers)
txt <- tm_map(txt, removePunctuation)
But something happened: some of the documents in the corpus became empty,
which is a problem when I try to make a
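A sketch of one way to drop the documents that end up empty after this kind of cleaning (assumes a current tm, where a corpus can be indexed like a list):

library(tm)

txt <- Corpus(VectorSource(c("el perro y el gato", "de la y a", "otro documento util")))
txt <- tm_map(txt, removeWords, stopwords("spanish"))
txt <- tm_map(txt, stripWhitespace)

# keep only documents that still contain a non-blank character
keep <- sapply(txt, function(d) any(nzchar(trimws(as.character(d)))))
txt  <- txt[keep]
length(txt)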
2012 Jan 13
4
Troubles with stemming (tm + Snowball packages) under MacOS
Dear all,
I am having some trouble using the stemming algorithm provided by the tm
(text mining) + Snowball packages.
Here is my config:
MacOS 10.5
R 2.12.0 / R 2.13.1 / R 2.14.1 (I have tried several versions)
I have installed all the needed packages (tm, rJava, RWeka, Snowball)
+ dependencies. I have deactivated AWT (as described in
2004 Dec 14
1
stopwords
Hi!
I would like to use the lists of stopwords provided with Xapian. Is
there some standard way to remove stopwords automatically, or should I
implement it myself in the indexer?
Regards,
Georges Dupret
2008 Mar 12
1
how can I use stopwords?
Hi,
I do not understand the stopword function...
I've set the termgenerator like this:
$self->{'Stemmer'} = new Search::Xapian::Stem(german2);
$self->{'Stopper'} = new Search::Xapian::SimpleStopper();
$self->{'TermGenerator'} = new Search::Xapian::TermGenerator;
$self->{'TermGenerator'}->set_stemmer( $self->{'Stemmer'} );