search for: paralelizable

Displaying 15 results from an estimated 15 matches for "paralelizable".

Did you mean: parallelizable
2015 Dec 09
2
SVM hadoop
Good morning. Does anyone know whether there is a way to implement a support vector machine (SVM) with R-hadoop? My interest is doing big-data processing with SVM. I know that in R the packages {RtextTools} and {e1071} provide SVM, but I am not sure whether the algorithm is parallelizable, that is, whether it can run in parallel on the R-hadoop platform. Many thanks. Regards, MªLuz Morales
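The crux of the question is whether e1071's svm() can run in parallel at all. As a point of reference, here is a minimal sketch of the single-machine baseline together with a bagging-style workaround that trains one sub-model per data chunk using the parallel package; the chunked-ensemble idea and the iris example are illustrative assumptions, not an R-hadoop integration.

    library(e1071)     # svm()
    library(parallel)  # mclapply()

    ## Baseline: a single, non-distributed SVM.
    fit <- svm(Species ~ ., data = iris, kernel = "radial")
    table(predict(fit, iris), iris$Species)

    ## Workaround sketch: libsvm itself is not parallel, so split the data
    ## into chunks, train one SVM per chunk on separate cores, and combine
    ## the sub-models by majority vote (a simple bagging ensemble).
    chunks <- split(iris, sample(rep(1:4, length.out = nrow(iris))))
    models <- mclapply(chunks, function(d) svm(Species ~ ., data = d),
                       mc.cores = 2)  # use mc.cores = 1 on Windows

    predict_ensemble <- function(models, newdata) {
      votes <- sapply(models, function(m) as.character(predict(m, newdata)))
      apply(votes, 1, function(v) names(which.max(table(v))))
    }
    head(predict_ensemble(models, iris))

Whether such an ensemble is an acceptable substitute for a single SVM fit depends on the application; it only parallelizes training across the cores of one machine, not across a Hadoop cluster.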
2015 Dec 10
2
SVM hadoop
... > My interest is doing big-data processing with SVM. I know that in R the packages {RtextTools} and {e1071} provide SVM, but I am not sure whether the algorithm is parallelizable, that is, whether it can run in parallel on the R-hadoop platform. > Many thanks. Regards ...
2015 Dec 10
3
SVM hadoop
...does anyone know whether there is a way to implement a support vector machine (SVM) with R-hadoop? > My interest is doing big-data processing with SVM. I know that in R the packages {RtextTools} and {e1071} provide SVM, but I am not sure whether the algorithm is parallelizable, that is, whether it can run in parallel on the R-hadoop platform. > Many thanks. Regards, MªLuz Morales > R-help-es mailing list > R-help-es en...
2015 Dec 11
2
SVM hadoop
...ge A. On 11 December 2015 at 08:49, MªLuz Morales <mlzmrls en gmail.com> wrote: > Hello, when you talk about the RStudio option on Amazon, do you mean via Hadoop? (That is the idea I have: using R with Hadoop on Amazon, but I need the SVM algorithm to be parallelizable...) > The other thing you mention, http://www.teraproc.com/front-page-posts/r-on-demand/, which parallelization environment does it use? I know Hadoop and Spark. > Thanks. Regards. > On 10 December 2015 at 16:03, Carlos Ortega <cof en qualityexcellence.es> ...
2019 Feb 07
6
Optimizing the identification of similar cases
Good day everyone. I would appreciate your help with the following: I have 100,000 records of people's names together with their document numbers, and I want to identify cases with a high percentage of similarity. Not 100%, because those I have already identified, but cases such as, for example: Name: Juan Pérez, Document: 123456789 versus Name: Juan Pérez, Document: 1234056789. This case would be a
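One common approach in R is fuzzy matching on edit distance. The sketch below uses only base R's adist(), with a simple blocking key so the 100,000 records are not compared all against all; the toy data, the three-character blocking key and the 0.9 similarity threshold are illustrative assumptions.

    ## Toy data standing in for the 100,000 records (illustrative only).
    records <- data.frame(
      nombre    = c("Juan Pérez", "Juan Pérez", "Ana Gómez", "Ana Gomes"),
      documento = c("123456789", "1234056789", "555777", "555778"),
      stringsAsFactors = FALSE
    )

    ## Element-wise similarity in [0, 1] based on Levenshtein distance.
    similarity <- function(a, b) {
      d <- mapply(function(x, y) adist(x, y), a, b)
      1 - d / pmax(nchar(a), nchar(b))
    }

    ## Block on the first three characters of the name so we only compare
    ## pairs within a block instead of every pair in the full data set.
    blocks <- split(seq_len(nrow(records)), substr(records$nombre, 1, 3))

    near_dups <- do.call(rbind, lapply(blocks, function(idx) {
      if (length(idx) < 2) return(NULL)
      pairs <- t(combn(idx, 2))
      keep  <- similarity(records$nombre[pairs[, 1]],
                          records$nombre[pairs[, 2]]) >= 0.9 &
               similarity(records$documento[pairs[, 1]],
                          records$documento[pairs[, 2]]) >= 0.9
      pairs[keep, , drop = FALSE]
    }))
    near_dups  # row-index pairs of candidate near-duplicates

If base adist() becomes a bottleneck at this scale, packages such as stringdist offer faster distance computations and approximate matching.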
2016 Oct 11
2
High performance
...a cluster of this kind is not trivial... there are examples discussed here and there of how to do it on Amazon EC2. o You can run machine-learning processes in parallel, but only those included in Spark's MLlib libraries, which are not all of the ones available on CRAN; not every algorithm is parallelizable. Clustering, glm, randomForest, gbm, survival, etc. are covered (http://spark.rstudio.com/mllib.html). It can also distribute jobs over H2O, which has essentially the same libraries as MLlib, with the addition of "deeplearning". o In this case...
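For reference, a minimal sparklyr sketch of the kind of MLlib call described above, assuming a local Spark installation; on a real cluster spark_connect() would point at YARN or a standalone master rather than "local", and the mtcars model is purely illustrative.

    library(sparklyr)
    library(dplyr)

    ## Connect to Spark; on a cluster this would be e.g. master = "yarn".
    sc <- spark_connect(master = "local")

    ## Copy an example data set into Spark and fit an MLlib linear model.
    mtcars_tbl <- copy_to(sc, mtcars, overwrite = TRUE)
    fit <- mtcars_tbl %>%
      ml_linear_regression(mpg ~ wt + cyl)

    summary(fit)
    spark_disconnect(sc)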
2016 Oct 11
2
High performance
Dear Carlos Gil Bellosta, how are you? On this side of South America spring is beginning; from the window I watch the grapevine, counting the would-be grapes, and there is always a bird that comes up to the window or even onto the computer, as if they knew how to use it. Now, back to R. In that scheme a linear model would have to go through MLlib, which is provided by sparklyr; in that case it would have
2018 Jan 30
0
[lldb-dev] Trying out lld to link windows binaries (using msvc as a compiler)
...bug information in one .pdb file, AFAIK. That would make our links much faster, I think, since people are either changing headers (and then they know they have to wait) or changing a single/few .cpp files. It would be great to group our 3k obj files' debug information into groups so that this linking step can be parallelized. Is there maybe support for merging PDBs with a PDB utility and then feeding that to lld-link instead of the .obj debug info? I also re-read the post about ghash; it says Blink links in 88s; the 28s you talk about is with unreleased optimizations only? On Tue, Jan 30, 2018 at 5:54 AM, Zachary Turner...
2018 Jan 30
2
[lldb-dev] Trying out lld to link windows binaries (using msvc as a compiler)
>> ...file, AFAIK. That would make our links much faster, I think, since people are either changing headers (and then they know they have to wait) or changing a single/few .cpp files. It would be great to group our 3k obj files' debug information into groups so that this linking step can be parallelized. >> Is there maybe support for merging PDBs with a PDB utility and then feeding that to lld-link instead of the .obj debug info? >> I also re-read the post about ghash; it says Blink links in 88s; the 28s you talk about is with unreleased optimizations only? ...
2018 Jan 30
0
[lldb-dev] Trying out lld to link windows binaries (using msvc as a compiler)
> ...one .pdb file, AFAIK. That would make our links much faster, I think, since people are either changing headers (and then they know they have to wait) or changing a single/few .cpp files. It would be great to group our 3k obj files' debug information into groups so that this linking step can be parallelized. > Is there maybe support for merging PDBs with a PDB utility and then feeding that to lld-link instead of the .obj debug info? > I also re-read the post about ghash; it says Blink links in 88s; the 28s you talk about is with unreleased optimizations only? > On Tue, Jan 30,...
2018 Jan 31
0
[lldb-dev] Trying out lld to link windows binaries (using msvc as a compiler)
>>> ...make our links much faster, I think, since people are either changing headers (and then they know they have to wait) or changing a single/few .cpp files. It would be great to group our 3k obj files' debug information into groups so that this linking step can be parallelized. >>> Is there maybe support for merging PDBs with a PDB utility and then feeding that to lld-link instead of the .obj debug info? >>> I also re-read the post about ghash; it says Blink links in 88s; the 28s you talk about is with unreleased optimizations only...
2018 Jan 31
2
[lldb-dev] Trying out lld to link windows binaries (using msvc as a compiler)
>>>> ...make our links much faster, I think, since people are either changing headers (and then they know they have to wait) or changing a single/few .cpp files. It would be great to group our 3k obj files' debug information into groups so that this linking step can be parallelized. >>>> Is there maybe support for merging PDBs with a PDB utility and then feeding that to lld-link instead of the .obj debug info? >>>> I also re-read the post about ghash; it says Blink links in 88s; the 28s you talk about is...
2018 Jan 30
4
[lldb-dev] Trying out lld to link windows binaries (using msvc as a compiler)
You can make a PDB per lib (consider msvcrtd.pdb, which ships with MSVC), but all these per-lib PDBs would have to be merged into a single master PDB at the end, so you still can't avoid that final merge. In a way, that's similar to the idea behind /DEBUG:FASTLINK (keep the debug info in object files to eliminate the cost of merging types and symbol records), and we know what the problems with
2018 Jan 31
1
[lldb-dev] Trying out lld to link windows binaries (using msvc as a compiler)
>>>> ...as people are either changing headers (and then they know they have to wait) or changing a single/few .cpp files. It would be great to group our 3k obj files' debug information into groups so that this linking step can be parallelized. >>>> Is there maybe support for merging PDBs with a PDB utility and then feeding that to lld-link instead of the .obj debug info? >>>> I also re-read the post about ghash; it says Blink links in 88s, the...
2018 Feb 14
0
[lldb-dev] Trying out lld to link windows binaries (using msvc as a compiler)
>> >>>> ...then they know they have to wait) or changing a single/few .cpp files. It would be great to group our 3k obj files' debug information into groups so that this linking step can be parallelized. >> >>>> Is there maybe support for merging PDBs with a PDB utility and then feeding that to lld-link instead of the .obj debug info? >> >>>> I also re-read the post about ghash and it says b...