I have a CSV file with one document per row, and I need to run LDA on it. I have the following code:
library(tm)
library(SnowballC)
library(topicmodels)
library(RWeka)
X = read.csv('doc.csv',sep=",",quote="\"",stringsAsFactors=FALSE)
corpus <- Corpus(VectorSource(X))
corpus <- tm_map(tm_map(tm_map(corpus, stripWhitespace), tolower), stemDocument)
corpus <- tm_map(corpus, PlainTextDocument)
BigramTokenizer <- function(x) NGramTokenizer(x, Weka_control(min = 2, max = 2))
dtm <- DocumentTermMatrix(corpus, control = list(tokenize=BigramTokenizer,weighting=weightTfIdf))
Inspecting the dtm object at this point gives:
<<DocumentTermMatrix (documents: 52, terms: 477)>>
Non-/sparse entries: 492/24312
Sparsity : 98%
Maximal term length: 20
Weighting : term frequency - inverse document frequency (normalized) (tf-idf)
Now I run LDA on this:
rowTotals <- apply(dtm , 1, sum)
dtm.new <- dtm[rowTotals> 0, ]
g = LDA(dtm.new,10,method = 'VEM',control=NULL,model=NULL)
I get the following error:
Error in LDA(dtm.new, 10, method = "VEM", control = NULL, model = NULL) :
The DocumentTermMatrix needs to have a term frequency weighting
The document-term matrix clearly has a weighting applied. What exactly am I doing wrong? Please help.
The error means exactly what it says: LDA is a generative model over raw term counts, so the DocumentTermMatrix must use plain term-frequency weighting (weightTf), not tf-idf. Rebuild the matrix like this:
DocumentTermMatrix(corpus,
                   control = list(tokenize = BigramTokenizer,
                                  weighting = weightTf))
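Putting it together, a minimal end-to-end sketch might look like the following. This assumes doc.csv has a text column named "text" (the column name is an assumption; adjust it to your file), and it also passes the character column to VectorSource rather than the whole data frame, which is the usual tm idiom:

```r
library(tm)
library(topicmodels)

# Assumption: doc.csv has a column "text" holding one document per row.
X <- read.csv("doc.csv", stringsAsFactors = FALSE)
corpus <- Corpus(VectorSource(X$text))  # pass the character vector, not the data frame

# Basic cleanup; wrap base functions like tolower in content_transformer
corpus <- tm_map(corpus, content_transformer(tolower))
corpus <- tm_map(corpus, stripWhitespace)

# Plain term-frequency counts -- this is the weighting LDA expects
dtm <- DocumentTermMatrix(corpus, control = list(weighting = weightTf))

# Drop empty documents before fitting, as in the question
rowTotals <- apply(dtm, 1, sum)
dtm <- dtm[rowTotals > 0, ]

g <- LDA(dtm, k = 10, method = "VEM")
terms(g, 5)  # inspect the top 5 terms per topic
```

The bigram tokenizer from the question can still be passed via control = list(tokenize = BigramTokenizer, weighting = weightTf) if RWeka is available; it is omitted here only to keep the sketch self-contained.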