How do I configure Apache Flume 1.4.0 to fetch data from Twitter and put it into HDFS (Apache Hadoop 2.5)?

Harshit Sharma

I am using Ubuntu 14.04, and my configuration file is as follows:

TwitterAgent.sources = Twitter
TwitterAgent.channels = MemChannel
TwitterAgent.sinks = HDFS
TwitterAgent.sources.Twitter.type = com.cloudera.flume.source.TwitterSource
TwitterAgent.sources.Twitter.channels = MemChannel
TwitterAgent.sources.Twitter.consumerKey = Q5JF4gVmrahNk93C913GjgJgB
TwitterAgent.sources.Twitter.consumerSecret = GFM6F0QuqEHn1eKpL1k4CHwdecEp626xLepajp9CAbtRBxEVCC
TwitterAgent.sources.Twitter.accessToken = 152956374-hTFXO9g1RBSn1yikmi2mQClilZe2PqnyqphFQh9t
TwitterAgent.sources.Twitter.accessTokenSecret = SODGEbkQvHYzZMtPsWoI2k9ZKiAd7q21ebtG3SNMu3Y0a
TwitterAgent.sources.Twitter.keywords = hadoop, big data, analytics, bigdata, cloudera, data science, data scientiest, business intelligence, mapreduce, data warehouse, data warehousing, mahout, hbase, nosql, newsql, businessintelligence, cloudcomputing

TwitterAgent.sinks.HDFS.type = hdfs
TwitterAgent.sinks.HDFS.channel = MemChannel
TwitterAgent.sinks.HDFS.hdfs.path = hdfs://localhost:9000/user/flume/tweets/
TwitterAgent.sinks.HDFS.hdfs.fileType = DataStream
TwitterAgent.sinks.HDFS.hdfs.writeFormat = Text
# Number of events written to a file before it is flushed to HDFS (default: 100)
TwitterAgent.sinks.HDFS.hdfs.batchSize = 1000
#File size to trigger roll, in bytes (0: never roll based on file size)
TwitterAgent.sinks.HDFS.hdfs.rollSize = 0
# Number of events written to a file before it is rolled (0 = never roll based on number of events)
TwitterAgent.sinks.HDFS.hdfs.rollCount = 10000
TwitterAgent.channels.MemChannel.type = memory
# The maximum number of events stored in the channel
TwitterAgent.channels.MemChannel.capacity = 10000
# The maximum number of events the channel will take from a source or give to a sink per transaction
TwitterAgent.channels.MemChannel.transactionCapacity = 100

I am running the following command in the terminal:

hadoopuser@Hotshot:/usr/lib/flume-ng/apache-flume-1.4.0-bin/bin$ ./flume-ng agent --conf ./conf/ -f /usr/lib/flume-ng/apache-flume-1.4.0-bin/conf/flume.conf -Dflume.root.logger=DEBUG,console -n TwitterAgent

I get the following error:

14/10/10 17:24:12 INFO instrumentation.MonitoredCounterGroup: Component type: SINK, name: HDFS started
14/10/10 17:24:12 INFO twitter4j.TwitterStreamImpl: Establishing connection.
14/10/10 17:24:22 INFO twitter4j.TwitterStreamImpl: Connection established.
14/10/10 17:24:22 INFO twitter4j.TwitterStreamImpl: Receiving status stream.
14/10/10 17:24:22 INFO hdfs.HDFSDataStream: Serializer = TEXT, UseRawLocalFileSystem = false
14/10/10 17:24:22 INFO hdfs.BucketWriter: Creating hdfs://localhost:9000/user/flume/tweets//FlumeData.1412942062375.tmp
14/10/10 17:24:22 ERROR hdfs.HDFSEventSink: process failed
java.lang.VerifyError: class org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$RecoverLeaseRequestProto overrides final method getUnknownFields.()Lcom/google/protobuf/UnknownFieldSet;
    at java.lang.ClassLoader.defineClass1(Native Method)
    at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
    at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
    at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
    at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
    at java.lang.Class.getDeclaredMethods0(Native Method)
    at java.lang.Class.privateGetDeclaredMethods(Class.java:2570)
    at java.lang.Class.privateGetPublicMethods(Class.java:2690)
    at java.lang.Class.privateGetPublicMethods(Class.java:2700)
    at java.lang.Class.getMethods(Class.java:1467)
    at sun.misc.ProxyGenerator.generateClassFile(ProxyGenerator.java:426)
    at sun.misc.ProxyGenerator.generateProxyClass(ProxyGenerator.java:323)
    at java.lang.reflect.Proxy$ProxyClassFactory.apply(Proxy.java:672)
    at java.lang.reflect.Proxy$ProxyClassFactory.apply(Proxy.java:592)
    at java.lang.reflect.WeakCache$Factory.get(WeakCache.java:244)
    at java.lang.reflect.WeakCache.get(WeakCache.java:141)
    at java.lang.reflect.Proxy.getProxyClass0(Proxy.java:455)
    at java.lang.reflect.Proxy.newProxyInstance(Proxy.java:738)
    at org.apache.hadoop.ipc.ProtobufRpcEngine.getProxy(ProtobufRpcEngine.java:92)
    at org.apache.hadoop.ipc.RPC.getProtocolProxy(RPC.java:537)
    at org.apache.hadoop.hdfs.NameNodeProxies.createNNProxyWithClientProtocol(NameNodeProxies.java:366)
    at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:262)
    at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:153)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:602)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:547)
    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:139)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2591)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2625)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2607)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
    at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:226)
    at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:220)
    at org.apache.flume.sink.hdfs.BucketWriter$8$1.run(BucketWriter.java:536)
    at org.apache.flume.sink.hdfs.BucketWriter.runPrivileged(BucketWriter.java:160)
    at org.apache.flume.sink.hdfs.BucketWriter.access$1000(BucketWriter.java:56)
    at org.apache.flume.sink.hdfs.BucketWriter$8.call(BucketWriter.java:533)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

Is there a compatibility problem between these versions of Apache Flume and Apache Hadoop? I couldn't find any helpful resources on installing Apache Flume 1.5.1. If there is no compatibility problem, what should I do to get the tweets into HDFS?

雁鹏

Hadoop uses Protobuf 2.5:

hadoop-project/pom.xml:    <protobuf.version>2.5.0</protobuf.version>

Code generated with protobuf 2.5 is binary-incompatible with older protobuf libraries. Unfortunately, the current stable release of Flume, 1.4, bundles protobuf 2.4.1. You can work around this by moving protobuf (and guava) out of Flume's lib directory, so that the Hadoop client's newer classes are loaded instead.
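The workaround can be sketched as a short shell snippet. On the asker's machine the real directory would be `/usr/lib/flume-ng/apache-flume-1.4.0-bin/lib`; here the move is simulated in a temporary directory so the snippet is safe to run anywhere, and the exact jar names/versions and the `removed` backup directory are assumptions (check what is actually in your `lib/`):

```shell
# Move the bundled protobuf and guava jars out of Flume's lib directory so the
# Hadoop 2.5 client's protobuf 2.5 classes are loaded instead.
# Simulated here in a temp dir; on a real install, set FLUME_LIB to
# /usr/lib/flume-ng/apache-flume-1.4.0-bin/lib (jar versions are assumptions).
FLUME_LIB=$(mktemp -d)
touch "$FLUME_LIB/protobuf-java-2.4.1.jar" \
      "$FLUME_LIB/guava-10.0.1.jar" \
      "$FLUME_LIB/flume-ng-core-1.4.0.jar"

# Park the conflicting jars in a sibling directory instead of deleting them,
# so the change is easy to revert.
mkdir -p "$FLUME_LIB/removed"
mv "$FLUME_LIB"/protobuf-java-*.jar "$FLUME_LIB"/guava-*.jar "$FLUME_LIB/removed/"

ls "$FLUME_LIB"
```

After moving the jars on a real install, restart the agent with the same `flume-ng` command; Flume should then pick up protobuf 2.5 from the Hadoop classpath.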

