
【Alibaba Cloud】Lab 3-2.2: The Hadoop cluster and HDFS start normally, but files cannot be uploaded from the local machine to HDFS. #12

Open
XuBoIsBest opened this issue Oct 25, 2020 · 1 comment

Comments


XuBoIsBest commented Oct 25, 2020


While doing this lab today, I ran into a problem. The exact error is shown below:

[screenshot: error message]

The solution is as follows:
(1) Use the `jps` and `hdfs dfsadmin -report` commands to check that the processes and all nodes are healthy.
(2) Check the HDFS metadata directories and make sure the clusterID values in the VERSION files of the NameNode and the DataNodes are identical.

[screenshot: NameNode and DataNode VERSION files]
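Step (2) can be scripted. The paths in the usage comment below are assumptions based on common defaults; the real locations come from `dfs.namenode.name.dir` and `dfs.datanode.data.dir` in your hdfs-site.xml:

```shell
#!/usr/bin/env bash
# Compare the clusterID recorded in the NameNode and DataNode VERSION files.
# A mismatch (e.g. after reformatting the NameNode without clearing the
# DataNode data directory) prevents DataNodes from registering with the
# NameNode, so uploads fail even though the daemons are running.
same_cluster_id() {
  nn_id=$(grep '^clusterID=' "$1" | cut -d= -f2)
  dn_id=$(grep '^clusterID=' "$2" | cut -d= -f2)
  echo "NameNode clusterID: $nn_id"
  echo "DataNode clusterID: $dn_id"
  if [ "$nn_id" = "$dn_id" ]; then echo MATCH; else echo MISMATCH; fi
}

# Typical usage on a real cluster (paths are placeholders -- check your
# hdfs-site.xml):
# same_cluster_id /opt/hadoop/tmp/dfs/name/current/VERSION \
#                 /opt/hadoop/tmp/dfs/data/current/VERSION
```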

(3) If everything above looks normal, edit the configuration file hdfs-site.xml and add the following settings:

[screenshot: configuration added to hdfs-site.xml]
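The screenshot with the exact properties is not preserved here. A pair of settings commonly added on cloud-hosted clusters (such as Alibaba Cloud ECS, where clients reach DataNodes through public hostnames rather than the internal IPs the NameNode reports) is sketched below; this is an assumption about the kind of change shown, not a transcription of the screenshot:

```xml
<!-- hdfs-site.xml: sketch of a common cloud-cluster fix (assumed, not
     taken from the original screenshot) -->
<property>
  <!-- Have clients connect to DataNodes by hostname instead of the
       internal IP address reported by the NameNode. -->
  <name>dfs.client.use.datanode.hostname</name>
  <value>true</value>
</property>
<property>
  <!-- Have DataNodes register and connect to each other by hostname. -->
  <name>dfs.datanode.use.datanode.hostname</name>
  <value>true</value>
</property>
```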
If it still does not work after these steps, you may run into the following error:

[screenshot: subsequent error message]

This means port 50010 on the DataNode is not open. Note that it is the DataNode, not the NameNode: you need to add port 50010 to the DataNode's security group, as shown:

[screenshot: security group rule opening port 50010]
After that, re-uploading the file succeeded.
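After adding the security-group rule, you can confirm from the client machine that the port is now reachable. A minimal sketch using bash's `/dev/tcp` redirection (substitute your DataNode's address for the placeholder):

```shell
#!/usr/bin/env bash
# Probe a TCP port and print "open" or "closed".
# 50010 is the default DataNode data-transfer port in Hadoop 2.x
# (dfs.datanode.address).
check_port() {
  host=$1; port=$2
  # /dev/tcp/<host>/<port> is a bash built-in pseudo-device; the exec
  # fails if the connection cannot be established within the timeout.
  if timeout 3 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
    echo open
  else
    echo closed
  fi
}

# check_port <your-datanode-host> 50010   # placeholder hostname
```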

@Wanghui-Huang changed the title to "Lab 3-2.2: The Hadoop cluster and HDFS start normally, but files cannot be uploaded from the local machine to HDFS." Oct 26, 2020

zhper commented Nov 9, 2022

Hi, about point (1): what kind of result counts as normal? And for point (2), how do you view the HDFS metadata directory?
