Error deploying an EKS node group with Terraform

Borja Ortiz

I'm running into a problem deploying a node group in an EKS cluster with Terraform. The error seems to come from a plugin, but I don't know how to fix it.

If I look at EC2 in the AWS console (web), I can see the cluster's instances, but the cluster still reports this error.

This is the error shown in my pipeline:

Error: error waiting for EKS Node Group (UNIR-API-REST-CLUSTER-DEV:node_sping_boot) creation: NodeCreationFailure: Instances failed to join the kubernetes cluster. Resource IDs: [i-05ed58f8101240dc8]
  on EKS.tf line 17, in resource "aws_eks_node_group" "nodes":
  17: resource "aws_eks_node_group" "nodes"
2020-06-01T00:03:50.576Z [DEBUG] plugin: plugin process exited: path=/home/ubuntu/.jenkins/workspace/shop_infraestucture_generator_pipline/shop-proyect-dev/.terraform/plugins/linux_amd64/terraform-provider-aws_v2.64.0_x4 pid=13475
2020-06-01T00:03:50.576Z [DEBUG] plugin: plugin exited

And the AWS console shows this error:

(link)

This is the Terraform code I'm using to create the project:

EKS.tf, which creates the cluster and nodes:

resource "aws_eks_cluster" "CLUSTER" {
  name     = "UNIR-API-REST-CLUSTER-${var.SUFFIX}"
  role_arn = "${aws_iam_role.eks_cluster_role.arn}"
  vpc_config {
    subnet_ids = [
      "${aws_subnet.unir_subnet_cluster_1.id}","${aws_subnet.unir_subnet_cluster_2.id}"
    ]
  }
  depends_on = [
    "aws_iam_role_policy_attachment.AmazonEKSWorkerNodePolicy",
    "aws_iam_role_policy_attachment.AmazonEKS_CNI_Policy",
    "aws_iam_role_policy_attachment.AmazonEC2ContainerRegistryReadOnly",
  ]
}


resource "aws_eks_node_group" "nodes" {
  cluster_name    = "${aws_eks_cluster.CLUSTER.name}"
  node_group_name = "node_sping_boot"
  node_role_arn   = "${aws_iam_role.eks_nodes_role.arn}"
  subnet_ids      = [
      "${aws_subnet.unir_subnet_cluster_1.id}","${aws_subnet.unir_subnet_cluster_2.id}"
  ]
  scaling_config {
    desired_size = 1
    max_size     = 5
    min_size     = 1
  }
# instance_types defaults to t3.medium
# Ensure that IAM Role permissions are created before and deleted after EKS Node Group handling.
# Otherwise, EKS will not be able to properly delete EC2 Instances and Elastic Network Interfaces.
  depends_on = [
    "aws_iam_role_policy_attachment.AmazonEKSWorkerNodePolicy",
    "aws_iam_role_policy_attachment.AmazonEKS_CNI_Policy",
    "aws_iam_role_policy_attachment.AmazonEC2ContainerRegistryReadOnly",
  ]
}

output "eks_cluster_endpoint" {
  value = "${aws_eks_cluster.CLUSTER.endpoint}"
}

output "eks_cluster_certificat_authority" {
    value = "${aws_eks_cluster.CLUSTER.certificate_authority}"
}

securityAndGroups.tf

resource "aws_iam_role" "eks_cluster_role" {
  name = "eks-cluster-${var.SUFFIX}"

  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "eks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
POLICY
}


resource "aws_iam_role" "eks_nodes_role" {
  name = "eks-node-${var.SUFFIX}"

  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
POLICY
}


resource "aws_iam_role_policy_attachment" "AmazonEKSClusterPolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
  role       = "${aws_iam_role.eks_cluster_role.name}"
}

resource "aws_iam_role_policy_attachment" "AmazonEKSServicePolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSServicePolicy"
  role       = "${aws_iam_role.eks_cluster_role.name}"
}

resource "aws_iam_role_policy_attachment" "AmazonEKSWorkerNodePolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
  role       = "${aws_iam_role.eks_nodes_role.name}"
}

resource "aws_iam_role_policy_attachment" "AmazonEKS_CNI_Policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
  role       = "${aws_iam_role.eks_nodes_role.name}"
}

resource "aws_iam_role_policy_attachment" "AmazonEC2ContainerRegistryReadOnly" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
  role       = "${aws_iam_role.eks_nodes_role.name}"
}

VPCAndRouting.tf, which creates my routing, VPC, and subnets:

resource "aws_vpc" "unir_shop_vpc_dev" {
  cidr_block = "${var.NET_CIDR_BLOCK}"
  enable_dns_hostnames = true
  enable_dns_support = true
  tags = {
    Name = "UNIR-VPC-SHOP-${var.SUFFIX}"
    Environment = "${var.SUFFIX}"
  }
}
resource "aws_route_table" "route" {
  vpc_id = "${aws_vpc.unir_shop_vpc_dev.id}"
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = "${aws_internet_gateway.unir_gat_shop_dev.id}"
  }
  tags = {
    Name = "UNIR-RoutePublic-${var.SUFFIX}"
    Environment = "${var.SUFFIX}"
  }
}

data "aws_availability_zones" "available" {
  state = "available"
}
resource "aws_subnet" "unir_subnet_aplications" {
  vpc_id = "${aws_vpc.unir_shop_vpc_dev.id}"
  cidr_block = "${var.SUBNET_CIDR_APLICATIONS}"
  availability_zone = "${var.ZONE_SUB}"
  depends_on = ["aws_internet_gateway.unir_gat_shop_dev"]
  map_public_ip_on_launch = true
  tags = {
    Name = "UNIR-SUBNET-APLICATIONS-${var.SUFFIX}"
    Environment = "${var.SUFFIX}"
  }
}

resource "aws_subnet" "unir_subnet_cluster_1" {
  vpc_id = "${aws_vpc.unir_shop_vpc_dev.id}"
  cidr_block = "${var.SUBNET_CIDR_CLUSTER_1}"
  map_public_ip_on_launch = true
  availability_zone = "${var.ZONE_SUB_CLUSTER_2}"
  tags = {
    "kubernetes.io/cluster/UNIR-API-REST-CLUSTER-${var.SUFFIX}" = "shared"
  }
}

resource "aws_subnet" "unir_subnet_cluster_2" {
  vpc_id = "${aws_vpc.unir_shop_vpc_dev.id}"
  cidr_block = "${var.SUBNET_CIDR_CLUSTER_2}"
  availability_zone = "${var.ZONE_SUB_CLUSTER_1}"
  map_public_ip_on_launch = true
  tags = {
    "kubernetes.io/cluster/UNIR-API-REST-CLUSTER-${var.SUFFIX}" = "shared"
  }

}

resource "aws_internet_gateway" "unir_gat_shop_dev" {
  vpc_id = "${aws_vpc.unir_shop_vpc_dev.id}"
  tags = {
    Environment = "${var.SUFFIX}"
    Name = "UNIR-publicGateway-${var.SUFFIX}"
  }
}

My variables:

SUFFIX="DEV"
ZONE="eu-west-1"
TERRAFORM_USER_ID=
TERRAFORM_USER_PASS=
ZONE_SUB="eu-west-1b"
ZONE_SUB_CLUSTER_1="eu-west-1a"
ZONE_SUB_CLUSTER_2="eu-west-1c"
NET_CIDR_BLOCK="172.15.0.0/24"
SUBNET_CIDR_APLICATIONS="172.15.0.0/27"
SUBNET_CIDR_CLUSTER_1="172.15.0.32/27"
SUBNET_CIDR_CLUSTER_2="172.15.0.64/27"
SUBNET_CIDR_CLUSTER_3="172.15.0.128/27"
SUBNET_CIDR_CLUSTER_4="172.15.0.160/27"
SUBNET_CIDR_CLUSTER_5="172.15.0.192/27"
SUBNET_CIDR_CLUSTER_6="172.15.0.224/27"
MONGO_SSH_KEY=
KIBANA_SSH_KEY=
CLUSTER_SSH_KEY=

Do you need any more logs?

Salim

According to the AWS documentation:

If you receive the error "Instances failed to join the kubernetes cluster" in the AWS Management Console, ensure that either the cluster's private endpoint access is enabled, or that you have correctly configured CIDR blocks for public endpoint access. For more information, see Amazon EKS cluster endpoint access control.
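Following that guidance, a minimal sketch of what enabling private endpoint access could look like on your existing cluster resource (this mirrors your `aws_eks_cluster` block; `endpoint_private_access` and `endpoint_public_access` are arguments of the provider's `vpc_config` block, and I'm assuming you want to keep public access for kubectl):

```hcl
# Sketch only: with endpoint_private_access enabled, worker nodes inside
# the VPC can reach the Kubernetes API over the private endpoint, which
# is one common fix for NodeCreationFailure.
resource "aws_eks_cluster" "CLUSTER" {
  name     = "UNIR-API-REST-CLUSTER-${var.SUFFIX}"
  role_arn = "${aws_iam_role.eks_cluster_role.arn}"
  vpc_config {
    subnet_ids = [
      "${aws_subnet.unir_subnet_cluster_1.id}","${aws_subnet.unir_subnet_cluster_2.id}"
    ]
    endpoint_private_access = true  # nodes join via the in-VPC endpoint
    endpoint_public_access  = true  # keep kubectl access from outside the VPC
  }
}
```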

I also noticed that you're swapping the subnets' availability zones:

resource "aws_subnet" "unir_subnet_cluster_1" {
  vpc_id = "${aws_vpc.unir_shop_vpc_dev.id}"
  cidr_block = "${var.SUBNET_CIDR_CLUSTER_1}"
  map_public_ip_on_launch = true
  availability_zone = "${var.ZONE_SUB_CLUSTER_2}"

You have assigned var.ZONE_SUB_CLUSTER_2 to unir_subnet_cluster_1, and var.ZONE_SUB_CLUSTER_1 to unir_subnet_cluster_2. Perhaps this mix-up is the cause of the misconfiguration.
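For clarity, a sketch of the un-swapped assignment (same resources as in your VPCAndRouting.tf, with only the `availability_zone` values exchanged so each variable name matches the subnet it configures):

```hcl
# Subnet 1 pinned to ZONE_SUB_CLUSTER_1 and subnet 2 to ZONE_SUB_CLUSTER_2.
resource "aws_subnet" "unir_subnet_cluster_1" {
  vpc_id = "${aws_vpc.unir_shop_vpc_dev.id}"
  cidr_block = "${var.SUBNET_CIDR_CLUSTER_1}"
  map_public_ip_on_launch = true
  availability_zone = "${var.ZONE_SUB_CLUSTER_1}"
  tags = {
    "kubernetes.io/cluster/UNIR-API-REST-CLUSTER-${var.SUFFIX}" = "shared"
  }
}

resource "aws_subnet" "unir_subnet_cluster_2" {
  vpc_id = "${aws_vpc.unir_shop_vpc_dev.id}"
  cidr_block = "${var.SUBNET_CIDR_CLUSTER_2}"
  map_public_ip_on_launch = true
  availability_zone = "${var.ZONE_SUB_CLUSTER_2}"
  tags = {
    "kubernetes.io/cluster/UNIR-API-REST-CLUSTER-${var.SUFFIX}" = "shared"
  }
}
```

This doesn't change which zones are used overall, but it keeps the configuration readable and rules out the swap as a source of confusion.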
