Question: How do I route between two subnets in an AWS VPC with Terraform?


Update: I've been struggling with this and can't seem to get a working configuration with two subnets and an SSH bastion. Offering the bounty for a complete .tf file configuration that:
* creates two private subnets
* creates a bastion host
* spins up an EC2 instance on each subnet, provisioned through the bastion (running some arbitrary shell commands through the bastion)
* has an internet gateway configured
* has a NAT gateway for the hosts on the private subnets
* has the corresponding routes and security groups configured

Original post: I'm trying to learn Terraform and build a prototype. I have an AWS VPC provisioned via Terraform. In addition to a DMZ subnet, I have a public subnet 'web' that receives traffic from the internet. I have a private subnet 'app' that is not reachable from the internet. I'm trying to configure a bastion host so that Terraform can provision instances on the private 'app' subnet. I haven't been able to get this working yet.

When I SSH into the bastion, I cannot SSH from the bastion host to any instance in the private subnet. I suspect a routing problem. I built this prototype from several of the available examples and docs, and many of them use slightly different techniques and Terraform route definitions via the AWS provider.

Can someone show the ideal or correct way to define these three subnets (public 'web', public 'dmz' with the bastion, and private 'app') so that instances on the 'web' subnet can reach the 'app' subnet, and the bastion host in the DMZ can provision instances in the private 'app' subnet?

A snippet of my configuration is below:

resource "aws_subnet" "dmz" {
    vpc_id = "${aws_vpc.vpc-poc.id}"
    cidr_block = "${var.cidr_block_dmz}"
}

resource "aws_route_table" "dmz" {
    vpc_id = "${aws_vpc.vpc-poc.id}"
    route {
        cidr_block = "0.0.0.0/0"
        gateway_id = "${aws_internet_gateway.gateway.id}"
    }
}

resource "aws_route_table_association" "dmz" {
    subnet_id = "${aws_subnet.dmz.id}"
    route_table_id = "${aws_route_table.dmz.id}"
}

resource "aws_subnet" "web" {
    vpc_id = "${aws_vpc.vpc-poc.id}"
    cidr_block = "10.200.2.0/24"
}

resource "aws_route_table" "web" {
    vpc_id = "${aws_vpc.vpc-poc.id}"
    route {
        cidr_block = "0.0.0.0/0"
        instance_id = "${aws_instance.bastion.id}"
    }
}

resource "aws_route_table_association" "web" {
    subnet_id = "${aws_subnet.web.id}"
    route_table_id = "${aws_route_table.web.id}"
}

resource "aws_subnet" "app" {
    vpc_id = "${aws_vpc.vpc-poc.id}"
    cidr_block = "10.200.3.0/24"
}

resource "aws_route_table" "app" {
    vpc_id = "${aws_vpc.vpc-poc.id}"
    route {
        cidr_block = "0.0.0.0/0"
        instance_id = "${aws_instance.bastion.id}"
    }
}

resource "aws_route_table_association" "app" {
    subnet_id = "${aws_subnet.app.id}"
    route_table_id = "${aws_route_table.app.id}"
}

10183
2018-03-06 03:39



If you want more help you'll need to expand your TF file to also show any security groups/NACLs etc., because I don't think routing is the problem here (other than the missing outbound web access from the private subnet, unless your bastion box also acts as a NAT gateway). - ydaetskcoR


Answers:


Here is a snippet that may help you. It is untested, but pulled from one of my Terraform files where I provision VMs in a private subnet. I know this works for one private subnet; here I've tried to lay out two, as in your original question.

I hop through my NAT instance to reach and provision the private-subnet boxes with Terraform. It does work if your security groups are set up right. That took me some experimentation.

/* VPC creation */
resource "aws_vpc" "vpc_poc" {
  cidr_block = "10.200.0.0/16"
}

/* Internet gateway for the public subnets */
resource "aws_internet_gateway" "gateway" {
  vpc_id = "${aws_vpc.vpc_poc.id}"
}

/* DMZ subnet - public */
resource "aws_subnet" "dmz" {
    vpc_id = "${aws_vpc.vpc_poc.id}"
    cidr_block = "10.200.1.0/24"
    /* may help to be explicit here */
    map_public_ip_on_launch = true
    /* this is recommended in the docs */
    depends_on = ["aws_internet_gateway.gateway"]
}

resource "aws_route_table" "dmz" {
    vpc_id = "${aws_vpc.vpc_poc.id}"
    route {
        cidr_block = "0.0.0.0/0"
        gateway_id = "${aws_internet_gateway.gateway.id}"
    }
}

resource "aws_route_table_association" "dmz" {
    subnet_id = "${aws_subnet.dmz.id}"
    route_table_id = "${aws_route_table.dmz.id}"
}

/* Web subnet - public */
resource "aws_subnet" "web" {
    vpc_id = "${aws_vpc.vpc_poc.id}"
    cidr_block = "10.200.2.0/24"
    map_public_ip_on_launch = true
    depends_on = ["aws_internet_gateway.gateway"]
}

resource "aws_route_table" "web" {
    vpc_id = "${aws_vpc.vpc_poc.id}"
    route {
        cidr_block = "0.0.0.0/0"
        /* your public web subnet needs access to the gateway */
        /* this was set to the bastion before, which created a circular dependency */
        gateway_id = "${aws_internet_gateway.gateway.id}"
    }
}

resource "aws_route_table_association" "web" {
    subnet_id = "${aws_subnet.web.id}"
    route_table_id = "${aws_route_table.web.id}"
}

/* App subnet - private */
resource "aws_subnet" "app" {
    vpc_id = "${aws_vpc.vpc_poc.id}"
    cidr_block = "10.200.3.0/24"
}

/* Route table for the private app subnet - default route via the NAT instance in the DMZ */
resource "aws_route_table" "app" {
    vpc_id = "${aws_vpc.vpc_poc.id}"
    route {
        cidr_block = "0.0.0.0/0"
        /* this sends traffic to the NAT instance to pass off */
        instance_id = "${aws_instance.nat_dmz.id}"
    }
}

resource "aws_route_table_association" "app" {
    subnet_id = "${aws_subnet.app.id}"
    route_table_id = "${aws_route_table.app.id}"
}

/* Default security group */
resource "aws_security_group" "default" {
  name = "default-sg"
  description = "Default security group that allows inbound and outbound traffic from all instances in the VPC"
  vpc_id = "${aws_vpc.vpc_poc.id}"

  ingress {
    from_port   = "0"
    to_port     = "0"
    protocol    = "-1"
    self        = true
  }

  egress {
    from_port   = "0"
    to_port     = "0"
    protocol    = "-1"
    self        = true
  }
}

/* Security group for the nat server */
resource "aws_security_group" "nat" {
  name        = "nat-sg"
  description = "Security group for nat instances that allows SSH and VPN traffic from internet. Also allows outbound HTTP[S]"
  vpc_id      = "${aws_vpc.vpc_poc.id}"

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    /* this is your private subnet cidr */
    cidr_blocks = ["10.200.3.0/24"]
  }
  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    /* this is your private subnet cidr */
    cidr_blocks = ["10.200.3.0/24"]
  }
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    from_port   = -1
    to_port     = -1
    protocol    = "icmp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    /* this is the vpc cidr block */
    cidr_blocks = ["10.200.0.0/16"]
  }
  egress {
    from_port   = -1
    to_port     = -1
    protocol    = "icmp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

/* Security group for the web */
resource "aws_security_group" "web" {
  name = "web-sg"
  description = "Security group for web that allows web traffic from internet"
  vpc_id = "${aws_vpc.vpc_poc.id}"

  ingress {
    from_port = 80
    to_port   = 80
    protocol  = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port = 443
    to_port   = 443
    protocol  = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

/* Install deploy key for use with all of our provisioners */
resource "aws_key_pair" "deployer" {
  key_name   = "deployer-key"
  public_key = "${file("~/.ssh/id_rsa.pub")}"
}

/* Setup NAT in DMZ subnet */
resource "aws_instance" "nat_dmz" {
  ami               = "ami-67a54423"
  availability_zone = "us-west-1a"
  instance_type     = "m1.small"
  key_name          = "${aws_key_pair.deployer.id}"
  /* Notice we are assigning the security group here */
  /* inside a VPC, vpc_security_group_ids is the attribute that takes group IDs */
  vpc_security_group_ids = ["${aws_security_group.nat.id}"]

  /* this puts the instance in your public subnet, but it translates traffic for the private one */
  subnet_id         = "${aws_subnet.dmz.id}"

  /* this is really important for nat instance */
  source_dest_check = false
  associate_public_ip_address = true
}

/* Give NAT EIP In DMZ */
resource "aws_eip" "nat_dmz" {
  instance  = "${aws_instance.nat_dmz.id}"
  vpc       = true
}

/* Setup NAT in Web subnet */
resource "aws_instance" "nat_web" {
  ami               = "ami-67a54423"
  availability_zone = "us-west-1a"
  instance_type     = "m1.small"
  key_name          = "${aws_key_pair.deployer.id}"
  /* Notice we are assigning the security group here */
  /* inside a VPC, vpc_security_group_ids is the attribute that takes group IDs */
  vpc_security_group_ids = ["${aws_security_group.nat.id}"]

  /* this puts the instance in your public subnet, but it translates traffic for the private one */
  subnet_id         = "${aws_subnet.web.id}"

  /* this is really important for nat instance */
  source_dest_check = false
  associate_public_ip_address = true
}

/* Give NAT EIP in Web subnet */
resource "aws_eip" "nat_web" {
  instance  = "${aws_instance.nat_web.id}"
  vpc       = true
}

/* Install server in private subnet and reach it through the bastion with terraform */
resource "aws_instance" "private_box" {
  ami           = "ami-d1315fb1"
  instance_type = "t2.large"
  key_name      = "${aws_key_pair.deployer.id}"
  subnet_id     = "${aws_subnet.app.id}"
  associate_public_ip_address = false

  /* this is what gives the box access to talk to the nat */
  vpc_security_group_ids = ["${aws_security_group.nat.id}"]

  connection {
    /* connect through the nat instance to reach this box */
    bastion_host = "${aws_eip.nat_dmz.public_ip}"
    bastion_user = "ec2-user"
    bastion_private_key = "${file("~/.ssh/id_rsa")}"

    /* connect to box here */
    user = "ec2-user"
    host = "${self.private_ip}"
    private_key = "${file("~/.ssh/id_rsa")}"
  }
}
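
To also run arbitrary shell commands through the bastion (as the update in the question asks), a remote-exec provisioner can be nested inside that same resource, where it reuses the connection block above. A minimal sketch only; the inline commands are placeholders:

  /* goes inside the aws_instance "private_box" resource, after the connection block */
  provisioner "remote-exec" {
    inline = [
      "echo 'provisioned through the bastion' > /tmp/provisioned",
      "uname -a"
    ]
  }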

4
2018-04-26 05:43





Unless the bastion host is also acting as a NAT (and I wouldn't recommend combining those roles on the same instance), the web and app subnets won't have any outbound web access, but otherwise this looks fine, because Terraform/AWS automatically adds the VPC's local route entry for you.

As long as you have the local route entry covering the VPC range, the routing should be fine. Using your Terraform configuration (with the minimum extra resources needed to make it apply), I was able to create some basic instances in all three subnets and route between them successfully, so you're probably missing something else, such as security groups or NACLs.
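
If outbound internet access from the private subnet is needed without piling the NAT role onto the bastion, a managed NAT gateway in the public DMZ subnet is the usual alternative. A rough, untested sketch in the same syntax, reusing the resource names from the question (the "nat" labels here are made up); its route would replace the bastion-instance route in the existing "app" route table:

/* EIP and managed NAT gateway in the public dmz subnet (hypothetical names) */
resource "aws_eip" "nat" {
  vpc = true
}

resource "aws_nat_gateway" "nat" {
  allocation_id = "${aws_eip.nat.id}"
  subnet_id     = "${aws_subnet.dmz.id}"
  depends_on    = ["aws_internet_gateway.gateway"]
}

/* default route for the private 'app' subnet through the NAT gateway */
resource "aws_route_table" "app" {
  vpc_id = "${aws_vpc.vpc-poc.id}"
  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = "${aws_nat_gateway.nat.id}"
  }
}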


3
2018-03-06 17:50



Thanks. Again, the problem I'm having is that Terraform can't SSH through the bastion to an instance on the private subnet. I thought it was routing, but I'll dig deeper and post an update here. - n8gard


You haven't provided your complete Terraform, but you need to allow SSH into the 'app' VPC instances from the bastion host's IP or CIDR block, something like this:

resource "aws_security_group" "allow_ssh" {
  name = "allow_ssh"
  description = "Allow inbound SSH traffic"

  ingress {
      from_port = 22
      to_port = 22
      protocol = "tcp"
      cidr_blocks = ["${aws_instance.bastion.private_ip}/32"]
  }
}

Then in the 'app' instance resource you need to add the security group:

...
vpc_security_group_ids = ["${aws_security_group.allow_ssh.id}"]
...

https://www.terraform.io/docs/providers/aws/r/security_group_rule.html
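
Put together, an 'app' instance that accepts SSH from the bastion might look like the sketch below; the AMI and instance type are placeholders, and the subnet name follows the question's snippet:

resource "aws_instance" "app" {
  /* placeholder AMI and instance type - substitute your own */
  ami           = "ami-d1315fb1"
  instance_type = "t2.micro"
  subnet_id     = "${aws_subnet.app.id}"
  associate_public_ip_address = false

  /* attach the SSH-from-bastion security group defined above */
  vpc_security_group_ids = ["${aws_security_group.allow_ssh.id}"]
}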


3
2018-04-20 02:03



This was helpful in that it showed me a new way to express this, but unfortunately it isn't the fix. Something else must be missing. - n8gard
If you want specific help, please provide the rest of your template. There isn't enough context here. - Nathan


You should check for network problems with tcpdump and other debugging tools. Please check that:

  1. The IPs can be pinged and the network is set up correctly (e.g. 10.200.2.X can reach the bastion host's IP)
  2. iptables or another firewall is not blocking your traffic
  3. An SSH server is listening (try to SSH to those hosts' IPs)
  4. You have the right security groups on the hosts (you can see this under the EC2 instance's security settings)
  5. Try sniffing the traffic with tcpdump

2
2018-04-20 20:41





I don't see a reason for the bastion host.

I have a similar setup with SaltStack: I simply use a master server inside the VPC to control the rest, assigning it a specific security group to allow access.

CIDR X/24
subnetX.0/26   - subnet for the control server. Master server IP: EC2-subnet1/32
subnetX.64/26  - private minions
subnetX.128/26 - public minions
subnetX.192/26 - private minions

Then create a route table for each subnet, to satisfy your love of isolation, and attach each route table to a single subnet, e.g.

rt-1  - subnetX.0/26
rt-2  - subnetX.64/26
rt-3  - subnetX.128/26
rt-4  - subnetX.192/26

Make sure each route table has an entry like this, so the instances behind rt-1 can reach everyone:

destination: CIDR X/24  Target: local

Then restrict connections through security group inbound rules, e.g. allow SSH only from EC2-subnet1/32.

Once I've finished all the work from the control server, I can delete the specific route, say CIDR X/24 Target: local, from my public subnet, so it can no longer route traffic to my local CIDR.

Since I can already delete routes from within the control server, I see no reason to create a complicated bastion.
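
Expressed in Terraform, that pattern is just one route table per subnet (the VPC-local route exists automatically) plus a security group that only admits SSH from the control server's /32. A sketch with made-up names and addresses:

/* hypothetical per-subnet route table; the local VPC route is implicit */
resource "aws_route_table" "rt_1" {
  vpc_id = "${aws_vpc.main.id}"
}

resource "aws_route_table_association" "rt_1" {
  subnet_id      = "${aws_subnet.control.id}"
  route_table_id = "${aws_route_table.rt_1.id}"
}

/* only the control server (e.g. 10.0.0.5/32) may SSH in */
resource "aws_security_group" "from_control" {
  name   = "from-control-sg"
  vpc_id = "${aws_vpc.main.id}"

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["10.0.0.5/32"]
  }
}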


2
2018-04-22 15:59