Fading Coder

One Final Commit for the Last Sprint


Evolution of Network Diversion Techniques in Deception Security Systems

Tech · May 8


The latest version resolves a network issue left over from several years ago. No further iterations on network-related modules are expected in the near future. This article reviews how the network-related technology in our business has evolved over the years.

Background

The core of our system is a deception service that detects attacker intrusions. Behind this core functionality are service management and intrusion behavior analysis. In front of the core, the primary function is traffic diversion.

Traffic diversion aims to expand the detection surface and increase the probability of detecting attacker intrusions. These functions are closely related to networking, so we focus on basic network functions and their changes with business iterations.

  • Basic traffic diversion modules
    • Host virtual IP diversion
    • Server-deployed proxy node diversion
    • Switch-side virtual network diversion
  • Business iteration changes
    • Internalization of VM IPs
    • Business architecture adjustments
      • How to access the new deception service architecture
      • Solving heterogeneity issues
      • How to analyze malicious traffic

Basic Traffic Diversion Modules

The three traffic diversion modules stack outward from the host layer, forming the basic network shape of our product.

Host Virtual IP Diversion

Configure multiple virtual IPs on the host and use iptables DNAT rules to transparently forward traffic to these IPs to the deception service.

Background

The service resides in a virtual machine (for security reasons) and listens on specific ports. The virtual machine connects to the host via a bridge. This means the deception service can be reached only through the virtual machine's IP address: a single network path.

Consider how to increase access paths to the deception service within the host:

  1. Add an IP and redirect traffic for this IP to the virtual machine hosting the deception service
  2. Add a traffic proxy to forward traffic to the deception service's port

The business adopted the first approach, adding virtual IPs and using iptables to DNAT traffic to the virtual machine, which preserves the source address as perceived by the service.

Implementation

There are several ways to add virtual IPs:

  1. Directly add IPs to the physical network card
  2. Use macvlan virtual network cards, then add IPs to these virtual cards
  3. Use netns to isolate IPs, with each IP in its own namespace and MAC address

The second option was chosen: the first approach was too crude, and the third wasn't adopted because we were unfamiliar with network namespaces at the time.

Create the network card and add the IP:

$ ip link add link br0 name macvlan0 type macvlan
$ ip link set macvlan0 up
$ ip addr add 192.168.32.247/24 dev macvlan0
$ ip addr show macvlan0

Add iptables DNAT rules to redirect traffic to *192.168.32.247* to *192.168.32.245* (the VM IP):

$ iptables -t nat -A PREROUTING -d 192.168.32.247/32 -j DNAT --to-destination 192.168.32.245

Prerequisites:

$ iptables -P FORWARD ACCEPT
$ sysctl -w net.ipv4.ip_forward=1

With iptables, the virtual network card is only used for the kernel to reply to ARP requests. Subsequent TCP communication traffic does not pass through this network card.

A simple network topology is shown below. The virtual machine uses bridged mode, and after adding the DNAT rule, the virtual machine can also be accessed through *192.168.32.247*.

┌───────────────────────────────────────────┐
│  Host                                     │
│         ┌─────────────────┐               │
│         │ vm1             │               │
│         │ 192.168.32.245  │               │
│         │       eth0      │               │
│         └────────│────────┘               │
│           ┌──> vnet1                      │
│           │      │                        │
│           │      │   192.168.32.247       │
│      DNAT │      │ ─── macvlan0           │
│           │      │                        │
│           │      │                        │
│           └──── br0  192.168.32.251       │
└──────────────────│────────────────────────┘
                  eth0

Server-Deployed Proxy Node Diversion

Deploy proxy agents to listen on specified IP addresses and ports, and forward received traffic to the deception service.

Background

So far, only IPs in the same subnet as the host can be added. How do we divert traffic from other subnets? Answer: add a traffic proxy.

The proxy agent is deployed within the customer's business machine. The management side forwards the required proxy rules to the agent, which listens on ports and forwards traffic to the deception service. This creates an issue: the deception service needs to obtain the real peer address. A simple proxy would lose this information, and the peer address obtained would be the agent's local address.

For the Service, *conn.RemoteAddr()* returns the correct result because it's a direct connection:

       conn        
User <-------> Service
 ip1             ip3

*pconn.RemoteAddr()* returns the proxy agent address, which doesn't meet expectations:

       uconn           pconn        
User <-------> Proxy <-------> Service
 ip1            ip2              ip3

How to solve this problem: add a gateway service on the host. The proxy agent sends metadata such as *uconn.RemoteAddr()* to the gateway service, which installs SNAT rules on receipt and then forwards the traffic to the deception service. The deception service then perceives the real peer address.

Implementation

Use a custom protocol, adding the following to the header of each connection:

  1. 8-byte magic number to distinguish internal connections
  2. Peer address
  3. Destination address to be forwarded (deception service)
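
As a hedged illustration of this header (the article specifies only the three fields; the exact byte layout, magic value, and length-prefixed string encoding below are assumptions, not the production format), an encode/decode pair in Go might look like:

```go
package main

import (
	"bytes"
	"encoding/binary"
	"errors"
	"fmt"
)

// Hypothetical 8-byte magic number used to distinguish internal connections.
var magic = [8]byte{'D', 'C', 'P', 'T', 'N', 'E', 'T', 1}

// encodeHeader prepends the magic number and two length-prefixed addresses.
func encodeHeader(peer, dst string) []byte {
	var buf bytes.Buffer
	buf.Write(magic[:])
	for _, s := range []string{peer, dst} {
		binary.Write(&buf, binary.BigEndian, uint16(len(s)))
		buf.WriteString(s)
	}
	return buf.Bytes()
}

// decodeHeader checks the magic number and extracts the peer address and the
// destination (deception service) address.
func decodeHeader(b []byte) (peer, dst string, err error) {
	if len(b) < len(magic) || !bytes.Equal(b[:len(magic)], magic[:]) {
		return "", "", errors.New("not an internal connection")
	}
	b = b[len(magic):]
	fields := make([]string, 2)
	for i := range fields {
		if len(b) < 2 {
			return "", "", errors.New("truncated header")
		}
		n := int(binary.BigEndian.Uint16(b))
		if len(b) < 2+n {
			return "", "", errors.New("truncated header")
		}
		fields[i] = string(b[2 : 2+n])
		b = b[2+n:]
	}
	return fields[0], fields[1], nil
}

func main() {
	h := encodeHeader("10.0.0.9:51234", "192.168.32.245:445")
	peer, dst, err := decodeHeader(h)
	fmt.Println(peer, dst, err)
}
```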

The key step for adding SNAT: Before establishing a TCP connection, you need to specify the source address. The source IP is taken from HostIP (192.168.32.251), and the port is pre-allocated through *net.Listen("tcp", "0.0.0.0:0")*. This fixes the five-tuple of the connection, and subsequent rules are just mechanical parameter filling.

$ iptables -t nat -A POSTROUTING -s 192.168.32.251/32 -p tcp -m tcp --sport 34162 -j SNAT --to-source 192.168.1.55:46421

A simple network topology is shown below, adding a gateway service to receive traffic from the proxy agent and add SNAT rules before establishing the proxy connection.

┌─────────────────────────────────────────────────┐
│  Host                                           │
│               ┌─────────────────┐               │
│               │ vm1             │               │
│               │ 192.168.32.245  │               │
│               │       eth0      │               │
│               └────────│────────┘               │
│         ┌──────────> vnet1                      │
│         │       │      │                        │
│         │       │      │   192.168.32.247       │
│     SNAT│   DNAT│      │ ─── macvlan0           │
│         │       │      │                        │
│         │       │      │                        │
│      Gateway    └──── br0  192.168.32.251       │
└────────────────────────│────────────────────────┘
                        eth0                        

Switch-Side Virtual Network Diversion

Deploy virtual network nodes on the core switch side, create virtual VLAN/VXLAN IPs, listen on specified IP addresses and ports, and forward traffic to the deception service.

Background

Continue expanding the detection surface. With proxy agents, both same-subnet and cross-subnet coverage is possible. However, this only covers existing subnets, which make up a small fraction of the whole network. If we can also cover nonexistent subnets, the detection surface grows dramatically.

How do we virtualize nonexistent networks? This involves some switch principles (setting aside VXLAN for now):

  1. (Layer 2) Switches create a separate broadcast domain through VLAN
  2. (Layer 3) Switches create a virtual interface for the VLAN and assign a gateway and mask (192.168.1.1/24)
  3. (Layer 3) Switch packet routing queries send packets out from the VLAN interface corresponding to the target network segment
    • ACCESS interface strips VLAN tags when packets go out and adds VLAN tags when packets enter
    • A TRUNK interface carries tagged packets for all permitted VLANs

The kernel supports VLAN virtual network cards, which handle parsing and the encapsulation/decapsulation of VLAN tags.

Implementation

Utilize switch characteristics to perform some operations in advance:

  1. Allocate VLANs and corresponding network segments
  2. Allocate trunk ports, so servers connected to this trunk port are equivalent to access switches under the core switch

On the basis of deploying proxy agents inside the access server, only one additional thing needs to be done: manage vlan network cards and IPs.

Early on, before we had encountered **netns**, these operations were done directly in the host namespace; we won't discuss that outdated design here.

Some encountered problems and solutions:

  1. One agent must serve the switch's trunk port for many segments simultaneously. **netns** was used for isolation, with one network segment (one VLAN may contain multiple segments) occupying one netns.
  2. Tens of thousands of IPs are virtualized. Under the previous agent listening logic, each listened IP:Port occupies one fd. iptables + ipset DNAT solves this:
    1. Store IP:Port pairs in a *hash:ip,port* ipset
    2. Use a single iptables rule to match the ipset and DNAT the traffic to one listening address
    3. Use netns to isolate iptables rules and listening addresses, so far fewer fds are occupied

The overall solution is as follows:

  1. Create a bridge and mount the physical network card on the bridge
  2. Set up communication network cards and IPs for traffic proxying and rule acquisition, also mounted on the bridge
  3. Create virtual networks
    1. Create netns
    2. Create veth-peer, mount in host space on the bridge, up all network cards
    3. Create a virtual vlan network card on the veth-peer inside the netns and add IP
    4. Create ipset and add iptables rules inside the netns
  4. Agent switches namespaces and listens on ports inside the netns (this way one process can listen on addresses in multiple namespaces)

# Create bridge
$ ip link add br1 type bridge
$ ip link set br1 up
$ ip link set eth0 master br1

# Create communication network card, maintain a basic IP for external communication
$ ip link add link br1 name veth0 type vlan id 100
$ ip link set veth0 up
$ ip addr add 192.168.100.100/24 dev veth0

# Create virtual network
$ ip netns add ns1
$ ip link add veth1 type veth peer name eth0 netns ns1
$ ip netns exec ns1 ip link set eth0 up
$ ip netns exec ns1 ip link set lo up
$ ip link set veth1 up
$ ip link set veth1 master br1

# Set up virtual VLAN(100) network card and add IP
$ ip netns exec ns1 ip link add link eth0 name vlan100 type vlan id 100
$ ip netns exec ns1 ip link set vlan100 up
$ ip netns exec ns1 ip addr add 192.168.110.2/24 dev vlan100
$ ip netns exec ns1 ip route add default via 192.168.110.1 dev vlan100 onlink

# Add iptables/ipset rules
$ ip netns exec ns1 ipset create ipset1 hash:ip,port
$ ip netns exec ns1 iptables -t nat -A PREROUTING -p tcp -m set --match-set ipset1 dst,dst -j DNAT --to-destination 127.0.0.1:5550

Some encountered problems:

  1. DNAT to 127.0.0.1 requires enabling `net.ipv4.conf.all.route_localnet=1`
  2. ipset support on the 3.10 kernel is poor: set names are not separated between the host and namespaces, so collisions can only be avoided through naming conventions

The proxy network topology on the network side is as follows:

┌───────────────────────────────────────────────────────────────┐
│ Host                                                          │
│                   ┌─────────────────┐   ┌─────────────────┐   │
│                   │ ns1             │   │ ns2             │   │
│                   │                 │   │                 │   │
│                   │  192.168.100.2  │   │  192.168.101.8  │   │
│                   │     vlan100     │   │     vlan101     │   │
│                   │        │        │   │        │        │   │
│                   │       eth0      │   │       eth0      │   │
│ 192.168.100.100   └────────│────────┘   └────────│────────┘   │
│      veth0               veth1                 veth2          │
│        │                   │                     │            │
│        └───────────────────│─────────────────────┘            │
│                            │                                  │
│                           br0                                 │
└────────────────────────────│──────────────────────────────────┘
                            eth0     

If multiple physical network cards are used, keep the basic communication IP and add a new bridge to connect each virtual network with its physical card. This way ns1 and ns2 are isolated and unaffected by each other.

┌───────────────────────────────────────────────────────────────┐
│ Host                                                          │
│                   ┌─────────────────┐   ┌─────────────────┐   │
│                   │ ns1             │   │ ns2             │   │
│                   │                 │   │                 │   │
│                   │  192.168.100.2  │   │  192.168.101.8  │   │
│                   │     vlan100     │   │     vlan101     │   │
│                   │        │        │   │        │        │   │
│                   │       eth0      │   │       eth0      │   │
│ 192.168.100.100   └────────│────────┘   └────────│────────┘   │
│      veth0               veth1                 veth2          │
│        │                   │                     │            │
│        └───────────────────│                     │            │
│                            │                     │            │
│                           br1                   br2           │
└────────────────────────────│─────────────────────│────────────┘
                            eth0                  eth1 

VXLAN Support

VXLAN support also requires switch-side configuration, and the BGP protocol is used to control routing dynamically. FRR provides these functions; essentially you only need to set the AS number.
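
For orientation only, a minimal FRR BGP EVPN fragment that advertises local VNIs might look like the following; the AS number and neighbor address are illustrative assumptions, not values from our deployment:

```
router bgp 65001
 neighbor 192.168.32.1 remote-as 65001
 !
 address-family l2vpn evpn
  neighbor 192.168.32.1 activate
  advertise-all-vni
 exit-address-family
```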

For the rest of the network logic, see "Several Ways to Use iproute2 to Connect Container Networks Between Different Nodes"; the configuration for cross-node VXLAN communication is almost identical.

The complete network topology is as follows. The vxlan namespace corresponds to a network segment, one vni corresponds to one bridge, and the namespace and vtep device are connected through the bridge.

┌─────────────────────────────────────────────────────────────────────────────────┐
│ Host                                                                            │
│                  ┌─────────────────┐  ┌─────────────────┐  ┌─────────────────┐  │
│                  │ ns1             │  │ ns2             │  │ ns3             │  │
│                  │                 │  │                 │  │                 │  │
│                  │  192.168.100.2  │  │  192.168.101.8  │  │                 │  │
│                  │     vlan100     │  │     vlan101     │  │                 │  │
│                  │        │        │  │        │        │  │    10.1.0.10    │  │
│                  │       eth0      │  │       eth0      │  │       eth0      │  │
│ 192.168.100.100  └────────│────────┘  └────────│────────┘  └────────│────────┘  │
│      veth0              veth1                veth2                veth3         │
│        │                  │                    │                    │           │
│        └──────────────────│                    │                    │── vtep1   │
│                           │                    │                    │           │
│                          br1                  br2                  br3          │
└───────────────────────────│────────────────────│────────────────────────────────┘
                           eth0                 eth1                 eth2          
                            └────────────────────│────────────────────┘  
                                               switch                                 

Business Iteration Changes

This part concerns changes inside the deception service host. It is not as pure as the previous technical points, since business logic is mixed in.

This part ignores the three diversion methods mentioned earlier. The current network topology is:

┌───────────────────────────────────────────┐
│  Host                                     │
│         ┌─────────────────┐               │
│         │ vm1             │               │
│         │ 192.168.32.245  │               │
│         │       eth0      │               │
│         └────────│────────┘               │
│                vnet1                      │
│                  │                        │
│                 br0  192.168.32.251       │
└──────────────────│────────────────────────┘
                  eth0

Internalization of VM IP

When there are duplicate IPs on the network, the VM liveness check may be wrong, leading to the VM being reclaimed. Internalizing the VM avoids this problem: if access to the IP fails, the VM's status is unaffected. Add a br1 and reuse the earlier multi-IP DNAT logic.

┌───────────────────────────────────────────┐
│  Host                                     │
│         ┌─────────────────┐               │
│         │ vm1             │               │
│         │   172.16.23.2   │               │
│         │       eth0      │               │
│         └────────│────────┘               │
│            ┌── vnet1                      │
│            │     │                        │
│        DNAT│    br1  172.16.23.1          │
│            │                              │
│            └─── br0  192.168.32.251       │
└──────────────────│────────────────────────┘
                  eth0

Business Architecture Adjustment

Due to performance considerations and later maintenance, it was decided to move the deception services from the VM into Docker containers:

  1. For security, keep the VM (containers are comparatively easier to escape from)
  2. Better performance: a container's footprint is much smaller than a VM's, so more deception services can be started
  3. Stronger maintainability: service deployment logic is persisted in a Dockerfile
  4. Windows-type VMs remain unchanged

The adjusted network topology is a heterogeneous network architecture as follows:

┌───────────────────────────────────────────────────────────────┐
│  Host                                                         │
│        ┌─────────────────────────┐     ┌─────────────┐        │
│        │ vm1                     │     │ vm2(win)    │        │
│        │ ┌────────┐   ┌────────┐ │     │             │        │
│        │ │10.1.0.2│   │10.1.0.3│ │     │             │        │
│        │ │  eth0  │   │  eth0  │ │     │             │        │
│        │ └───│────┘   └───│────┘ │     │             │        │
│        │   veth1        veth2    │     │             │        │
│        │     └──────│─────┘      │     │             │        │
│        │         docker0         │     │             │        │
│        │        10.1.0.1         │     │             │        │
│        │                         │     │             │        │
│        │        172.16.23.2      │     │ 172.16.23.3 │        │
│        │           eth0          │     │     eth0    │        │
│        └────────────│────────────┘     └──────│──────┘        │
│                   vnet1                     vnet2             │
│                     └───────────────│─────────┘               │
│  192.168.32.251                    br1                        │
│       br0                      172.16.23.1                    │
└────────│──────────────────────────────────────────────────────┘
        eth0                                            

How to Access the New Architecture Deception Service

Previous ways to access the deception service:

  1. Direct IP access, traffic DNAT to VM
  2. Access proxy listening port, forwarded to deception service through gateway

If the gateway's forwarding function is deployed inside the VM, and the host forwards all traffic to the VM, the previous SNAT rules can be reused.

How to forward traffic from the host to the VM?

  1. In the switch-side virtual network diversion logic, there is an iptables rule inside the namespace to DNAT traffic to a port. For IP access, reuse this logic
  2. For proxy traffic access, add a proxy, which is transparent to the deception service

The adjusted traffic chain is as follows:

┌───────────────────────────────────────────────────────────────┐
│  Host                                                         │
│        ┌─────────────────────────┐     ┌─────────────┐        │
│        │ vm1                     │     │ vm2(win)    │        │
│        │ ┌────────┐   ┌────────┐ │     │             │        │
│        │ │  srv1  │   │  srv2  │ │     │     srv3    │        │
│        │ │10.1.0.2│   │10.1.0.3│ │     │             │        │
│        │ └───│────┘   └───│────┘ │     │ 172.16.23.3 │        │
│        │     └──────│─────┘      │     └──────│──────┘        │
│        │           SNAT          │           SNAT             │
│        │            │            │            │               │
│        │       gateway2 (Listen) │      gateway2 (Listen)     │
│        │            │ 172.16.23.2│            │ 192.168.32.251│
│        └────────────│────────────┘            │               │
│                     └───────────────│─────────┘               │
│                                  L4 Proxy                     │
│                                     │                         │
│               192.168.32.100     gateway1 (Listen)            │
│               192.168.32.101        │ 192.168.32.251          │
│       br0 ───────── DNAT ───────────┘                         │
│        │                            │                         │
│        │ ──────── L4 Proxy ─────────┘                         │
└────────│──────────────────────────────────────────────────────┘
        eth0                                            

The adjusted *gateway1* becomes the traffic entry point for all services, and the architecture can be adjusted based on this feature.

How to Solve Heterogeneity Issues

In the new architecture's access chain, the two types of deception services are clearly heterogeneous: their listening IPs are not at the same level (one is 10.*, the other is 172.*). Currently gateway2 has to handle two sets of forwarding logic, which is ugly but functionally sound.

Now there is a requirement to interconnect the deception services' network chain, so that srv1/srv2/srv3 can access each other's service ports.

Homogeneous network architecture solution: see "Several Ways to Use iproute2 to Connect Container Networks Between Different Nodes".

Heterogeneous network architecture solution: a more intuitive approach is to add a layer of processing logic so that fewer layers need compatibility handling.

With the experience of switch-side virtual network diversion, **netns** can be tried here.

In theory the following network topology would work, but there is a problem: the bridge *br-win* lives inside the netns, and **kvm cannot be pointed at a bridge in another namespace**.

┌─────────────────────────────────────────────────────────────────────┐
│  Host                                                               │
│     ┌─────────────────────────┐     ┌─────────────────────────┐     │
│     │ vm1                     │     │ netns_win               │     │
│     │ ┌────────┐   ┌────────┐ │     │ ┌────────┐   ┌────────┐ │     │
│     │ │10.1.0.2│   │10.1.0.3│ │     │ │10.2.0.2│   │10.2.0.3│ │     │
│     │ │  eth0  │   │  eth0  │ │     │ │  eth0  │   │  eth0  │ │     │
│     │ └───│────┘   └───│────┘ │     │ └───│────┘   └───│────┘ │     │
│     │   veth1        veth2    │     │   vnet2        vnet3    │     │
│     │     └──────│─────┘      │     │     └──────│─────┘      │     │
│     │         docker0         │     │          br-win         │     │
│     │        10.1.0.1         │     │         10.2.0.1        │     │
│     │                         │     │                         │     │
│     │       172.16.23.2       │     │       172.16.23.3       │     │
│     │           eth0          │     │           eth0          │     │
│     └────────────│────────────┘     └────────────│────────────┘     │
│                vnet1                         veth-win               │
│                  └───────────────│───────────────┘                  │
│  192.168.32.251                 br1                                 │
│       br0                   172.16.23.1                             │
└────────│────────────────────────────────────────────────────────────┘
        eth0                                                            

So we adjust it: move br-win into the host and use a veth pair to keep the gateway IP inside the namespace. The adjusted network topology is as follows:

┌─────────────────────────────────────────────────────────────────────┐
│  Host                                                               │
│     ┌─────────────────────────┐                                     │
│     │ vm1                     │                                     │
│     │ ┌────────┐   ┌────────┐ │       ┌────────┐   ┌────────┐       │
│     │ │10.1.0.2│   │10.1.0.3│ │       │10.2.0.2│   │10.2.0.3│       │
│     │ │  eth0  │   │  eth0  │ │       │  eth0  │   │  eth0  │       │
│     │ └───│────┘   └───│────┘ │       └───│────┘   └───│────┘       │
│     │   veth1        veth2    │         vnet2        vnet3          │
│     │     └──────│─────┘      │           └──────│─────┘            │
│     │         docker0         │                br-win               │
│     │        10.1.0.1         │                  │                  │
│     │                         │              veth-win1              │
│     │                         │     ┌────────────│────────────┐     │
│     │                         │     │ netns     eth1          │     │
│     │                         │     │  win    10.2.0.1        │     │
│     │                         │     │                         │     │
│     │       172.16.23.2       │     │       172.16.23.3       │     │
│     │           eth0          │     │           eth0          │     │
│     └────────────│────────────┘     └────────────│────────────┘     │
│                vnet1                         veth-win0              │
│                  └───────────────│───────────────┘                  │
│  192.168.32.251                 br1                                 │
│       br0                   172.16.23.1                             │
└────────│────────────────────────────────────────────────────────────┘
        eth0                                                            

How to Analyze Malicious Traffic

The business uses suricata to analyze traffic, deployed on the bridge where the deception services sit; this position sees the original traffic data.

If packets were captured at br0 instead, the traffic would still carry the L4 proxy's metadata header, which prevents suricata from recognizing some protocols, such as HTTP.

┌────────────────────────────────────────────────────────────────┐
│  Host                                                          │
│    ┌─────────────────────────┐                                 │
│    │ vm1                     │                                 │
│    │ ┌────────┐   ┌────────┐ │             ┌────────┐          │
│    │ │  srv1  │   │  srv2  │ │             │10.2.0.2│          │
│    │ │10.1.0.2│   │10.1.0.3│ │             │  eth0  │          │
│    │ └───│────┘   └───│────┘ │             └───│────┘          │
│    │     └──────│─────┘      │                 │               │
│    │ suricata ──│            │      suricata ──│               │
│    │           SNAT          │               SNAT              │
│    │            │            │     ┌───────────│───────────┐   │
│    │            │            │     │           │           │   │
│    │       gateway2 (Listen) │     │      gateway2 (Listen)│   │
│    │            │ 172.16.23.2│     │           │172.16.23.3│   │
│    └────────────│────────────┘     └───────────│───────────┘   │
│                 └───────────────│──────────────┘               │
│                              L4 Proxy                          │
│                                 │                              │
│           192.168.32.100     gateway1 (Listen)                 │
│           192.168.32.101        │ 192.168.32.251               │
│   br0 ───────── DNAT ───────────┘                              │
│    │                            │                              │
│    │ ──────── L4 Proxy ─────────┘                              │
└────│───────────────────────────────────────────────────────────┘
    eth0                                                          

Suricata consumes certain resources even without traffic. The entire host machine may have multiple VMs, meaning multiple suricata instances.

Goal: Reduce the number of suricata instances, reuse suricata resources, and reduce service management costs. Gateway1 is the traffic entry point for all services. As long as we get the traffic between gateway1 and gateway2 and remove the header, it's the original traffic.

The solution, which trades some performance, is as follows:

  1. Capture packets on the br1 bridge to get all bridge traffic
  2. Create a virtual tap device and specify it as the network card to start suricata
  3. Establish connections and add related caching
  4. Match and reassemble packets
    1. Match traffic headers
    2. Remove traffic headers, modify seq/ack sequence numbers, regenerate a network packet
    3. Write the new packet to the tap device
  5. Receive suricata events and use cached data to correct IP information
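
The seq/ack correction in step 4 can be sketched as a pure function. This is a simplified, assumption-laden sketch (the function name is hypothetical, and real code must also recompute TCP/IP checksums and packet lengths):

```go
package main

import "fmt"

// adjustSeqAck compensates for hdrLen bytes stripped from the front of the
// client->server byte stream. After removal, later client packets must have
// their sequence numbers shifted down by hdrLen, and server->client packets
// their acknowledgment numbers shifted down by the same amount, so that the
// stream suricata reassembles from the tap device stays self-consistent.
func adjustSeqAck(seq, ack uint32, fromClient bool, hdrLen uint32) (newSeq, newAck uint32) {
	if fromClient {
		return seq - hdrLen, ack // unsigned arithmetic wraps correctly
	}
	return seq, ack - hdrLen
}

func main() {
	// Suppose a 32-byte proxy header was removed from the client side.
	seq, ack := adjustSeqAck(1032, 500, true, 32)
	fmt.Println(seq, ack) // client packet: seq shifted down by the header size
	seq, ack = adjustSeqAck(500, 1032, false, 32)
	fmt.Println(seq, ack) // server packet: ack shifted down instead
}
```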

The adjusted network topology is shown below, adding vtap0 for suricata to capture and analyze malicious traffic.

┌─────────────────────────────────────────────────────────────────────┐
│  Host                                                               │
│     ┌─────────────────────────┐                                     │
│     │ vm1                     │                                     │
│     │ ┌────────┐   ┌────────┐ │       ┌────────┐   ┌────────┐       │
│     │ │10.1.0.2│   │10.1.0.3│ │       │10.2.0.2│   │10.2.0.3│       │
│     │ │  eth0  │   │  eth0  │ │       │  eth0  │   │  eth0  │       │
│     │ └───│────┘   └───│────┘ │       └───│────┘   └───│────┘       │
│     │   veth1        veth2    │         vnet2        vnet3          │
│     │     └──────│─────┘      │           └──────│─────┘            │
│     │         docker0         │                br-win               │
│     │        10.1.0.1         │                  │                  │
│     │                         │              veth-win1              │
│     │                         │     ┌────────────│────────────┐     │
│     │                         │     │ netns     eth1          │     │
│     │                         │     │  win    10.2.0.1        │     │
│     │                         │     │                         │     │
│     │       172.16.23.2       │     │       172.16.23.3       │     │
│     │           eth0          │     │           eth0          │     │
│     └────────────│────────────┘     └────────────│────────────┘     │
│                vnet1                         veth-win0              │
│                  └───────────────│───────────────┘                  │
│  192.168.32.251                 br1 ─────────────┐                  │
│       br0                   172.16.23.1        vtap0                │
└────────│────────────────────────────────────────────────────────────┘
        eth0                                                            
Tags: Networking
