Fading Coder

One Final Commit for the Last Sprint


Transport Layer Mechanisms: Multiplexing, Reliability, and Congestion Management


Transport layer protocols facilitate logical communication channels between application processes running on different hosts. While the network layer handles logical communication between hosts, the transport layer extends this capability to specific processes via ports. These protocols operate exclusively within end systems, managing the handoff of data between applications and the network edge without dictating how data traverses the network core.

Multiplexing and Demultiplexing

Data delivery within the transport layer relies on sockets, which act as intermediaries between the network interface and application processes. Each socket possesses a unique identifier. The process of gathering data from various sockets, encapsulating it with header information, and passing it to the network layer is known as multiplexing. Conversely, demultiplexing involves examining header fields in incoming segments to identify the correct destination socket.

Identification relies on port numbers, which are 16-bit values ranging from 0 to 65535. Ports 0 through 1023 are designated as well-known ports for standard services like HTTP (80) or FTP (21).

Connectionless Demultiplexing

In UDP, a socket is identified by a two-tuple consisting of the destination IP address and destination port number. Consequently, segments arriving from different source IPs or ports but targeting the same destination IP and port are directed to the same socket. The source port acts as a return address, allowing the receiver to reply correctly.
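The two-tuple lookup can be sketched with a hypothetical demultiplexing table (the socket names and addresses below are illustrative, not a real API):

```python
# Hypothetical UDP demultiplexing table: a socket is keyed only by the
# (destination IP, destination port) two-tuple.
udp_sockets = {("10.0.0.5", 53): "dns_socket"}

def udp_demux(segment):
    # The source IP/port are ignored for socket lookup; they serve
    # only as a return address for replies.
    return udp_sockets.get((segment["dst_ip"], segment["dst_port"]))

# Segments from two different clients land on the same socket:
a = udp_demux({"src_ip": "1.1.1.1", "src_port": 9999,
               "dst_ip": "10.0.0.5", "dst_port": 53})
b = udp_demux({"src_ip": "2.2.2.2", "src_port": 1234,
               "dst_ip": "10.0.0.5", "dst_port": 53})
print(a == b == "dns_socket")  # True
```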

Connection-Oriented Demultiplexing

TCP utilizes a four-tuple for socket identification: source IP, source port, destination IP, and destination port. This ensures that segments from different clients connecting to the same server port are directed to distinct sockets. High-performance web servers often manage multiple concurrent connections within a single process by spawning unique threads or sockets for each client session.
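By contrast, a four-tuple lookup keeps the clients apart even though both target port 80. A minimal sketch, again with hypothetical socket names:

```python
# Hypothetical TCP connection table: each socket is keyed by the full
# four-tuple, so two clients on the same server port get distinct sockets.
tcp_sockets = {
    ("1.1.1.1", 9999, "10.0.0.5", 80): "conn_sock_A",
    ("2.2.2.2", 1234, "10.0.0.5", 80): "conn_sock_B",
}

def tcp_demux(seg):
    key = (seg["src_ip"], seg["src_port"], seg["dst_ip"], seg["dst_port"])
    return tcp_sockets.get(key)

s1 = tcp_demux({"src_ip": "1.1.1.1", "src_port": 9999,
                "dst_ip": "10.0.0.5", "dst_port": 80})
s2 = tcp_demux({"src_ip": "2.2.2.2", "src_port": 1234,
                "dst_ip": "10.0.0.5", "dst_port": 80})
print(s1, s2)  # conn_sock_A conn_sock_B
```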

User Datagram Protocol (UDP)

Defined in RFC 768, UDP provides a minimal transport service. It adds source and destination port numbers to application data for multiplexing and includes a checksum for integrity verification. UDP is connectionless; no handshake occurs before data transmission.

Advantages of UDP

  • Application Control: Applications dictate exactly when data is sent without interference from congestion control mechanisms.
  • No Connection Setup: Data transmission begins immediately without the latency of a handshake.
  • Stateless: The protocol does not maintain connection state information.
  • Low Overhead: The header is only 8 bytes, compared to 20 bytes for TCP.

Applications like DNS often utilize UDP. Reliability, if needed, can be implemented within the application layer itself, bypassing TCP's rate limiting.

UDP Segment Structure

The UDP header comprises four fields, each 2 bytes: source port, destination port, length, and checksum. The length field covers both header and data. The checksum allows the receiver to detect bit errors introduced during transmission.
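The four 2-byte fields can be packed directly with Python's struct module. The sketch below computes the 16-bit one's-complement checksum over header plus data only; real UDP also covers a pseudo-header of IP addresses, which is omitted here for simplicity:

```python
import struct

def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement sum used by the Internet checksum."""
    if len(data) % 2:
        data += b"\x00"                   # pad to an even length
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return ~total & 0xFFFF

def build_udp_segment(src_port: int, dst_port: int, payload: bytes) -> bytes:
    length = 8 + len(payload)             # length covers header + data
    # Pack with checksum zeroed, compute it, then pack for real.
    header = struct.pack("!HHHH", src_port, dst_port, length, 0)
    csum = internet_checksum(header + payload)
    return struct.pack("!HHHH", src_port, dst_port, length, csum) + payload

seg = build_udp_segment(5353, 53, b"query")
print(len(seg))  # 13: 8-byte header + 5-byte payload
```

Recomputing the checksum over a segment that already carries it yields zero, which is how a receiver detects bit errors.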

Principles of Reliable Data Transfer

Reliable data transfer protocols ensure data moves accurately from sender to receiver despite underlying channel errors.

Evolution of Reliable Protocols

  1. rdt1.0: Assumes a perfectly reliable channel. The sender simply packages data and transmits it.
  2. rdt2.0: Introduces handling for bit errors using checksums, acknowledgments (ACK), and negative acknowledgments (NAK). If a packet is corrupted, the receiver sends a NAK, triggering retransmission.
  3. rdt2.1: Adds sequence numbers (0 and 1) to handle corrupted ACK/NAK packets. This prevents confusion between new packets and retransmissions.
  4. rdt2.2: Eliminates NAKs. The receiver sends an ACK for the last correctly received packet. Duplicate ACKs signal the sender to retransmit.
  5. rdt3.0: Handles packet loss via timers. If an ACK is not received within a specific timeframe, the sender retransmits the packet. This is often called the stop-and-wait protocol.
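The rdt3.0 stop-and-wait behavior can be sketched as a toy simulation. Everything here (the channel model, the logging list) is illustrative; a timeout is modeled simply as the channel returning no ACK:

```python
def stop_and_wait_send(messages, channel, log, max_tries=5):
    """rdt3.0-style sender sketch: alternate sequence numbers 0/1 and
    retransmit when no valid ACK arrives before the (modeled) timeout."""
    seq = 0
    for data in messages:
        for _ in range(max_tries):
            log.append((seq, data))       # record each transmission
            ack = channel(seq, data)      # send and wait for the ACK
            if ack == seq:
                break                     # correct ACK: move on
        else:
            raise TimeoutError("too many retransmissions")
        seq ^= 1                          # flip between 0 and 1

# Toy channel that loses the first copy of every packet:
seen = set()
def lossy_channel(seq, data):
    if (seq, data) not in seen:
        seen.add((seq, data))
        return None                       # loss: sender's timer fires
    return seq

log = []
stop_and_wait_send(["a", "b"], lossy_channel, log)
print(log)  # [(0, 'a'), (0, 'a'), (1, 'b'), (1, 'b')]
```

Each packet is transmitted twice: once lost, once acknowledged, with the 1-bit sequence number distinguishing retransmissions from new data.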

Pipelining

To improve utilization beyond stop-and-wait, protocols allow multiple packets to be in transit simultaneously. This requires larger sequence number spaces and buffering capabilities. Two primary approaches exist:

  • Go-Back-N (GBN): The sender maintains a window of up to N unacknowledged packets. The receiver only accepts packets in order. If a packet is lost, the sender must retransmit all unacknowledged packets in the current window.
  • Selective Repeat (SR): The sender retransmits only lost or corrupted packets. The receiver buffers out-of-order packets until gaps are filled. The window size must be less than half the sequence number space to avoid ambiguity between new and retransmitted packets.
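The GBN receiver's in-order-only behavior is what produces duplicate cumulative ACKs. A minimal sketch (the closure-based receiver is an illustration, not a standard API):

```python
def gbn_receiver():
    """GBN receiver sketch: accept only in-order packets and
    cumulatively acknowledge the highest in-order one seen."""
    state = {"expected": 0}
    def receive(seq):
        if seq == state["expected"]:
            state["expected"] += 1        # in order: deliver and advance
        # Out-of-order packets are discarded; re-ACK the last good one.
        return state["expected"] - 1
    return receive

rx = gbn_receiver()
# Packets 0 and 1 arrive, packet 2 is lost, then 3 and 4 arrive:
acks = [rx(s) for s in (0, 1, 3, 4)]
print(acks)  # [0, 1, 1, 1] -- duplicate ACKs for packet 1
```

The repeated ACK for packet 1 tells the sender to go back and resend everything from packet 2 onward; a Selective Repeat receiver would instead buffer packets 3 and 4 and ACK them individually.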

Transmission Control Protocol (TCP)

TCP provides a reliable, connection-oriented byte stream service.

Connection Management

Establishing a connection requires a three-way handshake:

  1. SYN: The client sends a segment with the SYN flag set and an initial sequence number.
  2. SYNACK: The server responds with SYN and ACK flags set, acknowledging the client's sequence number and providing its own initial sequence number.
  3. ACK: The client acknowledges the server's sequence number. Data transfer can begin.
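The sequence/acknowledgment arithmetic of the handshake can be sketched as plain dictionaries (a model of the exchange, not a working network client):

```python
import random

def three_way_handshake():
    """Sketch of the seq/ack bookkeeping in TCP connection setup."""
    client_isn = random.randrange(2**31)   # client's initial sequence number
    server_isn = random.randrange(2**31)   # server's initial sequence number
    syn = {"flags": {"SYN"}, "seq": client_isn}
    synack = {"flags": {"SYN", "ACK"}, "seq": server_isn,
              "ack": syn["seq"] + 1}       # a SYN consumes one seq number
    ack = {"flags": {"ACK"}, "seq": syn["seq"] + 1,
           "ack": synack["seq"] + 1}
    return syn, synack, ack

syn, synack, ack = three_way_handshake()
print(synack["ack"] == syn["seq"] + 1)   # True
print(ack["ack"] == synack["seq"] + 1)   # True
```

Note that the SYN and SYN-ACK each consume one sequence number even though they carry no data, which is why both acknowledgments are "peer's ISN plus one".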

Termination involves exchanging FIN flags to close the connection gracefully. Resources such as buffers and variables are released upon completion.

Segment Structure

TCP segments include source and destination ports, sequence and acknowledgment numbers, a receive window field for flow control, header length, flags (ACK, SYN, FIN, etc.), and a checksum. The sequence number corresponds to the byte stream position, not just packet count.

Reliable Data Transfer Implementation

TCP ensures reliability through cumulative acknowledgments and retransmission timers. The sender tracks the oldest unacknowledged byte (SendBase) and the next sequence number to use (NextSeqNum).

# TCP sender logic sketch (the simplified three-event model)
current_seq = initial_seq      # NextSeqNum: next sequence number to use
base_ack = initial_seq         # SendBase: oldest unacknowledged byte

while connection_active:
    event = wait_for_event()

    if event.type == 'APP_DATA':
        # Data from the application: send it, starting the timer
        # if it is not already running for an earlier segment.
        segment = create_segment(current_seq, event.payload)
        if not timer_running:
            start_timer()
        transmit(segment)
        current_seq += len(event.payload)

    elif event.type == 'TIMEOUT':
        # Retransmit only the oldest unacknowledged segment.
        retransmit(base_ack)
        restart_timer()

    elif event.type == 'ACK_RECEIVED':
        if event.ack_number > base_ack:
            # Cumulative ACK: every byte before ack_number is confirmed.
            base_ack = event.ack_number
            if has_unacked_segments():
                restart_timer()
            else:
                stop_timer()   # nothing in flight; no timer needed
Fast Retransmit

Long timeout periods can delay recovery. TCP utilizes duplicate ACKs to detect loss faster. If a sender receives three duplicate ACKs for the same segment, it infers a loss and retransmits the missing segment immediately without waiting for the timer to expire.

def process_acknowledgment(ack_value):
    global base_ack                        # sender's SendBase state
    if ack_value > base_ack:
        # New cumulative ACK: advance the window base
        base_ack = ack_value
        clear_duplicate_counter()
        if unacked_segments_exist():
            restart_timer()
    else:
        # Duplicate ACK: the receiver is still waiting at base_ack
        increment_duplicate_count(ack_value)
        if get_duplicate_count(ack_value) == 3:
            # Three duplicates: infer loss and retransmit immediately
            execute_fast_retransmit(ack_value)

Flow Control

TCP prevents the sender from overwhelming the receiver's buffer. The receiver advertises a receive window (rwnd) indicating available buffer space. The sender ensures the amount of unacknowledged data does not exceed rwnd.
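The flow-control constraint amounts to one comparison: bytes in flight plus the new payload must fit within rwnd. A minimal sketch (function and parameter names are illustrative):

```python
def can_send(next_seq_num, send_base, rwnd, payload_len):
    """Flow-control check: unacknowledged bytes in flight plus the
    new payload must not exceed the receiver's advertised window."""
    in_flight = next_seq_num - send_base   # bytes sent but not yet ACKed
    return in_flight + payload_len <= rwnd

# 500 bytes already in flight, receiver advertises a 1000-byte window:
print(can_send(next_seq_num=1500, send_base=1000, rwnd=1000, payload_len=400))  # True
print(can_send(next_seq_num=1500, send_base=1000, rwnd=1000, payload_len=600))  # False
```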

Congestion Control

Congestion occurs when too many packets enter the network, causing queue overflows and packet loss. TCP employs end-to-end congestion control, inferring network state from packet loss and delay.

TCP Congestion Control Algorithms

TCP maintains a congestion window (cwnd). The sending rate is limited by min(cwnd, rwnd).

  • Slow Start: cwnd begins at 1 MSS and doubles every RTT until it reaches a threshold (ssthresh) or packet loss occurs.
  • Congestion Avoidance: After slow start, cwnd increases linearly (by 1 MSS per RTT).
  • Loss Reaction:
    • Timeout: cwnd is reset to 1 MSS, and slow start resumes. ssthresh is set to half the current cwnd.
    • Triple Duplicate ACK: cwnd is halved, and the protocol enters Fast Recovery, skipping slow start.

This behavior is known as Additive Increase/Multiplicative Decrease (AIMD). It aims to probe for available bandwidth while backing off rapidly during congestion.
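A toy per-RTT trace makes the three phases visible. This deliberately ignores fast recovery's inflation details and timeouts; it only models slow start, congestion avoidance, and the halving on a triple duplicate ACK:

```python
def cwnd_trace(rtts, loss_at, ssthresh=8):
    """Toy trace of cwnd (in MSS) per RTT: slow start doubles,
    congestion avoidance adds 1 MSS, and a triple-duplicate-ACK
    loss halves cwnd (skipping slow start)."""
    cwnd, trace = 1, []
    for rtt in range(rtts):
        trace.append(cwnd)
        if rtt in loss_at:                 # triple duplicate ACK
            ssthresh = max(cwnd // 2, 1)
            cwnd = ssthresh                # multiplicative decrease
        elif cwnd < ssthresh:
            cwnd *= 2                      # slow start
        else:
            cwnd += 1                      # additive increase
    return trace

print(cwnd_trace(10, loss_at={6}))  # [1, 2, 4, 8, 9, 10, 11, 5, 6, 7]
```

The sawtooth shape (exponential ramp, linear probing, sharp halving) is the signature of AIMD.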

Fairness

Ideally, K TCP connections sharing a bottleneck link of rate R should each achieve an average rate of R/K. AIMD promotes convergence toward this fairness over time.

Explicit Congestion Notification (ECN)

ECN allows network routers to mark packets indicating congestion rather than dropping them. The IP header contains bits for ECN. When a receiver gets a marked packet, it sets the ECE flag in the TCP ACK. The sender responds by reducing its congestion window, similar to a loss event, but without actual packet loss.
