CONGESTION CONTROL IN WIRED COMMUNICATION USING REINFORCEMENT LEARNING

Authors

  • Mettu Jhansi Lakshmi1, Mahesh Babu Arrama2

Abstract

As the internet and network technologies evolve rapidly, congestion control strategies are becoming increasingly relevant. A router experiences congestion when its buffer cannot hold all incoming packets, and in the congestion-avoidance phase, where multiple flows transmit data simultaneously over a shared link, traditional TCP-style congestion management algorithms treat all flows identically. Users expect a high quality of service, so congestion control techniques that perform well across diverse networks are essential. Traditional protocols fall short because of their rule-based design paradigm: a fixed mapping from the observed network state to a corresponding action cannot guarantee optimal performance in every environment. To be more efficient, such protocols would need to adapt their actions to their surroundings or learn from past experience. To address this problem, we present QTCP, a method that combines the TCP design framework with a reinforcement-learning (Q-learning) framework. QTCP allows senders to learn an effective congestion management policy online, and because it does not rely on predetermined rules, it can be applied to many network configurations. The current study discusses the procedures, processes, and algorithms employed in wired networks and integrates the findings and suggestions of prior research.
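To make the Q-learning idea concrete, the sketch below shows the general mechanism the abstract refers to: a sender maintains a table of state-action values and updates it from observed rewards, choosing congestion-window adjustments without fixed rules. The state discretization, the three-action set, and the hyperparameter values here are illustrative assumptions, not the paper's exact QTCP formulation.

```python
import random

# Hypothetical action set: shrink, hold, or grow the congestion window
# (in segments). QTCP's actual action space may differ.
ACTIONS = (-1, 0, +1)

ALPHA = 0.3    # learning rate (assumed value)
GAMMA = 0.9    # discount factor (assumed value)
EPSILON = 0.1  # exploration probability (assumed value)

# Q-table: maps (state, action) -> estimated long-term reward.
q_table = {}

def choose_action(state):
    """Epsilon-greedy policy: explore occasionally, otherwise pick the
    action with the highest learned value for this state."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table.get((state, a), 0.0))

def update(state, action, reward, next_state):
    """Standard Q-learning update:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(q_table.get((next_state, a), 0.0) for a in ACTIONS)
    old = q_table.get((state, action), 0.0)
    q_table[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)
```

In a sender's control loop, the state would be derived from measurements such as RTT and throughput, and the reward from a utility of those measurements; the agent then calls `choose_action` each decision interval and `update` once the effect of the action is observed.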

 

Published

2024-05-03

Section

Articles

How to Cite

CONGESTION CONTROL IN WIRED COMMUNICATION USING REINFORCEMENT LEARNING. (2024). JOURNAL OF BASIC SCIENCE AND ENGINEERING, 21(1), 546-558. https://yigkx.org.cn/index.php/jbse/article/view/121