This study examines how media laws worldwide are evolving to address the challenges posed by misinformation in digital communication environments. As information flows increasingly shift to online platforms, traditional regulatory approaches have become insufficient for managing manipulated content, technologically enhanced falsehoods, and the influence of automated systems. The analysis highlights significant differences in regulatory models across regions, revealing varying degrees of effectiveness, proportionality, and alignment with democratic principles. It also identifies the risks associated with overregulation, including the potential suppression of legitimate expression and the misuse of legal instruments for political purposes. A key finding is that current frameworks often lag behind rapidly advancing technologies, particularly those that generate synthetic media. The discussion underscores the need for legal structures that promote transparency, platform responsibility, and ethical governance while preserving fundamental human rights. The study concludes by emphasising the importance of international cooperation, ongoing evaluation of platform practices, and future research focused on cross-border governance and the societal impacts of artificial intelligence-driven misinformation. These insights contribute to a deeper understanding of how media laws must adapt to safeguard democratic values in a complex digital landscape.