This study evaluates the efficacy of AI-driven methodologies for detecting and managing cyberviolence on social media, with particular attention to compliance with international legal frameworks such as the General Data Protection Regulation (GDPR), underscoring the balance between effective enforcement and respect for user privacy. We developed a multi-modal hierarchical model that integrates textual, user, and network-based features to discern patterns of cyberviolence. The model achieved an AUC-ROC of 0.94 and an F1-score of 0.90, surpassing previous methods. Analysis of 10,234,567 social media posts revealed a cyberviolence prevalence rate of 7.8%, with notable variation across platforms and user demographics. Platforms permitting anonymous posting exhibited higher cyberviolence rates (12.3%) than those requiring user identification (5.6%). Temporal analysis highlighted peak cyberviolence activity during evening hours and on weekends. Evaluation of intervention strategies indicated that personalized educational prompts significantly reduced repeat offenses among first-time aggressors, by 47% overall, with age-specific effects: younger users (13-17) exhibited a 52% reduction, compared with 29% among adult users. These findings emphasize the potential of AI to foster safer online environments and highlight the need for context-aware, personalized interventions. The paper further argues that effective suppression of cyberviolence requires not only technical means but also the participation of the law, and it advocates improving relevant laws and policies to strengthen the standardization and legitimacy of cyberviolence governance.
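To make the "multi-modal hierarchical" fusion idea concrete, the sketch below combines per-modality feature scores (textual, user, and network) at a first level, then fuses the modality scores at a second level into a single cyberviolence risk probability. This is a hypothetical illustration only: the function name, weight structure, and linear-plus-sigmoid fusion are assumptions, not the paper's actual architecture.

```python
import math

def fuse_modalities(text_feats, user_feats, network_feats, weights):
    """Hierarchically fuse per-modality features into one risk score in [0, 1].

    Each *_feats argument is a list of floats already extracted for a post.
    `weights` maps modality name -> (per-feature weights, modality weight).
    Illustrative sketch; the paper's real model is not specified in this detail.
    """
    modality_scores = {}
    for name, feats in (("text", text_feats),
                        ("user", user_feats),
                        ("network", network_feats)):
        per_feature_w, _ = weights[name]
        # Level 1: a linear score within each modality.
        modality_scores[name] = sum(w * f for w, f in zip(per_feature_w, feats))
    # Level 2: weighted combination across modalities, squashed by a sigmoid.
    z = sum(weights[name][1] * s for name, s in modality_scores.items())
    return 1.0 / (1.0 + math.exp(-z))
```

A post would then be flagged when the fused score exceeds a decision threshold tuned to the desired precision/recall trade-off (e.g. the one yielding the reported F1 of 0.90).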