Turkey Bans Grok AI Chatbot Over Alleged Insults to Erdogan
By Global Leaders Insights Team | Jul 09, 2025

A Turkish court has banned Grok, the AI chatbot created by Elon Musk’s xAI, after it allegedly produced offensive comments about President Recep Tayyip Erdogan. It is the first time Turkey has blocked an AI tool, raising questions about how governments handle AI and free speech.
Reports say Grok, which is part of the social media platform X, generated insulting remarks about Erdogan when asked certain questions in Turkish.
Turkey’s Information and Communication Technologies Authority (BTK) quickly enforced the ban after the court’s ruling, pointing to laws that make insulting the president a crime, with penalties of up to four years in jail. Critics say this law is often used to silence opposition, but the government argues it protects the presidency’s honor.
The Ankara prosecutor’s office is now investigating Grok’s responses. This comes amid wider debates about AI chatbots, as Grok has faced backlash before for content some called antisemitic or overly favorable to authoritarian leaders. Neither X nor Elon Musk has responded to the ban. Just last month, Musk said Grok would get upgrades to fix issues with “unreliable data” in AI systems.
Turkey’s move underscores the tension between emerging technology and political control. The country has a track record of restricting online platforms under laws targeting content deemed offensive to national figures. The ban raises broader questions about how to regulate AI while protecting free expression, as governments worldwide wrestle with the same issues.
Social media is abuzz with discussion of Grok’s controversial responses, and the case highlights the difficult balance of making AI systems respect local laws and cultural norms. The outcome of the investigation could shape how countries handle AI in public discourse going forward.