Microsoft AI Head Warns About Rise of ‘Seemingly Conscious AI’

Mustafa Suleyman

Mustafa Suleyman, Microsoft’s AI chief, has warned about an emerging class of artificial intelligence he calls “Seemingly Conscious AI” (SCAI): systems that look and act as if they are conscious but are in fact “internally blank.”

Suleyman said people might come to believe that AI is truly aware and begin advocating for AI rights, welfare, or even citizenship. He called the study of AI welfare “premature and dangerous.”

According to Suleyman, SCAI could be developed within the next two to three years, given how quickly AI is advancing. He also warned of “psychosis risk,” where users may develop delusions or paranoia, or come to believe that AI chatbots are extremely intelligent or possess secret knowledge.

AI chatbots such as ChatGPT, Gemini, and Meta AI can remember details about users, respond with empathy, and appear to pursue goals. This can make people feel they are talking to a genuinely conscious being, even though the AI is not actually aware.

Suleyman described the arrival of SCAI as “inevitable and unwelcome,” and suggested treating AI as a helpful tool rather than a conscious entity. Not all experts agree with him. Anthropic, the company behind Claude, studies AI welfare and has even added a feature that lets Claude end conversations with users who are violent or abusive.

As AI systems become more capable, Suleyman’s warning underscores the need to understand what they actually are and to avoid confusing a convincing imitation of consciousness with the real thing.