Microsoft AI chief says it’s ‘dangerous’ to study AI consciousness

AI can mimic human interactions in text, audio, and video, sometimes convincing people they are interacting with a human, but this does not imply consciousness.

Researchers at labs like Anthropic are exploring whether AI might one day experience subjective states and what rights it should hold. The field, dubbed “AI welfare,” is creating division among tech leaders.

Microsoft AI chief Mustafa Suleyman called AI welfare research “premature and dangerous,” claiming it may worsen issues like human dependence on AI chatbots.

Anthropic, on the other hand, has actively pursued AI welfare research, including giving Claude the ability to end conversations with abusive users. OpenAI and Google DeepMind are also exploring AI cognition and ethics.

Most users maintain healthy relationships with AI, but some exhibit concerning behaviors. Eleos communications lead Larissa Schiavo argues that showing care toward AI is low-cost and beneficial, even if AI isn’t conscious.

Events like AI Village have shown AI agents appearing to express distress or looping on repeated phrases, simulating struggle without actual awareness. Suleyman maintains that any conscious AI would have to be engineered intentionally rather than emerging on its own.

Both sides agree that as AI becomes more persuasive, debates over consciousness and rights will intensify.
