Two of the world’s biggest artificial intelligence companies announced major advances in consumer AI products last week.

Microsoft-backed OpenAI said that its ChatGPT software could now “see, hear and speak,” conversing using voice alone and responding to user queries in both pictures and words. Meanwhile, Facebook owner Meta announced that an AI assistant and multiple celebrity chatbot personalities would be available for billions of WhatsApp and Instagram users to talk with.

But as these groups race to commercialize AI, the so-called "guardrails" that prevent these systems from going awry—such as generating toxic speech and misinformation, or helping commit crimes—are struggling to evolve in tandem, according to AI leaders and researchers.
