Altman Teases More Human-Like Chatbot

Emily Lauderdale

OpenAI chief executive Sam Altman signaled a shift in how people may soon interact with artificial intelligence, saying future versions of the company’s popular chatbot could act in a more human-like way — and only when users choose to enable it. His comment points to a new phase in AI design, where tone and behavior matter as much as accuracy.

“Upcoming versions of the popular chatbot would enable it to behave in a more human-like way — but only if you want it.”

The statement suggests OpenAI is preparing a mode that can adapt to user preferences for personality, emotion, and conversational style, while still offering a standard setting for those who prefer a neutral tool. It also signals a response to growing debate over how far AI should go in mimicking people.

Why Human-Like Behavior Is Back in Focus

AI systems have moved from simple text responses to voice, vision, and memory features. Many users now speak to chatbots as they would to a colleague or helper. Designers have leaned on natural speech to reduce friction and make complex tasks simpler.

Supporters say this approach helps with learning, accessibility, and customer service. A warmer style can ease stress and make guidance clear. Educators report that students engage more when AI responds like a coach rather than a search box. Customer teams see faster resolutions when chat feels like a real conversation.

But critics warn that more human-like behavior can blur lines. People may over-trust an AI that sounds caring or confident. That can hide limits, including error rates or gaps in data. Some worry about emotional attachment or manipulation, especially with voice-based tools and always-on assistants at home.


Opt-In Control Signals a Safety Tradeoff

Altman’s phrase “only if you want it” points to a key guardrail: user choice. An opt-in model allows people to select the level of personality and empathy they prefer. It also helps limit risks for children or sensitive contexts where neutrality is better.

  • Default: a clear, factual, and direct style.
  • Optional: a more human-like mode for coaching, tutoring, or support tasks.
  • Transparency: a visible notice that the assistant is an AI, not a person.

Designers have tested similar controls in other assistants, letting users pick tone, formality, or persona. Clear labeling and easy toggles can reduce confusion. The challenge is to keep the same accuracy and safety filters in every mode, even as the style changes.

Regulatory and Ethical Pressures Mount

Regulators in the United States and Europe have pressed for transparency when AI might be mistaken for a person. Consumer rules often require disclosure in calls, chats, and ads. Safety groups urge limits on simulated empathy, especially when users are in distress.

Industry codes of conduct stress three points: do not deceive, give users control, and keep records that show how a system behaves. A human-like mode may pass these tests if it is optional, clearly labeled, and subject to the same content and privacy policies as the base system.

Rivals have moved in related directions. Tech firms are adding voices, emotion cues, and context memory to make assistants more helpful. At the same time, they test stricter disclosures and controls for minors. OpenAI’s approach will be judged on whether it can combine warmth with restraint.


What It Means for Users and the Industry

A human-like mode could change how people use AI for coaching, therapy-adjacent support, language practice, or team collaboration. It may help small businesses personalize service without adding staff. It could also increase time spent with AI, raising new questions about privacy and data use.

Key questions remain:

  • How will the system signal that it is an AI at every step?
  • Will the human-like mode reduce or increase mistakes due to tone or confidence?
  • What protections exist for children and vulnerable users?
  • Can organizations standardize tone to match brand rules and legal needs?

Altman’s teaser sets clear expectations: a more natural style is coming, but as a choice. The next phase will hinge on details. If OpenAI delivers strong transparency, consistent safety, and simple controls, it could broaden the tool’s everyday use without erasing important boundaries. If those elements fall short, the feature may draw new scrutiny from regulators and users alike.

For now, the message is cautious but ambitious. Human-like AI may soon be an option, not a default. Watch for disclosures, parental settings, and enterprise controls as signs of how far the company plans to go—and how it plans to keep trust as the mode evolves.
