This AI Chatbot Is Blowing People's Minds. Here’s What It’s Been Writing.
(Bloomberg) -- A new chatbot created by artificial intelligence non-profit OpenAI Inc. has taken the internet by storm, as users speculated on its ability to replace everyone from playwrights to college essay writers.
From historical arguments to poems about cryptocurrency, users took to Twitter to share their surprise at the detailed answers the chatbot, called ChatGPT, provided after OpenAI sought user feedback on the model Wednesday.
OpenAI’s chief executive officer Sam Altman said in a tweet Thursday that there has been “a lot more demand” than expected.
San Francisco, California-based OpenAI has made headlines with its GPT-3 software, which allows AI models to respond intelligently to text prompts. Earlier this year, the second version of its DALL-E model went viral for its ability to generate photo-realistic images from users' text submissions.
OpenAI was co-founded about seven years ago by Tesla Inc. CEO Elon Musk, Altman and other investors to develop AI technology that "benefits all of humanity." Musk left the company in 2018 after disagreements over its direction, but on Thursday he offered an endorsement of the model's abilities on Twitter.
Chatbot technology is not new, and its deployment has seen mixed success. Microsoft Corp.'s AI bot 'Tay' was taken down in 2016 after Twitter users taught it to make racist, sexist and otherwise offensive remarks. Another bot, developed by Meta Platforms Inc., suffered similar issues this year.
Developers acknowledge the model “sometimes writes plausible-sounding but incorrect or nonsensical answers” and can be “excessively verbose” due to the training it received from humans.
While most people were delighted with the bot's musings, some were quick to point out flaws, such as the model giving a detailed but incorrect answer to an algebra question, and the ease with which users could bypass its limits on output related to issues like gore, crime and racism.
©2022 Bloomberg L.P.