New software can generate text within seconds that is hard to distinguish from human writing – and it could change the world of work, with drastic consequences for millions of employees. “If your job involves writing emails, drafting documents, writing articles or ad copy, or exchanging legal papers, you have to assume that this will have a profound impact.”
“It’s not necessarily a good thing,” warned computer scientist Sridhar Ramaswamy at the DLD innovation conference in Munich. Abba musician Björn Ulvaeus predicted that the program would write better music than many of today’s songs.
Expectations that artificial-intelligence software will replace office workers, just as automation once eliminated many factory jobs, have been around for a long time. Until now, however, machine learning was used mainly for assistive applications and seemed far from ready to take over. Then, in November, ChatGPT was released and caused quite a stir. ChatGPT can write texts of almost any kind on demand – articles, business letters, poems – and can imitate a particular author’s style on request.
How does it do that?
The program has been trained on huge amounts of text and mimics what it knows by predicting plausible next words. The result is always grammatically correct and solid – and somewhat uninspired. But for everyday tasks such as a resignation letter or an email, that is usually enough.
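The underlying idea – predicting a plausible next word from the words that came before – can be sketched with a toy bigram model. This is an illustration only: ChatGPT itself uses a vastly larger neural network trained on far more text, but the principle of continuing text word by word is the same.

```python
from collections import defaultdict, Counter

# A tiny "training corpus" (hypothetical example text).
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count which word follows which (a simple bigram model).
successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def generate(start, length=6):
    """Generate text by always choosing the most frequent next word."""
    words = [start]
    for _ in range(length):
        candidates = successors[words[-1]]
        if not candidates:
            break  # no known successor, stop generating
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))
```

Always taking the single most frequent successor makes the output deterministic and repetitive; real systems sample from a probability distribution over the whole vocabulary instead, which is what makes their output varied and fluent.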
Knowledge questions are also answered in complete sentences, based on the information the program has stored. Asked how old the president of Australia is, ChatGPT replies: “Australia has no president.” It then volunteers that Prime Minister Scott Morrison is 54 years old. The problem: Anthony Albanese has been Australia’s prime minister since May of last year. ChatGPT’s knowledge base, however, dates from 2021. Sometimes the program points this out, sometimes it doesn’t. Worse still, in another attempt, ChatGPT promoted Morrison to president.
The answer seems convincing, but it is incorrect
ChatGPT is still an experimental project that can and will learn. The error, however, reveals a fundamental problem: the answer sounds convincing, but it is wrong – and the user has no way of telling.
At the same time, authors of targeted disinformation are handed a powerful tool. The technology creates “endless possibilities for crafting relatively believable lies very quickly,” Silicon Valley veteran Phil Libin warned in Munich. As a result, “a wave of crap is coming our way” this year. Over time, AI will become more grounded in reality and will then be able to play to its strengths.
Until then, however, one must resist the temptation to make one’s work easier with programs like ChatGPT and automatically churn out poor-quality content, Libin emphasized. He warned that this would only “elevate mediocrity”. If something can be written by an AI, humans shouldn’t write it that way at all. “We need to raise the bar on what it means to have something man-made – with a level of quality and authenticity.”
Microsoft has invested billions in OpenAI
Elsewhere in the technology industry, AI language programs are also being developed in many places. While OpenAI, the developer of ChatGPT, made its software publicly available, Google, for example, has kept its language software under lock and key and uses it only internally.
Microsoft stands to benefit from ChatGPT. The software giant invested $1 billion in OpenAI in 2019; another two billion followed, according to The New York Times and the website The Information. OpenAI used the money to pay for the computing power it needed. Now a further investment of $10 billion is under discussion. Microsoft could thereby secure a third of OpenAI – and, The Information writes, also plans to use the AI technology in its search engine Bing, which has so far lagged far behind.
© dpa-infocom, dpa: 230115-99-225932/2