ChatGPT has the answers. How does ChatGPT get its information?
I ask it to spin up a short story about quantum physics. In less time than it takes to write that sentence, ChatGPT starts answering.
“Once upon a time, there was a strange and mysterious world that existed beside ours,” begins the response. It goes on to describe a physics professor sitting alone in his office on a dark and stormy night (of course). “His mind is consumed by the mysteries of quantum physics… It was a force that could bend the fabric of space and time, twisting the rules of reality itself,” reads the chat window.
Wow. ChatGPT’s answer is oddly entertaining and surprisingly educational. In the end, the old professor figures it all out and shares his knowledge with the world. The story is impressive and a little unnerving, especially its last two sentences:
“His theory changes the way we see the world and leads to new technologies, but it also opens the door to powers beyond human understanding, which can be used for good or evil. It changes the future of humanity forever.”
Yes, that pretty much speaks for itself.
Artificial intelligence in court: DoNotPay’s “robot lawyer” will handle a speeding ticket case in court via AI. How it works.
What is ChatGPT?
ChatGPT (Generative Pre-trained Transformer) is the latest viral sensation from San Francisco-based startup OpenAI.
It’s a free online tool trained on millions of pages of writing from across the internet, which lets it understand text prompts and respond in just about any style you like.
When I ask it to explain ChatGPT to my mom, it fires back:
“ChatGPT is a computer program that uses artificial intelligence (AI) to understand and respond to natural language text, just like a human. It can answer questions, write sentences, and even carry on a conversation with you. It’s like having your own personal bot that can understand you and talk to you!”

ChatGPT is free. Try it yourself
The easiest way to get a sense of its powers is to try it yourself. It’s free: just sign up for an account, then ask it a question.
You can even ask it to write something for you, anything really, in any style: a poem using your child’s name, lyrics about your dog, business slogans, articles, research papers, even computer code. It writes a reply in a few seconds and will rework it in the same thread if you don’t like the first answer.
ChatGPT was released as a prototype to the public on November 30, 2022. Within five days, more than 1 million people had used it.

By comparison, it took Netflix three and a half years to get that many people on board. Facebook needed 10 months to reach its first million users, and Spotify took five months to hit the million-user mark.
Microsoft confirmed on Monday that it is making a “multiyear, multibillion dollar” investment in OpenAI. Though it didn’t disclose the exact dollar amount, the deal is reportedly worth $10 billion.
No more free Netflix?: Netflix says it plans to end free password sharing before April.
How does ChatGPT work?
ChatGPT was trained on writing that already existed on the internet, up through 2021. When you type in a question or prompt, it responds at lightning speed.
“I am a machine learning model that has been trained on a large dataset of text, allowing me to understand and respond to text-based input,” it replies when I ask it to explain how it works.
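Under the hood, developers can tap the same kind of model through OpenAI’s programming interface. Here is a minimal sketch of what that looks like in Python; it assumes the pre-1.0 openai package and the text-davinci-003 completion model that was publicly available when ChatGPT launched, and the prompt and settings are illustrative choices, not OpenAI recommendations.

```python
# A minimal sketch of querying an OpenAI text model from Python.
# Assumptions: the pre-1.0 "openai" package (pip install openai) and an
# API key from OpenAI; the model name, prompt, and parameters below are
# illustrative, not prescribed by OpenAI.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # keep secrets out of source code

response = openai.Completion.create(
    model="text-davinci-003",   # a GPT-3.5-era completion model
    prompt="Explain ChatGPT to my mom in two sentences.",
    max_tokens=100,             # cap the length of the reply
    temperature=0.7,            # higher values make the output more varied
)

print(response.choices[0].text.strip())
```

ChatGPT itself adds a conversational layer on top of this kind of model, keeping track of the thread so follow-up questions and revisions make sense.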
The idea behind this new generative AI is that it could reinvent everything from online search engines like Google to digital assistants like Alexa and Siri. It could also do much of the heavy lifting of writing and creating content, powering customer-service chatbots, doing research, drafting legal documents, and much more.
“[OpenAI] will offer vastly new capabilities … at a scale and speed never seen before, reinventing almost everything about our lives and our careers,” says Neil Voss, co-founder of the augmented reality startup Anima. Voss uses OpenAI’s system to create augmented reality “objects” that can talk with their owners.
He and many others expect OpenAI’s latest tools to become the most important technology since the launch of the smartphone, with capabilities already being likened to the early days of the internet.
“Very quickly, AI will not only make finding information [much easier] but make understanding it, reshaping it and making it useful, much faster,” Voss explains in an email.
In a follow-up question about how ChatGPT and this type of next-generation AI will be used in the next year or two, the program highlighted many applications, including healthcare, “for things like diagnostics, drug discovery and personalized treatment plans,” and creating content such as “human-like text, audio, creative writing, news articles, video scripts, and more.”
While some worry that computers could push people out of their jobs, it’s that last part of the bot’s answer that raises the most serious red flags.
What are the risks of ChatGPT?
ChatGPT parrots back the content it has found, and while its answers “look” reliable, they can be completely wrong. (We all know by now that not everything you read on the internet is true, right?)
Artificial intelligence can’t yet tell fact from fiction, and ChatGPT was trained on data that is already about two years old. Ask it a time-sensitive question, like what the latest iPhone model is, and it says it’s the iPhone 13.
“In the past, AI has been used largely for predictions or classification. ChatGPT will actually generate new articles, news items, blog posts, even school essays, and it’s very hard to tell them apart from real human writing,” Helen Lee Bouygues tells me via email.
Bouygues is the president and founder of the Reboot Foundation, which promotes critical thinking to combat the rise of misinformation. She worries that new technology like ChatGPT could spread misinformation or fake news, reinforce bias, or even be used to spread propaganda.
“My biggest concern is that it will make people dumber, especially young people, while computers get smarter,” Bouygues explains. “Why? Because more and more people will use tools like ChatGPT to answer questions or generally engage with the world without richer, more reflective types of thinking. Take social media. People click, post, and retweet articles and content they haven’t even read. ChatGPT will make that worse by making it even easier for people not to think. Instead, it will be all too easy to have a bot conjure up their thoughts and ideas for them.”
OpenAI’s usage and content policies specifically warn against deceptive practices, including promoting dishonesty, deceiving or manipulating users, or attempting to influence policy. They also state that when sharing content, “all users must clearly indicate that it has been generated by artificial intelligence” in a way that no one could reasonably miss or misunderstand.
But we’re talking about people here. And honesty? Sigh.
BuzzFeed announced Thursday that it will partner with ChatGPT’s maker, OpenAI, to help create content. The news site CNET has been heavily criticized for using artificial intelligence to produce informational articles in its money section without full disclosure and transparency.
A recent survey of 1,000 American college students by the online magazine Intelligent.com also found that one in three has used ChatGPT on written assignments, even though most of them consider it “cheating.”
The New York City and Seattle school districts recently banned ChatGPT from their devices and networks, and many colleges are considering similar steps.
How to detect AI-written content
In a statement, an OpenAI spokesperson told us via email that the company is already working on a tool to help identify text generated by ChatGPT. It is said to work like an algorithmic “watermark,” a kind of invisible flag embedded in ChatGPT’s writing that can identify its source, according to CBS.
“We’ve always called for transparency around the use of AI-generated text. Our policies require that users be up-front with their audience when using APIs and creative tools such as DALL-E and GPT-3,” the OpenAI statement reads.
A senior at Princeton University recently created an app called GPTZero that determines whether an article was written by AI, but it isn’t ready for prime time yet.
I used an AI content detector called Clerk, and it caught most of the ChatGPT-generated text I fed it. But some fear that AI’s ability to imitate humans will advance much faster than the technology designed to catch it.
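Tools like GPTZero reportedly lean on statistical clues such as “perplexity”: how predictable a passage looks to a language model, since machine-generated prose tends to be more uniform and predictable than human writing. The sketch below illustrates that general idea only; it is not GPTZero’s or Clerk’s actual method. It assumes the Hugging Face transformers and torch packages, uses the small GPT-2 model purely as a convenient scorer, and the threshold is a made-up placeholder.

```python
# An illustrative sketch of perplexity-based AI-text detection.
# Assumptions: "transformers" and "torch" are installed, GPT-2 serves as
# the scoring model, and the threshold below is a placeholder, not a
# calibrated value from any real detector.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Return how 'surprised' GPT-2 is by the text (lower = more predictable)."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        # The model's loss is the average negative log-likelihood per token;
        # exponentiating it gives the perplexity.
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return float(torch.exp(loss))

sample = "ChatGPT is a computer program that uses artificial intelligence."
score = perplexity(sample)
print(f"Perplexity: {score:.1f}")

if score < 30:  # placeholder threshold, for illustration only
    print("Suspiciously predictable; possibly machine-generated.")
else:
    print("Reads more like typical human writing.")
```

Light human editing can wash out these statistical signals, which is one reason detection may keep lagging behind generation.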
Either way, the cat is out of the bag, and there’s no stuffing it back in.
“It’s not evil,” Voss says. “On the other side of this are achievements that we could only dream of but that were too difficult to achieve. It is up to us to apply that potential to things that are worthwhile, meaningful, and humane.”
When I asked ChatGPT to write a sentence about the ethical implications of ChatGPT in the style of tech journalist Jennifer Jolly, it replied:
“ChatGPT is a technological game-changer, but it also raises important ethical considerations, such as how to ensure that this powerful tool is used responsibly and for the greater good.”
I must admit, I couldn’t have said it better myself.
Jennifer Jolly is an Emmy Award-winning consumer technology columnist. The views and opinions expressed in this column are those of the author and do not necessarily reflect the views and opinions of USA TODAY.