How Purdue is Helping Design and Increase Trust in AI – WISH-TV | Indianapolis News | Indiana weather
West Lafayette, Indiana (WISH) — Indiana researchers are helping shape world-changing artificial intelligence.
Researchers at Purdue University told I-Team 8 that a computer system that simulates aspects of human cognition will affect every aspect of people’s lives, from farming to medicine to how students learn and write.
Snehasis Mukhopadhyay, a Purdue professor of computer and information sciences with 30 years of AI research under his belt, said AI is not what people imagine it to be. It is basically computer code: algorithms.
“An algorithm is basically a step-by-step approach to solving a problem,” Mukhopadhyay said.
Mukhopadhyay compared an algorithm to a recipe for making tea: “First, you heat the water, put the tea bag in the cup, then pour the water in, add the milk and sugar, and stir.”
The same types of steps can be applied to any reasoning task that humans can do, and with fast computers, AI algorithms can do the job very quickly.
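Mukhopadhyay's tea recipe can be sketched in code as an ordered list of steps (a toy illustration; the function name and step strings below are hypothetical, not from any real system):

```python
def make_tea():
    """Run the tea-making 'algorithm': an ordered sequence of steps."""
    steps = [
        "heat the water",
        "put the tea bag in the cup",
        "pour the water in",
        "add the milk and sugar",
        "stir",
    ]
    # Execute each step in order, just as a computer follows an algorithm.
    for step in steps:
        print(step)
    return steps
```

The point of the example is only that order matters: swapping steps (pouring the water before heating it) breaks the outcome, which is exactly why an algorithm is defined as a step-by-step procedure.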
AI “is going to be ubiquitous. Indeed, everywhere,” said Arjan Durresi, also a professor of computer and information sciences at Purdue.
Durresi said he and his team of student researchers are devising ways to measure people’s trust in existing AI systems in agriculture and medicine. “Right now, they are better than doctors at detecting cancer. The problem is that doctors are somehow afraid to use them because they don’t know how the algorithm came to the decision.”
Durresi’s team is trying to find the Goldilocks zone of usability and trustworthiness for AI systems. “You can’t embrace something or trust something if you don’t understand it.”
Durresi described today’s AI as essentially a child that needs supervision, or what he calls “humans in the loop.”
“In the distant future, AI really becomes an adult, and then, instead of seeing it as a simple tool, as it is today, we have to see it as a colleague.”
Until then, we’ll have child-like forms of AI, which include ChatGPT, a prototype launched on November 30.
“You sign in, you ask it a question, and it spits out the writing,” said Bradley Dilger, a professor who teaches writing at Purdue and directs the introductory composition program.
He said ChatGPT is an AI chatbot that is very good at producing basic writing on request. However, it is not good enough to produce research papers on highly specific topics.
ChatGPT isn’t the only AI-powered chatbot.
In less than five minutes, Dilger used another system to create a cover letter for a job. “What would happen is, if it was really easy to create a cover letter, the value of the cover letter would drop, and there would be other ways of measuring people’s potential for jobs.”
Professors are already considering ways to stop students from cheating using artificial intelligence and chatbots, Dilger said, but the problem of cheating is not new to teaching.
“It’s very hard to stop someone from paying a company $100 to write a research paper for them. We know that happens. We try to do things to make it less likely, like asking students to turn in drafts, asking students to choose topics and subject matter that’s not generic,” Dilger said. “You know, that’s very specific, and then actually working with the material that they’re interested in.”
All of the researchers who spoke with I-Team 8 were optimistic about the future of AI, but admitted that it could be dangerous if technology creators don’t get it right.
Durresi said, “The worst-case scenario, in my opinion, is if this is abused. Like anything else, technology in general is neutral. You can use a weapon for good or bad, so this is the worst-case scenario, that somebody uses it for bad ends.”
Mukhopadhyay added, “Like a child, if you feed a child a lot of biased views of the world, the child will grow up to be biased.”
Purdue’s dean told I-Team 8 that the university created new majors in computer science and philosophy so that its students can help create the future of intelligent, ethically sound AI.