How will artificial intelligence affect higher education and criminal justice? : UNM Newsroom

It has been almost a year since OpenAI publicly introduced its generative AI platforms ChatGPT and DALL-E. The availability of this new technology immediately sent shock waves around the world, and AI became part of the public lexicon.

The technology’s sudden availability has left many wondering how it might affect the world. Episode one of “It’s (Probably) Not Rocket Science,” a podcast from the University of New Mexico, explores the impact artificial intelligence could have on the world.

Leo Lo, dean of UNM’s College of University Libraries and Learning Sciences, has made artificial intelligence and its potential uses in education an area of research in the months since he first encountered it.

“It kind of shocked everyone, and it certainly shocked me, that you could produce all these wonderful written essays and very human responses,” Lo said on a recent episode of UNM’s “It’s (Probably) Not Rocket Science.”

As a learning expert, Lo quickly jumped in to see how ChatGPT could transform higher education and improve student and faculty workflows. He took a course at Oxford University, began interviewing colleagues in university libraries and worked to develop best practices for using the technology. The importance of AI literacy was immediately apparent to him.

“Every field, from art to business, will be touched by people who know how to use AI,” Lo said. “There is a saying that humans will not be replaced by artificial intelligence, at least in the near future, but they will be replaced by humans who use AI.”

Lo recommends that everyone explore the technology and consider how it could fit into their workflow. Prompt engineering, or the ability to ask generative AI the right questions, will become a must-have resume skill in the future. Lo developed and published the CLEAR framework for how best to prompt AI.

Here’s how it works:

  1. Concise: Keep the prompt short and to the point. Don’t overload the AI with unnecessary information.
  2. Logical: Structure the prompt logically, with a clear flow of ideas.
  3. Explicit: Be explicit about the expected length and format of the output.
  4. Adaptive: Adjust the words and phrases until you are satisfied with the output.
  5. Reflective: Continually evaluate and refine your prompt based on the responses you get.
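To make the checklist above concrete, here is a small hypothetical sketch (not from the article, and not tied to any particular AI tool): it assembles a prompt string whose parts correspond to the framework’s first three steps, with the last two noted as the iterative loop around them. The topic and wording are invented for illustration.

```python
# Hypothetical illustration of a prompt written with Lo's CLEAR
# framework in mind. The subject matter is made up for this example.

clear_prompt = (
    # Concise: one short, focused request with no extra background.
    "Summarize the main causes of urban air pollution. "
    # Logical: ask for the ideas in a clear order.
    "Present the causes from largest to smallest contributor. "
    # Explicit: state the expected length and format of the output.
    "Use exactly five bullet points of one sentence each."
)

print(clear_prompt)

# The remaining steps are iterative rather than part of the text itself:
# Adaptive  - adjust words and phrases in the prompt above, then re-run it.
# Reflective - evaluate each answer and refine the prompt until satisfied.
```

The point of the sketch is simply that a well-formed prompt is built deliberately, piece by piece, rather than typed in one breath.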

Lo described himself as optimistic about the future of artificial intelligence, citing its potential to help synthesize information, tailor learning to individual students, and save time. He already uses ChatGPT to help him draft replies to emails, though he is careful to disclose its use to recipients and to edit the drafts as needed.

Still, he worries about misuse of the technology in higher education. As debates about plagiarism rage across the country, Lo’s first priority is making sure faculty understand artificial intelligence. He recommends that teachers and professors give students clear guidelines on how the technology may or may not be used in coursework. He also warns users, students or not, about a phenomenon known as “hallucination,” in which the technology generates text that is completely fictional and presents it as fact.

“In libraries, we have a lot of students coming in with fake citations, and we have to tell them the sources don’t exist,” Lo said. “We use these opportunities to teach students that a chatbot is great in some ways and terrible in others.”

It’s not just students who have been misled by AI-generated information. Just a few months ago, lawyers in New York were fined for citing bogus cases in a legal brief they drafted with ChatGPT. These and other legal issues are front of mind for Sonia Gipson Rankin, a UNM School of Law professor and computer scientist also featured in the episode.

Gipson Rankin also raised concerns about data vulnerabilities and the inability to hold AI systems accountable in court.

She explained that governments and criminal justice systems already use third-party algorithms and artificial intelligence to help decide everything from whether someone may have committed fraud to whether they should be released from jail on bail.

“An algorithm is hard-coded. That is, if the user does something, the software has been programmed by a human to do something in response,” Gipson Rankin said. “Artificial intelligence is a system that can predict or determine what the next right thing would be.”

This distinction is crucial in the legal space, because when a question arises about a decision one of these systems has made, an algorithm and its code can be examined, and its programmer can help explain how the technology was built. Artificial intelligence, by contrast, makes its own predictions.

“What do I put on the stand when it comes to figuring out why the AI decided to do this?” Gipson Rankin said. “It understands how to get a result, but we don’t have enough information about the process, and that’s a legitimate concern.”

Despite her concerns, Gipson Rankin is ultimately optimistic about the technology and uses it several times a week, both for fun and to explore its possibilities. Her family even got ChatGPT to write a personalized song for her uncle’s 80th birthday. She compared the current state of AI to cars before they were made safer with seat belts.

“I’m very excited about the new mechanisms and tracking tools to ensure that individuals have proper access to the law,” she said. “It’s really great that we can have these ways to expand our ideas, but we want to do it in a way that best protects people’s privacy.”

Check out “It’s (Probably) Not Rocket Science” to hear these topics discussed in more detail. Subscribe on Spotify or Apple Podcasts.

Godfrey Kemp
