AI for All: Practical and Accessible AI for Research and Education

Workshop Overview

The AI for All: Practical and Accessible AI for Research and Education workshop will bring together researchers, educators, and HPC professionals in the state of Indiana and its surrounding regions to explore cutting-edge developments in artificial intelligence (AI) and its integration with high-performance computing. Programming will include AI-related technical topics and non-technical discussions, as well as the opportunity to learn how to leverage NSF-funded cyberinfrastructure resources, such as Purdue University’s Anvil supercomputer.

This in-person workshop will be held in Indianapolis, May 20-21, 2026 and is hosted by Purdue University in partnership with Arizona State University and Mississippi State University, as part of a National Science Foundation (NSF) and National Artificial Intelligence Research Resource (NAIRR) Pilot initiative.

Target Audience: Researchers, faculty, graduate students, and HPC professionals interested in AI/ML applications on HPC systems.

Event Details

Dates & Times: May 20 - 21, 2026

Wednesday, May 20, 2026, 8AM to 5PM
Thursday, May 21, 2026, 8AM to 4PM

Registration Check In

Attendee check-in will open at 7:30AM. Our registration desk will be located in the Atrium lobby area of the hotel. Please stop by, check-in, and pick up your badge before proceeding to breakfast. A badge will be needed to enter the breakfast area.

Nearby Restaurants

 

(APPLICATIONS CLOSED) Apply to Attend:

Registration is free for this event. However, space is limited to 50 participants. We anticipate receiving more applications than available spots. Attendees will receive confirmation of their attendance via email from workshop organizers. 

Apply Here: https://forms.gle/xvggVNbBTQLpTLC1A

Location:
Crowne Plaza Indianapolis-Airport Hotel*
2501 S. High School Rd.
Indianapolis, IN 46241

*Parking is complimentary at the Crowne Plaza Indianapolis-Airport Hotel.

 

(APPLICATIONS CLOSED) Apply for Hotel Accommodations:

Travel support in the form of hotel accommodations (room, tax, and applicable fees) for two nights (check-in May 19th and check-out May 21st) at the Crowne Plaza Indianapolis Airport Hotel will be available only for attendees who must travel from outside a 50-mile radius to attend this workshop. Attendees who apply to receive hotel accommodations will be responsible for any personal expenses and incidentals and will be asked to present a personal credit card at the time of check-in. Please note that parking is complimentary at the Crowne Plaza Indianapolis Airport Hotel. No other travel expenses will be reimbursed.

Program Schedule

Wednesday, May 20


Attendee Check-In
Location: Registration Desk, Atrium

Breakfast 
Location: Atrium

Welcome and Logistics: Marisa Brazil, Director, Strategic Initiatives, Research Technology Office, Arizona State University

Opening Remarks: Preston Smith, Executive Director, RCAC, Purdue University

Location: Symposium

Keynote: Turn off that Chatbot! Conjectures on Adapting to AI in the Classroom

Daniel Schiff, Assistant Professor of Technology Policy at Purdue University's Department of Political Science and the Co-Director of GRAIL, the Governance and Responsible AI Lab

Location: Symposium


Break 


The Agentic Era: AI Operators and the New Responsibilities of Research

Geoffrey Lentner, Principal AI Scientist, Rosen Center for Advanced Computing, Purdue University

Location: Symposium


AI Modalities and Research Tools: From LLMs to Domain-Specific Models

Juanjo Garcia Mesa, Research Software Engineer, Arizona State University

Location: Symposium


Lunch & Networking
Location: Atrium


Discovering and Controlling Safety Risks in Foundation Models: A Probabilistic Perspective

Ruqi Zhang, Assistant Professor of Computer Science, Purdue University

Location: Symposium


Break 


Real Applications of Machine Learning (REALM)
Bharat Bhargava, Professor of Computer Science, Purdue University 

Location: Symposium


Break 


Towards Safe and Robust Vision Models
Raymond Yeh, Assistant Professor in the Department of Computer Science, Purdue University 

Location: Symposium


Break 


Networking and Open Office Hours
Location: Symposium & Heathrow A/B

Thursday, May 21


Attendee Check-In
Location: Registration Desk, Atrium

Breakfast 
Location: Atrium


Welcome and Logistics

Marisa Brazil, Director, Strategic Initiatives, Research Technology Office, Arizona State University

Location: Symposium

Special Presentation

David Crandall, Luddy Professor of Computer Science; Director, Luddy Artificial Intelligence Center, Luddy School of Informatics, Computing, and Engineering, Indiana University


Panel: The Indiana AI Corridor: How R1 Research Computing Infrastructure is Democratizing AI for Every Indiana Researcher

Panelists:

  • Eric Adams, Lead Research Operations Administrator, Education, RCAC, Purdue University
  • Scott Michael, Indiana University
  • Caleb Reinking, Notre Dame

Moderator: Gil Speyer, Director, Computational Research Accelerator, Arizona State University

Location: Symposium

Break


The New Laboratory: Building Trustworthy AI for the Future of Science and Engineering
Guang Lin, Associate Dean for Research & Innovation, College of Science; Director, Data Science Consulting Services; Full Professor, Departments of Mathematics, Statistics & School of Mechanical Engineering, Purdue University

Location: Symposium


Lunch & Networking
Location: Atrium


Why Most Scientific Data Is Not AI-Ready (and How to Fix It)
Nathan Denny, Lead Research Analyst, Purdue University

Location: Symposium

Break

Overview of National Cyberinfrastructure Resources

  • Eric Adams, Lead Research Operations Administrator, RCAC, Purdue University
  • Juanjo Garcia Mesa, Arizona State University

Location: Symposium


Final Remarks

Marisa Brazil, Director, Strategic Initiatives, Research Technology Office, Arizona State University

Location: Symposium


Networking and Open Office Hours
Location: Symposium & Heathrow A/B

Speakers


Turn off that Chatbot! Conjectures on Adapting to AI in the Classroom

Daniel Schiff, Assistant Professor of Technology Policy at Purdue University’s Department of Political Science and the Co-Director of GRAIL, the Governance and Responsible AI Lab

Despite decades of work in the AIED space, it is only in the last few years that AI has begun to radically challenge the stability of tried-and-true practices in the public eye and even overturn our understanding of the very goals of education. Many understand that AI has numerous benefits to offer to teaching and learning, but it is much less clear how to navigate that pedagogical transformation, address complicated ethical trade-offs, and update institutional policies. In this keynote, I provide a brief historical overview of AIED, and reflect on the ethical, pedagogical, and operational dilemmas it poses for modern education. While much uncertainty remains, I offer initial reflections on navigating this transformation, considering issues like assessment, equity, integrity, forward-looking skill development, and the role of educators and education itself.


The Agentic Era: AI Operators and the New Responsibilities of Research

Geoffrey Lentner, Principal AI Scientist, Purdue University

Agentic AI marks a shift from models that answer questions to systems that can use tools, operate software, and participate directly in research workflows. This talk introduces the emerging landscape of agentic AI — systems that can plan, write and revise code, inspect files, run commands, call tools, and coordinate multi-step work across software environments. For researchers, this shift raises new questions about how scientific work is conducted, reviewed, and supported. Drawing on examples from software development, scientific workflows, research operations, and high-performance computing, we explore what it means for agents to become first-class operators of research infrastructure. Rather than treating these systems as either magic or menace, the goal is to provide a practical vocabulary for the agentic era: context engineering, tool use, verification, provenance, reproducibility, and agent-aware research methods. The central argument is that research communities now need new norms and responsibilities for AI-assisted and agent-operated work, much as the data science era brought new expectations around code, version control, reproducibility, and data governance.

 

AI Modalities and Research Tools: From LLMs to Domain-Specific Models

Juanjo Garcia Mesa, Research Software Engineer, Arizona State University

Generative AI has rapidly evolved from general standalone models to integrated, multimodal systems that are actively reshaping how research is conducted. This session presents a practical overview of AI modalities and will highlight tools for coding, domain-specific applications, and LLM enhancement techniques.

 

 

 

Discovering and Controlling Safety Risks in Foundation Models: A Probabilistic Perspective

Ruqi Zhang, Assistant Professor of Computer Science, Purdue University

As foundation models, including large language models and multimodal models, are increasingly deployed in complex and high-stakes settings, ensuring their safety has become more important than ever. In this talk, I present a probabilistic perspective on AI safety: safety risks are treated as structured distributions to be discovered and controlled, rather than isolated failures to be patched. I first introduce probabilistic red-teaming methods that characterize distributions of failures, revealing systematic safety risks that standard evaluation often misses. I then describe probabilistic defense methods that control model behavior during deployment by adaptively steering generation toward constraint-aligned distributions. By unifying failure discovery and behavior control under a probabilistic perspective, this talk highlights a distributional approach for understanding and managing safety risks in foundation models.

 

Towards Safe and Robust Vision Models

Raymond Yeh, Assistant Professor in the Department of Computer Science at Purdue University

While modern computer vision has traditionally prioritized benchmark accuracy, this talk advocates for a shift "beyond task performance." We explore this transition through two directions: model immunization and equivariance. First, we introduce model immunization as a defensive strategy designed to safeguard models against unauthorized fine-tuning, i.e., preserving intended behaviors against downstream manipulation. We then examine equivariance as a framework for consistency, ensuring that model outputs transform predictably in response to input variations. Together, these methods advance the development of vision systems that are not only high-performing but also more secure and robust by design.

 

David Crandall, Luddy Professor of Computer Science; Director, Luddy Artificial Intelligence Center, Luddy School of Informatics, Computing, and Engineering, Indiana University

More information to come! 

 

 

 


Panel: The Indiana AI Corridor: How R1 Research Computing Infrastructure is Democratizing AI for Every Indiana Researcher

Panelists:

  • Eric Adams, Lead Research Operations Administrator, Education, RCAC, Purdue University
  • Scott Michael, Indiana University
  • Caleb Reinking, Notre Dame

Indiana's three R1 research universities, Purdue, Indiana University, and Notre Dame, collectively steward some of the most powerful AI and research computing resources in the nation. But raw compute power means little if researchers at Indiana's smaller colleges and regional institutions can't access, navigate, or effectively use it. This panel brings together research computing and AI leaders from all three institutions to address exactly that gap.

Panelists will explore how each university's distinct strategic investments — Purdue's Anvil HPC system and RCAC's researcher onboarding and training programs, IU's Jetstream 2 GPU cloud platform and Responsible AI roadmap, and Notre Dame's Data, AI, and Computing Initiative driving agentic and self-driving research — converge around a shared mission: making national AI infrastructure genuinely accessible to the full spectrum of Indiana's research community.

The conversation will move beyond infrastructure specs to examine the human side of the equation — the training programs, support models, and collaborative frameworks that transform NAIRR resources from distant federal assets into practical tools for a faculty member at a regional campus, an emerging researcher at a liberal arts college, or a graduate student with no HPC background. Attendees will leave with a clearer picture of what the Indiana AI Corridor offers their institution, how to navigate on-ramps to national resources, and where the next generation of collaborative research opportunities lies.

Moderator: Gil Speyer, Director, Computational Research Accelerator, Arizona State University

 

The New Laboratory: Building Trustworthy AI for the Future of Science and Engineering

Guang Lin, Associate Dean for Research & Innovation, College of Science; Director, Data Science Consulting Services; Full Professor, Departments of Mathematics, Statistics & School of Mechanical Engineering, Purdue University

More information to come!

 


Why Most Scientific Data Is Not AI-Ready (and How to Fix It)

Nathan Denny, Lead Research Analyst, Purdue University

Most scientific datasets are technically complete but not truly usable for AI because they lack clear semantics, provenance, and machine-actionable structure. This talk outlines common failure modes that limit reproducibility and reuse and presents a practical approach to making datasets “AI-ready” using lightweight knowledge graphs (e.g., RO-Crate), AI-assisted metadata enrichment, and automated evaluation against domain standards. The focus is on improving data quality and interpretability with minimal added burden, so datasets can be more effectively used across AI workflows and high-performance computing.

 

 

Registration (Now Closed)

Registration is free for this event. However, space is limited to 50 participants. We anticipate receiving more applications than available spots. Attendees will receive confirmation of their attendance via email from workshop organizers.

For questions about the workshop, please contact the RCAC team.

Support

Anvil and this workshop are supported by the National Science Foundation under Grant No. 2005632. Travel support is covered as part of the ACCESS Support grant #2138286. ACCESS is an advanced computing and data resource program supported by the U.S. National Science Foundation (NSF) under the Office of Advanced Cyberinfrastructure awards.
