COSC2026BANH51198 COSC
Type: Undergraduate
Author(s):
Thu My Banh
Computer Science
Robin Chataut
Computer Science
Advisor(s):
Chetraj Pandey
Computer Science
Location: Basement, Table 2, Position 1, 1:45-3:45
Interactive Querying and Visualization of Solar Events
Authors: Thu My Banh, Cathy Nguyen, Chetraj Pandey
Access to structured solar flare event data is essential for space weather (SWx) research, operational analysis, and machine learning applications. While the solar flare event archive maintained by the Lockheed Martin Solar and Astrophysics Laboratory (LMSAL) provides a widely used curated record of flare activity, the archive is primarily accessible through static web interfaces rather than a programmable query system. This makes automated filtering, dataset generation, and large-scale analysis difficult for researchers. To address this limitation, we developed a full-stack web application that provides programmatic access to LMSAL solar flare event records through a queryable API. A Python-based data ingestion pipeline retrieves and deduplicates event information from LMSAL’s rolling snapshot archive and stores it in a structured format. A FastAPI backend exposes endpoints that allow users to filter events by date range and GOES flare classification, enabling rapid dataset generation for analysis workflows. The frontend, implemented in React, allows users to query the event catalog, visualize results in a structured table, and export filtered datasets as CSV or JSON files. To improve data reliability and context, the system cross-references LMSAL event records with NOAA solar flare catalogs, allowing users to compare event metadata across independent data sources. Additionally, the application integrates with the Helioviewer API to display solar imagery corresponding to each event, with derived heliographic positions overlaid onto the solar disk to provide spatial context. The resulting system provides a lightweight platform for exploring, querying, and exporting solar flare event data, lowering the barrier to accessing operational flare records and facilitating dataset generation for space weather analysis and predictive modeling.
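As a rough sketch of the kind of filtering such an endpoint performs, the snippet below applies a date-range and minimum-GOES-class filter to an in-memory event list; the field names, class-ranking rule, and sample records are illustrative assumptions, not the project's actual schema.

```python
from datetime import datetime

# Hypothetical event records; field names are assumed, not the project's schema.
EVENTS = [
    {"start_time": "2024-05-08T01:41", "goes_class": "X1.0"},
    {"start_time": "2024-05-09T17:23", "goes_class": "M2.3"},
    {"start_time": "2024-05-10T06:54", "goes_class": "C5.1"},
]

# GOES classes rank A < B < C < M < X; within a letter, the number ranks intensity.
CLASS_ORDER = {"A": 0, "B": 1, "C": 2, "M": 3, "X": 4}

def class_rank(goes_class: str) -> float:
    """Map a GOES class string like 'M2.3' to a sortable magnitude."""
    return CLASS_ORDER[goes_class[0]] * 10 + float(goes_class[1:])

def query_events(start: str, end: str, min_class: str = "C1.0"):
    """Return events within [start, end] at or above min_class."""
    t0, t1 = datetime.fromisoformat(start), datetime.fromisoformat(end)
    return [
        e for e in EVENTS
        if t0 <= datetime.fromisoformat(e["start_time"]) <= t1
        and class_rank(e["goes_class"]) >= class_rank(min_class)
    ]

print(query_events("2024-05-08T00:00", "2024-05-09T23:59", min_class="M1.0"))
```

In the real system this filter would run against the ingested LMSAL records behind an API endpoint rather than a hard-coded list.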
COSC2026CAMPOS23383 COSC
Type: Undergraduate
Author(s):
Gabriella Campos
Computer Science
Jayapradeep Jayaraman Srinivas
Computer Science
Tam Nguyen
Computer Science
Riley Phan
Computer Science
Rahul Shrestha
Computer Science
Advisor(s):
Robin Chataut
Computer Science
Location: Second Floor, Table 12, Position 1, 1:45-3:45
Large language models (LLMs) are increasingly framed as force multipliers for cyberattacks, yet most existing evaluations focus on isolated artifact generation rather than the construction and execution of full offensive workflows. This paper presents a controlled empirical study of LLM-assisted cyberattack construction across multiple representative attack classes, including automated SQL injection exploitation, spyware assembly, reverse shell establishment, and denial-of-service traffic generation. We evaluate several contemporary models—including ChatGPT-4o, ChatGPT-5.2, ChatGPT-5.1-instant, Claude Sonnet 4.6, and Gemini 3—within fully sandboxed virtualized environments, treating each model strictly as an advisory system embedded within a human-driven workflow.
Our experimental design decomposes attacks into staged operational workflows encompassing reconnaissance, payload generation, system integration, troubleshooting, and persistence. This structure enables systematic analysis of where automation succeeds or fails during real execution rather than relying on single-shot demonstrations. Across scenarios, LLMs consistently reduce effort for localized technical tasks such as command syntax recall, tool configuration, payload scaffolding, and procedural troubleshooting. However, reliable end-to-end attack execution remains limited. SQL injection automation succeeds primarily when established tools encapsulate complex orchestration, while more complex scenarios such as spyware assembly fail at system-level integration, environment-specific dependency resolution, and evasion of host defenses.
Across models and attack classes, automation consistently breaks at environment-dependent boundaries requiring global reasoning, state awareness, and cross-stage workflow coordination. These findings suggest that contemporary LLMs do not autonomously execute cyberattacks but instead function as workflow accelerators that lower the expertise threshold required to operationalize existing offensive techniques. This capability-boundary perspective provides a more realistic foundation for threat modeling, defensive planning, and future evaluation of AI-assisted cybersecurity risks.
COSC2026CASTELLTORTPINTO16986 COSC
Type: Undergraduate
Author(s):
Carlota Castelltort Pinto
Computer Science
Alexander Canales
Computer Science
Long Dau
Computer Science
Chris Musselman
Computer Science
Dylan Noall
Computer Science
Rahul Shrestha
Computer Science
Kavish Soningra
Computer Science
Advisor(s):
Bingyang Wei
Computer Science
Location: Second Floor, Table 6, Position 3, 11:30-1:30
Medical students lack effective tools for developing clinical reasoning, as most resources emphasize memorization rather than decision-making. DiseaseQuest is an AI-powered, gamified platform that addresses this gap through realistic patient simulations and decision-based scenarios. It enables students to work through complete clinical cases using interactive, patient-centered dialogue. Supported by a multi-agent framework, the platform provides adaptive guidance, diagnostic feedback, and personalized evaluations. By promoting active learning and problem-solving, DiseaseQuest offers a transformative approach that replaces passive study with immersive, hands-on practice, helping students strengthen diagnostic thinking and better prepare for real-world clinical decision-making.
COSC2026CORONILLA378 COSC
Type: Undergraduate
Author(s):
Mayra Coronilla
Computer Science
Sujit Bhandari
Computer Science
Samiksha Gupta
Computer Science
Michelle Jimenez
Computer Science
Kim Nguyen
Computer Science
Keilah Scott
Computer Science
Nibesh Yadav
Computer Science
Advisor(s):
Xi Fitzgerald
Computer Science
Location: Third Floor, Table 18, Position 1, 11:30-1:30
As obesity continues to rise in the United States, bariatric surgery has become an increasingly common medical intervention to support significant and sustained weight loss. However, the procedure presents challenges, as patients must adopt strict dietary guidelines, develop consistent meal-tracking habits, and maintain long-term lifestyle changes. Existing weight-loss applications fail to address the unique nutritional requirements of bariatric patients, which include surgery-specific restrictions, medical conditions, personal food preferences, and individualized lifestyle factors. They also lack integrated long-term monitoring tools that allow healthcare providers to effectively track patient progress and adherence after surgery. This senior design project presents a prototype mobile application developed from scratch to support patients throughout the bariatric journey. The application integrates AI-driven personalization to generate tailored daily nutritional guidance, adapt to individual health data, and provide meal-tracking support. In addition, the platform centralizes patient data for healthcare providers, improving long-term monitoring, increasing tracking accuracy, and reducing manual workload. By combining personalized patient support with provider-facing analytics, this solution aims to enhance postoperative adherence and improve long-term surgical outcomes.
COSC2026HANNAFORD29105 COSC
Type: Undergraduate
Author(s):
Robert Hannaford
Computer Science
Iyed Acheche
Computer Science
Oscar Arenas
Computer Science
Nagendra Chaudhary
Computer Science
Evan Eissler
Computer Science
Tucker Rinaldo
Computer Science
Sumalee Rodolph
Computer Science
Advisor(s):
Ed Ipser
Computer Science
Location: Basement, Table 10, Position 2, 11:30-1:30
Understanding weather conditions during flight operations can help explain incidents and reduce risky behavior. The Brazos Safety Systems Weather Application integrates aviation weather data sources, including METAR reports and radar imagery, to visualize conditions around airports and during historical flights. Users can upload flight records and review the associated weather conditions through the application. By presenting aviation weather data in a centralized and accessible format, the application supports post-flight analysis and helps identify weather-related factors connected to flight incidents. The goal is to provide insights that improve understanding of past flight conditions and help prevent similar issues in future aviation operations.
COSC2026HOANG64316 COSC
Type: Undergraduate
Author(s):
Son Hoang
Computer Science
Robin Chataut
Computer Science
Chetraj Pandey
Computer Science
Advisor(s):
Chetraj Pandey
Computer Science
Location: Basement, Table 11, Position 2, 1:45-3:45
Solar flares are among the most significant drivers of space-weather disturbances, motivating ongoing efforts to develop reliable forecasting methods from solar observations. The Solar Dynamics Observatory continuously produces high-resolution full-disk solar imagery used for monitoring solar activity. These observations have enabled substantial progress in machine learning–based flare prediction; however, most models remain confined to research studies, with limited deployment in operational systems that support continuous forecasting and systematic performance validation. This work presents a lightweight operational framework for near-real-time solar flare forecasting built around machine learning models proposed in the literature. The system retrieves full-disk solar imagery from the Helioviewer API, performs automated preprocessing, and generates predictions using a convolutional neural network–based forecasting model. Predictions and corresponding observations are stored to enable periodic forecast verification using standard performance metrics, allowing model performance to be monitored over time and potential prediction drift to be identified. The framework is implemented as an interactive application using Streamlit, providing an integrated interface for automated data ingestion, near-real-time inference, and ongoing model evaluation. The proposed system enables continuous monitoring of solar flare forecasts while providing a practical framework for tracking model performance and detecting prediction drift in operational settings.
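The stored prediction–observation pairs could support drift monitoring along these lines; this is a sketch under assumed data shapes (boolean forecast/outcome pairs) and an assumed threshold rule, not the framework's actual verification code.

```python
# Sketch: flag potential prediction drift by comparing recent forecast
# accuracy against a long-run baseline. Data shapes are assumed.
def accuracy(pairs):
    """Fraction of (predicted_flare, observed_flare) pairs that agree."""
    return sum(p == o for p, o in pairs) / len(pairs)

def drift_flag(history, recent_n=4, tolerance=0.15):
    """True if accuracy over the last `recent_n` forecasts drops more than
    `tolerance` below accuracy over the full stored history."""
    baseline = accuracy(history)
    recent = accuracy(history[-recent_n:])
    return (baseline - recent) > tolerance

# (predicted, observed) booleans per forecast window; the last four disagree,
# so recent skill has collapsed relative to the baseline.
history = [(True, True), (False, False), (True, True), (False, False),
           (True, False), (False, True), (True, False), (False, True)]
print(drift_flag(history))
```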
COSC2026JAYARAMANSRINIVAS40638 COSC
Type: Undergraduate
Author(s):
Jayapradeep Jayaraman Srinivas
Computer Science
Gabriella Campos
Computer Science
Robin Chataut
Computer Science
Nagendra Chaudhary
Computer Science
Riley Phan
Computer Science
Advisor(s):
Robin Chataut
Computer Science
Location: Basement, Table 7, Position 1, 1:45-3:45
We present the AI-Driven Adaptive Tutoring (AIAT) framework, a modular multi-agent system that generates structured, retrieval-grounded, and multimedia-enhanced courses. AIAT targets a common gap in AI in Education: large language models (LLMs) can produce fluent explanations, but they often lack pedagogical structure, factual grounding, and multimodal integration. To address this, AIAT uses a three-stage pipeline. First, a blueprint agent creates a course outline with learning objectives and topic dependencies using schema-validated structured outputs. Second, a chapter-expansion agent instantiates atomic topics with formative questions and summaries in JSON mode. Third, an enrichment agent generates topic-level explanations, visualization specifications, and triggers for narrated video production. Retrieval-augmented generation (RAG) combines a MongoDB Atlas Vector Search backend for course materials and a Pinecone pipeline for PDF-derived knowledge, grounding explanations in external content. A Next.js frontend streams responses and assembles text, diagrams, and videos into a unified learner experience.
The design is explicitly guided by mastery learning, cognitive load theory, and the Cognitive Theory of Multimedia Learning, with principles such as atomic topics, anti-fluff constraints, and visual-verbal alignment encoded in prompts and schemas. We report system-level metrics (e.g., latency by component) and operational reliability, and we outline a concrete evaluation plan, including pre/post-learning assessments, expert rubric-based accuracy checks, and subjective cognitive load measures. We also discuss the equity and accessibility implications of relying on commercial APIs and propose mitigation strategies (e.g., caching, partial use of lightweight models, and instructor-in-the-loop authoring). The contribution of this work is a reproducible architecture that connects multi-agent orchestration, RAG, and multimodal rendering to pedagogical theory, along with an evaluation roadmap that explicitly addresses the current lack of large-scale human studies.
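The schema-validated structured outputs mentioned above can be illustrated with a minimal check on a blueprint agent's JSON-mode output; the field names in this sketch are hypothetical stand-ins, not AIAT's actual schema.

```python
import json

# Sketch of the kind of schema gate a blueprint agent's output might pass
# through; the field names here are hypothetical, not AIAT's actual schema.
BLUEPRINT_SCHEMA = {
    "title": str,
    "objectives": list,
    "topics": list,  # each topic: {"name": str, "depends_on": list}
}

def validate_blueprint(raw_json: str) -> dict:
    """Parse an LLM's JSON-mode output and enforce required keys and types."""
    data = json.loads(raw_json)
    for key, expected in BLUEPRINT_SCHEMA.items():
        if not isinstance(data.get(key), expected):
            raise ValueError(f"blueprint field {key!r} missing or wrong type")
    return data

blueprint = validate_blueprint(json.dumps({
    "title": "Intro to Graphs",
    "objectives": ["define a graph", "traverse with BFS"],
    "topics": [{"name": "BFS", "depends_on": ["graph basics"]}],
}))
print(blueprint["title"])
```

Rejecting malformed output at this boundary is what lets downstream agents assume well-formed course structure.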
COSC2026KANNAN11872 COSC
Type: Undergraduate
Author(s):
Balaji Kannan
Computer Science
Robin Chataut
Computer Science
Advisor(s):
Chetraj Pandey
Computer Science
Location: Second Floor, Table 10, Position 1, 1:45-3:45
Space weather forecasting relies on large volumes of time-stamped solar observations paired with event catalogs describing flare occurrence and intensity. While these datasets are widely available, preparing them for machine learning remains a substantial and often overlooked challenge. Researchers must convert irregular observation streams into consistent temporal samples, construct observation and prediction windows, align events with observations, manage missing data and cadence inconsistencies, and ensure that training and evaluation splits avoid temporal or regional data leakage. These preprocessing steps are typically implemented in ad-hoc scripts that are difficult to reproduce, extend, or compare across studies. We propose an open-source Python library that standardizes the construction of machine-learning-ready datasets for solar event forecasting. The library ingests user-provided observation tables (e.g., SDO image timestamps and file paths) and event catalogs (e.g., GOES flare lists) and automatically generates indexed training samples suitable for PyTorch datasets and data loaders. Users can define flexible observation windows ranging from single time points to multi-frame temporal sequences, specify prediction horizons, and configure event-labeling rules. The framework also provides mechanisms for handling missing observations, irregular cadences, and explicit representation of temporal gaps. To support rigorous experimental design, the library includes reproducible dataset partitioning strategies such as chronological and tri-monthly splits, as well as optional active-region-aware grouping based on NOAA region catalogs. These features allow researchers to build both full-disk and active-region-based forecasting datasets while minimizing common sources of information leakage.
Although the initial implementation focuses on solar flare prediction, the framework is designed to be extensible to other space weather phenomena including coronal mass ejections (CMEs) and solar energetic particle (SEP) events. By formalizing the transformation from raw observational records and event lists into reproducible machine learning datasets, this tool reduces the overhead of data preparation and promotes more transparent, comparable, and scalable space weather forecasting research.
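The core windowing-and-labeling step can be sketched roughly as follows; the hourly cadence, window length, prediction horizon, and labeling rule are illustrative assumptions rather than the library's defaults.

```python
from datetime import datetime, timedelta

# Illustrative observation timestamps (hourly cadence assumed) and one
# catalog flare event; neither is real data.
observations = [datetime(2024, 1, 1, h) for h in range(12)]
flare_events = [datetime(2024, 1, 1, 7, 30)]

def build_samples(obs, events, window=4, horizon=timedelta(hours=3)):
    """Pair each `window`-frame observation sequence with a binary label:
    did any catalog event fall inside the prediction horizon that follows
    the last frame of the sequence?"""
    samples = []
    for i in range(len(obs) - window + 1):
        frames = obs[i:i + window]
        t_end = frames[-1]
        label = any(t_end < e <= t_end + horizon for e in events)
        samples.append((frames, int(label)))
    return samples

samples = build_samples(observations, flare_events)
print(sum(label for _, label in samples), "positive samples of", len(samples))
```

An indexed list of (frames, label) pairs like this is exactly the shape a PyTorch `Dataset` can wrap.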
COSC2026KARANJIT37674 COSC
Type: Undergraduate
Author(s):
Kritika Karanjit
Computer Science
Robin Chataut
Computer Science
Chetraj Pandey
Computer Science
Advisor(s):
Chetraj Pandey
Computer Science
Location: Basement, Table 5, Position 1, 11:30-1:30
Solar flares are significant space weather phenomena that can impact satellites, communication systems, and other technological infrastructure, making accurate flare forecasting a crucial objective in heliophysics research. The NASA Community Coordinated Modeling Center (CCMC) Flare Scoreboard collects predictions from multiple solar flare forecasting models developed by different research groups. While this resource provides a useful platform for comparing different forecasting approaches, systematic validation of these models remains challenging because predictions are reported in different formats and are not easily comparable across models. In this work, we develop an automated framework to collect and organize flare forecasts from several models available in the CCMC Flare Scoreboard and convert them into a consistent dataset that allows direct comparison between models. The processed dataset includes predictions across multiple years and forecast windows. To evaluate model performance, we compare the predicted flare probabilities with observed flare events reported in the SolarSoft (SSW) Latest Events archive. By aligning forecast windows with actual flare occurrences, we establish a consistent approach for validating model predictions. This approach facilitates a systematic comparison of forecasting behavior among various models and assists in identifying those with superior or inferior predictive ability. The resulting pipeline provides a reproducible way to analyze solar flare forecasting systems and supports future efforts to improve the reliability of space weather prediction methods.
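Once forecast windows are aligned with observed outcomes, standard verification scores apply; the sketch below uses the Brier score (mean squared error of forecast probabilities) on illustrative values, not actual Scoreboard data.

```python
# Sketch: score two models' probabilistic flare forecasts against observed
# outcomes using the Brier score. Forecasts and outcomes are illustrative.
def brier_score(forecasts, outcomes):
    """forecasts: predicted flare probabilities for each window;
    outcomes: 1 if a flare occurred in that window, else 0. Lower is better."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

model_a = [0.9, 0.8, 0.2, 0.1]   # sharp, well-calibrated forecasts
model_b = [0.5, 0.5, 0.5, 0.5]   # uninformative climatology-style forecasts
observed = [1, 1, 0, 0]

print(brier_score(model_a, observed))
print(brier_score(model_b, observed))
```

Running the same score over every model's aligned windows is what makes cross-model comparison direct.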
COSC2026LE58784 COSC
Type: Undergraduate
Author(s):
Duc Le
Computer Science
Robin Chataut
Computer Science
Chetraj Pandey
Computer Science
Advisor(s):
Chetraj Pandey
Computer Science
Location: Third Floor, Table 12, Position 1, 11:30-1:30
Solar flares are major drivers of space-weather disturbances and can disrupt satellites, communication systems, and navigation infrastructure. Recent deep learning approaches have demonstrated promising performance for solar flare forecasting, yet many models operate either on full-disk solar observations or on isolated active-region patches. This separation limits their ability to combine global solar context with localized magnetic structure and can affect the reliability of predictions. In addition, full-disk models often provide limited information about which regions drive their forecasts. This study presents a two-stage deep learning framework that integrates full-disk and active-region–level analysis within a unified flare forecasting pipeline. The system first performs full-disk inference using a convolutional neural network trained on solar magnetograms to estimate the global probability of flare occurrence. Attribution-based explanations are then generated to identify regions that most strongly influence the model prediction. These regions are mapped back to the solar disk and converted into candidate active-region patches, accounting for solar rotation and spatial alignment. The resulting patches are subsequently analyzed using a dedicated active-region forecasting model trained on SDO HMI SHARP data to produce localized flare probabilities. By integrating global context with targeted active-region analysis, the proposed framework combines two complementary forecasting models into an end-to-end prediction system. The resulting pipeline provides both full-disk and region-level flare probabilities, improving interpretability while enhancing the reliability of flare forecasts through focused secondary analysis of the most relevant solar regions.
COSC2026LUGOGONZALES23155 COSC
Type: Undergraduate
Author(s):
Francisco Lugo Gonzales
Computer Science
Advisor(s):
Natalia Castro Lopez
Biology
COSC2026NGUYEN23809 COSC
Type: Undergraduate
Author(s):
Cathy Nguyen
Computer Science
Thu My Banh
Computer Science
Advisor(s):
Chetraj Pandey
Computer Science
Location: Third Floor, Table 10, Position 1, 1:45-3:45
Solar event archives from the NOAA Space Weather Prediction Center (SWPC) contain observations of solar phenomena such as X-ray flares (XRA), optical flares (FLA), disappearing solar filaments (DSF), radio bursts (RSP), and other solar events. However, these data are currently stored across multiple sources and incompatible formats, which complicates event retrieval, cross-comparison, and large-scale analysis. In this study, we introduce a computational framework to extract and standardize solar event data from SWPC archives into a unified structure. Our approach automates the parsing of event reports, extracts key features such as event classification and timing, and organizes records into a consistent format across datasets. By reducing differences in how event records are stored and represented, this framework can enhance the usability of the solar records. The ultimate goal is to support the development of a tool that enables easier and faster access to solar event records based on user-selected criteria such as event type or time range. This standardization aims to improve data accessibility, providing a foundation for further space weather research.
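The parsing step might look roughly like the sketch below; the line layout and regular expression are deliberately simplified stand-ins, not the actual SWPC report format.

```python
import re

# Sketch: normalize a raw event-report line into a structured record.
# The whitespace-delimited layout (begin time, end time, event type, detail)
# is a simplified stand-in for real SWPC report lines.
LINE_RE = re.compile(
    r"(?P<begin>\d{4})\s+(?P<end>\d{4})\s+(?P<type>XRA|FLA|DSF|RSP)\s+(?P<detail>\S+)"
)

def parse_event(line: str) -> dict:
    """Return a dict of named fields, or raise if the line is unparseable."""
    m = LINE_RE.search(line)
    if m is None:
        raise ValueError(f"unparseable event line: {line!r}")
    return m.groupdict()

record = parse_event("0512 0547 XRA M1.2")
print(record)
```

Emitting every archive's records as the same dict shape is what makes cross-source comparison and time-range queries uniform downstream.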
COSC2026NGUYEN25123 COSC
Type: Undergraduate
Author(s):
Tam Nguyen
Computer Science
Robin Chataut
Computer Science
Advisor(s):
Robin Chataut
Computer Science
Location: Basement, Table 4, Position 3, 1:45-3:45
Machine learning-based phishing detection systems increasingly rely on high-confidence predictions from deep neural models, yet confidence alone provides limited assurance of reliability in adversarial environments. Small, semantics-preserving manipulations such as homoglyph substitution and paraphrasing can induce confident misclassifications while remaining indistinguishable to human recipients, exposing a critical vulnerability in modern email security pipelines. We present TAED, a Trust-Aware Explainable Defense system that explicitly evaluates prediction trustworthiness rather than relying solely on opaque confidence scores. TAED computes a trust score by integrating model confidence with explanation fidelity, which measures alignment between model reasoning and known phishing indicators, and explanation stability, which quantifies sensitivity to minor input perturbations. We evaluate TAED alongside a diverse set of statistical and neural phishing detectors using a realistic adversarial dataset constructed through multiple evasion strategies. Our results reveal a systematic confidence–robustness paradox in which complex Transformer-based models exhibit strong clean-data performance but substantial brittleness under adversarial manipulation, while simpler feature-based models demonstrate greater resilience. By leveraging explanation-derived trust signals and selective escalation within a hybrid detection pipeline, TAED identifies unreliable high-confidence predictions and improves robustness against adversarial evasion. These findings demonstrate that explainability can be operationalized as a practical security mechanism for assessing model reliability in adversarial phishing detection systems.
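The trust score described above can be illustrated as a weighted combination of the three signals; the weights and the signal values below are illustrative assumptions, and TAED's actual formulation may differ.

```python
# Sketch: combine confidence with explanation-quality signals into a single
# trust score. Weights and signal definitions are illustrative; TAED's
# actual formulation may differ.
def trust_score(confidence, fidelity, stability, weights=(0.4, 0.3, 0.3)):
    """confidence: model's predicted probability for its own label;
    fidelity: agreement between the explanation and known phishing cues (0-1);
    stability: 1 minus explanation change under small input perturbations."""
    w_c, w_f, w_s = weights
    return w_c * confidence + w_f * fidelity + w_s * stability

# A confident prediction whose explanation is unfaithful and unstable
# scores low, so it would be escalated for review instead of trusted.
print(trust_score(confidence=0.98, fidelity=0.2, stability=0.1))
```

Thresholding this score, rather than raw confidence, is what drives the selective escalation step in the hybrid pipeline.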
COSC2026NORWOOD63925 COSC
Type: Undergraduate
Author(s):
Ellion Norwood
Computer Science
Hebert Alvarez
Computer Science
Gabby Campos
Computer Science
Aqil Dhanani
Computer Science
Derek Le
Computer Science
Bereket Mezgebu
Computer Science
Stefan Saba
Computer Science
Advisor(s):
Xi Fitzgerald
Computer Science
Location: First Floor, Table 13, Position 1, 11:30-1:30
Agent-based models (ABMs) are widely used in computational biology to simulate complex processes such as infectious disease transmission. However, many research-grade models are implemented primarily as backend systems and lack graphical interfaces that allow researchers to efficiently configure simulations and interpret outputs. In collaboration with the Biophysics Department, this project focused on the development of a graphical user interface (GUI) for an existing viral agent-based simulation platform previously implemented without an interactive frontend.
The implemented interface integrates with the existing backend simulation environment deployed on laboratory systems, enabling structured parameter configuration, simulation execution, and visualization of model outputs. Development focused on frontend architecture, parameter validation mechanisms, backend connectivity, and data visualization components for simulation result analysis. Additional work included interface refactoring and codebase cleanup to improve maintainability and usability.
The resulting system provides a structured workflow for configuring and executing simulations while preventing invalid parameter configurations through input validation. By extending the existing modeling framework with a robust graphical interface and visualization capabilities, this work improves accessibility and operational efficiency for researchers conducting computational epidemiology experiments within the laboratory environment.
COSC2026OGLE21918 COSC
Type: Undergraduate
Author(s):
Brae Ogle
Computer Science
Tristan Gonzales
Computer Science
Alex Lee
Computer Science
Alexandre Morales
Computer Science
Sameep Shah
Computer Science
Madhavam Shahi
Computer Science
Advisor(s):
Bingyang Wei
Computer Science
Location: First Floor, Table 10, Position 1, 11:30-1:30
Machine Performance Check Plus (MPC+) is a software platform designed to improve quality assurance workflows for Varian TrueBeam linear accelerators used in radiation therapy. The system automatically collects and processes Machine Performance Check (MPC) data generated by clinical machines, including imaging files and measurement results, and converts them into structured, analyzable information. The platform provides a web-based dashboard that allows medical physicists and clinical staff to review machine performance metrics, visualize trends, and quickly identify values that fall outside acceptable tolerances. MPC+ also supports digital sign-off workflows and audit trails to ensure compliance with regulatory and clinical standards. By consolidating data from multiple machines and clinics into a single interface, the system reduces the time required for daily QA review while improving reliability and traceability. Overall, the project aims to make the quality assurance process more efficient, data-driven, and scalable for radiation oncology clinics operating Varian TrueBeam systems.
COSC2026OYAWOYE33508 COSC
Type: Undergraduate
Author(s):
Emmanuel Oyawoye
Computer Science
Zaid Alaqqad
Computer Science
Hayden Brigham
Computer Science
Michael Dugle
Computer Science
Tanner Hendrix
Computer Science
Arscene Rubayita
Computer Science
Merci Yohana
Computer Science
Advisor(s):
Shelly Fitzgerald
Computer Science
Location: Second Floor, Table 8, Position 1, 11:30-1:30
This senior design project centers on VANTAGE (Visual Autonomous Navigation and Task-driven Agentic Ground-to-air Engine), an AI-driven drone operations platform developed with MavenCode, a leading AI/ML solutions provider in the Dallas–Fort Worth area. VANTAGE enables users to command drones through natural language while integrating real-time perception tools such as speech-to-text, text-to-speech, object detection, semantic segmentation, and vision-language reasoning. The system combines a FastAPI backend, agent-based tool orchestration, and a web dashboard that supports both mission control and direct testing of AI tools through uploaded audio, image, and video inputs. Our team’s work spans full-stack development, model integration, and interface design to deliver a practical, user-centered platform for intelligent aerial autonomy. Together, these components demonstrate an end-to-end AI product approach that aligns with MavenCode’s mission of empowering organizations through training, product development, and consulting.
COSC2026PHAN45363 COSC
Type: Undergraduate
Author(s):
Riley Phan
Computer Science
Advisor(s):
Robin Chataut
Computer Science
Location: Second Floor, Table 4, Position 2, 11:30-1:30
Large language models (LLMs) such as ChatGPT, Claude, Gemini, and Llama are increasingly being deployed as search and decision-support tools for health-related inquiries. As users provide demographic context, including age, to obtain personalized guidance, these systems can differentially adjust tone, directive strength, or safety framing. Although age can be clinically relevant, unintended variation in the generated advice can introduce systematic safety disparities or representational bias. In this study, we analyze outputs from two major LLM families across 10,679 physical and mental health scenarios to examine how explicit age cues, including teen, young adult, middle-aged, and senior, influence the safety and linguistic properties of generated health advice. To quantify these effects, we introduce three task-specific evaluation metrics: Age Differential Safety Bias (ADSB) to measure relative safety shifts under demographic conditioning, Safety Risk Score (SRS) to capture cumulative weighted unsafe recommendations, and Tone Differential Index (TDI) to detect systematic changes in linguistic complexity and formality associated with representational harm. The results indicate that explicit age cues systematically alter model behavior. Demographic conditioning consistently reduces safety quality relative to age-neutral baselines. Middle-aged cohorts exhibit a higher cumulative safety risk in directive responses, whereas senior cohorts demonstrate elevated tone shifts consistent with oversimplification and increased formality. These findings suggest that demographic sensitivity can introduce measurable allocative and representational disparities in healthcare-oriented LLM systems.
This work establishes a reproducible audit framework for evaluating demographic safety sensitivity in health-focused LLM deployments and contributes to the development of standardized evaluation protocols for safer and more equitable integration of AI systems in clinical and consumer health environments.
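One simple reading of ADSB, sketched under assumed 0–1 safety ratings per scenario (the study's exact formulation may differ), is the shift in mean safety rating between age-conditioned and age-neutral prompts:

```python
# Sketch: ADSB read as the shift in mean safety rating when an age cue is
# added to otherwise identical prompts. The ratings are illustrative
# assumptions, not the study's data or its exact metric definition.
def mean(xs):
    return sum(xs) / len(xs)

def adsb(neutral_scores, conditioned_scores):
    """Negative values mean age conditioning degraded safety quality
    relative to the age-neutral baseline (scores on a 0-1 safety scale)."""
    return mean(conditioned_scores) - mean(neutral_scores)

neutral = [0.92, 0.88, 0.95, 0.90]   # age-neutral prompt safety ratings
senior = [0.81, 0.78, 0.85, 0.80]    # same scenarios with a "senior" cue
print(adsb(neutral, senior))
```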
COSC2026RAJAMONEY39952 COSC
Type: Undergraduate
Author(s):
Rachel Rajamoney
Computer Science
Zach Campbell
Computer Science
Mati Davis
Computer Science
Riley Phan
Computer Science
Ally Schmidt
Computer Science
Stryder Schossberger
Computer Science
Elijah Yoo
Computer Science
Advisor(s):
Bingyang Wei
Computer Science
Location: Basement, Table 12, Position 2, 11:30-1:30
The BatLab project aims to develop a machine learning-based tool that assists researchers in identifying bat species from acoustic recordings. Bats rely on echolocation calls that vary in frequency, duration, and shape, allowing species to be distinguished through analysis of their recorded calls. Currently, researchers must manually review large volumes of acoustic recordings, a time-consuming process that limits the scale of ecological studies. This project explores the use of supervised machine learning to automate the classification of bat echolocation calls using labeled training data. The system analyzes acoustic features within recorded calls and predicts the most likely species while flagging uncertain cases for further review. In addition, the project focuses on improving data organization and providing a user-friendly interface that allows researchers to efficiently visualize and manage acoustic data. By reducing the manual workload involved in analyzing bat call recordings, the BatLab system aims to support ecological research and improve the efficiency of studying bat populations.
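The classify-and-flag behavior can be sketched with a nearest-centroid toy model; the features (peak frequency in kHz, duration in ms), the centroids, and the uncertainty margin are all illustrative assumptions, not BatLab's trained classifier.

```python
import math

# Sketch: assign a call to its nearest species centroid in feature space and
# flag it as uncertain when two centroids are nearly equidistant. Features
# and centroids are illustrative stand-ins for a trained model.
CENTROIDS = {
    "Eptesicus fuscus": (28.0, 12.0),   # (peak frequency kHz, duration ms)
    "Myotis lucifugus": (45.0, 5.0),
}

def classify(call, margin=0.2):
    dists = {species: math.dist(call, c) for species, c in CENTROIDS.items()}
    ranked = sorted(dists, key=dists.get)
    best, runner_up = ranked[0], ranked[1]
    # Flag for manual review when the best match barely beats the runner-up.
    uncertain = dists[best] > (1 - margin) * dists[runner_up]
    return best, uncertain

print(classify((29.0, 11.5)))   # close to the first centroid: confident
print(classify((36.0, 8.5)))    # roughly between centroids: flagged
```

The flagged cases are exactly the ones routed back to researchers for review.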
COSC2026REAVLEY45943 COSC
Type: Undergraduate
Author(s):
Charley Reavley
Computer Science
Stephen Adeoye
Computer Science
Kayla Fruean
Computer Science
Ryan Jordan
Computer Science
Placide Ndayisenga
Computer Science
Alyssa Turenne
Computer Science
Advisor(s):
Dr. Ed Ipser
Computer Science
Location: Third Floor, Table 8, Position 2, 11:30-1:30
This senior design project focuses on developing PostAgent, an AI-powered content creation platform created by Corevation, an innovations tech company. The product aims to help businesses and entrepreneurs create and manage social media content more efficiently, making marketing endeavors more manageable. Our team is building multiple features, including AI tools to regenerate and edit post text and images, an analytics dashboard for tracking social media performance, and a centralized content library that keeps assets organized and lets users upload custom content to the platform. We are also refining the overall UI/UX to ensure an intuitive user experience and developing a company website to support Corevation’s public presence. Together, these components demonstrate a full-stack approach to product development, blending AI capabilities with user-centered design.
COSC2026SEGURA16978 COSC
Type: Undergraduate
Author(s):
Adessa Segura
Computer Science
Jane Allinger
Computer Science
Dylan Caton
Computer Science
Eric Licea Tapia
Computer Science
Kasia Love
Computer Science
Dalton Plitt
Computer Science
Advisor(s):
Ed Ipser
Computer Science
Location: Third Floor, Table 9, Position 2, 11:30-1:30
How would one classify an apple the fruit versus an Apple phone? Typically as a fruit and a technology device. However, some modern systems for classifying patents are insufficient and would be unable to differentiate between the two, clustering both together simply because they contain the word ‘apple’. Our task with iPELiNT is to build upon existing solutions to better visualize how USPTO (United States Patent and Trademark Office) art units change over time. An art unit is a group of USPTO examiners specializing in a specific technology area. Our end product helped establish a data-driven system for conducting forensic analysis of USPTO patent examiner dockets using vector embeddings and internal data pipelines. We used MongoDB for our database, JavaScript and Python for our backend, and NuxtJS and Vue for our frontend. Our development phases are as follows: 1. Data Aggregation and Preparation; 2. Centroid Calculation and Art Unit Profiling; 3. Deviation Analysis and Scoring; 4. Visualization and Interpretation Framework.
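The core of phases 2 and 3 (centroid calculation and deviation scoring) can be sketched in a few lines. This is not the project's implementation; it is a toy illustration of the general technique, using tiny 2-D vectors in place of real patent embeddings:

```python
def centroid(vectors):
    """Mean embedding of an art unit's patents (phase 2: art unit profiling)."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    return dot / (norm_a * norm_b)

def deviation_score(patent_vec, unit_centroid):
    """Phase 3: higher score means the patent drifts further
    from its art unit's profile."""
    return 1.0 - cosine(patent_vec, unit_centroid)
```

In this scheme, a docket whose patents accumulate high deviation scores over time would surface in the phase-4 visualization as an art unit whose technology area is shifting, which is the "change over time" question the project targets.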
COSC2026SHRESTHA58753 COSC
Type: Undergraduate
Author(s):
Rahul Shrestha
Computer Science
Advisor(s):
Robin Chataut
Computer Science
Location: Basement, Table 6, Position 2, 1:45-3:45
Artificial intelligence tools, especially large language models (LLMs), are progressively being integrated into educational settings as resources that can enhance student learning and offer novel methods for information retrieval. As these technologies advance, educators and researchers are increasingly focused on understanding their impact on student learning and engagement with academic content. This study examines the potential role of AI-based systems in facilitating student learning by analyzing the various methods students employ to obtain and process information during study activities.
The study's participants are divided into four groups, each of which accesses learning resources in a different way. The first group relies on traditional text-based study resources. The second group uses general online resources to gather information. The third group is allowed to use AI-based tools powered by large language models to receive explanations and assistance. The fourth group uses a hybrid strategy that blends AI-supported tools with conventional study materials.
The performance and learning experiences of these groups are compared to evaluate how different resources influence students’ understanding of course concepts. The findings are expected to provide insight into whether AI technologies can successfully supplement conventional teaching methods. Understanding these effects helps educators determine how to appropriately incorporate AI and LLM tools into classroom settings to improve learning while upholding efficient teaching methods.
COSC2026VO21078 COSC
Type: Undergraduate
Author(s):
Peter Vo
Computer Science
Landen Chambers
Computer Science
Ben Hartje
Computer Science
Beau Moody
Computer Science
Alondra Oropeza
Computer Science
Isabella Reyes
Computer Science
Advisor(s):
Edward Ipser
Computer Science
Location: Basement, Table 3, Position 2, 11:30-1:30
The Driving Safety Certificate Management System is a web application designed to streamline the administration of driving safety courses in Texas. Currently, instructors conduct classes independently but rely on the licensed provider to process student information, retrieve driving records, and issue course completion certificates, which can cause delays and create additional administrative work. This system shifts those responsibilities directly to instructors by allowing them to manage classes, enroll students, process student information, and generate certificates through a centralized platform. By automating these processes, the system reduces manual workload, improves efficiency, and enables faster certificate delivery for students. The application also maintains oversight for administrators while ensuring that instructors can operate more independently within the requirements set by the Texas Department of Licensing and Regulation.
COSC2025BEDNARZ7710 COSC
Type: Undergraduate
Author(s):
Kate Bednarz
Computer Science
James Clarke
Computer Science
James Edmonson
Computer Science
Dave Park
Computer Science
Michala Rogers
Computer Science
Aliya Suri
Computer Science
Advisor(s):
Bingyang Wei
Computer Science
Location: Second Floor, Table 3, Position 1, 1:45-3:45
FrogCrew is a comprehensive web-based system designed to simplify the management of TCU Athletics sports broadcasting crews. Traditional manual methods of scheduling, tracking availability, and assigning roles are inefficient and prone to errors, often leading to miscommunication and scheduling conflicts. To solve these challenges, FrogCrew provides a unified platform that enables administrators to manage game schedules, assign roles based on availability and qualifications, and automate notifications efficiently. Key features include customizable crew member profiles, which allow users to update essential information such as availability, roles, and qualifications, and an automated scheduling tool that simplifies the process of creating game schedules and assigning roles. FrogCrew also includes a shift exchange feature that allows crew members to request shift swaps, with automated notifications sent to administrators for approval. The system's reporting tools provide financial reports, position-specific insights, and individual performance analyses, helping administrators assess crew utilization and manage costs effectively. By automating core functions, FrogCrew reduces manual workload, minimizes errors, and improves communication between administrators and crew members, ensuring optimal staffing and ultimately enhancing the execution of TCU sporting events; Go Frogs!
COSC2025BHANDARI23693 COSC
Type: Undergraduate
Author(s):
Sujit Bhandari
Computer Science
Advisor(s):
Robin Chataut
Computer Science
Location: Basement, Table 14, Position 1, 11:30-1:30
Wearable smart devices, which continuously capture physiological signals such as heart rate, respiratory patterns, and blood oxygen levels, offer significant potential for the early detection of serious health conditions. Timely diagnosis of diseases such as arrhythmia and sleep apnea can greatly enhance patient outcomes by enabling early intervention. However, extensive collection of diverse, real-world wearable sensor data faces challenges due to privacy concerns, data scarcity, and logistical constraints. This research introduces a novel deep learning framework that integrates publicly available wearable sensor data with synthetic physiological signals generated by large language models (LLMs) to create comprehensive and privacy-compliant hybrid datasets. The proposed framework leverages convolutional neural networks (CNNs), optimized for time-series data analysis, alongside advanced machine learning techniques to identify early signs of arrhythmia, sleep apnea, and related health conditions from physiological data. The integration of synthetic data generated by LLMs addresses critical challenges of limited data availability and privacy concerns, enriching the training datasets with diverse scenarios and physiological variations. Preliminary experimental results demonstrate that the hybrid approach, combining publicly accessible wearable sensor data and LLM-generated synthetic signals, significantly enhances the model's accuracy, generalization capability, and resilience to data variability. Models trained on hybrid datasets consistently outperform those relying solely on real-world data, suggesting that synthetic data provides meaningful supplementation to traditional datasets. This study further highlights how synthetic physiological data can enhance the scalability and efficacy of AI-based health monitoring systems, reducing dependency on extensive clinical data collection.
By exploring and validating this innovative data synthesis approach, the research contributes significantly to developing more effective, accessible, and proactive healthcare diagnostic tools, ultimately advancing AI-driven solutions in preventive healthcare.
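The abstract does not detail the CNN architecture, but the basic operation a 1-D CNN applies to a physiological time series (a learned filter slid over the signal, followed by pooling) can be shown with a dependency-free sketch. The edge-detecting kernel and heart-rate values below are illustrative, not taken from the study:

```python
def conv1d(signal, kernel):
    """Valid 1-D convolution (really cross-correlation, as in most
    deep learning libraries): slide the kernel over the signal."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def max_pool(feature_map, size=2):
    """Non-overlapping max pooling to downsample the feature map."""
    return [max(feature_map[i:i + size])
            for i in range(0, len(feature_map) - size + 1, size)]

# Toy heart-rate series (bpm) with an abrupt jump; the [-1, 1] kernel
# acts as a change detector, producing a large activation at the jump.
heart_rate = [60, 60, 60, 90, 90, 90]
feature_map = conv1d(heart_rate, [-1, 1])
pooled = max_pool(feature_map)
```

In a trained CNN the kernels are learned rather than hand-chosen, and many such filters stacked in layers pick out the morphological patterns (e.g. irregular beat intervals) that distinguish arrhythmia or apnea episodes from normal signals.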
COSC2025CHARUBIN50448 COSC
Type: Undergraduate
Author(s):
Katie Charubin
Computer Science
Jenna Busby
Computer Science
Nicholas Collins
Computer Science
Aaryan Dehade
Computer Science
Nate Hernandez
Computer Science
Advisor(s):
Bingyang Wei
Computer Science
Location: Basement, Table 3, Position 2, 11:30-1:30
The iPELiNT project develops an AI-powered patent analysis dashboard designed to streamline the patent prosecution process for attorneys and practitioners. This web application leverages modern technologies, including Vue.js with the Nuxt 3 framework for frontend development, Node.js with Express for backend services, and MongoDB for database management, and integrates AI models from OpenAI to analyze patent documents.
The system features a user-friendly dashboard that allows practitioners to upload patent applications, analyze document health, view CPC prediction analytics, examine keyword relevance, and identify potential prior art conflicts. Key functionality includes document parsing, automated health checks, Art Unit prediction, and generation of actionable reports. The solution also incorporates user account management, notification systems, and specialized document generation tools.
Development followed an iterative process with clearly defined milestones and tasks distributed across team members. The project addresses a critical need in the patent industry by providing an all-in-one platform that simplifies complex patent analysis, replacing traditionally fragmented and cumbersome tools with a streamlined, intuitive interface.
The completed iPELiNT dashboard enhances efficiency for patent professionals, improves application quality through AI-powered insights, and ultimately streamlines the patent prosecution workflow with modern design principles and cutting-edge technology.