
ASME 2025 Student Hackathon

Background

The Computers and Information in Engineering (CIE) Division of the American Society of Mechanical Engineers (ASME) has held hackathon events at the IDETC/CIE 2020, 2021, 2022, 2023, and 2024 Conferences. These events have provided students and engineering practitioners with a unique platform to explore how cutting-edge data science and machine learning techniques can address real-world engineering challenges.

Building on this legacy, the CIE Division is excited to host the ASME-CIE Hackathon once again at the IDETC/CIE 2025 Conference. This year’s hackathon will focus on emerging trends in multi-modality process monitoring, generative models, and large language models, reflecting the latest advancements in engineering applications of machine learning.

Participants will have the opportunity to work with state-of-the-art datasets and tackle pressing and innovative challenges in the field, gaining hands-on experience with the tools and techniques shaping the future of engineering innovation. The event will be held in a hybrid format, combining both virtual and on-site participation, and will take place as a pre-conference event.


Hilton Anaheim, California

August 10 - 16, 2025 virtually and August 17, 2025 in person

in conjunction with

ASME IDETC/CIE 2025 Conference (August 17 - 20, 2025 in person)


Registration Details

  • A registration fee (amount TBD) per participant will apply for the Hackathon event. Participants can register for the Hackathon either as a conference add-on or as a stand-alone event.
  • This is a hybrid event, offering both virtual and on-site participation options. If you are unable to attend in person, please email us at idetccie.seikm@gmail.com so we can assist you in connecting virtually.
  • Participants are not limited in the number of problems they can choose to tackle during the Hackathon.

Important Dates

August 5, 2025, 11:59pm EDT: Registration Deadline
August 10, 2025, 12 – 3pm EDT: Virtual Hackathon Kick-off
August 17, 2025: Hybrid Hackathon Closing
August 17, 2025, 6am EDT: Hackathon Deliverables Due
August 17, 2025, 10:30am – 4:15 pm EDT: Final Presentations
August 17, 2025, 4:15 – 5:30pm EDT: Hackathon Judging
August 17, 2025, 5:30 - 6:30pm EDT: Closing Ceremony


* By registering for the Hackathon, you agree to allow your information to be shared with other registrants and volunteer leaders for the purposes of communicating event information and facilitating intra-team communication.


Award Information

Exciting prizes await! For each problem category, the top three teams will be awarded prizes. Prize amounts will be announced soon!

Note: Teams will be judged within their respective problem topic areas, and awards will be selected separately for each category.


Eligibility

Undergraduate students, graduate students, postdocs, and non-students (e.g., professionals) are welcome to attend the Hackathon and experience the exciting competitions.


Hackathon Team and Presentation

  • All participants must be registered by August 5, 2025, 11:59 pm EDT. Everyone will be placed on a team of 1-2 members; you may form a team based on your own preference. All implementations must be the team's original work.
  • Each Hackathon team will continue their own meetings via their own chosen platform between 08/10/25 and 08/17/25.
  • Each team needs to present their work, including the technical approach, and submit the results to their own GitHub repository by the submission deadline.
  • Each team's final presentation, results, and technical approach will be evaluated by a technical committee (separate from the Hackathon organizing committee).


Problem Sets

The full datasets can be accessed via the links under each problem statement.

Problem Statement 1
National Institute of Standards and Technology (NIST)

Laser Powder Bed Fusion (LPBF) is a leading metal additive manufacturing (AM) technology valued for its ability to fabricate complex, high-performance components. However, the process remains vulnerable to defects that compromise part quality and repeatability. One of the most critical—and often overlooked—sources of such defects is the powder spreading step, where anomalies like streaks or debris can lead to uneven powder layers and downstream printing issues.

Monitoring the layer-wise powder spreading conditions is essential to identify potential defects early in the printing process. As machine vision and data-driven methods become more capable, image-based inspection has become a promising tool to automate the quality control of the powder spreading process. Still, challenges remain in robustly detecting anomalies under varying lighting and surface conditions, and in generating high-fidelity synthetic data to support model development.

This challenge invites participants to explore both tasks: building models to segment powder spreading anomalies and generating realistic powder-bed images to enhance quality monitoring in LPBF.
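As a concrete starting point for the segmentation half of this challenge, the sketch below shows a minimal PyTorch encoder-decoder baseline trained on random placeholder tensors. The architecture, image size, hyperparameters, and data shapes are illustrative assumptions only and are not part of the official dataset or evaluation protocol.

# Minimal sketch of a binary segmentation baseline for powder-bed anomaly
# detection. All shapes and hyperparameters are illustrative assumptions,
# not part of the official challenge setup.
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """A very small encoder-decoder producing a per-pixel anomaly logit."""
    def __init__(self, in_ch=1, base=16):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, base, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(base, base, 3, padding=1), nn.ReLU())
        self.pool = nn.MaxPool2d(2)
        self.enc2 = nn.Sequential(nn.Conv2d(base, base * 2, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec = nn.Sequential(nn.Conv2d(base * 2, base, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(base, 1, 1))  # 1-channel logit map

    def forward(self, x):
        e1 = self.enc1(x)              # full-resolution features
        e2 = self.enc2(self.pool(e1))  # downsampled features
        d = self.up(e2)                # upsample back to input resolution
        d = torch.cat([d, e1], dim=1)  # skip connection
        return self.dec(d)             # raw logits; apply sigmoid for masks

# Placeholder batch: 4 grayscale layer images with sparse binary anomaly masks.
images = torch.rand(4, 1, 128, 128)
masks = (torch.rand(4, 1, 128, 128) > 0.95).float()

model = TinyUNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.BCEWithLogitsLoss()

for step in range(5):                  # a few toy optimization steps
    optimizer.zero_grad()
    loss = criterion(model(images), masks)
    loss.backward()
    optimizer.step()

In practice, teams would replace the random tensors with the provided powder-bed images and labeled masks, and would likely need losses and augmentations that handle severe class imbalance and varying illumination.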

Problem Statement 2
DesignQA

  • How well can LLMs understand design engineering documents?
  • Will model hallucinations exacerbate design biases and lead to design catastrophes?
  • Can you develop a method to beat the best-performing model on an engineering design benchmark, DesignQA?

A trusted AI assistant that can deeply understand technical engineering documents would be invaluable for engineers, especially for tasks that require frequent reference to standards or compliance checks. Unfortunately, research shows that vision-language models (VLMs) currently struggle with synthesizing technical information from documents with engineering images. To evaluate these capabilities, the DesignQA benchmark—composed of 1149 question-answer pairs based on real data from the MIT Motorsports team—tests VLMs on their ability to interpret and apply the Formula SAE rulebook. GPT-4o, Gemini 1.0, Claude Opus, and other models have been evaluated on DesignQA. However, no existing model achieves a perfect score on any DesignQA subset. The results for many of the benchmark subsets indicate that there is substantial room for VLM improvement when it comes to understanding and cross-analyzing engineering documentation and images.
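For teams tackling DesignQA, a bare-bones evaluation loop might be organized as in the sketch below. The JSON field names, the query_model stub, and the exact-match metric are assumptions made here for illustration; the official benchmark defines its own data format and subset-specific metrics.

# Hypothetical harness for running question-answer pairs through a
# vision-language model and scoring the responses. Field names, the
# query_model stub, and the metric are illustrative assumptions, not
# the official DesignQA evaluation.
import json

def query_model(question: str, rule_excerpt: str, image_path: str | None) -> str:
    """Stub: replace with a call to the VLM of your choice (e.g., via its
    official Python client), passing the prompt and any associated image."""
    raise NotImplementedError

def exact_match(prediction: str, reference: str) -> bool:
    """Crude normalized string comparison; real benchmarks typically use
    task-specific metrics instead."""
    return prediction.strip().lower() == reference.strip().lower()

def evaluate(qa_file: str) -> float:
    """Return the fraction of questions answered correctly under exact match."""
    with open(qa_file) as f:
        qa_pairs = json.load(f)        # assumed: a list of QA dicts
    correct = 0
    for item in qa_pairs:
        prediction = query_model(item["question"],
                                 item.get("rule_excerpt", ""),
                                 item.get("image"))
        correct += exact_match(prediction, item["answer"])
    return correct / len(qa_pairs)

# Example (hypothetical file name): accuracy = evaluate("designqa_subset.json")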

 


Hackathon Organizers

Prof. Yaoyao Fiona Zhao (McGill University), yaoyao.zhao@mcgill.ca
Prof. Zhenghui Sha (The University of Texas at Austin), zsha@austin.utexas.edu
Prof. Hyunwoong Ko (Arizona State University), hyunwoong.ko@asu.edu
Dr. Zhuo Yang (Georgetown University, National Institute of Standards and Technology), zy253@georgetown.edu
Dr. Laxmi Poudel (General Electric), Laxmi.Poudel@ge.com
Dr. Yan Lu (National Institute of Standards and Technology), yan.lu@nist.gov
Jiarui Xie (McGill University), jiarui.xie@mail.mcgill.ca