Project Description
Reassembling Vision explores how machine learning and interaction design can transform the often-overlooked process of architectural model disassembly into a more engaging, traceable, and sustainable workflow. In design schools, physical models are frequently discarded after reviews, while materials such as foam, chipboard, wood, acrylic, and plastics are rarely sorted or reused correctly. This project responds to that problem by prototyping a web-based system that uses image classification, segmentation, and material-stream logic to identify model-making materials from uploaded photos, classify them into recyclable or non-recyclable categories, and provide visual feedback on their composition.
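The material-stream step described above can be sketched in a few lines: predicted material labels are mapped to recycling streams and summarized into a composition report. This is a minimal illustration, not the project's implementation; the class names and stream assignments below (beyond the materials named in the text) are assumptions, and the real 13-class taxonomy may differ.

```python
# Hypothetical sketch of the material-stream logic: map predicted
# material labels to recycling streams and report composition shares.
from collections import Counter

# Assumed label-to-stream mapping; the project's actual 13 classes
# and their stream assignments may differ.
MATERIAL_STREAMS = {
    "chipboard": "paper/cardboard recycling",
    "wood": "wood reuse",
    "acrylic": "rigid-plastic recycling",
    "pet_plastic": "rigid-plastic recycling",
    "foam_board": "non-recyclable",
    "polystyrene_foam": "non-recyclable",
}

def composition_report(predicted_labels):
    """Summarize detected materials by share, stream, and recyclability."""
    counts = Counter(predicted_labels)
    total = sum(counts.values())
    report = []
    for label, n in counts.most_common():
        stream = MATERIAL_STREAMS.get(label, "unknown")
        report.append({
            "material": label,
            "share": n / total,
            "stream": stream,
            "recyclable": stream not in ("non-recyclable", "unknown"),
        })
    return report

for row in composition_report(["chipboard", "chipboard", "foam_board", "acrylic"]):
    print(f'{row["material"]}: {row["share"]:.0%} -> {row["stream"]}')
```

A report like this could drive the visual composition feedback the interface shows after an upload.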
Beyond sorting, the platform introduces playful “detective” and “creative” workflows: users detect materials, learn their recycling streams, and then reuse selected pieces through shape extraction, silhouette matching, 2D arrangement, or template-based reconstruction. The system combines a 13-class MobileNet-based material classifier, segmentation methods, and interface design to connect sustainability education with hands-on making. Rather than treating waste management as a passive set of instructions, the project frames reuse as an interactive design experience, showing how computer vision can support circular material practices, design pedagogy, and future tools for sustainable fabrication environments.
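One way to sketch the silhouette-matching step is with moment invariants: each extracted shape is reduced to a few translation-, scale-, and rotation-invariant features, and candidate pieces are ranked by feature distance. This is an illustrative stdlib-only sketch (shapes as binary masks, two Hu-style invariants), not the project's actual matching method, and the function names are hypothetical.

```python
# Minimal silhouette matching via normalized central moments.
# A shape is a binary mask: a list of rows of 0/1 values.

def moment(mask, p, q, cx=0.0, cy=0.0):
    """Central moment mu_pq of a binary mask about (cx, cy)."""
    return sum(((x - cx) ** p) * ((y - cy) ** q)
               for y, row in enumerate(mask)
               for x, v in enumerate(row) if v)

def invariants(mask):
    """Two rotation-invariant Hu-style features of the silhouette."""
    m00 = moment(mask, 0, 0)                  # area
    cx = moment(mask, 1, 0) / m00             # centroid x
    cy = moment(mask, 0, 1) / m00             # centroid y
    def eta(p, q):                            # scale-normalized moment
        return moment(mask, p, q, cx, cy) / (m00 ** (1 + (p + q) / 2))
    e20, e02, e11 = eta(2, 0), eta(0, 2), eta(1, 1)
    return (e20 + e02, (e20 - e02) ** 2 + 4 * e11 ** 2)

def silhouette_distance(a, b):
    """Smaller distance = more similar silhouettes."""
    return sum(abs(x - y) for x, y in zip(invariants(a), invariants(b)))

square = [[1, 1], [1, 1]]
big_square = [[1] * 4 for _ in range(4)]
bar = [[1, 1, 1, 1]]
# A square matches a rescaled square more closely than an elongated bar.
print(silhouette_distance(big_square, square) < silhouette_distance(big_square, bar))
```

Ranking reusable pieces by such a distance is one plausible way to suggest which leftover fragments fit a target template.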