Talk 1: Portable Laser Cutting
Talk 2: LSVP: Towards Effective On-the-go Video Learning Using Optical Head-Mounted Displays
Speaker 1: Mr Thijs Roumen
Speaker 2: Mr Ashwin Ram
25 Feb 2021 Thursday, 04:00 PM to 05:30 PM
Laser-cut 3D models shared online tend to be basic and trivial; models built over long periods of time and by multiple designers are few or nonexistent. I argue that this is caused by the lack of an exchange format that would allow continuing the work. At first glance, it may seem like such a format already exists, as laser-cut models are already widely shared in the form of 2D cutting plans. However, such files are susceptible to variations in cutter properties (such as the kerf, i.e., the width of material removed by the beam) and do not allow modifying the model in any meaningful way (no adjustment of material thickness, no parametric changes, etc.). I therefore consider this format machine-specific.
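To make the kerf issue concrete, here is a minimal sketch (not the speaker's tooling) of why a 2D cutting plan is tied to one machine: part outlines must be offset by half the local kerf, so a plan tuned for one cutter produces undersized parts on another. The function name and kerf values are illustrative; the sketch uses the shapely library for polygon offsetting.

```python
# Illustrative sketch: kerf compensation makes 2D cutting plans machine-specific.
from shapely.geometry import Polygon

def compensate_kerf(outline: Polygon, kerf_mm: float) -> Polygon:
    """Grow the part outline by half the kerf so the material removed
    by the beam leaves the part at its nominal size."""
    return outline.buffer(kerf_mm / 2.0)

nominal = Polygon([(0, 0), (50, 0), (50, 30), (0, 30)])  # 50 x 30 mm part
plan_a = compensate_kerf(nominal, kerf_mm=0.15)  # plan tuned for cutter A
plan_b = compensate_kerf(nominal, kerf_mm=0.30)  # cutter B needs a new plan

# Cutting plan_a on cutter B leaves every edge ~0.075 mm undersized --
# enough to ruin press-fit joints.
print(round(plan_a.bounds[2] - plan_b.bounds[2], 3))
```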
My first take on the challenge is to see how far we can get by still building on the de facto standard, i.e., 2D cutting plans. I tackled the challenge by rewriting 2D cutting plans, replacing non-portable elements with portable ones. However, this comes at the cost of extra incisions, which reduce the structural integrity of models and impact their aesthetic qualities, and rare mechanisms or joints may go undetected. I thus take a more radical approach, which is to move to a 3D exchange format (kyub). This eliminates these challenges, as it guarantees portability by generating a new machine-specific 2D file for the local machine at export time. Instead, it raises the question of compatibility: files already exist in 2D, so how do we get them into 3D? I demonstrate a software tool that reconstructs the 3D geometry of the model encoded in a 2D cutting plan, allows modifying it using a 3D editor, and re-encodes it to a 2D cutting plan. I demonstrate how this approach allows a much wider range of modifications, including scaling, changing material thickness, and even remixing models.
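The following is a minimal sketch of the idea behind a 3D-first exchange format: store the model parametrically and generate the machine-specific 2D geometry only at export time, so material thickness and kerf become export parameters rather than baked-in constants. The function and its parameters are assumptions for illustration, not kyub's actual API.

```python
# Illustrative sketch: parametric export of one finger-jointed edge.
def export_finger_joint_edge(length_mm: float, material_mm: float,
                             kerf_mm: float, fingers: int = 5):
    """Return tab rectangles (x0, y0, x1, y1) along one edge.
    Tab depth equals the material thickness; tab width is widened by
    half the kerf so the joint press-fits on the local machine."""
    pitch = length_mm / (2 * fingers)  # alternating tab / notch
    return [(2 * i * pitch - kerf_mm / 2, 0.0,
             2 * i * pitch + pitch + kerf_mm / 2, material_mm)
            for i in range(fingers)]

# The same 3D model exports differently per machine and material:
print(export_finger_joint_edge(100, material_mm=3, kerf_mm=0.15))
print(export_finger_joint_edge(100, material_mm=4, kerf_mm=0.30))
```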
The transition from sharing machine-oriented 2D cutting files to 3D files enables users worldwide to collaborate, share, and reuse, and thus to move on from creating thousands of trivial models from scratch to collaborating on large, complex projects.
Thijs Roumen is a PhD candidate in Human-Computer Interaction in the lab of Patrick Baudisch at the Hasso Plattner Institute in Potsdam, Germany. He received his MSc from the University of Southern Denmark, Sonderborg in 2013 and his BSc from the Technical University of Eindhoven, Netherlands in 2011. Between his master's degree and PhD, he worked at the National University of Singapore as a Research Assistant with Shengdong Zhao. His research interests are in personal fabrication, digital collaboration, and enabling increased complexity for laser cutting. He has published full papers at the top-tier ACM conferences CHI and UIST, and he serves on several ACM program committees, including ACM UIST.
The ubiquity of mobile phones allows video content to be watched on the go. However, as our initial qualitative study showed, users' current on-the-go video learning experience on phones is encumbered by having to toggle and manage attention between the video and the surroundings. To alleviate this, we explore how combining the emergent smart glasses (Optical Head-Mounted Display, or OHMD) platform with a redesigned video presentation style can better distribute users' attention between learning and walking tasks. We evaluated three presentation techniques (highlighting, sequentiality, and data persistence) and found that combining sequentiality and data persistence is highly effective, yielding a 56% higher immediate-recall score than a static video presentation. We also compared OHMDs against smartphones to delineate the advantages of each platform for on-the-go video learning in the context of everyday mobility tasks. We found that OHMDs improved users' 7-day delayed recall scores by 17% while still allowing a 5.6% faster walking speed, especially during complex mobility tasks. Based on these findings, we introduce the Layered Serial Visual Presentation (LSVP) style, which incorporates sequentiality, strict data persistence, and a transparent background, among other properties, for future OHMD-based on-the-go video learning.
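As an illustration only (the paper's system renders onto smart glasses, not a terminal), here is a minimal sketch of the two properties the study found most effective: sequentiality (reveal one unit of content at a time) and data persistence (keep the units already shown on screen rather than letting them scroll away). The function name, timing, and layer size are assumptions.

```python
# Illustrative sketch of sequential, persistent, layered presentation.
import time

def lsvp_present(units, seconds_per_unit=4.0, lines_per_layer=3):
    """Reveal one unit at a time (sequentiality); units already shown in
    the current layer stay on screen (persistence). When a layer fills,
    clear it and start the next layer."""
    layer = []
    for unit in units:
        if len(layer) == lines_per_layer:
            layer = []                                 # start a fresh layer
        layer.append(unit)                             # add the next serial unit
        print("\x1b[2J\x1b[H" + "\n".join(layer))      # redraw the overlay
        time.sleep(seconds_per_unit)

lsvp_present([
    "Step 1: Preheat the pan",
    "Step 2: Add a spoon of oil",
    "Step 3: Saute onions until golden",
    "Step 4: Add the spices",
])
```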
Ashwin Ram is a PhD candidate in the NUS-HCI Lab, Singapore, advised by Prof. Shengdong Zhao. He received his BTech in 2018 from the National Institute of Technology, Trichy, India. His research focuses on ubiquitous learning with dynamic content (e.g., videos) on smart glasses.