Software Techniques for Measuring, Managing and Reducing Numerical Error in Programs

Ms Himeshi De Silva
Dr Wong Weng Fai, Associate Professor, School of Computing
Professor John L. Gustafson, Visiting Professor, School of Computing

Tuesday, 18 Feb 2020, 10:00 AM to 11:30 AM

Executive Classroom, COM2-04-02


Floating-point arithmetic is ubiquitous in software. Scientists, engineers, programmers, and anyone else solving real-world problems with computers must grapple with the floating-point number systems available on machines in order to encode the real-valued quantities in their applications into a format that can be computed with. However, due to physical resource limitations, floating-point numbers more often than not represent only an approximation of the actual value. The resulting numerical error can play the role of both friend and foe, presenting formidable challenges to the correctness of programs while also providing opportunities for energy savings through further approximation. In this thesis proposal, we propose three software techniques to address three key issues relating to numerical error -- namely its measurement, management, and reduction. To measure the numerical error of a computation by obtaining rigorous error bounds on it, we propose the use of a new number system called unums. As a use case, we employ unums to measure the numerical stability of Strassen-Winograd matrix multiplication when executed with different hardware instructions and with techniques that can potentially improve its stability. Managing numerical error in a controlled manner allows for potential energy savings in applications that are amenable to approximation. To this end, we augment existing symbolic execution methods to identify program components whose accuracy can be traded, within acceptable limits, for reduced execution cost. Finally, emerging applications such as deep learning demand more efficient floating-point representations than those currently available. We evaluate the use of posit numbers in the training of several state-of-the-art deep neural networks and show that they offer superior numerical stability compared to existing representations.
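The approximation error the abstract refers to, and the idea of bounding it rigorously, can be illustrated in a few lines. Unum arithmetic is not available in standard libraries, so the sketch below uses directed rounding via Python's `math.nextafter` as a crude stand-in for the rigorous interval enclosures (ubounds) that unums provide; the specific widening scheme here is illustrative, not the method evaluated in the thesis.

```python
import math
from fractions import Fraction

# 0.1 cannot be represented exactly in binary floating point, so the
# computed sum of 0.1 and 0.2 differs from the true real-number sum.
s = 0.1 + 0.2
print(s == 0.3)        # False: both sides carry rounding error
print(Fraction(0.1))   # the exact rational value actually stored for 0.1

# A rigorous enclosure in the spirit of interval/unum arithmetic:
# bracket each operand by its floating-point neighbours, add the
# endpoints, then widen by one ulp to absorb the rounding of the add.
lo = math.nextafter(0.1, -math.inf) + math.nextafter(0.2, -math.inf)
hi = math.nextafter(0.1, math.inf) + math.nextafter(0.2, math.inf)
lo = math.nextafter(lo, -math.inf)
hi = math.nextafter(hi, math.inf)

# The true sum (3/10) provably lies inside [lo, hi], even though no
# individual float equals it.
print(lo <= 0.3 <= hi)  # True
```

The point-value comparison fails while the enclosure succeeds; a unum-based computation carries such guaranteed bounds through every operation automatically, which is what makes it usable for measuring the numerical stability of an algorithm like Strassen-Winograd.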