Friday, May 30, 2014
While I'm writing this post, my passport with the UK visa stamp is still travelling somewhere in the US. Did I tell you that my flight is next Monday? Anyway, it was really close, and there was a time when I thought I wouldn't be able to go.
This is going to be my third experience with PLDI (the two previous ones were in San Jose in 2011, and Seattle in 2013). I had lots of fun last time, and I believe it will be even more exciting this year.
I will present two papers at the conference. The first paper, Compiler Validation via Equivalence Modulo Inputs, introduces a novel method to generate compiler test programs from existing code. We found nearly 200 GCC and LLVM bugs over the last few months. This work was done at UC Davis with my advisor Zhendong Su and my lab-mate Mehrdad Afshari. The second paper, FlashExtract: A General Framework for Data Extraction by Examples, presents a framework that allows easy creation of specialized synthesizers to support data extraction by examples. I did this work with my co-advisor, Sumit Gulwani, at Microsoft Research during my two MSR internships last year.
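The core idea behind EMI is simple: profile a program on some inputs, then mutate code that those inputs never execute; the original and the variant must behave identically on those inputs, so if two compiled binaries of them disagree, the compiler is at fault. Here's a toy sketch of that equivalence check (illustrative names and code, not the paper's actual tooling, which works on real C programs):

```java
import java.util.List;

public class EmiSketch {
    // Original "program": the negative branch is dead
    // for all of the inputs we profile below.
    static int original(int x) {
        if (x < 0) return -x * 31;   // never executed for non-negative inputs
        return x + 1;
    }

    // EMI variant: the code unexecuted under the profiled inputs is pruned.
    static int variant(int x) {
        return x + 1;
    }

    public static void main(String[] args) {
        List<Integer> profiledInputs = List.of(0, 1, 2, 42);
        for (int x : profiledInputs) {
            // By construction both versions agree on the profiled inputs,
            // so any divergence between their compiled binaries would
            // indicate a miscompilation.
            if (original(x) != variant(x))
                throw new AssertionError("divergence at input " + x);
        }
        System.out.println("equivalent modulo the profiled inputs");
    }
}
```

Feeding many such original/variant pairs through a compiler and diffing the results is what stress-tests the optimizer.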
See you guys in a few days!
Our PLDI paper introduces an abstract interpretation-based framework that leverages multiple versions of the same program to report to the user only the warnings that result from changes to the program, thereby reducing the number of false alarms.
My SRC work on piecewise refutation analysis presents a new technique for precise and scalable goal-directed static analysis. The idea is to use cheap up-front information (such as a points-to analysis) to soundly "jump" around the program in order to focus a precise symbolic analysis only on the code that matters for the property of interest, enhancing scalability.
PLDI was awesome last year and I am hoping this year will be just as great. I've already read several very nice papers from this year's program and am looking forward to hearing about more work in the presented talks and meeting more people. Can't wait!
It was another cold Ithaca morning. November 1st, 2012. All Saints Day, if that's your style, or All Hallows' Day for the traditionalists. Either way, it was a day to remember.
I didn't know it right then. Woke up, like any other morning, scrambled out of bed and into a coat, and stumbled up the 40-degree, 900-foot incline separating my apartment from the university. That's Cornell for ya, Cornell in a nutshell. They say the hill's a metaphor; don't believe it. It's all part of the game, just another one of the little jokes we tell to survive the winter and isolation. You gotta forget the cold somehow.
My hands were still stinging from the frost when he walked through the door. Immediately, I knew something was up. Something was different. For starters, he was younger than you'd expect. Young for a professor. He was wearing a t-shirt, too. Beneath my double-layer coat, sweater, and long underwear I shivered a little on his behalf. Poor guy must've been from out of town. Then there was his computer, his slides. He didn't use PowerPoint and he didn't use Beamer. Instead, he projected a blank page on his tablet and scribbled notes with a stylus. Revolutionary. But most surprising were his feet. He wasn't wearing shoes.
The man was Ross Tate. The lecture was the Curry-Howard isomorphism. That summer I analyzed a mountain of code to see if one of his ideas held up in practice. It did, so together with PhD student Fabian Muelboeck, we wrote it up in a paper. This summer, I'm fortunate enough to get to share this idea.
The paper is titled Getting F-Bounded Polymorphism into Shape and it's about an idea from Ross's friends in industry that has powerful consequences. They treated some classes/interfaces differently from others. Things like Comparable, Equatable, Addable, Clonable received special treatment in their codebase. Instead of being passed around as data or instantiated, they only modified other class/interface declarations. String, for example, might extend Comparable&lt;String&gt; to support a new method in a type-safe way.
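In Java terms, the pattern looks roughly like this (a minimal illustration of my own, not code from the paper): a "shape"-style interface shows up only in implements clauses and in type-parameter bounds, never as the type of a field, parameter, or object you instantiate.

```java
// A "shape"-style interface: it only constrains other declarations;
// nothing instantiates it or passes it around as data.
interface MyComparable<T> {
    int compareTo(T other);
}

// F-bounded use: the class names itself in the shape's type argument,
// so compareTo accepts exactly another Version.
class Version implements MyComparable<Version> {
    final int major, minor;
    Version(int major, int minor) { this.major = major; this.minor = minor; }
    public int compareTo(Version other) {
        return major != other.major ? Integer.compare(major, other.major)
                                    : Integer.compare(minor, other.minor);
    }
}

public class ShapeDemo {
    // The shape appears only as a *bound* on T, never as a data type.
    static <T extends MyComparable<T>> T max(T a, T b) {
        return a.compareTo(b) >= 0 ? a : b;
    }

    public static void main(String[] args) {
        Version best = max(new Version(1, 4), new Version(2, 0));
        System.out.println(best.major + "." + best.minor); // prints 2.0
    }
}
```

The observation of the paper, as I understand it, is that restricting such interfaces to this constraint-only role is what developers already do in practice.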
Ross determined the core rules influencing their guidelines. I did a survey of 60 Java projects to determine whether other developers followed the same rules. It turns out they did. Ross then formalized these rules and made them part of the type system. Here's where the magic happens.
Just by separating these two concepts (constraints vs. data), we eliminate cyclic dependencies in the type system. Immediately, subtyping becomes decidable under a simple algorithm. With a little more work, non-syntactic equality and decidable joins pop out. Future work is promising; in fact, I'm currently working on proving conditional inheritance in our system. It's one firm step towards a type-safe world.
I'm glad to have gotten to work with Ross, and I'm grateful to be able to attend PLDI to share this idea with the community. Shouts out to PLDI, ACM SIGPLAN, and the NSF for making this happen.
Wednesday, May 28, 2014
Tuesday, May 27, 2014
I am a second-year PhD student from Purdue University.
My paper "Accurate Application Progress Analysis for Large-Scale Parallel Debugging" was accepted at the conference. In this paper we present a highly accurate automated debugging technique for parallel applications written in MPI.
Parallel scientific applications that run on supercomputers with hundreds of thousands of processes are extremely difficult to debug. Our approach can identify the root cause of a problem and its associated code region with minimal manual interaction.
This is joint work between Purdue and Lawrence Livermore National Lab.
I will also be presenting a poster of the paper during the poster session.
I will be attending PLDI for the first time, so I am really excited. I have already shortlisted all the interesting talks I am going to attend. I also hope to meet other researchers working on exciting problems, which might lead to future collaborations. The NSF SIGPLAN travel award helped me a lot with my travel budget. I am really thankful to the PLDI organizing committee and SIGPLAN for giving me this opportunity.
I am a PhD student in the Department of Computer Science at Rice University,
working under the supervision of Prof. Vivek Sarkar in the Habanero Extreme
Scale Software Research Project. My research interests mostly include parallel
programming models and runtime systems, with the aim of making it easier for
programmers to write task-parallel programs on multicore machines.
While I have attended conferences such as OOPSLA and ECOOP in the past, I am
looking forward to attending PLDI and visiting Scotland for the first time this
year. I'm excited to be going to the conference and co-located workshops. I will
be presenting two papers in co-located workshops:
- "A Case for Cooperative Scheduling in X10's Managed Runtime" at the X10 workshop.
In this work, we motivate the use of a cooperative runtime to address the problem of scheduling parallel tasks with general synchronization patterns. Current implementations for task-parallel programming models provide efficient support for fork-join parallelism, but are unable to efficiently support more general synchronization patterns that are important for a wide range of applications. In the presence of patterns such as futures, barriers, and phasers, current task-parallel implementations revert to thread-blocking scheduling of tasks. Our experimental results show that our cooperative runtime delivers significant improvements in performance and memory utilization on a range of benchmarks using future and phaser constructs, relative to a thread-blocking runtime system while using the same underlying work-stealing task scheduler.
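The contrast can be glimpsed even in plain Java (my own analogy, not the X10 runtime): calling get() on a future parks a worker thread until the value arrives, whereas registering a continuation lets that thread cooperatively move on to other tasks.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CoopDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);

        CompletableFuture<Integer> f =
            CompletableFuture.supplyAsync(() -> 21, pool);

        // Thread-blocking style: calling f.get() inside another task would
        // park one of the pool's workers until 21 arrives.
        // Cooperative style: attach a continuation instead; no thread waits,
        // and the doubling runs whenever the value becomes available.
        CompletableFuture<Integer> doubled = f.thenApply(x -> x * 2);

        System.out.println(doubled.get()); // prints 42
        pool.shutdown();
    }
}
```

With many blocked futures, the thread-blocking style needs one parked thread per wait; the cooperative style keeps a small worker pool busy, which is the memory and performance gap the paper measures.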
- "Exploiting Implicit Parallelism in Dynamic Array Programming Languages" at the ARRAY workshop.
We have built an interpreter for the array programming language J. The interpreter exploits the implicit data parallelism in the language to achieve good parallel speedups on a variety of benchmark applications. Array programming languages like J operate on entire arrays without the need to write loops, which both simplifies programs and allows an interpreter to parallelize their execution without complex analysis or input from the programmer. Our implicitly parallelizing interpreter for J is written entirely in Java, and the interpreter itself is responsible for exploiting the parallelism available in an application. Our results show that we attain good parallel speedup on a variety of benchmarks, including near-perfect linear speedup on inherently parallel benchmarks.
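A loose analogy in Java (my own illustration, not the interpreter itself): when the programmer writes a single whole-array operation instead of a loop, the operation is elementwise by construction, so the runtime is free to partition it across cores.

```java
import java.util.Arrays;
import java.util.stream.IntStream;

public class ArrayParallel {
    public static void main(String[] args) {
        int[] a = IntStream.rangeClosed(1, 1_000_000).toArray();

        // The array-language expression "a + 1": one whole-array operation,
        // no loop. Because it is elementwise, the runtime may split the
        // array across cores with no analysis or programmer annotations.
        long sum = Arrays.stream(a)
                         .parallel()
                         .map(x -> x + 1)
                         .asLongStream()
                         .sum();

        System.out.println(sum); // sum of 2..1_000_001
    }
}
```

An explicit for-loop over the same array would hide this freedom behind arbitrary statements, which is exactly why loop-free array programs are so amenable to implicit parallelization.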
I hope you find the time to attend my talks, and I'm sure I'll run into you at
the various sessions. Please share any feedback you have whenever we meet.
Looking forward to meeting all of you, making new friends, and possibly also
exploring a bit of Edinburgh.
I will participate in the Student Research Competition, presenting a system called FACADE, a compiler and runtime system for Big Data applications. In this system, we propose a new programming model that breaks the long-held object-oriented principle that objects serve both for data representation and data manipulation; instead, we cleanly separate the two. FACADE can automatically transform existing Java programs into our model with minimal user effort, and it guarantees an upper bound on the number of data objects regardless of how much data a program needs to process.
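To give a flavor of the idea (a hand-rolled illustration of mine, not FACADE's actual generated code): record data lives in one flat array rather than in millions of heap objects, and a single reusable facade object carries the methods, so the object count stays bounded no matter how many records flow through.

```java
public class FacadeSketch {
    // Data representation: n records of (id, score) packed into
    // one int array instead of n heap objects.
    static final int FIELDS = 2;

    // Data manipulation: one reusable facade, repositioned over a record.
    static class RecordFacade {
        int[] store;
        int base;
        RecordFacade at(int[] store, int i) {
            this.store = store;
            this.base = i * FIELDS;
            return this;
        }
        int id()    { return store[base]; }
        int score() { return store[base + 1]; }
    }

    public static void main(String[] args) {
        int n = 100_000;
        int[] store = new int[n * FIELDS];
        for (int i = 0; i < n; i++) {
            store[i * FIELDS]     = i;      // id
            store[i * FIELDS + 1] = i % 7;  // score
        }

        RecordFacade r = new RecordFacade(); // the only record object allocated
        long total = 0;
        for (int i = 0; i < n; i++) total += r.at(store, i).score();
        System.out.println(total);
    }
}
```

With this separation, the garbage collector sees a handful of objects instead of a heap-full, which is where the big-data memory savings come from.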
I look forward to hearing all the talks. I wish PLDI recorded all of them so I wouldn't have to make the painful decision of which session to attend. But that's just my wish :) (is there anyone out there with the same wish?)
Monday, May 26, 2014
Fast (rise4fun.com/Fast/) is a programming language for the static analysis and optimization of programs that manipulate tree data structures, which I developed during my first internship at Microsoft Research together with Margus Veanes, Ben Livshits, and David Molnar.
AutomataTutor, on the other hand, is a tool for the automatic grading of, and feedback generation for, finite automata constructions in undergraduate education, which I developed in collaboration with my advisor (Rajeev Alur), Sumit Gulwani, Dileep Kini, Mahesh Viswanathan, and Bjoern Hartmann.
Finally I'll present my ongoing research on the static analysis of Web Scrapers at the PLDI SRC.
Besides the talks, I really look forward to meeting the other researchers attending the conference and hearing about what they will present.