The name's Imam. Shams Imam. I am a fifth-year graduate student in the
Department of Computer Science at Rice University working under the supervision
of Prof. Vivek Sarkar in the Habanero Extreme Scale Software Research Project.
My research interests center on Parallel Programming Models and Runtime
Systems, with the aim of making it easier for programmers to write
task-parallel programs on multicore machines.
While I have attended conferences such as OOPSLA and ECOOP in the past, I am
looking forward to attending PLDI and visiting Scotland for the first time this
year. I'm excited to be going to the conference and its co-located workshops,
where I will be presenting two papers:
- "A Case for Cooperative Scheduling in X10's Managed Runtime" at the X10 workshop.
In this work, we motivate the use of a cooperative runtime to address the problem of scheduling parallel tasks with general synchronization patterns. Current implementations of task-parallel programming models support fork-join parallelism efficiently, but cannot efficiently support the more general synchronization patterns that are important for a wide range of applications. In the presence of patterns such as futures, barriers, and phasers, current implementations revert to thread-blocking scheduling of tasks. Our experimental results show that our cooperative runtime delivers significant improvements in performance and memory utilization on a range of benchmarks using future and phaser constructs, relative to a thread-blocking runtime system that uses the same underlying work-stealing task scheduler. (A minimal sketch of the thread-blocking problem appears after this list.)
- "Exploiting Implicit Parallelism in Dynamic Array Programming Languages" at the ARRRAY workshop.
We have built an interpreter for the array programming language J that exploits the implicit data parallelism in the language to achieve good parallel speedups on a variety of benchmark applications. Array languages operate on entire arrays without explicit loops, which both simplifies programs and lets an interpreter parallelize execution without complex analysis or input from the programmer. Our implicitly parallelizing interpreter for J is written entirely in Java, and the interpreter itself is responsible for exploiting the parallelism available in an application. Our results show good parallel speedup across the benchmarks, including near-perfect linear speedup on inherently parallel ones. (A rough analogy to this strategy also appears after the list.)
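To give a flavor of the scheduling problem from the first paper, here is a minimal Java sketch using plain java.util.concurrent rather than our X10 runtime; the class name, pool size, and sleep are all illustrative choices. A consumer task that calls get() on a future parks its worker thread until the producer finishes, so with enough such waits a fixed-size pool can run out of workers even though runnable tasks remain. A cooperative runtime instead suspends the waiting task and reuses the worker.

```java
import java.util.concurrent.*;

// Illustrative sketch only: shows the thread-blocking behavior of
// futures under an ordinary executor, not our cooperative runtime.
public class BlockingFutureDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);

        // Producer task: delivers a value after some simulated work.
        Future<Integer> producer = pool.submit(() -> {
            Thread.sleep(100); // stand-in for real computation
            return 42;
        });

        // Consumer task: get() parks this worker thread until the
        // producer completes. A cooperative runtime would instead
        // suspend the consumer task and free the worker for other
        // ready tasks.
        Future<Integer> consumer = pool.submit(() -> producer.get() + 1);

        System.out.println(consumer.get()); // prints 43
        pool.shutdown();
    }
}
```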
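For the second paper, here is a rough analogy for how a loop-free array primitive exposes data parallelism; again this is only a sketch, not our interpreter's actual implementation. In J, an expression like `+/ x * y` sums an elementwise product with no explicit loop, so an interpreter is free to partition the elementwise work across threads. A Java parallel stream makes the same point:

```java
import java.util.stream.IntStream;

// Illustrative analogy: a single array-level operation that the
// runtime partitions across worker threads, much as an array
// interpreter can partition an elementwise primitive.
public class ArrayPrimitiveDemo {
    public static void main(String[] args) {
        int n = 1_000_000;
        double[] x = new double[n], y = new double[n];
        for (int i = 0; i < n; i++) { x[i] = i; y[i] = 2.0; }

        // Dot product expressed as one whole-array operation,
        // analogous to J's "+/ x * y"; no explicit parallel loop
        // is written by the programmer.
        double dot = IntStream.range(0, n)
                              .parallel()
                              .mapToDouble(i -> x[i] * y[i])
                              .sum();
        System.out.println(dot);
    }
}
```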
I hope you find the time to attend my talks, and I expect I'll run into you at
the various sessions. Please share any feedback you have whenever we meet.
Looking forward to meeting all of you, making new friends, and possibly
exploring a bit of Edinburgh.