“The only thing that interferes with my learning is my education” - Einstein

Machine Learning

Taught by Prof. Aditya Bhaskara (Ph.D., Princeton University). I studied formal notions of learnability, generalization, supervised vs. unsupervised learning, and more. The goal was to get introduced to the fundamental techniques and models that are central to today's ML applications.

How are learning algorithms different from any old algorithm?

How can we reason about the ability of an algorithm to “learn from examples” and classify data it has never seen?!

The main textbook for the course was Understanding Machine Learning: From Theory to Algorithms by Shai Shalev-Shwartz and Shai Ben-David, along with the excellent lecture notes by Ankur Moitra. Here are my scribe notes.

Taught by Prof. Vivek Srikumar (Ph.D., UIUC). This was a hands-on course in which we studied techniques for developing computer programs that can acquire new knowledge automatically or adapt their behavior over time.

Topics included several algorithms for supervised and unsupervised learning: decision trees, online learning, linear classifiers, empirical risk minimization, computational learning theory, ensemble methods, Bayesian methods, clustering, and dimensionality reduction.
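Since empirical risk minimization underlies much of this list, here is a minimal sketch of the idea in C++, using an invented toy data set and a hypothesis class of 1-D decision stumps (this is my own illustration, not course material):

```cpp
// Minimal sketch of empirical risk minimization (ERM) over a toy
// hypothesis class: 1-D decision stumps h_t(x) = sign(x - t).
// All data and names here are illustrative, not from the course.
#include <cstdio>
#include <vector>

int main() {
    // Toy labeled sample: (feature, label in {-1, +1}).
    std::vector<double> x = {0.5, 1.0, 1.5, 3.0, 3.5, 4.0};
    std::vector<int>    y = {-1,  -1,  -1,  +1,  +1,  +1};

    double best_t = 0.0;
    int best_errors = static_cast<int>(x.size()) + 1;

    // ERM: pick the threshold that minimizes empirical (training) error.
    for (double t = 0.0; t <= 5.0; t += 0.25) {
        int errors = 0;
        for (size_t i = 0; i < x.size(); ++i) {
            int pred = (x[i] > t) ? +1 : -1;
            if (pred != y[i]) ++errors;
        }
        if (errors < best_errors) { best_errors = errors; best_t = t; }
    }
    std::printf("best threshold %.2f, empirical risk %.3f\n",
                best_t, static_cast<double>(best_errors) / x.size());
    return 0;
}
```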

Changing random stuff until your program works is "hacky" and "bad coding practice." But if you do it fast enough it is "Machine Learning" and pays 4x your current salary -- Prof. of CS4620

Taught by Prof. Tom Fletcher (Ph.D., University of North Carolina). I studied how to use probability theory to model and analyze data.

Data in the real world almost always involves uncertainty. This uncertainty may come from noise in the measurements, from missing information, or from the fact that we only have a randomly sampled subset of a larger population.

Probabilistic models are an effective approach for understanding such data, by incorporating our assumptions and prior knowledge of the world.
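As a concrete (if toy) instance of this, here is a minimal C++ sketch that models made-up noisy measurements as draws from a Gaussian and fits its parameters by maximum likelihood:

```cpp
// Minimal sketch: model noisy measurements as draws from a Gaussian
// and fit its parameters by maximum likelihood (sample mean/variance).
// The measurements below are made up for illustration.
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    std::vector<double> obs = {9.8, 10.1, 10.3, 9.7, 10.0, 10.2};

    // MLE for a Gaussian: mu_hat = sample mean, sigma2_hat = mean squared
    // deviation (note: the MLE divides by n, not n - 1).
    double mu = 0.0;
    for (double v : obs) mu += v;
    mu /= obs.size();

    double sigma2 = 0.0;
    for (double v : obs) sigma2 += (v - mu) * (v - mu);
    sigma2 /= obs.size();

    std::printf("mu_hat = %.3f, sigma_hat = %.3f\n", mu, std::sqrt(sigma2));
    return 0;
}
```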

Taught by Prof. Vivek Srikumar (Ph.D., UIUC). This was a project-based course: a graduate-level overview of concepts and techniques for statistical modeling of structured data.

Much of the data we see is in an unstructured form – text, images, videos, etc.

How do we efficiently learn to extract structured information from such raw data?

This could involve tasks such as parsing a sentence, creating a tabulated summary of the information in a webpage, adding tags to an image, recognizing objects in images, etc.
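To make the idea of predicting a structured output concrete, here is a minimal C++ sketch of Viterbi decoding over an invented two-tag HMM; all probabilities and tag names are illustrative, not from the course:

```cpp
// Minimal Viterbi sketch for structured prediction: tag a short word
// sequence with the most probable hidden-state path under a tiny HMM.
#include <cstdio>

int main() {
    const int T = 3, S = 2;                  // 3 observations, 2 tags
    const char* tags[S] = {"NOUN", "VERB"};
    double start[S] = {0.6, 0.4};
    double trans[S][S] = {{0.7, 0.3}, {0.4, 0.6}};
    // emit[s][t]: probability tag s emits the t-th observed word.
    double emit[S][T] = {{0.5, 0.1, 0.4}, {0.1, 0.6, 0.3}};

    double v[T][S];       // best path probability ending in tag s at step t
    int back[T][S];       // backpointers for recovering the best path
    for (int s = 0; s < S; ++s) v[0][s] = start[s] * emit[s][0];

    for (int t = 1; t < T; ++t)
        for (int s = 0; s < S; ++s) {
            v[t][s] = -1.0;
            for (int p = 0; p < S; ++p) {
                double cand = v[t - 1][p] * trans[p][s] * emit[s][t];
                if (cand > v[t][s]) { v[t][s] = cand; back[t][s] = p; }
            }
        }

    // Recover the argmax tag sequence by following backpointers.
    int path[T];
    path[T - 1] = (v[T - 1][0] > v[T - 1][1]) ? 0 : 1;
    for (int t = T - 1; t > 0; --t) path[t - 1] = back[t][path[t]];
    for (int t = 0; t < T; ++t) std::printf("word %d -> %s\n", t, tags[path[t]]);
    return 0;
}
```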

Taught by Prof. Aditya Bhaskara (Ph.D., Princeton University). Machine learning techniques are now ubiquitous tools for extracting structural information from data collections. With the increasing volume of data, large-scale ML applications require efficient implementations to accelerate performance. Existing systems parallelize algorithms through either data parallelism or model parallelism.

But data parallelism suffers in statistical efficiency because of conflicting updates to parameters, while the performance of model-parallel methods is hurt by global barriers.

In this course, we studied how to facilitate the development of large-scale ML applications in production environments.

By allowing concurrent updates to the model across different groups and scheduling the updates within each group, the goal is to strike a good balance between hardware efficiency and statistical efficiency. We also want to reduce network latency by overlapping parameter pulling with update computation, and to exploit the sparseness of the data to avoid pulling unnecessary parameters. Here is one of my presentations.
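As a rough illustration of the group-wise concurrent-update idea (a sketch of my own, not the actual system from the course), here is a C++ snippet in which worker threads update disjoint parameter groups in parallel, each under its own lock; the sizes and the stand-in "gradient" are invented:

```cpp
// Sketch of group-wise concurrent updates: each worker thread owns one
// parameter group, so groups advance in parallel while updates within a
// group stay ordered behind that group's lock.
#include <cstdio>
#include <mutex>
#include <thread>
#include <vector>

int main() {
    const int kGroups = 4, kParamsPerGroup = 2, kSteps = 1000;
    std::vector<double> params(kGroups * kParamsPerGroup, 0.0);
    std::vector<std::mutex> locks(kGroups);   // one lock per parameter group

    auto worker = [&](int group) {
        for (int step = 0; step < kSteps; ++step) {
            double fake_gradient = 0.001;     // stand-in for a real gradient
            std::lock_guard<std::mutex> guard(locks[group]);
            for (int i = 0; i < kParamsPerGroup; ++i)
                params[group * kParamsPerGroup + i] += fake_gradient;
        }
    };

    std::vector<std::thread> threads;
    for (int g = 0; g < kGroups; ++g) threads.emplace_back(worker, g);
    for (auto& t : threads) t.join();

    std::printf("params[0] = %.3f (expected %.3f)\n", params[0], kSteps * 0.001);
    return 0;
}
```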

Taught by Prof. Rajeev Balasubramonian (Ph.D., University of Rochester). This was a project-based course and my favorite course so far. It covered hardware approaches for implementing neural-inspired algorithms. Neural-inspired algorithms use a variety of models for (i) the neuron (e.g., perceptrons and spiking neurons), (ii) connectivity among neurons (e.g., feed-forward, recurrent, reservoir), and (iii) training (e.g., back-propagation and {brace yourself} spike-timing-dependent plasticity). The focus was on state-of-the-art hardware approaches to implementing these algorithms.

These approaches are yielding accelerator chips that will be used for a variety of cognitive tasks in datacenters, mobile devices, self-driving cars, and more. Here is a link to my Project.
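Since perceptrons were one of the neuron models covered, here is a minimal C++ sketch of the classic perceptron update rule on invented 2-D data:

```cpp
// Minimal sketch of the perceptron learning rule: on a mistake, nudge
// the weights toward the misclassified example. Toy data, for illustration.
#include <cstdio>

int main() {
    const int N = 4;
    double x[N][2] = {{2, 1}, {3, 2}, {-1, -2}, {-2, -1}};
    int y[N] = {+1, +1, -1, -1};

    double w[2] = {0, 0}, b = 0;
    for (int epoch = 0; epoch < 10; ++epoch)
        for (int i = 0; i < N; ++i) {
            double activation = w[0] * x[i][0] + w[1] * x[i][1] + b;
            if (y[i] * activation <= 0) {          // mistake: update
                w[0] += y[i] * x[i][0];
                w[1] += y[i] * x[i][1];
                b += y[i];
            }
        }
    std::printf("w = (%.1f, %.1f), b = %.1f\n", w[0], w[1], b);
    return 0;
}
```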

Scientific Computing

Taught by Prof. Bei Wang (Ph.D., Duke University). A graduate breadth course designed to give students exposure to the algorithms and implementations often used in scientific computing.

Scientific computing has become an indispensable tool in many branches of research, and is vitally important for studying a wide range of physical and social phenomena.

In this course we examined the mathematical foundations of well-established numerical algorithms and explored their use through practical examples drawn from a range of scientific and engineering disciplines.

Computer Science

Advanced Algorithms

Taught by Prof. Aditya Bhaskara (Ph.D., Princeton University). Algorithm design and analysis is a fundamental and important part of computer science.

This course introduced us to advanced techniques for the design and analysis of algorithms, and we explored a variety of applications.

The link to the course page is restricted to UofU graduate students who were registered for this class, but the department was kind enough to let us revisit the class lectures through the YouTube playlist.

Advanced Operating Systems

Taught by Prof. John Regehr (Ph.D., University of Virginia). This was another fun course, and we all read the classic Operating System Concepts (9th Edition) by Silberschatz, Galvin, and Gagne.

A compulsory :) and intense 4-credit graduate-level course in which we covered a broad range of topics in operating system design and implementation, including:

Operating system structuring; synchronization, communication, and scheduling in parallel systems.

We picked up APIs from the pthread library and hacked on the Linux kernel.
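Here is a tiny example of the kind of pthread API usage the course exercised (the thread and iteration counts are illustrative; compile with -pthread):

```cpp
// Minimal pthread sketch: spawn a few threads that increment a shared
// counter, serializing access with a mutex so no increments are lost.
#include <cstdio>
#include <pthread.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void* worker(void*) {
    for (int i = 0; i < 100000; ++i) {
        pthread_mutex_lock(&lock);     // serialize access to the counter
        ++counter;
        pthread_mutex_unlock(&lock);
    }
    return nullptr;
}

int main() {
    pthread_t threads[4];
    for (auto& t : threads) pthread_create(&t, nullptr, worker, nullptr);
    for (auto& t : threads) pthread_join(t, nullptr);
    std::printf("counter = %ld (expected 400000)\n", counter);
    return 0;
}
```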

Taught by Prof. Mary Hall (Ph.D., Rice University).

This course taught us how to program massively parallel processors. We learned various techniques for constructing parallel programs. Case studies were used to demonstrate the development process, which begins with computational thinking and ends with effective and efficient parallel programs.

The course showed me the basic concepts of parallel programming and GPU architecture.

Topics such as performance, floating-point formats, parallel patterns, and dynamic parallelism were covered in depth.

We also covered more parallel programming examples, commonly used libraries such as Thrust, and the latest tools. We became familiar with CUDA 5.0, its improved performance, enhanced development tools, and increased hardware support, along with related technology such as OpenCL and new material on algorithm patterns, GPU clusters, host programming, and data parallelism.

Compared to previous offerings of this course, we had two new case studies (on MRI reconstruction and molecular visualization) that explored the latest applications of CUDA and GPUs for scientific research and high-performance computing.
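To give a flavor of the parallel patterns covered, here is a CPU-side C++ sketch of the reduction pattern (on a GPU this would be a CUDA kernel with a tree-shaped combine); the thread count and array size are illustrative:

```cpp
// CPU sketch of the parallel reduction pattern: each thread sums its own
// chunk of the array into a private slot, then the partial sums are combined.
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    const int n = 1 << 20, kThreads = 4;
    std::vector<double> data(n, 1.0);
    std::vector<double> partial(kThreads, 0.0);

    auto sum_chunk = [&](int t) {
        int chunk = n / kThreads;
        int lo = t * chunk, hi = (t == kThreads - 1) ? n : lo + chunk;
        double s = 0.0;
        for (int i = lo; i < hi; ++i) s += data[i];
        partial[t] = s;                // one slot per thread: no locking needed
    };

    std::vector<std::thread> threads;
    for (int t = 0; t < kThreads; ++t) threads.emplace_back(sum_chunk, t);
    for (auto& th : threads) th.join();

    double total = 0.0;
    for (double p : partial) total += p;
    std::printf("sum = %.0f (expected %d)\n", total, n);
    return 0;
}
```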

Teaching

I was fortunate to be a teaching assistant for the following Computer Science courses at the University Of Utah.

Automata and Computability

This course was offered by my advisor, Dr. Ganesh Gopalakrishnan, and here is its Course Page.

This course covers fundamental models of computation.

It starts with the theory of syntax processing, takes us through various grammar and machine constructions, and ends with an exploration of the theoretical limits of computing.

The approach is fairly unusual, and is based on a Programmer's Perspective.


Structured Parallel Programming: Patterns for Efficient Computation

This course will be offered by my advisor, Dr. Ganesh Gopalakrishnan.

Programming is now parallel programming. Much as structured programming revolutionized traditional serial programming decades ago, a new kind of structured programming, based on patterns, is relevant to parallel programming today.

Dr. Ganesh describes how to design and implement maintainable and efficient parallel algorithms using a pattern-based approach. The course presents both theory and practice, and gives detailed, concrete examples using multiple programming models. Examples are primarily given in two of the most popular and cutting-edge programming models for parallel programming: Threading Building Blocks and Cilk Plus.

These architecture-independent models enable easy integration into existing applications, preserve investments in existing code, and speed the development of parallel applications. The course also introduces programming massively parallel processors in CUDA C++.
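As a small taste of the pattern-based style with Threading Building Blocks, here is a sketch of the map pattern using tbb::parallel_for (assumes TBB is installed; link with -ltbb; the array contents are illustrative):

```cpp
// Map pattern with TBB: apply the same independent operation to every
// element; TBB decides how the range is split across worker threads.
#include <cstdio>
#include <vector>
#include <tbb/blocked_range.h>
#include <tbb/parallel_for.h>

int main() {
    std::vector<double> v(1000000, 2.0);

    tbb::parallel_for(tbb::blocked_range<size_t>(0, v.size()),
                      [&](const tbb::blocked_range<size_t>& r) {
                          for (size_t i = r.begin(); i != r.end(); ++i)
                              v[i] = v[i] * v[i];   // independent per-element work
                      });

    std::printf("v[0] = %.1f\n", v[0]);
    return 0;
}
```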

About how a school might kill a student's creativity

And about how to escape education's Death Valley