Condensa: A Programming System for Model Compression
Condensa is a framework for programmable model compression in Python, developed in collaboration with NVIDIA Research. It ships with a set of built-in compression operators that can be composed into complex compression schemes targeting specific combinations of DNN architecture, hardware platform, and optimization objective. To recover accuracy lost during compression, Condensa formulates model compression as a constrained optimization problem and employs an Augmented Lagrangian-based algorithm as the optimizer.
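The idea of composing simple operators into a full compression scheme can be sketched in plain Python. The `prune`, `quantize`, and `compose` functions below are illustrative stand-ins, not Condensa's actual API:

```python
# Hypothetical sketch of composable compression operators in the spirit of
# Condensa's built-in schemes; names and signatures are illustrative only.

def prune(weights, threshold):
    """Magnitude pruning: zero out weights with magnitude below the threshold."""
    return [0.0 if abs(w) < threshold else w for w in weights]

def quantize(weights, step):
    """Uniform quantization: snap each weight to a fixed grid of spacing `step`."""
    return [round(w / step) * step for w in weights]

def compose(*operators):
    """Chain operators into a single compression scheme applied in order."""
    def scheme(weights):
        for op in operators:
            weights = op(weights)
        return weights
    return scheme

# Build a joint prune-then-quantize scheme and apply it to toy weights.
scheme = compose(lambda w: prune(w, 0.05), lambda w: quantize(w, 0.25))
print(scheme([0.03, -0.6, 0.9, -0.02, 0.4]))  # [0.0, -0.5, 1.0, 0.0, 0.5]
```

Because each operator maps weights to weights, schemes compose freely, which is the property that lets one scheme target a specific architecture/hardware/objective combination.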
Madonna: A Framework for Measurements and Assistance in Designing Low-Power Deep Neural Networks
This project presents Madonna v1.0, a step toward an automated co-design approach across numerical precision for optimizing DNN hardware accelerators. Compared to a 64-bit floating-point accelerator baseline, we show that 32-bit floating-point accelerators reduce energy by 1.5x and improve training time by 1.22x, with an observable improvement in inference as well. Across three datasets, these power and energy measurements show an average 0.5W power reduction and a 2x energy reduction over the accelerator baseline, with almost no loss in DNN model accuracy.
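One intuition behind these savings is simple: halving the bit width halves weight storage, and hence the memory traffic that dominates accelerator energy. A back-of-the-envelope sketch (the layer sizes here are made up for illustration, not Madonna's benchmarks):

```python
# Storage cost of model weights at fp64 vs fp32 precision.
# Layer sizes below are illustrative, not from the Madonna experiments.

layer_params = [784 * 256, 256 * 128, 128 * 10]  # parameter counts per layer

bytes_fp64 = sum(n * 8 for n in layer_params)  # 8 bytes per fp64 value
bytes_fp32 = sum(n * 4 for n in layer_params)  # 4 bytes per fp32 value

print(bytes_fp64 / bytes_fp32)  # storage ratio: exactly 2.0
```

Actual energy savings depend on the accelerator's datapath and memory hierarchy, which is why Madonna measures power and energy directly rather than relying on such estimates.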
Madonna enables accurate, low-power DNN accelerators, making it feasible to deploy DNNs on power-constrained IoT and mobile devices.
Cinchona: Using Condensa Compressed DNNs for Malaria Detection
Details coming soon; please check back. If you would like access to pre-releases, feel free to email.
"There is a computer disease that anybody who works with computers knows about. It's a very serious disease and it interferes completely with the work. The trouble with computers is that you 'play' with them!" -- Richard P. Feynman
GANesha: An Experimentation framework for using GANs for Generating Interesting Images
10,800 images of Lord Ganesha:
- Sourced from Google Images queries using 108 "alternative" names for Ganesha
- Pixel values thresholded to black/white
- 150 x 150 images
- All configurable
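The black/white thresholding step can be sketched with a plain nested list standing in for image pixel data. The cutoff of 128 below is an assumption for illustration, not the project's actual setting:

```python
# Sketch of black/white thresholding on grayscale pixel data.
# The cutoff value 128 is an assumed default, not GANesha's actual setting.

def threshold_bw(pixels, cutoff=128):
    """Map each grayscale pixel (0-255) to pure black (0) or white (255)."""
    return [[255 if p >= cutoff else 0 for p in row] for row in pixels]

image = [[12, 200, 130],
         [255, 90, 127]]
print(threshold_bw(image))  # [[0, 255, 255], [255, 0, 0]]
```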
VULFI - An LLVM based Fault Injection Framework
VULFI is an open-source instruction-level fault injection framework built on the LLVM compiler infrastructure. It performs fault injection at the level of LLVM's intermediate representation (IR) and currently supports C, C++, ISPC, OpenCL, and MPI-C. Note that to target an OpenCL program for fault injection with VULFI, the program must be compiled with Intel's OpenCL compiler.
Conceptually, VULFI can target any high-level language that can be compiled to LLVM IR. We intend to keep VULFI up to date with the latest LLVM development branch while maintaining reasonable support for older versions of LLVM.
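A common fault model for instruction-level injectors like VULFI is a single bit flip in a value produced by an instruction. The following is a standalone conceptual sketch of that fault model, not VULFI code:

```python
# Conceptual illustration of the single-bit-flip fault model used by
# instruction-level fault injectors: corrupt one bit of a computed value.
# This is a standalone sketch, not part of VULFI's implementation.

def flip_bit(value, bit, width=32):
    """Flip one bit of a `width`-bit integer, viewed in two's complement."""
    mask = (1 << width) - 1
    flipped = (value ^ (1 << bit)) & mask
    # Re-interpret as signed if the sign bit is now set.
    if flipped >= 1 << (width - 1):
        flipped -= 1 << width
    return flipped

print(flip_bit(5, 1))   # 0b101 -> 0b111 = 7 (low-bit data corruption)
print(flip_bit(5, 31))  # sign-bit flip: value becomes large and negative
```

In a real injector, the framework instruments the IR so that a chosen instruction's result passes through a corruption routine like this at runtime, letting researchers study how the fault propagates through the program.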
I was very fortunate to be mentored by Dr. Vishal Sharma. VULFI was part of his Ph.D. thesis and is an extension of KULFI, an LLVM instruction-level fault injector (https://github.com/soarlab/KULFI).