Formal Methods for Safety-Critical System Design and Validation:
Formal methods use rigorous mathematical and theoretical models to specify software and hardware systems, and formal techniques to validate and certify their design behavior, ensuring correctness, power management, reliability, security, quality, and robustness. We primarily aim to design automated frameworks for the following aspects:
(a) Functional correctness of integrated circuit designs during various phases (both pre-silicon and post-silicon), leveraging simulation, formal, and semi-formal techniques
(b) Comprehensive verification of the power management strategy for integrated circuits involving the digital power manager orchestrating the analog power control circuitry
(c) Reliability and robustness analysis of safety-critical systems from formal specifications of spatial and temporal redundancy artefacts, properties of the error-free system, error probabilities of the control components, and reliability targets
Computer-Aided Design (CAD) for Security:
VLSI designs and integrated circuits are manufactured using several CAD frameworks that solve many challenging algorithmic problems, including design synthesis, floor-planning, placement, and routing, to deliver the final chip. We primarily focus on the following aspects:
(a) Vulnerability assessment of cryptographic primitives in secure cipher designs with respect to fault attacks and synthesis of automatic countermeasures to mitigate security flaws in cipher algorithms as well as their hardware or software implementations
(b) Developing strong physically unclonable function (PUF) designs and their compositions that are resilient to machine learning attacks, leveraging CAD-based testability and formal learnability analysis frameworks to justify the responsiveness, unclonability, reliability, correlation, and learnability factors of PUFs
Robust Explainability of Machine Learning Models:
Explainability is becoming one of the key needs for widespread adoption of ML approaches in safety-critical applications. We primarily aim to explore and devise approaches for comprehensive interpretability of trained ML models, and to ensure their robustness so that they cannot be fooled or manipulated by adversarial/malicious perturbations.
Principal Investigator
- An Integrated Hierarchical Debugging Framework for XLS Tool-suite (Google Asia Pacific Pte Ltd.)
- Automation and Setting Up of a Reference Lab for Validating Cryptographic Algorithms/Modules as per ISO/IEC Standards (Ministry of Electronics and Information Technology)
- Integrating Verification and Coverage Management Framework with XLS Tool-suite (Google Asia Pacific Pte Ltd.)
- Tackling Uncertainties on Multi-Agent RL through Integration of Agent Termination Dynamics (Google Asia Pacific Pte Ltd.)
- The Qualcomm Faculty Award (QFA) 2021 (Qualcomm Technologies, Inc.)
Co-Principal Investigator
- Exploring Human and Machine Cognition: Building a Learning Analytics Portal of AI Assisted Problem Solving Models for Visuo-Spatial Tasks (1817_ZBSA) (Department of Science and Technology (DST))
- Information Security Education and Awareness (ISEA) Project Phase III (Ministry of Electronics and Information Technology)
- ML and RL based Predictive Modeling Tool for Spatio-Temporal Temperature Map on a Multi-core CPU under Dynamic Workloads (Google Asia Pacific Pte Ltd.)
Ph.D. Students
Das Adhikary Debjyoti Tushar
Area of Research: Artificial Intelligence
Madhumanti Bhattacharyya
Area of Research: Cognitive Neuropsychology
Mithun Kumar Mahto
Area of Research: Verification of ML Systems
Sourav Das
Area of Research: Design Verification
Subhankar Jana
Area of Research: Formal Methods in Machine Learning