CONIX Publication

End-to-End QoR Predictive Model for Efficient Logic Synthesis Optimization

Authors: Minwoo Kang, John Wawrzynek, Sehoon Kim, Jingyi Xu, Borivoje Nikolic, Kurt Keutzer, Alan Mishchenko


The recent surge of interest in applying machine learning (ML) algorithms to electronic design automation (EDA), including logic synthesis, has demonstrated the potential to significantly improve both design efficiency and quality of results (QoR). However, existing learning-based methods for logic synthesis require significant amounts of online data collection, which hinders their practical integration into EDA toolflows. Furthermore, due to their lack of transfer-learning capabilities, the majority of previous ML-based frameworks do not extend to new circuit designs without re-training the entire model. This work presents an end-to-end post-synthesis QoR predictive model that achieves few-shot domain adaptation to circuits and synthesis recipes not seen during training. Once trained, our model can be utilized as an efficient cost estimator for black-box optimization and search algorithms to obtain optimized logic synthesis recipes tailored to each RTL design. The core of our model is a graph neural network (GNN) module that embeds the gate-level connectivity of the circuit netlist, followed by a recurrent neural network (RNN) module that adds information about the sequence of synthesis transformations applied to the circuit. The model is trained on a dataset of around 30K synthesis results for each circuit, labeled with ground-truth post-synthesis QoRs from industrial tools. Experimental results demonstrate that our model accurately predicts critical path delay and LUT count of FPGA-mapped netlists, even for synthesis recipes of varying lengths not seen during training. We also show that with fine-tuning, our model easily generalizes to entirely new circuits outside the training dataset and achieves high prediction accuracy.
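The GNN-then-RNN pipeline described above can be sketched at a high level with forward passes in plain NumPy. This is a minimal illustration only, assuming a toy netlist and randomly initialized weights: the dimensions, transform names, and aggregation scheme are placeholders, not the paper's actual architecture or hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy gate-level netlist: 5 gates, directed fanin -> fanout edges (illustrative).
num_nodes, feat_dim, hid_dim = 5, 8, 16
edges = [(0, 2), (1, 2), (2, 3), (1, 4), (3, 4)]
x = rng.normal(size=(num_nodes, feat_dim))           # per-gate input features

# --- GNN module: two rounds of mean-aggregation message passing ---
A = np.zeros((num_nodes, num_nodes))
for src, dst in edges:
    A[dst, src] = 1.0                                # each gate aggregates from its fanins
deg = np.maximum(A.sum(axis=1, keepdims=True), 1.0)  # avoid divide-by-zero for inputs
W1 = rng.normal(size=(feat_dim, hid_dim)) * 0.1
W2 = rng.normal(size=(hid_dim, hid_dim)) * 0.1
h = np.tanh((A @ x / deg) @ W1)
h = np.tanh((A @ h / deg) @ W2)
g = h.mean(axis=0)                                   # pooled graph-level embedding

# --- RNN module: fold in the synthesis recipe, one transformation per step ---
transforms = ["balance", "rewrite", "refactor", "resub"]  # hypothetical transform vocabulary
recipe = ["balance", "rewrite", "balance", "resub"]       # example recipe
Wx = rng.normal(size=(len(transforms), hid_dim)) * 0.1
Wh = rng.normal(size=(hid_dim, hid_dim)) * 0.1
s = g.copy()                                         # initialize RNN state from the GNN embedding
for t in recipe:
    onehot = np.eye(len(transforms))[transforms.index(t)]
    s = np.tanh(onehot @ Wx + s @ Wh)

# --- Regression head: predict the two QoR metrics named in the abstract ---
Wo = rng.normal(size=(hid_dim, 2)) * 0.1
delay_pred, lut_pred = s @ Wo                        # [critical-path delay, LUT count]
```

Because the predictor runs in a single forward pass, a black-box search over recipes only needs to re-run the cheap RNN portion for each candidate recipe, reusing the one-time GNN embedding of the netlist.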

Release Date: 01/01/2023