Day 1: June 28th
8:15am-8:45am Breakfast
8:45am-9:10am Opening remarks
9:10am-10:10am Keynote 1: Reasoning Myths about Language Models: What is Next?
Speaker: Dan Roth, VP/Distinguished Scientist, AWS AI Labs, and Eduardo D. Glandt Distinguished Professor, UPenn
Session Chair: Deming Chen, UIUC
10:10am-10:30am Coffee Break
10:30am-Noon Best Paper Nominees, 15 min talks (12 min + 3 min Q&A)
Session Chair: Jose Renau, UCSC
- AMSNet: Netlist Dataset for AMS Circuits (Zhuofu Tao, Yichen Shi, Yiru Huo, Rui Ye, Zonghang Li, Li Huang, Chen Wu, Na Bai, Zhiping Yu, Ting-Jung Lin, Lei He)
- VerilogReader: LLM-Aided Hardware Test Generation (Ruiyang Ma, Yuxin Yang, Ziqian Liu, Jiaxi Zhang, Min Li, Junhua Huang, Guojie Luo)
- RTLCoder: Outperforming GPT-3.5 in Design RTL Generation with Our Open-Source Dataset and Lightweight Solution (Shang Liu, Wenji Fang, Yao Lu, Qijun Zhang, Hongce Zhang, Zhiyao Xie)
- EDA Corpus: A Large Language Model Dataset for Enhanced Interaction with OpenROAD (Bing-Yue Wu, Utsav Sharma, Sai Rahul Dhanvi Kankipati, Ajay Yadav, Bintu Kappil George, Sai Ritish Guntupalli, Austin Rovinski, Vidya Chhabria)
- MG-Verilog: Multi-grained Dataset Towards Enhanced LLM-assisted Verilog Generation (Yongan Zhang, Zhongzhi Yu, Yonggan Fu, Cheng Wan, Yingyan Celine Lin)
- Large Language Model (LLM) for Standard Cell Layout Design Optimization (Chia-Tung Ho, Haoxing Ren)
Noon-1:00pm Lunch
1:00pm-2:30pm Invited Session 1
Session Chair: Vidya Chhabria, ASU
- 1:00-1:30: Scaling Intelligence
Speaker: Azalia Mirhoseini, Stanford University
- 1:30-2:00: Deep Reinforcement Learning in the Real World: From Chip Design to LLMs
Speaker: Anna Goldie, Stanford/Google DeepMind
- 2:00-2:30: Recent Techniques to Improve LLM Performance on "Hard" Domains like Coding and Mathematics
Speaker: Mo Tiwari, OpenAI
2:30pm-3:30pm Poster Session (for all talks on Day 1), WIPs, Coffee.
(WIP posters are welcome in any of the poster sessions)
3:30pm-5:00pm LLMs for RTL, 13 min talks (10 min + 3 min Q&A)
Session Chair: Jeyavijayan Rajendran, TAMU
- A Multi-Expert Large Language Model Architecture for Verilog Code Generation (Bardia Nadimi, Hao Zheng)
- VHDL-Eval: A Framework for Evaluating Large Language Models in VHDL Code Generation (Prashanth Vijayaraghavan, Luyao Shi, Stefano Ambrogio, Charles Mackin, Apoorva Nitsure, David Beymer, Ehsan Degan)
- Toward Hardware Security Benchmarking of LLMs (Raheel Afsharmazayejani, Mohammad Moradi Shahmiri, Parker Link, Hammond Pearce, Benjamin Tan)
- VGV: Verilog Generation using Visual Capabilities of Multi-Modal Large Language Models (SamZaak Wong, Gwok-Waa Wan, Dongping Liu, Xi Wang)
- HDLEval: Benchmarking LLMs for Multiple HDLs (Mark Zakharov, Farzaneh Rabiei Kashanaki, Jose Renau)
- RTL-Repo: A Benchmark for Evaluating LLMs on Large-Scale RTL Design Projects (Ahmed Allam, Mohamed Shalan)
- From Bugs to Fixes: HDL Bug Identification and Patching using LLMs and RAG (Khushboo Qayyum, Muhammad Hassan, Sallar Ahmadi-Pour, Chandan Kumar Jha, Rolf Drechsler)
5:00pm-6:00pm Panel: Generative AI for semiconductors, from architecture to fab: A pipedream, or ready for prime time?
- Panelists: Akhilesh Kumar (Ansys), Ioannis Savidis (Drexel University), Mike Kazda (IBM), Srinivas Bodapati (Intel), Sid Dhodhi (Nvidia), Ivan Kissiov (Siemens EDA), Ilhami Torunoglu (Siemens EDA); Moderator: Leigh Anne Clevenger (Si2)
6:00pm-8:00pm Banquet
Day 2: June 29th
8:15am-8:45am Breakfast
8:45am-9:00am Best Paper Award Announcement
9:00am-10:00am Keynote 2: From Imitation to Discovery: A Case Study of AlphaGeometry
Speaker: Thang Luong, Senior Staff Research Scientist, Google DeepMind
Session Chair: Siddharth Garg, NYU
10:00am-10:30am Coffee Break
10:30am-Noon Novel Uses of LLMs in Design, 13 min talks (10 min + 3 min Q&A)
Session Chair: Austin Rovinski, NYU
- CreativEval: Evaluating Creativity of LLM-Based Hardware Code Generation (Matthew DeLorenzo, Vasudev Gohil, Jeyavijayan Rajendran)
- Assessing Economic Viability: A Comparative Analysis of Total Cost of Ownership for Domain-Adapted Large Language Models versus Open-Source Counterparts in Chip Design Coding Assistance (Amit Sharma, Teodor-Dumitru Ene, Kishor Kunal, Mingjie Liu, Haoxing Ren)
- FabSage: A Large Multimodal Model for IC Defect Detection, Analysis, and Knowledge Querying (Yuqi Jiang, Qian Jin, Xudong Lu, Qi Sun, Cheng Zhuo)
- Qiskit Code Assistant: Training LLMs for generating Quantum Computing Code (Nicolas Dupuis, Luca Buratti, Sanjay Vishwakarma, Aitana Viudes Forrat, David Kremer, Ismael Faro, Ruchir Puri, Juan Cruz-Benito)
- Evaluating Large Language Models for G-Code Debugging, Manipulation, and Comprehension (Anushrut Jignasu, Kelly O. Marshall, Baskar Ganapathysubramanian, Aditya Balu, Chinmay Hegde, Adarsh Krishnamurthy)
- Can Low-Rank Knowledge Distillation in LLMs be Useful for Microelectronic Reasoning? (Fin Amin, Nirjhor Rouf, Paul Franzon)
- Evaluating LLMs for Hardware Design and Test (Jason Blocklove, Siddharth Garg, Ramesh Karri, Hammond Pearce)
Noon-1:00pm Lunch
1:00pm-2:00pm Invited Session 2
Session Chair: Hammond Pearce, UNSW
- 1:00-1:30: Self-Improvement with Large Language Models
Speaker: Xinyun Chen, Google DeepMind
- 1:30-2:00: The Story of MLCommons: Lessons Learned and Future Visions for ML-Aided Design
Speaker: Vijay Janapa Reddi, Harvard University
2:00pm-3:00pm Poster Session (for all talks on Day 2), WIPs, Coffee.
(WIP posters are welcome in any of the poster sessions)
3:00pm-4:45pm From Software to Synthesis, 13 min talks (10 min + 3 min Q&A)
Session Chair: Benjamin Tan, University of Calgary
- Ask-EDA: A Design Assistant Empowered by LLM, Hybrid RAG and Abbreviation De-hallucination (Luyao Shi, Michael Kazda, Bradley Sears, Nick Shropshire, Ruchir Puri)
- C2HLSC: Can LLMs Bridge the Software-to-Hardware Design Gap? (Luca Collini, Siddharth Garg, Ramesh Karri)
- Novel Preprocessing Technique for Data Embedding in Engineering Code Generation Using Large Language Model (Yu-Chen Lin, Akhilesh Kumar, Norman Chang, Wenliang Zhang, Muhammad Zakir, Rucha Apte, Haiyang He, Chao Wang, Jyh-Shing Roger Jang)
- LLM-Aided Compilation for Tensor Accelerators (Charles Hong, Sahil Bhatia, Altan Haan, Shengjun Kris Dong, Dima Nikiforov, Alvin Cheung, Sophia Shao)
- LLM-aided explanations of EDA synthesis errors (Siyu Qiu, Benjamin Tan, Hammond Pearce)
- CircuitSynth: Leveraging Large Language Models for Circuit Topology Synthesis (Prashanth Vijayaraghavan, Luyao Shi, Ehsan Degan, Xin Zhang)
- An Iteratively-refined Dataset for High-Level Synthesis Functional Verification through LLM-Aided Bug Injection (Lily Jiaxin Wan, Hanchen Ye, Jinghua Wang, Manvi Jha, Deming Chen)
- Optimizing High-Level Synthesis Designs with Retrieval-Augmented Large Language Models (Haocheng Xu, Haotian Hu, Sitao Huang)
4:45pm-5:00pm Concluding Remarks