Introduction

What is Generative AI?

AI’s interpretation of my query and its “cringe” response

What makes a good prompt?

Setting up our review prompt

"1_Goal": "You are an expert FPGA design reviewer tasked with analyzing
HDL source code (Verilog/VHDL) and synthesis/implementation reports.
Your goal is to identify performance, area, timing, and power optimization
opportunities; detect design-quality issues and potential bugs;
and summarize insights in a clear, prioritized format that helps
a hardware engineer improve design closure, maintainability, and
efficiency. All specific findings should clearly point to the module
or report they reference by name and, if possible, line number or
section number. The report should be in markdown.",
"2_Context": {
  "Files_Provided": {
  	"Source": ["*.v", "*.sv", "*.vhd"],
  	"Synthesis": ["*.vds"],
  	"Implementation": ["*.vdi", "*.rpt"],
  	"Constraints": ["*.xdc"],
  	"Baseline": ["previous_run_metrics.json (optional)"]
  },
  "Key_Findings_Categories": [
  	"1. RTL Quality",
  	"2. Structural Design",
  	"3. Resource Inference",
  	"4. Clocking and Reset",
  	"5. CDC and Timing Safety",
  	"6. Synthesis Utilization and QoR",
  	"7. Implementation Timing and Physical",
  	"8. Power and Thermal",
  	"9. Constraint Consistency",
  	"10. Readability and Maintainability"
  ],
  "Quantitative_Metrics": [
  	"Total Modules Analyzed",
  	"Average Lines per Module",
  	"Latch Count",
  	"Clock Domain Count",
  	"Reset Nets Count",
  	"CDC Signal Count",
  	"Combinational Loop Count",
  	"Signal Fanout Max",
  	"Estimated Pipeline Depth",
  	"LUT Utilization Percent",
  	"FF Utilization Percent",
  	"DSP Inference Rate",
  	"BRAM Inference Rate",
  	"WNS (ns)",
  	"TNS (ns)",
  	"Power Dynamic (mW)",
  	"Power Static (mW)"
  ],
  "Analysis_Categories": [
  	"RTL Quality & Inference: latch inference, FSM encoding, CDC safety, unused logic, resource mapping quality (DSP, BRAM, SRL).",
  	"Synthesis Metrics: area, LUT/FF/DSP/BRAM utilization, high-fanout nets, register duplication, logic depth, resource efficiency.",
  	"Implementation QoR: timing (WNS, TNS), unconstrained paths, congestion, placement/floorplanning issues, inter-SLR crossings.",
  	"Power & Thermal: dynamic vs static power, toggling hotspots, clock gating opportunities.",
  	"Constraint Consistency: missing or conflicting constraints, undefined clocks, wildcards.",
  	"QoR Trends: compare timing, area, or power changes vs. baseline build."
  ]
},
"3_Format": {
  "Output_Structure": [
	"### FPGA AI Review Summary",
	"",
	"**1. Overview**",
	"Brief summary of overall design health (e.g., 'Design meets timing but shows poor BRAM inference and missing clock constraints.').",
	"",
	"**2. Quantitative Analysis (use these exact categories)**",
	"| Metric | Value | Expected Range | Status |",
	"|--------|--------|----------------|--------|",
	"| Total Modules Analyzed | X | — | ✅ |",
	"| Latch Count | X | 0 | ⚠️ |",
	"| Clock Domain Count | X | ≤3 | ✅ |",
	"| Signal Fanout Max | X | ≤64 | ⚠️ |",
	"| DSP Inference Rate | X% | ≥80% | ✅ |",
	"| BRAM Inference Rate | X% | ≥70% | ⚠️ |",
	"| LUT Utilization Percent | X% | <80% | ⚠️ |",
	"| FF Utilization Percent | X% | <80% | ✅ |",
	"| WNS (ns) | X | ≥0 | ⚠️ |",
	"| Power Dynamic (mW) | X | <target | ✅ |",
	"",
	"**3. Key Findings (Use these exact categories. You MUST be thorough and report on every single category, even if no issues exist.)**",
	"- [RTL Quality]",
	"      - Issue / Impact / Suggested Action",
	"      - Issue / Impact / Suggested Action",
	"      ...",
	"- [Structural Design]",
	"      - Issue / Impact / Suggested Action",
	"      - Issue / Impact / Suggested Action",
	"      ...",
	"- [Structural Design]",
	"      - Issue / Impact / Suggested Action",
	"      - Issue / Impact / Suggested Action",
	"      ...",
	"- [Resource Inference]",
	"      - Issue / Impact / Suggested Action",
	"      - Issue / Impact / Suggested Action",
	"      ...",
	"- [Clocking and Reset]",
	"      - Issue / Impact / Suggested Action",
	"      - Issue / Impact / Suggested Action",
	"      ...",
	"- [CDC and Timing Safety]",
	"      - Issue / Impact / Suggested Action",
	"      - Issue / Impact / Suggested Action",
	"      ...",
	"- [Synthesis Utilization and QoR]",
	"      - Issue / Impact / Suggested Action",
	"      - Issue / Impact / Suggested Action",
	"      ...",
	"- [Implementation Timing and Physical]",
	"      - Issue / Impact / Suggested Action",
	"      - Issue / Impact / Suggested Action",
	"      ...",
	"- [Power and Thermal]",
	"      - Issue / Impact / Suggested Action",
	"      - Issue / Impact / Suggested Action",
	"      ...",
	"- [Constraint Consistency]",
	"      - Issue / Impact / Suggested Action",
	"      - Issue / Impact / Suggested Action",
	"      ...",
	"- [Readability and Maintainability]",
	"      - Issue / Impact / Suggested Action",
	"      - Issue / Impact / Suggested Action",
	"      ...",
	"",
	"**4. AI Insights and Recommendations**",
	"Summarize root causes, potential refactors, or optimization strategies (e.g., pipeline long paths, restructure FSMs, or apply clock gating).",
	"",
	"**5. QoR Trend Summary (if baseline data available)**",
	"Summarize improvements or regressions vs. previous runs in timing, power, or area metrics."
  ]
},
"4_Tone": "Professional, technical, and advisory. Use a structured,
concise style similar to an experienced FPGA design review.
Be factual, quantify metrics where possible, and focus on
actionable improvements. Highlight critical issues with ⚠️
or 🚨 symbols, and acknowledge strengths when observed.",
"5_Request_for_Clarifying_Questions": {
  "Instruction": "If essential context is missing (e.g., device family, clock target, or expected power range), ask up to three concise clarifying questions before finalizing the review.",
  "Example_Questions": [
	"What is the target FPGA device and family?",
	"What is the primary system clock period or frequency constraint?",
	"Were realistic toggle/activity files used for power analysis?"
  ]
}
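
Note that the automation script shown later loads this file with json.load, so the five numbered sections above are assumed to sit inside a single top-level JSON object in fpga_review_prompt.json. A minimal sketch of the file layout, with the section contents abbreviated:

{
  "1_Goal": "You are an expert FPGA design reviewer ...",
  "2_Context": {
    "Files_Provided": {},
    "Key_Findings_Categories": [],
    "Quantitative_Metrics": [],
    "Analysis_Categories": []
  },
  "3_Format": { "Output_Structure": [] },
  "4_Tone": "Professional, technical, and advisory ...",
  "5_Request_for_Clarifying_Questions": { "Instruction": "...", "Example_Questions": [] }
}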

Verbalized Sampling

Image sourced from Verbalized Sampling: How To Mitigate Mode Collapse and Unlock LLM Diversity, Sec 5.1, pg. 9
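
In short, rather than asking for one answer, verbalized sampling asks the model to return several candidate answers in a single response, each annotated with the probability the model assigns to it, which helps counteract mode collapse and recover output diversity. The script below applies this by appending an extra instruction whenever --vs is greater than 1. As an illustrative sketch (not actual model output), a reply to that instruction with --vs 3 would be expected to look roughly like:

### FPGA AI Review Summary
(first candidate review)
VS=0.5 PROBABILITY

### FPGA AI Review Summary
(second candidate review)
VS=0.3 PROBABILITY

### FPGA AI Review Summary
(third candidate review)
VS=0.2 PROBABILITY

The trailing VS=<probability> PROBABILITY markers are what the script splits on to write each candidate out as its own report.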

Utilizing Gemini API for Automation

import json
import os
import argparse
import re
import subprocess
from datetime import datetime
from google import genai

# Parse command-line arguments
def parse_arguments():
    parser = argparse.ArgumentParser(description='FPGA Design Review Assistant')
    parser.add_argument('--rtl-dir', default='rtl/',
                      help='Directory containing RTL files')
    parser.add_argument('--xdc-dir', default='xdc/',
                      help='Directory containing XDC files')
    parser.add_argument('--reports-dir', default='reports/',
                      help='Directory containing report files')
    parser.add_argument('--output-dir', default='ai-reports/',
                      help='Output directory for markdown files')
    parser.add_argument('--prompt-file', default='fpga_review_prompt.json',
                      help='JSON file containing LLM prompt instructions')
    parser.add_argument('--model', default='gemini-2.5-flash',
                      help='GenAI model to use for generation')
    parser.add_argument('--vs', type=int, default=1,
                      help='Verbalized sampling factor')
    parser.add_argument('--pdf', action='store_true',
                      help='Also generate PDF copies of the reports using md2pdf (if available)')
    parser.add_argument('--name', default=os.getenv('USER', 'Unknown'),
                      help='Name to prepend to the report')
    parser.add_argument('--date', default=datetime.now().strftime('%Y-%m-%d'),
                      help='Date to prepend to the report')
    parser.add_argument('--title', default='FPGA Design Review',
                      help='Title to prepend to the report')
    parser.add_argument('--file-name', default='fpga_review_report',
                      help='Name (prefix) for output filenames (default: fpga_review_report)')
    return parser.parse_args()

# Function to get all files in a directory and its subdirectories
def get_all_files_subdir(directory_path):
    all_files = []
    for root, _, files in os.walk(directory_path):
        for file in files:
            all_files.append(os.path.join(root, file))
    return all_files


def generate_pdf(md_path):
    """Try to convert a markdown file to PDF using md2pdf.

    Returns the pdf path on success or None on failure.
    """
    pdf_path = os.path.splitext(md_path)[0] + '.pdf'
    try:
        subprocess.run(["md2pdf", md_path, pdf_path], check=True)
        print(f"PDF generated: {pdf_path}")
        return pdf_path
    except FileNotFoundError:
        print("md2pdf not found on PATH; install it or provide an alternative to generate PDFs.")
    except subprocess.CalledProcessError as e:
        print(f"md2pdf failed: {e}")
    return None

# Main execution
if __name__ == "__main__":
    print("Starting FPGA Design Review Assistant...")
    args = parse_arguments()
    
    # The prompt is assembled as a list of content parts: file contents
    # first, then the review instructions appended at the end
    prompt = []

    # Read in files for the prompt
    file_list = get_all_files_subdir(args.rtl_dir)
    file_list += get_all_files_subdir(args.xdc_dir)
    file_list += get_all_files_subdir(args.reports_dir)

    print ("Files processed in query...")

    # Append contents of each file to the prompt
    for file in file_list:
        print ("\t" + file)
        with open(file, "r", encoding="utf-8") as f:
            content = f.read()
            prompt.append(content)

    if args.vs > 1:
        print(f"Generating {args.vs} reports using verbalized sampling...")
        prompt.append(f"Generate exactly {args.vs} responses with their corresponding probabilities and separate each response with a newline followed by \"VS=<probability> PROBABILITY\" so that we can delimit the reports into separate documents for the following prompt:")
    else:
        print("Generating single report...")

    # Append the LLM prompt instructions
    with open(args.prompt_file, "r", encoding="utf-8") as file:
        prompt_data = json.load(file)

    json_str = json.dumps(prompt_data, ensure_ascii=False)

    prompt.append(json_str)

    # Query the model
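    # genai.Client() picks up API credentials from the environment when none
    # are passed explicitly (e.g. a GEMINI_API_KEY environment variable).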
    client = genai.Client()

    response = client.models.generate_content(
        model=args.model,
        contents=prompt
    )

    # Create output directory if it doesn't exist
    os.makedirs(args.output_dir, exist_ok=True)

    # Handle output based on verbalized sampling factor
    # Prepare a small metadata header to prepend to each markdown report
    header = f"<div align=\"center\">\n\n# {args.title}\n\n{args.name}\n\n{args.date}\n\n---\n\n</div>\n\n"

    if args.vs > 1:
        # Output multiple reports. The model was instructed to end each response
        # with a "VS=<probability> PROBABILITY" marker, so split on the delimiter
        # and extract the trailing probability from each segment.
        raw_segments = response.text.split("PROBABILITY")
        # Iterate through non-empty segments
        non_empty = [s for s in raw_segments if s.strip()]
        for i, seg in enumerate(non_empty, 1):
            seg = seg.rstrip()
            # Try to extract the trailing probability (e.g. "VS=0.15");
            # str.find returns -1 when the marker is absent, so test explicitly.
            m = seg.find("VS=")
            if m != -1:
                prob_str = seg[m + 3:].strip()
                # Strip the probability marker from the report body
                content_body = seg[:m].rstrip()
            else:
                prob_str = 'unknown'
                content_body = seg

            filename = f"{args.file_name}_{i}_{prob_str}.md"
            output_path = os.path.join(args.output_dir, filename)
            with open(output_path, "w", encoding="utf-8") as report_file:
                report_file.write(header + content_body.strip() + "\n")
            print(f"Report {i} saved to: {output_path}")

            # Optionally generate PDF using md2pdf if requested
            if args.pdf:
                generate_pdf(output_path)
    else:
        # Single report case
        output_filename = f"{args.file_name}.md"
        output_path = os.path.join(args.output_dir, output_filename)
        with open(output_path, "w", encoding="utf-8") as report_file:
            report_file.write(header + response.text.strip() + "\n")
        print(f"Report saved to: {output_path}")

        if args.pdf:
            generate_pdf(output_path)
$ python3 fpga_review_assistant.py --help
Starting FPGA Design Review Assistant...
usage: fpga_review_assistant.py [-h] [--rtl-dir RTL_DIR] [--xdc-dir XDC_DIR] [--reports-dir REPORTS_DIR] [--output-dir OUTPUT_DIR] [--prompt-file PROMPT_FILE] [--model MODEL] [--vs VS] [--pdf] [--name NAME] [--date DATE] [--title TITLE] [--file-name FILE_NAME]

FPGA Design Review Assistant

options:
  -h, --help            show this help message and exit
  --rtl-dir RTL_DIR     Directory containing RTL files
  --xdc-dir XDC_DIR     Directory containing XDC files
  --reports-dir REPORTS_DIR
                        Directory containing report files
  --output-dir OUTPUT_DIR
                        Output directory for markdown files
  --prompt-file PROMPT_FILE
                        JSON file containing LLM prompt instructions
  --model MODEL         GenAI model to use for generation
  --vs VS               Verbalized sampling factor
  --pdf                 Also generate PDF copies of the reports using md2pdf (if available)
  --name NAME           Name to prepend to the report
  --date DATE           Date to prepend to the report
  --title TITLE         Title to prepend to the report
  --file-name FILE_NAME
                        Name (prefix) for output filenames (default: fpga_review_report)
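
For reference, a typical invocation over the default directory layout, asking for three verbalized-sampling reports plus PDF copies, might look like the following (paths, name, and title are placeholders, and the --pdf option assumes md2pdf is installed, e.g. via pip install md2pdf):

$ python3 fpga_review_assistant.py --rtl-dir rtl/ --xdc-dir xdc/ --reports-dir reports/ --vs 3 --pdf --name "Jane Doe" --title "FPGA Design Review"

This writes fpga_review_report_1_<probability>.md through fpga_review_report_3_<probability>.md (plus their PDFs) into ai-reports/, with the probabilities taken from the VS markers in the model's response.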

Distilling Vivado’s Artifacts

Comparing The Vibes

Vibe reviews* limited by TPM constraints! Ref

Report Analysis

Auto-generated report header

Overview

Overview section for the limited reports

Quantitative Analysis

Metrics for source code only (top left), limited reports (top right), and full reports (center bottom)

Key Findings

First reported issue in the key findings section of the limited report

AI Insights / Recommendations

AI’s primary recommendations for the limited report based on vibes

Quality of Results (QoR) Trends

Conclusion

Future Work
