HDLOpt is a comprehensive toolset for optimizing and analyzing hardware description language (HDL) code, currently focused on Verilog. It provides robust parsing, analysis, and optimization capabilities for HDL designs.
- Python 3.8+
- One of the following HDL simulators:
  - ModelSim
  - Icarus Verilog (iverilog)
- For netlist and resource usage analyses:
  - Yosys synthesis tool
  - OSS CAD Suite (recommended for Windows users)
  - Graphviz
- For timing and power analyses:
  - Vivado
- Clone the repository:

  ```shell
  git clone https://github.com/yourusername/hdlopt.git
  cd hdlopt
  ```

- Create and activate a virtual environment:

  ```shell
  # On Linux/MacOS
  python -m venv venv
  source venv/bin/activate

  # On Windows
  python -m venv venv
  venv\Scripts\activate
  ```

- Install the required packages:

  ```shell
  pip install -r requirements.txt
  ```
- Ensure ModelSim/Questa is installed and added to the system PATH
- Verify the installation:

  ```shell
  vlog -version
  vsim -version
  ```
- Install Icarus Verilog:

  ```shell
  # Ubuntu/Debian
  sudo apt-get install iverilog

  # MacOS
  brew install icarus-verilog

  # Windows: download the installer from http://bleyer.org/icarus/
  ```

- Verify the installation:

  ```shell
  iverilog -v
  ```
- Install Yosys:

  ```shell
  # Ubuntu/Debian
  sudo apt-get install yosys

  # MacOS
  brew install yosys

  # Windows: install the OSS CAD Suite from https://github.com/YosysHQ/oss-cad-suite-build
  ```

- Verify the installation:

  ```shell
  yosys --version
  ```
- Robust Verilog Parsing: Multi-mode parsing engine with native and PyVerilog backends
- Modular Pattern Matching: Flexible pattern matching for HDL code analysis
- JSON Serialization: Standardized format for storing and exchanging HDL component data
- Automated Testbench Generation: Intelligent testbench creation with configurable parameters
- Resource Usage Analysis: Design analysis using Yosys with detailed metrics
- Timing Analysis: Critical path and timing constraint analysis using Vivado
- Power Analysis: Dynamic and static power analysis with supply usage details
- Waveform Analysis: VCD waveform parsing and visualization with timing checks
- Schematic Generation: Gate-level schematic visualization using Yosys+Graphviz and Vivado
- Comprehensive Reporting: PDF report generation for all analysis results
- Recursive Analysis: Support for hierarchical designs and submodule analysis
- Experiment Management: Automated tracking and comparison of HDL design iterations
- Run Management: Unified runner interface for all analyses with experiment tracking
- Analysis Versioning: Version control and history tracking for HDL components
- Design Comparison: Detailed comparison between different design iterations
HDLOpt provides two parsing modes: native parsing and PyVerilog-based parsing. Here's how to use them:
```python
from hdlopt.scripts.parsing import VerilogParser, ParserMode

# Using the native parser (default)
parser = VerilogParser(mode=ParserMode.NATIVE)
modules = parser.parse_file("path/to/your/design.v")

# Using the PyVerilog parser (if PyVerilog is installed)
parser = VerilogParser(mode=ParserMode.PYVERILOG)
modules = parser.parse_file("path/to/your/design.v")
```
Once you've parsed a Verilog file, you can work with the module objects:
```python
for module in modules:
    # Access module properties
    print(f"Module name: {module.name}")
    print(f"Parameters: {module.parameters}")
    print(f"Inputs: {[signal.name for signal in module.inputs]}")
    print(f"Outputs: {[signal.name for signal in module.outputs]}")

    # Serialize to JSON
    module.serialize_to_json("output.json")
```
The parser produces standardized JSON output for each module. Here's an example:
```json
{
  "component_name": "complex_alu",
  "parameters": [
    {
      "name": "WIDTH",
      "value": "8",
      "description": "Data width parameter"
    }
  ],
  "inputs": [
    {
      "name": "clk",
      "type": "wire",
      "sign": "unsigned",
      "bit_width": "1",
      "comment": "System clock",
      "default_value": "1'b0"
    }
  ],
  "outputs": [...],
  "internals": [...],
  "mode": "sequential",
  "submodules": ["full_adder", "carry_lookahead"]
}
```
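Because the output is plain JSON, it can be consumed with nothing but the standard library. The sketch below (field names taken from the example above) rebuilds a quick interface summary from a serialized module:

```python
import json

# Parse a serialized module description (schema as in the example above)
data = json.loads("""
{
  "component_name": "complex_alu",
  "parameters": [{"name": "WIDTH", "value": "8", "description": "Data width parameter"}],
  "inputs": [{"name": "clk", "type": "wire", "sign": "unsigned",
              "bit_width": "1", "comment": "System clock", "default_value": "1'b0"}],
  "outputs": [],
  "submodules": ["full_adder", "carry_lookahead"]
}
""")

# Summarize the interface: parameter defaults and total input bit width
params = {p["name"]: p["value"] for p in data["parameters"]}
input_bits = sum(int(s["bit_width"]) for s in data["inputs"])
print(data["component_name"], params, input_bits)  # complex_alu {'WIDTH': '8'} 1
```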
The parsing system is built with a flexible, extensible architecture:
- Base Parser Interface: defined in `VerilogParserBase`
- Implementation Modes:
  - Native Parser: pure Python implementation using regex and state machines
  - PyVerilog Parser: wrapper around PyVerilog's parsing capabilities
- Core Components:
  - Signal Class: represents Verilog signals (inputs, outputs, wires, regs)
  - VerilogModule Class: represents the complete Verilog module structure
  - Pattern Matching System: flexible pattern matching for code analysis
The pattern matching system supports multiple strategies:
- String Matching: Exact string comparison
- Substring Matching: Partial string matching with optional count constraints
- Regex Matching: Regular expression-based pattern matching
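Conceptually, the three strategies reduce to the following checks. This is a plain-Python sketch of the idea, not the hdlopt API:

```python
import re

def string_match(pattern, text):
    # Exact string comparison
    return text == pattern

def substring_match(pattern, text, count=None):
    # Partial match, with an optional constraint on the occurrence count
    occurrences = text.count(pattern)
    return occurrences == count if count is not None else occurrences > 0

def regex_match(pattern, text):
    # Regular-expression search anywhere in the text
    return re.search(pattern, text) is not None

src = "assign sum = a + b; assign carry = a & b;"
print(string_match("assign", src))              # False: not an exact match
print(substring_match("assign", src, count=2))  # True: appears exactly twice
print(regex_match(r"assign\s+\w+\s*=", src))    # True
```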
The testbench generator creates comprehensive testbenches for Verilog modules:
```python
from hdlopt.scripts.testbench.core import TestbenchGenerator
from hdlopt.rules.base import Rule

# Create a generator with rules and constraints
generator = TestbenchGenerator(
    component_name="adder",
    rules=[AdderRule()],
    constraints=ConstraintConfig(
        param_constraints={"adder": lambda n: n <= 64},
        input_constraints={"adder": {"a": lambda r: r[0] >= 0}}
    ),
    timing=TimingConfig(
        clk_period=10,
        operation_delay=20
    )
)

# Generate testbenches
generator.generate(recursive=True)  # Also generates for submodules
```
Analyze HDL designs using the ResourceAnalyzer:

```python
from hdlopt.scripts.analysis.resource import ResourceAnalyzer, ResourceAnalysisConfig

# Configure analysis
config = ResourceAnalysisConfig(
    increment_rules={
        "adder": {
            "param_name": "WIDTH",
            "cell_type": "full_adder",
            "increment_per_param": 1,
            "base_value": 4
        }
    },
    recursive=True
)

# Create analyzer and run analysis
analyzer = ResourceAnalyzer("adder", config)
results = analyzer.analyze()
```

Analyze timing constraints and critical paths using Vivado:

```python
from hdlopt.scripts.analysis.timing import TimingAnalyzer, TimingConfig

# Configure timing analysis
config = TimingConfig(
    clk_period={"clk": 10, "clk_div2": 20},  # Clock periods in ns
    operation_delay=5,
    rule_delay={"adder": "wait(valid);"}
)

# Create analyzer and run analysis
analyzer = TimingAnalyzer("adder", config)
results = analyzer.analyze()
```

Analyze power consumption:

```python
from hdlopt.scripts.analysis.power import PowerAnalyzer, PowerConfig

# Configure power analysis
config = PowerConfig(
    temperature=85.0,  # Junction temperature
    process="typical",
    toggle_rate=0.5
)

# Create analyzer and run analysis
analyzer = PowerAnalyzer("adder", config)
results = analyzer.analyze()
```

Analyze VCD waveforms:

```python
from hdlopt.scripts.analysis.waveform import WaveformAnalyzer, WaveformConfig

# Configure waveform analysis
config = WaveformConfig(
    signals=["clk", "rst", "data_in", "data_out"],
    include_value_changes=True,
    include_timing_violations=True
)

# Create analyzer and analyze a VCD file
analyzer = WaveformAnalyzer("adder", config)
results = analyzer.analyze("simulation.vcd")
```

Generate gate-level schematics using either Yosys+Graphviz or Vivado:

```python
from hdlopt.scripts.analysis.schematic import (
    SchematicGenerator, SchematicConfig, SchematicTool, SchematicFormat
)

# Using Yosys + Graphviz
config = SchematicConfig(
    tool=SchematicTool.YOSYS,
    format=SchematicFormat.PNG,
    graph_attrs={'rankdir': 'LR'}
)

# Generate schematic
generator = SchematicGenerator("adder", config)
schematic_path = generator.generate()

# Using Vivado
vivado_config = SchematicConfig(
    tool=SchematicTool.VIVADO,
    format=SchematicFormat.PDF
)
vivado_gen = SchematicGenerator("adder", vivado_config)
vivado_path = vivado_gen.generate()
```
Execute generated testbenches:
```python
from hdlopt.scripts.testbench.runner import TestbenchRunner

# Create runner
runner = TestbenchRunner(
    simulator="modelsim",  # or "iverilog"
    timeout=300
)

# Run testbenches recursively
results = runner.run_recursive(
    component_name="adder",
    base_dir="generated",
    force_recompile=False
)

# Check results
for result in results:
    print(f"Component: {result.component_name}")
    print(f"Tests passed: {result.passed_tests}/{result.num_tests}")
```
The resource analyzer produces detailed metrics in JSON format:

```json
{
  "4": {  // WIDTH=4 configuration
    "test_module": {
      "wire_count": 10,
      "wire_bits": 32,
      "port_count": 5,
      "port_bits": 16,
      "cell_count": 3,
      "hierarchy_depth": 2,
      "cells": {
        "full_adder": 2,
        "half_adder": 1
      },
      "raw_gates": {
        "$_AND_": 4,
        "$_XOR_": 2
      },
      "sub_modules": {
        "half_adder": 1
      }
    }
  }
}
```

The timing analyzer produces timing path and constraint analysis data:

```json
{
  "timing_summary": {
    "wns": -2.5,  // Worst Negative Slack
    "tns": -10.3, // Total Negative Slack
    "whs": 0.5,   // Worst Hold Slack
    "failing_endpoints": 3,
    "total_endpoints": 100
  },
  "clock_summary": [{
    "name": "clk",
    "period": 10.0,
    "wns": -2.5,
    "tns": -5.2,
    "failing_endpoints": 2
  }],
  "path_groups": [...],
  "inter_clock": [...]
}
```

The power analyzer provides detailed power consumption data:

```json
{
  "summary": {
    "total_on_chip": 1.5,  // Watts
    "dynamic": 0.8,
    "static": 0.7,
    "effective_thetaja": 28.4,
    "junction_temp": 85.0
  },
  "on_chip_components": [
    {
      "name": "Clocking",
      "power": 0.2,
      "used": 1,
      "utilization": 25.0
    }
  ],
  "power_supply": [
    {
      "source": "Vccint",
      "voltage": 1.0,
      "total_current": 0.5,
      "dynamic_current": 0.3,
      "static_current": 0.2
    }
  ]
}
```

The waveform analyzer provides timing and signal analysis data:

```json
{
  "signals": {
    "clk": {
      "transitions": 1000,
      "toggle_rate": 0.5,
      "min_pulse_width": 4.2
    }
  },
  "timing_violations": [{
    "type": "setup",
    "time": 156.2,
    "slack": -0.5,
    "source": "reg1",
    "destination": "reg2"
  }],
  "glitches": [{
    "signal": "data",
    "time": 245.8,
    "width": 0.6
  }]
}
```
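For intuition, the per-signal metrics above can all be derived from a time-ordered list of value changes. The sketch below is a minimal stand-in for that computation (not the WaveformAnalyzer internals), treating toggle rate as transitions per time unit:

```python
def signal_stats(changes):
    """Compute transitions, toggle rate, and minimum pulse width
    from a time-ordered list of (time, value) value-change events."""
    transitions = len(changes) - 1
    pulse_widths = [t2 - t1 for (t1, _), (t2, _) in zip(changes, changes[1:])]
    span = changes[-1][0] - changes[0][0]
    return {
        "transitions": transitions,
        "toggle_rate": transitions / span,  # transitions per time unit
        "min_pulse_width": min(pulse_widths),
    }

# A clock that toggles every 5 ns, observed for 100 ns
clk = [(t, t // 5 % 2) for t in range(0, 105, 5)]
stats = signal_stats(clk)
print(stats)  # 20 transitions, toggle_rate 0.2 per ns, min pulse width 5 ns
```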
HDLOpt generates comprehensive PDF reports for all analysis types:
- Testbench Reports: include test configurations, pass/fail statistics, and detailed test case results
- Resource Reports: show resource utilization with:
  - Gate-level metrics
  - Hierarchy analysis
  - Cell usage statistics
  - Raw gate counts
- Timing Reports: include:
  - Setup/hold timing analysis
  - Clock domain summaries
  - Critical path details
  - Inter-clock transfers
- Power Reports: show:
  - On-chip power breakdown
  - Supply voltage/current analysis
  - Thermal metrics
  - Component-level power usage
- Waveform Reports: display:
  - Signal transition analysis
  - Timing violation details
  - Glitch detection
  - Clock domain analysis
- Schematic Reports: present:
  - Gate-level diagrams
  - Hierarchical views
  - Module interconnections
  - Signal flow visualization
HDLOpt provides a unified runner interface for executing all analyses and managing experiments:
```python
from hdlopt.runner import HDLAnalysisRunner, RunnerConfig, AnalysisType

# Configure runner
config = RunnerConfig(
    analyses=[AnalysisType.TESTBENCH, AnalysisType.TIMING],
    output_dir="generated",
    simulator="modelsim",
    recursive=True,
    experiment_name="adder_optimization",
    experiment_version="2.0",
    experiment_desc="Optimizing adder critical path",
    experiment_tags={"optimization": "timing", "target": "fpga"}
)

# Create and run
runner = HDLAnalysisRunner(config)
run_id = runner.run(["adder"])  # Returns the experiment run ID
```
```shell
# Run all analyses on a module
python -m hdlopt.runner analyze adder -a all

# Run specific analyses
python -m hdlopt.runner analyze counter -a testbench timing -n "Counter_Opt1"

# List all experiment runs
python -m hdlopt.runner list-runs

# Show specific run details
python -m hdlopt.runner show-run run_20240124_123456

# Compare two runs
python -m hdlopt.runner compare run_1 run_2

# Show component history
python -m hdlopt.runner history counter
```
HDLOpt automatically tracks and manages design iterations and analysis results:
```python
from hdlopt.scripts.experiment_manager import ExperimentManager, ExperimentConfig

# Configure experiment
config = ExperimentConfig(
    name="adder_optimization",
    version="2.0",
    description="Optimizing adder critical path",
    tags={"target": "fpga"}
)

# Create manager
manager = ExperimentManager(config)

# Track a new run
run_id = manager.start_run(
    components=["adder.v"],
    config={"param_WIDTH": 32}
)

# Update metrics
manager.update_metrics(run_id, {
    "timing_wns": -2.5,
    "power_total": 1.2
})

# Add artifacts
manager.add_artifact(run_id, "timing_report", "adder_timing.pdf")

# Get component history
history = manager.get_component_history("adder")
for entry in history:
    print(f"Version: {entry['version']}")
    print(f"Timestamp: {entry['timestamp']}")
    print(f"Metrics: {entry['metrics']}")

# Compare versions
comparison = manager.compare_runs("run_1", "run_2")
print(f"Changes: {comparison['component_changes']}")
print(f"Metric Differences: {comparison['metric_changes']}")
```
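At its core, a run comparison is a per-metric diff between two runs' metric dictionaries. The hypothetical `diff_metrics` helper below illustrates the shape of such a comparison; it is a sketch of the idea, not hdlopt's `compare_runs` implementation:

```python
def diff_metrics(old, new):
    """Per-metric old/new/delta between two runs' metric dictionaries.
    Delta is None when a metric exists in only one run."""
    return {
        k: {
            "old": old.get(k),
            "new": new.get(k),
            "delta": (new[k] - old[k]) if k in old and k in new else None,
        }
        for k in sorted(set(old) | set(new))
    }

# Example: timing improved (WNS rose toward zero), power slightly increased
changes = diff_metrics(
    {"timing_wns": -2.5, "power_total": 1.2},
    {"timing_wns": -0.8, "power_total": 1.3},
)
print(changes["timing_wns"])  # delta of roughly +1.7 ns of recovered slack
```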
Plan and execute optimized test runs with the IntegratedTestManager:

```python
import json

from hdlopt.scripts.testbench.manager import IntegratedTestManager
from hdlopt.scripts.testbench.optimizer import TestOptimizer, ModuleComplexity

# Create manager with optimization (rules as defined for the component)
manager = IntegratedTestManager(
    component_name="adder",
    rules=rules,
    max_parallel=4,
    target_cases_per_file=1000,
    simulator="modelsim"
)

# Load module details
with open("adder_details.json") as f:
    module_details = json.load(f)

# Plan tests with optimization
test_plan = manager.plan_tests(
    module_details=module_details,
    desired_cases=1000,
    available_time=300  # 5 minute timeout
)

# Execute the optimized test plan
results = manager.execute_test_plan(
    plan=test_plan,
    module_details=module_details,
    recursive=True
)
```
The TestOptimizer can also be used directly for edge-case generation and coverage analysis:

```python
from hdlopt.scripts.testbench.optimizer import TestOptimizer

optimizer = TestOptimizer(base_path="generated")

# Input ranges for the module
input_ranges = {
    "a": [0, 255],
    "b": [0, 255],
    "cin": [0, 1]
}

# Identify edge cases
edge_cases = optimizer.identify_edge_cases(
    input_ranges,
    special_signals={"clk", "rst"}
)

# Generate distribution-based test cases
regular_cases = optimizer.generate_test_distribution(
    input_ranges,
    num_cases=1000,
    granularity=0.1
)

# Combine both sets for coverage analysis
test_cases = edge_cases + regular_cases

# Generate coverage matrix
coverage_matrix = optimizer.generate_coverage_matrix(
    test_cases=test_cases,
    input_ranges=input_ranges
)

# Generate coverage report
coverage_report = optimizer.generate_coverage_report(
    test_cases=test_cases,
    input_ranges=input_ranges
)

# Visualize coverage
optimizer.visualize_coverage(
    test_cases=test_cases,
    input_ranges=input_ranges,
    output_path="coverage_plot.png"
)

# Calculate module complexity (module_details as loaded above)
complexity = optimizer.calculate_module_complexity(module_details)
complexity_score = complexity.calculate_score()

# Estimate execution time against a time budget
available_time = 300  # seconds
est_time = optimizer.estimate_execution_time(
    complexity=complexity,
    num_cases=len(test_cases)
)

# Optimize test selection if the estimate exceeds the budget
if est_time > available_time:
    optimized_cases = optimizer.optimize_test_selection(
        test_cases=test_cases,
        input_ranges=input_ranges,
        target_cases=int(len(test_cases) * 0.7)
    )
```
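Edge-case identification can be pictured as enumerating boundary values for each input and crossing them into test vectors. The function below is a simplified stand-in for `identify_edge_cases`, not hdlopt's implementation:

```python
from itertools import product

def boundary_values(lo, hi):
    # Range boundaries plus the just-inside values, deduplicated and sorted
    return sorted({lo, min(lo + 1, hi), max(hi - 1, lo), hi})

def identify_edge_cases(input_ranges):
    """Cross the boundary values of every input to form edge-case vectors."""
    names = list(input_ranges)
    candidates = [boundary_values(lo, hi) for lo, hi in input_ranges.values()]
    return [dict(zip(names, combo)) for combo in product(*candidates)]

cases = identify_edge_cases({"a": [0, 255], "b": [0, 255], "cin": [0, 1]})
print(len(cases))  # 4 * 4 * 2 = 32 boundary combinations
```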