Quantum Geometric Learning (Pre-release)

A high-performance, production-oriented quantum computing framework in active development that leverages differential geometry and algebraic topology for superior performance on real quantum hardware. This is a pre-release version: while the core algorithms and architecture are complete, the library is still undergoing final compilation and testing. Support is planned for IBM Quantum (127-433 qubits), Rigetti (80+ qubits), and D-Wave (5000+ qubits) systems.

Note: This is a pre-release version. The library does not yet build fully, and some features may be incomplete. We are releasing the source code now for transparency and to gather community feedback.

Key Features

1. Hardware-Native Quantum Computing (Coming Soon)

  • Multi-Vendor Support (In Development)
    // Example of planned hardware integration
    quantum_system_t* system = quantum_init_system(&config);
    quantum_execute_circuit(system, circuit, &results);
    • IBM Quantum (127-433 qubits) - Integration in progress
    • Rigetti (80+ qubits) - Integration in progress
    • D-Wave (5000+ qubits) - Integration in progress
    • Full emulation mode - In development (a backend-selection sketch follows this list)
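
As a hedged sketch of how backend selection might look once multi-vendor support lands: the quantum_config_t fields below are taken from the API preview later in this README, while BACKEND_RIGETTI, BACKEND_DWAVE, and BACKEND_SIMULATOR are assumed enum names that may differ in the final release.

    // Hedged sketch: choose a backend at initialization. Only BACKEND_IBM
    // is confirmed by this README; the other enum values are assumptions.
    quantum_config_t config = {
        .backend = BACKEND_IBM,   // or BACKEND_RIGETTI / BACKEND_DWAVE /
                                  // BACKEND_SIMULATOR (assumed names)
        .optimization = true
    };
    quantum_system_t* system = quantum_init_system(&config);
    quantum_system_destroy(system);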

2. Geometric Error Protection (In Development)

  • Topology-Based Protection (Core algorithms complete, integration in progress)
    // Example of planned error protection
    protection_config_t config = {
        .type = PROTECTION_GEOMETRIC,
        .strength = 0.8
    };
    quantum_geometric_protect(state, &config);
    • Theoretical error reduction from O(ε) to O(ε²) (for example, a physical error rate of ε = 10⁻² would be suppressed to roughly ε² = 10⁻⁴)
    • Expected coherence time improvements of 10-100x
    • Target: >99.9% state fidelity

3. Hardware-Optimized Performance (In Development)

  • Automatic Optimization (Core algorithms complete, testing in progress)
    // Example of planned optimization features
    optimization_config_t config = {
        .circuit_optimization = true,
        .geometric_compilation = true
    };
    quantum_optimize_circuit(circuit, &config);
    • Expected 30-70% circuit depth reduction
    • Expected 2-5x gate fidelity improvement
    • Hardware-aware compilation in development

4. Distributed Training (Planned)

  • Fault-Tolerant Training (Architecture designed, implementation in progress)
    // Example of planned distributed features
    distributed_config_t config = {
        .world_size = size,
        .local_rank = rank,
        .use_data_parallel = true,
        .checkpoint_dir = "/path/to/checkpoints"
    };
    distributed_manager_t* manager = distributed_manager_create(&config);
    • Automatic workload distribution (in development)
    • Process failure recovery (planned)
    • Gradient synchronization (in development)
    • Checkpoint management (planned)

Development Status

This is a pre-release version with the following status:

  • Core Algorithms: ✅ Complete
  • Architecture: ✅ Complete
  • Documentation: ✅ Complete
  • Compilation: 🚧 In Progress
  • Hardware Integration: 🚧 In Progress
  • Testing: 🚧 In Progress
  • Performance Optimization: 🚧 In Progress

Quick Start (For Developers)

1. Installation

# Note: Library is currently in pre-release

# Install dependencies
sudo apt install cmake libopenmpi-dev

# Clone repository
git clone https://github.com/tsotchke/quantum_geometric_learning.git
cd quantum_geometric_learning

# Build system setup (compilation is not yet complete, so the build
# step below is expected to fail until the pre-release fixes land)
mkdir build && cd build
cmake ..
make

2. Example Code (API Preview)

// Note: This is example code showing the API as intended

#include <stdio.h>
#include <quantum_geometric/core/quantum_geometric_core.h>

int main() {
    // Initialize quantum system
    quantum_system_t* system = quantum_init_system(&(quantum_config_t){
        .backend = BACKEND_IBM,
        .optimization = true
    });

    // Create quantum circuit
    quantum_circuit_t* circuit = quantum_circuit_create();
    quantum_circuit_h(circuit, 0);  // Hadamard gate
    quantum_circuit_cx(circuit, 0, 1);  // CNOT gate

    // Execute the circuit and collect results
    execution_results_t results;
    quantum_execute_circuit(system, circuit, &results);

    // Print results
    printf("Fidelity: %.3f\n", results.fidelity);

    // Cleanup
    quantum_circuit_destroy(circuit);
    quantum_system_destroy(system);
    return 0;
}

3. Examples (In Development)

# Note: Examples are not yet fully runnable

# Example code is available in the examples/ directory
# These demonstrate the planned functionality:
examples/quantum_geometric_basics.c     # Basic quantum operations
examples/error_correction_example.c     # Error correction features
examples/quantum_field_example.c        # Quantum field operations
examples/surface_code_example.c         # Surface code implementation

4. Verify Distributed Setup

# Test distributed environment
./tools/test_distributed_setup.sh

# Configure distributed settings
vim etc/quantum_geometric/distributed_config.json

Core Components

1. Quantum Geometric Core

// Create a two-qubit quantum state and apply geometric protection
protection_config_t config = { .type = PROTECTION_GEOMETRIC, .strength = 0.8 };
quantum_state* state = quantum_state_create(2);
quantum_geometric_protect(state, &config);
  • Manifold operations
  • Natural gradient optimization
  • Topological protection

2. Hardware Integration

// Configure quantum hardware
quantum_hardware_config_t config = {
    .backend = BACKEND_IBM,
    .device = "ibm_manhattan",
    .optimization = {
        .topology_aware = true,
        .error_mitigation = true
    }
};
  • Multi-vendor support
  • Native operations
  • Error correction
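
To show how this configuration might be wired in, a hedged sketch follows; quantum_hardware_apply_config is a hypothetical function name used purely for illustration and is not confirmed by this README:

// Hedged sketch: applying the hardware configuration above to an
// initialized system. quantum_hardware_apply_config is a HYPOTHETICAL
// name; the released API may expose this step differently.
quantum_system_t* system = quantum_init_system(&(quantum_config_t){
    .backend = BACKEND_IBM,
    .optimization = true
});
quantum_hardware_apply_config(system, &config);   // hypothetical call
quantum_system_destroy(system);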

3. Performance Optimization

// Monitor and optimize performance
performance_config_t config = {
    .metrics = {
        .circuit_depth = true,
        .gate_fidelity = true
    },
    .optimization = {
        .type = OPTIMIZATION_GEOMETRIC
    }
};
  • Circuit optimization
  • Resource management
  • Performance monitoring
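
As a usage sketch built only from calls shown elsewhere in this README, a circuit can be constructed and passed through the geometric optimizer:

// Build a small circuit, then optimize it (API preview calls only)
quantum_circuit_t* circuit = quantum_circuit_create();
quantum_circuit_h(circuit, 0);               // Hadamard on qubit 0
quantum_circuit_cx(circuit, 0, 1);           // CNOT from qubit 0 to 1

optimization_config_t opt = {
    .circuit_optimization = true,
    .geometric_compilation = true
};
quantum_optimize_circuit(circuit, &opt);     // targets 30-70% depth reduction

quantum_circuit_destroy(circuit);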

4. Distributed Training

// Initialize distributed training
distributed_config_t config = {
    .world_size = size,              // Total number of processes
    .local_rank = rank,              // This process's rank
    .num_gpus_per_node = 1,          // GPUs per node
    .batch_size = 32,                // Global batch size
    .micro_batch_size = 8,           // Per-process batch size
    .use_data_parallel = true,       // Enable data parallelism
    .use_model_parallel = false,     // Disable model parallelism
    .learning_rate = 0.001f,         // Learning rate
    .warmup_steps = 100,             // LR warmup steps
    .max_steps = 1000,               // Total training steps
    .save_interval = 50,             // Checkpoint interval
    .checkpoint_dir = "/path/to/checkpoints"
};

// Create manager and initialize environment
distributed_manager_t* manager = distributed_manager_create(&config);
distributed_manager_init_environment(manager);

// Train with fault tolerance
for (size_t step = 0; step < config.max_steps; step++) {
    if (distributed_manager_train_step(manager, pipeline,
                                     batch_data, batch_size,
                                     step, &metrics) != 0) {
        // Handle process failure
        distributed_manager_handle_failure(manager,
                                        metrics.failed_process_rank);
        // Retry step
        distributed_manager_train_step(manager, pipeline,
                                     batch_data, batch_size,
                                     step, &metrics);
    }
    
    // Print metrics (rank 0 only)
    if (rank == 0 && step % 10 == 0) {
        printf("Step %zu: Loss = %.4f, Accuracy = %.2f%%\n",
               step, metrics.loss, metrics.accuracy * 100);
    }
}

Features:

  • Automatic data sharding and workload distribution
  • Fault detection and recovery
  • Gradient synchronization and optimization
  • Performance monitoring and metrics
  • Checkpoint management (a hedged sketch follows this list)
  • Multi-GPU support
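
A hedged sketch of how periodic checkpointing might be driven from inside the training loop above; distributed_manager_save_checkpoint is a hypothetical name for illustration only (this README confirms only the calls shown earlier), while save_interval and checkpoint_dir come from the configuration above:

// Hedged sketch: periodic checkpointing inside the training loop.
// distributed_manager_save_checkpoint is a HYPOTHETICAL name.
if (step % config.save_interval == 0) {
    distributed_manager_save_checkpoint(manager, step);  // hypothetical call
}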

Documentation

Getting Started

  1. Installation Guide - Setup instructions
  2. Quick Start Guide - First steps
  3. Beginner's Guide - Core concepts

Core Documentation

  1. API Reference - Complete API
  2. Theory Guide - Mathematical foundations
  3. Examples - Code examples

Advanced Topics

  1. Hardware Integration
  2. Error Protection
  3. Performance Optimization
  4. Distributed Training

Architectural Performance Metrics

Analysis

  • Phase Error Protection

    • Classical: O(ε²) error scaling
    • Geometric: O(ε⁴) error scaling
    • Target: 100x improvement in error resistance
  • Gate Error Mitigation

    • Classical: O(ε) error rates
    • Geometric: O(ε²) error rates
    • Target: 10x reduction in gate errors
  • State Fidelity

    • Classical: ~95% fidelity
    • Geometric: >99.9% fidelity target
    • Expected: 5x improvement in state preservation
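
To make these scalings concrete (an illustrative calculation, not a measured result): at a representative physical error rate of ε = 0.1, classical phase errors scale as ε² = 10⁻², while the geometric scaling gives ε⁴ = 10⁻⁴, consistent with the 100x target; likewise, gate errors improving from O(ε) to O(ε²) means 10⁻¹ dropping to 10⁻², the targeted 10x reduction.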

Production Implementation

  • Circuit Optimization (In Development)

    • Target: 30-70% circuit depth reduction
    • Planned: Automatic topology-aware compilation
    • Planned: Hardware-native gate optimization
  • Memory Management (In Development)

    • Target: 60-80% reduction in memory usage
    • Planned: Geometric state compression
    • Planned: Efficient tensor operations
  • Qubit Utilization (In Development)

    • Target: 40-60% reduction in qubit requirements
    • Planned: Topology-aware qubit mapping
    • Planned: Dynamic resource allocation

Hardware Integration Goals

  • IBM Quantum (In Development)

    • Target: 10M operations/second
    • Target: >99% gate fidelity
    • Target: Support for 100-200 gate depth
  • Rigetti Systems (In Development)

    • Target: 5M operations/second
    • Target: >98% gate fidelity
    • Target: Support for 50-100 gate depth
  • D-Wave Systems (In Development)

    • Target: 1M operations/second
    • Target: >95% solution quality
    • Target: Native quantum annealing support

Distributed Performance Goals

  • Scaling Efficiency (In Development)

    • Target: Linear scaling up to 256 nodes
    • Target: 90% GPU utilization
    • Planned: Automatic load balancing
  • Fault Tolerance (In Development)

    • Target: <1s recovery time
    • Target: Zero data loss
    • Planned: Automatic checkpoint recovery
  • Communication Optimization (In Development)

    • Target: <5% network overhead
    • Target: Optimized gradient sync
    • Target: Efficient parameter broadcast

Contributing

We welcome contributions to help complete the implementation:

  1. Read CONTRIBUTING.md
  2. Set up development environment:
    ./tools/setup_dev_env.sh
  3. Areas needing assistance:
    • Compilation fixes
    • Hardware integration
    • Performance optimization
    • Testing infrastructure

Development Roadmap

  1. Phase 1 - Current (Pre-release)

    • Complete compilation fixes
    • Finish hardware integration
    • Implement core testing
  2. Phase 2 (Planned)

    • Performance optimization
    • Hardware validation
    • Full test coverage
  3. Phase 3 (Planned)

    • Production release
    • Additional hardware support
    • Advanced features

License

MIT License

Citation

@misc{QuantumGeometricTensorLibrary,
  author = {tsotchke},
  title  = {Quantum Geometric Tensor Library},
  year   = {2025},
  url    = {https://github.com/tsotchke/quantum_geometric_tensor}
}

Support

  • Documentation (nearly complete; implementation details in progress)
  • Examples (code examples available but not yet runnable)
