# Quantum Computing API
## QuantumStateEngine

*Quantum Core*
Core quantum state simulator with automatic backend selection based on qubit count. Supports exact statevector simulation (1-20 qubits), tensor networks (20-40 qubits), and matrix product state (MPS) approximation (40-50 qubits).
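The automatic selection can be pictured as a simple threshold rule over the qubit count. A minimal sketch, mirroring the ranges above; the helper name is hypothetical and not part of the API:

```python
# Hypothetical illustration of threshold-based backend selection (not the
# library's internals); the cutoffs follow the documented qubit ranges.
def select_backend(num_qubits: int) -> str:
    if num_qubits <= 20:
        return 'statevector'     # exact: memory grows as 2^n
    if num_qubits <= 40:
        return 'tensor_network'  # exploits limited entanglement structure
    if num_qubits <= 50:
        return 'mps'             # matrix product state approximation
    raise ValueError("num_qubits must be between 1 and 50")
```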
### `__init__(num_qubits, backend=None)`

Initialize a quantum circuit with the specified number of qubits.
| Parameter | Type | Description |
|---|---|---|
| num_qubits | int | Number of qubits in the circuit (1-50) |
| backend | str, optional | Force a specific backend ('statevector', 'tensor_network', 'mps') |
```python
from aios.quantum_ml_algorithms import QuantumStateEngine

# Automatic backend selection
qc = QuantumStateEngine(num_qubits=10)

# Force a specific backend
qc_large = QuantumStateEngine(num_qubits=30, backend='tensor_network')
```
### `hadamard(qubit)`

Apply a Hadamard gate to put the target qubit into superposition.

```python
qc.hadamard(qubit: int) → None
```
| Parameter | Type | Description |
|---|---|---|
| qubit | int | Target qubit index (0 to num_qubits-1) |
```python
# Create superposition on all qubits
for i in range(5):
    qc.hadamard(i)
```
### `cnot(control, target)`

Apply a CNOT (controlled-NOT) gate to create entanglement.

```python
qc.cnot(control: int, target: int) → None
```
| Parameter | Type | Description |
|---|---|---|
| control | int | Control qubit index |
| target | int | Target qubit index |
```python
# Create an entanglement chain
for i in range(4):
    qc.cnot(i, i + 1)
```
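Combining `hadamard` and `cnot` from above yields the canonical Bell state; a short usage sketch:

```python
# Prepare the Bell state (|00⟩ + |11⟩)/√2 using only the gates shown above
bell = QuantumStateEngine(num_qubits=2)
bell.hadamard(0)   # superposition on qubit 0
bell.cnot(0, 1)    # entangle qubit 1 with qubit 0
print(bell.measure(shots=1000))  # expect roughly equal '00' and '11' counts
```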
### `rx(qubit, angle)`, `ry(qubit, angle)`, `rz(qubit, angle)`

Apply rotation gates around the X, Y, or Z axis. All three share the same signature:

```python
qc.rx(qubit: int, angle: float) → None
```
| Parameter | Type | Description |
|---|---|---|
| qubit | int | Target qubit index |
| angle | float | Rotation angle in radians |
```python
import numpy as np

# Rotate qubit 0 by π/4 around the X-axis
qc.rx(0, np.pi / 4)

# Parameterized rotation
theta = np.random.uniform(0, 2 * np.pi)
qc.ry(1, theta)
```
### `measure(shots=1000)`

Measure all qubits and return the outcome distribution.

```python
qc.measure(shots: int = 1000) → Dict[str, int]
```
| Parameter | Type | Description |
|---|---|---|
| shots | int | Number of measurement repetitions |
```python
results = qc.measure(shots=1000)
# {'00': 245, '01': 253, '10': 247, '11': 255}

# Get the most common outcome
most_common = max(results, key=results.get)
print(f"Most common: {most_common} ({results[most_common]} times)")
```
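Shot counts convert directly into probability estimates:

```python
# Normalize counts from the measure() example above into probabilities
total = sum(results.values())
probs = {bits: count / total for bits, count in results.items()}
print(probs)  # e.g., {'00': 0.245, '01': 0.253, '10': 0.247, '11': 0.255}
```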
### `expectation_value(observable)`

Compute the expectation value of an observable.

```python
qc.expectation_value(observable: str) → float
```
| Parameter | Type | Description |
|---|---|---|
| observable | str | Observable string (e.g., 'Z0', 'Z0*Z1', 'X0*Y1') |
```python
# Single-qubit observable
z0 = qc.expectation_value('Z0')

# Two-qubit correlation
z0z1 = qc.expectation_value('Z0*Z1')

# Composite observable
energy = qc.expectation_value('X0*X1') + 0.5 * qc.expectation_value('Z0')
```
## QuantumVQE

*Quantum Optimization*
Variational Quantum Eigensolver (VQE) for finding ground-state energies. Uses hardware-efficient ansatz circuits with a classical optimization loop.
### `__init__(num_qubits, depth=3, optimizer='adam')`
| Parameter | Type | Description |
|---|---|---|
| num_qubits | int | Number of qubits |
| depth | int | Circuit depth (deeper circuits are more expressive) |
| optimizer | str | Optimizer ('adam', 'l-bfgs-b', 'cobyla') |
```python
from aios.quantum_ml_algorithms import QuantumVQE

# Define the Hamiltonian as a function of the circuit state
def hamiltonian(qc):
    return qc.expectation_value('Z0*Z1') - 0.5 * qc.expectation_value('Z0')

# Initialize VQE
vqe = QuantumVQE(num_qubits=4, depth=3)

# Optimize
energy, params = vqe.optimize(hamiltonian, max_iter=100)
print(f"Ground state energy: {energy:.6f}")
print(f"Optimal parameters: {params}")
```
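For intuition, the loop that `optimize` automates can be sketched with the `QuantumStateEngine` API above. The ansatz layout here (one `ry` layer plus a CNOT chain per depth level) is an illustrative assumption, not the library's exact circuit:

```python
# Hand-rolled VQE loop: hardware-efficient ansatz + classical optimizer.
# The ansatz structure below is an assumption for illustration only.
import numpy as np
from scipy.optimize import minimize
from aios.quantum_ml_algorithms import QuantumStateEngine

NUM_QUBITS, DEPTH = 4, 3

def energy(params: np.ndarray) -> float:
    qc = QuantumStateEngine(num_qubits=NUM_QUBITS)
    angles = params.reshape(DEPTH, NUM_QUBITS)
    for layer in range(DEPTH):
        for q in range(NUM_QUBITS):        # parameterized rotations
            qc.ry(q, angles[layer, q])
        for q in range(NUM_QUBITS - 1):    # entangling chain
            qc.cnot(q, q + 1)
    return qc.expectation_value('Z0*Z1') - 0.5 * qc.expectation_value('Z0')

x0 = np.random.uniform(0, 2 * np.pi, DEPTH * NUM_QUBITS)
result = minimize(energy, x0, method='COBYLA', options={'maxiter': 100})
print(f"Ground state energy: {result.fun:.6f}")
```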
## HHL Linear System Solver

*Quantum Linear Algebra*
Harrow-Hassidim-Lloyd (HHL) algorithm for solving linear systems Ax = b with exponential speedup. Complexity: O(log(N)·κ²), where κ is the condition number of A, versus O(N³) classically.
### `hhl_linear_system_solver(A, b)`

```python
hhl_linear_system_solver(A: np.ndarray, b: np.ndarray) → Dict
```
| Parameter | Type | Description |
|---|---|---|
| A | np.ndarray | Coefficient matrix (N × N) |
| b | np.ndarray | Right-hand side vector (N,) |
```python
from aios.quantum_hhl_algorithm import hhl_linear_system_solver
import numpy as np

# Define the linear system
A = np.array([[2.0, -0.5],
              [-0.5, 2.0]])
b = np.array([1.0, 0.0])

# Solve with quantum advantage
result = hhl_linear_system_solver(A, b)

print(f"Success probability: {result['success_probability']:.4f}")
print(f"Quantum advantage: {result['quantum_advantage']:.1f}x")
print(f"Condition number κ: {result['condition_number']:.2f}")

# Note: HHL outputs expectation values, not the full solution vector
print(f"Solution expectation: {result['solution_expectation']}")
```
**Important:** The HHL algorithm returns expectation values ⟨x|M|x⟩, not the full solution vector x. This is sufficient for many applications (e.g., expectation values in quantum chemistry, optimization objectives).

**When to use:** Best for sparse, well-conditioned matrices (low κ). For dense or ill-conditioned systems, classical methods may be more efficient.
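Because HHL returns expectation values rather than x itself, a classical cross-check is cheap on small systems. A minimal sketch, reusing `A` and `b` from the example above and assuming a Pauli-Z observable:

```python
# Classical O(N³) reference for the 2×2 system above; the Pauli-Z
# observable M is an illustrative assumption.
import numpy as np

x_classical = np.linalg.solve(A, b)
x_hat = x_classical / np.linalg.norm(x_classical)  # HHL encodes a normalized |x⟩

M = np.diag([1.0, -1.0])  # Pauli-Z
print(f"Classical ⟨x|Z|x⟩: {x_hat @ M @ x_hat:.4f}")
# Compare against result['solution_expectation'] from the HHL run above
```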
# ML Algorithms API
## Mamba (Adaptive State Space)

*ML Sequence Modeling*
Mamba architecture with O(N) sequence-length complexity versus O(N²) for attention. Input-dependent state-space parameters enable content-based reasoning.
### `AdaptiveStateSpace(input_dim, state_dim, output_dim)`
| Parameter | Type | Description |
|---|---|---|
| input_dim | int | Input feature dimension |
| state_dim | int | Hidden state dimension |
| output_dim | int | Output feature dimension |
```python
from aios.ml_algorithms import AdaptiveStateSpace
import torch

# Initialize a Mamba layer
mamba = AdaptiveStateSpace(
    input_dim=512,
    state_dim=128,
    output_dim=512
)

# Process a long sequence efficiently
x = torch.randn(32, 10000, 512)  # (batch, length, features)
output, final_state = mamba(x)

print(f"Output shape: {output.shape}")      # (32, 10000, 512)
print(f"Final state: {final_state.shape}")  # (32, 128)
# Complexity: O(N) vs O(N²) for attention!
```
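The O(N) scaling comes from a linear recurrence with one constant-time update per step, where the update parameters are computed from the current input. A minimal sketch of that idea; the fixed decay matrix and the two projections are simplifying assumptions, not the layer's internals:

```python
# Toy selective state-space recurrence: O(1) work per timestep => O(N) total.
import torch
import torch.nn as nn

state_dim, input_dim = 16, 8
A = 0.9 * torch.eye(state_dim)            # fixed decay (real Mamba learns this)
B_proj = nn.Linear(input_dim, state_dim)  # input-dependent write gate
C_proj = nn.Linear(input_dim, state_dim)  # input-dependent readout

x = torch.randn(100, input_dim)           # (length, features)
state = torch.zeros(state_dim)
ys = []
for x_t in x:
    state = A @ state + B_proj(x_t)           # linear state update
    ys.append(torch.dot(C_proj(x_t), state))  # content-based readout
y = torch.stack(ys)                           # (length,)
```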
## Optimal Transport Flow Matching

*ML Generative*
Fast generative modeling with 10-20 sampling steps versus roughly 1000 for diffusion models. The model learns the velocity field directly, giving straight sampling paths.
### `OptimalTransportFlowMatcher(data_dim, time_dim)`
| Parameter | Type | Description |
|---|---|---|
| data_dim | int | Data dimension (e.g., 784 for 28×28 images) |
| time_dim | int | Time embedding dimension |
```python
from aios.ml_algorithms import OptimalTransportFlowMatcher
import torch

# Initialize the flow matcher
flow = OptimalTransportFlowMatcher(
    data_dim=784,  # 28×28 images
    time_dim=32
)

# Fast sampling: only 20 steps!
samples = flow.sample(
    num_samples=16,
    num_steps=20  # vs ~1000 for diffusion
)

print(f"Generated samples: {samples.shape}")  # (16, 784)
# ~50x faster sampling than diffusion models
```
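Training reduces to a simple regression: interpolate linearly between a noise sample and a data sample, then regress the constant velocity along that straight path. A minimal sketch of the loss; `velocity_model` is a hypothetical stand-in for the matcher's network:

```python
# Optimal-transport flow-matching objective: straight paths, constant targets.
import torch

def flow_matching_loss(velocity_model, x1):
    """x1: (batch, data_dim) real data samples."""
    x0 = torch.randn_like(x1)            # noise endpoint
    t = torch.rand(x1.shape[0], 1)       # random time in [0, 1]
    x_t = (1 - t) * x0 + t * x1          # straight-line interpolation
    target = x1 - x0                     # velocity is constant along the path
    return torch.mean((velocity_model(x_t, t) - target) ** 2)
```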
## Neural-Guided MCTS

*ML Planning*
Monte Carlo Tree Search with neural policy/value guidance (AlphaGo-style). The PUCT selection rule balances exploration and exploitation.
### `NeuralGuidedMCTS(policy_fn, value_fn, num_simulations, c_puct)`
```python
from aios.ml_algorithms import NeuralGuidedMCTS

# Define neural-network functions (runnable stubs shown; plug in your networks)
def policy_fn(state):
    """Policy network: returns [(action, prob), ...]."""
    return [(0, 0.5), (1, 0.5)]  # uniform prior over two actions as a stub

def value_fn(state):
    """Value network: estimated value of the state."""
    return 0.0  # neutral estimate as a stub

# Initialize MCTS
mcts = NeuralGuidedMCTS(
    policy_fn=policy_fn,
    value_fn=value_fn,
    num_simulations=800,  # AlphaGo uses 800-1600
    c_puct=1.0            # Exploration constant
)

# Search for the best action from the current state
best_action = mcts.search(current_state)
print(f"Best action: {best_action}")
```
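At each node the search picks the child maximizing the PUCT score, where the exploration bonus shrinks as a child accumulates visits. A minimal sketch of the standard formula; the variable names are illustrative:

```python
# PUCT: Q(s,a) + c_puct * P(s,a) * sqrt(N(s)) / (1 + N(s,a))
import math

def puct_score(q_value, prior, child_visits, parent_visits, c_puct=1.0):
    exploration = c_puct * prior * math.sqrt(parent_visits) / (1 + child_visits)
    return q_value + exploration
```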
## Adaptive Particle Filter

*ML Bayesian*
Sequential Monte Carlo for real-time state estimation and sensor fusion. Resampling is triggered adaptively based on the effective sample size.
### `AdaptiveParticleFilter(num_particles, state_dim, obs_dim)`
```python
from aios.ml_algorithms import AdaptiveParticleFilter
import numpy as np

# Initialize the filter
pf = AdaptiveParticleFilter(
    num_particles=1000,
    state_dim=4,  # [x, y, vx, vy]
    obs_dim=2     # [x, y] measurements
)

# Time update (prediction)
def transition_fn(x):
    """State transition model."""
    x_new = x.copy()
    x_new[:2] += x_new[2:] * 0.1  # Position update
    return x_new

pf.predict(transition_fn=transition_fn, process_noise=0.05)

# Measurement update
def likelihood_fn(x, obs):
    """Measurement likelihood."""
    diff = x[:2] - obs
    return np.exp(-0.5 * np.sum(diff**2))

observation = np.array([1.0, 2.0])
pf.update(observation=observation, likelihood_fn=likelihood_fn)

# Get the state estimate
estimate = pf.estimate()
print(f"State estimate: {estimate}")
```
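The adaptive-resampling criterion tracks the effective sample size (ESS) of the normalized weights. A minimal sketch of the standard test; the N/2 threshold is a common but illustrative choice:

```python
# Resample only when weight degeneracy sets in (ESS below a threshold).
import numpy as np

def effective_sample_size(weights: np.ndarray) -> float:
    """ESS = 1 / sum(w_i^2) for normalized weights."""
    w = weights / weights.sum()
    return 1.0 / np.sum(w ** 2)

def should_resample(weights: np.ndarray, threshold: float = 0.5) -> bool:
    return effective_sample_size(weights) < threshold * len(weights)
```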
# Autonomous Discovery API
## AutonomousLLMAgent

*ML Level 4*
Level 4 autonomous agent with self-directed learning capabilities. Synthesizes goals, builds knowledge graphs, and pursues learning independently.
### `__init__(model_name, autonomy_level)`
| Parameter | Type | Description |
|---|---|---|
| model_name | str | LLM model name (e.g., "deepseek-r1") |
| autonomy_level | AgentAutonomy | Level 0-4 (use AgentAutonomy.LEVEL_4) |
```python
from aios.autonomous_discovery import AutonomousLLMAgent, AgentAutonomy

# Create a Level 4 autonomous agent
agent = AutonomousLLMAgent(
    model_name="deepseek-r1",
    autonomy_level=AgentAutonomy.LEVEL_4
)

# Give it a mission
agent.set_mission(
    "quantum computing machine learning applications",
    duration_hours=1.0
)

# The agent autonomously:
# - Decomposes the mission into learning objectives
# - Balances exploration vs exploitation
# - Builds a knowledge graph
# - Self-evaluates and adapts
await agent.pursue_autonomous_learning()

# Export the discovered knowledge
knowledge = agent.export_knowledge_graph()
print(f"Discovered {knowledge['stats']['total_concepts']} concepts")
print(f"Average confidence: {knowledge['stats']['average_confidence']:.2f}")
print(f"High confidence count: {knowledge['stats']['high_confidence_count']}")
```
### `export_knowledge_graph()`

```python
agent.export_knowledge_graph() → Dict
```
```python
knowledge = agent.export_knowledge_graph()

# Returned structure:
# {
#     "nodes": {
#         "quantum_entanglement": {
#             "confidence": 0.95,
#             "timestamp": "2025-10-13T10:30:00Z",
#             "parent": "quantum_mechanics"
#         },
#         ...
#     },
#     "edges": [
#         {"source": "quantum_computing", "target": "quantum_entanglement", "type": "requires"},
#         ...
#     ],
#     "stats": {
#         "total_concepts": 247,
#         "average_confidence": 0.87,
#         "high_confidence_count": 189
#     }
# }
```
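The exported dict can be post-processed directly; for example, filtering high-confidence concepts using the fields shown above:

```python
# Filter concepts by confidence (field names as in the structure above)
high_conf = {
    name: node for name, node in knowledge["nodes"].items()
    if node["confidence"] >= 0.9
}
print(f"{len(high_conf)} concepts with confidence >= 0.9")
```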
# Utilities
## Algorithm Catalog
### `get_algorithm_catalog()`

Get a comprehensive catalog of all available algorithms with metadata.
```python
from aios.ml_algorithms import get_algorithm_catalog

catalog = get_algorithm_catalog()

for name, info in catalog.items():
    print(f"{name}:")
    print(f"  Category: {info['category']}")
    print(f"  Complexity: {info['complexity']}")
    print(f"  Available: {info['available']}")
    print(f"  Requires: {info['requires']}")
    print()
```
## Dependency Checking

### `check_dependencies()`
```python
# Check quantum algorithms
from aios.quantum_ml_algorithms import check_dependencies as check_quantum

status = check_quantum()
print(f"Quantum algorithms available: {status['available']}")

# Check autonomous discovery
from aios.autonomous_discovery import check_autonomous_discovery_dependencies

status = check_autonomous_discovery_dependencies()
print(f"Autonomous discovery ready: {status['ready']}")
```
# Quick Reference

## Import Cheatsheet
```python
# Quantum Computing
from aios.quantum_ml_algorithms import (
    QuantumStateEngine,
    QuantumVQE,
    QuantumQAOA,
    QuantumNeuralNetwork
)
from aios.quantum_hhl_algorithm import hhl_linear_system_solver

# ML Algorithms
from aios.ml_algorithms import (
    AdaptiveStateSpace,           # Mamba
    OptimalTransportFlowMatcher,  # Flow Matching
    NeuralGuidedMCTS,             # AlphaGo-style
    AdaptiveParticleFilter,       # Bayesian tracking
    NoUTurnSampler,               # NUTS HMC
    SparseGaussianProcess,        # Scalable GP
    get_algorithm_catalog         # List all
)

# Autonomous Discovery
from aios.autonomous_discovery import (
    AutonomousLLMAgent,
    AgentAutonomy,
    create_autonomous_discovery_action
)
```