Getting Started
Ai|oS provides state-of-the-art quantum computing and machine learning algorithms through a unified Python interface. This guide will help you get started with quantum simulations, ML algorithms, and autonomous discovery.
Key Features
- 23 Quantum Algorithms - VQE, QAOA, QNN, HHL, and more
- 10 Advanced ML Algorithms - Mamba, Flow Matching, MCTS, Bayesian methods
- Level 4 Autonomy - Autonomous agents with goal synthesis and self-directed learning
- 1-50 Qubit Simulation - Exact simulation up to 20 qubits, approximate to 50
- GPU Acceleration - Automatic CUDA support when available
- Production Ready - Enterprise-grade security and error handling
Installation
Prerequisites
- Python 3.8 or higher
- pip package manager
- NumPy (required for all algorithms)
- PyTorch (required for quantum and most ML algorithms)
- SciPy (required for VQE and optimization)
Install from Repository
# Clone the repository
git clone https://github.com/yourusername/aios.git
cd aios
# Install dependencies
pip install -r requirements.txt
# Verify installation
python -c "from aios.quantum_ml_algorithms import check_dependencies; check_dependencies()"
Install Specific Components
# Quantum algorithms only
pip install torch numpy scipy
# ML algorithms only (some require PyTorch, some only NumPy)
pip install torch numpy
# Autonomous discovery (requires all)
pip install torch numpy scipy
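After installing, you can confirm whether GPU acceleration will be picked up. This uses standard PyTorch calls only and is independent of Ai|oS:
import torch
print(f"PyTorch version: {torch.__version__}")
print(f"CUDA available: {torch.cuda.is_available()}")
if torch.cuda.is_available():
    print(f"GPU: {torch.cuda.get_device_name(0)}")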
Quantum Algorithms
Ai|oS provides 23 quantum computing algorithms spanning gate-based, variational, and ML-enhanced methods.
Quantum State Engine
Core quantum simulator with automatic backend selection based on qubit count.
from aios.quantum_ml_algorithms import QuantumStateEngine
# Create quantum circuit
qc = QuantumStateEngine(num_qubits=5)
# Build superposition
for i in range(5):
    qc.hadamard(i)
# Apply entanglement
for i in range(4):
    qc.cnot(i, i + 1)
# Measure
results = qc.measure(shots=1000)
print(f"Results: {results}")
# Expectation value
energy = qc.expectation_value('Z0*Z1')
print(f"Energy: {energy}")
Variational Quantum Eigensolver (VQE)
Find ground state energies for quantum chemistry and optimization problems.
Class: QuantumVQE
Category: Quantum, Optimization
Use Case: Molecular energy calculation, ground state finding
Quantum Advantage: Exponential speedup for certain Hamiltonians
from aios.quantum_ml_algorithms import QuantumVQE
# Define Hamiltonian
def hamiltonian(qc):
    return qc.expectation_value('Z0') - 0.5 * qc.expectation_value('Z1')
# Initialize VQE
vqe = QuantumVQE(num_qubits=4, depth=3)
# Optimize
energy, params = vqe.optimize(hamiltonian, max_iter=100)
print(f"Ground state energy: {energy:.6f}")
print(f"Optimal parameters: {params}")
HHL Linear System Solver
Solve linear systems Ax = b with exponential quantum speedup.
Algorithm: HHL (Harrow-Hassidim-Lloyd)
Category: Quantum, Linear Algebra
Use Case: Electromagnetic scattering, differential equations, ML optimization
Quantum Advantage: Exponential speedup for well-conditioned sparse matrices
from aios.quantum_hhl_algorithm import hhl_linear_system_solver
import numpy as np
# Define linear system
A = np.array([[2.0, -0.5], [-0.5, 2.0]])
b = np.array([1.0, 0.0])
# Solve with quantum advantage
result = hhl_linear_system_solver(A, b)
print(f"Success probability: {result['success_probability']:.4f}")
print(f"Quantum advantage: {result['quantum_advantage']:.1f}x")
print(f"Condition number: {result['condition_number']:.2f}")
# Note: HHL outputs expectation values, not full solution vector
print(f"Solution expectation: {result['solution_expectation']}")
Quantum Simulation Scaling
Qubits | Backend | Memory | Time | Accuracy |
---|---|---|---|---|
1-20 | Statevector | 1 MB - 8 GB | <1s - 10s | 100% |
20-40 | Tensor Network | 8 GB - 32 GB | 10s - 5min | ~99% |
40-50 | MPS | 16 GB - 64 GB | 5min - 1hr | ~95% |
50+ | Real Hardware | N/A | Variable | ~90% (noise) |
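The memory column is driven by statevector size: n qubits require 2^n complex amplitudes, 16 bytes each at complex128 precision. A quick back-of-envelope check (raw amplitudes only; the simulator may allocate additional workspace on top of this):
# Raw statevector memory: 2**n amplitudes x 16 bytes (complex128)
for n in (10, 20, 30):
    print(f"{n} qubits: {2**n * 16:,} bytes")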
ML Algorithms
10 state-of-the-art machine learning and probabilistic algorithms for Ai|oS meta-agents.
Mamba (Adaptive State Space)
Class: AdaptiveStateSpace (Mamba)
Category: Sequence Modeling, State Space
Use Case: Long sequence modeling, efficient alternative to Transformers
Key Feature: Input-dependent parameters enable content-based reasoning
from aios.ml_algorithms import AdaptiveStateSpace
import torch
# Initialize Mamba layer
mamba = AdaptiveStateSpace(
    input_dim=512,
    state_dim=128,
    output_dim=512
)
# Process sequence
x = torch.randn(32, 1000, 512) # (batch, length, features)
output, final_state = mamba(x)
print(f"Output shape: {output.shape}") # (32, 1000, 512)
print(f"Final state: {final_state.shape}") # (32, 128)
Flow Matching (Optimal Transport)
Class: OptimalTransportFlowMatcher
Category: Generative, Fast Sampling
Use Case: Fast image/audio generation (10-20 steps vs 1000 for diffusion)
Key Feature: Direct velocity field learning, straight sampling paths
from aios.ml_algorithms import OptimalTransportFlowMatcher
import torch
# Initialize flow matcher
flow = OptimalTransportFlowMatcher(
    data_dim=784,  # e.g., 28x28 images
    time_dim=32
)
# Sample a batch
samples = flow.sample(
    num_samples=16,
    num_steps=20  # only 20 steps for high quality
)
print(f"Generated samples: {samples.shape}") # (16, 784)
Neural-Guided MCTS
Class: NeuralGuidedMCTS (AlphaGo-style)
Category: Planning, Search
Use Case: Game playing, planning, decision making
Key Feature: PUCT algorithm with learned policy/value guidance
from aios.ml_algorithms import NeuralGuidedMCTS
# Define policy and value hooks (stubs; replace with a real network)
def policy_fn(state):
    """Neural network policy: returns (action, prior probability) pairs."""
    actions = legal_actions(state)  # environment-specific helper (hypothetical)
    return [(a, 1.0 / len(actions)) for a in actions]  # uniform priors as a stub

def value_fn(state):
    """Neural network value estimate for the current player."""
    return 0.0  # neutral stub

# Initialize MCTS
mcts = NeuralGuidedMCTS(
    policy_fn=policy_fn,
    value_fn=value_fn,
    num_simulations=800,
    c_puct=1.0  # exploration constant
)

# Search for the best move (current_state comes from your environment)
best_action = mcts.search(current_state)
print(f"Best action: {best_action}")
Particle Filter (Sequential Monte Carlo)
Class: AdaptiveParticleFilter
Category: Bayesian, State Estimation
Use Case: Real-time tracking, sensor fusion, time-series
Key Feature: Adaptive resampling based on effective sample size
from aios.ml_algorithms import AdaptiveParticleFilter
import numpy as np
# Initialize filter
pf = AdaptiveParticleFilter(
    num_particles=1000,
    state_dim=4,  # e.g., [x, y, vx, vy]
    obs_dim=2     # e.g., [x, y] measurements
)

# Time update (prediction)
def transition_fn(x):
    """State transition model"""
    return x + 0.1 * np.random.randn(*x.shape)

pf.predict(transition_fn=transition_fn, process_noise=0.05)

# Measurement update
def likelihood_fn(x, obs):
    """Measurement likelihood"""
    diff = x[:2] - obs  # compare position to observation
    return np.exp(-0.5 * np.sum(diff**2))
observation = np.array([1.0, 2.0])
pf.update(observation=observation, likelihood_fn=likelihood_fn)
# Get estimate
estimate = pf.estimate()
print(f"State estimate: {estimate}")
Autonomous Discovery System
Level 4 autonomous agents that self-direct learning and pursue knowledge independently.
Autonomy levels:
- Level 1: Action suggestion
- Level 2: Action on a subset of tasks
- Level 3: Conditional autonomy
- Level 4: Full autonomy (the agent sets its own goals)
Quick Start
from aios.autonomous_discovery import AutonomousLLMAgent, AgentAutonomy
# Create Level 4 autonomous agent
agent = AutonomousLLMAgent(
    model_name="deepseek-r1",
    autonomy_level=AgentAutonomy.LEVEL_4
)

# Give it a mission and let it learn
agent.set_mission(
    "quantum computing machine learning applications",
    duration_hours=1.0
)

# The agent autonomously:
# - decomposes the mission into learning objectives
# - balances exploration vs exploitation
# - builds a knowledge graph
# - self-evaluates and adapts
# (await requires an async context; see Example 3 for an asyncio.run wrapper)
await agent.pursue_autonomous_learning()
# Export discovered knowledge
knowledge = agent.export_knowledge_graph()
print(f"Discovered {knowledge['stats']['total_concepts']} concepts")
print(f"Average confidence: {knowledge['stats']['average_confidence']:.2f}")
Performance Characteristics
Feature | Baseline | Optimized | 8-GPU System |
---|---|---|---|
Tokens/sec per GPU | 1,000 | 7,500 | 60,000 aggregate |
Concepts/second | 1-5 | 20-50 | 50-100 |
Knowledge graph size (1 hr) | 100-500 nodes | 500-2000 nodes | 2000-5000 nodes |
Integration with Ai|oS Meta-Agents
from aios.autonomous_discovery import create_autonomous_discovery_action
# Security agent learns threat patterns
async def security_research(ctx):
    mission = "ransomware attack vectors cloud vulnerabilities"
    discovery = create_autonomous_discovery_action(
        mission,
        duration_hours=0.5
    )
    knowledge = await discovery()
    ctx.publish_metadata('security.threat_patterns', knowledge)
    # ActionResult is provided by the Ai|oS meta-agent framework
    return ActionResult(
        success=True,
        message=f"Discovered {knowledge['stats']['total_concepts']} threat patterns",
        payload=knowledge['stats']
    )
API Reference
Core APIs for quantum computing, ML algorithms, and autonomous discovery.
Quantum State Engine
Method | Parameters | Returns
---|---|---
hadamard(qubit) | qubit: int | None
cnot(control, target) | control: int, target: int | None
rx(qubit, angle) | qubit: int, angle: float | None
measure(shots) | shots: int | Dict[str, int]
expectation_value(observable) | observable: str | float
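The rotation gate rx is the one method in the table not exercised elsewhere in this guide; a quick check, assuming angles are in radians (the usual convention, though the table does not state units):
from aios.quantum_ml_algorithms import QuantumStateEngine
import numpy as np

qc = QuantumStateEngine(num_qubits=1)
qc.rx(0, np.pi / 2)            # quarter turn about X -> equal superposition
counts = qc.measure(shots=1000)
print(counts)                  # roughly 50/50 between the two outcomes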
ML Algorithms Catalog
from aios.ml_algorithms import get_algorithm_catalog
catalog = get_algorithm_catalog()
for name, info in catalog.items():
    print(f"{name}:")
    print(f"  Category: {info['category']}")
    print(f"  Complexity: {info['complexity']}")
    print(f"  Available: {info['available']}")
    print()
Examples
Example 1: VQE for Molecular Energy
from aios.quantum_ml_algorithms import QuantumVQE
# H2 molecule Hamiltonian (simplified)
def h2_hamiltonian(qc):
    # Coefficients from a chemistry calculation
    c1 = -1.0523  # ZZ term
    c2 = 0.3979   # Z term
    zz = qc.expectation_value('Z0*Z1')
    z = qc.expectation_value('Z0')
    return c1 * zz + c2 * z
# Optimize
vqe = QuantumVQE(num_qubits=2, depth=2)
energy, params = vqe.optimize(h2_hamiltonian, max_iter=100)
print(f"Ground state energy: {energy:.6f} Hartree")
# Expected: ~-1.137 Hartree for H2 at equilibrium
Example 2: Particle Filter for Tracking
from aios.ml_algorithms import AdaptiveParticleFilter
import numpy as np
# Track object with noisy GPS measurements
pf = AdaptiveParticleFilter(num_particles=500, state_dim=4, obs_dim=2)
# Simulate tracking over time
true_trajectory = []
estimated_trajectory = []
for t in range(100):
    # True state (unknown to the filter)
    true_pos = np.array([t * 0.1, np.sin(t * 0.1)])
    true_trajectory.append(true_pos)

    # Prediction step
    def transition(x):
        # Constant-velocity model
        x[:2] += x[2:] * 0.1  # position update
        return x

    pf.predict(transition_fn=transition, process_noise=0.01)

    # Noisy measurement
    measurement = true_pos + np.random.randn(2) * 0.1

    # Update step
    def likelihood(x, obs):
        diff = x[:2] - obs
        return np.exp(-np.sum(diff**2) / (2 * 0.01))

    pf.update(observation=measurement, likelihood_fn=likelihood)

    # Estimate
    estimate = pf.estimate()
    estimated_trajectory.append(estimate[:2])

# Calculate tracking error
error = np.mean([np.linalg.norm(true_p - est_p)
                 for true_p, est_p in zip(true_trajectory, estimated_trajectory)])
print(f"Average tracking error: {error:.4f}")
Example 3: Autonomous Discovery for Ai|oS
from aios.autonomous_discovery import AutonomousLLMAgent, AgentAutonomy
import asyncio

async def main():
    # Create a Level 4 agent
    agent = AutonomousLLMAgent(
        model_name="deepseek-r1",
        autonomy_level=AgentAutonomy.LEVEL_4
    )

    # Cycle 1: learn about quantum computing
    agent.set_mission("quantum computing fundamentals", duration_hours=0.5)
    await agent.pursue_autonomous_learning()

    # Cycle 2: the agent autonomously identifies gaps and continues
    agent.set_mission("quantum error correction codes", duration_hours=0.5)
    await agent.pursue_autonomous_learning()

    # Export the accumulated knowledge
    knowledge = agent.export_knowledge_graph()
    print(f"Total concepts: {knowledge['stats']['total_concepts']}")
    print(f"High confidence (>0.9): {knowledge['stats']['high_confidence_count']}")

    # Find the most important concepts
    sorted_concepts = sorted(
        knowledge['nodes'].items(),
        key=lambda x: x[1]['confidence'],
        reverse=True
    )
    print("\nTop 5 concepts:")
    for concept, data in sorted_concepts[:5]:
        print(f"  {concept}: {data['confidence']:.3f}")

asyncio.run(main())
Troubleshooting
Common Issues
PyTorch is not installed
pip install torch
For GPU support, visit pytorch.org for a CUDA-enabled build.

Simulation runs out of memory at high qubit counts
Statevector memory doubles with each added qubit. Consider using approximate backends (tensor network, MPS) or real quantum hardware.

VQE fails to converge
- Increase max_iter
- Increase circuit depth
- Adjust the learning rate
- Use a different parameter initialization

Autonomous discovery runs slowly
- Enable GPU acceleration
- Use distributed inference (multi-GPU)
- Enable prefill/decode disaggregation
- Reduce duration_hours for quick tests
Getting Help
If you encounter issues not covered here:
- Check the GitHub Issues
- Review example code in aios/examples/
- Read the source code documentation
- Contact the development team