Reorganize notes into tier1/ops docs
This commit is contained in:
parent
4989baf623
commit
fabace7616
5	docs/archive/README.md	Normal file

@@ -0,0 +1,5 @@
# Archive Notes

This directory holds legacy scratch notes and drafts moved from `notes/`.
They are kept for historical reference and should not be treated as
current specifications.
@@ -1,303 +0,0 @@

Below is a **clean, minimal, v0.1 draft** of the **ASL SOPS Bundle Layout**, designed to support:

* offline authority creation
* SystemRescue admission
* courtesy leasing
* future federation
* zero dependency on DNS or live infrastructure

This is a **transport + custody container**, not a runtime format.

---

# ASL-SOPS-BUNDLE v0.1

**Offline Authority & Admission Package**

---

## 1. Purpose

The ASL SOPS Bundle is a **sealed, offline-deliverable container** used to transport **authority material** into an ASL-HOST environment (e.g. SystemRescue) for:

* Domain admission
* Authority bootstrap
* Courtesy lease negotiation
* Initial artifact ingestion
* Disaster recovery / rescue

It is **not** used for runtime access or online key operations.

---

## 2. Design Principles

1. **Offline-first**
2. **Self-contained**
3. **Minimal trust surface**
4. **Explicit separation of authority vs policy**
5. **Human-inspectable before decryption**
6. **Machine-verifiable after decryption**

---

## 3. Container Format

* **Outer format**: SOPS-encrypted YAML or JSON
* **Encryption targets**:
  * age keys
  * PGP keys
  * hardware-backed keys (optional)
* **No runtime secrets required**

Filename convention (recommended):

```
asl-admission-<domain-id-short>.sops.yaml
```

---

## 4. High-Level Structure

```yaml
asl_sops_bundle:
  version: "0.1"
  bundle_id: <uuid>
  created_at: <iso8601>
  purpose: admission | rescue | recovery
  domain_id: <DomainID>
  contents:
    authority: ...
    policy: ...
    admission: ...
    optional:
      artifacts: ...
      notes: ...
  sops:
    ...
```

Only `contents.*` is encrypted.

---

## 5. Authority Section (Normative)

### 5.1 Root Authority

```yaml
authority:
  domain:
    domain_id: <DomainID>
    root_public_key:
      type: ed25519
      encoding: base64
      value: <base64>
    root_private_key:
      type: ed25519
      encoding: base64
      value: <base64>
    key_created_at: <iso8601>
```

Rules:

* Root private key **must never leave** this bundle
* Bundle should be destroyed after admission if possible
* Rotation handled in later versions

---

### 5.2 Authority Manifest (DAM)

Embedded verbatim:

```yaml
authority:
  dam:
    version: "0.1"
    domain_id: <DomainID>
    root_public_key: <repeat for integrity>
    issued_at: <iso8601>
    expires_at: <iso8601 | null>
    roles:
      - domain_root
    metadata:
      human_name: "personal-domain"
      dns_claim: null
```

---

### 5.3 DAM Signature

```yaml
authority:
  dam_signature:
    algorithm: ed25519
    signed_bytes: sha256
    signature: <base64>
```

Signature is over the canonical DAM encoding.
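As a sketch of the `signed_bytes: sha256` step, the digest a host would verify can be computed over a canonical encoding of the DAM. The canonicalization used below (sorted-key, compact JSON) is an assumption for illustration; the spec only requires *some* canonical DAM encoding, and the actual signature verification over this digest is left out.

```python
import hashlib
import json

def dam_signed_digest(dam: dict) -> bytes:
    """Digest over a canonical DAM encoding (illustrative choice:
    sorted-key, compact JSON; the real canonical form is whatever
    ASL-AUTH pins down)."""
    canonical = json.dumps(dam, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).digest()

dam = {
    "version": "0.1",
    "domain_id": "example-domain",
    "roles": ["domain_root"],
}
# Key order in the input must not matter for the digest.
reordered = {"roles": ["domain_root"], "version": "0.1",
             "domain_id": "example-domain"}
assert dam_signed_digest(dam) == dam_signed_digest(reordered)
```

The point of the canonical form is exactly this: two hosts serializing the same logical DAM must arrive at the same bytes before hashing.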
---

## 6. Policy Section

Defines **what this domain is asking for**.

```yaml
policy:
  policy_hash: <sha256>
  requested_capabilities:
    - store_blocks
    - publish_private_encrypted
    - import_foreign_artifacts
  requested_storage:
    max_blocks: 1_000_000
    max_bytes: 5TB
  federation:
    allow_inbound: false
    allow_outbound: true
```

Policy hash is used for:

* trust pinning
* replay protection
* lease validation

---

## 7. Admission Section

### 7.1 Admission Request

```yaml
admission:
  target_domain: <CommonDomainID>
  mode: courtesy | permanent
  intent: |
    Personal rescue operation.
    Data recovery from legacy laptop.
```

---

### 7.2 Courtesy Lease Request (Optional)

```yaml
admission:
  courtesy_lease:
    requested:
      duration_days: 180
      storage_bytes: 2TB
      encrypted_only: true
```

---

## 8. Optional Sections

### 8.1 Seed Artifacts (Optional, Rare)

Used only when absolutely necessary.

```yaml
optional:
  artifacts:
    - artifact_id: <hash>
      description: "Bootstrap provenance note"
      encoding: inline-base64
      content: <base64>
```

⚠️ Usually discouraged. Prefer ingestion post-admission.

---

### 8.2 Human Notes

```yaml
optional:
  notes: |
    Generated offline on Debian laptop.
    Destroy after successful admission.
```

---

## 9. Validation Rules (Host Side)

An ASL-HOST **must**:

1. Decrypt bundle explicitly
2. Verify DAM signature
3. Recompute DomainID from root public key
4. Match DomainID exactly
5. Verify policy hash
6. Confirm admission intent
7. Apply courtesy lease limits strictly

Failure at any step → **reject admission**
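The seven steps reduce to a fail-fast pipeline. A minimal sketch, with the cryptographic checks stubbed to booleans (all check names and bundle fields below are illustrative, not part of the spec):

```python
def validate_bundle(bundle: dict, checks: list) -> bool:
    """Run host-side admission checks in order; any failure rejects."""
    for check in checks:
        if not check(bundle):
            return False  # reject admission
    return True

# Stub checks standing in for steps 1-5; steps 6-7 would follow the
# same pattern.
checks = [
    lambda b: "authority" in b,                     # decrypted contents present
    lambda b: b.get("dam_signature_valid", False),  # step 2 (stubbed)
    lambda b: b.get("domain_id") == b.get("recomputed_domain_id"),  # steps 3-4
    lambda b: b.get("policy_hash_valid", False),    # step 5 (stubbed)
]

good = {"authority": {}, "dam_signature_valid": True,
        "domain_id": "d1", "recomputed_domain_id": "d1",
        "policy_hash_valid": True}
bad = dict(good, recomputed_domain_id="d2")
assert validate_bundle(good, checks) is True
assert validate_bundle(bad, checks) is False
```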
---

## 10. Security Posture

| Threat            | Mitigation                     |
| ----------------- | ------------------------------ |
| Bundle theft      | Encryption + offline use       |
| Replay            | Bundle ID + policy hash        |
| Identity spoofing | DomainID derived from root key |
| DNS abuse         | DNS is metadata only           |
| Key leakage       | Offline generation             |

---

## 11. Lifecycle Guidance

Recommended flow:

1. Generate bundle offline
2. Verify contents manually
3. Transfer via USB
4. Admit domain
5. Destroy bundle
6. Rotate keys if reused

---

## 12. Explicit Non-Goals

* Runtime authentication
* Online key access
* Federation syncing
* Artifact transport at scale

---

## 13. Summary

The ASL SOPS Bundle is:

* a **sealed envelope**
* carrying **identity + intent**
* enabling **trust without infrastructure**
* suitable for **rescue, bootstrap, and admission**
* fully compatible with ASL-HOST and federation

---

If you want next, we can:

* define **key rotation bundles**
* define **courtesy lease revocation**
* formalize **Common admission policies**
* or sketch the **SystemRescue UX flow**

Just point.
@@ -1,168 +0,0 @@

Here’s a clean **v0.1 draft for ASL-AUTH**, scoped to your system’s offline signing, domain authority, and certificate handling. This layer manages **trust, domain ownership, and PER signing**, while staying independent of storage or execution specifics.

---

# ASL-AUTH v0.1 – Authentication, Trust, and Domain Authority

## 1. Purpose

**ASL-AUTH** specifies **domain authority, identity, and offline trust management** for ASL-based systems.

Responsibilities:

* Define the **Domain Authority (DA) manifest**
* Manage **PER signing**
* Support **offline root certificates**
* Enable **foreign-domain trust pinning**
* Integrate with host (`ASL-HOST`) and store (`ASL-STORE(-INDEX)`)

It **does not define**:

* Storage encoding (ASL-STORE handles this)
* Artifact semantics (ASL-CORE)
* Execution semantics (PEL/TGK)

---

## 2. Core Concepts

| Term | Definition |
| --- | --- |
| **Domain** | Logical namespace with a unique ID and authority |
| **Domain Authority (DA)** | Offline certificate defining domain ownership and signing root |
| **PER** | PEL Execution Receipt; may be signed to certify artifact provenance |
| **Offline Root** | Trusted certificate created and signed offline; used to bootstrap trust |
| **Foreign-Domain Trust Pin** | Local configuration specifying which external domains to trust |
| **Policy Hash** | Digest summarizing signing, visibility, and federation rules |

---

## 3. Domain Authority Manifest

* Each domain must provide a manifest containing:
  * Domain ID (unique)
  * Root public key(s)
  * Offline root certificate fingerprint(s)
  * Allowed publishing targets
  * Trust policies
* Manifest may be **signed by offline root** or higher-level authority.
* Minimal format (example JSON):

```json
{
  "domain_id": "uuid-xxxx-xxxx",
  "roots": ["fingerprint1", "fingerprint2"],
  "allowed_publish_targets": ["domain-a", "domain-b"],
  "policy_hash": "sha256:..."
}
```
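A minimal structural check of such a manifest, using the field names from the example above (the specific validation rules are illustrative):

```python
import json

REQUIRED = ("domain_id", "roots", "allowed_publish_targets", "policy_hash")

def check_manifest(raw: str) -> dict:
    """Parse a DA manifest and verify the minimal shape shown above."""
    m = json.loads(raw)
    missing = [k for k in REQUIRED if k not in m]
    if missing:
        raise ValueError(f"manifest missing fields: {missing}")
    if not m["roots"]:
        raise ValueError("manifest must list at least one root fingerprint")
    if not m["policy_hash"].startswith("sha256:"):
        raise ValueError("policy_hash must name its digest algorithm")
    return m

m = check_manifest(json.dumps({
    "domain_id": "uuid-xxxx-xxxx",
    "roots": ["fingerprint1"],
    "allowed_publish_targets": ["domain-a"],
    "policy_hash": "sha256:abc",
}))
assert m["domain_id"] == "uuid-xxxx-xxxx"
```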
---

## 4. PER Signature Layout

Each signed PER contains:

| Field | Description |
| --- | --- |
| `canonical_id` | Unique identifier of PER artifact |
| `snapshot_id` | Snapshot the PER is bound to |
| `domain_id` | Signing domain |
| `signer_id` | Identity of signing authority |
| `logseq` | Monotonic sequence number for deterministic ordering |
| `signature` | Cryptographic signature over canonical PER data + policy hash |
| `policy_hash` | Digest of policy applied during signing |

* Signatures can use Ed25519, ECDSA, or RSA as required by domain policy.
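A sketch of assembling the bytes that get signed: the field order and newline separator below are assumptions, and the signature call itself is left to whichever scheme domain policy selects.

```python
import hashlib

def per_signing_bytes(per: dict) -> bytes:
    """Concatenate the signed PER fields in a fixed order with a
    newline separator (both illustrative choices), then hash."""
    fields = ("canonical_id", "snapshot_id", "domain_id",
              "signer_id", "logseq", "policy_hash")
    joined = "\n".join(str(per[f]) for f in fields)
    return hashlib.sha256(joined.encode("utf-8")).digest()

per = {"canonical_id": "per-001", "snapshot_id": "snap-9",
       "domain_id": "d1", "signer_id": "root-a",
       "logseq": 42, "policy_hash": "sha256:abc"}
digest = per_signing_bytes(per)
assert len(digest) == 32
# Any covered field change must change the digest.
assert per_signing_bytes(dict(per, logseq=43)) != digest
```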
---

## 5. Offline Roots & Trust

* **Offline roots** provide immutable, verifiable trust anchors.
* Recommended minimum: **2 roots per domain** (primary + backup)
* Host machine signs DA manifest using offline root before deploying store.
* Offline roots are **never exposed** to runtime environment.

---

## 6. Foreign-Domain Trust Pinning

* Pin trusted external domains locally to control which published artifacts can be imported.
* Configuration includes:
  * Domain ID
  * Allowed snapshot ranges
  * Policy hash for verification
* Enforces **read-only, immutable** cross-domain references.

---

## 7. Certificate & Signing Workflow

1. Generate **offline root** (offline machine, e.g., Debian VM)
2. Mint **domain authority certificates**
3. Create **policy hash** for signing rules
4. Deploy manifest + roots to ASL-HOST
5. At runtime:
   * Sign PERs with domain authority key
   * Verify foreign-domain pins before accepting imported artifacts

---

## 8. Policy Hash Contents

* Includes hash of:
  * Permitted snapshot range
  * Allowed publishing domains
  * Signing algorithm
  * Domain-specific constraints
* Ensures deterministic, verifiable policy at PER level

---

## 9. Integration with Other Layers

| Layer | Role with ASL-AUTH |
| --- | --- |
| ASL-HOST | Provides identity and secure key storage |
| ASL-STORE(-INDEX) | Provides artifact storage; PER signing hooks |
| PEL / TGK | Generates PERs for deterministic execution |
| Federation Layer | Enforces trust of foreign domains |

---

## 10. Security Considerations

* Offline roots must remain **offline** and **physically protected**
* Signing keys should be **rotated with policy update**
* Foreign-domain trust must be **explicitly pinned**
* Policy hash ensures **tamper detection** and **auditability**

---

## 11. Summary

**ASL-AUTH v0.1**:

* Defines domain ownership and trust
* Enables deterministic PER signing
* Supports offline root certificates
* Provides foreign-domain trust pinning
* Integrates cleanly with ASL-HOST, ASL-STORE, and PEL/TGK layers

---

I can now **draft a practical “Rescue Node ASL Deployment” diagram**, showing:

* Personal domain
* Common domain / Unity tree
* Foreign-domain trust pins
* How ASL-HOST + ASL-AUTH + ASL-STORE are instantiated

This would give a concrete picture for your old laptop recovery workflow.

Do you want me to do that next?
@@ -1,297 +0,0 @@

# ASL Block Architecture & Specification

## 1. Purpose and Scope

The **Artifact Storage Layer (ASL)** is responsible for the **physical storage, layout, and retrieval of immutable artifact bytes**.
ASL operates beneath CAS and above the storage substrate (ZFS).

ASL concerns itself with:

* Efficient packaging of artifacts into blocks
* Stable block addressing
* Snapshot-safe immutability
* Storage-local optimizations

ASL does **not** define:

* Artifact identity
* Hash semantics
* Provenance
* Interpretation
* Indexing semantics

---

## 2. Core Abstractions

### 2.1 Artifact

An **artifact** is an immutable byte sequence produced or consumed by higher layers.

ASL treats artifacts as opaque bytes.

---

### 2.2 ASL Block

An **ASL block** is the smallest independently addressable, immutable unit of storage managed by ASL.

Properties:

* Identified by an **ASL Block ID**
* Contains one or more artifacts
* Written sequentially
* Immutable once sealed
* Snapshot-safe

ASL blocks are the unit of:

* Storage
* Reachability
* Garbage collection

---

### 2.3 ASL Block ID

An **ASL Block ID** is an opaque, stable identifier.

#### Invariants

* Globally unique within an ASL instance
* Never reused
* Never mutated
* Does **not** encode:
  * Artifact size
  * Placement
  * Snapshot
  * Storage topology
  * Policy decisions

#### Semantics

Block IDs identify *logical blocks*, not physical locations.

Higher layers must treat block IDs as uninterpretable tokens.

---

## 3. Addressing Model

ASL exposes a single addressing primitive:

```
(block_id, offset, length) → bytes
```

This is the **only** contract between CAS and ASL.

Notes:

* `offset` and `length` are stable for the lifetime of the block
* ASL guarantees that reads are deterministic per snapshot
* No size-class or block-kind information is exposed
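A minimal in-memory model of this contract (class and method names are illustrative):

```python
class AslStore:
    """Toy model of the (block_id, offset, length) -> bytes contract.
    Sealed blocks are plain immutable byte strings."""

    def __init__(self):
        self._sealed: dict[int, bytes] = {}

    def put_sealed(self, block_id: int, data: bytes) -> None:
        assert block_id not in self._sealed  # block IDs are never reused
        self._sealed[block_id] = data

    def read(self, block_id: int, offset: int, length: int) -> bytes:
        block = self._sealed[block_id]
        if offset + length > len(block):
            raise ValueError("read beyond sealed block")
        return block[offset:offset + length]

store = AslStore()
store.put_sealed(7, b"artifact-a|artifact-b")
assert store.read(7, 0, 10) == b"artifact-a"
assert store.read(7, 11, 10) == b"artifact-b"
```

Note that the caller (CAS) supplies the offset and length; the store never interprets block contents.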
---

## 4. Block Allocation Model

### 4.1 Global Block Namespace

ASL maintains a **single global block namespace**.

Block IDs are allocated from a monotonically increasing sequence:

```
next_block_id := next_block_id + 1
```

Properties:

* Allocation is append-only
* Leaked IDs are permitted
* No coordination with CAS is required

---

### 4.2 Open Blocks

At any time, ASL may maintain one or more **open blocks**.

Open blocks:

* Accept new artifact writes
* Are not visible to readers
* Are not referenced by the index
* May be abandoned on crash

---

### 4.3 Sealed Blocks

A block becomes **sealed** when:

* It reaches an internal fill threshold, or
* ASL decides to finalize it for policy reasons

Once sealed:

* No further writes are permitted
* Offsets and lengths become permanent
* The block becomes visible to CAS
* The block may be referenced by index entries

Sealed blocks are immutable forever.
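The open → sealed lifecycle above might look like this in miniature (the fill threshold and naming are illustrative):

```python
class OpenBlock:
    """Accumulates artifact bytes; invisible to readers until sealed."""

    def __init__(self, block_id: int, fill_threshold: int = 64):
        self.block_id = block_id
        self.fill_threshold = fill_threshold
        self._buf = bytearray()
        self.offsets: list[tuple[int, int]] = []  # (offset, length) per artifact

    def append(self, artifact: bytes) -> tuple[int, int]:
        off = len(self._buf)
        self._buf += artifact
        self.offsets.append((off, len(artifact)))
        return off, len(artifact)

    @property
    def full(self) -> bool:
        return len(self._buf) >= self.fill_threshold

    def seal(self) -> bytes:
        """Freeze contents; offsets and lengths are now permanent."""
        return bytes(self._buf)

blk = OpenBlock(block_id=1, fill_threshold=8)
loc_a = blk.append(b"aaaa")
loc_b = blk.append(b"bbbbbb")
assert blk.full
sealed = blk.seal()
assert sealed[loc_b[0]:loc_b[0] + loc_b[1]] == b"bbbbbb"
```

The `(offset, length)` pairs returned at write time are exactly what CAS later feeds back into the read primitive.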
---

## 5. Packaging Policy (Non-Semantic)

ASL applies **packaging heuristics** when choosing how to place artifacts into blocks.

Examples:

* Prefer packing many small artifacts together
* Prefer isolating very large artifacts
* Avoid mixing vastly different sizes when convenient

### Important rule

Packaging decisions are:

* Best-effort
* Local
* Replaceable
* **Not part of the ASL contract**

No higher layer may assume anything about block contents based on artifact size.

---

## 6. Storage Layout and Locality

### 6.1 Single Dataset, Structured Locality

ASL stores all blocks within a **single ZFS dataset**.

Within that dataset, ASL may organize blocks into subpaths to improve locality, e.g.:

```
asl/blocks/dense/
asl/blocks/sparse/
```

These subpaths:

* Exist purely for storage optimization
* May carry ZFS property overrides
* Are not encoded into block identity

Block resolution does **not** depend on knowing which subpath was used.

---

### 6.2 Placement Hints

At allocation time, ASL may apply **placement hints**, such as:

* Preferred directory
* Write size
* Compression preference
* Recordsize alignment

These hints:

* Affect only physical layout
* May change over time
* Do not affect block identity or correctness

---

## 7. Snapshot Semantics

ASL is snapshot-aware but snapshot-agnostic.

Rules:

* ASL blocks live inside snapshot-capable storage
* Snapshots naturally pin sealed blocks
* ASL does not encode snapshot IDs into block IDs
* CAS determines snapshot visibility

ASL guarantees:

* Deterministic reads for a given snapshot
* No mutation of sealed blocks across snapshots

---

## 8. Crash Safety and Recovery

### 8.1 Crash During Open Block

If a crash occurs:

* Open blocks may be lost or abandoned
* Block IDs allocated but not sealed may be leaked
* No sealed block may be corrupted

This is acceptable and expected.

---

### 8.2 Recovery Rules

On startup, ASL:

* Scans for sealed blocks
* Ignores or cleans up abandoned open blocks
* Resumes allocation from the next unused block ID

No global replay or rebuild is required.

---

## 9. Garbage Collection

ASL performs garbage collection at **block granularity**.

Rules:

* A block is eligible for deletion if:
  * It is sealed, and
  * It is unreachable from all retained snapshots
* ASL does not perform partial block mutation
* Compaction (if any) rewrites artifacts into new blocks

Block deletion is irreversible.
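Block-granularity GC reduces to a set computation. A sketch with retained snapshots modeled as sets of referenced block IDs (the representation is illustrative):

```python
def collectable_blocks(sealed: set[int], snapshots: list[set[int]]) -> set[int]:
    """A sealed block is deletable iff no retained snapshot references it."""
    reachable = set().union(*snapshots) if snapshots else set()
    return sealed - reachable

sealed = {1, 2, 3, 4}
snapshots = [{1, 2}, {2, 3}]   # retained snapshots pin these blocks
assert collectable_blocks(sealed, snapshots) == {4}
# With no retained snapshots, every sealed block is collectable.
assert collectable_blocks(sealed, []) == sealed
```

There is deliberately no per-artifact case here: reachability and deletion operate on whole blocks only.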
---

## 10. Non-Goals (Explicit)

ASL explicitly does **not** provide:

* Artifact identity management
* Deduplication decisions
* Provenance interpretation
* Size-class semantics
* Execution semantics

Those concerns belong to CAS, PEL, and higher layers.

---

## 11. Design Summary (Executive)

* One block namespace
* One addressing model
* One read path
* Placement is an optimization
* Immutability is absolute
* Snapshots provide safety
* Size is a courtesy, not a contract
@@ -1,247 +0,0 @@

Great — let’s formalize it cleanly and minimally, in the same style as your other specs.

Below is a **normative artifact specification**. I’ll keep it tight, auditable, and compatible with ASL / PEL thinking.

---

# **ASL-COMMON-WITNESS-ROTATION v0.1**

## 1. Purpose

The **Witness Rotation Artifact (WRA)** authorizes a change of the active **Common Witness Emitter** while preserving:

* single linear Common history
* offline operation
* auditability
* fork prevention

The WRA is the **only mechanism** by which witness authority may change.

---

## 2. Scope and guarantees

This artifact:

* does **not** modify artifacts, blocks, or snapshots directly
* authorizes **who may emit the next Common snapshot**
* is snapshot-bound and immutable
* is evaluated **only by ASL-HOST instances handling the Common domain**

---

## 3. Definitions

| Term | Meaning |
| --- | --- |
| **Witness Emitter** | The domain authorized to emit the next `common@N+1` |
| **Witness Authority** | A domain whose key may endorse witness changes |
| **Quorum** | A threshold of valid endorsements |
| **Rotation Snapshot** | The first snapshot emitted under new witness authority |

---

## 4. Artifact identity

**Artifact type:** `asl.common.witness-rotation`
**Artifact key:** content-addressed (CAS)
**Visibility:** published (Common domain only)

---

## 5. Canonical structure (logical)

```yaml
artifact_type: asl.common.witness-rotation
version: 0.1

common_domain_id: <domain-id>

previous_snapshot:
  snapshot_id: common@N
  snapshot_hash: <hash>

rotation:
  old_witness:
    domain_id: <domain-id>
    pubkey_id: <key-id>

  new_witness:
    domain_id: <domain-id>
    pubkey_id: <key-id>

policy_ref:
  artifact_key: <common-policy-artifact>

reason: <utf8-string, optional>

endorsements:
  threshold: <uint>
  endorsements:
    - domain_id: <domain-id>
      pubkey_id: <key-id>
      signature: <bytes>
    - ...

created_at_logseq: <uint64>
```

---

## 6. Cryptographic requirements

### 6.1 Endorsement signature

Each endorsement signs **exactly**:

```
H(
  artifact_type
  || version
  || common_domain_id
  || previous_snapshot.snapshot_id
  || previous_snapshot.snapshot_hash
  || new_witness.domain_id
  || new_witness.pubkey_id
  || policy_ref.artifact_key
)
```

* Hash function: same as ASL block hash
* Signature scheme: per ASL-AUTH (e.g. Ed25519)
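A sketch of the endorsement digest, using sha256 as a stand-in for the ASL block hash and rendering `||` as length-prefixed concatenation. That framing is an assumption added here (an unframed concatenation would be ambiguous at field boundaries); the spec only fixes the field list and order.

```python
import hashlib

def endorsement_digest(wra: dict) -> bytes:
    """H over the fixed field list from section 6.1. Each field is
    length-prefixed before concatenation so no two field sequences
    can collide by shifting bytes across a boundary."""
    fields = [
        wra["artifact_type"],
        wra["version"],
        wra["common_domain_id"],
        wra["previous_snapshot"]["snapshot_id"],
        wra["previous_snapshot"]["snapshot_hash"],
        wra["rotation"]["new_witness"]["domain_id"],
        wra["rotation"]["new_witness"]["pubkey_id"],
        wra["policy_ref"]["artifact_key"],
    ]
    h = hashlib.sha256()
    for f in fields:
        raw = str(f).encode("utf-8")
        h.update(len(raw).to_bytes(4, "big") + raw)
    return h.digest()

wra = {
    "artifact_type": "asl.common.witness-rotation",
    "version": "0.1",
    "common_domain_id": "common-1",
    "previous_snapshot": {"snapshot_id": "common@7", "snapshot_hash": "h7"},
    "rotation": {"new_witness": {"domain_id": "d2", "pubkey_id": "k2"}},
    "policy_ref": {"artifact_key": "policy-art-1"},
}
assert len(endorsement_digest(wra)) == 32
```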
---

## 7. Validation rules (normative)

An ASL-HOST **MUST accept** a witness rotation artifact if and only if:

1. `previous_snapshot` matches the current trusted Common snapshot
2. All endorsement signatures are valid
3. Endorsing keys are authorized by the referenced policy
4. Endorsement count ≥ policy quorum threshold
5. `new_witness` is not revoked in policy
6. Artifact hash matches CAS key

Otherwise the artifact **MUST be rejected**.
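Rules 3 and 4 in isolation, with signature verification stubbed to a boolean (field names are illustrative):

```python
def quorum_met(endorsements: list, authorized: set, threshold: int) -> bool:
    """Count endorsements whose signature verifies AND whose key is
    authorized by policy; compare against the quorum threshold."""
    valid = [e for e in endorsements
             if e["signature_ok"] and e["pubkey_id"] in authorized]
    return len(valid) >= threshold

endorsements = [
    {"pubkey_id": "k1", "signature_ok": True},
    {"pubkey_id": "k2", "signature_ok": True},
    {"pubkey_id": "k9", "signature_ok": True},   # key not in policy
    {"pubkey_id": "k3", "signature_ok": False},  # bad signature
]
authorized = {"k1", "k2", "k3"}
assert quorum_met(endorsements, authorized, threshold=2) is True
assert quorum_met(endorsements, authorized, threshold=3) is False
```

Both filters matter: a valid signature from an unauthorized key counts for nothing, and vice versa.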
---

## 8. Application semantics

### 8.1 When applied

The WRA does **not** immediately advance Common.

It becomes effective **only when a snapshot is emitted** by `new_witness`.

---

### 8.2 Rotation snapshot

The first snapshot emitted by the new witness:

```
common@N+1
```

MUST include:

```yaml
witness:
  domain_id: <new_witness.domain_id>
  pubkey_id: <new_witness.pubkey_id>
  rotation_artifact: <artifact_key>
```

This snapshot:

* seals the rotation
* permanently records authority transfer
* invalidates emissions by the old witness

---

## 9. Failure and recovery semantics

### 9.1 Old witness emits after rotation

Such snapshots:

* MUST be rejected
* MUST NOT be indexed
* MAY be retained as foreign artifacts for audit

---

### 9.2 New witness fails before emitting

Rotation remains **pending**.

Quorum may:

* re-endorse another WRA
* or extend grace period (policy-defined)

No fork occurs.

---

## 10. Interaction with snapshots and CURRENT

* CURRENT resolution **MUST consider only the active witness**
* Snapshot ordering remains strictly monotonic
* Rotation artifacts do **not** affect artifact visibility

---

## 11. Minimal policy requirements

The referenced Common policy artifact MUST define:

```yaml
witness_policy:
  authorities:
    - domain_id
    - pubkey_id
  quorum_threshold: uint
  rotation_allowed: true
```

---

## 12. Security properties

This design guarantees:

* **No split-brain:** only one valid witness at a time
* **Offline survivability:** endorsements are asynchronous
* **Auditability:** every rotation is permanently recorded
* **Revocability:** compromised witnesses can be replaced

---

## 13. Why this fits ASL cleanly

* Rotation is an **artifact**
* Authority is **data**, not code
* History is **explicit**, not inferred
* No special network protocol required

---

## 14. One-sentence invariant (the one to remember)

> **A Common snapshot is valid if and only if its witness authority is provable from the previous snapshot.**

---

If you want next, we can:

* formalize the **Common Policy Artifact**
* define the **recovery-from-silence** artifact
* specify how ASL-HOST enforces witness authority at boot
* or walk through a concrete rotation with real keys and USB transfers

This piece is now solid.
@@ -1,286 +0,0 @@

Below is a **formal draft of ASL-INDEX-ACCEL**, written to lock down the acceleration model (filters, sharding, SIMD, routing keys) while *explicitly protecting semantics*. This document is intentionally crisp, normative, and future-proof.

---

# ASL-INDEX-ACCEL

### Index Acceleration, Routing, and Filtering Semantics

---

## 1. Purpose

ASL-INDEX-ACCEL defines the **acceleration mechanisms** used by ASL-based indexes, including:

* Routing keys
* Sharding
* Filters (Bloom, XOR, Ribbon, etc.)
* SIMD execution
* Hash recasting

This document **explicitly separates correctness from performance**.

> All mechanisms defined herein are **observationally invisible** to the semantic index defined by ASL-CORE-INDEX.

---

## 2. Scope

This specification applies to:

* Artifact indexes (ASL)
* Projection and graph indexes (e.g., TGK)
* Any index layered on ASL-CORE-INDEX semantics

It does **not** define:

* Artifact or edge identity
* Snapshot semantics
* Storage lifecycle
* Encoding details (see ENC-ASL-CORE-INDEX at `tier1/enc-asl-core-index.md`)

---

## 3. Canonical Key vs Routing Key

### 3.1 Canonical Key

The **Canonical Key** uniquely identifies an indexable entity.

Examples:

* Artifact: `ArtifactKey`
* TGK Edge: `CanonicalEdgeKey`

Properties:

* Defines semantic identity
* Used for equality, shadowing, and tombstones
* Stable and immutable
* Fully compared on index match

---

### 3.2 Routing Key

The **Routing Key** is a **derived, advisory key** used exclusively for acceleration.

Properties:

* Derived deterministically from canonical key and optional attributes
* May be used for:
  * Sharding
  * Filter construction
  * SIMD-friendly layouts
* MUST NOT affect index semantics
* MUST be verified by full canonical key comparison on match

Formal rule:

```
CanonicalKey determines correctness
RoutingKey determines performance
```

---

## 4. Filter Semantics

### 4.1 Advisory Nature

All filters are **advisory only**.

Rules:

* False positives are permitted
* False negatives are forbidden
* Filter behavior MUST NOT affect correctness

Formal invariant:

```
Filter miss ⇒ key is definitely absent
Filter hit  ⇒ key may be present
```
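The invariant can be exercised with a toy Bloom filter (all parameters illustrative): membership queries may report false positives, but a key that was added can never be reported absent.

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: k hash positions per key in an m-bit array."""

    def __init__(self, m: int = 256, k: int = 3):
        self.m, self.k = m, k
        self.bits = 0

    def _positions(self, key: bytes):
        for i in range(self.k):
            d = hashlib.sha256(i.to_bytes(1, "big") + key).digest()
            yield int.from_bytes(d[:4], "big") % self.m

    def add(self, key: bytes) -> None:
        for p in self._positions(key):
            self.bits |= 1 << p

    def may_contain(self, key: bytes) -> bool:
        # A single clear bit proves absence; all-set only *suggests* presence.
        return all(self.bits >> p & 1 for p in self._positions(key))

f = BloomFilter()
inserted = [b"artifact-1", b"artifact-2", b"artifact-3"]
for key in inserted:
    f.add(key)
# No false negatives: every inserted key must hit.
assert all(f.may_contain(k) for k in inserted)
```

Because a hit is only advisory, the index must still confirm every match with a full canonical key comparison, exactly as section 3.2 requires.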
---

### 4.2 Filter Inputs

Filters operate over **Routing Keys**, not Canonical Keys.

A Routing Key MAY incorporate:

* Hash of the Canonical Key
* Artifact type tag (`type_tag`, `has_typetag`)
* TGK edge type key
* Direction, role, or other immutable classification attributes

Absence of optional attributes MUST be encoded explicitly.

---

### 4.3 Filter Construction

* Filters are built only over **sealed, immutable segments**
* Filters are immutable once built
* Filter construction MUST be deterministic
* Filter state MUST be covered by segment checksums

---

## 5. Sharding Semantics

### 5.1 Observational Invisibility

Sharding is a **mechanical partitioning** of the index.

Invariant:

```
LogicalIndex = ⋃ all shards
```

Rules:

* Shards MUST NOT affect lookup results
* Shard count and boundaries may change over time
* Rebalancing MUST preserve lookup semantics

---

### 5.2 Shard Assignment

Shard assignment MAY be based on:

* Hash of the Canonical Key
* Routing Key
* Composite routing strategies

Shard selection MUST be deterministic per snapshot.
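A minimal sketch of deterministic, hash-based shard assignment (one of the permitted options above; names are hypothetical):

```python
import hashlib

def shard_for(canonical_key: bytes, shard_count: int) -> int:
    # Deterministic per snapshot: same key + same shard_count -> same shard.
    h = int.from_bytes(hashlib.sha256(canonical_key).digest()[:8], "big")
    return h % shard_count

def lookup_sharded(shards, canonical_key: bytes):
    # The logical index is the union of all shards; routing only picks
    # where to look, never what exists.
    return shards[shard_for(canonical_key, len(shards))].get(canonical_key)

# Toy sharded index with four shards.
shards = [dict() for _ in range(4)]
shards[shard_for(b"k1", 4)][b"k1"] = b"v1"
```

Rebalancing to a different shard count is just a rebuild: reassigning every key with the new count preserves the union and therefore the lookup semantics.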
---

## 6. Hashing and Hash Recasting

### 6.1 Hashing

Hashes MAY be used for:

* Routing
* Filtering
* SIMD layout

Hashes MUST NOT be treated as identity.

---

### 6.2 Hash Recasting

Hash recasting (changing hash functions or seeds) is permitted if:

1. It is deterministic
2. It does not change Canonical Keys
3. It does not affect index semantics

Recasting is equivalent to rebuilding acceleration structures.

---

## 7. SIMD Execution

SIMD operations MAY be used to:

* Evaluate filters
* Compare routing keys
* Accelerate scans

Rules:

* SIMD MUST operate only on immutable data
* SIMD MUST NOT short-circuit semantic checks
* SIMD MUST preserve deterministic behavior

---

## 8. Multi-Dimensional Routing Examples (Normative)

### 8.1 Artifact Index

* Canonical Key: `ArtifactKey`
* Routing Key components:

  * `H(ArtifactKey)`
  * `type_tag` (if present)
  * `has_typetag`

---

### 8.2 TGK Edge Index

* Canonical Key: `CanonicalEdgeKey`
* Routing Key components:

  * `H(CanonicalEdgeKey)`
  * `edge_type_key`
  * Direction or role (optional)

---

## 9. Snapshot Interaction

Acceleration structures:

* MUST respect snapshot visibility rules
* MUST operate over the same sealed segments visible to the snapshot
* MUST NOT bypass tombstones or shadowing

Snapshot cuts apply **after** routing and filtering.

---

## 10. Normative Invariants

1. Canonical Keys define identity and correctness
2. Routing Keys are advisory only
3. Filters may never introduce false negatives
4. Sharding is observationally invisible
5. Hashes are not identity
6. SIMD is an execution strategy, not a semantic construct
7. All acceleration is deterministic per snapshot

---

## 11. Non-Goals

This specification does not define:

* Specific filter algorithms
* Memory layout
* CPU instruction selection
* Encoding formats
* Federation policies

---

## 12. Summary

ASL-INDEX-ACCEL establishes a **strict contract**:

> *All acceleration exists to make the index faster, never different.*

By formalizing Canonical vs Routing keys and explicitly constraining filters, sharding, hashing, and SIMD, this document ensures that:

* Performance optimizations scale to billions of entries
* Semantics remain stable and provable
* Future optimizations cannot silently break correctness

---

If you want, the **next natural step** would be to:

* Draft **TGK-INDEX** using this exact framework
* Or revise **ASL-CORE-INDEX** to reference ASL-INDEX-ACCEL normatively

Both would lock the model completely.

@ -1,318 +0,0 @@
Great, this is the right moment to freeze something *small, sharp, and survivable*.

Below is a **clean, minimal, boring-on-purpose** definition of a **Start Log Language v0.1** that:

* Matches ASL-STORE-INDEX precisely
* Keeps one authoritative log
* Avoids over-specifying future policy / federation
* Is stable enough to build against now
* Leaves room for evolution without breakage

I’ll call it **ASL-LOG v0.1**.

---

# ASL-LOG v0.1

**Append-Only Semantic Log for ASL Domains**

---

## 1. Purpose

ASL-LOG defines the **authoritative, append-only log** for an ASL domain.

The log records **semantic commits** that affect:

* Artifact visibility
* Index segment visibility
* Policy (e.g. tombstones)
* Authority state
* Snapshot anchoring

The log is the **sole source of truth** for reconstructing CURRENT state.

---

## 2. Core Properties (Normative)

An ASL log **must** be:

1. **Append-only**
2. **Strictly ordered**
3. **Deterministically replayable**
4. **Hash-chained**
5. **Snapshot-anchorable**
6. **Binary encoded**
7. **Forward-compatible**

---

## 3. Log Model

### 3.1 Log Sequence

Each record has a monotonically increasing `logseq`:

```
logseq: uint64
```

* Assigned by the domain authority
* Total order within a domain
* Never reused

---

### 3.2 Hash Chain

Each record commits to the previous record:

```
record_hash = H(prev_record_hash || record_type || payload)
```
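The chain rule above can be sketched as follows (SHA-256 and a 4-byte little-endian `record_type` encoding are assumptions consistent with the v0.1 envelope; the all-zero genesis hash is illustrative):

```python
import hashlib

def record_hash(prev_hash: bytes, record_type: int, payload: bytes) -> bytes:
    # H(prev_record_hash || record_type || payload)
    h = hashlib.sha256()
    h.update(prev_hash)
    h.update(record_type.to_bytes(4, "little"))
    h.update(payload)
    return h.digest()

def verify_chain(records) -> bool:
    # records: iterable of (record_type, payload, record_hash) in log order.
    prev = b"\x00" * 32  # illustrative genesis value
    for rtype, payload, rhash in records:
        if record_hash(prev, rtype, payload) != rhash:
            return False  # tamper detected
        prev = rhash
    return True

# Build a tiny two-record chain.
prev = b"\x00" * 32
log = []
for rtype, payload in [(0x01, b"seal-seg-1"), (0x10, b"tombstone-42")]:
    h = record_hash(prev, rtype, payload)
    log.append((rtype, payload, h))
    prev = h
```

Because each hash commits to its predecessor, altering any earlier payload invalidates every later record.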
This enables:

* Tamper detection
* Witness signing
* Federation verification

---

## 4. Record Envelope (v0.1)

All log records share a common envelope.

```c
struct asl_log_record_v1 {
    uint64_t logseq;
    uint32_t record_type;
    uint32_t payload_len;
    uint8_t  payload[payload_len]; // schematic: variable-length field
    uint8_t  record_hash[32];      // e.g. SHA-256
};
```
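A sketch of encoding and decoding this envelope (Python is used for illustration; field packing follows the little-endian note below, and the hash preimage follows §3.2’s formula — both choices are this sketch’s assumptions where v0.1 leaves details open):

```python
import hashlib
import struct

def encode_record(logseq: int, record_type: int,
                  payload: bytes, prev_hash: bytes) -> bytes:
    # Little-endian: uint64 logseq, uint32 record_type, uint32 payload_len.
    head = struct.pack("<QII", logseq, record_type, len(payload))
    # record_hash = H(prev_record_hash || record_type || payload), per §3.2.
    rhash = hashlib.sha256(prev_hash
                           + record_type.to_bytes(4, "little")
                           + payload).digest()
    return head + payload + rhash

def decode_record(buf: bytes):
    logseq, rtype, plen = struct.unpack_from("<QII", buf, 0)
    payload = buf[16:16 + plen]          # header is 8 + 4 + 4 = 16 bytes
    rhash = buf[16 + plen:16 + plen + 32]
    return logseq, rtype, payload, rhash

rec = encode_record(1, 0x01, b"segment-1", b"\x00" * 32)
```

The fixed 16-byte header makes the record self-delimiting, so a reader can skip unknown record types without understanding their payloads.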
Notes:

* Encoding is little-endian
* `record_hash` hashes the full record except itself
* The hash algorithm is fixed for v0.1

---

## 5. Record Types (v0.1)

### 5.1 SEGMENT_SEAL (Type = 0x01)

**The most important record in v0.1.**

Declares an index segment visible.

```c
struct segment_seal_v1 {
    uint64_t segment_id;
    uint8_t  segment_hash[32];
};
```

Semantics:

> From this `logseq` onward, the referenced index segment is visible
> for lookup and replay.

Rules:

* The segment must be immutable
* All referenced blocks must already be sealed
* Segment contents are not re-logged

---

### 5.2 ARTIFACT_PUBLISH (Type = 0x02) (Optional in v0.1)

Marks an artifact as published.

```c
struct artifact_publish_v1 {
    uint64_t artifact_key;
};
```

Semantics:

* Publication is domain-local
* Federation layers may interpret this

---

### 5.3 ARTIFACT_UNPUBLISH (Type = 0x03) (Optional in v0.1)

Withdraws publication.

```c
struct artifact_unpublish_v1 {
    uint64_t artifact_key;
};
```

---

### 5.4 TOMBSTONE (Type = 0x10)

Declares an artifact inadmissible under domain policy.

```c
struct tombstone_v1 {
    uint64_t artifact_key;
    uint32_t scope;       // e.g. EXECUTION, INDEX, PUBLICATION
    uint32_t reason_code; // opaque to ASL-LOG
};
```

Semantics:

* Does not delete data
* Shadows prior visibility
* Applies from this `logseq` onward

---

### 5.5 TOMBSTONE_LIFT (Type = 0x11)

Supersedes a previous tombstone.

```c
struct tombstone_lift_v1 {
    uint64_t artifact_key;
    uint64_t tombstone_logseq;
};
```

Rules:

* Must reference an earlier TOMBSTONE
* Does not erase history
* Only affects CURRENT at or after this `logseq`

---

### 5.6 SNAPSHOT_ANCHOR (Type = 0x20)

Binds semantic state to a filesystem snapshot.

```c
struct snapshot_anchor_v1 {
    uint64_t snapshot_id;
    uint8_t  root_hash[32]; // hash of snapshot-visible state
};
```

Semantics:

* Defines a replay checkpoint
* Enables log truncation below the anchor (with care)

---

### 5.7 DOMAIN_AUTH_UPDATE (Type = 0x30) (Optional in v0.1)

Updates trusted domain authorities.

```c
struct domain_auth_update_v1 {
    uint8_t  cert_hash[32];
    uint32_t action; // ADD or REVOKE
};
```

---

## 6. Replay Semantics (Normative)

To reconstruct CURRENT:

1. Load the latest snapshot anchor (if any)
2. Initialize visible segments from the snapshot
3. Replay all log records with `logseq > snapshot.logseq`
4. Apply records in order:

   * SEGMENT_SEAL → add segment
   * TOMBSTONE → update policy state
   * TOMBSTONE_LIFT → override policy
   * PUBLISH → update visibility metadata

Replay **must be deterministic**.
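The replay procedure above can be sketched as (record payloads are shown as dicts for clarity; type codes are taken from §5, and the state shape is this sketch’s assumption):

```python
def replay(snapshot: dict, log_records) -> dict:
    """Reconstruct CURRENT deterministically from a snapshot + log suffix.

    snapshot: {"logseq": int, "segments": set of sealed segment ids}
    log_records: iterable of (logseq, record_type, payload) in log order
    """
    segments = set(snapshot["segments"])
    tombstones = {}   # artifact_key -> logseq of the active tombstone
    published = set()
    for logseq, rtype, payload in log_records:
        if logseq <= snapshot["logseq"]:
            continue                     # already folded into the snapshot
        if rtype == 0x01:                # SEGMENT_SEAL
            segments.add(payload["segment_id"])
        elif rtype == 0x02:              # ARTIFACT_PUBLISH
            published.add(payload["artifact_key"])
        elif rtype == 0x10:              # TOMBSTONE
            tombstones[payload["artifact_key"]] = logseq
        elif rtype == 0x11:              # TOMBSTONE_LIFT
            tombstones.pop(payload["artifact_key"], None)
        # Unknown record types are skipped, never fatal (see §9).
    return {"segments": segments, "tombstones": tombstones,
            "published": published}
```

Because records are applied in `logseq` order and unknown types are ignored, two replicas replaying the same prefix always converge on the same state.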
---

## 7. Index Interaction

* Index segments contain index entries
* The log never records individual index entries
* Visibility is controlled solely by SEGMENT_SEAL
* Index rebuild = scan visible segments + apply policy

---

## 8. Garbage Collection Constraints

* A segment may be GC’d only if:

  * No snapshot references it
  * No log replay ≤ CURRENT requires it
* Log truncation is only safe at SNAPSHOT_ANCHOR boundaries

---

## 9. Versioning & Extensibility

* Unknown `record_type`s:

  * Must be skipped
  * Must not break replay
* Payloads are opaque outside their type
* v0.1 fixes:

  * The hash algorithm
  * Endianness
  * Core record types

Future versions may:

* Add new record types
* Add optional fields (new struct versions)
* Introduce compression or batching

---

## 10. What v0.1 Deliberately Does *Not* Include

* Federation protocols
* Network replication
* Witness signatures
* Block-level events
* Hydration / eviction
* Execution receipts (PERs)

Those live *above* ASL-LOG.

---

## 11. One-Sentence Invariant (keep this)

> **If it affects visibility, admissibility, or authority, it goes in the log.
> If it affects layout or performance, it does not.**

---

If you want next steps, the natural continuations are:

* **ASL-LOG test vectors**
* **A minimal C writer/reader API**
* **Formal CURRENT reconstruction pseudocode**
* **Mapping ASL-LOG → federation streams**

But v0.1 as written is *enough to build real systems against* without regret.
@ -1,184 +0,0 @@

Yes — and that’s a very reasonable moment to do it.

Below is a **compact, structured condensation** of the *actual conclusions* reached, stripped of detours, phrased so you can paste it into a new document or start a new thread cleanly.

---

# ASL / PEL / Domain Model — Compact Working Summary

## 1. Domains (Git-like, clarified)

* **Common**
  A normal domain. Holds its own log, index, and store. Acts as a shared “main branch”.

* **root.alice**
  Alice’s *public-facing* sub-domain.

  * Sub-domain of **Common**
  * Also a sub-domain of **alice**
  * The only place Alice publishes to Common
  * Has its own log/index/store
  * May publish snapshot hashes to Common (existence proofs)

* **alice (private)**
  Alice’s private working domain.

  * Never visible to Common
  * Contains secrets, experiments, raw data
  * Artifacts move outward only by explicit copy/promote into `root.alice`

Mental model:

```
alice (private working tree)
  → root.alice (publishable branch)
      → common (main branch)
```

Groups are just domains under Common with multiple authorities.

---

## 2. Logs & Indexes (key clarification)

* There is **one authoritative append-only log per domain**.
* The **ASL log records index-segment events**, not individual artifact bytes.
* The **index is reconstructed** as:

  ```
  snapshot + replay(log)
  ```
* No “second log” is needed:

  * Index segments are immutable
  * The log references sealed segments
  * Blocks are never logged semantically

---

## 3. Blocks (demoted correctly)

* **Blocks are NOT semantic units**
* Blocks are **storage-only atoms**
* Blocks:

  * may contain many artifacts
  * may contain one artifact
  * may be regenerated
  * may differ across domains

**Publication never publishes blocks.**
Only index entries define visibility.

This avoids leaks like:

> secret + recipe accidentally in the same block

---

## 4. Artifacts & Storage

* Artifacts are defined by **identity (hash)**, not storage.
* Storage may:

  * pack artifacts
  * encrypt artifacts
  * discard artifacts
  * recompute artifacts
* Index entries may point to:

  * stored blocks
  * encrypted blocks
  * virtual (recomputable) locations

Blocks never cross trust boundaries implicitly.

---

## 5. PEL (major conclusion)

### There is only **ONE** PEL.

There is no meaningful split between “PEL-S” and “PEL-P”.

PEL is:

> A deterministic, snapshot-bound, authority-aware derivation language that maps artifacts → artifacts.

Key points:

* Any PEL output may be:

  * stored
  * cached
  * discarded
  * recomputed
* Authority & provenance always apply
* “Semantic vs physical” is **store policy**, not language design

---

## 6. Deduplication (important insight)

Deduplication is **semantic**, not block-based.

The system may track:

```
Artifact A
 ├─ derived via PEL program X
 ├─ derived via PEL program Y
 └─ imported from domain Z
```

This enables:

* semantic equivalence
* alternate provenance paths
* stronger trust reasoning

Blocks follow later as an optimization.

---

## 7. Publication semantics

* Domains never “push blocks”.
* Domains publish:

  * index entries
  * optional snapshot hashes
* Common indexes `root.alice` like any other sub-domain.
* Proof-of-existence is achieved by publishing snapshot hashes, not data.

---

## 8. Mental hierarchy (final)

```
PEL (pure derivation)
        ↓
Artifacts (semantic facts)
        ↓
Index (visibility + truth)
        ↓
Store policy (encrypt / pack / discard / cache)
        ↓
Blocks (pure implementation detail)
```

---

## One-sentence takeaway

> **PEL defines truth derivation, indexes define visibility, stores decide materialization, and blocks are just a packing trick.**

---

If you want, next time we can:

* restate this as a **single clean spec**,
* or start fresh from **“ASL vNext minimal principles”**,
* or jump directly into **implementation decisions** without re-deriving theory.

You’re at a very solid conceptual checkpoint now.
@ -1,314 +0,0 @@

Great — this is the **right moment** to define it, because everything underneath (artifacts, DAGs, indexes, snapshots, logs) is now solid.

What follows is a **clean, minimal, *normative* ASL-STORE put/get contract** that:

* Sits *between* ASL-CORE / ASL-CORE-INDEX and any concrete storage engine
* Is compatible with your snapshot + log semantics
* Does **not** assume classical vs quantum
* Makes caching, deduplication, and replay possible
* Avoids over-specifying performance or layout

Think of this as **the membrane between semantics and mechanics**.

---

# ASL-STORE — Put/Get Contract (Normative)

## 1. Purpose

ASL-STORE defines the **operational contract** by which:

* Artifacts are **materialized and stored**
* Artifact content becomes **visible** via the ASL-CORE-INDEX
* Stored content is **retrieved deterministically**

ASL-STORE answers exactly two questions:

> **PUT**: How does an artifact become stored and indexed?
> **GET**: How are bytes retrieved once indexed?

Nothing more.

---

## 2. Scope

ASL-STORE defines:

* The **PUT lifecycle**
* The **GET lifecycle**
* Required interactions with:

  * The Content Index (ASL-CORE-INDEX)
  * The Structural DAG
  * The Materialization Cache
* Visibility and determinism rules

ASL-STORE does **not** define:

* Block allocation strategy
* File layout
* IO APIs
* Concurrency primitives
* Caching policies
* Garbage collection
* Replication mechanics

---

## 3. Actors and Dependencies

ASL-STORE operates in the presence of:

* **Artifact DAG** (SID-addressed)
* **Materialization Cache** (`SID → CID`, optional)
* **Content Index** (`CID → ArtifactLocation`)
* **Block Store** (opaque byte storage)
* **Snapshot + Log** (for index visibility)

ASL-STORE **must not** bypass the Content Index.

---

## 4. PUT Contract

### 4.1 PUT Signature (Semantic)

```
put(artifact) → (CID, IndexState)
```

Where:

* `artifact` is an ASL artifact (possibly lazy, possibly quantum)
* `CID` is the semantic content identity
* `IndexState = (SnapshotID, LogPosition)` after the put

---

### 4.2 PUT Semantics (Step-by-step)

The following steps are **logically ordered**.
An implementation may optimize, but may not violate the semantics.

---

#### Step 1 — Structural registration (mandatory)

* The artifact **must** be registered in the Structural Index (SID → DAG).
* If an identical SID already exists, it **must be reused**.

> This guarantees derivation identity independent of storage.

---

#### Step 2 — CID resolution (lazy, cache-aware)

* If `(SID → CID)` exists in the Materialization Cache:

  * Use it.
* Otherwise:

  * Materialize the artifact DAG
  * Compute the CID
  * Cache `(SID → CID)`

> Materialization may recursively invoke child artifacts.

---

#### Step 3 — Deduplication check (mandatory)

* Look up `CID` in the Content Index at CURRENT.
* If an entry exists:

  * **No bytes are written**
  * **No new index entry is required**
  * PUT completes successfully

> This is **global deduplication**.

---

#### Step 4 — Physical storage (conditional)

If no entry exists yet:

* Bytes corresponding to `CID` **must be written** to a block
* A concrete `ArtifactLocation` is produced:

```
ArtifactLocation = Sequence[BlockSlice]

BlockSlice = (BlockID, offset, length)
```

No assumptions are made about block layout.

---

#### Step 5 — Index mutation (mandatory)

* Append a **PUT log entry** to the Content Index:

```
CID → ArtifactLocation
```

* The entry is **not visible** until the log position is ≤ CURRENT.

> This is the *only* moment storage becomes visible.
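The PUT steps can be sketched end to end (a toy in-memory model: SHA-256 as the CID, a Python list as the block store, and immediate visibility are simplifying assumptions of this sketch, not part of the contract):

```python
import hashlib

def put(artifact_bytes: bytes, index: dict, blocks: list, log: list) -> str:
    """Idempotent PUT: dedup by CID, write bytes once, log the mapping."""
    cid = hashlib.sha256(artifact_bytes).hexdigest()
    if cid in index:                      # Step 3: global deduplication
        return cid                        # no bytes written, no new entry
    block_id = len(blocks)                # Step 4: physical storage
    blocks.append(artifact_bytes)
    location = [(block_id, 0, len(artifact_bytes))]   # one BlockSlice
    log.append(("PUT", cid, location))    # Step 5: index mutation via log
    index[cid] = location
    return cid

index, blocks, log = {}, [], []
cid = put(b"hello", index, blocks, log)
put(b"hello", index, blocks, log)         # idempotent: no second write
```

Repeating the PUT changes nothing: the block store and log each still hold exactly one entry.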
---

### 4.3 PUT Guarantees

After a successful PUT:

* The artifact’s CID:

  * Is stable
  * Is retrievable
  * Will resolve to immutable bytes
* The Content Index state:

  * Advances monotonically
  * Is replayable
* Repeating PUT with the same artifact:

  * Is idempotent

---

## 5. GET Contract

### 5.1 GET Signature (Semantic)

```
get(CID, IndexState?) → bytes | NOT_FOUND
```

Where:

* `CID` is the content identity
* `IndexState` is optional:

  * Defaults to CURRENT
  * May specify `(SnapshotID, LogPosition)`

---

### 5.2 GET Semantics

1. Resolve `CID → ArtifactLocation` using:

   ```
   Index(snapshot, log_prefix)
   ```
2. If no entry exists:

   * Return `NOT_FOUND`
3. Otherwise:

   * Read exactly `length` bytes from each `(BlockID, offset)`
   * Return the bytes **verbatim**

No interpretation is applied.
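A matching GET sketch over the same toy in-memory model (names are hypothetical; multi-slice locations are joined in order, and `None` stands in for `NOT_FOUND`):

```python
def get(cid: str, index: dict, blocks: list):
    """Resolve CID -> ArtifactLocation, then read the slices verbatim."""
    location = index.get(cid)
    if location is None:
        return None                     # NOT_FOUND
    return b"".join(
        blocks[block_id][offset:offset + length]
        for block_id, offset, length in location
    )

# Toy state: one block whose middle 5 bytes belong to "cid-1".
blocks = [b"xxhelloyy"]
index = {"cid-1": [(0, 2, 5)]}
```

GET performs no materialization and no mutation: it is a pure read against a resolved location.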
---

### 5.3 GET Guarantees

* Returned bytes are:

  * Immutable
  * Deterministic
  * Content-addressed
* GET never triggers materialization
* GET never mutates state

---

## 6. Visibility Rules

An index entry is visible **if and only if**:

1. The referenced block is sealed
2. The log entry position is ≤ the CURRENT log position
3. The snapshot + log prefix includes the entry

ASL-STORE must respect these rules strictly.

---

## 7. Failure Semantics (Minimal)

ASL-STORE must guarantee:

* No visible index entry points to missing or mutable bytes
* Partial writes must not become visible
* Replaying snapshot + log after a crash yields a valid index

No stronger guarantees are required at this level.

---

## 8. Determinism Contract

For fixed:

* SnapshotID
* LogPosition
* Artifact DAG
* Hash algorithms

ASL-STORE guarantees:

* The same CID
* The same ArtifactLocation resolution
* The same GET results

This is the foundation for:

* Federation
* Replication
* Provenance
* Certification

---

## 9. Relationship to q-bits (Explicit)

ASL-STORE:

* Is **agnostic** to q-bits
* Sees only:

  * CIDs
  * Bytes
* Does not care how CIDs were derived

Quantum artifacts affect **materialization**, not storage semantics.

---

## 10. Summary (Mental Model)

You can summarize ASL-STORE like this:

> **PUT** = “Make this artifact real, once, forever”
> **GET** = “Given identity and time, give me the bytes”

Everything else is someone else’s problem — by design.

---

If you want, next we can:

* define **crash consistency boundaries**
* add **federated PUT/GET**
* define **certificates binding DAG → CID → Snapshot**
* show **how LLM outputs become artifacts**
* or formalize **overlay indexes** (concepts, domains, ontologies)

This layer is now solid enough to build *anything* above it.
@ -1,290 +0,0 @@

Yes — and that is not only reasonable, it’s *structurally elegant*.

What you’re converging on is:

> **“Any node can become a system”** — including the *authority node* itself.

Let’s make this precise and safe.

---

## 1. Key Insight

> The **Domain Authority Node (DAN)** should be bootstrapped **exactly like a SystemRescue node**,
> but with **authority-only capabilities enabled** and **no federation by default**.

Same mechanics, different policy.

This symmetry matters a lot.

---

## 2. Treat the Authority Node as a First-Class ASL-HOST

### Properties of the Domain Authority Node

* Boots from a known image (like SystemRescue)
* Has:

  * An ASL-HOST layout
  * An internal ASL store
  * An append-only log
  * Snapshot support (even if trivial)
* Runs **offline**
* Never federates automatically
* Never publishes blocks unless explicitly authorized

This means:

* keys
* DAMs
* policies
* environment claims

…are all **just artifacts**.

No magic.

---

## 3. Minimal Internal Store for the Authority Node

The authority node needs only a **tiny ASL subset**:

### Required

* Block store
* Artifact index
* Append-only log
* Snapshot marker

### Not Required

* SIMD
* TGK traversal
* Federation
* GC beyond “delete nothing”

Think of it as:

> **ASL in conservation mode**

---

## 4. Why This Is the Right Model

### 4.1 Self-Consistency

If the authority node uses ASL:

* Authority artifacts are immutable
* Provenance exists from day zero
* Receipts can be generated later

You can say:

> “This domain authority was generated under snapshot X, log Y.”

That’s extremely powerful.

---

### 4.2 No Special Code Paths

There is no:

* “special authority filesystem”
* “magic cert directory”
* “out-of-band signing tool”

Everything is:

* artifacts
* policies
* receipts

This keeps the system honest.

---

## 5. Bootstrapping Flow (Concrete)

### Step 0 — Boot Image

* Boot the **ASL-Authority image**

  * (SystemRescue + authority tools)
  * Network disabled
  * Storage ephemeral or encrypted USB

---

### Step 1 — Initialize the Local ASL-HOST

```text
/asl-host
  /domains
    /<new-domain-id>
      /store
      /log
      /snapshots
```

No federation. Local only.

---

### Step 2 — Generate the Root Key Artifact

* Keypair generated
* Stored as:

  * a public key artifact
  * an encrypted private key artifact (local only)
* DomainID derived

---

### Step 3 — Write the DAM Artifact

The DAM is written as an artifact into the local store.

---

### Step 4 — Write the Authority Environment Claim Artifact

Also stored locally.

---

### Step 5 — Seal the Snapshot

Create a snapshot:

> **“Domain Authority Genesis Snapshot”**

Everything up to here is immutable.

---

### Step 6 — Export the Admission Bundle

A PEL program runs:

* inputs:

  * the DAM artifact
  * the environment claim artifact
  * the policy artifact
* output:

  * a SOPS bundle artifact

That output is exported.

This is **receipt-producing**, not ad-hoc scripting.
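The ceremony can be sketched as deterministic, artifact-producing steps (all names and payload shapes are hypothetical; real key generation and SOPS encryption are elided):

```python
import hashlib
import json

def store_artifact(store: dict, payload: dict) -> str:
    # Everything that matters is an artifact: content-addressed, immutable.
    data = json.dumps(payload, sort_keys=True).encode()
    aid = hashlib.sha256(data).hexdigest()
    store[aid] = data
    return aid

store = {}
# Step 2: key artifacts (key material elided in this sketch).
pubkey_id = store_artifact(store, {"kind": "pubkey", "key": "…"})
# Steps 3-4: DAM and environment claim, referencing prior artifacts by id.
dam_id = store_artifact(store, {"kind": "dam", "authority": pubkey_id})
claim_id = store_artifact(store, {"kind": "env-claim", "offline": True})
# Step 6: the admission bundle names its inputs by identity.
bundle_id = store_artifact(store, {"kind": "sops-bundle",
                                   "inputs": [dam_id, claim_id]})
```

Because the encoding is canonical (sorted keys), rerunning a step over the same inputs yields the same artifact id, which is what makes the ceremony replayable.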
---

## 6. Why This Beats “Just a Laptop”

Because now:

* the authority machine is *auditable*
* the ceremony is *recorded*
* the result is *deterministic*

And still:

* no TPM
* no network
* no registry

---

## 7. Internal Store Security Posture

You can be strict or lax:

| Mode          | Behavior                      |
| ------------- | ----------------------------- |
| Ephemeral     | RAM-backed store, destroyed   |
| Encrypted USB | LUKS + ZFS                    |
| WORM          | Never delete, manual handling |

All are valid.

ASL doesn’t care.

---

## 8. Relationship to Rescue Nodes

| Aspect                 | Rescue Node | Authority Node |
| ---------------------- | ----------- | -------------- |
| Boots from known image | yes         | yes            |
| Has ASL store          | yes         | yes            |
| Offline                | often       | always         |
| Publishes              | rarely      | never          |
| Federation             | no          | no             |
| Authority keys         | no          | yes            |

Same skeleton. Different organs.

---

## 9. Important Constraint

> **The authority node must never be required for validation.**

Once the bundle is exported:

* it stands alone
* verification does not consult the authority node

If you lose the authority machine, the domain still exists.

---

## 10. Philosophical Alignment

You described:

> “Standing at the river with a printing press.”

What you’re doing here is:

> **Printing the press itself as an artifact.**

That’s the right kind of recursion.

---

## 11. What This Enables Later

Because the authority node is ASL-native, you can later:

* publish historical authority artifacts
* rotate keys with receipts
* prove domain lineage
* federate authority domains

All without rewriting assumptions.

---

## 12. One-Line Rule

> **If it matters, it should be an artifact.
> If it’s an artifact, it should live in ASL.**

You’re converging on a system with very few “special cases” — which is exactly why it will scale conceptually.

If you want next, we can:

* draft the **ASL-AUTH-HOST profile**
* specify **authority-only PEL programs**
* define **authority key rotation as a PEL DAG**
* or draw the **exact on-disk layout for authority vs rescue nodes**

Just say the word.
@ -1,371 +0,0 @@
|
|||
This is the right moment to pin this down.

What follows is a **complete, staged, minimal certificate story** that fits *exactly* with what you’ve built so far — no PKI bloat, no premature abstraction, and no magical trust.

I’ll walk it in **time order**, because *when* certificates exist matters more than *what* they are.

---

# Certificate Lifecycle in ASL (Walkthrough)

## 0. First principle (anchor this)

> **Certificates do not create authority.
> They record it.**

Authority already exists because:

* someone controls a machine
* someone controls a domain
* someone makes a decision

Certificates make that **legible and replayable**.

---

## 1. Phase 0 — Before Any Certificates Exist (Virgin State)

This is where your system starts.

### What exists

* ASL-HOST
* ASL-STORE
* Domains
* Logs
* Snapshots
* Artifacts
* PERs

### What does NOT exist

* No CA
* No identities
* No signatures
* No trust

Everything works anyway.

Artifacts are still immutable.
Receipts still exist.
Snapshots still reconstruct CURRENT.

This is important: **certificates are optional metadata**, not a dependency.

---

## 2. Phase 1 — Birth of an Authority (Offline Auth Host)

At some point, you decide:

> “This domain should be able to make statements that others may trust.”

This happens on the **offline auth host**.

---

### 2.1 Create the Root Authority (once)

This produces the **Root Authority Certificate**.

#### Root CA (ASL-ROOT)

* Self-signed
* Offline forever
* Never rotates casually
* Stored only on auth host

Conceptually:

```
ASL-ROOT
  public_key
  policy: may sign domain authorities
```

This is **not** a TLS CA.
It is a *semantic authority*.

---

### Where it is stored

* On disk (auth host):

```
/var/lib/asl/auth/root/
  root.key   (private, offline)
  root.crt   (artifact)
```

* As an ASL artifact:

```
artifact: root.crt
domain: auth-host
```

The **private key is never an artifact**.

---

## 3. Phase 2 — Domain Authority Certificates

Now the root creates **Domain Authorities**.

This is the most important certificate type.

---

### 3.1 Domain Authority (DA)

A **Domain Authority Certificate** binds:

```
(domain_id) → public_key → policy
```

Example:

```
alice.personal
```

Gets a DA certificate:

```
DA(alice.personal)
  signed_by: ASL-ROOT
  key: alice-domain-key
  policy:
    - may seal snapshots
    - may publish artifacts
```

---

### Where DA certs live

* Stored as artifacts
* Stored **inside the domain they govern**
* Also optionally copied to Common

Example:

```
/var/lib/asl/domains/alice.personal/auth/domain.crt
```

This makes replay self-contained.

---

## 4. Phase 3 — Operational Keys (Actors)

Now we separate **authority** from **action**.

---

### 4.1 Operator / Host Certificates

These are **delegation certs**.

They answer:

> “Which machine / user is allowed to act for this domain?”

Examples:

* Rescue image
* Laptop
* CI node
* VM

They are **short-lived** and **revocable**.

```
OperatorCert
  subject: host-id
  acts-for: domain_id
  signed_by: domain authority
  scope:
    - may write artifacts
    - may append log
```

---

### Storage

* Stored in domain auth area
* Referenced by PERs

```
/var/lib/asl/domains/alice.personal/auth/operators/
```

---

## 5. Phase 4 — Signing Receipts and Snapshots

Now certificates begin to **matter operationally**.

---

### 5.1 What gets signed

1. **Snapshot seals**
2. **PERs (execution receipts)**
3. **Policy artifacts**

Never raw data blocks.

---

### 5.2 PER Signature Model

A PER includes:

```
PER:
  inputs
  program
  outputs
  snapshot_range
  signer:
    operator_cert_id
    signature
```

This allows later verification:

* Who ran this?
* Under which authority?
* Against which snapshot?

---

## 6. Phase 5 — Federation and Trust

Now certificates cross domains.

---

### 6.1 Trust is explicit

A domain never trusts another domain implicitly.

Instead it stores a **Trust Artifact**:

```
"I trust domain X snapshots ≥ S
 signed by DA(X)
 under policy P"
```

This artifact can be:

* private
* group-shared
* published to Common

---

### 6.2 Common’s role

Common may store:

* Root cert(s)
* Domain Authority certs
* Trust policies

But Common **does not enforce trust**.

It only witnesses it.

---

## 7. Certificate Types (Complete List)

Here is the full minimal set:

### 1. Root Authority Certificate

* Signs domain authorities
* Offline
* Rare

### 2. Domain Authority Certificate

* One per domain
* Signs snapshots, operators
* Long-lived

### 3. Operator / Host Certificate

* Acts-for a domain
* Short-lived
* Used in PERs

### 4. Trust Policy Artifact (not a cert, but signed)

* “I trust X for Y”
* Domain-local meaning

That’s it.

No user certs.
No TLS certs.
No global CA hierarchy.

---

## 8. What Happens During Rescue (Concrete)

Let’s ground this.

### Rescue day

* Rescue domain exists
* No certs yet
* Everything works

### Later, you decide it matters

1. Create personal domain authority
2. Sign rescue snapshots retroactively
3. Publish selected artifacts

History remains valid — it just gains **witnesses**.

---

## 9. Why This Is Robust

* No cert = still works
* Lost cert = data still exists
* Revoked cert = history still readable
* Multiple roots = allowed

This avoids:

* “all-or-nothing PKI”
* bricked data
* forced trust

---

## 10. One sentence to remember

> **Certificates don’t make things true.
> They make them accountable.**

---

If you want next, I can:

* Specify exact cert encoding (X.509-like vs custom)
* Show how certs are imported via USB
* Define revocation without CRLs
* Show how auth-host stays simple

You’re converging on a very clean trust model.

---
Excellent question. This is exactly the right place to tighten the model.

I’ll do three things, in order:

1. **Specify how ASL-HOST enforces witness authority at boot (normative)**
2. **Explain how this generalizes to non-Common domains**
3. **Clarify the differences between Common, Group, and Personal domains (and why they are *not* fundamentally different)**

I’ll keep it crisp and spec-like, but readable.

---

# ASL-HOST Witness Enforcement at Boot (v0.1)

## 1. Purpose

This section defines how an **ASL-HOST** determines whether it is allowed to:

* accept snapshots
* advance CURRENT
* emit new snapshots

for any domain, with **Common as the canonical example**.

The same mechanism applies to **personal and group domains**, with different policies.

---

## 2. Core rule (universal)

> **An ASL-HOST MUST NOT advance a domain unless it can prove authority to do so from domain-local artifacts.**

There is **no implicit authority**, no “trusted runtime”, and no network trust.

Authority is always:

* explicit
* artifact-backed
* snapshot-bound

---

## 3. Boot-time domain authority resolution

At boot, for each configured domain, ASL-HOST performs the following steps.

### 3.1 Load domain state

For each domain `D`:

1. Mount domain store (filesystem, ZFS dataset, etc.)
2. Load:

   * last sealed snapshot `D@N`
   * append-only log (if present)
3. Reconstruct `CURRENT(D)` deterministically

If this fails → domain is **read-only**.

---

## 4. Authority discovery

### 4.1 Authority source artifacts

ASL-HOST MUST locate, for domain `D`:

1. **Domain Authority Manifest (DAM)**
2. **Current Policy Artifact**
3. **Witness-related artifacts** (if any)

These MUST be:

* sealed
* visible at or below `D@N`
* valid under ASL-STORE rules

---

## 5. Witness model (generalized)

Every domain operates under **exactly one authority mode** at any snapshot:

| Mode             | Meaning                                       |
| ---------------- | --------------------------------------------- |
| `single-witness` | One domain/key may emit snapshots             |
| `quorum-witness` | A threshold of domains may authorize emission |
| `self-authority` | This host’s domain is the witness             |

This is **policy-defined**, not hard-coded.

---

## 6. Common domain (special only in policy)

### 6.1 Common authority rules

For `common`:

* Authority mode: `quorum-witness`
* Emission rights:

  * granted only to the active witness domain
* Witness rotation:

  * only via `asl.common.witness-rotation` artifacts

### 6.2 Boot enforcement

At boot, ASL-HOST MUST:

1. Identify current witness from last snapshot
2. Verify:

   * witness domain ID
   * witness public key
3. Lock Common domain into one of:

| State       | Meaning                      |
| ----------- | ---------------------------- |
| **Passive** | May consume Common snapshots |
| **Active**  | May emit Common snapshots    |
| **Invalid** | Must refuse Common entirely  |

An ASL-HOST **MUST NOT** emit Common unless explicitly Active.

---

## 7. Personal domains (important clarification)

Personal domains are **not special**.

They differ only in **policy defaults**.

### 7.1 Typical personal domain policy

```yaml
authority_mode: self-authority
authorized_keys:
  - owner-key
```

This means:

* Any host holding the private key may emit snapshots
* Multiple machines can advance the domain
* Conflicts are resolved via:

  * snapshot ordering
  * logseq
  * or explicit merge PELs

### 7.2 Boot behavior

At boot, ASL-HOST:

* checks whether it holds a valid private key
* if yes → Active
* if no → Passive (read-only replica)

This allows:

* laptops
* desktops
* VMs
* rescue environments

to all participate safely.

---

## 8. Group domains

Group domains sit *between* Common and Personal.

### 8.1 Group authority models

Common patterns:

| Model            | Use case                 |
| ---------------- | ------------------------ |
| single-witness   | One maintainer           |
| quorum-witness   | Shared governance        |
| rotating-witness | Time-based or role-based |

### 8.2 Enforcement is identical

ASL-HOST does **not care** whether a domain is:

* Common
* Group
* Personal

It only evaluates:

* current snapshot
* policy artifact
* authority proofs

---

## 9. Snapshot emission decision (normative)

Before emitting a snapshot for domain `D`, ASL-HOST MUST verify:

1. It holds a private key authorized by policy
2. No conflicting authority artifact exists above `D@N`
3. For quorum:

   * required endorsements are present and valid
4. The snapshot references:

   * authority context
   * policy version
   * (if applicable) witness rotation artifact

If any check fails → emission is forbidden.
---

## 10. Read vs write separation (important)

An ASL-HOST may always:

* store artifacts
* verify snapshots
* mirror domains

But may only:

* **emit snapshots**
* **advance CURRENT**

when authority is proven.

This allows:

* mirrors
* caches
* rescue systems
* offline nodes

to coexist safely.

---

## 11. Why this scales to “personal everywhere”

Your intuition is correct:

> personal domains may live on many machines and need the same sync guarantees.

This model gives you:

* snapshot identity
* deterministic replay
* conflict-free history **by construction**
* explicit governance when needed

There is **no need** for a different mechanism.

---

## 12. One mental model to keep

> **Every domain is Common to someone.**
> The difference is only *who is allowed to speak for it*.

---

## 13. What this enables next

With this in place, you can cleanly specify:

* personal multi-device sync
* offline-first workflows
* rescue-domain bootstrap
* federation without global servers
* gradual transition from single-owner → group → common

---

If you want, next good steps would be:

* formalize **Domain Authority Manifest (DAM) enforcement at boot**
* specify **multi-writer personal domain merge semantics**
* walk through **one personal domain on 3 machines**
* or describe **Common startup from zero (genesis)**

You’re no longer mixing projects here — this is now one coherent system.

---
Excellent — this is the right moment to *lock the social contract layer* without contaminating ASL-CORE, PEL, or storage internals.

Below is a **clean, formal, minimal v0.1** specification of:

1. **Domain Admission Protocol (DAP)**
2. **Domain Authority Manifest (DAM) schema**
3. **Courtesy Lease model**

All three are deliberately:

* authority-first
* storage-agnostic
* deterministic
* offline-friendly
* compatible with both *rescue* and *self-sovereign admission* modes

---

# Domain Admission Protocol (DAP) — v0.1

---

## 1. Purpose

The Domain Admission Protocol defines **how a domain is recognized by another domain** (typically *Common / Unity*) **before any artifacts are trusted, indexed, or replicated**.

DAP governs **admission, not execution**.

---

## 2. Actors

| Actor                   | Role                                      |
| ----------------------- | ----------------------------------------- |
| **Applicant Domain**    | Domain seeking admission                  |
| **Receiving Domain**    | Domain granting or denying admission      |
| **Operator (optional)** | Human or policy agent reviewing admission |

---

## 3. Admission Object Model

### 3.1 Admission Request

An admission request is a **pure authority object**.

It contains:

* Domain Authority Manifest (DAM)
* Proof of possession of root key
* Requested admission scope
* Optional courtesy lease request

No artifacts.
No blocks.
No ASL logs.

---

## 4. Admission Flow

### 4.1 Step 0 — Offline Preparation (Applicant)

The applicant domain prepares:

1. Domain root key (offline)
2. DAM
3. Policy hash
4. Admission intent

---

### 4.2 Step 1 — Admission Request Submission

```
Applicant → Receiving Domain:
  - DAM
  - Root signature over DAM
  - AdmissionRequest object
```

Transport:

* file drop
* removable media
* HTTP
* sneakernet

(no constraints imposed)

---

### 4.3 Step 2 — Structural Validation

Receiving domain MUST verify:

* DAM schema validity
* Signature correctness
* Policy hash integrity
* DomainID uniqueness / collision handling

Failure here ⇒ **Reject**

---

### 4.4 Step 3 — Policy Compatibility Evaluation

Receiving domain evaluates:

* Declared invariants
* Requested scope
* Requested courtesy
* Trust model compatibility

No artifacts are examined.

---

### 4.5 Step 4 — Admission Decision

Possible outcomes:

| Outcome          | Meaning            |
| ---------------- | ------------------ |
| ACCEPTED         | Domain may publish |
| ACCEPTED_LIMITED | Courtesy only      |
| DEFERRED         | Manual review      |
| REJECTED         | No interaction     |

Decision MAY be signed and returned.
---

## 5. Admission Guarantees

If accepted:

* DomainID is recognized
* Root key is pinned
* Policy hash is pinned
* Courtesy rules apply

No implicit trust of artifacts is granted.

---

# Domain Authority Manifest (DAM) — v0.1

---

## 1. Purpose

The DAM is the **constitutional document of a domain**.

It defines:

* identity
* authority
* declared invariants
* trust posture

It is immutable once admitted (new versions require re-admission).

---

## 2. DAM Canonical Structure

### 2.1 Canonical Encoding

* Canonical CBOR or canonical JSON
* Deterministic ordering
* Hashable as a single blob

---

### 2.2 DAM Schema

```text
DomainAuthorityManifest {
  version: "v0.1"

  domain_id: DomainID

  root_key: {
    key_type: "ed25519" | "secp256k1" | future
    public_key: bytes
  }

  policy: {
    policy_hash: hash
    policy_uri: optional string
  }

  invariants: {
    immutable_artifacts: true
    append_only_logs: true
    deterministic_replay: true
    snapshot_bound_execution: true
  }

  admission: {
    requested_scope: [
      "publish_artifacts",
      "publish_snapshots",
      "receive_artifacts",
      "federate_logs"
    ]

    courtesy_requested: {
      storage_bytes: optional uint64
      duration_seconds: optional uint64
    }
  }

  metadata: {
    human_name: optional string
    contact: optional string
    description: optional string
  }
}
```

---

## 3. DAM Invariants (Normative)

Receiving domains MUST assume:

1. DAM statements are binding
2. Root key controls the domain
3. Policy hash defines behavior
4. Violations allow revocation

---

## 4. DAM Signature

The DAM MUST be signed:

```
signature = Sign(root_private_key, hash(DAM))
```

This signature is included in the Admission Request, not inside the DAM.

---

# Courtesy Lease Model — v0.1

---

## 1. Purpose

Courtesy leases allow **temporary, bounded storage and recognition** for domains without requiring full trust or infrastructure.

This is how **rescue and bootstrap work safely**.

---

## 2. Courtesy Lease Definition

A courtesy lease is:

> A revocable, bounded grant of resources without semantic trust.

---

## 3. Courtesy Lease Parameters

```text
CourtesyLease {
  lease_id
  domain_id
  granted_by_domain

  resources: {
    storage_bytes
    block_count
    snapshot_count
  }

  duration: {
    start_time
    end_time
  }

  constraints: {
    encrypted_only: boolean
    no_federation: boolean
    no_public_indexing: boolean
  }
}
```

---

## 4. Courtesy Semantics

Courtesy storage:

* MAY store encrypted blocks
* MAY pin snapshots temporarily
* MUST NOT:

  * federate logs
  * index publicly
  * grant transit trust

---

## 5. Courtesy Expiry

On expiry:

* Receiving domain MAY:

  * delete blocks
  * unpin snapshots
* Applicant domain MUST assume data loss unless migrated

No guarantees beyond lease terms.

---

## 6. Courtesy → Full Admission Upgrade

A domain may:

1. Migrate blocks out
2. Mint a full domain
3. Request re-admission
4. Publish selectively

No continuity is assumed unless explicitly granted.

---

# How This All Fits Together (Big Picture)

```
[ Offline Authority ]
        |
        v
[ DAM + Root Key ]
        |
        v
[ Admission Request ]
        |
        v
[ Structural Validation ]
        |
        v
[ Policy Compatibility ]
        |
        v
[ Admission Decision ]
        |
        +--> Courtesy Lease
        |
        +--> Full Admission
```

ASL only begins **after this**.

---

## Final Note (Important)

This design ensures:

* No premature trust
* No forced rescue semantics
* No storage before authority
* No federation without discipline
* Deterministic provenance from first artifact

You’ve now fully specified **the social perimeter of the system**.

If you want next, we can:

* integrate this with **ASL-HOST**
* define **revocation & trust decay**
* specify **foreign-domain pinning formally**
* or map this directly to **SystemRescue operational flows**

Just say the word.

---
Good — this can be *very small* and still complete.

Below is a **minimal, sufficient Domain Authority Manifest (DAM)** that fits your ASL / PEL / TGK / federation model **without introducing bureaucracy, PKI sprawl, or OS-level users**.

This is intentionally *boring, deterministic, snapshot-friendly, and artifact-centric*.

---

# Domain Authority Manifest (DAM)

### Minimal Specification

---

## 1. Purpose

The **Domain Authority Manifest (DAM)** defines **who may assert truth on behalf of a domain**.

It governs:

* Who can **produce** artifacts
* Who can **emit PERs**
* Who can **seal and publish snapshots**
* Who can **federate state**

It is:

* Immutable once sealed
* Snapshot-pinned
* Deterministic and replayable
* Interpreted identically across nodes

---

## 2. Location & Storage

The DAM is stored as an **ASL artifact** and referenced by snapshot.

Canonical location (logical, not filesystem-bound):

```
ArtifactKey("domain-authority-manifest")
```

Typical ZFS-backed layout:

```
/asl/domain/authority.manifest
```

The manifest itself is **content-addressed** and immutable.

---

## 3. Identity Model

### 3.1 Principals

A **principal** is a cryptographic public key.

No usernames.
No UIDs.
No machines.

```text
PrincipalID = HASH(public_key)
```

---

### 3.2 Roles (Minimal Set)

| Role       | Capability                                          |
| ---------- | --------------------------------------------------- |
| `produce`  | Create artifacts (internal only)                    |
| `execute`  | Emit PERs                                           |
| `publish`  | Publish artifacts/snapshots to domain-visible state |
| `federate` | Export published state to other domains             |
| `audit`    | Verify, but never mutate                            |

Roles are **capabilities**, not permissions.

---

## 4. Manifest Format (Minimal)

### 4.1 Logical Schema

```text
DomainAuthorityManifest {
  domain_id    : DomainID
  version      : u32
  root_key     : PublicKey
  principals[] : PrincipalEntry
  policy_hash  : Hash
}
```

---

### 4.2 Principal Entry

```text
PrincipalEntry {
  principal_id : Hash
  public_key   : PublicKey
  roles[]      : Role
}
```

No expiry.
No rotation logic in-spec.
Rotation is done by publishing a **new manifest**.
---

## 5. Example (Canonical Text Form)

```yaml
domain_id: "example.org/build"
version: 1

root_key: ed25519:9f2c...a71b

principals:
  - principal_id: 3a91...ff02
    public_key: ed25519:3a91...ff02
    roles: [produce, execute]

  - principal_id: b822...19de
    public_key: ed25519:b822...19de
    roles: [publish, federate]

policy_hash: sha256:4e7b...c912
```

---

## 6. Root Key Semantics

The **root key**:

* May sign new DAM artifacts
* May revoke all other principals implicitly
* Is not required for day-to-day operation

Think of it as a **domain constitution**, not an admin account.

---

## 7. Policy Hash (Why it Exists)

The `policy_hash` binds:

* Snapshot publication rules
* Federation constraints
* Visibility guarantees

This allows:

* Policy documents to evolve
* Manifests to remain small
* Deterministic policy verification

If policy changes → new DAM → new snapshot.

---

## 8. Validation Rules (Normative)

A node **must reject** any action unless:

1. The DAM artifact is visible in the snapshot
2. The DAM hash matches the snapshot reference
3. The action is signed by a principal listed in the DAM
4. The principal has the required role
5. The signature covers:

   * artifact/PER hash
   * logseq
   * snapshot ID

---

## 9. Interaction with PEL & PERs

### PER Certification

A PER is authoritative **iff**:

```
PER.signature.principal ∈ DAM
AND role ∈ {execute}
AND snapshot_id matches
```

The PER signature is stored as an **artifact** and linked via TGK.
---

## 10. Federation Rules

A receiving domain must verify:

1. Snapshot is published
2. Publishing principal has `publish`
3. Federation principal has `federate`
4. DAM artifact is included or referenced

No trust in transport.
No trust in machines.

---

## 11. Key Rotation & Revocation (Minimal)

* Rotation = publish new DAM
* Revocation = omit key in new DAM
* Old snapshots remain valid
* New snapshots use new DAM

No CRLs.
No live checks.

---

## 12. Why This Is Enough

This manifest:

✔ Anchors authority
✔ Enables certification
✔ Preserves determinism
✔ Survives replay
✔ Works offline
✔ Scales across federation
✔ Avoids OS/user entanglement

---

## 13. One-line Summary

> **The Domain Authority Manifest defines who is allowed to say “this happened” — and makes that statement replayable, verifiable, and permanent.**

---

If you want next, we can:

* Specify **PER signature layout**
* Define **policy hash contents**
* Walk through **“home directory archived” as a certified execution**
* Add **hardware-backed keys (TPM / HSM) without changing the model**

Just say the word.

---
Absolutely — let’s lay down **ENC-ASL-LOG v0.1**, a minimal deterministic encoding for the append-only ASL log. I’ll aim for something that is **platform-independent, reproducible, and maps cleanly to your index and artifact workflow**.

---

# ENC-ASL-LOG v0.1

## 1. Purpose

This document specifies the **exact byte-level encoding** of the ASL append-only log.

It is intended for:

* C libraries
* CLI tools
* Memory-mapped readers
* Interchange between hosts

It defines **encoding**, **record structure**, and **persistence semantics**, but **not log semantics** — see ASL-STORE and ASL-CORE for ordering, snapshot, and visibility rules.

---

## 2. Encoding Principles

1. **Little-endian integers** (multi-byte)
2. **Packed structures**: no compiler padding
3. **Forward-compatible versioning** via `header.version`
4. **Checksums** for corruption detection
5. **Deterministic serialization**: same log content → same byte sequence

---

## 3. Log File Layout

```
+----------------+
|   LogHeader    |
+----------------+
|  LogRecord[ ]  |
+----------------+
|   LogFooter    |
+----------------+
```

* **LogHeader**: fixed-size, mandatory, begins the file
* **LogRecord[]**: append-only entries, variable number
* **LogFooter**: optional, contains a global checksum

---

## 4. LogHeader

```c
#pragma pack(push,1)
typedef struct {
    uint64_t magic;          // Unique magic for ASL log
    uint16_t version;        // Encoding version
    uint16_t flags;          // Reserved
    uint32_t header_size;    // Total header bytes including this struct
    uint64_t first_snapshot; // First snapshot referenced
    uint64_t last_snapshot;  // Last snapshot referenced
} LogHeader;
#pragma pack(pop)
```

Notes:

* `magic` ensures correct file type
* `version` allows forward compatibility
* `first_snapshot` and `last_snapshot` help range validation

---

## 5. LogRecord

```c
#pragma pack(push,1)
typedef enum {
    LOG_RECORD_ADD_INDEX_SEGMENT = 1,
    LOG_RECORD_SEAL_SEGMENT      = 2,
    LOG_RECORD_TOMBSTONE         = 3,
    LOG_RECORD_CUSTOM            = 0x1000
} LogRecordType;

typedef struct {
    uint64_t record_id;    // Unique log entry ID
    uint64_t timestamp_ns; // Monotonic time of record creation
    uint64_t snapshot_id;  // Snapshot for which entry applies
    uint32_t type;         // LogRecordType
    uint32_t payload_size; // Size of payload bytes following this header
} LogRecordHeader;
#pragma pack(pop)
```

* Immediately after `LogRecordHeader`, `payload_size` bytes of **payload** follow.
* Payload encodes type-specific information.
### 5.1 Payload Examples
|
||||
|
||||
**Add Index Segment**
|
||||
|
||||
```
|
||||
struct {
|
||||
uint64_t segment_file_id; // Corresponding ENC-ASL-CORE-INDEX segment
|
||||
uint64_t entry_count; // Number of index entries
|
||||
uint8_t reserved[16]; // Future fields
|
||||
};
|
||||
```
|
||||
|
||||
**Seal Segment**
|
||||
|
||||
```
|
||||
struct {
|
||||
uint64_t segment_file_id;
|
||||
uint64_t seal_snapshot_id;
|
||||
uint64_t seal_time_ns;
|
||||
};
|
||||
```
|
||||
|
||||
**Tombstone**
|
||||
|
||||
```
|
||||
struct {
|
||||
uint64_t artifact_hash_hi;
|
||||
uint64_t artifact_hash_mid;
|
||||
uint64_t artifact_hash_lo;
|
||||
uint32_t artifact_hash_tail;
|
||||
uint32_t flags; // reserved
|
||||
};
|
||||
```
|
||||
|
||||
**Custom**
|
||||
|
||||
```
|
||||
Arbitrary byte sequence, self-describing.
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 6. LogFooter
|
||||
|
||||
```c
|
||||
#pragma pack(push,1)
|
||||
typedef struct {
|
||||
uint64_t crc64; // CRC over entire log (excluding footer)
|
||||
uint64_t record_count;// Total records in log
|
||||
} LogFooter;
|
||||
#pragma pack(pop)
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 7. Record Ordering & Durability
|
||||
|
||||
* Entries are **strictly append-only**.
|
||||
* `record_id` must be strictly increasing.
|
||||
* `snapshot_id` reflects the visibility snapshot.
|
||||
* Readers **replay records in order** to derive CURRENT state.
|
||||
|
||||
---
## 8. Checksums & Integrity

* All multi-byte fields are little-endian.
* `LogFooter.crc64` is optional but recommended.
* Per-record integrity may optionally include a **record CRC** as the first 8 bytes of payload.

---

## 9. Versioning

* `LogHeader.version` = 0x01
* New fields must not break the existing layout.
* Unsupported versions must be rejected by readers.

---

## 10. Relationship to Other Layers

| Layer              | Responsibility                                                       |
| ------------------ | -------------------------------------------------------------------- |
| ASL-STORE          | Lifecycle, visibility, snapshot enforcement                          |
| ASL-CORE-INDEX     | Index semantics (ArtifactKey → BlockID mapping)                      |
| ENC-ASL-CORE-INDEX | Exact byte layout for index segments (`tier1/enc-asl-core-index.md`) |
| ENC-ASL-LOG        | Exact byte layout for the append-only log                            |

---

✅ This gives you a **minimal, deterministic log encoding**.

You could now use it to:

1. Append a new log record for an index segment
2. Seal segments
3. Mark tombstones
4. Replay in order to reconstruct CURRENT

---

If you want, I can **draft a minimal C writer/reader API** next that directly reads/writes this log, including helpers for `add_index_segment()`, `seal_segment()`, and `tombstone()` with payload serialization.

Do you want me to do that next?

@@ -1,224 +0,0 @@
NOTE: Superseded by `tier1/tgk-1.md` (TGK/1). Kept for historical context.

Absolutely — here’s a **draft for ENC-ASL-TGK-INDEX**, carefully merging ASL artifact indexes and TGK edge indexes while respecting the **separation of concerns** and **snapshot determinism**.

This design keeps **ENC-ASL-CORE** and **ENC-TGK-CORE** authoritative, and only merges **index references and acceleration structures**.

---

# ENC-ASL-TGK-INDEX

### Merged On-Disk Index for ASL Artifacts and TGK Edges

---

## 1. Purpose

ENC-ASL-TGK-INDEX defines a **unified on-disk index** that:

* References **ASL artifacts** (ENC-ASL-CORE)
* References **TGK edges** (ENC-TGK-CORE)
* Supports **routing keys, filters, sharding, SIMD acceleration** per ASL-INDEX-ACCEL
* Preserves **snapshot safety, log-sequence ordering, and immutability**

> Semantic data lives in the respective CORE layers; this index layer **only stores references**.

---

## 2. Layering Principle

| Layer                 | Responsibility                               |
| --------------------- | -------------------------------------------- |
| ENC-ASL-CORE          | Artifact structure and type tags             |
| ENC-TGK-CORE          | Edge structure (`from[] → to[]`)             |
| TGK-INDEX / ASL-INDEX | Canonical & routing keys, index semantics    |
| ENC-ASL-TGK-INDEX     | On-disk references and acceleration metadata |

**Invariant:** This index never re-encodes artifacts or edges.

---

## 3. Segment Layout

Segments are **append-only** and **snapshot-bound**:

```
+-----------------------------+
| Segment Header              |
+-----------------------------+
| Routing Filters             |
+-----------------------------+
| ASL Artifact Index Records  |
+-----------------------------+
| TGK Edge Index Records      |
+-----------------------------+
| Optional Acceleration Data  |
+-----------------------------+
| Segment Footer              |
+-----------------------------+
```

* Segment atomicity enforced
* Footer checksum guarantees integrity

---

## 4. Segment Header

```c
struct asl_tgk_index_segment_header {
    uint32_t magic;   // 'ATXI'
    uint16_t version;
    uint16_t flags;
    uint64_t segment_id;
    uint64_t logseq_min;
    uint64_t logseq_max;
    uint64_t asl_record_count;
    uint64_t tgk_record_count;
    uint64_t record_area_offset;
    uint64_t footer_offset;
};
```

* `logseq_*` enforce snapshot visibility
* Separate counts for ASL and TGK entries

---

## 5. Routing Filters

Filters may be **segmented by type**:

* **ASL filters**: artifact hash + type tag
* **TGK filters**: canonical edge ID + edge type key + optional role

```c
struct asl_tgk_filter_header {
    uint16_t filter_type; // e.g., BLOOM, XOR
    uint16_t version;
    uint32_t flags;
    uint64_t size_bytes;  // length of filter payload
};
```

* Filters are advisory; false positives allowed, false negatives forbidden
* Must be deterministic per snapshot

---

## 6. ASL Artifact Index Record

```c
struct asl_index_record {
    uint64_t logseq;
    uint64_t artifact_id;  // ENC-ASL-CORE reference
    uint32_t type_tag;     // optional
    uint8_t  has_type_tag; // 0 or 1
    uint16_t flags;        // tombstone, reserved
};
```

* `artifact_id` = canonical identity
* No artifact payload here

---

## 7. TGK Edge Index Record

```c
struct tgk_index_record {
    uint64_t logseq;
    uint64_t tgk_edge_id;   // ENC-TGK-CORE reference
    uint32_t edge_type_key; // optional
    uint8_t  has_edge_type;
    uint8_t  role;          // optional from/to/both
    uint16_t flags;         // tombstone, reserved
};
```

* `tgk_edge_id` = canonical TGK-CORE edge ID
* No node lists stored in the index

---

## 8. Optional Node-Projection Records

For acceleration:

```c
struct node_edge_ref {
    uint64_t logseq;
    uint64_t node_id;      // from/to node
    uint64_t tgk_edge_id;
    uint8_t  position;     // from or to
};
```

* Fully derivable from TGK-CORE edges
* Optional; purely for lookup speed

---

## 9. Sharding and SIMD

* Shard assignment is **routing key based** (ASL artifact or TGK edge)
* SIMD arrays may store precomputed routing keys for fast filter evaluation
* Must follow ASL-INDEX-ACCEL invariants: deterministic, immutable, snapshot-safe

---

## 10. Snapshot Interaction

At snapshot `S`:

* Segment visible if `logseq_min ≤ S`
* ASL or TGK record visible if `logseq ≤ S`
* Tombstones shadow earlier records
* Filters may be used as advisory before canonical verification

---
## 11. Segment Footer

```c
struct asl_tgk_index_segment_footer {
    uint64_t checksum; // covers header, filters, records
    uint64_t asl_record_bytes;
    uint64_t tgk_record_bytes;
    uint64_t filter_bytes;
};
```

* Ensures atomicity and completeness

---

## 12. Normative Invariants

1. **ASL artifact identity = ENC-ASL-CORE artifact ID**
2. **TGK edge identity = ENC-TGK-CORE edge ID**
3. Edge type tag and artifact type tag **do not affect canonical identity**
4. Filters are advisory only; no false negatives
5. Sharding is observationally invisible
6. Index records are immutable once written
7. Snapshot visibility strictly follows `logseq`
8. Determinism guaranteed per snapshot

---

## 13. Summary

ENC-ASL-TGK-INDEX merges ASL artifacts and TGK edges into a **single, snapshot-safe, acceleration-friendly index layer**:

* Keeps core semantics authoritative
* Enables high-performance lookups using routing, sharding, SIMD, and filters
* Preserves immutability and determinism
* Fully compatible with ASL-INDEX-ACCEL principles

This design supports billions of references while avoiding semantic collisions between ASL and TGK layers.

---

If you want, the next step could be **drafting a unified query execution model** over this merged index, connecting **artifact lookups** and **TGK graph traversals** in a snapshot-safe, deterministic way.

Do you want me to do that next?

@@ -1,241 +0,0 @@
Yes — **you can, and this is actually a *very strong* pattern** in your model.
But it only works if you are precise about **what is being trusted** and **what is being hidden**.

The short version:

> **Yes: you can store encrypted blocks from a personal domain inside a public domain, safely, as long as encryption is treated as content and keys are never part of federation.**

Below is the clean, formal way to do it.

---

# Encrypted Block Storage Across Domains

---

## 1. Core principle (non-negotiable)

> **Public domains may store bytes they cannot interpret, validate, or decrypt — but never assert meaning for them.**

The *meaning* stays private.
The *storage* is outsourced.

---

## 2. What is actually stored in the public domain

The public domain stores:

* **Encrypted ASL blocks**
* Content-addressed by ciphertext hash
* Snapshot-pinned
* Garbage-collectable under public policy

It does **not** store:

* Keys
* Key identifiers
* Decryption metadata
* Plaintext hashes
* Domain semantics

---

## 3. Encryption model (minimal and safe)

### 3.1 Block encryption

Before block sealing:

```
plaintext_block
  → encrypt(K)
  → ciphertext_block
  → BlockID = HASH(ciphertext_block)
```

Important:

* Encryption happens **before sealing**
* BlockID is over ciphertext
* Deterministic encryption is NOT required
* Randomized AEAD is fine

---
### 3.2 Key ownership

* Keys belong **only** to the personal domain
* Keys are **never federated**
* Keys are not referenced by ArtifactIndex entries

Encryption keys are an **out-of-band capability**.

---

## 4. How the public domain sees these blocks

From the public domain’s perspective:

* These are opaque blocks
* They are indistinguishable from random data
* They have no semantic index entries
* They cannot be interpreted or replayed

This is good.

---

## 5. How your personal domain references them

Your personal domain keeps:

* ArtifactIndex entries referencing ciphertext BlockIDs
* Decryption metadata *locally* (or derivable)
* Snapshot-pinned authority over interpretation

Example:

```
Artifact(personal-photo-archive)
  → BlockID(ciphertext)
  → Decrypt with K
```

---

## 6. Cross-domain reference mechanics

There are **two distinct references**:

### 6.1 Storage reference (public domain)

The public domain:

* Owns physical blocks
* Manages retention
* Applies GC per its policy

No semantic claims.

---

### 6.2 Semantic reference (personal domain)

The personal domain:

* Owns ArtifactKey → ArtifactLocation
* Owns decryption
* Owns provenance
* Owns meaning

---

## 7. Trust & threat model (explicit)

### The public domain can:

* Delete data (availability loss)
* Rate-limit access
* Mirror data

### The public domain cannot:

* Read content
* Forge artifacts
* Alter meaning
* Impersonate you

If it deletes data, integrity still holds — availability is a *separate* concern.

---

## 8. Federation and publication rules

### What you publish to the public domain

You publish:

* Encrypted blocks
* Optional block size metadata
* No ArtifactIndex
* No TGK edges

This is **storage federation**, not semantic federation.

---

### What you never publish

* Plaintext artifacts
* PERs
* Index entries
* Keys
* Origin attestations

---

## 9. Optional: Explicit Storage-Only Contract (recommended)

To make this explicit, define:

```text
StorageOnlyContract {
    provider_domain : DomainID
    consumer_domain : DomainID
    retention_terms : enum
    confidentiality : "opaque-bytes-only"
}
```

This contract is:

* Snapshot-pinned
* Signed by both domains
* Human-auditable

---

## 10. Recovery flow (important)

If you later need to recover:

1. Fetch ciphertext blocks from the public domain
2. Verify the hash
3. Decrypt with private keys
4. Reconstruct artifacts via the personal index
5. Replay snapshots

The public domain never learns anything.

---

## 11. Why this is safe even under hostile public domains

Because:

✔ Integrity is content-addressed
✔ Meaning is private
✔ Keys never cross domains
✔ No trust in the storage provider
✔ Replay remains deterministic

This is **zero-knowledge storage federation**.

---

## 12. One-sentence takeaway

> **Public domains can safely store your encrypted blocks as opaque content, while all meaning, authority, and trust remain entirely within your personal domain.**

---

If you want next, we can:

* Specify **key derivation & rotation** compatible with snapshots
* Define **encrypted block GC safety**
* Model **multi-provider encrypted storage**
* Show **how to publish selectively decrypted subsets later**

This is a powerful capability — and you’re using it correctly.

@@ -1,193 +0,0 @@
Absolutely — here’s a formal draft for **federation propagation rules and cross-node deterministic replay** for ASL + TGK + PEL. This ensures deterministic, snapshot-safe execution across multiple nodes while preserving artifact provenance and DAG integrity.

---

# Federation and Cross-Node Deterministic Replay Specification

---

## 1. Purpose

This specification defines:

1. How **artifacts, PERs, and TGK edges** are propagated across federation nodes.
2. How **deterministic replay** is guaranteed on remote nodes.
3. How **snapshots, shards, and logs** are used to synchronize state.
4. Rules for **conflict resolution, tombstone handling, and provenance integrity**.

---

## 2. Key Concepts

* **Node**: Independent system with access to ASL/TGK storage and a PEL execution environment.
* **Federation**: Set of nodes that share artifacts, execution receipts, and TGK edges.
* **ArtifactKey**: Canonical identifier for artifacts or PERs.
* **SnapshotID**: Unique identifier of a ZFS snapshot (per pool or globally assigned).
* **Log Sequence (logseq)**: Monotonic sequence ensuring ordering for deterministic replay.
* **Execution Receipt (PER)**: Artifact describing the deterministic output of a PEL program.

---

## 3. Propagation Rules

### 3.1 Artifact & PER Propagation

1. **New artifacts or PERs** are assigned a **global canonical ArtifactKey**.
2. Each node maintains a **local shard mapping**; shard boundaries may differ per node.
3. Artifacts are propagated via **snapshot-delta sync**:

   * Only artifacts with **logseq > last replicated logseq** are transmitted.
   * Each artifact includes:

     * `ArtifactKey`
     * `logseq`
     * `type_tag` (optional)
     * Payload checksum (hash)

4. PER artifacts are treated the same as raw artifacts but may include additional **PEL DAG metadata**.

---

### 3.2 TGK Edge Propagation

1. TGK edges reference canonical ArtifactKeys and NodeIDs.
2. Each edge includes:

   * From nodes list
   * To nodes list
   * Edge type key
   * Roles (from/to/both)
   * logseq

3. Edges are propagated **incrementally**, respecting snapshot boundaries.
4. Deterministic ordering:

   * Edges are sorted by `(logseq, canonical_edge_id)` on transmit
   * Replay nodes consume edges in the same order

---
### 3.3 Snapshot and Log Management

* Each node maintains:

  1. **Last applied snapshot** per federation peer
  2. **Sequential write log** for artifacts and edges

* Replay on a remote node:

  1. Apply artifacts and edges sequentially from the log
  2. Only apply artifacts **≤ target snapshot**
  3. Merge multiple logs deterministically via the `(logseq, canonical_id)` tie-breaker

---

## 4. Conflict Resolution

1. **ArtifactKey collisions**:

   * If the hash matches an existing artifact → discard the duplicate
   * If the hash differs → flag a conflict; require manual reconciliation or automated deterministic resolution

2. **TGK edge conflicts**:

   * Multiple edges with the same `from/to/type` but different logseq → pick the latest ≤ snapshot
   * Shadowed edges are handled via the **TombstoneShadow operator**

3. **PER replay conflicts**:

   * Identical PEL DAG + identical inputs → skip execution
   * Divergent inputs → log an error, optionally recompute

---

## 5. Deterministic Replay Algorithm

```c
// Sketch: log_buffer_t, record_t, snapshot_range_t, sort(), and the
// Apply* helpers are supplied by the host implementation.
void FederationReplay(log_buffer_t *incoming_log, snapshot_range_t target_snapshot) {
    // Sort incoming log deterministically
    sort(incoming_log, by_logseq_then_canonical_id);

    for (uint64_t i = 0; i < incoming_log->count; i++) {
        record_t rec = incoming_log->records[i];

        // Skip records beyond the target snapshot
        if (rec.logseq > target_snapshot.logseq_max) continue;

        // Apply artifact or TGK edge
        if (rec.type == ARTIFACT || rec.type == PER) {
            ApplyArtifact(rec);
        } else if (rec.type == TGK_EDGE) {
            ApplyTGKEdge(rec);
        }

        // Shadow tombstones deterministically
        if (rec.is_tombstone) {
            ApplyTombstone(rec.canonical_id, rec.logseq);
        }
    }
}
```

* Guarantees **deterministic replay** across nodes.
* Uses **logseq + canonical ID ordering** for tie-breaking.

---

## 6. Shard-Local Execution

* After federation sync, **local shards** may differ.
* Execution plan operators (SegmentScan, IndexFilter, TGKTraversal) operate **on local shards**.
* Global determinism is maintained by:

  * Deterministic merge of shards
  * Snapshot constraints
  * Canonical ordering of artifacts and edges

---

## 7. Provenance and Audit

* Each node maintains:

  * **Snapshot provenance table**: snapshot ID → list of applied artifacts/PERs
  * **Federation log table**: peer node → last applied logseq

* Deterministic execution allows **replay and auditing**:

  * Verify that `final_output` is identical across nodes
  * Provenance tables ensure **full traceability**

---

## 8. Multi-Node DAG Execution

1. PEL programs may span **multiple nodes**:

   * Inputs and intermediate PERs are propagated deterministically
   * DAG nodes are executed locally when all inputs are available

2. Determinism is guaranteed because:

   * Inputs are constrained by snapshot + logseq
   * Operators are deterministic
   * Merge, shadowing, and projection preserve canonical ordering

---

## 9. Summary

Federation and cross-node deterministic replay:

* Uses **logseq + canonical IDs** for deterministic ordering
* Supports **PER and TGK artifacts** across nodes
* Enforces **snapshot constraints**
* Enables **federated PEL program execution**
* Preserves **provenance, tombstones, and deterministic DAG evaluation**
* Compatible with SIMD/shard acceleration and the ENC-ASL-TGK-INDEX memory layout

---

Next step could be **drafting a formal overall architecture diagram** showing:

* PEL programs
* ASL/TGK storage
* Execution plan operators
* Shard/SIMD execution
* Federation propagation and replay paths

Do you want me to draft that architecture diagram next?

127
notes/Makefile

@@ -1,127 +0,0 @@
# -------------------------------------------------------------------
# ASL Capture Makefile
#
# Supports:
#   - PIPE-only build (default)
#   - PTY-enabled build (ENABLE_PTY=1)
#
# Targets:
#   make
#   make ENABLE_PTY=1
#   make install DESTDIR=...
#   make clean
# -------------------------------------------------------------------

# Toolchain
CC      ?= cc
AR      ?= ar
RANLIB  ?= ranlib
INSTALL ?= install

# Paths
PREFIX     ?= /usr
BINDIR     ?= $(PREFIX)/bin
LIBDIR     ?= $(PREFIX)/lib
INCLUDEDIR ?= $(PREFIX)/include/asl

# Versioning (library ABI)
LIBNAME   = asl-capture
LIB_MAJOR = 0
LIB_MINOR = 1
LIB_PATCH = 0

SONAME   = lib$(LIBNAME).so.$(LIB_MAJOR)
REALNAME = lib$(LIBNAME).so.$(LIB_MAJOR).$(LIB_MINOR).$(LIB_PATCH)

# Flags
CFLAGS   ?= -O2
CFLAGS   += -Wall -Wextra -fPIC
CPPFLAGS += -I.

LDFLAGS ?=
LIBS    ?=

# Optional PTY support
ifeq ($(ENABLE_PTY),1)
CPPFLAGS += -DASL_ENABLE_PTY
LIBS     += -lutil
endif

# Sources
LIB_SRC = asl_capture.c
LIB_OBJ = $(LIB_SRC:.c=.o)

TOOL_SRC = asl_capture_tool.c
TOOL_OBJ = $(TOOL_SRC:.c=.o)

# Outputs
STATIC_LIB = lib$(LIBNAME).a
SHARED_LIB = $(REALNAME)
SONAME_LIB = $(SONAME)
TOOL       = asl-capture

# -------------------------------------------------------------------
# Default target
# -------------------------------------------------------------------

all: $(STATIC_LIB) $(SHARED_LIB) $(TOOL)

# -------------------------------------------------------------------
# Library builds
# -------------------------------------------------------------------

%.o: %.c
	$(CC) $(CPPFLAGS) $(CFLAGS) -c $< -o $@

$(STATIC_LIB): $(LIB_OBJ)
	$(AR) rcs $@ $^
	$(RANLIB) $@

$(SHARED_LIB): $(LIB_OBJ)
	$(CC) -shared \
	    -Wl,-soname,$(SONAME_LIB) \
	    $(LDFLAGS) -o $@ $^ $(LIBS)

# Symlink for soname
$(SONAME_LIB): $(SHARED_LIB)
	ln -sf $(REALNAME) $(SONAME_LIB)

# -------------------------------------------------------------------
# Tool build
# -------------------------------------------------------------------

$(TOOL): $(TOOL_OBJ) $(SHARED_LIB)
	$(CC) $(LDFLAGS) -o $@ $(TOOL_OBJ) \
	    -L. -l$(LIBNAME) $(LIBS)

# -------------------------------------------------------------------
# Install
# -------------------------------------------------------------------

install: all
	# Library
	$(INSTALL) -d $(DESTDIR)$(LIBDIR)
	$(INSTALL) -m 0755 $(SHARED_LIB) $(DESTDIR)$(LIBDIR)/
	ln -sf $(REALNAME) $(DESTDIR)$(LIBDIR)/$(SONAME_LIB)
	ln -sf $(SONAME_LIB) $(DESTDIR)$(LIBDIR)/lib$(LIBNAME).so

	# Static lib
	$(INSTALL) -m 0644 $(STATIC_LIB) $(DESTDIR)$(LIBDIR)/

	# Headers
	$(INSTALL) -d $(DESTDIR)$(INCLUDEDIR)
	$(INSTALL) -m 0644 asl_capture.h $(DESTDIR)$(INCLUDEDIR)/

	# Tool
	$(INSTALL) -d $(DESTDIR)$(BINDIR)
	$(INSTALL) -m 0755 $(TOOL) $(DESTDIR)$(BINDIR)/

# -------------------------------------------------------------------
# Clean
# -------------------------------------------------------------------

clean:
	rm -f *.o *.a *.so* $(TOOL)

.PHONY: all install clean

@@ -1,272 +0,0 @@
Great, this fits *cleanly* into what you already have.
The key is to **treat certificates as artifacts**, not as an external PKI bolted on the side.

Below is a **minimal offline certificate system** that integrates with:

* Domain Authority Manifest (DAM)
* Policy hash
* PER signatures
* Federation
* Offline signing machine (e.g. Debian laptop kept off-network)

No X.509 complexity, no live revocation, no TLS assumptions.

---

# Offline Certificate System (OCS) for ASL Domains

---

## 1. Purpose

The **Offline Certificate System (OCS)** exists to:

* Anchor **domain authority** in an offline root
* Certify **domain keys and principals**
* Allow **air-gapped signing**
* Preserve **determinism and replayability**

It answers:

> *“Why should I believe this Domain Authority Manifest?”*

---

## 2. Core Principle (Very Important)

> **Certificates are immutable ASL artifacts, not live credentials.**

They are:

* Signed once
* Snapshot-pinned
* Replayable forever
* Never “checked online”

---

## 3. Roles in the Offline Certificate System

### 3.1 Offline Root Authority (ORA)

* A machine kept **offline** (Debian laptop, USB-only)
* Holds the **root private key**
* Never participates in execution
* Never runs ASL/PEL
* Only signs *authority artifacts*

Think: constitutional court, not admin.

---

### 3.2 Online Domain Nodes

* Run ASL / PEL / TGK
* Hold *domain operational keys*
* Cannot mint new authority without an ORA signature

---

## 4. Key Types (Minimal)

| Key Type           | Purpose                             |
| ------------------ | ----------------------------------- |
| Root Authority Key | Signs domain authority certificates |
| Domain Root Key    | Anchors DAM                         |
| Principal Keys     | Execute / publish / federate        |
| Execution Keys     | Optional subkeys for CI, rescue     |

All are just keypairs.
No hierarchy beyond signatures.

---

## 5. Authority Certificate Artifact

This is the *only* certificate type you need.

### 5.1 Logical Structure

```text
AuthorityCertificate {
    subject_type   : enum { domain_root, principal }
    subject_id     : Hash
    subject_pubkey : PublicKey
    domain_id      : DomainID
    roles[]        : Role
    policy_hash    : Hash
    issued_by      : PublicKey // root authority
    version        : u32
}
```

---
### 5.2 What It Certifies

Depending on `subject_type`:

* **domain_root**:

  * “This public key is authorized to define DAMs for domain D”

* **principal**:

  * “This key may act with roles R under policy P”

No expiration.
Revocation is *by replacement*.

---

## 6. Offline Signing Workflow (Debian Machine)

### Step 1: Prepare request (online)

On a domain node:

```text
AuthorityRequest {
    subject_pubkey
    domain_id
    requested_roles[]
    policy_hash
}
```

Export as a file / USB.

---

### Step 2: Offline signing (Debian ORA)

On the offline machine:

1. Verify intent manually
2. Construct the AuthorityCertificate
3. Canonical-serialize
4. Sign with the root private key
5. Output the certificate artifact

No network. No ASL required.

---

### Step 3: Import certificate (online)

* The certificate is imported as an ASL artifact
* Snapshot-pinned
* Referenced by the DAM

At this point, authority exists.

---

## 7. Relationship to Domain Authority Manifest (DAM)

The DAM does **not** stand alone.

A DAM is valid **iff**:

1. DAM.root_key is certified by a `domain_root` certificate
2. Certificate.policy_hash matches DAM.policy_hash
3. The certificate is visible in the snapshot
4. The certificate signature validates against the offline root key

DAMs are *governed*, not self-asserted.

---

## 8. Validation Chain (Offline-Friendly)

To trust an action:

```
PER → PERSignature → Principal Key
    → DAM → AuthorityCertificate
    → Offline Root Public Key
```

No CRLs.
No OCSP.
No clocks.

Just hashes and signatures.

---
|
||||
|
||||
## 9. Revocation Model (Deterministic)
|
||||
|
||||
There is **no live revocation**.
|
||||
|
||||
Instead:
|
||||
|
||||
* Publish a **new DAM**
|
||||
* Omit revoked principals
|
||||
* Reference a **new authority certificate**
|
||||
* New snapshots enforce new authority
|
||||
* Old snapshots remain valid
|
||||
|
||||
This preserves determinism.
|
||||
|
||||
---
|
||||
|
||||
## 10. Federation Verification
|
||||
|
||||
When receiving state from another domain:
|
||||
|
||||
1. Load snapshot
|
||||
2. Load DAM
|
||||
3. Load AuthorityCertificate artifacts
|
||||
4. Verify against *known offline root pubkeys*
|
||||
5. Verify policy hash
|
||||
6. Accept or reject
|
||||
|
||||
Federation trusts *roots*, not nodes.
|
||||
|
||||
---
|
||||
|
||||
## 11. Why This Beats X.509 for Your System
|
||||
|
||||
| Problem | This System |
|
||||
| ---------------------- | ----------- |
|
||||
| Online dependency | None |
|
||||
| Mutable trust | None |
|
||||
| Time-based expiry | None |
|
||||
| Replay safety | Guaranteed |
|
||||
| Snapshot compatibility | Native |
|
||||
| Forensics | Perfect |
|
||||
|
||||
---
|
||||
|
||||
## 12. Minimal Files on Offline Debian Machine
|
||||
|
||||
```
|
||||
/ora/
|
||||
├── root.key
|
||||
├── root.pub
|
||||
├── policies/
|
||||
├── issued/
|
||||
│ └── authority-cert-*.bin
|
||||
└── requests/
|
||||
```
|
||||
|
||||
That’s it.
|
||||
|
||||
---
|
||||
|
||||
## 13. One-Sentence Summary
|
||||
|
||||
> **Offline authority certificates make domain trust explicit, immutable, and replayable — turning cryptographic signatures into constitutional facts rather than live permissions.**
|
||||
|
||||
---
|
||||
|
||||
If you want next, we can:
|
||||
|
||||
* Define **canonical byte layout for AuthorityCertificate**
|
||||
* Specify **how many root keys a domain may trust**
|
||||
* Walk through **full bootstrap from zero**
|
||||
* Model **rescue-node temporary authority**
|
||||
* Tie certificates into **ZFS snapshot metadata**
|
||||
|
||||
Just say where to continue.
|
||||
|
||||
|
|
@ -1,272 +0,0 @@
Great, this fits *cleanly* into what you already have.
The key is to **treat certificates as artifacts**, not as an external PKI bolted on the side.

Below is a **minimal offline certificate system** that integrates with:

* Domain Authority Manifest (DAM)
* Policy hash
* PER signatures
* Federation
* Offline signing machine (e.g. a Debian laptop kept off-network)

No X.509 complexity, no live revocation, no TLS assumptions.

---

# Offline Certificate System (OCS) for ASL Domains

---

## 1. Purpose

The **Offline Certificate System (OCS)** exists to:

* Anchor **domain authority** in an offline root
* Certify **domain keys and principals**
* Allow **air-gapped signing**
* Preserve **determinism and replayability**

It answers:

> *“Why should I believe this Domain Authority Manifest?”*

---

## 2. Core Principle (Very Important)

> **Certificates are immutable ASL artifacts, not live credentials.**

They are:

* Signed once
* Snapshot-pinned
* Replayable forever
* Never “checked online”

---

## 3. Roles in the Offline Certificate System

### 3.1 Offline Root Authority (ORA)

* A machine kept **offline** (Debian laptop, USB-only)
* Holds the **root private key**
* Never participates in execution
* Never runs ASL/PEL
* Only signs *authority artifacts*

Think: constitutional court, not admin.

---

### 3.2 Online Domain Nodes

* Run ASL / PEL / TGK
* Hold *domain operational keys*
* Cannot mint new authority without an ORA signature

---

## 4. Key Types (Minimal)

| Key Type           | Purpose                             |
| ------------------ | ----------------------------------- |
| Root Authority Key | Signs domain authority certificates |
| Domain Root Key    | Anchors the DAM                     |
| Principal Keys     | Execute / publish / federate        |
| Execution Keys     | Optional subkeys for CI, rescue     |

All are just keypairs.
No hierarchy beyond signatures.

---

## 5. Authority Certificate Artifact

This is the *only* certificate type you need.

### 5.1 Logical Structure

```text
AuthorityCertificate {
  subject_type   : enum { domain_root, principal }
  subject_id     : Hash
  subject_pubkey : PublicKey
  domain_id      : DomainID
  roles[]        : Role
  policy_hash    : Hash
  issued_by      : PublicKey   // root authority
  version        : u32
}
```
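As a sketch of what "canonical-serialize" could mean for this structure (the sorted-key JSON convention, the sample field values, and the subject_id-as-pubkey-hash rule are illustrative assumptions, not part of the spec):

```python
import hashlib
import json

def canonical_bytes(cert: dict) -> bytes:
    # Deterministic serialization: sorted keys, fixed separators,
    # so the same logical certificate always yields the same bytes.
    return json.dumps(cert, sort_keys=True, separators=(",", ":")).encode()

cert = {
    "subject_type": "principal",
    "subject_id": "",  # filled below
    "subject_pubkey": "ed25519:aa93",
    "domain_id": "domain-D",
    "roles": ["execute", "publish"],
    "policy_hash": "sha256:deadbeef",
    "issued_by": "ed25519:rootkey",
    "version": 1,
}
# One possible convention: subject_id is the hash of the subject pubkey.
cert["subject_id"] = hashlib.sha256(cert["subject_pubkey"].encode()).hexdigest()

digest = hashlib.sha256(canonical_bytes(cert)).hexdigest()
# Insertion order of the dict does not change the digest.
assert hashlib.sha256(
    canonical_bytes(dict(sorted(cert.items(), reverse=True)))
).hexdigest() == digest
```

The point is that the digest, not any online lookup, is what downstream verifiers pin.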

---

### 5.2 What It Certifies

Depending on `subject_type`:

* **domain_root**:
  * “This public key is authorized to define DAMs for domain D”
* **principal**:
  * “This key may act with roles R under policy P”

No expiration.
Revocation is *by replacement*.

---

## 6. Offline Signing Workflow (Debian Machine)

### Step 1: Prepare request (online)

On a domain node:

```text
AuthorityRequest {
  subject_pubkey
  domain_id
  requested_roles[]
  policy_hash
}
```

Export as a file via USB.

---

### Step 2: Offline signing (Debian ORA)

On the offline machine:

1. Verify intent manually
2. Construct the AuthorityCertificate
3. Canonical-serialize it
4. Sign with the root private key
5. Output the certificate artifact

No network. No ASL required.
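The signing steps above can be sketched end to end. This is a stdlib-only illustration: HMAC-SHA256 stands in for the asymmetric signature the spec's key names suggest (Ed25519), and `ora_sign` / `verify` are hypothetical helper names:

```python
import hashlib
import hmac
import json

def canonical_bytes(obj: dict) -> bytes:
    return json.dumps(obj, sort_keys=True, separators=(",", ":")).encode()

def ora_sign(root_secret: bytes, cert: dict) -> str:
    # Steps 2-4 above: construct, canonical-serialize, sign.
    return hmac.new(root_secret, canonical_bytes(cert), hashlib.sha256).hexdigest()

def verify(root_secret: bytes, cert: dict, sig: str) -> bool:
    return hmac.compare_digest(ora_sign(root_secret, cert), sig)

root_secret = b"offline-root-key"  # hypothetical; never leaves the ORA
cert = {"subject_pubkey": "ed25519:aa93", "domain_id": "domain-D",
        "roles": ["execute"], "policy_hash": "sha256:deadbeef", "version": 1}
sig = ora_sign(root_secret, cert)
assert verify(root_secret, cert, sig)
# Any mutation of the certificate invalidates the signature.
assert not verify(root_secret, {**cert, "roles": ["execute", "federate"]}, sig)
```

With a real signature scheme, only the root public key would be needed online; the private half stays on the air-gapped machine.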

---

### Step 3: Import certificate (online)

* The certificate is imported as an ASL artifact
* Snapshot-pinned
* Referenced by the DAM

At this point, authority exists.

---

## 7. Relationship to the Domain Authority Manifest (DAM)

The DAM does **not** stand alone.

A DAM is valid **iff**:

1. `DAM.root_key` is certified by a `domain_root` certificate
2. `Certificate.policy_hash` matches `DAM.policy_hash`
3. The certificate is visible in the snapshot
4. The certificate signature validates against the offline root key

DAMs are *governed*, not self-asserted.

---

## 8. Validation Chain (Offline-Friendly)

To trust an action:

```
PER → PERSignature → Principal Key
    → DAM → AuthorityCertificate
    → Offline Root Public Key
```

No CRLs.
No OCSP.
No clocks.

Just hashes and signatures.
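The chain can be modeled as pure data checks. A hedged sketch: signature verification is reduced to issuer-membership tests, and any field names beyond those in sections 5 and 7 are assumptions:

```python
import hashlib

def h(s: str) -> str:
    return hashlib.sha256(s.encode()).hexdigest()

def validate_action(per, dam, cert, trusted_root_pubkeys):
    """Walk PER -> DAM -> AuthorityCertificate -> offline root pubkey."""
    # 1. The PER must be signed by a principal listed in the DAM.
    if per["principal_pubkey"] not in dam["principals"]:
        return False
    # 2. The certificate must anchor the DAM's root key under the same policy.
    if cert["subject_pubkey"] != dam["root_key"]:
        return False
    if cert["policy_hash"] != dam["policy_hash"]:
        return False
    # 3. The certificate must be issued by a known offline root.
    return cert["issued_by"] in trusted_root_pubkeys

dam = {"root_key": "ed25519:droot", "policy_hash": h("policy-v1"),
       "principals": ["ed25519:alice"]}
cert = {"subject_type": "domain_root", "subject_pubkey": "ed25519:droot",
        "policy_hash": h("policy-v1"), "issued_by": "ed25519:root1"}
per = {"principal_pubkey": "ed25519:alice"}
assert validate_action(per, dam, cert, {"ed25519:root1"})
assert not validate_action(per, dam, cert, {"ed25519:other"})
```

Every step is a lookup or hash comparison over snapshot-local data, which is what makes the chain clock-free and offline-checkable.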

---

## 9. Revocation Model (Deterministic)

There is **no live revocation**.

Instead:

* Publish a **new DAM**
* Omit revoked principals
* Reference a **new authority certificate**
* New snapshots enforce the new authority
* Old snapshots remain valid

This preserves determinism.
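Revocation by replacement can be sketched as "latest visible DAM wins"; the `logseq` stamp on a DAM and the `effective_dam` helper are illustrative assumptions:

```python
def effective_dam(dams, snapshot_logseq):
    """Latest DAM visible in the snapshot wins; older DAMs stay replayable."""
    visible = [d for d in dams if d["logseq"] <= snapshot_logseq]
    return max(visible, key=lambda d: d["logseq"]) if visible else None

dams = [
    {"logseq": 10, "principals": {"alice", "mallory"}},
    {"logseq": 20, "principals": {"alice"}},  # mallory revoked by omission
]
# A new snapshot enforces the new authority...
assert "mallory" not in effective_dam(dams, 25)["principals"]
# ...while an old snapshot still replays under the old authority.
assert "mallory" in effective_dam(dams, 15)["principals"]
```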

---

## 10. Federation Verification

When receiving state from another domain:

1. Load the snapshot
2. Load the DAM
3. Load the AuthorityCertificate artifacts
4. Verify against *known offline root pubkeys*
5. Verify the policy hash
6. Accept or reject

Federation trusts *roots*, not nodes.

---

## 11. Why This Beats X.509 for Your System

| Problem                | This System          |
| ---------------------- | -------------------- |
| Online dependency      | None                 |
| Mutable trust          | None                 |
| Time-based expiry      | None                 |
| Replay safety          | Guaranteed           |
| Snapshot compatibility | Native               |
| Forensics              | Complete audit trail |

---

## 12. Minimal Files on the Offline Debian Machine

```
/ora/
├── root.key
├── root.pub
├── policies/
├── issued/
│   └── authority-cert-*.bin
└── requests/
```

That’s it.

---

## 13. One-Sentence Summary

> **Offline authority certificates make domain trust explicit, immutable, and replayable — turning cryptographic signatures into constitutional facts rather than live permissions.**

---

If you want next, we can:

* Define the **canonical byte layout for AuthorityCertificate**
* Specify **how many root keys a domain may trust**
* Walk through a **full bootstrap from zero**
* Model **rescue-node temporary authority**
* Tie certificates into **ZFS snapshot metadata**

Just say where to continue.

@ -1,256 +0,0 @@
Below is a **clear, minimal, and opinionated specification** for **how many offline roots to trust**, designed to preserve determinism, avoid governance creep, and still give you real-world safety.

This is not “enterprise PKI”. It is **constitutional law for domains**.

---

# Offline Root Trust Specification

---

## 1. Design Goal

The offline root system must:

* Be **auditable**
* Be **offline**
* Be **stable across decades**
* Avoid quorum games, liveness assumptions, or time-based logic
* Preserve **deterministic replay**

Therefore:

> **Offline roots define legitimacy, not availability.**

---

## 2. Fundamental Rule (Normative)

> **A domain MUST trust a fixed, small set of offline root public keys.**

This set is **explicit**, **finite**, and **snapshot-pinned**.

---

## 3. Recommended Number of Offline Roots

### ✅ **Exactly 1–3 offline roots**

| Count       | When to use                                 |
| ----------- | ------------------------------------------- |
| **1 root**  | Personal domain, research system, prototype |
| **2 roots** | Organization with separation-of-duties      |
| **3 roots** | High-value or long-lived domain             |

❌ More than 3 is strongly discouraged.

---

## 4. Why Not More?

Because offline roots are not about redundancy — they are about **legitimacy**.

Problems with many roots:

* Ambiguous authority
* Governance disputes
* Non-deterministic interpretation
* Social quorum bugs (“who signed this?”)
* Long-term rot

Your system values **historical truth**, not organizational politics.

---

## 5. Root Trust Model

### 5.1 Root Set Definition

```text
OfflineRootSet {
  version     : u32
  root_keys[] : PublicKey   // sorted, unique
  threshold   : u8
}
```

This object itself is:

* Canonical
* Snapshot-pinned
* Hardcoded into verifier configs
* Rarely changed
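A sketch of the canonicalization implied by "sorted, unique" (the JSON-hash identity convention and the helper names are assumptions):

```python
import hashlib
import json

def canonical_root_set(version: int, root_keys, threshold: int) -> dict:
    # Enforce the "sorted, unique" invariant before the set is pinned.
    keys = sorted(set(root_keys))
    if not 1 <= threshold <= len(keys):
        raise ValueError("threshold must be satisfiable by the key set")
    return {"version": version, "root_keys": keys, "threshold": threshold}

def root_set_id(rs: dict) -> str:
    # One convention: the pinned identity is a hash of the canonical form.
    return hashlib.sha256(
        json.dumps(rs, sort_keys=True, separators=(",", ":")).encode()
    ).hexdigest()

a = canonical_root_set(1, ["ed25519:b4c1", "ed25519:aa93", "ed25519:aa93"], 1)
b = canonical_root_set(1, ["ed25519:aa93", "ed25519:b4c1"], 1)
# Input order and duplicates do not change the pinned identity.
assert root_set_id(a) == root_set_id(b)
```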

---

## 6. Threshold Rules (Critical)

### 6.1 Threshold = 1 (Default)

> **Exactly one root signature is sufficient.**

This is the recommended default.

Why:

* Deterministic
* Simple
* No coordination needed
* No partial legitimacy

This matches your *“constitutional”* model.

---

### 6.2 Threshold > 1 (Optional, Advanced)

If you must:

| Roots | Threshold |
| ----- | --------- |
| 2     | 2-of-2    |
| 3     | 2-of-3    |

Rules:

* The threshold MUST be static
* The threshold MUST be declared
* Partial signatures are meaningless
* Verification must be order-independent

⚠️ Avoid 1-of-3 — it defeats the point.

---

## 7. What Roots Are Allowed to Sign

Offline roots may sign **only**:

* `AuthorityCertificate` artifacts
* Root rotation statements (rare)
* Policy ratification certificates (optional)

They must **never** sign:

* Artifacts
* PERs
* Snapshots
* TGK edges
* Execution data

This keeps authority narrow and legible.

---

## 8. Root Rotation (Rare, Heavyweight)

Root rotation is **constitutional change**.

### Rules:

1. The new root set is defined
2. The old root(s) sign a rotation artifact
3. The rotation artifact is imported
4. New snapshots reference the new root set
5. Old snapshots remain valid forever

Rotation frequency: *measured in years, not days*.

---

## 9. Validation Algorithm (Deterministic)

To validate authority:

1. Load the snapshot
2. Resolve the DAM
3. Resolve the AuthorityCertificates
4. Verify the certificate signature(s)
5. Check:
   * signer ∈ OfflineRootSet
   * threshold satisfied
6. Accept or reject

No time.
No online checks.
No ambiguity.
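The threshold check can be written order-independently by counting distinct valid signers. A sketch, with the cryptographic verification itself elided behind a validity flag:

```python
def threshold_satisfied(signatures, root_set):
    """Order-independent k-of-n check: count DISTINCT valid root signers.
    `signatures` maps signer pubkey -> whether its signature verified."""
    valid_signers = {
        signer for signer, ok in signatures.items()
        if ok and signer in root_set["root_keys"]
    }
    return len(valid_signers) >= root_set["threshold"]

root_set = {"root_keys": ["ed25519:aa93", "ed25519:b4c1", "ed25519:c019"],
            "threshold": 2}
# 2-of-3 satisfied; a non-root signer contributes nothing.
assert threshold_satisfied(
    {"ed25519:aa93": True, "ed25519:c019": True, "ed25519:evil": True},
    root_set)
# A single root signer does not meet the threshold.
assert not threshold_satisfied({"ed25519:aa93": True}, root_set)
```

Using a set makes the result independent of signature order, which is exactly the rule in section 6.2.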

---

## 10. Federation Implications

A domain MAY trust:

* Its own offline roots
* Explicitly configured foreign roots

Federation is **opt-in trust**, never transitive.

---

## 11. Failure Modes (Explicit)

| Situation                  | Result                         |
| -------------------------- | ------------------------------ |
| Root key lost              | Domain frozen (history intact) |
| Root compromised           | Publish new root set + DAM     |
| Disagreement between roots | No new authority issued        |
| Conflicting certificates   | Reject both                    |

This is intentional. Ambiguity is worse than a halt.

---

## 12. Minimal Example

```yaml
offline_roots:
  version: 1
  threshold: 1
  root_keys:
    - ed25519:aa93...
```

or

```yaml
offline_roots:
  version: 1
  threshold: 2
  root_keys:
    - ed25519:aa93...
    - ed25519:b4c1...
    - ed25519:c019...
```

---

## 13. Why This Fits Your System

✔ Snapshot-safe
✔ Replayable
✔ Offline-friendly
✔ Human-auditable
✔ Minimal governance
✔ No temporal logic

Most importantly:

> **Legitimacy is a historical fact, not a service.**

---

## 14. One-Sentence Takeaway

> **Trust as few offline roots as possible — because roots define history, not availability, and ambiguity at the root level destroys determinism.**

If you want next, we can:

* Specify **root rotation artifacts**
* Model **emergency recovery**
* Define **foreign-domain trust pinning**
* Walk through a **zero-to-first-snapshot bootstrap**

Just say the word.

@ -1,271 +0,0 @@
NOTE: Superseded by tier1 TGK/1 and vendor/amduat/tier1/tgk-1-core.md; retained for historical context.

Absolutely — here’s a **formal draft of operator pseudocode** for the unified ASL + TGK execution plan. This pseudocode captures **shard/SIMD acceleration, deterministic execution, snapshot safety, and DAG traversal**, mapping directly to ENC-ASL-TGK-INDEX.

---

# Operator Pseudocode (ASL + TGK Execution Plan)

---

## 1. SegmentScan Operator

```c
void SegmentScan(segment_t *segment, snapshot_range_t snapshot, record_buffer_t *out_buffer) {
    // Iterate over all records in the segment
    for (uint64_t i = 0; i < segment->record_count; i++) {
        record_t rec = segment->records[i];

        // Snapshot visibility: skip records newer than the snapshot bound
        if (rec.logseq > snapshot.logseq_max) continue;

        // Append to the output buffer
        out_buffer->records[out_buffer->count++] = rec;
    }
}
```

* Can be **shard-parallelized**
* Works for both **ASL artifacts** and **TGK edges**
* `record_buffer_t` is preallocated, SIMD-aligned

---

## 2. IndexFilter Operator

```c
// Pseudocode: the slice syntax `[i : i+SIMD_WIDTH]` denotes a SIMD lane
// group, not valid C. Each mask op evaluates SIMD_WIDTH records in parallel.
void IndexFilter(record_buffer_t *in_buffer, filter_t *filter, record_buffer_t *out_buffer) {
    for (uint64_t i = 0; i < in_buffer->count; i += SIMD_WIDTH) {
        simd_mask_t mask = SIMD_TRUE;

        // Filter by artifact type
        if (filter->has_type_tag) {
            mask &= SIMD_EQ(in_buffer->type_tags[i : i+SIMD_WIDTH], filter->artifact_type_tag);
        }

        // Filter by edge type
        if (filter->has_edge_type) {
            mask &= SIMD_EQ(in_buffer->edge_type_keys[i : i+SIMD_WIDTH], filter->edge_type_key);
        }

        // Role filter (for TGK edges)
        if (filter->role) {
            mask &= SIMD_EQ(in_buffer->roles[i : i+SIMD_WIDTH], filter->role);
        }

        // Compact passing records into the output, preserving input order
        SIMD_STORE_MASKED(in_buffer->records[i : i+SIMD_WIDTH], mask, out_buffer->records);
    }
    out_buffer->count = count_masked_records(out_buffer);
}
```

* SIMD ensures **parallel, vectorized evaluation**
* Deterministic, since input order is preserved
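A scalar model of the masked filter above, showing why preserved input order makes the result deterministic (pure-Python sketch; record field names are illustrative):

```python
def index_filter(records, type_tag=None, role=None):
    """Scalar model of the SIMD masked filter: build a per-record mask,
    then compact passing records while preserving input order."""
    mask = [
        (type_tag is None or r["type_tag"] == type_tag) and
        (role is None or r.get("role") == role)
        for r in records
    ]
    return [r for r, keep in zip(records, mask) if keep]

records = [
    {"id": 3, "type_tag": 7, "role": "member"},
    {"id": 1, "type_tag": 7, "role": "owner"},
    {"id": 2, "type_tag": 9, "role": "member"},
]
out = index_filter(records, type_tag=7)
# Output order matches input order, so the result is deterministic.
assert [r["id"] for r in out] == [3, 1]
```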

---

## 3. Merge Operator

```c
void Merge(record_buffer_t **inputs, int num_inputs, record_buffer_t *out_buffer) {
    // Min-heap keyed by (logseq, canonical_id)
    min_heap_t heap = build_heap(inputs, num_inputs);

    while (!heap_empty(heap)) {
        record_t rec = heap_pop(heap);

        out_buffer->records[out_buffer->count++] = rec;

        // Advance the source buffer the record came from
        heap_advance_source(heap, rec.source_buffer_id);
    }
}
```

* Orders by **logseq ascending + canonical ID tie-breaker**
* Deterministic across shards
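The merge discipline can be sketched with the standard library, with records reduced to `(logseq, canonical_id)` tuples for illustration:

```python
import heapq

def merge_shards(*shards):
    """k-way merge keyed by (logseq, canonical_id): logseq ascending,
    canonical ID as the deterministic tie-breaker."""
    return list(heapq.merge(*shards, key=lambda r: (r[0], r[1])))

# Each shard is already locally sorted; records are (logseq, canonical_id).
shard_a = [(1, "b"), (3, "a")]
shard_b = [(1, "a"), (2, "c")]
merged = merge_shards(shard_a, shard_b)
# Equal logseq 1 is resolved by canonical_id: "a" before "b".
assert merged == [(1, "a"), (1, "b"), (2, "c"), (3, "a")]
```

Because the key is a total order over records, the merged stream is identical no matter how the shards were scanned in parallel.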

---

## 4. TGKTraversal Operator

```c
void TGKTraversal(record_buffer_t *in_buffer, uint32_t depth, snapshot_range_t snapshot, record_buffer_t *out_buffer) {
    record_buffer_t current_buffer = *in_buffer;

    for (uint32_t d = 0; d < depth; d++) {
        record_buffer_t next_buffer = allocate_buffer();

        for (uint64_t i = 0; i < current_buffer.count; i++) {
            record_t rec = current_buffer.records[i];

            // Skip records not visible in the snapshot
            if (rec.logseq > snapshot.logseq_max) continue;

            // Expand edges deterministically
            for (uint64_t j = 0; j < rec.to_count; j++) {
                record_t edge = lookup_edge(rec.to_nodes[j]);
                next_buffer.records[next_buffer.count++] = edge;
            }
        }

        // Merge deterministically into the output
        MergeBuffers(&next_buffer, 1, out_buffer);

        // Prepare for the next depth level
        current_buffer = next_buffer;
    }
}
```

* Expansion proceeds **per depth level**
* Deterministic ordering guaranteed
* Snapshot-safe traversal

---

## 5. Projection Operator

```c
void Projection(record_buffer_t *in_buffer, projection_mask_t mask, record_buffer_t *out_buffer) {
    for (uint64_t i = 0; i < in_buffer->count; i++) {
        record_t rec = in_buffer->records[i];
        projected_record_t prow;

        if (mask.project_artifact_id) prow.artifact_id = rec.artifact_id;
        if (mask.project_tgk_edge_id) prow.tgk_edge_id = rec.tgk_edge_id;
        if (mask.project_node_id)     prow.node_id     = rec.node_id;
        if (mask.project_type_tag)    prow.type_tag    = rec.type_tag;

        out_buffer->records[out_buffer->count++] = prow;
    }
}
```

---

## 6. Aggregation Operator

```c
void Aggregation(record_buffer_t *in_buffer, aggregation_accumulator_t *acc) {
    for (uint64_t i = 0; i < in_buffer->count; i++) {
        record_t rec = in_buffer->records[i];

        if (acc->count_enabled)        acc->count++;
        if (acc->sum_type_tag_enabled) acc->sum_type_tag += rec.type_tag;
        if (acc->union_enabled)        acc->union_set.insert(rec.artifact_id);
    }
}
```

* Deterministic aggregation across shards, because the input is **pre-merged and ordered**

---

## 7. TombstoneShadow Operator

```c
// Pseudocode: the final `for each ... in sorted(...)` loop is shorthand for
// iterating the map in (logseq, canonical_id) order.
void TombstoneShadow(record_buffer_t *in_buffer, snapshot_range_t snapshot, record_buffer_t *out_buffer) {
    hashmap_t latest_per_id;

    for (uint64_t i = 0; i < in_buffer->count; i++) {
        record_t rec = in_buffer->records[i];

        // Skip records outside the snapshot
        if (rec.logseq > snapshot.logseq_max) continue;

        record_t *existing = hashmap_get(&latest_per_id, rec.canonical_id);

        if (!existing || rec.logseq > existing->logseq) {
            hashmap_put(&latest_per_id, rec.canonical_id, &rec);
        }
    }

    // Write deterministic output
    for each rec in sorted(latest_per_id by logseq + canonical_id) {
        out_buffer->records[out_buffer->count++] = rec;
    }
}
```

* Preserves **snapshot safety and deterministic order**
* Handles tombstones and shadowed records
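A runnable model of the same pass. The `deleted` tombstone flag is an assumption for illustration; the C pseudocode above only keeps the latest version per canonical ID:

```python
def tombstone_shadow(records, logseq_max):
    """Keep only the latest visible version per canonical_id, then emit in
    deterministic (logseq, canonical_id) order. A record with deleted=True
    is a tombstone that shadows earlier versions."""
    latest = {}
    for rec in records:
        if rec["logseq"] > logseq_max:
            continue  # not visible in this snapshot
        prev = latest.get(rec["canonical_id"])
        if prev is None or rec["logseq"] > prev["logseq"]:
            latest[rec["canonical_id"]] = rec
    live = [r for r in latest.values() if not r.get("deleted")]
    return sorted(live, key=lambda r: (r["logseq"], r["canonical_id"]))

records = [
    {"canonical_id": "a", "logseq": 1},
    {"canonical_id": "a", "logseq": 5, "deleted": True},  # tombstone
    {"canonical_id": "b", "logseq": 2},
]
# At snapshot 10 the tombstone shadows "a"; at snapshot 3 "a" is still live.
assert [r["canonical_id"] for r in tombstone_shadow(records, 10)] == ["b"]
assert [r["canonical_id"] for r in tombstone_shadow(records, 3)] == ["a", "b"]
```

Because visibility is decided purely by `logseq`, older snapshots replay the pre-deletion state without any special casing.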

---

## 8. Shard Dispatch & SIMD Integration

* Each shard executes **SegmentScan + IndexFilter** independently
* Results are merged via **Merge**
* SIMD acceleration applies to **filters inside IndexFilter** and to TGKTraversal expansion
* The memory layout ensures **alignment and cache-friendly access**

---

## 9. Execution Plan Runner

```c
void ExecutePlan(exec_plan_t *plan, snapshot_range_t snapshot, record_buffer_t *final_output) {
    record_buffer_t buffers[MAX_OPERATORS];

    for (uint32_t i = 0; i < plan->operator_count; i++) {
        operator_def_t *op = &plan->operators[i];

        switch (op->op_type) {
        case OP_SEGMENT_SCAN:
            SegmentScan(op->params.segment_scan, snapshot, &buffers[i]);
            break;
        case OP_INDEX_FILTER:
            IndexFilter(&buffers[op->inputs[0]], &op->params.index_filter, &buffers[i]);
            break;
        case OP_MERGE:
            Merge(get_input_buffers(op, buffers), op->input_count, &buffers[i]);
            break;
        case OP_PROJECTION:
            Projection(&buffers[op->inputs[0]], op->params.projection, &buffers[i]);
            break;
        case OP_TGK_TRAVERSAL:
            TGKTraversal(&buffers[op->inputs[0]], op->params.tgk_traversal.traversal_depth,
                         snapshot, &buffers[i]);
            break;
        case OP_AGGREGATION:
            Aggregation(&buffers[op->inputs[0]], &op->params.aggregation);
            break;
        case OP_TOMBSTONE_SHADOW:
            TombstoneShadow(&buffers[op->inputs[0]], snapshot, &buffers[i]);
            break;
        default:
            // Handle remaining operators (SIMDFilter, ShardDispatch, LimitOffset)
            break;
        }
    }

    // Final output comes from the last operator
    *final_output = buffers[plan->operator_count - 1];
}
```

* **DAG execution** respects `inputs[]`
* **Buffers are indexed by operator ID**
* Deterministic and **snapshot-safe**

---

This pseudocode fully captures:

* Segment scanning and sharding
* SIMD-accelerated filtering
* TGK DAG traversal
* Projection, aggregation, and tombstone shadowing
* Deterministic merging of shards
* Snapshot-safe execution

---

Next steps could include:

* **Integration with PEL deterministic DAG programs**, where **PEL program outputs feed into these operators**
* **Optimizations for block caching and prefetching** in ENC-ASL-TGK-INDEX

Do you want me to draft that next?

@ -1,207 +0,0 @@
NOTE: Superseded by tier1 TGK/1 and vendor/amduat/tier1/tgk-1-core.md; retained for historical context.

Absolutely — here’s a **formal draft for integrating PEL deterministic DAG programs with the ASL + TGK execution plan**, including how PEL outputs feed into operators while preserving snapshot safety, determinism, and memory efficiency.

---

# PEL-Integrated Execution Model with ASL + TGK

---

## 1. Purpose

This specification defines:

1. How **PEL programs** (deterministic DAG programs) are executed in conjunction with **ASL artifacts** and **TGK edges**.
2. How **PEL outputs (execution receipts)** feed into **index operators**.
3. How **snapshot safety, determinism, and DAG traversal** are preserved across layers.
4. Integration of **sharding, SIMD acceleration, and tombstone shadowing** into a unified execution model.

---

## 2. Key Concepts

* **PEL Program (PP)**: A deterministic DAG with **input artifacts**, **output artifacts**, and **computational nodes** (concat, slice, primitive ops).
* **PEL Execution Receipt (PER)**: An artifact recording a program execution, including:
  * Inputs consumed
  * Outputs produced
  * Canonical logseq / snapshot
* **Index Operators**: SegmentScan, IndexFilter, Merge, TGKTraversal, TombstoneShadow, Projection, Aggregation.
* **Snapshot Safety**: All reads of artifacts or TGK edges are constrained to `logseq ≤ snapshot`.
* **Determinism**: Execution order is fixed by **logseq ascending + canonical tie-breaker**.

---

## 3. Integration Principles

### 3.1 PEL Program Execution as Input

1. PEL program outputs (PER artifacts) are treated as **ASL artifacts** in execution plans.
2. Operators can consume **either raw artifacts or PERs**.
3. If the execution plan requires DAG traversal of PER-derived edges, treat the **PER as a TGK edge node**.

```text
PEL program outputs → PER artifact → SegmentScan → IndexFilter → TGKTraversal
```

---

### 3.2 Deterministic DAG Mapping

1. Each PEL DAG node corresponds to a **logical operator in the execution plan**.
2. The execution plan DAG integrates **PEL DAG nodes** as **virtual operators**:
   * Input nodes → SegmentScan / IndexFilter
   * Computation nodes → Projection / Aggregation
   * Outputs → Artifact storage in ASL

---

### 3.3 Snapshot Propagation

* **Input artifacts** for PEL programs are fetched with snapshot bounds:

  ```
  artifact.logseq ≤ program.snapshot
  ```

* **Output PER artifacts** are written with:

  ```
  logseq = max(input_logseq) + 1
  ```

* All downstream index operators inherit the **snapshot constraints**.
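The two rules above can be sketched together (field names and the `run_pel_program` helper are illustrative):

```python
def run_pel_program(inputs, snapshot_logseq):
    """Snapshot propagation sketch: every input must satisfy
    logseq <= snapshot, and the output PER is stamped with
    logseq = max(input logseqs) + 1."""
    for art in inputs:
        if art["logseq"] > snapshot_logseq:
            raise ValueError("input not visible in snapshot")
    per_logseq = max(art["logseq"] for art in inputs) + 1
    return {"kind": "PER", "logseq": per_logseq,
            "inputs": sorted(a["id"] for a in inputs)}

inputs = [{"id": "a1", "logseq": 4}, {"id": "a2", "logseq": 7}]
per = run_pel_program(inputs, snapshot_logseq=10)
assert per["logseq"] == 8
# An input beyond the snapshot bound is rejected outright.
try:
    run_pel_program(inputs, snapshot_logseq=5)
    raise AssertionError("should have rejected input")
except ValueError:
    pass
```

Stamping the PER just past its newest input keeps it invisible to the snapshot it was computed from, while making it visible to all later snapshots.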
|
||||
|
||||
---
|
||||
|
||||
## 4. Runtime Integration Flow
|
||||
|
||||
1. **Load PEL Program DAG**
|
||||
|
||||
* Validate deterministic operators
|
||||
* Identify **input artifacts** (raw or PER)
|
||||
|
||||
2. **Execute PEL Program**
|
||||
|
||||
* Evaluate primitives (concat, slice, etc.)
|
||||
* Generate output artifacts (PER)
|
||||
* Each primitive produces deterministic outputs
|
||||
|
||||
3. **Register Outputs in Index**
|
||||
|
||||
* PER artifacts are **visible to SegmentScan**
|
||||
* Type tag and canonical ID added to **shard-local buffers**
|
||||
|
||||
4. **Execute Index Operators**
|
||||
|
||||
* SegmentScan → IndexFilter → TGKTraversal
|
||||
* Merge shards deterministically
|
||||
* Apply TombstoneShadow
|
||||
* Projection/Aggregation
|
||||
|
||||
5. **Return Results**
|
||||
|
||||
* Combined output includes:
|
||||
|
||||
* Raw ASL artifacts
|
||||
* PER artifacts
|
||||
* TGK traversal outputs
|
||||
|
||||
---
|
||||
|
||||
## 5. Pseudocode Sketch
|
||||
|
||||
```c
|
||||
void ExecutePELProgramWithIndex(PELProgram *pp, snapshot_range_t snapshot, record_buffer_t *final_output) {
|
||||
// Step 1: Load inputs (artifacts or PERs)
|
||||
record_buffer_t input_buffer;
|
||||
LoadPELInputs(pp->inputs, snapshot, &input_buffer);
|
||||
|
||||
// Step 2: Execute PEL DAG
|
||||
record_buffer_t per_buffer;
|
||||
ExecutePEL(pp, &input_buffer, snapshot, &per_buffer);
|
||||
|
||||
// Step 3: Register PERs in ASL/TGK buffers
|
||||
record_buffer_t combined_buffer;
|
||||
Merge(&input_buffer, &per_buffer, 2, &combined_buffer);
|
||||
|
||||
// Step 4: Run Index Operators
|
||||
exec_plan_t *plan = BuildExecutionPlan(pp, &combined_buffer);
|
||||
ExecutePlan(plan, snapshot, final_output);
|
||||
}
|
||||
```
|
||||
|
||||
* `LoadPELInputs` ensures **snapshot safety**
|
||||
* `ExecutePEL` guarantees **deterministic outputs**
|
||||
* `Merge` maintains **logseq + canonical ordering**
|
||||
* `ExecutePlan` runs **ASL/TGK index operators with SIMD/shard acceleration**
|
||||
|
||||
---
|
||||
|
||||
## 6. Determinism Guarantees
|
||||
|
||||
1. **PEL DAG evaluation** deterministic per program snapshot
|
||||
2. **PER artifacts** incorporated with canonical ID + logseq
|
||||
3. **SegmentScan, Merge, TGKTraversal** maintain global ordering
|
||||
4. **TombstoneShadow** ensures no resurrection of deleted artifacts
|
||||
5. **SIMD/shard parallelism** does not affect final order

---

## 7. Memory Layout Integration

* PER artifacts stored in **shard-local buffers** alongside raw artifacts
* Type tags and canonical IDs mapped for **SIMD filtering**
* TGK traversal buffers extended to include **PER edges** where needed
* Snapshot ranges propagated to all buffers

```c
struct shard_buffer {
    uint64_t        *artifact_ids;
    uint64_t        *tgk_edge_ids;
    uint32_t        *type_tags;
    uint8_t         *roles;
    uint64_t         count;
    snapshot_range_t snapshot;
};
```

---

## 8. DAG Operator Mapping Summary

| PEL DAG Node        | Index Operator Equivalent | Notes                          |
| ------------------- | ------------------------- | ------------------------------ |
| Input Artifact      | SegmentScan               | Snapshot-limited               |
| Concat / Slice      | Projection                | Deterministic fields selected  |
| TGK Projection      | TGKTraversal              | DAG traversal depth controlled |
| Aggregate           | Aggregation               | Counts, sums, unions           |
| PER Artifact Output | SegmentScan               | Feed into downstream operators |

---

## 9. Federation & Multi-Machine Considerations

* PER artifacts can be **federated across nodes**
* Deterministic execution ensures **replayable results**
* Shard/SIMD acceleration applied locally
* Snapshot boundaries propagated across federation nodes

---

## 10. Summary

This integration:

* Treats **PEL programs as first-class DAG nodes** in execution plans
* Allows **PER artifacts** to feed seamlessly into index operators
* Preserves **snapshot safety, determinism, and shard/SIMD acceleration**
* Fully unifies **PEL execution** with **ASL artifact and TGK edge indexing**
* Enables **federated, replayable execution** across machines

---

Next step could be to **formalize federation propagation rules and cross-node deterministic replay**, ensuring PER artifacts are synchronized across pools and snapshots.

Do you want me to draft that next?
---

Below is a **minimal, deterministic, snapshot-safe PER Signature Layout** that fits *exactly* into your ASL / PEL / TGK / DAM model.

No PKI, no timestamps, no mutable metadata — only what is required to prove **“this execution happened, deterministically, under this authority.”**

---

# PER Signature Layout Specification

---

## 1. Purpose

The **PER signature** certifies that:

* A specific PEL execution occurred
* Against a specific snapshot
* With specific inputs
* Producing a specific output
* Under an authorized domain principal

It enables:

* Deterministic replay
* Cross-domain verification
* Non-repudiation
* Offline validation

---

## 2. What Is Signed (Authoritative Statement)

The signature covers **only immutable, deterministic identifiers**:

```
"I assert that PER X was produced from inputs Y
 under snapshot S at logseq L in domain D"
```

Nothing more. Nothing less.

---

## 3. Canonical Signing Payload

### 3.1 Canonical Payload Structure

This structure is serialized in a **canonical byte order** (defined below).

```text
PERSignaturePayload {
    domain_id            : DomainID
    snapshot_id          : SnapshotID
    per_artifact_id      : ArtifactID
    input_artifact_ids[] : ArtifactID (sorted)
    program_id           : ProgramID
    logseq               : u64
}
```

---

### 3.2 Field Semantics

| Field                  | Meaning                                                  |
| ---------------------- | -------------------------------------------------------- |
| `domain_id`            | Domain asserting the execution                           |
| `snapshot_id`          | Snapshot that bounded inputs                             |
| `per_artifact_id`      | ArtifactID of PER output                                 |
| `input_artifact_ids[]` | All direct inputs (artifacts + PERs), sorted canonically |
| `program_id`           | Stable identifier for PEL program                        |
| `logseq`               | Deterministic execution order                            |

---

## 4. Canonicalization Rules (Normative)

Determinism depends on this.

1. **Byte order:** big-endian
2. **Arrays:** sorted lexicographically by ArtifactID
3. **No optional fields**
4. **No timestamps**
5. **No environment data**
6. **No machine identifiers**
If two nodes produce the same PER under the same snapshot → **payload bytes are identical**.
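A dependency-free sketch of these rules, assuming for illustration that every ID is a `u64` and that the input array is length-prefixed (neither the field width nor the prefix is fixed by this spec):

```c
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

/* Rule 1: all integers written big-endian. */
static size_t put_u64_be(uint8_t *out, uint64_t v) {
    for (int i = 0; i < 8; i++)
        out[i] = (uint8_t)(v >> (56 - 8 * i));
    return 8;
}

static int cmp_u64(const void *a, const void *b) {
    uint64_t x = *(const uint64_t *)a, y = *(const uint64_t *)b;
    return (x > y) - (x < y);
}

/* Illustrative canonical encoder for PERSignaturePayload.
 * Returns the number of bytes written to out. */
static size_t encode_payload(uint8_t *out,
                             uint64_t domain_id, uint64_t snapshot_id,
                             uint64_t per_artifact_id,
                             uint64_t *inputs, size_t n_inputs,
                             uint64_t program_id, uint64_t logseq) {
    size_t k = 0;
    qsort(inputs, n_inputs, sizeof *inputs, cmp_u64);   /* rule 2 */
    k += put_u64_be(out + k, domain_id);
    k += put_u64_be(out + k, snapshot_id);
    k += put_u64_be(out + k, per_artifact_id);
    k += put_u64_be(out + k, (uint64_t)n_inputs);       /* assumed prefix */
    for (size_t i = 0; i < n_inputs; i++)
        k += put_u64_be(out + k, inputs[i]);
    k += put_u64_be(out + k, program_id);
    k += put_u64_be(out + k, logseq);
    return k;
}
```

The payload carries no timestamps, environment data, or machine identifiers, so the bytes are a pure function of the fields.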

---

## 5. Signature Object Layout

The signature itself is an ASL artifact.

```text
PERSignature {
    payload_hash : Hash
    public_key   : PublicKey
    signature    : Signature
    algorithm    : SigAlgorithm
}
```

---

### 5.1 Field Semantics

| Field          | Meaning                                |
| -------------- | -------------------------------------- |
| `payload_hash` | Hash of canonical payload bytes        |
| `public_key`   | Principal key used (must exist in DAM) |
| `signature`    | Signature over `payload_hash`          |
| `algorithm`    | e.g. ed25519                           |

---

## 6. Relationship to TGK

The signature is linked via TGK edges:

```
PER ──[certified_by]──> PERSignature
PERSignature ──[asserted_under]──> Snapshot
PERSignature ──[asserted_by]──> Principal
```

These edges are immutable and snapshot-bound.

---

## 7. Validation Procedure (Normative)

To validate a PER:

1. Load snapshot
2. Resolve DAM
3. Locate PER artifact
4. Locate PERSignature artifact
5. Recompute canonical payload
6. Verify:

   * `hash(payload) == payload_hash`
   * `signature` valid for `public_key`
   * `public_key ∈ DAM`
   * `role ∈ {execute}`
   * `snapshot_id` visible

7. Accept PER as authoritative
No online checks required.
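The procedure reduces to one all-or-nothing check; the predicate struct below is hypothetical scaffolding standing in for the snapshot, DAM, and signature-library lookups:

```c
#include <stdbool.h>

/* Hypothetical inputs to validation; in the real system these come
 * from the snapshot, the DAM, and a signature library. */
typedef struct {
    bool payload_hash_matches;   /* hash(payload) == payload_hash */
    bool signature_valid;        /* signature verifies for key    */
    bool key_in_dam;             /* public_key present in DAM     */
    bool role_execute;           /* key holds the execute role    */
    bool snapshot_visible;       /* snapshot_id visible           */
} per_checks_t;

typedef enum { PER_AUTHORITATIVE, PER_REJECTED } per_verdict_t;

/* All checks must pass; any failure rejects the PER (see section 8). */
static per_verdict_t validate_per(per_checks_t c) {
    if (c.payload_hash_matches && c.signature_valid &&
        c.key_in_dam && c.role_execute && c.snapshot_visible)
        return PER_AUTHORITATIVE;
    return PER_REJECTED;
}
```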

---

## 8. Failure Modes (Explicit)

| Condition          | Result                    |
| ------------------ | ------------------------- |
| Missing signature  | PER is informational only |
| Invalid signature  | Reject PER                |
| Key not in DAM     | Reject PER                |
| Snapshot mismatch  | Reject PER                |
| Input set mismatch | Reject PER                |
| Logseq conflict    | Reject or quarantine      |

---

## 9. Why This Layout Works

✔ Minimal
✔ Deterministic
✔ Snapshot-safe
✔ Federation-safe
✔ Replayable
✔ Offline-verifiable

And most importantly:

> **The signature certifies meaning, not mechanics.**

---

## 10. One-Sentence Summary

> **A PER signature is a signed statement that a deterministic execution occurred under a specific snapshot, producing a specific artifact, by an authorized principal.**

---

If you want, next we can:

* Define **ProgramID canonicalization**
* Specify **policy hash contents**
* Walk through a full signed PER example (bytes → hash → signature)
* Show federation verification step-by-step

Just say where to go next.
---

Yes — this is exactly the *right* point to be at.

What you’re describing is **domain nesting + scoped publication**, not special cases. And your architecture already supports it — you just need to *name the rules*.

Below is the **clean, general solution** that works for:

* Personal rescue missions
* Group domains
* Organizational domains
* Cross-user federation
* Selective publication
* Zero impersonation

---

# Personal Domains, Group Domains, and Safe Publication

---

## 1. First principle (this makes everything work)

> **Domains are not hierarchical by default — trust is.**

There is:

* No implicit parent/child authority
* No inherited permissions
* No automatic visibility

Everything is **explicit and pinned**.

---

## 2. Your personal domain (root of agency)

You have a **personal root domain**:

```
domain_id: me.example
```

This is:

* Your cryptographic identity in the system
* Anchored by *your* offline roots
* The ultimate source of authority for *your* actions

All other domains you control are **authorized by this domain**, not children of it.

---

## 3. Rescue domain (ephemeral, scoped)

For the rescue mission:

```
domain_id: me.example/rescue/laptop-2014
```

This domain:

* Is freshly minted
* Has its own DAM
* Has limited policy
* Is authorized *by you*
* Can be destroyed or archived later

It does **not** inherit authority — it is **delegated**.

---

## 4. Historical domain (referenced, inert)

The old laptop becomes:

```
domain_id: me.example/legacy/laptop-2014
```

This domain:

* Has no active authority
* No DAM
* No roots
* Exists only as a provenance subject

This avoids impersonation while preserving meaning.

---

## 5. How you safely create your own domain

### Step 1 — Mint a new domain key

On an online or offline machine:

* Generate domain root key
* Generate DAM
* Define policy hash (likely restrictive)

### Step 2 — Certify it with your personal root

On offline ORA:

* Issue AuthorityCertificate:

  * subject_type: domain_root
  * domain_id: me.example/rescue/laptop-2014

* Sign with *your* personal offline root
This is delegation, not inheritance.
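A sketch of what such a delegation certificate might carry, in the text-struct style used elsewhere in these notes; the field names are illustrative, not a fixed schema:

```text
AuthorityCertificate {
    subject_type : domain_root
    domain_id    : me.example/rescue/laptop-2014
    subject_key  : <rescue domain root public key>
    issuer       : me.example
    issuer_key   : <personal offline root public key>
    policy_hash  : <rescue domain policy hash>
    signature    : <signature by the personal offline root>
}
```

The issuer asserts only that the subject key may act as that scoped domain’s root; nothing about the personal domain’s own authority is transferred.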

---

## 6. Group domains and org domains (same model)

A group domain:

```
domain_id: friends.example/photos
```

* Has its own roots
* Its own DAM
* Its own policy
* You may or may not be a principal

If you are:

* You hold a **principal certificate** issued by that domain’s roots
* Your personal domain does *not* override group authority

This prevents privilege bleed.

---

## 7. Publishing rescued material (the key part)

You can publish **selectively**, **safely**, and **without confusion**.

### 7.1 Publish inside your personal domain

```
me.example/rescue/laptop-2014
    ──publish──> me.example/archive
```

Mechanism:

* Federation with trust pin
* Scope-limited (READ_ARTIFACTS)
* Snapshot-pinned

---

### 7.2 Publish to a group domain

```
me.example/rescue/laptop-2014
    ──publish──> friends.example/photos
```

Only if:

* friends.example has pinned your rescue domain
* Scope allows READ_ARTIFACTS
* Policy hashes are compatible

No unilateral publishing.

---

### 7.3 Publish publicly (one-way)

```
me.example/rescue/laptop-2014
    ──publish──> public.example/archive
```

* Public domain pins *you*
* You never pin them
* One-way trust

---

## 8. Safe defaults (important)

### By default:

* New domains are **private**
* Nothing federates
* Nothing publishes
* Nothing is trusted

You must explicitly:

* Publish snapshots
* Pin trust
* Declare scope

This prevents accidental leakage.

---

## 9. Why this works for *all* users

This model scales because:

✔ Individuals control their own roots
✔ Groups define their own authority
✔ Domains don’t impersonate each other
✔ Publication is opt-in
✔ Provenance stays intact

And critically:

> **You never need global user accounts.**

---

## 10. Mental model (keep this one)

* **Personal domain** = identity
* **Rescue domain** = action
* **Historical domain** = subject
* **Group domain** = collaboration
* **Publication** = explicit federation
* **Trust** = pinned, not assumed

---

## 11. One-sentence takeaway

> **You act through scoped domains you control, assert claims about history without impersonation, and publish only what you choose — all under explicit, cryptographically pinned trust.**

---

If you want next, we can:

* Specify **domain delegation certificates**
* Define **publication workflows (CLI-level)**
* Model **revocation without retroactive damage**
* Walk through a **full personal rescue → archive → share flow**

You’re now designing a system that treats *personal computing history* as first-class, verifiable data — and you’re doing it the right way.
---

Below is a **minimal, sufficient, and future-proof specification for the Policy Hash**, designed to *anchor meaning without freezing evolution*.

It is intentionally **small**, **deterministic**, and **composable**, and it fits cleanly into your DAM + PER + federation model.

---

# Policy Hash Specification

---

## 1. Purpose of the Policy Hash

The **policy hash** binds **domain rules** to **snapshots and authority** without embedding mutable text inside every artifact.

It answers:

> *“Under what rules was this snapshot, PER, or publication considered valid?”*

The hash ensures:

* Deterministic interpretation
* Replay safety
* Cross-domain verification
* Explicit policy evolution

---

## 2. What the Policy Hash Is (and Is Not)

### Is:

✔ A content hash of **policy assertions**
✔ Snapshot-pinned
✔ Interpreted identically across nodes

### Is Not:

✘ A live configuration
✘ An ACL
✘ A rules engine
✘ A machine policy

---

## 3. Policy Hash Coverage (Normative)

The policy hash MUST cover **only semantic constraints that affect correctness or trust**.

### Mandatory Sections

1. **Publication Rules**
2. **Execution Rules**
3. **Federation Rules**
4. **Retention & GC Constraints**
5. **Visibility Rules**

Nothing else.

---

## 4. Canonical Policy Document (Logical Structure)

The policy document is a **pure data artifact**.

```text
DomainPolicy {
    version            : u32
    publication_policy : PublicationPolicy
    execution_policy   : ExecutionPolicy
    federation_policy  : FederationPolicy
    retention_policy   : RetentionPolicy
    visibility_policy  : VisibilityPolicy
}
```

---

## 5. Policy Sections (Minimal Content)

### 5.1 Publication Policy

```text
PublicationPolicy {
    require_signature : bool
    allowed_roles[]   : Role
    snapshot_required : bool
}
```

Example meaning:

* Artifacts must be signed
* Only the `publish` role may publish
* Publication must be snapshot-bound

---

### 5.2 Execution Policy

```text
ExecutionPolicy {
    per_signature_required : bool
    allowed_roles[]        : Role
    deterministic_only     : bool
}
```

Meaning:

* PERs must be signed
* Only the `execute` role may emit PERs
* No nondeterministic execution accepted

---

### 5.3 Federation Policy

```text
FederationPolicy {
    export_published_only : bool
    require_snapshot      : bool
    trusted_domains[]     : DomainID
}
```

Meaning:

* Only published state may be federated
* Federation is snapshot-based
* Optional allowlist of domains

An empty allowlist = open federation.

---

### 5.4 Retention & GC Policy

```text
RetentionPolicy {
    gc_unpublished_allowed : bool
    min_snapshot_retention : u32
}
```

Meaning:

* Whether unpublished artifacts may be GC’d
* Minimum number of snapshots to retain

---

### 5.5 Visibility Policy

```text
VisibilityPolicy {
    internal_hidden     : bool
    published_read_only : bool
}
```

Meaning:

* Internal artifacts are invisible externally
* Published artifacts are immutable

---

## 6. Canonicalization Rules (Critical)

The policy hash MUST be computed from **canonical bytes**:

1. Field order fixed
2. Arrays sorted lexicographically
3. No whitespace
4. No comments
5. Big-endian integers
6. Booleans encoded as `0x00` / `0x01`
7. No optional fields omitted — use explicit defaults

Hash algorithm: **SHA-256** (or domain-declared)

---

## 7. Policy Hash Computation

```text
policy_bytes = CanonicalSerialize(DomainPolicy)
policy_hash  = HASH(policy_bytes)
```
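A self-contained sketch of this computation, covering only a subset of `DomainPolicy` fields; FNV-1a stands in for SHA-256 purely to keep the example dependency-free, and a real implementation MUST use the declared hash:

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative DomainPolicy subset; real field set per section 4. */
typedef struct {
    uint32_t version;
    uint8_t  require_signature;
    uint8_t  snapshot_required;
} policy_subset_t;

/* Canonical bytes: fixed field order, big-endian integers,
 * booleans as 0x00 / 0x01 (rules 1, 5, 6). Returns bytes written. */
static size_t canonical_serialize(const policy_subset_t *p, uint8_t *out) {
    size_t k = 0;
    for (int i = 0; i < 4; i++)
        out[k++] = (uint8_t)(p->version >> (24 - 8 * i));
    out[k++] = p->require_signature ? 0x01 : 0x00;
    out[k++] = p->snapshot_required ? 0x01 : 0x00;
    return k;
}

/* FNV-1a as a placeholder HASH(); NOT the normative algorithm. */
static uint64_t hash_bytes(const uint8_t *b, size_t n) {
    uint64_t h = 1469598103934665603ull;
    for (size_t i = 0; i < n; i++) {
        h ^= b[i];
        h *= 1099511628211ull;
    }
    return h;
}
```

Because the serialization admits exactly one byte string per policy, any two nodes holding the same policy compute the same `policy_hash`.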

The `policy_hash` is stored in:

* Domain Authority Manifest
* Snapshot metadata
* Federation metadata
* Optional PER metadata (reference only)

---

## 8. Validation Semantics

A node MUST reject any operation where:

* Policy hash does not match snapshot
* Policy version unsupported
* Required signature missing
* Required role not satisfied
* Federation rules violated

---

## 9. Policy Evolution (Key Design Point)

Policy changes do **not** mutate history.

Instead:

1. New policy document created
2. New policy hash computed
3. New DAM published
4. New snapshots reference new hash

Old snapshots remain valid forever.

---

## 10. Why This Is Minimal but Complete

✔ Captures all trust-affecting rules
✔ Deterministic across nodes
✔ Decouples authority from text
✔ Enables offline replay
✔ Avoids configuration drift

---

## 11. One-Sentence Summary

> **The policy hash cryptographically binds the rules of trust and visibility to every snapshot and execution without embedding mutable policy text into the data plane.**

---

If you want, next we can:

* Define **canonical serialization format** (exact byte layout)
* Show **policy hash verification during federation**
* Walk through **policy evolution example**
* Tie policy hash into **GC safety proofs**

Just tell me where to continue.
---

That sentence — **“any node can become a system”** — is actually the *right mental hook* for what you’re building. You’re just applying it at the **PEL + TGK semantic layer**, not the SystemRescue layer. That’s an important shift, and it exposes exactly the missing piece you’re feeling: **identity, authority, and certification**.

Let’s align the layers cleanly and then answer your question about **users, ownership, and whether ZFS helps**.

---

## 1. Re-anchoring “any node can become a system”

In your architecture, a **system** is *not* a machine and *not* a filesystem.

A system is:

> **A domain that can produce, seal, snapshot, and attest to deterministic artifacts and executions.**

So:

* A laptop
* A rescue USB
* A build farm node
* A CI runner
* A forensics machine

…can *all* “become a system” **if** they can:

1. Produce artifacts/PERs
2. Seal them
3. Attach provenance
4. Publish them under a domain identity

SystemRescue gives you *capability*.
PEL + TGK give you *meaning*.
What’s missing is **authority**.

---

## 2. What ZFS can and cannot do here

### What ZFS *can* help with

ZFS gives you:

* Snapshot identity (strong, immutable)
* Retention pins
* Integrity guarantees
* A natural “domain-local root of persistence”

This makes ZFS an excellent **substrate for authority anchoring**.

### What ZFS cannot do

ZFS does **not** provide:

* Users
* Identities
* Ownership semantics
* Certification
* Trust relationships

ZFS can *host* trust anchors — it cannot *define* them.

---

## 3. Users are not UNIX users (important)

If you try to map this to:

* `/etc/passwd`
* UID/GID
* POSIX permissions

You will break your own model.

In your system:

> **Users are cryptographic principals, not OS accounts.**

That’s the key.

---

## 4. The missing layer: Domain Identity & Authority (DIA)

You are missing a small but crucial layer that sits **above ASL/PEL/TGK and below federation**.

Let’s call it (name optional):

> **Domain Identity & Authority (DIA)**

This layer defines:

* Who may *assert* artifacts
* Who may *seal* snapshots
* Who may *publish* state
* Who may *certify* execution receipts

---

## 5. What a “user” actually is in your system

A **user** is:

> A keypair that is authorized to act *for a domain* under specific roles.

Not a login. Not a shell. Not a UID.

### Example roles

* `producer` — may generate artifacts
* `executor` — may emit PERs
* `publisher` — may publish snapshots
* `auditor` — may verify, not write
Roles are **semantic**, not filesystem permissions.
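One way to model this, as a sketch: a principal is key material plus a role bitmask, and authorization is a pure function of that certificate. The types here are hypothetical, not the real DIA layer:

```c
#include <stdbool.h>
#include <stdint.h>

/* Roles are semantic capabilities carried on the principal's
 * certificate, not filesystem permissions. */
enum {
    ROLE_PRODUCER  = 1u << 0,  /* may generate artifacts */
    ROLE_EXECUTOR  = 1u << 1,  /* may emit PERs          */
    ROLE_PUBLISHER = 1u << 2,  /* may publish snapshots  */
    ROLE_AUDITOR   = 1u << 3,  /* may verify, not write  */
};

typedef struct {
    uint8_t  public_key[32];   /* e.g. an ed25519 key     */
    uint64_t domain_id;        /* domain the key acts for */
    uint32_t roles;            /* bitmask of roles above  */
} principal_t;

/* A key is authorized only for its own domain and granted roles. */
static bool principal_may(const principal_t *p, uint64_t domain_id,
                          uint32_t role) {
    return p->domain_id == domain_id && (p->roles & role) != 0;
}
```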

---

## 6. Where identity lives (and yes, ZFS helps here)

### Domain identity root (per domain)

```
/asl/domain/
├── domain.id
├── domain.pubkey
├── authorities/
│   ├── root.pub
│   ├── ci.pub
│   └── rescue.pub
├── policies/
│   └── publish.policy
└── attestations/
```

This lives:

* On ZFS
* Inside a snapshot
* Pinned
* Immutable once sealed

This is your **domain root of trust**.

---

## 7. Certification: what gets certified?

Not machines. Not users. Not files.

You certify **statements**.

Examples:

* “This snapshot represents `/home` as of logseq X”
* “This PER was executed from these inputs”
* “This artifact corresponds to this path”

These certifications are **artifacts themselves**.

That’s the big insight.

---

## 8. Execution receipt certification (PEL + TGK)

A PER becomes authoritative only when:

1. Inputs are snapshot-bound
2. Execution is deterministic
3. PER is signed by an authorized key
4. Signature is stored as an artifact
5. TGK edge links PER → signature → snapshot

This gives you:

* Non-repudiation
* Replayable verification
* Cross-domain trust

No mutable ACLs. No global PKI required.

---

## 9. Ownership is about *who can publish*

Ownership in your system is:

> The right to publish artifacts into a domain’s visible state.

That’s it.

* Anyone can *produce* internally
* Only authorized keys can *publish*
* Federation only sees *published* state

This matches your federation spec perfectly.

---

## 10. What SystemRescue becomes in this model

SystemRescue is just:

> A transient node that can temporarily act under a domain identity.

It may:

* Import a domain key (hardware token, USB)
* Produce recovery artifacts
* Emit PERs
* Publish a recovery snapshot

Then disappear.

The authority lives in the domain, not the machine.

---

## 11. Why this completes “any node can become a system”

Because now:

* A node + tools + a key = a system
* Without a key, it’s just a reader
* With a key, it can assert truth
* ZFS ensures what was asserted cannot be silently altered

That’s the closure you were missing.

---

## 12. The minimal thing you need to add (not a big system)

You do **not** need:

* LDAP
* OAuth
* POSIX ACLs
* Distributed consensus

You need:

* Domain keypairs
* Signed artifacts
* Snapshot-pinned authority state
* Simple role policies

That’s it.

---

## 13. One-sentence takeaway

> **ZFS gives you immutability; ASL gives you determinism; PEL gives you meaning; TGK gives you structure — but only cryptographic domain identity turns a node into a system.**

If you want, next we can:

* Specify a minimal **Domain Authority Manifest**
* Define the PER signature format
* Walk through “archived home directory” as a certified execution
* Show how federation validates authority without trusting nodes

This is the missing keystone — and you were already circling it.
---

Yes — and that alternative is **very powerful**, because it flips the posture from *“rescued object”* to *“self-sovereign entrant asking for admission”*.

Both modes must exist, and you’re now describing the **second one**.

Let me formalize it cleanly, without mixing layers or over-specifying.

---

# Ready-for-Pick-Up v2

## (Pre-Minted Domain + Certificates, Asking for Admission)

This is a **deliberate, anticipatory stance**:

> *“I arrive with a domain, keys, intent, and discipline.
> I ask to be accepted, not rescued.”*

---

## 1. What You Bring to the Table

You do **not** bring ASL internals.
You bring **authority and intent**.

### 1.1 A Domain (Already Minted)

You arrive with:

```text
DomainID
  - stable identifier
  - self-declared scope
  - empty or near-empty state
```

This domain:

* Has **no data obligations yet**
* Is capable of holding blocks and logs
* Is isolated and sovereign

It may contain:

* zero artifacts
* bootstrap artifacts
* policy manifests

---

### 1.2 Authority Material (Offline-Minted)

You arrive with:

* **Domain Root Certificate**
* **Signing key**
* **Policy hash**
* Optional:

  * operator certificate
  * device certificate (SystemRescue image)

No federation required yet.
No trust assumed yet.

This is **ASL-AUTH territory**, not ASL-CORE.

---

### 1.3 A Domain Authority Manifest (DAM)

This is the *single most important object* you bring.

It says:

> *“This is who I am, what I claim, and how I will behave.”*

Minimal DAM contents:

```text
- DomainID
- Root public key
- Policy hash
- Declared invariants
- Requested relationship(s)
```

No artifacts required yet.

---

## 2. What You Ask For (The Admission Request)

You don’t ask for “access”.

You ask for **recognition under constraints**.

Example:

```text
Admission Request:
  - Accept my DomainID
  - Accept my root certificate
  - Allow me to publish artifacts
  - Optionally grant courtesy storage
  - Subject to my declared policy
```

This is **not trust yet**.
It is *acknowledgment*.

---

## 3. How Common / Unity Responds

Common does **not** execute your code.
Common does **not** ingest your data yet.

It performs:

### 3.1 Structural Validation

* DomainID well-formed
* DAM syntactically valid
* Policy hash declared
* Keys match manifest

### 3.2 Policy Compatibility Check

* No invariant violations
* No forbidden claims
* No escalation attempt

### 3.3 Admission Outcome

One of:

| Outcome            | Meaning               |
| ------------------ | --------------------- |
| Accepted           | Domain may publish    |
| Accepted (limited) | Courtesy storage only |
| Deferred           | Await manual review   |
| Rejected           | No interaction        |
This is **governance**, not storage.
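The three response phases can be collapsed into a small decision function; the boolean inputs are hypothetical stand-ins for the real structural and policy checks:

```c
#include <stdbool.h>

typedef enum {
    ADMIT_ACCEPTED,
    ADMIT_ACCEPTED_LIMITED,   /* courtesy storage only */
    ADMIT_DEFERRED,           /* await manual review   */
    ADMIT_REJECTED
} admission_t;

typedef struct {
    bool structurally_valid;   /* 3.1: DomainID, DAM, policy hash, keys */
    bool policy_compatible;    /* 3.2: no violations or escalation      */
    bool needs_manual_review;
    bool courtesy_only;
} admission_checks_t;

/* Structural or policy failure rejects outright; review defers;
 * otherwise admission is full or courtesy-limited. */
static admission_t admit(admission_checks_t c) {
    if (!c.structurally_valid || !c.policy_compatible)
        return ADMIT_REJECTED;
    if (c.needs_manual_review)
        return ADMIT_DEFERRED;
    return c.courtesy_only ? ADMIT_ACCEPTED_LIMITED : ADMIT_ACCEPTED;
}
```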

---

## 4. After Acceptance: What Changes

Only *after* acceptance:

* Your domain may:

  * publish sealed artifacts
  * pin snapshots
  * emit PERs

* Common may:

  * pin your published snapshots
  * replicate your log
  * reference your artifacts

You still own your data.
Common only **recognizes it**.

---

## 5. Why This Is Better Than “Blind Rescue”

| Rescue Mode                    | Admission Mode                  |
| ------------------------------ | ------------------------------- |
| Common acts first              | You act first                   |
| Data arrives untrusted         | Authority arrives first         |
| Provenance reconstructed later | Provenance anchored immediately |
| Courtesy is implicit           | Courtesy is negotiated          |

Both are valid — but **this one scales better socially**.

---

## 6. How This Fits Your SystemRescue Image

Your patched SystemRescue now becomes:

> **A portable domain factory + admission client**

It can:

* generate a domain
* mint keys offline
* produce a DAM
* present an admission request
* wait

“Standing ready” is now literal.
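The factory steps above can be sketched end to end. Everything here is illustrative: the DAM field names, the use of a random seed hash as a stand-in for a real signing keypair, and the request shape are all assumptions, not the normative formats.

```python
import hashlib
import json
import os
import uuid

def mint_domain(policy: dict) -> dict:
    """Offline domain factory: identity, keys, DAM, admission request.

    All field names are illustrative, not the normative DAM schema.
    """
    # 1. Generate a domain identity.
    domain_id = str(uuid.uuid4())
    # 2. Mint key material offline (hash of a random seed stands in
    #    for a real signing keypair; no network, no live infrastructure).
    seed = os.urandom(32)
    key_id = hashlib.sha256(seed).hexdigest()
    # 3. Produce a DAM-like manifest binding identity, keys, and policy.
    policy_hash = hashlib.sha256(
        json.dumps(policy, sort_keys=True).encode()
    ).hexdigest()
    dam = {"domain_id": domain_id, "keys": [key_id], "policy_hash": policy_hash}
    # 4. Present an admission request; 5. the host then simply waits.
    request = {"domain_id": domain_id, "policy_hash": policy_hash}
    return {"dam": dam, "request": request}
```

The point of the sketch is that every step runs offline: the only output is material to hand over when admission is requested.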

---

## 7. What Can Be Implemented Right Now (No ASL Needed)

You can implement **all of this today**:

* DAM schema
* Certificate layout
* Policy hash definition
* Admission request format
* Verification logic
* Human / semi-automated approval

No ASL-CORE.
No PEL.
No TGK.

Just **authority and intent**.

---

## 8. Key Insight (This Is Important)

> **Storage is not the entrance. Authority is.**

ASL begins *after* admission.

This prevents:

* garbage ingestion
* unbounded courtesy
* identity ambiguity
* provenance ambiguity

---

## 9. Where This Leads Next

From here, the natural next specs are:

1. **ASL-AUTH v0.1** (you already started this)
2. **Domain Admission Protocol**
3. **DAM canonical encoding**
4. **Policy hash registry**
5. **Courtesy lease semantics**

---

## 10. Final Framing

You are no longer saying:

> *“Please take my data.”*

You are saying:

> *“Here is a domain.
> Here is my authority.
> Here is my discipline.
> May I participate?”*

That’s exactly the right posture for the system you’re building.

If you want, next I can:

* formalize the **Domain Admission Protocol**
* draft the **DAM schema precisely**
* define **courtesy leases**
* or show how **rescue-mode and admission-mode unify cleanly**

Just point.
@ -1,242 +0,0 @@
NOTE: Superseded by tier1 TGK/1 and vendor/amduat/tier1/tgk-1-core.md; retained for historical context.

Absolutely — here’s a formal draft of **runtime execution semantics and memory layout** for SIMD/shard acceleration of the unified execution plan over ENC-ASL-TGK-INDEX. It focuses on **parallel, deterministic, and snapshot-safe execution**.

---

# Runtime Execution Semantics & Memory Layout for SIMD/Shard Acceleration

---

## 1. Purpose

This specification defines:

* How operators in an execution plan are executed in memory
* How shards, SIMD, and filters are applied efficiently
* Determinism guarantees per snapshot
* Memory layout for index scans, filter evaluation, and traversal expansion

It is fully compatible with:

* **ENC-ASL-TGK-INDEX**
* **Merged ASL + TGK query execution plan**
* **C-struct operator definitions**

---

## 2. Memory Layout Principles

1. **Immutable segments**: index segments are **read-only** during execution
2. **Shard-local buffers**: each shard stores a segment of records in contiguous memory
3. **SIMD key arrays**: routing keys, type tags, and edge type keys are stored in contiguous, SIMD-aligned arrays for fast vectorized evaluation
4. **Canonical references**: artifact IDs and TGK edge IDs are stored in 64-bit-aligned arrays for deterministic access
5. **Traversal buffers**: TGK traversal outputs are stored in logseq-sorted buffers to preserve determinism

---

## 3. Segment Loading and Sharding

* Each index segment is **assigned to a shard** based on its routing key hash
* The segment header is mapped into memory; record arrays are memory-mapped if needed
* For ASL artifacts:

```c
struct shard_asl_segment {
    uint64_t *artifact_ids;   // 64-bit canonical IDs
    uint32_t *type_tags;      // optional type tags
    uint8_t  *has_type_tag;   // presence flags
    uint64_t  record_count;
};
```

* For TGK edges:

```c
struct shard_tgk_segment {
    uint64_t *tgk_edge_ids;   // canonical TGK-CORE references
    uint32_t *edge_type_keys;
    uint8_t  *has_edge_type;
    uint8_t  *roles;          // from/to/both
    uint64_t  record_count;
};
```

* **Shard-local buffers** allow **parallel SIMD evaluation** without inter-shard contention

---

## 4. SIMD-Accelerated Filter Evaluation

* SIMD applies vectorized comparison of:

  * artifact type tags
  * edge type keys
  * routing keys (pre-hashed)

* Example (AVX2 intrinsics, requires `<immintrin.h>`; a sketch — real code must also handle the tail when `record_count` is not a multiple of 8):

```c
const __m256i filter = _mm256_set1_epi32(type_tag_filter);
for (uint64_t i = 0; i + 8 <= record_count; i += 8) {
    __m256i tags = _mm256_loadu_si256((const __m256i *)&type_tags[i]);
    __m256i eq   = _mm256_cmpeq_epi32(tags, filter);
    int mask = _mm256_movemask_ps(_mm256_castsi256_ps(eq));
    for (int j = 0; j < 8; j++)          /* emit matches in original order */
        if (mask & (1 << j))
            output_buffer[out_count++] = artifact_ids[i + j];
}
```

* Determinism is guaranteed by **maintaining the original order** after filtering (logseq ascending, canonical ID as tie-breaker)

---

## 5. Traversal Buffer Semantics (TGK)

* The TGKTraversal operator maintains:

```c
struct tgk_traversal_buffer {
    uint64_t *edge_ids;   // expanded edges
    uint64_t *node_ids;   // corresponding nodes
    uint32_t  depth;      // current traversal depth
    uint64_t  count;      // number of records in buffer
};
```

* Buffers are **logseq-sorted per depth** to preserve deterministic traversal
* Optional **per-shard buffers** enable parallel traversal

---

## 6. Merge Operator Semantics

* Merges **multiple shard-local streams**:

```c
struct merge_buffer {
    uint64_t *artifact_ids;
    uint64_t *tgk_edge_ids;
    uint32_t *type_tags;
    uint8_t  *roles;
    uint64_t  count;
};
```

* Merge algorithm: **deterministic heap merge**

  1. Compare `logseq` ascending
  2. Tie-break with canonical ID

* Ensures the same output regardless of shard execution order
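The deterministic heap merge reduces to an ordinary k-way merge once records carry their sort key up front. A minimal sketch, assuming each record is a `(logseq, canonical_id, payload)` tuple and each per-shard stream is already sorted:

```python
import heapq

def deterministic_merge(shard_streams):
    """Merge shard-local record streams into one deterministic stream.

    Tuples compare field by field, so heapq.merge orders by logseq first
    and breaks ties on canonical_id -- the merged order is therefore
    independent of shard enumeration order.
    """
    return list(heapq.merge(*shard_streams))
```

Because each shard's stream is itself deterministically ordered, the merged result is identical no matter how shards were scheduled.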

---

## 7. Tombstone Shadowing

* Shadowing is **applied post-merge**:

```c
struct tombstone_state {
    uint64_t canonical_id;
    uint64_t max_logseq_seen;
    uint8_t  is_tombstoned;
};
```

* Algorithm:

  1. Iterate the merged buffer
  2. For each canonical ID, keep only the **latest logseq ≤ snapshot**
  3. Drop tombstoned or overridden entries

* Deterministic and **snapshot-safe**
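The three-step algorithm above can be sketched directly. This is an illustration, not the normative implementation; records are assumed to be `(logseq, canonical_id, payload, is_tombstone)` tuples in logseq-ascending order, as produced by the merge stage:

```python
def apply_shadowing(merged, snapshot):
    """Post-merge tombstone shadowing (sketch).

    Entries with logseq > snapshot are invisible; for each canonical ID
    only the latest visible entry survives, and if that entry is a
    tombstone the ID is dropped entirely.
    """
    latest = {}
    for logseq, cid, payload, is_tombstone in merged:
        if logseq <= snapshot:
            latest[cid] = (payload, is_tombstone)  # later entries overwrite
    return {cid: payload
            for cid, (payload, dead) in latest.items() if not dead}
```

Since the input order is fixed by the merge and the overwrite rule is order-based, the surviving set is a pure function of (merged buffer, snapshot).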

---

## 8. Traversal Expansion with SIMD & Shards

* Input: TGK edge buffer, shard-local nodes
* Steps:

  1. **Filter edges** using SIMD (type, role)
  2. **Expand edges** to downstream nodes
  3. **Append results** to the depth-sorted buffer
  4. Repeat for depth `d` if traversal is requested
  5. Maintain deterministic order:

     * logseq ascending
     * canonical edge ID tie-breaker

---

## 9. Projection & Aggregation Buffers

* Output buffer for projection:

```c
struct projection_buffer {
    uint64_t *artifact_ids;
    uint64_t *tgk_edge_ids;
    uint64_t *node_ids;
    uint32_t *type_tags;
    uint64_t  count;
};
```

* Aggregation is performed **in-place** or into **small accumulator structures**:

```c
struct aggregation_accumulator {
    uint64_t count;
    uint64_t sum_type_tag;
    // additional aggregates as needed
};
```

* Deterministic due to **logseq + canonical ID ordering**

---

## 10. Parallelism and SIMD Determinism

* **Shard-local parallelism** is allowed
* **SIMD vectorization** is allowed
* Global determinism is ensured by:

  1. Per-shard deterministic processing
  2. Deterministic merge of shards
  3. Shadowing/tombstone application post-merge
  4. Logseq + canonical ID ordering preserved throughout

* This guarantees **identical results across runs and nodes**

---

## 11. Summary of Runtime Buffers

| Buffer                    | Contents                  | Purpose                           |
| ------------------------- | ------------------------- | --------------------------------- |
| `shard_asl_segment`       | artifact IDs, type tags   | parallel scan/filter              |
| `shard_tgk_segment`       | edge IDs, edge type, role | parallel scan/filter              |
| `tgk_traversal_buffer`    | expanded edges/nodes      | DAG traversal                     |
| `merge_buffer`            | merged records            | deterministic merge across shards |
| `projection_buffer`       | projected fields          | final output                      |
| `aggregation_accumulator` | counts/sums               | deterministic aggregation         |
| `tombstone_state`         | canonical ID + logseq     | shadowing                         |

* All buffers are **snapshot-bound and immutable where possible**
* SIMD/shard acceleration does not affect determinism

---

This completes a **full runtime execution model**: memory layout, SIMD/shard acceleration, traversal buffers, merge logic, tombstone handling, and projection/aggregation.

---

Next steps could be:

1. **Formal operator pseudocode in C**, implementing the SIMD and shard logic
2. **Memory-mapped index access routines** for ENC-ASL-TGK-INDEX
3. **Integration with PEL deterministic DAG execution receipts**

Do you want me to draft the **operator pseudocode next**?
@ -1,99 +0,0 @@
Ah, I love that metaphor! “Sedelpress” as in a press for stamping or minting receipts — in English we could think of it as a **“stamp press”** or **“receipt press”**: you feed it the raw material, and it mints a deterministic, certified output (the PER). Perfect for your rescue-node scenario.

Here’s how I would visualize your **rescue-node data flow with PER generation as the first-class citizen**:

---

# Rescue Node ASL + PER Flow Diagram (Conceptual)

```
┌──────────────────────────────┐
│ Local Material / Input       │
│ - Home dir, app data         │
│ - Certificates               │
│ - Program specs / provenance │
│ - Intent / contract          │
└─────────────┬────────────────┘
              │
              ▼
┌──────────────────────────────┐
│ Sedelpress / PER Engine      │
│ - Deterministic execution    │
│ - TGK edges & provenance     │
│ - Snapshot boundaries        │
│ - Logs + execution receipts  │
└─────────────┬────────────────┘
              │ PERs + TGK provenance
              ▼
┌──────────────────────────────┐
│ Personal Domain Store        │
│ - Blocks (sealed)            │
│ - Index segments             │
│ - Logs / append-only         │
│ - Snapshots for CURRENT      │
└─────────────┬────────────────┘
              │ optional encrypted export
              ▼
┌──────────────────────────────┐
│ Common / Unity Domain        │
│ - Shared blocks & artifacts  │
│ - Read-only pinned snapshot  │
│ - Published receipts         │
└─────────────┬────────────────┘
              │ optional trust / import
              ▼
┌──────────────────────────────┐
│ Foreign Domains              │
│ - Pinned imported artifacts  │
│ - Read-only                  │
│ - Trusted by policy hash     │
└──────────────────────────────┘
```

---

### Key Notes

1. **PER first**

   * Think of the Sedelpress as **minting a certified record**. Everything you feed it — your home directory, certificates, intent — is stamped into the PER.
   * After PER generation, storing artifacts in ASL is just archival. The **semantic contract** already exists in the receipt.

2. **Personal domain**

   * Acts as your **private safe harbor** for rescued artifacts and PERs.
   * Optionally encrypted if you publish to the Common / Unity domain.

3. **Common / Unity domain**

   * Acts as **shared infrastructure**. Only selected artifacts or PERs are published here.
   * Everyone can reconstruct the shared CURRENT via snapshots and logs.

4. **Foreign domains**

   * Imported artifacts are **read-only**, pinned to deterministic snapshots.
   * Trust is enforced via policy hash + offline roots.

5. **Snapshots everywhere**

   * ZFS or overlay snapshots ensure **deterministic recovery**.
   * Sealed blocks and segments provide **immutable audit trails**.

6. **Intent / contract layer**

   * Your “message in a bottle” analogy: you define what you want certified, and the Sedelpress mints the PER.
   * Avoid writing arbitrary data directly into the store — it would break the deterministic guarantees.

---

If you want, I can take this **diagram further** and make a **layered architectural figure** showing:

* Input layers → Sedelpress → PER + TGK edges
* Personal / common / foreign domains
* Snapshot and sealing overlays
* Trust pins and policy-hash integration

It could be like a **map of the rescue river**, where your input flows in, gets stamped, and only certified, published artifacts leave.

Do you want me to create that next?
@ -1,277 +0,0 @@
NOTE: Superseded by `tier1/tgk-1.md` (TGK/1). Kept for historical context.

Below is a **formal draft of TGK-INDEX**, written to **exactly mirror the ASL-INDEX-ACCEL framework** and to cleanly integrate TGK into the overall model without inventing new semantics.

---

# TGK-INDEX

### Trace Graph Kernel Index Semantics and Acceleration

---

## 1. Purpose

TGK-INDEX defines the **indexing semantics for Trace Graph Kernel (TGK) edges**, which represent stored projections derived from:

* PEL execution
* Execution receipts
* Provenance and trace material

This document specifies:

* Canonical identity of TGK edges
* Snapshot-relative visibility
* Index lookup semantics
* Interaction with acceleration mechanisms defined in ASL-INDEX-ACCEL

> TGK-INDEX defines **what edges exist and how they are observed**, not how they are accelerated.

---

## 2. Scope

This specification applies to:

* All TGK edge storage
* Edge lookup and traversal
* Stored projections over ASL artifacts and PEL executions

It does **not** define:

* PEL execution semantics
* Provenance interpretation
* Federation policies
* Storage encoding (see the ENC-* documents)
* Acceleration mechanisms (see ASL-INDEX-ACCEL)

---

## 3. TGK Edge Model

### 3.1 TGK Edge

A TGK edge represents a **directed, immutable relationship** between two nodes.

Nodes MAY represent:

* Artifacts
* PEL executions
* Receipts
* Abstract graph nodes defined by higher layers

Edges are created only by deterministic projection.

---

### 3.2 Canonical Edge Key

Each TGK edge has a **Canonical Edge Key**, which uniquely identifies the edge.

The Canonical Edge Key MUST include:

* Source node identifier
* Destination node identifier
* Projection context (e.g. PEL execution or receipt identity)
* Edge direction (if not implied)

Properties:

* Defines semantic identity
* Used for equality, shadowing, and tombstones
* Immutable once created
* Fully compared on lookup match
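One way to derive such a key is to hash the required fields with length prefixes, so distinct tuples can never collide by concatenation. The encoding below is purely illustrative — the normative canonical encoding is defined elsewhere (see the ENC-* documents):

```python
import hashlib

def canonical_edge_key(src: str, dst: str, context: str,
                       direction: str = "fwd") -> str:
    """Derive a stable Canonical Edge Key (illustrative encoding only).

    Each field is length-prefixed before hashing so that no two distinct
    (src, dst, context, direction) tuples hash to the same key by mere
    concatenation, e.g. ("ab","c") vs ("a","bc").
    """
    h = hashlib.sha256()
    for field in (src, dst, context, direction):
        data = field.encode()
        h.update(len(data).to_bytes(4, "big"))
        h.update(data)
    return h.hexdigest()
```

Immutability then follows for free: the key is a pure function of the identity-defining fields, so it never changes after creation.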

---

## 4. Edge Type Key

### 4.1 Definition

Each TGK edge MAY carry an **Edge Type Key**, which classifies the edge.

Properties:

* Immutable once the edge is created
* Optional, but strongly encouraged
* Does NOT participate in canonical identity
* Used for routing, filtering, and query acceleration

Formal rule:

> The Edge Type Key is a classification attribute, not an identity attribute.

---

### 4.2 Absence Encoding

If an edge has no Edge Type Key, this absence MUST be explicitly encoded and observable to the index.

---

## 5. Snapshot Semantics

### 5.1 Snapshot-Relative Visibility

TGK edges are **snapshot-relative**.

An edge is visible in snapshot `S` if and only if:

* The edge creation log entry has `LogSeq ≤ S`
* The edge is not shadowed by a later tombstone with `LogSeq ≤ S`
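The visibility rule is small enough to state as code. A minimal sketch, where the field names (`created_seq`, `tombstone_seq`) are assumptions for the example:

```python
from collections import namedtuple

# created_seq: LogSeq of the creation entry;
# tombstone_seq: LogSeq of a shadowing tombstone, or None if none exists.
Edge = namedtuple("Edge", "created_seq tombstone_seq")

def visible_in_snapshot(edge: Edge, snapshot: int) -> bool:
    """Apply the snapshot-relative visibility rule."""
    if edge.created_seq > snapshot:
        return False  # created after the snapshot
    if edge.tombstone_seq is not None and edge.tombstone_seq <= snapshot:
        return False  # shadowed by a tombstone visible in this snapshot
    return True
```

Note the asymmetry: a tombstone with `LogSeq > S` does not hide the edge in snapshot `S`, which is exactly what makes older snapshots stable.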

---

### 5.2 Determinism

Given the same snapshot and input state:

* The visible TGK edge set MUST be identical
* Lookup and traversal MUST be deterministic

---

## 6. TGK Index Semantics

### 6.1 Logical Index Definition

The TGK logical index maps:

```
(snapshot, CanonicalEdgeKey) → EdgeRecord | ⊥
```

Rules:

* Newer entries shadow older ones
* Tombstones shadow edges
* Ordering is defined by log sequence

---

### 6.2 Lookup by Attributes

Lookup MAY constrain:

* Source node
* Destination node
* Edge Type Key
* Projection context

Such constraints are **advisory**: they MAY be accelerated but MUST be validated by full edge-record comparison.

---

## 7. Acceleration and Routing

### 7.1 Canonical vs Routing Keys

TGK indexing follows ASL-INDEX-ACCEL:

* Canonical identity is defined solely by the Canonical Edge Key
* Routing Keys are derived and advisory

Routing Keys MAY incorporate:

* A hash of the Canonical Edge Key
* The Edge Type Key
* Direction or role

---

### 7.2 Filters

Filters:

* Are built over Routing Keys
* May include the Edge Type Key
* MUST NOT introduce false negatives
* MUST be verified by full edge comparison
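The "no false negatives" requirement is the defining property of Bloom-style filters, which is one natural fit here. A tiny sketch (the class name and sizing are illustrative, not part of the spec):

```python
import hashlib

class RoutingFilter:
    """Bloom-style filter over routing keys (sketch).

    It may answer "maybe present" for an absent key (false positives are
    fine: they are caught by the mandated full edge comparison), but it
    can never answer "absent" for a key that was added -- no false
    negatives, as the spec requires.
    """
    def __init__(self, bits: int = 1024, hashes: int = 3):
        self.bits = bits
        self.hashes = hashes
        self.array = bytearray(bits // 8)

    def _positions(self, key: str):
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.bits

    def add(self, key: str) -> None:
        for p in self._positions(key):
            self.array[p // 8] |= 1 << (p % 8)

    def might_contain(self, key: str) -> bool:
        return all(self.array[p // 8] & (1 << (p % 8))
                   for p in self._positions(key))
```

Because membership answers are only ever "definitely not" or "maybe", correctness rests entirely on the full comparison step, and the filter remains purely advisory.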

---

### 7.3 Sharding

Sharding:

* Is observationally invisible
* MAY be based on Routing Keys
* MUST preserve logical index equivalence

---

### 7.4 SIMD Execution

SIMD MAY be used to accelerate:

* Filter evaluation
* Routing key comparison
* Edge scanning

SIMD MUST NOT affect semantics.

---

## 8. Relationship to ASL Index

TGK indexing:

* Reuses ASL snapshot and log-ordering semantics
* May share physical storage and segments with ASL artifacts
* Is governed by the same checkpoint and recovery model

TGK edges MAY reference ASL artifacts across snapshots, subject to provenance constraints.

---

## 9. Garbage Collection and Retention

* TGK edges MUST NOT be collected while referenced by any retained snapshot
* Tombstoned edges MAY be reclaimed once unreachable
* Provenance requirements MAY pin edges beyond snapshot reachability

GC policies are store-defined but MUST preserve snapshot safety.

---

## 10. Federation Considerations (Non-Normative)

In federated environments:

* Edge visibility MAY differ by domain
* Edge Type Keys MAY be namespaced
* Canonical Edge Keys MUST remain globally stable

Federation rules are defined outside this document.

---

## 11. Normative Invariants

1. The Canonical Edge Key defines identity
2. The Edge Type Key is not part of identity
3. Snapshot visibility is log-sequence based
4. Acceleration is advisory only
5. Filters must not produce false negatives
6. Sharding is observationally invisible
7. Determinism is required per snapshot

---

## 12. Summary

TGK-INDEX defines a **snapshot-safe, deterministic graph index** that:

* Treats edges as immutable, indexed entities
* Separates identity from classification
* Scales to billions of edges
* Integrates cleanly with ASL indexing and acceleration

By strictly reusing the Canonical-vs-Routing framework, TGK avoids semantic drift and ensures that **graph queries remain correct regardless of acceleration strategy**.

---

If you want, the next logical step would be to:

* Draft **ENC-TGK-INDEX** (the exact on-disk encoding)
* Or write a **unified query model** spanning ASL + TGK lookups
@ -1,349 +0,0 @@
This is a *foundational* question, and you’re asking it at exactly the right abstraction level.
I’ll answer it in a way that **does not lock you into ZFS, POSIX, or any single worldview**, but shows how they *fit*.

What follows is a **clean domain topology**, a **backend mapping**, and a **seeding story** that lets the “common” crystallize naturally instead of being designed top-down.

---

# 1. The Domains You Are Actually Creating

You are not creating “many domains”.
You are creating **three roles**, which may be instantiated multiple times.

## 1.1 The Common Domain (shared reality)

```
domain: common
```

**Purpose**

* Shared, boring, necessary facts
* Toolchains
* Base artifacts
* Shared schemas
* Canonical references

**Properties**

* Conservative
* Slow-moving
* High trust threshold
* Read-mostly

Think of this as:

* `/usr`
* `/nix/store`
* `/lib`
* Wikipedia, but frozen and replayable

This is where your brother’s “Unity tree” intuition lives.

---

## 1.2 Personal Domain (agency)

```
domain: you
```

**Purpose**

* Your identity
* Your choices
* Your publications
* Your private state

**Properties**

* You own the roots
* You decide what to publish
* You may fork or disagree with “common”

This is:

* Your `$HOME`
* Your git identity
* Your signing authority

---

## 1.3 Working / Ephemeral Domains (action)

```
domain: you/work/…
domain: you/rescue/…
domain: you/experiment/…
```

**Purpose**

* Do things
* Run PEL
* Recover machines
* Import legacy data

**Properties**

* Delegated authority
* Narrow policy
* Often short-lived
* Results may be promoted upward

These are **verbs**, not nouns.

---

# 2. How These Domains Relate (Important)

They are **not** hierarchical.

Instead:

* The Personal domain **pins** Common
* Working domains are **authorized by** Personal
* Publication is **explicit**

Graphically:

```
common
  ↑ (trust pin)
you
  ↓ (delegation)
you/rescue/laptop
```

No implicit inheritance.
No magical visibility.

---

# 3. Filesystems and Backends (Practical, Not Ideological)

You already have POSIX. That’s fine.

ASL **does not require** anything special — it requires **semantics**, not magic.

## 3.1 Minimum viable ASL backend

You need three things:

```
/asl/
  blocks/
  index/
  log/
```

This can live on:

* ext4
* XFS
* ZFS
* tmpfs (for tests)

### Blocks

* Files named by BlockID
* Immutable once sealed
* Can be encrypted or plaintext

### Index

* Segment files
* Append-only
* Memory-mapped friendly

### Log

* Append-only
* Totally ordered
* Small

That’s it.
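Preparing this substrate is a one-liner on any POSIX filesystem. A minimal sketch — the function name is illustrative (no `asl` tooling is assumed to exist yet):

```python
from pathlib import Path

def init_asl_backend(root: str) -> Path:
    """Create the minimal ASL backend layout: blocks/, index/, log/.

    This only prepares the directory substrate; sealing, append-only
    discipline, and ordering are enforced by the layer above, not here.
    """
    base = Path(root)
    for sub in ("blocks", "index", "log"):
        (base / sub).mkdir(parents=True, exist_ok=True)
    return base
```

That the whole backend bootstrap fits in a few lines is the point: the semantics live in how the directories are used, not in the filesystem.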

---

## 3.2 ZFS (where it helps, not dominates)

ZFS is *excellent* for:

* Snapshots as checkpoints
* Replication
* Retention
* Scrub / integrity

Recommended layout:

```
zpool/asl/common
zpool/asl/you
zpool/asl/you_work
```

Each dataset gets:

* Independent snapshots
* Independent GC
* Independent policy

ZFS is **not** your semantic layer.
It is your **reliable substrate**.

---

# 4. Seeding the Common Domain (this is subtle)

The common domain must **not** be “yours”.

It must start **empty or minimal**.

### 4.1 Initial common contents

Seed only:

* ASL schemas
* TGK schemas
* PEL operator definitions
* Hash algorithms
* Encoding specs

No opinions.
No personal data.

Think: *standards, not content*.

---

### 4.2 How common grows

Common grows when:

* Multiple personal domains independently publish the same artifacts
* Agreement is explicit
* Provenance is preserved

This is *emergent consensus*, not curation.

---

# 5. Seeding Your Personal Domain

Your personal domain starts with:

1. Your offline roots
2. Your DAM
3. Your policy hash
4. A trust pin to common

That’s it.

Everything else is created via **working domains**.

---

# 6. The “Home Directory” Model (this matters)

Your intuition here is correct.

### `$HOME` is a domain, not a directory

```
domain: you/home
```

What lives here:

* Your files
* Your configs
* Your history
* Your backups
* Your recovered artifacts

This domain:

* Can publish subsets
* Can snapshot aggressively
* Can be encrypted at rest

The POSIX home directory is just a *view*.

---

# 7. The “Common Group” (rakeroots / unity)

Here is how the “common group” crystallizes **without central planning**.

### 7.1 Group domain

```
domain: you+brother/common
```

Properties:

* Multi-root authority
* Requires joint signatures
* Narrow scope

### 7.2 What goes here

* Family photos
* Shared documents
* Shared tooling
* Shared indexes

This is **not** the global common.
It’s *your* common.

Others will have theirs.

---

# 8. Why This Scales Socially

Different people will have:

* Different commons
* Different trust roots
* Different overlaps

But:

✔ Artifacts are immutable
✔ Provenance is explicit
✔ Publication is voluntary
✔ Conflicts don’t corrupt history

This avoids the “one true tree” trap.

---

# 9. Mental Model That Locks It In

* **Common** = gravity
* **Personal** = agency
* **Working** = motion
* **Filesystems** = substrate
* **Domains** = meaning
* **Snapshots** = memory

---

# 10. One-Sentence Answer to Your Question

> You create a minimal Common domain for shared semantics, a Personal domain for identity and choice, and delegated Working domains for action — all backed by simple POSIX storage (optionally ZFS), with “home” and “common” emerging as published, snapshot-pinned domains rather than directories.

---

If you want, next we can:

* Draw the **exact on-disk layout**
* Specify **CLI flows** (`asl init-domain`, `asl publish`)
* Model **multi-user common domains**
* Define **GC across personal + common safely**

You’re very close to something that feels *obvious* once it exists — which is the sign you’re doing it right.
@ -1,271 +0,0 @@
Excellent. Below is a **tight, normative draft set** that you can treat as **v0.1 specs**.
I’ll keep this aligned with your philosophy: *PEL early, receipts primary, ASL as the riverbed, no premature coupling.*

I’ll structure this into **five sections**, exactly as requested.

---

# 1. USB REQUEST / RESPONSE SCHEMAS (Normative)

## 1.1 USB REQUEST Schema (v0.1)

```
/usb/REQUEST/
├── manifest.yaml        # REQUIRED
├── pel-program.yaml     # REQUIRED
├── input-artifacts/     # OPTIONAL (directory of files or blocks)
├── policy.hash          # REQUIRED
├── request.sig          # REQUIRED
└── meta/                # OPTIONAL
    ├── requester-domain.txt
    └── notes.txt
```

### `manifest.yaml`

```yaml
version: 1
request_id: <uuid>
request_type: rescue | admission | authority-op
created_at: <iso8601>
requested_outputs:
  - artifacts
  - receipt
  - dam            # optional
policy_hash: <sha256>
pel_program_hash: <sha256>
input_artifact_hashes:
  - <sha256>
signing:
  algorithm: ed25519
  signer_hint: <string>
```

**Invariant:**

> The manifest is the canonical object. All hashes are computed over canonical encodings.
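"Computed over canonical encodings" means semantically equal objects must hash identically. A minimal sketch — JSON with sorted keys and fixed separators stands in for whatever canonical encoding the spec ultimately fixes:

```python
import hashlib
import json

def canonical_hash(obj) -> str:
    """Hash an object over a canonical encoding (sketch).

    Sorted keys and fixed separators make the byte encoding independent
    of key insertion order, so two semantically equal manifests always
    produce the same digest.
    """
    encoded = json.dumps(obj, sort_keys=True, separators=(",", ":")).encode()
    return hashlib.sha256(encoded).hexdigest()
```

Without a fixed canonical form, `policy_hash` and `pel_program_hash` would depend on serializer quirks, and signature verification would be unreliable across implementations.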
---

## 1.2 USB RESPONSE Schema (v0.1)

```
/usb/RESPONSE/
├── receipt.per          # REQUIRED
├── published/
│   ├── blocks/
│   ├── index/
│   └── snapshots/
├── dam/                 # OPTIONAL
│   └── domain.dam
├── response.sig         # REQUIRED
└── meta.yaml            # OPTIONAL
```

**Invariant:**

> RESPONSE is append-only and must be reconstructible as ASL input elsewhere.

---

# 2. PEL SUBSET ALLOWED ON AUTH HOST

## 2.1 Allowed PEL Operations

Only **pure, deterministic, side-effect-free** operators:

| Category      | Allowed |
| ------------- | ------- |
| Ingest        | ✔       |
| Hash          | ✔       |
| Encrypt       | ✔       |
| Chunk / Pack  | ✔       |
| Seal          | ✔       |
| Index         | ✔       |
| Snapshot      | ✔       |
| Sign          | ✔       |
| Network       | ✖       |
| Clock access  | ✖       |
| Randomness    | ✖       |
| External exec | ✖       |

---

## 2.2 PEL Program Constraints

```yaml
pel_version: 0.1
operators:
  - ingest
  - encrypt
  - seal
  - index
  - snapshot
outputs:
  - receipt
  - published_artifacts
```

**Invariant:**

> The PEL program hash is part of the receipt and MUST uniquely determine execution.
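
The operator whitelist above can be enforced with a trivial admission check. This is an illustrative sketch (`ALLOWED_OPERATORS` and `validate_program` are hypothetical names, not part of the spec):

```python
# Hypothetical validator for the AUTH-host PEL subset.
ALLOWED_OPERATORS = {"ingest", "hash", "encrypt", "chunk", "pack",
                     "seal", "index", "snapshot", "sign"}

def validate_program(program: dict) -> list:
    """Return the list of forbidden operators; empty means admissible."""
    return [op for op in program.get("operators", [])
            if op not in ALLOWED_OPERATORS]

program = {"pel_version": "0.1", "operators": ["ingest", "encrypt", "network"]}
# "network" is forbidden on the AUTH host:
assert validate_program(program) == ["network"]
```

A program that fails this check is rejected before execution, keeping the host side-effect-free by construction.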
---

# 3. EXECUTION RECEIPT (PER) SIGNATURE LAYOUT

## 3.1 Receipt Structure

```yaml
receipt_version: 1
receipt_id: <uuid>
domain_id: <uuid>
snapshot_id: <uuid>
pel_program_hash: <sha256>
inputs:
  - artifact_hash
outputs:
  artifacts:
    - artifact_key
    - block_id
receipt_hash: <sha256>
authority_signature:
  algorithm: ed25519
  key_id: <fingerprint>
  signature: <bytes>
```

---

## 3.2 Receipt Invariants

1. Receipt uniquely identifies:

   * Inputs
   * Program
   * Snapshot

2. Receipt hash is computed **before signing**
3. Receipt verification requires **no ASL store access**

> A receipt is portable truth.
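
A minimal sketch of invariant 2 (hash before signing), assuming the hash covers every field except `receipt_hash` and `authority_signature`, over a sorted-key JSON canonical form; the real canonicalization is not fixed here:

```python
import hashlib
import json

def receipt_hash(receipt: dict) -> str:
    """Hash the receipt body before signing (assumed canonical form)."""
    body = {k: v for k, v in receipt.items()
            if k not in ("receipt_hash", "authority_signature")}
    canonical = json.dumps(body, sort_keys=True, separators=(",", ":")).encode()
    return hashlib.sha256(canonical).hexdigest()

receipt = {
    "receipt_version": 1,
    "receipt_id": "r-1",
    "pel_program_hash": "ab" * 32,
    "inputs": ["cd" * 32],
}
receipt["receipt_hash"] = receipt_hash(receipt)
# Verification later needs only the receipt itself, no store access:
assert receipt_hash(receipt) == receipt["receipt_hash"]
```

The ed25519 `authority_signature` would then be computed over `receipt_hash`, which is why the hash must exclude the signature fields themselves.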
---

# 4. PUBLISHED ARTIFACT SELECTION RULES

## 4.1 Default Rule

Only artifacts explicitly declared in the PEL program as `publish: true` may exit the host.

```yaml
outputs:
  - name: encrypted_archive
    publish: true
  - name: intermediate_chunks
    publish: false
```
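
The default rule reduces to a simple allow-list filter over declared outputs (an illustrative sketch; `publishable` is a hypothetical helper):

```python
# Only outputs declared publish: true may be written to the USB RESPONSE tree.
def publishable(outputs: list) -> list:
    return [o["name"] for o in outputs if o.get("publish") is True]

outputs = [
    {"name": "encrypted_archive", "publish": True},
    {"name": "intermediate_chunks", "publish": False},
    {"name": "debug_dump"},  # no declaration -> treated as unpublished
]
assert publishable(outputs) == ["encrypted_archive"]
```

Note the deliberate default: an output with no `publish` declaration is not published, matching "only artifacts explicitly declared".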
---

## 4.2 Enforcement

* Unpublished artifacts:

  * May exist internally
  * MUST NOT be written to USB

* Published artifacts:

  * MUST be sealed
  * MUST be indexed
  * MUST be referenced in receipt

---

## 4.3 Domain Binding

Published artifacts are bound to:

```
(domain_id, snapshot_id)
```

This binding is **immutable**.

---

# 5. WRITE-ONCE USB HARDENING RULES

These are **operational invariants**, not filesystem tricks.

## 5.1 Mount Discipline

| Phase        | Mount mode |
| ------------ | ---------- |
| Intake       | read-only  |
| Execution    | unmounted  |
| Output write | read-write |
| Final        | unmounted  |

---

## 5.2 Structural Rules

* AUTH host **never deletes** from USB
* AUTH host **only writes under `/RESPONSE`**
* AUTH host **never modifies REQUEST**

---

## 5.3 Tamper Evidence

* RESPONSE includes:

  * `response.sig`
  * hash of REQUEST manifest

* Any post-write modification invalidates signature
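
One way the REQUEST/RESPONSE binding could look, as a sketch only: the RESPONSE carries the hash of the REQUEST manifest it answered, so a response cannot be silently re-paired with a different request (`bind_response` is a hypothetical helper, and the actual signed layout is not specified here):

```python
import hashlib

def bind_response(request_manifest: bytes, response_payload: bytes) -> dict:
    """Tamper-evidence binding: tie a response to the exact request manifest."""
    return {
        "request_manifest_hash": hashlib.sha256(request_manifest).hexdigest(),
        "response_hash": hashlib.sha256(response_payload).hexdigest(),
        # response.sig would be an ed25519 signature over this whole object
    }

rec = bind_response(b"version: 1\n", b"receipt...")
tampered = bind_response(b"version: 2\n", b"receipt...")
# A modified REQUEST manifest yields a different binding:
assert rec["request_manifest_hash"] != tampered["request_manifest_hash"]
```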
---

# 6. How this fits your philosophy (explicitly)

You said:

> “All we should do is place artifacts, certificates, program specification and intent, then crank.”

That is now **formally true**.

* Intent → manifest
* Program → PEL
* Data → artifacts
* Authority → signature
* Outcome → receipt

Nothing else matters.

---

# 7. What you can implement immediately

Even **without** ASL-CORE or PEL-TRACE:

* Canonical hashing
* Manifest validation
* Receipt generation
* USB discipline
* Domain bootstrapping
* Offline signing

This is *real progress*, not paper.

---

If you want next, I recommend (in this order):

1. **PEL canonical encoding rules**
2. **Receipt hash canonicalization**
3. **DAM schema finalization**
4. **Minimal reference CLI (`asl-auth-host verify/run/seal`)**
5. **Threat model write-up**

Just say where to go.
@ -1,175 +0,0 @@
NOTE: Superseded by tier1 TGK/1 and vendor/amduat/tier1/tgk-1-core.md; retained for historical context.

Absolutely — here’s a textual/diagrammatic draft of the **overall architecture** for the unified ASL + TGK + PEL system with federation, SIMD/shard acceleration, and deterministic replay. I’ll describe the components and their connections clearly so it can later be converted to a visual diagram.

---

# Unified ASL + TGK + PEL Architecture (Federated, Deterministic, SIMD/Shard Accelerated)

---

## 1. Components Overview

```
+-------------------------------------------------------------+
|                      FEDERATION LAYER                       |
|                                                             |
|   +--------------------+       +--------------------+       |
|   |    Peer Node 1     |<----->|    Peer Node 2     |       |
|   |                    |       |                    |       |
|   | logseq & snapshots |       | logseq & snapshots |       |
|   +--------------------+       +--------------------+       |
+-------------------------------------------------------------+
```

**Notes**:

* Federation layer manages **artifact and PER propagation**.
* Each peer node maintains **last applied logseq**, **snapshot provenance**, and **tombstones**.
* Deterministic replay across nodes guaranteed by **logseq + canonical ID ordering**.

---

## 2. Node-Level Architecture

```
+-------------------------------------------------------------+
|                         NODE LAYER                          |
|                                                             |
|  +---------------------+                                    |
|  | PEL Program Layer   | <-- DAG execution, deterministic   |
|  | (PEL DAG + Inputs)  |                                    |
|  +---------------------+                                    |
|            |                                                |
|            v                                                |
|  +---------------------+                                    |
|  | Execution Plan DAG  | <-- maps PEL DAG nodes to          |
|  | (Operators)         |     SegmentScan, IndexFilter, ...  |
|  +---------------------+                                    |
|            |                                                |
|            v                                                |
|  +---------------------+                                    |
|  | Shard / SIMD Buffers| <-- ASL/TGK segments mapped in     |
|  | Artifact & TGK Data |     memory, aligned for SIMD       |
|  +---------------------+                                    |
|            |                                                |
|            v                                                |
|  +---------------------+                                    |
|  | Index Operators     | <-- SegmentScan, IndexFilter,      |
|  | (TGKTraversal, etc) |     Merge, TombstoneShadow,        |
|  |                     |     Projection                     |
|  +---------------------+                                    |
|            |                                                |
|            v                                                |
|  +---------------------+                                    |
|  | Output / Projection | <-- final results, PER artifacts   |
|  +---------------------+                                    |
+-------------------------------------------------------------+
```

---

## 3. Data Flow

1. **PEL DAG Inputs** → loaded as ASL artifacts or PERs.
2. **PEL DAG Execution** → produces PER artifacts.
3. **PER + raw artifacts** → mapped into **shard-local SIMD buffers**.
4. **Execution plan operators** applied:

   * SegmentScan → IndexFilter → Merge
   * TGKTraversal → Projection / Aggregation
   * TombstoneShadow ensures snapshot safety

5. **Output** → deterministic, snapshot-bounded results.
6. **Propagation** → federation layer transmits new artifacts/PERs to peers.

---

## 4. Shard & SIMD Acceleration

```
Shard 0        Shard 1        Shard 2
+---------+    +---------+    +---------+
| Buffers |    | Buffers |    | Buffers |
|  SIMD   |    |  SIMD   |    |  SIMD   |
+---------+    +---------+    +---------+
      \             |             /
       \            |            /
        \           |           /
         \          |          /
          +---------+---------+
                    |
                  Merge
                    |
                 Output
```

* Each shard processes a **subset of the artifact/TGK edge space**.
* SIMD filters applied **per shard**.
* Merge ensures **deterministic global order**.
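
The merge step above can be sketched with a k-way merge keyed on `(logseq, canonical_id)`; provided each shard yields records already sorted by that key, the global order is independent of shard layout (`merge_shards` is an illustrative helper, not the real implementation):

```python
import heapq

def merge_shards(shards):
    """Deterministic k-way merge of per-shard sorted record streams."""
    return list(heapq.merge(*shards,
                            key=lambda r: (r["logseq"], r["canonical_id"])))

shard0 = [{"logseq": 1, "canonical_id": 7}, {"logseq": 3, "canonical_id": 2}]
shard1 = [{"logseq": 2, "canonical_id": 9}, {"logseq": 3, "canonical_id": 1}]

merged = merge_shards([shard0, shard1])
# Swapping shard order leaves the output unchanged:
assert merge_shards([shard1, shard0]) == merged
```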
---

## 5. Snapshot & Provenance Integration

```
+--------------------------+
|  ZFS Snapshots / Logs    |
|  ASL & TGK Storage       |
|  Shard-local Mapping     |
+--------------------------+
            |
            v
+--------------------------+
| Snapshot-safe Operators  |
| (SegmentScan,            |
|  TombstoneShadow,        |
|  TGKTraversal)           |
+--------------------------+
            |
            v
+--------------------------+
|     Provenance Table     |
| (logseq, canonical IDs)  |
+--------------------------+
```

* Snapshot limits all reads.
* Provenance table ensures **deterministic replay and auditing**.

---

## 6. Federation Synchronization Flow

```
+------------------+                 +------------------+
|     Node A       |                 |     Node B       |
| PER/Artifact log | ---> delta ---> | PER/Artifact log |
| Snapshot state   |                 | Snapshot state   |
| Shard buffers    |                 | Shard buffers    |
+------------------+                 +------------------+
         |                                    ^
         |                                    |
         +------------------------------------+
         |        Deterministic Replay        |
         +------------------------------------+
```

* Each node applies **incoming logs deterministically**.
* Shard mapping may differ, but merge ensures **canonical global ordering**.
* PEL program replay is deterministic based on input artifacts and snapshot.

---

## 7. Summary

* **PEL DAGs** feed deterministic execution plans.
* **ASL/TGK segments** stored in shard-local SIMD-aligned buffers.
* **Operators** are applied in a **snapshot-bounded, deterministic manner**.
* **Tombstones** and shadowing preserve provenance.
* **Federation layer** ensures deterministic cross-node propagation and replay.
* **PER artifacts** integrate seamlessly into the execution plan.
* The architecture scales to **billions of artifacts and TGK edges**.

---

If you want, I can **next produce a single unified “full system specification” document**, merging **ASL-CORE, ASL-STORE-INDEX, ENC-ASL-TGK-INDEX, PEL integration, execution plan, SIMD/shard acceleration, and federation rules** into a master reference.

Do you want me to draft that next?
@ -1,220 +0,0 @@
NOTE: Superseded by tier1 TGK/1 and vendor/amduat/tier1/tgk-1-core.md; retained for historical context.

# Unified ASL + TGK + PEL System Specification (Master Reference)

---

NOTE: Integrated into `tier1/asl-system-1.md`. This note is retained for
historical context and may drift.

## 1. Introduction

This document specifies a unified system for deterministic, federated, snapshot-safe storage and execution of artifacts, execution receipts (PERs), and TGK edges. The system integrates:

* **ASL (Artifact Storage Layer)**
* **TGK (Trace Graph Kernel)**
* **PEL (Program Execution Layer)**
* **Indexing, Shard/SIMD acceleration**
* **Federation and deterministic replay**

The system supports **billions of artifacts and edges**, deterministic DAG execution, and cross-node provenance.

---

## 2. Core Concepts

| Concept      | Description |
| ------------ | ----------- |
| Artifact     | Basic unit stored in ASL; may include optional `type_tag` and `has_type_tag`. |
| PER          | PEL Execution Receipt; artifact describing deterministic output of a PEL program. |
| TGK Edge     | Represents a directed relation between artifacts/PERs. Stores `from_nodes`, `to_nodes`, `edge_type`, `roles`. |
| Snapshot     | ZFS snapshot, defines read visibility and deterministic execution boundary. |
| Logseq       | Monotonic sequence number for deterministic ordering. |
| Shard        | Subset of artifacts/edges partitioned for SIMD/parallel execution. |
| Canonical ID | Unique identifier per artifact, PER, or TGK edge. |

---

## 3. ASL-CORE & ASL-STORE-INDEX

### 3.1 ASL-CORE

* Defines **artifact semantics**:

  * Optional `type_tag` (32-bit) with `has_type_tag` (8-bit toggle)
  * Artifacts are immutable once written
  * PERs are treated as artifacts

### 3.2 ASL-STORE-INDEX

* Manages **artifact blocks**, including:

  * Small vs. large blocks (packaging)
  * Block sealing, retention, snapshot safety

* Index structure:

  * **Shard-local**, supports **billion-scale lookups**
  * Bloom filters for quick membership queries
  * Sharding and SIMD acceleration for memory-efficient lookups

* Record Layout (C struct):

```c
typedef struct {
    uint64_t artifact_key;
    uint64_t block_id;
    uint32_t offset;
    uint32_t length;
    uint32_t type_tag;
    uint8_t  has_type_tag;
} artifact_index_entry_t;
```
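
For orientation, the unpadded wire size of this entry can be checked with Python's `struct` module. This is a sketch assuming a little-endian, unpadded encoding; the in-memory C struct will normally be padded out to an 8-byte multiple by the compiler:

```python
import struct

# Little-endian, unpadded encoding of artifact_index_entry_t:
# u64 artifact_key, u64 block_id, u32 offset, u32 length,
# u32 type_tag, u8 has_type_tag
FMT = "<QQIIIB"
assert struct.calcsize(FMT) == 29  # 8+8+4+4+4+1 bytes on the wire

entry = struct.pack(FMT, 0xDEADBEEF, 42, 0, 4096, 7, 1)
key, block_id, offset, length, type_tag, has_tag = struct.unpack(FMT, entry)
assert (block_id, length, has_tag) == (42, 4096, 1)
```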
---

## 4. ENC-ASL-TGK-INDEX

* Defines **encoding for artifacts, PERs, and TGK edges** in storage.
* TGK edges stored as:

```c
typedef struct {
    uint64_t canonical_edge_id;
    uint64_t from_nodes[MAX_FROM];
    uint64_t to_nodes[MAX_TO];
    uint32_t edge_type;
    uint8_t  roles;
    uint64_t logseq;
} tgk_edge_record_t;
```

* Supports deterministic traversal, snapshot bounds, and SIMD filtering.

---

## 5. PEL Integration

### 5.1 PEL Program DAG

* Deterministic DAG with:

  * Inputs: artifacts or PERs
  * Computation nodes: concat, slice, primitive ops
  * Outputs: artifacts or PERs

* Guarantees snapshot-bound determinism:

  * Inputs: `logseq ≤ snapshot_max`
  * Outputs: `logseq = max(input_logseq) + 1`
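
The two logseq rules above can be written out directly (an illustrative sketch; `output_logseq` is a hypothetical helper):

```python
def output_logseq(input_logseqs, snapshot_max):
    """Snapshot-bound logseq rule for a PEL node:
    all inputs must satisfy logseq <= snapshot_max,
    and the output gets max(input logseq) + 1."""
    assert all(ls <= snapshot_max for ls in input_logseqs), \
        "input outside snapshot bound"
    return max(input_logseqs) + 1

assert output_logseq([3, 7, 5], snapshot_max=10) == 8
```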
### 5.2 Execution Plan Mapping

| PEL Node       | Execution Plan Operator      |
| -------------- | ---------------------------- |
| Input Artifact | SegmentScan                  |
| Concat/Slice   | Projection                   |
| TGK Projection | TGKTraversal                 |
| Aggregate      | Aggregation                  |
| PER Output     | SegmentScan (fed downstream) |

---

## 6. Execution Plan Operators

* **SegmentScan**: scan artifacts/PERs within snapshot
* **IndexFilter**: SIMD-accelerated filtering by type_tag, edge_type, role
* **Merge**: deterministic merge across shards
* **TGKTraversal**: depth-limited deterministic DAG traversal
* **Projection**: select fields
* **Aggregation**: count, sum, union
* **TombstoneShadow**: applies tombstones and ensures snapshot safety

---

## 7. Shard & SIMD Execution

* Artifacts/edges partitioned by shard
* SIMD applied per shard for filters and traversal
* Deterministic merge across shards ensures global ordering
* Buffers structured for memory alignment:

```c
struct shard_buffer {
    uint64_t        *artifact_ids;
    uint64_t        *tgk_edge_ids;
    uint32_t        *type_tags;
    uint8_t         *roles;
    uint64_t         count;
    snapshot_range_t snapshot;
};
```

---

## 8. Federation & Cross-Node Deterministic Replay

* **Propagation rules**:

  * Only new artifacts/PERs/edges (`logseq > last_applied`) transmitted
  * Delta replication per snapshot

* **Replay rules**:

  * Sort by `(logseq, canonical_id)` for deterministic application
  * Apply tombstones/shadowing
  * Preserve snapshot boundaries

* **Conflict resolution**:

  * ArtifactKey collisions: duplicate hash → ignore, differing hash → flag
  * Edge conflicts: latest logseq ≤ snapshot
  * PER conflicts: identical inputs → skip execution

---

## 9. Provenance & Audit

* **Provenance table**: snapshot → artifacts/PERs applied
* **Federation log table**: peer node → last applied logseq
* **Deterministic replay** guarantees identical final outputs across nodes

---

## 10. Data Flow Summary

```
PEL DAG Inputs --> Execute PEL Program --> Generate PERs
       |                                        |
       v                                        v
 ASL/TGK Shard Buffers (SIMD-aligned, snapshot-safe)
       |
       v
 Execution Plan Operators
 (SegmentScan, IndexFilter, Merge, TGKTraversal, TombstoneShadow)
       |
       v
 Final Output (artifacts + PERs + TGK projections)
       |
       v
 Federation Layer (propagation & deterministic replay across nodes)
```

---

## 11. Snapshot & Log Integration

* All operations are **snapshot-bounded**.
* **ZFS snapshots** + append-only sequential logs provide:

  * Checkpointing
  * Deterministic replay
  * Garbage collection of unreachable artifacts while preserving provenance

---

## 12. Summary

This unified system specification ensures:

* **Deterministic execution** (PEL + index + TGK)
* **Snapshot-safe operations**
* **Shard/SIMD acceleration**
* **Federated, replayable, cross-node consistency**
* **Integration of PER artifacts with TGK edges**
* **Provenance and auditability at scale**
@ -1,272 +0,0 @@
NOTE: Superseded by tier1 TGK/1 and vendor/amduat/tier1/tgk-1-core.md; retained for historical context.

Here’s a **formal draft of the execution plan specification** for the unified ASL + TGK query execution model. It defines operators, data flow, and snapshot semantics in a deterministic, layered way.

---

# Unified Execution Plan Specification (ASL + TGK)

---

## 1. Purpose

This specification formalizes **query execution plans** for:

* ASL artifacts (ENC-ASL-CORE)
* TGK edges (ENC-TGK-CORE)
* Merged index references (ENC-ASL-TGK-INDEX)

Goals:

1. Deterministic per snapshot (`logseq`)
2. Respect tombstones and shadowing
3. Leverage filters, sharding, SIMD acceleration
4. Support DAG traversals (TGK edges) and artifact projections
5. Enable formal planning and optimization

---

## 2. Execution Plan Structure

An execution plan `EP` is a **directed acyclic graph (DAG)** of **operators**:

```
EP = { nodes: [Op1, Op2, ...], edges: [(Op1→Op2), ...] }
```

### Node Properties

* `op_id`: unique operator ID
* `op_type`: see Operator Types (Section 3)
* `inputs`: references to upstream operators
* `outputs`: reference streams
* `constraints`: optional filtering conditions
* `snapshot`: logseq limit
* `projections`: requested fields
* `traversal_depth`: optional for TGK expansion

---

## 3. Operator Types

| Operator          | Description |
| ----------------- | ----------- |
| `SegmentScan`     | Scans a segment of ENC-ASL-TGK-INDEX, applies advisory filters |
| `IndexFilter`     | Applies canonical constraints (artifact type, edge type, role) |
| `Merge`           | Deterministically merges multiple streams (logseq ascending, canonical key tie-breaker) |
| `Projection`      | Selects output fields from index references |
| `TGKTraversal`    | Expands TGK edges from node sets (depth-limited DAG traversal) |
| `Aggregation`     | Performs count, sum, union, or other aggregations |
| `LimitOffset`     | Applies pagination or top-N selection |
| `ShardDispatch`   | Routes records from different shards in parallel, maintaining deterministic order |
| `SIMDFilter`      | Parallel filter evaluation for routing keys or type tags |
| `TombstoneShadow` | Applies shadowing to remove tombstoned or overridden records |

---

## 4. Operator Semantics

### 4.1 SegmentScan

* Inputs: segment(s) of ENC-ASL-TGK-INDEX
* Outputs: raw record stream
* Steps:

  1. Select segments with `logseq_min ≤ snapshot`
  2. Apply **advisory filters** to eliminate records
  3. Return record references (artifact_id, tgk_edge_id)

---

### 4.2 IndexFilter

* Inputs: raw record stream
* Outputs: filtered stream
* Steps:

  1. Apply **canonical constraints**:

     * Artifact type tag
     * Edge type key, role
     * Node IDs for TGK edges

  2. Drop tombstoned or shadowed records

* Deterministic

---

### 4.3 Merge

* Inputs: multiple streams
* Outputs: merged stream
* Sort order:

  1. logseq ascending
  2. canonical ID tie-breaker

* Deterministic, regardless of input shard order

---

### 4.4 Projection

* Inputs: record stream
* Outputs: projected fields
* Steps:

  * Select requested fields (artifact_id, tgk_edge_id, node_id, type tags)
  * Preserve order

---

### 4.5 TGKTraversal

* Inputs: node set or TGK edge references
* Outputs: expanded TGK edge references (DAG traversal)
* Parameters:

  * `depth`: max recursion depth
  * `snapshot`: logseq cutoff
  * `direction`: from/to

* Deterministic traversal:

  * logseq ascending per edge
  * canonical key tie-breaker

* Optional projection of downstream nodes or artifacts
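
A minimal sketch of depth-limited, deterministic traversal under these rules. Edges are modelled as `(logseq, canonical_edge_id, from_node, to_node)` tuples for a single-node `from`/`to` case; only edges with `logseq ≤ snapshot` are visible, and expansion order is fixed by `(logseq, canonical_edge_id)` (`traverse` is a hypothetical helper):

```python
def traverse(edges, start, depth, snapshot):
    """Expand up to `depth` levels from `start`, returning the visited
    edge IDs in deterministic (logseq, canonical_edge_id) order."""
    visible = sorted(e for e in edges if e[0] <= snapshot)
    frontier, seen = {start}, []
    for _ in range(depth):
        nxt = set()
        for logseq, eid, src, dst in visible:
            if src in frontier:
                seen.append(eid)
                nxt.add(dst)
        frontier = nxt
    return seen

edges = [(2, 20, "A", "B"), (1, 10, "A", "C"),
         (3, 30, "B", "D"), (9, 90, "A", "Z")]
# snapshot=3 hides edge 90; depth=2 reaches B/C, then D:
assert traverse(edges, "A", depth=2, snapshot=3) == [10, 20, 30]
```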
---

### 4.6 Aggregation

* Inputs: record stream
* Outputs: aggregated result
* Examples:

  * `COUNT(*)`, `UNION`, `SUM(type_tag)`

* Deterministic: preserves snapshot and logseq ordering

---

### 4.7 LimitOffset

* Inputs: record stream
* Outputs: top-N slice
* Deterministic: ordering from upstream merge operator

---

### 4.8 ShardDispatch & SIMDFilter

* Inputs: parallel streams from shards
* Outputs: unified stream
* Ensures:

  * Deterministic merge order
  * SIMD acceleration for type/tag filters
  * Filters are advisory; exact canonical check downstream

---

### 4.9 TombstoneShadow

* Inputs: record stream
* Outputs: visible records only
* Logic:

  * For a given canonical key (artifact or TGK edge):

    * Keep only the latest `logseq ≤ snapshot`
    * Remove shadowed/tombstoned versions
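
The shadowing logic above can be sketched as a last-writer-wins reduction per canonical key, with `None` payloads standing in for tombstones (an illustrative model, not the real record format):

```python
def tombstone_shadow(records, snapshot):
    """Keep, per canonical key, only the latest version with
    logseq <= snapshot; drop keys whose latest version is a tombstone."""
    latest = {}
    for logseq, key, payload in records:  # payload None => tombstone
        if logseq <= snapshot and (key not in latest or logseq > latest[key][0]):
            latest[key] = (logseq, payload)
    return {k: p for k, (ls, p) in latest.items() if p is not None}

records = [(1, "a", "v1"), (2, "a", "v2"), (3, "a", None), (2, "b", "w")]
assert tombstone_shadow(records, snapshot=2) == {"a": "v2", "b": "w"}
assert tombstone_shadow(records, snapshot=3) == {"b": "w"}  # "a" tombstoned
```

Note how moving the snapshot forward can *remove* a key: at snapshot 3 the tombstone for `"a"` becomes visible and shadows both earlier versions.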
---

## 5. Data Flow Example

**Query:** Find all artifacts of type `42` reachable via TGK edges of type `7` from node `N0`, depth 2.

Execution Plan:

```
SegmentScan(ASL segments)
   → IndexFilter(type_tag=42)
   → Merge

SegmentScan(TGK segments)
   → IndexFilter(edge_type=7, from_node=N0)
   → TGKTraversal(depth=2)
   → TombstoneShadow
   → Merge

Merge(ASL results, TGK results)
   → Projection(artifact_id, tgk_edge_id, node_id)
   → Aggregation(COUNT)
```

* Each operator preserves **snapshot semantics**
* Deterministic order maintained throughout

---

## 6. Snapshot and Determinism Guarantees

1. **Segment visibility**: `logseq_min ≤ snapshot`
2. **Record visibility**: `logseq ≤ snapshot`
3. **Merge and traversal order**: logseq ascending → canonical key
4. Filters, SIMD, and sharding **cannot alter output**
5. Tombstones guarantee no resurrection of removed records

---

## 7. Plan Serialization (Optional)

Execution plans can be serialized for:

* Reuse across queries
* Federation / distributed execution
* Deterministic replay

Serialization format:

```c
struct exec_plan {
    uint32_t plan_version;
    uint32_t operator_count;
    struct operator_def  operators[];  /* DAG nodes (schematic: C permits only
                                          one flexible array member, so a real
                                          layout stores counts and offsets) */
    struct operator_edge edges[];      /* DAG edges */
};
```

* Each `operator_def` references type, parameters, projections, snapshot
* Each `operator_edge` references upstream → downstream operators

---

## 8. Normative Invariants

1. Deterministic output per snapshot
2. Core objects (artifact/edge) identity authoritative from CORE layers
3. Filters and acceleration advisory only
4. DAG traversal respects depth and snapshot
5. Aggregation deterministic across parallel shards
6. Execution plan serialization preserves all operator semantics

---

## 9. Summary

This specification formalizes:

* **Operators**, **data flow**, **DAG execution**
* **Snapshot safety** and **determinism**
* **Parallel acceleration** (filters, SIMD, shards)
* **Unified handling** of ASL artifacts and TGK edges

It allows **formal reasoning, optimization, and distributed execution** while maintaining the strict separation between core semantics and index/acceleration layers.

---

Next steps could include:

* **C-structs for operator definitions and execution plan serialization**
* **Physical operator implementations mapping to ENC-ASL-TGK-INDEX**

Do you want me to draft those C-struct definitions next?
@ -1,179 +0,0 @@
NOTE: Superseded by tier1 TGK/1 and vendor/amduat/tier1/tgk-1-core.md; retained for historical context.

Absolutely — here’s a **draft for a unified query execution model** over the merged ENC-ASL-TGK-INDEX. It treats ASL artifacts and TGK edges uniformly for queries, respects snapshots, and preserves determinism.

---

# Unified Query Execution Model (ASL + TGK)

---

## 1. Purpose

This model defines how **queries over ASL artifacts and TGK edges** are executed:

* Snapshot-safe
* Deterministic per log sequence
* Able to leverage acceleration structures (filters, routing, SIMD)
* Able to support DAG program projections and trace graph traversals

It does **not** redefine core semantics:

* ENC-ASL-CORE defines artifacts
* ENC-TGK-CORE defines edges
* ENC-ASL-TGK-INDEX defines references and acceleration

---

## 2. Query Abstraction

A **query** Q is defined as:

```
Q = {
  snapshot: S,
  constraints: C,       // filters on artifacts, edges, or nodes
  projections: P,       // select returned fields
  traversal: optional,  // TGK edge expansion
  aggregation: optional // count, union, etc.
}
```

* **snapshot**: the log sequence cutoff
* **constraints**: logical predicate over index fields (artifact type, edge type, node ID)
* **projections**: the output columns
* **traversal**: optional TGK graph expansion
* **aggregation**: optional summarization

---

## 3. Execution Stages

### 3.1 Index Scan

1. Determine **segments visible** for snapshot `S`
2. For each segment:

   * Use **filters** to eliminate segments/records (advisory)
   * Decode **ASL artifact references** and **TGK edge references**
   * Skip tombstoned or shadowed records

### 3.2 Constraint Evaluation

* Evaluate **canonical constraints**:

  * Artifact ID, type tag
  * Edge ID, edge type, role
  * Node ID (from/to)

* Filters are advisory; exact check required

### 3.3 Traversal Expansion (Optional)

For TGK edges:

1. Expand edges from a set of nodes
2. Apply **snapshot constraints** to prevent including edges outside S
3. Produce DAG projections or downstream artifact IDs

### 3.4 Projection and Aggregation

* Apply **projection fields** as requested
* Optionally aggregate or reduce results
* Maintain **deterministic order** by logseq ascending, then canonical key

---

## 4. Routing and SIMD Acceleration

* SIMD may evaluate **multiple routing keys in parallel**
* Routing keys are precomputed in ENC-ASL-TGK-INDEX optional sections
* Acceleration **cannot change semantics**
* Parallel scans **must be deterministic**: order of records in output = logseq + canonical key

---

## 5. Snapshot Semantics

* Segment is visible if `segment.logseq_min ≤ S`
* Record is visible if `record.logseq ≤ S`
* Tombstones shadow earlier records
* Deterministic filtering required

---

## 6. Traversal Semantics (TGK edges)

* Given a set of start nodes `N_start`:

  * Fetch edges with `from[] ∩ N_start ≠ ∅` (or `to[]` depending on direction)
  * Each edge expanded **once per logseq**
  * Expansion obeys snapshot S
  * Edge properties (type, role) used in filtering but not for identity

* Optional recursion depth `d` may be specified for DAG traversal

---

## 7. Unified Query API (Conceptual)

```
result_set = query(
  snapshot=S,
  artifact_constraints={type_tag=42},
  edge_constraints={edge_type=7, role=FROM},
  start_nodes=[node1, node2],
  projections=[artifact_id, tgk_edge_id, node_id],
  traversal_depth=3,
  aggregation='count'
)
```

* Returns combined **artifact + TGK edge references**
* Traversal automatically expands TGK edges
* Aggregation and projection deterministic

---

## 8. Determinism Guarantees

1. **Same snapshot + same constraints → identical results**
2. **Logseq ascending + canonical key tie-breaks**
3. Filters, shards, SIMD do **not affect result set**
4. Traversal expansion deterministic per DAG rules

---

## 9. Garbage Collection Safety

* Records and edges **must not be removed** if they appear in snapshot `S` or any **reachable traversal**
* Optional: **tombstone records** prevent accidental resurrection

---

## 10. Federation Considerations (Non-normative)

* Domains may restrict **visibility of artifacts/edges**
* Queries may include **domain filters**
* Canonical IDs remain stable across domains

---

## 11. Summary

The unified query execution model:

* Provides **single API over ASL + TGK references**
* Fully respects **snapshot determinism**
* Supports **DAG traversal over TGK edges**
* Leverages **ENC-ASL-TGK-INDEX acceleration**
* Guarantees **semantic consistency** without re-encoding core objects

---

Next step could be:

* **Formal execution plan specification**: physical operators, filter pushdown, traversal operators
* **C-struct definitions for query results**, mapping directly to ENC-ASL-TGK-INDEX

Do you want me to draft that next?
@ -1,195 +0,0 @@
NOTE: Superseded by tier1 TGK/1 and vendor/amduat/tier1/tgk-1-core.md; retained for historical context.

Absolutely — here’s a formal draft of **C structure mappings and binary encoding** for artifacts, PERs, TGK edges, shards, and execution plans in the unified ASL + TGK + PEL system. This is meant for **C libraries, API frontends, and toolchains**, preserving memory alignment, determinism, and snapshot safety.

---

# Unified System C Struct Mapping and Binary Encoding Specification

---

## 1. Artifact & PER Records

### 1.1 Artifact Index Entry

```c
typedef struct __attribute__((packed)) {
    uint64_t artifact_key;   // canonical ArtifactKey
    uint64_t block_id;       // CAS/ASL block ID
    uint32_t offset;         // offset within block
    uint32_t length;         // length in bytes
    uint32_t type_tag;       // optional type tag
    uint8_t  has_type_tag;   // 1 if type_tag is valid, 0 otherwise
    uint8_t  reserved[3];    // padding for 8-byte alignment
    uint64_t logseq;         // monotonic log sequence
} artifact_index_entry_t;
```

**Binary encoding**:

| Field        | Bytes | Notes                   |
| ------------ | ----- | ----------------------- |
| artifact_key | 8     | canonical ID            |
| block_id     | 8     | ZFS CAS block reference |
| offset       | 4     | offset in block         |
| length       | 4     | payload size            |
| type_tag     | 4     | optional type           |
| has_type_tag | 1     | toggle                  |
| reserved     | 3     | alignment padding       |
| logseq       | 8     | monotonic sequence      |
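The 40-byte packed layout above can be cross-checked outside C; a minimal sketch using Python's `struct` module (the format string mirrors the field list; the example values are arbitrary):

```python
import struct

# Little-endian, no implicit padding: matches artifact_index_entry_t above.
# Q = uint64, I = uint32, B = uint8, 3x = reserved[3].
ARTIFACT_INDEX_ENTRY = struct.Struct("<QQIIIB3xQ")

def pack_entry(artifact_key, block_id, offset, length, type_tag, has_type_tag, logseq):
    return ARTIFACT_INDEX_ENTRY.pack(
        artifact_key, block_id, offset, length, type_tag, has_type_tag, logseq
    )

# 8 + 8 + 4 + 4 + 4 + 1 + 3 + 8 = 40 bytes, as in the encoding table.
assert ARTIFACT_INDEX_ENTRY.size == 40

raw = pack_entry(0xDEADBEEF, 7, 128, 4096, 42, 1, 1001)
assert ARTIFACT_INDEX_ENTRY.unpack(raw) == (0xDEADBEEF, 7, 128, 4096, 42, 1, 1001)
```

The `<` prefix disables native alignment, which is what `__attribute__((packed))` plus rule 2 below (little-endian canonical order) requires.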
---

### 1.2 PER (PEL Execution Receipt) Record

```c
typedef struct __attribute__((packed)) {
    artifact_index_entry_t base_artifact; // embedded artifact info
    uint64_t  pel_program_id;             // PEL program DAG canonical ID
    uint32_t  input_count;                // number of input artifacts
    uint64_t *input_keys;                 // array of ArtifactKeys
    uint32_t  output_count;               // number of outputs
    uint64_t *output_keys;                // array of ArtifactKeys
} per_record_t;
```

**Encoding notes**:

* Base artifact encoding is identical to `artifact_index_entry_t`
* Followed by PEL-specific fields: `pel_program_id`, `input_count`, `input_keys[]`, `output_count`, `output_keys[]`
* Arrays are **length-prefixed** for serialization

---

## 2. TGK Edge Records

```c
#define MAX_FROM 16
#define MAX_TO   16

typedef struct __attribute__((packed)) {
    uint64_t canonical_edge_id;    // unique edge ID
    uint64_t from_nodes[MAX_FROM]; // from node ArtifactKeys
    uint64_t to_nodes[MAX_TO];     // to node ArtifactKeys
    uint32_t from_count;           // actual number of from nodes
    uint32_t to_count;             // actual number of to nodes
    uint32_t edge_type;            // type key
    uint8_t  roles;                // bitmask of roles
    uint8_t  reserved[7];          // padding
    uint64_t logseq;               // log sequence
} tgk_edge_record_t;
```

**Encoding notes**:

* Fixed-size arrays simplify SIMD processing
* `from_count` / `to_count` indicate valid entries
* Deterministic ordering preserved by `logseq + canonical_edge_id`

---

## 3. Shard-Local Buffers

```c
typedef struct {
    artifact_index_entry_t *artifacts; // pointer to artifact array
    tgk_edge_record_t      *edges;     // pointer to TGK edges
    uint64_t artifact_count;
    uint64_t edge_count;
    snapshot_range_t snapshot;         // snapshot bounds for this shard
} shard_buffer_t;
```

**Binary encoding**:

* Contiguous memory layout per shard for SIMD operations
* `artifact_count` and `edge_count` used for iteration
* `snapshot_range_t` defines `min_logseq` and `max_logseq` for safety

---

## 4. Execution Plan Structures

### 4.1 Operator Definition

```c
typedef enum {
    OP_SEGMENT_SCAN,
    OP_INDEX_FILTER,
    OP_MERGE,
    OP_TGK_TRAVERSAL,
    OP_PROJECTION,
    OP_AGGREGATION,
    OP_TOMBSTONE_SHADOW
} operator_type_t;

typedef struct __attribute__((packed)) {
    uint32_t op_id;           // unique operator ID
    operator_type_t type;     // operator type
    uint32_t input_count;     // number of inputs
    uint32_t output_count;    // number of outputs
    uint32_t params_length;   // length of serialized params
    uint8_t *params;          // pointer to operator parameters
    uint32_t shard_id;        // shard this operator applies to
} operator_t;
```

* `params` contains **operator-specific configuration** (e.g., filter masks, edge_type keys)
* Operators are serialized sequentially in the execution plan

---

### 4.2 Execution Plan Serialization

```c
typedef struct __attribute__((packed)) {
    uint32_t plan_id;          // unique plan ID
    uint32_t operator_count;   // number of operators
    operator_t *operators;     // pointer to operator array
    snapshot_range_t snapshot; // snapshot bounds for execution
} execution_plan_t;
```

**Encoding**:

1. `plan_id` (4 bytes)
2. `operator_count` (4 bytes)
3. `snapshot_range_t` (`min_logseq`, `max_logseq`; 16 bytes)
4. Serialized operators (fixed-size header + variable `params`)

---

## 5. Binary Serialization Rules

1. **All structures packed** to prevent gaps (`__attribute__((packed))`)
2. **Canonical byte order**: little-endian for cross-platform compatibility
3. **Pointers** replaced by offsets in serialized form
4. Arrays (inputs, outputs, from/to nodes) **length-prefixed**
5. `logseq` + `canonical_id` used for deterministic ordering
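Rules 2 and 4 can be sketched together; a minimal illustration of length-prefixed, little-endian array encoding (the function names are local to this sketch):

```python
import struct

def encode_u64_array(values):
    """Length-prefixed little-endian uint64 array (rules 2 and 4)."""
    return struct.pack("<I", len(values)) + struct.pack(f"<{len(values)}Q", *values)

def decode_u64_array(buf, offset=0):
    """Return (values, next_offset) so records can be read back to back."""
    (count,) = struct.unpack_from("<I", buf, offset)
    values = list(struct.unpack_from(f"<{count}Q", buf, offset + 4))
    return values, offset + 4 + 8 * count

encoded = encode_u64_array([10, 20, 30])
decoded, end = decode_u64_array(encoded)
assert decoded == [10, 20, 30] and end == len(encoded)
```

Returning the next offset from the decoder is what lets `input_keys[]` and `output_keys[]` of a `per_record_t` be parsed sequentially from one buffer.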
---

## 6. Notes on SIMD / Shard Layout

* All arrays in `shard_buffer_t` are **contiguous and aligned to 64-byte boundaries** for vectorized loads
* Fixed-size arrays in `tgk_edge_record_t` simplify branchless SIMD filtering
* Serialization preserves shard boundaries for distributed execution and federation propagation

---

## 7. Summary

This C struct mapping and binary encoding specification:

* Covers **artifact, PER, TGK edge**, and **execution plan** structures
* Enables **snapshot-safe deterministic execution**
* Supports **SIMD/shard acceleration**
* Is ready for **C libraries, API frontends, and cross-node federation**
* Preserves **provenance, logseq ordering, and deterministic replay**

---

Next logical step could be **formalizing garbage collection and tombstone application rules** in binary form, ensuring unreachable artifacts can be removed safely while preserving determinism.

Do you want me to draft that next?
@ -1,238 +0,0 @@
Yes — that’s exactly right, and what you just articulated is an **important missing conceptual layer**.
You’ve essentially identified the need for a **PEL-visible “foreign domain binding” state machine**, and the notion of a **virgin authority**.

I’ll formalize this cleanly and minimally, without over-specifying.

---

# Virgin Authority, Foreign Domains, and USB as a PEL Domain

## 1. The Missing Abstraction: “Presented Domain”

From the PEL / ASL perspective, **USB content is not just files**.

It is one of:

1. A **presented domain**
2. A **domain admission request**
3. A **known foreign domain**
4. A **virgin domain**

This must be explicit, not inferred.

---

## 2. Domain States (Normative)

### 2.1 Authority Domain States

| State           | Description                   |
| --------------- | ----------------------------- |
| **Virgin**      | No master authority exists    |
| **Rooted**      | Root authority keys exist     |
| **Federating**  | Can sign foreign domains      |
| **Operational** | Normal steady-state authority |

> The auth host **starts in Virgin**.

---

### 2.2 Presented Domain States (USB)

When a USB is inserted, its contents are classified as one of:

| State              | Meaning                  |
| ------------------ | ------------------------ |
| **Virgin**         | No certificates present  |
| **Self-asserting** | Contains unsigned claims |
| **Admitted**       | Has valid DAM            |
| **Known foreign**  | Previously pinned domain |

This classification is done **by PEL**, not by shell logic.
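A minimal sketch of the authority-side state machine, assuming the transition set implied by the tables above (Virgin to Rooted via genesis, then onward); the exact set of legal transitions is an illustrative assumption, not normative:

```python
from enum import Enum

class AuthorityState(Enum):
    VIRGIN = "virgin"
    ROOTED = "rooted"
    FEDERATING = "federating"
    OPERATIONAL = "operational"

# Illustrative transition table: only moves implied by this note are legal.
_ALLOWED = {
    (AuthorityState.VIRGIN, AuthorityState.ROOTED),          # authority genesis
    (AuthorityState.ROOTED, AuthorityState.FEDERATING),
    (AuthorityState.ROOTED, AuthorityState.OPERATIONAL),
    (AuthorityState.FEDERATING, AuthorityState.OPERATIONAL),
}

class AuthorityDomain:
    def __init__(self):
        self.state = AuthorityState.VIRGIN  # the auth host starts in Virgin

    def transition(self, new_state: AuthorityState) -> None:
        if (self.state, new_state) not in _ALLOWED:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

host = AuthorityDomain()
host.transition(AuthorityState.ROOTED)  # genesis: Virgin -> Rooted
assert host.state is AuthorityState.ROOTED
```

Rejecting undeclared transitions up front is the point: a virgin host can do nothing except genesis, matching section 4 below.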
---

## 3. USB as a Temporary ASL Domain

**Key principle:**

> USB content is treated as a *temporary ASL domain* with read-only semantics.

Let’s call it:

```
domain_id = PRESENTED::<hash(usb_fingerprint)>
```

Properties:

* Read-only
* No sealing allowed
* No GC
* No snapshots persisted
* Exists only for duration of execution

PEL can refer to:

```yaml
inputs:
  - domain: presented
    path: /REQUEST/input-artifacts
```
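The `PRESENTED::<hash(usb_fingerprint)>` form can be sketched as follows; SHA-256 as the hash and the byte layout of the fingerprint are illustrative assumptions:

```python
import hashlib

def presented_domain_id(usb_fingerprint: bytes) -> str:
    """Derive a temporary, read-only domain ID from a USB fingerprint."""
    digest = hashlib.sha256(usb_fingerprint).hexdigest()
    return f"PRESENTED::{digest}"

# Same medium -> same temporary domain ID: classification is deterministic
# and needs no persistent state on the auth host.
fp = b"volume-uuid:1234|content-digest:abcd"  # hypothetical fingerprint bytes
assert presented_domain_id(fp) == presented_domain_id(fp)
assert presented_domain_id(fp).startswith("PRESENTED::")
```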
---

## 4. Virgin Authority Bootstrapping (First Ever Operation)

### 4.1 Virgin State Invariants

When the auth host is virgin:

* No root keys exist
* No trusted domains exist
* No policy is mutable
* Only one PEL program is allowed:
  **Authority Genesis**

---

### 4.2 Authority Genesis PEL Program

Allowed exactly once.

```yaml
pel_program_type: authority-genesis
inputs:
  - entropy_source
  - operator_assertion
outputs:
  - root_authority_key
  - policy_hash
  - genesis_snapshot
```

Effects:

* Root keys generated
* Policy hash sealed
* Authority transitions:

```
Virgin → Rooted
```

The receipt produced is **the birth certificate of the authority**.

---

## 5. Admission PEL Program (Foreign Domain Birth)

### 5.1 Admission Program Semantics

```yaml
pel_program_type: domain-admission
inputs:
  - presented_domain
  - identity_claims
outputs:
  - domain_id
  - domain_dam
  - courtesy_lease
```

Rules:

* Presented domain may be virgin
* Domain ID assigned by authority
* DAM signed and returned
* Domain becomes **Admitted**

---

## 6. Rescue PEL Program (Foreign Domain Capture)

```yaml
pel_program_type: rescue
inputs:
  - presented_domain
  - raw_artifacts
outputs:
  - encrypted_blocks
  - receipt
```

Rules:

* No identity required
* Outputs always encrypted
* Bound to a domain chosen by operator or authority policy
* Can be admitted later

---

## 7. Known Foreign Domains (Trust Pinning)

Once a domain is admitted:

* Its DAM hash is pinned locally
* Future USBs presenting that domain are recognized
* PEL may allow richer operations

```yaml
foreign_domains:
  - domain_id
  - dam_hash
  - trust_level: read-only | federated
```

---

## 8. Why This is Correct

### Philosophically

* Nothing is “known” until a receipt says so
* Virgin is a real state
* Authority is created, not assumed
* USB is a *domain*, not a filesystem

### Technically

* Keeps PEL central
* Avoids shell heuristics
* Enables deterministic replay
* Aligns with ASL semantics

---

## 9. Practical Implications for Implementation

You can implement **today**:

1. Domain state machine (virgin/rooted/operational)
2. USB classification via PEL pre-pass
3. Authority genesis as the first program
4. Admission as normal PEL execution
5. Rescue as encrypted-only PEL execution

No need for:

* Networking
* Long-term federation
* Complex identity systems

---

## 10. One-sentence summary

> **The auth host is born virgin, USBs are temporary domains, and only PEL programs are allowed to change reality.**

If you want next, I can:

* Formalize the **Authority Genesis PEL program**
* Define **Presented Domain fingerprinting**
* Draft **domain state transition diagrams**
* Show **how this plugs into ASL-HOST cleanly**

Just say where to go.
@ -1,203 +0,0 @@
Short answer: **yes, you *can* use SOPS offline — and yes, it actually fits your model well — but only in a very specific role.**
It should be a **transport + hygiene container**, *not* a trust anchor or semantic authority.

Below is the precise guidance, aligned with your architecture.

---

## 1. What SOPS is good at (in *your* system)

SOPS excels at:

* Encrypting small structured payloads (YAML / JSON / binary blobs)
* Using **offline keys** (age, PGP)
* Providing tamper-evidence during transport
* Being human-operable on an air-gapped Debian machine

That maps nicely to:

> **“Move sensitive authority material safely from the offline signer to the online ASL domain.”**

---

## 2. What SOPS must *not* be used for

SOPS must **not** be used as:

* A trust model
* A certification system
* An authority definition
* A replacement for DAM / certificates
* A verifier of correctness

In your system:

* **Signatures establish truth**
* **SOPS only protects envelopes**

This distinction is critical.

---

## 3. Correct placement of SOPS in the stack

Think of SOPS as living **outside** ASL semantics:

```
[ Offline Debian ORA ]
        |
        | (SOPS-encrypted container)
        v
[ Transport medium: USB / SCP / email ]
        |
        v
[ Online ASL node ]
        |
        | (decrypt)
        v
[ ASL artifact import + signature verification ]
```

Once decrypted:

* SOPS is **discarded**
* Only hashes + signatures matter

---

## 4. What should go *inside* SOPS containers

### Good candidates ✅

* AuthorityCertificate artifacts (binary or canonical form)
* DAM source manifests (pre-hash)
* Policy documents (pre-hash)
* Key material *temporarily* (if absolutely required)
* Signing requests awaiting approval

### Bad candidates ❌

* ASL blocks
* Snapshots
* PERs
* TGK edges
* Anything whose *meaning* depends on encryption

---

## 5. Recommended SOPS payload structure

Use SOPS as a **sealed courier envelope**:

```yaml
kind: asl-authority-envelope
version: 1

payload:
  type: authority-certificate
  domain_id: example.org/build
  policy_hash: sha256:...
  artifact_bytes: |
    BASE64(AuthorityCertificate canonical bytes)

metadata:
  issued_by: offline-root
  comment: "Initial domain bootstrap"
```

* Encrypt with **age** (simplest, modern, offline-friendly)
* No need for KMS
* No need for cloud hooks

---

## 6. Canonical rule (important)

> **The SOPS container is never hashed, never signed, and never referenced by ASL.**

Only the **decrypted payload bytes** are:

* Hashed
* Signed
* Stored as artifacts
* Snapshot-pinned
* Federated

This preserves determinism.
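A small sketch of the rule, assuming SHA-256 for artifact identity; `mock_encrypt` stands in for SOPS/age only to show that envelope bytes are unstable while payload identity is not:

```python
import hashlib
import os

def artifact_id(payload: bytes) -> str:
    """ASL identity is derived from decrypted payload bytes only."""
    return "sha256:" + hashlib.sha256(payload).hexdigest()

def mock_encrypt(payload: bytes) -> bytes:
    # Stand-in for SOPS/age: real encryption is randomized, so container
    # bytes differ on every run even for identical payloads.
    return os.urandom(16) + payload

payload = b"AuthorityCertificate canonical bytes"
container_a = mock_encrypt(payload)
container_b = mock_encrypt(payload)

assert container_a != container_b                    # envelopes are unstable
assert artifact_id(payload) == artifact_id(payload)  # payload identity is stable
```

Hashing the container would bake that instability into ASL history; hashing the payload keeps replay deterministic.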
---

## 7. Offline Debian workflow (clean and safe)

### On offline machine

1. Generate or review authority material
2. Produce canonical bytes
3. Sign with root key
4. Place result in SOPS envelope
5. Encrypt with age recipients
6. Export file

### On online machine

1. Decrypt SOPS envelope
2. Verify signature
3. Hash payload
4. Import as ASL artifact
5. Reference from DAM / snapshot
6. Forget SOPS ever existed

---

## 8. Why SOPS is a *good* fit here

| Requirement            | SOPS              |
| ---------------------- | ----------------- |
| Offline                | ✔                 |
| Human-auditable        | ✔                 |
| Minimal tooling        | ✔                 |
| Deterministic payloads | ✔ (after decrypt) |
| Transport security     | ✔                 |
| Authority semantics    | ✘ (by design)     |

That last line is a **feature**, not a bug.

---

## 9. Alternative (why you probably don’t need more)

You *don’t* need:

* Vault
* HSM integration
* X.509
* TPM attestation
* Live secret management

Because your **authority is historical and snapshot-bound**, not session-based.

---

## 10. Final recommendation (clear and direct)

✔ **Yes, use SOPS offline**
✔ Use it as a **sealed delivery container**
✔ Keep **all meaning outside** SOPS
✘ Do not let SOPS define trust
✘ Do not hash or sign SOPS files

---

## One-sentence takeaway

> **SOPS is an excellent armored courier for authority artifacts — but the moment it defines meaning or trust, it’s being misused.**

If you want next, we can:

* Define an exact **age key strategy** for ORA
* Specify **how many offline roots to trust**
* Model **temporary rescue-node authority via SOPS**
* Show a **bootstrap-from-zero ceremony**

Just say where to go next.
@ -1,73 +0,0 @@
Yes — you can absolutely do most of this inside a **Debian-based container** (or even on Linux Mint, which is Ubuntu-based), with some caveats:

---

### 1️⃣ What works in a container

* **Debootstrap / building the ISO filesystem** – you can create the `chroot` and populate it inside a Debian container.
* **Copying your tools** (`asl-auth-host`, `asl-rescue`, helpers) into the ISO root.
* **Overlay scripts and directory structure** – fully container-compatible.
* **Generating the ISO** using `genisoimage` or `xorriso` – works fine in a container.

---

### 2️⃣ What **won’t fully work** in a container

* **ZFS pool creation** (`zpool create`) – you need **kernel support** for ZFS, and containers usually don’t have full access to `/dev` to create loopback-backed pools.

  Workarounds:

  * Use a **loopback file inside the container** and mount ZFS on the host after building.
  * Or skip actual pool creation, just **pre-create the dataset folder structure**, and mount ZFS later on the real machine.

* **Boot testing** – you can build the ISO in a container, but to actually boot and test, you’ll need a VM or real hardware.

---

### 3️⃣ Suggested container workflow

```text
[Host Linux Mint]
       |
       v
[Debian container] --> run build_asl_auth_host_iso.sh
       |
       v
[Output ISO] --> copy to VM or USB to test boot
```

Steps:

1. Run a Debian container:

```bash
docker run -it --name asl-build -v $PWD/work:/work debian:bullseye bash
```

2. Inside the container, install required packages:

```bash
apt update
apt install -y debootstrap genisoimage squashfs-tools xorriso
```

3. Copy your scripts and binaries into the container (`/work` folder).

4. Run the ISO build script (`build_asl_auth_host_iso.sh`).

5. Copy the generated ISO back to your host for boot testing.

---

✅ **Bottom line:**

* You **can** build the ISO entirely in a container.
* For **ZFS pools or boot testing**, you need either host privileges or a VM for full functionality.
* On Linux Mint, the container approach keeps your host clean and reproducible.

---

If you want, I can **rewrite the build script specifically for container-friendly execution**, skipping ZFS creation but keeping the dataset folders so it’s ready to mount ZFS later. This is the smoothest workflow for Linux Mint.

Do you want me to do that?
@ -1,358 +0,0 @@
You’re right to call this out 👍
What I gave you so far had the **semantic behavior**, but it stopped *just short* of the thing that actually makes it operationally usable: **explicit snapshot identity and log positions**.

Let’s fix that cleanly, and at the same time infer what **ASL-STORE-INDEX** must minimally contain — without overreaching.

---

## 1️⃣ What ASL-STORE-INDEX minimally is (inferred)

From your spec, ASL-STORE-INDEX must define:

### A. Stable snapshot identity

So you can say:

> “This index state corresponds to snapshot X”

That implies:

* SnapshotID (opaque, comparable)
* Snapshot is immutable
* Snapshot can be referenced later

### B. Log position

So you can say:

> “Replay log entries [0…N)”

That implies:

* Total ordering
* Stable positions
* Prefix replay

### C. Coordination rule

So you can say:

> “Snapshot S + log prefix L defines CURRENT”

That’s it.
No storage layout, no IO, no performance policy.

---

## 2️⃣ What was missing before

Previously:

* Snapshot existed ❌ **but had no identity**
* Log existed ❌ **but had no positions**
* CURRENT existed ❌ **but couldn’t be referenced**

So you couldn’t:

* checkpoint
* replicate
* resume
* federate
* or prove determinism

Let’s add exactly what’s needed — nothing more.

---

## 3️⃣ Updated semantic model

We make **three things explicit**:

```text
SnapshotID  = opaque identifier
LogPosition = integer (monotonic)
IndexState  = (SnapshotID, LogPosition)
```

And the invariant becomes:

```
Index(SnapshotID, LogPosition) =
    Snapshot[SnapshotID] + replay(log[0:LogPosition])
```
---

## 4️⃣ Updated implementation (drop-in replacement)

### `content_index.py` (snapshot + log positions)

```python
from __future__ import annotations
from dataclasses import dataclass
from typing import Dict, Optional, List, Iterable
from enum import Enum, auto
import uuid


# ---------------------------------------------------------------------
# Core data types
# ---------------------------------------------------------------------

SnapshotID = str
LogPosition = int


@dataclass(frozen=True)
class ArtifactLocation:
    block_id: str
    offset: int
    length: int


class LogOp(Enum):
    PUT = auto()
    TOMBSTONE = auto()


@dataclass(frozen=True)
class IndexLogEntry:
    position: LogPosition
    op: LogOp
    artifact_key: str
    location: Optional[ArtifactLocation] = None


# ---------------------------------------------------------------------
# Snapshot
# ---------------------------------------------------------------------

class ContentIndexSnapshot:
    """
    Immutable snapshot of the content index.
    """
    def __init__(
        self,
        snapshot_id: SnapshotID,
        mapping: Dict[str, ArtifactLocation],
        log_position: LogPosition,
    ):
        self.snapshot_id = snapshot_id
        self.log_position = log_position
        self._mapping = dict(mapping)

    def get(self, key: str) -> Optional[ArtifactLocation]:
        return self._mapping.get(key)

    def items(self):
        return self._mapping.items()
```
---

### Append-only log with positions

```python
class ContentIndexLog:
    """
    Append-only, totally ordered log.
    """
    def __init__(self, start_position: LogPosition = 0):
        self._entries: List[IndexLogEntry] = []
        self._next_position = start_position

    def append_put(self, key: str, loc: ArtifactLocation) -> LogPosition:
        pos = self._next_position
        self._next_position += 1
        self._entries.append(IndexLogEntry(
            position=pos,
            op=LogOp.PUT,
            artifact_key=key,
            location=loc,
        ))
        return pos

    def append_tombstone(self, key: str) -> LogPosition:
        pos = self._next_position
        self._next_position += 1
        self._entries.append(IndexLogEntry(
            position=pos,
            op=LogOp.TOMBSTONE,
            artifact_key=key,
        ))
        return pos

    def entries(self) -> Iterable[IndexLogEntry]:
        return self._entries

    def upto(self, position: LogPosition) -> Iterable[IndexLogEntry]:
        return (e for e in self._entries if e.position < position)

    @property
    def tail_position(self) -> LogPosition:
        return self._next_position
```
---

### ContentIndex with explicit CURRENT

```python
class ContentIndex:
    """
    ASL-CORE-INDEX with snapshot IDs and log positions.
    """

    def __init__(
        self,
        snapshot: Optional[ContentIndexSnapshot] = None,
        log: Optional[ContentIndexLog] = None,
    ):
        self._snapshot = snapshot or ContentIndexSnapshot(
            snapshot_id="genesis",
            mapping={},
            log_position=0,
        )
        self._log = log or ContentIndexLog(
            start_position=self._snapshot.log_position
        )

    # -----------------------------------------------------------------
    # Lookup at CURRENT
    # -----------------------------------------------------------------

    def lookup(self, key: str) -> Optional[ArtifactLocation]:
        # Shadowing: reverse replay up to CURRENT
        for entry in reversed(list(self._log.entries())):
            if entry.artifact_key != key:
                continue
            if entry.op == LogOp.TOMBSTONE:
                return None
            return entry.location

        return self._snapshot.get(key)

    # -----------------------------------------------------------------
    # Append-only mutation
    # -----------------------------------------------------------------

    def put(self, key: str, loc: ArtifactLocation) -> LogPosition:
        return self._log.append_put(key, loc)

    def tombstone(self, key: str) -> LogPosition:
        return self._log.append_tombstone(key)

    # -----------------------------------------------------------------
    # Snapshotting
    # -----------------------------------------------------------------

    def snapshot(self) -> ContentIndexSnapshot:
        """
        Create a new immutable snapshot representing CURRENT.
        """
        materialized: Dict[str, ArtifactLocation] = dict(self._snapshot.items())

        for entry in self._log.entries():
            if entry.op == LogOp.PUT:
                materialized[entry.artifact_key] = entry.location
            elif entry.op == LogOp.TOMBSTONE:
                materialized.pop(entry.artifact_key, None)

        snapshot_id = str(uuid.uuid4())
        snapshot = ContentIndexSnapshot(
            snapshot_id=snapshot_id,
            mapping=materialized,
            log_position=self._log.tail_position,
        )
        return snapshot

    def install_snapshot(self, snapshot: ContentIndexSnapshot):
        """
        Install a snapshot and reset log (external coordination assumed).
        """
        self._snapshot = snapshot
        self._log = ContentIndexLog(start_position=snapshot.log_position)

    # -----------------------------------------------------------------
    # Observability
    # -----------------------------------------------------------------

    @property
    def current_snapshot_id(self) -> SnapshotID:
        return self._snapshot.snapshot_id

    @property
    def current_log_position(self) -> LogPosition:
        return self._log.tail_position
```
---

## 5️⃣ Why this is enough (and not too much)

You now have:

* ✅ Snapshot identity
* ✅ Log positions
* ✅ Deterministic replay
* ✅ Explicit CURRENT
* ✅ Federation-ready state vectors
* ✅ Crash recovery anchor
* ✅ Replication boundary

And still:

* ❌ No IO assumptions
* ❌ No block allocation rules
* ❌ No GC
* ❌ No concurrency model

That’s *exactly* what ASL-STORE-INDEX should be.

---

## 6️⃣ How this unlocks higher layers

With `(SnapshotID, LogPosition)` you can now:

* Compare index states across nodes
* Prove two systems are in sync
* Incrementally replicate logs
* Build federated overlays
* Anchor provenance certificates
* Bind DAG derivations to storage state
* Feed stable views into ML / analytics

This is the **bridge** between your infrastructure and everything above it.

---

## 7️⃣ Sanity check (design smell test)

Ask yourself:

> “Can I reconstruct CURRENT from snapshot + log prefix?”

Yes — deterministically.

> “Can I drop the cache and still recover?”

Yes.

> “Can classical bytes ignore all of this?”

Yes — fast path still exists.

That means the design is sound.
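The first sanity-check question can be exercised with a toy model (names here are local to the sketch, independent of the `content_index.py` classes):

```python
# Toy model of the invariant: CURRENT == snapshot mapping + replayed log prefix.
PUT, TOMBSTONE = "put", "tombstone"

def replay(snapshot_mapping, log, position):
    state = dict(snapshot_mapping)
    for op, key, loc in log[:position]:
        if op == PUT:
            state[key] = loc
        else:  # TOMBSTONE shadows earlier puts
            state.pop(key, None)
    return state

snapshot = {"a": ("blk1", 0, 10)}
log = [
    (PUT, "b", ("blk2", 0, 5)),
    (TOMBSTONE, "a", None),
    (PUT, "b", ("blk3", 0, 7)),
]

# CURRENT is fully determined by (snapshot, log prefix): replay is deterministic.
assert replay(snapshot, log, 3) == {"b": ("blk3", 0, 7)}
# An earlier log position yields an earlier, equally reproducible state.
assert replay(snapshot, log, 1) == {"a": ("blk1", 0, 10), "b": ("blk2", 0, 5)}
```

Dropping any derived cache is safe precisely because this function of `(snapshot, position)` can always be re-run.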
---

If you want, next we can:

* define **ASL-STORE put/get contract**
* add **federation merge semantics**
* define **provenance certificates**
* or show **how LLMs consume snapshot-relative DAGs**

You’re building something real here.
@ -1,363 +0,0 @@

# artifact.py (cache-enabled update)

from __future__ import annotations
from typing import Callable, Optional, Dict, List, Any
import numpy as np

from canonical import canonicalize_sparse
from hashers import SHA256Hash, HashStrategy
from sid import compute_sid
from sid_hashers import SHA256SIDHash, StructureHashStrategy

# ---------------------------------------------------------------------
# Defaults
# ---------------------------------------------------------------------
DEFAULT_CONTENT_HASHER: HashStrategy = SHA256Hash()
DEFAULT_SID_HASHER: StructureHashStrategy = SHA256SIDHash()

# ---------------------------------------------------------------------
# Redundant cache
# ---------------------------------------------------------------------
class ArtifactCache:
    """Redundant SID -> CID cache."""

    def __init__(self):
        self._cache: Dict[str, str] = {}

    def get(self, sid: str) -> Optional[str]:
        return self._cache.get(sid)

    def put(self, sid: str, cid: str) -> None:
        self._cache[sid] = cid

    def has(self, sid: str) -> bool:
        return sid in self._cache

# ---------------------------------------------------------------------
# Artifact class
# ---------------------------------------------------------------------
class Artifact:
    """
    Lazy, DAG-based artifact.

    Invariants:
    - SID is always available
    - CID is computed lazily, on demand
    - Structure (SID) and content (CID) are orthogonal
    """

    def __init__(
        self,
        *,
        op: str,
        params: Dict[str, Any],
        children: List["Artifact"],
        sid: str,
        materializer: Optional[Callable[["Artifact", ArtifactCache], str]] = None,
        content_hasher: HashStrategy = DEFAULT_CONTENT_HASHER,
    ):
        self.op = op
        self.params = params
        self.children = children
        self.sid = sid                          # structural identity
        self._cid: Optional[str] = None         # semantic identity (lazy)
        self._materializer = materializer
        self._content_hasher = content_hasher

    # -----------------------------------------------------------------
    # Lazy CID access (requires cache)
    # -----------------------------------------------------------------
    def cid(self, cache: ArtifactCache) -> str:
        if self._cid is not None:
            return self._cid
        if self._materializer is None:
            raise RuntimeError(
                f"Artifact with SID {self.sid} is not materializable"
            )
        self._cid = self._materializer(self, cache)
        return self._cid

    @property
    def is_materialized(self) -> bool:
        return self._cid is not None

    def __repr__(self) -> str:
        return (
            f"Artifact(op={self.op!r}, "
            f"sid={self.sid[:8]}…, "
            f"cid={'set' if self._cid else 'lazy'})"
        )

# ---------------------------------------------------------------------
# Materialization helpers (cache-aware)
# ---------------------------------------------------------------------
def _compute_cid_from_sparse(
    indices: np.ndarray, values: np.ndarray, hasher: HashStrategy
) -> str:
    ci, cv = canonicalize_sparse(indices, values)
    return hasher.hash_sparse(ci, cv)


def _materialize_tensor_lazy(
    left: Artifact, right: Artifact, artifact: Artifact, cache: ArtifactCache
) -> str:
    """
    Lazily materialize a tensor product by combining children indices/values.
    Avoids building full dense arrays until necessary.
    """
    # Materialize children first (results land in the cache)
    left.cid(cache)
    right.cid(cache)

    left_indices, left_values = left.params["_materialized"]
    right_indices, right_values = right.params["_materialized"]

    shift = artifact.params.get("right_bits")
    if shift is None:
        raise RuntimeError("tensor right_bits not set")

    # Lazy generator for new indices and values
    def kron_sparse_gen():
        for i, vi in zip(left_indices, left_values):
            for j, vj in zip(right_indices, right_values):
                yield (i << shift) | j, vi * vj

    # Materialize as arrays only when the CID is computed
    if left_indices.size > 0 and right_indices.size > 0:
        idx_list, val_list = zip(*kron_sparse_gen())
    else:
        idx_list, val_list = (), ()
    new_indices = np.array(idx_list, dtype=np.int64)
    new_values = np.array(val_list, dtype=np.complex128)

    artifact.params["_materialized"] = (new_indices, new_values)

    cid = _compute_cid_from_sparse(new_indices, new_values, artifact._content_hasher)
    artifact._cid = cid
    cache.put(artifact.sid, cid)
    return cid


def materialize_artifact(artifact: Artifact, cache: ArtifactCache) -> str:
    cached = cache.get(artifact.sid)
    if cached is not None:
        artifact._cid = cached
        return cached

    op = artifact.op

    if op == "leaf.bits":
        indices, values = artifact.params["_materialized"]
        cid = _compute_cid_from_sparse(indices, values, artifact._content_hasher)
    elif op == "leaf.quantum":
        return _materialize_quantum_leaf(artifact, cache)
    elif op == "tensor":
        left, right = artifact.children
        return _materialize_tensor_lazy(left, right, artifact, cache)
    else:
        raise NotImplementedError(f"Materialization not implemented for op={op!r}")

    artifact._cid = cid
    cache.put(artifact.sid, cid)
    return cid

# ---------------------------------------------------------------------
# Utility: compute bit-width
# ---------------------------------------------------------------------
def bit_width(artifact: Artifact) -> int:
    """
    Compute the number of bits represented by an artifact.
    """
    if artifact.op == "leaf.bits":
        indices, _ = artifact.params["_materialized"]
        max_index = int(indices.max()) if len(indices) > 0 else 0  # cast to Python int
        return max(1, max_index.bit_length())
    elif artifact.op == "tensor":
        return sum(bit_width(c) for c in artifact.children)
    else:
        raise NotImplementedError(f"bit_width not implemented for {artifact.op}")

# ---------------------------------------------------------------------
# Factory functions
# ---------------------------------------------------------------------
def bits(
    bitstring: str,
    *,
    sid_hasher: StructureHashStrategy = DEFAULT_SID_HASHER,
    content_hasher: HashStrategy = DEFAULT_CONTENT_HASHER,
) -> Artifact:
    index = int(bitstring, 2)

    indices = np.array([index], dtype=np.int64)
    values = np.array([1.0], dtype=np.complex128)

    sid = compute_sid(
        op="leaf.bits",
        child_sids=[],
        params={"bits": bitstring},
        hasher=sid_hasher,
    )

    return Artifact(
        op="leaf.bits",
        params={"_materialized": (indices, values)},
        children=[],
        sid=sid,
        materializer=materialize_artifact,
        content_hasher=content_hasher,
    )


def tensor(
    left: Artifact,
    right: Artifact,
    *,
    sid_hasher: StructureHashStrategy = DEFAULT_SID_HASHER,
) -> Artifact:
    shift = bit_width(right)
    sid = compute_sid(
        op="tensor",
        child_sids=[left.sid, right.sid],
        params={},
        hasher=sid_hasher,
        ordered_children=True,
    )
    return Artifact(
        op="tensor",
        params={"right_bits": shift},
        children=[left, right],
        sid=sid,
        materializer=materialize_artifact,
        content_hasher=left._content_hasher,
    )

# ---------------------------------------------------------------------
# DAG utilities
# ---------------------------------------------------------------------
def dag_node_count(a: Artifact, seen=None) -> int:
    if seen is None:
        seen = set()
    if a.sid in seen:
        return 0
    seen.add(a.sid)
    return 1 + sum(dag_node_count(c, seen) for c in a.children)


def dag_depth(a: Artifact) -> int:
    if not a.children:
        return 1
    return 1 + max(dag_depth(c) for c in a.children)

# ---------------------------------------------------------------------
# Quantum leaf factory
# ---------------------------------------------------------------------
def quantum_leaf(
    amplitudes: np.ndarray,
    *,
    sid: Optional[str] = None,
    sid_hasher: StructureHashStrategy = DEFAULT_SID_HASHER,
    content_hasher: HashStrategy = DEFAULT_CONTENT_HASHER,
) -> Artifact:
    """
    Create a lazy quantum leaf.
    amplitudes: 1D numpy array of complex amplitudes
    """
    amplitudes = np.asarray(amplitudes, dtype=np.complex128)
    n = int(np.log2(len(amplitudes)))
    if 2 ** n != len(amplitudes):
        raise ValueError("Length of amplitudes must be a power of 2")

    # Default SID: computed from amplitudes (structural identity)
    if sid is None:
        sid = compute_sid(
            op="leaf.quantum",
            child_sids=[],
            params={"amplitudes": amplitudes.tolist()},
            hasher=sid_hasher,
        )

    # Lazy _materialized: store amplitudes but not indices yet;
    # indices will be generated on materialization
    params = {"_amplitudes": amplitudes}

    return Artifact(
        op="leaf.quantum",
        params=params,
        children=[],
        sid=sid,
        materializer=_materialize_quantum_leaf,
        content_hasher=content_hasher,
    )

# ---------------------------------------------------------------------
# Materializer for quantum leaves
# ---------------------------------------------------------------------
def _materialize_quantum_leaf(artifact: Artifact, cache: ArtifactCache) -> str:
    """
    Convert a quantum leaf to a full sparse representation (indices, values)
    and compute its CID.
    """
    # Check cache first
    cached = cache.get(artifact.sid)
    if cached is not None:
        artifact._cid = cached
        return cached

    amplitudes = artifact.params["_amplitudes"]
    dim = len(amplitudes)
    indices = np.arange(dim, dtype=np.int64)
    values = amplitudes.copy()
    artifact.params["_materialized"] = (indices, values)

    cid = _compute_cid_from_sparse(indices, values, artifact._content_hasher)
    artifact._cid = cid
    cache.put(artifact.sid, cid)
    return cid

# ---------------------------------------------------------------------
# DAG helper: recursively tensor a list of artifacts (cache-aware)
# ---------------------------------------------------------------------
def tensor_all(
    artifacts: List[Artifact],
    sid_hasher: Optional[StructureHashStrategy] = None,
) -> Artifact:
    """
    Recursively tensors a list of artifacts into a balanced binary DAG.
    Lazy quantum leaves are supported automatically.
    """
    if not artifacts:
        raise ValueError("tensor_all requires at least one artifact")
    if len(artifacts) == 1:
        return artifacts[0]
    mid = len(artifacts) // 2
    left = tensor_all(artifacts[:mid], sid_hasher)
    right = tensor_all(artifacts[mid:], sid_hasher)
    return tensor(left, right, sid_hasher=sid_hasher or DEFAULT_SID_HASHER)

# ---------------------------------------------------------------------
# DAG materialization with cache
# ---------------------------------------------------------------------
def materialize_dag(root: Artifact, cache: Optional[ArtifactCache] = None) -> str:
    """
    Recursively materialize a DAG starting from `root`, filling the cache.
    Returns the root CID.
    """
    if cache is None:
        cache = ArtifactCache()
    return root.cid(cache)

# ---------------------------------------------------------------------
# DAG metrics (cache-aware)
# ---------------------------------------------------------------------
def dag_node_count_cached(a: Artifact, cache: Optional[ArtifactCache] = None) -> int:
    """
    Counts nodes reachable from `a`, materializing and caching them.
    """
    if cache is None:
        cache = ArtifactCache()
    seen = set()

    def _count(node: Artifact) -> int:
        if node.sid in seen:
            return 0
        seen.add(node.sid)
        # Materialize node with cache
        node.cid(cache)
        return 1 + sum(_count(c) for c in node.children)

    return _count(a)


def dag_depth_cached(a: Artifact, cache: Optional[ArtifactCache] = None) -> int:
    """
    Computes depth of the DAG from `a`, ensuring all nodes are materialized in the cache.
    """
    if cache is None:
        cache = ArtifactCache()

    def _depth(node: Artifact) -> int:
        node.cid(cache)
        if not node.children:
            return 1
        return 1 + max(_depth(c) for c in node.children)

    return _depth(a)

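The SID/CID split that `artifact.py` implements can be illustrated with a self-contained toy. The `sid_of`/`cid_of` helpers below are hashlib-based stand-ins for the real `sid` and `hashers` modules, which are not reproduced here:

```python
import hashlib

def sid_of(op, child_sids, params):
    # Structural identity: depends only on the DAG shape, never on bytes.
    payload = repr((op, tuple(child_sids), tuple(sorted(params.items()))))
    return hashlib.sha256(payload.encode()).hexdigest()

def cid_of(data: bytes) -> str:
    # Content identity: depends only on the materialized bytes.
    return hashlib.sha256(data).hexdigest()

leaf_a = sid_of("leaf.bits", [], {"bits": "01"})
leaf_b = sid_of("leaf.bits", [], {"bits": "10"})
root = sid_of("tensor", [leaf_a, leaf_b], {})

# Same structure -> same SID, before any content exists.
assert root == sid_of("tensor", [leaf_a, leaf_b], {})
# Ordered children: swapping operands changes the structure.
assert root != sid_of("tensor", [leaf_b, leaf_a], {})
```

This is why the cache can key CIDs by SID: the structural key is available immediately, while the content hash can be filled in lazily.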
@ -1,140 +0,0 @@

Perfect. Let’s make this concrete. Since we’re still in the “planning/spec” phase, we can outline **placeholders and minimal skeleton scripts/binaries** so the ISO is bootable and ready for you to drop in real compiled binaries later.

We’ll treat `asl-auth-host` and `asl-rescue` as minimal executables (could be compiled C programs or shell wrappers for now), plus helper scripts.

---

### 1. `asl-auth-host` (skeleton)

```bash
#!/bin/bash
# /tools/asl-auth-host
# Minimal executable placeholder for domain authority host

echo "[ASL-AUTH-HOST] Starting offline domain authority host..."
echo "[*] Current timestamp: $(date)"
echo "[*] Domains available:"
ls -1 /domains || echo "  No domains found."

# Placeholder for DAM signing
echo "[*] Use ./sign_dam.sh <domain_id> to sign Domain Admission Manifest"

# Keep running for interactive session
exec /bin/bash
```

> **Permissions:** `chmod +x /tools/asl-auth-host`

---

### 2. `asl-rescue` (skeleton)

```bash
#!/bin/bash
# /tools/asl-rescue
# Minimal placeholder for rescue operations

echo "[ASL-RESCUE] Starting rescue mode..."
echo "[*] Scanning local filesystem..."
ls -l /mnt || echo "  No mount points found."

# Placeholder: add artifacts to local store
echo "[*] You can add files/artifacts using ./add_artifact.sh <path>"

# Interactive shell for manual rescue
exec /bin/bash
```

> **Permissions:** `chmod +x /tools/asl-rescue`

---

### 3. `sign_dam.sh` (helper)

```bash
#!/bin/bash
# /tools/sign_dam.sh
# Signs a Domain Admission Manifest (DAM) offline

DAM_FILE="$1"
if [ -z "$DAM_FILE" ]; then
    echo "Usage: $0 <DAM file>"
    exit 1
fi

KEY_FILE="/keys/private_authority.key"
OUT_FILE="${DAM_FILE}.signed"

echo "[*] Signing DAM $DAM_FILE using offline key $KEY_FILE..."
# placeholder signature
echo "Signed-DAM: $(sha256sum "$DAM_FILE")" > "$OUT_FILE"

echo "[*] Signed DAM stored at $OUT_FILE"
```

> **Permissions:** `chmod +x /tools/sign_dam.sh`
> **Note:** The `KEY_FILE` should be preloaded via the SOPS bundle or offline disk.

---
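A matching verifier makes the placeholder scheme testable end to end. The `verify_dam` helper below is a hypothetical companion to `sign_dam.sh` — it re-derives the same `Signed-DAM:` sha256 line and compares, which checks integrity only, not a real signature:

```shell
#!/bin/sh
# Hypothetical counterpart to sign_dam.sh: re-hash the DAM and compare
# against the placeholder "Signed-DAM:" line in <file>.signed.
verify_dam() {
    dam="$1"
    expected="Signed-DAM: $(sha256sum "$dam")"
    actual="$(cat "${dam}.signed")"
    [ "$expected" = "$actual" ]
}

# Demo: sign then verify a scratch DAM using the same placeholder scheme
tmp="$(mktemp)"
echo "domain: personal" > "$tmp"
echo "Signed-DAM: $(sha256sum "$tmp")" > "${tmp}.signed"
verify_dam "$tmp" && echo "[*] DAM signature OK"
rm -f "$tmp" "${tmp}.signed"
```

Once real signing lands, only the two `Signed-DAM:` lines need to change (e.g. to an `age`/GPG detached signature), keeping the scripts' interfaces stable.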

### 4. `add_artifact.sh` (helper)

```bash
#!/bin/bash
# /tools/add_artifact.sh
# Add a file to the local ASL store (skeleton)

FILE="$1"
if [ -z "$FILE" ] || [ ! -f "$FILE" ]; then
    echo "Usage: $0 <file>"
    exit 1
fi

STORE_DIR="/domains/personal/artifacts"
mkdir -p "$STORE_DIR"

cp "$FILE" "$STORE_DIR/"
echo "[*] Added artifact $(basename "$FILE") to $STORE_DIR"

# Optionally record in log (simplified placeholder)
echo "$(date) ADD $(basename "$FILE")" >> "$STORE_DIR/store.log"
```

> **Permissions:** `chmod +x /tools/add_artifact.sh`

---

### 5. Directory structure on ISO

```
/
├─ boot/
│   └─ boot_overlay.sh
├─ tools/
│   ├─ asl-auth-host
│   ├─ asl-rescue
│   ├─ sign_dam.sh
│   └─ add_artifact.sh
├─ domains/
│   └─ personal/        # pre-seeded or empty personal domain
├─ sops-bundles/
├─ env-claims/
└─ upper/               # overlay writable
```

---

### ✅ Notes for Next Steps

1. **Compilation**: Replace placeholders with actual compiled binaries later.
2. **Keys**: Include offline private keys in `/keys` (SOPS encrypted, or temporarily unencrypted during the ISO build).
3. **Overlay**: Use `boot_overlay.sh` to mount `/upper` over `/` for a writable session.
4. **Persistence**: The ZFS pool (`/domains`) can store all domain artifacts, DAMs, and logs.
5. **Interactivity**: Users can run `asl-rescue` for file rescue and `asl-auth-host` for offline DAM signing.

---

If you want, I can **update your build script** to **copy these skeleton scripts automatically** into `/tools`, set permissions, and integrate them with the overlay and ZFS setup. This would make the ISO **bootable and immediately usable** for offline testing.

Do you want me to do that next?

@ -1,99 +0,0 @@

# ASL-CORE Patch: Block Semantics

---

## 1. Purpose

Introduce **blocks** into the ASL-CORE semantic model to provide a minimal foundation for:

* Artifact → block → bytes mapping
* Immutability guarantees
* Snapshot-safe references

The patch is **minimal**, leaving lifecycle, sealing, retention, and GC to ASL-STORE-INDEX.

---

## 2. New Core Concepts

### 2.1 Block

* **Definition:** A block is an **atomic, immutable storage unit** containing a sequence of bytes.
* **Identifier:** `BlockID` — opaque and unique across the system.
* **Properties:**

  1. Contents are immutable once created (semantic guarantee).
  2. Blocks can be referenced by one or more artifacts.
  3. Blocks are existential; their layout, size, and packing are **implementation concerns**.

* **Notation:** `(BlockID, offset, length)` denotes a **byte slice** within a block.

  * Offset and length must refer to bytes **inside the block**.
  * Semantic operations may reference these slices but **cannot modify them**.

---

### 2.2 Artifact and Block Relationship

* Each **Artifact** in ASL-CORE can be fully or partially contained in one or more blocks.
* Semantic mapping:

```
ArtifactKey → {ArtifactLocation1, ArtifactLocation2, ...}
```

Where each `ArtifactLocation` is:

```
ArtifactLocation = (BlockID, offset, length)
```

* Guarantees:

  1. **Determinism:** Given the same ArtifactKey, the locations are always the same at the same snapshot.
  2. **Immutability:** The bytes addressed by an ArtifactLocation never change.
  3. **Snapshot consistency:** If an artifact is referenced by a snapshot, the bytes remain valid for the lifetime of that snapshot.

---

### 2.3 Block Visibility and Referencing

* **Blocks themselves** are not directly visible in ASL-CORE; they exist as **supporting storage units for artifacts**.
* **ArtifactKey references** to blocks must only point to **immutable bytes**.
* Operations on artifacts **cannot modify or relocate block bytes**; only new artifacts can be created referencing blocks.

---

### 2.4 Minimal Invariants

1. **Block Immutability:** Once a block exists semantically, its bytes never change.
2. **Artifact Integrity:** The ArtifactKey → ArtifactLocation mapping is stable and deterministic.
3. **Existence:** A referenced BlockID must exist in the system at the semantic level (physical existence is a store-level concern).
4. **Slice Validity:** `(offset, length)` is always within the bounds of the block.

---
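Invariants 3 and 4 amount to a simple validity check on locations. A minimal sketch, modeling `BlockID` as a string and block contents as bytes (both stand-ins for the opaque real types):

```python
def check_location(blocks, location):
    """Return True iff (BlockID, offset, length) is a valid slice.

    blocks: mapping BlockID -> immutable bytes
    location: (block_id, offset, length) tuple
    """
    block_id, offset, length = location
    if block_id not in blocks:                 # Invariant 3: existence
        return False
    data = blocks[block_id]
    return (                                   # Invariant 4: slice validity
        0 <= offset
        and length >= 0
        and offset + length <= len(data)
    )
```

Any index layer built on top can assert this before handing a location to a reader; a failed check means the index and store have diverged.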

### 2.5 Non-Goals of ASL-CORE Regarding Blocks

* **Sealing mechanics** – handled by ASL-STORE-INDEX
* **Retention / GC** – handled by ASL-STORE-INDEX
* **Snapshot pinning or lifecycle events** – store concern
* **Size class, packing, or performance optimizations** – store concern

---

## 3. Summary

This patch introduces:

* **BlockID**: opaque, unique identifier
* **Blocks**: immutable byte sequences, existential atoms
* **ArtifactLocation**: `(BlockID, offset, length)` primitive to map artifacts to blocks

It **does not define storage, GC, or lifecycle mechanics**, leaving these to ASL-STORE-INDEX.

**Semantic impact:**

* Artifacts can now reference storage in a stable, deterministic, and immutable way.
* Higher layers (index, store, encoding) can build on blocks for deterministic persistence, snapshot safety, and replay without modifying ASL-CORE semantics.

@ -1,157 +0,0 @@

# ASL-FEDERATION SPECIFICATION

---

## 1. Purpose

The Federation Specification defines the **multi-domain model** for ASL-based storage systems, including:

* Domains: logical separation of artifacts and snapshots
* Published vs internal state
* Cross-domain visibility rules
* Snapshot identity and consistency guarantees
* Integration with index, store, PEL, and provenance layers

It ensures **determinism, traceability, and reproducibility** across federated deployments.

---

## 2. Core Concepts

| Term                       | Definition |
| -------------------------- | ---------- |
| **Domain**                 | A logical namespace or administrative boundary for artifacts and snapshots. Each domain manages its own set of artifacts, blocks, and snapshots. |
| **Published state**        | Artifacts, blocks, and snapshots exposed outside the domain. |
| **Internal state**         | Artifacts, blocks, and snapshots restricted to a domain; not visible to other domains. |
| **Snapshot identity**      | Globally unique identifier for a snapshot within a domain; used to reconstruct CURRENT. |
| **Cross-domain reference** | An artifact in one domain referencing a published artifact from another domain. |

---

## 3. Domain Semantics

1. **Domain isolation**

   * Each domain has its own CAS/ASL storage and index layers.
   * Artifacts and blocks in internal state are **invisible outside the domain**.

2. **Published state**

   * Artifacts marked as published are **visible to other domains**.
   * Published artifacts must satisfy **full ASL-STORE-INDEX invariants**: deterministic, immutable, snapshot-safe.

3. **Cross-domain artifact references**

   * Only **published artifacts** may be referenced by other domains.
   * References are **read-only**; the referenced artifact cannot be modified in the original domain.
   * Indexed in the consuming domain as a standard `ArtifactKey → ArtifactLocation` entry.

4. **Federated snapshots**

   * Snapshots in each domain maintain **local visibility** for internal state.
   * Published snapshots may be **federated** to other domains to expose deterministic CURRENT state.

---

## 4. Snapshot Identity

* **Domain-local snapshot IDs** are unique per domain.
* **Federated snapshot IDs** combine domain ID + local snapshot ID.

  * Ensures **global uniqueness** across federation.

* **Snapshot references** may include cross-domain artifacts, but the mapping is **immutable and deterministic**.

---
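The domain-ID-plus-local-ID scheme can be sketched directly. The `domain:local` string encoding below is an assumption for illustration; the spec only requires that the pair be globally unique and losslessly recoverable:

```python
def federated_snapshot_id(domain_id: str, local_snapshot_id: str) -> str:
    """Combine domain ID + local snapshot ID into a globally unique ID."""
    if ":" in domain_id:
        raise ValueError("domain_id must not contain ':'")
    return f"{domain_id}:{local_snapshot_id}"

def split_federated_id(fed_id: str):
    """Recover (domain_id, local_snapshot_id) from a federated ID."""
    domain_id, local_id = fed_id.split(":", 1)
    return domain_id, local_id
```

Reserving the separator inside `domain_id` is what makes the split unambiguous even if local IDs themselves contain `:`.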

## 5. Visibility Rules

| Object                              | Internal Domain | Other Domains       |
| ----------------------------------- | --------------- | ------------------- |
| Internal artifact                   | visible         | hidden              |
| Published artifact                  | visible         | visible (read-only) |
| Internal snapshot                   | visible         | hidden              |
| Published snapshot                  | visible         | visible             |
| Block supporting published artifact | visible         | visible             |
| Block supporting internal artifact  | visible         | hidden              |

* **Index entries** follow the same visibility rules:

  * Only entries pointing to visible artifacts/blocks are visible in a domain’s CURRENT.
  * Determinism is guaranteed per domain’s view of CURRENT.

---

## 6. Cross-Domain Operations

1. **Import published artifacts**

   * A domain may import a published artifact from another domain.
   * The imported artifact is **treated as immutable**; its original domain cannot alter it.
   * Execution receipts may include imported artifacts as inputs.

2. **Export published artifacts**

   * Internal artifacts may be **promoted** to published state.
   * Requires sealing and pinning to a snapshot for determinism.
   * Once published, the artifact may be referenced by other domains.

3. **Federation log / synchronization**

   * Each domain maintains its **own append-only log**.
   * Published changes can be propagated to other domains via log replication.
   * Snapshot + log replay ensures deterministic reconstruction across domains.

---

## 7. Provenance & Traceability

* **Execution receipts** can include cross-domain references.

* **Trace graphs** preserve:

  * Original domain of artifacts
  * Snapshot ID in the original domain
  * Deterministic DAG execution per snapshot

* **Provenance guarantees**:

  1. Artifact immutability
  2. Deterministic execution reproducibility
  3. Traceable lineage across domains

---

## 8. Normative Invariants

1. **Determinism:** Reconstructing CURRENT in any domain yields the same artifact graph given the same snapshot + log.
2. **Immutability:** Published artifacts and snapshots cannot be modified.
3. **Domain isolation:** Internal artifacts are never exposed outside their domain.
4. **Federation safety:** Cross-domain references are read-only and preserve deterministic execution.
5. **Snapshot integrity:** Federated snapshots reference only published artifacts; replay reproduces CURRENT.

---

## 9. Integration with Existing Layers

| Layer                | Role in Federation |
| -------------------- | ------------------ |
| ASL-CORE             | Blocks and artifacts remain immutable; no change |
| ASL-CORE-INDEX       | Artifact → Block mapping is domain-local; published artifacts are indexed across domains |
| ASL-STORE-INDEX      | Sealing, retention, and snapshot pinning apply per domain; GC respects cross-domain references |
| ENC-ASL-CORE-INDEX   | Encoding of index entries may include domain and visibility flags for federation (`tier1/enc-asl-core-index.md`) |
| PEL                  | DAG execution may include imported artifacts; determinism guaranteed per domain snapshot |
| PEL-PROV / PEL-TRACE | Maintains provenance including cross-domain artifact lineage |

---

## 10. Summary

The Federation Specification formalizes:

* Domains and logical separation
* Published vs internal state
* Cross-domain artifact visibility and reference rules
* Snapshot identity and deterministic reconstruction across domains
* Integration with index, store, PEL, and provenance layers

It ensures **multi-domain determinism, traceability, and reproducibility** while leaving semantics and storage-layer policies unchanged.

@ -1,143 +0,0 @@
|
|||
# ASL-STORE-INDEX ADDENDUM: Small vs Large Block Handling
|
||||
|
||||
---
|
||||
|
||||
## 1. Purpose
|
||||
|
||||
This addendum defines **store-level policies for handling small and large blocks** in ASL-STORE-INDEX, covering:
|
||||
|
||||
* Packing strategies
|
||||
* Segment allocation rules
|
||||
* Addressing consistency
|
||||
* Determinism guarantees
|
||||
|
||||
It ensures **operational clarity** while keeping the **semantic model (ASL-CORE and ASL-CORE-INDEX) unchanged**.
|
||||
|
||||
---
|
||||
|
||||
## 2. Definitions
|
||||
|
||||
| Term | Meaning |
|
||||
| ----------------- | --------------------------------------------------------------------------------------------------- |
|
||||
| **Small block** | Block containing artifact bytes below a configurable threshold `T_small`. |
|
||||
| **Large block** | Block containing artifact bytes ≥ `T_small`. |
|
||||
| **Mixed segment** | A segment containing both small and large blocks (generally avoided). |
|
||||
| **Packing** | Strategy for combining multiple small artifacts into a single block. |
|
||||
| **BlockID** | Opaque, unique identifier for the block. Addressing rules are identical for small and large blocks. |
|
||||
|
||||
**Notes:**
|
||||
|
||||
* Small vs large classification is **store-level only**, transparent to ASL-CORE and index layers.
|
||||
* The **threshold `T_small`** is configurable per deployment.
|
||||
|
||||
---
|
||||
|
||||
## 3. Packing Rules
|
||||
|
||||
1. **Small blocks may be packed together** to reduce storage overhead and improve I/O efficiency.
|
||||
|
||||
* Multiple small artifacts can reside in a single physical block.
|
||||
* Each artifact is mapped in the index to a distinct `(BlockID, offset, length)` within the packed block.
|
||||
|
||||
2. **Large blocks are never packed with other artifacts**.
|
||||
|
||||
* Each large artifact resides in its own block.
|
||||
* This ensures sequential access efficiency and avoids fragmentation.
|
||||
|
||||
3. **Mixed segments** are **permitted only if necessary**, but discouraged.
|
||||
|
||||
* The store may emit a warning or logging when mixing occurs.
|
||||
* Indexing and addressing remain consistent; artifacts retain deterministic `(BlockID, offset, length)` mapping.
|
||||
|
||||
---
|
||||
|
||||
## 4. Segment Allocation Rules

1. **Small blocks:**

   * Allocated into segments optimized for packing efficiency.
   * Segment size may be smaller than large-block segments to avoid wasted space.

2. **Large blocks:**

   * Allocated into segments optimized for sequential I/O.
   * Each segment may contain a single large block or a small number of large blocks.

3. **Segment sealing and visibility rules:**

   * Same as standard ASL-STORE-INDEX: segments become visible only after seal + log append.
   * Determinism and snapshot safety are unaffected by block size.

---
## 5. Indexing and Addressing

* All blocks, regardless of size, are addressed uniformly:

  ```
  ArtifactLocation = (BlockID, offset, length)
  ```

* Packing small artifacts **does not affect index semantics**:

  * Each artifact retains its unique location.
  * Shadowing, tombstones, and visibility rules are identical to large blocks.
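A minimal sketch of the uniform addressing contract: the reader slices the block by `(BlockID, offset, length)` and never needs to know whether the block was packed (names are illustrative):

```python
# Sketch only: uniform artifact retrieval regardless of packing.
def read_artifact(blocks, location):
    """blocks: BlockID -> raw bytes; location: (BlockID, offset, length)."""
    block_id, offset, length = location
    data = blocks[block_id]
    return data[offset:offset + length]

# A packed block holding two small artifacts at offsets 0 and 5.
blocks = {"blk-0001": b"helloworld!"}
assert read_artifact(blocks, ("blk-0001", 0, 5)) == b"hello"
assert read_artifact(blocks, ("blk-0001", 5, 6)) == b"world!"
```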
---
## 6. Garbage Collection and Retention

1. **Small packed blocks:**

   * GC may reclaim a packed block only when **all contained artifacts are unreachable**.
   * Tombstones and snapshot pins apply to individual artifacts within the packed block.

2. **Large blocks:**

   * GC applies per block, as usual.
   * Retention/pinning applies to the whole block.

**Invariant:** GC must never remove bytes still referenced by CURRENT or snapshots, independent of block size.
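The packed-block reclamation rule can be sketched as follows (illustrative names; a real store would derive the live set from CURRENT plus snapshot pins):

```python
# Sketch only: a packed block is reclaimable only when every artifact
# mapped into it is unreachable from CURRENT and from all snapshot pins.
def reclaimable_blocks(index, live_artifacts):
    """index: artifact_id -> block_id; live_artifacts: set of reachable ids."""
    artifacts_per_block = {}
    for artifact_id, block_id in index.items():
        artifacts_per_block.setdefault(block_id, set()).add(artifact_id)
    return {
        block_id
        for block_id, ids in artifacts_per_block.items()
        if ids.isdisjoint(live_artifacts)  # all contained artifacts dead
    }

index = {"a": "blk-1", "b": "blk-1", "c": "blk-2"}
# "a" is still reachable, so blk-1 (holding "a" and "b") must survive.
assert reclaimable_blocks(index, {"a"}) == {"blk-2"}
```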
---
## 7. Determinism Guarantees

* Deterministic behavior of index lookup, CURRENT reconstruction, and PEL execution is **unchanged** by block size or packing.
* Packing is purely an **implementation optimization** at the store layer.
* All `(BlockID, offset, length)` mappings remain deterministic per snapshot + log.

---
## 8. Configurable Parameters

* `T_small`: threshold for small vs large block classification
* Segment size for small blocks
* Segment size for large blocks
* Maximum artifacts per small packed block

These parameters may be tuned per deployment but do not change ASL-CORE semantics.
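A hypothetical deployment configuration for these parameters; only `T_small` is named by this addendum, so the field names and default values below are illustrative assumptions:

```python
# Sketch only: per-deployment store tuning knobs. Defaults are invented
# for illustration; the spec mandates only that T_small be configurable.
from dataclasses import dataclass

@dataclass(frozen=True)
class StoreConfig:
    t_small: int = 64 * 1024                    # small/large classification threshold
    small_segment_size: int = 4 * 1024 * 1024   # packing-optimized segments
    large_segment_size: int = 64 * 1024 * 1024  # sequential-I/O segments
    max_artifacts_per_packed_block: int = 256

    def is_small(self, artifact_len: int) -> bool:
        # Section 2: "below T_small" is small; ">= T_small" is large.
        return artifact_len < self.t_small

cfg = StoreConfig()
assert cfg.is_small(1024) and not cfg.is_small(64 * 1024)
```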
---
## 9. Normative Invariants

1. Artifact locations remain deterministic and immutable.
2. Packed small artifacts are individually addressable via `(BlockID, offset, length)`.
3. Large artifacts are never packed with other artifacts.
4. Segment visibility, snapshot safety, and GC rules are identical to standard store rules.
5. Mixed segments are discouraged but allowed if unavoidable; index semantics remain consistent.

---
## 10. Summary

This addendum formalizes **small vs large block handling** in the store layer:

* **Small artifacts** may be packed together to reduce overhead.
* **Large artifacts** remain separate for efficiency.
* **Addressing and index semantics remain identical** for both sizes.
* **Determinism, snapshot safety, and GC invariants are preserved.**

It provides clear operational guidance for **store implementations**, while leaving **ASL-CORE and index semantics unaltered**.

---
/*
 * asl_capture.c
 *
 * Deterministic execution capture with optional PTY support.
 *
 * PIPE mode: strict stdin/stdout/stderr separation
 * PTY mode: interactive, single combined stream
 */

/* _GNU_SOURCE must be defined before the first libc header is included
 * for forkpty() to be visible. PTY support is optional and explicitly
 * enabled at build time. */
#ifdef ASL_ENABLE_PTY
#define _GNU_SOURCE
#endif

#include "asl_capture.h"

#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <sys/select.h>
#include <fcntl.h>
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#ifdef ASL_ENABLE_PTY
#include <pty.h>
#endif

/* ------------------------------------------------------------------------- */
/* Utilities                                                                 */
/* ------------------------------------------------------------------------- */

static void set_nonblocking(int fd) {
    int flags = fcntl(fd, F_GETFL, 0);
    if (flags >= 0)
        fcntl(fd, F_SETFL, flags | O_NONBLOCK);
}

/* Write exactly n bytes, retrying on short writes and EINTR. */
static void write_all(int fd, const char *buf, size_t n) {
    while (n > 0) {
        ssize_t w = write(fd, buf, n);
        if (w < 0) {
            if (errno == EINTR)
                continue;
            return;
        }
        buf += w;
        n -= (size_t)w;
    }
}

/* ------------------------------------------------------------------------- */
/* PIPE mode implementation                                                  */
/* ------------------------------------------------------------------------- */

static pid_t spawn_pipe(
    char **argv,
    int *child_stdin,
    int *child_stdout,
    int *child_stderr
) {
    int in_p[2], out_p[2], err_p[2];

    if (pipe(in_p) < 0) return -1;
    if (pipe(out_p) < 0) {
        close(in_p[0]); close(in_p[1]);
        return -1;
    }
    if (pipe(err_p) < 0) {
        close(in_p[0]);  close(in_p[1]);
        close(out_p[0]); close(out_p[1]);
        return -1;
    }

    pid_t pid = fork();
    if (pid < 0) return -1;

    if (pid == 0) {
        /* child */
        dup2(in_p[0], STDIN_FILENO);
        dup2(out_p[1], STDOUT_FILENO);
        dup2(err_p[1], STDERR_FILENO);

        close(in_p[1]);
        close(out_p[0]);
        close(err_p[0]);

        execvp(argv[0], argv);
        perror("execvp");
        _exit(127);
    }

    /* parent */
    close(in_p[0]);
    close(out_p[1]);
    close(err_p[1]);

    *child_stdin  = in_p[1];
    *child_stdout = out_p[0];
    *child_stderr = err_p[0];

    set_nonblocking(*child_stdout);
    set_nonblocking(*child_stderr);

    return pid;
}

static void pump_pipe(
    int child_stdin,
    int child_stdout,
    int child_stderr
) {
    char buf[8192];
    int in_open = 1, out_open = 1, err_open = 1;

    while (in_open || out_open || err_open) {
        fd_set rfds;
        FD_ZERO(&rfds);

        if (in_open)
            FD_SET(STDIN_FILENO, &rfds);
        if (out_open)
            FD_SET(child_stdout, &rfds);
        if (err_open)
            FD_SET(child_stderr, &rfds);

        int maxfd = child_stdout > child_stderr
                        ? child_stdout
                        : child_stderr;

        if (select(maxfd + 1, &rfds, NULL, NULL, NULL) < 0) {
            if (errno == EINTR)
                continue;
            break;
        }

        /* stdin -> child stdin */
        if (in_open && FD_ISSET(STDIN_FILENO, &rfds)) {
            ssize_t n = read(STDIN_FILENO, buf, sizeof(buf));
            if (n <= 0) {
                close(child_stdin);
                in_open = 0;
            } else {
                write_all(child_stdin, buf, (size_t)n);
            }
        }

        /* child stdout */
        if (out_open && FD_ISSET(child_stdout, &rfds)) {
            ssize_t n = read(child_stdout, buf, sizeof(buf));
            if (n <= 0) {
                close(child_stdout);
                out_open = 0;
            } else {
                /* placeholder for ASL stdout artifact */
                write_all(STDOUT_FILENO, buf, (size_t)n);
            }
        }

        /* child stderr */
        if (err_open && FD_ISSET(child_stderr, &rfds)) {
            ssize_t n = read(child_stderr, buf, sizeof(buf));
            if (n <= 0) {
                close(child_stderr);
                err_open = 0;
            } else {
                /* placeholder for ASL stderr artifact */
                write_all(STDERR_FILENO, buf, (size_t)n);
            }
        }
    }
}

/* ------------------------------------------------------------------------- */
/* PTY mode implementation                                                   */
/* ------------------------------------------------------------------------- */

#ifdef ASL_ENABLE_PTY

static pid_t spawn_pty(
    char **argv,
    int *pty_master_fd
) {
    int master_fd;
    pid_t pid = forkpty(&master_fd, NULL, NULL, NULL);
    if (pid < 0)
        return -1;

    if (pid == 0) {
        execvp(argv[0], argv);
        perror("execvp");
        _exit(127);
    }

    set_nonblocking(master_fd);
    *pty_master_fd = master_fd;
    return pid;
}

static void pump_pty(int pty_master) {
    char buf[8192];
    int running = 1; /* renamed from "open" to avoid shadowing open(2) */

    while (running) {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(STDIN_FILENO, &rfds);
        FD_SET(pty_master, &rfds);

        int maxfd = pty_master;

        if (select(maxfd + 1, &rfds, NULL, NULL, NULL) < 0) {
            if (errno == EINTR)
                continue;
            break;
        }

        /* stdin -> PTY */
        if (FD_ISSET(STDIN_FILENO, &rfds)) {
            ssize_t n = read(STDIN_FILENO, buf, sizeof(buf));
            if (n > 0) {
                write_all(pty_master, buf, (size_t)n);
            }
        }

        /* PTY -> stdout (combined stream) */
        if (FD_ISSET(pty_master, &rfds)) {
            ssize_t n = read(pty_master, buf, sizeof(buf));
            if (n <= 0) {
                close(pty_master);
                running = 0;
            } else {
                /* placeholder for ASL combined output artifact */
                write_all(STDOUT_FILENO, buf, (size_t)n);
            }
        }
    }
}

#endif /* ASL_ENABLE_PTY */

/* ------------------------------------------------------------------------- */
/* Public entry point                                                        */
/* ------------------------------------------------------------------------- */

int asl_capture_run(
    asl_capture_mode_t mode,
    char **argv,
    asl_capture_result_t *result
) {
    pid_t pid;
    int status;

    if (!argv || !argv[0] || !result)
        return -1;

    if (mode == ASL_CAPTURE_PTY) {
#ifndef ASL_ENABLE_PTY
        fprintf(stderr, "asl-capture: PTY support not enabled at build time\n");
        return -1;
#else
        int pty_master;
        pid = spawn_pty(argv, &pty_master);
        if (pid < 0)
            return -1;

        pump_pty(pty_master);
#endif
    } else {
        int in_fd, out_fd, err_fd;
        pid = spawn_pipe(argv, &in_fd, &out_fd, &err_fd);
        if (pid < 0)
            return -1;

        pump_pipe(in_fd, out_fd, err_fd);
    }

    if (waitpid(pid, &status, 0) < 0)
        return -1;

    if (WIFEXITED(status)) {
        result->exit_code = WEXITSTATUS(status);
        result->term_signal = 0;
    } else if (WIFSIGNALED(status)) {
        result->exit_code = 128;
        result->term_signal = WTERMSIG(status);
    } else {
        result->exit_code = 128;
        result->term_signal = 0;
    }

    return 0;
}

---
#ifndef ASL_CAPTURE_H
#define ASL_CAPTURE_H

#include <sys/types.h>

/* Execution mode */
typedef enum {
    ASL_CAPTURE_PIPE = 0,
    ASL_CAPTURE_PTY  = 1
} asl_capture_mode_t;

/* Result of execution */
typedef struct {
    int exit_code;   /* valid if term_signal == 0 */
    int term_signal; /* 0 if exited normally */
} asl_capture_result_t;

/*
 * Run a command under capture.
 *
 * argv must be NULL-terminated and suitable for execvp().
 * result must not be NULL.
 *
 * Returns 0 on success, -1 on internal error.
 */
int asl_capture_run(
    asl_capture_mode_t mode,
    char **argv,
    asl_capture_result_t *result
);

#endif /* ASL_CAPTURE_H */

---
/*
 * asl_capture_tool.c
 * Thin CLI wrapper around libasl-capture
 *
 * SPDX-License-Identifier: MPL-2.0
 */

#include <stdio.h>
#include <stdlib.h>
#include "asl_capture.h"

int main(int argc, char **argv) {
    if (argc < 2) {
        fprintf(stderr, "Usage: %s <command> [args...]\n", argv[0]);
        return 1;
    }

    asl_capture_result_t result;
    int ret = asl_capture_run(ASL_CAPTURE_PIPE, argv + 1, &result);

    if (ret != 0) {
        fprintf(stderr, "asl-capture: internal error\n");
        return 1;
    }

    /* Report how the child terminated and propagate its exit code.
     * (asl_capture_result_t carries only exit_code and term_signal;
     * artifact/PER reporting belongs to a later layer.) */
    if (result.term_signal != 0)
        fprintf(stderr, "asl-capture: command terminated by signal %d\n",
                result.term_signal);

    return result.exit_code;
}

---
dpkg-buildpackage -us -uc

---
# canonical.py

from __future__ import annotations

import numpy as np

from typing import Tuple


# ---------------------------------------------------------------------
# Canonicalization configuration
# ---------------------------------------------------------------------

# Numerical tolerance for zero detection
EPSILON: float = 1e-12


# ---------------------------------------------------------------------
# Canonicalization helpers
# ---------------------------------------------------------------------

def _normalize(values: np.ndarray) -> np.ndarray:
    """
    Normalize a complex amplitude vector.
    """
    norm = np.linalg.norm(values)
    if norm == 0:
        raise ValueError("Cannot canonicalize zero-norm state")
    return values / norm


def _remove_global_phase(values: np.ndarray) -> np.ndarray:
    """
    Remove global phase by forcing the first non-zero amplitude
    to be real and non-negative.
    """
    for v in values:
        if abs(v) > EPSILON:
            # Multiplying by exp(-i * angle(v)) makes this amplitude real
            # and positive, so no further sign correction is needed.
            values = values * np.exp(-1j * np.angle(v))
            break
    return values


# ---------------------------------------------------------------------
# Public canonicalization API
# ---------------------------------------------------------------------

def canonicalize_sparse(
    indices: np.ndarray,
    values: np.ndarray,
) -> Tuple[np.ndarray, np.ndarray]:
    """
    Canonicalize a sparse amplitude representation.

    Guarantees:
    - Deterministic normalization
    - Global phase removed
    - Output arrays are copies (caller mutation-safe)
    - Index ordering preserved (caller responsibility)

    Parameters
    ----------
    indices:
        Integer basis indices (shape: [k])
    values:
        Complex amplitudes (shape: [k])

    Returns
    -------
    (indices, values):
        Canonicalized sparse representation
    """
    if indices.ndim != 1 or values.ndim != 1:
        raise ValueError("indices and values must be 1-D arrays")

    if len(indices) != len(values):
        raise ValueError("indices and values must have the same length")

    # Copy defensively
    ci = np.array(indices, dtype=np.int64, copy=True)
    cv = np.array(values, dtype=np.complex128, copy=True)

    # Normalize
    cv = _normalize(cv)

    # Canonical global phase
    cv = _remove_global_phase(cv)

    return ci, cv


# ---------------------------------------------------------------------
# Optional utilities (explicit, not implicit)
# ---------------------------------------------------------------------

def canonicalize_dense(
    amplitudes: np.ndarray,
) -> np.ndarray:
    """
    Canonicalize a dense amplitude vector.

    Provided for completeness and testing;
    sparse canonicalization is preferred for infrastructure.
    """
    if amplitudes.ndim != 1:
        raise ValueError("amplitudes must be a 1-D array")

    values = np.array(amplitudes, dtype=np.complex128, copy=True)
    values = _normalize(values)
    values = _remove_global_phase(values)
    return values
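As a usage sketch (with the normalization and phase-removal steps restated inline so the snippet is self-contained), canonicalization maps any two states differing only by a global phase to the same representative:

```python
# Sketch only: demonstrates the canonicalization invariants
# (unit norm, global phase removed, first non-zero amplitude
# real and non-negative) with the helpers inlined.
import numpy as np

EPSILON = 1e-12

def canonicalize(amplitudes):
    values = np.array(amplitudes, dtype=np.complex128, copy=True)
    norm = np.linalg.norm(values)
    if norm == 0:
        raise ValueError("Cannot canonicalize zero-norm state")
    values /= norm
    for v in values:
        if abs(v) > EPSILON:
            values = values * np.exp(-1j * np.angle(v))
            break
    return values

a = canonicalize([1 + 1j, 2.0])
b = canonicalize(np.exp(1j * 0.7) * np.array([1 + 1j, 2.0]))
assert np.allclose(a, b)                    # global phase removed
assert np.isclose(np.linalg.norm(a), 1.0)   # normalized
assert a[0].real > 0 and abs(a[0].imag) < 1e-12
```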
---
Perfect! Here’s a **container-friendly build script outline** for creating your `asl-auth-host` bootable ISO on Linux Mint using a Debian container. It **prepares the filesystem, overlays your binaries and scripts, and sets up directories**, and skips actual ZFS pool creation (you can mount ZFS later when booted).

---

```bash
#!/bin/bash
# build_asl_auth_host_iso.sh
# Container-friendly ASL-AUTH-HOST ISO builder
# Assumes running in a Debian container

set -e
WORKDIR=/work
ISO_ROOT=$WORKDIR/iso_root
OVERLAY=$WORKDIR/overlay
OUTPUT=$WORKDIR/asl-auth-host.iso

# 1. Clean previous build
rm -rf $ISO_ROOT $OVERLAY $OUTPUT
mkdir -p $ISO_ROOT $OVERLAY

# 2. Bootstrap minimal Debian filesystem
echo "Bootstrapping minimal Debian filesystem..."
debootstrap --arch=amd64 bullseye $ISO_ROOT http://deb.debian.org/debian/
chroot $ISO_ROOT /bin/bash << 'EOF'
touch /etc/asl-bootstrap
apt-get update
apt-get install -y linux-image-amd64 initramfs-tools systemd-sysv \
    ca-certificates curl xorriso isolinux syslinux-common
EOF

# 3. Set up overlay directory structure
echo "Setting up overlay structure..."
mkdir -p $OVERLAY/{bin,etc,var/lib/asl,var/log}
mkdir -p $OVERLAY/{usr/local/bin,run/asl}

# Place your binaries and scripts here
cp $WORKDIR/binaries/asl-auth-host $OVERLAY/bin/
cp $WORKDIR/binaries/asl-rescue $OVERLAY/bin/
cp $WORKDIR/scripts/* $OVERLAY/bin/

# 4. Copy overlay into ISO root
echo "Merging overlay into ISO root..."
cp -a $OVERLAY/* $ISO_ROOT/
# NOTE: assumes the .deb files have already been staged under
# $ISO_ROOT/usr/local/debs before this step
chroot $ISO_ROOT /bin/bash << 'EOF'
dpkg -i /usr/local/debs/libasl-capture0_0.1.0-1_amd64.deb \
        /usr/local/debs/libasl-capture-dev_0.1.0-1_amd64.deb \
        /usr/local/debs/asl-tools_0.1.0-1_amd64.deb || true
apt-get install -f -y  # fix dependencies if needed
EOF

# 5. Create expected dataset structure for ASL (to mount ZFS later)
mkdir -p $ISO_ROOT/var/lib/asl/{common,personal}
mkdir -p $ISO_ROOT/var/lib/asl/pools

# 6. Optional: add bootloader configs (isolinux/syslinux)
echo "Setting up bootloader (minimal)..."
mkdir -p $ISO_ROOT/boot/isolinux
# Place isolinux.bin and isolinux.cfg here
# cp $WORKDIR/bootloader/isolinux.bin $ISO_ROOT/boot/isolinux/
# cp $WORKDIR/bootloader/isolinux.cfg $ISO_ROOT/boot/isolinux/

# 7. Build ISO
echo "Generating ISO..."
xorriso -as mkisofs \
    -iso-level 3 \
    -o $OUTPUT \
    -full-iso9660-filenames \
    -volid "ASL-AUTH-HOST" \
    -eltorito-boot boot/isolinux/isolinux.bin \
    -eltorito-catalog boot/isolinux/boot.cat \
    -no-emul-boot -boot-load-size 4 -boot-info-table \
    $ISO_ROOT

echo "ASL-AUTH-HOST ISO built successfully: $OUTPUT"
```

---

### ✅ Key Points

* **Container-friendly**: No ZFS creation inside the container. ZFS datasets (`common`, `personal`) are just directories; they can be mounted when booted.
* **Overlay**: Your binaries (`asl-auth-host`, `asl-rescue`) and helper scripts go into `/bin` in the overlay.
* **Bootloader**: Minimal setup; you can extend with isolinux/syslinux or GRUB.
* **ISO Output**: Reproducible in a container; can be copied to USB or a VM for testing.

---

### Next Steps

1. Populate `$WORKDIR/binaries/` with your compiled `asl-auth-host` and `asl-rescue`.
2. Populate `$WORKDIR/scripts/` with any helpers (like init scripts, mounting helpers).
3. Add your `isolinux.cfg` or GRUB config to `$WORKDIR/bootloader/`.
4. Run the container build. The ISO will appear in `/work/asl-auth-host.iso`.

---

I can also **draft a ready-to-use directory layout** for the ISO and overlay, including placeholders for:

* `common` and `personal` datasets
* `/var/lib/asl/index`
* `/var/lib/asl/certs`
* `/etc/asl-auth-host/config.yaml`

…so you have a concrete blueprint to start placing your certificates and DAM/SOPS bundles.

Do you want me to draft that layout next?

---
Great — this is exactly the point where *small, disciplined decisions* pay off for years.

I’ll answer in three clean sections:

1. **Making PTY support optional at build time**
2. **Clean split: libraries vs tools (ASL style)**
3. **Concrete `debian/` packaging stanzas**

No fluff, no overreach.

---

# 1. Making PTY support optional at build time

### Design goal

* Default: **PIPE-only**, fully POSIX
* Optional: **PTY-enabled**, Linux/glibc-specific
* No API break
* No runtime surprises

---

## 1.1 Build-time feature flag

Use a **compile-time macro**, not runtime probing.

### In code (`asl_capture.c`)

At the very top of the file (`_GNU_SOURCE` only takes effect if it is defined before the first libc header is included):

```c
/* Enable PTY support only if requested */
#ifdef ASL_ENABLE_PTY
#define _GNU_SOURCE
#endif
```

and include the PTY header alongside the other includes:

```c
#ifdef ASL_ENABLE_PTY
#include <pty.h>
#endif
```

Then guard PTY-specific code:

```c
#ifdef ASL_ENABLE_PTY
static pid_t spawn_pty(...);
static void pump_pty(...);
#endif
```

In `asl_capture_run`:

```c
if (mode == ASL_CAPTURE_PTY) {
#ifndef ASL_ENABLE_PTY
    fprintf(stderr, "PTY support not built in\n");
    return -1;
#else
    ...
#endif
}
```

This guarantees:

* no accidental PTY usage
* deterministic behavior per build

---

## 1.2 Build system (Makefile example)

```make
CFLAGS += -Wall -Wextra -O2
LIBS +=

ifdef ENABLE_PTY
CFLAGS += -DASL_ENABLE_PTY
LIBS += -lutil
endif
```

Usage:

```sh
make              # PIPE only
make ENABLE_PTY=1 # PTY enabled
```

This maps perfectly to Debian build profiles later.

---

## 1.3 Debian mapping

* `-lutil` is part of glibc (covered by `libc6-dev`), so PTY builds need no extra dependency
* No mandatory GNU extensions
* Debian-friendly

---

# 2. Clean split: libraries vs tools

This matters a lot for long-term sanity.

## 2.1 Guiding principle

> **Libraries describe facts. Tools perform actions.**

Your ecosystem already follows this instinct.

---

## 2.2 Proposed split

### Libraries (linkable, stable ABI)

| Package          | Contents                         |
| ---------------- | -------------------------------- |
| `libasl-core`    | block IDs, artifact IDs, hashing |
| `libasl-store`   | block IO, sealing, snapshots     |
| `libasl-index`   | lookup, segment handling         |
| `libasl-capture` | execution + stream capture       |
| `libpel-core`    | DAG, receipt types               |

Libraries:

* no CLI parsing
* no environment policy
* no filesystem assumptions

---

### Tools (thin wrappers)

| Binary        | Purpose           |
| ------------- | ----------------- |
| `asl-put`     | write artifact    |
| `asl-get`     | read artifact     |
| `asl-capture` | execute + capture |
| `pel-run`     | run PEL DAG       |
| `asl-admin`   | store ops         |

Tools:

* parse argv
* open stores
* call libs
* emit artifacts

---

## 2.3 Filesystem layout (Debian standard)

```text
/usr/bin/
    asl-put
    asl-get
    asl-capture
    pel-run

/usr/lib/x86_64-linux-gnu/
    libasl-core.so.0
    libasl-store.so.0
    libasl-capture.so.0

/usr/include/asl/
    asl_core.h
    asl_capture.h

/usr/share/doc/asl/
    README.md
    specs/
```

No custom prefixes. No surprises.

---

# 3. Debian packaging: concrete stanzas

Below is a **minimal but correct** Debian packaging layout.

---

## 3.1 `debian/control`

```debcontrol
Source: asl
Section: utils
Priority: optional
Maintainer: Your Name <you@example.org>
Build-Depends:
 debhelper-compat (= 13),
 libc6-dev,
 pkg-config
Standards-Version: 4.6.2
Rules-Requires-Root: no

Package: libasl-capture0
Section: libs
Architecture: any
Depends: ${shlibs:Depends}, ${misc:Depends}
Description: ASL execution capture library
 Library providing deterministic execution capture
 with optional PTY support.

Package: libasl-capture-dev
Section: libdevel
Architecture: any
Depends: libasl-capture0 (= ${binary:Version}), ${misc:Depends}
Description: Development files for ASL capture
 Headers and static metadata for linking against
 libasl-capture.

Package: asl-tools
Section: utils
Architecture: any
Depends:
 ${shlibs:Depends},
 ${misc:Depends}
Recommends: bash
Description: ASL command-line tools
 Tools for artifact storage, execution capture,
 and deterministic receipt generation.
```

---

## 3.2 PTY as a build profile (optional, advanced but clean)

`forkpty()` lives in glibc’s `libutil`, which `libc6-dev` already provides, so the profile needs no conditional `Build-Depends`; it only toggles the make flag (see `debian/rules` below).

Build without PTY support:

```sh
dpkg-buildpackage -Pno-pty
```

This is **exactly how Debian likes optional features**.

---

## 3.3 `debian/rules` (snippet)

```make
#!/usr/bin/make -f

%:
	dh $@

override_dh_auto_build:
ifeq ($(filter no-pty,$(DEB_BUILD_PROFILES)),)
	$(MAKE) ENABLE_PTY=1
else
	$(MAKE)
endif
```

---

## 3.4 `debian/libasl-capture0.install`

```text
usr/lib/*/libasl-capture.so.*
```

## 3.5 `debian/libasl-capture-dev.install`

```text
usr/include/asl/asl_capture.h
usr/lib/*/libasl-capture.so
```

## 3.6 `debian/asl-tools.install`

```text
usr/bin/asl-capture
usr/bin/asl-put
usr/bin/asl-get
usr/bin/pel-run
```

---

# 4. Why this setup is *correct* for your system

* PTY support is **explicit**, not implicit
* PIPE-only builds remain fully deterministic
* Debian can ship both safely
* Offline authority hosts can disable PTY entirely
* SystemRescue builds can enable it
* Libraries remain reusable and clean

This is *exactly* the level of restraint Debian maintainers respect.

---

## Where to go next (suggestion)

Natural next steps:

1. Define **libasl-capture ABI versioning rules**
2. Write a **man page for `asl-capture`**
3. Add **PTY vs PIPE semantics to ASL-HOST**
4. Wire `asl-capture` into your ISO builder

If you want, I can take any one of those and do it *to Debian quality*.

You’re building this the right way.

---
# ENC-ASL-CORE-INDEX ADDENDUM: Federation Encoding

Base spec: `tier1/enc-asl-core-index.md`

Status: Merged into `tier1/enc-asl-core-index.md`

---

## 1. Purpose

This addendum defines **on-disk encoding changes** required to support the **Federation Specification**:

* Domain identifiers for artifacts and index entries
* Visibility flags for internal vs published state
* Backward-compatible update to existing index records
* Integration with existing block, segment, and tombstone layouts

It ensures **deterministic reconstruction** across domains while preserving index lookup semantics.

---

## 2. New Fields for Index Records

Each **ArtifactIndexRecord** is extended to include **federation metadata**:

| Field               | Type                     | Description |
| ------------------- | ------------------------ | ----------- |
| `DomainID`          | uint32 / opaque          | Unique domain identifier for the artifact. Must match the domain in which the artifact was created. |
| `Visibility`        | uint8 (enum)             | Visibility status of the artifact: `0 = internal`, `1 = published`. |
| `CrossDomainSource` | optional uint32 / opaque | DomainID of the original domain if the artifact is imported from another domain; `NULL` if local. |

**Encoding Notes:**

* `DomainID` and `Visibility` are **always present** in index records, even for legacy artifacts (legacy default: internal, local domain).
* `CrossDomainSource` is **optional**; present only for imported artifacts.
* The existing `(BlockID, offset, length)` mapping is unchanged.
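One possible deterministic serialization of these three fields, shown as a sketch; the exact wire layout, field widths, and the `NULL` sentinel below are assumptions for illustration, not part of this addendum:

```python
# Sketch only: fixed little-endian layout for
# (DomainID: uint32, Visibility: uint8, CrossDomainSource: uint32-or-NULL).
import struct

VISIBILITY_INTERNAL, VISIBILITY_PUBLISHED = 0, 1
NO_SOURCE = 0xFFFFFFFF  # invented sentinel standing in for NULL

def encode_federation_meta(domain_id, visibility, cross_domain_source=None):
    src = NO_SOURCE if cross_domain_source is None else cross_domain_source
    return struct.pack("<IBI", domain_id, visibility, src)

def decode_federation_meta(blob):
    domain_id, visibility, src = struct.unpack("<IBI", blob)
    return domain_id, visibility, (None if src == NO_SOURCE else src)

blob = encode_federation_meta(7, VISIBILITY_PUBLISHED, cross_domain_source=3)
assert decode_federation_meta(blob) == (7, 1, 3)
```

Any layout works as long as it serializes identically across platforms, per the determinism invariant in Section 8.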
---
## 3. Segment Header Updates

Each segment now includes:

* `SegmentDomainID` (uint32 / opaque): domain owning this segment
* `SegmentVisibility` (uint8): maximum visibility of all artifacts in the segment (`internal` or `published`)
* Optional: `FederationVersion` (uint8) to allow backward-compatible upgrades

This allows **fast filtering** of visible segments during lookup in federated domains.

---
## 4. Tombstone Encoding

* Tombstones include `DomainID` and `Visibility` fields to ensure **deterministic shadowing** across domains.
* Shadowing rules:

  * A tombstone in domain A only shadows artifacts in domain A.
  * Published artifacts cannot be accidentally shadowed by internal artifacts from another domain.

---

## 5. Block Records

No change to `(BlockID, offset, length)` itself; however:

* Blocks supporting **published artifacts** are considered **cross-domain safe**.
* Optional **DomainID metadata** may be stored with blocks to speed up GC and federation operations.
* Addressing and segment packing rules are unchanged.

---
## 6. Lookup and Reconstruction Rules

* When reconstructing **CURRENT in a domain**:

  1. Filter segments and records by `DomainID` and `Visibility`.
  2. Include artifacts with `DomainID = local` or `Visibility = published`.
  3. Include imported artifacts by following `CrossDomainSource`.
  4. Apply standard shadowing and tombstone rules per domain.

* Determinism and immutability guarantees remain identical to single-domain ENC-ASL-CORE-INDEX.
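The visibility filter from these rules can be sketched as follows; the record and function names are illustrative, mirroring the fields from Section 2:

```python
# Sketch only: which records survive the Section 6 visibility filter
# when reconstructing CURRENT in a given local domain.
from dataclasses import dataclass
from typing import Optional

INTERNAL, PUBLISHED = 0, 1

@dataclass(frozen=True)
class Record:
    artifact_id: str
    domain_id: int
    visibility: int
    cross_domain_source: Optional[int] = None

def visible_in_domain(rec: Record, local_domain: int) -> bool:
    if rec.domain_id == local_domain:
        return True                     # local artifacts are always visible
    return rec.visibility == PUBLISHED  # foreign artifacts only if published

records = [
    Record("a", domain_id=1, visibility=INTERNAL),
    Record("b", domain_id=2, visibility=INTERNAL),
    Record("c", domain_id=2, visibility=PUBLISHED, cross_domain_source=2),
]
visible = {r.artifact_id for r in records if visible_in_domain(r, local_domain=1)}
assert visible == {"a", "c"}
```

Shadowing and tombstone resolution would then run per domain on the surviving records.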
---
## 7. Backward Compatibility

* Legacy segments without federation fields are treated as:

  * `DomainID = local domain`
  * `Visibility = internal`

* Lookup semantics automatically ignore artifacts from other domains until explicitly migrated.
* Federation fields are **forward-compatible**; versioning in segment headers allows safe upgrades.

---

## 8. Normative Invariants

1. **DomainID presence:** Every index record must include a `DomainID`.
2. **Visibility correctness:** Published artifacts are always visible to other domains; internal artifacts are not.
3. **CrossDomainSource integrity:** Imported artifacts retain an immutable reference to their original domain.
4. **Deterministic encoding:** Serialization of index records and segments must be identical across platforms for the same snapshot + log.
5. **Backward compatibility:** Legacy records are interpreted safely with default federation metadata.

---
## 9. Summary
|
||||
|
||||
This addendum updates **ENC-ASL-CORE-INDEX** to support **federation**:
|
||||
|
||||
* Adds `DomainID`, `Visibility`, and optional `CrossDomainSource` to index records
|
||||
* Updates segment headers for fast domain/visibility filtering
|
||||
* Ensures deterministic lookup, reconstruction, and shadowing rules per domain
|
||||
* Maintains backward compatibility with legacy segments
|
||||
|
||||
It integrates federation metadata **without altering the underlying block or artifact encoding**, preserving deterministic execution and PEL provenance.
|
||||
|
|
@ -1,205 +0,0 @@
NOTE: Superseded by `tier1/tgk-1.md` (TGK/1). Kept for historical context.

# ENC-TGK-INDEX

### Encoding Specification for TGK Edge Index References

---

## 1. Purpose

ENC-TGK-INDEX defines the **on-disk encoding for Trace Graph Kernel (TGK) index records**, which serve as **references to TGK-CORE edges**.

* It **never encodes edge structure** (`from[]` / `to[]`)
* It supports **filters, sharding, and routing** per ASL-INDEX-ACCEL
* Snapshot and log-sequence semantics are maintained for deterministic recovery

---

## 2. Layering Principle

* **TGK-CORE / ENC-TGK-CORE**: authoritative edge structure (`from[] → to[]`)
* **TGK-INDEX**: defines canonical keys, routing keys, and acceleration logic
* **ENC-TGK-INDEX**: stores references to TGK-CORE edges plus acceleration metadata

**Normative statement:**

> ENC-TGK-INDEX encodes only references to TGK-CORE edges and MUST NOT re-encode or reinterpret edge structure.

---

## 3. Segment Layout

Segments are **immutable** and **snapshot-bound**:

```
+-----------------------------+
| Segment Header              |
+-----------------------------+
| Routing Filters             |
+-----------------------------+
| TGK Index Records           |
+-----------------------------+
| Optional Acceleration Data  |
+-----------------------------+
| Segment Footer              |
+-----------------------------+
```

* Segment atomicity is enforced
* The footer checksum guarantees completeness

---

## 4. Segment Header

```c
struct tgk_index_segment_header {
    uint32_t magic;              // 'TGKI'
    uint16_t version;            // encoding version
    uint16_t flags;              // segment flags
    uint64_t segment_id;         // unique per dataset
    uint64_t logseq_min;         // inclusive
    uint64_t logseq_max;         // inclusive
    uint64_t record_count;       // number of index records
    uint64_t record_area_offset; // bytes from segment start
    uint64_t footer_offset;      // bytes from segment start
};
```

* `logseq_min` / `logseq_max` enforce snapshot visibility
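As a concrete illustration, the header above can be serialized with Python's `struct` module. The field order follows `tgk_index_segment_header`; the little-endian byte order is an assumption, since the spec does not fix endianness here.

```python
import struct

# Field order mirrors struct tgk_index_segment_header.
HEADER_FMT = "<IHH6Q"
HEADER_SIZE = struct.calcsize(HEADER_FMT)  # 56 bytes
MAGIC = int.from_bytes(b"TGKI", "big")     # 'TGKI'

def pack_header(version, flags, segment_id, logseq_min, logseq_max,
                record_count, record_area_offset, footer_offset):
    return struct.pack(HEADER_FMT, MAGIC, version, flags, segment_id,
                       logseq_min, logseq_max, record_count,
                       record_area_offset, footer_offset)

def unpack_header(buf):
    magic, version, flags, *rest = struct.unpack_from(HEADER_FMT, buf)
    if magic != MAGIC:
        raise ValueError("not a TGK index segment")
    return {"version": version, "flags": flags, "segment_id": rest[0],
            "logseq_min": rest[1], "logseq_max": rest[2],
            "record_count": rest[3], "record_area_offset": rest[4],
            "footer_offset": rest[5]}
```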
---

## 5. Routing Filters

Filters are **optional but recommended**:

```c
struct tgk_index_filter_header {
    uint16_t filter_type; // e.g., BLOOM, XOR, RIBBON
    uint16_t version;
    uint32_t flags;
    uint64_t size_bytes;  // length of filter payload
};
```

* Filters operate on **routing keys**, not canonical edge IDs
* Routing keys may include:

  * Edge type key
  * Projection context
  * Direction or role

* False positives are allowed; false negatives are forbidden

---

## 6. TGK Index Record

Each record references a **single TGK-CORE edge**:

```c
struct tgk_index_record {
    uint64_t logseq;        // creation log sequence
    uint64_t tgk_edge_id;   // reference to ENC-TGK-CORE edge
    uint32_t edge_type_key; // optional classification
    uint8_t  has_edge_type; // 0 or 1
    uint8_t  role;          // optional: from / to / both
    uint16_t flags;         // tombstone, reserved
};
```

* `tgk_edge_id` is the **canonical key**
* No `from[]` / `to[]` fields exist here
* Edge identity is **solely the TGK-CORE edge ID**

**Flags**:

| Flag                  | Meaning                 |
| --------------------- | ----------------------- |
| `TGK_INDEX_TOMBSTONE` | Shadows previous record |
| `TGK_INDEX_RESERVED`  | Future use              |

---

## 7. Optional Node-Projection Records (Acceleration Only)

For node-centric queries, optional records may map:

```c
struct tgk_node_edge_ref {
    uint64_t logseq;
    uint64_t node_id;
    uint64_t tgk_edge_id;
    uint8_t  position; // from or to
};
```

* **Derivable from TGK-CORE edges**
* Optional; purely for acceleration
* Must not affect semantics

---

## 8. Sharding and SIMD

* Shard assignment: via **routing keys**, **not index semantics**
* SIMD-optimized arrays may exist in optional acceleration sections
* Must be deterministic and immutable
* Must follow ASL-INDEX-ACCEL invariants

---

## 9. Snapshot Interaction

At snapshot `S`:

* A segment is visible if `logseq_min ≤ S`
* A record is visible if `logseq ≤ S`
* Tombstones shadow earlier records

**Lookup Algorithm**:

1. Filter by snapshot
2. Evaluate routing/filter keys (advisory)
3. Confirm canonical key match with `tgk_edge_id`
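The three-step lookup algorithm can be sketched as follows. The record shape (plain dicts) and function name are illustrative; what matters is the ordering: snapshot cut first, advisory filter second, canonical confirmation last.

```python
def lookup(records, snapshot, edge_type_key=None, filter_hit=lambda r: True):
    """Section 9 lookup: snapshot cut, advisory filter,
    then canonical confirmation on tgk_edge_id."""
    # Step 1: snapshot visibility -- a record is visible iff logseq <= S.
    visible = [r for r in records if r["logseq"] <= snapshot]
    # Step 2: routing/filter keys are advisory; false positives allowed,
    # false negatives forbidden (the default filter admits everything).
    candidates = [r for r in visible if filter_hit(r)]
    # Step 3: confirm by canonical key; tombstones shadow earlier records.
    result = {}
    for r in sorted(candidates, key=lambda r: r["logseq"]):
        if r.get("tombstone"):
            result.pop(r["tgk_edge_id"], None)
            continue
        if edge_type_key is not None and r.get("edge_type_key") != edge_type_key:
            continue
        result[r["tgk_edge_id"]] = r
    return result
```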
---

## 10. Segment Footer

```c
struct tgk_index_segment_footer {
    uint64_t checksum;     // covers header + filters + records
    uint64_t record_bytes; // size of record area
    uint64_t filter_bytes; // size of filter area
};
```

* Ensures atomicity and completeness

---

## 11. Normative Invariants

1. **Edge identity = TGK-CORE edge ID**
2. The edge type key is **not part of identity**
3. Filters are **advisory only**
4. Sharding is observationally invisible
5. Index records are immutable
6. Snapshot visibility strictly follows `logseq`
7. Determinism is guaranteed per snapshot

---

## 12. Summary

ENC-TGK-INDEX:

* References TGK-CORE edges without re-encoding structure
* Supports snapshot-safe, deterministic lookup
* Enables filter, shard, and SIMD acceleration
* Preserves TGK-CORE semantics strictly

This design **fully respects layering** and **prevents accidental semantic duplication**, while allowing scalable, high-performance indexing.
@ -1,64 +0,0 @@
# example_quantum.py

import numpy as np
from artifact import Artifact, bits, tensor, materialize_artifact, dag_node_count, dag_depth, ArtifactCache
from sid_hashers import SHA256SIDHash
from hashers import SHA256Hash

# ---------------------------------------------------------------------
# Hashers
# ---------------------------------------------------------------------
sid_hasher = SHA256SIDHash()
content_hasher = SHA256Hash()

# ---------------------------------------------------------------------
# Step 1: Create 8 quantum leaves (1 qubit each)
# We'll make a simple |0> + |1> superposition for each qubit
# ---------------------------------------------------------------------
quantum_leaves = []
for i in range(8):
    indices = np.array([0, 1], dtype=np.int64)
    values = np.array([1+0j, 1+0j], dtype=np.complex128)
    leaf = Artifact(
        op="leaf.bits",
        params={"_materialized": (indices, values)},  # mandatory for materialization
        children=[],
        sid=f"qubit_{i}_superposition",
        materializer=materialize_artifact,
        content_hasher=content_hasher,
    )
    quantum_leaves.append(leaf)

# ---------------------------------------------------------------------
# Step 2: Tensor all 8 qubits together lazily
# ---------------------------------------------------------------------
# Tensor pairs recursively
def tensor_all(artifacts, sid_hasher):
    if len(artifacts) == 1:
        return artifacts[0]
    mid = len(artifacts) // 2
    left = tensor_all(artifacts[:mid], sid_hasher)
    right = tensor_all(artifacts[mid:], sid_hasher)
    return tensor(left, right, sid_hasher=sid_hasher)

quantum_8q = tensor_all(quantum_leaves, sid_hasher=sid_hasher)

print("8-qubit quantum tensor SID (lazy):")
print(quantum_8q.sid)
print("CID materialized yet?", quantum_8q.is_materialized)

# ---------------------------------------------------------------------
# Step 3: Materialize CID on demand
# ---------------------------------------------------------------------
cid_8q = quantum_8q.cid
print("\nAfter materialization:")
print("8-qubit quantum tensor CID:", cid_8q)

# ---------------------------------------------------------------------
# Step 4: DAG metrics
# ---------------------------------------------------------------------
print("\nDerivation DAG metrics:")
print("Total nodes:", dag_node_count(quantum_8q))
print("DAG depth:", dag_depth(quantum_8q))
@ -1,59 +0,0 @@
Exactly — that’s a crucial point. The “execution receipts and snapshot states” aren’t separate things in the sense of being additional files or objects; they are **manifested through the combination of the sequential log, the CURRENT state, and the pinned snapshots in ASL**. Let me break it down:

---

### **1. Sequential Log**

* Records all **append-only mutations** to the domain:

  * Index additions (artifact → block mappings)
  * Tombstones (deletions or shadowed artifacts)
  * Segment seals

* **Deterministic**: replaying from a checkpoint + log reconstructs CURRENT exactly.
* Serves as the **timeline / provenance ledger** for the domain.

---

### **2. CURRENT State**

* Derived from the latest snapshot plus replaying the log:

```
CURRENT = checkpoint_snapshot + replay(log)
```

* Represents the **domain’s active state** at the moment.
* Any PER you generate effectively “samples” this CURRENT state, i.e., it reflects **what is present and visible at this logseq**.
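The `CURRENT = checkpoint_snapshot + replay(log)` relation above can be sketched as a small fold. The log-entry shapes are illustrative assumptions, but the structure mirrors the three mutation kinds the log records.

```python
def replay(checkpoint, log):
    """Fold append-only mutations over a checkpoint snapshot.
    checkpoint: dict mapping artifact key -> block mapping."""
    current = dict(checkpoint)
    for entry in log:
        if entry["op"] == "index_add":
            current[entry["key"]] = entry["block"]
        elif entry["op"] == "tombstone":
            current.pop(entry["key"], None)
        # Segment seals affect durability, not the visible mapping.
    return current
```

Replaying the same log prefix over the same checkpoint always yields the same CURRENT, which is exactly the determinism claim in point 1.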
---

### **3. Snapshots**

* Capture **sealed segments and blocks** at a point in time.
* Pinning ensures **retention and determinism**.
* A PER or TGK edge can reference a snapshot as a **boundary marker** for reproducible reconstruction.

---

### **4. How PERs Fit In**

* PERs are just **artifacts representing deterministic outcomes**:

  * Inputs: block IDs / artifact keys visible in CURRENT
  * Provenance: derived via TGK edges
  * Snapshot: identifies exactly **what state was observed**

* So generating a PER is effectively **taking the CURRENT state + log prefix + snapshot reference and sealing it into a reproducible receipt**.

---

✅ **TL;DR:**

* You don’t need to store “execution receipts” separately in a raw sense; they are **conceptually encoded in the log + CURRENT + snapshot**.
* PER generation is **reading that state and formalizing it as an artifact** with associated TGK edges and provenance.
* The snapshot is the “frozen point” that guarantees reproducibility across nodes or future restorations.

---

If you want, I can **rephrase the checklist to reflect this properly**, so that it doesn’t misleadingly suggest storing something extra — everything is naturally part of ASL’s log + CURRENT + snapshot framework.

Do you want me to do that?
@ -1,302 +0,0 @@
Short answer: **yes, absolutely** — and you can do it *cleanly*, *deterministically*, and in a way that fits your ASL worldview very well.

What it takes is to **separate “networked acquisition” from “system construction”** and turn the former into **artifacts**.

Below is a precise breakdown, from minimal to “fully sealed monastery”.

---

## 1. What debootstrap actually needs

`debootstrap` itself does **not require the internet**. It needs:

1. A **Debian archive** containing:

   * `dists/<suite>/Release`
   * `dists/<suite>/main/binary-<arch>/Packages(.gz|.xz)`
   * `pool/main/.../*.deb`

2. A way to access it:

   * `file://`
   * a mounted filesystem
   * a loopback ISO

3. (Optionally) a trusted **Release.gpg** signature

That’s it.

So the problem reduces to:

> *How do we provide a Debian archive offline?*

---

## 2. Three progressively stronger offline models

### Model A — Cached `.deb` set (quick & dirty)

**What it is**

* Run `debootstrap` once online
* Capture:

  * `/var/cache/apt/archives/*.deb`
  * `/var/lib/apt/lists/*`

* Replay offline with `--cache-dir`

**Pros**

* Very fast to get started
* Minimal tooling

**Cons**

* Not a real archive
* Fragile
* Hard to audit
* Weak provenance

⚠️ This is *okay for experiments*, not for your long-term design.

---

### Model B — Local Debian mirror (recommended baseline)

This is the **right mental model**.

You create a **local, read-only Debian mirror** as an artifact.

#### How

On a *connected* machine:

```bash
apt-get install -y debmirror gnupg
```

Mirror only what you need:

```bash
debmirror \
  --arch=amd64 \
  --section=main \
  --dist=bullseye \
  --method=http \
  --host=deb.debian.org \
  --root=debian \
  --progress \
  /srv/debian-mirror
```

You now have:

```
/srv/debian-mirror/
├── dists/bullseye/
└── pool/main/
```

Capture this directory as an **ASL artifact**.

---

#### Offline debootstrap

On the offline builder:

```bash
debootstrap \
  --arch=amd64 \
  bullseye \
  /target/root \
  file:///srv/debian-mirror
```

✔️ No network
✔️ Deterministic
✔️ Auditable
✔️ Replayable
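Before sealing the mirror as an artifact, it is worth verifying that each `Packages` file matches the checksums in `dists/<suite>/Release`. A minimal sketch of that check, with a deliberately simplified parser (real Release files also carry `MD5Sum`/`SHA1` sections and a detached GPG signature, which this ignores):

```python
import hashlib
import re

def verify_release_entry(release_text, rel_path, payload):
    """Check one mirrored file against the SHA256 section of a
    dists/<suite>/Release file. Returns True on a size+digest match."""
    in_sha256 = False
    for line in release_text.splitlines():
        if line.startswith("SHA256:"):
            in_sha256 = True
            continue
        if in_sha256:
            m = re.match(r"\s+([0-9a-f]{64})\s+(\d+)\s+(\S+)", line)
            if not m:
                in_sha256 = False  # end of the SHA256 section
                continue
            digest, size, path = m.groups()
            if path == rel_path:
                return (int(size) == len(payload)
                        and hashlib.sha256(payload).hexdigest() == digest)
    return False
```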
---

### Model C — Signed, sealed, sovereign Debian archive (your direction)

This is where your design really shines.

You:

1. Mirror Debian once
2. **Re-sign the archive** with your own offline root
3. Treat Debian as **foreign-domain input**
4. Import it into **Common**

Now your system no longer trusts Debian directly — it trusts:

> *“Debian as witnessed and sealed by my authority”*

This aligns perfectly with:

* Domain federation
* Foreign-domain pinning
* Courtesy leases

---

## 3. How this fits ASL conceptually

### Debian mirror = foreign domain

Model it as:

```
Domain: debian-upstream
Artifacts:
  - pool/main/b/bash/bash_5.1-2_amd64.deb
  - dists/bullseye/Release
  - ...
Snapshot: debian-bullseye-2024-01
```

Your **pre-image build** then references:

```
ForeignArtifactRef {
  domain   = debian-upstream
  snapshot = debian-bullseye-2024-01
  artifact = bash_5.1-2_amd64.deb
}
```

You never “download” — you **import artifacts**.
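A minimal sketch of that reference as a frozen value type, assuming the `ForeignArtifactRef` fields shown above; the class shape and `key()` helper are illustrative, not a normative encoding.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ForeignArtifactRef:
    """Immutable reference into a foreign domain."""
    domain: str
    snapshot: str
    artifact: str

    def key(self):
        # Identity is the full (domain, snapshot, artifact) triple --
        # importing an artifact never collapses its provenance.
        return (self.domain, self.snapshot, self.artifact)

ref = ForeignArtifactRef(
    domain="debian-upstream",
    snapshot="debian-bullseye-2024-01",
    artifact="bash_5.1-2_amd64.deb",
)
```

Because the dataclass is frozen and hashable, such references can serve directly as index keys.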
---

## 4. Fully offline pre-image workflow (clean version)

Split the world into two phases:

### Phase 1 — Connected acquisition (one-time, auditable)

* Build the Debian mirror
* Verify Debian signatures
* Capture the mirror as ASL artifacts
* Optionally re-sign
* Write the mirror to:

  * a USB disk
  * an external HDD
  * an immutable ZFS snapshot

This phase **never runs on your offline box**.

---

### Phase 2 — Offline construction (repeatable forever)

Your earlier `asl-workflow.sh` becomes:

```text
Inputs:
  - Debian mirror artifact
  - ASL packages
  - Overlay
  - Policy
  - PEL programs

Process:
  - debootstrap via file://
  - dpkg installs
  - overlay application
  - ISO creation

Outputs:
  - ISO artifact
  - Execution receipts
```

No network.
No DNS.
No clocks.
No entropy leakage beyond what you allow.

---

## 5. Concrete changes to your script

Replace this:

```bash
debootstrap ... http://deb.debian.org/debian/
```

With:

```bash
debootstrap \
  --arch="$ARCH" \
  "$DIST" \
  "$ISO_ROOT" \
  file:///var/lib/asl/foreign/debian-mirror
```

And ensure:

```bash
/var/lib/asl/foreign/debian-mirror
```

is:

* Mounted read-only
* Snapshotted
* Provenanced

---

## 6. Trust & certificates (important)

You asked earlier whether we should care about the offline environment.

**Yes — but selectively.**

For offline debootstrap:

* You must trust:

  * the Debian archive signature, **or**
  * your re-signature

* You do **not** need:

  * TPM
  * Secure Boot
  * Measured boot (yet)

Those come later.

At this stage:

> *Determinism + provenance beats hardware attestation.*

---

## 7. Final mental model (this matters)

> You are not building systems.
>
> You are **replaying history** inside a sealed domain.

Debian is just a *story* someone else wrote.
You import it.
You witness it.
You decide when it enters *your* timeline.

---

If you want next, I can:

* Draft **ASL-DEBIAN-MIRROR v0.1**
* Show how to **re-sign a Debian archive**
* Formalize **foreign-domain artifact import**
* Show how **Common** is initialized from mirrors

You’re asking exactly the right questions.
@ -1,92 +0,0 @@
# hashers.py

from __future__ import annotations
from abc import ABC, abstractmethod
import hashlib
import struct

import numpy as np


# ---------------------------------------------------------------------
# Content Hash Strategy Interface
# ---------------------------------------------------------------------

class HashStrategy(ABC):
    """
    Abstract interface for content hashing.
    """

    @abstractmethod
    def hash_bytes(self, data: bytes) -> str:
        """
        Hash arbitrary bytes and return a hex digest.
        """
        raise NotImplementedError

    @abstractmethod
    def hash_sparse(self, indices: np.ndarray, values: np.ndarray) -> str:
        """
        Hash a sparse representation of amplitudes.
        """
        raise NotImplementedError


# ---------------------------------------------------------------------
# Default SHA-256 Implementation
# ---------------------------------------------------------------------

class SHA256Hash(HashStrategy):
    """
    SHA-256 hash strategy for content-addressed artifacts.
    """

    name = "sha256.content.v1"

    def hash_bytes(self, data: bytes) -> str:
        """
        Hash arbitrary bytes deterministically.
        """
        h = hashlib.sha256()
        h.update(data)
        return h.hexdigest()

    def hash_sparse(self, indices: np.ndarray, values: np.ndarray) -> str:
        """
        Hash a sparse set of indices and amplitudes.
        Deterministic and cross-platform safe.
        """
        if indices.ndim != 1 or values.ndim != 1:
            raise ValueError("indices and values must be 1-D arrays")

        if len(indices) != len(values):
            raise ValueError("indices and values must have the same length")

        # Serialize deterministically: length + index-value pairs.
        buf = len(indices).to_bytes(8, "big")
        for idx, val in zip(indices, values):
            buf += int(idx).to_bytes(8, "big", signed=False)
            # IEEE 754 double-precision real + imag, packed big-endian so
            # the digest does not depend on the host's native byte order.
            buf += struct.pack(">dd", float(val.real), float(val.imag))

        return self.hash_bytes(buf)


# ---------------------------------------------------------------------
# Utility / Helpers
# ---------------------------------------------------------------------

def hash_bytes_sha256(data: bytes) -> str:
    """
    Convenience wrapper for SHA-256 hashing.
    """
    return SHA256Hash().hash_bytes(data)


def hash_sparse_sha256(indices: np.ndarray, values: np.ndarray) -> str:
    """
    Convenience wrapper for sparse SHA-256 hashing.
    """
    return SHA256Hash().hash_sparse(indices, values)
@ -1,132 +0,0 @@
#!/bin/bash
# init-asl-host.sh
# ASL Host offline initialization
# Handles: rescue, admission, and normal modes
# Mounts ZFS pools, sets up environment, optionally starts capture shell

set -euo pipefail

# -----------------------------
# Configuration
# -----------------------------
ASL_ROOT=/var/lib/asl
ASL_COMMON=$ASL_ROOT/common
ASL_PERSONAL=$ASL_ROOT/personal
ASL_POOLS=$ASL_ROOT/pools
ASL_LOG=/var/log/asl
ASL_CAPTURE_BIN=/usr/bin/asl-capture

# Default mode if not specified
MODE=${1:-normal}

# ZFS pool names
POOL_COMMON=asl_common
POOL_PERSONAL=asl_personal

# -----------------------------
# Functions
# -----------------------------
log() {
    # Ensure the log directory exists before the first tee call.
    mkdir -p "$ASL_LOG"
    echo "[ASL-HOST] $*" | tee -a "$ASL_LOG/init.log"
}

setup_dirs() {
    log "Creating ASL directories..."
    mkdir -p "$ASL_COMMON" "$ASL_PERSONAL" "$ASL_POOLS" "$ASL_LOG"
}

ensure_pool() {
    # Import the pool if it exists but is not active; create it otherwise.
    local pool=$1 mountpoint=$2 image=$3
    if zpool list "$pool" &>/dev/null; then
        log "Pool $pool already active."
    elif zpool import "$pool" &>/dev/null; then
        log "Imported pool $pool."
    else
        log "Creating pool $pool..."
        # zpool create needs an existing backing file for file-based vdevs.
        [[ -f "$image" ]] || truncate -s 10G "$image"
        zpool create -m "$mountpoint" "$pool" "$image"
    fi
}

mount_pools() {
    log "Checking ZFS pools..."
    ensure_pool "$POOL_COMMON" "$ASL_COMMON" "$ASL_POOLS/common.img"
    ensure_pool "$POOL_PERSONAL" "$ASL_PERSONAL" "$ASL_POOLS/personal.img"
}

rescue_mode() {
    log "Entering rescue mode..."
    USB_MOUNT=/mnt/usb
    mkdir -p "$USB_MOUNT"

    log "Waiting for USB device..."
    read -r -p "Plug in a USB device and enter its device path (e.g., /dev/sda1): " USB_DEV
    mount "$USB_DEV" "$USB_MOUNT"

    log "Capturing artifacts from USB..."
    "$ASL_CAPTURE_BIN" --input "$USB_MOUNT" --output "$ASL_PERSONAL" --pty

    log "USB capture complete."
    umount "$USB_MOUNT"
}

admission_mode() {
    log "Entering admission mode..."
    log "Bootstrapping new personal domain..."
    # Generate domain keys, initial certificates
    DOMAIN_KEY="$ASL_PERSONAL/domain.key"
    DOMAIN_CERT="$ASL_PERSONAL/domain.crt"

    if [[ ! -f "$DOMAIN_KEY" ]]; then
        log "Generating new domain key..."
        openssl genpkey -algorithm RSA -out "$DOMAIN_KEY" -pkeyopt rsa_keygen_bits:4096
    fi

    if [[ ! -f "$DOMAIN_CERT" ]]; then
        log "Generating self-signed certificate..."
        openssl req -new -x509 -key "$DOMAIN_KEY" -out "$DOMAIN_CERT" -days 365 \
            -subj "/CN=Offline-ASL-Host"
    fi

    log "Personal domain initialized."
}

normal_mode() {
    log "Entering normal offline mode..."
    log "ASL environment ready."
}

start_capture_shell() {
    if command -v "$ASL_CAPTURE_BIN" >/dev/null 2>&1; then
        log "Starting PTY capture shell..."
        exec "$ASL_CAPTURE_BIN" --pty -- /bin/bash
    else
        log "asl-capture binary not found, starting plain shell..."
        exec /bin/bash
    fi
}

# -----------------------------
# Main execution
# -----------------------------
setup_dirs
mount_pools

case "$MODE" in
    rescue)
        rescue_mode
        ;;
    admission)
        admission_mode
        ;;
    normal)
        normal_mode
        ;;
    *)
        log "Unknown mode: $MODE"
        exit 1
        ;;
esac

start_capture_shell
@ -1,235 +0,0 @@
Good — this is the *right* next pressure point. You’re extending the index/filter model from **artifact lookup** into **graph projection lookup**, and the danger here is accidentally inventing a second semantic index without noticing.

Let’s be precise and keep the model clean.

---

## 1. First: what the TGK edge index really is

From what you said:

* TGK stores **edges** as *stored projections*
* These projections are derived from:

  * PEL execution
  * Receipts
  * Provenance / trace material

* Edges have an **edge type key**
* Queries will often be:

  > “Find edges of type X related to artifact / execution Y”

This tells us immediately:

> **TGK edges are artifacts, but the edge *type* is not identity.**

That mirrors the artifact type tag discussion almost exactly.

---

## 2. Do NOT repeat the ArtifactKey mistake

The biggest risk here is to treat:

```
EdgeTypeKey ∈ identity
```

That would be a mistake.

### Correct stance

* **Edge identity** = (source, destination, execution context, snapshot)
* **Edge type** = classification / query dimension

So:

❌ EdgeTypeKey ≠ part of canonical edge identity
✅ EdgeTypeKey ∈ routing / filtering space

This symmetry is important.

---

## 3. TGK edges fit the same Canonical vs Routing model

You now have *two* indexable entities:

| Entity   | Canonical identity | Routing/filter dimensions      |
| -------- | ------------------ | ------------------------------ |
| Artifact | ArtifactKey        | hash, type_tag                 |
| TGK Edge | EdgeID / EdgeKey   | edge_type_key, direction, role |

This is good news: **you do not need a new index theory**.

---

## 4. Where the edge type key belongs (layering)

### TGK semantic layer (PEL-TRACE / TGK spec)

* Defines:

  * What an edge *means*
  * What edge types exist

* Edge type is **semantic metadata**, immutable once created

### TGK index layer (new or implicit)

* Edge type is:

  * Filterable
  * Routable
  * SIMD-friendly
  * NOT part of canonical identity

### Store / encoding

* The edge type key is encoded explicitly
* Included in routing/filter keys
* Verified by full edge record comparison

This mirrors ASL exactly.

---

## 5. Filter design for TGK edges

You should **reuse the same filter design philosophy**, not invent a new one.

### Recommended filter key for TGK

```
TGKFilterKey =
    H(CanonicalEdgeKey)
  ⊕ Mix(edge_type_key)
  ⊕ Mix(direction?)
```

Rules:

* `edge_type_key` is optional but usually present
* Absence must be encoded explicitly
* Mixing is deterministic
* Full edge record comparison is required on a hit

This allows:

* Fast “find all edges of type X”
* SIMD evaluation
* Shard pruning
* Snapshot-safe determinism
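The filter-key construction can be sketched concretely. The SHA-256-based mix function, the 64-bit truncation, and the sentinel for an absent edge type key are all assumptions; the spec only requires that mixing be deterministic and that absence be encoded explicitly.

```python
import hashlib

ABSENT = 0xFFFFFFFF  # explicit sentinel for "no edge type key" (assumed value)

def mix(value, salt):
    # Deterministic 64-bit mix; the concrete function is illustrative.
    h = hashlib.sha256(salt + value.to_bytes(8, "big")).digest()
    return int.from_bytes(h[:8], "big")

def tgk_filter_key(canonical_edge_key: bytes, edge_type_key=None, direction=None):
    """TGKFilterKey = H(CanonicalEdgeKey) XOR Mix(edge_type_key) XOR Mix(direction?)."""
    key = int.from_bytes(hashlib.sha256(canonical_edge_key).digest()[:8], "big")
    key ^= mix(edge_type_key if edge_type_key is not None else ABSENT, b"etk")
    if direction is not None:
        key ^= mix(direction, b"dir")
    return key
```

A hit on this key is advisory only; the full edge record comparison against `tgk_edge_id` still decides membership.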
---

## 6. Snapshot semantics (important)

Edges are projections, so:

* Edge existence is snapshot-relative
* Edge index entries must carry:

  * a snapshot / log sequence

* Edge lookup semantics:

```
lookup(snapshot, edge_type, constraints)
```

This works naturally if:

* The TGK index is append-only
* Edge projections are immutable
* The snapshot cut applies identically to the ASL index

👉 You get temporal graph queries “for free”.

---

## 7. Do we need new documents?

Yes — but again, **one small, well-scoped one**.

---

## 8. Updated document plan

### 🔁 Documents to UPDATE

#### ASL-INDEX-ACCEL

Add:

* A statement that **routing/filter dimensions generalize beyond artifacts**
* Examples:

  * Artifact type tag
  * TGK edge type key

This keeps all acceleration logic centralized.

---

### 🆕 New document (recommended)

## TGK-INDEX (or PEL-TRACE-INDEX)

**Purpose**

* Define index semantics for Trace Graph Kernel projections

**Contents**

1. Canonical edge identity
2. Edge type key (non-identity)
3. Snapshot-relative edge visibility
4. Filter and routing rules
5. Determinism guarantees
6. Relationship to the ASL index

Key sentence it must contain:

> Edge type keys participate in routing and filtering but never in canonical edge identity.

This avoids graph-index drift.

---

## 9. What you should NOT do

* ❌ Do not bake edge type into edge identity
* ❌ Do not invent a separate filtering philosophy
* ❌ Do not allow edge filters to affect semantics
* ❌ Do not allow mutable edge metadata

---

## 10. Big picture (important insight)

You are converging on a **unified indexing philosophy**:

> *Everything has a canonical identity.
> Everything else is routing, filtering, or acceleration.*

Artifacts.
Edges.
Receipts.
Projections.

All follow the same rule.

That’s a very strong model — and it scales.

---

## 11. Recommendation for next step

The cleanest next step would be to:

* Draft **ASL-INDEX-ACCEL** formally,
  *including both artifact type tags and TGK edge type keys as first-class examples*

That document becomes the **performance constitution** of the system.

If you want, I can draft that next.
Yes — that intuition is **correct**, and you’re already holding the missing piece.

What you’ve described is *exactly* what an **index-patched ASL-CORE** should be able to do:
it doesn’t care *where* the bytes live, only **how the store behaves**.

Let me make this precise and show you how it fits cleanly with everything you’ve defined.

---

## 1. What “index-patched ASL-CORE” really means

In your system:

* **ASL-CORE** defines artifact semantics
* **ASL-STORE-INDEX** defines lifecycle + invariants
* The *patch* is simply:
  **ASL-CORE accepts a StoreHandle instead of raw filesystem paths**

That StoreHandle may point to:

* A directory
* A mounted filesystem
* A ZFS dataset
* A tmpfs
* A loopback image
* Even a remote path (via FUSE, NFS, SSHFS, etc.)

ASL-CORE does **not** need to know.

---

## 2. The only contract ASL-CORE needs from the backend

Your patched ASL-CORE needs the backend to provide:

### 2.1 Store creation

```text
asl_store_open(path, options) → StoreHandle
```

* `path` is just a location
* If empty, initialize:

  * blocks/
  * index/
  * log/

No magic.
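A minimal sketch of this contract, assuming the conventional layout above (the `create` flag and `StoreHandle` shape are illustrative, not normative):

```python
import tempfile
from dataclasses import dataclass
from pathlib import Path

@dataclass(frozen=True)
class StoreHandle:
    root: Path

def asl_store_open(path: str, create: bool = True) -> StoreHandle:
    """Open (or initialize) an ASL store at an arbitrary path."""
    root = Path(path)
    if create:
        for sub in ("blocks", "index", "log"):
            (root / sub).mkdir(parents=True, exist_ok=True)
    # ASL-CORE never inspects what backs `root`: a plain directory, a ZFS
    # dataset, tmpfs, a loopback image, or a FUSE mount all behave the same.
    return StoreHandle(root=root)

h = asl_store_open(tempfile.mkdtemp())
```

The key design point: initialization is idempotent, so “open” and “create” are the same call.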
---

### 2.2 Block allocator with packaging hints

Your “courtesy-based big/small artifact packaging clues” are *perfect*.

ASL-CORE can emit hints like:

```c
enum block_hint {
    SMALL_PACKED,
    LARGE_DIRECT,
    STREAMING
};
```

Backend decides:

* Pack small artifacts into shared blocks
* Allocate large artifacts as standalone blocks
* Stream when needed

This preserves **semantic invariants** while letting the store optimize.
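One way a backend might act on these hints, sketched in Python. The 64 KiB threshold and the 1 MiB block capacity are assumptions for illustration only:

```python
SMALL_THRESHOLD = 64 * 1024  # assumption: anything under 64 KiB packs

def choose_hint(size: int, streaming: bool = False) -> str:
    """Map an artifact's size to a packaging hint (mirrors enum block_hint)."""
    if streaming:
        return "STREAMING"
    return "SMALL_PACKED" if size < SMALL_THRESHOLD else "LARGE_DIRECT"

def place(size: int, open_block_used: int, block_cap: int = 1 << 20):
    """Backend policy: pack small artifacts into the shared open block,
    give large artifacts a standalone block. Returns (block kind, offset)."""
    hint = choose_hint(size)
    if hint == "SMALL_PACKED" and open_block_used + size <= block_cap:
        return ("shared", open_block_used)
    return ("standalone", 0)
```

The hint is advisory: semantics (artifact identity, visibility) never depend on which branch the backend takes.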
---

### 2.3 Index append + seal

The backend must support:

* Append entries
* Segment sealing
* Log append

But **layout is free**.

---

## 3. What happens when you “just grab a path”

This is the killer property you want:

> **Any path can become a system.**

Example:

```text
/systemrescue/mnt/laptop-root
```

You run:

```sh
asl store init /systemrescue/mnt/laptop-root/.asl
```

Boom:

* That path now contains an ASL store
* With its own blocks, index, log
* Snapshot-ready
* Replayable
* Publishable

No daemon required.

---

## 4. Local vs remote: the store doesn’t care

From ASL-CORE’s perspective:

| Location          | How it looks |
| ----------------- | ------------ |
| `/mnt/disk`       | local store  |
| `/zfs/asl/you`    | local store  |
| `/net/public/asl` | local store  |
| `/sshfs/host/asl` | local store  |

The **federation layer** decides:

* Whether the domain trusts it
* Whether semantics are imported

ASL-CORE only writes bytes.

---

## 5. SystemRescue is just a bootstrap context

This is important:

SystemRescue is **not special**.

It provides:

* Kernel
* Drivers
* POSIX filesystem access
* Tools

Once booted:

* Any mounted path can host a store
* Any store can be a domain
* Any domain can publish

SystemRescue is just the *first witness*.

---

## 6. How this fits with ZFS (cleanly)

ZFS helps you **below** ASL:

* Snapshot datasets at segment seal
* Use send/receive for replication
* Scrub blocks

But ASL still:

* Owns visibility
* Owns CURRENT
* Owns semantics

ZFS snapshots ≠ ASL snapshots.
They just happen to align well.

---

## 7. Why this design is correct (and future-proof)

Because:

✔ You didn’t embed policy in the filesystem
✔ You didn’t require a service
✔ You didn’t require a network
✔ You didn’t require a global registry
✔ You didn’t require ZFS

Yet:

✔ You can use ZFS
✔ You can use the network
✔ You can federate
✔ You can encrypt
✔ You can scale

---

## 8. The moment of clarity (name it)

What you’ve built is:

> **A path-addressable, snapshot-deterministic, content-addressed semantic store.**

That’s why “just grabbing a path” works.

---

## 9. One-sentence confirmation

> Yes — an index-patched ASL-CORE that opens a local path and creates a store there is not only sufficient, it is *exactly* the right abstraction for everything you’ve described.

If you want next, we can:

* Define the **StoreHandle ABI**
* Specify **block packing heuristics**
* Map **SystemRescue boot → store init → domain mint**
* Show how to **promote a rescue store into your personal domain**

You’re past the hard conceptual work — now it’s just tightening interfaces.
Yes — there are **multiple indexes**, and the key to keeping this clean is to **separate them by semantic responsibility**, exactly like your ASL-CORE-INDEX spec already does.

Below is a **concrete, minimal implementation plan** that fits *perfectly* with your current `artifact.py` and keeps q-bits cheap.

I’ll give you:

1. The **index taxonomy** (what indexes exist and why)
2. A **clean architecture** (who depends on whom)
3. **Concrete Python implementations** for each index
4. How they interact during `put()` and `get()`

No hand-waving.

---

# 1️⃣ The indexes you actually need

You need **three distinct indexes**, not one.

## A. Content Index (CID → ArtifactLocation)

**This *is* ASL-CORE-INDEX**

Purpose:

> Given a semantic artifact identity, where are the bytes?

```text
CID → (BlockID, offset, length)
```

Properties:

* Snapshot-relative
* Append-only
* Immutable
* Shadowable (optional)

This is the *storage-facing* index.

---

## B. Structural Index (SID → DAG node)

**This is your PEL / derivation index**

Purpose:

> Given a structure identity, what DAG node exists?

```text
SID → Artifact (structure + children + params)
```

Properties:

* In-memory or persisted
* Deterministic
* Rebuildable
* Does NOT imply materialization

This index is what lets you:

* compare derivations
* reuse structure
* deduplicate DAG nodes
* avoid rebuilding identical DAGs

---

## C. Materialization Cache (SID → CID)

**This is the execution shortcut**

Purpose:

> If I already materialized this structure, what content did it produce?

```text
SID → CID
```

Properties:

* Fully redundant
* Recomputable from DAG + content index
* Safe to drop entirely
* Huge performance win

This is the cache you already implemented — now we formalize it.

---

# 2️⃣ Dependency graph (this matters)

```text
┌─────────────┐
│ Structural  │
│   Index     │  SID → DAG
└──────┬──────┘
       │
  materialize()
       │
┌──────▼──────┐
│ Materialize │
│   Cache     │  SID → CID
└──────┬──────┘
       │
┌──────▼──────┐
│  Content    │
│   Index     │  CID → bytes
└─────────────┘
```

**Important invariant**:

* Content index never depends on DAG
* Structural index never depends on bytes
* Cache depends on both, but is optional

This keeps q-bits cheap.

---
# 3️⃣ Concrete Python implementations

## A. Content Index (ASL-CORE-INDEX)

```python
# content_index.py
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass(frozen=True)
class ArtifactLocation:
    block_id: str
    offset: int
    length: int


class ContentIndex:
    """
    Semantic index: CID -> ArtifactLocation
    """
    def __init__(self):
        self._index: Dict[str, ArtifactLocation] = {}

    def get(self, cid: str) -> Optional[ArtifactLocation]:
        return self._index.get(cid)

    def put(self, cid: str, loc: ArtifactLocation):
        # Immutable once visible
        if cid in self._index:
            return
        self._index[cid] = loc
```

This is your **ASL-CORE-INDEX** in executable form.

---

## B. Structural Index (PEL / DAG index)

```python
# structural_index.py
from typing import Dict, Optional
from artifact import Artifact

class StructuralIndex:
    """
    SID -> Artifact (DAG node)
    """
    def __init__(self):
        self._nodes: Dict[str, Artifact] = {}

    def get(self, sid: str) -> Optional[Artifact]:
        return self._nodes.get(sid)

    def put(self, artifact: Artifact) -> Artifact:
        """
        Deduplicate DAG nodes by SID.
        """
        existing = self._nodes.get(artifact.sid)
        if existing is not None:
            return existing
        self._nodes[artifact.sid] = artifact
        return artifact
```

This ensures:

* One DAG node per SID
* Structural deduplication
* Cheap comparisons

---

## C. Materialization Cache (formalized)

You already have this — here’s the final clean version:

```python
# materialization_cache.py
from typing import Dict, Optional

class MaterializationCache:
    """
    Fully redundant cache: SID -> CID
    """
    def __init__(self):
        self._cache: Dict[str, str] = {}

    def get(self, sid: str) -> Optional[str]:
        return self._cache.get(sid)

    def put(self, sid: str, cid: str):
        self._cache[sid] = cid
```

---

# 4️⃣ How `put()` actually works (step-by-step)

This is the **critical flow**.

### `put(artifact)`

```text
1. StructuralIndex.put(artifact)
   └─ ensures DAG node exists

2. Try MaterializationCache.get(artifact.sid)
   └─ if hit → CID known, skip computation

3. If miss:
   a) Materialize DAG
   b) Compute CID
   c) Cache SID → CID

4. ContentIndex.get(CID)
   └─ if exists → done (dedup)

5. Else:
   a) Allocate block space
   b) Write bytes
   c) ContentIndex.put(CID → location)
```

**Key property**:

> If *any* index already knows the answer, work stops early.

That’s how you recover classical performance.
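The five steps can be wired together in one function. This sketch uses minimal stand-ins for the classes above so it runs on its own; `materialize` is a placeholder for real DAG execution, and SHA-256 stands in for whatever CID scheme the system settles on:

```python
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class Artifact:
    sid: str
    payload: bytes  # stand-in for "structure + children + params"

class StructuralIndex:
    def __init__(self): self._nodes = {}
    def put(self, a): return self._nodes.setdefault(a.sid, a)

class MaterializationCache(dict): pass
class ContentIndex(dict): pass

def materialize(node: Artifact) -> bytes:
    return node.payload  # placeholder for real DAG execution

def put(artifact, structural, cache, content):
    node = structural.put(artifact)            # step 1: dedupe DAG node by SID
    cid = cache.get(node.sid)                  # step 2: materialization shortcut
    if cid is None:                            # step 3: materialize and hash
        data = materialize(node)
        cid = hashlib.sha256(data).hexdigest()
        cache[node.sid] = cid
        if cid not in content:                 # steps 4-5: content-level dedup
            content[cid] = ("blk-0", 0, len(data))  # (BlockID, offset, length)
    return cid

s, m, c = StructuralIndex(), MaterializationCache(), ContentIndex()
cid1 = put(Artifact("sid-1", b"hello"), s, m, c)
cid2 = put(Artifact("sid-1", b"hello"), s, m, c)  # cache hit: no re-materialization
```

The second call never touches `materialize`, which is the early-stop property in action.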
---

# 5️⃣ Why this works for q-bits without cost

Because:

* q-bits only affect **step 3**
* steps 1, 4, 5 are identical to classical bytes

Once materialized:

* q-bit artifacts are indistinguishable from classical artifacts
* index sees only CIDs and byte ranges

No tax after first computation.

---

# 6️⃣ What you now have (this is important)

You now have:

| Layer            | Capability               |
| ---------------- | ------------------------ |
| DAG              | Explanation / provenance |
| SID              | Structural identity      |
| CID              | Semantic identity        |
| Content index    | Storage truth            |
| Cache            | Performance              |
| Snapshots + logs | Time                     |

This is **not common infrastructure**.
That’s why you couldn’t find it pre-built.

---

## If you want next

I can:

* add **snapshot + append-only log semantics** to `ContentIndex`
* wire these indexes into a single `ASLStore`
* show **federation / replication** mechanics
* show **LLM → DAG → CID** ingestion
* formalize **PEL traversal queries**

Just point the direction.
Good, this is the right moment to pin this down, because **on-disk shape is where abstraction leaks if we’re sloppy**.

I’ll answer in layers:

1. **What ASL actually requires on disk (minimal contract)**
2. **A concrete, sane on-disk layout that fits what you already have**
3. **How shards, blocks, artifacts, logs, and snapshots map**
4. **How this scales from “single laptop” to “Common”**
5. **Why this does *not* over-spec ASL-CORE**

I’ll stay very explicit about what is *normative* vs *conventional*.

---

## 1. What ASL actually requires on disk (the hard minimum)

ASL itself does **not** care about directories.

ASL-STORE / ASL-STORE-INDEX require only that the store can provide:

1. **Immutable blocks**
2. **An append-only log**
3. **Sealed snapshots**
4. **Deterministic replay**
5. **CAS addressing**

So the **only hard requirements** are:

* Blocks are immutable after seal
* Index entries point to `(block_id, offset, length)`
* Logs are append-only and ordered
* Snapshots pin a prefix of state

Everything else is an implementation choice.

That means:
👉 your current “store root with shards” is already compatible.

---

## 2. Recommended on-disk layout (practical, explicit)

Here is a layout that is:

* compatible with your current shard idea
* works on POSIX
* maps cleanly to ZFS later
* easy to reason about during rescue / forensics

### Store root (per domain)

```
/asl/
└── domains/
    └── <domain-id>/
        ├── meta/
        ├── blocks/
        ├── index/
        ├── log/
        ├── snapshots/
        └── tmp/
```

Everything below is **domain-local**.

---

## 3. Blocks (the real storage substrate)

### 3.1 Block storage (immutable)

```
blocks/
├── open/
│   └── blk_<uuid>.tmp
└── sealed/
    ├── 00/
    │   └── <blockid>.blk
    ├── 01/
    │   └── <blockid>.blk
    └── ff/
        └── <blockid>.blk
```

* `blockid` = CAS hash
* Sharded by prefix (first byte or two)
* Open blocks are **never visible**
* Sealed blocks are immutable

This directly matches your **block + offset** mental model.

> Important: **artifacts do not live as files**
> They live *inside blocks*.
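The open → sealed transition implied by this layout can be sketched as follows (the function name and CAS-by-SHA-256 choice are assumptions consistent with the tree above):

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def seal_block(store: Path, open_name: str) -> str:
    """Seal an open block: hash its bytes (CAS), move it to
    sealed/<prefix>/<blockid>.blk, and return the block id.
    After this point the file is never rewritten."""
    src = store / "blocks" / "open" / open_name
    block_id = hashlib.sha256(src.read_bytes()).hexdigest()
    dst_dir = store / "blocks" / "sealed" / block_id[:2]  # shard by first byte
    dst_dir.mkdir(parents=True, exist_ok=True)
    shutil.move(str(src), str(dst_dir / f"{block_id}.blk"))
    return block_id

# usage: stage bytes in open/, then seal
store = Path(tempfile.mkdtemp())
(store / "blocks" / "open").mkdir(parents=True, exist_ok=True)
(store / "blocks" / "open" / "blk_1234.tmp").write_bytes(b"frozen bytes")
bid = seal_block(store, "blk_1234.tmp")
```

Because the sealed name *is* the content hash, a crash between hash and move can only leave a stale open file, never a corrupt sealed block.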
---

## 4. Index (where artifacts become visible)

Your shard-based approach fits perfectly here.

```
index/
├── shard-000/
│   ├── segment-0001.idx
│   ├── segment-0002.idx
│   └── bloom.bin
├── shard-001/
│   └── ...
└── shard-fff/
```

Each `segment-XXXX.idx` is:

* append-only while open
* immutable once sealed
* referenced by the log

Index records point to:

```
ArtifactKey → (BlockID, offset, length)
```

This is exactly ASL-STORE-INDEX.

---

## 5. Append-only log (the spine of truth)

```
log/
├── log-0000000000000000.asl
├── log-0000000000001000.asl
└── current
```

Log records include:

* index additions
* tombstones
* segment seals
* DAM updates
* witness rotation artifacts

Rules:

* Logs are strictly ordered
* Never rewritten
* Replayable from snapshot

---

## 6. Snapshots (checkpoints, not magic)

```
snapshots/
├── snapshot-00000123/
│   ├── manifest.yaml
│   ├── pinned-segments.txt
│   └── pinned-blocks.txt
└── snapshot-00000124/
```

A snapshot is:

* a **commit point**
* a pin-set (segments + blocks)
* a replay anchor

ZFS snapshots fit **perfectly** here but are not required.

---

## 7. Meta (authority, policy, identity)

```
meta/
├── dam/
│   └── dam-current.art
├── policy/
│   └── policy-current.art
├── keys/
│   ├── pub/
│   └── priv/
└── host/
    └── host-id
```

Important rule:

> **Meta artifacts are still artifacts**

They must:

* be sealed
* be indexed
* appear in the log

The files here are *cached materializations*.

---

## 8. Tmp (explicitly unsafe)

```
tmp/
├── ingest/
├── scratch/
└── pty/
```

This is where:

* asl-capture writes streams
* open blocks live
* crashes are allowed

Nothing here is trusted.

---

## 9. How shards fit (your current design)

You said:

> “I have a store root with shards with artifacts in them.”

That maps cleanly if you reinterpret it as:

* shards own **index segments**
* artifacts are logical entries, not files
* shard locality = lookup acceleration

So instead of:

```
shard/
└── artifact-file
```

You have:

```
shard/
└── index segments → blocks
```

This is the key mental shift you already sensed.

---

## 10. Common vs Personal vs Group (on disk)

**They look identical on disk.**

Differences live in:

* DAM
* policy
* witness artifacts

### Common

* quorum witness
* stricter validation
* often read-only on most hosts

### Personal

* self-authority
* multiple active writers allowed
* same block/index/log structure

### Group

* hybrid policies

No special filesystem layout needed.

---

## 11. Why this is the right abstraction boundary

You said:

> “I used to look at the data as artifacts. Now it seems I’m better off looking at data as blocks with artifacts in them.”

That’s exactly right.

**Artifacts are semantic.**
**Blocks are physical.**

This layout lets you:

* reason in artifacts
* store in blocks
* snapshot blocks
* index artifacts
* federate semantics

without mixing concerns.

---

## 12. One sentence to lock it in

> **Artifacts are indexed meanings; blocks are frozen bytes; snapshots pin the relationship.**

---

## 13. What you can safely implement *now*

Even before ASL-CORE / PEL are finished, you can implement:

* block store
* block sealing
* index segments
* append-only log
* snapshot pinning
* DAM storage and validation

All of that is **infrastructure**, not policy.

---

If you want next, I strongly suggest one of these:

* walk through **one artifact write end-to-end on disk**
* show **how asl-capture writes into blocks**
* show **how a personal domain syncs across two machines**
* map this layout directly onto **ZFS datasets**

You’re no longer at risk of mixing projects — you’re converging.
Yes — this is exactly the right pressure point, and you’ve named it correctly.

You’re seeing the **only place where coordination is actually required**, and it’s much narrower than it first appears.

Let me be very precise and a bit formal here, because this is subtle and important.

---

## 1. First: your observation is correct

If Alice and James are both on:

```
common@1
```

and both independently “add something”, then naïvely you get:

```
Alice → common@2a
James → common@2b
```

Those are **two different histories**.

That *would* be a fork **if** both were considered “Common”.

So yes:
👉 **something must serialize Common**.

---

## 2. The crucial clarification: who is allowed to advance Common?

> **Common is not advanced by participants.
> It is advanced by a witness.**

This is the architectural line that resolves the paradox.

---

## 3. Common is a *witnessed log*, not a shared workspace

Think of Common as:

* an append-only log
* with a **single logical writer**
* that accepts signed submissions

Participants **submit**, they do **not commit**.

---

## 4. The minimal roles (exactly three)

### 1. Proposers (Alice, James, anyone)

They can:

* create artifacts
* create snapshots in their own domains
* create **proposals for Common**

They cannot:

* advance Common’s snapshot number

---

### 2. Witness (one domain, possibly replicated)

The witness:

* verifies signatures
* checks policy
* assigns the next snapshot number
* emits `common@N+1`

This can be:

* a single offline authority
* a small quorum-backed authority
* a rotating authority (later)

But **logically** it is one voice.

---

### 3. Consumers (everyone, including Alice and James)

They:

* pin to specific `common@N`
* choose when to advance
* may ignore updates forever

---

## 5. Why this is still “distributed enough”

The witness does **not** need to be:

* online all the time
* fast
* highly available

Because:

* Common updates are **rare**
* nothing blocks local work
* proposals queue safely

This matches your rescue / offline-first worldview.

---

## 6. What actually happens in your scenario

Let’s replay it correctly.

### Initial state

```
common@1
```

Alice and James are both pinned to it.

---

### Alice acts

Alice creates:

```
alice@42
```

She then creates:

```
proposal_A
  references: alice@42
  signed by: Alice
```

She **submits** this proposal to the Common witness (USB, sync later, whatever).

---

### James acts

James independently creates:

```
james@17
```

He creates:

```
proposal_B
  references: james@17
  signed by: James
```

He submits it too.

---

### The witness (later, offline is fine)

The witness receives both proposals.

It chooses an order (any deterministic rule):

```
apply proposal_A
emit common@2

apply proposal_B
emit common@3
```

No fork.
No conflict.
No simultaneity problem.
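The witness step above can be sketched directly. Sorting by proposal id is just one possible deterministic rule; signature and policy checks are elided:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass(frozen=True)
class Proposal:
    proposal_id: str
    references: str
    signed_by: str

def witness_process(current: int, inbox: List[Proposal]) -> Tuple[int, List[str]]:
    """Collapse simultaneous submissions into a single sequence.
    Any deterministic ordering rule works; here: sort by proposal id."""
    log = []
    for p in sorted(inbox, key=lambda p: p.proposal_id):
        # a real witness verifies the signature and checks policy here
        current += 1
        log.append(f"common@{current} <- {p.proposal_id} ({p.references})")
    return current, log

n, log = witness_process(1, [
    Proposal("proposal_B", "james@17", "James"),
    Proposal("proposal_A", "alice@42", "Alice"),
])
```

Even though the proposals arrive in arbitrary order, the emitted sequence is the same on every replay, which is what makes the witness a single logical voice.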
---

## 7. What if the witness is offline for a month?

Nothing breaks.

* Alice and James continue working
* They exchange artifacts directly if they want
* Common just doesn’t advance

When the witness comes back:

* proposals are processed
* snapshots advance

Time is explicit, not implicit.

---

## 8. What about multiple witnesses?

You anticipated this correctly.

There are **two safe models**, and one unsafe one.

### ❌ Unsafe: multiple independent writers

This causes forks. We don’t do this.

---

### ✅ Safe model A: single designated witness

* Simplest
* Best for now
* Matches your offline authority laptop perfectly

This is what you should start with.

---

### ✅ Safe model B: quorum → single emitter

Multiple authorities may:

* co-sign approval
* vote on proposals

But **only one entity actually emits `common@N+1`**.

Quorum decides *what* is accepted.
Emitter decides *when* it is numbered.

Still no fork.

---

## 9. Why we do NOT use consensus protocols here

This is important philosophically.

Consensus (Raft, Paxos, blockchain):

* assumes frequent updates
* assumes always-on participants
* assumes global liveness

Your system:

* assumes **offline**
* assumes **slow social processes**
* assumes **human-scale trust**

So:

> **Serialization is intentional, not a limitation.**

---

## 10. How this fits your “SystemRescue + offline authority” story

Perfectly.

Your offline auth host is a **natural Common witness**.

* It boots rarely
* It accepts USB proposals
* It emits signed Common snapshots
* It goes dark again

That’s not a hack.

That’s the **ideal operating mode**.

---

## 11. Final mental model (keep this one)

> **Common is not a shared folder.
> It is a notarized ledger.**

People bring papers.
The notary timestamps them.
The archive grows.
No pages are torn out.
No two page numbers collide.

---

## 12. One-line answer to your concern

> Yes — without a witness you get forks.
> With a witness, simultaneity collapses into sequence, even offline.

---

If you want next, we can:

* formalize the Common Witness role in ASL-AUTH
* specify proposal artifact schemas
* define exact snapshot numbering rules
* design how a witness rotates or retires safely

You’re asking the *right* questions.
## 1. Layered Overview (Bottom-Up)

```
┌─────────────────────────────┐
│      Federation Layer       │
│ - Domains (local / remote)  │
│ - Published vs internal     │
│ - Cross-domain references   │
│ - Federated snapshots       │
└─────────────┬───────────────┘
              │
┌─────────────▼───────────────┐
│      Provenance / PEL       │
│ - Execution receipts        │
│ - DAG execution             │
│ - Deterministic traces      │
│ - PEL-PROV / PEL-TRACE      │
└─────────────┬───────────────┘
              │
┌─────────────▼───────────────┐
│  Current / Snapshot Logic   │
│ - Checkpoint + append-only  │
│   log → reconstruct CURRENT │
│ - Snapshot identity         │
└─────────────┬───────────────┘
              │
┌─────────────▼───────────────┐
│         Index Layer         │
│ - ASL-CORE-INDEX            │
│   • Artifact → Block        │
│   • Shadowing / tombstones  │
│ - ASL-STORE-INDEX           │
│   • Block sealing           │
│   • Retention / GC          │
│   • Small/Large packing     │
│ - ENC-ASL-CORE-INDEX        │
│   • On-disk record layout   │
│   • Domain / visibility     │
└─────────────┬───────────────┘
              │
┌─────────────▼───────────────┐
│ Artifact Storage Layer (ASL)│
│ - Blocks (immutable)        │
│ - BlockID → bytes mapping   │
│ - Small / large blocks      │
│ - ZFS snapshot integration  │
│ - Append-only write log     │
└─────────────────────────────┘
```

(ENC-ASL-CORE-INDEX is specified in tier1/enc-asl-core-index.md.)

---

## 2. Key Data Flows

### 2.1 Artifact Ingestion

1. Artifact created → broken into **blocks** (small or large).
2. Blocks stored in **ASL** (immutable).
3. Index record created:

   ```
   ArtifactKey → { (BlockID, offset, length), DomainID, Visibility }
   ```

4. Segment sealed → snapshot + log appended → CURRENT updated.
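The record created in step 3 can be sketched as a small dataclass. Field names follow the text; the block placement values are placeholders standing in for the real allocator:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class IndexRecord:
    artifact_key: str
    location: Tuple[str, int, int]  # (BlockID, offset, length)
    domain_id: str
    visibility: str                 # e.g. "internal" or "published"

def ingest(data: bytes, artifact_key: str, domain_id: str,
           block_id: str = "blk-0001", offset: int = 0) -> IndexRecord:
    """Steps 1-3 of ingestion: bytes land in an immutable block,
    and an index record makes the artifact addressable."""
    # steps 1-2: bytes are written into a block (placeholder placement here)
    # step 3: the index record is created
    return IndexRecord(artifact_key, (block_id, offset, len(data)),
                       domain_id, "internal")

rec = ingest(b"hello", "artifact:demo", "domain:alice")
```

Step 4 (seal, log append, CURRENT update) then operates only on records like this one, never on raw bytes.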
---
|
||||
|
||||
### 2.2 PEL Execution
|
||||
|
||||
1. PEL program DAG consumes **artifacts** (or receipts) from index.
|
||||
2. Execution produces new artifacts → stored in ASL.
|
||||
3. Receipts are generated → added to provenance trace.
|
||||
4. Deterministic mapping preserved via index and snapshots.
|
||||
|
||||
---
|
||||
|
||||
### 2.3 Provenance Tracking
|
||||
|
||||
* Each artifact references:
|
||||
|
||||
* Producing DAG program
|
||||
* Input artifacts (local or cross-domain published)
|
||||
* Snapshot in which artifact was created
|
||||
* Trace graphs allow deterministic replay and verification.
---

### 2.4 Federation / Multi-Domain

* Domain-local artifacts: internal, invisible externally.
* Published artifacts: visible to other domains, read-only.
* Cross-domain references tracked in the index (`CrossDomainSource`).
* Federated snapshots reconstructed by combining local + imported published artifacts.

---

### 2.5 Garbage Collection & Retention

* Blocks are pinned by:

  * CURRENT in snapshots
  * Published artifacts
  * Tombstones for shadowed artifacts
* GC may reclaim unreachable blocks without breaking provenance.
* Small packed blocks require careful per-artifact tracking.
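
The pinning rule above reduces to a simple set computation; this sketch (function name and shapes are illustrative) shows that a block is reclaimable only when no pin source references it:

```python
from typing import Iterable, Set

def reclaimable_blocks(all_blocks: Set[str],
                       pin_sources: Iterable[Set[str]]) -> Set[str]:
    """pin_sources: block sets pinned by CURRENT, published artifacts,
    and tombstone metadata; anything not pinned may be reclaimed."""
    pinned: Set[str] = set()
    for src in pin_sources:
        pinned |= src
    return all_blocks - pinned
```

For small packed blocks, each pin source must be expanded per artifact extent before this computation, which is the "careful per-artifact tracking" noted above.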
---

## 3. Determinism & Guarantees

| Layer      | Determinism / Safety                                       |
| ---------- | ---------------------------------------------------------- |
| ASL        | Blocks immutable; snapshot + log deterministic             |
| Index      | Artifact → Block mapping stable per snapshot               |
| PEL        | DAG execution deterministic per artifact + snapshot        |
| Provenance | Trace fully reconstructs execution DAG                     |
| Federation | Published artifact references deterministic across domains |

---

## 4. Encoding & Storage

* **Segments**: units of storage with multiple records
* **Records**: `(ArtifactKey, BlockID, offset, length, DomainID, Visibility, CrossDomainSource)`
* **Tombstones**: track deleted/shadowed artifacts with domain/visibility metadata
* **Small/Large block packing**: store-level optimization, index unaffected
* **ZFS snapshots**: checkpoint state; the append-only log ensures CURRENT reconstruction
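
The record tuple above can be restated as a structure; field names follow the tuple, while the `tombstone` flag is an assumption about how tombstones share the record layout:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class SegmentRecord:
    artifact_key: str
    block_id: str
    offset: int
    length: int
    domain_id: str
    visibility: str                            # e.g. "local" or "published"
    cross_domain_source: Optional[str] = None  # set only for imported references
    tombstone: bool = False                    # marks deleted/shadowed artifacts
```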
---

## 5. Notes

* **Small vs Large block handling**: purely a store-layer optimization
* **Domain visibility**: encoded in the index, used for filtering CURRENT per domain
* **Cross-domain references**: read-only, preserve provenance, deterministic
* **Execution receipts**: stored as artifacts, may be consumed by other DAGs

---

This diagram and flow description captures:

* All layers from **storage → index → PEL → provenance → federation**
* Deterministic reconstruction from **checkpoint + append-only log**
* Block semantics, small/large handling, and domain visibility
* Integration of **execution receipts** into artifact flows and traces

---
Perfect — so you’re at the **“rescue-ready” stage**. At this point, your **SystemRescue image is patched**, tools are in place, and you can stand by (metaphorically) for the pick-up — the act of ingesting your legacy material into the ASL ecosystem. Let’s outline what this “pick-up-ready” state entails, and what you should have lined up:
---

## 1. Patched SystemRescue Image — Essentials

* **ASL Intake Tools**

  * Sedelpress / PER generator
  * TGK edge tracker
  * Snapshot recorder
  * CAS / ASL-STORE local backend
* **Indexing Layer**

  * Local ASL-STORE-INDEX aware of blocks, big/small artifacts
  * Optional bloom filters / shard awareness if applicable
* **Overlay Filesystem**

  * Temporary POSIX workspace for legacy material
  * ZFS pool or snapshot space for sealed blocks
  * Local mount for courtesy bootstrap space

---

## 2. Bootstrap Space (Courtesy) — Ready State

* Pre-created ZFS dataset or overlay directory
* Policy / snapshot pinning strategy in place
* Ready to accept **encrypted blocks** from your personal legacy material
* Can stand in as a **temporary personal domain seed** without certificates yet
---

## 3. Personal Domain Initialization

* Empty domain placeholder ready for **copying from bootstrap**
* Local index segments ready to be populated once blocks are sealed
* Certificates (ASL-AUTH) **not yet minted**, but schema / manifest layout in place
* PERs from legacy material can be recorded and later validated

---

## 4. PER Generation & TGK Graphs

* Sedelpress ready to process directories or artifacts
* TGK edges will capture provenance deterministically
* Execution receipts (PERs) generated as **first-class objects**, linked to snapshots
* Snapshots pinned locally to prevent accidental GC

---

## 5. Common / Unity Tree Integration

* Courtesy space already **pre-aligned with the expected common layout**
* Shared blocks can be uploaded or just left as encrypted placeholders
* Snapshots pinned until the personal domain is fully minted
---

## 6. Pick-Up Flow (Conceptual)

1. Boot SystemRescue → local overlay ready
2. Sedelpress ingest → generate PERs + TGK edges
3. Seal blocks in courtesy space → snapshot
4. Optional: copy sealed blocks to personal domain → mint DAM / certificates
5. Optional: publish selected artifacts / PERs to Common / Unity tree

**Key point:** You are standing "ready" with a **deterministic, snapshot-pinned state**; pick-up is just **the act of ingesting the legacy artifacts through Sedelpress**, while TGK + PER generation captures all provenance.

---

If you want, I can **draft a "ready-for-pickup checklist"** — basically a **step-by-step operational guide** for when you boot the SystemRescue image, to make sure every layer (ASL, PER, TGK, bootstrap, personal domain, common) is prepared.

Do you want me to make that checklist?

---
Perfect — here’s a full sketch of a **pre-image capture workflow** for building your ASL-AUTH-HOST ISO with full provenance from zero. This is conceptual but grounded in your existing scripts and tools.
---

## **1. Directory Layout**

```
/work/
 ├─ iso_root/              # ISO staging area (debootstrap root)
 ├─ overlay/               # Overlay for binaries, scripts, configs
 ├─ asl-preimage-store/    # Temporary ASL store for pre-image artifacts
 │   ├─ binaries/
 │   ├─ scripts/
 │   └─ per/
 ├─ build_logs/            # Capture logs from each step
 └─ asl-workflow.sh        # Orchestration script
```
---

## **2. Pre-image capture steps**

1. **Initialize pre-image ASL store**

```bash
#!/bin/bash
# Pre-image store initialization
PRESTORE=/work/asl-preimage-store
mkdir -p "$PRESTORE"/{binaries,scripts,per}
asl-init-store --store "$PRESTORE"
```
> `asl-init-store` can be a thin wrapper around `libasl-capture` to create a temporary store.

---

2. **Wrap build commands in `asl-capture`**

All commands affecting the ISO will be executed via `asl-capture` to generate artifacts and PERs.

Example:

```bash
# Capture debootstrap
asl-capture --store "$PRESTORE" \
  --cmd "debootstrap --arch=amd64 bullseye $ISO_ROOT http://deb.debian.org/debian/" \
  --outdir "$PRESTORE"/per/debootstrap

# Capture package installation
asl-capture --store "$PRESTORE" \
  --cmd "chroot $ISO_ROOT /bin/bash -c 'apt-get update && apt-get install -y ...'" \
  --outdir "$PRESTORE"/per/apt_install
```
Each step generates:

* **Artifact of input** (command, scripts, downloaded packages)
* **Artifact of output** (installed files, overlays, logs)
* **Execution Receipt (PER)** linking inputs → outputs

---

3. **Capture overlay**

```bash
# Capture binaries and scripts
for f in "$WORKDIR"/binaries/* "$WORKDIR"/scripts/*; do
  asl-capture --store "$PRESTORE" --file "$f" --outdir "$PRESTORE"/per/overlay
done
```
This ensures all binaries/scripts are **artifacts** with traceable lineage.

---

4. **Assemble ISO using captured artifacts**

```bash
asl-capture --store "$PRESTORE" --cmd "./build_asl_auth_host_iso.sh" \
  --artifact-inputs "$PRESTORE"/binaries \
  --artifact-inputs "$PRESTORE"/scripts \
  --artifact-inputs "$PRESTORE"/per \
  --outdir "$PRESTORE"/per/build_iso
```
> The ISO itself becomes a **final artifact** with its **own PER**, referencing all previous steps.

---

5. **Seed initial ASL host store in ISO**

Copy pre-image artifacts + PERs into the ISO root:

```bash
cp -a "$PRESTORE"/binaries "$ISO_ROOT"/var/lib/asl/personal/binaries
cp -a "$PRESTORE"/per      "$ISO_ROOT"/var/lib/asl/personal/per
cp -a "$PRESTORE"/scripts  "$ISO_ROOT"/var/lib/asl/personal/scripts
```
> The offline host boots with an **already captured history** of how it was built.

---

## **3. Optional: Artifact DAG visualization**

After pre-image capture, you can generate a DAG of artifacts and receipts:

```bash
asl-dag --store "$PRESTORE" --out "$WORKDIR"/build_logs/preimage_dag.svg
```
This provides:

* Traceable provenance of **everything in the ISO**
* Easy inspection for audit

---

## **4. Workflow Summary**

```
Initialize Pre-image Store --> Capture debootstrap --> Capture apt install --> Capture overlay
            |                        |                        |
            +------------------------+------------------------+
                                     |
                             PERs + Artifacts
                                     |
                     Build ISO using captured inputs
                                     |
                      Seed ISO with captured store
                                     |
          Offline host boots with full pre-image provenance
```
---

### **Key Notes**

* `asl-capture` wraps **any shell command** or file copy, producing both an **artifact** and a **PER**.
* The pre-image store is **temporary** during the build, but all artifacts go into the ISO for the offline host.
* This method ensures **determinism, reproducibility, and full traceability** even before the host exists.

---

I can also draft a **concrete bash orchestration script (`asl-workflow.sh`)** that wires all of this together, including:

* Pre-image store initialization
* Step-by-step captured commands
* ISO assembly
* Seeding the offline host store

Do you want me to do that next?

---
This is a good place to stop pushing and start *settling*.
I'll show you **how this plugs into ASL-HOST cleanly**, without dragging you further into new specs than necessary — and then I'll *send you off gently*, with a coherent mental model you can carry forward on your own.

What follows is not a new layer. It's a **proper alignment** of the layers you already defined.
---

# How the Virgin / Presented Domain Model Plugs into ASL-HOST

## 1. ASL-HOST's actual responsibility (clarified)

ASL-HOST is **not**:

* A store
* An authority
* A policy engine
* A domain model

ASL-HOST **is**:

> The *execution environment* in which **ASL + PEL semantics are allowed to act on reality**.

So ASL-HOST must provide **three minimal abstractions**:

1. **Local domains** (persistent)
2. **Presented domains** (ephemeral)
3. **Authority state** (virgin → rooted → operational)

Nothing more.
---

## 2. ASL-HOST domain table (minimal)

ASL-HOST maintains a **domain registry**, not a trust database.

Conceptually:

```text
DomainRegistry:
  local:
    - domain_id
      role: authority | personal | common
      path: /var/lib/asl/...
  presented:
    - domain_id
      role: presented
      path: /mnt/usb/REQUEST
      lifetime: execution
```

This registry is:

* Created at boot
* Augmented at USB mount
* Consumed by PEL
* Destroyed after execution

No persistence required for presented domains.
---

## 3. USB intake → ASL-HOST flow (clean)

### Step 1: Host boot

ASL-HOST initializes:

* Authority state
* Local ASL stores
* Domain registry (local only)

If no authority keys exist:

```
authority_state = Virgin
```

Nothing else happens.
---

### Step 2: USB insertion

ASL-HOST does **not** interpret content.

It only:

1. Mounts the USB read-only
2. Computes a fingerprint:

```
presented_domain_id = hash(usb_device + manifest)
```

3. Registers:

```yaml
domain:
  id: PRESENTED::<hash>
  role: presented
  path: /mnt/usb/REQUEST
```

This is **pure ASL-HOST behavior**.
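
A minimal sketch of steps 2 and 3, assuming the fingerprint is a SHA-256 over a stable device identifier plus the manifest bytes; the function names, the truncated digest, and the dict-shaped registry are all illustrative assumptions:

```python
import hashlib

def presented_domain_id(usb_device: str, manifest: bytes) -> str:
    """Deterministic fingerprint: same device + manifest -> same domain id."""
    h = hashlib.sha256()
    h.update(usb_device.encode("utf-8"))
    h.update(manifest)
    return "PRESENTED::" + h.hexdigest()[:16]

def register_presented(registry: dict, usb_device: str, manifest: bytes,
                       path: str = "/mnt/usb/REQUEST") -> dict:
    """Add an ephemeral presented-domain entry to the domain registry."""
    entry = {
        "id": presented_domain_id(usb_device, manifest),
        "role": "presented",
        "path": path,
        "lifetime": "execution",
    }
    registry.setdefault("presented", []).append(entry)
    return entry
```

Because the id is derived deterministically, re-inserting the same USB yields the same presented-domain identity without any host-side state.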
---

## 4. PEL sees everything as domains

PEL is invoked with a **domain map**, not paths:

```yaml
domains:
  authority:
    id: <authority-domain-id>
    path: /var/lib/asl/authority
  presented:
    id: PRESENTED::<hash>
    path: /mnt/usb/REQUEST
```

From here on:

> **PEL does all semantic interpretation.**

ASL-HOST steps back.
---

## 5. Virgin authority fits *perfectly*

When `authority_state = Virgin`:

ASL-HOST enforces exactly one rule:

> Only `pel_program_type = authority-genesis` is executable.

That's it.

No other policy logic belongs here.

After successful execution:

* Authority keys exist
* Policy hash sealed
* Snapshot written

ASL-HOST updates:

```
authority_state = Rooted
```
---

## 6. Admission & rescue fit naturally

### Admission

PEL program:

```yaml
type: domain-admission
inputs:
  - presented
outputs:
  - new_domain
  - dam
```

ASL-HOST simply:

* Allows new local domain registration
* Writes domain metadata
* Stores the DAM

No interpretation of identity or intent.
---

### Rescue

PEL program:

```yaml
type: rescue
inputs:
  - presented
outputs:
  - encrypted_artifacts
```

ASL-HOST:

* Writes encrypted blocks
* Registers nothing permanent unless requested
* Produces a receipt

Same machinery. No branching.
---

## 7. Where ASL-HOST stops (important)

ASL-HOST **never**:

* Decides who "owns" a domain
* Understands identity semantics
* Inspects artifact contents
* Interprets policy meaning

It only enforces:

* Domain lifetimes
* Mount discipline
* Authority state gating
* Execution boundaries

That's why this integrates cleanly.
---

## 8. Mental model you can carry forward

Here is the **stable picture** you can now rely on:

### Reality consists of only four things

1. **Domains**
2. **Artifacts**
3. **Programs**
4. **Receipts**

Everything else is scaffolding.

### The auth host is not special

It is simply:

> A host where **PEL is allowed to mutate the domain registry**.

Virgin → Rooted → Operational
Presented → Admitted → Federated

All transitions happen **through receipts**.
---

## 9. Why this is "gentle"

You don't need to:

* Rewrite ASL-CORE
* Finalize PEL-TRACE
* Solve global identity
* Solve federation

You can now:

* Boot a virgin authority
* Run one genesis program
* Admit one domain
* Rescue one disk
* Produce one receipt

That is enough to **enter your own domain** — technically and conceptually.
---

## 10. Sending you off

You now have:

* A clean ASL-HOST boundary
* A principled virgin state
* USB as a first-class domain
* PEL as the only actor
* Receipts as truth

From here, the system will **crystallize naturally** as you implement.

You don't need me to carry you further right now.

But when you want to:

* formalize diagrams
* tighten canonicalization
* test threat edges
* or explain this to others

I'll be right here.

---
Yes — you're absolutely right, and thank you for calling this out.
What you're seeing is not a small conflict; it's a **layering collision**, and the correct fix is **not** to weaken ENC-TGK-CORE.

Let's reset cleanly and reconcile the models.
---

## 1. What ENC-TGK-CORE already guarantees (and must not change)

From what you say, ENC-TGK-CORE already defines:

* An **edge** as:

  * a list of *from* nodes
  * a list of *to* nodes
* The encoding is **strict**
* This is the **semantic edge representation**, not an index artifact

That means:

> ENC-TGK-CORE defines *what an edge is*, not *how edges are indexed*.

So my previous draft **incorrectly redefined the edge shape** at the encoding layer. You're correct to reject that.
---

## 2. Correct mental model (this is the key correction)

There are **three distinct things**:

| Layer                   | What it defines                            |
| ----------------------- | ------------------------------------------ |
| TGK-CORE / ENC-TGK-CORE | Edge *structure* (from[] → to[])           |
| TGK-INDEX               | How edges are *located and filtered*       |
| ENC-TGK-INDEX           | Index records pointing *to* TGK-CORE edges |

👉 **ENC-TGK-INDEX must never encode the edge itself.**
It only encodes **references to edges**.

This is exactly analogous to ASL artifacts vs ASL index entries.
---

## 3. What must change in ENC-TGK-INDEX (conceptual patch)

ENC-TGK-INDEX must be rewritten as:

> *An index over immutable TGK-CORE edges, not a re-encoding of edges.*

That resolves the conflict cleanly.
---

## 4. Revised ENC-TGK-INDEX model (corrected)

### 4.1 What an index record refers to

A TGK index record must reference:

* A **TGK-CORE edge identifier**
* Snapshot / log sequence
* Routing and filtering attributes (edge type key, etc.)
* Optional projection context

But **never** re-encode `from[]` or `to[]`.
---

## 5. Canonical Edge Identity (reconciled)

Canonical Edge Key becomes:

```
CanonicalEdgeKey =
    TGK_CORE_EDGE_ID
```

Where:

* `TGK_CORE_EDGE_ID` is defined *exclusively* by ENC-TGK-CORE
* Identity is stable and already well-defined
* The index never inspects edge structure

This is the cleanest possible boundary.
---

## 6. How filters work with multi-node edges

Because edges are `from[] → to[]`, the index supports **multiple logical projections** *without duplicating semantics*.

Indexing options (all valid):

1. **Edge-centric indexing** (default)

   * One index record per edge
   * Filter by edge type, snapshot, context

2. **Node-projection indexing** (optional acceleration)

   * Additional index records mapping:

     ```
     node_id → edge_id
     ```

   * Pure acceleration
   * Semantically redundant

This fits perfectly with ASL-INDEX-ACCEL.
---

## 7. What ENC-TGK-INDEX should actually contain

### 7.1 Correct index record shape

```c
struct tgk_index_record {
    uint64_t logseq;
    uint64_t tgk_edge_id;    // references ENC-TGK-CORE edge
    uint32_t edge_type_key;  // optional
    uint8_t  has_edge_type;
    uint8_t  role;           // from / to / both (optional)
    uint16_t flags;          // tombstone, etc.
};
```

* `tgk_edge_id` is the only identity reference
* No node lists here
* No semantic duplication

---

### 7.2 Optional projection index (acceleration only)

If needed:

```c
struct tgk_node_edge_ref {
    uint64_t logseq;
    uint64_t node_id;
    uint64_t tgk_edge_id;
    uint8_t  position;  // from or to
};
```

This is:

* Optional
* Redundant
* Fully derivable
* Clearly acceleration
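
A sketch of "fully derivable": the projection records can be rebuilt at any time from the core edges alone, never the other way around. The edge map shape here is an illustrative stand-in for ENC-TGK-CORE decoding:

```python
from typing import Dict, List, Tuple

def derive_projection(edges: Dict[int, Tuple[List[int], List[int]]]
                      ) -> List[Tuple[int, int, str]]:
    """edges: edge_id -> (from_nodes, to_nodes).
    Returns (node_id, edge_id, position) refs, one per node occurrence."""
    refs: List[Tuple[int, int, str]] = []
    for edge_id, (from_nodes, to_nodes) in edges.items():
        for n in from_nodes:
            refs.append((n, edge_id, "from"))
        for n in to_nodes:
            refs.append((n, edge_id, "to"))
    return refs
```

Since this is a pure function of the core edges, dropping the projection index loses no information, which is what makes it acceleration rather than semantics.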
---

## 8. Document changes required (precise)

### 🔁 Documents to UPDATE

#### TGK-INDEX

* Canonical Edge Key = TGK-CORE edge ID
* Explicit statement: the index never encodes edge structure

#### ENC-TGK-INDEX

* Rewrite as an **edge-reference index**
* Remove all `from[]` / `to[]` fields
* Reference ENC-TGK-CORE normatively

---

### 🚫 Documents NOT to touch

* ENC-TGK-CORE (must remain authoritative)
* TGK-CORE semantics
* PEL-TRACE

---

## 9. One sentence that should be normative (important)

Add this verbatim to ENC-TGK-INDEX:

> *ENC-TGK-INDEX encodes only references to TGK-CORE edges and MUST NOT re-encode or reinterpret edge structure.*

This single sentence prevents future drift.
---

## 10. Why this is actually a good sign

You caught this because your model is **internally consistent**.

What happened:

* The index draft accidentally crossed into core semantics
* You noticed because ENC-TGK-CORE is strict
* The fix strengthens layering, it doesn't weaken it

That's exactly what you want at this stage.

---

## 11. Recommended next step

If you want, I can now:

* **Rewrite ENC-TGK-INDEX correctly**, fully aligned with ENC-TGK-CORE, or
* Produce a **one-page layering contract** between TGK-CORE, TGK-INDEX, and ASL-INDEX-ACCEL

Both will lock this down permanently.
---

### `notes/sid.py`

```python
# sid.py

from __future__ import annotations
from typing import List, Dict, Any
from sid_hashers import StructureHashStrategy
from sid_canonical import encode_str, encode_kv_pairs


# ---------------------------------------------------------------------
# SID computation
# ---------------------------------------------------------------------

def compute_sid(
    *,
    op: str,
    child_sids: List[str],
    params: Dict[str, Any],
    hasher: StructureHashStrategy,
    ordered_children: bool = True,
    domain: str = "artifact.sid.v1",
) -> str:
    """
    Compute a deterministic Merkle-style SID for an artifact.

    Parameters
    ----------
    op: str
        Operation name (e.g., "tensor", "splice", "leaf.bits").
    child_sids: List[str]
        List of SIDs of child artifacts.
    params: Dict[str, Any]
        Canonicalized parameters for the operation.
    hasher: StructureHashStrategy
        Hash strategy to use (default SHA-256 SID hasher).
    ordered_children: bool
        Whether child order matters (tensor vs commutative ops).
    domain: str
        Domain/version for domain separation.

    Returns
    -------
    sid: str
        Hex string representing the structural ID.
    """
    payload = b""

    # Domain/version separation
    payload += encode_str(domain)

    # Operation name
    payload += encode_str(op)

    # Children SIDs (sorted when order is irrelevant)
    children = list(child_sids)
    if not ordered_children:
        children.sort()

    payload += len(children).to_bytes(4, "big")
    for c in children:
        payload += encode_str(c)

    # Canonicalized parameters
    param_pairs = sorted((str(k), str(v)) for k, v in params.items())
    payload += encode_kv_pairs(param_pairs)

    # Compute structural hash
    return hasher.hash_struct(payload)
```
---

### `sid_canonical.py`

```python
# sid_canonical.py

from typing import List, Tuple


# ---------------------------------------------------------------------
# Canonical string encoder
# ---------------------------------------------------------------------

def encode_str(s: str) -> bytes:
    """
    Encode a string deterministically as length-prefixed UTF-8 bytes.
    """
    b = s.encode("utf-8")
    return len(b).to_bytes(4, "big") + b


# ---------------------------------------------------------------------
# Canonical key-value encoder
# ---------------------------------------------------------------------

def encode_kv_pairs(pairs: List[Tuple[str, str]]) -> bytes:
    """
    Encode sorted key-value pairs deterministically.

    Format:
        [num_pairs][key_len][key_bytes][value_len][value_bytes]...
    """
    out = len(pairs).to_bytes(4, "big")
    for k, v in pairs:
        out += encode_str(k)
        out += encode_str(v)
    return out
```
---

### `sid_hashers.py`

```python
# sid_hashers.py

from abc import ABC, abstractmethod
import hashlib


# ---------------------------------------------------------------------
# Structural hash strategy interface
# ---------------------------------------------------------------------

class StructureHashStrategy(ABC):
    """
    Abstract interface for derivation / structural hashers.
    """

    @abstractmethod
    def hash_struct(self, payload: bytes) -> str:
        """
        Hash a byte payload deterministically and return a hex digest.
        """
        ...


# ---------------------------------------------------------------------
# SHA-256 default implementation
# ---------------------------------------------------------------------

class SHA256SIDHash(StructureHashStrategy):
    """
    Default SHA-256 SID hasher.
    """

    name = "sha256.sid.v1"

    def hash_struct(self, payload: bytes) -> str:
        h = hashlib.sha256()
        h.update(payload)
        return h.hexdigest()
```
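
To show the three modules working together, here is a condensed, self-contained restatement of their logic (inlined so the example runs standalone; the condensation itself is an illustration, not the modules' API). It demonstrates ordered vs commutative child handling:

```python
import hashlib

def encode_str(s: str) -> bytes:
    b = s.encode("utf-8")
    return len(b).to_bytes(4, "big") + b

def compute_sid(op, child_sids, params, ordered_children=True,
                domain="artifact.sid.v1") -> str:
    payload = encode_str(domain) + encode_str(op)
    children = list(child_sids)
    if not ordered_children:
        children.sort()                      # normalize commutative ops
    payload += len(children).to_bytes(4, "big")
    for c in children:
        payload += encode_str(c)
    pairs = sorted((str(k), str(v)) for k, v in params.items())
    payload += len(pairs).to_bytes(4, "big")
    for k, v in pairs:
        payload += encode_str(k) + encode_str(v)
    return hashlib.sha256(payload).hexdigest()

leaf_a = compute_sid("leaf.bits", [], {"len": 8})
leaf_b = compute_sid("leaf.bits", [], {"len": 16})

# Ordered op: child order matters, SIDs differ.
t1 = compute_sid("tensor", [leaf_a, leaf_b], {})
t2 = compute_sid("tensor", [leaf_b, leaf_a], {})

# Commutative op: child order is normalized away, SIDs agree.
m1 = compute_sid("merge", [leaf_a, leaf_b], {}, ordered_children=False)
m2 = compute_sid("merge", [leaf_b, leaf_a], {}, ordered_children=False)
```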
---

Great — this fits *exactly* with your ASL-CORE-INDEX spec.
Below is a **clean, minimal, spec-faithful implementation** of **snapshot + append-only log semantics** for the `ContentIndex`.

I will be very explicit about:
* semantics vs mechanics
* what is immutable
* what is replayed
* what CURRENT means

No storage encoding, no performance tricks.

---

# Design recap (mapped to your spec)

We model **exactly this**:

```
Index(CURRENT) = Index(snapshot) + replay(log)
```
Where:

* **Snapshot** = checkpointed base state
* **Log** = ordered, append-only mutations
* **CURRENT** = replay prefix

We support:

* immutability
* shadowing
* determinism
* tombstones (optional, included)

---

# Semantic model

## Log entry types

```text
PUT       : ArtifactKey → ArtifactLocation
TOMBSTONE : ArtifactKey → ⊥
```
Later entries shadow earlier ones.

---

# Python implementation

## `content_index.py`

```python
from __future__ import annotations
from dataclasses import dataclass
from typing import Dict, Optional, List, Iterable
from enum import Enum, auto


# ---------------------------------------------------------------------
# Core data types
# ---------------------------------------------------------------------

@dataclass(frozen=True)
class ArtifactLocation:
    block_id: str
    offset: int
    length: int


class LogOp(Enum):
    PUT = auto()
    TOMBSTONE = auto()


@dataclass(frozen=True)
class IndexLogEntry:
    op: LogOp
    artifact_key: str
    location: Optional[ArtifactLocation] = None


# ---------------------------------------------------------------------
# Snapshot
# ---------------------------------------------------------------------

class ContentIndexSnapshot:
    """
    Immutable snapshot of the content index.
    """
    def __init__(self, mapping: Dict[str, ArtifactLocation]):
        # Immutable by convention
        self._mapping = dict(mapping)

    def get(self, key: str) -> Optional[ArtifactLocation]:
        return self._mapping.get(key)

    def items(self):
        return self._mapping.items()


# ---------------------------------------------------------------------
# Append-only log
# ---------------------------------------------------------------------

class ContentIndexLog:
    """
    Append-only, ordered log of index mutations.
    """
    def __init__(self):
        self._entries: List[IndexLogEntry] = []

    def append_put(self, key: str, loc: ArtifactLocation):
        self._entries.append(IndexLogEntry(
            op=LogOp.PUT,
            artifact_key=key,
            location=loc,
        ))

    def append_tombstone(self, key: str):
        self._entries.append(IndexLogEntry(
            op=LogOp.TOMBSTONE,
            artifact_key=key,
        ))

    def entries(self) -> Iterable[IndexLogEntry]:
        return self._entries

    def __len__(self) -> int:
        return len(self._entries)


# ---------------------------------------------------------------------
# Content index with snapshot + log semantics
# ---------------------------------------------------------------------

class ContentIndex:
    """
    ASL-CORE-INDEX semantic implementation.

    Deterministic, snapshot-relative mapping:
        ArtifactKey -> ArtifactLocation
    """

    def __init__(
        self,
        snapshot: Optional[ContentIndexSnapshot] = None,
        log: Optional[ContentIndexLog] = None,
    ):
        self._snapshot = snapshot or ContentIndexSnapshot({})
        self._log = log or ContentIndexLog()

    # -----------------------------------------------------------------
    # Visibility / lookup
    # -----------------------------------------------------------------

    def lookup(self, key: str) -> Optional[ArtifactLocation]:
        """
        Resolve key at CURRENT = snapshot + full log replay.
        """
        # Shadowing: scan log in reverse so later entries win
        for entry in reversed(list(self._log.entries())):
            if entry.artifact_key != key:
                continue

            if entry.op == LogOp.TOMBSTONE:
                return None
            elif entry.op == LogOp.PUT:
                return entry.location

        # Fall back to snapshot
        return self._snapshot.get(key)

    # -----------------------------------------------------------------
    # Mutation (append-only)
    # -----------------------------------------------------------------

    def put(self, key: str, loc: ArtifactLocation):
        """
        Append a PUT entry.
        """
        self._log.append_put(key, loc)

    def tombstone(self, key: str):
        """
        Append a TOMBSTONE entry.
        """
        self._log.append_tombstone(key)

    # -----------------------------------------------------------------
    # Snapshotting
    # -----------------------------------------------------------------
|
||||
|
||||
def snapshot(self) -> ContentIndexSnapshot:
|
||||
"""
|
||||
Produce a new immutable snapshot representing CURRENT state.
|
||||
"""
|
||||
materialized: Dict[str, ArtifactLocation] = dict(self._snapshot.items())
|
||||
|
||||
for entry in self._log.entries():
|
||||
if entry.op == LogOp.PUT:
|
||||
materialized[entry.artifact_key] = entry.location
|
||||
elif entry.op == LogOp.TOMBSTONE:
|
||||
materialized.pop(entry.artifact_key, None)
|
||||
|
||||
return ContentIndexSnapshot(materialized)
|
||||
|
||||
def reset_log(self):
|
||||
"""
|
||||
Clears the log after snapshotting (external coordination required).
|
||||
"""
|
||||
self._log = ContentIndexLog()
|
||||
```
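The snapshot-plus-log semantics can also be exercised in a condensed, self-contained sketch; plain dicts and tuples stand in for the classes above, and the names here are illustrative, not part of the spec:

```python
from typing import Dict, List, Optional, Tuple

# Condensed model: a snapshot is a dict, the log is a list of
# (op, artifact_key, location) tuples.
Snapshot = Dict[str, str]
LogEntry = Tuple[str, str, Optional[str]]

def lookup(snapshot: Snapshot, log: List[LogEntry], key: str) -> Optional[str]:
    # Shadowing: the newest log entry for a key wins over older entries
    # and over the snapshot.
    for op, k, loc in reversed(log):
        if k != key:
            continue
        return None if op == "TOMBSTONE" else loc
    return snapshot.get(key)

snapshot = {"a": "blk1:0"}
log: List[LogEntry] = [
    ("PUT", "b", "blk2:0"),    # new key
    ("PUT", "a", "blk2:64"),   # shadows the snapshot entry
    ("TOMBSTONE", "b", None),  # deletes "b" despite the earlier PUT
]

assert lookup(snapshot, log, "a") == "blk2:64"  # log shadows snapshot
assert lookup(snapshot, log, "b") is None       # tombstone wins
assert lookup(snapshot, log, "c") is None       # never existed
```

The assertions mirror the visibility rules in the table below the code: reverse replay implements shadowing, and tombstones are just another log entry.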

---

# Why this exactly matches your spec

| Spec requirement      | Where enforced               |
| --------------------- | ---------------------------- |
| Immutable entries     | No mutation, append-only log |
| Shadowing             | Reverse log replay           |
| Snapshot-relative     | Snapshot + replay            |
| Deterministic         | Ordered replay only          |
| Tombstones            | `LogOp.TOMBSTONE`            |
| No layout assumptions | No block semantics           |
| Visibility rules      | Encoded via replay           |

Nothing extra. Nothing missing.

---

# How this integrates with your existing system

### During materialization:

```python
cid = artifact.cid(cache)

loc = content_index.lookup(cid)
if loc is None:
    loc = allocate_and_write_bytes(cid)
    content_index.put(cid, loc)
```

### During recovery:

```python
index = ContentIndex(snapshot=loaded_snapshot, log=loaded_log)
loc = index.lookup(cid)
```

---

# Important: what this does NOT do (by design)

* No block allocation
* No sealing rules
* No concurrency
* No GC
* No disk format
* No performance shortcuts

That belongs to **ASL-STORE-INDEX**, exactly as you wrote.

---

## You now have

* A **formally correct ASL-CORE-INDEX**
* That works identically for:
  * classical bytes
  * DAG materializations
  * q-bit expansions
* And composes cleanly with provenance, federation, and ML layers

If you want next, I can:

* wire this into a full `ASLStore`
* add snapshot IDs + log positions
* show crash recovery
* add federation merge semantics
* or formally specify correctness invariants

Just say where you want to go.
@ -1,292 +0,0 @@
Yes — and you’re exactly right to call this **quorum**, but with an important refinement so you don’t accidentally inherit the problems of classic distributed consensus.

What you want is **transferable witnessing**, not permanent centralization and not “everyone writes”.

Below is a clean, minimal model that fits everything you’ve built so far.

---

## 1. Restate the goal precisely

You want:

* A **Common domain** with a single logical history (no forks)
* Initially witnessed by *you*
* Later survivable **without you**
* Usable even when people are offline
* Human-scale, not blockchain-scale

That rules out:

* permanent central authority ❌
* naive multi-writer ❌
* always-on consensus ❌

What remains is a **rotatable witness quorum**.

---

## 2. The core idea: one emitter, many endorsers

The key rule never changes:

> **Only one entity emits `common@N+1`.**

But **who that entity is** can change — *and that change itself is witnessed*.

---

## 3. Roles in the quorum model

### 1. Witness Emitter (exactly one at a time)

* Assigns snapshot numbers
* Signs `common@N`
* Emits append-only snapshots

This role is:

* exclusive
* temporary
* explicitly granted

---

### 2. Witness Authorities (the quorum)

* A set of trusted domains
* Can:
  * endorse proposals
  * authorize witness rotation
  * revoke a compromised witness

They **do not emit snapshots directly**.

---

### 3. Participants (everyone else)

* Submit proposals
* Consume Common
* Choose which Common they trust

---

## 4. Bootstrapping: how Common starts

### Genesis (day 0)

You create:

```
common@0
```

It contains:

* quorum policy
* initial witness key (you)
* trust anchors

This is the **only moment of absolute authority**.

Everything after is mechanical.

---

## 5. Normal operation (single witness active)

Flow:

```
Participants → Proposals → Witness → common@N+1
```

The witness:

* verifies proposal signatures
* checks policy
* emits the next snapshot

No quorum interaction is needed for routine updates.

---

## 6. Turning off *your* server: witness rotation

When you want to step back:

### Step 1: propose a witness change

You (as current witness) emit a proposal:

```
proposal:
  type: witness-rotation
  new_witness: domain_X
```

---

### Step 2: quorum endorsement

A quorum threshold signs it, e.g.:

```
policy:
  witnesses: [A, B, C, D, E]
  threshold: 3
```

Signatures are collected:

* offline
* asynchronously
* via USB if needed

---

### Step 3: emit transition snapshot

You emit:

```
common@N+1:
  witness = domain_X
  endorsed_by = {A, C, D}
```

From this point:

* your server may shut down
* domain_X is now the emitter

---

## 7. If the active witness disappears unexpectedly

This is the *hard case*, and the design still holds.

### Recovery procedure

1. Quorum members detect witness silence
2. They co-sign a **recovery proposal**
3. A new witness is appointed
4. The **first snapshot emitted by the new witness** includes:
   * quorum endorsements
   * the last known good snapshot hash

No split-brain occurs because:

* only snapshots with quorum-backed witness authority are accepted

---

## 8. Why this is *not* classical consensus

Important differences:

| Consensus systems    | Your system         |
| -------------------- | ------------------- |
| Continuous agreement | Episodic agreement  |
| Low latency          | Human-scale latency |
| Always online        | Offline-first       |
| Automatic            | Explicit, auditable |
| Opaque               | Artifact-based      |

You’re not solving *agreement*.
You’re solving *authority succession*.

---

## 9. What prevents silent forks?

Two simple rules enforced by ASL-HOST:

1. **A Common snapshot must include a valid witness signature**
2. **Witness authority must be traceable to the last trusted snapshot**

If someone invents `common@999`:

* nobody accepts it
* it’s just another foreign domain
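A minimal sketch of that acceptance rule, assuming each snapshot carries its predecessor's hash and the witness identity. The structure and names here are illustrative, not normative, and a bare witness name stands in for a real signature check:

```python
import hashlib
from typing import Dict, List, Optional

def h(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def make_snapshot(n: int, witness: str, prev_hash: Optional[str]) -> Dict:
    # A snapshot binds its number, emitter, and predecessor into one hash.
    body = f"common@{n}|witness={witness}|prev={prev_hash}".encode()
    return {"n": n, "witness": witness, "prev": prev_hash, "hash": h(body)}

def accepts(chain: List[Dict], candidate: Dict, trusted_witness: str) -> bool:
    # Rule 1: the candidate must be emitted by the trusted witness.
    if candidate["witness"] != trusted_witness:
        return False
    # Rule 2: authority must chain back to the last trusted snapshot.
    return bool(chain) and candidate["prev"] == chain[-1]["hash"]

genesis = make_snapshot(0, "you", None)
s1 = make_snapshot(1, "you", genesis["hash"])
chain = [genesis, s1]

good = make_snapshot(2, "you", s1["hash"])
forged = make_snapshot(999, "mallory", "deadbeef")  # invented history

assert accepts(chain, good, trusted_witness="you")
assert not accepts(chain, forged, trusted_witness="you")
```

An invented `common@999` fails both checks: it names the wrong witness and chains to nothing anyone trusts, so it is simply ignored as a foreign domain.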

---

## 10. How users experience this (important)

From Alice’s perspective:

```
$ asl status
Common: common@42 (witness: domain_X)
```

If the witness changes:

```
$ asl status
Common: common@43 (witness: domain_Y)
Note: witness rotated at common@43
```

Nothing breaks.
Nothing blocks.
She can always stay on `@42`.

---

## 11. Minimal policy schema (conceptual)

```yaml
common_policy:
  witnesses:
    - domain: you
      key: K1
    - domain: alice
      key: K2
    - domain: james
      key: K3
  threshold: 2
  rotation:
    allowed: true
    grace_period: 30d
```

This policy is itself an artifact inside Common.
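A rotation proposal under that schema can be checked in a few lines. This is a sketch only: real endorsements would be cryptographic signatures verified against the policy's keys, not bare domain names:

```python
from typing import Dict, Set

# Mirrors the conceptual schema above: witness domains mapped to keys,
# plus an endorsement threshold.
policy: Dict = {
    "witnesses": {"you": "K1", "alice": "K2", "james": "K3"},
    "threshold": 2,
}

def rotation_approved(policy: Dict, endorsements: Set[str]) -> bool:
    # Count only endorsements from domains listed in the policy;
    # unknown endorsers carry no authority.
    valid = endorsements & set(policy["witnesses"])
    return len(valid) >= policy["threshold"]

assert rotation_approved(policy, {"you", "alice"})        # 2 of 3: enough
assert not rotation_approved(policy, {"you"})             # below threshold
assert not rotation_approved(policy, {"mallory", "eve"})  # strangers don't count
```

Because the policy is itself an artifact inside Common, the threshold and witness set used for this check are pinned by the same snapshot history they govern.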

---

## 12. Why this matches your philosophy

* Offline-friendly
* Artifact-first
* Deterministic
* Social trust, not math worship
* Authority is **visible and revocable**

And critically:

> **Common does not pretend to be immutable truth.
> It is a maintained agreement.**

---

## 13. One-sentence summary

> You don’t decentralize Common by letting everyone write —
> you decentralize it by letting everyone *replace the notary*.

---

If you want next, we can:

* formalize the witness-rotation artifact
* define the exact quorum signature encoding
* model disaster recovery step by step
* decide how many witnesses are “enough” for personal vs group Common

You’re building something coherent. Keep going.
@ -1,216 +0,0 @@
Good question. At this point the model is **conceptually complete**, but it is **underspecified in exactly the places where performance and correctness meet**. The fix is *not* to add many new documents, but to **surgically introduce one new concept and propagate it consistently**.

Below is a clean, minimal documentation plan.

---

## 1. One new concept that must be named explicitly

You already *use* this concept implicitly. It now needs a **formal name and boundary**:

> **Canonical Key vs Routing Key**

Once this is explicit, everything else falls into place.

---

## 2. Documents that need to be UPDATED (not rewritten)

### 2.1 ASL-CORE (minor update)

**Why:**
The type tag exists at the artifact level and must be immutable, but it is not identity.

**Add:**

* `type_tag : uint32`
* `has_typetag : uint8`
* Explicit statement:
  * Type tag is an immutable attribute
  * Type tag is **not part of ArtifactKey**

**Do NOT add:**

* Any mention of filters, sharding, or hashing

📌 **ASL-CORE remains semantic and clean**

---

### 2.2 ASL-CORE-INDEX (important update)

This is where the biggest clarification belongs.

**Add a new subsection:**

#### Canonical Key vs Routing Key

Define:

```
CanonicalKey = ArtifactKey
RoutingKey   = derived, advisory, implementation-defined
```

Rules to add:

1. Index semantics are defined **only** over CanonicalKey
2. RoutingKey MAY incorporate:
   * Hash of CanonicalKey
   * type_tag
   * has_typetag
3. RoutingKey MUST NOT affect correctness
4. Full CanonicalKey comparison is required on match
5. Shadowing and tombstones apply by CanonicalKey only

This locks down:

* Sharding
* Hash recast
* SIMD
* Filter usage
* Type-tag-aware routing

📌 This is the **most important update**
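The safety property in rules 3 and 4 can be illustrated with a tiny sharded lookup. This is a sketch: the shard count, hash choice, and mixing scheme are all implementation-defined, which is exactly why they belong on the RoutingKey side of the line:

```python
import hashlib
from typing import Dict, List, Optional

NUM_SHARDS = 4

def routing_key(canonical_key: bytes, type_tag: int) -> int:
    # Advisory only: mixes the canonical key hash with the type tag
    # to pick a shard. Any deterministic derivation is permitted.
    digest = hashlib.sha256(canonical_key + type_tag.to_bytes(4, "little")).digest()
    return digest[0] % NUM_SHARDS

shards: List[Dict[bytes, str]] = [{} for _ in range(NUM_SHARDS)]

def put(canonical_key: bytes, type_tag: int, location: str) -> None:
    shards[routing_key(canonical_key, type_tag)][canonical_key] = location

def lookup(canonical_key: bytes, type_tag: int) -> Optional[str]:
    shard = shards[routing_key(canonical_key, type_tag)]
    # Full canonical-key comparison on match: the routing key only
    # narrows the search, it never decides equality.
    return shard.get(canonical_key)

put(b"artifact-a", 7, "blk1:0")
assert lookup(b"artifact-a", 7) == "blk1:0"
assert lookup(b"artifact-b", 7) is None  # routing never invents matches
```

Swapping `NUM_SHARDS`, the hash, or the mixing order changes where entries live but never what `lookup` returns, which is the observational-invisibility requirement in prose form.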

---

### 2.3 ASL-STORE-INDEX (medium update)

**Why:**
This document already talks about mechanics. It should acknowledge routing.

**Add:**

* Explicit permission to:
  * Partition segments by RoutingKey
  * Build filters over RoutingKey
  * Use composite routing (hash + type_tag)

**Add invariant:**

* Store-level routing must preserve logical index equivalence

📌 No semantic change, just guardrails.

---

### 2.4 ENC-ASL-CORE-INDEX (medium update)

**Why:**
Routing and the type tag now affect *encoding*.

**Add:**

* Fields already present:
  * `type_tag`
  * `has_typetag`
* Clarify:
  * Filters are built over RoutingKey
  * Filter key mixing is deterministic
  * Filter format is advisory

**Optional:**

* Encode `RoutingKeyHash` explicitly or derive it on load

📌 This makes tooling and C APIs correct.

---

## 3. One NEW document (small but critical)

### 3.1 ASL-INDEX-ACCEL (new, short document)

This document is the missing puzzle piece.

**Purpose:**

* Capture everything that is **explicitly non-semantic**

**Contents:**

* Sharding rules
* RoutingKey definition
* Filter usage (Bloom / XOR / Ribbon)
* SIMD guarantees
* Hash recast rules
* Type-tag mixing rules

**Key sentence it must contain:**

> All accelerations defined in this document are observationally invisible to ASL-CORE-INDEX semantics.

This document:

* Prevents optimization creep
* Gives implementers freedom
* Protects correctness forever

📌 This is where performance lives safely.

---

## 4. Documents that do NOT need changes

✅ PEL
✅ PEL-PROV
✅ PEL-TRACE
✅ Provenance
✅ Federation (already abstract enough)

They all depend on index *results*, not mechanics.

---

## 5. Final document map (clean and complete)

```
ASL-CORE
 └─ Artifact semantics
    (includes optional type tag, immutable)

ASL-CORE-INDEX
 └─ Logical index semantics
    (CanonicalKey only)

ASL-STORE-INDEX
 └─ Store lifecycle & snapshot safety

ENC-ASL-CORE-INDEX (tier1/enc-asl-core-index.md)
 └─ Bytes-on-disk encoding

ASL-INDEX-ACCEL ← NEW
 └─ RoutingKey, filters, sharding, SIMD
    (explicitly non-semantic)

PEL / PROV / TRACE
 └─ Execution & lineage

Federation
 └─ Multi-domain visibility
```

---

## 6. Why this is the *right* stopping point

After this:

* Every optimization has a home
* Every semantic guarantee is protected
* Every future change has a checklist:

> “Is this Canonical or Routing?”

That’s how large systems stay correct at billion-scale.

If you want next, I can:

* Draft **ASL-INDEX-ACCEL**
* Or rewrite **ASL-CORE-INDEX with Canonical vs Routing fully integrated**
@ -1,389 +0,0 @@
Great, let’s do this **concretely and end-to-end**, but still faithful to the specs you’ve been shaping. I’ll walk a **single artifact** all the way through its lifecycle:

* creation
* witnessing (certificate case)
* indexing
* snapshotting
* eviction from local storage
* later rehydration
* reuse in a new program

I’ll label **what is normative** vs **implementation choice**, and I’ll keep the mental model consistent: **blocks are physical, artifacts are semantic**.

---

# End-to-End Artifact Lifecycle Walkthrough

We’ll use a **certificate artifact** because it exercises *authority*, *witnessing*, and *reuse*.

Assume we are on a **personal domain** `alice.personal`, on an ASL-HOST.

---

## Phase 0 – Initial state (before anything happens)

On disk:

```
/asl/domains/alice.personal/
├── blocks/
│   ├── open/
│   └── sealed/
├── index/
├── log/
├── snapshots/
└── meta/
    └── dam/
```

Current snapshot:

```
alice.personal@42
```

Authority:

* DAM says `self-authority`
* Host holds Alice’s private key
* Host is **Active**

---

## Phase 1 – Artifact creation (certificate is generated)

### What happens conceptually

A **PEL program** is run that generates a certificate:

* Input: key material, policy parameters
* Output: certificate bytes

This is *just data* at this stage.

---

### On disk: block writing (physical layer)

1. ASL allocates an **open block**:

```
blocks/open/blk_tmp_7f3a.tmp
```

2. Certificate bytes are appended to the open block.

3. The artifact bytes occupy:

```
offset = 8192
length = 1432
```

At this moment:

* No artifact exists yet (semantically)
* Bytes are **not visible**
* A crash here is allowed

---

### Block sealing (normative)

4. The block is sealed:

```
block_id = H(block_bytes)
```

The file is moved to:

```
blocks/sealed/7f/7f3a9c...blk
```

**Invariant satisfied:** sealed blocks are immutable.
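Sealing and later verification reduce to content addressing, which can be sketched in a few lines. The hash choice and fan-out layout here are illustrative, not normative:

```python
import hashlib

def seal_block(block_bytes: bytes) -> str:
    # block_id = H(block_bytes): the name *is* the content hash,
    # so a sealed block can never change without changing its name.
    return hashlib.sha256(block_bytes).hexdigest()

def sealed_path(block_id: str) -> str:
    # Two-character fan-out directory, as in blocks/sealed/7f/7f3a9c...blk
    return f"blocks/sealed/{block_id[:2]}/{block_id}.blk"

def verify_block(block_id: str, fetched_bytes: bytes) -> bool:
    # On rehydration, the fetched bytes must reproduce the block_id exactly.
    return seal_block(fetched_bytes) == block_id

block = b"certificate bytes ..."
block_id = seal_block(block)

assert sealed_path(block_id).startswith("blocks/sealed/" + block_id[:2])
assert verify_block(block_id, block)             # intact block verifies
assert not verify_block(block_id, block + b"x")  # tampering is detected
```

The same `verify_block` check is what makes the later "Block hash verified" step in Phase 5 possible: any trusted or untrusted source can supply the bytes, because the block_id itself decides whether they are genuine.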

---

## Phase 2 – Artifact becomes real (indexing)

### Artifact identity

The artifact key is computed from:

```
H(certificate_bytes + type_tag + metadata)
```

Example:

```
artifact_key = a9c4…
```

---

### Index entry written

An index entry is appended to an **open index segment**:

```
index/shard-012/segment-0042.idx (open)
```

Entry:

```
ArtifactKey → (BlockID, offset, length)
type_tag = cert.x509
```

Still **not visible**.

---

### Log append (normative visibility point)

A log record is appended:

```
log-0000000000001200.asl
```

Record:

```
ADD_INDEX_ENTRY artifact_key=a9c4… segment=0042
```

Then:

```
SEAL_SEGMENT segment=0042
```

**Now the artifact exists.**

---

## Phase 3 – Snapshot & witnessing

### Snapshot creation

A snapshot is emitted:

```
alice.personal@43
```

The snapshot pins:

* index segment 0042
* block 7f3a…

The snapshot manifest includes:

```
authority:
  domain: alice.personal
  key: alice-root-key
```

---

### Witnessing from elsewhere (certificate use case)

Now the **certificate is taken aboard** by another domain, say:

```
common
```

How?

1. The certificate artifact is **published** (policy allows this)
2. `common` imports the artifact:
   * artifact bytes are fetched (or referenced)
   * the artifact key is preserved
3. A **cross-domain reference** is indexed in `common`

No copying is required if blocks are addressable, but often they are copied.

Witnessing here means:

> The certificate is now **provably present in two domains**, each with their own snapshot history.

---

## Phase 4 – Time passes (artifact becomes cold)

A week passes.

A **local retention policy** runs (implementation choice, but policy-guided).

### GC decision (normative constraints)

The artifact:

* is sealed
* is referenced by snapshot `@43`
* is not referenced by CURRENT workflows

Policy allows **cold eviction** if:

* the snapshot still exists
* the artifact can be re-fetched from trusted domains

So…

---

### Block eviction (implementation choice)

The local block file is removed:

```
blocks/sealed/7f/7f3a9c...blk ← deleted
```

But:

* the index entry remains
* the snapshot remains
* the artifact is still **logically present**

This is allowed because:

> **ASL defines availability separately from existence.**

---

## Phase 5 – Artifact is needed again

Later, a new PEL program runs:

* Input: certificate artifact `a9c4…`
* Purpose: sign something new

### Artifact lookup

1. The index lookup finds:

```
ArtifactKey → (BlockID, offset, length)
```

2. The block is missing locally.

---

### Rehydration (normative + policy)

ASL-HOST:

1. Consults **domain policy**
2. Finds a trusted source:
   * `common`
   * or a personal mirror
3. Fetches the block by `BlockID`

The block is restored to:

```
blocks/sealed/7f/7f3a9c...blk
```

**Block hash verified.**

---

## Phase 6 – Artifact reused in a new program

The certificate bytes are read from the block.

A new PEL program runs:

* Inputs:
  * certificate artifact `a9c4…`
  * new data
* Outputs:
  * new artifact(s)
  * a new PER

These outputs repeat the **same lifecycle**.

---

## Phase 7 – Index & snapshot continuity

Now:

* Snapshot `@44` references:
  * the old certificate
  * the new artifacts
* The old artifact remains immutable
* Provenance is intact

At no point did we:

* rewrite bytes
* change meaning
* trust runtime state

---

## What was witnessed, exactly?

Three things were witnessed, independently:

1. **Bytes** (block hash)
2. **Meaning** (artifact key → index)
3. **Authority** (snapshot + DAM + signatures)

This is why the system holds together.

---

## Why this works for *everything*

This same flow works for:

* certificates
* home directory tarballs
* shell command outputs
* TGK edges
* PERs
* witness rotation artifacts

Nothing special happens for “important” artifacts.

---

## The key invariants you just saw

1. **Artifacts never move**
2. **Blocks may come and go**
3. **Snapshots pin meaning, not bytes**
4. **Witnessing is just another artifact**
5. **Time does not weaken correctness**

---

## One sentence to take away

> An artifact can disappear from disk, reappear a year later, and still mean exactly the same thing.

If you want next, we can:

* walk the **same artifact across two personal machines**
* show **what breaks if someone cheats**
* map this onto **ZFS send/receive**
* or show **how PEL receipts stitch all this together**

You’re now fully “above the filesystem” — exactly where you wanted to be.
@ -14,8 +14,10 @@ Tags: [ops, authority, offline]
* `ASL/HOST/1`
* `ASL/DAM/1`
* `ASL/DAP/1`
* `ASL/POLICY-HASH/1`
* `ASL/OFFLINE-ROOT-TRUST/1`
* `ASL/OCS/1`

**Informative references:**
@ -23,6 +25,8 @@ Tags: [ops, authority, offline]
* `PEL/1-SURF`
* `ENC-ASL-AUTH-HOST/1`
* `ASL/RESCUE-NODE/1`
* `ASL/SOPS-BUNDLE/1`
* `ASL/DOMAIN-MODEL/1`

---
@ -82,7 +86,41 @@ The host MAY operate in the following modes:
---

## 5. Authority Host States (Normative)

An authority host is in exactly one state:

* **Virgin:** no root keys or trusted domains exist.
* **Rooted:** root keys exist but no admission has occurred.
* **Operational:** normal admission, signing, and verification are enabled.

State transitions MUST be explicit and recorded as artifacts or snapshots.

---

## 6. Presented Domain Classification (Normative)

When removable media or an external store is presented, the host MUST classify
it as one of:

* **Virgin:** no certificates or DAM present.
* **Self-asserting:** contains unsigned claims only.
* **Admitted:** has a valid DAM and policy hash.
* **Known foreign:** previously pinned domain and policy.

Classification MUST be derived from artifacts and certificates, not filesystem
heuristics.

Presented domains are treated as temporary, read-only domains:

* Derived `domain_id` (for example, hash of media fingerprint).
* No sealing or GC permitted.
* No snapshots persisted.
* Visibility limited to the current session.

---

## 7. Output Artifacts

The host MUST be able to produce:
@ -94,7 +132,7 @@ The host MUST be able to produce:
---

## 8. Snapshot Discipline

Each authority operation MUST:
@ -106,7 +144,7 @@ Snapshots MUST be immutable once sealed.
---

## 9. Offline Constraints

* Network interfaces SHOULD be disabled.
* External input and output MUST occur via explicit operator action.
@ -115,7 +153,7 @@ Snapshots MUST be immutable once sealed.
---

## 10. Security Considerations

* Private keys MUST remain offline and encrypted at rest.
* Only signed outputs may leave the host.
@ -123,6 +161,6 @@ Snapshots MUST be immutable once sealed.
---

## 11. Versioning

Backward-incompatible profile changes MUST bump the major version.
@ -54,6 +54,8 @@ The base OS MUST:
│   ├── asl-auth-host
│   ├── asl-rescue
│   ├── init-asl-host.sh
│   ├── sign_dam.sh
│   └── add_artifact.sh
├── etc/
│   └── asl-auth-host/
│       ├── config.yaml
@ -118,6 +120,68 @@ A typical pipeline:
1. Create minimal root via debootstrap or equivalent.
2. Merge overlay into ISO root.
3. Configure bootloader (isolinux or GRUB).
4. Build ISO with xorriso or equivalent.

---

## 8. Container Build Notes (Informative)

Building the ISO in a container is supported with the following constraints:

* ZFS pool creation typically requires host kernel support; create datasets at
  boot time instead.
* The ISO filesystem and overlay can be built entirely in a Debian container.
* Boot testing must occur on a VM or physical host.

Recommended packages in the build container:

```
debootstrap squashfs-tools xorriso genisoimage
```

---

## 9. Offline Debian Mirror Workflow (Informative)

To build offline images without network access, create a local Debian mirror
as an artifact and use it with `debootstrap`.

Example (online host):

```
debmirror \
  --arch=amd64 \
  --section=main \
  --dist=bullseye \
  --method=http \
  --host=deb.debian.org \
  --root=debian \
  /srv/debian-mirror
```

Offline build:

```
debootstrap --arch=amd64 bullseye /target/root file:///srv/debian-mirror
```

The mirror directory SHOULD be treated as immutable input for reproducibility.

---

## 10. Pre-Image Capture Workflow (Informative)

To preserve provenance of the ISO build, capture each build step as artifacts
and receipts before composing the final image.

Suggested workflow:

1. Initialize a temporary ASL store for build artifacts.
2. Wrap debootstrap and package installation in `asl-capture`.
3. Capture overlay binaries and scripts as artifacts.
4. Run the ISO build under `asl-capture` to produce a final ISO artifact.
5. Seed the ISO with the captured artifacts and receipts.
104
ops/asl-debian-packaging-1.md
Normal file
|
|
@ -0,0 +1,104 @@
|
|||
# ASL/DEBIAN-PACKAGING/1 -- Debian Packaging Notes

Status: Draft
Owner: Architecture
Version: 0.1.0
SoT: No
Last Updated: 2026-01-17
Tags: [ops, debian, packaging, build]

**Document ID:** `ASL/DEBIAN-PACKAGING/1`
**Layer:** O2 -- Packaging guidance

**Depends on (normative):**

* `ASL/HOST/1`

**Informative references:**

* `ENC-ASL-HOST/1`

---

## 0. Conventions

The key words **MUST**, **MUST NOT**, **REQUIRED**, **SHOULD**, and **MAY** are to be interpreted as in RFC 2119.

ASL/DEBIAN-PACKAGING/1 provides packaging guidance for Debian-based distributions. It does not define runtime semantics.

---

## 1. Optional PTY Support (Normative)

PTY support MUST be controlled at build time with a compile-time flag.

### 1.1 Build Flag

```c
#ifdef ASL_ENABLE_PTY
#define _GNU_SOURCE
#include <pty.h>
#endif
```

If PTY is requested at runtime without being built in, tools MUST fail with a clear error.
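A minimal sketch of that failure path, assuming a hypothetical helper `asl_open_pty` and an illustrative error code (neither is defined by this spec): the PIPE-only build compiles the error branch, so a runtime PTY request fails immediately with a clear message.

```c
#include <stdio.h>

#define ASL_ERR_PTY_UNSUPPORTED 2  /* illustrative error code, not normative */

/* Hypothetical helper: returns 0 on success, nonzero error code otherwise. */
static int asl_open_pty(int *master_fd)
{
#ifdef ASL_ENABLE_PTY
    /* openpty(master_fd, ...) would go here when built with ENABLE_PTY=1. */
    (void)master_fd;
    return 0;
#else
    (void)master_fd;
    fprintf(stderr, "error: PTY support not built in (rebuild with ENABLE_PTY=1)\n");
    return ASL_ERR_PTY_UNSUPPORTED;
#endif
}
```

Without `-DASL_ENABLE_PTY`, the `#else` branch is compiled and every PTY request returns the error code after printing the diagnostic.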
### 1.2 Makefile Mapping

```make
CFLAGS += -Wall -Wextra -O2
LIBS +=

ifdef ENABLE_PTY
CFLAGS += -DASL_ENABLE_PTY
LIBS += -lutil
endif
```

---

## 2. Library vs Tool Split (Informative)

Guiding principle: libraries define facts; tools perform actions.

### 2.1 Libraries

* `libasl-core`
* `libasl-store`
* `libasl-index`
* `libasl-capture`
* `libpel-core`

Libraries SHOULD avoid CLI parsing and environment policies.

### 2.2 Tools

* `asl-put`
* `asl-get`
* `asl-capture`
* `pel-run`
* `asl-admin`

Tools SHOULD be thin wrappers around libraries.

---
## 3. Debian Filesystem Layout (Informative)

```
/usr/bin/
    asl-put
    asl-get
    asl-capture
    pel-run

/usr/lib/x86_64-linux-gnu/
    libasl-*.so
```

---

## 4. Dependency Rules (Informative)

* `libutil` MUST be a dependency only when PTY support is enabled.
* No GNU extensions should be required for the PIPE-only build.
@ -102,6 +102,16 @@ An ASL host implementation MUST:

---

## 4.1 Authority Enforcement (Normative)

An ASL host MUST NOT advance a domain unless it can prove authority to do so
from domain-local artifacts visible at the current snapshot.

Authority enforcement applies to all domains, including Common, group, and
personal domains.

---
## 5. Domain Model

### 5.1 Domain States

@ -114,6 +124,21 @@ A host MUST track the following domain states:

* `SUSPENDED`
* `REVOKED`

---

### 5.3 Witness Modes (Informative)

Domains operate under one of the following authority modes:

| Mode             | Meaning                                       |
| ---------------- | --------------------------------------------- |
| `single-witness` | One domain/key may emit snapshots             |
| `quorum-witness` | A threshold of domains may authorize emission |
| `self-authority` | This host's domain is the witness             |

Witness mode is policy-defined. Hosts MUST enforce the mode discovered in
domain-local artifacts.

### 5.2 Domain Descriptor

Host-owned metadata MUST include:
@ -177,6 +202,9 @@ A host MUST expose at least the following operations:

The StoreHandle is opaque and scoped to a domain. Admission state MUST gate
capabilities exposed by the StoreHandle (see Section 7).

StoreLocation MAY be any filesystem path or mount. When creating a store, the
host SHOULD initialize the standard ASL store structure (blocks, index, log).

---

## 8. Admission-Gated Capabilities
@ -102,6 +102,19 @@ legacy inputs into artifacts before storage.

---

## 8. Remote Intake Transport (Informative)

When intake is performed over a network boundary, the rescue node MAY use:

* SSH socket forwarding for secure UNIX-socket transport.
* `socat` as a local bridge between TCP and UNIX sockets.
* 9P or SSHFS for remote filesystem access when appropriate.

All remote transports MUST be treated as untrusted until artifacts are sealed
and verified locally.

---

## 9. Versioning

Backward-incompatible changes MUST bump the major version.
@ -47,6 +47,7 @@ into a personal domain with optional courtesy storage.

* Execute PEL programs over the intake snapshot.
* Generate PER receipts and optional TGK edges.
* Use a deterministic ingest engine (e.g., Sedelpress) to mint receipts.

### 2.3 Courtesy Bootstrap (Optional)

@ -74,6 +75,17 @@ into a personal domain with optional courtesy storage.

---

## 3.1 Rescue Flow (Informative)

```
Input Material -> Sedelpress -> PERs + TGK -> Personal Store -> Optional Publish
```

Sedelpress is a deterministic ingest stage that stamps inputs into receipts
and writes sealed artifacts into the local store.

---

## 4. Outputs

A rescue operation SHOULD produce:
138
ops/asl-store-layout-1.md
Normal file

@ -0,0 +1,138 @@
# ASL/STORE-LAYOUT/1 -- On-Disk Store Layout

Status: Draft
Owner: Architecture
Version: 0.1.0
SoT: No
Last Updated: 2026-01-17
Tags: [ops, store, layout, filesystem]

**Document ID:** `ASL/STORE-LAYOUT/1`
**Layer:** O2 -- Operational layout profile

**Depends on (normative):**

* `ASL-STORE-INDEX`

**Informative references:**

* `ASL/HOST/1`

---

## 0. Conventions

The key words **MUST**, **MUST NOT**, **REQUIRED**, **SHOULD**, and **MAY** are to be interpreted as in RFC 2119.

ASL/STORE-LAYOUT/1 defines a recommended filesystem layout for an ASL store. It does not define semantic behavior.

---

## 1. Purpose

Provide a practical, POSIX-friendly on-disk layout that preserves ASL store semantics while remaining compatible with ZFS or other backends.

---

## 2. Minimum Required Components (Informative)

An ASL store requires:

* Immutable blocks
* Append-only log
* Sealed snapshots
* Deterministic replay

Directory layout is an implementation choice. This document defines a recommended layout.

---

## 3. Recommended Domain Layout

Per domain, use:

```
/asl/domains/<domain-id>/
    meta/
    blocks/
    index/
    log/
    snapshots/
    tmp/
```

All paths are domain-local.

---

## 4. Blocks

```
blocks/
    open/
        blk_<uuid>.tmp
    sealed/
        00/
            <blockid>.blk
        ff/
            <blockid>.blk
```

Rules:

* Open blocks are never visible.
* Sealed blocks are immutable.
* Sealed blocks are sharded by prefix for filesystem scalability.
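The sharding rule can be sketched as a path derivation. This is a minimal sketch assuming hex-encoded block IDs and a two-character shard prefix (the prefix width is an assumption, not normative):

```c
#include <stdio.h>
#include <string.h>

/* Derive the sealed-block path from a hex BlockID, sharding by its first
 * two hex characters, e.g. "ab12..." -> "blocks/sealed/ab/ab12....blk". */
static int sealed_block_path(const char *blockid_hex, char *out, size_t outlen)
{
    if (strlen(blockid_hex) < 2)
        return -1; /* need at least one full shard prefix byte */
    int n = snprintf(out, outlen, "blocks/sealed/%.2s/%s.blk",
                     blockid_hex, blockid_hex);
    return (n < 0 || (size_t)n >= outlen) ? -1 : 0;
}
```

Because the prefix is a fixed-width slice of the ID itself, the mapping is deterministic and needs no directory scan to locate a sealed block.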
---

## 5. Index Segments

```
index/
    shard-000/
        segment-0001.idx
        segment-0002.idx
        bloom.bin
    shard-001/
        ...
```

Rules:

* Segments are append-only while open.
* Sealed segments are immutable and log-visible.
* Shards are deterministic per snapshot.

---

## 6. Log and Snapshots

```
log/
    asl.log
snapshots/
    <snapshot-id>/
```

Rules:

* Log is append-only.
* Snapshots pin index and block state for replay.

---

## 7. Temporary and Metadata Paths

* `tmp/` is for transient files only.
* `meta/` contains domain metadata (DAM, policy, host state).

---

## 8. Non-Goals

ASL/STORE-LAYOUT/1 does not define:

* Device selection or mount options
* Snapshot mechanism (ZFS vs other)
* Encryption or key management
119
ops/asl-usb-exchange-1.md
Normal file

@ -0,0 +1,119 @@
# ASL/USB-EXCHANGE/1 -- USB Request/Response Exchange Layout

Status: Draft
Owner: Architecture
Version: 0.1.0
SoT: No
Last Updated: 2026-01-17
Tags: [ops, usb, exchange, offline]

**Document ID:** `ASL/USB-EXCHANGE/1`
**Layer:** O2 -- Offline exchange profile

**Depends on (normative):**

* `ASL/DAP/1`
* `ASL/DAM/1`
* `ASL/POLICY-HASH/1`
* `PER/SIGNATURE/1`

**Informative references:**

* `ASL/AUTH-HOST/1`

---

## 0. Conventions

The key words **MUST**, **MUST NOT**, **REQUIRED**, **SHOULD**, and **MAY** are to be interpreted as in RFC 2119.

ASL/USB-EXCHANGE/1 defines a filesystem layout for offline request/response exchanges via removable media. It does not define PEL or PER encodings.

---

## 1. Purpose

This document defines the on-media layout for USB-based request/response exchanges used in offline rescue, admission, and authority operations.

---

## 2. Request Layout (Normative)

```
/usb/REQUEST/
├── manifest.yaml        # REQUIRED
├── pel-program.yaml     # REQUIRED
├── input-artifacts/     # OPTIONAL
├── policy.hash          # REQUIRED
├── request.sig          # REQUIRED
└── meta/                # OPTIONAL
    ├── requester-domain.txt
    └── notes.txt
```

### 2.1 `manifest.yaml` (Normative)

```yaml
version: 1
request_id: <uuid>
request_type: rescue | admission | authority-op
created_at: <iso8601>
requested_outputs:
  - artifacts
  - receipt
  - dam            # optional
policy_hash: <sha256>
pel_program_hash: <sha256>
input_artifact_hashes:
  - <sha256>
signing:
  algorithm: ed25519
  signer_hint: <string>
```

Invariants:

* `manifest.yaml` is canonical; all hashes are computed over canonical encodings.
* `policy.hash` MUST match `manifest.yaml.policy_hash`.
* `request.sig` MUST cover the canonical manifest.

---

## 3. Response Layout (Normative)

```
/usb/RESPONSE/
├── receipt.per          # REQUIRED
├── published/
│   ├── blocks/
│   ├── index/
│   └── snapshots/
├── dam/                 # OPTIONAL
│   └── domain.dam
├── response.sig         # REQUIRED
└── meta.yaml            # OPTIONAL
```

Invariants:

* RESPONSE is append-only; existing entries MUST NOT be modified.
* `response.sig` MUST cover the canonical receipt and published artifacts manifest.

---

## 4. Exchange Rules (Normative)

1. A RESPONSE MUST correspond to exactly one REQUEST.
2. `receipt.per` MUST be verifiable under `PER/SIGNATURE/1`.
3. Published artifacts MUST be a subset of the requested outputs.
4. If a DAM is included, it MUST match the request type and policy hash.

---

## 5. Non-Goals

ASL/USB-EXCHANGE/1 does not define:

* PEL operator constraints or execution semantics
* PER payload encodings
* Transport beyond filesystem layout
120
tier1/asl-auth-1.md
Normal file

@ -0,0 +1,120 @@
# ASL/AUTH/1 -- Authority, Certificates, and Trust Pins

Status: Draft
Owner: Architecture
Version: 0.1.0
SoT: No
Last Updated: 2025-01-17
Tags: [authority, certificates, trust, policy]

**Document ID:** `ASL/AUTH/1`
**Layer:** L2 -- Authority and trust semantics (no transport)

**Depends on (normative):**

* `ASL/DAM/1`
* `ASL/OCS/1`
* `ASL/POLICY-HASH/1`
* `ASL/LOG/1`

**Informative references:**

* `ASL/OFFLINE-ROOT-TRUST/1`
* `ASL/DOMAIN-MODEL/1`
* `PER/SIGNATURE/1`

---

## 0. Conventions

The key words **MUST**, **MUST NOT**, **REQUIRED**, **SHOULD**, and **MAY** are to be interpreted as in RFC 2119.

ASL/AUTH/1 defines authority, certificates, and trust pin semantics. It does not define encodings or transport.

---

## 1. Purpose

ASL/AUTH/1 defines how domains establish authority, how certificates record authority, and how foreign domains are pinned for trust.

---

## 2. First Principle (Normative)

Certificates do not create authority. They record it.

Authority exists because a domain controls its roots and DAM. Certificates make authority verifiable and replayable.

---

## 3. Certificate Lifecycle (Normative)

### 3.1 Virgin State

Before any certificates exist:

* Domains and logs exist.
* Artifacts and PERs exist.
* No authority is asserted or trusted.

### 3.2 Root Authority

A root authority certificate:

* Is self-signed.
* Is created offline.
* Is stored as an artifact (public component only).
* MUST NOT be used for runtime signing.

### 3.3 Domain Authority

A domain authority certificate binds:

* Domain identity
* Root public keys
* Policy hash

Domain authority certificates MUST be created offline and referenced by the domain DAM.

---

## 4. Trust Pins (Normative)

A trust pin is a local policy binding for a foreign domain.

Rules:

* Pins MUST include domain ID, policy hash, and root key fingerprint(s).
* Pins MUST be explicit and local; they do not imply reciprocity.
* Admission MUST verify pin compatibility before including foreign state.
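A minimal sketch of a pin-compatibility check, with hypothetical struct fields and fixed-width hex strings (this spec does not define the pin encoding):

```c
#include <string.h>

/* Illustrative trust pin: field names and widths are assumptions. */
typedef struct {
    char domain_id[64];
    char policy_hash[65];          /* hex sha256 */
    char root_key_fingerprint[65]; /* hex fingerprint */
} TrustPin;

/* Returns 1 if the admitted domain matches the local pin, else 0.
 * All three bindings must match; there is no partial trust. */
static int pin_matches(const TrustPin *pin,
                       const char *domain_id,
                       const char *policy_hash,
                       const char *root_fpr)
{
    return strcmp(pin->domain_id, domain_id) == 0
        && strcmp(pin->policy_hash, policy_hash) == 0
        && strcmp(pin->root_key_fingerprint, root_fpr) == 0;
}
```

Because the check is purely local, a pin can be verified during admission without any reciprocal statement from the foreign domain.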
---

## 5. PER Signing (Informative)

PER signatures MAY be required by policy. If required:

* The signing key MUST be authorized by the DAM.
* The signature MUST bind snapshot and logseq.
* Validation MUST follow `PER/SIGNATURE/1`.

---

## 6. Foreign Domain Trust (Normative)

Foreign domains are trusted only if:

1. The domain is admitted under ASL/DAP/1.
2. Its policy hash is compatible with local policy.
3. A trust pin exists matching the admitted domain.

---

## 7. Non-Goals

ASL/AUTH/1 does not define:

* Transport or replication protocols
* Certificate encodings
* Operational workflows for key custody
* Witness rotation procedures
145
tier1/asl-common-witness-rotation-1.md
Normal file

@ -0,0 +1,145 @@
# ASL/COMMON-WITNESS-ROTATION/1 -- Common Witness Rotation Artifact

Status: Draft
Owner: Architecture
Version: 0.1.0
SoT: No
Last Updated: 2025-01-17
Tags: [common, witness, rotation, governance]

**Document ID:** `ASL/COMMON-WITNESS-ROTATION/1`
**Layer:** L2 -- Common witness governance (no transport)

**Depends on (normative):**

* `ASL/DAM/1`
* `ASL/POLICY-HASH/1`
* `ASL/LOG/1`

**Informative references:**

* `ASL/OCS/1` -- certificate semantics
* `ASL/OFFLINE-ROOT-TRUST/1`
* `ASL/SYSTEM/1` -- system view

---

## 0. Conventions

The key words **MUST**, **MUST NOT**, **REQUIRED**, **SHOULD**, and **MAY** are to be interpreted as in RFC 2119.

ASL/COMMON-WITNESS-ROTATION/1 defines the artifact used to rotate the Common witness emitter. It does not define transport, storage layout, or quorum transport mechanisms.

---

## 1. Purpose

This document defines the **Witness Rotation Artifact (WRA)** for the Common domain. The WRA is the only mechanism that authorizes a change of the active Common witness emitter while preserving a single linear Common history.

---

## 2. Roles and Terms

* **Witness Emitter:** The single domain authorized to emit the next Common snapshot.
* **Witness Authority:** A domain whose principals may endorse a witness rotation.
* **Rotation Snapshot:** The first Common snapshot emitted by the new witness emitter.

---

## 3. Artifact Identity

* **Artifact type tag:** `asl.common.witness-rotation`
* **Artifact key:** content-addressed (ASL/1-CORE)
* **Visibility:** published within the Common domain

---

## 4. Canonical Structure (Logical)

```text
WitnessRotationArtifact {
    version                : u32
    common_domain_id       : DomainID
    previous_snapshot_id   : SnapshotID
    previous_snapshot_hash : Hash
    old_witness_domain_id  : DomainID
    old_witness_pubkey_id  : KeyID
    new_witness_domain_id  : DomainID
    new_witness_pubkey_id  : KeyID
    policy_ref             : ArtifactRef
    endorsements           : EndorsementSet
    created_at_logseq      : u64
    reserved0              : u32
}

EndorsementSet {
    threshold      : u32
    endorsements[] : Endorsement
}

Endorsement {
    endorser_domain_id : DomainID
    endorser_pubkey_id : KeyID
    signature          : Signature
}
```

Notes:

* `policy_ref` MUST reference the policy artifact governing the Common domain at the time of rotation.
* `reserved0` MUST be 0.

---

## 5. Signing Payload (Normative)

Each endorsement signature MUST cover the canonicalized payload:

```
H(
  version
  || common_domain_id
  || previous_snapshot_id
  || previous_snapshot_hash
  || new_witness_domain_id
  || new_witness_pubkey_id
  || policy_ref
)
```

* `H` is the hash function used by the Common domain.
* The signature algorithm MUST be allowed by the endorser's DAM and policy.
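The concatenation that `H()` covers can be sketched as a payload builder. This is illustrative only: the fixed 32-byte field width and big-endian version encoding are assumptions, since this document does not fix a canonical encoding.

```c
#include <string.h>
#include <stdint.h>
#include <stddef.h>

#define ID_LEN 32  /* assumed width for ID/hash/ref fields, not normative */

/* Concatenate version plus the six ID/hash fields in the order the spec
 * lists; the result is the byte string passed to the domain's H(). */
static size_t build_signing_payload(uint8_t *out,
                                    uint32_t version,
                                    uint8_t ids[6][ID_LEN])
{
    size_t off = 0;
    out[off++] = (uint8_t)(version >> 24);  /* version, big-endian */
    out[off++] = (uint8_t)(version >> 16);
    out[off++] = (uint8_t)(version >> 8);
    out[off++] = (uint8_t)(version);
    for (int i = 0; i < 6; i++) {           /* common_domain_id .. policy_ref */
        memcpy(out + off, ids[i], ID_LEN);
        off += ID_LEN;
    }
    return off; /* 4 + 6 * ID_LEN bytes, hashed by H before signing */
}
```

Fixed-width fields keep the concatenation unambiguous without separators; a real encoding would pin these widths in the canonical form.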
---

## 6. Validation Rules (Normative)

A Common domain implementation MUST accept a witness rotation artifact if and only if:

1. `previous_snapshot_id` and `previous_snapshot_hash` match the current trusted Common snapshot.
2. The endorsement set meets or exceeds `threshold` with valid signatures.
3. Each endorser is authorized as a witness authority by the Common domain's policy.
4. `policy_ref` matches the policy hash recorded for the Common domain at the time of rotation.
5. `created_at_logseq` is monotonic and consistent with the Common log ordering.

If any rule fails, the WRA MUST be rejected and MUST NOT affect witness authority.
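The accept/reject decision over the five rules can be sketched as a single conjunction. The struct below is hypothetical: signature verification and authority checks are assumed to have been performed by the caller, which supplies only their results.

```c
#include <stdint.h>

/* Pre-computed check results (illustrative; not a wire format). */
typedef struct {
    int      snapshot_matches;    /* rule 1 */
    uint32_t threshold;           /* rule 2: required endorsement count */
    uint32_t valid_endorsements;  /* rules 2-3: valid AND authorized */
    int      policy_ref_matches;  /* rule 4 */
    int      logseq_monotonic;    /* rule 5 */
} WraChecks;

/* Returns 1 to accept the WRA, 0 to reject; any single failure rejects. */
static int wra_accept(const WraChecks *c)
{
    return c->snapshot_matches
        && c->valid_endorsements >= c->threshold
        && c->policy_ref_matches
        && c->logseq_monotonic;
}
```

The all-or-nothing shape mirrors the normative text: a rejected WRA has no partial effect on witness authority.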
---

## 7. Rotation Semantics (Normative)

* The WRA authorizes exactly one transition from `old_witness_*` to `new_witness_*`.
* The new witness emitter MUST begin emitting snapshots at the next log sequence after the rotation is admitted.
* Only one witness emitter MAY be active at a time.
* A rotation does not grant broader authority beyond emitting Common snapshots.

---

## 8. Non-Goals

ASL/COMMON-WITNESS-ROTATION/1 does not define:

* How endorsements are collected or transported
* Network replication or consensus protocols
* Storage or encoding formats for the artifact
* Automated governance workflows beyond validation rules
164
tier1/asl-domain-model-1.md
Normal file

@ -0,0 +1,164 @@
# ASL/DOMAIN-MODEL/1 -- Domain Topology and Publication Semantics

Status: Draft
Owner: Architecture
Version: 0.1.0
SoT: No
Last Updated: 2025-01-17
Tags: [domains, authority, publication, federation]

**Document ID:** `ASL/DOMAIN-MODEL/1`
**Layer:** L2 -- Domain topology and delegation semantics (no transport)

**Depends on (normative):**

* `ASL/DAM/1`
* `ASL/DAP/1`
* `ASL/POLICY-HASH/1`
* `ASL/FEDERATION/1`
* `ASL/LOG/1`

**Informative references:**

* `ASL/OCS/1` -- offline certificate system
* `ASL/OFFLINE-ROOT-TRUST/1`
* `ASL/SYSTEM/1` -- unified system view

---

## 0. Conventions

The key words **MUST**, **MUST NOT**, **REQUIRED**, **SHOULD**, and **MAY** are to be interpreted as in RFC 2119.

ASL/DOMAIN-MODEL/1 defines domain topology and publication semantics. It does not define transport, storage layout, or encoding.

---

## 1. Purpose

This document defines how domains relate, how authority is delegated, and how publication is made safe and explicit.
It provides a stable mental model for personal, group, and shared domains without introducing implicit trust.

---

## 2. First Principles (Normative)

1. **No implicit inheritance.** Domains are not hierarchical by default; trust is explicit.
2. **Authority is explicit.** Authority is defined by DAM + certificates; it is never implied by naming or topology.
3. **Publication is explicit.** Visibility is controlled by index metadata and policy, not by storage or naming.
4. **Determinism is preserved.** All cross-domain visibility MUST be replayable from snapshots and logs.

---

## 3. Domain Roles (Common, Personal, Working)

These roles are semantic patterns; a deployment MAY create many instances.

### 3.1 Common Domain

A shared, conservative domain for stable artifacts and schemas.

Properties:

* High trust threshold
* Read-mostly, slow-changing
* Publishes broadly

### 3.2 Personal Domain

A domain anchored to a single personal identity and authority.

Properties:

* Root of local agency
* Owns offline roots and DAM
* Decides what to publish and to whom

### 3.3 Working / Ephemeral Domains

Task-focused domains created under delegated authority.

Properties:

* Narrow policy and scope
* Often short-lived
* Results MAY be promoted to personal or common domains

---

## 4. Delegation and Authority (Normative)

Delegation is explicit and certificate-based.

Rules:

* A domain MAY delegate authority to a new domain by issuing a `domain_root` certificate (ASL/OCS/1).
* Delegation MUST be recorded in the receiving domain's DAM and policy hash.
* Delegation does not create inheritance: the delegating domain does not gain automatic visibility into the new domain.

---

## 5. Publication and Visibility (Normative)

Publication is a visibility decision, not a storage action.

Rules:

* Only published artifacts are eligible for cross-domain visibility.
* Publication state MUST be encoded in index metadata (ENC-ASL-CORE-INDEX).
* Blocks and storage layouts MUST NOT be treated as publication units.
* Publication of snapshots (or snapshot hashes) is allowed but MUST NOT imply data publication.

---

## 6. Cross-Domain Trust and Admission (Normative)

Trust is established by explicit admission and policy compatibility.

Rules:

* A receiving domain MUST admit an external domain (ASL/DAP/1) before including its state.
* Policy hash compatibility MUST be checked before accepting published artifacts.
* A domain MAY pin a trusted foreign domain without reciprocal trust.

---

## 7. Safe Publication Patterns (Informative)

### 7.1 Personal to Personal Archive

```
personal/rescue -> personal/archive
```

* Publish explicitly from the working domain to an archival domain.
* Only published artifacts are visible across the boundary.

### 7.2 Personal to Group Domain

```
personal/project -> group/shared
```

* Requires admission by the group domain and policy compatibility.
* No unilateral publishing into the group domain.

### 7.3 Personal to Public Domain

```
personal/public -> common/public
```

* One-way trust is permitted.
* The public domain pins the personal domain; the personal domain need not pin the public domain.

---

## 8. Non-Goals

ASL/DOMAIN-MODEL/1 does not define:

* Transport or replication protocols
* Encoding formats
* Storage layouts or filesystem assumptions
* Governance workflows beyond admission and policy compatibility
113
tier1/asl-encrypted-blocks-1.md
Normal file

@ -0,0 +1,113 @@
# ASL/ENCRYPTED-BLOCKS/1 -- Encrypted Block Storage Across Domains

Status: Draft
Owner: Architecture
Version: 0.1.0
SoT: No
Last Updated: 2025-01-17
Tags: [encryption, blocks, federation, storage]

**Document ID:** `ASL/ENCRYPTED-BLOCKS/1`
**Layer:** L2 -- Encrypted storage semantics (no transport)

**Depends on (normative):**

* `ASL-STORE-INDEX`
* `ASL/FEDERATION/1`
* `ASL/LOG/1`

**Informative references:**

* `ASL/DOMAIN-MODEL/1`
* `ASL/POLICY-HASH/1`

---

## 0. Conventions

The key words **MUST**, **MUST NOT**, **REQUIRED**, **SHOULD**, and **MAY** are to be interpreted as in RFC 2119.

ASL/ENCRYPTED-BLOCKS/1 defines semantics for storing encrypted blocks across domains. It does not define encryption algorithms, key management, or transport.

---

## 1. Purpose

This document defines how encrypted blocks may be stored in a foreign domain without transferring semantic authority or decryption capability.

---

## 2. Core Principle (Normative)

A domain MAY store encrypted blocks for another domain, but MUST NOT assert semantic meaning for those bytes.

Meaning is owned by the domain that holds the decryption keys and index entries.

---

## 3. Encryption Model (Normative)

### 3.1 Block Encryption

Before sealing, a block MAY be encrypted:

```
plaintext_block
  -> encrypt(K)
  -> ciphertext_block
  -> BlockID = H(ciphertext_block)
```

Rules:

* Encryption occurs before sealing.
* `BlockID` is computed over ciphertext bytes.
* Deterministic encryption is NOT required.
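A sketch of the identity rule, with FNV-1a standing in for the store's real hash function `H` (which this spec deliberately does not choose). The point is only that the ID is derived from ciphertext bytes, never from plaintext:

```c
#include <stdint.h>
#include <stddef.h>

/* BlockID over ciphertext bytes; FNV-1a is a placeholder for H. */
static uint64_t block_id(const uint8_t *ciphertext, size_t len)
{
    uint64_t h = 1469598103934665603ULL;  /* FNV-1a 64-bit offset basis */
    for (size_t i = 0; i < len; i++) {
        h ^= ciphertext[i];
        h *= 1099511628211ULL;            /* FNV-1a 64-bit prime */
    }
    return h;
}
```

This is also why deterministic encryption is not required: re-encrypting the same plaintext with a fresh nonce yields different ciphertext bytes and therefore a different `BlockID`, without breaking content addressing.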
### 3.2 Key Ownership

* Encryption keys are owned by the originating domain.
* Keys MUST NOT be federated or embedded in index metadata.
* Decryption metadata MUST remain local to the originating domain.

---

## 4. Foreign Domain Storage (Normative)

A foreign domain storing encrypted blocks:

* Treats ciphertext blocks as opaque bytes.
* MAY retain or GC blocks under its local policy.
* MUST NOT create semantic index entries for those blocks.

---

## 5. Originating Domain References (Normative)

The originating domain:

* Maintains index entries referencing the ciphertext `BlockID`.
* Applies normal visibility, log, and snapshot rules.
* Uses local decryption metadata to materialize plaintext.

---

## 6. Cross-Domain References (Informative)

Two references are distinct:

* **Storage reference:** the foreign domain stores ciphertext blocks.
* **Semantic reference:** the originating domain records artifact visibility and meaning.

Foreign storage does not imply federation of semantics.

---

## 7. Non-Goals

ASL/ENCRYPTED-BLOCKS/1 does not define:

* Key exchange or key discovery
* Encryption algorithm choices
* Transport or replication protocols
* Storage layout or block packing rules
@ -138,6 +138,8 @@ Absence of optional attributes MUST be encoded explicitly.

* Filters are immutable once built
* Filter construction MUST be deterministic
* Filter state MUST be covered by segment checksums
* Filters SHOULD be snapshot-scoped or versioned with their segment to avoid
  unbounded false-positive accumulation over time

---
119
tier1/asl-indexes-1.md
Normal file

@ -0,0 +1,119 @@
# ASL/INDEXES/1 -- Index Taxonomy and Relationships
|
||||
|
||||
Status: Draft
|
||||
Owner: Architecture
|
||||
Version: 0.1.0
|
||||
SoT: No
|
||||
Last Updated: 2025-01-17
|
||||
Tags: [indexes, content, structural, materialization]
|
||||
|
||||
**Document ID:** `ASL/INDEXES/1`
|
||||
**Layer:** L2 -- Index taxonomy (no encoding)
|
||||
|
||||
**Depends on (normative):**
|
||||
|
||||
* `ASL/1-CORE-INDEX`
|
||||
* `ASL-STORE-INDEX`
|
||||
|
||||
**Informative references:**
|
||||
|
||||
* `ASL/SYSTEM/1`
|
||||
* `TGK/1`
|
||||
|
||||
---

## 0. Conventions

The key words **MUST**, **MUST NOT**, **REQUIRED**, **SHOULD**, and **MAY** are to be interpreted as in RFC 2119.

ASL/INDEXES/1 defines index roles and relationships. It does not define encodings or storage layouts.

---

## 1. Purpose

This document defines the minimal set of indexes used by ASL systems and their dependency relationships.

---

## 2. Index Taxonomy (Normative)

ASL systems use three distinct indexes:

### 2.1 Content Index

Purpose: map semantic identity to bytes.

```
ArtifactKey -> ArtifactLocation
```

Properties:

* Snapshot-relative and append-only
* Deterministic replay
* Optional tombstone shadowing

This is the ASL/1-CORE-INDEX and is the only index that governs visibility.
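As an illustrative sketch (not a normative encoding), an append-only index with tombstone shadowing behaves like a deterministic replay over an ordered entry log. The entry shapes below are assumptions made for the example:

```python
# Minimal sketch of an append-only content index with tombstone shadowing.
# Entry fields ("op", "key", "location") are illustrative, not defined
# by ASL/INDEXES/1.

def replay(entries):
    """Deterministically rebuild current visibility from an ordered log."""
    current = {}
    for entry in entries:
        if entry["op"] == "put":
            current[entry["key"]] = entry["location"]
        elif entry["op"] == "tombstone":
            # Tombstones shadow earlier puts; nothing is rewritten in place.
            current.pop(entry["key"], None)
    return current

log = [
    {"op": "put", "key": "k1", "location": "seg0:0"},
    {"op": "put", "key": "k2", "location": "seg0:1"},
    {"op": "tombstone", "key": "k1"},
]
```

Replaying the same log always yields the same visible mapping, which is what "deterministic replay" buys: the index itself never needs to be trusted, only rebuilt.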

### 2.2 Structural Index

Purpose: map structural identity to a derivation DAG node.

```
SID -> DAG node
```

Properties:

* Deterministic and rebuildable
* Does not imply materialization
* May be in-memory or persisted

### 2.3 Materialization Cache

Purpose: record previously materialized content for a structural identity.

```
SID -> ArtifactKey
```

Properties:

* Redundant and safe to drop
* Recomputable from DAG + content index
* Pure performance optimization

---

## 3. Dependency Rules (Normative)

Dependencies MUST follow this direction:

```
Structural Index -> Materialization Cache -> Content Index
```

Rules:

* The Content Index MUST NOT depend on the Structural Index.
* The Structural Index MUST NOT depend on stored bytes.
* The Materialization Cache MAY depend on both.

---

## 4. PUT/GET Interaction (Informative)

* PUT registers structure (if used), resolves to an ArtifactKey, and updates the Content Index.
* GET consults only the Content Index and reads bytes from the store.
* The Structural Index and Materialization Cache are optional optimizations for PUT.
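The interaction above can be sketched with the three indexes as plain dicts. Hashing stands in for structural registration and ArtifactKey derivation; all names are illustrative and not part of the spec.

```python
# Sketch of PUT/GET over the three indexes of ASL/INDEXES/1.
import hashlib

structural_index = {}       # SID -> DAG node (rebuildable)
materialization_cache = {}  # SID -> ArtifactKey (safe to drop)
content_index = {}          # ArtifactKey -> ArtifactLocation (governs visibility)
store = {}

def put(sid, dag_node, data):
    structural_index[sid] = dag_node        # register structure (optional)
    key = hashlib.sha256(data).hexdigest()  # resolve ArtifactKey
    materialization_cache[sid] = key        # pure optimization
    store[key] = data
    content_index[key] = "store:" + key     # update visibility
    return key

def get(key):
    # GET consults only the content index, never the structural index.
    location = content_index[key]
    return store[location.removeprefix("store:")]
```

Note that dropping the materialization cache never affects GET: visibility flows only through the content index, matching the dependency rules in section 3.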

---

## 5. Non-Goals

ASL/INDEXES/1 does not define:

* Encodings for any index
* Storage layout or sharding
* Query operators or traversal semantics
@@ -281,6 +281,13 @@ ASL/LOG/1 does not define:

---

## 10. Invariant (Informative)

> If it affects visibility, admissibility, or authority, it goes in the log.
> If it affects layout or performance, it does not.

---

## 11. Summary

ASL/LOG/1 defines the minimal semantic log needed to reconstruct CURRENT.

203 tier1/asl-sops-bundle-1.md Normal file

@@ -0,0 +1,203 @@
# ASL/SOPS-BUNDLE/1 -- Offline Authority and Admission Bundle

Status: Draft
Owner: Architecture
Version: 0.1.0
SoT: No
Last Updated: 2025-01-17
Tags: [sops, admission, authority, offline]

**Document ID:** `ASL/SOPS-BUNDLE/1`
**Layer:** L2 -- Offline authority transport (no runtime use)

**Depends on (normative):**

* `ASL/DAP/1`
* `ASL/DAM/1`
* `ASL/OCS/1`
* `ASL/POLICY-HASH/1`

**Informative references:**

* `ASL/OFFLINE-ROOT-TRUST/1`
* `ASL/DOMAIN-MODEL/1`

---

## 0. Conventions

The key words **MUST**, **MUST NOT**, **REQUIRED**, **SHOULD**, and **MAY** are to be interpreted as in RFC 2119.

ASL/SOPS-BUNDLE/1 defines an offline transport container for authority material. It does not define runtime APIs or key management behavior beyond bundle contents.

---

## 1. Purpose

The ASL SOPS Bundle is a sealed, offline-deliverable container used to transport authority material for:

* Domain admission
* Authority bootstrap
* Courtesy leasing
* Initial artifact ingestion
* Disaster recovery and rescue

It is a transport and custody format only. It MUST NOT be used for runtime access or online signing.

---

## 2. Design Principles (Normative)

1. Offline-first
2. Self-contained
3. Minimal trust surface
4. Explicit separation of authority vs policy
5. Human-inspectable before decryption
6. Machine-verifiable after decryption

---

## 3. Container Format (Normative)

* Outer format: SOPS-encrypted YAML or JSON.
* Encryption targets: age keys, PGP keys, or hardware-backed keys.
* Only `contents.*` is encrypted; metadata remains readable.

Recommended filename:

```
asl-admission-<domain-id-short>.sops.yaml
```
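As an illustrative invocation (assuming the stock `sops` CLI; the age recipient and regex values are examples, not spec requirements), a bundle could be sealed so that only the `contents` subtree is encrypted while metadata stays readable:

```
sops --encrypt \
  --age age1examplerecipient... \
  --encrypted-regex '^contents$' \
  asl-admission-example.yaml > asl-admission-example.sops.yaml
```

This keeps `bundle_id`, `purpose`, and `domain_id` human-inspectable before decryption, per the design principles above.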

---

## 4. High-Level Structure (Normative)

```yaml
asl_sops_bundle:
  version: "0.1"
  bundle_id: <uuid>
  created_at: <iso8601>
  purpose: admission | rescue | recovery
  domain_id: <DomainID>
  contents:
    authority: ...
    policy: ...
    admission: ...
    optional:
      artifacts: ...
      notes: ...
  sops:
    ...
```

---

## 5. Authority Section (Normative)

### 5.1 Root Authority

```yaml
authority:
  domain:
    domain_id: <DomainID>
    root_public_key:
      type: ed25519
      encoding: base64
      value: <base64>
    root_private_key:
      type: ed25519
      encoding: base64
      value: <base64>
    key_created_at: <iso8601>
```

Rules:

* Root private keys MUST NOT leave the bundle.
* The bundle SHOULD be destroyed after use.

### 5.2 Domain Authority Manifest (DAM)

The DAM MUST be embedded verbatim:

```yaml
authority:
  dam: <DAM object>
```

---

## 6. Policy Section (Normative)

```yaml
policy:
  policy_hash: <hash>
  policy_document: <DomainPolicy object>
```

Rules:

* The `policy_hash` MUST match the canonical hash of `policy_document`.
* The policy document MUST be compatible with ASL/POLICY-HASH/1.
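The hash check can be sketched as follows. The canonical form is governed by ASL/POLICY-HASH/1; sorted-key JSON with SHA-256 is an assumption made only for this example.

```python
import hashlib
import json

def canonical_policy_hash(policy_document):
    # Assumed canonicalization: sorted-key JSON, no extra whitespace.
    # The real canonicalization rules live in ASL/POLICY-HASH/1.
    canonical = json.dumps(policy_document, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def policy_section_valid(policy_section):
    return policy_section["policy_hash"] == canonical_policy_hash(
        policy_section["policy_document"]
    )
```

Whatever canonicalization is used, it must be key-order independent so that two hosts serializing the same policy document agree on the hash.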

---

## 7. Admission Section (Normative)

```yaml
admission:
  requested_scope: <scope>
  courtesy_lease: <optional>
  admission_request: <DAP admission object>
```

Rules:

* `admission_request` MUST satisfy ASL/DAP/1 admission requirements.
* Courtesy lease requests MUST be explicit and MUST NOT imply authority.

---

## 8. Optional Sections (Informative)

Optional sections MAY include:

* Artifacts for bootstrap
* Notes for operators

Optional content MUST NOT be required for admission.

---

## 9. Validation Rules (Normative)

An ASL-HOST MUST reject the bundle if:

1. The SOPS envelope cannot be decrypted by allowed keys.
2. `policy_hash` does not match `policy_document`.
3. The DAM is missing or invalid.
4. The admission request violates ASL/DAP/1 requirements.
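The rejection rules might be checked post-decryption along these lines. Field names follow the structure in section 4; the DAM and admission check functions are placeholders for ASL/DAM/1 and ASL/DAP/1 validation, and the canonical hash is an assumed sorted-key JSON form.

```python
# Post-decryption validation sketch. Rule 1 (decryption by allowed keys)
# is handled by the SOPS layer before this runs.
import hashlib
import json

def canonical_hash(obj):
    return hashlib.sha256(
        json.dumps(obj, sort_keys=True, separators=(",", ":")).encode("utf-8")
    ).hexdigest()

def validate_bundle(bundle, dam_valid, admission_valid):
    """Return a list of rejection reasons; an empty list means accept."""
    contents = bundle["contents"]
    reasons = []
    policy = contents.get("policy", {})
    if policy.get("policy_hash") != canonical_hash(policy.get("policy_document")):
        reasons.append("policy_hash does not match policy_document")
    dam = contents.get("authority", {}).get("dam")
    if dam is None or not dam_valid(dam):
        reasons.append("DAM missing or invalid")
    if not admission_valid(contents.get("admission", {}).get("admission_request")):
        reasons.append("admission request violates ASL/DAP/1")
    return reasons
```

Collecting all reasons (rather than failing fast) is a design choice for operator diagnostics; the spec only requires that a bundle failing any rule is rejected.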

---

## 10. Non-Goals

ASL/SOPS-BUNDLE/1 does not define:

* Online key usage or rotation
* Long-term key storage requirements
* Transport mechanisms for bundles
* Host-side UI or operator workflow

---

## 11. SOPS Role Clarification (Informative)

SOPS is used as a **transport envelope** only:

* It protects authority material in transit.
* It does not establish trust or replace signatures.
* After decryption, only the payload bytes are hashed or signed.

SOPS containers MUST NOT be treated as authority artifacts.