Dimensional Logic for LLM Output Selection

Python implementation of the Dimensional Logic framework applied to the Prisoner’s Dilemma and AI model selection. Demonstrates how reflexivity and contextuality can stabilize cooperation and improve decision-making in large language models.

This repository contains a proof-of-concept implementation that applies Dimensional Logic (σ₂, μ₃, κ₄) to the outputs of large language models (LLMs).
The goal is to improve reflexivity (reasoning about reasoning) and context coherence when selecting among candidate answers generated by an LLM.


🔍 Background

Classical logic evaluates statements in binary terms (true/false).
Dimensional Logic extends this by introducing additional operators:

  • σ₂ (Systematic Derivation) – ensures structural consistency in reasoning
  • μ₃ (Reflexivity) – models recursive awareness of reasoning steps
  • κ₄ (Context Coherence) – evaluates how well reasoning fits into a broader context

This framework allows us to score and re-rank LLM responses not only by surface plausibility but also by epistemic depth.
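
To make the operators concrete, the following is a minimal sketch that models each of them as a toy scoring function over a candidate answer. The heuristics used here (sentence shape, word overlap with other candidates, word overlap with the context) are illustrative assumptions only, not the definitions implemented in this repository.

    # Toy stand-ins for σ₂, μ₃, κ₄: illustrative heuristics only,
    # not the operators implemented in dimensional_llm_selector_en.py.

    def sigma2(response: str) -> float:
        # Systematic derivation: crude proxy for structural consistency
        return 1.0 if response.strip().endswith(".") else 0.5

    def mu3(response: str, others: list) -> float:
        # Reflexivity: average word overlap with the other candidate answers
        words = set(response.lower().split())
        if not others:
            return 0.0
        return sum(len(words & set(o.lower().split())) / max(len(words), 1)
                   for o in others) / len(others)

    def kappa4(response: str, context: str) -> float:
        # Context coherence: word overlap with a short context description
        words = set(response.lower().split())
        ctx = set(context.lower().replace(",", " ").split())
        return len(words & ctx) / max(len(ctx), 1)

With these toy definitions each operator returns a value between 0 and 1, which a selection step (see “How It Works” below) can combine into a single score.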


📂 Repository Contents

  • dimensional_llm_selector_en.py → Python implementation of dimensional scoring and selection
  • (optional) Example CSV / heatmap → demo results from a toy experiment

🚀 Usage

  1. Clone this repo:

    git clone https://github.com/drwolfgangstegemann-sudo/Dimensional-Logic-Prisoners-Dilemma.git
    cd Dimensional-Logic-Prisoners-Dilemma
  2. Run the script with sample outputs:

    from dimensional_llm_selector_en import dimensional_score, select_best_response

    # Candidate answers generated by an LLM
    outputs = [
        "The capital of France is Paris.",
        "The capital of France is Lyon.",
        "The capital of France is Madrid."
    ]

    # Context description used for the coherence check (κ₄)
    context = "Geography, European capitals"

    # Pick the candidate with the best dimensional score
    best = select_best_response(outputs, context)
    print("Best response:", best)
  3. Example output:

    Best response: The capital of France is Paris.
    
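The same call works for any list of candidate strings together with a short context description. The candidates below are hypothetical inputs, included only to show the calling convention:

    from dimensional_llm_selector_en import select_best_response

    # Hypothetical candidate answers for a different question
    outputs = [
        "Water boils at 100 °C at sea level.",
        "Water boils at 50 °C at sea level."
    ]

    # Short context description, analogous to the example above
    context = "Physics, properties of water"

    print("Best response:", select_best_response(outputs, context))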

📊 How It Works

Each LLM response is scored as:

Score = α · μ₃(response, others) + β · κ₄(response, context)

  • μ₃: Reflexive alignment – does the answer make sense relative to others?
  • κ₄: Contextual coherence – does the answer fit into the given context?
  • α, β: Tunable weights to balance reflexivity and coherence.

If the dimensional score passes a threshold, the response is preferred over naive likelihood-based ranking.
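
Below is a minimal sketch of how this selection rule could look in code, assuming μ₃ and κ₄ are supplied as scoring functions (for example, the toy versions from the Background section). The name dimensional_select, the default weights, and the threshold are illustrative assumptions, not the interface of dimensional_llm_selector_en.py.

    def dimensional_select(outputs, context, mu3_fn, kappa4_fn,
                           alpha=0.5, beta=0.5, threshold=0.1):
        # Score each candidate: alpha * mu3(response, others) + beta * kappa4(response, context)
        scored = []
        for i, response in enumerate(outputs):
            others = outputs[:i] + outputs[i + 1:]
            score = alpha * mu3_fn(response, others) + beta * kappa4_fn(response, context)
            scored.append((score, response))
        best_score, best_response = max(scored, key=lambda pair: pair[0])
        # If no candidate passes the threshold, fall back to the first candidate
        # (standing in for the naive likelihood-based ranking).
        return best_response if best_score >= threshold else outputs[0]

In this sketch the dimensional score only re-ranks the given candidates; tie handling and the fallback behaviour are deliberately simplistic and would need to be tuned for real use.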


🧠 Applications

  • More reliable multi-agent reasoning
  • Reducing hallucinations in LLMs by epistemic filtering
  • Extending game theory and decision theory with reflexive/contextual dimensions
  • Foundations for epistemic-aware AI systems

📜 License

MIT License – free to use, modify, and share.

