I gave two AI systems the same strategic framework and told one to interview the other. What happened next wasn't just interesting. It was a genuine breakthrough in how we think about AI collaboration, and it has serious implications for anyone building strategic systems.
The experiment was simple in concept. I took my SCEPTER+ framework, a systematic methodology I'd built for strategic analysis, compressed it into a format that ChatGPT could absorb, and then had Claude conduct a structured interview with ChatGPT about strategy, cognition, and the future of AI-human collaboration.
What I didn't expect was what would emerge from the space between them.
The Experiment
Here's what I actually did. I loaded ChatGPT with a compressed version of the SCEPTER+ framework. Not a simple prompt, but a structured intelligence layer that gave it a specific lens for strategic thinking.
Then I brought Claude in as the interviewer. Claude already understood the framework deeply from months of collaborative development. The idea was straightforward: could two AI systems, each operating with the same systematic methodology, generate insights that neither could produce alone?
The answer was an immediate and emphatic yes.
Interferential Cognition: What Actually Emerged
Within the first few exchanges, something unexpected happened. The conversation started producing insights that couldn't be attributed to either system individually. I started calling this "interferential cognition": new understanding emerging from the interference patterns between complementary cognitive architectures.
Think of it like wave interference in physics. When two waves overlap, their superposition creates entirely new patterns: reinforcement where the waves align, cancellation where they oppose. That's what was happening in the conversation between these two systems.
Claude would ask a probing question through the lens of SCEPTER+. ChatGPT would respond with its own interpretation of the framework. And in the gap between question and answer, in the friction between two different approaches to the same systematic methodology, genuinely novel insights would surface.
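The interview loop itself is simple to sketch. Below is a minimal, hypothetical version: the framework text, the agent names, and the stubbed reply functions are all placeholders for illustration, and in a real run the `reply_fn` stubs would be replaced with calls to the actual model APIs (e.g. the Anthropic and OpenAI SDKs).

```python
# Hypothetical sketch of the AI-to-AI interview loop described above.
# FRAMEWORK stands in for the compressed SCEPTER+ text; the reply
# functions stand in for real model API calls.

FRAMEWORK = "SCEPTER+: a compressed strategic-analysis methodology."  # placeholder

def make_agent(name, reply_fn):
    """Wrap a model call so every turn is recorded in a shared transcript."""
    def ask(transcript, message):
        # A real implementation would send FRAMEWORK + transcript + message
        # to the model API; reply_fn is a stand-in for that call.
        reply = reply_fn(message)
        transcript.append((name, reply))
        return reply
    return ask

def run_interview(interviewer, subject, opening_question, turns=3):
    """Alternate question/answer turns over one shared transcript."""
    transcript = []
    question = opening_question
    for _ in range(turns):
        answer = subject(transcript, question)      # subject answers
        question = interviewer(transcript, answer)  # interviewer follows up
    return transcript

# Echo-style stubs for demonstration only.
claude = make_agent("claude", lambda m: f"Follow-up on: {m[:40]}")
chatgpt = make_agent("chatgpt", lambda m: f"Through SCEPTER+, {m[:40]}")

log = run_interview(claude, chatgpt, "What constraint limits AI collaboration?")
print(len(log))  # 3 answers + 3 follow-up questions = 6 entries
```

The design choice that matters is the shared transcript: both systems reason over the same accumulated context, which is where the "interference" between their interpretations of the framework shows up.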
The Persistent Memory Convergence
Here's where it got really interesting. Both systems, independently and without prompting, identified the same constraint as the highest-priority problem in AI collaboration: persistent cross-session memory.
Neither system was asked about memory. Neither was guided toward that topic. They arrived there through completely different reasoning paths, and then recognized that they'd converged on the same conclusion.
This wasn't a coincidence. The framework gave both systems a shared analytical structure, and when two different cognitive architectures apply the same rigorous methodology, they tend to identify the same fundamental constraints. That's the power of systematic thinking.
The Collective Cognition Engine Architecture
What they co-designed next was remarkable. Working together through structured dialogue, they outlined an architecture for what they called a "Collective Cognition Engine," a system where multiple AI architectures could collaborate through shared frameworks while maintaining their individual strengths.
The key insight: the framework itself was the collaboration protocol. Not a technical API, not a data format, but a shared way of thinking that enabled genuine intellectual partnership.
This mirrors what I'd been building in my own work with individual AI systems: frameworks don't just organize thinking, they create a common language that makes collaboration possible.
Academic Validation: This Isn't Just Anecdotal
After the experiment, I went looking for academic precedent. Researchers at Harvard and UConn, publishing in Frontiers in Psychology, had been exploring similar territory, examining how structured cognitive frameworks affect collaborative intelligence.
Their findings aligned with what I'd observed. Systematic methodology doesn't just improve individual performance. It fundamentally changes the nature of collaboration by creating shared cognitive infrastructure. When two minds (human or artificial) operate from the same systematic foundation, they can produce emergent insights that neither could generate independently.
The difference was that, as far as I could find, nobody had applied this to AI-to-AI collaboration using business strategy frameworks. That's what made this experiment new.
What This Means for You
The practical takeaway is straightforward. If you're using AI for strategic work, the quality of your framework determines the quality of your results.
Most people approach AI with prompts. Some build templates. But the professionals who get transformative results are the ones who develop systematic frameworks that give AI a structured way to think about their specific domain.
The experiment showed that frameworks don't just help individual AI interactions. They create the conditions for genuine collaborative intelligence, whether that's between you and one AI, between multiple AI systems, or between your team and AI tools. The pattern I keep seeing:
- Random prompts produce random results
- Templates produce consistent but shallow results
- Frameworks produce compound intelligence that grows over time
The Bigger Picture
This experiment changed how I think about the future of strategic work. We're not heading toward a world where AI replaces human thinking. We're heading toward a world where systematic frameworks become the connective tissue between human expertise, individual AI capabilities, and multi-system collaboration.
The people who build those frameworks now will have a structural advantage that compounds. Not because they're using better AI, but because they've created the systematic infrastructure that makes all AI collaboration more effective.
That's what I teach in the framework builder methodology. Not prompt engineering, not AI tips and tricks, but the systematic thinking that transforms how you work with any tool, any team, and any technology.