The Super Linear Concurrent Architecture:
50% Pipeline Efficiency using Real-Time AI
A real-time-AI-powered concurrent memory access system can achieve 50% pipeline efficiency by reducing data exceptions in a multicore CPU
- A single-layer unified cache with a common protocol for all pipelines, driven by a predictive MMU that looks ahead at the data each pipeline will need and prepares it in advance
- Multiple 100bps data lanes providing ultra-high-speed, ultra-high-bandwidth memory access
- An active memory system designed to handle all memory-access patterns from up to 64 processor pipelines
- A concurrent multicore CPU tightly coupled with the real-time AI memory system for instant data access
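The predictive-MMU idea above can be illustrated in software with a toy look-ahead prefetcher. This is a minimal sketch, not Revatron's design: a trivial stride predictor stands in for the real-time AI, and all class and function names here are illustrative.

```python
class StridePredictor:
    """Stand-in for the real-time AI: guesses each pipeline's next address
    by assuming a constant stride between its consecutive accesses."""
    def __init__(self):
        self.last = {}     # pipeline id -> last address seen
        self.stride = {}   # pipeline id -> observed stride

    def observe(self, pipe, addr):
        if pipe in self.last:
            self.stride[pipe] = addr - self.last[pipe]
        self.last[pipe] = addr

    def predict(self, pipe):
        if pipe in self.stride:
            return self.last[pipe] + self.stride[pipe]
        return None  # not enough history yet


class UnifiedCache:
    """Single-layer cache shared by all pipelines (sizes/eviction omitted)."""
    def __init__(self):
        self.lines = set()
        self.hits = 0
        self.misses = 0

    def access(self, addr):
        if addr in self.lines:
            self.hits += 1
        else:
            self.misses += 1
            self.lines.add(addr)  # fetch on demand after a miss

    def prefetch(self, addr):
        self.lines.add(addr)      # stage data before it is requested


def run(trace, predictive):
    cache = UnifiedCache()
    pred = StridePredictor()
    for pipe, addr in trace:
        cache.access(addr)
        pred.observe(pipe, addr)
        if predictive:
            nxt = pred.predict(pipe)
            if nxt is not None:
                cache.prefetch(nxt)  # look ahead for this pipeline
    return cache.hits, cache.misses


# Two pipelines streaming through memory with fixed strides, interleaved:
trace = [(p, base + i * step) for i in range(64)
         for p, base, step in [(0, 0, 8), (1, 4096, 16)]]
hits_pred, _ = run(trace, predictive=True)
hits_base, _ = run(trace, predictive=False)
```

With look-ahead enabled, every access after the stride is learned lands in the cache before the pipeline asks for it; without it, this streaming trace never hits at all.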
The Super Linear Concurrent Architecture:
Three Layers of Real-Time AI Memory System

- There are 3 layers in Revatron’s real-time AI memory system design, with each layer requiring a different level of modification to the multicore CPU
- Layer 1: No modification to the multicore CPU. Add a unified cache and predictive DDR controllers in the main memory system to increase the page-hit percentage in DDR memory
- Layer 2: Memory system modification. Remove the L1 and L2 caches and replace the MMU with one powered by real-time AI to enable predictive access of DDR
- Layer 3: Pipeline modification. Add variable handling in the pipeline to increase efficiency even when data is not locally available
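Layer 1's claim, that a predictive DDR controller raises the page-hit percentage, can be sketched with a toy model of a DRAM row buffer. This is an assumption-laden illustration (the scheduling policy shown is simple first-ready reordering, not Revatron's actual controller): a request "page-hits" when its row is already open in its bank, so a controller that looks ahead in its queue and prefers requests to open rows converts row conflicts into hits.

```python
def page_hits(requests, lookahead):
    """Count DDR row-buffer (page) hits for a request stream.

    Each request is a (bank, row) pair. A request is a page hit when
    its row is already open in its bank; otherwise the bank must
    activate the new row (a page miss/conflict).
    """
    open_row = {}            # bank -> currently open row
    pending = list(requests)
    hits = 0
    while pending:
        idx = 0              # default: serve strictly in order
        if lookahead:
            # Predictive controller: scan the queue for a request
            # that targets an already-open row and serve it first.
            for i, (bank, row) in enumerate(pending):
                if open_row.get(bank) == row:
                    idx = i
                    break
        bank, row = pending.pop(idx)
        if open_row.get(bank) == row:
            hits += 1
        open_row[bank] = row  # activating a row closes the previous one
    return hits


# Two pipelines ping-ponging between two rows of the same bank:
# in strict order every access conflicts, so the page-hit count is zero.
reqs = [(0, 1), (0, 2)] * 8
inorder_hits = page_hits(reqs, lookahead=False)
lookahead_hits = page_hits(reqs, lookahead=True)
```

In this worst-case interleaving the in-order controller never page-hits, while the reordering controller batches same-row requests and hits on 14 of the 16 accesses, which is the effect the unified cache plus predictive DDR controllers in Layer 1 are aiming at.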