
Neuronspike Technologies


We are a fabless semiconductor company developing brain-inspired AI chipsets for generative AI models, helping enterprises build proprietary AI systems. Our first chip, Neuronspike Moore, is up to 21x faster than existing processors on the market.


Our chip designs are based on a compute-in-memory architecture, where computations happen within the memory itself. This enables ultra-high-throughput computation on our chips.


A single Neuronspike Moore chip can match the generative AI throughput of four Nvidia A100 GPUs.


We will soon start accepting pre-orders and partnerships.


Our mission is to develop fast and efficient chipsets that help enterprises create the future and improve lives using artificial intelligence.

Performance comparison on the Llama-8B model: inference with the Neuronspike Moore chip vs. inference with the Nvidia A100 chip.

Towards AGI

Generative AI models and multi-modal AI models may lead to versatile artificial general intelligence, where machines can reason and perform visual, language, and decision-making tasks. However, these models have grown rapidly in size and are expected to grow another 1000x within the next three years.

This creates the need for a solution to the memory wall in the von Neumann architecture used by CPUs and GPUs: memory bandwidth limits the computational throughput of the processor, because large amounts of data must constantly be moved between memory and compute units.
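A rough back-of-envelope calculation illustrates the memory wall for generative AI decoding. In single-stream token generation, every new token requires streaming all model weights from memory, so memory bandwidth caps the token rate regardless of compute power. The numbers below (Llama-8B at fp16, ~2000 GB/s of HBM bandwidth, roughly that of an A100-class GPU) are illustrative assumptions, not measurements of our hardware:

```python
# Illustrative sketch: why single-stream generative AI decoding is
# memory-bound on a von Neumann processor. Numbers are assumptions.

def max_tokens_per_sec(params_billion, bytes_per_param, bandwidth_gb_s):
    """Upper bound on decode speed when generating each token
    requires streaming all model weights from memory."""
    weight_gb = params_billion * bytes_per_param  # total weight footprint
    return bandwidth_gb_s / weight_gb             # tokens/s ceiling

# Llama-8B in fp16 (2 bytes/param) at ~2000 GB/s memory bandwidth:
# 16 GB of weights per token -> at most ~125 tokens/s, no matter
# how many FLOPS the compute units can deliver.
print(max_tokens_per_sec(8, 2, 2000))
```

The ceiling falls entirely out of the bandwidth-to-model-size ratio, which is why moving computation into the memory, rather than adding more compute units, is the lever that matters here.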


Compute-in-memory architecture offers a promising solution to the memory wall. Computations happen directly in the memory without moving data around, resulting in more than 20x performance gains for memory-bound workloads such as generative AI.


We are bringing compute-in-memory architecture to market so that humanity can advance towards artificial general intelligence.

supported by
