dorsaVi pushes deeper into AI hardware with early-stage RRAM testing


dorsaVi has taken another step in its pivot toward advanced semiconductor technology, beginning device-level testing of resistive random access memory (RRAM) chips as part of its roadmap to a 22-nanometre neuromorphic computing platform. The move places the small-cap ASX company squarely within one of the technology sector’s hottest themes: the growing strain on conventional memory architectures as artificial intelligence workloads explode.

The company recently received its initial RRAM test silicon and has commenced early-stage characterisation work at the 180 nm node. This phase focuses on evaluating device performance, material interfaces and how the technology behaves under manufacturing conditions.

While still at an exploratory stage, the testing represents the first step in a staged pathway aimed at scaling the technology down to a more advanced 22 nm node - a process expected to unlock greater memory density, faster access speeds and lower power consumption.

The “memory wall” becomes an industry problem

The strategic rationale behind the work lies in a growing bottleneck facing AI infrastructure.

As machine learning workloads expand, modern processors spend a growing share of their time waiting for data rather than performing calculations. Industry observers call this limitation the “memory wall” - the point at which computing power advances faster than the memory systems that feed it with data.

Conventional computing architectures rely on constant transfers between processors and external memory. These data movements consume energy and introduce latency, creating efficiency constraints that grow more pronounced as workloads scale.

Some estimates suggest that moving data between processors and memory can account for as much as 70–90 per cent of the energy consumed by certain AI systems.
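The arithmetic behind that claim is straightforward to sketch. The snippet below uses illustrative, order-of-magnitude energy figures drawn from widely cited hardware surveys - not numbers published by dorsaVi - to show how data movement can come to dominate a workload's energy budget:

```python
# Back-of-envelope sketch of the "memory wall" energy argument.
# The per-operation energy figures are illustrative order-of-magnitude
# values, not measurements from any specific chip.

COMPUTE_PJ_PER_OP = 1.0     # ~1 pJ for a low-precision multiply-accumulate
DRAM_PJ_PER_ACCESS = 100.0  # ~100 pJ to fetch an operand from off-chip DRAM

def data_movement_share(ops: int, dram_accesses: int) -> float:
    """Fraction of total energy spent moving data rather than computing."""
    compute_energy = ops * COMPUTE_PJ_PER_OP
    movement_energy = dram_accesses * DRAM_PJ_PER_ACCESS
    return movement_energy / (compute_energy + movement_energy)

# A workload that fetches one operand from DRAM for every four operations:
share = data_movement_share(ops=1_000_000, dram_accesses=250_000)
print(f"{share:.0%} of energy goes to data movement")  # 96% of energy ...
```

Even with a modest ratio of memory accesses to arithmetic, the off-chip transfers swamp the compute energy - which is the constraint in-memory architectures aim to remove.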

The result is rising demand for new architectures capable of bringing computation closer to the data itself. In-memory computing and neuromorphic systems, where memory elements also participate in processing tasks, are emerging as potential solutions.

For investors, the theme is already visible in the share price performance of major global memory manufacturers. Companies such as SanDisk, SK Hynix and Micron have seen substantial market capitalisation growth over the past year as demand for AI-related memory infrastructure surges.

What RRAM could enable

dorsaVi’s approach centres on RRAM, an emerging non-volatile memory technology capable of storing data while also enabling computation within memory arrays.

Unlike traditional architectures where memory and processing are separate, RRAM can potentially allow both functions to occur within the same structure. In neuromorphic designs, arrays of memory cells can act as artificial synapses, enabling efficient machine learning operations.
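The idea can be sketched in a few lines of Python: if each cell's conductance stores a neural-network weight, a crossbar array computes a full matrix-vector product as summed currents in a single analog step, via Ohm's law and Kirchhoff's current law. The conductance and voltage values below are hypothetical, and the model ignores real-device non-idealities such as wire resistance and cell-to-cell variability:

```python
# Idealised model of compute-in-memory on an RRAM crossbar.
# G[i][j] is the conductance (in siemens) of the cell at row i, column j;
# each conductance encodes one stored weight. All values are hypothetical.
G = [
    [2e-5, 5e-5, 1e-5, 8e-5],
    [7e-5, 1e-5, 4e-5, 2e-5],
    [3e-5, 6e-5, 9e-5, 1e-5],
]

# Read voltages (volts) applied to the columns act as the input vector.
V = [0.10, 0.20, 0.05, 0.15]

# Each row wire sums its cells' currents (I = G * V per cell, summed by
# Kirchhoff's current law), so the whole matrix-vector product happens
# in one analog step instead of many fetch-compute-store cycles.
I = [sum(g * v for g, v in zip(row, V)) for row in G]

print(I)  # row currents in amperes
```

In a neuromorphic design, those summed row currents play the role of a neuron's weighted input - the multiply-accumulate happens where the weights live, with no data shuttled to a separate processor.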

The company’s roadmap envisions RRAM acting as the foundation for a new class of ultra-efficient AI hardware aimed at “ultra-edge” environments such as robotics, drones and autonomous systems.

These devices often operate under strict constraints: limited power supply, restricted cooling capacity and the need for real-time decision making without reliance on cloud connectivity.

According to the company’s technology plan, the transition from a 40 nm node to a 22 nm process could deliver improvements including lower write voltages, faster write speeds and more efficient compute-in-memory operation.

Targets include write voltages below 2.0 volts, improved reliability at high temperatures and compute-in-memory array efficiency exceeding 20 tera-operations per second per watt (TOPS/W).
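That efficiency figure translates directly into an energy budget per operation - a quick sketch of the arithmetic:

```python
# Converting a TOPS/W efficiency target into energy per operation.
# 20 tera-operations per second per watt is 20e12 operations per joule,
# which works out to 50 femtojoules per operation.

TOPS_PER_WATT = 20.0
ops_per_joule = TOPS_PER_WATT * 1e12   # 1 watt = 1 joule per second
femtojoules_per_op = 1e15 / ops_per_joule

print(femtojoules_per_op)  # 50.0
```

At roughly 50 femtojoules per operation, such an array would sit orders of magnitude below the energy cost of fetching operands from conventional off-chip memory.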

Such performance gains would be particularly relevant for battery-powered systems or embedded AI applications where energy efficiency is paramount.

Neuromorphic ambitions

The RRAM development program is closely linked to dorsaVi’s broader neuromorphic computing strategy.

Neuromorphic hardware attempts to mimic the structure of biological neural systems, using dense networks of artificial synapses and neurons to perform tasks such as pattern recognition and sensory processing more efficiently than traditional processors.

In this framework, RRAM can serve as the non-volatile memory fabric that stores neural network weights while also enabling analog-style computation.

This architecture potentially allows ultra-edge devices to process sensor inputs and run AI models locally rather than sending data back to remote servers.

For applications such as autonomous robotics, wearable medical systems or industrial monitoring, the ability to make decisions instantly and locally could provide meaningful advantages.

Still early days

Despite the technological promise, investors should recognise that the program remains in its early stages.

The current phase involves device characterisation and optimisation following receipt of the first RRAM test wafer, with insights from this work expected to inform further development and scaling.

Commercial deployment - if it occurs - would require additional validation, manufacturing development and integration into larger computing architectures.

Nevertheless, the company argues the initiative aligns with structural trends shaping the semiconductor industry.

Chief executive Mathew Regan said the program responds to mounting pressure on computing systems as AI infrastructure scales.

“The rapid expansion of AI infrastructure is placing increasing pressure on power efficiency and memory utilisation across the computing stack,” Regan said.

While much current investment remains focused on large data-centre hardware, Regan believes the next phase of AI growth will increasingly occur in distributed devices operating at the edge.

“Our RRAM-based in-memory and neuromorphic computing platform is being developed to reduce data movement and enable ultra-low-power, low-latency intelligence where efficiency is critical.”

The investor takeaway

For dorsaVi, the RRAM program represents an attempt to reposition the company within a rapidly evolving semiconductor landscape.

The shift toward AI-enabled edge devices, combined with tightening memory supply chains, could create opportunities for alternative memory technologies capable of delivering higher performance per watt.

Whether the company can translate early-stage development into commercially viable silicon remains to be seen.

But in a world increasingly constrained by the memory demands of artificial intelligence, even small players exploring new architectures may find themselves operating in a sector where technological breakthroughs carry outsized potential.

