Neuromorphic Computing Requires Co-Design Across Materials, Devices, Circuits, and Algorithms That No Single Discipline Can Provide
Neuromorphic computing — processors that mimic neural architectures to achieve brain-like energy efficiency and parallel processing — requires simultaneous co-design across four layers that are studied by different disciplines with incompatible design tools, evaluation metrics, and optimization criteria. Materials scientists develop memristive and phase-change devices optimized for switching speed and endurance, but these device-level metrics don't translate to circuit-level performance. Circuit designers build crossbar arrays assuming ideal device behavior that real devices don't exhibit (non-linearity, variability, drift). Algorithm researchers develop spiking neural network architectures assuming ideal hardware that doesn't exist. Neuroscientists study biological neural computation using frameworks that neither hardware designers nor algorithm researchers can translate into engineering specifications. The result is that each layer is optimized independently, producing locally impressive results that fail to compose into competitive systems.
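The mismatch between device-level and circuit-level metrics can be made concrete with a toy sketch (the lognormal variability model, array size, and all numbers here are illustrative assumptions, not measured data): a crossbar computes a matrix-vector product in analog, and the output error that device variability induces depends on the whole weight matrix, not on any single-device figure of merit.

```python
import numpy as np

rng = np.random.default_rng(0)

def crossbar_mvm(weights, x, sigma=0.1):
    """Analog matrix-vector multiply on a memristive crossbar.

    Each weight maps to a device conductance; sigma is a hypothetical
    relative device-to-device variability. Real arrays add further
    non-idealities (differential conductance pairs, non-linearity, drift).
    """
    G = weights * rng.lognormal(mean=0.0, sigma=sigma, size=weights.shape)
    return G @ x

W = rng.standard_normal((64, 128))   # one synaptic layer
x = rng.standard_normal(128)         # input activations

ideal = W @ x
noisy = crossbar_mvm(W, x, sigma=0.1)
rel_err = np.linalg.norm(noisy - ideal) / np.linalg.norm(ideal)
```

With these assumptions the relative output error lands on the order of the per-device variability; whether that is tolerable is decided at the algorithm layer, which is exactly the cross-layer question the brief raises.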
Current AI computing is projected to consume 4–5% of global electricity by 2030, driven by the fundamental inefficiency of running neural network algorithms on von Neumann architectures. The human brain performs comparable computation on roughly 20 watts, about six orders of magnitude more energy-efficient per operation than current AI hardware. Neuromorphic computing could close this gap, but only if the cross-disciplinary integration challenge is solved. Intel's Loihi and IBM's TrueNorth demonstrate that neuromorphic chips can achieve 100–1000× energy efficiency improvements for specific tasks, but these systems were designed as monolithic projects within single organizations — an approach that doesn't scale to the diversity of materials, architectures, and applications needed for broad neuromorphic computing deployment.
Vertically integrated neuromorphic projects (Intel Loihi, IBM TrueNorth, BrainScaleS, SpiNNaker) produce working systems but use conventional CMOS technology rather than emerging devices, leaving the materials-level efficiency gains on the table. Researchers on emerging devices (memristors, spintronic devices, photonic synapses) demonstrate individual devices with promising properties but cannot evaluate system-level performance because they lack circuit and algorithm design expertise. Neuromorphic benchmarks (SNNBench, N-MNIST) evaluate algorithms but don't capture hardware constraints. Co-design frameworks in electronic design automation (EDA) exist for conventional semiconductor design but assume well-characterized device models — neuromorphic devices are too immature and variable for standard EDA tools. The IEEE IRDS roadmap identifies the co-design gap as a top challenge but provides no mechanism to bridge it.
- Cross-layer simulation frameworks that allow researchers at each level to evaluate how their design choices propagate through the full stack, so a materials scientist can see how their device variability affects algorithm accuracy, and an algorithm designer can see how their network topology demands specific device properties.
- Standardized neuromorphic device models (analogous to SPICE models for transistors) that capture real device non-idealities in a format circuit designers can use.
- Cross-disciplinary design challenge problems where the same application target (e.g., keyword spotting, visual object detection) is used to evaluate contributions at each layer, enabling direct comparison of materials-level, circuit-level, and algorithm-level improvements.
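A minimal sketch of what such a standardized device model could look like (every parameter name and value here is hypothetical, not taken from any real device datasheet): a small behavioral model capturing programming non-linearity, cycle-to-cycle variability, and retention drift, exposed through methods a circuit simulator could call.

```python
import math
import random
from dataclasses import dataclass

@dataclass
class MemristorModel:
    """Hypothetical behavioral device model (a SPICE-model analogue).

    Parameters are illustrative; a real model would be fit to
    measured device data.
    """
    g_min: float = 1e-6        # minimum conductance (S)
    g_max: float = 1e-4        # maximum conductance (S)
    nonlinearity: float = 2.0  # saturation of potentiation/depression
    c2c_sigma: float = 0.05    # cycle-to-cycle programming noise (relative)
    drift_nu: float = 0.01     # retention drift exponent

    def program_step(self, g: float, potentiate: bool, rng=random) -> float:
        """One programming pulse: saturating, noisy conductance update."""
        span = self.g_max - self.g_min
        frac = (g - self.g_min) / span
        if potentiate:
            dg = span * 0.02 * math.exp(-self.nonlinearity * frac)
        else:
            dg = -span * 0.02 * math.exp(-self.nonlinearity * (1 - frac))
        dg *= 1 + rng.gauss(0.0, self.c2c_sigma)
        return min(self.g_max, max(self.g_min, g + dg))

    def drift(self, g: float, t_seconds: float) -> float:
        """Power-law conductance decay over time (phase-change-style)."""
        return g * (1 + t_seconds) ** (-self.drift_nu)

model = MemristorModel()
g = model.g_min
for _ in range(20):                       # program toward high conductance
    g = model.program_step(g, potentiate=True)
g_read = model.drift(g, t_seconds=3600)   # conductance read one hour later
```

A circuit simulator would call `program_step` during weight writing and `drift` before inference; fitting such parameters to measured devices is precisely the materials-to-circuits handoff the brief argues is missing.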
A team could take a specific neuromorphic application (e.g., keyword spotting in audio) and implement it at two different abstraction levels — an ideal algorithm-level simulation and a device-constrained circuit-level simulation using published memristor device models — documenting how real device non-idealities degrade algorithmic performance and identifying which device parameters matter most. A hardware team could fabricate a small crossbar array using commercial memristive devices and benchmark it against simulation predictions, characterizing the reality-model gap. Relevant disciplines: electrical engineering, materials science, computer science, neuroscience, physics.
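The two-level comparison can be sketched end-to-end in a few lines (the toy task, quantization scheme, and noise level are invented for illustration, standing in for keyword spotting and a published device model): train a tiny linear classifier in floating point, then re-evaluate it with weights mapped onto a limited set of conductance levels plus programming noise, and report the accuracy gap.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linearly separable 2-class dataset (stand-in for audio features).
n, d = 400, 16
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = (X @ w_true > 0).astype(float)

# "Algorithm-level" model: least-squares linear classifier on +/-1 targets.
w = np.linalg.lstsq(X, 2 * y - 1, rcond=None)[0]

def accuracy(weights):
    return float(np.mean((X @ weights > 0) == (y == 1)))

def device_constrained(weights, levels=16, noise=0.05):
    """Map weights onto a limited number of conductance levels, then
    add programming noise (hypothetical non-ideality model)."""
    scale = np.abs(weights).max()
    q = np.round(weights / scale * (levels // 2)) / (levels // 2) * scale
    return q * (1 + rng.normal(0.0, noise, size=q.shape))

acc_ideal = accuracy(w)
acc_device = accuracy(device_constrained(w))
gap = acc_ideal - acc_device
```

Sweeping `levels` and `noise` in a sketch like this is how one would identify which device parameters matter most; the fabricated-crossbar benchmark then measures how far even this constrained simulation sits from reality.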
Targets C5 (Disciplinary Silos) and C13 (Frontier Science Convergence). Has all 3 C13 core tags (`failure:disciplinary-silo`, `failure:theoretical-gap`, `breakthrough:knowledge-integration`). The disciplinary silo spans materials science, electrical engineering, computer science, and neuroscience — four disciplines with fundamentally different design methodologies, simulation tools, and evaluation metrics. Source includes IEEE IRDS (non-NSF) — diversifying C13's source base. Distinct from existing digital briefs by focusing on the cross-discipline co-design gap rather than a specific computational challenge. The `temporal:newly-tractable` tag reflects that recent advances in memristive devices, spiking neural network algorithms, and neuromorphic chip fabrication have made the co-design challenge both more urgent and more tractable than a decade ago.
IEEE International Roadmap for Devices and Systems (IRDS), "Beyond CMOS and Emerging Research Devices," 2024; Schuman, C.D. et al., "Opportunities for neuromorphic computing algorithms and applications," Nature Computational Science, 2, 10–19, 2022; Christensen, D.V. et al., "2022 roadmap on neuromorphic computing and engineering," Neuromorphic Computing and Engineering, 2(2), 022501, 2022; accessed 2026-02-25