Recent applications of computationally calculated word co-occurrences allowed the prediction of left inferior frontal gyrus (LIFG) activation during semantic word processing. Hence, an interactive activation model, the associative read-out model (AROM), utilizes co-occurrences in its semantic processing layer and proposes connectivity from the LIFG along the ventral visual stream. Direct empirical evidence for its connectivity assumptions is, however, so far missing. In this study, we employed psychophysiological interaction analysis on the neuroimaging data of a semantic priming experiment, targeting the LIFG as the main region to resolve semantic conflicts. We further manipulated the prime and target words by co-occurrence-based direct association and semantic similarity in a full-factorial design. At low semantic similarity, we observed increased functional connectivity of the LIFG to the fusiform gyrus, the hippocampus, the anterior cingulate cortex, and the orbitofrontal cortex, indicating a connective pattern analogous to the semantic layer of the AROM. Surprisingly, a low (compared to a high) direct association showed no difference in brain activation, which raises the question about the diverging cognitive processes of the two priming types.

Brain-inspired hyperdimensional computing (HDC) is continuously gaining remarkable attention. It is a promising alternative to traditional machine-learning approaches due to its ability to learn from little data, its lightweight implementation, and its resiliency against errors. However, like traditional machine-learning algorithms, HDC is overwhelmingly data-centric. In-memory computing is rapidly emerging to overcome the von Neumann bottleneck by eliminating data movement between compute and storage units. In this work, we investigate and model the impact of imprecise in-memory computing hardware on the inference accuracy of HDC. We accurately model, for the first time, the voltage-dependent error probability in SRAM-based and FeFET-based in-memory computing. Our modeling is based on 14nm FinFET technology, fully calibrated with Intel measurement data. Experimental results for SRAM reveal that variability-induced errors have a probability of up to 39 percent. Despite such a high error probability, the inference accuracy is only marginally impacted. This opens doors to explore new tradeoffs: thanks to HDC's resiliency against errors, the complexity of the underlying hardware can be reduced, providing large energy savings of up to 6x. We also demonstrate that the resiliency against errors is application-dependent. In addition, we investigate the robustness of HDC against errors when the underlying in-memory hardware is realized using emerging non-volatile FeFET devices instead of mature CMOS-based SRAMs. We demonstrate that inference accuracy does remain high despite the larger error probability, while large area and power savings can be obtained. All in all, HW/SW co-design is the key to efficient yet reliable in-memory hyperdimensional computing, for both conventional CMOS technology and upcoming emerging technologies.

This paper represents the first effort to explore an automated architecture search for hyperdimensional computing (HDC), a type of brain-inspired neural network. Currently, HDC design is largely carried out in an application-specific, ad-hoc manner, which significantly limits its applicability. Furthermore, such ad-hoc design leads to inferior accuracy and efficiency, which suggests that HDC cannot perform competitively against deep neural networks. Herein, we present a thorough study to formulate an HDC architecture search space.
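The HDC abstracts above rely on a small set of core operations: encoding data as high-dimensional random vectors, bundling samples into class prototypes, and classifying by similarity search. The following is a minimal, self-contained sketch of generic HDC classification, assuming random bipolar hypervectors; the dimensionality, noise levels, and function names are illustrative choices, not taken from either paper.

```python
import numpy as np

# Illustrative sketch of basic HDC classification (not the papers'
# specific architectures). Symbols are encoded as random D-dimensional
# bipolar hypervectors; a class prototype is learned by bundling
# (majority-summing) the hypervectors of its training samples;
# inference picks the class whose prototype is most similar to the query.

D = 10_000  # hypervector dimensionality (illustrative choice)
rng = np.random.default_rng(0)

def random_hv():
    """Random bipolar hypervector in {-1, +1}^D."""
    return rng.choice([-1, 1], size=D)

def bundle(hvs):
    """Bundle (elementwise majority) of an odd number of hypervectors."""
    return np.sign(np.sum(hvs, axis=0)).astype(int)

def similarity(a, b):
    """Normalized dot product; 1.0 = identical, ~0.0 = unrelated."""
    return float(a @ b) / D

# Two classes, each learned from noisy copies of a base pattern.
base_a, base_b = random_hv(), random_hv()

def noisy(hv, flip=0.2):
    """Flip a fraction of components to simulate sample variation."""
    mask = rng.random(D) < flip
    return np.where(mask, -hv, hv)

proto_a = bundle([noisy(base_a) for _ in range(5)])
proto_b = bundle([noisy(base_b) for _ in range(5)])

query = noisy(base_a, flip=0.3)  # unseen, noisy sample of class A
scores = {"A": similarity(query, proto_a), "B": similarity(query, proto_b)}
prediction = max(scores, key=scores.get)  # → "A"
```

Because the hypervectors are near-orthogonal in high dimensions, the query remains clearly closer to its own class prototype even after 30 percent of its components are flipped, which is the property the abstracts refer to as learning from little data with resiliency against errors.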
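The robustness result above — inference accuracy only marginally impacted even at bit-error probabilities up to 39 percent — can be illustrated by injecting random component flips into a stored hypervector. Flipping each component independently with probability p scales the expected similarity to the original by (1 − 2p), so for p well below 0.5 the class remains distinguishable from an unrelated one. The error rate p and the dimensionality here are illustrative stand-ins, not the paper's voltage-dependent, measurement-calibrated SRAM/FeFET error model.

```python
import numpy as np

# Sketch: why HDC tolerates in-memory bit errors. The uniform flip
# probability p below is a simplified stand-in for variability-induced
# errors in SRAM/FeFET in-memory hardware.

D = 10_000
rng = np.random.default_rng(1)
clean = rng.choice([-1, 1], size=D)   # stored class prototype
other = rng.choice([-1, 1], size=D)   # unrelated class prototype

def inject_errors(hv, p):
    """Flip each component independently with probability p."""
    return np.where(rng.random(hv.shape) < p, -hv, hv)

def sim(a, b):
    """Normalized dot product; 1.0 = identical, ~0.0 = unrelated."""
    return float(a @ b) / len(a)

for p in (0.0, 0.1, 0.39):
    corrupted = inject_errors(clean, p)
    # Similarity to the true prototype degrades gracefully (~1 - 2p),
    # while similarity to an unrelated prototype stays near chance (~0).
    print(f"p={p:.2f}  true={sim(corrupted, clean):+.2f}  "
          f"other={sim(corrupted, other):+.2f}")
```

Even at p = 0.39 the similarity to the true prototype (~0.22) stays far above the near-zero similarity to an unrelated class, which is the headroom that the abstract's HW/SW co-design exploits for voltage scaling and energy savings.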