Determining Associative Strength Attributes of Computational Values Represented as Points in Hyperdimensional Space Using Hypervectors
John Melendez
Global Taiwan Industry Business Director (New Career) * Advanced Tech Researcher * Tech Writer
It is possible to determine associative strength attributes among multiple computational values represented as points in hyperdimensional space using hypervectors. There are a few key aspects of hyperdimensional computing that enable this:

* Very high dimensionality (typically thousands of dimensions), in which independently generated pseudo-random hypervectors are nearly orthogonal to one another
* Binding and bundling operations that combine hypervectors into new hypervectors representing composite concepts or sets
* Similarity measures, such as cosine similarity or Hamming distance, that quantify how close two hypervectors lie in the space

By leveraging these properties, it's possible to:

* Encode individual computational values as points (hypervectors) in the high-dimensional space
* Combine those values into composite concepts or sets of concepts
* Read off the associative strength between any two representations directly from the similarity of their hypervectors
* For more information on binding and bundling operations in hyperdimensional computing, please see: Three Common Operations of Hyperdimensional Computing (HDC)
This allows capturing complex associative relationships between multiple elements in an abstract geometric construct represented by the high-dimensional space. The associative strengths emerge from the interactions and similarities of the hypervectors in this space.
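To make the near-orthogonality point concrete, here is a minimal sketch in Python/NumPy (my own illustration, assuming bipolar {-1, +1} hypervectors, an encoding the article does not specify): two independently generated 10,000-dimensional hypervectors have a cosine similarity close to zero, so any measurable similarity between composed hypervectors is meaningful signal rather than chance overlap.

```python
import numpy as np

rng = np.random.default_rng(42)
DIM = 10_000

def random_hypervector(dim: int = DIM) -> np.ndarray:
    """Draw a pseudo-random bipolar (+1/-1) hypervector."""
    return rng.choice([-1, 1], size=dim)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: ~1 for identical hypervectors, ~0 for unrelated ones."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

a = random_hypervector()
b = random_hypervector()
print(cosine_similarity(a, a))  # 1.0 -- a concept is maximally associated with itself
print(cosine_similarity(a, b))  # ~0.0 -- independently generated concepts barely overlap
```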
Hypothetical Example
Let's walk through a hypothetical example to illustrate this concept:
Suppose we are working with a hyperdimensional space of 10,000 dimensions to represent different attributes of cars. We could create hypervectors for various car features, such as:

* SEDAN (body style)
* RED (color)
* LUXURY (vehicle class)

Each of these would be a 10,000-dimensional hypervector with pseudo-random values.
Now, let's say we want to represent a specific car - a red luxury sedan. We can combine these hypervectors using operations like binding (typically element-wise multiplication) to create a new hypervector:
CAR = SEDAN ⊗ RED ⊗ LUXURY
This resulting CAR hypervector represents a point in our 10,000-dimensional space.
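A minimal sketch of this binding step, again assuming bipolar {-1, +1} hypervectors and NumPy (the attribute names mirror the example above; the encoding choice is my assumption, not something the article specifies):

```python
import numpy as np

rng = np.random.default_rng(7)
DIM = 10_000

# Pseudo-random bipolar hypervectors for the three car attributes.
SEDAN  = rng.choice([-1, 1], size=DIM)
RED    = rng.choice([-1, 1], size=DIM)
LUXURY = rng.choice([-1, 1], size=DIM)

# Binding via element-wise multiplication: the result is a new 10,000-dimensional
# point that is nearly orthogonal to each input, yet any attribute can be
# recovered by multiplying (unbinding) with the other two, since (+/-1)^2 = 1.
CAR = SEDAN * RED * LUXURY
print(CAR.shape)  # (10000,)
```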
To determine associative strengths between different cars or car attributes, we can use similarity measures like cosine similarity or Hamming distance between their respective hypervectors[1]. For example:

* Comparing the CAR hypervector with the hypervector of another car tells us how strongly the two vehicles are associated
* Comparing a car's hypervector with an individual attribute hypervector such as LUXURY tells us how strongly that attribute is associated with the car

The closer these hypervectors are in the high-dimensional space (i.e., the higher their similarity measure), the stronger their associative strength.
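To make the similarity comparisons concrete, here is a hedged sketch (my own illustration; BLUE and TRUCK are hypothetical attributes not named in the article). It shows two conventions: recovering an attribute from a bound composite by unbinding, and comparing two cars represented as simple attribute bundles. The bundled representation is an assumption I add so that cars sharing attributes show graded similarity, since purely bound composites are nearly orthogonal to their own components.

```python
import numpy as np

rng = np.random.default_rng(7)
DIM = 10_000
rand_hv = lambda: rng.choice([-1, 1], size=DIM)

SEDAN, RED, BLUE, LUXURY, TRUCK = (rand_hv() for _ in range(5))

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# (1) Bound composite: unbinding with two known attributes recovers the third,
#     so its similarity to LUXURY is high, while an unrelated attribute is not.
CAR = SEDAN * RED * LUXURY
print(cosine(CAR * SEDAN * RED, LUXURY))  # ~1.0 -> strong association
print(cosine(CAR * SEDAN * RED, TRUCK))   # ~0.0 -> no association

# (2) Bundled composites: cars that share attributes land closer together.
red_luxury_sedan  = SEDAN + RED + LUXURY
blue_luxury_sedan = SEDAN + BLUE + LUXURY
print(cosine(red_luxury_sedan, blue_luxury_sedan))  # ~0.67 -> shared attributes
print(cosine(red_luxury_sedan, TRUCK))              # ~0.0  -> little in common
```

The exact numbers vary with the random seed, but the ordering is stable: more shared structure means higher similarity, which is exactly the associative-strength reading described above.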
Furthermore, we can use operations like bundling (typically vector addition) to represent sets or combinations of concepts. For instance, we could create a hypervector representing "all luxury vehicles in our database" by adding together the hypervectors of individual luxury cars.
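Here is a hedged sketch of that bundling idea (again my own illustration; COUPE, SUV, BLACK, and ECONOMY are hypothetical attributes, and each car is represented as an attribute bundle for simplicity):

```python
import numpy as np

rng = np.random.default_rng(7)
DIM = 10_000
rand_hv = lambda: rng.choice([-1, 1], size=DIM)

SEDAN, COUPE, SUV, RED, BLUE, BLACK, LUXURY, ECONOMY = (rand_hv() for _ in range(8))

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Three hypothetical luxury cars, each represented as a bundle of its attributes.
luxury_cars = [SEDAN + RED + LUXURY, COUPE + BLUE + LUXURY, SUV + BLACK + LUXURY]

# Bundling the set: element-wise addition of the member hypervectors gives a
# single hypervector for "all luxury vehicles in our database".
ALL_LUXURY = np.sum(luxury_cars, axis=0)

print(cosine(ALL_LUXURY, LUXURY))   # relatively high: every member carries LUXURY
print(cosine(ALL_LUXURY, ECONOMY))  # ~0.0: ECONOMY never appears in the set
```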
This example demonstrates how hyperdimensional computing can capture complex associative relationships between multiple elements in an abstract geometric construct represented by the high-dimensional space. The associative strengths emerge from the interactions and similarities of the hypervectors in this space.
This is a hypothetical example, but it illustrates the principles that could be used in real-world applications of hyperdimensional computing (HDC) for tasks like classification, pattern recognition, and reasoning.
About the author:
John has authored tech content for MICROSOFT, GOOGLE (Taiwan), INTEL, HITACHI, and YAHOO! His recent work includes Research and Technical Writing for Zscale Labs, covering highly advanced Neuro-Symbolic AI (NSAI) and Hyperdimensional Computing (HDC). John speaks intermediate Mandarin after living for 10 years in Taiwan, Singapore and China.
John now advances his knowledge through research covering AI fused with Quantum tech - with a keen interest in Toroid electromagnetic (EM) field topology for Computational Value Assignment, Adaptive Neuromorphic / Neuro-Symbolic Computing, and Hyper-Dimensional Computing (HDC) on Abstract Geometric Constructs.
John's LinkedIn: https://www.dhirubhai.net/in/john-melendez-quantum/
Citations: