@themanual4am I'm not sure how to answer. In philosophy of mind, "representation" is a major theme. Biological systems produce "models" of themselves and their environments, at various scales and at various levels of fidelity. These are often redundant, presumably for robustness.
No intelligence has direct access to the world. Meaning must be inferred from observations to build these models, and only then can cognitive or computational systems analyze, understand, and choose actions. This happens in single cells, but also in larger systems composed of cells. It also shows up in AI agents. A huge factor in how intelligent such a system is, and what kind of "thinking" it's capable of, is which models it builds and has access to. Sometimes these models emerge on their own inside an artificial neural network, but that's not something we can expect or rely on.
1/2