Talk:Thought vector

This article is within the scope of WikiProject Lists, an attempt to structure and organize all list pages on Wikipedia. If you wish to help, please visit the project page, where you can join the project and/or contribute to the discussion.
This article has not yet received a rating on the project's importance scale.

'Thought Vectors' are notable; I read somewhere that two companies have been founded to research and exploit them. No cite, though. And there are articles all over the place about them. JeffDonner (talk) 23:07, 9 October 2017 (UTC)

Thought Vectors are a nice way of expressing the high-dimensional abstraction (vectorisation) at the heart of encoder/decoder networks, as found in machine translation or natural language understanding applications (for example). I think they are a significant pedagogical idea because they help people understand how these sorts of things work. The concept of a "thought vector" represents a kind of apogee between feature extraction and output generation: the point at which the network has exercised everything it has learned about understanding its input, right before it starts applying this interpretation to the task of generating output.

It's really cool because calling this abstraction a "thought vector" helps us understand how the information processing occurs in the neural network, but it's also more than an analogy: it's quite reasonable to believe that our actual thoughts work in essentially the same way, i.e. that thinking occurs through interactions between the maximally abstract semantic encodings in our biological networks, which are then articulated into output phenomena such as language, internal dialogue, or other behaviour. (In other words, what we experience when we think is not an accurate reflection of how thinking works; it is more of a downstream consequence of the thinking activity.) So it's either an analogy that helps us understand, or a clever insight into biological cognition.

This is significant because the history of AI before the success of deep learning is full of blind alleys and misconceptions based on the notion that observed "intelligence" (excellence in context) must be caused by the thing we experience when we think (e.g. internal dialogue), whereas deep learning leverages these intrinsic abstractions very effectively to achieve excellence at tasks that have no obvious structural resemblance to an address in a high-dimensional space. Chris Gough -- 3rd September 2019 — Preceding unsigned comment added by 114.72.192.215 (talk) 13:41, 4 September 2019 (UTC)
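To make the above concrete, here is a minimal sketch of where the "thought vector" appears in an encoder/decoder network. This assumes PyTorch; the GRU encoder, vocabulary size, and dimensions are illustrative choices, not anything prescribed by the concept itself.

import torch
import torch.nn as nn

vocab_size, embed_dim, hidden_dim = 1000, 64, 128
embedding = nn.Embedding(vocab_size, embed_dim)
encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)

# One input sentence of 10 token ids (random here, for illustration).
tokens = torch.randint(0, vocab_size, (1, 10))
outputs, final_hidden = encoder(embedding(tokens))

# The encoder's final hidden state is the "thought vector": a single
# 128-dimensional summary of the whole input, produced after the network
# has applied everything it has learned, just before decoding begins.
thought_vector = final_hidden.squeeze()  # shape: (128,)

A decoder would then condition on thought_vector to generate the output sequence, which is the sense in which it sits between feature extraction and output generation.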

This is closely related to the concepts of word embeddings and sentence embeddings, vector-based abstractions that are crucial to the performance of modern Transformer-based language models, which internally encode the former into the latter before decoding output. I'd like to see this article touch on both the pragmatic use of such embeddings as a mathematical abstraction for solving natural language problems, and the neuroscientific and linguistic hypothesis that thought vectors may be analogous to the way the human brain encodes thought. A sketch of the encoding step follows below.
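As one hedged illustration of that encode-then-decode flow, here is a sketch of pooling contextual word (token) embeddings into a single sentence embedding. It assumes the Hugging Face transformers library; the model name and the use of mean pooling are arbitrary illustrative choices, not the only approach.

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("Thought vectors summarise meaning.", return_tensors="pt")
with torch.no_grad():
    # Contextual word embeddings: one vector per token, shape (1, seq_len, 768).
    token_embeddings = model(**inputs).last_hidden_state

# Mean pooling collapses the per-token vectors into one sentence embedding.
sentence_embedding = token_embeddings.mean(dim=1)  # shape: (1, 768)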

2001:2003:F0C4:C00:95E9:D320:2529:1AB4 (talk) 13:12, 27 January 2020 (UTC)