Juyang (John) Weng is a Chinese-American computer engineer, neuroscientist, author, and academic. He is a former professor in the Department of Computer Science and Engineering at Michigan State University and the president of the Brain-Mind Institute and GENISAMA.[1]
Juyang (John) Weng | |
---|---|
Nationality | Chinese-American |
Occupation(s) | Computer engineer, neuroscientist, author, and academic |
Academic background | |
Education | BSc, MSc, and PhD in Computer Science |
Alma mater | Fudan University; University of Illinois at Urbana-Champaign |
Thesis | (1989) |
Doctoral advisors | Thomas S. Huang; Narendra Ahuja |
Academic work | |
Institutions | Brain-Mind Institute; GENISAMA; Michigan State University |
Weng has conducted research on grounded machine learning at the intersection of computer science and engineering with brain and cognitive science. In collaboration with coworkers, he has explored mental architectures and computational models for autonomous development across domains such as vision, audition, touch, behaviors, and motivational systems, in both biological and engineered systems. He has authored two books, Natural and Artificial Intelligence: Introduction to Computational Brain-Mind and Motion and Structure from Image Sequences, and is the editor of the book series 'New Frontiers in Robotics.' In addition, he has published over 300 articles.
Weng is a Life Fellow of the Institute of Electrical and Electronics Engineers (IEEE) and the founder and president of the Brain-Mind Institute and the startup GENISAMA. He is also the founder and editor-in-chief of the International Journal of Humanoid Robotics and the Brain-Mind Magazine, and an associate editor of the IEEE Transactions on Autonomous Mental Development (now Cognitive and Developmental Systems).[2] Additionally, he has served as a guest editor for five special issues, including What AI and Neuroscience Can Learn from Each Other: Open Problems in Models and Theories (Cognitive Computation),[3] the Special Issue on Brain Imaging-informed Multimodal Analysis (IEEE Transactions on Autonomous Mental Development),[4] and the Special Issue on Autonomous Mental Development (International Journal of Humanoid Robotics).[5]
Education
Weng obtained his BS degree from Fudan University in 1982, and earned his M.Sc. and Ph.D. degrees in computer science from the University of Illinois at Urbana-Champaign in 1985 and 1989, respectively.[6]
Career
Following his Ph.D., Weng began his academic career in 1990 as a visiting assistant research professor at the Beckman Institute of the University of Illinois at Urbana-Champaign. From 1992 to 1998, he served as an assistant professor at Michigan State University, becoming an associate professor in 1998 and a professor in 2003. He is now retired.[7]
Research
Weng's research revolves around grounded machine learning, spanning vision, audition, natural language understanding, planning, and real-time hardware implementations. He is also involved in technology transfer through his startup, GENISAMA, which focuses on grounded, emergent, natural, incremental, skull-closed, attentive, motivated, and abstract systems. His theoretical contributions include mathematically proving that the Developmental Networks (DNs) he developed can learn any universal Turing machine, and establishing a theory of Autonomous Programming For General Purposes (APFGP) that supports Conscious Machine Learning.[8][9]
Weng has worked on developmental networks, from Cresceptron to DN3, toward what he describes as the first-ever conscious learning algorithm, one free from "deep learning" misconduct.[10] His research has been featured on the Discovery Channel, Enel, and the BBC.[11]
Motion and structure analysis
From 1983 to 1989, Weng's research during his master's and Ph.D. degrees focused on analyzing the motion of objects and estimating 3D structures from motion.[12] He realized that such model-based approaches can provide piecemeal insights but are too restrictive for understanding how animal brains learn vision and other skills. Soon after completing his Ph.D. work, he started Cresceptron.[13]
Cresceptron
Cresceptron represented a direction that Weng later termed Autonomous Mental Development (AMD). In 1992, he and his collaborators pioneered a framework called Cresceptron for segmenting and recognizing real-world 3D objects from their images through automated learning.[13] The framework was tested on visual recognition, specifically recognizing 3D objects from 2D images and segmenting them from cluttered backgrounds without handcrafted 3D models. It employed techniques such as stochastic distortion modeling, view-based interpolation, and a combination of individual and class-based learning. Cresceptron achieved seven significant accomplishments, including learning large-scale 3D objects with a deep convolutional neural network (CNN) and feature-independent learning for extensive datasets. Cresceptron also differs significantly from later "deep learning" networks in that it develops a single network using Hebbian learning (i.e., unsupervised in all hidden layers).[14][15]
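The unsupervised, Hebbian character of such hidden-layer learning can be illustrated in miniature. The following is a hedged sketch of a generic Hebbian update with Oja-style normalization, an assumed simplification for illustration, not Cresceptron's actual rule: a neuron's weight vector drifts toward inputs that activate it, with no error signal propagated back from labels.

```python
import math

def hebbian_update(w, x, lr=0.1):
    """One Hebbian-style step: strengthen the weight vector toward an input
    in proportion to the neuron's response, then renormalize so the weights
    stay bounded (Oja-style stabilization; a generic simplification, not
    Cresceptron's published algorithm)."""
    y = sum(wi * xi for wi, xi in zip(w, x))        # neuron response
    w = [wi + lr * y * xi for wi, xi in zip(w, x)]  # Hebb: dw proportional to y * x
    norm = math.sqrt(sum(wi * wi for wi in w))
    return [wi / norm for wi in w]

# Repeated presentation of an input aligns w with it, with no supervision.
w = [1.0, 0.0]          # initial weight vector
x = [0.6, 0.8]          # a unit-length input pattern
for _ in range(50):
    w = hebbian_update(w, x)
print([round(wi, 2) for wi in w])  # w converges toward the direction of x
```

The point of the sketch is the absence of any target label: the weight vector self-organizes purely from input statistics, which is what "unsupervised in all hidden layers" means above.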
SHOSLIF
Weng introduced another framework, SHOSLIF, which provided a unified theory and methodology for comprehensive sensor-actuator learning.[16] It addressed single sensory problems as well as critical issues that Cresceptron faced, such as the automated selection of the most valuable features and the automatic organization of sensory and control information through a coarse-to-fine space-partition tree, yielding a remarkably low, logarithmic time complexity for content-based retrieval from extensive visual knowledge bases.[17] It also handles invariance through learning, enables online incremental learning, and facilitates autonomous learning, among other objectives.[18][19]
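The logarithmic retrieval cost of a coarse-to-fine space-partition tree comes from following a single root-to-leaf path. The toy sketch below shows the general idea under assumed simplifications (median splits on cycling dimensions, no backtracking); it is not the SHOSLIF algorithm itself.

```python
import random

class PartitionNode:
    """Toy coarse-to-fine space-partition tree: each node splits its samples
    on one feature dimension at the median, so a query follows one
    root-to-leaf path of depth O(log n). A generic illustration of the
    idea, not the published SHOSLIF method."""
    def __init__(self, samples, depth=0):
        if len(samples) <= 1:
            self.leaf = samples
            self.dim = self.split = self.left = self.right = None
            return
        self.leaf = None
        self.dim = depth % len(samples[0][0])   # cycle through dimensions
        samples = sorted(samples, key=lambda s: s[0][self.dim])
        mid = len(samples) // 2
        self.split = samples[mid][0][self.dim]  # median split value
        self.left = PartitionNode(samples[:mid], depth + 1)
        self.right = PartitionNode(samples[mid:], depth + 1)

    def retrieve(self, x):
        """Coarse-to-fine retrieval: descend one branch per level."""
        if self.leaf is not None:
            return self.leaf[0][1] if self.leaf else None
        child = self.left if x[self.dim] < self.split else self.right
        return child.retrieve(x)

random.seed(1)
data = [([random.random(), random.random()], f"item{i}") for i in range(64)]
tree = PartitionNode(data)
# Retrieving a stored vector visits ~log2(64) = 6 nodes, not all 64 items.
print(tree.retrieve(data[0][0]))
```

With 64 stored samples the query touches about six tree levels; doubling the database adds one level, which is the logarithmic complexity claimed above.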
SAIL and Dav robots
From 1998 to 2010, Weng developed the SAIL[20] and Dav robots[21] using sensory mapping models, including self-aware self-effecting (SASE), staggered hierarchical mapping (SHM), and incremental hierarchical discriminant regression (IHDR) methods. These methods have been applied to the recognition of occluded objects,[22] speech recognition,[23] vision-guided navigation,[24] and range-based collision avoidance.[25]
Autonomous developmental networks
Since 2005, Weng and his team have been working on brain-like and cortex-like Developmental Networks (DNs) and their embodiments, Where-What Networks (WWNs),[26] using brain-like architecture, including modeling pathways, the laminar 6-layer cortex, and brain areas.[27][28] In addition, they have analyzed how the brain deals with modulation, time, and space, and by 2023 had created three versions (DN1 through DN3). A significant enhancement in the transition from DN-2 to DN-3 involves initiating the brain-size network from a single-cell zygote, i.e., a fully autonomous process for brain patterning from a single cell. The key patterning mechanisms are Lobe Component Analysis (LCA)[29] and Synaptic Maintenance,[30] which automatically maintain the global smoothness of brain representation and local refinements of area representations. This approach enables the developmental algorithm to progressively develop sensors, a complex brain, and motor functions in a sequential and self-organizing manner, ensuring that wiring and pattern formation occur automatically from the initial conception stages throughout the entire life of the system.[31]
These Developmental Networks (DNs) and Where-What Networks (WWNs 1–9) have been developed for versatile visual learning in complex environments.[32] DNs can recognize objects and autonomously determine where and what to focus on using self-generated task context. Furthermore, these WWNs and DNs have been applied to general-purpose vision,[33] temporal visual event recognition,[34] vision-guided navigation,[35] learning audition while learning to speak,[36] and language acquisition as brain's responses to text temporal events.[37]
Weng was the first to formally propose that robotic consciousness is necessary for AI and that consciousness can and should be learned (i.e., developed), and he proposed a fully implementable algorithm to do so. He proposed DN3[31] as the engine for conscious learning,[38] whereby a robot becomes increasingly conscious, like an infant and then a child, through its 'living' experience in the physical world, which typically includes human parents and teachers. However, there is no central controller within DN3's skull, emphasizing that consciousness should not be statically handcrafted and must encompass elements beyond a programmer's design.[31]
Controversies
Since 2016, Weng has alleged worldwide instances of plagiarism and Post-Selection misconduct, but the implicated institutions have not acknowledged his allegations.
Plagiarism controversy
Weng alleged that many deep learning networks that use images of 3D objects copied their key idea from Cresceptron,[13] yet almost all later deep learning publications did not cite it. He argued that Cresceptron (for 3D) is a fundamental departure from the Neocognitron[39] (for 2D): Cresceptron enables a neural network to grow incrementally from a zero-neuron hierarchy and learn 3D objects from their 2D images in cluttered scenes, unlike the aspect graphs of the 1990s and all other methods that relied on an inside-skull human teacher as a central controller.[40] The alleged plagiarism includes HMAX at MIT[41] and the work behind the 2018 ACM Turing Award.[42] Without internal weight supervision such as human manual selections[39][41] or error backpropagation,[42] feature learning and sharing in Cresceptron's hidden areas are based on (unsupervised) Hebbian mechanisms.[43]
Post-Selection controversy
Weng raised the issue of Post-Selection in AI and argued that it constitutes misconduct. He noted that many AI methods require two steps in their training stage. The first step trains multiple systems by randomly fitting a fit data set. The second step is Post-Model Selection (Post-Selection), which chooses the few luckiest trained systems, or relies on human manual parameter tuning, based on the systems' errors on a validation data set. He alleged that Post-Selection in AI involves two types of misconduct: (1) cheating in the absence of a test, because the Post-Selection step belongs to the training stage; and (2) hiding bad-looking data, because the less lucky systems are not reported.[10]
Weng further alleged that many categories of AI methods suffer from their Post-Selection steps, including the Neocognitron, HMAX, Deep Learning, Long Short-Term Memories, Extreme Learning Machines, Evolving Networks, Reservoir Computing, Transformers, Large Language Models, ChatGPT, and Bard, as long as they contain a Post-Selection step, whether automatic or requiring human manual tuning. He mathematically reasoned that the system that is luckiest on a validation set has an expected performance on a future test set only near the average performance of all trained systems on the validation set.[10]
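The statistical intuition behind this claim can be illustrated with a small Monte Carlo simulation. This is a hedged sketch of the general phenomenon (regression to the mean under independent validation and test noise), not a reproduction of Weng's own derivation; the noise model and parameters are assumptions for illustration.

```python
import random

random.seed(0)

def simulate(n_systems=100, n_trials=2000):
    """Sketch of the Post-Selection argument: when validation and test
    errors are independent noisy draws around each system's true error,
    picking the validation-luckiest system does not pick a test-best one."""
    lucky_test, mean_test = 0.0, 0.0
    true_err = 0.30  # assumed common true error of all trained systems
    for _ in range(n_trials):
        # Observed errors = true error + independent measurement noise.
        val = [true_err + random.gauss(0, 0.05) for _ in range(n_systems)]
        test = [true_err + random.gauss(0, 0.05) for _ in range(n_systems)]
        best = min(range(n_systems), key=lambda i: val[i])  # Post-Selection
        lucky_test += test[best]
        mean_test += sum(test) / n_systems
    return lucky_test / n_trials, mean_test / n_trials

lucky, average = simulate()
# The validation-luckiest system's expected test error matches the average
# test error of all trained systems, not its own low validation error.
print(f"luckiest-on-validation test error: {lucky:.3f}")
print(f"average test error of all systems: {average:.3f}")
```

Because the test noise is independent of the validation noise in this model, selecting on validation luck buys nothing on the future test set, which is the expected-performance claim in the paragraph above.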
Weng has sued institutions, including Alphabet, to address the alleged misconduct outside academia, in the United States District Court for the Western District of Michigan (Civil Action No. 1:22-cv-998)[44] and the U.S. Court of Appeals for the Sixth Circuit (Case No. 23-1567).[45]
Awards and honors
Bibliography
Selected books
- Motion and Structure from Image Sequences (1993) ISBN 978-3642776458
- Natural and Artificial Intelligence: Introduction to Computational Brain-Mind (2019) ISBN 978-0-985875718
Selected articles
- Weng, J., Huang, T. S., & Ahuja, N. (1989). Motion and structure from two perspective views: Algorithms, error analysis, and error estimation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 11(5), 451–476.
- Weng, J., Cohen, P., & Herniou, M. (1992). Camera calibration with distortion models and accuracy evaluation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14(10), 965–980.
- Weng, J., Ahuja, N., & Huang, T. S. (1993). Optimal motion and structure estimation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 15(9), 864–884.
- Weng, J., McClelland, J., Pentland, A., Sporns, O., Stockman, I., Sur, M., & Thelen, E. (2001). Autonomous mental development by robots and animals. Science, 291(5504), 599–600.
- Weng, J., Zhang, Y., & Hwang, W. S. (2003). Candid covariance-free incremental principal component analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(8), 1034–1040.
References
- ^ "Weng, Juyang". www.cse.msu.edu.
- ^ "Brain-Mind Institute: Programs". www.brain-mind-institute.org.
- ^ "Cognitive Computation". Springer.
- ^ "Guest Editorial Multimodal Modeling and Analysis Informed by Brain Imaging—Part I". IEEE Transactions on Autonomous Mental Development. 7 (3): 158–161. 2015. doi:10.1109/TAMD.2015.2495698.
- ^ "EDITORIAL". International Journal of Humanoid Robotics. 4 (2): 207–210. June 1, 2007. doi:10.1142/S0219843607001047 – via CrossRef.
- ^ "The First Conscious Learning Algorithm Avoids 'Deep Learning' Misconduct". micdat-conference.com.
- ^ "School of Computer Science". gs.fudan.edu.cn.
- ^ Weng, Juyang (August 1, 2020). "Autonomous Programming for General Purposes: Theory". International Journal of Humanoid Robotics. 17 (4): 2050016. doi:10.1142/S0219843620500164. S2CID 222069968 – via CrossRef.
- ^ Weng, Juyang; Zheng, Zejia; Wu, Xiang; Castro-Garcia, Juan (2020). "Autonomous Programming for General Purposes: Theory and Experiments". 2020 International Joint Conference on Neural Networks (IJCNN). pp. 1–8. doi:10.1109/IJCNN48605.2020.9207149. ISBN 978-1-7281-6926-2. S2CID 221663728.
- ^ a b c Weng, Juyang (January 12, 2023). "On "Deep Learning" Misconduct". arXiv:2211.16350 [cs.LG].
- ^ "BBC News - SCI/TECH - Time for real intelligence?".
- ^ Weng, J.; Huang, T.S.; Ahuja, N. (1989). "Motion and structure from two perspective views: algorithms, error analysis, and error estimation". IEEE Transactions on Pattern Analysis and Machine Intelligence. 11 (5): 451–476. doi:10.1109/34.24779.
- ^ a b c Weng, J.; Ahuja, N.; Huang, T.S. (1992). "Cresceptron: A self-organizing neural network which grows adaptively". [Proceedings 1992] IJCNN International Joint Conference on Neural Networks. Vol. 1. pp. 576–581. doi:10.1109/IJCNN.1992.287150. ISBN 0-7803-0559-0. S2CID 12826256.
- ^ Weng, J.J.; Ahuja, N.; Huang, T.S. (1993). "Learning recognition and segmentation of 3-D objects from 2-D images". 1993 (4th) International Conference on Computer Vision. pp. 121–128. doi:10.1109/ICCV.1993.378228. ISBN 0-8186-3870-2. S2CID 8619176.
- ^ Weng, John (Juyang); Ahuja, Narendra; Huang, Thomas S. (November 1, 1997). "Learning Recognition and Segmentation Using the Cresceptron". International Journal of Computer Vision. 25 (2): 109–143. doi:10.1023/A:1007967800668. S2CID 15774356 – via Springer Link.
- ^ "SHOSLIF: A Framework for Sensor-Based Learning for High-Dimensional Complex Systems".
- ^ Weng, Juyang; Chen, Shaoyun (October 1, 1998). "Vision-guided navigation using SHOSLIF". Neural Networks. 11 (7): 1511–1529. doi:10.1016/S0893-6080(98)00079-3. PMID 12662765 – via ScienceDirect.
- ^ "Cresceptron and SHOSLIF: Toward Comprehensive Visual Learning".
- ^ "On Comprehensive Visual Learning".
- ^ Yilu Zhang; Juyang Weng (2001). "Grounded auditory development by a developmental robot". IJCNN'01. International Joint Conference on Neural Networks. Proceedings (Cat. No.01CH37222). Vol. 2. pp. 1059–1064. doi:10.1109/IJCNN.2001.939507. ISBN 0-7803-7044-9. S2CID 60876898.
- ^ Zeng, Shuqing; Weng, Juyang (November 1, 2007). "Online-learning and Attention-based Approach to Obstacle Avoidance Using a Range Finder". Journal of Intelligent and Robotic Systems. 50 (3): 219–239. doi:10.1007/s10846-007-9162-9. S2CID 17379559 – via Springer Link.
- ^ "A Developing Sensory Mapping for Robots" (PDF).
- ^ "Auditory Learning: A Developmental Method" (PDF).
- ^ "Incremental Hierarchical Discriminant Regression" (PDF).
- ^ "Online-learning and Attention-based Approach to Obstacle Avoidance Using a Range Finder" (PDF).
- ^ Weng, Juyang; Luciw, Matthew D. (November 1, 2014). "Brain-Inspired Concept Networks: Learning Concepts from Cluttered Scenes". IEEE Intelligent Systems. 29 (6): 14–22. doi:10.1109/MIS.2014.75. S2CID 1827336 – via Semantic Scholar.
- ^ Weng, Juyang (August 1, 2007). "On developmental mental architectures". Neurocomputing. 70 (13): 2303–2323. doi:10.1016/j.neucom.2006.07.017 – via ScienceDirect.
- ^ Weng, Juyang; Zeng, Shuqing (June 1, 2005). "A Theory of Developmental Mental Architecture and the Dav Architecture Design". International Journal of Humanoid Robotics. 2 (2): 145–179. doi:10.1142/S0219843605000454 – via CrossRef.
- ^ Juyang Weng; Luciw, M. (2009). "Dually Optimal Neuronal Layers: Lobe Component Analysis". IEEE Transactions on Autonomous Mental Development. 1: 68–85. doi:10.1109/TAMD.2009.2021698. S2CID 11713799.
- ^ Wang, Yuekai; Wu, Xiaofeng; Weng, Juyang (2011). "Synapse maintenance in the Where-What Networks". The 2011 International Joint Conference on Neural Networks. pp. 2822–2829. doi:10.1109/IJCNN.2011.6033591. ISBN 978-1-4244-9635-8. S2CID 8840080.
- ^ a b c "A Developmental Network Model of Conscious Learning in Biological Brains". www.researchsquare.com. June 7, 2022.
- ^ Solgi, Mojtaba; Weng, Juyang (January 1, 2015). "WWN-8: Incremental Online Stereo with Shape-from-X Using Life-Long Big Data from Multiple Modalities". Procedia Computer Science. 53: 316–326. doi:10.1016/j.procs.2015.07.309.
- ^ Wang, Yuekai; Wu, Xiaofeng; Weng, Juyang (November 1, 2012). "Skull-closed autonomous development: WWN-6 using natural video". The 2012 International Joint Conference on Neural Networks (IJCNN). pp. 1–8. doi:10.1109/IJCNN.2012.6252491. ISBN 978-1-4673-1490-9. S2CID 2978754 – via www.academia.edu.
- ^ Luciw, Matthew D.; Weng, Juyang; Zeng, Shuqing (2008). "Motor initiated expectation through top-down connections as abstract context in a physical world". 2008 7th IEEE International Conference on Development and Learning. pp. 115–120. doi:10.1109/DEVLRN.2008.4640815. ISBN 978-1-4244-2661-4. S2CID 7228.
- ^ Zheng, Zejia; He, Xie; Weng, Juyang (January 1, 2015). "Approaching Camera-based Real-World Navigation Using Object Recognition". Procedia Computer Science. 53: 428–436. doi:10.1016/j.procs.2015.07.320.
- ^ Wu, Xiang; Weng, Juyang (November 1, 2021). "Learning to recognize while learning to speak: Self-supervision and developing a speaking motor". Neural Networks. 143: 28–41. doi:10.1016/j.neunet.2021.05.006. PMID 34082380 – via ScienceDirect.
- ^ "Conjunctive Visual and Auditory Development via Real-Time Dialogue" (PDF).
- ^ Weng, Juyang (John) (April 15, 2022). "An Algorithmic Theory for Conscious Learning". 2022 the 3rd International Conference on Artificial Intelligence in Electronics Engineering. Association for Computing Machinery. pp. 1–8. doi:10.1145/3512826.3512827. ISBN 9781450395489. S2CID 245851454 – via ACM Digital Library.
- ^ a b Fukushima, Kunihiko (April 1, 1980). "Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position". Biological Cybernetics. 36 (4): 193–202. doi:10.1007/BF00344251. PMID 7370364. S2CID 206775608 – via Springer Link.
- ^ "Learning Recognition and Segmentation Using the Cresceptron". www.cse.msu.edu.
- ^ a b Serre, T.; Wolf, L.; Bileschi, S.; Riesenhuber, M.; Poggio, T. (2007). "Robust Object Recognition with Cortex-Like Mechanisms". IEEE Transactions on Pattern Analysis and Machine Intelligence. 29 (3): 411–426. doi:10.1109/TPAMI.2007.56. PMID 17224612. S2CID 2179592.
- ^ a b LeCun, Yann; Bengio, Yoshua; Hinton, Geoffrey (May 1, 2015). "Deep learning". Nature. 521 (7553): 436–444. Bibcode:2015Natur.521..436L. doi:10.1038/nature14539. PMID 26017442. S2CID 3074096 – via www.nature.com.
- ^ Weng, Juyang (2021). "Post-Selections in AI and How to Avoid Them". arXiv:2106.13233v2 [cs.LG].
- ^ "Weng v. Nat'l Sci. Found". vLex.
- ^ "Juyang Weng, et al v. Natl Science Fndtn, et al". Justia Dockets & Filings.
- ^ "NSF Award Search: Award # 9410741 - RIA: Learning-Based Object Recognition from Images". www.nsf.gov.
- ^ "IEEE Fellow".