A Layman’s opinion of an opinion on AI … another TLDR
My analytical mind loves this sort of thing.
Quote:Perceptrons and the attack on connectionism

A perceptron was a form of neural network introduced in 1958 by Frank Rosenblatt, who had been a schoolmate of Marvin Minsky at the Bronx High School of Science. Like most AI researchers, he was optimistic about their power, predicting that the "perceptron may eventually be able to learn, make decisions, and translate languages." An active research program into the paradigm was carried out throughout the 1960s but came to a sudden halt with the publication of Minsky and Papert's 1969 book Perceptrons. It suggested that there were severe limitations to what perceptrons could do and that Rosenblatt's predictions had been grossly exaggerated. The effect of the book was devastating: virtually no research at all was funded in connectionism for 10 years.

Of the main efforts towards neural networks, Rosenblatt attempted to gather funds for building larger perceptron machines, but died in a boating accident in 1971. Minsky (of SNARC) became a staunch objector to pure connectionist AI. Widrow (of ADALINE) turned to adaptive signal processing, using techniques based on the LMS algorithm. The SRI group (of MINOS) turned to symbolic AI and robotics. The main issues were the lack of funding and the inability to train multilayered networks (backpropagation was unknown). The competition for government funding ended with the victory of symbolic AI approaches.[94][95]

https://en.wikipedia.org/wiki/History_of...nectionism
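For anyone curious what a perceptron actually was under the hood, here is a minimal sketch of Rosenblatt's learning rule in Python (my own illustration, not from the article; the training data is just the truth tables for AND and XOR). It learns a linearly separable function like AND just fine, but it can never learn XOR, which is exactly the single-layer limitation Minsky and Papert highlighted and that multilayered networks trained with backpropagation later overcame.

[code]
# Sketch of Rosenblatt's perceptron learning rule on two toy problems.
# AND is linearly separable, so the rule converges; XOR is not, so no
# single-layer perceptron can ever get all four cases right.

def train_perceptron(samples, epochs=25, lr=0.1):
    """Train weights and bias with the classic perceptron update rule."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            predicted = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            error = target - predicted
            # Rosenblatt's rule: nudge the weights toward the target.
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

def accuracy(samples, w, b):
    correct = sum(
        (1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0) == target
        for (x1, x2), target in samples
    )
    return correct / len(samples)

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

for name, data in (("AND", AND), ("XOR", XOR)):
    w, b = train_perceptron(data)
    print(f"{name}: accuracy {accuracy(data, w, b):.0%}")
# AND reaches 100%; XOR cannot, because no single straight line
# separates its classes no matter how long you train.
[/code]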
Quote:Explainable AI (XAI), often overlapping with interpretable AI, or explainable machine learning (XML), either refers to an AI system over which it is possible for humans to retain intellectual oversight, or to the methods to achieve this.[1] The main focus is usually on the reasoning behind the decisions or predictions made by the AI,[2] which are made more understandable and transparent.[3] XAI counters the "black box" tendency of machine learning, where even the AI's designers cannot explain why it arrived at a specific decision.[4][5]

XAI hopes to help users of AI-powered systems perform more effectively by improving their understanding of how those systems reason.[6] XAI may be an implementation of the social right to explanation.[7] Even if there is no such legal right or regulatory requirement, XAI can improve the user experience of a product or service by helping end users trust that the AI is making good decisions. XAI aims to explain what has been done, what is being done, and what will be done next, and to unveil which information these actions are based on.[8] This makes it possible to confirm existing knowledge, challenge existing knowledge, and generate new assumptions.[9]

https://en.wikipedia.org/wiki/Explainabl...telligence
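To make the "black box" point concrete, here is a small sketch of permutation importance, one common model-agnostic XAI technique (the toy model and data below are made up purely for illustration): shuffle one input feature at a time and measure how much the model's accuracy drops. A large drop means the model was leaning on that feature, which gives a human a rough view inside the box without needing access to the model's internals.

[code]
# Sketch of permutation feature importance, a simple XAI probe.
# We treat the model as an opaque function and only watch its outputs.

import random

def black_box(features):
    """Stand-in for an opaque model: secretly uses only feature 0."""
    return 1 if features[0] > 0.5 else 0

def accuracy(model, X, y):
    return sum(model(x) == t for x, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, n_features):
    baseline = accuracy(model, X, y)
    importances = []
    for i in range(n_features):
        shuffled_col = [x[i] for x in X]
        random.shuffle(shuffled_col)
        # Rebuild the dataset with only feature i scrambled.
        X_perm = [x[:i] + (v,) + x[i + 1:] for x, v in zip(X, shuffled_col)]
        importances.append(baseline - accuracy(model, X_perm, y))
    return importances

random.seed(0)
X = [(random.random(), random.random()) for _ in range(200)]
y = [black_box(x) for x in X]  # labels the model gets right by construction

for i, imp in enumerate(permutation_importance(black_box, X, y, 2)):
    print(f"feature {i}: importance {imp:.2f}")
# Feature 0 shows a large accuracy drop; feature 1 shows roughly zero,
# revealing which input actually drove the black box's decisions.
[/code]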

I am only skimming the surface of this fascinating topic, and reading that even an AI's designers can't always explain how it reasoned its way to a response is somewhat alarming to me.
"The real trouble with reality is that there is no background music." Anonymous

Plato's Chariot Allegory