Posted: Apr 3, 2011
[ # 61 ]
Senior member
Total posts: 623
Joined: Aug 24, 2010
Hans Peter Willems - Apr 3, 2011:
That goes just as far as someone can point out the algorithm. Sometimes I do something that doesn’t seem logical at all, but it just felt right to do it; the same will apply to a conscious machine.
Just because the sequence of processes that arrived at a given output is not known does not mean it is not there. I think the “unpredictability” factor is more important to the illusion of consciousness than its actual presence.
Hans Peter Willems - Apr 3, 2011:
The whole point in this perspective is that so far, we still don’t have machines that are doing anything more than algorithmic processing. Even more to the point, if a machine makes a mistake then we adjust the algorithm so it doesn’t happen again. In human education we know that we have to ‘learn from our mistakes’, but in computers we still ‘fix the bug’.
That depends on the mistake. We can’t cure Alzheimer’s by teaching sufferers new ways to think; if the problem is with the architecture, it must be fixed. However, I agree that problems related to knowledge, or to the way algorithms are applied to analyze that knowledge, should be correctable without going to the source code. This already exists in bots today (even simple “Hal”-type bots can be corrected via natural language). But at the end of the day, no matter how advanced a bot’s algorithms get, it will always be processing algorithms. As do people.
Posted: Apr 3, 2011
[ # 62 ]
Senior member
Total posts: 623
Joined: Aug 24, 2010
Merlin - Apr 3, 2011: Some might say that all machines can do is algorithmic processing. Whether it is written by a programmer or by a self-correcting program with a feedback loop, it all boils down to a set of instructions.
Experience is influenced by feelings, emotions, personality, and sensory input, but not all of these are required for experience. The story of Helen Keller might hold an example relevant to bots/AI. Maybe we just have not created the right language that would allow us to communicate what is needed for strong AI to be successful.
Agreed.
Posted: Apr 3, 2011
[ # 63 ]
Senior member
Total posts: 153
Joined: Jan 4, 2010
Hans Peter Willems - Apr 3, 2011: If the machine is not able to ‘decide by itself’, then it is just following its programming. And by ‘decide by itself’ we automatically bring ‘self awareness’ into the equation.
Is this an act of creating its own view of the world? In other words, is it creating a self so that it can become self-aware? I’m a little confused by the ‘by itself’ part, since it seems it would need a self to make a self by its decisions, and thus already be self-aware.
I would suggest that a bug in a program is something I didn’t want, but through miscommunication the program did it by itself, according to how it is able to function. I’m sure we are not talking about a loss of control due to noise in the effort to write the strong AI. So I gather you feel that learning something the author of the system didn’t provide sparks the building of a self, especially when it can be used in the application’s analysis routines.
I don’t quite get this, since many programs I’ve written preserve data beyond my contributions and have made decisions based on that new data. Some programs have made decisions about those decisions, but I tend to call that optimization rather than building a self. A virus checker might be an example of an optimizing application.
Granted, you said this is only one aspect of strong AI and more is needed to complete the picture. I’m just wondering if it is not the fact that it exceeds its base programming, but rather how it exceeds it: it makes decisions in a whole new way, unlike anything that was originally written, so that after enough evolutions of the self it no longer uses or needs the base programming. It has reprogrammed the application that it is.
It is very rare that a machine can modify its own instruction set. Functional programming extends its operations into new abilities, and Scheme or Lisp can write its own programming. Am I heading off in the wrong direction for strong AI? Is self-modifying code something different?
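To make that last point concrete, here is a minimal sketch in Python (standing in for the Lisp idea; all names are invented for illustration) of a program that writes and then runs its own code at runtime:

```python
# Minimal sketch of 'code that writes code' (the Lisp idea above, in Python).
# All names here are invented for illustration.

def make_rule(threshold):
    # The program authors the source text of a brand-new function...
    src = f"def decide(x):\n    return 'act' if x > {threshold} else 'wait'\n"
    namespace = {}
    exec(src, namespace)  # ...then compiles and loads it at runtime
    return namespace["decide"]

decide = make_rule(0.5)   # this rule was written by the program, not by hand
print(decide(0.7))        # -> act
print(decide(0.2))        # -> wait
```

Whether generating and loading new rules like this counts as ‘reprogramming the application that it is’, or is still just the base programming at work, is of course exactly the question.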
Posted: Apr 29, 2011
[ # 64 ]
Member
Total posts: 10
Joined: Apr 19, 2011
It seems to me, after reading through all of the previous 63 posts on this topic, that it is going to take years of baby steps before we have the tools, standard processes and protocols, and an understanding of what consciousness really is, to have even a remote chance of making a strong-AI breakthrough.
In the meantime, it makes sense that we would want people working towards that cause, just as early industries arose and gave us the systems we work with today: think of the simple standardization of nuts and bolts that made the industrial revolution possible.
At some point in the future, my guess is that a disruptive and/or leapfrog technology will come along and surprise everybody, and this is when we might see something approaching true strong-AI evolve. Who knows, maybe it will be something to do with quantum entanglement of data and the transmission and reception of said data from a remote grid at speeds faster than light.
I am just glad that there are people here who see the possibilities, and since I am one of those types who likes to stand on the shoulders of giants, I will continue watching chatbots.org and waiting for the disruptive opportunities that are going to arrive soon. That there will be many, I am sure.
My final question is very basic and is a follow-up to the first one posted in this thread: are we sure we really want strong AI, and if so, why?
Posted: Apr 29, 2011
[ # 65 ]
Administrator
Total posts: 3111
Joined: Jun 14, 2010
Well, for one thing, if there’s no Strong AI, then there will be no “Star Trek” style of computers, and without those, no USS Enterprise, and also no Captain Kirk. Then where will we be?
Seriously, though. I can see a lot of circumstances where a strong AI system would be a major benefit to mankind. Consider the scenario of not just one, but an entire network of strong AI research systems, all working on discovering cures not only for cancer but for AIDS, ALS, MS, and a plethora of other acronym diseases, all without animal testing, simply by running complex simulations in a “cloud” environment. Of course, this isn’t “current technology” quite yet, but we’re close. Think about all of the @home applications that are available, performing all sorts of tasks, from looking for ET (SETI@home) to protein folding (Folding@home), and more. By integrating strong AI into research projects of that sort, some really amazing things could occur.
Of course, Frank Herbert and James Cameron may have the right of it, and strong AI could lead to the downfall of the Human race…
Posted: May 1, 2011
[ # 66 ]
Experienced member
Total posts: 69
Joined: Aug 17, 2010
I focus on strong-AI research. I’m developing my own chatbot platform based on the Instance Knowledge Network, a new knowledge representation model proposed in my papers.
Posted: May 1, 2011
[ # 67 ]
Experienced member
Total posts: 69
Joined: Aug 17, 2010
I agree with you. Most of the research targets particular, separate AI tasks. In natural language processing, word sense disambiguation, coreference resolution, and entity recognition are three separate tasks handled by different research communities, each with its own evaluation standards and projects.
However, each separate task depends on the results of other tasks. It’s time to incorporate all the different tasks into an integrated natural language processing system.
I attempted to show a method of incorporation in my paper “Incorporating Coreference Resolution into Word Sense Disambiguation”.
Clearly, NLP methods that cross different research topics are necessary.
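As a rough sketch of that kind of integration (toy stand-in functions, invented for illustration; this is not the method from the paper), the output of coreference resolution can feed word sense disambiguation, so a pronoun gets disambiguated through its antecedent:

```python
# Sketch of task integration: coreference resolution feeds word sense
# disambiguation. Both functions are toy stand-ins, invented for illustration.

def resolve_coreference(tokens):
    # Toy resolver: map each pronoun to an antecedent mention.
    antecedents = {"it": "bank"}  # in reality, learned or rule-based links
    return [antecedents.get(t, t) for t in tokens]

def disambiguate(tokens):
    # Toy WSD: pick a sense label for known content words.
    senses = {"bank": "bank#financial_institution"}
    return [(t, senses.get(t)) for t in tokens]

sentence = "the bank closed because it failed".split()
resolved = resolve_coreference(sentence)  # "it" -> "bank"
print(disambiguate(resolved))  # WSD now sees the antecedent, not a bare pronoun
```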
Hans Peter Willems - Mar 29, 2011: Of course all these parts are important and imo necessary for the ‘grand total’. What I’m doing myself is also just a part of the puzzle (but this topic is NOT about MY project). But as I answered above, it seems that most official projects are NOT aiming at integration but instead at specific applications that have little to do with strong AI. Instead, most seem to be aimed at expert systems, robotic solutions that don’t need real AI, etc.
Posted: May 1, 2011
[ # 68 ]
Experienced member
Total posts: 69
Joined: Aug 17, 2010
To be honest, I don’t think we are that far from strong AI. I believe we will see strong AI in our lifetime.
The real problem of strong AI currently is an education problem. Look at the evaluation standards in the NLP research community: all the benchmark systems are based on complex documents that could not be used to teach human children. How can we expect an AI to learn more easily than human children do?
These misleading evaluation standards leave the really promising work unrecognized.
With properly graduated training sets and evaluation standards, strong-AI-based NLP systems will appear in 10-20 years.
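To illustrate the ‘graduated training set’ idea, here is a minimal curriculum-learning sketch (the difficulty measure and staging are my own assumptions, not a published method):

```python
# Curriculum-learning sketch of a 'graduated training set': present simple
# examples first, then progressively harder ones, the way children are taught.
# Using sentence length as the difficulty measure is an assumption for the demo.

def difficulty(example):
    return len(example.split())  # shorter sentence = easier

def curriculum(examples, stages=3):
    ordered = sorted(examples, key=difficulty)
    for stage in range(1, stages + 1):
        # Each stage trains on everything up to the current difficulty cap.
        cutoff = round(len(ordered) * stage / stages)
        yield stage, ordered[:cutoff]

corpus = [
    "dogs bark",
    "the cat sat on the mat",
    "the bank closed because it failed last year",
    "despite the rain the parade the town had planned went ahead anyway",
]
for stage, batch in curriculum(corpus):
    print(f"stage {stage}: train on {len(batch)} examples")
```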
Carl B - Mar 29, 2011: Hans Peter Willems - Mar 29, 2011:
...instead of looking at the grand design and taking on the interwoven problems of strong-AI, researchers seem to go for the easy pickings, the quick fixes. Projects that are aiming at a conscious machine can be counted on one hand (at least the ones out in the open), while there are hundreds or more projects that are focused on NLP and similar things. The problem I see here is that those projects are not specifically aimed at fixing ‘one of the problems of strong-AI’ but instead are going for the quick win and try to build ‘something’ (certainly NOT real AI) that can be (commercially) used in one or another scenario.
Strong AI will likely only be attained through extensive collaboration and standardization of the various components of AI (lexicon, NLP, memory, emotion, self, learning, reasoning, etc.).
The idea that a single person is so smart that only they can make the first strong AI is kind of old-fashioned in this age of cloud computing and social interconnectedness, which act as force (or brain) multipliers when applied to specific challenges.
Google could be a good example of how it will eventually be achieved, imo: thousands of super-smart people focused on the individual processes and concepts, backed by Googilian $$$, but able to assemble and distribute it as something useful/practical to the general public.
So, back to my original thought: as individuals fiddling with isolated solutions, the little guy really benefits from the “easy” and open-source components for pseudo machine intelligence, but no one individual will likely ever be the unitary father/mother of strong AI.