

AGI Project
 
 

In my first post here I mentioned my goal of building an AGI (Artificial General Intelligence), and said I would post some results around this date. Unfortunately there is less than I was expecting (and I was not expecting much). Anyway, slowly but surely, there is progress.

First post here: https://www.chatbots.org/ai_zone/viewthread/3228/

The core of the project, the learning algorithm, is not done yet; in fact, I am only starting to work on it now. Before that, I put my effort into setting up the project (its structure, design, libraries, build tools, version control, documentation, etc.) and into making a real-time graph representation tool.

I named this representation tool NetView; it works as a library (shared memory is no fun). It represents graphs much like graphviz does (only simpler), but it is designed to work in real time and to pull the data straight from memory instead of waiting for other applications to send it. (So it’s not like graphviz at all XD)
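To illustrate the pull-based idea, here is a minimal sketch. NetView’s actual API isn’t shown in this thread, so every name below is hypothetical; the point is only that the viewer lives in the same process and reads the graph’s current state each frame, rather than having updates pushed to it:

    import threading
    import time

    class GraphView:
        """Hypothetical in-process viewer that polls the live graph."""

        def __init__(self, graph, fps=30):
            self.graph = graph        # direct reference: no serialization, no IPC
            self.period = 1.0 / fps

        def run(self):
            while True:
                # Pull the current state straight from memory each frame.
                snapshot = [(n.id, n.charge) for n in self.graph.neurons]
                self.draw(snapshot)
                time.sleep(self.period)

        def draw(self, snapshot):
            print(snapshot)           # stand-in for the actual rendering

    # The viewer runs on its own thread; the simulation never sends it anything:
    # threading.Thread(target=GraphView(net).run, daemon=True).start()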

I also implemented a basic signaling algorithm (based on actual neurons). This is the result:
https://media.giphy.com/media/l1IBir76hla3nQOAg/giphy.gif

So now that I have my debugging tool done, it’s time to work on the learning algorithm (finally!). Feel free to ask any questions.

 

 
  [ # 1 ]

Hi again, Robert!

The GIF was nice. Don’t have a clue as to what it represents right now, but it’s something. cheese

I’m wondering what your “roadmap” looks like for this project. Obviously there are going to be a number of hurdles to overcome, and decisions to be made regarding the way your AGI will interact and behave, and I find that process the most interesting. There’s also the “thing” of agreeing on what makes an AGI a success. Without getting too philosophical about it, what are your thoughts (in general - no need to argue specifics at this point) about what makes an AGI “intelligent”? I have my ideas on the subject, and I’m happy to share, but I’d like to know your thoughts as well.

For example, many people feel that for an AGI to be truly intelligent, it must be self-aware without being programmed to be that way. I happen to disagree, but this is just my opinion. I’m not saying that it CANNOT be self-aware by algorithm; in fact, I don’t think that an AGI needs to have much in the way of self-awareness at all. At least, not in the early stages (holy CRAP! Dave is rooting for Skynet! big surprise ) Besides, we humans don’t yet agree on what constitutes self-awareness, in a specific sense, nor which living beings on the planet are blessed with that trait, so who am I to make judgements? wink

Anyway, I’d be happy to discuss your ideas.

 

 
  [ # 2 ]

The gif represents a NN. Nodes are neurons, links are synapses. When a node turns green it means it has been charged. The blue on the links represents the potential (the amount of charge that will be transmitted when the neuron fires). Potential needs to recharge after every fire.
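Roughly, in code, the signaling rule looks like this (the threshold and recharge numbers are placeholders, not the values I actually use):

    class Synapse:
        def __init__(self, target, potential=1.0, recharge_rate=0.1):
            self.target = target
            self.max_potential = potential
            self.potential = potential      # the "blue": charge sent on fire
            self.recharge_rate = recharge_rate

        def recharge(self):
            # potential recovers gradually after every fire
            self.potential = min(self.max_potential,
                                 self.potential + self.recharge_rate)

    class Neuron:
        def __init__(self, threshold=1.0):
            self.charge = 0.0               # the "green" when over threshold
            self.threshold = threshold
            self.synapses = []

        def step(self):
            if self.charge >= self.threshold:       # fire
                for syn in self.synapses:
                    syn.target.charge += syn.potential  # transmit
                    syn.potential = 0.0                 # spent: must recharge
                self.charge = 0.0
            for syn in self.synapses:
                syn.recharge()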

The next step is to make the learning algorithm. This algorithm creates and deletes neurons as well as synapses. In short, synapses deteriorate over time unless they transmit; when they have deteriorated enough they are deleted, and a neuron with no synapses is also deleted. Let’s call this reinforcement. Then, every now and then, random paths between two firing neurons are created; let’s call this exploring. There is a variable controlled by the environment, let’s call it “good”, that sets the intensity with which reinforcing and exploring occur. Basically, the more “good” there is, the more reinforcing and the less exploring; at negative “good” you get more exploring and less reinforcing. The idea is that when something the system considers “good” happens, whatever path is responsible for it sticks, whilst if something bad happens the responsible path deteriorates and new paths are made. A sketch follows below.
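Here is that sketch (all constants are placeholders; remove_synapse/create_path stand for whatever the real network structure provides, and I assume the signaling step records fired_this_step / transmitted_this_step flags):

    import random

    BASE_DECAY   = 0.01   # deterioration per step for unused synapses
    REINFORCE    = 0.05   # strengthening for synapses that transmitted
    BASE_EXPLORE = 0.05   # baseline chance of creating a random path

    def learning_step(net, good):
        # Reinforcement: unused synapses deteriorate, used ones stick.
        # Positive "good" slows the decay; negative "good" speeds it up.
        decay = BASE_DECAY * (1.0 - good)
        for syn in list(net.synapses):
            if syn.transmitted_this_step:
                syn.strength = min(1.0, syn.strength + REINFORCE * max(good, 0.0))
            else:
                syn.strength -= decay
            if syn.strength <= 0.0:
                net.remove_synapse(syn)      # deteriorated enough: delete
        for neuron in list(net.neurons):
            if not neuron.synapses:
                net.remove_neuron(neuron)    # no synapses left: delete

        # Exploring: now and then, wire a random path between two firing
        # neurons. Negative "good" makes this more likely.
        firing = [n for n in net.neurons if n.fired_this_step]
        if len(firing) >= 2 and random.random() < BASE_EXPLORE * (1.0 - good):
            a, b = random.sample(firing, 2)
            net.create_path(a, b)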

I’m already working on this, though I still have more testing to do. Once I see results I’ll post again. (If I lose hope I’ll probably post too XD)

As for what I understand as AGI, I don’t think it requires anything beyond the definition. Meaning, it doesn’t require self-awareness, feelings, or belief in an omnipotent being. The definition of intelligence says: “The ability to acquire and apply knowledge and skills”. Personally, I would continue the sentence with “... from the environment.” A human with no senses acquires no skills. So the behavior of an AGI will depend greatly on the environment (the input and output neurons) you provide. If you want it to think like a human, you’ll have to give it a human body and put it in human society. However, to check that the AGI is working, it is enough to test whether it behaves intelligently in whatever environment you provide (relative to that environment).

For example, at this initial stage I provide a virtual environment consisting of a dot in space surrounded by big dangerous circles. The environment tells the AGI that circles are bad. The AGI will have to come up with some strategy of its own to avoid them. And, of utmost importance, if I make modifications to the environment, the AGI will have to recognize the new situation and adapt, also on its own.
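As a toy sketch of that environment (the distances and the [-1, 1] range for “good” are my own placeholder choices):

    import math

    SAFE_DISTANCE = 100.0   # beyond this distance the agent is maximally "good"

    def good_signal(agent_pos, circles):
        """circles is a list of (x, y, radius); returns 'good' in [-1, 1]."""
        nearest = min(math.hypot(agent_pos[0] - cx, agent_pos[1] - cy) - r
                      for cx, cy, r in circles)
        if nearest <= 0.0:
            return -1.0     # touching a circle: as bad as it gets
        return min(1.0, nearest / SAFE_DISTANCE)

    # Each tick: feed positions into the input neurons, read the output
    # neurons as movement, then drive learning with the environment's verdict:
    # learning_step(net, good_signal(agent_pos, circles))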

 

 