pkgsrc-Users archive


Re: Artificial intelligence, genetic algorithms, and neural nets



On Wed, 10 Feb 2016, outro pessoa wrote:
> You need to objectively and voluntarily submit yourself to the conditions in which you would want the subject to learn.

Well, I'm pretty green with AI coding practices. I'm just starting a couple of small projects of my own so I can learn. I have played with libneural, which so far I can only coax into making simple boolean decisions. Right now, as a human, I can do a lot more than I can code any AI to do. Too bad there is no compiler for my brain's code besides my brain. :-)
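For anyone curious what "simple boolean decisions" looks like under the hood: this is not libneural's API (I won't guess at that), just a minimal plain-Python sketch of the kind of single threshold unit such libraries train, here learning boolean AND with the classic perceptron rule.

```python
# Minimal perceptron sketch (plain Python, NOT libneural's API):
# a single threshold unit learning the boolean AND function.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Perceptron rule: nudge weights toward the correct output."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # -1, 0, or +1
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
for (x1, x2), target in AND:
    out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
    print((x1, x2), "->", out)  # matches the AND truth table
```

AND is linearly separable, so the perceptron convergence theorem guarantees this settles on correct weights; XOR is the famous case where a single unit like this cannot.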

> Submersion into different bits of sociology and psychology along with ethics would be a great start.

Well, that sounds pretty far beyond my current level. One of my projects is a chatterbox. I am still working on a basic parser and grammatical library. Now that I've dug in, it seems pretty vast and daunting. I still haven't got to the point where I can 'teach' my own code much of anything beyond really simple logic problems it hasn't seen yet.
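To make "basic parser and grammatical library" concrete, here is a minimal sketch of the kind of first pass I mean: a regex tokenizer plus a tiny hand-built lexicon for word classes. All the names here are my own invention, not my actual project code.

```python
import re

# Hypothetical first parsing pass for a chatterbox: split input into
# tokens, then tag each with a crude word class from a tiny
# hand-built lexicon (names invented for illustration).

LEXICON = {
    "a": "DET", "an": "DET", "the": "DET",
    "ocean": "NOUN", "pond": "NOUN",
    "is": "VERB", "like": "PREP",
}

def tokenize(sentence):
    """Lowercase and split into word-like tokens."""
    return re.findall(r"[a-z']+", sentence.lower())

def tag(tokens):
    """Pair each token with its word class, UNK if unseen."""
    return [(t, LEXICON.get(t, "UNK")) for t in tokens]

print(tag(tokenize("An ocean is like a pond")))
# [('an', 'DET'), ('ocean', 'NOUN'), ('is', 'VERB'),
#  ('like', 'PREP'), ('a', 'DET'), ('pond', 'NOUN')]
```

Everything downstream (grammar rules, responses) builds on tagged token lists like this, which is part of why it starts to feel vast once you dig in.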

> The idea of intellect or intelligence as a single linear cause and effect is at fault at times. In order for the subject to learn both the hardware and software must be synchronous with the end results.

Again, that sounds wonderful, but far beyond what I can do right now. I'm just taking small steps.

> You are attempting to make a limited result in a closed and controlled environment. What would happen when a random event from absolute reality kicks in?

What happens with my code? It breaks spectacularly. :-) My chatterbox told me yesterday that "an ocean is like a pond". That's the most enlightening and interesting thing it's lex'd out so far, and I pretty much spoon-fed it to get it that far.

> It is best to approach Assistive - and not Artificial - intelligence from a collective perspective.

Interesting turn of phrase there. I like it. My dream is to be able to help the blind and other disabled folks use AI to handle tasks that are difficult for them but fairly easy otherwise. However, $DAYJOB gets in the way of anything too serious, and I'm simply not good enough yet. I got to 300-level math in college, and I'm realizing the math at play here:

1. Isn't calculus.
2. Is still significantly above my level.

It's like the AI researchers have invented a whole new language for themselves. Since I'm an amateur, I'm still getting acquainted with the vocabulary of neural-net research, like "threshold logic", "linear classifier", and "decision hyperplane".
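For what it's worth, those three terms turn out to describe one picture: a linear classifier computes the weighted sum w·x + b, threshold logic takes the sign of that sum, and the set of points where w·x + b = 0 is the decision hyperplane separating the two classes. A tiny sketch in plain Python (the weights here are just an illustrative choice):

```python
# One picture, three terms: a linear classifier computes w.x + b,
# threshold logic takes the sign, and the points where w.x + b == 0
# form the decision hyperplane between the classes.

def classify(w, b, x):
    """Which side of the hyperplane w.x + b = 0 is the point x on?"""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score > 0 else 0

# In 2-D the "hyperplane" is just the line x1 + x2 = 1.5,
# which happens to separate AND's true case from its false ones.
w, b = (1.0, 1.0), -1.5
print(classify(w, b, (1, 1)))  # 1 -- above the line
print(classify(w, b, (0, 1)))  # 0 -- below the line
```

The calculus-free surprise is that most of this corner of the field is linear algebra and a bit of geometry, not differentiation.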

> Since the brain of most animals has different parts, then AI would be composed of different technologies. Being able to convince any of you of this seems to be an exercise in futility.

It sounds perfectly rational to me; it's actually a rather insightful observation. It would be fascinating to approach the code design with this in mind, but I'm simply not good enough yet. There are too many parts left unbuilt for me to make that the goal of any of my rinky-dink projects.

You sound pretty passionate about the topic; are you currently an AI researcher yourself? As I mentioned, I'm nothing but an amateur. I'll take advice/tips/facts from anyone who seems to know something interesting.

-Swift

