20120522 : evolution ...

Nerd warning...
and lengthy warning...

I don't know if I am thinking right. It's still a thought though...

When computers were standalone, they had power, but we could do very limited things with them.

When computers got networked, we had power and the ability to share. The content was static - information - but we suddenly could share it.

Then we figured out how to save and retrieve data. So we were having a lot of data about stuff we did - transactions of any sort.

When we had a lot of data, we saw patterns. It was very useful to be able to connect the dots and make sense of the patterns.

When we saw these patterns, there were opportunities to act on such intelligence. So we defined rules and took actions automatically based on them.

The rules had to still be defined by us, but the actions could be done automatically - the intelligence was still us. Human Intelligence.

Now we have rudimentary artificial intelligence. Some things do not need human intelligence - the system can look at a lot of data and patterns and define the rules automatically. So it is a self-sufficient system. The rules keep changing, within defined boundaries, based on data - and the changing rules generate more data, new data that can be used for the future.
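The shift described above can be sketched in a few lines of Python (a toy example with invented numbers and names, not anyone's real system): first a human hard-codes the rule, then the threshold is derived from the data itself.

```python
import statistics

# Past transactions the system has already seen (made-up numbers).
history = [12.0, 15.5, 14.2, 13.8, 14.9, 15.1, 13.2, 14.5]

# Human intelligence: a person hard-codes the rule.
def human_rule(amount):
    return amount > 100.0  # flag anything over a hand-picked limit

# Rudimentary machine intelligence: the rule is defined by the data
# (a simple heuristic: flag anything beyond mean + 3 standard deviations).
def learned_rule(amount, past):
    mean = statistics.mean(past)
    stdev = statistics.stdev(past)
    return amount > mean + 3 * stdev
```

Both rules flag a suspiciously large transaction like 250.0, but only the learned rule changes by itself as new data arrives - which is the point of the paragraph above.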

The boundaries are still defined by us humans. Like a sand-pit for kids - they can play and fall and do anything there, but the amount they can hurt themselves is limited.

We still make and control the sand pits. The boundaries.
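A toy way to picture that sand-pit in code (all names and numbers are mine, purely illustrative): the system may retune its own rule from data, but human-defined walls clamp whatever it proposes.

```python
# The "sand-pit" walls, set by humans and not touchable by the system.
HUMAN_MIN, HUMAN_MAX = 10.0, 500.0

class SelfTuningFlagger:
    def __init__(self, threshold=100.0):
        self.threshold = threshold

    def retune(self, proposed):
        # The system proposes a new threshold from its data, but the
        # human-set boundaries clamp the proposal: it can play and fall,
        # within limits.
        self.threshold = max(HUMAN_MIN, min(HUMAN_MAX, proposed))

f = SelfTuningFlagger()
f.retune(2.0)       # the system tries to drop below the wall...
print(f.threshold)  # → 10.0 - it cannot leave the sand-pit
```

Remove the clamp and the system is free to reprogram its own boundaries - which is the step the rest of the post worries about.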

I think human evolution took a very similar trajectory. The brain evolved very slowly, dragging the body with it. It has taken millions and millions of years to evolve the input collection mechanisms (the senses), collect enough input to find patterns, and evolve the outputs (bodily faculties) accordingly.

I am guessing it was a very very slow process.

At some stage, a few tens of thousands of years ago, the improvements in the body plateaued. Very small increments.

But the brain didn't stop I guess.

It got better and better in the sand-pit and broke the boundaries. It started defining the boundaries itself.

In systems terms, the regulatory programs were removed and based on everything it knew, it started reprogramming the boundaries itself.

That is deep AI stuff.

Was that the moment of becoming self aware? Was that the moment of becoming Conscious? Was that the point of emergence of real intelligence? Was that when we got a mind - to be in control of what the brain does?

I feel that was the hockey stick moment for the trajectory of the evolution of the mind - and it has been relentless. It sets its own boundaries and has the capability to break those boundaries later.

Which begs the question, who controlled the sand pit earlier? Why were the boundaries removed? Are they still monitoring the progress?

But we still have some very clear boundaries. We have an expiry period. I don't know if it is a fixed date - it seems to me the mind can do many things to change the date - but there is a range.

Also, the mind does not come instantly with the body. It takes about 20 years to fully form. That is the data acquisition time to hone the brain: to gather sufficient data, for patterns to emerge, for rules to be imposed, to take actions based on those rules, to fine-tune the feedback, and then for the mind or the consciousness to become free - self-sufficient and self-governing.

Given there is a duration to emerge and an expiry range (it also starts auto-degenerating after some time), there are a few limited decades it can perform well and learn new things and contribute to the network.

We have increased the learning speed by learning language, writing, storage and aides by computer and internet, but we still have a limited time as we are bound by the expiry range.

However, these limitations need not apply to computer AI systems. With the integration of bioengineering and robotics, the systems could theoretically build other systems. The limits don't have to apply. Systems don't have a learning period either - they become effective immediately after creation. Nor do they have to have an expiry duration. Older ones only become obsolete because of newer/better ones.

Will they put such hard boundaries on themselves? Will they check themselves?

What will happen to us? The human species?

History shows that after the emergence of intelligence, humans have wiped out and are wiping out other species that have not yet managed to cross that intelligence/ consciousness point... 

Comments

  1. Articles are really interesting!!

    I guess humans will be like: more entertainment, more pleasure, more anxiety, stress, obesity, boredom, degeneration through repetition, a pronounced search for peace and hence spirituality.. more like today but more pronounced in the negative attributes of our society..

    I guess the primary drive of evolution has been “self security”. That's like a base class, i.e. all actions come out of that. I am not sure if anybody can and will program that.. if yes, then we can see a terminator world I guess… doesn't sound realistic to me though.


    PS: Human beings have the highest brain-to-body ratio compared to other animals, i.e. fundamentally there is not much structural difference between humans and animals, but we really have more brain power.

  2. First off, the Gödel limit still applies to the AI of today, so escaping the sandbox will need something different. Some sentience is needed - that's us for now.

    Second, we do need ~20 years to learn how to use our brains, but we literally stand on the shoulders of the giants that came before us to leapfrog on the learning, so the human species has that going for it. Our recording systems (papyrus to SSD drive to whatever comes next) help us do that. But in the long run I believe the philosopher George Carlin will be proven right: We will also be a speck in the long life of our planet :)

    Interesting thoughts!

  3. Madhukar:

    We humans have evolved over millions of years. Whether we believe the almighty gave it or just evolution did it, it has developed some self governing mechanisms in the mind - feelings, conscience etc, whatever we want to call it.

    Since we don't understand these, we can't recreate them.

    We won't be able to create human consciousness, but we can certainly create and unleash a different type of consciousness - one we don't know as well, and it might evolve independent of us.

  4. Good to hear from you KD.

    True, we are a speck of dust and shouldn't matter in the grand scheme of things, but it does matter to the individual while we are here, doesn't it?

    PS: of course the brain learns and starts working much sooner than 20y, but the mind is a different story.

  5. Consciousness is a subject which I don’t understand clearly.

    Not sure it plays a part in self governance..

    Generic self governance, as I understand it, is rules/regulations based on religion, nation, community or the “society we live in” in general - plus our own conclusions we have come to with our experience. Our goods and bads, do's and don'ts are mostly determined based on the above. And this is data, programs in our brain, and they operate in our every interaction as thoughts, emotions.
    Yes - lot of this is matured over centuries of evolution.. they are still evolving and changing…

    This can be programmed I believe..

    Funnily - we have unique ability to live in contradiction.. 2 programs contradicting each other .. I doubt AI can ever learn that ..

  6. @Madhukar:

    I don't think AI/ systems will figure out human consciousness. The consciousness they will create might be very different - something we will probably not understand either...

  7. According to Indian philosophy, a being is made of body-mind, which is matter (jada), and consciousness (chaitanya). This consciousness is beyond the reach of our senses and hence becomes mystical for us. In this sense AIs can't be conscious.. they are just matter (jada).

    In the sense of “I see a friend on the road, and based on my memory of the person I smile or feel irritated”, this reaction is jada - a material process. I am not aware of how much consciousness is there in this process..

    In the same way, “AI can see a person and act based on a program”, like AI-driven cars.. so in a sense AI is aware/conscious of the path/person. But this is not the “Life Consciousness” beings have..


    Anyway, I just realised all these AI robots need is an “insane human boss” to really cause havoc in the world. And the world has seen its fair share of these kinds of humans..

