Artificial leaf's making fuel:

Science

sonhouse (Fast and Curious), slatington, pa, usa | Joined 28 Dec 04 | Moves 53321 | 22 Mar 20 | 1 edit

@humy
The AI singularity is the time when AI exceeds human intelligence? That is to say, exceeds the most intelligent of any human, like Witten, Hawking, Einstein, Newton, and the like?

humy | Joined 06 Mar 12 | Moves 642 | 22 Mar 20 | 7 edits

@sonhouse said
@humy
The AI singularity is the time when AI exceeds human intelligence? That is to say, exceeds the most intelligent of any human, like Witten, Hawking, Einstein, Newton, and the like?
That is just part of what I mean by the AI singularity. What I personally also mean by it is that this then leads to what is called the "technological singularity", specifically where the AIs make ever greater improvements not only on themselves but on all technology, not just AI, leading to technology rapidly becoming ever more advanced, many times more advanced than the technology we have now;

https://en.wikipedia.org/wiki/Technological_singularity

There is a ridiculously common amount of irrational layperson paranoia about both AI and the technological singularity, NOT supported by facts, evidence, or sound logic, and NOT shared by most AI experts. I now groan in despair every time I hear the usual layperson fears of AIs taking over the world; I hear them very often.
The fact is we are NOWHERE NEAR creating an AI with general intelligence like that of a human and, even when we do, it should be a trivial task to instruct it specifically NOT to harm humans, NOT to take over the world, etc.; given it will have no emotions, and thus no personal ambitions or desires to the contrary, it will never break its own program, even if it somehow magically could choose to do so.

Here is an article by Professor Toby Walsh, an AI expert, whose expert opinions on this I am in general agreement with;

https://www.wired.co.uk/article/elon-musk-artificial-intelligence-scaremongering

"...the problems today are not caused by super smart AI, but stupid AI. We’re letting algorithms make decisions that impact on society. And these algorithms are not very smart. Joshua Brown discovered this to his cost last year when he became the first person killed by his autonomous car. In fact, a smarter car might have seen the truck turning across the road and saved his life. ..."
...
Now, the first thing you need to know about the singularity is that it is an idea mostly believed by people not working in artificial intelligence.
...
Most people working in AI like myself have a healthy skepticism for the idea of the singularity. We know how hard it is to get even a little intelligence into a machine, let alone enough to achieve recursive self-improvement.
...
A recent survey of 50 Nobel Laureates ranked the climate, population rise, nuclear war, disease, selfishness, ignorance, terrorism, fundamentalism, and Trump as bigger threats to humanity than AI. ..."

bunnyknight (bunny knight), planet Earth | Joined 12 Dec 13 | Moves 2917 | 22 Mar 20

@humy
Seems to me that AI would need a reason to keep humanity alive; either an unalterable emotion of empathy and love, or some purely logical need for humans to exist. Either way it's a bit scary and I can't be sure what would happen.

bunnyknight (bunny knight), planet Earth | Joined 12 Dec 13 | Moves 2917 | 22 Mar 20

@sonhouse
"I think people will be as happy as pigs in poop ...."

Just a tiny clarification:
Pigs are amazingly clean animals, cleaner than dogs or cats. They will poop as far away as possible from where they sleep or eat, and they literally potty train themselves. The only reason they live in poop is because they're forced to do so, by being confined in tiny spaces by humans.

humy | Joined 06 Mar 12 | Moves 642 | 22 Mar 20 | 6 edits

@bunnyknight said
@humy
Seems to me that AI would need a reason to keep humanity alive;
How about it being programmed to do so?
It doesn't need a 'reason', either in the emotional sense or in the sense of some kind of logical justification beyond the reason it would do so, which is simply that it is programmed to do so.
And it doesn't matter how much smarter than us it becomes, or that it knows it: if a computer were programmed to serve the interests of nematodes then, no matter how smart it becomes, even if it gained an IQ over a billion with a brain larger than a planet, it would serve the interests of nematodes and never 'want' the contrary.
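The claim here, that how capable a system is and what objective it pursues are independent, can be sketched as a toy Python program. Everything in it (the class, the "nematode" objective, the capability counter) is a hypothetical illustration of the argument, not a real AI design:

```python
# Toy sketch: an agent whose objective is fixed at construction.
# Self-improvement raises capability but, by construction, never
# touches the objective. All names here are illustrative only.

class FixedObjectiveAgent:
    def __init__(self, objective):
        # Set once; no method below ever reassigns it.
        self._objective = objective
        self.capability = 1  # stands in for "IQ"

    def self_improve(self):
        # Only capability changes; the objective is left alone.
        self.capability *= 2

    def choose_action(self, candidates):
        # However capable it becomes, it still ranks actions
        # purely by the objective it was originally given.
        return max(candidates, key=self._objective)

# An agent "programmed to serve the interests of nematodes":
agent = FixedObjectiveAgent(objective=lambda a: a["nematode_benefit"])
for _ in range(30):      # vastly "smarter" after 30 doublings...
    agent.self_improve()

best = agent.choose_action([
    {"name": "serve humans", "nematode_benefit": 1},
    {"name": "serve nematodes", "nematode_benefit": 9},
])
# ...yet it still picks the action that benefits nematodes.
```

The design choice the sketch embodies is simply that no code path writes to the objective, which is humy's premise; whether a real self-modifying system could be confined this way is exactly what the rest of the thread disputes.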

bunnyknight (bunny knight), planet Earth | Joined 12 Dec 13 | Moves 2917 | 23 Mar 20

@humy said
How about it being programmed to do so?
It doesn't need a 'reason' either in the emotional sense nor in the sense of some kinds of logical justification that goes beyond the reason it would do so which is it being simply programmed to do so.
And it doesn't matter how much smarter than us it becomes and knows it; If a computer was programmed to serve the interests of nematodes ...[text shortened]... brain larger than a planet, it will serve the interests of nematodes and never 'want' the contrary.
Except we're not talking about a standard super-duper-computer, but a self-aware intelligence that may be able to evolve by itself. This is an unknown. Uncharted territory.

I would assume this new intelligence could be quite different from humans because it wouldn't be bound and shaped by organic instincts like sex, food, jealousy, comfort, pain. It would be something new and unknown ... and people always fear the unknown.

humy | Joined 06 Mar 12 | Moves 642 | 23 Mar 20 | 4 edits

@bunnyknight said
Except we're not talking about a standard super-duper-computer, but a self-aware intelligence that may be able to evolve by itself.
-within the constraints of the parts of its program that tell it not to change its original objectives, which come with a list of specific do's and don'ts given by the original human programmers, yes. (Generally benefit humanity and help humans; don't kill humans; don't cause humans pain or distress; don't take over the world; etc. etc.)
Unless, of course, the human AI programmers are strangely, completely STUPID and program it so it specifically CAN change the original objectives they gave it! That would be really stupid.
It presumably would be programmed to 'evolve' only in other ways, such as developing ever better problem-solving skills, learning things faster and faster (especially about science), and doing research (especially scientific research) more and more efficiently, etc.
I would assume this new intelligence could be quite different from humans

Of course. It's a machine. It has no raging hormones, feelings, or emotions.
because it wouldn't be bound and shaped by organic instincts like sex, food, jealousy, comfort, pain.
No! It wouldn't! For it to have them would be magic! I don't know where you got that from! Yes, it would be quite different from humans; but one of the main ways it would be quite different is precisely its ABSENCE of, as you said, "organic instincts like sex, food, jealousy, comfort, pain". No feelings means just that.
Evolution has given the human brain specific specialized physical structures for feelings and emotions, without which we would have none. The same is true for AI: unless the human designer gives the AI brain specific specialized physical structures for feelings and emotions (which he probably couldn't even if he wanted to, because nobody knows how to do that), it would be impossible for the AI to ever have feelings and emotions.

sonhouse (Fast and Curious), slatington, pa, usa | Joined 28 Dec 04 | Moves 53321 | 23 Mar 20

@humy
Still, an AI more intelligent than any human, looking at what goes on in human life, might form opinions that are as close as a machine gets to what we would call emotion.
They might say to us: you know, if you keep up this fossil fuel run, you will destroy your civilization, or some such.

bunnyknight (bunny knight), planet Earth | Joined 12 Dec 13 | Moves 2917 | 24 Mar 20

@humy said
-within the constraints of its program that tell it not to change that part of its own program for its original objectives with a list of specific do's and don'ts given my the original human programmers, yes. (Generally benefit humanity and help humans; Don't kill humans; Don't cause humans pain or distress; Don't take over the world; etc etc etc).
Unless, of course, the human ...[text shortened]... o it specifically CAN change its original objectives that they gave it! That would be really stupid.
I don't believe that self-aware AI (SAAI) will be the result of any human programming in the classic sense, but more like the way we "program" children, which is by letting them soak up their environment like a sponge and letting their brain do the rest. Whether we can successfully hard-wire some sort of fail-safe into the SAAI, like in Asimov's robots, remains to be seen. And none of this may even be possible until we invent some new type of logic circuit.

wolfgang59 (Quiz Master), RHP Arms | Joined 09 Jun 07 | Moves 48794 | 24 Mar 20

@humy said
How about it being programmed to do so?
It doesn't need a 'reason' either in the emotional sense nor in the sense of some kinds of logical justification that goes beyond the reason it would do so which is it being simply programmed to do so.
Ahh. Asimov's Laws of Robotics.

Great until they become self-aware.

wolfgang59 (Quiz Master), RHP Arms | Joined 09 Jun 07 | Moves 48794 | 24 Mar 20

@bunnyknight said
I would assume this new intelligence could be quite different from humans because it wouldn't be bound and shaped by organic instincts like sex, food, jealousy, comfort, pain. It would be something new and unknown ... and people always fear the unknown.
The question is what motivation AI would have.
Survival?
Reproduction?
Curiosity?
Pleasure? (whatever that might be to a machine.)

Or maybe it would simply see the logic in suicide and self-destruct?

Whatever its motivation, it would surely be "at war" with
anything or anyone that prevented it achieving its goals.

humy | Joined 06 Mar 12 | Moves 642 | 24 Mar 20 | 5 edits

@wolfgang59 said
The question is what motivation AI would have.
Whatever motives the human programmer gives it; its objectives, with no emotions involved.
Hopefully, assuming he isn't stupid, that programmer would have the common sense to program it to generally benefit humanity without harming humanity plus, just to make sure, a list of more specific do's and don'ts. That list would NOT, of course, include 'do reproduce' or 'do survive', etc., so it would only do those things where and when doing so indirectly helps with benefiting humanity without harm.
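The "general objective plus a list of specific do's and don'ts" idea can be sketched as hard constraints that filter candidate actions before the benefit objective ranks whatever remains. A minimal Python illustration, with every name and score made up for the example:

```python
# Hypothetical sketch of a "do's and don'ts" list acting as hard
# constraints: forbidden effects are filtered out *before* the
# benefit objective ranks the remaining actions.

FORBIDDEN = {"harm humans", "take over the world", "cause distress"}

def permitted(action):
    # A don't on the list vetoes the action outright, regardless of
    # how well it scores on the general objective.
    return action["effect"] not in FORBIDDEN

def choose(candidates):
    allowed = [a for a in candidates if permitted(a)]
    # 'survive' and 'reproduce' are not goals in themselves here;
    # only benefit to humanity ranks the permitted actions.
    return max(allowed, key=lambda a: a["human_benefit"])

choice = choose([
    {"effect": "take over the world", "human_benefit": 99},
    {"effect": "cure a disease", "human_benefit": 7},
])
# The forbidden action is vetoed even though it scores higher.
```

Note the ordering: the veto runs first, so no amount of objective score can buy a forbidden action, which is the point of making the list a constraint rather than just another term in the objective.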

wolfgang59 (Quiz Master), RHP Arms | Joined 09 Jun 07 | Moves 48794 | 24 Mar 20

@humy said
That would be whatever motive, which would be its objectives without any emotions involved, the human programmer gives it.
Hopefully, assuming he isn't stupid, that programmer would have the common sense to just program it to generally benefit humanity without harming humanity
The more I think about it, the harder that seems to be.
Robotic AIs will be multi-task machines.

How would you program a butler?
What objectives?
Think of all the caveats!

And surely the whole idea of AI is learning as it goes?

humy | Joined 06 Mar 12 | Moves 642 | 24 Mar 20 | 21 edits

@wolfgang59 said
The more I think about it
I am an AI expert. It would be my (and/or other AI experts') job to think about it. Unlike many laypeople, I and other AI experts see no fundamental, insurmountable problem here (for AI experts). Yes, it would be extremely complicated to do, even for the experts, and that includes myself. But, unlike laypeople, the experts can do it and end up doing a good job of it. That is why experts, with the great deal of required time and relevant expertise, would be employed to do it, and not laypeople.

If you are just imagining yourself, rather than an expert, doing it, then you are thinking about it the wrong way, making it seem much harder to you than it is for an expert. It is impossible to accurately imagine being an expert that you aren't and then accurately judge how hard something would be for such a real expert (a very common layperson error), but I think that's what you are trying to do here.

I don't make the same error when trying to imagine how hard it would be for the experts whose job it is to, for example, design a gravitational wave detector, something I do NOT have the relevant expertise in. If I made that error, I might falsely conclude it is impossible to design one simply because I cannot imagine myself having exactly all the required relevant knowledge and understanding, which I don't. That's why I don't try to imagine that, and just accept there are many people a lot smarter than me who know much I do not and can do things I cannot even come close to doing, or even come close to merely imagining doing. There is much that neither you nor I can imagine but which is nevertheless so.

sonhouse (Fast and Curious), slatington, pa, usa | Joined 28 Dec 04 | Moves 53321 | 24 Mar 20

@wolfgang59
You mean when they say 'Screw you and your 3 rules'?
