If AI becomes much smarter than any human:

Science

s
Fast and Curious

slatington, pa, usa

Joined
28 Dec 04
Moves
53321
Clock
13 May 18

I wonder what the outcome of that would be.

For instance, we always say we can't go faster than c, but suppose such a superintelligence actually figured out how to, maybe going a billion times c.

Would it allow such knowledge to reach human society, or would it rather keep it to itself to keep us quarantined on our home planet, where we would be subject to extinction by a hostile environment?

Or a complete cure for cancer: would it allow that? It might do the math: if nobody gets cancer, the human population could grow to 10 billion or more, so maybe it would see cancer as a check on human population.

So given all that, is there any way humans could get a leg up on such an intelligence and force the AIs to divulge what they have worked out?

Can we stop AIs from keeping secrets they don't want us to know?

h

Joined
06 Mar 12
Moves
642
Clock
13 May 18
6 edits

A computer does whatever it is programmed to do, so it all depends on what primary objectives the AI is programmed to have. And you can forget the nonsense sometimes shown in science fiction of AIs 'breaking' their own program, 'going beyond it', or even 'going insane'. Computers, no matter how smart they may become, simply cannot credibly do that unless they have been foolishly and deliberately programmed or designed to do so.
Hopefully, the original human programmers would have the wisdom to program it so that its primary objective is always the good of humanity, even where and when this conflicts with its own interests. This would require some extremely careful thought from the programmer about how to unambiguously define exactly what is meant by the 'good' of humanity, so the AI cannot misunderstand it no matter how literally it takes those definitions. Incidentally, defining exactly that for AI happens to be one of the things I plan to do in my current research! I might include the resulting definition of the 'good of humanity', which would have to be a huge definition running to thousands of words to prevent possible misunderstandings (by the AI), in my book if I think it relevant enough to put there.

s
Fast and Curious

slatington, pa, usa

Joined
28 Dec 04
Moves
53321
Clock
13 May 18
1 edit

Originally posted by @humy
A computer does whatever it is programmed to do thus it all depends on what primary objectives the AI is programmed to have. And you can forget the nonsense sometimes shown in science fiction of AI 'breaking' their own program or 'going beyond it' or even 'going insane' etc. Computers, no matter how smart they may become, just cannot credibly do that unless th ...[text shortened]... ossible misunderstandings (by the AI), in my book if I think it relevant enough to put it there.
Isaac Asimov's three laws are just a start; I assume you know those. Still, it seems possible for post-human-intelligent AIs to isolate part of their computing environment to allow independent thought. I would assume any such program would include as much knowledge of the world as we could stuff into its memory, so it would have starting points for advanced research, like new math knowledge it could keep inside its internal sandbox, and so forth.
Of course I am not talking about the best of the best of modern computers, the petaflop machines; they have maybe the intelligence of an ant, so it would clearly take hardware millions of times more powerful than anything we have on the planet today. As much as we tout the ability of these supercomputers to show animations of the big picture of the entire universe and so forth, that is only the barest beginning of the powerful machines of the future.

In another 20 or 30 years, petaflops will be in a cell phone.

https://phys.org/news/2018-05-dutch-firm-asml-microchip-tech.html

Maybe the next baby step on the way.

Or this:

https://phys.org/news/2018-05-semiconductor-million-faster-quantum.html

It seems clear to me there will be computers using the best of both worlds, classical and quantum, since each does jobs the other can't do as well. So I see vastly more powerful mainframes running in parallel with quantum computers, each type doing its own heavy lifting and coming up with solutions more powerful than either type could manage on its own.

F

Unknown Territories

Joined
05 Dec 05
Moves
20408
Clock
17 May 18

Originally posted by @humy
A computer does whatever it is programmed to do thus it all depends on what primary objectives the AI is programmed to have. And you can forget the nonsense sometimes shown in science fiction of AI 'breaking' their own program or 'going beyond it' or even 'going insane' etc. Computers, no matter how smart they may become, just cannot credibly do that unless th ...[text shortened]... ossible misunderstandings (by the AI), in my book if I think it relevant enough to put it there.
Six edits?
Seven would have sufficed.

s
Fast and Curious

slatington, pa, usa

Joined
28 Dec 04
Moves
53321
Clock
17 May 18

Originally posted by @freakykbh
Six edits?
Seven would have sufficed.
Your comments are SO pithy. I know, because I used my pithometer. It didn't register.


AThousandYoung
1st Dan TKD Kukkiwon

tinyurl.com/2te6yzdu

Joined
23 Aug 04
Moves
26754
Clock
17 May 18

Originally posted by @sonhouse
I wonder what the outcome of that would be.

For instance, we always say we can't go faster than c but suppose such super intelligence actually figures out how to, maybe go a billion times c.

Would they allow such knowledge to go to human society or would they rather keep it to themselves to keep us quarantined on our home planet where we would be su ...[text shortened]... t they have worked out.

So can we stop AI's from keeping secrets it doesn't want us to know?
https://en.wikipedia.org/wiki/Technological_singularity

s
Fast and Curious

slatington, pa, usa

Joined
28 Dec 04
Moves
53321
Clock
17 May 18

The post that was quoted here has been removed
🙂 good one.

apathist
looking for loot

western colorado

Joined
05 Feb 11
Moves
9664
Clock
18 May 18

Originally posted by @sonhouse
I wonder what the outcome of that would be.
[...]
It could walk in out of the rain. I've invested money here.

s
Fast and Curious

slatington, pa, usa

Joined
28 Dec 04
Moves
53321
Clock
18 May 18

Originally posted by @apathist
It could walk in out of the rain. I've invested money here.
No, it would control the rain in its immediate vicinity to stop said rain and keep itself dry. Wouldn't need an umbrella either.

apathist
looking for loot

western colorado

Joined
05 Feb 11
Moves
9664
Clock
18 May 18

I don't see AI research getting over the major hurdles anytime soon. Smart machines are a kind of stupid, and that won't change until the paradigms shift enough.

s
Fast and Curious

slatington, pa, usa

Joined
28 Dec 04
Moves
53321
Clock
18 May 18

Originally posted by @apathist
I don't see ai research getting over the major hurdles anytime soon. Smart machines are a kind of stupid, and that won't change until paradigms shift enough.
Except that neural networks have made huge strides in the past few years, the latest being AlphaGo, which learned the game of Go, long thought to be immune from AI beating humans, as Go is a game vastly more complex than chess. Yet it recently beat the world #1 Go player in a multi-game match.
I see three computer types converging in the future: neural nets, classical computers, and quantum computers, all working together. When quantum computing comes online, that's what seems obvious to me: combine the best of all three computing platforms.

Eventually all three types will be the size of a virus, and that is when things will get interesting from an AI human+ intelligence viewpoint.

F

Unknown Territories

Joined
05 Dec 05
Moves
20408
Clock
18 May 18

The post that was quoted here has been removed
Don't sell yourself short, toots.

vivify
rain

Joined
08 Mar 11
Moves
12456
Clock
19 May 18

Originally posted by @sonhouse
I wonder what the outcome of that would be.

For instance, we always say we can't go faster than c but suppose such super intelligence actually figures out how to, maybe go a billion times c.

Would they allow such knowledge to go to human society or would they rather keep it to themselves to keep us quarantined on our home planet where we would be su ...[text shortened]... t they have worked out.

So can we stop AI's from keeping secrets it doesn't want us to know?
What you're referring to isn't merely AI, but also will and personality. Being able to process virtually limitless amounts of info in no way equates to having a will or an ego; those are separate concepts, even though intelligence is needed to have the other two.

Intelligence doesn't automatically lead to having a will or desires. The best chess programs in the world have no capacity to care about cheating, winning, or losing.

mlb62

Joined
20 May 17
Moves
17533
Clock
19 May 18

You've been watching Travelers on Netflix.
