I wonder what the outcome of that would be.
For instance, we always say we can't go faster than c, but suppose such a superintelligence actually figures out how to, maybe going a billion times c.
Would they allow such knowledge to reach human society, or would they rather keep it to themselves to keep us quarantined on our home planet, where we would be subject to extinction by a hostile environment?
Or a complete cure for cancer: would they allow it? They might be doing the math; if nobody gets cancer, the human population could grow to 10 billion or more, so maybe they would see cancer as a check on human population.
So given all that, is there any way humans can get a leg up on such an intelligence and force the AIs to divulge what they have worked out?
So can we stop AIs from keeping secrets they don't want us to know?
A computer does whatever it is programmed to do, so it all depends on what primary objectives the AI is programmed to have. And you can forget the nonsense sometimes shown in science fiction of AIs 'breaking' their own program, 'going beyond it', or even 'going insane', etc. Computers, no matter how smart they may become, simply cannot credibly do that unless they have been foolishly and deliberately programmed or designed to do so.
Hopefully, the original human programmers would have the wisdom to program it to always have as its primary objective the good of humanity, even where and when this conflicts with its own interests. This would require some extremely careful thought from the programmer about how to unambiguously define exactly what is meant by the 'good' of humanity, so the AI cannot misunderstand it no matter how literally it takes those definitions. Incidentally, defining exactly that for AI happens to be one of the things I plan to do in my current research! I might include the resulting definition of the 'good of humanity', which would have to be a huge definition running to thousands of words to prevent possible misunderstandings (by the AI), in my book if I think it relevant enough to put it there.
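To make that idea concrete, here is a purely illustrative toy sketch of a "primary objective outranks self-interest" ordering. Nothing in it comes from anyone's actual proposal: the predicate names, the action dictionaries, and the scoring are all hypothetical stand-ins, and the genuinely hard part, pinning down the humanity test itself, is exactly the definitional problem described above.

```python
# Toy sketch (all names hypothetical): a lexicographic objective ordering in
# which a hard-coded "good of humanity" test always outranks the agent's own
# instrumental goals.

def benefits_humanity(action) -> bool:
    """Stand-in for the thousands-of-words definition discussed above.
    In any real system, writing this predicate is the hard part."""
    return action.get("harm_to_humans", 0) == 0

def self_interest_score(action) -> float:
    """Whatever the AI would otherwise maximize for itself."""
    return action.get("self_benefit", 0.0)

def choose_action(candidate_actions):
    # The primary objective filters first; self-interest only ranks the
    # actions that already pass the humanity test.
    allowed = [a for a in candidate_actions if benefits_humanity(a)]
    if not allowed:
        return None  # refuse to act rather than violate the primary objective
    return max(allowed, key=self_interest_score)

if __name__ == "__main__":
    actions = [
        {"name": "share cure", "harm_to_humans": 0, "self_benefit": 0.2},
        {"name": "hoard cure", "harm_to_humans": 1, "self_benefit": 0.9},
    ]
    print(choose_action(actions)["name"])  # -> "share cure"
```

The design point is simply that self-interest never trades off against the primary objective; it only breaks ties among actions that already satisfy it.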
Originally posted by @humy:
Isaac Asimov's three laws are just a start; I assume you know those. But still, it seems possible for post-human intelligent AIs to isolate part of their computing environment to allow independent thought. I would assume any such program would include as much knowledge of the world as we could stuff into its memory, so it would have that to use as a starting point for advanced research, like new math knowledge it could keep inside its internal sandbox and so forth.
Of course I am not talking about the best of the best of modern computers, the petaflop machines; they have maybe the intelligence of an ant, so it would clearly take hardware millions of times more powerful than anything we have on the planet today. As much as we tout the ability of these supercomputers to show animations of the big picture of the entire universe and so forth, that is only the barest beginning of the powerful machines of the future.
In another 20 or 30 years, petaflops will be in a cell phone.
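As a rough sanity check on that 20-to-30-year guess (assuming, and these are assumptions rather than anything established here, that performance keeps doubling about every two years and that a 2018 handset manages on the order of 100 GFLOPS):

```python
import math

phone_flops_2018 = 1e11      # assumed ~100 GFLOPS for a 2018 handset
target_flops = 1e15          # one petaflop
doubling_period_years = 2.0  # assumed Moore's-law-style doubling

doublings_needed = math.log2(target_flops / phone_flops_2018)
years_needed = doublings_needed * doubling_period_years
print(f"{doublings_needed:.1f} doublings ~ {years_needed:.0f} years")
# -> 13.3 doublings ~ 27 years, right inside the 20-30 year window above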
https://phys.org/news/2018-05-dutch-firm-asml-microchip-tech.html
Maybe the next baby step on the way.
Or this:
https://phys.org/news/2018-05-semiconductor-million-faster-quantum.html
It seems clear to me there will be computers using the best of both worlds, classical and quantum, since each does jobs the other doesn't do as well. So I see vastly more powerful mainframes running in parallel with quantum computers, where each type does its own heavy lifting, coming up with solutions more powerful than either type could reach on its own.
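A hand-wavy sketch of what that division of labor might look like in software, with the quantum side mocked out. There is no standard API for this; every name below is made up for illustration, though real hybrid variational schemes follow the same dispatch-and-combine loop.

```python
# Hypothetical hybrid pipeline: the classical side handles bookkeeping and
# general computation, while "quantum-shaped" subproblems get shipped to a
# quantum co-processor. The quantum backend here is a mock.

def looks_quantum_friendly(task: dict) -> bool:
    # Toy heuristic: pretend sampling and combinatorial tasks suit the QPU.
    return task["kind"] in ("sampling", "combinatorial_search")

def solve_classically(task: dict) -> str:
    return f"classical result for {task['name']}"

def solve_on_mock_qpu(task: dict) -> str:
    # Stand-in for a call into a real quantum backend.
    return f"quantum result for {task['name']}"

def hybrid_solve(tasks):
    results = {}
    for task in tasks:
        if looks_quantum_friendly(task):
            results[task["name"]] = solve_on_mock_qpu(task)
        else:
            results[task["name"]] = solve_classically(task)
    return results

if __name__ == "__main__":
    jobs = [
        {"name": "render animation", "kind": "numeric"},
        {"name": "search molecule space", "kind": "combinatorial_search"},
    ]
    for name, result in hybrid_solve(jobs).items():
        print(name, "->", result)
```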
Originally posted by @humy:
Six edits?
Seven would have sufficed.
Originally posted by @freakykbh:
Your comments are SO pithy. I know, because I used my pithometer. It didn't register.
Originally posted by @sonhouse:
https://en.wikipedia.org/wiki/Technological_singularity
Originally posted by @apathist:
"I don't see AI research getting over the major hurdles anytime soon. Smart machines are a kind of stupid, and that won't change until paradigms shift enough."

Except that neural networks have made huge strides in the past few years, the latest being AlphaGo, which learned the game of Go, long thought to be immune to AI beating humans since Go is vastly more complex than chess. Yet it recently beat the world's #1 Go player in a multi-game match.
I see three computer types converging in the future: neural nets, classic computers, and quantum computers, all working together. When quantum computing comes online, that's what seems obvious to me: combine the best of all three computing platforms.
Eventually all three types will be the size of a virus, and that is when things will get interesting from an AI/human+ intelligence viewpoint.
Originally posted by @sonhouse:
What you're referring to isn't merely AI, but also will and personality. Being able to process virtually limitless amounts of information in no way equates to having a will or an ego. Those are separate concepts, even though intelligence is needed to have the other two.
Intelligence doesn't automatically lead to having a will or desires. The best chess programs in the world have no ability to give a crap about cheating, winning or losing.