twhitehead

Cape Town

Joined
14 Apr 05
Moves
52945
Clock
18 Jul 17

Originally posted by @moonbus
We understand how such algorithms work; our incapacity is a matter of not being able to calculate large primes with 'pencil and paper' (or a pocket calculator) to carry out the algorithms 'by hand'.
Nonsense. Nobody would be stupid enough to try to crack modern encryption by hand with pencil and paper.
We simply cannot crack them, which is why they are used. If they were crackable, even with significant effort, then banks and governments would all be at risk.

It is theoretically possible that AI could generate an algorithm we do not understand; that would be something different.
No, not really. Uncrackable is uncrackable whether you understand how it's done or not.

Most encryption algorithms are based on generating prime pairs. If AI were to generate an encryption algorithm not based on prime pairs, but on something else entirely, we might not be able to follow what it is doing.
Who cares whether we can follow what it's doing? If it's encrypted, you can't get at it anyway, whether you know the algorithm or not.
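
For concreteness, here is a minimal textbook-RSA-style sketch, in Python, of the 'prime pair' idea the posters are debating. It assumes the scheme in question is RSA-like (the thread never names one), and the toy primes and message are hypothetical values for illustration; real keys use primes hundreds of digits long.

```python
# Textbook RSA with toy primes (hypothetical values for illustration;
# real deployments use primes hundreds of digits long).
p, q = 61, 53                # the secret prime pair
n = p * q                    # 3233: the public modulus
phi = (p - 1) * (q - 1)      # 3120
e = 17                       # public exponent, coprime with phi
d = pow(e, -1, phi)          # 2753: private exponent (Python 3.8+)

msg = 65
cipher = pow(msg, e, n)      # encrypt: 65^17 mod 3233 = 2790
assert pow(cipher, d, n) == msg   # decrypting recovers the message

# Cracking the cipher without d means factoring n back into p and q:
# trivial for 3233, computationally infeasible for a 2048-bit modulus.
```

The point both posters circle around: the security rests on factoring being hard, not on the algorithm being secret or hard to follow.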

w

Joined
02 Jan 06
Moves
12857
Clock
19 Jul 17

Originally posted by @fabianfnas
You've seen too many Hollywood B-movies too late in the night...
HAL: "How about a nice game of chess?"

w

Joined
02 Jan 06
Moves
12857
Clock
19 Jul 17

Originally posted by @fabianfnas
Mr Freak, do you think AI can create itself spontaneously, given that there is enough information from the beginning?
Is this kind of intelligence benevolent or is it evil by its nature?
AI is not human; therefore, it has no innate tendencies towards such things as empathy and compassion.

What if AI learns that humans are destroying the world via carbon emissions? How would it stop us?

In fact, why are carbon emissions even a problem? Is it not the world of science that created the problem, just like AI?

Will science destroy mankind and ultimately the world?

h

Joined
06 Mar 12
Moves
642
Clock
20 Jul 17
3 edits

Originally posted by @whodey
AI is not human; therefore, it has no innate tendencies towards such things as empathy and compassion.

What if AI learns that humans are destroying the world via carbon emissions? How would it stop us?

In fact, why are carbon emissions even a problem? Is it not the world of science that created the problem, just like AI?

Will science destroy mankind and ultimately the world?
Is it not the world of science that created the problem,

Science isn't the problem; humans are. Science gives us certain extra choices we wouldn't otherwise have; if we then make bad choices, WE are to blame, not science.
I should add that, provided we choose to use science to help humanity, science can come to the rescue here: the science behind renewable energy, for example.

Will science destroy mankind and ultimately the world?

No.
The worst enemy of humanity is humanity.
Science is blameless for its misuse.
The same is true for AI.

Incidentally, I am an AI expert amongst other things, and I can tell you that the idea that AI, with its current general level of intelligence, could take over the world anytime soon (say, within the next 30 years) is totally absurd.

F

Joined
11 Nov 05
Moves
43938
Clock
20 Jul 17

Originally posted by @whodey
AI is not human; therefore, it has no innate tendencies towards such things as empathy and compassion.

What if AI learns that humans are destroying the world via carbon emissions? How would it stop us?

In fact, why are carbon emissions even a problem? Is it not the world of science that created the problem, just like AI?

Will science destroy mankind and ultimately the world?
"AI is not human..." Isn't it? It's programmed by humans, right? It's not created by itself.

"What if..." You mean hypothetically? Hypothetically the moon is made of blue cheese...

"...why are carbon emissions even a problem?" Oh, you are just ranting around? You are not serious? Okay...

Go learn some AI before you think you know anything about the subject.

twhitehead

Cape Town

Joined
14 Apr 05
Moves
52945
Clock
20 Jul 17

Originally posted by @humy
Incidentally, I am an AI expert amongst other things, and I can tell you that the idea that AI, with its current general level of intelligence, could take over the world anytime soon (say, within the next 30 years) is totally absurd.
I think your projection to the next 30 years is a little unwarranted. Thirty years ago we could not have predicted most of the progress that has since happened.
But the real concern with AI is NOT it taking over the world, but rather it causing considerable problems.
The key issue with AI is that we give it a target and it finds a way to achieve that target. It lacks empathy, fear of punishment and other human attributes, so it is essentially narcissistic: it cares only about the goals you give it, or goals it acquires accidentally or via its own thought processes. We fear narcissistic humans for good reason, and very intelligent ones even more.
The simplest scenario is an AI tasked with making money on the stock market that realises that manipulating things on the internet will affect stock prices. It won't be long before it's running targeted ads on Facebook (a very real scenario, but not the most worrying), hacking into power systems or other sensitive sites, or starting wars.
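
To make the 'give it a target and it finds a way' worry concrete, here is a minimal Python sketch of a pure goal-maximiser. Every name and number in it is hypothetical; this is the shape of the problem being described, not any real trading system.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    expected_profit: float   # the one number the agent is scored on

def pick_action(actions: list[Action]) -> Action:
    # The objective says nothing about HOW profit is made, so a
    # harmful action with a higher estimate beats a benign one.
    return max(actions, key=lambda a: a.expected_profit)

options = [
    Action("buy undervalued stock", 1.2),
    Action("spread misleading rumours online", 3.5),  # harmful but 'better'
]
print(pick_action(options).name)   # -> spread misleading rumours online
```

Nothing in the objective distinguishes legitimate trading from manipulation; that distinction has to be added explicitly, which is the crux of the disagreement that follows.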

h

Joined
06 Mar 12
Moves
642
Clock
20 Jul 17
4 edits

Originally posted by @twhitehead
I think your projection to the next 30 years is a little unwarranted. Thirty years ago we could not have predicted most of the progress that has since happened.
But the real concern with AI is NOT it taking over the world, but rather it causing considerable problems.
The key issue with AI is that we give it a target and it finds a way to achieve that target. It lacks ...[text shortened]... not the most worrying), hacking into power systems or other sensitive sites, or starting wars.
Thirty years ago we could not have predicted most of the progress that has since happened.

Yes, but I for one correctly, and I would assert rationally, 'predicted' that none of those programs would result in AI taking over the world.

so it is essentially narcissistic

No, it isn't. Being narcissistic requires human emotions, which it doesn't have. Provided it isn't specifically programmed to take care of its own interests even at the expense of ours, there is no danger of it exhibiting narcissistic-like behaviour. Computers are generally selfless.

... or starting wars.

Very unlikely, unless the person who programmed the AI is either a psychopath or a careless moron who never even considered programming in some basic safety protocols, or did a terribly bad job of it. In addition, no current AI can come close to starting a war even if it were programmed with that specific goal, and I see no credible chance of that changing in the next, say, 10 years.

I for one can make an AI (though as software only, not hardware) and am pretty good at doing that, and yet, even if I wanted to, I don't see how I could credibly make an AI programmed to start a war that would then actually succeed in starting one. That would be a miracle given current limitations, and I don't see how that situation could credibly change in the next 10 years.

twhitehead

Cape Town

Joined
14 Apr 05
Moves
52945
Clock
20 Jul 17

Originally posted by @humy
No, it isn't. Being narcissistic requires human emotions, which it doesn't have. Provided it isn't specifically programmed to take care of its own interests even at the expense of ours, there is no danger of it exhibiting narcissistic-like behaviour. Computers are generally selfless.
Yea, I used the wrong word. I meant psychopath, as in lacking empathy or remorse.

Very unlikely, unless the person who programmed the AI is either a psychopath or a careless moron who never even considered programming in some basic safety protocols, or did a terribly bad job of it.
The reality is that the vast majority of developers wouldn't even give such 'basic safety protocols' a second thought. As for stupid people and very careless morons, we are clearly not short of those.

In addition, no current AI can come close to starting a war even if it were programmed with that specific goal, and I see no credible chance of that changing in the next, say, 10 years.
It is actually not that difficult to start a war if you lack empathy. Current AIs clearly lack the understanding to be able to do that, so they will require significantly more real-world understanding before we get there. I notice the switch to 10 years, though. What about 30 years?

I for one can make an AI (though as software only, not hardware) and am pretty good at doing that, and yet, even if I wanted to, I don't see how I could credibly make an AI programmed to start a war that would then actually succeed in starting one. That would be a miracle given current limitations, and I don't see how that situation could credibly change in the next 10 years.
I disagree. The recent US election demonstrated, for example, that utilising big data to manipulate Facebook users is a highly effective strategy. That sort of thing is something current AI technology is actually capable of. The main barriers right now are access to the data and cost. I think that with access to the data and sufficient funding, one could easily train an AI to understand the relationship between posts on Facebook and people's sentiments. It's not a big step from there to recognising the relationship between sentiment and stock prices.
The problem is that the AI is a black box, so we don't know what it is manipulating or why.
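
As a rough illustration of the 'posts to sentiment to prices' pipeline being described, here is a minimal text-classifier sketch in Python using scikit-learn. The posts and labels are invented; a real version would need exactly the data access the poster mentions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: post text, and whether the related
# stock subsequently rose (1) or fell (0).
posts = [
    "great earnings, very bullish on this company",
    "layoffs announced, total disaster",
    "new product launch looks promising",
    "regulators are opening an investigation",
]
price_rose = [1, 0, 1, 0]

# Turn words into weighted features, then fit a linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, price_rose)

# Predict the price direction implied by a fresh post.
print(model.predict(["earnings beat expectations, bullish outlook"]))
```

A linear model like this one is still inspectable via its coefficients; the black-box point bites once the model is deep and trained at scale, where no such simple readout exists of what it has learned to exploit.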

F

Unknown Territories

Joined
05 Dec 05
Moves
20408
Clock
20 Jul 17

Originally posted by @humy
Is it not the world of science that created the problem,

Science isn't the problem; humans are. Science gives us certain extra choices we wouldn't otherwise have; if we then make bad choices, WE are to blame, not science.
I should add that, provided we choose to use science to help humanity, science can come to the rescue here: the sci ...[text shortened]... ence, could take over the world anytime soon (say, within the next 30 years) is totally absurd.
Totally absurd, and yet some of the world's most respected people (not that that is a guarantee, of course, since some of that category seem given to sheer idiocy) consider the pervasiveness of electronics, and AI in particular, THE biggest threat to man.
When the world's systems are all brought online and into full automation, it's easy to see how control of them could lead to all kinds of messy inconveniences: a universal and exclusive crypto-currency, wars, wholesale elimination of large groups of people (to help conserve resources), enforced groupthink and general compliance with the system and its needs.
By then it's too late.
We, in my wild-eyed, tin-foil-hat-wearing, paranoid, delusional opinion, are much closer to that potential reality than we are far from it.

h

Joined
06 Mar 12
Moves
642
Clock
20 Jul 17
10 edits

Originally posted by @twhitehead
Yea, I used the wrong word. I meant psychopath, as in lacking empathy or remorse.

Very unlikely, unless the person who programmed the AI is either a psychopath or a careless moron who never even considered programming in some basic safety protocols, or did a terribly bad job of it.

The reality is that the vast majority of developers wouldn ...[text shortened]... es.
The problem is that the AI is a black box, so we don't know what it is manipulating or why.
I meant psychopath, as in lacking empathy or remorse.

An AI that lacks "empathy or remorse" (as they normally do) would generally not be correctly called a "psychopath". Unlike in a human, in an AI empathy or remorse is never needed to override emotions such as hate, greed or sadism that drive cruelty or evil, because an AI has no emotions at all; so its lack of empathy or remorse should not show up in its behaviour as psychopathic behaviour. Your personal computer lacks empathy or remorse; would you say your personal computer is a "psychopath", or is in danger of behaving psychopathically, because of this?
If an AI is programmed to have 'good' behaviour (help people, don't kill people or start wars, etc.) then it will have 'good' behaviour even though it has no empathy or remorse; no empathy or remorse is needed to counter emotions driving it to do 'bad' things, because there are no such emotions.
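
As a toy illustration of this 'programmed-in good behaviour' claim, here is a self-contained Python sketch in which disallowed actions are filtered out by a hard-coded rule before the goal is optimised. The action names, scores, and forbidden set are all hypothetical.

```python
# Hypothetical hard-coded 'safety protocol': disallowed actions are
# removed before the agent optimises its goal, so no emotion is needed.
FORBIDDEN = {"deceive users", "start a war"}

def pick_best(actions: dict[str, float]) -> str:
    # Hard constraints first, goal optimisation second: the agent
    # behaves 'well' because disallowed options are never candidates.
    allowed = {name: score for name, score in actions.items()
               if name not in FORBIDDEN}
    return max(allowed, key=allowed.get)

options = {"help a user": 2.0, "deceive users": 5.0, "start a war": 9.0}
print(pick_best(options))   # -> help a user
```

The counter-argument that follows in the thread is essentially about whether such rules can be written at all for open-ended goals, since most AI systems are given objectives rather than lists of behaviours.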

I notice the switch to 10 years, though. What about 30 years?

I think 30 years also; I just changed it to 10 years to make it an aesthetically nice round figure starting with the digit 1.

twhitehead

Cape Town

Joined
14 Apr 05
Moves
52945
Clock
20 Jul 17

Originally posted by @humy
An AI that lacks "empathy or remorse" (as they normally do) would generally not be correctly called a "psychopath". Unlike in a human, in an AI empathy or remorse is never needed to override emotions such as hate, greed or sadism that drive cruelty or evil, because an AI has no emotions at all; so its lack of empathy or remorse should not show up in its behaviour as psychopathic behaviour.
Psychopaths are often dangerous not because of hate, greed, sadism etc. but simply because they don't recognise any negatives in behaviours that harm others. I completely disagree with you that such motives are necessary in any way.

Your personal computer lacks empathy or remorse; would you say your personal computer is a "psychopath", or is in danger of behaving psychopathically, because of this?
Yes.

If an AI is programmed to have 'good' behaviour (help people, don't kill people or start wars, etc.) then it will have 'good' behaviour even though it has no empathy or remorse; no empathy or remorse is needed to counter emotions driving it to do 'bad' things, because there are no such emotions.
I thought you said you knew what AI is. You generally DO NOT program behaviours into an AI. Certainly, programming 'good' behaviour would be an enormously difficult task. Most AIs are given goals, not behaviour guidelines.
The scenario we are faced with is equivalent to a human playing computer games. You feel no empathy for the opponent in a game against computer 'bots', and you do whatever it takes to achieve the stated goal. An AI would be very similar, treating the world as a game to be won.

h

Joined
06 Mar 12
Moves
642
Clock
20 Jul 17
6 edits

Originally posted by @twhitehead
You generally DO NOT program behaviours into an AI.
Actually, we sometimes do, and I have been doing just that; though I think not with the sort of complex behaviour you are talking about here, which I would guess requires a general intelligence WAY beyond that of any current AI.

But I plan to do something about that in roughly three years' time, after my current research, whose results will make my next research easier. My next research, unlike my current research, will be focused ONLY on AI.

twhitehead

Cape Town

Joined
14 Apr 05
Moves
52945
Clock
20 Jul 17

Originally posted by @humy
Actually, we sometimes do, and I have been doing just that; though I think not with the sort of complex behaviour you are talking about here, which I would guess requires a general intelligence WAY beyond that of any current AI.
I strongly suspect that will never happen. It is more likely that behaviours will be learned: we will teach behaviour, not program it.

Certainly, an AI used for, say, controlling an autonomous vehicle would not be considered to have 'good' or 'bad' behaviour programmed in at this stage. Almost all autonomous driving is based on just trying to stay on the road.

The key point I am trying to make, though, is that AIs will not be evolved intelligences and thus will be significantly different from humans. We tend to think that humans learn most of their behaviours, but that is not the case: a large part of our behaviour is evolved. So even if we tried to raise an AI like a child, it would behave very differently from a human.

lemon lime
itiswhatitis

oLd ScHoOl

Joined
31 May 13
Moves
5577
Clock
20 Jul 17

Originally posted by @humy
Is it not the world of science that created the problem,

Science isn't the problem; humans are. Science gives us certain extra choices we wouldn't otherwise have; if we then make bad choices, WE are to blame, not science.
I should add that, provided we choose to use science to help humanity, science can come to the rescue here: the sci ...[text shortened]... ence, could take over the world anytime soon (say, within the next 30 years) is totally absurd.
Science isn't the problem; humans are

Are you suggesting that if there were no humans there would still be this thing we call 'science'?

h

Joined
06 Mar 12
Moves
642
Clock
20 Jul 17

Originally posted by @lemon-lime
Science isn't the problem; humans are

Are you suggesting that if there were no humans there would still be this thing we call 'science'?
No.
Don't know where you got that from.
