The post that was quoted here has been removed

Yes, the same for equipment in Tel Aviv. My best friend, Ray Scudero, singer-songwriter extraordinaire and luthier, wrote 600 songs, one of which I posted in Culture. Anyway, one summer my wife and kids took a vacation back in the States and we (Ray and I) had our flat to ourselves for a month. In Tel Aviv there are these small electronics shops, and they had bags of piezo elements for sale really cheap, so we bought about 50 of them. Back in the flat we experimented with them as acoustic pickups for mandolins, guitars and dulcimers. For some instruments, like mandolin, the round quarter-sized disc was not flexible enough, so I experimented with a pair of scissors and found that if you cut one in half, one half shatters but the other half remains fully functional as a transducer.
That leaves a half-disc shape; turn it around and cut again, and the scrap side shatters, but you are left with a strip, a rectangle about 3 mm wide and 15-odd mm long.
It turned out to be a great pickup, and it is still glued to the bridge of my 108-year-old Gibson mandolin. Ray has some pro mics, and we did A/B tests, switching between mic and pickup, and the pickup sounded livelier than the pro mic! I think it cost us about 4 cents. Not bad for an experiment.
I later used those piezos in a science demo at the local school our kids attended.
I used them as microphones. I glued one to the bottom of a Burger King cup, the styrofoam kind, which was very resonant, and soldered on RG-174 coax cable (the thinnest coax you can buy). I hooked the cable directly to an oscilloscope, set up behind a large lens that threw the image of the scope screen onto a 6-foot projector screen, and had one of the kids hold the cup against his chest. You could see his heartbeat quite clearly on the blown-up oscilloscope screen, which amazed everyone in the room, over 100 kids and 5 teachers. That was a fun day. I built a number of devices using things around the house to show them they can get interesting results with just plastic plates, plastic tubes and such. The kids wrote out 100 thank-you notes for that demo.
Anyway, lots of small shops like that in Tel Aviv.
Also Tokyo: I was amazed at all the small shops selling all kinds of equipment. Couldn't carry an ounce with me though, being on a brief stopover from a military flight coming back from Thailand.
Originally posted by moonbus

So the first game should be today, March 8.
http://www.bbc.com/news/technology-35746909
The next round: Google vs. the world's champion at go.
At the very least we now know computers have advanced to at least the lower levels of professional Go.
I would like to see a match between the computer and a 3rd dan player. The Euro champ was 2nd dan.
Originally posted by sonhouse

So the program seems to be at least 9 dan pro now. It beat the sitting world champion Go player in the very first game.
So the first game should be today, March 8.
At the very least we now know computers have advanced to at least the lower levels of professional Go.
I would like to see a match between the computer and a 3rd dan player. The Euro champ was 2nd dan.
The commentary at the end mentioned it made moves no human would think of.
Sound familiar?
So ends the last bastion of human intelligence.
The post that was quoted here has been removed

I guess then the 9 dan level has sublevels: 9.1, 9.5, etc. Sounds like the neural network has a 10 dan rating then.
I don't think the situation is over, though. But the Go world will probably lose AlphaGo as an instrument for improving human play, just as the chess world lost Deep Blue, the computer that beat Kasparov.
AlphaGo could really advance the art of Go if it were allowed to be used as such, but they have bigger fish to fry, so I think the Go community is going to have to come up with something like that on their own. The brute force method that works well against chess players is probably a decade or more away from working against the top levels of Go.
One question in my mind: now that AlphaGo has beaten Lee twice, will Lee and company learn from the games played in time for the next game, or will it take a long time to understand the implications of AlphaGo's higher-level moves? I hope they get more than just these 5 games to study.
Originally posted by twhitehead

Not for some kind of general solution like what they have done with checkers (I think they have checkers totally figured out), but the specific problem of a program achieving a 9 dan rating is not as difficult, which is not to say it is by any means trivial. Clearly neural nets are way ahead of brute force anyway.
A pure brute force method will never be viable. It is simply impossible to calculate all the possibilities.
My question is, will Google abandon the Go project now that they are winning against the strongest Go player on Earth? My guess is yes, but it would be a boon to the Go world as a teacher of how to REALLY play the game, so the next generation of players will be ten times stronger than players like Lee.
Originally posted by sonhouse

I dispute this.
Not for some kind of general solution like what they have done with checkers (I think they have checkers totally figured out), but the specific problem of a program achieving a 9 dan rating is not as difficult, which is not to say it is by any means trivial. Clearly neural nets are way ahead of brute force anyway.
My question is, will Google abandon the ...[text shortened]... lay the game so the next generation of players will be ten times stronger than players like Lee,
Go is unlikely to gain [anywhere near] much in terms of the performance of the top players by having an AI 'teacher', as the game relies almost entirely on long-accumulated experience.

You might be stretched more by playing against an AI that ups its game to be as strong as you [or just stronger] as you learn, but this is unlikely to garner much improvement over today's top players playing each other.
There are far, far, far too many possibilities to try to study and memorise, and until we start upgrading people's brains directly with AI we cannot program people with improved technique.
In the same way, having AI that can beat people at chess hasn't made top human chess players noticeably better. The AI simply surpasses us in ability and then continues on, leaving us in its wake.
At some point it's likely to happen with a general-purpose AI [AGI: Artificial General Intelligence], which will likely mark the point at which we either gain a set of benevolent gods looking over us, or we go extinct [or worse].
For us to gain significantly improved performance from an AI teacher, we would need to be well short of maximum human potential at that task. For a tenfold improvement, the best players today would have to be at most 10% of the way to an unmodified human's maximum theoretical ability at the game.
This is highly unlikely for such an ancient and prevalent game as Go.
What Google is likely to do is the same thing IBM did: move on and apply the things they learned solving Go [chess, in IBM's case] to more important problems.
IBM moved on to Watson, which [having obliterated humans at Jeopardy] is now starting to be used for its primary purpose of diagnosing human illnesses [among other tasks, such as the recently announced use as a robotic concierge at a US Hilton].
IBM's aim with Watson is to reduce the huge number of people who die or suffer from misdiagnosis and conflicting prescriptions, because human beings cannot analyse a medical history or know every possible interaction of every possible combination of drugs with every possible combination of genetic and lifestyle factors. It also aims to bring such abilities to anywhere people have access to the internet. Such mistakes kill hundreds of thousands of people annually in the USA, and tens of millions worldwide [at least].
So this work seems rather more important than beating humans at chess, or Jeopardy, or Go, even if those were the problems that inspired the problem-solving abilities in the first place.
Originally posted by sonhouse

The game of Go has so many possible moves at each step that brute force will always be only part of a good program. Much more important is how to analyse a given position's strength. Chess is such that brute force has a bigger impact and evaluating a given position is easier.
Not for some kind of general solution like what they have done with checkers (I think they have checkers totally figured out), but the specific problem of a program achieving a 9 dan rating is not as difficult, which is not to say it is by any means trivial. Clearly neural nets are way ahead of brute force anyway.
Checkers can be solved with brute force alone.
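To put rough numbers on that comparison, here is a quick back-of-the-envelope sketch in Python. The branching factors and game lengths are the commonly quoted averages, not exact figures; the point is only the order of magnitude.

```python
# Rough game-tree sizes from commonly quoted averages:
# b = typical legal moves per turn, d = typical game length in moves.
# The full tree of lines of play is on the order of b**d.
from math import log10

games = {
    "checkers": (8, 70),    # its state space (~5e20 positions) was small
                            # enough to be weakly solved by Chinook in 2007
    "chess":    (35, 80),
    "go":       (250, 150),
}

for name, (b, d) in games.items():
    print(f"{name:>8}: roughly 10^{d * log10(b):.0f} lines of play")
```

That works out to roughly 10^63 for checkers, 10^124 for chess and 10^360 for Go, against something like 10^80 atoms in the observable universe. That is why pure enumeration is a non-starter for Go and a position evaluator has to carry most of the load.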
Originally posted by googlefudge

I was thinking the AI neural net might be finding positions that can be useful to understand, which is why I say it would be a good teacher for the top players, revealing strategies that hadn't been thought up by humans yet.
I dispute this.
Go is unlikely to gain [anywhere near] much in terms of the performance of the top players by having an AI 'teacher', as the game relies almost entirely on long-accumulated experience. You might be stretched more by playing against an AI that ups its game to be as strong as you
[or just stronger] as you learn but this is unlikely to ga ...[text shortened]... , even if that
was the problem that inspired the problem solving abilities in the first place.
Originally posted by sonhouse

I don't think that would be particularly helpful in improving people's game.
I was thinking the AI neural net might be finding positions that can be useful to understand, which is why I say it would be a good teacher for the top players, revealing strategies that hadn't been thought up by humans yet.
But more importantly, that isn't how the AI works.
What you are asking for here is a search through the entire possibility space to find
useful teachable positions. Ignoring for the moment the problem of specifying what
a useful teachable position is to a computer, that search is in so vast a possibility
space that there is not enough computing power in the world to make any meaningful
dent in it even if such positions existed.
What the AI is doing is using strategies to determine the next best move in the current
game. And I am almost certain that it's doing so in ways that an unaugmented human
cannot possibly emulate.
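For what it's worth, the published description of AlphaGo is a deep policy network (suggesting promising moves) and a value network (estimating who is winning) driving a Monte Carlo tree search, rather than any enumeration of the full game tree. Below is a deliberately toy sketch of just the "learned move preference" half of that idea; the board encoding and the scoring function are made-up stand-ins, not anything from DeepMind's code.

```python
import math
import random

# Stand-in "policy network": the real thing is a deep convolutional net
# trained on expert games and self-play; this toy just prefers central
# points so the sketch runs end to end.
def toy_policy_score(board, move):
    row, col = move
    center = (len(board) - 1) / 2
    return -((row - center) ** 2 + (col - center) ** 2)

def choose_move(board, legal_moves, score_fn=toy_policy_score, temperature=1.0):
    """Sample a move from the policy's probability distribution.

    Nothing here enumerates future positions; whatever 'knowledge' the
    player has lives inside the learned scoring function, which is part
    of why its choices are hard to translate back into teachable rules.
    """
    scores = [score_fn(board, m) for m in legal_moves]
    # Softmax over the scores gives a probability for each candidate move.
    exps = [math.exp(s / temperature) for s in scores]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(legal_moves, weights=probs, k=1)[0]

# Usage: pick an opening move on an empty 9x9 board (every point legal).
board = [[0] * 9 for _ in range(9)]
legal = [(r, c) for r in range(9) for c in range(9)]
print(choose_move(board, legal))
```

In the actual system those sampled preferences then seed a tree search guided by the separate value network; the toy above only illustrates why the "strategy" isn't stored anywhere a human could simply read it out as a list of teachable positions.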