Computing power?

R (Standard member, Removed) | Joined 10 Dec 06 | Moves 8528 | 10 Feb 16

Originally posted by twhitehead
Ultimately a processor contains flowing electrons. So the absolute limits are the speed of electricity (which is a little slower than the speed of light), and the frequency with which pulses (changes in the voltage) can occur and still work a transistor.
Making faster transistors has been one of the key developments towards faster computers as has making ...[text shortened]... phene for the wires as that has lower resistance than current wires and thus produces less heat.
Thanks for the info. What I seem to have trouble conveying is that I am wondering whether anyone has abstracted the components of a computer in such a way that we might study their various arrangements, types, etc., and the consequent effects on performance in a mathematical/pseudo-physical model, treating the component in question as a macro entity for analysis rather than strictly as the culmination of its microscopic elements. Example: the way classical mechanics assumes a continuum and ignores the individual atoms in a body, even though that is not strictly true; it helps tremendously in understanding systems of interactions for organization, optimization, and application. Has anyone begun a macro-scale abstraction of computer systems in this manner?
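A minimal sketch of the kind of macro-level abstraction being asked about, treating CPU, memory, and storage as black boxes with only a latency and a throughput (all numbers made up for illustration; this is not something from the thread):

```python
# Toy macro-level model: each component is a black box with a fixed latency
# and a throughput, and a workload is pushed through the chain. All numbers
# are made up for illustration.
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    latency_s: float         # one-off time cost of touching this component
    throughput_per_s: float  # work units it can process per second

def time_to_process(components, work_units):
    """Crude estimate: pay every component's latency once, then run at the
    speed of the slowest component in the chain."""
    startup = sum(c.latency_s for c in components)
    bottleneck = min(c.throughput_per_s for c in components)
    return startup + work_units / bottleneck

cpu = Component("CPU", latency_s=1e-9, throughput_per_s=1e9)
ram = Component("RAM", latency_s=1e-7, throughput_per_s=1e8)
ssd = Component("SSD", latency_s=1e-4, throughput_per_s=1e5)

print(time_to_process([cpu, ram], 1_000_000))       # memory-bound: ~0.01 s
print(time_to_process([cpu, ram, ssd], 1_000_000))  # storage-bound: ~10 s
```

The point is the same one made with the continuum analogy: the microscopic details are replaced by a handful of bulk parameters that are good enough for reasoning about arrangement and optimization.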

twhitehead | Cape Town | Joined 14 Apr 05 | Moves 52945 | 10 Feb 16 (1 edit)

Originally posted by joe shmo
Has anyone begun a macro scale abstraction of computer systems in this manner?
Yes. Processor designers abstract at many different levels, including dealing with things like heat flow and timing signals.
They have to when they are dealing with billions of transistors.
However, this is largely kept in-house by processor developers and, as far as I know, is not much of a topic of university study, partly because it is such a specialized field, partly because a lot of it is proprietary, and partly because it is forever changing.

Having said that, I could be wrong. There are courses on processor design on the web such as this one:
https://www.youtube.com/playlist?list=PL34856A71E943207F

I also recommend watching this:
https://www.youtube.com/watch?v=Jyp6jFCzW44

D (Losing the Thread) | Quarantined World | Joined 27 Oct 04 | Moves 87415 | 10 Feb 16

Originally posted by joe shmo
Thanks for the info. I just want to say (what I seem to have trouble conveying) that I am wondering if anyone has abstracted the components of the computer in such a way that we might begin to study their various arrangements, types, etc... and the consequential effects on performance in a mathematical/pseudo-physical model kind of a way ( treating the com ...[text shortened]... and application. Has anyone begun a macro scale abstraction of computer systems in this manner?
I haven't read this article, and it's more abstract than the level you seem to be talking about, but it might be of interest to you.

http://plato.stanford.edu/entries/computation-physicalsystems/

R (Standard member, Removed) | Joined 10 Dec 06 | Moves 8528 | 11 Feb 16

Originally posted by twhitehead
Yes. Processor designers abstract at many different levels, including dealing with things like heat flow and timing signals.
They have to when they are dealing with billions of transistors.
However this is largely kept inhouse by processor developers and as far as I know not so much a topic of university study. This is partly because it is such a specia ...[text shortened]... 34856A71E943207F

I also recommend watching this:
https://www.youtube.com/watch?v=Jyp6jFCzW44
Interesting stuff, thanks for the videos. I haven't watched any of the lectures yet, but when time permits and my interest spikes I'll give them a go.

R (Standard member, Removed) | Joined 10 Dec 06 | Moves 8528 | 11 Feb 16

Originally posted by DeepThought
I haven't read this article and it's more abstract than the level you seem to be talking about, but might be of interest to you.

http://plato.stanford.edu/entries/computation-physicalsystems/
Yeah, not exactly what I was searching for, but I really enjoy the philosophical side of these types of things; in fact, probably more than the actual thing itself. Thanks again!

twhitehead | Cape Town | Joined 14 Apr 05 | Moves 52945 | 11 Feb 16

Originally posted by joe shmo
Interesting stuff, thanks for the videos. I haven't watched any of the lectures yet, but when time permits and my interest spikes i'll give them a go.
I don't think the lectures cover what you asked about. They appear to be more of an introduction to the architecture side of chip design. What you asked about would either be a more advanced course or something you would learn on the job at a chip design company. There are so few companies designing CPUs large enough for this type of analysis that there is not enough of a market for courses at that level, and much of it will be trade secrets anyway.

But the second video did explain that they abstract at a whole range of levels, testing and simulating the logic, the heat flow, and the electrical behaviour at each level of abstraction.
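To give a flavour of what simulating at the logic level of abstraction means (my own toy example, not taken from the lectures or the video): gates can be modelled as pure Boolean functions, with voltages, timing, and heat ignored entirely.

```python
# Toy logic-level simulation: gates are pure Boolean functions, so voltages,
# timing and heat are abstracted away completely. My own illustration.
def nand(a, b):
    return not (a and b)

def xor(a, b):
    # XOR built only from NAND gates (NAND is functionally complete)
    n1 = nand(a, b)
    return nand(nand(a, n1), nand(b, n1))

def half_adder(a, b):
    """Return (sum, carry) for two one-bit inputs."""
    return xor(a, b), a and b

for a in (False, True):
    for b in (False, True):
        s, c = half_adder(a, b)
        print(f"{int(a)} + {int(b)} -> sum {int(s)}, carry {int(c)}")
```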

googlefudge | Joined 31 May 06 | Moves 1795 | 11 Feb 16

This is an interesting and on-point article about the death of Moore's law and the problems facing chip designers.

http://arstechnica.co.uk/information-technology/2016/02/moores-law-really-is-dead-this-time/
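For context, the "law" being declared dead is the observation that transistor counts double roughly every two years; a quick back-of-the-envelope projection (my own round numbers, not figures from the article) shows how fast that compounds:

```python
# Back-of-the-envelope Moore's law projection: transistor count doubling
# every two years, starting from the Intel 4004 (~2,300 transistors, 1971).
# Round numbers only; these are not figures from the article.
start_year, start_count, period = 1971, 2_300, 2

for year in (1981, 1991, 2001, 2011, 2016):
    doublings = (year - start_year) / period
    print(f"{year}: ~{start_count * 2 ** doublings:,.0f} transistors")
```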

moonbus, Über-Nerd (emeritus) | Joined 31 May 12 | Moves 8711 | 11 Feb 16 (1 edit)

Originally posted by twhitehead
Ultimately a processor contains flowing electrons. So the absolute limits are the speed of electricity (which is a little slower than the speed of light), and the frequency with which pulses (changes in the voltage) can occur and still work a transistor.
Making faster transistors has been one of the key developments towards faster computers as has making ...[text shortened]... phene for the wires as that has lower resistance than current wires and thus produces less heat.
The Next Big Thing is going to be photon processing rather than electron processing. There are already photonic encryption links, where encryption takes place at the level of single photons. The practical application draws straight from Heisenberg's principle that observing a photon changes it: if any third party merely measures, duplicates, or records the photon along its network path, the other end will detect a change and know that the signal has been read by an unauthorized party.
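The detection mechanism described here can be illustrated with the standard intercept-resend argument used for BB84-style links (my own sketch, not from the post): an eavesdropper who measures each photon in a randomly guessed basis corrupts roughly one compared bit in four, so the chance of going unnoticed shrinks exponentially.

```python
# Intercept-resend illustration for a BB84-style photonic link: an
# eavesdropper measuring each photon in a randomly guessed basis causes an
# error in about 25% of the bits the endpoints later compare. My own sketch.
def undetected_probability(compared_bits, error_per_bit=0.25):
    """Chance that none of the compared bits shows an error."""
    return (1 - error_per_bit) ** compared_bits

for n in (10, 50, 100):
    print(f"{n} compared bits: P(eavesdropper unnoticed) ≈ "
          f"{undetected_probability(n):.2e}")
```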

twhitehead | Cape Town | Joined 14 Apr 05 | Moves 52945 | 11 Feb 16

Originally posted by moonbus
The Next Big Thing is going to be photon processing, rather than electron processing.
I doubt that it will have any significant impact on overall computing in the near future (20 years or so). I can't see how using photons will significantly speed up or shrink computer components.

The article googlefudge linked mentions new materials that could dramatically reduce power consumption and heat production, which could pave the way for 3D circuitry. Currently the main reason for not going 3D is that there is simply no way to get rid of the heat.
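A rough illustration of why power and heat dominate this discussion (my own made-up numbers, not from the article): dynamic power in CMOS logic scales roughly with switched capacitance times voltage squared times clock frequency, so modest voltage reductions cut heat disproportionately.

```python
# Rough CMOS dynamic-power estimate: P ≈ C * V^2 * f.
# The capacitance and frequency below are made-up, illustrative values.
def dynamic_power_w(switched_capacitance_f, voltage_v, frequency_hz):
    return switched_capacitance_f * voltage_v ** 2 * frequency_hz

c_switched = 1e-9   # farads switched per cycle (illustrative)
f_clock = 3e9       # 3 GHz

for v in (1.2, 1.0, 0.8):
    p = dynamic_power_w(c_switched, v, f_clock)
    print(f"V = {v} V -> P ≈ {p:.1f} W")
```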

h | Joined 06 Mar 12 | Moves 642 | 11 Feb 16 (5 edits)

Originally posted by moonbus
The Next Big Thing is going to be photon processing, ....
Very unlikely. Electronic signals can already travel through the wires at close to the speed of light, but that isn't really the main factor limiting the speed of microprocessors; the limit is the frequency at which the signals can be switched. And then there is the huge power consumption concentrated in one tiny microprocessor, which creates a big problem with heat dissipation, not to mention the environmentally damaging and expensive energy costs. If you want to know what the next big thing in computers is, try spintronics:

https://en.wikipedia.org/wiki/Spintronics

When the first practical fully spintronic microprocessors are finally developed, they should run at much greater frequencies, and thus speeds, yet use virtually no power at all! It has huge potential.

...and then perhaps the next big thing after that may be quantum computers that will use quantum effects to make just a few hundred particles do, in effect, more calculations per second than there are atoms in the known universe.

https://en.wikipedia.org/wiki/Quantum_computing

but so far practical quantum computers are proving extremely tricky to develop and are probably many years away.
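The "more calculations than there are atoms" remark above can be made concrete with a quick bit of arithmetic (my own, not from the post): a register of n qubits spans 2^n basis states, and 2^300 already dwarfs the commonly quoted ~10^80 atoms in the observable universe.

```python
# Quick arithmetic behind the "few hundred particles" remark: n qubits span
# 2**n basis states; ~10**80 is the commonly quoted atom count for the
# observable universe.
import math

for n_qubits in (100, 200, 300):
    exponent = n_qubits * math.log10(2)
    print(f"{n_qubits} qubits: 2^{n_qubits} ≈ 10^{exponent:.0f} "
          f"(atoms in observable universe ≈ 10^80)")
```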

s (Fast and Curious) | slatington, pa, usa | Joined 28 Dec 04 | Moves 53321 | 06 Mar 16

Originally posted by humy
very unlikely. Electronic signals can already go through the wires at close to the speed of light; but that isn't really the main factor that is limiting speed of microprocessors but rather the limitations of digital frequency of the signals. And then there is the huge power consumption concentrated in one tiny microprocessor that creates a big problem with hea ...[text shortened]... quantum computers are proving extremely tricky to developed and are probability many years away.
What kills speed in electronics is capacitance and inductance. They are the elephant in the speed room.
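A rough illustration of the capacitance point (made-up round numbers, my own sketch): a node driven through resistance R into capacitance C behaves like an RC low-pass filter, with a delay of roughly ln(2) * R * C to reach half the voltage swing.

```python
# RC-delay illustration of how capacitance limits switching speed: the time
# for a node driven through resistance R into capacitance C to reach half
# the voltage swing is about ln(2) * R * C. R and C are made-up round values.
import math

def rc_delay_s(r_ohm, c_farad):
    return math.log(2) * r_ohm * c_farad

for r_ohm, c_farad in [(100, 1e-15), (1_000, 10e-15), (10_000, 100e-15)]:
    d = rc_delay_s(r_ohm, c_farad)
    print(f"R = {r_ohm} ohm, C = {c_farad:.0e} F -> "
          f"delay ≈ {d * 1e12:.2f} ps")
```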

Spintronics doesn't use electric current as such, just the alignment of electron spins, which is pretty cool, literally 🙂

Another road to out-of-this-world speed is the quest for all-photonic circuitry.

Use photons instead of electrons: a LOT faster, and cooler too.

Work has been going on for years to make an all-photonic transistor.

Here is a brief wiki article about the optical transistor:

https://en.wikipedia.org/wiki/Optical_transistor

All this work is still in the realm of classical computation though: AND gates, OR gates, XOR gates, NAND gates and such.

If quantum computers get built they will be faster still than even photonic computers.
