YouTube: https://youtube.com/watch?v=1qQE5Xwe7fs
Previous: Facebook's Secret Psychological Experiment
Next: Graphene: The Next Big (But Thin) Thing

Statistics

View count:1,177,049
Likes:30,818
Comments:2,695
Duration:09:19
Uploaded:2014-07-13
Last sync:2024-11-15 22:00

Citation

Citation formatting is not guaranteed to be accurate.
MLA Full: "The Secret World of Ones and Zeroes: Moore's Law Explained." YouTube, uploaded by SciShow, 13 July 2014, www.youtube.com/watch?v=1qQE5Xwe7fs.
MLA Inline: (SciShow, 2014)
APA Full: SciShow. (2014, July 13). The Secret World of Ones and Zeroes: Moore's Law Explained [Video]. YouTube. https://youtube.com/watch?v=1qQE5Xwe7fs
APA Inline: (SciShow, 2014)
Chicago Full: SciShow. "The Secret World of Ones and Zeroes: Moore's Law Explained." July 13, 2014. YouTube video, 09:19. https://youtube.com/watch?v=1qQE5Xwe7fs.
Everything that's ever been made or used on a computer comes from transistors and circuits. Join Hank Green for a fascinating new episode of SciShow where we'll dive into the nuts and bolts of what makes our vast world of computing possible! Let's go!

SciShow has a spinoff podcast! It's called SciShow Tangents. Check it out at http://www.scishowtangents.org
----------
Like SciShow? Want to help support us, and also get things to put on your walls, cover your torso and hold your liquids? Check out our awesome products over at DFTBA Records: http://dftba.com/artist/52/SciShow
----------
Or, support SciShow by becoming a patron on Patreon: https://www.patreon.com/scishow
----------
Looking for SciShow elsewhere on the internet?
Facebook: http://www.facebook.com/scishow
Twitter: http://www.twitter.com/scishow
Tumblr: http://scishow.tumblr.com

Thanks Tank Tumblr: http://thankstank.tumblr.com

Sources:
http://www.mooreslaw.org/
http://www.intel.com/content/dam/www/public/us/en/documents/corporate-information/museum-transistors-to-transformations-brochure.pdf
http://www.tldp.org/HOWTO/Unix-and-Internet-Fundamentals-HOWTO/core-formats.html
http://homepage.cs.uri.edu/book/binary_data/binary_data.htm
https://www.youtube.com/watch?v=qm67wbB5GmI
https://www.youtube.com/watch?v=cNN_tTXABUA
http://www.newscientist.com/article/mg21929301.000-parallel-sparking-many-chips-make-light-work.html#.U1iQ3vmSzmc
http://www.newscientist.com/article/mg20527441.600-spasers-set-to-sum-a-new-dawn-for-optical-computing.html#.U1iREfmSzmc
http://www.extremetech.com/computing/97469-is-14nm-the-end-of-the-road-for-silicon-lithography
http://www.amasci.com/miscon/speed.html
http://newsoffice.mit.edu/2013/computing-with-light-0704
Behold! The transistor, a tiny switch about the size of a virus that can control the flow of a small electrical current. It's one of the most important inventions ever because when it's on, it's on, and when it's off, it's off. Sounds simple. Probably too simple. But this "either/or" situation is incredibly useful because it is a binary system. On or off, yes or no, one or zero. But with enough transistors working together we can create limitless combinations of "ons" and "offs", "ones" and "zeroes" to make a code that can store and process just about any kind of information you can imagine.

That's how your computer computes, and it's how you're watching me right now. It's all because those tiny transistors can be organized, or integrated, into integrated circuits, also known as microchips or microprocessors, which can orchestrate the operation of millions of transistors at once. And until pretty recently, the only limitation to how fast and smart our computers could get was how many transistors we could pack onto a microchip.

Back in 1965, Gordon Moore, co-founder of the Intel Corporation, predicted that the number of transistors that could fit on a microchip would double every two years. So essentially every two years computers would become twice as powerful. This is known in the tech industry as Moore's Law, and for forty years it was pretty accurate; we went from chips with about 2,300 transistors in 1972, to chips with about 300 million transistors by 2006.

But over the last ten years we've fallen behind the exponential growth that Moore predicted. The processors coming off assembly lines now have about a billion transistors, which is a really big number, but if we were keeping up with Moore's Law, we'd be up to four or five billion by now.
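
To see how closely those figures track the doubling rule, here's a quick back-of-the-envelope check (a rough Python sketch; the 1972 and 2006 counts are the ones quoted above, and the doubling period is taken to be exactly two years):

    # Rough check of the doubling-every-two-years claim, using the figures from the video.
    base_year, base_count = 1972, 2_300            # ~2,300 transistors per chip in 1972

    def moores_law_estimate(year, period=2):
        """Projected transistor count if the total doubles every `period` years."""
        doublings = (year - base_year) / period
        return base_count * 2 ** doublings

    print(f"{moores_law_estimate(2006):,.0f}")     # ~301 million, close to the ~300 million quoted
    print(f"{moores_law_estimate(2014):,.0f}")     # ~4.8 billion, the "four or five billion" mentioned here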

So why is the trend slowing down? How can we get more transistors onto a chip? Are there entirely different technologies we could be using instead, ones that pose no limitations? And how do billions of little on/off switches turn into movies and music and YouTube videos about science that display on a glowing, magical box? Spoilers: it's not magic; it's science.

[SciShow intro music]

To understand the device that you're using right now, as well as the challenges computer science is facing and what the future of computing might look like, you have to start small, with that transistor. A transistor is essentially a little gate that can be opened or shut with electricity to control the flow of electrons between two channels made of silicon, which are separated by a little gap. They're made of silicon because silicon is a natural semiconductor. It can be modified to conduct electricity really well in some conditions, or not at all in other conditions.

In its pure state, silicon forms really nice, regular crystals. Each atom has four electrons in its outer shell that are bonded with the silicon atoms around it. This arrangement makes it an excellent insulator. It doesn't conduct electricity very well because all of its electrons are spoken for. But you can make that crystalline silicon conduct electricity really well if you dope it. You know, doping: when you inject one substance into another substance to give it powerful properties, like what Lance Armstrong did to win the Tour de France seven times. Only instead of super-powered tiger blood or whatever, the silicon is doped with another element like phosphorus, which has five electrons in its outer shell, or boron, which has three.

If you inject these into pure crystal silicon, suddenly you have extra unbonded electrons that can move around and jump across the gap between the two strips of silicon. But they're not gonna do that without a little kick. When you apply a positive electrical charge to a transistor, that positive charge will attract those electrons, which are negative, out of both silicon strips, drawing them into the gap between them. When enough electrons are gathered, they turn into a current. Remove the positive charge, and the electrons zip back into their places, leaving the gap empty. Thus the transistor has two modes: on and off, one and zero.
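
As a loose analogy (this is not the video's own example), you can model that on/off behavior in a few lines of Python; the threshold voltage and function name here are made up purely for illustration:

    # Toy model of a transistor as a voltage-controlled switch (illustrative only).
    THRESHOLD = 0.7  # arbitrary "turn-on" gate voltage for this sketch

    def transistor(gate_voltage):
        """Return 1 if current flows across the gap, 0 if it doesn't."""
        return 1 if gate_voltage >= THRESHOLD else 0

    print(transistor(1.0))  # 1 -- positive charge applied, electrons bridge the gap
    print(transistor(0.0))  # 0 -- charge removed, the gap empties out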

All the information your computer is using right now is represented by sequences of open and shut transistors. So, how does a bunch of ones and zeroes turn into me talking to you on your screen right now? Let's just imagine eight transistors hooked up together. I say 8 because one byte of information is made of 8 bits: 8 on-or-off switches, the basic unit of a single piece of information inside your computer.

Now the total number of possible on/off configurations for those 8 transistors is 256. That means 256 combinations of ones and zeroes in that 8-bit sequence. So let's say our 8-transistor microchip is given this byte of data: 01000011, the number 67 written in binary, by the way. Okay, so what now?
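
A quick way to check those numbers yourself (a small Python sketch, not anything from the video):

    # 8 bits give 2^8 = 256 possible on/off combinations
    print(2 ** 8)              # 256

    # The example byte: the number 67 written out as 8 binary digits, and back again
    print(format(67, '08b'))   # 01000011
    print(int('01000011', 2))  # 67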

The cool thing about binary data is that the same string of ones and zeroes can mean totally different things depending on where it's sent.

Different parts of your computer use different decoding keys to read the binary code. So if our teeny tiny little 8-transistor microchip kicks that byte over to our graphics card, our graphics card will interpret it as one of 256 colors: whichever one is coded as number 67. But if that same byte is sent over to our sound card, it might interpret it as one of 256 different spots mapped onto a sound wave. Each spot has its own sound, and our byte will code for spot number 67, so your speaker will put out that sound.

If it's sent over to the part of your computer that converts data into written language, using an encoding called UTF-8, it turns into the letter C. Uppercase C, actually, not lowercase c, which is a different byte. So our 8-transistor processor already has a lot of options; the problem is that it can only manage one byte of data at a time, and even if it's flying through bytes at a rate of a few million per second, which your computer is doing right now, that's still a serious bottleneck. So we need more transistors, and then more, and more, and more, and more!
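
Here's that same idea in a few lines of Python (a sketch, not how the video demonstrates it; the "color" and "sound" comments just treat 67 as an index into a 0-255 range, the way a simple 256-color palette or an 8-bit audio sample would):

    byte = 0b01000011                     # the example byte: 67 in decimal

    # As text: UTF-8 (which matches ASCII for values below 128) maps 67 to uppercase "C"
    print(bytes([byte]).decode('utf-8'))  # C
    print(ord('c'))                       # 99 -- lowercase c really is a different byte

    # As a color or a sound, the same value is just index 67 out of 256 possibilities:
    # entry 67 in a 256-color palette, or level 67 of 256 in an 8-bit audio sample
    print(byte)                           # 67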

And for the past 50 years, the biggest obstacle to cramming more and more transistors onto a single chip, and therefore increasing our processing power, has come down to one thing - how small we can make that gap between the two silicon channels.

In the early days of computing, those gaps were so big that you could see them with the naked eye. Today, a state-of-the-art microchip has gaps that are only 32 nanometers across. To give you a sense of perspective, a single red blood cell is 125 times larger than that. 32 nanometers is the width of only a few hundred atoms.

So, there's a limit to how low we can go. Maybe we can shave that gap down to 22 or 16 or even 10 nanometers using currently available technology, but then you start running into a lot of problems.

The first big problem is that when you're dealing with components that are so small that just a few stray atoms can ruin a chip, it's no longer possible to make chips that are reliable or affordable.

The second big problem is heat. That many transistors churning through millions of bytes of data per second in such a small space generates a lot of heat. I mean, we're starting to test chips that get so hot that they melt through the motherboard, and then sometimes through the floor.

And the third big problem is quantum mechanics. Oh, quantum mechanics, you enchanting, treacherous minx. When you start dealing with distances that are that small, you start to face the very real dilemma of electrons just jumping across the gap for no reason, in a phenomenon known as quantum tunneling. If that starts happening, your data is gonna start getting corrupted while it moves around inside your computer.

So, how can we keep making our computers even faster when atoms aren't getting any smaller? Well, it might be time to abandon silicon.

Graphene, for example, is a more highly conductive material that would let electrons travel across it faster. We just can't figure out how to manufacture it yet.

Another option is to abandon electrons because, and get ready to have your mind blown, electrons are incredibly slow. Like, the electrons moving through the wire that connects your lamp to the wall outlet, they're moving at about 8 and a half centimeters per hour. And that's fast enough when electrons only have to travel 32 nanometers, but other stuff can go a lot faster. Like light.
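
That figure is the electrons' drift velocity, and you can estimate it with the standard formula v = I / (n * A * q). Here's a rough Python version; the current and wire size are assumptions chosen to stand in for a typical lamp cord, not numbers from the video:

    # Drift velocity of electrons in a copper wire: v = I / (n * A * q)
    n = 8.5e28   # free electrons per cubic meter of copper
    q = 1.6e-19  # charge of one electron, in coulombs
    A = 1.5e-6   # cross-sectional area in m^2 (assumed ~1.5 mm^2 lamp cord)
    I = 0.5      # current in amps (assumed ~60 W lamp on 120 V mains)

    v = I / (n * A * q)                         # meters per second
    print(f"{v * 100 * 3600:.1f} cm per hour")  # about 8.8 cm per hour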

Optical computers would move around photons instead of electrons to represent the flow of data. And photons are literally as fast as anything can possibly be, so you can't ask for better than that. But, of course, there are some major problems with optical computing, like the fact that photons ARE so fast that it makes them hard to pin down for long enough to be used for data. And the fact that lasers, which are probably what optical computing would involve, are huge power hogs and would be incredibly expensive to keep running.

Probably the simplest solution to faster computing isn't to switch to fancy new materials or harness the power of light, but to just start using more chips. If you've got four chips processing a program in parallel, the computer would be four times faster, right?

Welllll, yeah, I mean yes, but microchips are super expensive, and it's also hard to design software that makes use of multiple processors. We like our flows of data to be linear because that's how we tend to process information and it's kind of a hard habit to break.
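
One standard way to put numbers on that (it isn't mentioned in the video, but it's the usual framing in computing) is Amdahl's law: whatever fraction of a program can't be parallelized caps the overall speedup, no matter how many chips you add. A small Python illustration, with the parallel fraction chosen arbitrarily:

    # Amdahl's law: speedup = 1 / ((1 - p) + p / n),
    # where p is the fraction of work that can run in parallel and n is the number of chips.
    def speedup(p, n):
        return 1 / ((1 - p) + p / n)

    print(round(speedup(p=1.0, n=4), 2))  # 4.0 -- the "four times faster" ideal
    print(round(speedup(p=0.8, n=4), 2))  # 2.5 -- if 20% of the work stays serial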

And then there are some really exotic options, like thermal computing, which uses variations in heat to represent bits of data, or quantum computing, which deals in particles that are in more than one state at the same time, thereby totally doing away with the whole on-off, either-or system.

So, wherever computers go next, there are gonna need to be some big changes if we want our technology to keep getting smaller, and smarter, and faster.

Personally, I'm holding out hope for the lasers. A laser computer... I want one of those.

Thanks for watching the SciShow Infusion, especially to our Subbable subscribers. To learn how you can support us in exploring the world, whether it's inside your computer or outside in the universe, just go to subbable.com/scishow.

And speaking of that whole universe, check out our new channel, SciShow Space, where we talk about exactly that, including the latest in space news. And as always, don't forget to go to youtube.com/scishow and subscribe, so that you can always keep getting more of this, because I know you like it.

[SciShow outro music]