The Internet and Computing: Crash Course History of Science #43
Statistics
View count: 137,978
Likes: 3,744
Comments: 196
Duration: 12:28
Uploaded: 2019-04-08
Last sync: 2024-10-28 21:45
Citation
Citation formatting is not guaranteed to be accurate.
MLA Full: "The Internet and Computing: Crash Course History of Science #43." YouTube, uploaded by CrashCourse, 8 April 2019, www.youtube.com/watch?v=ur9aGqcmq8M.
MLA Inline: (CrashCourse, 2019)
APA Full: CrashCourse. (2019, April 8). The Internet and Computing: Crash Course History of Science #43 [Video]. YouTube. https://youtube.com/watch?v=ur9aGqcmq8M
APA Inline: (CrashCourse, 2019)
Chicago Full: CrashCourse, "The Internet and Computing: Crash Course History of Science #43," April 8, 2019, YouTube, 12:28, https://youtube.com/watch?v=ur9aGqcmq8M.
We’ve talked a lot about advances in biotech. But none of those could have happened without advances in computing. It’s time to get back to data and explore the unlikely birth, strange life, and potential futures of the Internet. In this episode of Crash Course History of Science, Hank sits down to talk about how computers have changed, where they're going, and how the internet fits into all of this.
***
Crash Course is on Patreon! You can support us directly by signing up at http://www.patreon.com/crashcourse
Thanks to the following patrons for their generous monthly contributions that help keep Crash Course free for everyone forever:
Eric Prestemon, Sam Buck, Mark Brouwer, Laura Busby, Zach Van Stanley, Bob Doye, Jennifer Killen, Naman Goel, Nathan Catchings, Brandon Westmoreland, dorsey, Indika Siriwardena, Kenneth F Penttinen, Trevin Beattie, Erika & Alexa Saur, Glenn Elliott, Justin Zingsheim, Jessica Wode, Tom Trval, Jason Saslow, Nathan Taylor, Brian Thomas Gossett, Khaled El Shalakany, SR Foxley, Yasenia Cruz, Eric Koslow, Caleb Weeks, Tim Curwick, D.A. Noe, Shawn Arnold, Malcolm Callis, William McGraw, Andrei Krishkevich, Rachel Bright, Jirat, Ian Dundore
--
Want to find Crash Course elsewhere on the internet?
Facebook - http://www.facebook.com/YouTubeCrashCourse
Twitter - http://www.twitter.com/TheCrashCourse
Tumblr - http://thecrashcourse.tumblr.com
Support Crash Course on Patreon: http://patreon.com/crashcourse
CC Kids: http://www.youtube.com/crashcoursekids
***
We’ve talked a lot about advances in biotech. But none of those could have happened without advances in computing. It’s time to get back to data and explore the unlikely birth, strange life, and potential futures of the Internet.
The theme of the history of computing is that what we mean by “computing” keeps changing. With the invention of the transistor in 1947, the computer started to shrink! And speed up!
And change meaning yet again, becoming a ubiquitous dimension of contemporary life—not to mention a totally normal thing to yell at. Hey Google... can you roll the intro? [long pause]
Google: I'm not sure.
[Intro Music Plays]
In 1965, Electronics Magazine asked computer scientist Gordon Moore to do something scientists are generally taught not to do: predict the future. Moore guessed that, roughly every year, the number of electronic switches that people could squeeze onto one computer chip would double. This meant computer chips would continue to become faster, more powerful, and cheaper at an absolutely amazing rate. Which might have sounded, like, suspiciously awesome to readers. But Moore’s prediction came true! Although it took eighteen months for each doubling, and, arguably, this was a self-fulfilling prophecy, since engineers actively worked towards it. Moore went on to serve as CEO of Intel and is now worth billions. His prediction is called “Moore’s law.”
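To get a feel for what steady doubling means, here is a rough back-of-envelope sketch in Python. It assumes an 18-month doubling period and uses the roughly 2,300 transistors of a 1971-era chip as an illustrative baseline; the specific numbers are assumptions, but the exponential shape is the point.

```python
# Back-of-envelope projection of Moore's law: transistor counts doubling
# roughly every 18 months. The baseline (~2,300 transistors in 1971) and the
# doubling period are illustrative assumptions, not exact history.

def projected_transistors(year, base_year=1971, base_count=2_300, months_per_doubling=18):
    """Project transistors per chip assuming steady exponential doubling."""
    doublings = (year - base_year) * 12 / months_per_doubling
    return base_count * 2 ** doublings

for year in (1971, 1980, 1990, 2000, 2010, 2020):
    print(f"{year}: ~{projected_transistors(year):,.0f} transistors per chip")
```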
Think about what this means for manufacturers. They keep competing to invent hot new machines that make their old ones obsolete. The same applies to methods of data storage, from floppy disks to USB drives to Cloud storage.
Today, engineers face big questions about the physical limit of Moore’s law. Even with new tricks here and there, just how small and fast can conventional chips get? Currently, teams at different chip manufacturers are working to create transistors at the nanometer scale. IBM made a whole computer that’s only one millimeter by one millimeter wide and is about as fast as a computer from 1990.
As computers became smaller and cheaper, they moved from military bases to businesses in the 1960s and to schools and homes by the late 1970s and 1980s. And computers changed these spaces. People got used to using them for certain tasks. But computers were pretty intimidating. Manufacturers had to make them work better with people. So in 1970 the Xerox Corporation founded the Palo Alto Research Center—known as Xerox PARC. Here, researchers invented many features of modern computing.
In 1973, they came up with the Xerox Alto, the first personal computer… But Xerox didn’t think there was a market for computers in the home yet. Other Xerox PARC inventions include laser printing, the important networking standard called Ethernet, and even the graphical user interface or GUI—which included folders, icons, and windows. But Xerox didn’t capitalize on these inventions. You probably know who did.
In the 1970s, two nerds who dropped out of college started selling computers you were meant to use at home, for fun and—you know, to do… stuff with, whatever you wanted. In retrospect, that was the genius of the Apple II, released in 1977. Along with decades of shrewd engineering and business moves, fun made video game designer and meditation enthusiast Steve Jobs and engineer Steve Wozniak into mega-billionaires. They had a commitment to computing for play, not always just work.
And they weren’t alone. In 1981, IBM started marketing the PC powered by the DOS operating system, which they licensed from Microsoft, founded by Harvard dropout Bill Gates in 1975. By 1989, Microsoft’s revenues reached one billion dollars. You can find out more about college dropouts-turned-billionaires elsewhere. For our purposes, note that some of the inventors who influenced the future of computing were traditional corporate engineers like Gordon Moore. But increasingly, they were people like the Steves who didn’t focus on discoveries in computer science, but on design and marketing: how to create new kinds of interactions with, and on, computers.
Compare this to the birth of social media in the early 2000s, when new social spaces emerged on computers, and connecting computers together allowed new communities to form—from Second Life to 4chan. For that, we have to once again thank U.S. military research.
ThoughtBubble, plug us in.
Back in the late 1950s, the U.S. was really worried about Soviet technologies. So in 1958, the Secretary of Defense authorized a new initiative called the Advanced Research Projects Agency, or ARPA (later renamed DARPA). ARPA set about solving a glaring problem: what happened if Soviet attacks cut U.S. telephone lines? How could information be moved around quickly, even after a nuclear strike? A faster computer wouldn’t help if it was blown to bits. What was needed was a network.
So in part to defend against information loss during a war—and in part to make researchers’ lives easier—ARPA funded the first true network of computers, the Advanced Research Projects Agency Network, better known as ARPANET. People give different dates for the birthday of the Internet, but two stand out. On September 2nd, 1969, ARPANET went online. It used the then-new technology of packet switching, or sending data in small, independent, broken-up parts that can each find their own fastest routes and be reassembled later. This is still the basis of our networks today!
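Here is a toy sketch of the packet-switching idea: a message is chopped into small numbered pieces that can arrive in any order and still be reassembled. The function names and chunk size are illustrative choices, and no real networking happens here.

```python
import random

# Toy illustration of packet switching: a message is split into small,
# independently travelling packets that may arrive out of order and are
# reassembled from their sequence numbers. (Not real network code.)

def to_packets(message: str, chunk_size: int = 8):
    return [(seq, message[i:i + chunk_size])
            for seq, i in enumerate(range(0, len(message), chunk_size))]

def reassemble(packets):
    return "".join(chunk for _, chunk in sorted(packets))

packets = to_packets("Packets can take different routes and still arrive intact.")
random.shuffle(packets)  # simulate packets arriving out of order
print(reassemble(packets))
```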
At first, ARPANET only linked a few universities. But it grew as researchers found that linking computers was useful for all sorts of reasons, nukes aside! And then, on January 1st, 1983, several computer networks including ARPANET were joined together using a standard way of requesting and sharing information: TCP/IP. This remains the backbone of the Internet today. Meanwhile, French engineers had created their own computer network, Minitel, connected through telephone lines, back in 1978—five years before TCP/IP! Minitel was retired in 2012. And the Soviets developed their own versions of ARPANET. But after 1991, these joined the TCP/IP-driven Internet, and the virtual world became both larger and smaller.
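For a minimal, concrete look at the TCP half of TCP/IP, Python's standard socket module can open a TCP connection to a host. The host example.org and port 80 below are placeholder choices for illustration, and this assumes network access.

```python
import socket

# Minimal sketch: open a TCP connection to a remote host over the Internet.
# "example.org" and port 80 are placeholders; any reachable host would do.
with socket.create_connection(("example.org", 80), timeout=5) as conn:
    local = conn.getsockname()
    remote = conn.getpeername()
    print(f"Connected from {local} to {remote} over TCP/IP")
```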
The Internet in the 1980s was literally that: a network interconnecting computers. It didn’t look like a new space yet. For that, we can thank British computer scientist Sir Tim Berners-Lee, who invented the World Wide Web in 1990. Berners-Lee pulled together existing ideas, like hypertext and the Internet, and built the first web browser to create the beginnings of the functional and useful web we know today. The Web had profound effects. It brought the Internet to millions of people—and brought them into it, making them feel more like they had a home “online,” a virtual place to represent themselves, meet strangers all over the world, and troll educational video shows!
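As a sketch of one thing an early browser did, the snippet below fetches a single page over HTTP and lists the hyperlinks it contains, which is the link-following idea at the heart of hypertext. The URL is a placeholder, and real browsers of course do far more (rendering, scripting, and so on).

```python
from html.parser import HTMLParser
from urllib.request import urlopen

# Minimal "browser" sketch: fetch one page over HTTP and list the hyperlinks
# it contains. The URL is an illustrative placeholder.

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(value for name, value in attrs if name == "href")

with urlopen("http://example.org/") as response:
    html = response.read().decode("utf-8", errors="replace")

collector = LinkCollector()
collector.feed(html)
print(collector.links)
```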
The Web also democratized the tools of knowledge making. From World War Two until 1990, building computers and using them to do work was largely the domain of elites. A short time later, we can trade software on GitHub, freely share 3D printing templates on Thingiverse, and benefit from the collective wisdom of Wikipedia. It’s as if the Internet now contains not one but several Libraries of Alexandria. They’ve radically changed how we learn and make knowledge.
Scientific journals, once invented as printed objects, have moved online since 1990—though often behind steep paywalls. In fact, Russian philosopher Vladimir Odoevsky predicted way back in 1837 in his book The Year 4338 that our houses would be connected by “magnetic telegraphs.” And that came true only one hundred and fifty years later—not two millennia!
So what's going to happen in another hundred and fifty years? Well, computing seems to be changing unpredictably. Not only because computers are getting faster, but because of at least three more fundamental shifts. One, scientists are experimenting with quantum computers, which work in a different way than “classical,” binary ones: instead of bits that must be either zero or one, quantum bits, or qubits, can exist in a blend of both states at once. This is called superposition, and it has the potential to make the computers of the future much faster than today’s. This could lead to major shifts in cryptography: the current method of protecting our credit cards works because classical computers aren’t strong enough to factor very large numbers quickly. But a quantum computer should be able to do this kind of math easily. To date, however, quantum computers are not yet finished technologies that engineers can improve, but epistemic objects: things that scientists are still working to understand. So will quantum computing change everything? Or mostly remain a weird footnote to classical computing? I don’t know… we’ll find out!
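A tiny state-vector simulation can make the superposition idea concrete. The sketch below puts one simulated qubit into an equal blend of 0 and 1 with a Hadamard gate and reads off the measurement probabilities; it only simulates the math on a classical machine, it is not a quantum computer.

```python
import numpy as np

# State-vector sketch of superposition: a qubit starts in |0>, a Hadamard
# gate puts it into an equal superposition of |0> and |1>, and measurement
# probabilities are the squared magnitudes of the amplitudes.

ket0 = np.array([1.0, 0.0])                       # the |0> state
hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

state = hadamard @ ket0                            # equal superposition
probabilities = np.abs(state) ** 2

print("amplitudes:", state)                        # [0.707..., 0.707...]
print("P(measure 0), P(measure 1):", probabilities)  # [0.5, 0.5]
```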
Fundamental shift two: some researchers across computing, history, and epistemology—the branch of philosophy that asks, what counts as knowledge?—wonder if really really large amounts of data, called Big Data, will change how we do science. One of the main jobs of being a scientist has been to just collect data. But if Internet-enabled sensors of all kinds are always transmitting data back to databases, then maybe the work of science will shift away from data collection, and even away from analysis—AI can crunch numbers—and into asking questions about patterns that emerge from data, seemingly on their own. So instead of saying, "I wonder if X is true about the natural or social world," and then going out to observe or test, the scientist of the future might wait for a computer to tell her, "X seems true about the world, are you interested in knowing more?"
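Here is a hedged sketch of that “pattern first, question later” workflow: scan a table of sensor readings for pairs of variables that are strongly correlated and flag them for a human to investigate. The synthetic data and the 0.8 threshold are purely illustrative.

```python
import numpy as np

# Sketch of "hypothesis-free" pattern mining: look for strongly correlated
# pairs of variables in sensor data and surface them for a human to examine.
# The synthetic data and the correlation threshold are illustrative only.

rng = np.random.default_rng(0)
n = 1_000
temperature = rng.normal(20, 5, n)
energy_use = 2.0 * temperature + rng.normal(0, 3, n)   # deliberately related
humidity = rng.normal(50, 10, n)                       # unrelated noise

readings = {"temperature": temperature, "energy_use": energy_use, "humidity": humidity}
names = list(readings)

corr = np.corrcoef([readings[name] for name in names])
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        if abs(corr[i, j]) > 0.8:
            print(f"{names[i]} and {names[j]} look related (r = {corr[i, j]:.2f}): worth a hypothesis?")
```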
This vision for using Big Data has been called “hypothesis-free science,” and it would qualify as a new paradigm. But will it replace hypothesis-driven science? Even if AI is mostly “weak,” meaning not like a human brain—but only, say, a sensor system that knows what the temperature is in your house and what you might like it to be—once it’s very common, it could challenge long-held assumptions about what thought is. In fact, many people have already entrusted cognitive responsibilities such as knowing what time it is to AI scripts on computers in their phones, watches, cars, and homes. Will human cognition feel different if we keep giving AI more and more human stuff to take care of? How will society change? I don’t know… we’ll find out!!!
And these are only some of the anxieties of our hyper-connected world! We could do a whole episode on blockchain, a list of time-stamped records which are linked using cryptography and (theoretically) resistant to fraud, and the new social technologies it enables: like cryptocurrency, kinds of money not backed by sovereign nations but by groups of co-invested strangers on the Internet. Will blockchain change money, and fundamentally, trust in strangers? Or is it just another shift in cryptography? A fad? I don't know... we'll find out!
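The core blockchain data structure is easy to sketch: a chain of time-stamped records in which each block stores a hash of the previous one, so altering any earlier record invalidates every later link. This toy version uses Python's hashlib and leaves out everything that makes real cryptocurrencies work, like consensus and proof of work.

```python
import hashlib
import json
import time

# Toy hash-linked chain of time-stamped records. Each block stores the hash
# of the previous block, so tampering with an earlier record breaks the chain.

def block_hash(block):
    payload = {"timestamp": block["timestamp"], "record": block["record"],
               "previous_hash": block["previous_hash"]}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def make_block(record, previous_hash):
    block = {"timestamp": time.time(), "record": record, "previous_hash": previous_hash}
    block["hash"] = block_hash(block)
    return block

chain = [make_block("genesis", "0" * 64)]
for record in ("Alice pays Bob 5", "Bob pays Carol 2"):
    chain.append(make_block(record, chain[-1]["hash"]))

def chain_is_valid(chain):
    return all(
        block["hash"] == block_hash(block)
        and (i == 0 or block["previous_hash"] == chain[i - 1]["hash"])
        for i, block in enumerate(chain)
    )

print(chain_is_valid(chain))                  # True
chain[1]["record"] = "Alice pays Bob 5000"    # tamper with an earlier record
print(chain_is_valid(chain))                  # False: the stored hash no longer matches
```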
Let’s head back to the physical world to look at the costs of these developments. One feature they have in common is that they require ever greater amounts of electricity and rare-earth metals. And older computers become e-waste, toxic trash recycled by impoverished people at cost to their own bodies. Even as computers become so small they’re invisible, so common they feel like part of our own brains, and so fast that they may fundamentally change critical social structures like banking and buying animal hoodies on Etsy… they also contribute to dangerous shifts in how we use and allocate natural resources.
Next time, we’ll wrap up our story of the life sciences by asking questions about the future of medicine and the human brain that remain unanswered as of early 2019. History, my friends, is not finished.
Crash Course History of Science is filmed in the Dr. Cheryl C. Kinney Studio in Missoula, Montana and it's made with the help of all these nice people. And our animation team is Thought Cafe.
Crash Course is a Complexly production. If you want to keep imagining the world complexly with us, you can check out some of our other channels like Animal Wonders, The Art Assignment, and SciShow Psych. And if you would like to keep Crash Course free forever for everyone, you can support the series on Patreon, a crowdfunding platform that allows you to support the content you love. Thank you to all our patrons for making Crash Course possible with your continued support.