vlogbrothers
Things are Changing Very Fast
YouTube: | https://youtube.com/watch?v=kqPKNksl9hk |
Previous: | AI tells me if AI will write novels. |
Next: | What I Would Have Tweeted |
Categories
Statistics
View count: | 683,298 |
Likes: | 38,938 |
Comments: | 2,363 |
Duration: | 03:59 |
Uploaded: | 2022-12-09 |
Last sync: | 2024-11-05 04:45 |
Citation
Citation formatting is not guaranteed to be accurate.
MLA Full: | "Things are Changing Very Fast." YouTube, uploaded by vlogbrothers, 9 December 2022, www.youtube.com/watch?v=kqPKNksl9hk. |
MLA Inline: | (vlogbrothers, 2022) |
APA Full: | vlogbrothers. (2022, December 9). Things are Changing Very Fast [Video]. YouTube. https://youtube.com/watch?v=kqPKNksl9hk |
APA Inline: | (vlogbrothers, 2022) |
Chicago Full: | vlogbrothers, "Things are Changing Very Fast," December 9, 2022, YouTube, 03:59, https://youtube.com/watch?v=kqPKNksl9hk.
Today on TikTok (after I made this video) I saw a guy arguing that we have a very small window between understanding the potential effects of AI and actually deciding what we can and should do with it. I don't really agree; I think that we both have no window at all (in that you cannot put this genie back in the bottle) and a window that will be open for a very long time.
When we encounter new technologies, we learn their powers and their dangers, which always takes quite a long time, and then we develop tools (mostly in the form of taboos) for how they should and should not be used. But you literally can't intelligently form the taboos until you have experienced the negative outcomes.
I don't know what the negative outcomes will be, but I bet there will be a ton. And we're not going to have the systems to handle those negative outcomes for a long time...potentially a generation or two. That's how it's been with the internet, and that's why I'm worried about changes stacking up on changes. It's gonna be a tricky time.
----
Subscribe to our newsletter! http://eepurl.com/Bgi9b
And join the community at http://nerdfighteria.com
Help transcribe videos - http://nerdfighteria.info
Learn more about our project to help Partners in Health radically reduce maternal mortality in Sierra Leone: https://www.pih.org/hankandjohn
If you're able to donate $2,000 or more to this effort, please join our matching fund: https://pih.org/hankandjohnmatch
If you're in Canada, you can donate here: https://pihcanada.org/hankandjohn
John's twitter - http://twitter.com/johngreen
Hank's twitter - http://twitter.com/hankgreen
Hank's tumblr - http://edwardspoonhands.tumblr.com
Good morning John, it's funny: I have also been worried about AI this week, which is actually not very weird at all, because we have actual intelligences, not artificial ones, and thus we are bound to be anchored to different interests and worries.
I wrote a couple of books about how when new things arrive, as they inevitably do, we tend to retreat to our camps. That new thing may be a pandemic or a large-scale social change or a really powerful new technology or an alien. It doesn't matter what it is.
The more confused we are, the more we head to the place where we feel safest. And then, from the safety of that place, we think about that place's concerns, we ask questions specifically about those concerns, and the more powerful places tend to have their conversations become the dominant ones in the broader conversation. And then eventually it all bubbles out into there being two groups that have lots of internal divisions but are mostly butting heads with each other.
You can watch these groups starting to define and differentiate themselves right now, in real time, and it will probably be five or ten years before we actually know what questions we're trying to get answers to and what the dominant perspectives will be. And I don't know what they're gonna be; I don't even know what questions we need to ask yet, and I certainly don't know the answers to those questions. Like, the idea of these models is that they don't copy things; they look at a lot of different things, whether that's text or pictures.
They "learn" how these things are structured and they output things that have that kind of structure. But when they do that, sometimes their procedures make them copy. Like I asked Midjourney to imagine an Afghani woman with green eyes and this is what it gave me.
Midjourney's model does not contain this famous image from the 1980s National Geographic cover, but it has been trained on that image, and so it is plagiarizing it. Can you sue it for that? Is it okay for it to have trained on so many copyrighted works whose creators didn't agree to have it train on them?
Should artists and rights-holders be able to opt out of that training process? Is any of this a violation of existing laws, or is it a call for new legislation? Who will use this?
What will it enable? Who will it hurt? I don't know!
People are gonna disagree about this stuff like crazy. So here's my big concern, John, because of course you gotta have one big one. You're gonna have a lot of little ones, but one big one.
We have not yet gotten through the last massive revolutionary shift in human communication. We don't know what to do about last Tuesday yet! We are still actively figuring out how to be humans and societies with the current tremendously disruptive tools that we were given 10, 20 years ago.
We're just realizing that we live a lot of our lives in places that are not democracies, where we do not get a vote, where the leadership can change based on who has enough money to buy the thing that we live in. A lot of these platforms haven't really ever been through a recession. They've had so much money that they've been able to grow and grow, and now, are they at a ceiling? And what will they do to punch through that ceiling?
What? Like, we don't know. All of that is very big, very disruptive, and it's been difficult. And now it feels like we're about to have another tremendously weird, entirely new wrench thrown into the works of human communication.
A lot of the things I see people saying ChatGPT is good at, I don't think it is good at. Like, it's not good at replacing Wikipedia. It very confidently states things that are both untrue and true, there's no way to know the difference without fact-checking, and I don't think that it has a good way of figuring that out itself. I just don't think it's going to be good at that.
But that's a really hard problem, and there are really easy ones that I think it'll be great at, like, for example, getting people to dislike each other more. And there are lots of people, both external and internal, who want people to be more afraid of each other and to dislike each other more. Like, a lot of get-out-the-vote measures are about getting people to be more afraid and angry.
And getting people to be afraid and mad at each other? Very easy, like, a trivial problem for an AI like this to solve. All you have to do is emulate a human in its worst moments, and in our worst moments, we are not complex. We have not figured out how to be a society inside of the current communications revolution, and while people and societies can and will change very fast, it's usually not fun.
I, for one, wouldn't mind if things got a little less interesting for a little while. But we don't get to choose these things. John, I'll see you on Tuesday.