crashcourse
Flirting With Disaster - The Importance of Safety: Crash Course Engineering #28
YouTube: | https://youtube.com/watch?v=WavEcAsI2AY |
Previous: | Neural Networks: Crash Course Statistics #41 |
Next: | Little Theater and American Avant Garde: Crash Course Theater #40 |
Categories
Statistics
View count: | 91,612 |
Likes: | 1,684 |
Comments: | 107 |
Duration: | 11:13 |
Uploaded: | 2018-12-13 |
Last sync: | 2024-12-07 07:30 |
Citation
Citation formatting is not guaranteed to be accurate.
MLA Full: | "Flirting With Disaster - The Importance of Safety: Crash Course Engineering #28." YouTube, uploaded by CrashCourse, 13 December 2018, www.youtube.com/watch?v=WavEcAsI2AY. |
MLA Inline: | (CrashCourse, 2018) |
APA Full: | CrashCourse. (2018, December 13). Flirting With Disaster - The Importance of Safety: Crash Course Engineering #28 [Video]. YouTube. https://youtube.com/watch?v=WavEcAsI2AY |
APA Inline: | (CrashCourse, 2018) |
Chicago Full: | CrashCourse, "Flirting With Disaster - The Importance of Safety: Crash Course Engineering #28," December 13, 2018, YouTube, 11:13, https://youtube.com/watch?v=WavEcAsI2AY.
This episode is supported by CuriosityStream
http://www.curiositystream.com/crashcourse
As an engineer, lives will sometimes be in your hands, so this week we’re exploring safety and its impact on engineering. We’ll discuss the difference between occupational safety and public safety and how to analyze and review a process for any potential dangers with things like HAZOP. We’ll learn the dangers of having too *many* alarms and look at how important it is to adopt a good mindset of safety culture.
Crash Course Engineering is produced in association with PBS Digital Studios: https://www.youtube.com/playlist?list=PL1mtdjDVOoOqJzeaJAV15Tq0tZ1vKj7ZV
***
RESOURCES:
https://www.britannica.com/technology/safety-engineering
https://www.britannica.com/topic/safety-condition
https://www.teachengineering.org/activities/view/usu_safety_activity1
http://www.higieneocupacional.com.br/download/hazop.pdf
https://www.britannica.com/technology/nuclear-reactor/Liquid-metal-reactors#ref155230
https://www.manufacturing.net/article/2018/07/global-approach-factory-safety
http://www.hse.gov.uk/comah/sragtech/casetexaco94.htm
https://www.icheme.org/~/media/Documents/Subject%20Groups/Safety_Loss_Prevention/HSE%20Accident%20Reports/The%20Explosion%20and%20Fires%20at%20the%20Texaco%20Refinery%20Milford%20Haven.pdf
https://www.encyclopedia.com/history/encyclopedias-almanacs-transcripts-and-maps/working-conditions-factories-issue
https://www.history.com/topics/triangle-shirtwaist-fire
https://www.iaea.org/topics/nuclear-power-plant-safety
https://www.nei.org/fundamentals/safety
http://www.world-nuclear.org/information-library/safety-and-security/safety-of-plants/chernobyl-accident.aspx
https://www.nrc.gov/reading-rm/doc-collections/fact-sheets/chernobyl-bg.html
https://www.nrc.gov/about-nrc/emerg-preparedness.html
http://nuclearconnect.org/know-nuclear/talking-nuclear/reactor-safety
***
Crash Course is on Patreon! You can support us directly by signing up at http://www.patreon.com/crashcourse
Thanks to the following Patrons for their generous monthly contributions that help keep Crash Course free for everyone forever:
Sam Buck, Mark Brouwer, Jennifer French Lee, Brandon Westmoreland, dorsey, Indika Siriwardena, James Hughes, Kenneth F Penttinen, Trevin Beattie, Satya Ridhima Parvathaneni, Erika & Alexa Saur, Glenn Elliott, Justin Zingsheim, Jessica Wode, Eric Prestemon, Kathrin Benoit, Tom Trval, Jason Saslow, Nathan Taylor, Brian Thomas Gossett, Khaled El Shalakany, SR Foxley, Sam Ferguson, Yasenia Cruz, Eric Koslow, Caleb Weeks, Tim Curwick, D.A. Noe, Shawn Arnold, Malcolm Callis, Advait Shinde, William McGraw, Andrei Krishkevich, Rachel Bright, Kathy & Tim Philip, Jirat, Ian Dundore
--
Want to find Crash Course elsewhere on the internet?
Facebook - http://www.facebook.com/YouTubeCrashCourse
Twitter - http://www.twitter.com/TheCrashCourse
Tumblr - http://thecrashcourse.tumblr.com
Support Crash Course on Patreon: http://patreon.com/crashcourse
CC Kids: http://www.youtube.com/crashcoursekids
Thanks to CuriosityStream for supporting PBS Digital Studios.
Engineering is a dangerous field. You need to worry about the dangers at work and the things you’re creating.
And there’s a lot that can go wrong. You can’t expect to make things that will help change the world without addressing these issues. You need to care about safety. [Theme Music] Safety – in engineering as with everything else – is all about trying to minimize or eliminate the hazards involved with what you’re doing.
Go back a few centuries, and mishaps were usually seen as inevitable and uncontrollable. But by the turn of the 20th century, as tens of thousands of people began dying every year in the US alone because of industrial accidents, something had to be done. The public saw countless workers, including children, being killed in things like coal mine explosions, or the tragic Triangle Shirtwaist Company fire of 1911, where 145 workers died after being trapped inside a burning factory.
Reaction to events like these led to the modern safety protocols in place today – what engineers call safety culture. And the root of this culture is simply the idea that safety must be prioritized and embraced as a whole, not just thrown in here or there. Which is an important mindset to have, because today engineers work on projects that can have serious, permanent consequences – like nuclear power.
The Chernobyl accident of 1986 is a cautionary tale. Due to a flawed design and inadequately trained staff, one of the reactors in the Soviet nuclear power plant exploded, releasing massive amounts of radioactive material. Two workers were killed within hours, 28 more died of radiation exposure in the following weeks, and another hundred or so ended up in the hospital with acute radiation sickness.
Today, decades later, the area around Chernobyl is still uninhabitable. The catastrophe revealed a severe lack of safety culture. Workers didn’t follow proper protocols and prioritized quick results over safety.
Whether or not you're working with something as dangerous as nuclear power, any safety precautions you take will fall under one of two main categories: occupational safety and public safety. Occupational safety is all about the risks you’ll encounter where you work. Think of heavy machinery on construction sites, dangerous chemicals in a lab, or radioactive waste from a nuclear power plant.
Public safety is pretty much a catch-all for everything else. It can be about dangers you come across at home – like a gas leak from your stove – or the ones you’ll encounter while driving to work – like flying debris from a demolition site. We’ve talked before about the ethics behind engineering, but this episode is all about the practical steps you can take to fulfill your ethical obligation.
Because proper safety conditions don’t just happen. You need to have a plan: a set of precautions that will ensure safety as best you can. One step might be wearing the right equipment.
Hard hats and steel-toe-capped boots in a factory. Lab coats and safety goggles in the lab. Maybe a face mask if you’re dealing with toxic fumes.
That might be a good start to address an occupational hazard on your own personal level, but you can’t expect the public to walk around in hard hats and lab coats. As an engineer, you need to think more broadly and consider the people around you, not only while you’re working, but also anyone that will ever be involved with what you create. If you’re building a car, for instance, you should have good safety practices from the second you start designing it to the day it rolls off the lot.
And the first step in any project is to conduct an initial safety analysis. This happens during the design stage of your experiment, the process before you actually start to make anything. You want to know what the hazards are.
Like, Electrical? Chemical? Mechanical?
Once you know that, you need to find your safety limits. How hot can your equipment get before it breaks down? How much pressure can you apply to something before it bursts?
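To make that limit-finding a little more concrete, here is a minimal Python sketch of checking operating readings against design safety limits. It isn't taken from the episode; the quantity names, limit values, and readings are all hypothetical, and a real system would pull them from instrumentation and design documents.

```python
# Minimal sketch: compare operating readings against design safety limits.
# All quantity names, limit values, and readings here are hypothetical.

SAFETY_LIMITS = {
    "reactor_temperature_c": 350.0,   # max allowable temperature
    "vessel_pressure_kpa": 900.0,     # max allowable pressure
}

def check_limits(readings: dict) -> list:
    """Return (quantity, reading, limit) tuples for any reading over its limit."""
    violations = []
    for quantity, limit in SAFETY_LIMITS.items():
        value = readings.get(quantity)
        if value is not None and value > limit:
            violations.append((quantity, value, limit))
    return violations

if __name__ == "__main__":
    current = {"reactor_temperature_c": 362.5, "vessel_pressure_kpa": 640.0}
    for quantity, value, limit in check_limits(current):
        print(f"WARNING: {quantity} = {value} exceeds safety limit of {limit}")
```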
The idea here is to identify potential problems, and mitigate their chances of happening, or their effects if they do. Fortunately, there are lots of resources to help you do this! Are you working with chemicals?
Check out the supplier’s MSDS, or Material Safety Data Sheet, to learn about any potential reactions they could have. Working with any equipment? You should do what you never do at home: actually read the manual – especially if you haven’t worked with that type of machine before.
One of your biggest resources is the people around you! Engineers work together in teams, so talk to others to get advice, especially people from different disciplines. After you do an extensive safety analysis, you need to have a very thorough safety review during the final part of your design process.
You might conduct a hazard and operability study, or HAZOP, to check everything out. A HAZOP breaks down a big, complex design into smaller, simpler sections called nodes. That helps catch any issues you may have missed while looking at the design as a whole.
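To give a feel for what a "node" looks like on paper, here is a hypothetical sketch of how one HAZOP worksheet entry might be recorded. The node, guide word, causes, consequences, and safeguards below are invented for illustration, not drawn from any real study; in practice a team works through every guide word for every parameter of every node.

```python
# Hypothetical sketch of a HAZOP worksheet entry: a design is split into
# nodes, and guide words (MORE, LESS, NO, REVERSE, ...) are applied to each
# process parameter to surface deviations. Contents below are illustrative only.
from dataclasses import dataclass, field

@dataclass
class HazopEntry:
    node: str          # section of the design under review
    parameter: str     # e.g. flow, temperature, pressure
    guide_word: str    # e.g. "MORE", "LESS", "NO"
    deviation: str     # what the guide word means for this parameter
    causes: list = field(default_factory=list)
    consequences: list = field(default_factory=list)
    safeguards: list = field(default_factory=list)

entry = HazopEntry(
    node="cooling water loop",
    parameter="flow",
    guide_word="NO",
    deviation="no cooling water flow to the heat exchanger",
    causes=["pump failure", "blocked strainer"],
    consequences=["overheating of the process stream"],
    safeguards=["low-flow alarm", "standby pump"],
)
print(f"{entry.node}: {entry.guide_word} {entry.parameter} -> {entry.deviation}")
```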
For instance, something as complicated as a nuclear power plant is built in many smaller stages. You have to find a suitable site, design the power plant to fit that site, fabricate all of your components, build the plant, and then finally get it all powered up and ready to operate. At every step of that process, you need to check out the safety of your operations.
Like when you’re finding a suitable site, you need to make sure you have enough space not just for the reactors, but also for things like ventilation and a safe place to store fuel. But that’s not all. Once you’re out of the design stage and have something up and running, you should conduct safety reviews on a regular basis.
Conditions always change, and things break down in unexpected ways. You want a scheduled process in place to try and catch those changes in a timely manner. And you especially need to have a safety review if you ever change anything, like if you move your operations to a new factory or use different materials to cut cost.
At that point, you should have a formal inspection done by an official representative of either your company or the government, not just for safety, but also for any environmental impacts – which we’ll cover more next time! Now, if something goes wrong and someone gets hurt, you obviously need a thorough review to determine not only the cause, but also to assess if it was anyone’s responsibility. Because the fact is, even after many levels of safety review, things can still go wrong.
You need to know why it happened so that others can avoid making the same mistake. So, you have your resources, like an MSDS, and a safety review, and maybe a HAZOP, but when it comes to your process for ensuring the safety of your project, you can compress all of these safety concerns down to four steps. The first part is “designing out” any hazards.
You want to minimize the inherent dangers of a process as much as possible from the get-go. If you don’t want to start a fire, you’ll want to use fire-resistant materials rather than flammable ones and keep temperatures within an acceptable range. Next, you want to come up with a design that allows for a reasonable margin of error.
If you rate your lift for a dozen people, it had better be able to hold 20, just in case. You may also need what’s called a “safe fail.” This basically means that if your design ends up failing or shutting down, then it does so as safely as possible, rather than ending in some kind of catastrophe. You’ve seen this first hand if your phone has ever overheated and shut down before any real damage happened.
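To put a number on that margin, here is the simple safety-factor arithmetic behind the lift example; the per-person mass is an assumption chosen only for illustration.

```python
# Sketch of the margin-of-error arithmetic from the lift example.
# The per-person mass is an assumed figure used only for illustration.
PERSON_MASS_KG = 75.0          # assumed average passenger mass
rated_passengers = 12          # what the lift is rated for
design_passengers = 20         # what it should actually withstand

rated_load = rated_passengers * PERSON_MASS_KG
design_load = design_passengers * PERSON_MASS_KG
safety_factor = design_load / rated_load   # 20 / 12 ≈ 1.67

print(f"Rated load:  {rated_load:.0f} kg")
print(f"Design load: {design_load:.0f} kg")
print(f"Safety factor: {safety_factor:.2f}")
```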
Finally, you should put in place procedural safeguards. These can range from general safety standards, like training workers and staff, to watching over a process in order to identify any new potential issues. The best systems will have multiple layers of safety involved.
It's a good idea to conduct something like a Layers of Protection Analysis, or LOPA. After making your system as safe as possible, a LOPA helps you add in safety controls like automated responses or warnings. Some of these may be cautionary, like your car’s check engine light, while others will need your immediate attention, like a fire alarm.
Keep building outward and make sure every alarm is properly prioritized. But only if they’re really needed – because too many alarms can actually do more harm than good. Take the Texaco Pembroke Refinery explosion, for example.
In 1994, a large explosion ripped through the Pembroke Refinery in Wales, injuring 26 people. The blast was heard from up to 40 miles away and its shockwave damaged properties within a 10-mile radius. There were many reasons why this explosion occurred, but one was the excessive number of alarms that the operators had to respond to during the emergency. 2,040 total alarms were in the system, with 87% being designated as “high priority”.
But many of them were just given that designation without really addressing how dangerous they actually were. The whole event happened over several hours, but in the last 11 minutes alone, operators had to respond to 275 alarms! They were unable to tell which were critical to safety and which were not, effectively rendering the alarms useless.
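One way to see why an alarm flood is so dangerous is to do the arithmetic: 275 alarms in 11 minutes is roughly one alarm every 2.4 seconds. Here is a hypothetical sketch of triaging an alarm stream by priority; the tags and priorities are made up, and the point is that the filter only helps if priorities were assigned honestly in the first place.

```python
# Hypothetical sketch: triaging an alarm stream by priority. If nearly
# everything is tagged "high", the filter is useless -- the lesson of the
# Pembroke Refinery incident described above.
from collections import Counter

alarms = [
    {"tag": "FT-101 low flow", "priority": "high"},
    {"tag": "TI-204 high temperature", "priority": "high"},
    {"tag": "LI-310 level drift", "priority": "low"},
    # ... during an alarm flood, hundreds more arrive within minutes
]

def critical_alarms(stream):
    """Return only the alarms an operator must act on immediately."""
    return [a for a in stream if a["priority"] == "high"]

counts = Counter(a["priority"] for a in alarms)
print(f"Priority breakdown: {dict(counts)}")
print(f"Alarms demanding immediate action: {len(critical_alarms(alarms))}")

# Rough rate implied by the episode's figures: 275 alarms in 11 minutes.
print(f"Average interval during the flood: {11 * 60 / 275:.1f} seconds per alarm")
```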
So your LOPA needs to help you put an adequate number of alarms and fail-safes in place, but not too many. And it doesn’t stop there. At some point, you should also put in automated systems that will shut down the process when any issues arise.
One of the problems with the Chernobyl accident was that the operators actually disabled the automatic shutdown mechanisms, and the reactor was too unstable by the time they tried to shut it down themselves. You might also need physical protection systems, like ditches around your plant or some high, sturdy walls. And you definitely need not only a plant-wide response system, but also an emergency community response plan.
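As a rough illustration of what an automated trip might look like (not how any real reactor protection system is implemented), here is a minimal interlock sketch: if any monitored value leaves its safe band, or a reading goes missing, the process is driven to a safe state. The sensor names and trip points are hypothetical.

```python
# Minimal sketch of an automated shutdown interlock. If any monitored value
# leaves its allowed band, the trip fires and the process is brought to a
# safe state. Sensor names and trip points are hypothetical.
TRIP_POINTS = {
    "core_temperature_c": (0.0, 320.0),        # (low, high) allowed band
    "coolant_flow_lps": (50.0, float("inf")),  # minimum acceptable flow
}

def should_trip(readings: dict) -> bool:
    """Return True if any reading is missing or outside its allowed band."""
    for sensor, (low, high) in TRIP_POINTS.items():
        value = readings.get(sensor)
        if value is None or not (low <= value <= high):
            return True   # missing or out-of-band reading: fail safe
    return False

def control_step(readings: dict):
    if should_trip(readings):
        print("TRIP: shutting process down to a safe state")
    else:
        print("Readings within limits; continuing operation")

control_step({"core_temperature_c": 305.0, "coolant_flow_lps": 42.0})  # trips on low flow
```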
For nuclear power plants in the US, the Nuclear Regulatory Commission makes sure that each one has a thorough emergency response system set in place. They require every plant to run a full-scale exercise at least once every 2 years to ensure everyone involved, from the operators to the local government, is fully prepared for the worst. If any of this seems like too much, just remember that people’s lives are often in your hands.
While you can’t eliminate problems entirely, you can do a lot to prevent them. Something like a nuclear power plant can actually be pretty safe if you follow the proper protocols, and put an emphasis on safety culture. But fail to do so, and it can have catastrophic effects.
So just make sure you treat safety with the same level of importance as all the other engineering concepts you’ll learn and you’ll be on the best path to success! Today we learned all about safety and its impact on engineering. We saw the difference between occupational safety and public safety and found out how to analyze and review a process for any potential dangers with things like HAZOP.
Then we learned about LOPA and the dangers of having too many alarms, and ended on how important it is to adopt a good mindset of safety culture. I’ll see you next time, when we’ll talk about environmental engineering. Thank you to CuriosityStream for supporting PBS Digital Studios.
CuriosityStream is a subscription streaming service that offers documentaries and non-fiction titles from a variety of filmmakers, including CuriosityStream originals. For instance, CuriosityStream has “Storm Chaser’s Guide,” which lets you (safely, from home) find out what it’s like to sit through a howling hurricane. Emmy award-winning storm chaser Jeff Piotrowski takes us on a breathtaking journey into the eyewall of Hurricane Harvey, one of the deadliest and costliest storms in U.S. history, as it makes landfall in Rockport, Texas. You can learn more at curiositystream.com/crashcourse and use the code crashcourse during the sign-up process. Crash Course Engineering is produced in association with PBS Digital Studios.
Wanna keep exploring our world? Check out Braincraft. Braincraft is a show that discusses common questions related to neuroscience and psychology.
Host Vanessa Hill will help you understand human motivations in real world situations, exploring why we act the way we do. Crash Course is a Complexly production and this episode was filmed in the Doctor Cheryl C. Kinney Studio with the help of these wonderful people.
And our amazing graphics team is Thought Cafe.