


Taking photos used to require technical knowledge and time in a lab, but now we have electronic devices in our pockets that do all of the work for us. How do these miracle devices do it?

Hosted by: Olivia Gordon

If you’re like me, you have a smartphone in your pocket.

It’s easy to forget how incredible it is that one little device can do so many things. Like taking pictures!

At one point, you needed a special camera loaded with light-sensitive film to take a photo. And if you didn’t know what you were doing, all of your pictures would be ruined. But nowadays, you can take hundreds of selfies with just the tap of your finger.

The digital camera inside your phone is doing the heavy lifting, thanks to some feats of electrical engineering. Digital cameras don’t see images the way we do. To them, a picture is just a series of ones and zeros.

So when you take a selfie, the first thing your phone has to do is change light into electronic data that it can understand. Light comes in packets of energy called photons. And to generate an image, a digital camera needs to figure out how many photons are reflecting off your face and coming through the lens, and how much energy those photons have.

To do that, digital cameras usually have an image sensor called a complementary metal-oxide semiconductor, also known as a CMOS sensor. There’s a good chance that this is what’s in your phone camera, or in a DSLR. A CMOS sensor is mostly made up of layers of silicon, which is a semiconductor.

Semiconductors let you control electrical conductivity, or how easily electrons can move through a material. They're usually made of materials like silicon with small amounts of chemical impurities mixed in to tune that conductivity: they don't conduct electricity as well as metals, but they're more conductive than an insulator, like plastic. The CMOS sensor in your phone is made up of a grid of picture elements, known as pixels.

So when you take a selfie, a photon reflects off your face, flies into the camera, and hits a pixel in the CMOS sensor. When the photon hits a silicon atom in a pixel, the atom absorbs the photon’s energy, and its electrons become excited. And if the photon transfers enough energy, one of those excited electrons will get spit out.

This creates a charge in that particular pixel, which the camera measures and interprets as brightness. The more photons that hit a pixel, the stronger the charge, and the brighter that pixel gets. Now, on their own, all these pixels can detect is brightness.
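That photon-counts-to-brightness step can be sketched in a few lines. This is a toy model, not a real sensor pipeline: the linear response and the full-well capacity of 1,000 photons per pixel are made-up assumptions for illustration.

```python
# Toy sketch of a sensor's brightness readout: map per-pixel photon
# counts to 8-bit brightness values, assuming a simple linear response.
FULL_WELL = 1000  # hypothetical number of photons that saturates a pixel

def photons_to_brightness(photon_counts):
    """Convert a 2D grid of photon counts to 0-255 brightness values."""
    return [
        [min(255, round(255 * count / FULL_WELL)) for count in row]
        for row in photon_counts
    ]

counts = [
    [0, 250, 500],
    [750, 1000, 1200],  # the last pixel got more photons than it can hold
]
print(photons_to_brightness(counts))
```

More photons means a higher value, and anything past the full-well limit just clips to pure white, which is why over-exposed highlights lose detail.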

Any picture they take would be black and white. Adding color means there's another piece to this technology puzzle. Like paint, light has three primary colors: for light, they're red, green, and blue.

By mixing them together in different amounts, you can create any other visible color. So to make a color photo, a digital camera has to measure how much red, green, and blue light is hitting the sensor at each pixel, rather than measuring all photons that are hitting each pixel. There are a couple of ways to do this, but one common system, called a Bayer array, involves layering colored filters over groups of four pixels: one red, one blue, and two green.

The array uses two green pixels because our eyes are especially sensitive to green wavelengths of light. With a Bayer array, each pixel can absorb energy from one of the three primary colors of light and become charged. Once the sensor has measured the charges created at each pixel, the camera has to interpret all that color and brightness information to generate an image.
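That repeating red-green-green-blue layout is simple enough to sketch directly. The 4x4 grid size here is just for illustration; a real sensor tiles the same 2x2 pattern across millions of pixels.

```python
# Minimal sketch of a Bayer (RGGB) filter layout: each 2x2 group of
# pixels sits under one red, two green, and one blue filter, and the
# pattern repeats across the whole sensor.
def bayer_color(row, col):
    """Return which color filter sits over the pixel at (row, col)."""
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "B"

# Print the filter layout for a tiny 4x4 patch of sensor
for r in range(4):
    print(" ".join(bayer_color(r, c) for c in range(4)))
```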

There’s a lot that goes into this, but one important step is called demosaicing. Basically, the camera needs to blend the primary colors together. Otherwise, every pixel would be either pure red, green, or blue.

Demosaicing involves some educated guesswork, like estimating how much blue light would've hit a nearby green pixel. Engineers and mathematicians have several methods to calculate this, but most involve algorithms that compare nearby pixels to each other and average out their values. Thanks to this process and some more computing power, your phone will display a picture on its screen.
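The simplest flavor of that averaging idea can be sketched like this. The tiny 2x2 mosaic and its values are invented for illustration, and real camera pipelines use larger neighborhoods and smarter, edge-aware algorithms.

```python
# Rough sketch of the averaging idea behind basic demosaicing:
# estimate a missing color at a pixel by averaging the nearest
# pixels that actually measured that color.
def average_neighbors(mosaic, colors, row, col, want):
    """Estimate color `want` at (row, col) from neighboring samples."""
    samples = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            r, c = row + dr, col + dc
            if (dr or dc) and 0 <= r < len(mosaic) and 0 <= c < len(mosaic[0]):
                if colors[r][c] == want:
                    samples.append(mosaic[r][c])
    return sum(samples) / len(samples)

# Tiny RGGB mosaic: each pixel stores one measured brightness value
colors = [["R", "G"], ["G", "B"]]
mosaic = [[200, 120], [110, 80]]
# The green pixel at (0, 1) never measured blue, so estimate it:
print(average_neighbors(mosaic, colors, 0, 1, "B"))  # averages its one blue neighbor -> 80.0
```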

But if there’s something you want to do with it, like, say, post it to Instagram, there’s often one more step: the image file needs to be compressed. See, digital cameras collect a lot of data very quickly. And if you want to send your picture over the Internet, you usually want to make it smaller.

There are lots of different algorithms for compressing images, but the one you'll see most often on the web was developed by the Joint Photographic Experts Group, or JPEG. JPEG compression divides the picture up into eight-by-eight blocks of pixels and simplifies the information needed to describe them. Compression isn't always perfect: compress too aggressively and the image loses detail and looks blocky and pixelated, while gentler compression preserves more information but leaves the file size pretty big.
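The lossy step at the heart of JPEG can be sketched with a naive discrete cosine transform, or DCT. The flat quantization step of 50 below is a simplifying stand-in: real JPEG uses a full 8x8 table of quantization steps tuned to human vision.

```python
import math

# Sketch of JPEG's lossy core: transform an 8x8 block of brightness
# values with the 2D DCT, then quantize the coefficients, which sends
# most of the high-frequency ones to zero.
N = 8

def dct_2d(block):
    """Naive (slow) 2D DCT-II of an 8x8 block."""
    def alpha(k):
        return math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = sum(
                block[x][y]
                * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                * math.cos((2 * y + 1) * v * math.pi / (2 * N))
                for x in range(N)
                for y in range(N)
            )
            out[u][v] = alpha(u) * alpha(v) * s
    return out

# A smooth gradient block compresses very well: after quantization,
# almost every coefficient is zero, and zeros are cheap to store.
block = [[8 * x + y for y in range(N)] for x in range(N)]
quantized = [[round(c / 50) for c in row] for row in dct_2d(block)]
nonzero = sum(1 for row in quantized for c in row if c)
print(f"{nonzero} of 64 coefficients survive quantization")
```

The surviving handful of coefficients is what gets stored, which is where the big file-size savings come from; detailed, noisy blocks leave more coefficients standing and compress less well.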

But to social media sites, which have to pay to store all those selfies you post, small files are more important. So thanks to a lot of technology working together, in less than a second, your phone can turn light energy into a digital signal, piece that signal together, and compress it into a neat little package, a picture you can share with the world. If you want to learn more about the science of photography, check out this video on why our eyes might glow red in photos.

And thanks for watching!