YouTube: https://youtube.com/watch?v=DEHsvQ3Ylwg
Previous: The Handmaid's Tale, Part 2: Crash Course Literature 404
Next: To Film School or Not To Film School: Crash Course Film Production with Lily Gladstone #14

Categories

Statistics

View count:230,429
Likes:5,749
Comments:196
Duration:12:39
Uploaded:2017-12-06
Last sync:2024-11-28 15:15

Citation

Citation formatting is not guaranteed to be accurate.
MLA Full: "Psychology of Computing: Crash Course Computer Science #38." YouTube, uploaded by CrashCourse, 6 December 2017, www.youtube.com/watch?v=DEHsvQ3Ylwg.
MLA Inline: (CrashCourse, 2017)
APA Full: CrashCourse. (2017, December 6). Psychology of Computing: Crash Course Computer Science #38 [Video]. YouTube. https://youtube.com/watch?v=DEHsvQ3Ylwg
APA Inline: (CrashCourse, 2017)
Chicago Full: CrashCourse, "Psychology of Computing: Crash Course Computer Science #38," December 6, 2017, YouTube, 12:39, https://youtube.com/watch?v=DEHsvQ3Ylwg.
We've spent most of this series talking about computers. Which makes sense - this is Crash Course COMPUTER SCIENCE after all. But at their core, computers are tools employed by humans, and humans are pretty complicated. So today, we're going to discuss some psychological considerations in building computers, like how to make them easier for humans to use, the uncanny valley problem that arises as humanoid robots get more and more humanlike, and strategies to make our devices work better with us by incorporating our emotions and even altering our gaze. Oh, and we'll talk about Carrie Anne's all-time favorite user interface design principle - knurling.

Produced in collaboration with PBS Digital Studios: http://youtube.com/pbsdigitalstudios

Want to know more about Carrie Anne?
https://about.me/carrieannephilbin

The Latest from PBS Digital Studios: https://www.youtube.com/playlist?list=PL1mtdjDVOoOqJzeaJAV15Tq0tZ1vKj7ZV

Want to find Crash Course elsewhere on the internet?
Facebook - https://www.facebook.com/YouTubeCrash...
Twitter - http://www.twitter.com/TheCrashCourse
Tumblr - http://thecrashcourse.tumblr.com
Support Crash Course on Patreon: http://patreon.com/crashcourse
CC Kids: http://www.youtube.com/crashcoursekids

Hi I'm Carrie Anne and welcome to Crash Course Computer Science.

So, over the course of this series, we've focused almost exclusively on computers - the circuits and algorithms that make them tick - because this is Crash Course Computer Science. But ultimately, computers are tools employed by everyone. And humans are, well, messy.

We haven't been designed by human engineers from the ground up with known performance specifications; we can be logical one moment and irrational the next.

Have you ever gotten angry at your navigation system, surfed Wikipedia aimlessly, begged your internet browser to load faster, or named your Roomba? These behaviors are quintessentially human.

To build computer systems that are usable, useful, and enjoyable, we need to understand the strengths and weaknesses of both computers and humans. For this reason, good system designers employ principles from social, cognitive, behavioral, and perceptual psychology when creating software.


 Intro (0:53-1:01)



No doubt you've encountered a physical computer interface that was frustrating to use, impeding your progress. Maybe it was so badly designed that you couldn't figure it out and just gave up. That interface had poor usability.

Usability is the degree to which a human-made artifact, like software, can be used to achieve an objective effectively and efficiently. To facilitate human work, we need to understand humans, from how they see and think to how they react and interact.

For instance, the human visual system has been well studied by psychologists. Like, we know that people are good at ordering intensities of colors. Here are three; can you arrange them from lightest to darkest? You probably don't have to think too much about it.

Because of this innate ability, color intensity is a great choice for displaying data with continuous values. On the other hand, humans are terrible at ordering colors. Here's another example for you to put in order. Is orange before blue or after blue? Where does green go?

You might be thinking we could order these by wavelength of light, like a rainbow, but that's a lot more to think about. Most people are going to be much slower and more error-prone at ordering them.

Because of this innate ineptitude of your visual system, displaying continuous data in colors can be disastrous. You'll find yourself constantly referring back to a color legend to compare items.

However, colors are perfect for when the data are discrete with no ordering, like categorical data. This might seem obvious, but you'd be amazed at how many interfaces get basic things like this wrong.
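To make that concrete, here's a minimal sketch in Python using matplotlib (my illustration, not from the episode): the left panel encodes a continuous value as the intensity of a single hue, which viewers can order at a glance, while the right panel gives unordered categories their own distinct hues.

    import matplotlib.pyplot as plt
    import numpy as np

    # Hypothetical data: one continuous measurement and one category per point.
    rng = np.random.default_rng(0)
    values = rng.random(50)                # continuous values in [0, 1]
    categories = rng.integers(0, 4, 50)    # four unordered categories

    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))

    # Continuous data -> intensity ramp of a single hue: easy to order visually.
    ax1.scatter(range(50), values, c=values, cmap='Blues')
    ax1.set_title('Continuous: intensity ramp')

    # Categorical data -> distinct, unordered hues: easy to tell groups apart.
    ax2.scatter(range(50), values, c=categories, cmap='tab10')
    ax2.set_title('Categorical: distinct hues')

    plt.show()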

Beyond visual perception, understanding human cognition helps us design interfaces that align with how the mind works. Like humans can read, remember, and process information more effectively when it's chunked. That is, when items are put together in small, meaningful groups.


Humans can generally juggle 7 items, plus or minus 2, in short-term memory. To be conservative, we typically see groupings of five or fewer. That's why telephone numbers are broken into chunks, like 317-555-3897.

Instead of being ten individual digits that we'd likely forget, it's three chunks, which we could handle better. From a computer standpoint, this needlessly takes more time and space, so it's less efficient.

But it's way more efficient for us humans - a trade-off we almost always make in our favor, since we're the ones running the show. For now.
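As a toy example of that trade-off (mine, not the episode's), here are a few lines of Python that spend extra characters - and a little extra computation - purely to make a number friendlier for human short-term memory:

    def chunk_phone_number(digits: str) -> str:
        """Group a 10-digit number into the familiar 3-3-4 chunks."""
        assert len(digits) == 10 and digits.isdigit()
        # The hyphens cost the computer two extra bytes; they buy the
        # human three chunks instead of ten loose digits.
        return f"{digits[:3]}-{digits[3:6]}-{digits[6:]}"

    print(chunk_phone_number("3175553897"))  # -> 317-555-3897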

Chunking has been applied to computer interfaces for things like drop-down menu items and menu bars with buttons. It would be more efficient for the computer to just pack all of those together, edge to edge - chunking wastes memory and screen real estate. But designing interfaces this way makes them much easier to visually scan, remember, and access.

Another essential concept used in interface design is affordances. According to Don Norman, who popularized the term in computing, affordances provide strong clues to the operations of things. Plates are for pushing. Knobs are for turning.

Slots are for inserting things into. When affordances are taken advantage of, the user knows what to do just by looking - no picture, label, or instruction needed. If you've ever tried to pull a door handle only to realize that you had to push it open, you've discovered a broken affordance.

On the other hand, a door plate is a better design, because it only gives you the option to push. Doors are pretty straightforward; if you need to put instructions on them, you should probably go back to the drawing board.

Affordances are used extensively in graphical user interfaces, which we discussed in episode 26. They're one of the reasons why computers became so much easier to use than command-line interfaces.

You don't have to guess what things on screen are clickable, because they look like buttons. They pop out, just waiting for you to use them.

One of my favorite affordances, which suggests to users that an onscreen element is draggable, is knurling - that texture added to objects to improve grip and show you where best to grab them. This idea and pattern was borrowed from real-world physical tools.

Related to the concept of affordances is the psychology of recognition versus recall. You know this effect well from tests; it's why multiple-choice questions are easier than fill-in-the-blank ones.

In general, human memory is much better when it's triggered by a sensory cue, like a word, picture, or sound. That's why interfaces use icons - pictorial representations of functions, like a trash can for where files go to be deleted. We don't have to recall what that icon does; we just have to recognize it.

This was also a huge improvement over command-line interfaces, where you had to rely on your memory for what commands to use. Do I have to type "delete" or "remove" or "trash" or "shoot"? It could be anything.

It's actually "rm" in Linux, but anyway, making everything easy to discover and learn sometimes means it's slow to access, which conflicts with another psychology concept: expertise.

As you gain experience with interfaces, you get faster, building mental models of how to do things efficiently. So, good interfaces should offer multiple paths to accomplish goals. 

A great example of this is copy-and-paste, which can be found in the Edit drop-down menu of word processors, and can also be triggered with keyboard shortcuts. One approach caters to novices while the other caters to experts, slowing down neither.

So you can have your cake and eat it too!
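Here's a minimal sketch of that dual-path pattern using Python's built-in tkinter toolkit (my choice of toolkit, not the episode's): one handler is wired to both a browsable menu item for novices and a keyboard shortcut for experts.

    import tkinter as tk

    def copy_selection(event=None):
        # One handler serves both paths; event is None for menu clicks.
        print("Copied!")

    root = tk.Tk()
    menubar = tk.Menu(root)
    edit_menu = tk.Menu(menubar, tearoff=0)

    # Discoverable path: novices browse the menu and see the shortcut advertised.
    edit_menu.add_command(label="Copy", accelerator="Ctrl+C", command=copy_selection)
    menubar.add_cascade(label="Edit", menu=edit_menu)
    root.config(menu=menubar)

    # Fast path: experts fire the same action without leaving the keyboard.
    # (The accelerator label above is only text; this binding does the work.)
    root.bind("<Control-c>", copy_selection)

    root.mainloop()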

In addition to making humans more efficient, we'd also like computers to be emotionally intelligent, adapting their behavior to respond appropriately to their user's emotional state, also called affect. 


That would make experiences more empathetic, enjoyable, or even delightful. This vision was articulated by Rosalind Picard in her 1995 paper on affective computing, which kickstarted an interdisciplinary field combining aspects of psychology and the social and computer sciences.

It spurred work on computing systems that can recognize, interpret, simulate, and alter human affect. This was a huge deal, because we know emotion influences cognition and perception in everyday tasks like learning, communication, and decision-making.

Affect-aware systems use sensors, sometimes worn, that capture things like speech and video of the face, as well as biometrics like sweatiness and heart-rate.

This multi-modal sensor data is used in conjunction with computational models that represent how people develop and express affective states, like happiness and frustration, and social states, like friendship and trust.

These models estimate the likelihood of a user being in a particular state and figure out how best to respond to that state in order to achieve the goals of the system. This might be to calm the user down, build trust, or help them to get their homework done.
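As a purely hypothetical sketch of what such a model might look like (the features, weights, and threshold here are invented for illustration; real systems learn them from labeled sensor data):

    import math

    def frustration_probability(speech_pitch, heart_rate, brow_furrow):
        """Toy affect model: combine normalized (0-1) multimodal features
        into a likelihood that the user is frustrated."""
        score = 1.8 * speech_pitch + 1.2 * heart_rate + 2.0 * brow_furrow - 2.5
        return 1 / (1 + math.exp(-score))  # squash the score into [0, 1]

    p = frustration_probability(speech_pitch=0.7, heart_rate=0.6, brow_furrow=0.9)
    if p > 0.5:
        # Respond to the estimated state: here, soften the interface's tone.
        print(f"User may be frustrated (p={p:.2f}); dialing back the chirpiness.")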

A study looking at user affect was conducted by Facebook in 2012. For one week, data scientists altered the content of hundreds of thousands of users' feeds. Some people were shown more items with positive content, while others were presented with more negative content.

The researchers analyzed people's posts during that week and found that users who were shown more positive content tended to also post more positive content. On the other hand, users who saw more negative content tended to have more negative posts.

Clearly, what Facebook and other services show you can absolutely have an effect on you. As gatekeepers of content, that's a huge opportunity and responsibility. Which is why this study ended up being pretty controversial.

Also, if the user is being negative, maybe the computer shouldn't be annoying by responding in a cheery, upbeat manner. Or maybe the computer should attempt to invoke a positive response, even if it's a bit awkward. The correct behavior is very much an open research question.

Speaking of Facebook, it's a great example of computer-mediated communication, or CMC, another large field of research. This includes synchronous communication, like video calls, where all participants are online simultaneously, as well as asynchronous communication, like tweets, emails, and text messages, where people respond whenever they can or want.

Researchers study things like the use of emoticons, rules such as turn-taking, and language use in different communication channels.

One interesting finding is that people exhibit higher levels of self-disclosure - that is, reveal personal information - in computer-mediated conversations, as opposed to face-to-face interactions.

So if you want to build a system that knows how many hours a user truly spent watching The Great British Bake Off, it might be better to build a chatbot than a virtual agent with a face.

Psychology research has also demonstrated that eye gaze is extremely important in persuading, teaching, and getting people's attention. Looking at others while talking is called mutual gaze. This has been shown to boost engagement and help achieve the goals of a conversation, whether that's learning, making a friend, or closing a business deal.

In settings like a videotaped lecture, the instructor rarely, if ever, looks into the camera, and instead generally looks at the students who are physically present. That's okay for them, but it means people who watch the lecture online have reduced engagement.

In response, researchers have developed computer vision and graphics software that can warp the head and eyes, making it appear as though the instructor is looking into the camera, right at the remote viewer.

This technique is called augmented gaze. Similar techniques have also been applied to video conference calls to correct for the placement of web-cams, which are almost always located above screens.

Since you're typically looking at the video of your conversation partner rather than directly into the webcam, you'll always appear to them as though you're looking downwards, breaking mutual gaze, which can create all kinds of unfortunate social side effects, like a power imbalance.

Fortunately, this can be corrected digitally and appear to participants as though you're lovingly gazing into their eyes.

Humans also love anthropomorphizing objects and computers are no exception. Especially if they move like our robots from last episode. Beyond industrial uses that prevailed over the last century, robots are used increasingly in medical, educational, and entertainment settings, where they frequently interact with humans.

Human-robot interaction or HRI is a field dedicated to studying these interactions, like how people perceive different robot behaviors and forms or how robots can interpret human social cues to blend in and not be super awkward.

As we discussed last episode, there's an ongoing quest to make robots as human-like in their appearance and interactions as possible. When engineers first made robots in the 1940s and 50s, they didn't look very human at all. They were almost exclusively industrial machines with no human likeness.

Over time, engineers got better and better at making human-like robots. They gained heads and walked around on two legs. But they couldn't exactly go to restaurants and masquerade as humans.

As engineers pushed closer and closer to human likeness, replacing cameras with artificial eyeballs and covering metal chassis with synthetic flesh, things started to get a little bit uncanny, eliciting an eerie and unsettling feeling.

This dip in realism between almost human and actually human became known as the Uncanny Valley.

There's debate over whether robots should act like humans too. Lots of evidence already suggests that even if robots don't act like us, people will treat them as though they know our social conventions. And when they violate these rules, such as not apologizing if they cut in front of you or roll over your foot, people get really mad.

Without a doubt, psychology and computer science are a potent combination and have a tremendous potential to affect our everyday lives. Which leaves us with a lot of questions.

Like, you might lie to your laptop, but should your laptop lie to you? What if it makes you more efficient or happy? Or should social media companies curate the content they show you to make you stay on their site longer? To make you buy more products? They do, by the way.

These types of ethical considerations aren't easy to answer, but psychology can at least try to help us understand the effects and implications of design choices in our computing systems.

But, on the positive side, understanding the psychology behind design might lead to increased accessibility. A greater number of people can understand and use computers now that they're more intuitive than ever. 

Conference calls and virtual classrooms are becoming more agreeable experiences. And as robot technology continues to improve, the population will grow more comfortable in those interactions.

Plus, thanks to psychology, we can all bond over our love of knurling. I'll see you next week!



 Credits (12:06)



Crash Course Computer Science is produced in association with PBS Digital Studios. At their channel, you can check out a playlist of shows like Eons, PhysicsGirl, and It's Okay to Be Smart.

This episode was filmed at the Chad and Stacey Emigholz Studio in Indianapolis, and it was made with the help of all these nice people and our wonderful graphics team, Thought Cafe.

Thanks for the random access memories! I'll see you next time.