Why the locker-picture sub-plot in Clueless is everything

I’ve been thinking a lot about the sub-plot in Clueless where Cher is trying to hook Ty up with Elton, so she arranges that group photo shoot and takes a bunch of pictures: of everyone, of Elton and Ty, and of just Ty, and then gives a picture of Ty to Elton, because that is somehow supposed to make him want to date Ty. Later she sees it hanging in his locker and takes that as proof that Elton is totally into Ty, even though freaking duh he is not. Even later we find out that Elton put it in his locker because Cher took the picture and he is totally into Cher, not Ty, DUH CHER. This confusion causes lots of angst for Ty, which is rude, and presumably some amount of confusion for Elton, but no one cares because he sucks anyway.

Setting aside the fact that it is very weird of Elton to put up a picture of Ty in his locker if he doesn’t like Ty, regardless of who took the picture, and setting aside the assumption that there’s presumably an equivalent subplot in Emma, which honestly I could never watch all the way through because it’s way boring compared to Clueless (and we also assume there’s a similar subplot in Emma the book, which I also don’t think I ever finished because ibid.), I keep coming back to Cher’s mistake as one of the most important lessons for our time, one that everyone would do well to remember.

Lessons from Clueless

This is where I show how this Clueless subplot provides object lessons in statistical reasoning, machine learning, human psychology, and even product design. I am totally not kidding.

One of Cher’s most basic errors is that she creates a self-fulfilling prophecy and then goes looking for evidence that it has already come true. This is a boring and obvious observation. Cher is looking for any clue that supports not only her theory that Elton likes Ty but also her mission to make Elton like Ty and to set Ty up with him. For someone who is supposedly both very smart and, according to Cher herself, really good at reading people, this is a pretty bad slip-up. Confirmation bias is like the most basic thing you learn in science to watch out for, and Cher fails at it. (However, I remain convinced by her position in debate class about violence in video games.)

So we have lesson 1 from Cher: Always check yourself for confirmation bias.

What Cher’s fuck-up also reminds us is that you can’t assume why someone does something. We can extend this lesson to what we learned in statistics, about correlation and causation. Just because there is a picture of Ty, and it is in Elton’s locker, does not mean that the picture of Ty caused it to be in Elton’s locker. The Ty-ness of the picture happened to correlate with its presence in the locker, but both are also correlated with the Cher-ness of the photographer. And it was the Cher-ness of the photographer that caused the being-in-the-locker-ness, not the Ty-ness of the picture. This is kind of a crappy example, not least because I still think Elton’s rationale for putting up the picture at all was pretty boneheaded. He could have picked any of the group pictures and it would have been 1000 times less weird for him to put it in his locker, but then there wouldn’t have been any conflict between our two protagonists, so you know, sometimes logic must suffer.

Lesson 2 from Cher: Correlation does not imply causation.

Okay. Those are the boring lessons. Now let’s get to the fun ones. Cher’s fuck-up is a great opportunity to think about ML recommendation models. Similar to the correlation-causation dilemma, and in slightly more computational terms: just because you did an input (Cher took a picture) and there was an output (Elton put it up in his locker) does not mean you (Cher) know why the input produced the output it did. To assume you know the reason why, like Cher did, can only cause heartache and the burning of cassettes in fake fireplaces.

To explain, pretend you are Netflix:

> Input: You suggest a movie to a Netflix user.

> Output: Netflix user watches the movie.

As Netflix, you want to give the user a bunch of other movies they will watch. Based on the information you have (the movie that they watched), you will suggest to them some more movies. But how?

Much like Cher’s picture, the movie you watched has lots of different features. Cher’s picture has features like the subject (Ty, or the flower, maybe); the photographer (Cher); the format (small rectangle, color). The movie you watched has features like the actors; the director; the characters; the genre; the language; and so on and so forth. Now, if Cher were a machine learning model, she would be a very naive one. She would be a model that takes into account only one feature (the subject of the image) and bases her conclusions on that one feature alone. If she were to give Elton more pictures she thought he would like, they would all have to include Ty as the subject.
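
(If you want to see just how naive that is, here’s a tiny Python sketch of Cher-as-a-one-feature-recommender. All the pictures and feature names are made up for illustration; this is emphatically not how any real recommender works, least of all Netflix’s.)

```python
# A tiny sketch of Cher-as-a-one-feature-recommender.
# All pictures and features below are made up for illustration.

liked_picture = {"subject": "Ty", "photographer": "Cher", "format": "small color rectangle"}

candidate_pictures = [
    {"subject": "Ty", "photographer": "Cher", "format": "small color rectangle"},
    {"subject": "Elton and Ty", "photographer": "Cher", "format": "small color rectangle"},
    {"subject": "a flower", "photographer": "Cher", "format": "small color rectangle"},
]

def cher_recommends(liked, candidates):
    """Recommend only pictures whose subject matches the one picture Elton 'liked'."""
    return [pic for pic in candidates if pic["subject"] == liked["subject"]]

print(cher_recommends(liked_picture, candidate_pictures))
# -> more pictures of Ty, and nothing but pictures of Ty, forever
```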

This is obviously a bad model, not just because we already know Ty isn’t the reason Elton liked the picture (he was responding to a different feature altogether), but also because we can assume there are other picture subjects that might interest him. Thus, Cher-the-ML-model is also setting herself up for a lot of False Negatives (things she doesn’t give to Elton but that he would like). She’ll also get a lot of False Positives which Elton would start to notice pretty quickly, because she’ll just keep handing him Ty pictures from many different photographers, maybe all of her school pictures since kindergarten, until he’s like “what the fuck Cher”.
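
(To put rough numbers on the heartache, here’s a toy tally of those false positives and false negatives, assuming, as the movie eventually tells us, that Elton’s real criterion is “Cher took it” while Cher’s model only checks “is Ty in it.” The candidate pictures are invented for illustration.)

```python
# A toy tally of Cher's false positives and false negatives.
# Assumption (from the movie's reveal): Elton's real criterion is the photographer,
# not the subject. The candidate pictures are invented for illustration.

candidates = [
    {"subject": "Ty", "photographer": "Cher"},
    {"subject": "Ty", "photographer": "some kid from kindergarten"},
    {"subject": "a flower", "photographer": "Cher"},
    {"subject": "Elton and Ty", "photographer": "Cher"},
]

def cher_model(pic):
    # What Cher predicts Elton likes: pictures of Ty.
    return pic["subject"] == "Ty"

def elton_truth(pic):
    # What Elton actually likes: pictures Cher took.
    return pic["photographer"] == "Cher"

false_positives = [p for p in candidates if cher_model(p) and not elton_truth(p)]
false_negatives = [p for p in candidates if not cher_model(p) and elton_truth(p)]

print(len(false_positives))  # Ty pictures Elton shrugs at: "what the fuck Cher"
print(len(false_negatives))  # Cher-shot pictures she never even hands over
```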

If Netflix were like Cher, it would know the movie you watched had a main character named Roger and would assume that is what you care about when choosing a movie. (If it were really like Cher, it would make that assumption because it thought you and Roger might have a romantic entanglement and would suggest you date. Luckily, there has not yet been a Netflix/Tinder merger PLEASE NEVER DO THIS.) Thus, it would then only suggest movies for you with main characters named Roger. You might not have anything against Rogers, but we can assume based on normal human behavior that there are other things more influential in your choice to watch the movie. This means Netflix would miss out on a whole lot of opportunities to give you good movie recommendations if it were only recommending based on one feature (even if it were a less-dumb feature than “characters named Roger”).

I do want to point out here that Netflix’s sometimes-seemingly-tongue-in-cheek genre categorizations, like “British murder mystery comedies with strong female leads set in the future” (approximately something that has been recommended to me), are actually a brilliant way for Netflix to show us the kinds of interpolations and extrapolations it’s making from our watching patterns, and they go to show the depth of features that are considered in making recommendations.

If we were to improve Cher’s ML model, we would want to take lots of different pictures (okay, she already did), and give lots of them to Elton. Then, based on what we know about the pictures (i.e. their features) and what Elton chooses to do with them (put them up in his locker, or not), we can make some reasonable inferences about what kinds of pictures Elton likes, and give him increasingly better pictures in the future. Unfortunately for Cher-the-ML-model, we as machine learning experts know that the feature Elton cares about is present in all of Cher’s photographs, so the data set is fundamentally flawed: a feature that never varies can’t tell you whether it’s the thing driving the outcome. If Cher augmented her data set with some pictures from, say, Ansel Adams, we could get a better signal on what Elton is into.
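
(Here’s a rough sketch of that better model, still on made-up data: hand Elton lots of pictures, record which ones go up in the locker, and then look at how often each feature value ends up locker-ed. It also shows the data set problem: since every picture has photographer = Cher, that column can’t tell us anything until she mixes in somebody else’s photos.)

```python
from collections import defaultdict

# A rough sketch of the "better" Cher model, on made-up data:
# give Elton lots of pictures, record which ones go up in the locker,
# then see how each feature value correlates with locker-worthiness.

labeled_pictures = [
    ({"subject": "Ty", "photographer": "Cher"}, True),
    ({"subject": "a flower", "photographer": "Cher"}, True),
    ({"subject": "Elton and Ty", "photographer": "Cher"}, False),
    ({"subject": "Ty", "photographer": "Cher"}, False),
    # Every row has photographer = "Cher", so that column carries no signal.
    # Augmenting with, say, an Ansel Adams landscape would fix that:
    # ({"subject": "a mountain", "photographer": "Ansel Adams"}, False),
]

def locker_rates(examples):
    """For each (feature, value) pair, the fraction of pictures Elton put up."""
    counts = defaultdict(lambda: [0, 0])  # (feature, value) -> [put_up, total]
    for features, put_up in examples:
        for feature, value in features.items():
            counts[(feature, value)][0] += int(put_up)
            counts[(feature, value)][1] += 1
    return {key: up / total for key, (up, total) in counts.items()}

print(locker_rates(labeled_pictures))
# ("photographer", "Cher") shows up in every row, so its rate is just the overall
# base rate and tells us nothing about whether Cher-ness is what Elton cares about.
```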

What machine learning fundamentally tries to do is interpolate what it is about a given input that makes a given output true (it may then take those learnings and try to extrapolate new outputs from new inputs, which is a clause I’m only throwing in here to trigger my SO, as it follows a lengthy debate about what “interpolate” means and how to use it appropriately in this context). Cher, the human, which is the thing most ML models are ostensibly created to (try to) replicate, completely fails at this.

Lesson 3 from Cher: make sure you have a good data set before you try to draw conclusions.

The last lesson we can learn from Cher, and this is a bit more nebulous, which is my word of the year and it’s about ready to be retired, so let me know if you have any recommendations for good substitutes, is that you have to know your users. Cher has a bunch of goals in mind: getting a boyfriend, getting Ty a boyfriend ... oh yeah, and getting her driver’s license. She makes a bunch of schemes to achieve these goals, but she doesn’t really bother to find out what her users’ (i.e. her friends’) goals are. Obviously, as we now know from the locker picture debacle, Elton’s goal is to date Cher, but both Cher’s and Ty’s goal is to get Elton to date Ty. Cher’s goal is also to date Christian, but Christian’s goal is to date a man.

Cher’s schemes, whether it’s the photo for the locker or the cookies in the oven, make the basic and fatally flawed assumption that the person who is the target of her schemes has the same goal that she does (or that she can coerce them into aligned goals). If Cher had just been like, “hey Elton, what are you interested in?” then he could have just been like “you” and she could have just been like “ew,” and she wouldn’t have even wasted any time with the whole cockamamie photo shoot nonsense. It’s like if Netflix released a feature that was like “make your own movie” because they wanted some new content, but obviously no one is actually going to make their own movie because this isn’t TikTok for crying out loud, so then no one used the feature. (Thinking of really dumb features is kind of hard, so just go with me on this one.) Or maybe it would be like if Netflix was only offering you movies to watch but you were actually trying to just watch a show. Or, simply, Netflix is just offering you movies that you don’t want to watch, just like Cher is offering you people that you don’t want to date. Instead she just wastes everyone’s time, but at least we get a movie out of it.

Understanding what people want (who they want to date, what kind of feature or product they want to interact with, what problems they have) before you offer them something (a date, a product, a solution) is step 1 to being a good friend and doing good product design. If it doesn’t solve the user problem then it won’t solve the business problem; if you’re Cher, if it doesn’t solve your friend’s problem then it probably won’t solve your problem, either. Cher’s problem is she doesn’t have a boyfriend or a driver’s license. Once she realizes that she doesn’t have all the answers (i.e. she starts listening to her friends), she finally gets that boyfriend and her friends start dating the people they want to date. Except Elton, because he still sucks. And Cher gets to date Paul Rudd, who we still love even after all these years, heart eyes emoji.

Lesson 4 from Cher: you have to know what your users want in order to solve their problem.

I’m sure there are other important life lessons we can draw from Clueless — after all, it is only one of the most important movies of our generation — but that’s all for now.

Bare Hands, Bare Feet, Crushed Skulls

I sent the following text message this morning:

"I'm getting really good at killing fruit flies with my bare hands."

It’s true; I’ve snagged two of them recently, my fist closing quickly like the tongue of a frog, and I intend to keep practicing.

But as soon as I sent it I realized how strange this usage is — had I been, perhaps, wearing gloves, I still would have made the same boast. Which in turn made me wonder, why do we use “bare hands” to mean “without tools”, but “bare feet” to mean only, literally, without shoes or socks?

Continue reading

Fried Dandelions: an Ode to the Internet


The internet fucking sucks. It is terrible and is ruining everything. At least, the people on the internet are terrible and are ruining everything.

The internet itself is an amazing place. It’s the kind of place you go when someone says “fried dandelions” and you say “I’m going to go find out about that” and so you internet, and you do. Go ahead and look. It’s not quite as saturated a market as, let’s say, basil pesto, but there’s enough to go on.

Continue reading

But is it authentic? A pseudo-linguistic typological framework of authenticity

Recently, I’ve been bothered by a perceived over-use of the word (and concept of) “authentic”. It’s become a potent buzz-word at least within the food media world, and I’ve noticed it increasingly, perhaps because I’m primed for it, across other conversations, as well.

I’ve been spending a lot of time mulling this over in my mind, and I’ve decided that there are some uses that bother me, and some that don’t so much. They can roughly be divided into two classes: internally-ordained authenticity and externally-ordained authenticity. This is just what I’ve come up with over several weeks of casual ruminating; the world has no shortage of other classification systems, such as those discussed here, which to some extent overlap with the way I am seeing this proposed dichotomy. And there are plenty of uses of the word that don’t really fit neatly into these two classes, either. I use them only as a proxy to discuss the way conversations about experiential (and cultural) phenomena take place and the power dynamics within them.

Continue reading

Being racist is still real.

Two girls walk onto a train, talking quietly amongst themselves.

As they sit down, a woman leans out from the row behind them. “This is the quiet car.” The girls stop, taken aback.

“You don’t need to tell me that,” says one of them.

“Well you were talking …”

“We were talking to get on the train.” *Looks incredulous. I, also, felt incredulous.* They pick up their things which they’d just set down and head back out of the car.

“I was just trying to be nice!” Fruitlessly. But were you?

Pop quiz: who in this story is white, and who is black?

Review of the Day: Mansplaining.

Today, I want to talk about how really annoying shit can happen sometimes even when you are having a sweet day of skiing at a sick mountain. Case in point: on day 3 (and final ski day) of a trip to Mt. Bachelor (thanks, now-defunct MAX Pass), I was teaching my friend and ski buddy on the trip how to telemark, a sport that I have engaged in for the past 8 (eight) (8) consecutive ski seasons, exclusively.

I shouldn’t need to credential myself to set up this story, but I will anyway, just to quash the temptation to naysay my point of view.

Continue reading

Pheasants & Barley: idealized Nordic cuisine, it’s what’s for dinner

Tonight’s dinner menu is brought to you by this beautiful coffee table decoration that doubles as a cookbook:

Upon hearing that dinner was going to be a frittata (boring: I love breakfast for dinner as much as the next gal, but when you eat an egg sandwich almost every morning, doing it again ten hours later just seems uninspired), I opened the cookbook nearest my hand for some more out-of-the-ordinary inspiration. That cookbook happened to be Fire and Ice, a cloth-covered photo-essay-cum-“home-cooking”-expedition through the Great White North.

Continue reading

Rant/Ramble of the Day: Thoughts on Plan B (not that kind)

Warning: too long and mostly un-edited.

After reading this post (read it, especially if you’ve ever wondered why not to get a PhD), and then this one (also read it), something very particular stuck out to me:

> You didn’t have a Plan B and that was stupid. What did you think would happen?

This strikes me as intellectually dishonest on the part of the people asking this question. Our society rewards drive, single-minded pursuit, having one over-arching mission in life (or at least appears to reward it, in such a way as to discourage individuals from having many different goals and interests). The heroes in our society are the ones who(se narratives suggest they) dedicated their lives to achieving one great thing. Martin Luther King. Steve Jobs. Amelia Earhart. We don’t have any renaissance men or women anymore, not really. If we do, we have Noam Chomsky, and even he’s more of a two-trick pony. It is both impossible and dispreferred, in this day and age, to excel in multiple fields. No longer can a single public intellectual (or private intellectual, or academic) write and speak with authority and respectability in disciplines ranging from ethics to economics to ecology.

Continue reading