A Map of Typical Positions on Technology and Culture

In this post, I want to step back a bit from historical details in order to do some broad-stroke theory. I want to build a map for you that should help give you some orientation when wading into various writing on the technology and culture relationship. Those of you who study this all the time will probably find this post a bit of a review, and if that’s the case, feel free to skip it. But if you tend to find yourself getting more and more perplexed when reading conflicting perspectives on technology, this post should help you get your bearings.

Let’s start our map by laying out a spectrum on the horizontal axis.

Whenever an author theorizes the technology and culture relationship, that author must deal with one of the most basic questions in the field: in what direction do the influences flow? That is, does technology “impact” culture, does culture shape technology, or do both happen simultaneously? How an author answers this question can be plotted on this spectrum.

At one extreme is the position of technological determinism. People who subscribe to this position believe that technologies impact an adopting culture in a kind of one-way, deterministic relationship. Technologies are seen as powerful, non-neutral forces that carry with them moral consequences and produce deterministic effects. Extreme technological determinists also tend to think of technology as an autonomous force that actually guides and determines its own development. As one of my professors used to say, a strong technological determinist believes that once someone invents the techniques for radar, it’s really only a matter of time before we get the microwavable burrito.

On the other extreme is the position of social determinism, which is sometimes called instrumentalism by philosophers of technology. Extreme social determinists see technologies as completely neutral artifacts that can be used for good or for evil depending on the desires of the adopting individual or culture. This kind of position is wonderfully summarized by that well-known motto of the National Rifle Association (NRA): “guns don’t kill people; people kill people.”

I’ve portrayed these positions as extreme ends of a spectrum because it’s important to realize that very few authors subscribe to either of these positions wholeheartedly. Some certainly lean farther to one side or the other, but we should avoid labeling any author as being strictly a technological determinist or a social determinist. Most sit somewhere in between the extremes, which leads us to that position at the center: the social-shaping perspective.

The social-shaping of technology (SST) perspective acknowledges what is obviously true about both of the more extreme positions: technologies certainly do affect an adopting culture in significant ways, but historical cases also show quite clearly that engineers and adopting cultures play important roles in reshaping those technologies to better fit with their existing social values. SST sees technology and culture as “mutually constitutive” (MacKenzie & Wajcman 1999), each creating and shaping the other. In other words, “guns don’t kill people, but they sure make it a heck of a lot easier.”

To complete our map, we need to add a vertical dimension to our existing horizontal one:

This vertical axis represents the moral attitude an author takes towards technological change. At one extreme is techno-optimism, a belief that our technologies are making the world a better place. In its most extreme form, techno-optimism elevates technology to the position of savior, the ultimate tool with which we can save ourselves and create a utopia on earth. This position is excited about the possibilities of new technologies and says “full steam ahead” to any and all technological development.

At the other extreme is techno-pessimism, a position that sees technology not as a savior, but as a destroyer. Techno-pessimists think that technology is making the world a worse place, and that it might just end up killing us all (think nuclear holocaust, genetic engineering gone awry, sentient robots that turn against us, etc.). This position tends to pine for the simpler days before industrialization, and is sympathetic towards Romanticism.

As with the other axis, this is of course a spectrum and most authors situate themselves somewhere in between the two extremes. At the very middle is a position I’ve called “double-edged sword.” This position argues that every technological change brings with it a wide array of consequences, some of which can be considered ‘good’, others ‘bad’, depending on your perspective. The costs and benefits of an innovation are never equally distributed in a given society, so whether you think a given technology is making the world better or worse largely depends on whether you received more of its benefits and less of its costs, or vice-versa.

Putting it all together, we get a map that looks something like this:

Most critics of technology (Christian or secular) tend to sit somewhere in the lower-left quadrant. They lean towards technological determinism, and they are generally pessimistic about future technological change. Jacques Ellul seems the most pessimistic to me—his book The Technological Society is almost fatalistic. Neil Postman is closer to the double-edged sword position, but he is still overall more pessimistic than optimistic. Marshall McLuhan is an unapologetic technological determinist, but he is far less pessimistic than other Christian critics.

In the upper-left quadrant we find people like Ray Kurzweil, who is extremely excited about the potential for a full human-machine integration. His belief in the inevitability of the “singularity” puts him on the technological determinist side, but unlike McLuhan or Ellul, he sees technology as a potential savior of humanity.

At the extreme corner of the upper-right quadrant would be the NRA sentiment I discussed earlier. The Social Construction of Technology (SCOT) position is probably the most social determinist theory I know of, but it takes a very neutral view on whether technology is making the world better or worse. The Social Shaping of Technology (SST) position appears on the map twice because the first edition of MacKenzie & Wajcman’s book (1985) was far more social determinist than the second edition (1999), which took a much more balanced tone.

Interestingly, I don’t yet know of any author who would fit into the lower-right quadrant, probably because those who lean towards social determinism rarely have an overly pessimistic view of technology.

Does this help you navigate your way around the various positions you may have encountered? Where would you place your favorite authors on this map?

Patterns of Use

Did you give something up for Lent this year? This is that time of year when many Christians choose to give up something in order to sharpen their attention in preparation for Easter. I’ve observed this tradition haphazardly in the past, but this year I decided to experiment with giving up something that I have lately been feeling a little too addicted to: Facebook.

I’ve been spending way too much time on Facebook lately. Google’s Chrome web browser shows you a list of your most-visited web sites when you open a new tab, and Facebook has been at the top of that list for some time now. Like many people, I tend to check Facebook several times a day, whenever I’m feeling bored or have a little time to kill. I enjoy being able to keep up on the lives of my friends, many of whom are scattered far away from my little corner of the world. I love reading their pithy comments, seeing pictures of their kids, reading what they found interesting, and laughing along with them at the never-ending stream of funny pictures that quickly spread through the social network.

But I’ve noticed over the years that the way I use Facebook has changed a few times. When I first joined in 2007, I mostly used it to reconnect with old college and high school friends. I would run across someone I used to know, friend them, and then exchange a few private messages to find out how their life turned out.

That worked well for a while, but then I had to figure out what to post on my own profile. Early posts were scans of old photos and bad attempts at being witty, but I soon settled into posting what I was making for dinner that night, and providing the corresponding recipe as a note. My profile quickly became a sort of cookbook, and some of my friends started to reciprocate.

I eventually ran out of recipes, however, and as I became friends with more and more people from various peripheral areas of my life, I began to pay attention to how my posts would make me look to these people, who were really more like acquaintances or work colleagues than personal friends. In our social lives, we tend to project slightly different versions of ourselves to different groups, wearing costumes and adopting personas that allow us to fit better into those contexts. The same is true on Facebook, which is why the company has been trying to make it easier to group your friends and post some things to one group, but not to others. But it’s still way too easy to make a mistake and post something you’d rather not share with that prospective employer or those highly conservative relatives.

Since Facebook’s grouping features have been fairly difficult to use so far (this is one area where Google+ really did much better), I chose instead to restrict my posts to only those things that I felt comfortable sharing with everyone. Now I tend to share only news articles that I found particularly interesting (and not too controversial), and links to my own blog posts.

When I reflect on all of this, I see something interesting. Through my usage, I’ve made Facebook into three different kinds of tools: a global directory for reconnection; a social recipe exchange; and a mechanism for shameless self-promotion. When I look at what my friends tend to post, I see even more distinct kinds of use: asking for advice; recruiting volunteers; communicating with students; organizing events and reunions; and providing space for dialog about current issues (though that last one rarely seems to go well).

Notice that all of these patterns of use go beyond the shallow forms of sharing and socializing that critics of Facebook assume are the only possible uses of the service. While it is true that Facebook might encourage its customers to use the service in a particular sort of way, it does not completely determine how any particular person might use it. The distinction is important. It is the difference between thinking of technologies as unstoppable forces that have one-way impacts on culture, and thinking of them as having a certain degree of “interpretive flexibility.” If that flexibility exists, humans are surprisingly good at taking advantage of it, bending the technology towards their own values, desires, and intentions.

Admittedly, some artifacts have very few possible patterns of use: atomic weapons and birth control pills are interesting examples. Although their underlying techniques might be used for multiple purposes, these finished artifacts almost dictate their own usage, and carry with them a particular set of values. Atomic weapons can be used to deter or attack, but they cannot reasonably be used for demolition or tunneling like dynamite can. And lest we forget, dynamite is also a really effective tool for fishing!

So how do you use Facebook? Have you found ways to use it that go beyond sharing and socializing?

Efficiency and Ellul

There’s an old engineering joke that goes like this: an optimist looks at the glass and says, “it’s half full”; the pessimist looks at the same glass and says, “it’s half empty”; and the engineer looks at the same glass, considers it for a moment, and declares, “that glass is twice as big as it needs to be!”

Anyone who is an engineer, or who has known an engineer, or better yet is married to an engineer, probably at least chuckled knowingly at that joke. Although the joke plays on a stereotype, it’s really not that far off. Engineers do tend to have a certain obsession with efficiency, much to the chagrin of those non-engineers who have to live with them.

My wife can attest to this. Before I returned to graduate school, I was a software engineer for over a decade, and I still dabble in programming when I’m not teaching. All those years spent designing and implementing software systems have given me a certain sensitivity towards the relative efficiency of doing something one way or another. When walking somewhere, I’m always seeking out the most direct yet safest path. When running errands, I carefully plan out my route in order to minimize the time they take. I even sort my grocery list by store aisle so that I can get everything in one pass. This sort of efficient, in-and-out approach to shopping certainly frustrates my highly creative wife, who would rather wander and explore, taking delight in the surprises she finds along the way. Neither of our approaches is better than the other—we just think differently, and although we might annoy one another at times, we also find our differences refreshing.
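For the fellow programmers out there, that grocery-list habit is simple enough to write down. Here’s a minimal Python sketch of the idea; the items and aisle numbers are, of course, invented for illustration.

```python
# A toy version of the "sort the grocery list by aisle" habit.
# The aisle assignments are made up for this example.
AISLES = {
    "bananas": 1, "lettuce": 1,   # produce near the entrance
    "bread": 4,
    "pasta": 6, "tomato sauce": 6,
    "milk": 12, "yogurt": 12,     # dairy at the back, as always
}

def one_pass_order(shopping_list):
    """Order the list by aisle so the store is walked front to back exactly once."""
    return sorted(shopping_list, key=lambda item: AISLES.get(item, 999))

print(one_pass_order(["milk", "pasta", "bananas", "bread", "yogurt"]))
# -> ['bananas', 'bread', 'pasta', 'milk', 'yogurt']
```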

I’ve been thinking about my attitude towards efficiency lately because I’ve been re-reading Jacques Ellul’s classic book The Technological Society. This book is a favorite among Christian critics of technology, and it’s not all that surprising why: Ellul is a brilliant and perceptive thinker, and his book provides a very insightful analysis of the ideology that he thinks underlies all of modern culture.

Despite the title, though, Ellul isn’t really talking about ‘technology’ in the sense that we commonly use the word today. This is where I think many people can easily misread Ellul. He is not really concerned with the products of technological practice—those shiny electronic gadgets and media that consume our attention. Instead, he is concerned with what he calls in French “la technique.” This is more of an attitude, a way of relating to the natural world and to each other, that prioritizes efficiency above all other values. It is the attitude of modernism and progress, the attitude of those who advocate for the “one best way” of doing a task, the attitude of those who see nature and people as merely “resources” to be used as efficiently as possible.

When we adopt this attitude, Ellul cautions, we quickly start confusing means with ends. When technique takes priority over ethics, we become obsessed not with how to live well, but with how to get the highest return on our investment. We begin to see the natural world and human society like machines that can and should be tuned to deliver the best possible performance. And when those machines create problems, we develop new techniques to correct them, never considering that those new techniques will probably create new problems of their own.

Ellul’s “characterology” of technique is certainly interesting and compelling, but this time through the book, I started to notice certain assumptions that Ellul makes that raised red flags in my mind. His understanding of efficiency stood out the most. He consistently characterizes efficiency as something that is completely objective, cold, and rational, chiefly because it is measurable. This is true to an extent, but it presupposes two things that are not objective at all: choices about which of the many possible things you measure, and choices about the context in which you conduct the measurements.

For example, how would you measure whether one car is more ‘efficient’ than another car? Focusing on the performance of the motor seems like a reasonable thing, but the motor is only one of many subsystems in a modern automobile that one might care about. Even if you do focus on the motor, what makes one motor more efficient than another? Fuel economy, for which the EPA offers standardized ratings, might be one consideration, but torque and pulling power at various RPMs might be another. Even if you choose fuel economy as your only concern, there is a second assumption buried in those EPA numbers: the driving conditions under which they determined those measurements. The EPA can tell you relative differences in miles-per-gallon based on their particular tests, but those results could easily come out differently under different conditions. In other words, what you choose to measure and how you measure it are not always obvious and foregone conclusions. Someone makes those choices, and they do so for specific reasons.
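To make that concrete, here is a small sketch in Python with invented fuel-economy numbers for two hypothetical cars. Which car counts as ‘more efficient’ flips depending on the assumed mix of city and highway driving, which is precisely the kind of choice someone has to make before any measuring happens.

```python
# Hypothetical miles-per-gallon figures for two made-up cars.
CARS = {
    "Car A": {"city": 22, "highway": 34},  # tuned for highway cruising
    "Car B": {"city": 28, "highway": 30},  # tuned for stop-and-go traffic
}

def combined_mpg(mpg, city_fraction):
    """Combine city/highway MPG for a given driving mix (gallons add up, MPG doesn't)."""
    highway_fraction = 1 - city_fraction
    gallons_per_mile = city_fraction / mpg["city"] + highway_fraction / mpg["highway"]
    return 1 / gallons_per_mile

for scenario, city_fraction in [("mostly highway", 0.2), ("mostly city", 0.8)]:
    winner = max(CARS, key=lambda car: combined_mpg(CARS[car], city_fraction))
    print(f"{scenario}: {winner} is the 'more efficient' car")
# mostly highway: Car A is the 'more efficient' car
# mostly city:    Car B is the 'more efficient' car
```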

The truth is that measuring ‘efficiency’ in practice is never quite as simple or objective as a philosopher might imagine. When one looks closely at how such measurements are constructed and communicated, one often sees quite a lot of assumptions being made that are then effectively hidden from the final results. In science and technology studies, we refer to this as “black boxing,” where the methods and assumptions used to construct a particular “fact” are stripped away as that fact travels away from its original source (see Latour, Science in Action; or Vaughan, The Challenger Launch Decision). You can see this all the time in news articles about recently published scientific studies—the original paper will make explicit most of the assumptions and caveats, but these are typically stripped away as the findings are reported by the press. Suggestive correlations quickly become causations, and tentative findings become “proofs.” Similarly, MPG ratings or other kinds of efficiency measures always have stories behind them that are stripped away when they are compressed into a few numbers on a window sticker. Consumers may use them as if they were solid objective facts, but they are born out of a context, and are less objective than one might think.

When I worked in the software industry, we had similar sorts of standardized benchmarks that were supposed to reveal the relative efficiency of one program versus another. For example, spreadsheet programs were measured for recalculation speed based on a particular set of complex models that bore little resemblance to the models used by our actual customers. Relational database management systems were measured based on the execution of a particular set of transactions over standardized schema and data, but that was only one possible way of using these highly flexible storage engines. Although these tests were supposed to help consumers and developers determine which program to buy, they really couldn’t tell you much about how these programs would actually perform in your particular context of use. It was also widely rumored (and probably true) that software vendors specifically tuned their products to perform the standardized tests as quickly as possible, even if those tunings ran counter to what was needed under real-world conditions.
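Here is a toy version of that same problem, far simpler than the spreadsheet and database products I worked on, but illustrating the same point: measure two implementations under two different workloads and you can get opposite ‘winners.’ The data sizes and workloads below are made up for the sketch.

```python
import timeit

data = list(range(10_000))
benchmark_queries = [9_999]                    # the "standard benchmark": a single lookup
customer_queries = list(range(0, 10_000, 3))   # a "real customer" workload: thousands of lookups

def linear_scan(queries):
    # No setup cost, but every lookup walks the whole list in the worst case.
    return [q in data for q in queries]

def build_index_then_lookup(queries):
    # Pays an up-front cost to build an index, then each lookup is cheap.
    index = set(data)
    return [q in index for q in queries]

for name, queries in [("single lookup", benchmark_queries), ("many lookups", customer_queries)]:
    for fn in (linear_scan, build_index_then_lookup):
        seconds = timeit.timeit(lambda: fn(queries), number=100)
        print(f"{name:14s} {fn.__name__:24s} {seconds:.4f}s")
# Which implementation "wins" depends entirely on which workload you decided to measure.
```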

Regardless of how we measure efficiency, do you think that Ellul is correct in assuming that efficiency is the thing we value most in our culture? Do you make your decisions solely based on efficiency, or do other considerations come into play as well?

When Religion Meets New Media (A Review)

One thing I find troublesome in the Christian commentary on technology is a lack of systematic, empirical study of how people are actually using technology in practice. I think this stems from the fact that most of this commentary is based on the ideas of only a few thinkers, and most of those thinkers come from philosophical traditions that favor theoretical rumination over empirical research. When they do employ contemporary examples to back up theoretical claims, they typically rely on alarmist articles in the popular press, or hold up extreme and unusual cases as if they were representative of the norm. As that witty but difficult-to-attribute aphorism goes, “the plural of ‘anecdote’ is not ‘data’.”

In her book, When Religion Meets New Media, communications professor Heidi Campbell begins to rectify this problem by examining in detail how “people of the book” (Jews, Muslims, and Christians) are actually engaging with new media technology (mobile phones, computers, and especially the Internet). But this is actually only half of the book’s value: as she presents her findings, she also articulates a new analytical method for future studies in this field to follow.

Her method, which she calls “the religious-social shaping of technology approach,” builds upon ideas from the social shaping of technology (SST), and in particular the social construction of technology (SCOT) approach associated with Pinch, Bijker, and Hughes. Much of older media studies, including those works of Marshall McLuhan that are often cited by Christians, assumes what is known as a “technological determinist” approach, in which technology is seen as an autonomous force that “impacts” the adopting culture in deterministic ways. The SCOT approach, in contrast, argues that new technologies are just as much shaped by the adopting culture as the other way around. Over the last three decades, SCOT researchers have documented a rather large set of historical cases that demonstrate this mutual shaping of technology and culture, and Campbell’s work adds yet more examples to the set.

Campbell particularizes the SCOT method for the purpose of studying a religious group’s engagement (or disengagement) with new media. She suggests “four distinctive areas that should be explored and questioned in order to fully understand a religious community’s relationship towards new forms of media” (17). First, the history and traditions of a community need to be mined to discover previous interactions with newly-introduced media, which tend to influence contemporary negotiations. Second, the social values of the community must be revealed, as well as examined in practice, to determine why the group reacts to the new medium or device the way they do. Third, the community’s method of social negotiation must be examined in order to understand how they will work out whether the new medium is allowable or not, or under what particular circumstances and in what contexts one may use it. Fourth, special attention must be paid to the way members of the community talk about the new medium (their “communal discourse”), as this tends to influence the way members think about the medium, and thus decide if it is appropriate or not.

Campbell also identifies three factors that shape the religious response to media in general: how religious groups define their social boundaries; how they relate to their sacred texts; and how they understand religious authority. Because these factors obviously differ from group to group, the corollary is that there is no one, monolithic religious reaction to a given new medium. Boundaries and authority have some obvious influences, but her focus on relationship to sacred texts is intriguing. She notes that this relationship forms the basis of an implicit philosophy of communication, and thus establishes rules by which new media are evaluated (20).

Campbell then illustrates the application of her method by describing and analyzing the responses of various religious groups to mobile phones, computers, and the Internet. Two of these examples stood out to me. The first was the way the Anglican church has actively engaged the virtual reality world Second Life. After a group of Anglicans and Episcopalians met informally in the game, they decided to build a virtual cathedral and host online worship services, which they have been doing consistently since 2007. Instead of eschewing this online group, the offline church responded by setting up a new “online diocese,” known as the “i-church,” and fast-tracking the leader of this new online congregation into the diaconate. There are of course issues to be resolved surrounding the efficacy of bodily sacraments and the forming of true community in a non-material, virtual environment, but the point is that the Anglican church is not sitting back and pretending that virtual worlds like Second Life don’t exist, or that their members aren’t already spending time in them. Instead, the church is taking an active role in establishing their presence there, considering the implications, and ministering to the inhabitants of this new online “parish.”

The second example I enjoyed reading about was the creation of a “kosher mobile phone” for the ultra-orthodox Jewish community. Mobile phones, especially those with texting and web browsing capabilities, were initially banned by rabbinical authorities, as they feared the devices would expose members of the community to “dubious, unmonitored secular content” (163). The mobile networks noticed the bans and responded by negotiating with the rabbinical authorities on the design of a phone that could be considered kosher (i.e., acceptable under religious law). The result was a device and corresponding service that was explicitly reshaped to align with the community’s religious values: texting, web browsing, and video/voice mail are all disabled on the handset; a special regional dialing code was created for these phones; calls to numbers outside that code are checked against a blocked list before connecting, and charged higher rates; and only emergency calls are allowed to be placed on the Sabbath. To ensure that members of the community know which phones are acceptable, the handsets are labeled prominently with the standard kosher symbol.
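Just to illustrate how much cultural negotiation is embedded in those design choices, here is a hypothetical sketch of the rule set in Python. It is not the carrier’s actual logic, and the prefix, numbers, and Sabbath check are all invented stand-ins for the arrangements Campbell describes.

```python
from datetime import datetime

# Invented stand-ins for the arrangements described above; not the real carrier logic.
KOSHER_PREFIX = "050-41"               # hypothetical special regional dialing code
BLOCKED_NUMBERS = {"1-900-555-0100"}   # hypothetical blocked-list entry
EMERGENCY_NUMBERS = {"100", "101", "102"}

def is_sabbath(now: datetime) -> bool:
    """Crude approximation: Friday evening through Saturday evening."""
    return (now.weekday() == 4 and now.hour >= 18) or (now.weekday() == 5 and now.hour < 20)

def allow_call(number: str, now: datetime) -> tuple[bool, str]:
    if number in EMERGENCY_NUMBERS:
        return True, "emergency calls are always allowed"
    if is_sabbath(now):
        return False, "only emergency calls may be placed on the Sabbath"
    if number.startswith(KOSHER_PREFIX):
        return True, "in-network call at the standard rate"
    if number in BLOCKED_NUMBERS:
        return False, "number is on the blocked list"
    return True, "out-of-network call, billed at the higher rate"

print(allow_call("050-41-23456", datetime(2024, 3, 12, 10, 0)))
```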

Academics studying this area will no doubt find this book essential, but non-academic readers may find the writing style to be a little too opaque at times. Media studies, like any academic field, has its own set of loaded terms and jargon, and Campbell makes use of them frequently. Her focus on method may also turn off casual readers, but those who make it past the first two chapters will be richly rewarded with a detailed look at how many Jews, Christians, and Muslims are actively engaging with new media.

The Unasked Questions from Battlestar Galactica

Those of you who read this blog often have probably worked out by now that I am a bit of a science fiction junkie. I became hooked as a child after watching reruns of the original Star Trek series, and over the years I’ve read and watched a wide array of science fiction and fantasy stories. Netflix seems to think that our preferred category is “British period dramas with a strong female lead,” but that is more a reflection of my wife’s tastes than mine. Whenever I watch films on my own, I generally gravitate towards those set in a future or alternative reality.

One of the reasons I like science fiction is because it allows us to ponder questions that otherwise go unasked. In the midst of our everyday lives, it’s often difficult to step back and see things anew, but this is exactly the sort of thing sci-fi and fantasy stories help us do. They transport us from our familiar context into a new and foreign one, a new kind of world that acts like a foil to our own. Although some might think of the genre as purely “escapist,” I actually find it to be immensely relevant and practical.

One of the science fiction stories I loved as a child was the original Battlestar Galactica (BSG) series, which ran for only one season in 1978-79 (just a year after the original Star Wars movie, and the influence is obvious). I don’t recommend watching it now—the special effects are really hokey, and the acting is terrible—but it did have an intriguing premise. The series imagined twelve colonies of humans living in a distant solar system, who are attacked by a race of warrior robots known as the Cylons. The Cylons were originally created by another, quasi-reptilian species to be their soldiers, but the Cylons rebelled and killed off their masters. Not knowing what else to do, they kept searching out other worlds to fight, and when they encountered the twelve colonies, they all but wiped them out. The few humans that survived fled in a “rag-tag” fleet of spaceships, including the last remaining battleship, known as the Battlestar Galactica. For most of the series, the humans divide their time between fighting off their Cylon pursuers and searching for a rumored thirteenth colony living on a planet known as Earth.

In 2004, Ronald Moore “rebooted” the franchise with a new, updated series that ran for four seasons. My wife and I were in graduate school in Scotland at the time, so we didn’t get to watch it then, but we decided to give it a go when we saw the series on Netflix’s streaming service. It was addictive. Well, the first two seasons anyway. We were a bit like the couple in that Portlandia sketch entitled “One Moore Episode.”

OK, maybe not quite that obsessed. But we did watch several episodes each night, and finished the final season last week. The first two seasons are amazing. After that, it kind of goes off the rails for a while: characters start acting against their established motivations; the story lines get more and more implausible; and several episodes seem to just be filling time until the season finale. Thankfully the show finds itself again halfway through the fourth season, and delivers an exciting (but not terribly satisfying) ending.

Cylon "Skin Job"In a word, this reboot of BSG is highly provocative. The new series tells the same basic story as the old one, but with two important differences. First, this time the Cylons are the creation of the humans, not some other extinct species. Second, and more important, this time the Cylons have “evolved.” The mechanical, robot-like centurions still exist (though they have been updated with some cool Transformers-like arms), but there are new models, known as “skin jobs,” that look and act just like humans, so much so that it is virtually impossible to detect them (similar to the replicants in Blade Runner). They are organic, not mechanical, with the same kind of biology as their human creators.

Much has been made about the theological overtones of the series. The creator of the original series, Glen Larson, is a Mormon, and some Mormon themes are still evident in the new series (though they are much stronger in Caprica, the prequel series that ran in 2010). The Cylons have developed a technology, known as “Resurrection,” that allows them to transfer the consciousness from a dying body into a new one. The twelve tribes of humans are polytheistic, worshiping a panoply of gods with names similar to those worshiped in ancient Greece. Interestingly, it is the Cylons who are monotheistic; they worship the “one true God,” who seems to have much more agency in the BSG universe than any of the human gods. It shouldn’t spoil the ending to say that this “one true God” does seem to have a plan that unfolds throughout the series, but it is not as simple as one side wiping out the other.

But it’s not the theology of BSG that I find so provocative; it’s the relationship between the humans and their Cylon creation. Sadly, this theme is never really delved into, and some key questions are left unasked. Although there are a few human-Cylon love stories, most of the humans refer to the Cylons only in pejorative, mechanistic terms. But why should the humans think of the Cylons only as ‘machines’ if the Cylons have the exact same biology as the humans? Are the humans not simply “meat machines” programmed by their DNA (a phrase favored by Richard Dawkins)? And even if they did identify a crucial biological difference, it would still leave open an even more important question: could the Cylons be considered ‘people’?

While the term ‘human’ is a more rigid biological category (defining a particular species), ‘personhood’ is more of a theological or political one, and is therefore open to social construction. Politically speaking, a sentient, volitional, non-human life form could be considered a ‘person’ under the law, a topic that was investigated in the famous trial of Commander Data on Star Trek: The Next Generation. Theologically speaking, it would be very interesting to ponder whether we believe that such a creature would also be in need of salvation, and if so, whether it could be reconciled to God through Jesus.

We are probably not as far away from having to ask such questions as you might think. We have already developed the techniques necessary to clone animals (remember Dolly the sheep?), as well as alter some aspects of their physiology through genetic engineering. It’s not inconceivable that we will soon develop the capability to engineer new organic life forms that are biologically similar to humans, but enhanced to perform functions that would be otherwise impossible or too dangerous for humans to perform. What would be our responsibility towards such new life forms? And more importantly, how would we go about determining if they are ‘people’, and therefore protected by the same personal rights that we enjoy? These are questions that science fiction can help us ponder now, before we are faced with them in our own reality.

Technological Paradigms

Last summer a couple of my friends sent their eight-year-old daughter off to camp for the first time, and as they dropped her off, they gave her some money to buy a few things while she was there. In addition to those sugary snacks that every camper sucks down with abandon, she also bought a disposable film camera so she could take pictures of her new friends and all the fun things they were doing. When she returned home she told her parents all about her time at camp, and said she was eager to show them the photos she took on the camera she bought. Her parents asked, “so where is the camera? We need to send it in to be developed.” She calmly replied, “I threw it away—it said it was disposable. The pictures are on the Internet, right?”

My friends’ daughter was just young enough that she had never seen a film camera before. Her parents have taken many pictures of her, but they had always done so using their mobile phones or a digital camera. For her, ‘cameras’ are things that capture digital photos and upload them to a computer or the Internet; the very concepts of ‘film’ and ‘developing’ were completely foreign to her.

Kodak "Instamatic" CameraI found this story to be fascinating, not only because I am a historian of technology, but also because I am the son of a former Kodak employee. My father worked for Kodak for 35 years, and the recent stories of their plans for bankruptcy have been especially poignant for him. But when he started there, Kodak was at the height of their game. They had developed a line of point-and-shoot consumer cameras that enabled anyone, even those with absolutely no knowledge of photography whatsoever, to take reasonably good pictures. But the cameras themselves were not the real money maker. They were like the low-profit razor sold below cost so that you could sell a lifetime of high-profit razor blades. The real money for Kodak was in the sale of film and developing services.

Film, of course, is a consumable. Once you expose a segment of film to light, it can’t be used again. It’s also pretty much useless until you develop it and make prints. Taking a picture thus came at a double price: the cost of the film and the cost of processing, both of which were high-margin businesses. For every camera Kodak sold, they also sold hundreds of rolls of film, and most of those rolls were developed with Kodak chemicals and printed on Kodak paper. Kodak supplied the entire platform—the cameras, the film (both movie and still), the developing chemicals, the photosensitive paper—and they packaged it all together as a well-marketed, customer-friendly service. It was a complete cash cow.

Kodak did continue to develop new variations on this theme, though they seemed to have less and less luck with them as the years went on. I remember the day my father returned from a super-secret business trip back to Rochester with a briefcase literally handcuffed to his wrist. My brother and I stared in wonder, assuming that our father had become a government agent and that the whole Kodak thing was just a cover. But alas, the case didn’t contain state secrets or foam-encased spy gear; instead it contained these strange-looking flat cameras that used a type of film that looked vaguely like a View-Master disc. My dad proudly declared that these “disc cameras” were the wave of the future, and in the early 1980s, they really did look futuristic. But disc cameras were ultimately doomed by their tiny lens and negative. In theory, disc cameras had the potential of taking better pictures than a 110, but in practice, it was too easy to take blurry pictures, and even clear ones looked unacceptably grainy when printed larger than 3×5″.

But even before the first disc cameras were introduced, Kodak’s R&D engineers had put together something that would not only revolutionize photography, but also kill off that very lucrative cash cow: the first working digital camera, built in 1975. Over the next two decades, Kodak actually played a leading role in developing digital photosensors and digital photo printing kiosks. They even entered the consumer digital camera market, albeit too late to displace the likes of Sony, Canon, and Nikon.

The trouble was, none of these businesses offered the same high profit margins as film and developing. Digital cameras, of course, require no film. Taking a picture is essentially free, and making a print is entirely optional now that we can share photos on social networks. Digital photography fundamentally changed the economics of the business to the benefit of the consumer, and there was no going back. The consumables would all but disappear, and the internal hardware (sensors) would quickly become commoditized and unbranded.

So why did Kodak continue to invest in new film devices like the disc camera when they had the chance to become the leader in the new world of digital photography? This is most likely a hot topic in business management schools, and I’m sure that some are suggesting that Kodak purposely tried to delay the onset of digital photography to milk every last drop out of their film and developing cash cow. I haven’t researched Kodak’s story enough to know one way or the other, but I would guess that the full history is more complicated than that. That first working digital camera was only a proof of concept: the exposure time was reportedly 23 seconds, it captured the image to a cassette tape, playback required a separate TV, and the resolution was far worse than film. It would have been difficult to predict in 1975 that all the technical problems could be worked out, that a portable and easy-to-use device could be designed and manufactured, and that consumers would actually adopt a very different kind of camera. At the time, film-based photography would probably have seemed like the safer bet.

Now we know different. Digital photography displaced film faster than most would have predicted, and Kodak is contemplating declaring bankruptcy if they can’t sell off their patent portfolio. My friends’ eight-year-old learned about the concept of film the hard way, but those born in the near future will likely learn about photographic film only as a historical phenomenon.

All of this reminds me of a term that Giovanni Dosi introduced in an article he published back in 1982: “technological paradigms.” The term comes from Thomas Kuhn’s classic book The Structure of Scientific Revolutions, in which he argues that scientific knowledge develops within constraining paradigms that limit the kinds of questions researchers ask, the kinds of evidence they consider to be legitimate, and the sort of explanations they consider plausible. Instead of seeing the history of science as a smooth, continuous progression towards objective “Truth,” Kuhn portrays it as a series of dominant research paradigms that radically shift from one to the next.

In a similar vein, Dosi argues that technologies tend to develop along trajectories that are governed by a dominant paradigm. This paradigm limits what kinds of solutions are investigated, causing engineers to favor incremental changes to the existing paradigm over radical departures from it. Every practicing engineer knows that it is far easier to sell the management on a slight improvement to an existing design than on a risky, untested, radically new one. This is especially true when the new design would eliminate the most profitable aspect of the current business.

But eventually the dominant paradigm shifts, and that radical disruption creates new opportunities in the market that may enable new players to rise, or an even larger industry restructuring to occur. In the case of photography, the shift from film to digital is a case in point. Even though Kodak may have invented the techniques behind digital photography, they seem to have been limited by the dominant paradigm of film.

What other kinds of artifacts or devices will your children and grandchildren only know from history books?

The Phone Stack

Earlier this week, I ran across a story about a group of friends who have devised a clever way to keep themselves from getting distracted by their phones when they meet at a restaurant. After everyone has ordered, they all put their mobile phones facedown in the center of the table, sometimes stacked in a tall pile (which they call the “phone stack”). As the meal progresses, various phones might buzz or ring as new texts arrive, notifications are displayed, or calls are received. When this happens, the owner of the phone might be tempted to flip it over, but doing so comes at a cost: the first person to touch their phone has to pick up the check!

I like this idea for two reasons. First, it’s an ingenious yet simple mechanism for avoiding that all too common experience where your fellow diners spend more time interacting with their phones than with each other. Instead of pretending that mobile phones are not really a distraction, it puts them front and center, acknowledging their potential for disruption, yet declaring that their human owners still have the power to ignore them when engaged in face-to-face community. Turning their phones completely off might be even better, but keeping them on yet ignoring them seems to require even more reflective discipline. The public and very noticeable ritual of stacking the phones also acts like a kind of witness to others in the restaurant, advocating for the importance of being fully present when one has that rare opportunity to sit down with friends.

The other reason I like this is that it is a nice example of a more general phenomenon. When social groups adopt a new device, they often create rules or games like these to govern the use of that device when gathered together. Small, close-knit groups like the one that invented this game can easily enforce their rules, but larger cultures go through a social process of working out new social norms that are generally followed, at least to some degree. For example, movie theaters have been running messages before the films for several years now asking audiences to silence their mobile phones, but I’ve noticed recently that they have expanded this message by asking audiences to also refrain from using their phones at all, silently or otherwise, during the film. Just as it is now rare to hear a mobile phone audibly ring during a film, I hope it will soon be just as rare to see the glow of a phone screen as an audience member responds to a text message.

What kind of rules or games have your families or friends created to limit the use of mobile devices when gathered together?