Interview: Ted Chiang (Part Two)


Ted Chiang is one of the four writers featured in AALR’s new series of interviews with young Asian American speculative fiction writers. In this final installment of the two-part interview, Chiang discusses his Hugo-winning novella, The Lifecycle of Software Objects, and shares his thoughts on science fiction more broadly.

AALR proudly presents the final installment of our series of interviews with four young Asian American writers of speculative fiction: Ken Liu, E. Lily Yu, Charles Yu, and Ted Chiang. All under 45, these writers have amassed numerous major science fiction and literary award nominations and awards, evidence of their ability to bridge the parallel universes of speculative and mainstream literatures. Their work differs radically, ranging from peculiar fables to intricate meditations on the relationship between humans and technology. But taken together, that work exemplifies the kind of intersectionist worldview that shifts conventionalized perceptions, encouraging us to think across traditional social and literary categories.

Conducting the interviews is Betsy Huang, Associate Professor of English at Clark University, author of Contesting Genres in Contemporary Asian American Fiction (Palgrave, 2010).


Interview (Part Two of Two): Ted Chiang



My conversation with Ted Chiang continues with extended discussions of Chiang’s Hugo-winning works, including the novella The Lifecycle of Software Objects and the short stories “Exhalation” and “Hell is the Absence of God.” Chiang also shares his thoughts on a wide variety of subjects directly or elliptically related to SF: AIs and robots, slavery and equal rights, the dearth of SF writers of color, social experiments gone bad, and the struggle with publishers over creative control.

Accompanying this part of the interview is an excerpt from Min Hyoung Song’s new book, The Children of 1965: On Writing, and Not Writing, as an Asian American (Duke UP, 2013), in which Song offers a reading of Chiang’s novella, The Lifecycle of Software Objects.

Read Song’s excerpt here.

Read Part One of the interview here.


Betsy Huang: Let’s pick up where we left off.  We were talking about The Lifecycle of Software Objects, about obsolescence, ethical obligations and emotional attachments.  Any lingering thoughts on Lifecycle from our last conversation?

Ted Chiang: One of the things I was thinking about when I was writing the story was this conversation I had with a friend many years ago.  I said to this friend, “How do you know when a relationship is serious?  How can you tell when it’s more than just a casual thing?”  And she said, “I think it’s when you can have a big fight and get over it.” I thought this was a very interesting observation. Your ability to get over a fight is an indicator of how much effort you are willing to put into maintaining a relationship.

I think this criterion—how much effort you are willing to put into it—can be applied to a lot of different types of relationships, not just romantic ones.  Let’s say you are involved in a child’s life, maybe it’s your nephew or niece or something like that.  Are you willing to continue to sustain that relationship even when it becomes inconvenient and frustrating for you?  If you are, that indicates the relationship is something you value, something you take seriously.

Or think about being a pet owner. Some people get rid of their pets as soon as the pets become inconvenient, and we tend to regard them as poor pet owners.  Other people will stick with their pets even when they’re sick and require a lot of care, and we regard them as good pet owners because they take their obligations to their pets seriously.  So I think of this as generally a useful metric for evaluating the seriousness of all types of relationships.

One of the things that occurred to me is that the relationships depicted in science fiction between human beings and artificial intelligences never have that dimension.  It’s always an entirely one-sided relationship, with the AIs as obedient servants.  Of course there are also stories in which the AIs are malevolent, but let’s leave those aside. Among the stories in which the AIs aren’t the enemy, I can’t recall a story in which human characters had to really make an effort to maintain their relationship with an AI, one where it would have been easier to give up, but the humans made the choice to work through some obstacle to keep the relationship going.

It seems to me that unless human beings are willing to do that, I’m not sure that we can say that we really respect AIs. It doesn’t matter whether their capabilities are comparable to or less than those of humans. If you’re claiming that they have real status as agents and aren’t simply objects or things, at some point you’ll have to endure some inconvenience. You have to accept that your relationship with them isn’t solely about you.  If no one is willing to do that, you cannot make a credible claim that AIs have any real legitimacy as persons.

This is what I wanted to explore when I worked on Lifecycle.  The protagonists make significant efforts on behalf of their AIs and have real relationships with them.

BH: The relationships you depict in the novel—are they ways of exploring the degree to which humans are capable of forming attachments and acknowledging obligations to non-human beings?

TC: Well, it’s not so much about whether we are capable of that level of commitment, but about whether we are willing.  It’s an indication of whether you view the other party as just a tool, or as something more than that. For example, the relationship between employers and employees can be pretty asymmetrical; a lot of companies feel they have no ethical obligations to the people who work for them.  If you treat AIs as employees that you can turn off whenever you want, then you can’t say that you are thinking of them as persons.

BH: We’ve been trained to see them as servants, employees, objects. The whole history of AI in SF is very rooted in AIs as servants or some class or form of menial labor.  Or intellectual labor.

TC: Yes, definitely.

BH: They are always depicted in some kind of a service role.  This started with Rossum’s Universal Robots all the way through Asimov’s robots to Clarke’s and Kubrick’s HAL 9000 to Dick’s androids. The entrenched notion about AIs is that they are slaves and, in more polite company, servants.  It’s difficult to overcome this perception of them in the popular imagination.  What you are doing in Lifecycle, then, is going against this long historical grain of why we create AIs.

But in Lifecycle, the digients are created for entertainment.  Currently there seem to be two classes of AIs that people create for different purposes: one for work, the other for entertainment.

TC: Yes.  As I said last time, I don’t think you can create an AI that’s a useful servant right off the bat.  I don’t think it’s possible to have a program that, the minute you flip it on, is fully conscious and competent and absolutely loyal….

BH: An employer’s fantasy of the perfect employee….

TC: Right. It’s also worth remembering that, in the real world, the robots and software created to do work don’t resemble people at all.  On the assembly line, it’s more useful to have a robot that looks like a giant arm tipped with a pneumatic drill, not one that’s shaped like a human being.  And when programmers in the real world talk about AI, they’re not talking about conscious software, they’re talking about stuff like the algorithms that let Google guess the rest of your query after you’ve typed in the first word.

BH: So why is the popular SF imagination so fascinated with creating AIs in the likeness of humans?  There is the familiar rationale of interface—that machines bearing human likeness would make it easier for humans to interact with them. There is also the rationale of building robots in our likeness in order to understand the functions and processes of the human mind and body.  This is largely what guides the work of robotics researchers like Cynthia Breazeal and her Kismet.

I think it’s more than just the improvement of interactivity or the curiosity about human functionality.  If we are creating AI for companionship that’s similar to what a real-life human can give you, what is behind that desire?

TC: As you noted, it’s a convention that goes back to Rossum’s Universal Robots.  And I think the reason for the convention was that it was a metaphor for slavery.  That was the original promise and appeal of robots—that they would be slaves without the guilt.  You can call them servants, but they are essentially slaves because they have no options and no real autonomy.  I think that is the unexamined assumption of science fiction that depicts humanoid robots or human-like AI.  These works are suggesting that it might be possible to have slavery without guilt.

I don’t believe that that’s possible. I think that if an AI is both conscious and as capable as a person, then there will be ethical obligations and you will have just as much guilt.  You would be just as ethically culpable as if you had enslaved a human being.  I don’t think you can have one without the other.  You can’t have conscious beings serving you without there being an ethical dimension to it.

BH: The reality is that historically, you had people who had slaves but who didn’t feel any guilt.

TC: Yes, of course. And we consider those people to have been behaving unethically.

BH: There are robot narratives that do explicitly address the ethical implications of a slave class.  Some are parables or fables about slavery that are spun around a liberation plot and are essentially neo-slave narratives. And in these narratives, the ethical dimension is actually foregrounded because the question these stories take up is how we could have an entire class of people work for us without any rights and privileges and not feel any guilt about it. It’s a kind of shame that these narratives try to reveal.

TC: Yes, there is definitely a class of stories about AIs gaining rights.  But there is a bigger class of stories in which AIs are purely servants and the issue of them getting rights is not addressed at all.  They simply serve as helpful butlers.  It’s this latter class of stories I’m thinking of and writing in response to—the kind that engages in the perpetuation of guilt-free slavery.

BH: Yes, I can see that.  We aren’t encouraged to regard the digients in the same way we think of service robots and AI, like, say, HAL.

TC: Because HAL was competent and mature from the moment he was turned on.  The androids in Do Androids Dream of Electric Sheep? were also apparently fully competent the moment they were turned on. When you can activate something so easily, you feel you can deactivate it just as easily. But I don’t think that type of activation is actually possible.  I think a competent, conscious being can only come about through twenty years of training, being “raised” by others and gaining experience over its lifetime.  And once you have a being that was raised over a twenty-year period, then you will feel an ethical obligation to it.

BH: Electric Sheep makes a modest attempt at addressing this notion of growth and maturity.  The androids are supposedly emotionally “immature” even though they are fully formed physically. That immaturity supposedly contributes to the perception that they can’t control their emotions and act out like small children.

TC: Or that they lack empathy and therefore fail the Voight-Kampff test.

BH: What do you think about empathy as a metric of humanness and the idea of a Voight-Kampff Test in general?

TC: If empathy were used as a test for humanity, we all know people who would fail it. There are plenty of people who are deficient in empathy. We can have a philosophical debate about whether we should deprive those people of human rights, but for the most part, we’re reluctant to do so. That may be related to the fact that we possess empathy.

Babies don’t possess empathy.  Depending on the context, children don’t possess empathy for a number of years.  But we don’t consider that grounds to destroy them.  Our pets often don’t possess empathy, but we keep them around anyway.

BH: John W. Campbell wrote an essay called “What Do You Mean … Human?”, which features a facetious chart in which he compares the humanness of an idiot, a robot, a chimpanzee, a human baby, and a man with prosthetic aids based on a set of criteria.  The criteria include such things as capacity for logical thought and speech, the “‘Do I Not Bleed’ Merchant of Venice test,” and so on.

TC: Let me see if I can find it on the web. [Googles it]

Found it. [Scans it]  Interesting. Historically, we have never given rights to any disenfranchised group on the basis of a test. No one, for example, drew up a list and asked “Do slaves meet these criteria?” and decided that the slaves would be emancipated if they passed the test.  For the women’s vote, no one drew up a list of capacities one needs to have in order to vote and then decided that women would deserve the right to vote when they passed the test.  That has never been how disenfranchised groups gained rights.

BH: Campbell’s essay doesn’t come to that conclusion specifically but it makes the same point you do—that such lists are essentially useless, even in a purely philosophical discussion, let alone one about civil and human rights.

There have been “tests” that aimed to measure and “prove” subhumanness—the measurement of cranial size and, some would argue, the I.Q. test as well.  But these were used to take away rights or privileges, or to justify the discrimination or ill treatment of a target group.

TC: Right. When American slaves got emancipated, or when women got the right to vote, it had nothing to do with their ability to pass tests. It was the result of a popular emotional reaction against unjust laws.  Before slavery could be outlawed, a large number of people had to feel that slavery was wrong. Before women could get the right to vote, a large number of people had to feel that depriving women of the right to vote was wrong.  I think that in order for the disenfranchised to gain rights, there has to be a growing popular sentiment to support it.

BH: Yes.  But let me add that the steps down that avenue are often first taken by the disenfranchised who come to realize the conditions of inequality that bind them, a realization that leads them to rise up.  And they rise up often against the prevailing popular sentiment that they ought to stay in their place.  Popular sentiment works both ways.  The popular sentiment on their side always begins as a minority sentiment against a stronger, more pervasive one.

TC: Of course, and I don’t mean to discount that aspect.  My point is that it was never based on some list of criteria.  Even the disenfranchised have never drawn up a list and said, “Look at this list.  We meet all these criteria. Therefore we deserve rights.”  That has never been an effective strategy.

Take gay marriage. For someone opposed to gay marriage, one of the things that can change their mind is learning that someone in their family is gay. It is having extended interaction with a gay person that leads them to realize that gay people are just like straight people. And so eventually they come around to the idea that marriage is a right that gay people deserve. That type of personal experience is effective in a way that no intellectual argument can be.

Similarly, hypothetical rights for AI would not be based on passing a test. It won’t be a matter of a judge deciding whether a particular AI can compose a sonnet or tell a joke or give the right answer to a question about its feelings, and then granting it freedom. It’ll be when enough people have spent time with these AIs and come to realize that it is wrong for us to enslave them.

I’m not saying that this is a particularly likely future scenario. It’s more likely that we’ll never create conscious software, and if we do, it’s likely that we’ll treat it badly. But if we were ever to create actual conscious software, and if it were ever to get rights, I think it’ll have to come through that avenue, rather than the list or test that is often depicted.

BH: Large-scale robot uprisings, then, would seem equally implausible, particularly in the way they are depicted in popular science fiction such as The Matrix or the new Battlestar Galactica. More plausible and historically accurate are the neo-slave narratives of, say, Octavia Butler’s Kindred or Maureen McHugh’s Nekropolis, in which rights are gained slowly or in the interstices of an inflexible and entrenched system.

But why do we want conscious AIs?  Conscious AIs appear even in the more plausible imaginations of McHugh’s Nekropolis and, of course, your Lifecycle.

TC: I’m not sure that we do.

BH: In your novel, after the creation of the digients, other types of AI were developed that didn’t value consciousness in terms of social awareness.  I’m thinking of the Sophonce AIs that were created after the digients became obsolete.  The Sophonce AIs reminded me of, say, autistic savants—AI that have minimal or no social skills but are presumably very efficient in performing intellectual tasks.

TC: Yes, but even then there’s difficulty in finding any commercial application for them.  There are efforts made to commercialize them, but it’s not at all a foregone conclusion.  They may have certain autistic-savant capabilities, but a lot of autistic savants are not readily employable.

I put those in the story to make the point that an AI can be good at a lot of things, but still not be a useful servant.  Being a useful servant is very, very difficult.  And I don’t know that there is a good business case for conscious software.  Google, for example, is very useful, but we don’t want Google to be conscious.  People love their iPhones and find them extremely useful, but I don’t think people would find the phones more useful if they became conscious or self-aware. I don’t think consciousness is an asset in the context of productivity applications.

The only business case I can see for conscious software is on the entertainment side of things. And even for entertainment purposes, the business case for conscious AI would depend on a lot of things.  We’d have to see if it actually is, say, more entertaining than non-conscious AI.  People play with Nintendogs and Tamagotchis, which are not conscious, and those have sold very well. The question is, would conscious software be more entertaining than non-conscious software? If you had a Nintendog that was actually conscious, would that be more fun to have than a non-conscious one?

BH: My gut reaction is no.  Perhaps there would be some initial fascination with the specific ways that the conscious Nintendog is like a real dog.  But then you’d have to treat it like a real dog, which would require the kind of obligation and commitment you spoke of earlier.  While that may offer a more meaningful relationship than the one you’d have with a non-conscious one, I’m not sure that it’d be “fun” because obligation isn’t always “fun.” So we come back to the idea of obligation that is somehow linked to an acknowledgment of the other party as a conscious being.

Even though Nintendogs don’t have consciousness, their owners ascribe consciousness to them.  It’s a projection, of course, and I’m wondering what that projection reveals.  It seems to me that the appeal is in the appearance or illusion of consciousness without the actual consciousness itself. It gives you what you want: the illusion of dealing with something conscious, while you remain guilt-free if you abandon it.

TC: Yes, I think you’re probably right, and I think we’ve concluded that conscious software is not commercially viable even for entertainment purposes. There doesn’t seem to be a good business case for it anywhere.

BH: If there is no good business case, is there any other good case to be made for it?

TC: I can’t think of one.

BH: And yet the premise of Lifecycle is that conscious AI are created.  So, what are we to think about people like Ana or Derek, who not only create AI but commit themselves to the AIs?  Are they naïve, or are they people we are to admire?

TC: Well, in the story, one of the premises is that a company thinks there’s a business case for conscious software.  The company makes digients and sells them in the hopes that people will find them so entertaining that they’ll pay money for them.  It turns out in the story that they are wrong, because people eventually get bored with the digients.  And the company folds, the way any company does when its product stops selling.

But there are a handful of people who don’t give up on their digients because they have bonded with the digients emotionally in a way that the majority of the customers did not.  And so even though the majority of customers were able to shut off their digients and feel no guilt at all, there is a small number of people, including Ana and Derek, who were unwilling to turn off the digients.

In a way, Ana and Derek are collateral damage of a failed business venture.  The company took a risk, gambled the way businesses do, but this venture was one where the failure had emotional costs as well as financial ones.  Because if your business plan relies on people becoming emotionally invested in your product, there are going to be consequences when that product is no longer supported.

BH: I kept thinking, as I read the story, of whether I’d be as devoted to my digient as Ana and Derek are.  I found them kind of extraordinary in their devotion, though I suspect that some might find it foolish.

TC: I don’t think they are foolish at all. But I think that their emotional commitment has put them in a difficult position. Here’s an example of a similar situation.  There was a chimpanzee, Lucy, who was taught sign language and raised alongside a child in a family to compare its development with that of a human infant.  For the first few years of her life, Lucy never saw other chimpanzees and had exclusively human contact.

Eventually this experiment ended, and Lucy grew too strong and the family couldn’t keep her around anymore.  So the primatologist who started this experiment decided to ship her off to Africa, even though she had never known another chimpanzee and was completely unprepared for life in the wild in Africa.  There was a graduate student, Janis Carter, who accompanied the chimpanzee to Africa to spend a few weeks there to help the chimpanzee acclimate to the jungle. When it became obvious that a few weeks wouldn’t be enough, Janis Carter stayed longer, and she wound up living there for twenty years and running a chimpanzee sanctuary in Gambia, West Africa.

I admire Janis Carter enormously for what she’s done. She knew that Lucy wasn’t going to make it on her own, so she rearranged her entire life to help Lucy.  I don’t think she’s foolish in any way, I think she’s a very admirable person.  Her situation was the fallout of a really misguided decision that someone else made many years ago.  Someone else decided to treat a chimpanzee as an object, thinking “yeah, we’ll raise it in our house next to our child for a few years, and then we’ll toss it.”  Janis Carter uprooted her life to repair the consequences of someone else’s bad decision. I think that’s similar to what Ana and Derek are doing.  They are fulfilling ethical obligations created by someone else.  Does that make sense?

BH: It makes great sense.  You’ve elaborated on a point not readily apparent in the novel—the notion that bad decisions can produce a variety of collateral damage and that their negative consequences are often left to others to repair.

That was quite a lot just on Lifecycle. [Laughs]

TC: It was. Would you like me to pick something else to talk about?

BH: Sure.

TC: Earlier you sent me a question about Asian Americans writing genre fiction and how there hadn’t been a lot of Asian Americans writing science fiction until recently.  There were a few before me, but it’s definitely true that Asian Americans are underrepresented in the genre, as are most people of color.

Science fiction is a marginalized genre, and I feel that only recently has it gained any sort of respect at all.  I think that SF is well suited to addressing questions of race or being the Other, but its lack of social respectability has made it a poor choice for people who want to address those issues.  I think it’s hard enough to write about issues of race and get published, even when you’re working in respectable literary fiction.  If you try to do it in genre, it’d be an even steeper uphill battle because there would be, I think, two axes of disenfranchisement to deal with.

I think these reasons may have contributed to the underrepresentation of writers of color in the genre.  Science fiction wasn’t respectable, and if you’re trying to gain respect, science fiction would not be your first choice.  If you’re looking to increase awareness of your experience, the reputation of science fiction will work against you.

BH: You don’t necessarily write SF for these reasons …

TC: I don’t write about race specifically.

BH: But in a sense, so much of SF has been concerned with race.  It just deals with race in primarily an elliptical or metaphoric way through alien encounters, artificial intelligence, and the like.

TC: Yes, those topics can definitely be ways of indirectly discussing race. Although SF is also interested in those topics purely for their own sake, without them being metaphors for anything else. I think many readers outside of SF aren’t interested in aliens or robots unless they’re a metaphor for race.

BH: It seems that writers who choose to write a particular genre midway into their career are rare cases.  Most genre writers are lifers—writers who began their career in the genres of their choice.  Plenty of non-white and white science fiction writers’ works read like nuanced fables of racial concerns. And Butler and Delany have spoken eloquently about SF’s lack of respectability in the same way you described.  But none of these writers suddenly chose science fiction to talk about race; rather, they have always been SF writers who have also consistently written about race.

For other writers who don’t identify themselves as SF writers first and foremost but who are making forays into the genre—writers we mentioned in part one of this interview, such as Junot Diaz and Colson Whitehead—it may very well be that it’s because the respectability of SF has risen in recent years.

Has being Asian American had any influence or impact on your writing?

TC: I can’t point to any specific examples of how it has influenced me.  [Laughs]  Did you ask your previous interviewees this question?

BH: Yes, and the answers range from not being able to point to anything specific, as it is in your case, to regarding it as thoroughly who they are and seeing it as a crucial aspect in everything they write.  It clearly varies for everyone.

In my book, I said that you backed off from taking “Liking What You See: A Documentary” into the murky waters of racial politics.  I cited your thoughts on this in an interview in which you said this in response to the interviewer’s question about whether we can read faceblindness in the story as a metaphor for raceblindness:

“While I agree that race blindness is an interesting idea, I didn’t think there was any way to make it even remotely plausible in neurological terms.  Because there are just too many things that go into racism.  It seems to me that to eliminate the perception of race at a neurological level, you’d have to rewrite the underpinnings of our social behavior.”

In some ways, I wish I could reframe the way I made my point.  I wasn’t trying to say that you were too chicken shit to take up the issue of race. [Both laugh]  Rather, I was trying to consider what it might take to write a truly plausible story, Ted Chiang-style, to consider racial matters in new ways.

TC: Well, take the quote of mine you cited in your book.  In that example, I wasn’t talking about the artistic challenges of dealing with race, I was pointing to the issue of neurological plausibility. I haven’t written about race so far simply because I haven’t had an idea for a story in which the issue was essential.  I might in the future, but I’m no good at setting out to write a story about a particular theme. I only write when inspiration hits, and so far the issue of race hasn’t done that for me.

BH: That’s fair. The notion of a writer’s social responsibilities is a long and storied debate, and every ethnic and minority writer has had to address this in a way that white male writers simply have not had to.  The responsibility of writing about race—an important one, of course—should not be imposed on writers of color exclusively or taken lightly by anyone.  It’s a wonderful thing when it is done well and reveals the complexities and absurdities of racial identification and differentiation, and it is a painful thing to witness when it is done badly, with specious assumptions and claims.

Moving on: I wanted to ask you about “Exhalation,” which I’ve just read and am still processing.  It’s a very technical story.  Could you walk me through the seeds of it?

TC: There were two seeds, I suppose. One was a Philip K. Dick story I read many years ago, called “The Electric Ant.”  It’s about this guy who goes to the doctor after having hurt his arm, and the doctor says, “I can’t treat you. You’re a robot.” And the guy says, “What? I’m a robot?”  The doctor tells him, “Yeah, the original you probably had you built to take his place while he’s off doing something else.  Anyway, I don’t treat robots.  You have to go to a robot repair shop.”

So he does, and when he gets home he ponders the fact that he’s a robot. He calls up the central computer and says, “Computer, identify what type of robot I am and tell me how to open myself up.” And the computer tells him, “push here and your chest plate should open.”  So he does, his chest plate opens up, and when he looks inside his chest, he can see this little spool of punch tape slowly unraveling, which is his mind. I just thought that was a really cool image—this guy looking inside at his own self.

The other seed of the story came from a book I read by Roger Penrose called The Emperor’s New Mind.  He spends a few chapters on background physics and has a chapter on entropy.  He says that it’s not accurate to say that we eat food for energy.  This is what we often say—that we consume food for energy.  But he says that is not accurate because we radiate energy at exactly the same rate we take in energy.  We’re not a battery that’s charging up as we eat food; we are in a state of equilibrium for the most part.  We take in energy and we excrete energy.  And so we can’t really say that we eat for energy because we’re throwing away energy constantly.

By the same token, it’s not accurate to say that the Earth needs the Sun’s energy.  The Earth doesn’t store energy; it radiates energy at the same rate it receives energy from the Sun. And so the Earth is also in equilibrium. So what is it that we derive from food?  What is it that the Earth derives from the Sun? It’s a low-entropy form of energy.  The wavelengths of light that the Earth receives from the Sun have lower entropy than the wavelengths that the Earth radiates out into space. And the chemical energy we humans consume in food has lower entropy than the heat energy that our bodies are radiating all the time.

So, we are actually consuming food for its low entropy.  That’s what we need, not the energy itself.  I thought that this was a very interesting observation, and I wanted to explore the idea in fictional form.
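Penrose’s equilibrium argument can be made concrete with a back-of-envelope calculation. The temperatures below are rough textbook figures (my assumption, not taken from the interview), and treating entropy per unit energy as 1/T is a simplification of blackbody entropy flux (the exact expression carries a factor of 4/3, which cancels in the ratio):

```python
# Rough sketch of the entropy bookkeeping described above: the Earth
# re-radiates the same amount of energy it receives from the Sun, but
# the outgoing radiation carries far more entropy than the incoming.
# Approximation: entropy per unit energy of thermal radiation ~ 1/T.

T_SUN = 5800.0    # K, approximate effective surface temperature of the Sun
T_EARTH = 288.0   # K, approximate mean surface temperature of the Earth

E = 1.0  # one unit of energy, absorbed and re-radiated in equilibrium

entropy_in = E / T_SUN      # entropy arriving as visible sunlight
entropy_out = E / T_EARTH   # entropy leaving as infrared radiation

ratio = entropy_out / entropy_in
print(f"outgoing entropy is about {ratio:.0f}x the incoming entropy")
```

On these figures the Earth sheds roughly twenty times as much entropy as it takes in, which is another way of saying it consumes low entropy, not energy.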

BH: Okay, I’m trying to process the synthesis of those two concepts in the story.  They re-orient what I thought I took from the story.  I guess I was more taken with the “Electric Ant” premise, the idea of taking oneself apart as a kind of literalized self-reflexive exercise.  The entropy dimension is really interesting.

This story, a recent one, has the flavor of some of your older stories in Stories of Your Life and Others.  I’m thinking of “Hell is the Absence of God.” I remember finding your reference to the Book of Job in the story notes interesting, particularly your remark that you don’t understand why God restored everything he took from Job.  That it doesn’t seem consistent with God’s character or something along those lines.

TC: It doesn’t seem consistent with what I see as the message of the Book of Job. The message is that sometimes misfortunes befall you, and even though you are faithful, even though you love God, being faithful doesn’t guarantee good fortune.  Yet the Book of Job ends with Job receiving good fortune for his faith. So, I think that ending sends a mixed message.  I think it would be more consistent if Job had been left with nothing.

BH: Can it be interpreted that the restoration of his belongings is as arbitrary as their removal?

TC: Job’s wife and kids were killed and he gets a new wife and kids, so I suppose it’s debatable whether that can really make up for the death of his previous wife and children.  My recollection is that it is couched as a reward.

BH: My recollection of the book’s message is that the meanings of God’s decisions are un-interpretable, and that one should not try to guess his intentions because it is not a mortal’s place to do so.  I think this is the emphasis of Stephen Mitchell’s edition, which I’d read. In that edition, God punishes each of the three friends who visited Job because they tried to answer Job’s question of why God had forsaken him.  The friends are punished for having the audacity to claim to know God’s intentions.  Unlike his friends, Job asks God why, but he never gets any answers, nor does he venture an explanation of why God did this to him.  In that way, he preserves the un-interpretability of God.

TC: Maybe I should refresh my memory of the text.  But the lines that I remember are God saying to Job, “Who are you to ask me for an explanation?  Were you there when I created heaven and earth?”  My recollection is that his response to Job is to imply that Job does not even have the standing to ask.

In simple storytelling terms, when your protagonist undergoes hardship but receives good fortune in the end, you are implicitly saying that his steadfastness is ultimately rewarded.  This contradicts what God says to Job, which is that you cannot expect to be rewarded just because you are faithful. So I feel that it’s a mixed message, because the text and subtext are at odds.

BH: I see how “Hell is the Absence of God” resonates with this, because you’ve got all these people interpreting the appearances of the angels in so many different ways. And here you and I are doing something similar: interpreting the Book of Job differently, emphasizing different parts of it according to our memories of what its message is.

TC: That is definitely the nature of religious debate.  Everyone finds a way to make a case for his or her own position.

BH: When I read the story notes, I thought that one of the interesting strokes of “Hell is the Absence of God” is that you’ve actually rendered irrelevant the question of whether God exists.  It’s true that everyone in the story is concerned with this question.  But it is as though you are saying, “I’ll give you God. I’ll say that God exists.  But that’s still not going to give you any answers and solve any of your problems.”

TC: Yes, the story says, “I’ll grant you that premise …”

BH: This is compelling because arguments between atheists and people of faith often boil down to whether one believes in the existence of God, which is, in my view, a kind of reductio ad absurdum.  So, the idea of rendering that unsolvable mystery moot is a provocative move.

TC: The thing is, changing your mind about whether God exists doesn’t change what has happened to you in your life.  Whether you’ve suffered misfortune or have been the beneficiary of good fortune, those remain true whether you change your mind about God or not. Why was your baby born with cystic fibrosis?  Granting God’s existence doesn’t make that question easier to answer.  Some would say that granting God’s existence would make that question harder to answer.

BH: Right. But I think granting the existence of God does change how you perceive or inscribe meaning to something.

TC: Yes, it does.  But I think that it can make things worse just as often as it makes things better. For every example where God’s existence makes sense of things, there are just as many examples where God’s existence makes things entirely incomprehensible.  Like babies born with cystic fibrosis.

BH: I shared this story with several friends and students and the subject of whether you are a person of faith or an atheist came up.  I read somewhere that you are an atheist…

TC: Yes.

BH: And yet some of these readers thought that you could be a person of faith.  So I think it’s rather impressive that you, an atheist, are able to pull off a story that reads as though it were written by a believer.

TC: I guess that depends on the reader.  Some people have read the story and concluded that I am rabidly anti-Christian.  I guess they think that the depiction of believers is extremely unflattering.  I don’t consider myself rabidly anti-Christian.  I’m an atheist, but I do find religion an interesting question from a theoretical perspective.  I think a lot of people who are lapsed believers, people who were religious in their youth, have strong emotions about religion after losing their faith.  I’m not one of them.  I don’t have a personal beef with religion.  My interest in religion is mostly abstract in nature.

BH: By the way, how do writers, how do you get wind of how your stories are read? Where and how do you get feedback from the masses?  Do you look at reviews and online forums?

TC: It depends on my mood.  [Laughs]  Sometimes I will read online responses to my work, but most of the time I don’t.  Negative reviews can depress me more than positive reviews cheer me up.

BH: You get negative reviews?

TC: Sure.  Every writer does.

BH: I didn’t come across any in my research.  But yes, most of us can’t seem to help fixating on the negative rather than the positive.

TC: Yeah, you have to be in the right frame of mind to go looking for reviews.  Sometimes you’re just better off not looking.

BH: You had said in an interview that you pulled “Liking What You See: A Documentary” from an award nomination because you were unhappy with the version of the story that was nominated and that you had a different story in mind.  What was the story you had in mind?

TC: The story in its current form is a pure documentary. I wanted to write the story as a mixture of conventional narrative interspersed with scenes from the documentary.  The protagonist’s story—Tamara Lyons’s—would have been told in conventional fictional form, punctuated with documentary excerpts. That would have been a longer story, because telling her story in narrative form would require more space.  I would have needed more time to write that.

BH: That’s not how you started writing the story?

TC: I had written a bunch of documentary excerpts and I thought that Tamara’s story could become the backbone for them.  I had a deadline to turn in the story for the collection.  When I first signed the contract for the collection, I had to specify a delivery date for the story, and I told my editor that I didn’t know how long it would take.  The editor said, “Make up a date, and if it turns out you need more time, just tell me. And as I have in 100% of such situations in the past, I will give you more time.” So I said okay, and began to work on the story. As the story took shape, the deadline also drew near and I realized I would need more time. I emailed my editor and asked for the extension that he said I could get.  He said “no can do,” that no extension was possible.  Since I had to turn in the story soon, I figured I’d write it strictly in the documentary form rather than in narrative form, because I could finish that faster. And that’s the version of the story that you see.

BH: That was for the Stories of Your Life and Others collection, right?

TC: Yes.

BH: I read somewhere that you hated the dust jacket art for the hardback edition of Stories of Your Life and Others.  Is that true?

TC: I did.

BH: Why?

TC: I don’t think it has anything to do with my work.  There are no muscular nude men in my work.  There are no galloping white horses in my work.  There are no monkeys in my work.  So I don’t see why a painting of a muscular nude man and a galloping white horse and monkeys would be on the cover of my book.

BH: Seems like a collage of weird symbols…

TC: Maybe, but I don’t know what they’re symbols for.  I see no thematic connection to my work at all.

BH: That was an unhappy battle, I take it.

TC: I complained about it.  I asked if we could get new cover art and was told no.  Then I said I’d be willing to pay for the cover art myself.  But the publisher told me it wasn’t about money; it was about the appropriate relationship between a publisher and an author.

BH: Implying that you are behaving inappropriately as an author?

TC: That it would be inappropriate for me to have any say on the cover.  Even if it cost them nothing, they would not give me input on the cover.

BH: Do you like the cover of the paperback better?

TC: For the recently reissued edition, yes—it’s a cover I commissioned, actually.  Despite what the publisher said, I decided to try getting different art for the paperback.  After I was told it was inappropriate for me to have input on the cover art, I had a conversation with another new author who was with the same publisher. The publisher showed him some cover art for his book, and he said he didn’t like it.  So they asked him what he would like, and he told them what he wanted, and they commissioned new art for him!  So I thought, why did he get to specify new art? And they even paid for the new art that he wanted. It was the disparity in treatment that was infuriating.  So I commissioned new art that I felt was thematically appropriate and offered it to them for the paperback, and they said no.

BH: The cover for the original paperback was different from the recently reissued edition.

TC: The cover for the original paperback edition had no artwork on it at all. After they told me they wouldn’t use my art for the paperback edition, I asked them if I could buy back the paperback rights. Then they said, “Oh yes, actually, we can use your art.  We’ll work up a couple of versions based on the art you commissioned and you can choose one.” So I said, okay, let’s see them.  They said there was plenty of time and told me to wait.

Months passed, and I kept asking to see the versions.  Eventually they showed me a cover that had no art at all, just the title and the author’s name against a purplish background, and told me they were using it and that I was stuck with it. So I asked again if I could buy back the paperback rights. And they said, “It’s too late for that.”  Not that it was impossible to buy back the rights, but that it was too late, because they had been dicking me around for months until it became too late.  So the paperback came out with a cover with no art.  And that was the cover on the paperback for many years until they let it go out of print.  Then I was able to get the rights back and get it published somewhere else with the cover art I had previously commissioned.

BH: Crazy.  What about the cover art on The Lifecycle of Software Objects, and in particular the illustrations throughout the novella?

TC: I got to work with the artist on that.  That was published by Subterranean Press, a smaller specialty press, and the publisher there offered me the opportunity to art direct the book.  I chose the cover artist and the book designer, and I worked with both of them.  I worked with the artist on the style for the illustrations, settling on a particular rendering style, and we talked about what sort of poses or images I wanted for each illustration.  And I was happy with how they turned out.

BH: Sounds like a much happier and healthier experience all around.  The hardback and the dust jacket are beautiful, as is the story.


Thanks for being so generous with your time and thoughts, Ted. This has been an incredibly rich conversation about your work.

Accompanying this part of the interview is an excerpt from Min Hyoung Song’s new book, The Children of 1965: On Writing, and Not Writing, as an Asian American (Duke UP, 2013), in which Song offers a reading of Chiang’s novella, The Lifecycle of Software Objects.

In thinking about “browning faces” and “whitening of characters” in print literature, one more example comes to mind with striking suggestiveness. Ted Chiang writes a particular kind of science fiction, one informed both by hard science (Chiang has a degree in computer science from Brown) and by ideas. Hence, his works, all of them to date no longer than tantalizing novellas, are explicitly part of a tradition comprising authors like Isaac Asimov, Arthur C. Clarke, and Ray Bradbury. The focus of his work is less on character and more on setting. He is interested in rendering worlds unfamiliar to his readers as vividly as possible and in mounting within the perimeters of such a world a philosophical investigation into the concepts such worlds enable. In “Tower of Babylon,” for instance, he imagines what might have happened if the builders of the tower in the Old Testament were not struck down by God but were able to continue their upward construction. What would life in the tower be like? How would a society sustain such a monumental task? More interesting still, the story asks its readers to consider what would happen if the builders of the tower had an accurate understanding of their physical universe, a universe that is not like the one the reader knows. In this imaginary universe, the tower can rise into the sky, past the sun and the stars, which orbit as small spheres, until it finally, after generations of hard labor, reaches the hard dome that is the sky in heaven. Nothing about this story is about race, but in asking its readers to imagine an alternate physical universe that is convincingly detailed and consistent, the story stretches the reader’s mind, requiring a mental agility that makes accepting difference easier.

Such agility becomes increasingly necessary as one story after another in Chiang’s collection Stories of Your Life and Others produces a narrative of similar conceptual difficulty, including one that recalls a first encounter between linguists and alien visitors. “Story of Your Life” is neither an invasion narrative nor an extended metaphor about encountering racial differences that easily dissolve after prolonged contact. More important, it is about trying to span a gulf of immense difference, trying to understand a way of seeing that is not chronological but premised on the ability to know the future: “When the ancestors of humans and heptapods [the name given to the aliens because of their radial, seven-armed shape] first acquired the spark of consciousness, they both perceived the same physical world, but they parsed their perceptions differently. . . . We experienced events in an order, and perceived their relationship as cause and effect. They experienced all events at once, and perceived a purpose underlying them all.”16

What is most important about this encounter between humans and aliens is that the narrator, a linguist struggling to bridge the chasm between the different life forms, begins to think differently, acquires the skills the aliens possess for seeing the future: “Heptapod B [the aliens’ written language] was changing the way I thought. . . . There were trance-like moments during the day when my thoughts weren’t expressed with my internal voice; instead, I saw semagrams with my mind’s eye, sprouting like frost on a windowpane.”17 Like the characters in Sabina Murray’s stories, Chiang’s narrator experiences an encounter with difference that changes her. No encounter with difference can leave one simply unaltered and able to transform the other into a mirror reflection of the self. There can be no continuity of character. Such an encounter transforms, and leaves its trace firmly in a sense of self that is newly strange.

So one might ask, does Chiang write Asian American literature? In response, one might conclude, as Betsy Huang does, that his avowed disinterest in imagining a scientific form of colorblindness “reflects a guarded adherence to the conservative techniques of the genre, perhaps at the cost of its radical political potentials.”18 Or one might be able to find ways in which his work can be read, despite what Chiang might have to say on the subject, as offering commentary on the topic of race as it relates to Asian Americans, from something as simple as the fact that the narrator and her future husband in “Story of Your Life” share their first meal together at a Chinese restaurant (which in itself doesn’t say anything about the race of these characters) to something more complex, such as the ways in which this story’s understanding of an encounter with difference resonates with Murray’s understanding of imperialism as an experience that necessarily changes the conqueror as much as it does the conquered.

But perhaps this is the wrong question to ask. It may be more productive to wonder what Chiang’s works, when read as Asian American literature, are able to contribute to an imagining of difference as such. While applicable to “Story of Your Life” and the other stories in this collection, the latter question is even more provocatively and directly addressed in The Lifecycle of Software Objects (a work that deserves to be more widely available and read than it is). While this novella contains several characters with recognizable minority names, including a lead character named Ana Alvarez, who could be Latina or Filipina or neither, the narrative itself never provides details about their backgrounds. One might say, following Kelley’s lead in Slaying the Dragon, Reloaded, that there is a “whitening of character” in this short novel, as ethnicity and race decidedly get pushed into the background. But in this, the narrative propels to the foreground a way of thinking about difference that traverses the boundaries between human, animal, and machine. At the story’s center is the development of self-aware computer programs originally designed as virtual pets; some have cute animal avatars and others equally cute robot avatars. As time goes by, the general public loses interest in them because they prove to be as demanding as real children. And, indeed, for the hardcore group of caretakers who continue to pour resources into nurturing them and finding ways for them to develop, they become just like children. The caretakers find themselves faced with many of the same dilemmas that parents raising more conventional children face, namely how best to help their charges find and explore their potential, how best to expose them to a world that can be cruel and exploitative, and how to decide the appropriate age for letting them make their own life choices and mistakes.

What Ana discovers after years of taking care of her “software object” is that what makes him unique, capable of doing things that other self-aware programs cannot do, is precisely the care she has put into raising him: “If she’s learned anything raising Jax, it’s that there are no shortcuts; if you want to create the common sense that comes from twenty years of being in the world, you need to devote twenty years to the task. You can’t assemble an equivalent collection of heuristics in less time; experience is algorithmically incompressible.”19 Moreover, Jax is now more than an advanced computer program. Like the others who have been raised like him, he “would have once seen the world with new eyes, have had hopes fulfilled and hopes dashed, have learned how it felt to tell a lie and how it felt to be told one.”20 As a result, he cannot be treated simply as a machine or an animal but deserves the same “respect” that humans are afforded. Against the obvious humanism of these conclusions, Derek, another caretaker of the self-aware programs, reaches a slightly different appreciation for difference.

While he would agree that nothing beats experience and that his programs deserve to be respected, he also thinks (according to the narrator), “Marco and Polo aren’t human, and maybe thinking of them as if they were is a mistake, forcing them to conform to [Derek’s] expectations instead of letting them be themselves. Is it more respectful to treat him like a human being, or to accept that he isn’t one?”
