30 April 2010

Comparing video games to films - 4/4



You can read the first, second, third and fourth parts of this article.

TL;DR. There is no sense in pursuing the Citizen Kane of games. Video games are a promising medium, as diverse as films. Like other recent entertainments/media/arts/disciplines, the medium is still trying to find its place in society.

The CKoG Chimera

A gamer wrote that no game deserves a comparison to any movie any more than any movie deserves a comparison to any game. Leigh Alexander wanted the comparison with Citizen Kane to stop. So did game publisher Boesky when he wondered how long a medium can survive if it measures its success primarily against another medium.

One cannot ask a whole industry (and many of its consumers) to stop talking about "the Citizen Kane of games". The game industry, gamer culture, and society more broadly, are more or less unconsciously waiting for the perfect artistic game that will legitimize the medium. Let us call this game CKoG and suppose it could exist. Before CKoG, video games were basic geek hacks, just "for fun". In the popular mind, CKoG would demarcate the "art" era from the "hack" era.

But legitimacy has become distributed... So maybe we should stop pursuing the CKoG chimera. Maybe video games should not "compare to" films, but rather simply "look at" them. I explain below why this should be done, and how.

A promising medium as diverse as films

In comparing their stories to film or book stories, video games set themselves the highest possible standard. Lafarge already believed in 2000 that the stories found in games were evolving: the fact that games are moving beyond simple happy endings is another signal of emerging maturity in the form. Lafarge actually stated that the stories found in Myst or Dungeons and Dragons are as complex and detailed as book or film stories. Borut Pfeifer, a game developer, mentioned in February 2008 that games still have much more to achieve as a medium.

How emotions are conveyed in games could be improved by looking at how movies do it. Game publisher Boesky noted that the Citizen Kane reference is interpreted to apply only to emotional aspects, and not the unique attributes of our medium. Obviously, games are not movies, and trying the exact same approach as movies does not always work. I would rather suggest comparing games to movies at least for the way movies convey emotions. Many other aspects of video games could be improved when put in parallel with proven film techniques such as lighting, shots and special effects. Copying movies for what they are only leads to interactive movies, which is not what video games want to be.

Jane Pinckard, a game businesswoman, said in April 2010 that movies other than Citizen Kane convey emotions better: I really don’t care about the Citizen Kane of games; I want the Pride and Prejudice of games!. A blogger (gamer?) posted: Citizen Kane [...] is by no means the most important movie to define cinema. Birth of a Nation defined the epic. Metropolis might be the first sci-fi/dystopian vision. Safety Last could be the first high-concept comedy. Seeking the “Citizen Kane” of games is a silly endeavor because you should be seeking not one but several video games that redefined the genre in some manner.

There are maybe as many video game genres as film genres. Action, adventure and sport are genres shared by the two media, and horror movies have certainly inspired survival horror games. Interesting new game genres could certainly emerge from film genres, and vice versa. Can you imagine strategy movies? If so, then Seven Samurai by Kurosawa might be a precursor. If game designers drew inspiration from the family film genre, they could open up a huge casual game market.

Trying to find its place in society

Game designer Steve Gaynor wrote in September 2008 that great games are almost always hidden under the juvenile veneer of big guns, tanks, zombies, robots and so forth. He stated that games and comics both remain marginalized, infantilized media and he bet that fifty years from now they'll be just as mature and well-respected as comic books are today (that is to say, not much). Iroquois Pliskin considered this to be the nightmare scenario: Games are a young medium with a lot of potential-- maybe even a greater porential [sic] than comics-- but they've been shoehorned into catering into the narrative and experiential needs of the teenage male. But like many gamers of my age and tastes, I hope for a future where video games break out of the historical path laid out by comics. Chris Hecker's talk at MIGS in November 2009 relayed this issue to the industry when he envisioned three futures for games: respected like movies, "ghettoized" like comics, or "something different" that the industry still has to determine and build.

Games are not the only discipline looking for its place in society. Software engineering researchers are also trying to define what "designing software" exactly means. In The once and future focus of software engineering (2007), Taylor and Van der Hoek compared software design to civil engineering: Bridge design as it is today would not be as advanced without the careful study of past structural failures [...] How do we perform in software in this regard?. See also the whole section titled Directions From Looking Outside of CS. In A Future for Software Engineering? (2007), Osterweil stated The future of software engineering is in our hands, advocating for curiosity-driven rather than problem-solving-oriented software engineering research. As a game developer, are you being curious or are you just following trends? Are you innovative, or are you copying designs that have already been proven to work? Do you want to increase your market share, gain public recognition or advance the state of the art?

Meshing?

For video games to become recognized in our society as art pieces, they need to be related (and compared) to other artistic media. Bogost went in this direction: video games will only be important when — and if — others can point to our medium — to particular examples of it — and locate moments of individual insight that mattered in their lives. Even Boesky recognized that: Rather than asking our students [in game developer school] whether the Citizen of Kane has been created, let's see the film school ask whether the Mario of film has been achieved.

The previous paragraph reveals how effective Ebert's strategy can (involuntarily?) be. By rejecting video games as less artistic than films, the famous film critic reduces their integration into society. Stating that video games are not art does more to limit their recognition than simply not mentioning them would. However, some game critics are actually going in the opposite direction: they compare games to movies. Eric Swain noted that the story of Brutal Legend lacked one part (the Return) of the traditional three-part hero's journey.

Finally, gamers are also changing society from the inside. The former generation Y kids are now grown-ups. Some WoW players such as Larisa or Tobold recognized in April 2010 that middle-aged geeks are an increasingly important demographic for MMORPGs. Obviously, not everyone plays video games. But with gaming becoming more and more casual, an ever broader audience is being reached every day. Five years ago, I could find at least one person in any group of friends who played video games. Nowadays, it is hard for me to find friends who do not play any video games at all. What will it be like in 20 years?

29 April 2010

Comparing video games to films - 3/4

You can read the first, second, third and fourth parts of this article.



Findings

During my research on the respective histories of early Gothic horror movies and early video games, I have noted several similarities and differences between the two media.

The earliest film in human history was Roundhay Garden Scene in 1888. The earliest video game was the Cathode Ray Tube Amusement Device in 1947. Both occupy the same nanoscopic place in today's cultural literacy: aside from field experts or historians, no one knows about them. In 1902, i.e. 14 years after the first movie, Le Voyage dans la Lune was made by Méliès. The movie remains notable and even praised in our modern society. More than a century after its release, Ebert even wrote that the movie had artistry and imagination. Surprisingly, the video game released fourteen years after the first video game was ... Spacewar!. Some say modern video games owe a lot to Spacewar!.

However, Steve Russell is much less famous than Georges Méliès. Ebert considers that games are not art because Video games by their nature require player choices, which is the opposite of the strategy of serious film and literature, which requires authorial control. Could video games be more artistic if their authorial intentionality were stronger and more obvious? The Civilization series has not (yet) been considered artistic by Ebert, although the name of its designer, Sid Meier, is clearly associated with the title, in the same way that James Cameron (and not the actors) is mentioned on Avatar's poster.

The early video game industry seems to have been much more competitive and business-oriented than the early film industry. For example, Atari and other companies were "inspired" by the Odyssey to make their own home consoles. Hence, in 1975, Magnavox started filing lawsuits against everyone. I do not think such fierce competition existed between early filmmakers.

Moreover, while the first movies were not much more than recorded theater plays, the first video games were geeky electronic hacks. Tennis for Two had been created to attract people to the laboratory's exhibition. It seems many console companies jumped at the chance of copying the Odyssey just to make money. Meanwhile, computer game developers of the 1970's such as Don Daglow were just writing games for each other for fun. These approaches differed a lot from the clearly artistic approaches taken by the early movies of the 1910's and 1920's.

Video games are seeking legitimacy

In September 2008, Steve Gaynor, a game designer, formulated what legitimacy was: a broad cultural relevance to the lives of the general population. In other words, it is not exactly whether video games as a whole have an impact on society at large (financially and so forth) but whether content of the medium itself is relevant to, say, your grandmother. Did Battlezone or Centipede speak to her personal experience?. He gave another illustration when he mentioned that a senator villifying [sic] video games to get his name on people's lips means that the medium is divisive. It also means that the works themselves are completely irrelevant to the senator as pieces of entertainment or expression, or else he'd be enjoying and defending them.

Leigh Alexander asked in April 2009: aren't the cultural and practical differences between film and games so broad that it's useless to analogize? Later, in November, she detailed her point: repeatedly raising Kane is amateurish and useless. It's self-defeating shorthand for what Bogost and Wasteland correctly identify as the real desire: legitimacy for games. Matthew Wasteland (a gamer/game critic) formulated the real issue behind the Citizen Kane effigy: people want games that will artistically legitimize them to everyone who doesn’t play them. Eric Swain paraphrased that the "Citizen Kane of games" buzz is about gamer's insecurities and wanting a title to point to that everyone [society] will recognize, though may not have played, as art like Citizen Kane. They [gamers] want that so they wont feel insecure when they talk about thier [sic] hobby. The whole question has nothing to do with intellectual stimulation.

Bogost argued that the artistry of a medium cannot be established the way it was half a century ago: Legitimacy has become distributed, a mesh. Indeed, video games are not movies, and the social context has changed a lot between the establishment of film legitimacy and the birth of video games. With these recent social changes in mind, Lafarge gave, in the second paragraph of WINSIDE OUT, several areas of change that she thought were metamorphosing games into a richly expressive medium:

  • the convergence of games with fiction and art
  • shifts in representation and the deployment of information in games
  • the assimilation of a filmic first-person point of view
  • the growth of a culture of cheating and hacking
  • rethinking of the win-lose dichotomy
  • the development of immersive role-playing and emergence of cooperative relationships as central to game play


Read the fourth part.

28 April 2010

Video games birth - 2/2

This article is the second part of the origins of video games. Here, I detail the 1965 - 1977 period with respect to arcade, console and mainframe video games. Many more games than those I mention were published in this period. However, I try to mention only the ones I find the most interesting and innovative in each of the arcade, console and mainframe fields.

Arcade video games

The first arcade games were coin-operated. Arcade controllers were very similar to the 1960's controllers: each player had a knob and a few buttons.

Galaxy Game was released in 1971. I could not find any screenshot or video of it. Computer Space was released two months after Galaxy Game in 1971. The video of the game shows how similar it was to Spacewar! from 1961. Both Galaxy Game and Computer Space were 2D space shooters. In ten years, not much had evolved, but the shmup genre was certainly defined. I think the Cold War space race context influenced the design of video games a lot. Anyway, Pong was published by Atari in 1972, and the video game sports genre was born.

Gun Fight was published in 1975. Each of the two players controls a cowboy and shoots at the other. Unlike in Spacewar! or the other shmups, bullets are limited in number and bounce off the edges of the screen. I could find a few screenshots of Gran Trak 10, the first racing game, released in 1974. Gran Trak used ROM to store the game data. People played against an AI. The player could use a steering wheel, two foot pedals, a gear shifter and a knob. Sprint 2 was a racing game published in 1976. This arcade game added two AI cars and more diverse tracks, but the controls stayed the same as in Gran Trak 10. Night Driver, a racing arcade game released in 1976, was the first game to show the world in a first-person view. As seen at the end of this video, the speed of the car increases gradually to make the game more difficult for the player.

Breakout was released in May 1976. You can see the video of its port to the VCS 2600. In the original gameplay, orange blocks speed up the ball. Each level increases the speed of the ball and the difficulty of the game.

Home consoles

Before 1977, many home consoles embedded the games inside the hardware - few consoles used ROM cartridges. So it makes sense to analyze the consoles as a whole. I found a lot of information about first-gen consoles in this article.

The Magnavox Odyssey was the first home console. It was released in 1972 and did not use cartridges (the 1978 upgrade of the Odyssey had cartridges). The Odyssey 200 was a 1975 upgrade of the original Odyssey console. It contained three built-in variations of Pong.

Pong was ported from arcade to home in 1975 as the "Atari Pong" home console. This console had the game built in (there was no cartridge). It took a year for Atari to find a retailer interested in funding the fabrication of the home console. Pong was nevertheless a success at Christmas 1975. In June 1976, Magnavox filed a lawsuit against Atari for patent infringement. Like the Pong console, the APF TV Fun released in 1976 had a monaural sound channel, two knobs and several buttons. Four games were built in: tennis, hockey, squash and single handball.

The Coleco Telstar was a first-gen console series starting in 1976. Its games were built-in Pong variants (hockey, handball, tennis and Basque pelota) as well as pinball games. Some 1978 upgrades had sound and games in color. One of the Telstar upgrades had text in French and English for the Canadian market.
The Color TV Game was a series of Nintendo consoles released in Japan only. Although second-generation consoles started to appear in the US around 1977, I think the Japanese market was at that time out of American console makers' reach. The CTG15 had the first controllers linked by a cable to the console, making the play experience more enjoyable. Games were built-in. Some were based on Atari's successes such as Pong or Breakout, but there was also a racing game.

Mainframes

Several mainframe games are mentioned on Wikipedia. I am sure some games are missing or have been forgotten since the 1970's. The mouse had been invented in 1963 and the ball mouse in 1972, so players could already use a mouse and a keyboard to play mainframe games on terminals.
These slides from Pamela Fox provided a lot of information (and screenshots).

PDP-10

PDP-10 games nearly always relied on a text-based UI for input and output. Sometimes, the possible player actions or game feedback were printed (on paper, at 10 or 30 characters per second). Lunar Lander appeared in 1969 (on the PDP-8). It was apparently textual (I could not find any screenshot) and was ported to a graphics terminal of the PDP-10 in 1973. The company that made the graphics terminal commissioned the graphical version in 1973 as a demonstration of the terminal's capabilities. The user input was taken from a light pen. Starting in 1971, Don Daglow wrote several games during his college years. Baseball was coded in 1971; I found no screenshots or videos. Then came Star Trek in 1972 and Dungeon in 1975. Dungeon was the first RPG, and it was multiplayer. Colossal Cave Adventure (or simply Adventure) was an adventure game created in 1976. The game shows both recreational and educational elements.

PLATO

The TUTOR language was introduced in 1967 for PLATO III. TUTOR made it possible to code PLATO games. The third slide of How College Students Influenced Gaming shows a quite exhaustive timeline of the PLATO games. Users had only a keyboard - no mouse - to interact with a PLATO terminal.

pedit5 was coded in 1974. It was the first dungeon crawler and it used some of the Dungeons and Dragons rules. The name of the game, pedit5, was deliberately misleading in order to hide it from administrators who had forbidden games. Following the same naming strategy, another dungeon crawler called m199h was coded in 1974 (and deleted). dnd was allowed to stay on the PLATO mainframe by system administrators in 1975. Before that, at least 7 major versions of dnd [...] were deleted from the PLATO system for being illicit games on computer system designed solely for education. In dnd, players could buy items from vendors and face the first boss monster in video game history (a dragon).

Empire appeared in 1973. The game was accepted by mainframe administrators because it was part of a class assignment. Up to 30 players could play the same game of Empire "online". It was upgraded regularly until 1980. The game looks a lot like Spacewar! - spaceships attacking each other. Nevertheless, the game mechanics were more complex, as teams could use spaceships with different characteristics ("strong but slow" versus "weak but fast"), and spaceships had two different weapons on board.

Spasim was a 32-player 3D networked space shooter coded in 1974 and inspired by Star Trek (and the previously mentioned PLATO game Empire). In Panther (1975), players drove tanks over randomly generated terrain.

Other mainframes

Several other important games were released on mainframes other than the PDP-10 and PLATO. Highnoon was written in BASIC in 1970. You can play its emulation. Hunt the Wumpus, also coded in BASIC, in 1972, was a text-based maze adventure game. Later implementations had graphics (see this video), but the first version of the game was totally textual. Maze War was made in 1974. Two players (connected by an Ethernet cable) wandered in a 3D maze and tried to shoot each other. It was the first FPS (see the gameplay on a Xerox machine). The Oregon Trail was released in 1974 as well. The goal of the game was to teach children about 19th-century pioneer life; in fact, the gameplay is mostly about resource management.

27 April 2010

Video games birth - 1/2

The quite thorough history of video games given in this article starts with the first video games in the late 1940's and ends in 1977. 1977 is a pivotal year for three reasons. First, second-generation consoles appeared - with cartridges! Second, the golden age of arcade video games started. Third, the home computer entered the market. The simultaneous improvements made in these three kinds of hardware definitely triggered the adoption of video games in our modern society.

It makes sense to analyze the gameplay and graphics of early video games separately. Gameplay and graphics certainly limit, extend or complement each other. However, I think progress in each of them was made independently. Video game graphics did not improve thanks to gameplay innovations, and new gameplays did not appear specifically because developers/researchers found new ways to display objects on a screen. Looking at the controllers also helps to understand which physical affordances the players had.

1945 - 1965: Origins

The first video games were, like the first movies, technological proofs of concept. They relied on the electronic/electrical engineering prowess of the time. One could argue that these games were more cathode ray tube hacks than proper computer logic and graphics. Wikipedia mentions the 1947 Cathode Ray Tube Amusement Device and the 1951 Nimrod as precursors of video games.

OXO (1952) was the first video game, according to students from CMU. It was a version of tic-tac-toe and was only playable on the Cambridge University EDSAC computer. This video shows how the game can be played in an EDSAC emulator. One can see how clumsy the rotary telephone controller was: the player had to dial the number of the location where he/she wanted to put his/her symbol instead of simply pointing at the location on the screen.

Tennis for Two (1958) is a two-player tennis game on an oscilloscope. The physicist Higinbotham, creator of the game, reported: I knew from past visitors days that people were not much interested in static exhibits, so for that year, I came up with an idea for a hands-on display – a video tennis game. A video of two people playing the game shows the basic elements of the gameplay. Around 0:40, the right player seems to dominate the player on the left. The game was only playable on the Brookhaven National Laboratory device, hence hundreds of people lined up to play “Tennis for Two”. For each player, the controller consisted of a knob (for the direction) and a button (to hit the ball).

Other games followed, all developed in universities or by true hackers, with university computers as their platforms. Examples are Mouse in the Maze (1959), Spacewar! (1962) or the PLATO platform (early 1960's).


The second part of the early video game history deals with mainframes, arcade and consoles.

Comparing video games to films - 2/4

You can read the first, second, third and fourth parts of this article.

The game industry has been buzzing a lot about the Citizen Kane of games. Because Citizen Kane was released in 1941, the history given here stops in 1941. Although Citizen Kane is a drama film, I have chosen to study the history of the gothic horror genre: I simply enjoy watching old monster movies more than old drama movies. I think the technical and conceptual improvements observed in the monster movie genre can be generalized to the overall film industry.

Silent films


Silent films (as art pieces and not simply proofs of concept) emerged in the 1890's. Movies of the 1890's generally lasted less than 3 minutes (although Jeanne d'Arc by Méliès in 1899 was 10 minutes long). Monster movies of the early 20th century were often inspired by novels. Translated into today's game jargon, this means they did not create any original IP. But there were several clever features and feats, and to my mind these early feats are characteristic of art. For instance, silent films had no spoken dialogue or sound effects, only background music. The stories featured young heroes in love, a main bad guy, suspense, fear and even an awareness of the audience: the dialogue slides stay on screen long enough for everyone to have time to read them.

Frankenstein (1910) lasted approximately 12 minutes and was shot in three days, which was a little longer than usual. Interesting techniques were already in use. Around 7:40, for example, the scene takes place in Frankenstein's room. On the left, a chair and a table where Frankenstein sits. On the right, a mirror facing the entrance of the room. Thanks to the mirror, the spectator is the first to see the monster entering the room. This scene is also the moment when the monster sees himself in a mirror and despises his creator for his ugliness. Positioning the mirror this way (presumably) tries to put the spectator in the monster's place. This mirror play is repeated later in the movie (11:20) for dramatic effect: when looking in the mirror, Frankenstein sees his evil creation instead of his own image. Interestingly, this mirror effect is often used in Citizen Kane ...

Nosferatu was made in 1922 and lasts 94 minutes. There are some special effects, such as Nosferatu disappearing inside the barn around 1:01, the shadow projection of Nosferatu at 1:19, or Nosferatu's death at 1:22. The film conveys a real atmosphere. One could argue that Frankenstein (1910), mentioned above, was a simple theater play recorded with a film camera. Nosferatu is unarguably a movie, and it represents the Gothic horror genre well.

Sound films

Sound films were invented around 1900, but they only started to be seen commercially in the late 1920's. Thanks to sound, musical films could emerge as a movie genre. Sound films also gave voice to actors. This additional task meant actors could be in trouble if they could not perform well vocally. I could not check the source personally, but actors such as Anny Ondra in Blackmail (1929) experienced contrasting consequences of these technologies: she had a lot of success in silent movies, but her Czech accent was deemed unsuitable for the sound film. But let us go back to Gothic horror movies...

Unlike the 1910 Frankenstein, the 1931 Frankenstein did not follow the original novel's story. In the novel, the monster is smart, able to speak, and only becomes violent when his creator refuses to create a wife for him. Frankenstein is a student and works alone on his creature; how he makes the creature is not described explicitly. In the 1931 movie, a limping assistant helps Frankenstein, now a professor, make his creature. The spectator watches Frankenstein collecting dead flesh from graves and bringing the monster to life electrically using lightning. The monster is completely mute except for grunts and growls, and his violence comes from the "criminal brain" that was used to make him. Undoubtedly, the 1931 sound movie conveys the creature's growls more effectively than the 1910 version could. However, the characters have been oversimplified, maybe to reach a more mainstream audience.

Dracula (1931) is considered a classic of the era and of its genre. Bela Lugosi, playing Dracula, seems to have been instrumental in the film's success. Although the audience can hear the actors' voices and sound effects, there is no background music (except during the credits). Other films of 1931, such as the Indian film Alam Ara, used songs extensively. So not all the technical features available at the time were used in Dracula, and the movie's success apparently had little to do with technological improvements. The special effects remain relatively minor: for instance, the transformation of Dracula into a bat always happens off screen, with no smoke effect. Considering that Nosferatu had special effects a decade earlier, this lack of special effects in Dracula might have been disappointing for the audience.


Read the third part.

25 April 2010

Comparing video games to films - 1/4

You can read the first, second, third and fourth parts of this article.

Roger Ebert

In November 2005, Roger Ebert, a movie critic, wrote: I did indeed consider video games inherently inferior to film and literature. There is a structural reason for that: Video games by their nature require player choices, which is the opposite of the strategy of serious film and literature, which requires authorial control. Several reactions from video game websites followed.

During a conference in late June 2007, Clive Barker, an English artist, criticized Ebert's 2005 position. Ebert answered Barker in July 2007. More reactions from video game websites and journalists followed.

A few days ago, in response to a TED talk by Kellee Santiago, Ebert defended his point again: Video games can never be art. Very many reactions followed on video game news websites and blogs (see also this Google search). [Certainly off-topic, but worthy of interest: these different reaction magnitudes (2005 < 2007 << 2010) show how much video game communities have grown in the last few years.]

Citizen Kane

According to Keith Boesky, the "Citizen Kane of games" buzz started when Trip Hawkins first ran EA ads asking whether a computer can make you cry. This morphed into the question of when we would see the "Citizen Kane" of games. I could not find any source confirming that the actual source of the buzz was Trip Hawkins or EA. No date either. Anyway, Citizen Kane has been mentioned regularly since 2004.

  • January 2004: Shayne Guiliano, a video game industry member, first mentioned Citizen Kane in a response to Ernest Adams about the visual impact of video games: It is a misconception to say that visuals are not an excellent way of illustrating the internal states of mind. This problem was first solved in the film "Citizen Kane".
  • March 2005: Warren Spector wondered how the video game industry could implement better stories: Citizen Kane was not a particularly successful movie… but RKO was willing to take a chance. We need to get to that point.
  • October 2006: John Gaeta, a visual effects designer, mentioned the Citizen Kane of gaming.

In February 2009, Boesky wrote that the "Citizen Kane of games" idea is poisoning young developers' minds. In April 2009, the topic was discussed between Bogost and Alexander, and some reactions from game critics followed. Guillermo del Toro said in May 2009: In the next 10 years, there will be an earthshaking Citizen Kane of games.

In October 2009, Michael Thomsen, an IGN video game expert, mentioned during an ABC podcast that Citizen Kane has been hailed by film critics for decades as one of the best movies in history. And if Kane had a symbiotic partner in the world of video games, it would be the Metroid Prime trilogy. Eric Swain, a game critic, objected: Saying that this movie revolutionized the populous into thinking films were important, saying that before it they weren’t thought of as art and afterwards they were, well there’s no other way to put it, it is a lie. It is an artificial pinpoint created by its almost universal placement on top 10 lists and because of it has had its own mythos inflated beyond the reality of the film. Others have also reacted.

Nowadays, many critics, game journalists and developers use the reference recurrently.

Stating the problem

Given their very different histories, how can these two media/domains/arts be compared? (This is not a rhetorical question!) Clive Barker said in June 2007 that video games is a medium that’s barely 2 decades old, and he (Ebert) is saying oh, there’s no 'War And Peace' yet – of course there isn’t! When asked by Alexander about Why Raising 'Kane' Won't Help Games' Legitimacy, Bogost explained: It's a red herring, because we think that having a Citizen Kane will prove our artistic legitimacy, but masterworks are not how artistic legitimacy is proven anymore. This series of posts (kind of) aims at contributing to Bogost's point by comparing the early histories of films and games.

While I am not a film expert/critic and do not know anything about film theory, I can read Wikipedia: the first movie was made in the late 1880s. Judging from the content, it was more a technological proof of concept than anything art-related. I give a short history of cinema as an art, focusing arbitrarily on (vampire/zombie) gothic horror movies. Focusing on a particular genre makes the history shorter and easier to analyze, and I guess the same conclusions apply to other film genres as well (e.g. epic, adventure or Westerns).


Read the second part.

22 April 2010

[Literature] Productivity and play in organizations

Productivity and play in organizations by Hansen et al. (2009) describes the reactions of executives when asked about using virtual worlds (VW) for their business. Hansen et al. analyzed the written reports of 25 business executives who had spent some time evaluating Second Life as a platform for companies. The researchers extracted seven sensitive topics from the reports.

In the context of virtual worlds, productivity can have very broad and different meanings depending on who is producing, who benefits from the production, what is created, and so on. Hence the researchers narrow and explain their definition of productivity: they try to answer the question In what ways can virtual worlds enhance the operation of everyday organizations?. In other words, they look for productivity as measured through … revenue generation and cost control. Asking executives for their opinion was important for this study for two reasons. First, executives are the ones who are effectively in charge of revenue generation and cost control. Second, they are instrumental in the adoption and appropriation of such a technology by the company because they are the ones who decide whether to use a VW or not.

The methodology followed by Hansen et al. consists of analyzing the reflection papers produced by MBA students and extracting the key information from them. The first phase of their analysis was a grounded-theory-oriented comparison of the reports to identify patterns (open coding). After getting a sense of the overall content, they gathered the arguments for and against the use of VW for business (selective coding).

Seven tensions, or points of disagreement between respondents, were identified. I have rephrased some of the cells of Table 2 from the article to illustrate the arguments for and against the use of Second Life for business.

  • Popularity. In favor: 40,000 residents at any given time. Against: residents are not in the business-oriented locations; Web 2.0 social websites have 100M+ users.
  • First-mover. In favor: get used to SL now for long-term benefits. Against: the SL phenomenon is slowing down; wait for more robust VW platforms.
  • Demographic. In favor: young and tech-savvy. Against: geekiness, social awkwardness.
  • Anonymity. In favor: honest and uninhibited information. Against: trust issues and misinformation.
  • Sociality. In favor: VW bring more social presence than other electronic media. Against: limited social cues.
  • Experience. In favor: immersion and 3D prototyping. Against: lack of authenticity.
  • Social benefit. In favor: freedom (virtual tourism, expression) and therapy. Against: dehumanizing.

These tensions were also cross-tabulated with the business applications they affected: marketing and brand awareness, training and distance learning, meetings and collaboration, product innovation and testing, recruitment and interviewing, and virtual tours. Marketing and organizational training were the two domains where VW could bring the most valuable help to businesses. However, respondents recognized that marketing in the context of VW is very recent and requires particular skills that companies do not always have. Community marketing, a new skill for the community manager?

In the last decade, the business press has been split into two camps: those who say VW are the future, and the more careful, conservative ones. Based on a computer-mediated communication (CMC) approach, Hansen et al. remark that lack of control and depersonalization are the two main concerns with the use of VW for business. In the light of previous CMC work, the reluctance to use VW may decrease as familiarity with the medium increases. However, VW provide synchronicity and 3D graphics, affordances unseen in older electronic media such as email or forums. Research has a role to play in determining whether previous CMC results still apply to VW.

20 April 2010

[Literature] Fundamentals of Game Design, ch6: Character Development

Even though characters should have a clear role for the player, they must not be too stereotypical (if they are, they should have a little something that differentiates them from the standard stereotype). Characters should be credible. Even if they can be complex, they must stay consistent. As in movies, a game's IP and marketing often rely on the main character. Games often use the main character's name as the game title to make consumers associate the IP with the character; examples are Super Mario Bros, Duke Nukem or Sonic the Hedgehog. In short, characters should be appealing and believable, and players should be able to identify with them. Incongruous and disharmonious elements can be introduced for humor's sake.

Relationship between player and avatar

In RPGs where the avatar is designed by the player, the avatar has no personality other than what the player chooses to create. In old textual adventure games (or Half-Life), avatars (like Gordon Freeman) are nonspecific: the game designer does not need to specify anything about the avatar or ask the player anything, because the player never sees it. But computer graphics improved, and it became awkward to write more and more complex stories about empty avatars.

Avatars became specific. Depending on how the player controls the avatar, he/she will not identify with it in the same way. Some avatars such as Mario or Lara Croft remain puppets in the hands of the player. Others, such as April Ryan from The Longest Journey, have their own will (she refuses to act too dangerously when the player asks her to). The avatar's utterances can also be very important to the player: April Ryan speaks a lot (and sometimes gives clues to the player when she talks) while Gordon Freeman from Half-Life does not speak at all. The player can be directly inside the avatar (Half-Life) or just suggesting where it could move (The Longest Journey). Semispecific avatars sit between nonspecific and specific avatars: the player does not know enough about them to form an opinion, but they have a decent background (like Link in Zelda, or Mario).

Men can identify with female avatars as long as the character acts in a role that men are comfortable with, such as exploration or adventure. However, women can be disgusted by hyper-sexualized female avatars. While men tend to simply use an avatar as a puppet, women care more about it and may appreciate being able to customize it as they wish. The more details the designer gives about an avatar, the more independent it will be.

Personality

Three factors help show a character's personality: its appearance, behavior and language.

Visual appearances: art-driven character design

Art-driven character design consists of thinking about the characters' appearance first. It is usually employed for quite shallow and straightforward characters that can also be used in other media like TV or comics. Characters can be humanoid (2 legs, 2 arms and a head), non-humanoid (vehicles, machines, animals or monsters) or hybrids (robots like C-3PO, or cyborgs). Cartoon appearances can provide four stereotypes for characters:

  • cool (detached but focused, clever and often rebellious to authority) like Ratchet or Otcho
  • tough (aggressive, strong, often hyper-sexualized) like Duke Nukem or Lara Croft,
  • cute (large eyes and heads, round body, innocent look but sometimes handle weapons bigger than themselves), like Mario, Sonic or Pikachu
  • goofy (funny, comedic), typically like Goofy from Disney

Representations of stereotypes may vary with culture (cute in a manga is not the same as cute in Tintin or Marvel comics) and age (cute or scary is represented differently for 5-year-old girls and 30-year-old men). Although kids love cartoonish characters, they hate goody-two-shoes ones.

Clothes and weapons suggest a lot about a character: see Darth Vader's helmet or Indiana Jones' hat and whip. A rapier suggests elegance, while a meat cleaver suggests blood and violence. Jewelry and accessories such as crowns, bracelets or rings also help a lot in recognizing a character's role. They can also act as containers of skills or powers that can be transferred between characters. Names (Bugs Bunny), nicknames (Snake), clothes color palettes (blue, red and yellow for Superman versus plain black for Batman) or sidekicks (Tails for Sonic or Watson for Holmes) also help define the characters. Sidekicks can sometimes help the player (the fairy Navi in Zelda) or give the player an additional perspective on the hero.

Concept art is done early in the design process and should not consist of overly elaborate drawings. The concept art is used by the marketing and programming teams to get a rough idea of the game.

Behaviors: story-driven character design

Story-driven character design consists of thinking about the character's role, personality and behavior rather than its appearance. Artists come in after the designer has decided how the avatar interacts with the game mechanics. Even though the interactions were not really complex, SSX Tricky gained a lot by including meaningful character rivalries in a snowboarding game.

When characters appear to the player for the first time, a minimum of information about them must be given: where does the new character come from? Why does the avatar meet him/her? Also, character traits should be shown/seen/experienced rather than stated directly in the game manual. Behaviors convey more depth about characters' personalities than their appearances do, provided the player has opportunities to observe these behaviors.

A character can be described by its attributes. Status attributes such as health points change frequently, while characterization attributes such as age or gender (nearly) never change during the game. Emotional states and relationships, as in The Sims, are a very recent kind of attribute that describes characters' behaviors.
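To make the distinction concrete, here is a minimal sketch (my own illustration, not taken from the book; the attribute names and values are invented) of a character whose characterization attributes stay fixed while its status and emotional attributes change during play:

    from dataclasses import dataclass

    @dataclass
    class Character:
        # Characterization attributes: fixed (or nearly fixed) for the whole game.
        name: str
        age: int
        gender: str
        # Status attribute: updated constantly by the game mechanics.
        health: int = 100
        # Emotional state: a more recent kind of attribute (cf. The Sims).
        mood: str = "neutral"

        def take_damage(self, amount: int) -> None:
            self.health = max(0, self.health - amount)
            if self.health < 25:
                self.mood = "afraid"  # behavior can then depend on the emotional state

    hero = Character(name="April", age=18, gender="female")
    hero.take_damage(80)
    print(hero.mood)  # prints "afraid"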

Dimensionality can give a more realistic perception of a character. The list below, deeply inspired by figures 6.9 to 6.12 of the book, illustrates the possible dimensions of characters with examples from The Lord of the Rings. Zero-dimensional characters switch between emotional states with no mixed feelings; they may have more than two emotional states, though. One-dimensional characters have only one emotion, which can change during the game. Two-dimensional characters have multiple non-conflicting impulses and face no ambiguity, while three-dimensional characters can have contradictory and conflicting emotions producing inconsistent behaviors. Three-dimensional characters can do things they do not really want to do, reluctantly, or even sabotage their own efforts subconsciously.

Number of character dimensions, with Lord of the Rings examples:

  • 0 dimensions: the Orcs
  • 1 dimension: Gimli, whose attitude towards elves changes over time
  • 2 dimensions: Denethor, who never faces any moral dilemma... until the end
  • 3 dimensions: Gollum and his relationship to the Ring

Characters, especially the hero, can grow while the player progresses through the game. They can grow physically, intellectually, morally or emotionally. RPGs often feature a rich and complex growth of the hero and other characters in the game. The character's stats, appearance, skills, language, interactions with other characters or even the plot can evolve to show various types of growth (more power, more knowledge, etc.). Some character archetypes, such as the mentor or the rival, have proven instrumental in the success of a story, but they should be used wisely.

Language: audio design

Characters can also be recognized by their distinctive sounds (Darth Vader's breathing) or phrases ("What's up, doc?" from Bugs Bunny). Much of sound design plays on psychological expectations: "glug glug glug" for a drowning person, or a metallic sound when metallic-looking objects are touched. Sounds must also fit the movements of the character.

An accent or vocabulary specific to a time period, social class or country helps set the context of the game. Bad grammar reveals bad schooling, or Master Yoda. Speed of speech can indicate excitement, boredom, anxiety or suspicion. The speaker's tone and vocal quirks such as a stutter also convey a lot about the character.

Test your skills

  • Think about a human two-dimensional character as a child, teenager and adult. Give several attributes giving clues about the age and maturity of the character at each stage.
  • Imagine two characters whose strengths and weaknesses complement each other. Show how they seem unalike but nevertheless complement each other quite well. Show how they are weak when they are alone.

19 April 2010

[Literature] Fundamentals of Game Design, ch5: Creative Play

Self-defining play

Choosing an avatar is an act of self-definition because the player identifies with the avatar he/she controls. A player can choose an avatar from the start (in Monopoly or car video games, for instance), customize his/her avatar (in old RPGs where the character acquires new skills or equipment, for instance) or build the avatar from scratch (in modern RPGs). The player can modify two types of avatar attributes: functional and cosmetic attributes.

Functional attributes affect the gameplay. They can change during the game (e.g. XP) or be defined by the player at the beginning (like strength, dexterity, intelligence, etc. from Dungeons and Dragons). The player should be able to know how the choices made about his/her avatar's attributes will impact the game. Giving players a random number of points to assign to their attributes allows them to make interesting choices and create an avatar who reflects their own personality or fantasies without unbalancing the game. Include a default configuration for players who do not want to spend too much time on avatar creation.
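As a rough sketch of that advice (my own illustration, not from the book; the attribute names, dice formula and default values are invented), a point-buy system can roll a random budget, validate the player's allocation, and fall back to a default build:

    import random

    # Default configuration for players who skip detailed avatar creation.
    DEFAULT_BUILD = {"strength": 7, "dexterity": 7, "intelligence": 6}

    def roll_budget() -> int:
        # A random point budget, D&D style: three six-sided dice plus a flat bonus.
        return sum(random.randint(1, 6) for _ in range(3)) + 10

    def is_valid(build: dict, budget: int) -> bool:
        # A build is legal if every attribute is positive and the total fits the budget.
        return all(v > 0 for v in build.values()) and sum(build.values()) <= budget

    budget = roll_budget()
    custom = {"strength": budget - 4, "dexterity": 2, "intelligence": 2}
    chosen = custom if is_valid(custom, budget) else DEFAULT_BUILD

Showing the remaining budget while the player edits these values is one way to make the impact of each choice visible before the game starts.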

Cosmetic attributes such as eye or skin color are not part of the core mechanics but bring a lot of fun and do not need to be balanced. Cosmetic attributes must stay cosmetic after game updates (for example, bigger avatars must not become stronger).

Creative Play

Creative play happens when the player builds or designs things, in games such as SimCity or Barbie Fashion Designer. Provide saving and sharing functionalities (cf. the Sporepedia or the Pokemon Global Trade Station). Creative play can be constrained or freeform.

Constrained creative play provides a structure or tools for the player's creativity, and features can be unlocked as the game progresses. Constraints can be based on the game's money (SimCity or RPGs), on the physics of the world (Bridge Construction Set), or on aesthetic standards. The aesthetic rules can be established by the game designer beforehand, they can change procedurally over time, or the public can vote online for their favorite creations.

Freeform play sets no restriction on the player's creativity at any time. Games based on constrained creative play sometimes offer a constraint-free sandbox mode. The construction of Spore creatures is an example of freeform play.

Other plays

The Movies or Stunt Island are games that feature storytelling play: they let players make their own movies and share them online. The player communities around The Sims have also produced stories made of commented screenshots of the game.

With mods, players can edit levels, items, characters and many other parts of the game. Hardcore FPS players sometimes develop stronger bots (in the sense of FPS opponents, not cheating programs) than those shipped with the original game. However, user-generated content (UGC) can sometimes be very ugly, inappropriate, or even immoral (porn, racism).

Test your skills

  • Think of a game where the player can build something other than vehicles, buildings or cities. What reward is given to the player? How could money be spent on the different pieces of this new construction? Could there be upgrades?
  • Find a set of existing real-world aesthetic rules (in architecture, clothing, design, music, interior decoration or landscaping, for instance). How could your game follow these rules to measure/appreciate the player's creations?
  • How can you make clear to the player the consequences of his/her avatar customization decisions during the avatar creation?
  • How can you create a sense of community between your players? How do you allow them to share their creations with others?

13 April 2010

[Literature] Fundamentals of Game Design, ch4: Game World

A game world is the place where the player pretends to be while inside the magic circle. The world is vital to sustain the interest of new players. Experienced players (of Counter-Strike, for example) sometimes stop seeing the game world and instead focus on the core mechanics (jumping, hiding and shooting at strategic moments). However, customers buy the game for the (visual, audio) fantasy of its game world made apparent on the box in the retail shop; the mechanics are only experienced after the game has been bought. [Although some stores do have consoles on-site so that customers can try demos of AAA games in the shop.]

Dimensions

Games have physical dimensions. Spatially, a game world can be 2D (side-scrollers), "2 and a half D" (god-view RTS), 3D (Tomb Raider) or 4D (which should actually be two different but related 3D worlds, as in some action-adventure games where the hero has a special sense/skill that lets the player see the world differently). The spatial dimension must serve the entertainment value of the game: Lemmings 3D was less successful than the original 2D Lemmings. The scale encompasses the absolute and relative sizes of objects, people, terrain, etc., as well as their speeds. It is sometimes necessary to distort scales a little, particularly in a god-view world. Zoom in and out when needed: houses or dungeons have to be zoomed in to be explored in a 2D RPG, for instance. Some games have natural boundaries (sport, driving or indoor/underground FPS), but sometimes the boundaries need to be disguised to preserve the magic circle: mountains, water, deserts, etc., or forcing the player to return to where the action is (flight simulators) - unless the game world can be represented as a sphere (cf. Populous), a cube or some other closed 3D shape.

Temporally, day and night can be meaningful (cf. Baldur's Gate). Most games that use time as a significant element skip or speed up uninteresting periods when nothing happens (between missions, or when all the Sims are at work, for instance). Letting soldiers fight continuously also lets the player play continuously, without pauses. Anomalous time consists of giving an implausible amount of time to a task, in absolute terms or relative to other tasks in the game: building a house takes nearly as much time as gathering berries, or the Sims take 15 in-game minutes to walk to the mailbox. Letting the player choose the speed of the game is a way to cope with periods of the game that would otherwise be too long or too boring.
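As a minimal sketch of that last idea (my own, not from the book), a game clock can expose a player-chosen speed multiplier so that dull stretches are fast-forwarded instead of sat through:

    class GameClock:
        def __init__(self) -> None:
            self.sim_time = 0.0  # elapsed in-game time, in seconds
            self.speed = 1.0     # 1.0 = normal, 3.0 = "ultra speed", 0.0 = pause

        def tick(self, real_dt: float) -> float:
            # Advance the simulation by real_dt seconds of wall-clock time.
            sim_dt = real_dt * self.speed
            self.sim_time += sim_dt
            return sim_dt

    clock = GameClock()
    clock.speed = 3.0       # the player fast-forwards a boring stretch
    clock.tick(1.0 / 60.0)  # one 60 FPS frame now advances three times as much game time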

The environmental dimension consists of the cultural context and the physical surroundings. The cultural context contains the overall background of the world (religion, politics, architecture, landscape, personal stories, etc.). The UI should map to the cultural context (i.e. a tribal look for a game about tribes). The physical surroundings, composed of visuals and sounds, define what the game looks like and are influenced by the cultural context. The level of detail determines the realism of the world; the rule of thumb is to include as much detail as you can until it begins to harm the gameplay. The style consists of both the content (e.g. a medieval city or a hospital) and the way that content is presented, i.e. the drawing style (impressionistic, black-and-white, etc.). Keep the style consistent throughout the game. Try to find original settings off the beaten track: everyone designs games set in the muddy, feudal European Middle Ages while, at the same time, Islamic culture was magnificent. Angkor Wat, Easter Island or Machu Picchu are other original settings. [How many world designers subscribe to National Geographic?] All is grist for the mill, but borrowing concepts from movies only gives a quick-and-dirty backdrop.

The emotional dimension in a single-player game comes from the storytelling and the gameplay. Multiplayer games also rely on relationships between players. Until recently, games have been seen only as light entertainment ... [but that] doesn't mean that's all they can be. Emotions can come from stimulating challenges at the appropriate difficulty. Emotions include the fulfillment of power, greed or ambition in Tycoon or god games. For suspense to work well, the player needs to feel vulnerable and unprepared. Love, jealousy and outrage can be felt by the player if he/she identifies with a character. Saving the universe may convey some emotions to kids, but adults will laugh at it. Trying to be fun can restrict the range of emotions conveyed (sorrow, guilt and despair are far from fun). The potential for our medium to explore emotions and the human condition is much greater than the term fun game allows for, but publishers and current markets want fun.

Games sometimes let the player do things he/she cannot or should not do in real life. Hence the designer can define the game's own ethics and morality. They are actually part of the culture but need particular attention. If the player has to kill people to win, then killing is not morally bad within the game. [Koster would argue that players do not give any credit to morality; they only take the mechanics into account, for the fun of it.] Violence happens everywhere; the only problem is how it is portrayed: chess pieces can be killed, but that is very far from killing humanoid characters with realistic graphics.

Bonus

  • Choose one film maker and one composer whose works could fit well together to create an emotional tone for an RTS, a mature action game or a child-friendly (non-violent) adventure game.
  • Pick a game. Which actions are rewarded? Are they moral in real life?

12 April 2010

[Literature] The field site as a network

In The field site as a network: A strategy for locating ethnographic research from 2009, Burell explores a variety of strategies devised by researchers to map social research onto spatial terrain. She relies on her work in Ghana to suggest considering the field as a network rather than a traditional physical place, particularly for Internet-related studies.

Traditional anthropology in remote villages assumed that external influences on the field site were minor and that the ethnographer could discover the terrain as the study went on. In contemporary ethnography, the vision of the place in which the study takes place influences the results of the study. Hence an ethnographer should carefully ponder which stance to take on the field. In doing so, the researcher simultaneously acknowledges the limitations of the study and builds the field site.

Burell mentions Marcus' paper when she explains that he (and other anthropologists) shifted from a notion of culture as essentially stationary to culture as constituted by intersection and flow. In the light of his paper, we know that Marcus proposed the "follow the people/objects/metaphor" strategies. She singles out Marcus' paper as the only one in her survey to explain how fieldwork may be located in ethnographic studies.

Some ethnographers - such as Mitchell - argue that the Internet is not a physical place at all: profoundly antispatial... you can not say where it is … you can find things in it without knowing where they are. However, other researchers, particularly those conducting virtual ethnography, showed that some individuals experience the Internet as profoundly spatial and social. T.L. Taylor underlined the duality of Internet users: a physical body facing a computer and an online avatar in a virtual world. Another disagreement in the ethnography community concerns the sharpness of the division between offline and online spaces. Some argued that the technologically-mediated rupture did not allow for connections between real and virtual lives. Others, such as Miller and Slater, based on their Trinidad field study in 2003, put forward that the Internet is continuous with other social spaces: real configurations can influence virtual ones. The network framework proposed by Burell in her article adopts this view, as it tries to escape strong offline-online divisions.

To introduce her network framework, Burrell raises what she calls a logistical issue: if interactions and events relevant to the field study happen everywhere in the system, how can the researcher be sure to know anything? How and when does the ethnographer know he/she has gathered enough data to know what is actually happening in the field? I found that Marcus argued ethnographers could validly dig into only parts of the whole system, provided they make explicit which stance they take; however, he did not really explain why. Burrell explains that this limitation is a logistical accommodation: the Internet is simply too vast to be studied as a whole. She then relies on her eight-month field study of Internet use in Ghana to illustrate her point more precisely.

Ghanaian marketers include "international" in their business names to make them sound more prestigious, and at the same time present products as local even though they come from abroad: local and global are not meaningful or discernible as distinct categories. Moreover, Burrell quickly found that cybercafé customers came irregularly, from many different places and for different reasons, making it difficult to find any satisfying result. Her dynamic network framework helped her solve this issue: the continuity (through connections) of networks let her link people, places, objects and so on.

The framework attempts to render concrete some of Marcus' suggestions for multi-sited ethnography. It describes six steps to build a proper field site.

  1. Following a trail (through what she calls an entry point) yields a more meaningful social-spatial mapping than, for instance, choosing people randomly. Example: A is interviewed, and B comes by car to pick A up. The researcher asking whether B can be interviewed as well is what I call following a trail, the entry point being A.
  2. The trails can be of various kinds: telecommunication, transportation, road and social networks are examples given by Burrell (a toy sketch of such a network follows this list).
  3. Studying the current site does not mean ignoring other places. For instance, in the Ghanaian Internet cafés she studied, Burrell realized that customers communicated with people in many other countries. By seeing the café as a point of intersection, she avoided having to spend time finding out what happens in those other countries.
  4. The multi-sited ethnographer should react to what is said in interviews before the end of the field study. In doing so, the researcher can establish early connections between the places, people or objects mentioned during those interviews and, if relevant, investigate them while still in the field.
  5. When people do not know a place, they imagine it spatially: the Internet or foreign countries, for example. The real-life Ghanaian popular imagination influences how virtual spaces are perceived. Interviews can reveal such popular perceptions.
  6. One simple way to determine when to stop is when time runs out. In practice, there comes a time in the fieldwork when nothing new emerges: meaning saturation … does not rely on spatial boundaries to define the ending point of research.
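
To picture the resulting field site, here is a toy sketch under my own assumptions (the node and trail names are hypothetical, not Burrell's notation): nodes are people, places or objects, and edges are the trails of a given kind followed from an entry point.

    # Toy model of a field site as a network: nodes are people/places/objects,
    # edges are trails of a given kind (social, transportation, telecom, ...).
    field_site = {"nodes": set(), "trails": []}

    def follow_trail(from_node, to_node, kind):
        """Record both endpoints and the trail that connects them."""
        field_site["nodes"].update([from_node, to_node])
        field_site["trails"].append((from_node, to_node, kind))

    # Entry point: interviewee A, met at a cybercafe; B picks A up by car,
    # and B phones relatives abroad.
    follow_trail("cybercafe", "A", "social")
    follow_trail("A", "B", "transportation")
    follow_trail("B", "relatives abroad", "telecommunication")

    print(sorted(field_site["nodes"]))
    print(field_site["trails"])

The field site is then the set of nodes actually reached by followed trails, not a pre-drawn spatial boundary.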

11 April 2010

[Literature] The emergence of multi-sited ethnography

The emergence of multi-sited ethnography by George Marcus is a literature survey from 1995 explaining how ethnography has moved from traditional single-site studies to more recent multi-site studies. Marcus details the domains that suit multi-sited ethnography and the different methods it can follow. [MMOG are perfectly suitable for this kind of methodology; in fact, I think this is the kind of methodology Celia Pearce followed in Communities of Play].

Marcus describes first the two “modes” of ethnographic research:

  • Traditional single-site ethnography studies mostly colonial contexts and focuses on relationships, language and objects to show the emergence of “new cultural forms”.
  • Contemporary multi-sited ethnography has no specific space- or time-frame. This is particularly suitable for migration or media studies. The author argues this method is efficient in revealing “cultural logics so much sought after in anthropology” because it consists of “following connections, relationships and associations” from one site to another.

Three possible methodological issues to multi-sited ethnography are raised by the author:

  • Because ethnography is a close and local study of people, its results cannot be extended to characterize the whole system. The goal of multi-sited ethnography is not to establish a global view of the system through local analyses, but rather to determine the connections between locales inside the whole system.
  • Being in multiple sites presumably reduces the depth of field knowledge. In fact, Marcus argues that traditional single-site ethnographies that rely on previous work in other locales are actually doing multi-sited ethnography. Mobility does not mean shallower findings but rather the ability to appreciate movements, transitions and translations between locales.
  • Ethnography takes power relationships into account by adopting the subalterns' (the dominated) point of view. Multi-sited ethnography does not forget this crucial aspect: it does not simply add local information outside the subaltern focus, and it does not simply compare individuals across sites; it associates and links individuals in different time and space situations.

Many disciplines use multi-sited ethnography. Media studies research the industry's production of TV programs and movies, but also their reception by the audience. [Games are being integrated little by little into media studies as well.] Media studies also encompass the study of indigenous media production. Other domains include reproduction and reproductive technologies (feminist and medical anthropology), epidemiology, new modes of communication (Internet, mobile phones), environmentalism and biotechnology.

A multi-sited ethnographer can base his/her research methodology on following the:

  • People: “Stay with the movements of a particular group”. The most common style.
  • Things: “tracing the circulation of a material object such as commodities, gifts, money, works of art ...”. Quite widespread in capitalist world systems.
  • Metaphor: trace “associations that are most clearly alive in language use and print or visual media”.
  • Plot, story or allegory: “myth analysis”
  • Life or biography: “succession of narrated individual experiences”
  • Conflict: mostly used in the “anthropology of law”, often about mass-media topics of interest.

10 April 2010

NPC and virtual society

Think inside or outside the box?

In order to immerse the player, game designers build what is called a magic circle. NPC are instrumental in setting up this magic circle. Their dialogs set the tone of the world. Their quests teach the player which basic avatar actions (walk, talk, attack, buy) can be used to achieve higher-level goals (leveling, getting appropriate equipment or traveling). Such NPC somehow teach the player which actions can be done in the society.

A society can be described (maybe not entirely, though) by what it contains or supports. In Western societies for example, charity or monogamy are seen positively. But a society could also be described by what it does not support (selfishness and polygamy). In fact, immersing a player in a world with original social rules could be done more easily by showing the player the few "bad" NPC rather than the anonymous crowd of "good" NPC. At the beginning of chapter 2 of Baldur's Gate II, for example, Jon Irenicus is apprehended by the Cowled Wizards: having cast spells and killed people inside the city, he is clearly defined as an outlaw by the very fact that the Wizards come to arrest him. With this single cinematic sequence, the player understands that his/her acts will have consequences. The message comes across more directly, meaningfully and intuitively than if it were delivered by several "good" NPC (eg a tutorial character mentioning in a dialog that "every act has its consequences", or city Guards saying "I am keeping an eye on you"). The illustration nearby captures the idea: covering the entire (white) surface of a box requires far more crosses than marking its (red) boundary.
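
A quick back-of-the-envelope sketch of that surface-versus-boundary intuition, with a square grid standing in for the set of possible behaviors (my own toy example, not the original illustration):

    # Marking every cell of an n x n grid (every "good" behavior) versus
    # marking only its boundary (the few "bad" counter-examples).
    def marks_needed(n):
        full_surface = n * n                    # one cross per cell
        boundary = 4 * n - 4 if n > 1 else 1    # outer-edge cells only
        return full_surface, boundary

    print(marks_needed(10))  # (100, 36): the boundary is much cheaper to mark

The bigger the world, the wider the gap: the boundary grows linearly while the surface grows quadratically.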

A sense of belonging

As seen above, NPC can help the player learn the rules of the virtual society. But they can also strengthen the magic circle by giving the player a sense of belonging to this virtual world.

Robert Hercz, a Canadian journalist writing for Saturday Night, wrote that psychopaths are not like the rest of us. Among his examples of psychopaths he includes the con man, whose real self is manipulative, lying, parasitic, and irresponsible. Successful fiction about psychopaths, such as The Silence of the Lambs or the TV series Dexter, insists on the differences between "them" (psychopaths) and "us" (normal people): they can kill in cold blood (pun intended), they manipulate people without remorse, etc. These differences remind us that we are not psychopaths; hence they comfort us in our belonging to society.

Similarly, Bergson in Laughter explains that we laugh mainly to compensate for a "bug" in a situation. He gives the example of people not paying attention: stumbling on the sidewalk curb, colliding with a streetlight, or falling off a chair one has just tried to sit on. The lack of attention is the bug, turning attentive humans into stupid, blind machines. Laughter acts as a social protection.

In video games, NPC are most of the time designed around the function they provide to the player (quests are used to earn XP, merchants to make money and monsters to complete quests or earn XP) and not around the experience they provide. Designing such flat, purely functional NPC does not strengthen the magic circle. One could argue that in movies, kicking the dog, "You have failed me" or "You have outlived your usefulness" followed by the horrified faces of the "normal" people go in this direction. To my mind, they are just clichés used to show how bad the Big Bad really is; their goal is not the magic circle. For games, though, simple NPC dialogs or actions could easily convey the sense of belonging to the society. Why not have an NPC spontaneously laugh at another during an embarrassing situation? If having a lot of money should not be a symbol of success in your favorite MMOG, why not think of NPC who criticize rich players?
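
Here is a minimal sketch of how such a social reaction rule could be wired into a game (entirely my own; all names are hypothetical, not from any engine): an NPC that witnesses a social "bug", or a value the society rejects, responds with a line of dialog.

    from dataclasses import dataclass

    @dataclass
    class Npc:
        name: str
        gold: int = 0

    def react(observer, event, subject):
        """Return a line of dialog when an NPC witnesses a social event."""
        if event == "stumble":
            # Bergson-style laughter at a lack of attention.
            return f"{observer.name} laughs: 'Watch your step, {subject.name}!'"
        if event == "flaunt_wealth" and subject.gold > 10 * max(observer.gold, 1):
            # If wealth is not a value of this society, criticize rich characters.
            return f"{observer.name} sneers: 'All that gold, and for what?'"
        return None  # no social reaction to this event

    # Usage: a poor bystander NPC reacts to a rich player showing off.
    bystander = Npc("Old Fisherman", gold=5)
    player = Npc("Player", gold=5000)
    print(react(bystander, "flaunt_wealth", player))

The point is not the specific rules but that reactions are driven by the society's values rather than by the NPC's function alone.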