More on the MVC pattern.
I wrote in September 2011 about which type of MVC would be more appropriate to make games, but this was all quite abstract. Here are more concrete thoughts, after having had to implement it myself using Pygame. But first: a diagram!
Pygame provides a ticker that regulates the game loop with 10-ms precision. The elements that need to be awakened by the game loop are colored in pink on the diagram above. In more detail, they are:
- Mechanics: Some events independent of the player have to be triggered at certain points. For instance, the screen could turn red after 5 minutes into a scenario, the player's money could generate interest every 10 seconds, or the monsters' AI may need to process the game state every 50 ms in fight mode but only every second in idle mode. That's why your mechanics have to be called at every loop iteration.
- Renderer: In Pygame, this corresponds to blitting sprites onto the screen, then flipping the screen, and/or playing sounds. With the architecture displayed above, the frame rate (say, 60 FPS) is independent of the main loop frequency (say, 100 iterations per second). This is useful for machines with a decent CPU but a weak graphics card: the view could be configured to refresh the screen only 30 times per second, while the logic still runs at 60 or more iterations per second. But there's more: since the renderer accesses the game state, it can determine whether the load is going to be too heavy at 30 FPS, and decrease the frame rate gracefully without having to slow down the mechanics or the input controller.
- Input Controller: events such as clicks or key presses are processed one after another during each loop iteration. The input controller then sends a translated version of these events to the Main Controller (e.g. 'Q' is translated into "stop the game").
- Network Controller: events may be sent by the server at any time, and the client may also need to send actions to the server at any time. Therefore, the network controller is called at every loop iteration. This is done using PodSixNet: ConnectionListener.Pump() for the pulling and EndPoint.Pump() for the pushing. Under the hood, PodSixNet uses asyncore (which apparently boils down to a normal select() loop).
Note: When the network controller needs to send a part of the game state over the network, it asks the Main Controller, which returns the data from the game state itself.
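Here is a minimal sketch of that loop, with stand-in classes. The names (Mechanics, Renderer, run) come from the diagram's labels, and the "render every Nth iteration" trick is my own assumption about how to decouple the two rates, not something Pygame or PodSixNet imposes:

```python
class Mechanics:
    """Game logic that must run at every loop iteration."""
    def __init__(self):
        self.ticks = 0

    def update(self):
        self.ticks += 1  # e.g. accrue interest, run monster AI, trigger timed events


class Renderer:
    """Blits and flips at its own rate, independent of the loop rate."""
    def __init__(self, fps, loop_rate):
        self.frames = 0
        self._every = max(1, loop_rate // fps)  # render on every Nth iteration

    def render(self, iteration):
        if iteration % self._every == 0:
            self.frames += 1  # blit sprites, flip the screen, play sounds


def run(loop_rate=100, fps=30, iterations=100):
    """One hundred iterations of the main loop; in real code this would be
    a `while` loop throttled by pygame.time.Clock().tick(loop_rate)."""
    mechanics, renderer = Mechanics(), Renderer(fps, loop_rate)
    for i in range(iterations):
        # input_controller.poll() and network_controller.pump() would
        # also be called here, once per iteration
        mechanics.update()
        renderer.render(i)
    return mechanics.ticks, renderer.frames
```

With loop_rate=100 and fps=30, the mechanics run every iteration while the renderer only fires on every third one, which is exactly the decoupling described above.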
Keys vs Clicks
When the player pushes a key, the scenario is very easy to follow:
- Input controller translates the key and passes the result to the main controller
- Main controller calls the appropriate mechanics
- Mechanics update the state
- Next frame, the renderer displays the state.
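The key scenario can be sketched in a few lines. The key map and the class names here are hypothetical, just to show the translation step:

```python
class Model:
    """The game state; the renderer reads it on the next frame."""
    def __init__(self):
        self.running = True


class MainController:
    """Routes translated inputs to the appropriate mechanics."""
    def __init__(self, model):
        self.model = model

    def stop_game(self):  # a mechanic: update the state
        self.model.running = False


class InputController:
    """Translates raw keys into game commands (this key map is an assumption)."""
    KEYMAP = {'q': 'stop_game'}

    def __init__(self, main_ctrl):
        self.main_ctrl = main_ctrl

    def on_key(self, key):
        command = self.KEYMAP.get(key)
        if command:
            getattr(self.main_ctrl, command)()


model = Model()
input_ctrl = InputController(MainController(model))
input_ctrl.on_key('q')  # next frame, the renderer sees model.running == False
```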
The scenario is slightly different for clicks.
- Input controller receives the click type (left/right/middle/both/...) and the click position (x, y)
- Input controller gives the view controller the click type and the position.
- From the click position, the view controller scans the list of sprite coordinates and dimensions to detect which sprite(s) have been clicked.
- For each click type, the clicked sprite has a callback to the main controller. This callback was set by the view controller when the view as a whole was created. Hence the solid arrow from view controller to main controller (true dependency), and the dotted arrow from sprites to main controller (blind callback set by another component).
- Main controller calls the appropriate mechanics, etc.
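A sketch of the click path, under my own naming assumptions (buy_item stands in for whatever mechanic the button triggers; the real sprites would be Pygame sprites with rects):

```python
class Sprite:
    """Knows its rect, but not what a click means: the callback is blind."""
    def __init__(self, rect):
        self.rect = rect            # (x, y, w, h)
        self.on_left_click = None   # set later by the view controller

    def contains(self, pos):
        x, y, w, h = self.rect
        return x <= pos[0] < x + w and y <= pos[1] < y + h


class MainController:
    def __init__(self):
        self.calls = []

    def buy_item(self):             # hypothetical mechanic behind this button
        self.calls.append('buy_item')


class ViewController:
    """Finds the clicked sprite and wires its callback to the main controller."""
    def __init__(self, sprites, main_ctrl):
        self.sprites = sprites
        for s in sprites:                         # solid arrow: view ctrl -> main ctrl
            s.on_left_click = main_ctrl.buy_item  # dotted arrow: blind callback

    def process_click(self, pos, click_type):
        for s in self.sprites:
            if click_type == 'left' and s.contains(pos) and s.on_left_click:
                s.on_left_click()


main_ctrl = MainController()
view_ctrl = ViewController([Sprite((10, 10, 32, 32))], main_ctrl)
view_ctrl.process_click((20, 20), 'left')  # hits the sprite, fires the callback
```

Note that the Sprite class never imports or names the main controller: it only holds whatever callable the view controller handed it, which is the "blind callback" of the dotted arrow.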
How good is this?
From the two scenarios above, I see at least two ways this architecture breaks the traditional MVC. First, the view is not selected by the controller. Rather, the view looks at the model to know which subview or view mode to switch to. For instance, if I push the escape key, my MainController asks the model to store that fact, and the renderer itself decides what to do with that newly stored information, whatever it means for the model.
Second, clicks require the controller to ask the view what those clicks mean. I was at first reluctant to assign the button behaviors dynamically because it decreases understandability. When you read the button code, you don't know what the button does at all. In fact, all you see in the code is a raise NotImplementedError. You have to go look inside the view controller to see what is being assigned to that button's on_left_clicked(). On the other hand, the gain in modularity is pretty sweet: you can change the presentation of an object, whether in the HUD or in the game world, independently of its logic. If you want to try another view (say, 3d instead of the current top-down 2d), then that new view only needs to provide 2 "services": render() for the main loop, and process_click(pos, type) for the input controller.
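That two-method contract could look like the following sketch (the base class and the 2d subclass are illustrative, not code from my game):

```python
class View:
    """The only contract a view must honor; concrete views can be 2d or 3d."""
    def render(self):
        raise NotImplementedError

    def process_click(self, pos, click_type):
        raise NotImplementedError


class TopDown2DView(View):
    """One possible presentation; a 3d view would subclass View the same way."""
    def __init__(self):
        self.frames = 0

    def render(self):
        self.frames += 1  # blit the top-down 2d sprites here

    def process_click(self, pos, click_type):
        # map the screen position back to a game object
        return ('2d-pick', pos, click_type)


# Swapping in another view touches neither the main loop nor the input controller:
view = TopDown2DView()
view.render()
```

The main loop only ever calls view.render(), and the input controller only ever calls view.process_click(), so the swap is invisible to both.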
Edit 31 Dec, 2011: Just saw this example from Shandy Brown on using the Mediator pattern as a middleman that views and controllers pubsub to. I like the "loggers as views" idea. However, I'm not sure the clock-triggered events should live in a controller; the model should have some game logic in it.