Real-Time Multiplayer Gaming: Keeping Everyone on the Same Page
Sean A. Pfeifer
Embry-Riddle Aeronautical University
sarcius@
Abstract
In the multiplayer world, whether in simulations or games, keeping clients and servers updated with the correct information can be a difficult problem to tackle. In a business where millions of customers rely on your systems to react in a timely fashion, it is important to handle latency properly. Even projects such as military simulations must deal with these issues. One of the main issues is dealing with network latency on both the client and server side. Another major issue arises at the same time: what about people who attempt to cheat or misuse the system? In other words, how do we deal with the latency issue while preventing these systems from being abused to give certain players an unfair advantage? There are a multitude of ways to deal with both issues, each with trade-offs that must be balanced in order to create an acceptable experience.
1. Introduction
Latency and cheating issues create major problems for a multiplayer application that is supposed to be “real-time.” The term “real-time” is used in this context to describe applications that have a low tolerance for missed deadlines. In this case, the systems are “soft real-time,” with flexible deadlines focused more on player experience. Often, fixing one issue has a negative effect on the other: when clients have more authority and logic, they can operate under high latency, but this usually opens a wide door for cheaters.
In these games, the game environment is usually changing constantly on the server, and clients must therefore be kept properly updated. Events often occur server-side that require a player to react within a short period of time or face consequences, and in this way the entire system is real-time. Deadlines may occasionally be missed with only a slight degradation in experience as the consequence, though the response-time constraints differ based on the application.
It may help to put the problem into perspective if one imagines the applications being described as multiplayer action games, such as those in the “first-person shooter” or “flight simulation” genres. For instance, if a player is defending their base from an enemy scout who is about to open a door to enter, a delay of a fraction of a second for that one player can cost the whole team the game. Another good example is a pilot attempting to land an airplane when, all of a sudden, the controls and instruments stop responding. If these controls and instruments do not respond within a certain period of time, the pilot is very likely to have a harsh landing. When these sorts of things happen, players tend to get very agitated, and if the issue recurs constantly, your product sales may suffer. The timing issues also pertain to other games, such as MMORPGs (Massively Multiplayer Online Role-Playing Games), though those applications can arguably be classified as “softer” with regard to latency.
2. Latency Issues
One may argue that the latency problem is not substantial given today's high-speed networks. High-speed networks are indeed helpful; however, many people still have slow or unreliable broadband connections [1]. Running on a local area network is not feasible for the architecture of the majority of multiplayer games. Because of this, users can expect to experience packet loss, slow connections, and out-of-order packets on the path from one end to the other. This may make the problem seem like a simple networking issue, but, as you will see, the issue plays a more central role in creating the application itself. On a Local Area Network (LAN), latencies are generally under 10 milliseconds; when dealing with the Internet as a whole, however, latencies can range from 100 milliseconds to 500 milliseconds, with good connections generally around 50 milliseconds [2].
As stated before, having latent clients has a different impact on the gameplay of different genres. In general, games that require split-second reaction time, such as combat simulation or first-person shooter games, have a smaller tolerance for latency and missed deadlines than others.
Figure 2.1: Latency Thresholds per Genre [2]
Figure 2.1 is a table that lists the approximate latency thresholds for a few of the major classes of games [2]. This data was collected as part of game latency studies, which illustrated “the effects of latency on performance for player actions with various deadline and precision requirements” [2]. The studies measured performance for certain time-related actions in different game genres, such as accuracy in a shooter game, or time taken to complete a lap in a racing game [2]. The different models presented as part of the figure describe how a player is represented in-game. The “avatar” model describes a game in which a player has a certain character to represent him or her inside of the game. Avatar games may have different perspectives, usually first-person or third-person, which describe whether the camera view looks out from the character's eyes or from outside, looking at the character. The “omnipresent” model is a situation where the player has no visible character, but instead has, for instance, an overhead view of the game area and directs units to certain locations. As you can see, the example genres that you would expect to have high reaction requirements have lower latency thresholds, while those that are a bit slower paced can tolerate more lag-time.
Figure 2.2: Latency-Performance Measures in Genres [2]
The graph in Figure 2.2 describes the performance in each of the three previously introduced game types [2]. The gray area “is a visual indicator of player tolerances for latency,” and the areas below are generally unacceptable [2]. Of course, these tolerances vary depending on the game and the player, but this graph should give a general idea of the concept [2].
Rather than attempting to compensate for this issue by improving the connection – which is out of the hands of the game developers – we can use techniques to allow “the game to compensate for connection quality” [1]. Two common techniques are client-side prediction and lag compensation [1]. Each of these techniques has its own drawbacks and “quirks,” but they can assist in solving the overall problem of unacceptable lag for players. Note that these do not actually reduce the latency of the connections, but rather the perceived latency in the game for players.
Client-Side Prediction
Client-side prediction is a method that attempts to “perform the client's movement locally and just assume, temporarily, that the server will accept and acknowledge the client commands directly” [1]. As a game designer, you have to let go of the idea that clients should be “dumb terminals,” and must build more of the actual game logic into the clients [1]. However, the client is not in full control of the simulation – there is still an “authoritative server” running to ensure clients stay within certain bounds (for example, not instantly teleporting across the map when they have to walk) [1]. With this authoritative server, “even if the client simulates different results than the server, the server's results will eventually correct the client's incorrect simulation” [1]. One potential problem with using this technique is that “this can cause a very perceptible shift in the player's position due to the fixing up of the prediction error that occurred in the past” [1].
To perform this prediction, the client stores a certain number of commands that have been entered by the user; when there is lag in the connection, the client takes the last command acknowledged by the server and attempts to simulate forward using the most recent data from the server [1]. In a popular multiplayer game called Half-Life, “minimizing discrepancies between client and server logic is accomplished by sharing the identical movement code” between clients and servers [1]. One issue with this method of latency reduction is that clients will likely end up running the same commands repeatedly until they are acknowledged by the server, and must decide when to play sounds or visual effects based on these commands [1]. This is all well and good for predicting one's own movement, but what about predicting the movement of others in the game world, so they don't seem to lag about?
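The store-and-replay loop described above can be sketched in a few lines. This is a minimal illustration, not Half-Life's actual implementation; the names here (`Command`, `PredictedClient`, `apply_move`) are hypothetical, and a real engine would share far richer movement code between client and server [1].

```python
from dataclasses import dataclass

@dataclass
class Command:
    seq: int      # sequence number assigned by the client
    dx: float     # requested movement this tick

def apply_move(position: float, cmd: Command) -> float:
    """Shared movement logic: identical on client and server."""
    return position + cmd.dx

class PredictedClient:
    def __init__(self):
        self.position = 0.0   # locally predicted position
        self.pending = []     # commands not yet acknowledged by the server

    def issue_command(self, cmd: Command):
        # Predict immediately instead of waiting for the server round-trip.
        self.pending.append(cmd)
        self.position = apply_move(self.position, cmd)

    def on_server_update(self, last_acked_seq: int, server_position: float):
        # Drop acknowledged commands, then replay the rest on top of the
        # authoritative state -- this corrects any mis-prediction.
        self.pending = [c for c in self.pending if c.seq > last_acked_seq]
        self.position = server_position
        for cmd in self.pending:
            self.position = apply_move(self.position, cmd)

client = PredictedClient()
for i in range(3):
    client.issue_command(Command(seq=i, dx=1.0))
# Server has only processed command 0 and (say) clamped the result to 0.5.
client.on_server_update(last_acked_seq=0, server_position=0.5)
print(client.position)  # 0.5 + 1.0 + 1.0 = 2.5
```

Note how the server's correction is applied first and the unacknowledged commands are replayed on top of it; this is what causes the “perceptible shift” the text mentions when the prediction was wrong.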
One of the two major methods of determining the location of other objects in the game world is “extrapolation” [1]. Extrapolation is performed on the client and attempts to simulate an object forward in time to predict its next position [1]. Using this method, clients can reduce the effect of lag if the extrapolated object has a straight, predictable path. However, in most first-person shooter games, “player movements are not very ballistic, but instead are very non-deterministic and subject to high jerk” [1]. This constant change in player movement makes it unrealistic to apply this method in that circumstance. To help limit the large error that can occur in extrapolation, the extrapolation time can be reduced, effectively reducing how far into the future the object is predicted [1]. However, players must still lead their targets, even with “instant-hit weapons,” because of the latency being experienced [1]. In addition, players may have an extremely difficult time hitting opponents that seem to be “'warping' to new spots because of extrapolation errors” [1].
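A minimal extrapolation (dead-reckoning) sketch, including the capped extrapolation time described above, might look as follows. The function name and the 0.25-second cap are assumptions for illustration; the cap would be tuned per game.

```python
MAX_EXTRAPOLATION = 0.25  # seconds; an assumed cap, tuned per game

def extrapolate(last_pos, last_vel, time_since_update):
    # Clamp the extrapolation window so a stale update cannot fling the
    # object arbitrarily far along its old velocity vector.
    dt = min(time_since_update, MAX_EXTRAPOLATION)
    return (last_pos[0] + last_vel[0] * dt,
            last_pos[1] + last_vel[1] * dt)

# Object last seen at (10, 5) moving +2 units/s on x; update is 0.1 s old.
print(extrapolate((10.0, 5.0), (2.0, 0.0), 0.1))   # ≈ (10.2, 5.0)
# A 1-second-old update is clamped to the 0.25 s cap.
print(extrapolate((10.0, 5.0), (2.0, 0.0), 1.0))   # ≈ (10.5, 5.0)
```

The clamp is exactly the “reduce the extrapolation time” mitigation: it bounds the worst-case position error at the cost of the object visibly snapping when the next real update arrives.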
The second major method is “interpolation,” which “can be viewed as always moving objects somewhat in the past with respect to the last valid position received for the object” [1]. In this method, you buffer data in the client and display it after a certain period of time [1]. This method helps with the visual smoothness of other objects in the game world, but it can worsen the interaction latency issue – players are not drawn as soon as data is received, but rather, say, 100 milliseconds in the past [1].
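The buffering idea can be sketched as follows, assuming the 100-millisecond delay from the text. The names and snapshot format are hypothetical; the point is that the renderer blends between two *known* positions rather than guessing a future one.

```python
INTERP_DELAY = 0.1  # seconds of deliberate delay; 100 ms as in the text

def interpolate(snapshots, render_time):
    """snapshots: list of (timestamp, position), oldest first."""
    target = render_time - INTERP_DELAY
    # Find the pair of snapshots that bracket the target time and blend.
    for (t0, p0), (t1, p1) in zip(snapshots, snapshots[1:]):
        if t0 <= target <= t1:
            alpha = (target - t0) / (t1 - t0)
            return p0 + (p1 - p0) * alpha
    # Fall back to the newest known position if nothing brackets the target.
    return snapshots[-1][1]

snaps = [(0.00, 0.0), (0.05, 1.0), (0.10, 2.0)]
# At render time 0.175 we display t = 0.075, halfway between two snapshots.
print(interpolate(snaps, 0.175))  # ≈ 1.5
```

Because the displayed time is always `INTERP_DELAY` behind, other players appear perfectly smooth but slightly in the past, which is precisely the interaction-latency trade-off noted above.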
Lag Compensation
Another common technique for compensating for latent connections is “lag compensation.” Lag compensation can be thought of as “taking a step back in time, on the server, and looking at the state of the world at the exact instant that the user performed some action” [1]. This technique does not perform client-side actions, but rather deals with the state of objects on the server. Note that the entire state of the object is moved back in time, not simply its location [1]. As a result, players can play on their own systems without seeming to experience latency [1]. The game design must be modified to take this functionality into account; the technique requires servers to store a certain amount of historical data in order to perform the “step back in time.”
This may seem like a great remedy for latency, but, like client-side prediction, it has its drawbacks. At times, “inconsistencies that sometimes occur ... are from the points of view of the players being fired upon” [1]. For example, when a “highly lagged player shoots at a less lagged player and scores a hit, it can appear that the lagged player has somehow 'shot around a corner'” [1]. This issue is not usually as extreme in “normal combat situations,” but it can still occur [1]. To keep things fair, the server should most likely only accept commands from a reasonable period of time in the past; otherwise the majority of players could have an unacceptable experience due to a small number of extremely lagged players.
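A minimal server-side sketch of this rewind, including a cap on how far back the server will look, might read as follows. The class and snapshot layout are hypothetical (a real server rewinds full object state, not just positions, as noted above).

```python
MAX_REWIND = 0.2  # seconds; assumed cap on how far back the server will look

class LagCompensatedServer:
    def __init__(self):
        self.history = []  # list of (timestamp, {player_id: position})

    def record_snapshot(self, timestamp, positions):
        self.history.append((timestamp, dict(positions)))

    def positions_at(self, now, shooter_latency):
        # Rewind by the shooter's latency, but never beyond MAX_REWIND, so
        # extremely lagged players cannot drag everyone else into the past.
        target = now - min(shooter_latency, MAX_REWIND)
        # Use the most recent snapshot taken at or before the target time.
        best = self.history[0][1]
        for t, positions in self.history:
            if t <= target:
                best = positions
        return best

server = LagCompensatedServer()
server.record_snapshot(0.0, {"victim": 10.0})
server.record_snapshot(0.1, {"victim": 12.0})
server.record_snapshot(0.2, {"victim": 14.0})
# A shot arriving at t=0.2 from a shooter with 100 ms latency is judged
# against the t=0.1 snapshot -- where the victim appeared to the shooter.
print(server.positions_at(0.2, 0.1)["victim"])  # 12.0
```

The `MAX_REWIND` clamp implements the "reasonable period of time in the past" limit: beyond it, a lagged shooter simply has to lead the target.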
3. Cheating
The previously introduced techniques assist in handling latency-performance issues in multiplayer games, but may interfere with other aspects of the game. The big question here is “How much authority and information do I want to give the clients?” When dealing with “dumb terminals,” clients simply send the server messages for actions they wish to perform, and the server replies with the reaction.
If client-side prediction is used, the issue of cheating might seem to be removed, as clients have no authority over decisions that are made. However, cheaters can still abuse the system. One such example is what's known as a “time-cheat,” which gives the cheater an unfair advantage by allowing him or her to “see into the future, giving the cheater additional time to react to the other players' moves” [3]. A cheater using this technique may be hard for other players to detect, as it may seem that the player is simply lagging and has good luck [3]. For example, a cheater who has low latency and is receiving data on time may report that the data was received late. By claiming late receipt, the cheater can make decisions based on past data: he or she could fire a weapon at the previous position of a player, report that the shot was fired in the past, and score a hit. A player who can use past data like this could potentially perform flawlessly, ruining the fairness of the game!
In order to deal with this, one may employ a few different solutions. One solution would be to simply place anti-cheat software on the client's system and update it as new cheats are found. This solution can be an annoyance for players, as an extra, intrusive piece of software is generally frowned upon by gamers. In addition, this solution depends on cheats occurring in the wild, where they can be analyzed and countered, and usually only after the cheat has become widespread. This method may be effective enough for some applications, but unacceptable for others. Alternatively, the communication protocol can be modified to prevent certain kinds of time-cheats, using a protocol like the “sliding pipeline protocol” [3]. With this protocol, whose details are described in [3], it is guaranteed that “no cheater sees moves for a frame to which it has not yet committed a move and ... that no cheater may continually decide on a move with more recent information than a fair player had” [3]. In general, the game designer must take into account the fact that there will be players attempting to cheat in this way, and can adjust, on the fly, the amount of time the server will “look into the past” based on the latency of all users [3].
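The sliding pipeline protocol itself is more involved (see [3]), but its core guarantee – that no player sees others' moves for a frame before committing its own – can be illustrated with a simple commit-then-reveal exchange using hash commitments. This sketch is an illustration of that guarantee, not the protocol from [3]; all names are hypothetical.

```python
import hashlib
import os

def commit(move: str):
    """Return (commitment, salt). The random salt prevents an opponent from
    brute-forcing the commitment over a small set of possible moves."""
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + move.encode()).digest()
    return digest, salt

def verify(commitment: bytes, salt: bytes, move: str) -> bool:
    """Check that a revealed move matches the earlier commitment."""
    return hashlib.sha256(salt + move.encode()).digest() == commitment

# Frame n: both players send commitments before either reveals a move, so
# neither can choose a move after seeing what the other did.
c_a, s_a = commit("fire at (3, 4)")
c_b, s_b = commit("dodge left")

# Reveal phase: moves are disclosed and checked against the commitments.
print(verify(c_a, s_a, "fire at (3, 4)"))  # True
print(verify(c_b, s_b, "dodge right"))     # False -- the move was altered
```

A player who waits to see the opponent's revealed move before choosing its own is caught, because any move that differs from the one committed fails verification.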
As expected, issues can also occur when clients run their own simulations of the game-world. When clients simulate the world, they may be given more information than they are normally allowed to see. For example, if a wall is in front of a player, that player usually cannot see what is on the other side, and so will not be informed of it. A common cheat abuses this system to show the player the locations of other players or objects – to see through walls. One of the only ways to prevent cheating in this instance is to install anti-cheat programs (such as Valve Anti-Cheat) that attempt to detect known cheats.
When running simulations, clients must also be kept in check to prevent impossible or unfair actions. As noted before, there must be an authoritative server that checks on the actions of each client. A simple instance of this would be when a cheating client claims they have raced six laps around a race track, when in reality the race had just started. The client would report this, and the server would have to verify that the client's position had moved a valid amount, or else it should reject the claim made by the client. Without proper checking by the authoritative server, clients can get away with performing impossible feats.
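The race-track example above amounts to a plausibility check on reported movement. A minimal version of such a check, with assumed names and an assumed speed limit, could look like this:

```python
MAX_SPEED = 50.0  # units per second; an assumed game-specific limit

def validate_move(last_pos, new_pos, elapsed_seconds):
    """Accept a reported move only if it is physically possible at MAX_SPEED."""
    dx = new_pos[0] - last_pos[0]
    dy = new_pos[1] - last_pos[1]
    distance = (dx * dx + dy * dy) ** 0.5
    return distance <= MAX_SPEED * elapsed_seconds

# A client moving 40 units in one second is within bounds...
print(validate_move((0.0, 0.0), (40.0, 0.0), 1.0))   # True
# ...but claiming six laps' worth of distance in that second is rejected.
print(validate_move((0.0, 0.0), (6000.0, 0.0), 1.0)) # False
```

A real authoritative server would run the full shared movement code rather than a distance bound, but even a coarse check like this blocks the most blatant impossible feats.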
All in all, the latency issue must be dealt with while keeping the possibility of cheaters in mind – unless, of course, you don't care about cheaters!
4. Conclusions
Latency issues in games are still very real today, even if the majority of players have high-speed connections. The trick in dealing with these issues lies in determining what type of technique to use for a specific system. Each of the techniques presented in this paper has its own quirks and downsides, and each differs considerably in implementation. Because each technique is deeply rooted in the functionality of the client and/or the server, the team must decide during design what combination of methods will be used. In addition to dealing with the latency issue, the team may or may not decide to take into account how cheaters will attempt to abuse the system – each method of latency reduction may introduce different possibilities for cheating. Again, the idea is not to reduce the actual lag-time between client and server, but to make “users find [the game's] performance acceptable in terms of the perceptual effect of its inevitable inconsistencies” [4]. Ultimately, the goal of the game designers when dealing with latency should be to make the game both playable and fair.
5. References
[1] Bernier, Yahn, Latency Compensating Methods in Client/Server In-game Protocol Design and Optimization, Valve, [Cited 2007, October 4], Available HTTP: http://www.resourcecode.de/stuff/clientsideprediction.pdf.
[2] Claypool, Mark, Claypool, Kajal, On Latency and Player Actions in Online Games, July 8, 2006, [Cited 2007, October 4], Available HTTP: ftp:///pub/techreports/pdf/06-13.pdf.
[3] Cronin, Eric, Filstrup, Burton, Jamin, Sugih, Cheat-Proofing Dead Reckoned Multiplayer Games (Extended Abstract), University of Michigan, [Cited 2007, October 4], Available HTTP: /games/papers/adcog03-cheat.pdf.
[4] Brun, Jeremy, Safaei, Farzad, Bousted, Paul, Managing Latency and Fairness in Networked Games, [Cited 2007, October 4], Available HTTP: /ft_gateway.cfm?id=1167861&type=pdf.