September 2009 Archives
Valve's Double Entendre
Today's the big launch day of Valve's much-anticipated Left 4 Dead update, including the two-level campaign Crash Course… sigh.
I really wonder if Valve has a QA department. It seems to break in one of three ways: it silently launches a local server (despite lobby settings), puts the server you connect to into a livelock requiring a manual restart, or makes each player connect to their own local IP. It's not an isolated problem; their forums are full of complaints. It makes me yearn for the days when patches weren't forced down your throat by some crap like Steam.
It actually works in single-player, so I tried the new campaign there. It uses a Death Toll theme, looking a lot like the level where you start in the church. In what seems to be a storage area, you jump from room to room or walk down the alleys connecting the rooms. Each level is uninventive, looking like one big area instead of smaller ones that vary to keep it fresh. Every room and alley looks the same, just a random scattering of boxes and cars.
Really, there's not much left to say about Crash Course. Given how short it is and how unfinished and monotonous it looks, I can't imagine myself playing it much in versus. I'd much rather be playing Absolute Zero or Death Aboard. If versus actually worked, that is.
Windows 7 to support non-OEM CableCARD
TV-on-PC users rejoice! CableCARD support is finally coming to PC expansion cards available through retail channels.
Windows has long used the Broadcast Driver Architecture (BDA) to communicate with TV tuner cards, but the folks in charge of CableCARD had a major problem with it: there's no DRM support. Because of this, they forbade selling CableCARD tuners as standalone add-on cards, and any TV tuner you could buy at retail would only work with analog or ClearQAM (unencrypted) channels, which typically means low-def or local channels only. The only way to get CableCARD support on a PC was to buy a full OEM setup that included the tuners.
One of the new features in Windows 7 is the PBDA (Protected BDA) API which, you guessed it, supports DRM. With PBDA, WDDM, and HDCP, the signal can be protected from the tuner all the way to the monitor. Microsoft kept quiet and avoided acknowledging any questions about it during testing, but many of us testers speculated it was part of a bigger push to open up CableCARD add-on support, and it turns out we were right. I wouldn't be surprised to see announcements of new hardware from Hauppauge and other tuner manufacturers.
I watch a lot of TV—usually in the form of a small box in the corner of the screen while I'm coding, so I've got plenty of time. I currently have two Hauppauge HVR-2250 cards for a total of four tuners. This works great for my local channels like NBC and FOX but there are always some shows I like on cable channels, so I'll be looking forward to some of the new hardware, like Ceton's new 6-tuner CableCARD behemoth.
Playing a lot of Section 8
A month or so ago, I learned that an acquaintance of mine from a few years ago had got a job at TimeGate Studios making maps for their new title, Section 8. I missed the closed beta, but had enough fun with the open one to make me want to acquire the full game when it came out.
Apparently about half of their dev team are hardcore Tribes players, and to a degree it shows. The maps, while not as big as the ones in Tribes, are vast and open compared to most modern games. Players are able to customize their soldier with two guns, two secondary utilities, and ten points to distribute across a number of passive powerups. Players also get a shield, a jet pack, super-sprinting, and lock-on.
Section 8 is a capture-the-point game with a twist: as players start to earn points for various achievements, the game automatically starts up mini-objectives to complete. This turns out to be a great way to keep things challenging and fresh, while giving players a good reason to come out of their bases. If you turtle in a base and don't complete your objectives, the other team will win. This gives the game a higher learning curve than most other games, but most players should only take a few days to get it down.
Section 8 is a multiplayer game, so I'd caution you against buying it if you're expecting a good single-player story. Some sites mention that it has a single-player campaign, but it really just consists of multiplayer with bots, tied together by an hour-long tutorial story in which an unseen general yells out reasons to complete all the objectives you'd normally complete in multiplayer. But that's okay; the real fun is in the multiplayer.
Spawning is a unique experience in this game. You get hurled out of ships in orbit and are able to brake mid-air to adjust your landing position. With a bit of skill and luck, you can actually land on enemies for a very satisfying instant kill. To counter players dropping into enemy territory, anti-air comes standard in all bases, and players can deploy more if they choose. Anti-air becomes crucial to gameplay: if yours gets taken down, the enemies will start to swarm in right on top of you. Players dropping down within an anti-air radius will either be shot down or take heavy damage before ever seeing another player.
The maps will remind you a lot of Tribes. They are big and open, with 2-4 bases scattered around them. They all feature dead zones defining their boundaries, which can change depending on the maximum number of players. The bases are pretty good, with an intricate futuristic design. Despite the large maps, the areas in between the bases are, for the most part, also very well detailed. The mini-objectives usually take place in these areas, so you may end up spending more time outside of a base than in one.
Character customization is one of the crucial areas of the game. You get ten points to spread across various passive power-ups modifying your armor, shield, attack strength, lock-on duration and resistance, accuracy, and a lot of other things. This is probably the biggest area to master—even after three weeks playing (two in the beta, one in final), I am still tweaking my passives to better support my play style. Several of them are very obvious in their usefulness, but others take a bit of play time to fully grasp.
Part of your load-out is two weapons. Unlike most games, Section 8 makes no distinction between primary and secondary weapons—it lets you choose whatever combination you want, be it a pistol and knife or a machine gun and missile launcher. There are several weapons to choose from, but unfortunately there isn't much diversity between them. If there is one area this game doesn't shine in, it's this. What we have now are basically all your boring standard bullet-based futuristic army weapons. Each varies in accuracy, shield piercing ability, and armor damage, but they're all just boring stuff we've seen a thousand times before. I would have liked to have seen some Tribes-inspired energy weapons.
One thing the game's weaponry took from Tribes is the projectile-based guns, compared to games like Quake where the shotgun was hitscan (instant-hit). Ask any Quake Custom-TF player, and they will confirm the $25 shotgun can often be formidable against the $3000 rocket launcher if the wielder has good enough aim. In Section 8, all of the bullets fired are actual projectile tracer rounds that take some small time to reach their target. This is one of my favorite features, and I'm often disappointed that more games don't use it. Forcing players to lead their shots introduces a whole new dimension of skill to the game.
The two utilities you pick for your class are also pretty important. These include grenades, mortars, sensors, repair kits, and some others. Some of these provide a service to your whole team, so with some good organization you could put together a truly unstoppable squad. The grenades are basically proxy mines that you throw: they stick to walls and vehicles, and blow up if an enemy gets near. The mortars are like precision MIRVs, letting you drop concentrated groups of explosions that are great against pretty much everything.
There are a few things all players get. The first is a super-sprint, letting you fairly quickly travel the long distances of the map. You can use it to ram enemies, taking off their shield. The second is a jet pack with about five seconds of use before recharging. It is basically only useful for jumping small hurdles, or a quick large jump onto buildings in conjunction with sprint.
The third is probably the most controversial feature of the game: lock-on. While many multiplayer games have aim-bot cheats made for them, Section 8 actually builds one in as a slow-charging, 5-10 second lock-on ability. I've noticed a lot of mediocre players have grown to depend on it, and all the good players take advantage of this by developing strategies to make others waste their lock-on before jumping in with good aim. They deserve some major props for creating a well-balanced aim-bot that doesn't feel totally lame.
As you complete objectives, frag enemies, and capture points, you will be awarded with money to spend on deployables. You can buy supply depots, turrets, mechs, tanks, and anti-air. All of these are very effective in their own ways, but for some reason many players seem to forget to deploy anything until the match is almost over.
The game does have some flaws that will hopefully be patched soon. Like several Games for Windows Live games before it, Section 8 has plenty of people unable to launch it due to outdated GFWL installs. The game pulls down servers from the master list very slowly over an Xbox-encrypted link. The in-game voice chat doesn't feature automatic gain, so most voices get drowned out by the action. The persistent stats system looks pretty cool but has been plagued with issues since launch. Servers are occasionally unstable, sometimes crashing or booting players. A few of the servers I've connected to seemed to lose sync, causing jumps as everything corrected itself every few seconds. Oddly enough, none of these flaws existed in the open beta, which makes me wonder if the GFWL integration, which wasn't in the beta, has anything to do with it.
Flaws aside, I'm very happy with this game. There's a lot of fun to be had, and it delivers one of the things I want most in a game: a skill ceiling that can't be reached after only a few weeks of playing. Is it worth $50? I'm not sure there's enough content (8 maps) for me to say so. Maybe wait until it's $30 or $40. I'm hoping they release the map editor and enable mods; it's got a lot of potential for some good player-made content.
Efficient stream parsing in C++
A while ago I wrote about creating a good parser, and while the non-blocking idea was spot-on, the rest of it really isn’t very good in C++, where we have the power of templates to help us.
I’m currently finishing up an HTTP library and have been revising my views on stream parsing because of it. I’m still not entirely set on my overall implementation, but I’m nearing completion and am ready to share my ideas. First, I’ll list my requirements:
- I/O agnostic: the parser does not call any I/O functions and does not care where the data comes from.
- Pull parsing: expose a basic stream of parsed elements that the program reads one at a time.
- Non-blocking: when no more elements can be parsed from the input stream, it must immediately return something indicating that instead of waiting for more data.
- In-situ reuse: for optimal performance and scalability the parser should avoid copying and allocations, instead re-using data in-place from buffers.
- A simple, easy to follow parser: having the parser directly handle buffers can easily lead to spaghetti code, so I’m simply getting rid of that. The core parser must operate on a single iterator range.
To accomplish this, I broke the design into three layers: a core parser, a buffer, and a buffer parser.
The core parser
Designing the core parser was simple. I believe I already have a solid C++ parser design in my XML library, so I’m reusing that concept. This is a fully in-situ pull parser that operates on a range of bidirectional iterators and returns a sub-range of those iterators. The pull function returns ok when it parses a new element, done when it has reached a point that could be considered the end of the stream, and need_more when an element can’t be extracted from the passed-in iterator range. Using this parser is pretty simple:
typedef std::deque<char> buffer_type;
typedef http::parser<buffer_type::iterator> parser_type;

buffer_type buffer;
parser_type p;
parser_type::node_type n;
parser_type::result_type r;

do
{
  push_data(buffer); // add data to buffer from whatever I/O source.

  std::deque<char>::iterator first = buffer.begin();

  while((r = p.parse(first, buffer.end(), n)) == http::result_types::ok)
  {
    switch(n.type)
    {
      case http::node_types::method:
      case http::node_types::uri:
      case http::node_types::version:
        // handle the parsed element.
        break;
    }
  }

  buffer.erase(buffer.begin(), first); // remove all the used
                                       // data from the buffer.
} while(r == http::result_types::need_more);
By letting the user pass in a new range of iterators to parse each time, we have the option of updating the stream with more data when need_more is returned. The parse() function also updates the first iterator so that we can pop any data prior to it from the data stream.
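For reference, this is roughly the shape the parser's interface takes. It is reconstructed from the usage examples above and below; any member names the examples don't actually exercise (such as the node's begin/end) are guesses on my part.

namespace http
{
    namespace result_types { enum type { ok, done, need_more, error }; }
    namespace node_types   { enum type { method, uri, version /* , ... */ }; }

    template<typename BidiIterator>
    class parser
    {
    public:
        typedef result_types::type result_type;

        struct node_type
        {
            node_types::type type;    // which element this is.
            BidiIterator begin, end;  // sub-range of the input it occupies (guessed names).
        };

        struct error_type
        {
            BidiIterator position() const; // where in the input the error occurred.
            const char* what() const;      // human-readable description.
        };

        // Parses one element from [first, last), advancing 'first' past the
        // consumed input. Throws on malformed input...
        result_type parse(BidiIterator &first, BidiIterator last, node_type &n);

        // ...unless this overload is used, which reports errors through 'err'.
        result_type parse(BidiIterator &first, BidiIterator last,
                          node_type &n, error_type &err);
    };
}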
By default the parser will throw an exception when it encounters an error. This can be changed by calling an overload and handling the error result type:
typedef std::deque<char> buffer_type;
typedef http::parser<buffer_type::iterator> parser_type;

buffer_type buffer;
parser_type p;
parser_type::node_type n;
parser_type::error_type err;
parser_type::result_type r;

do
{
  push_data(buffer); // add data to buffer from whatever I/O source.

  std::deque<char>::iterator first = buffer.begin();

  while((r = p.parse(first, buffer.end(), n, err)) == http::result_types::ok)
  {
    switch(n.type)
    {
      case http::node_types::method:
      case http::node_types::uri:
      case http::node_types::version:
        // handle the parsed element.
        break;
    }
  }

  buffer.erase(buffer.begin(), first); // remove all the used
                                       // data from the buffer.
} while(r == http::result_types::need_more);

if(r == http::result_types::error)
{
  std::cerr << "an error occurred at "
            << std::distance(buffer.begin(), err.position())
            << ": " << err.what() << std::endl;
}
The buffer
Initially I was testing my parser with a deque<char> like above. This let me test the iterator-based parser very easily by incrementally pushing data on, parsing some of it, and popping off what was used. Unfortunately, using a deque means we always have an extra copy, from an I/O buffer into the deque. Iterating a deque is also a lot slower than iterating a range of pointers because of the way deque is usually implemented. This inefficiency is acceptable for testing, but just won't work in a live app.
My buffer class is I/O- and parsing-optimized, operating on pages of data. It allows pages to be inserted directly from I/O without copying. Ones that weren't filled entirely can still be filled later, allowing the user to commit more bytes of a page as readable. One can use scatter/gather I/O to make operations span multiple pages contained in a buffer.
The buffer exposes two types of iterators. The first type is what we are used to in deque: just a general byte stream iterator. But this incurs the same cost as deque: each increment to the iterator must check if it's at the end of the current page and move to the next. A protocol like HTTP can fit a lot of elements into a single 4KiB page, so it doesn't make sense to have this cost. This is where the second iterator comes in: the page iterator. A page can be thought of as a Range representing a subset of the data in the full buffer. Overall the buffer class looks something like this:
struct page
{
  const char *first;    // the first byte of the page.
  const char *last;     // one past the last byte of the page.
  const char *readpos;  // the first readable byte of the page.
  const char *writepos; // the first writable byte of the page,
                        // one past the last readable byte.
};

class buffer
{
public:
  typedef ... size_type;
  typedef ... iterator;
  typedef ... page_iterator;

  void push(page *p); // pushes a page into the buffer. might
                      // be empty, semi-full, or full.

  page* pop(); // pops the first fully read page from the buffer.

  void commit_write(size_type numbytes); // merely moves writepos
                                         // by some number of bytes.

  void commit_read(size_type numbytes); // moves readpos by
                                        // some number of bytes.

  iterator begin() const;
  iterator end() const;

  page_iterator pages_begin() const;
  page_iterator pages_end() const;
};
One thing you may notice is that it expects you to push() and pop() pages directly, instead of allocating its own. I really hate classes that allocate memory – in terms of scalability, the fewer places that allocate memory, the easier it will be to optimize. Because of this I always try to design my code to – if it makes sense – have the next layer up do allocations. When it doesn't make sense, I document it. Hidden allocations are the root of evil.
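To make that concrete, here is a minimal sketch of the caller's side: the caller owns the page's storage, pushes the page in, reads from an I/O source straight into the page's writable region, and then commits those bytes as readable. The page_storage/make_page/fill_from_fd helpers and the use of POSIX read() are my own illustration, not part of the library; only the push()/commit_write() calls come from the interface above.

#include <unistd.h>  // read()
#include <cstddef>

struct page_storage      // caller-owned backing memory for one page.
{
    char data[4096];
    page pg;
};

page* make_page(page_storage &s)
{
    s.pg.first    = s.data;
    s.pg.last     = s.data + sizeof s.data;
    s.pg.readpos  = s.data;  // nothing readable yet.
    s.pg.writepos = s.data;  // the whole page is writable.
    return &s.pg;
}

bool fill_from_fd(int fd, buffer &buf, page_storage &s)
{
    page *pg = make_page(s);
    buf.push(pg);  // the buffer never allocates; we hand it our page.

    // Read directly into the page's writable region -- no intermediate copy.
    char *dst = s.data + (pg->writepos - pg->first);
    ssize_t n = read(fd, dst, pg->last - pg->writepos);
    if(n <= 0)
        return false;

    buf.commit_write(static_cast<std::size_t>(n)); // mark n bytes readable.
    return true;
}

On Windows the read() would be a ReadFile or WSARecv into the same region; the point is only that the data lands directly in a caller-allocated page with no extra copy.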
The buffer parser
Unlike the core parser, the buffer parser isn't a template class. It exposes the same functionality as a core parser, but operates on a buffer instead of iterator ranges.
This is where C++ gives me a big advantage. The buffer parser is actually implemented with two core parsers. The first is a very fast http::parser<const char*>. It uses this to parse as much of a single page as possible, stopping when it encounters need_more and no more data can be added to the page. The second is an http::parser<buffer::iterator>. This gets used when the first parser stops, which will happen very infrequently – only when an HTTP element spans multiple pages.
This is fairly easy to implement, but required a small change to my core parser concept. Because each has separate internal state, I needed to make it so I could move the state between two parsers that use different iterators. The amount of state is actually very small, making this a fast operation.
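The C++ trick here is worth a small illustration. As long as the state struct doesn't mention the iterator type, it can be copied between instantiations of the same parser template. The following toy parser (made-up names, nothing to do with the real HTTP parser) shows the handoff:

#include <deque>
#include <cstddef>

struct parse_state           // small and iterator-independent.
{
    std::size_t digits_seen;
};

template<typename Iterator>
class digit_run_parser
{
    parse_state s_;

public:
    digit_run_parser() : s_() {}

    parse_state state() const            { return s_; }
    void set_state(const parse_state &s) { s_ = s; }

    // Consumes digits from [first, last); returns true when a non-digit ends
    // the run, false when it runs out of input and needs more.
    bool parse(Iterator &first, Iterator last)
    {
        for(; first != last; ++first)
        {
            if(*first < '0' || *first > '9')
                return true;
            ++s_.digits_seen;
        }
        return false;
    }
};

// Handoff: a run that started in a contiguous page (const char*) can be
// finished by a parser working on a completely different iterator type.
inline bool finish_in_deque(const digit_run_parser<const char*> &fast,
                            std::deque<char> &rest)
{
    digit_run_parser<std::deque<char>::iterator> slow;
    slow.set_state(fast.state());   // the entire cost of switching parsers.
    std::deque<char>::iterator it = rest.begin();
    return slow.parse(it, rest.end());
}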
The buffer parser works with two different iterator types internally, so I chose to always return a buffer::iterator range. The choice was either that or silently copy elements spanning multiple pages, and this way lets the user of the code decide how they want to handle it.
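In practice, the caller's side of that decision is tiny. Assuming the node exposes begin/end iterators for its matched range (the same guessed names as in the earlier interface sketch), copying into contiguous storage happens only when the caller explicitly asks for it:

#include <string>

// Turn a possibly page-spanning element into contiguous text. The copy is
// the caller's explicit choice; begin/end are assumed member names.
std::string to_string(const http::buffer_parser::node_type &node)
{
    return std::string(node.begin, node.end);
}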
Using the buffer parser is just as easy as before:
http::buffer buffer;
http::buffer_parser p;
http::buffer_parser::node_type n;
http::buffer_parser::result_type r;

do
{
  push_data(buffer); // add data to buffer from whatever I/O source.

  while((r = p.parse(buffer, n)) == http::result_types::ok)
  {
    switch(n.type)
    {
      case http::node_types::method:
      case http::node_types::uri:
      case http::node_types::version:
        // handle the parsed element.
        break;
    }
  }

  pop_used(buffer); // remove all the used
                    // data from the buffer.
} while(r == http::result_types::need_more);
The I/O layer
I'm leaving out an I/O layer for now. I will probably write several small I/O systems for it once I'm satisfied with the parser. Perhaps one using asio, one using I/O completion ports, and one using epoll. I've designed this from the start to be I/O agnostic but with optimizations that facilitate efficient forms of all I/O, so I think it could be a good benchmark of the various I/O subsystems that different platforms provide.
One idea I've got is to use Winsock Kernel to implement a kernel-mode HTTPd. Not a very good idea from a security standpoint, but would still be interesting to see the effects on performance. Because the parser performs no allocation, no I/O calls, and doesn't force the use of exceptions, it should actually be very simple to use in kernel-mode.