Coding Range

Performance optimization, or Why does my SQL query take an hour to execute?

October 4th, 2015

I love small performance optimizations that make a huge difference, such as these two from Bruce Dawson of the Chrome team and formerly of Valve Software.

This is the tale of one I found at work in May of this year (2015, for you future people).

I’m posting this because I haven’t seen it documented anywhere online, and hopefully this will save someone else the same headache in future.

In the particular module I was building, I needed to store and retrieve XML from a database hosted on Microsoft SQL Server. I was expecting the XML to be small - just a few kilobytes at most - but it turned out that in certain situations the XML could reach 100MB, and even go beyond that.

Whilst trying to troubleshoot network errors on a production system, I noticed that it was taking a very long time to get to the error, even though there was very little code. The pseudocode was something like this:

var xmlArray = LoadXmlFromDatabase();
foreach (var item in xmlArray)
{
    SendToServer(item);
}
DeleteItemsFromDatabase(xmlArray);

After adding some logging, I discovered that almost all of the execution time was spent loading the XML from the database. On the advice of a colleague I blindly tried two possible solutions - batching in groups of 10 (sketched below), then dropping to one-by-one processing - but to no avail. That was when I went into hardcore performance troubleshooting mode.
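
For reference, the batching attempt looked roughly like this - hypothetical, like the pseudocode above, with Batch standing in for an imaginary helper that splits the array into groups:

var xmlArray = LoadXmlFromDatabase();
// Batch is an imaginary helper that yields the items in groups of the given size.
foreach (var batch in Batch(xmlArray, 10))
{
    foreach (var item in batch)
    {
        SendToServer(item);
    }
    DeleteItemsFromDatabase(batch);
}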

Nothing obvious showed up in a performance analyzer (both Microsoft Visual Studio’s inbuilt one and JetBrains dotTrace), so I set up a standalone repro case and went to town.

The internally built framework at work returned the XML records as a custom implementation of System.IO.TextReader, so I rebuilt that. (Note that this is not the exact code, but a close-enough approximation. I don’t actually remember what the exact code looked like, but this works well enough for a repro.)

The TextReader looked something like this:

    class SqlXmlColumnReader : TextReader
    {
        public SqlXmlColumnReader(SqlConnection connection, string table, string primaryKeyColumn, string xmlDataColumn, object primaryKeyValue)
        {
            lazyReader = new Lazy<IDataReader>(() =>
            {
                // If you ever use this in an actual application, ensure that the parameters are
                // validated to ensure no SQL injection.
                var commandText = string.Format(
                    CultureInfo.InvariantCulture,
                    "SELECT [{0}] FROM [{1}] WHERE [{2}] = @pkvalue;",
                    xmlDataColumn,
                    table,
                    primaryKeyColumn);
                command = new SqlCommand(commandText);
                command.Connection = connection;
                command.Parameters.Add(new SqlParameter
                {
                    ParameterName = "pkvalue",
                    Value = primaryKeyValue
                });

                // SequentialAccess: "Provides a way for the DataReader to handle rows that
                // contain columns with large binary values. Rather than loading the entire row,
                // SequentialAccess enables the DataReader to load data as a stream.
                // You can then use the GetBytes or GetChars method to specify a byte
                // location to start the read operation, and a limited buffer size for the data
                // being returned."
                var reader = command.ExecuteReader(CommandBehavior.SequentialAccess);

                if (!reader.Read())
                {
                    throw new InvalidOperationException("No record exists with the given primary key value.");
                }

                return reader;
            });
        }

        Lazy<IDataReader> lazyReader;
        SqlCommand command;
        long position;

        public override int Peek()
        {
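            // Peek can't be supported here: with SequentialAccess the column is
            // a forward-only stream, so there's no way to look at the next
            // character without consuming it.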
            return -1;
        }

        public override int Read()
        {
            var buffer = new char[1];
            var numCharsRead = lazyReader.Value.GetChars(0, position, buffer, 0, 1);
            if (numCharsRead == 0)
            {
                return -1;
            }

            position += numCharsRead;

            return buffer[0];
        }

        protected override void Dispose(bool disposing)
        {
            if (disposing)
            {
                if (lazyReader != null)
                {
                    if (lazyReader.IsValueCreated)
                    {
                        lazyReader.Value.Dispose();
                    }

                    lazyReader = null;
                }

                if (command != null)
                {
                    command.Dispose();
                    command = null;
                }
            }

            base.Dispose(disposing);
        }
    }

The repro case created a database, created a table, inserted a large amount of dummy XML data, read it back, then deleted the database. Here’s the code I used for that:

    class Program
    {
        static void Main(string[] args)
        {
            // Use the same unique ID every time for exact reproducibility
            var primaryKey = new Guid("4506E355-B727-4112-82A6-52E0258BAB0D");

            var connectionStringBuilder = new SqlConnectionStringBuilder();
            connectionStringBuilder.DataSource = @"(localdb)\MSSQLLOCALDB";

            // To identify us in SQL Profiler, etc.
            connectionStringBuilder.ApplicationName = "Sql Xml Demo";

            using (var connection = new SqlConnection(connectionStringBuilder.ToString()))
            {
                connection.Open();
                Console.WriteLine("Connected to LocalDB.");

                connection.ExecuteNonQuery("CREATE DATABASE [SqlXmlDemo];");
                connection.ExecuteNonQuery("USE [SqlXmlDemo];");

                Console.WriteLine("Created SqlXmlDemo database.");

                connection.ExecuteNonQuery(@"CREATE TABLE [Foo]
                (
                    [PK]        UNIQUEIDENTIFIER    NOT NULL    PRIMARY KEY,
                    [XmlData]   xml                 NOT NULL
                );");

                var stopwatch = new Stopwatch();
                stopwatch.Start();
                var dummyXml = CreateDummyXml(5 * 1024 * 1024); // 5 megabytes (technically mebibytes)
                stopwatch.Stop();
                Console.WriteLine("Time taken to create 5MB of xml: {0}", stopwatch.Elapsed);
                stopwatch.Reset();

                // Insert the 5MB of XML into our table.
                using (var command = new SqlCommand(
                    "INSERT INTO [Foo] ([PK], [XmlData]) VALUES (@PK, @XmlData);"))
                {
                    command.Connection = connection;
                    command.Parameters.Add("PK", SqlDbType.UniqueIdentifier).Value = primaryKey;
                    command.Parameters.Add("XmlData", SqlDbType.Xml).Value = dummyXml;

                    stopwatch.Start();
                    command.ExecuteNonQuery();
                    stopwatch.Stop();
                    Console.WriteLine("Time taken to insert 5MB of xml: {0}", stopwatch.Elapsed);
                    stopwatch.Reset();
                }

                // Actual reading bit
                stopwatch.Start();
                using (var reader = new SqlXmlColumnReader(
                    connection,
                    "Foo",
                    "PK",
                    "XmlData",
                    primaryKey))
                {
                    // Gets immediately garbage-collected, but we've done the work so we have
                    // our benchmark time.
                    var text = reader.ReadToEnd();
                }
                stopwatch.Stop();
                Console.WriteLine("Time taken to read 5MB of xml: {0}", stopwatch.Elapsed);
                // End actual reading bit

                Console.WriteLine("Done!");
                connection.ExecuteNonQuery("USE [master];");
                connection.ExecuteNonQuery("DROP DATABASE [SqlXmlDemo];");
                Console.WriteLine("Dropped SqlXmlDemo database.");
            }
        }

        static string CreateDummyXml(int targetSize)
        {
            // Base64 uses 4 characters - thus 4 bytes of output - to represent 3 bytes of input.
            // Our wrapper tag "<Foo></Foo>" adds 11 characters.
            var builder = new StringBuilder(targetSize);
            builder.Append("<Foo>");

            var targetBase64Length = targetSize - 11;
            if (targetBase64Length > 0)
            {
                var randomDataSize = targetBase64Length * 3 / 4;
                if (randomDataSize > 0)
                {
                    var randomData = new byte[randomDataSize];
                    using (var rng = new RNGCryptoServiceProvider())
                    {
                        rng.GetNonZeroBytes(randomData);
                    }

                    builder.Append(Convert.ToBase64String(randomData));
                }
            }

            builder.Append("</Foo>");
            return builder.ToString();
        }
    }

    static class SqlExtensions
    {
        public static void ExecuteNonQuery(this SqlConnection connection, string commandText)
        {
            using (var command = new SqlCommand(commandText))
            {
                command.Connection = connection;

                command.ExecuteNonQuery();
            }
        }
    }

Thus, the SQL command executed to read in this repro boiled down to:

SELECT [XmlData] FROM [Foo] WHERE [PK] = '4506E355-B727-4112-82A6-52E0258BAB0D';

Looking at the execution plan, it seemed extremely straightforward.

Running it, on the other hand, told a different story. With the repro modified for 1MB, 2MB, 3MB, 4MB and 5MB of XML data, it quite clearly takes a worse-than-linear - roughly quadratic - performance hit.

C:\temp\SqlXmlDemo\SqlXmlDemo\bin\Debug>SqlXmlDemo.exe
Connected to LocalDB.
Created SqlXmlDemo database.
Time taken to create 1MB of xml: 00:00:00.0086780
Time taken to insert 1MB of xml: 00:00:00.2394446
Time taken to read 1MB of xml: 00:00:01.5584338
Done!
Dropped SqlXmlDemo database.

C:\temp\SqlXmlDemo\SqlXmlDemo\bin\Debug>SqlXmlDemo.exe
Connected to LocalDB.
Created SqlXmlDemo database.
Time taken to create 2MB of xml: 00:00:00.0138875
Time taken to insert 2MB of xml: 00:00:00.3643061
Time taken to read 2MB of xml: 00:00:04.8057987
Done!
Dropped SqlXmlDemo database.

C:\temp\SqlXmlDemo\SqlXmlDemo\bin\Debug>SqlXmlDemo.exe
Connected to LocalDB.
Created SqlXmlDemo database.
Time taken to create 3MB of xml: 00:00:00.0195720
Time taken to insert 3MB of xml: 00:00:00.4948130
Time taken to read 3MB of xml: 00:00:10.4895621
Done!
Dropped SqlXmlDemo database.

C:\temp\SqlXmlDemo\SqlXmlDemo\bin\Debug>SqlXmlDemo.exe
Connected to LocalDB.
Created SqlXmlDemo database.
Time taken to create 4MB of xml: 00:00:00.0252961
Time taken to insert 4MB of xml: 00:00:00.9727775
Time taken to read 4MB of xml: 00:00:21.4877780
Done!
Dropped SqlXmlDemo database.

C:\temp\SqlXmlDemo\SqlXmlDemo\bin\Debug>SqlXmlDemo.exe
Connected to LocalDB.
Created SqlXmlDemo database.
Time taken to create 5MB of xml: 00:00:00.0304792
Time taken to insert 5MB of xml: 00:00:00.7277516
Time taken to read 5MB of xml: 00:00:38.0254933
Done!
Dropped SqlXmlDemo database.
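
Tabulating those reads makes the trend clearer (the ratios are computed from the timings above):

Size    Read time    vs previous
1MB     1.6s         -
2MB     4.8s         3.1x
3MB     10.5s        2.2x
4MB     21.5s        2.0x
5MB     38.0s        1.8x

Each additional megabyte costs proportionally more than the one before.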

At around the 45MB mark, it took my machine an entire hour to read that data back out of the SQL database, and my machine was no slouch!

Messing around with different options and enlisting the help of the top SQL and .NET experts at my company yielded no answers to this performance quandary. There was nothing obviously wrong with the code - in a performance analyzer, almost none of the execution time was in my code; most of it was inside IDataReader.GetChars. Running the query itself was also quite fast.

This left me stumped for days, until I “decompiled” the .NET Framework using ILSpy (and later checked the Microsoft Reference Source). Eventually, I stumbled upon a code branch in System.Data.SqlClient.SqlDataReader which is only hit when the following three conditions are true:

  1. The column is a Partially Length-Prefixed (PLP) column, which an XML column is.
  2. CommandBehavior.SequentialAccess is specified.
  3. The column is an XML column.

In this case, instead of using the internal GetCharsFromPlpData function, it uses the internal GetStreamingXmlChars function. For whatever reason, GetStreamingXmlChars seems to be incredibly slow.

The workaround, then, is to read the column as nvarchar(max) (i.e. Unicode text) rather than xml, converting on the server. I changed the query from this:

SELECT [XmlData] FROM [Foo] WHERE [PK] = '4506E355-B727-4112-82A6-52E0258BAB0D';

To this:

SELECT CONVERT(NVARCHAR(MAX), [XmlData]) FROM [Foo] WHERE [PK] = '4506E355-B727-4112-82A6-52E0258BAB0D';
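
In the repro’s SqlXmlColumnReader, that’s a one-line change to the generated command text. Here’s a sketch against my approximation above (the real framework code differed):

    // Convert the xml column to nvarchar(max) on the server, so that
    // SqlDataReader never takes the slow GetStreamingXmlChars path.
    var commandText = string.Format(
        CultureInfo.InvariantCulture,
        "SELECT CONVERT(NVARCHAR(MAX), [{0}]) FROM [{1}] WHERE [{2}] = @pkvalue;",
        xmlDataColumn,
        table,
        primaryKeyColumn);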

The end result was an incredible performance increase:

  • For a 1MB record, a 9.4x performance improvement.
  • For a 5MB record, a 53.1x performance improvement.
  • For a 45MB record, a 3706x performance improvement.

For a proper fix, though, I’ve opened this issue on Microsoft Connect. Hopefully it will be fixed in a future .NET Framework release.

Raspberry Pi: Day One

August 2nd, 2015

Rough notes from day one with a Raspberry Pi. All from my own experience; it may not hold true for everyone, forever.

  • The Pi has LEDs that turn on when the device is on. If the device doesn’t seem to do anything after plugging in USB power, make sure the cable is plugged in properly at both ends. :)
  • Raspbian wants a class-4 (4MB/s) micro-SD card. Windows 10 IoT Core wants a class-10 (10MB/s) micro-SD card. I bought two class-4 cards. Whoops!
  • Windows 10 IoT Core still boots to the demo app with a class-4 card. I haven’t gotten any further yet.
  • Following the official instructions for Linux puts an installer on your SD card, not the OS itself. Boot from the SD card to install Raspbian (or, if you have ethernet, one of the other variants available to set up over the web).
  • The NOOBS torrent downloads faster than the direct download.
  • If you have an HDMI display higher than 1080p, you may have to customize the output settings in /boot/config.txt. I had flickering red stripes before adjusting the configuration.
  • The D-Link DWA-131 Rev. A is supposed to work out of the box. The Rev. B may or may not, I’m not sure. The Rev. E1 (which I purchased) does not, and I can’t find a way to build and install drivers for it. :(
  • Windows 10 IoT Core also doesn’t recognise the D-Link DWA-131 Rev. E.
  • Installing Raspbian from SD Card installs kernel version 3.18.something. Running rpi-update updates to 4.0 at this point in time.
  • Using GPIO.BOARD mode in the GPIO Python module counts physical pins, not GPIO ports. GPIO 7 in that configuration is physical pin 7, which is actually GPIO 4, but is addressed in code as pin 7.
  • Raspbian comes with vi and nano preinstalled, but not vim. vim must be apt-gotten.
  • A breadboard is just a lump of plastic with electrical connections between the holes. The breadboard I got - and it looks like most, if not all, follow this convention - is divided into two halves. The pins on the left side are connected horizontally to each other, and the pins on the right side are connected horizontally to each other.
  • Be careful when wiring something up to the pins on the device. I’m not 100% sure, but I think I might have shorted it somehow and triggered a reboot - when I looked up, the GUI was gone and it was at the post-boot login screen. It seems to still be fine.
  • If IoTCoreImageHelper.exe doesn’t show up in the Windows 10 search, it can be found at C:\Program Files (x86)\Microsoft IoT\.
  • The Realtek 8168 NIC in my P8Z68-V LX motherboard has automatic crossover functionality, so you don’t need a crossover cable for Internet Connection Sharing.
  • Various tutorials call for different resistors for the simple circuit of RPi GPIO -> LED -> RPi Ground. The 100 ohm LED I had worked without a resistor, and didn’t blow anything up. :)
  • The Pi fits into the official case without needing to be screwed down. If it moves within the official case, re-seat it carefully.

Thoughts on Using Exceptions

February 2nd, 2015

When I started this blog, and then neglected it and revived it several times, I never thought I’d use the advice-column format. When a friend of mine asked me for some advice though, I thought I should share it with anyone else who might have the same question.

They definitely didn’t teach this type of writing in high-school English classes.

Anyway, an anonymous friend asks:

Earlier at work during a code review meeting we were chatting about validation, what we should do and what we should avoid doing. I wrote a bit of code that automatically gets executed when my web application wants to ensure that a String looks like a valid date. This code essentially calls a standard Java API method to parse the String as a date, and if a parsing exception is thrown by this API method, then I catch the exception and say that the validation failed. My colleagues told me that this is not how I should have done it, for several reasons:

  • Generally speaking, throwing exceptions isn’t good performance-wise, as you’re building an object with references to your whole stack.
  • Exceptions should be for… exceptional cases, not expected cases.
  • Exceptions shouldn’t be used for validation. They’re meant to say that something fucked up.

They advised me to determine whether a date looks valid using a regex and only then send it to the parsing API method. All these explanations make sense to me, but I’d like to get the opinion of someone completely unrelated to the company.

I don’t know how much Java you know, but since a lot of languages support exceptions, my question is pretty much language-agnostic. So what do you think of that?

In my opinion, your coworkers are right. But also wrong.

I generally deal with C# at work so what I say below will likely reflect that, but it should apply to most programming environments.

Your colleagues said that throwing exceptions isn’t good from a performance point of view. This has been my experience as well. In one place at work, for example, the only way to return a HTTP 304 from a particular web application, due to the framework we use, is to throw an exception. Often, a 304 response, which means “hey, your cached value is good to use”, is slower than actually retrieving the full resource. We check this by last modified date, so we don’t need to load the value itself to revalidate the cache. This means that checking the date, throwing the exception and unwinding the stack is sometimes slower than checking the date, connecting to a database server and returning the actual value.

Conversely - and I haven’t tested this one myself - try { new Guid(..) } catch is allegedly faster than try { Guid.Parse } catch or Guid.TryParse. Performance decisions like these need to be checked on a case-by-case basis, using a profiling tool. Don’t assume.
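
To illustrate, here’s the sort of crude Stopwatch comparison I’d start with - a sketch only (the class and the input mix are mine, and a proper profiler or benchmarking harness will give more trustworthy numbers):

using System;
using System.Diagnostics;

class GuidParseComparison
{
    static void Main()
    {
        // The valid/invalid mix dominates the result, since exceptions only
        // cost you when they're actually thrown - so try several ratios.
        var inputs = new string[500000];
        for (int i = 0; i < inputs.Length; i++)
        {
            inputs[i] = (i % 2 == 0) ? Guid.NewGuid().ToString() : "not a guid";
        }

        var stopwatch = Stopwatch.StartNew();
        var successes = 0;
        foreach (var input in inputs)
        {
            try
            {
                new Guid(input);
                successes++;
            }
            catch (FormatException)
            {
            }
        }
        stopwatch.Stop();
        Console.WriteLine("try/catch: {0} ({1} parsed)", stopwatch.Elapsed, successes);

        stopwatch = Stopwatch.StartNew();
        successes = 0;
        foreach (var input in inputs)
        {
            Guid parsed;
            if (Guid.TryParse(input, out parsed))
            {
                successes++;
            }
        }
        stopwatch.Stop();
        Console.WriteLine("TryParse:  {0} ({1} parsed)", stopwatch.Elapsed, successes);
    }
}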

Your colleagues also said that exceptions should be for exceptional cases, not expected cases. This is something I try to hold to, though frameworks sometimes don’t allow for it (as above). Sometimes you have to use a try..catch or throw an exception because there is no alternative. When there is a choice, though, I usually try to take the option without exceptions.

Objective-C takes an interesting approach to exceptions. Objective-C uses NSError objects (as output parameters) and a nil or NO (FALSE) return value to indicate an expected failure. NSException is reserved for when it’s clear that the programmer screwed something up.

Java - as far as I know - makes exceptions part of the method contract, and the compiler enforces that you catch every expected exception (or rethrow it as an unchecked exception type). In my experience, however, Java does use exceptions for many expected failures. So does C#. That doesn’t mean that we can’t do better in application code, but it does make it harder.

Lastly, your colleagues said that exceptions shouldn’t be used for validation. I agree with this: the whole point of validation is to take untrusted input and check it, so failure is expected. However, that is not mutually exclusive with catching an exception from Java’s date API. For example (based on a quick Google search):

DateValidationResult validateDateInput(String inputValue)
{
    try
    {
        DateTimeFormatter formatter = DateTimeFormatter.ofPattern("yyyy MM dd");
        LocalDate date = LocalDate.parse(inputValue, formatter);
        return DateValidationResult.fromLocalDate(date);
    }
    catch (DateTimeParseException ex)
    {
        return DateValidationResult.failure;
    }
}

This uses an exception for validation, but doesn’t expose it. On the other hand, I disagree with the below example:

LocalDate validateDateInput(String inputValue) throws ValidationException
{
    try
    {
        DateTimeFormatter formatter = DateTimeFormatter.ofPattern("yyyy MM dd");
        LocalDate date = LocalDate.parse(inputValue, formatter);
        return date;
    }
    catch (DateTimeParseException ex)
    {
        throw new ValidationException(ex);
    }
}

This example also uses an exception for validation, but it additionally uses the exception to indicate success or failure - and ultimately, to control program flow.

So on those three points, I pretty much agree with your colleagues. However, I don’t agree with their recommendation to pre-validate.

When you pre-validate with a regular expression or other means, your failure flow looks like this:

  1. Check if the value is valid.
  2. The value is invalid. Return a failure.

Your success flow, though, looks like this:

  1. Check if the value is valid.
  2. The value is valid. Pass it to a system function.
  3. Inside the system function, parse the value and check again that it is valid.
  4. The value is valid. Return the new value type.
  5. Return the new value type from the system function.

Thus, when adding pre-validation, you end up validating the value twice on the success case.

For this, I recommend using a function that returns a value on success, and an error value on failure. This way:

  • You avoid exceptions
  • You only examine the input once

In C#-land, most types have a TryParse function with this signature:

bool TryParse(string value, out TypeName outValue)

This returns true on success, passing the parsed value back via the out parameter, and returns false on failure. It can then be used as follows:

string untrustedInput = "...";
DateTime dateTime;
if (!DateTime.TryParse(untrustedInput, out dateTime))
{
    // Handle the failure
}

// Continue success operation

It looks like Java has a similar API. SimpleDateFormat.parse(String, ParsePosition) is documented to return a Date object on success and null on failure, so that should give you the best of both worlds.

So, to summarise:

  1. Don’t make assumptions about performance. Check them.
  2. Avoid using exceptions for expected code paths. Reserve them for when something has gone really wrong, or when you have no choice.
  3. Make sure your code doesn’t unnecessarily double up on itself. It might not look like repeated code, but it can still be.

Retrospective: Source 2 Leaks versus Reality

August 10th, 2014

Last week the Dota 2 Workshop Tools (Alpha) were released, which include a build of Dota 2 running on the Source 2 engine. I thought it would be interesting to see how it correlates with Source 2’s history of leaks, so first up is a bit of history.

History

19th July, 2010

On the 19th of July 2010, the Alien Swarm SDK was released. This was the only Source SDK code release to be based off the Left 4 Dead-series of branches, with the Source engine being somewhere between that of Left 4 Dead and that of Portal 2.

In the code was the first reference to Source 2. Tier0, a Valve library that gets embedded into just about every library and executable that Valve ship, contained hints of a new logging system in Source 2.

//-----------------------------------------------------------------------------
// ** NOTE FOR INTEGRATION **
// This was copied over from source 2 rather than integrated because 
// source 2 has more significantly refactored tier0 logging.
//
// A logging listener with Win32 console API color support which which prints 
// to stdout and the debug channel.
//-----------------------------------------------------------------------------
#ifndef _X360
class CColorizedLoggingListener : public CSimpleLoggingListener
{

The code shows support for printing console messages with arbitrary colours, not just the red, yellow and white of Source’s error, warning and info messages respectively.

6th August, 2012

Just over two years later, Source Filmmaker was released to the public. Source Filmmaker was originally a part of Team Fortress 2’s public beta, but it was removed from the final game with the promise of a return after the release of all of the Meet The Team videos. Meet The Pyro was released on the 27th of June, and shortly after came the toolkit used to make videos inside the Source Engine.

When ValveTime and Facepunch started pulling it to pieces, it quickly became clear that Source Filmmaker was laden with references to Source 2, including:

  • Icons for ‘Source 2 Tools’, which may be built entirely using Qt and Python
  • 64-bit support
  • ‘gameinfo.gi’ replacing ‘gameinfo.txt’
  • New file formats - vmod, vproj, vmdl, vgame

3rd November, 2012

A few months later, when 4chan’s /v/ visited Valve HQ for Gabe Newell’s 50th birthday, Newell confirmed Source 2’s existence.

“We’ve been working on Valve’s new engine stuff for a while,” Newell responded, “we’re probably just waiting for a game to roll it out with.” When asked for confirmation that Source 2 would actually be a new engine and not an extension to previous iterations, Newell simply said “Yeah!” - The Escapist

1st June, 2013

In mid-2013, Steamworks became semi-public. Anyone was now able to sign up to Steamworks, view the documentation and download the SDK, after accepting the Steamworks NDA.

Some enterprising individuals soon discovered that the Steamworks partner website was not completely locked down, and were able to obtain the names associated with different Steam application IDs. Facepunch user DevinWatson discovered that application 235480 had the name ‘Left 4 Dead 3’.

Furthermore, an unnamed Valve employee listed Source 2 work on their profile, involving:

  • A “new component architecture system for game entities”
  • Integrating “the gameplay portion of… Left 4 Dead 2 into Source 2”

19th June, 2013

Just a few weeks later, Valve’s Jira bug tracker was accidentally made public. Of huge interest were the names of the various groups, which would reveal some sort of internal structure at Valve and thus the projects Valve was working on. Whilst some of the groups were of little to no interest - it appears that Jira was just synchronized with Active Directory - many were dead giveaways of either team structure or internal (Exchange) mailing lists, including:

  • Half-Life 3
  • L4D3
  • L4D3 Audio
  • L4D3 Developers
  • Source 2 Code Analysis Results
  • Source 2 Gameplay
  • Source2
  • Source2 Artists
  • Source2 Builds
  • Source2 Characters
  • Source2 Entities
  • Source2 Minidumps
  • Source2 Proto-games
  • Source2 Tools
  • Source2 Triage
  • Source2_Assertions
  • Source2_ContentAssertions
  • Source2Dev
  • Episode 3
  • Episode 3 Movie
  • src2sdk_assertions
  • src2sdk_contentassertions
  • src2sdk_minidumps
  • left4dead3_assertions
  • left4dead3_contentassertions
  • left4dead3_minidumps
  • Dota_Assertions
  • Dota_Contentassertions
  • Dota Minidumps

5th August, 2013

In the lead-up to The International 3, some Valve fans were given a tour of Valve’s headquarters in Bellevue. One of the photos taken was of something not intended for fan viewing - Valve’s internal Perforce changelog.

Perforce Monitor

The web-based interface, seemingly built by Daniel Jennings, revealed that Valve were actively working on:

  • Source 2
    • VScript
    • Tier4 library (Source 1 only had Tier0 to Tier3)
    • Left 4 Dead 3
      • test_networking unit test
      • devtest level
    • ‘vagrp’ files in model_editor
  • SteamOps FBS
  • Other not-so-interesting things

Cross-referencing the names from the changelog to the Jira groups showed that the developers committing Source 2 changes were in the Source 2 and Half-Life 3 Jira groups:

  • Ted Carson
  • Kerry Davis
  • Ken Birdwell
  • Jay Stelly (Source 2 only, not Half-Life 3)
  • Jeff Hameluck

14th January, 2014

Sam Lantinga, a.k.a. slouken, was hired to join the Linux cabal in mid-2012. He created Simple DirectMedia Layer, most commonly known as SDL. On January 14th, he committed a changeset to SDL with the following included in the comment:

The reasoning behind this change is that source2 in -tools mode has a single OpenGL context that is used with multiple different windows. Some of those windows are created outside the engine (i.e. with Qt) and therefore we need to use SDL_CreateWindowFrom() to get an SDL_Window for those.

The Source Engine already used SDL for Linux and Mac OS X, but this changeset suggested that Source would use SDL and Qt for the Source 2 Tools, and possibly for the engine itself.

27th January, 2014

On January 27th, there were not one but two big leaks.

Firstly, NeoGAF user crazy buttocks on a train, a.k.a CBOAT posted screenshots from a confidential presentation which clearly showed Left 4 Dead 2 being rebuilt in ‘Source 2.0’ with hugely improved detail on the Plantation level.

Source2 Presentation - Plantation Level in L4D2

Shortly afterwards, Facepunch user testinglol posted a screenshot from some sort of Perforce web interface.

P4 Web - //source2/main/game

It shows the source tree of //source2/main/game, including most notably:

  • cs2 (Counter-Strike 2)
  • dota (Dota 2)
  • dota_core
  • dota_imported
  • hl3 (Half-Life 3)
  • hl3_imported
  • left4dead2_imported
  • left4dead2_source2
  • left4dead3
  • tf (Team Fortress 2)
  • tf_imported
  • sdktools
  • An installscript for Steam Application ID 244670

4th March, 2014

On March 4th, Gabe Newell along with a couple of other developers did an Ask Me Anything session on Reddit. When asked about Source 2, this was their response:

The biggest improvements will be in increasing productivity of content creation. That focus is driven by the importance we see UGC having going forward. A professional developer at Valve will put up with a lot of pain that won’t work if users themselves have to create content.

(Note: UGC is User-Generated Content)

6th August, 2014

On August 6th, the Dota 2 Workshop Tools (Alpha) were released to the public, which included a 64-bit Windows build of Dota 2 running on top of Source 2.

Expectations vs Reality

“significantly refactored tier0 logging”

This can be seen in VConsole2 and Dota 2. Logging now goes to different channels, such as ‘InputBindSystem’ and ‘SoundSystem’. Logs can be filtered and searched, and individual channels can be broken out into new VConsole tabs.

Different channels can be bound to different colours, as well as having different colours for Default, Alternate, DETAILED, MESSAGE, WARNING, ASSERT and ERROR.

Source2 - VConsole2

Source2 - VConsole2 Channel Settings

Expectation: Fulfilled.

Source 2 Tools, using SDL, Qt and Python

I haven’t done a detailed analysis of the tools, but dota_ugc/game/bin/win64 includes pyside-python2.7.dll, python27.dll, pythoncom27.dll, pythoncomloader27.dll, pywintypes27.dll, QtCore4.dll, QtGui4.dll, QtOpenGL4.dll, SDL2.dll and shiboken-python2.7.dll.

Michael Sartain, one of Valve’s Linux developers, has also blogged about Qt on a couple of occasions.

Source Filmmaker is also built on Qt, so given the above it makes sense that the tools are built on Qt.

Update: It’s definitely using those libraries.

Expectation: Fulfilled.

64-bit Support

Source 2 doesn’t just support 64-bit - the Dota 2 Workshop Tools are 64-bit only. Valve have said 32-bit support is coming soon.

Expectation: Fulfilled.

gameinfo.gi file to replace gameinfo.txt

The Source 2 build of Dota 2 uses gameinfo.gi, which is a Valve KeyValues-format file that looks like an extension of Source’s gameinfo.txt.

Expectation: Fulfilled.

New file formats - vmod, vproj, vmdl, vgame

The Dota 2 Workshop Tools use vmat, vtex, vfont, vmap, vpcf, vrman, vrmap, vsndevts, vsndstck, vsurf and vmdl files.

Expectation: Fulfilled.

More than just an extension to previous iterations of Source

Just about everything is different. I’m not going to provide a solid in-depth analysis here, but I feel confident enough to call this one fulfilled.

Expectation: Fulfilled.

“new component architecture system for game entities”

I haven’t seen any of this yet, but I don’t know enough about the entity system in Source to confidently comment on this.

Expectation: Unknown.

Left 4 Dead 2 on Source 2

Nothing yet.

Expectation: Still Waiting.

Half-Life 3 on Source 2

Nothing yet.

Expectation: Still Waiting.

Left 4 Dead 3 on Source 2

Nothing yet.

Expectation: Still Waiting.

Source 2 SDK

The only SDK we’ve seen so far is the Workshop Tools.

Expectation: Partially Fulfilled.

Dota 2 on Source 2

Apart from the dota, dota_core and dota_imported references, the community missed something that’s incredibly obvious in retrospect.

Have a look through the Jira groups / Active Directory groups / Exchange lists again. There’s a pattern:

  • Source2 Minidumps
  • Source2_Assertions
  • Source2_ContentAssertions
  • src2sdk_assertions
  • src2sdk_contentassertions
  • src2sdk_minidumps
  • left4dead3_assertions
  • left4dead3_contentassertions
  • left4dead3_minidumps
  • Dota_Assertions
  • Dota_Contentassertions
  • Dota Minidumps

Dota has three groups that fall in line with the other confirmed Source 2 group sets (Source2, src2sdk, left4dead3). These are the only sets of groups that have Assertions, Contentassertions and Minidumps groups.

Expectation: Fulfilled (Alpha, the rest is probably coming soon).

Source 2 to have a Tier 4 library

engine2.dll, client.dll and server.dll contain references to CTier4AppSystem. Below are the strings from engine2.dll:

.?AV?$CTier4AppSystem@VIEngineSound@@$0A@@@
.?AV?$CTier4AppSystem@VIVEngineClient2@@$0A@@@
.?AV?$CTier4AppSystem@VIVEngineServer2@@$0A@@@
.?AV?$CTier4AppSystem@VIUploadGameStats@@$0A@@@
.?AV?$CTier4AppSystem@VINetworkStringTableContainer@@$0A@@@
.?AV?$CTier4AppSystem@VIBenchmarkService@@$0A@@@
.?AV?$CTier4AppSystem@VIEngineService@@$0A@@@
.?AV?$CTier4AppSystem@VIGameResourceService@@$0A@@@
.?AV?$CTier4AppSystem@VIGameUIService@@$0A@@@
.?AV?$CTier4AppSystem@VIInputService@@$0A@@@
.?AV?$CTier4AppSystem@VIMapListService@@$0A@@@
.?AV?$CTier4AppSystem@VINetworkClientService@@$0A@@@
.?AV?$CTier4AppSystem@VINetworkServerService@@$0A@@@
.?AV?$CTier4AppSystem@VINetworkService@@$0A@@@
.?AV?$CTier4AppSystem@VIRenderService@@$0A@@@
.?AV?$CTier4AppSystem@VISoundService@@$0A@@@
.?AV?$CTier4AppSystem@VISplitScreenService@@$0A@@@
.?AV?$CTier4AppSystem@VIStatsService@@$0A@@@
.?AV?$CTier4AppSystem@VIToolService@@$0A@@@
.?AV?$CTier4AppSystem@VIUserInfoChangeService@@$0A@@@
.?AV?$CTier4AppSystem@VIVDebugService@@$0A@@@
.?AV?$CTier4AppSystem@VIEngineServiceMgr@@$0A@@@
.?AV?$CTier4AppSystem@VIHostStateMgr@@$0A@@@
.?AV?$CTier4AppSystem@VIKeyValueCache@@$0A@@@
.?AV?$CTier4AppSystem@VITextMessageMgr@@$0A@@@
.?AV?$CTier4AppSystem@VIGameEventSystem@@$0A@@@

Expectation: Fulfilled.

Team Fortress 2 on Source 2

Nothing yet.

Expectation: Still Waiting.

Steam Application ID 244670

There’s a file in dota_ugc/game/dota named steam_244670.inf, whose contents follow the standard steam.inf convention:

ClientVersion=23
ServerVersion=23
PatchVersion=1
ProductName=dota2_s2_main
appID=244670

The product name above is dota2_s2_main, which likely means Dota 2 Source 2 Main.

Expectation: Fulfilled.

Focus on User-Generated Content

If you really need to read this section, just close this article and go play around with the Dota 2 Workshop Tools. Seriously. The Source 2 tools that ship with the Dota 2 Workshop Tools blow the Source 1 tools out of the water, and while I haven’t tried to build anything serious myself, ‘everyone’ says it’s incredibly easy to create content with.

Expectation: Fulfilled.

Deploying using Git Push

May 12th, 2014

All the cool kids are doing it, so why not take it out for a spin?

There seem to be plenty of over-complicated post-receive hooks on Stack Overflow and in various tutorials, but I wanted something nice and simple.

I store my web sites in /srv/www/<site name>, so that provides a nice and easy convention to base a deploy script on.

I put a bare git repository in /srv/deploy/<site name> and added the following post-receive hook:

#!/bin/bash

GIT_DIR=$(dirname $(dirname $(readlink -f $0)))
DEPLOY_DIR=/srv/www/$(basename $GIT_DIR)
git "--work-tree=${DEPLOY_DIR}" "--git-dir=${GIT_DIR}" checkout -f

Note: Remember to set the execute bits (chmod +x) on the hook script. I forgot to. 😔

This does the following:

  1. Read the path to the hook script file (readlink -f $0)
  2. Get just the directory from the script path (/srv/deploy/<site>/hooks)
  3. Get just the directory from the hook dir path (/srv/deploy/<site>) and store it as the GIT_DIR variable
  4. Get just the folder name of GIT_DIR (<site>) and build the DEPLOY_DIR path as /srv/www/<site>
  5. Check out a copy of the git repository working tree from GIT_DIR to DEPLOY_DIR.

Then to set up the push, I need to add a git remote as follows:

git remote add production <host machine>:/srv/deploy/<site>

Deploying becomes as easy as git push from the master branch. I think I can push from other branches too - which would be neat for staging - but I haven’t tried it yet.

The convention-based nature of this script means I can easily re-use the same script for every site. The same nature also makes it less readable. Yay tradeoffs!

Jinx on an A380

May 5th, 2014

I was watching this documentary last night on the construction of the Airbus A380s.

One line in particular stood out (14:30):

Just imagine what would happen if anything threatened the integrity of one of those fuel tanks. Like a catastrophic engine failure that sends shards of metal tearing through them. That’s the nightmare that haunts engineers at Rolls-Royce, and they’re about to destroy an engine to ensure that never occurs.

MegaStructures, meet QF32.

Four Corners summary

Four Corners video on ABC

Look Up 

May 4th, 2014

Well this is ironic.

The rhyme and message reminded me of Dr. Seuss without the abstracted characterization, but every word that Gary Turk says above is spot-on.

Queueing TFS Builds En Masse

April 2nd, 2014

Today I was trying to set up a stress-test for a Team Foundation Server build. What I would like for it to do is just run rolling builds until I arrive at work tomorrow to go over the logs and see how the test went.

Unfortunately, TFS’s rolling builds trigger doesn’t queue a build if there have been no check-ins since the previous build started. The logical next option was to just queue a whole bunch of builds that would last roughly until tomorrow morning.

So I could have gone and gotten a calculator1, figured out how many minutes there are until early tomorrow morning, figured out how many 15-minute builds fit into that time, and then manually queued a build that many times until there were enough builds in the queue to last until morning.

Or, being a programmer, I could script it. Microsoft provide a rather poorly documented but fully-featured .NET library for accessing TFS, which proved useful.

using System;
using Microsoft.TeamFoundation.Build.Client;
using Microsoft.TeamFoundation.Client;

namespace BuildMassQueue
{
    class Program
    {
        static void Main(string[] args)
        {
            var collection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(new Uri("http://my.tfs.server:8080/tfs/MyCollection"));
            var buildServer = collection.GetService<IBuildServer>();
            var buildDefinition = buildServer.GetBuildDefinition("MyTeamProject", "MyBuildDefinition");

            // 8AM tomorrow morning
            var endTime = new DateTime(2014, 04, 03, 08, 00, 00);
            var startTime = DateTime.Now;
            var difference = endTime - startTime;
            var numBuildsThatCanFit = (int)(difference.TotalMinutes / 15);

            Console.WriteLine("About to queue {0} builds. Press any key to continue...", numBuildsThatCanFit);
            Console.ReadKey();

            for (int i = 0; i < numBuildsThatCanFit; i++)
            {
                var request = buildDefinition.CreateBuildRequest();
                buildServer.QueueBuild(request);
            }
            Console.WriteLine("Done!");
        }
    }
}

The API documentation is pretty miserable, but once you get past that it’s surprisingly simple.


  1. Shameless plug for Numerical, the best calculator app I’ve ever used.