Dev Diary #302: The Nature of Testing


Ahoy Adventurers!

In this week's dev diary, we in the QA department have been asked to talk about the game dev experience from our point of view. We've been scratching our heads over how to convey the Tester Experience™ and, after some thought and pondering, we've decided to focus on the life cycle of an update and put forth the following description of events. As you read this account of ours, try to immerse yourself in the spirit of good-natured humor sprinkled with just a tiny bit of desperation. We do love game dev in general and our game in particular, after all, but perhaps being tasked with finding the mistakes and shortcomings in everything we touch is reflected in our mentality and the jokes we make. But everything we do, we do to help make Ylands as amazing as possible.

Ground Zero - An earthquake (an update) hits the floor

The culmination of several months' worth of pressure, effort and preparation. At long last, an update is released; however, the work isn't done by any means. For us at QA headquarters, the ride continues and the show goes on.

During the release maintenance window, we scramble to double-check that crucial systems and infrastructure work. Some things we can reasonably expect to work once live, but things like logging into the app, the workshop, servers, exploration and similar cornerstones of the game can hide issues we only discover at the last minute. Usually things are fine, though sometimes the maintenance window needs to be extended for us to be able to deliver. And in the absolute worst-case scenario, if a blocker is only uncovered now, we have the option to break the glass and hit the Big Red Button: roll back the update, restore the previous versions and dive deep into what went wrong. Needless to say, nobody likes to even consider this option, least of all us at QA HQ.

Let's assume things went more or less well and losses are within parameters, and move on to the next stage. Our wonderful community manager announces that the maintenance is over and the update is live, and we start dealing with the fallout.


First responders & Damage control

It's all but assured that there will be issues to iron out after an update. Nobody is perfect, after all, and while all of us in the dev team give it our best to achieve that state, it's ultimately impossible. So, in the first days or possibly weeks after an update, we spend much of our time investigating any issues that eluded attention until now and rolling out hotfixes and patches to control the fallout. The tempo is rapid, and while we all long for relief in this time, we know it's important to keep the pressure up. The agenda for this period: rapid QA response to fixes and other commits, attention to detail and, in some cases, figuring out why a critical issue flew under the radar. Forecast: crunches with a chance of meatballs. Eventually, the most burning issues are wrangled and we can finally let out a tired sigh of relief. For a short while, it's done.

Humanitarian relief

A thick line between the previous update and the next has been drawn, and we finally start looking to the future. We regain our strength, we refocus, we learn. The length of this window depends heavily on how many patches were pushed for the last update, and there is always the possibility that relief never comes - the sheer amount of necessary work can take so long that we find ourselves deep into the next update's life cycle before the previous one is even over. Speaking of the next update...

Another crisis on the horizon

When an earthquake hits, a tsunami may follow. The development of features and new content doesn't halt just because things needed fixing. Eventually, we need to turn our eyes to what's on the horizon. There is a surge in workload as requests for testing start coming in en masse, and we give one last tearful wave to the short period of reprieve as it sails away. The quicker we dispatch these tests, the more time there is for adjustments, polish and feedback and, as it logically follows, the better the state these new features will be in. So we send back bugs and feedback, retest, report newly found bugs, return whatever wasn't sufficiently improved, rinse and repeat.

Damage projections

As the ongoing cycle matures, there comes a time when everyone needs to take a step back and look at the larger picture. Before the dev team fully commits to a given state of the game, feature states need to be re-evaluated and prioritized. Sometimes you can be enthusiastically crunching away at a feature but find out you didn't quite have the time to get it done. Sometimes the core is done, but during development you discover there are simply too many issues to fix before it's too late. And sometimes a feature was just too large and took so long to finish that we at QA HQ simply don't have the time and people to properly make sure it's in the best possible state. This is the point where hard decisions have to be made. Will a feature be postponed? Will its scope be reduced? And will it still be viable after that? Sometimes it is better to postpone than to release something we're not happy with, after all.

A tsunami - data lock confirmed

The data just came in from the boys in the lab and it's dire. The vague threat of another disaster has taken a more concrete shape: the dreaded data lock. From here on out, no new features get in and it's all about polish, fixes and making sure everything works together. And even then, every new merge needs to be considered carefully - is it important enough? Is it safe? Can it break something else? Is it a new issue or something we've been able to live with already? These and many more questions are behind every new ticket we throw at the rest of the dev team from now on.

Evacuation and infrastructure reinforcement

After data lock, a new branch is built - the Release Candidate. It is here that any merges deemed important and safe enough go, and they are tested in the name of stabilizing the build and making sure it comes out in its best form. For us at QA HQ, that's not all, however. It is at this moment that we start retesting any and all new things, be they new features or tweaks to existing ones. No ticket gets left behind, as we have to make sure that nothing broke between us first seeing a ticket and now, after hundreds and hundreds of commits that could have potentially influenced whatever we're looking at. We also launch a large-scale integrity test of the whole game around this time to ensure that there are no critical weak points in existing systems. It is a long and laborious process, but we couldn't proceed without it.

Simulations and drills

Sometimes, when there is a reasonable need, it is prudent to prepare the public at large for the coming disaster. This is where an experimental build might be released for you all to romp around in, once the RC build is in a reasonable enough state and there is a need for community feedback on the changes the dev team has prepared. This spells another workload surge for us, as we need to go through your reports and figure out what's already known, what's new and what needs to be fixed ASAP, and to compile your feedback to figure out how best to utilize it. Oftentimes, balance changes are made based on what we gather during this period, and those then need to be tested once again.

The tide draws near

As we're closing in, mobile and Windows Store submissions need to be covered as well. Each submission can take anywhere from a few days to several weeks to resolve. The workflow changes somewhat during this time, as we need to allocate more time to testing on these platforms before their respective submissions to ensure the builds are healthy. Once a submission has been dispatched, we can reallocate all of that time back onto the main game and run the same tests on the Steam build.

The quiet before the storm

It's almost here. The suspense is incredible. Tensions are high once again. Everything seems quiet, things look fine. Maybe it's the job, maybe it's just who we are as people, but whenever things go too well, all of us here at QA HQ start sweating bullets. Like any lead-up to a big event, this is a very stressful time. After all, we could always find a last-minute blocker, which is a scenario nobody wants to see. In theory, should such a scenario occur, this is where QA can veto a build instead of giving it the coveted green light. This means an update could be postponed by a week or two to fix something absolutely crucial to the release, but the schedule is often very tight and the release process is a complex matter, so unless it's something that renders the game or a large part of it unusable, we often need to proceed anyway. As a result, we might start getting first responders ready even before the disaster hits. During the last moments of this journey, an announcement about upcoming maintenance is released to the public, and soon after, the big moment is upon us all.

Ground Zero - A tsunami (an update) hits the shore

This seems familiar, doesn't it? Almost like we've been here before, one might say.

As we close this entry here at QA HQ, we would like to add that there are many aspects of our work, additional intricacies and processes, that might even differ from one part of the game to another. Those are, however, stories for another time.

Stay QAssy!
