Mid-Playtest Feedback Methods

Part 1 - The problem with playtests

Group playtests are the most distinctive part of doing user experience testing on games. Most games user research methods, such as interviews, usability tests, surveys, and focus groups, look pretty much like they would if we were testing accounting software. Consumer playtests are the exception: those moments when we bring a bunch of normal players into the studio to play the game while it’s still in development.

The magic of playtests lies in their unique combination of in-house confidentiality and scale. An hour of playtesting with twenty participants produces twenty hours of data. Running many participants in parallel enables us to collect enough information to be able to draw real conclusions on the things that are most special about games, such as difficulty and fun. For a game to be successful, we need players to feel just the right amount of challenge, frustration, and confusion. Playtests are how we measure and adjust the experience to match the design intent.

Running large groups of participants comes at a cost, however. In a one-person usability test, we can use “think aloud”, letting the player give us a running monologue of their thoughts about the game while the researcher takes notes. The one-to-one ratio of researcher to participant means that the researcher can act as an interpreter, matching the sentiment of what the player is saying to where they are in the game. That works great for a single person, but it’s not practical to have a researcher for every participant in a playtest.

We can track behavior well enough in a big study by recording video of each participant’s play and using automated in-game logging, but sentiment and emotional reactions are trickier to measure. It’s already common practice to give players questionnaires after each section of play, but those questionnaires suffer from well-known biases. For example, in a post-mission survey, players are more likely to talk about the initial and final portions of a mission than things that happened in the middle. And small things tend to get forgotten after a while, even if they caused delight or frustration in the moment.

What’s needed is a method for players to tell us how they’re feeling while playing, and automatically log that feedback without the intervention of an observer.

This is my solution:

A prism-shaped white acrylic box with 5 arcade-style buttons

This is one of the feedback boxes I use when testing my games. I’ve gone through a number of iterations on the basic method, but they all boil down to giving participants buttons they can push to tell me what they think while they play.

Sometimes those buttons have been software buttons in the UI, sometimes buttons on the controller, and sometimes a separate peripheral like the one shown here. They all share the same basic principle: give players a way to express their opinion in the middle of a mission and log it automatically. This fine-detail feedback is what has helped the development teams I’ve worked with ship games with better moment-to-moment retention than ever before.

Subjective opinions are exactly what we need to fill out our picture of the user experience. We’re already logging all the objective details like deaths, kills, and how long the game is taking; what we need to know is whether the player is pissed off or bored. If a bunch of the participants say they’re bored at a particular point in the game, there’s probably something there worth fixing.

In this article, I’ll describe several examples of how this method has helped on bestselling games, then offer a step-by-step walkthrough of how to design and build your own playtest feedback system.


Part 2 - Case studies

In my sixteen years as an industry user researcher, I’ve tried a number of variations of this technique. Because every game is a unique research challenge, the best method for letting players tell me what they’re feeling in the middle of a game has turned out to be quite different on each title. Here are some of my case studies for letting players provide mid-game feedback during playtests. Hopefully, they’ll provide a clear picture of how mid-game feedback actually works in games research and give you some ideas on how you can collect better feedback on your own games.

Disclaimer: These case studies represent my own opinions and experiences working on the games at the time and do not necessarily reflect the official views of the studios, their current development and research teams, or the current state of these games.

Halo 2 and Halo 3 (2004 - 2007)

Halo 2 was one of the first Microsoft-published games that used “extended playtesting”, bringing in users to play for an entire weekend. They’d play all day Saturday and Sunday, which worked out to around ten hours of gameplay per player, enough to play through the entire campaign. It might sound a little grueling, but the playtesters loved it. They got to spend an entire weekend playing an unreleased game, eating free donuts and pizza, and then smugly telling their friends that they’d been sworn to secrecy.

While we asked players a short set of questions after every mission, it was difficult to connect post-mission feedback to particular parts of the mission. When someone said “I got lost after the fight”, it was tough to tell which fight they meant. If they said the mission felt too hard, did they mean the whole thing or just the end?

The game needed a way to collect actionable granular feedback regularly throughout the sessions. Randy Pagulayan, working in cooperation with design lead Jaime Griesemer, came up with a list of five quick responses that the designers would find actionable if players gave them mid-play:

  • Too easy
  • I'm fine, making progress
  • Too hard, I don't know what to do next
  • Too hard, I don't know where to go next
  • Too hard, I keep getting killed

The next decision was how often to ask for feedback. The more frequently players were prompted, the more it would interfere with the normal experience. But if we asked too rarely, large chunks of the mission would go by without any feedback. After some trial and error, the team settled on three-minute intervals as a happy medium.

Halo 2 had an aggressive production schedule, and the development team at Bungie was reluctant to invest scarce engineering time into an unproven research technique. Fortunately, there was already a way to bring up an arbitrary menu of commands in game: the debug menu. This was a development tool used by anyone playing through the game to teleport through levels, make themselves invincible, or do dozens of other useful tricks. Most importantly, the debug menu could easily be configured to show just our simple set of five commands and to come up at regular intervals. The debug menu also automatically paused the game when it appeared, giving the player space to choose a response. In Randy’s own words: “Bam! Problem solved.”

A screenshot from Halo 2 with a crude menu in the middle with five feedback options

(This is a recreation of what the Halo 2 playtest feedback debug menu looked like.)

We also used a system created by another Microsoft games researcher, Jun Kim, to create links from the data visualizations directly to the video recordings we’d made of the playtest. This will be discussed more later, but the video linking was essential to giving the feedback its proper impact.

The main virtue of this approach was that it happened in the game, and that it happened automatically. Players couldn’t forget to respond or miss the message when it popped up. The answers were a simple, easy-to-understand set of options, and the feedback was logged directly to the same databases as our normal gameplay telemetry. It did exactly what we needed, providing regular mid-mission sentiment data that enabled designers to identify problems inside the structure of their missions.

However, it wasn't a perfect system either, and two things in particular caused problems in this version. First, it was a mandatory interruption of gameplay every three minutes. We did have some code in there to delay the message during combat, but that caused it to pop up right after fights, leading to some score inflation. Fundamentally though, the questions were being asked on a schedule rather than in sync with the player experience. We might really want to know about one particular fight, but there was no way to guarantee that the prompt came up during that fight.

The second problem with popping up the message in the middle of the game was … people were in the middle of a game. Sometimes they just smashed a random button to make the popup go away out of resentment, and sometimes they were in the middle of another action (especially jumping) and hit “A”, which caused the system to select the default item on the menu. We initially dealt with this by making the default option neutral (I’m fine) and later made the default an unselectable blank item, forcing them to at least choose something else on the menu. However, a significant fraction just learned to press “B” to dismiss the menu and get on with their game. Having their game paused at arbitrary moments was pretty jarring.

There was also the downside that this system replaced the standard debug menu, which meant the debug menu wasn’t available for actual gameplay purposes and we had to enter all debug commands with a keyboard during our playtest. Not crippling, but awkward. Players would get stuck in a wall or whatever and we’d have to plug in a keyboard and laboriously type in these long arbitrary commands to teleport them out.

We ended up using the same system with a few refinements on Halo 3. By that time, we were plotting both the gameplay data and the subjective feedback on top of level maps using Tableau, which greatly improved our ability to interpret the data.

A map of a Halo 3 mission with many small dots marking locations where players died.

(Sample image of player deaths from a playtest of Halo 3)

Shadow Complex (2008)

Shadow Complex was made by Chair Entertainment, later known for creating the popular Infinite Blade series, and published by Microsoft under the Xbox Live Arcade program. It was a 2.5D platformer adventure game set in a vast underground base, with a ton of cool abilities and movement modes. Players were expected to traverse the base multiple times, getting into different fights in the same spaces. The layout was deliberately mazelike, so it was difficult for players to describe where they’d had problems. We needed the same mid-mission feedback functionality as we’d had on Halo, but scoped down for a smaller game.


(Screenshot of Shadow Complex)

Because Shadow Complex was a smaller project, the development team at Chair couldn’t spare the resources to put a feedback system in game the way the Bungie team had. To reduce the size and technical difficulty of the request, I asked them to add the simplest possible data logging: saving to a text file on the hard drive. One of the events they added was a so-called “heartbeat” event, another innovation carried over from the Halo playtesting effort. This event triggered every few seconds no matter what the player was doing, logging their location, health, shields, etc. In a live game, this would be a large volume of data, but in the lab with so few players and relatively short play sessions, it was easily manageable.

I ended up writing a small application that was shown on a second monitor during the playtest. The program let players give us the same kinds of feedback at any point (lost, bored, etc.) and included a comment box they could fill out using the keyboard. Just like the game logging, the results were simply saved to a text file on the PC. The program would also flash to prompt the player to enter feedback if they went too long without entering anything.

A mockup of an interface for entering feedback for Shadow Complex

(This is a recreation of what the Shadow Complex feedback program looked like. The radio buttons at the top logged the fun score, the “Lost/Bored/Confused/Hard” sections were big buttons, and the bottom was a comment box.)

Once a playtest was over, I collected and consolidated all the logging files off the development kits, as well as the output of the logging app. Each entry in the feedback app was matched to the closest heartbeat event in the game, giving us the approximate location of the player at the time.
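To make that matching step concrete, here’s a minimal Python sketch of how it can work. The file names and column layouts here are hypothetical stand-ins, not the actual Shadow Complex log formats:

```python
import csv

def load_events(path):
    """Load timestamped events from a CSV with a 'time' column (seconds)."""
    with open(path, newline="") as f:
        return sorted(csv.DictReader(f), key=lambda row: float(row["time"]))

def nearest_heartbeat(feedback_time, heartbeats):
    """Return the heartbeat row whose timestamp is closest to the feedback event."""
    return min(heartbeats, key=lambda hb: abs(float(hb["time"]) - feedback_time))

heartbeats = load_events("heartbeats.csv")   # hypothetical columns: time, x, y, health, ...
feedback = load_events("feedback.csv")       # hypothetical columns: time, button, comment

# Attach the approximate in-game location to each piece of feedback.
for event in feedback:
    hb = nearest_heartbeat(float(event["time"]), heartbeats)
    print(event["time"], event["button"], "at", hb["x"], hb["y"])
```

The denser the heartbeat interval, the more precise the inferred location, so this is one place where logging “too much” in the lab actually pays off.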

A game map from Shadow Complex with dots for feedback moments

(Recreation of what the Shadow Complex data looked like)

The upside of this approach was that it allowed a much wider range of feedback than can normally be obtained in-game. Because the app was a separate program, it could easily do things that would have taken significant UI dev time to put in the game, such as a comment box that let players add extra context to their feedback. It also required relatively little effort from the development team, since they only had to put in a very minimalist logging system; everything else could be done by an outside researcher.

The downsides were that the syncing was a very manual process, taking me several hours after every test. And requiring players to leave the game to give feedback always has risks. They might forget to pause the game, which could get their character killed or make it seem as if they spent more time in an area than was actually the case. The feedback tool was on the second monitor next to the gameplay monitor, in the players’ peripheral vision, and not all players remembered it or even noticed when it flashed at them.

Overall, though, this proved to be a really successful shoestring adaptation of the method. Here’s a quote from an interview with Donald Mustard, the game’s creative director, soon after release:

"I think probably one of the greatest advantages we had with the game is... So, Shadow Complex was published by Microsoft Game Studios. Because we were part of MGS, we got access to those labs. As early as we were allowed in there, we were putting the game in there to get as many people as we could playing the game.

That feedback was crazy. We'd get like basically our 2D map back with all these dots saying, "Here's where someone got lost. Here's where someone turned around. Here's where everyone got stuck at this spot." I think that without that data, we would have been up a creek because that just gave us such a broader perspective on things that we thought were so painfully obvious that nobody thought was obvious at all. We were definitely able to smooth out the huge bumps."

The Chair folks had come into our first meeting expecting little research support from Microsoft, maybe a couple of generic usability tests. But I’d gotten overexcited about applying the lessons of Halo on a radically different title and prepared a sample report from what we could get out of this kind of playtesting, including a mocked-up visualization of what the data would look like on top of the map. That turned out to be a surprisingly effective way of helping prospective partners understand the potential benefits of this kind of logging, and I’ve used the same approach on several other games since. Just as it’s difficult for a player to evaluate the fun in a design doc, it can be difficult for a designer to evaluate the benefits of a proposed playtest study from a verbal description alone. Mockups of the results make it much easier for the team to see how they would act if the data were real.


Destiny 1 (2010-2012) - Controller gestures

It would be natural to assume that Destiny could just use the same system Halo had, but Destiny had one crucial difference: Destiny couldn’t be paused. The identity of the game as a “shared world shooter” meant that no individual player could pause the game because that would interfere with other players’ experience. We couldn’t bring up an in-game dialog like Halo because the combat would continue behind it, potentially killing the player. And we couldn’t ask them to answer questions on a second monitor like Shadow Complex because their character might be killed while they were distracted.

We had to come up with a system for allowing players to give feedback instantly from within the game, but without any obstructing UI. Brandi House, a bright young researcher on the team at Bungie, took inspiration from a “claw” gesture that Bungie QA testers used to flag bugs in the game. The “claw” was a complicated gesture involving both hands and half a dozen specific buttons pressed simultaneously. It was used to mark the location of technical bugs during QA playthroughs, and was intentionally awkward to ensure that no one could press it accidentally. But the functionality underlying the claw solved our primary problem: allowing the player to log an opinion in the middle of a mission without blocking the player’s vision or leaving the game.

Our version of the claw was simplified to just pressing the PS4 controller touchpad and a face button simultaneously. Each station had an instruction sheet that explained how to use the button combinations.

Here’s what the instruction sheet looked like:

(Image of the feedback instruction sheet)


Here’s what the data looked like for one of the Destiny playtests:

A map of a Destiny area with feedback dots labelled "Awesome", "Frustrated", and "Lost"

We connected these data visualizations (again, made in Tableau) to the video recordings of each individual player, and made them directly available to the designers. Designers typically hate reading reports or watching full playtest videos, but the links effectively condensed dozens of hours of testing into a perfect highlight reel. After every playtest, designers would bring up the maps of their mission and click through the feedback links on the map, knowing each one represented a particular player’s opinion and watching exactly where they were experiencing a problem.

Part of what made this approach so effective was that the researchers became an invisible conduit between the players and the designers. The designers didn’t have to take our word for anything; these were the points where players spoke up on their own to say what their experience was like. Designers could watch the experiences from their desks, without any risk of misunderstanding or researcher bias. The system provided unfiltered raw data in bite-sized chunks, enough to help them understand what was going on without requiring them to watch hours of raw video.

The upside was that this method required no UI (UI development resources were already fully committed) and didn’t interrupt gameplay. The downside was that players tended to forget that the gesture was possible. Even with instructions staring them in the face, it was tough to remember to do the button combination in the heat of battle. Also, since the controller was obviously being used for gameplay at the same time, any missed button in the combination meant the player was accidentally firing off a grenade or jumping to their death.


Destiny 1 (2014-2016) - Boxes

We wanted to move the system off of the controller, while making it more prominent and harder to ignore. It was proposed that we create physical boxes with buttons, with the idea that having them on the desk in front of the player would be a permanent reminder that we expected them to push the buttons. We looked around at various commercial options, but all of them were too expensive. Multiplying even a modest per-unit cost by the number of stations in the lab added up fast. We began brainstorming potential options: we needed something cheap, robust, and not too embarrassing to look at.

Roy Cole, a former electrical engineer working on the Bungie research team as a data analyst, took on the job of making the boxes a reality. Because Bungie had made three games for the Xbox 360, we actually had a large cardboard box in the back of the IT storage closet with dozens of broken controllers. Roy scavenged the central circuit boards out of a few dozen controllers as well as their cables, then wired them up to standard arcade game buttons designed to withstand thousands of angry activations by gamers. When looking for appropriate enclosures to hold the hardware, Roy noticed that his girlfriend’s Tupperware container had exactly the right dimensions.

The result was a sturdy, sleek-looking box with four colorful buttons and a USB cable that could be connected to the recording computer. It was much easier for players to see and remember, and we definitely got more feedback than with the earlier claw method.

A black plastic box with four buttons labelled "Awesome", "too hard", "Too easy", and "Lost"

(The outside of one of the Bungie button boxes; note the painted-over “Sterlite” logo.)


(The inside of one of the Bungie button boxes, note the Xbox 360 circuit board)

The drawback of this design was that the buttons were on top of a flat black box, and players could sometimes forget they were there while looking at the screen. Also, because the boxes were connected to the PC rather than directly to the Xbox or PlayStation development kit running the game build, we had to use custom intermediary software to get the data uploaded to the same logging servers that stored the rest of the game’s data.

---

As you can see in each of these case studies, the core of the method remains the same: the player has a way to provide emotional feedback during their play. The exact implementation is driven by the nature of the game, the concerns of the design team, and the technology and development resources available at the time. 

In the next section, I’ll go over the key questions that need to be answered to build a similar system for your own game.


Part 3 - Designing your own feedback system


Here are the key questions to ask of any proposed mid-game feedback system:

1) What kind of mid-game feedback is actionable?

This is different for every game, and even within a game it will change over the course of development. Even so, a simple “I like this” and “I don’t like this” will almost always be useful. You can also run specialized tests with unique feedback options, such as an environment art playtest where players choose between “beautiful”, “ugly”, “too dark”, etc.

2) What input method will be least disruptive to the flow of play?

In the case studies above, you read about in-game pop-ups, external apps, controller gestures, and dedicated peripherals. The only constraints on the input method are that it be clear, easy to access, and as minimally disruptive to gameplay as possible.

3) Will you prompt users for feedback? 

Interrupting players mid-game for feedback is controversial in any user research project, but particularly in an automated system because there’s no human judgment in the loop saying “don’t interrupt just yet.” I feel like prompting produces enough additional data to be worth the slight disruption, but you’ll have to make the call for yourself on your game.

At a minimum, I’ve often found it necessary to give verbal reminders periodically during longer playtests, as players tend to get absorbed in the game and forget to give feedback.

4) How deliberate/frequent is each prompt?

I’ve found the Halo method of “every three minutes with some randomness” to be a reasonable middle ground. Again, this will depend on the game: a fast-moving game might require more frequent prompts to reliably collect feedback on all sections, while a slow-moving game might need less frequent prompts to avoid collecting too much redundant data.

Depending on the game, it can be useful to have the prompt occur at specific moments. For example, in a game with frequent short levels, you could prompt after each death. On Destiny, we even considered triggering prompts from the game’s scripting language so they would show up at specific appropriate times during a mission.
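To make the timing logic concrete, here’s a minimal sketch of a jittered prompt scheduler. This isn’t code from any of the games above; `in_combat` and the tick loop are hypothetical hooks into your own game:

```python
import random

BASE_INTERVAL = 180  # the Halo-style three-minute baseline, in seconds
JITTER = 30          # random +/- so prompts don't land on a predictable beat

def next_prompt_time(now: float) -> float:
    """Schedule the next prompt at a jittered interval from now."""
    return now + BASE_INTERVAL + random.uniform(-JITTER, JITTER)

def maybe_prompt(now: float, due: float, in_combat: bool) -> tuple[bool, float]:
    """Called once per tick from the game loop; returns (show_prompt, new_due).

    Prompts are deferred while in_combat is True -- but remember the Halo
    lesson: deferred prompts cluster right after fights and can inflate scores.
    """
    if now < due or in_combat:
        return False, due
    return True, next_prompt_time(now)

# Example usage: a fake 10-minute session where combat never happens.
due = next_prompt_time(0.0)
for tick in range(600):
    show, due = maybe_prompt(float(tick), due, in_combat=False)
    if show:
        print(f"Prompt for feedback at t={tick}s")
```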

5) Where does the feedback data go?

Are you saving data to the local hard drive? Uploading it to the game servers? How are you going to get it from wherever it’s saved to your own PC or database for analysis?

6) How does the feedback data sync up with the gameplay and survey data?

Ideally, feedback events are logged from within the game client, which makes it easier to include all necessary contextual data such as the player’s ID, what map they’re on, their XYZ coordinates, timestamp, etc. However, in a lot of cases (such as the Shadow Complex case study above), the data is logged separately and must be matched up. And in almost all cases, the logged data is distinct from any survey data for the player.

Make sure you work through the mechanics of matching up the data sources into a final, coherent dataset before you run the playtest. Knowing that the survey answers for “John Doe” go with the gameplay data for “player ID 54621” and the feedback data logged on the hard drive of “station #7” is both vital and easy to screw up.
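For example, here’s a minimal sketch of that join using pandas. The file names and columns are hypothetical; the important part is that the station-to-player roster is an explicit table you write down before the test, not something you reconstruct afterward:

```python
import pandas as pd

# The roster is created before the playtest: who sat where, with which IDs.
roster = pd.DataFrame({
    "station":   [7, 8],
    "player_id": [54621, 54622],
    "name":      ["John Doe", "Jane Roe"],
})

surveys = pd.read_csv("surveys.csv")     # hypothetical, keyed by name
gameplay = pd.read_csv("gameplay.csv")   # hypothetical, keyed by player_id
feedback = pd.read_csv("feedback.csv")   # hypothetical, keyed by station

# Join everything through the roster into one coherent dataset.
merged = (roster
          .merge(surveys, on="name")
          .merge(gameplay, on="player_id")
          .merge(feedback, on="station"))
merged.to_csv("playtest_dataset.csv", index=False)
```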

7) How does the data sync up with the video?

Video recording generally happens in a separate application from either gameplay or feedback logging. You need to make sure that you can match up a logged event with the correct point in the video.

In the Halo days, I had to run around the lab starting each game client at the same moment I started the video recording for that station. That way, I knew a gameplay event that happened 3 minutes after the client launched also happened 3 minutes into the video. This worked, but it was very easy for things to get out of sync. If the game crashed and had to be restarted, we’d lose our matchup.

One other trick to consider is to show the current clock time in the video, at least at the start. If the feedback events also include the current time, you can work out the correct offset for each event.
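The offset arithmetic is simple. A sketch with hypothetical timestamps:

```python
from datetime import datetime

# Wall-clock time visible in the frame at recording start (read it off the
# screen), and the wall-clock time of a logged feedback event. Both hypothetical.
video_start = datetime.fromisoformat("2023-05-01 13:02:17")
event_time  = datetime.fromisoformat("2023-05-01 13:07:45")

# Seconds into the video at which the event occurs.
offset_seconds = (event_time - video_start).total_seconds()
print(offset_seconds)  # 328.0
```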

In the method described below, the feedback buttons are logged within the OBS recording software, with timestamps that specifically reference points in the recording. This makes coordinating the video and the buttons extremely easy.

8) Where will the videos live?

Hosting large playtest videos is a hassle, especially if you want to be able to share links to specific points in the videos. At Microsoft and Bungie, we stored them on a network share and wrote small web scripts to launch videos at specific points when fed the correct parameters. Other video hosting methods have built-in capabilities for linking to specific times in a video, such as the Microsoft Stream application that we use at ArenaNet or the YouTube example in the walkthrough below.

9) How will you share the final package of results with the rest of the dev team?

As any engineer will tell you, it’s one thing to get all this working on your desktop PC and quite another to roll it out to a studio. For the Halo and Destiny versions, we baked the results into Tableau packaged workbooks that could be distributed to the team. At other studios, I’ve just provided an Excel sheet of links for designers to click through. In the example below, there’s a Google sheet that contains the list of feedback events and their video links.


Part 4 - Plans and Technical Walkthrough

Now that I’ve explained why and how I use feedback boxes, here’s a walkthrough guide on how to do it yourself. There are certainly other ways to go about building this sort of system, but my intent here is to make something cheap, effective, and easy to build. I’ve limited the design to off-the-shelf hardware and software, and the total cost should be somewhere around $30 per box. My intent is to make it as easy as possible for anyone reading this to try it and evaluate the methodology for themselves.


Software

1) Open Broadcaster Software: https://obsproject.com/

OBS is a free, open source video recording program. It will let you record videos from several sources (e.g. the game, a webcam to record facial expressions, etc.) into a single file and supports the Infowriter plugin to record your button presses.

2) Infowriter plugin for OBS: https://obsproject.com/forum/resources/infowriter.345/

Infowriter is a free, open source plugin for OBS which can log button presses while OBS is recording. Most importantly, it logs a timestamp for each button press relative to the time when the OBS recording started. This means that you can use those timestamps to jump to the correct point in the video.

3) Antimicro: https://github.com/AntiMicro/antimicro

Antimicro is a free, open source program for translating the button input from a controller into standard keyboard commands that Infowriter can log.

4) NohBoard: https://github.com/ThoNohT/NohBoard

NohBoard is a free, open source program that displays a virtual keyboard on screen which lights up when a key is pressed. This is useful for capturing keypresses in the recording stream and can act as a backup method for synchronizing the video stream with the key presses.


Hardware


1) USB encoder kit ($13)

A USB encoder is a small piece of hardware used by hobbyists to make their own arcade setups. They’re cheap, plug and play, and work with any computer via USB. There are tons of variations available online under the phrase “arcade USB encoder”, and I’m not endorsing any particular one. Pick one that comes with a USB cable, board, and wires; I recommend choosing one that uses 3-pin connectors because it makes it easier to have the buttons light up when pressed.

2) Arcade buttons ($10)

Again, there are tons of sellers for these and I’m not endorsing any particular one. I find multicolored buttons useful because it’s easier for participants to associate colors with feedback types (“Red means hard”). If you want them to light up, get the ones with LEDs inside.

3) Useful bits

Adhesive standoffs to mount your encoder to the enclosure.

Adhesive zip tie mounts to keep your wires neat inside the case.


Enclosure

For the actual box to hold the buttons and encoder, feel free to use whatever you have handy and feel comfortable working with: acrylic, wood, glued-together Lego, whatever. As you saw in the Destiny example earlier, I’m not above using Tupperware. Lab equipment is not supposed to be elegant; it needs to be rugged and functional. Repeated use by angry participants is surprisingly tough on hardware.

However, if you’re feeling high tech, here’s a set of plans for cutting out an appropriate enclosure with a laser cutter. The examples you see here are made from $2 of scrap acrylic, cut for free on a laser cutter at my local library.

Laser cutter plans for the box

Laser cutter plans for the label slot


My goals for this enclosure design were:

  • 5 buttons, to match the 5 point scales we use for our questionnaires
  • Simple shape, requiring minimal tools and skill to fabricate and assemble
  • Easy access to the inside for repairs and upgrades
  • Raising the buttons up into the player’s field of view so they aren’t forgotten
  • Angled so that the buttons are naturally braced against pushes
  • Soft rubber skids to protect the desk and prevent sliding on pushes
  • Labels for buttons are easily changed, but also protected from the participant




Assembly

Step 1) Prepare your enclosure.

Here’s my cemented-together acrylic enclosure, made using the plans above.

(Photo of the assembled acrylic enclosure)

Step 2) Mount buttons

(Photos of the arcade buttons mounted in the enclosure)


Step 3) Wire buttons to encoder board

No tools are involved here; the wires that come with the USB encoder kit just snap on. Different kits use slightly different wiring, so I’ll leave you to follow the instructions in your particular kit.

Here’s an image of how a button is wired in this particular kit:

(Photos of the button wiring)


Step 4) Mount board

Attach the board to the inside of the enclosure using adhesive standoffs. I’ve also used a zip tie with an adhesive back to keep the USB cord from potentially pulling on the circuit board connection.



That’s it! All done, no tools required.

Here’s a higher resolution album of the assembly.


Software setup

Step 1) Install OBS, Antimicro, NohBoard, and Infowriter

OBS, Antimicro, and NohBoard are standalone programs, Infowriter is a set of DLLs that go in the “obs-plugins” subdirectory.

Step 2) Plug your feedback box into the PC via the USB cable

Step 3) Open Antimicro and confirm that the buttons are registering.

When you press a button on the feedback box, the corresponding button in the Antimicro UI should light up.



Step 4) Assign buttons in Antimicro

Click on “Button 1” through “Button 5” and assign each one a key from the dialog box that pops up. Choose keys that aren’t used in your particular game. In this example, I’m using 1 to 5 on the num pad.


Step 5) Set up OBS recording profile


The profile used in this example includes the game, a face camera, and NohBoard to show my keyboard so you can see exactly when I press a feedback button. However, all you really need for the system to work is the game video and Infowriter as sources.


Step 6) Assign buttons in Infowriter

Right click on the “infowriter” option under “Sources”, select “Properties”. For each button on your feedback box, assign the text you want to be logged when pressed.



That’s it, you’ve got a working playtest feedback setup! Here is a sample playtest video I recorded using this system.

Once the play session is finished, you should see a saved Infowriter logfile on your hard drive. Here’s the log file from the video above.


Creating video links

Now for the magic, linking the feedback button presses to the video. Fortunately, this is one of those things that looks super impressive but isn’t particularly technically complicated.

Here’s what the raw Infowriter log looks like:

(Screenshot of the raw Infowriter log)

Next, we need to convert the timestamps for each button press from an Hours:Minutes:Seconds format into just seconds, then subtract a few seconds of margin from each time to give the viewer a moment to orient before the button press happens.

(Screenshot of the converted timestamps)

Then we build a URL link for each button press, assembling it as a string from these parts:

  • The base URL, the same for every link: https://www.youtube.com/watch?v=
  • Video name, which would be different for each video from each playtest: yCSmXKbhtGw
  • Timestamp in seconds: &t=83


Assembled, the completed URL looks like this:

https://www.youtube.com/watch?v=yCSmXKbhtGw&t=83
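If you have more than a handful of events, a short script can do the conversion and assembly for you. This is a minimal sketch: the “HH:MM:SS : label” line format is an assumption, so check your own Infowriter output and adjust the parsing. The video ID is the one from the demo above:

```python
# Build YouTube links from an Infowriter-style log of button presses.

VIDEO_ID = "yCSmXKbhtGw"  # the demo video above; yours will differ per playtest
MARGIN = 5                # start each link a few seconds early so the viewer can orient

def to_seconds(hms: str) -> int:
    """Convert an 'HH:MM:SS' timestamp into total seconds."""
    h, m, s = (int(part) for part in hms.split(":"))
    return h * 3600 + m * 60 + s

with open("infowriter_log.txt") as f:          # hypothetical file name
    for line in f:
        parts = line.strip().split(" : ", 1)   # assumed delimiter
        if len(parts) != 2:
            continue
        timestamp, label = parts
        seconds = max(0, to_seconds(timestamp) - MARGIN)
        print(f"{label}\thttps://www.youtube.com/watch?v={VIDEO_ID}&t={seconds}")
```

The printed tab-separated pairs can be pasted straight into a spreadsheet, which is exactly what the Google Sheet below does.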


Here’s a Google Sheet that takes the events from the demo video and automatically converts them to links that connect to the correct point in the video for each event.


Conclusion

Looking back on the various approaches to this problem, I see a lot of technical complexity in the service of a simple goal: letting players say what they think. Over time, I expect that we as a field will continue to evolve more elegant solutions and this sort of thing will become a casual part of playtesting, just as video recording and questionnaires have done.

Regardless of how the technology changes, I believe the goal is one that will always be with us. The most important aspect of a playtest will always be how players feel about what they’re experiencing. That’s what in-studio playtests are fundamentally for: creating a confidential space where players experience what we’re building and tell us what they think. Pushing a button to say that the game is fun is a very crude and simple thing, but it’s also exactly why playtests exist.

In writing this article and giving as thorough a tutorial as I can, I hope to make it a little bit easier for every studio in the industry to run better playtests with more detailed feedback. I think it will make the experiences you build better, just as it’s done for mine, and I look forward to playing your games.

