
Cassiel

Phase 5 Beta Testers
  • Posts: 2
  • Joined
  • Last visited

About Cassiel

  • Birthday 01/14/2000

Personal Information

  • Allegiance
    GDI


Cassiel's Achievements

Newbie (1/14)

Recent Badges

  • Week One Done (Rare)
  • One Month Later (Rare)
  • One Year In (Rare)

Reputation: 2

  1. It's a competitive game mode that has, at least recently, been played mostly casually. Nod requires tactics and teamwork to be effective, whereas someone who has no idea what they're doing can always pick up a Mammoth and at the very least be a nuisance to the enemy team. One other thing I noticed is that GDI seems to have the advantage in high player count matches, which we've been getting pretty consistently since the Firestorm trailer dropped (since, y'know, GDI benefits from the "strength in numbers" paradigm). You get everything from people who don't know the map yet pilot a Chinook in a Chinook rush, to people who outright refuse to cooperate. It's gotten significantly harder to command lately, and I'm sure I'm not the only one who has noticed. As GDI, provided you have a few people on your team playing repairs and defense, you can be fairly useful just by sitting in a Med or a Mammoth, while Nod more often than not doesn't have that luxury (ahem, SBHs). We could try running a server with a lower player cap, like 40 or 50, and see how that works.
  2. Long post ahead, arm yourselves with patience.

Since the Firestorm expansion is slowly approaching release and testing will be needed, I figured I'd share some QA knowledge that might help with future testing phases. Do note that this is not a by-the-book rundown of professional QA practices, but rather a compilation of the "essentials" from my personal experience as an ex-tester. It's highly likely that not all of these practices will be worth adopting, given the size of our dev team and the limited time left for testing (and, obviously, the fact that this is a passion project, not a corporate-backed commercial product). Some of the terminology will differ from what I worked with, and some aspects are left out entirely (so as not to risk violating NDA terms); those details are at the discretion of whoever creates the QA infrastructure (i.e. the Jira/Confluence pages) and decides how the roles would function. Who knows, if this model is adopted and it turns out to work well with Firestorm, it could be the start of the QA framework used for future RenX content and even other projects.

A few bullet points:

  • Standardized bug report & handling environment (Jira)
  • Specialization
  • Documentation (Confluence)
  • Streamlined workflow (splitting of responsibilities, areas of expertise and of testing)

People involved with quality assurance would be specialized as follows:

  • Tester (in charge of the actual, hands-on testing of the end-user experience; a role mostly reserved for experienced players)
  • Analyst (in charge of creating and updating documentation, forwarding submissions, maintaining contact with the devs and following along with the development process)
  • Lead/Manager (normally separate roles; in charge of recruiting testers, managing roles and the testing process itself, and maintaining and updating the QA infrastructure as needed)
  • Developer (self-explanatory)

*Do keep in mind these are not meant to emulate the actual corporate positions; they are simplified and deliberately flexible roles, to be adjusted at the discretion of community leaders and in accordance with the availability of volunteers. I'd say the analyst role will likely be taken on by the devs, at least until more people volunteer.

A standardized bug report environment ensures consistency throughout the entire testing process, while also keeping track of the project's evolution as it is developed. The usual flow of the bug reporting/fixing process would go as follows (a rough sketch of this lifecycle is included at the end of the post):

  1. Submission as a standard form by the tester, followed by assigning the now <open> issue to the analyst in charge of the issue's specific area of occurrence (PvP, PvE, MTX, Writing, Map design, etc.).
  2. Review and, if necessary, retouching of the submitted issue by the analyst assignee, then forwarding the now <open and confirmed> issue to the developer (or development team) in charge of the respective field. This is also the stage at which the issue could be given a priority tag, in case the number of bug submissions becomes too large for the development team to handle as they come in. Should the report not conform to the agreed-upon submission form, be invalid from a documentation standpoint, or be a duplicate of another submission, the issue will either be closed or assigned back to the tester.
  3. Once in the hands of the development team, the issue can either be fixed for the next build or be postponed. Usually, issues that hinder the ability to test other aspects of the game or cause crashes and other technical failures are among the first to be fixed, while minor issues like low-impact edge cases or small graphical artifacts are left on the back burner. Once the issue is believed to have been fixed, it will be assigned to the general QA inbox as <open, regression pending>.
  4. The analysts then choose which of the "fixed" issues to assign back to their respective submitters, based on the build the current testing phase is running on and the "fixed" issues available for that build.
  5. The final phase consists of regressing the issue, i.e. attempting to recreate the bug according to the information and steps provided in the report. Should the issue persist, it will be forwarded back to the development team's inbox as <open, not fixed>. If the issue has indeed been fixed, it will be closed by the tester as <closed, fixed>.

In the case of a large-scale test, people who have not specifically volunteered as testers could still contribute by compiling their findings and sending them to someone within the QA team, so that the issues can be formally submitted.

As for documentation, it generally serves two purposes: to make testing possible for people who have never played the game (which I suppose will likely not be the case here) and to enable a grey-box approach to testing (simulating the end-user experience while also knowing exactly what SHOULD happen). Forgoing documentation is entirely possible when testing aspects like art or, in general, any part of the game where anyone could notice if something was off.

This is what I propose for now; any corrections or suggestions would be appreciated. I have also attached a detailed bug submission template to this post.

Submission format.docx
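To make the workflow above a bit more concrete, here is a minimal sketch in Python of how the statuses and hand-offs could be modelled. It is purely illustrative and based only on the steps described in this post: the names (BugReport, Status, advance, the form fields) are made up for the example and are neither an actual Jira schema nor the attached submission template.

from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    OPEN = "open"                                    # step 1: submitted by a tester
    OPEN_CONFIRMED = "open, confirmed"               # step 2: reviewed by an analyst
    REGRESSION_PENDING = "open, regression pending"  # step 3: devs believe it is fixed
    NOT_FIXED = "open, not fixed"                    # step 5: regression failed
    CLOSED_FIXED = "closed, fixed"                   # step 5: regression passed
    CLOSED_INVALID = "closed, invalid"               # step 2: duplicate or malformed report

@dataclass
class BugReport:
    # Hypothetical form fields; the real template is in the attached document.
    title: str
    area: str                       # e.g. PvP, PvE, Writing, Map design
    build: str                      # build the bug was observed on
    steps_to_reproduce: str
    expected: str
    actual: str
    priority: str = "unset"         # set by the analyst if submissions pile up
    status: Status = Status.OPEN
    assignee: str = "analyst inbox"

# Hand-offs allowed by steps 1-5 above; anything else is a workflow error.
TRANSITIONS = {
    Status.OPEN: {Status.OPEN_CONFIRMED, Status.CLOSED_INVALID},
    Status.OPEN_CONFIRMED: {Status.REGRESSION_PENDING},
    Status.REGRESSION_PENDING: {Status.CLOSED_FIXED, Status.NOT_FIXED},
    Status.NOT_FIXED: {Status.REGRESSION_PENDING},
}

def advance(report: BugReport, new_status: Status, new_assignee: str) -> None:
    # Move a report to the next stage, rejecting transitions the workflow does not allow.
    if new_status not in TRANSITIONS.get(report.status, set()):
        raise ValueError(f"cannot move from '{report.status.value}' to '{new_status.value}'")
    report.status = new_status
    report.assignee = new_assignee

# Example run-through of a single (made-up) issue, from submission to closure:
report = BugReport(
    title="Harvester gets stuck (example)",
    area="Map design",
    build="example build",
    steps_to_reproduce="Start a match and wait for the first harvester run.",
    expected="Harvester reaches the tiberium field.",
    actual="Harvester idles at the refinery.",
)
advance(report, Status.OPEN_CONFIRMED, "area analyst")          # step 2
advance(report, Status.REGRESSION_PENDING, "general QA inbox")  # step 3
advance(report, Status.CLOSED_FIXED, "submitting tester")       # step 5

The point of the sketch is simply that every status change has a single owner and only the transitions listed above are legal, which is roughly what a properly configured Jira workflow would enforce for us.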