Quality Assurance list

Adam F
Last updated 

Purpose

A concise checklist to ensure design, interface, copy, and functional consistency before delivery. The QA process is carried out by the specialists who served as the primary owners of the work: those who, within their respective specializations, spent the most time on their aspect of the project or task. Each must confirm that the final product matches the content, visual, and technical specifications outlined in the brief.

To make the document easier to scan and follow, we use the following symbols to assign responsibilities. The symbol next to an action means that the specified specialist is expected to perform it, and is responsible for the result:
📝 for the project copywriter
🎨 for the project designer
💻 for the project developer

Copy handover

This section covers the handover from copy to design. Ideally, design would not begin before the copywriting phase has concluded (the reverse-gear clause helps with this), but since overlap is rarely a problem in practice, the copy phase is allowed to stretch through the design phase, with amendments to copy being made until the design phase has concluded. Once the design is complete, the copy should be, too. We allow this leniency because copy is usually easy to replace once the core shape is defined, and minor copy edits don't significantly affect the production process.
  • Peer-review all copy regions against the approved copy document. (📝🎨)

Design handover

This section covers the handover from design to development. It is recommended that designs be locked once development starts, and that retroactive changes be discouraged (reverse-gear). The impact of retroactive changes grows more consequential the deeper the project gets into development; they can cause significant delivery delays and ramp up internal costs when structural design changes require structural rewrites.

The documents that are to be reviewed in this section are reviewed in Zeplin.
  • Confirm that all spacing values and grid systems are consistent across the artboards. (🎨)
  • Confirm all brand assets, typography and icons match the confirmed design. (🎨)
  • Confirm all assets are downloadable from within the artboard, or are provided via Basecamp Docs by uploading them to a designated folder. (🎨)
  • Confirm the design write-up is complete: an instructional context document that lets developers fully realize the design vision. (🎨)
  • Confirm all the elements are selectable in the artboards. (🎨)
  • Provide responsive (tablet 481–768px, mobile 320–480px) solutions for all delivered screens, with noted breakpoints. If the solution isn't provided, and unless otherwise noted in the write-up, it will be assumed that the design is to be the same as that of an already provided screen. (🎨) 
  • Test color contrast with Contrast Checker in order to ensure accessibility. (🎨)   
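The contrast check above follows the WCAG 2.x formula, so it can also be spot-checked without a tool. A minimal sketch of the calculation (function names are illustrative, not part of any checker's API):

```python
def relative_luminance(rgb):
    """WCAG 2.x relative luminance of an sRGB color given as 0-255 ints."""
    def channel(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio between two colors; WCAG AA requires >= 4.5:1 for normal text."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black on white is the maximum possible ratio, 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
```

This is only a sanity-check aid; the designated contrast-checking tool remains the source of truth for sign-off.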

Design & UI Consistency 

This and the following sections all cover the post-development testing phase immediately prior to delivery. The final result of this process is delivered directly to the partner, so it must match the approved design and contain no errors, glitches, or quirks whatsoever.

The only exceptions are platform-specific ones that were communicated and agreed upon before production began, e.g. if the project was written for specific platforms, it should not be expected to work on a platform that was not specified in the brief. For instance, if we're forgoing support for very old browsers with no activity among the target demographic (which is our default), and everyone's okay with that, then "slight weirdness" in those browsers is deemed permissible.

All test findings are to be reported to the project developer, who will then apply adjustments.
  • Post-production copy review: confirm all the copy is in the right place, with no typos. (📝)
  • Ensure iOS and Android tab wrappers follow the intended thematic differences. (🎨💻)
  • Stress-test all breakpoints, especially the "awkward middle" sizes, i.e. 640–760px, 870–1060px, 1120–1280px, and 1500px+. Consult the project designer if anything looks off in those ranges. (💻)
  • Validate that all spacing matches the design source. (🎨💻)
  • Confirm the hover, focus, and active states of all interactive elements. (🎨💻)
  • Confirm typography hierarchy. (🎨💻)
  • Confirm typography styling. (🎨💻)
  • Check dark/light mode if applicable. (🎨💻)
  • Validate that gradients, shadows, and borders match design exactly. (🎨💻)
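To make the breakpoint stress test repeatable, the most bug-prone widths can be generated mechanically instead of picked ad hoc. A minimal sketch (the helper name is hypothetical; the boundary values come from this document's responsive ranges):

```python
def edge_widths(breakpoints, margin=1):
    """For each breakpoint, return viewport widths just below, at, and just
    above it; off-by-one layout bugs tend to cluster around these boundaries."""
    widths = []
    for bp in breakpoints:
        widths.extend([bp - margin, bp, bp + margin])
    return widths

# Boundaries of the mobile/tablet ranges from the design handover section.
print(edge_widths([480, 768]))  # [479, 480, 481, 767, 768, 769]
```

The resulting list can be fed into whatever browser-resizing or screenshot tooling the project already uses.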

Functional quality

This section is for stress-testing the product. The idea is to try to break the product's functionality in order to find weak spots, and to repair anything we find.
  • Test all forms including validation errors, edge cases, and empty results. (💻)
  • Confirm that asynchronously loaded elements will correctly update container heights (e.g., dynamic tables, tabs). (💻)
  • Verify that active/highlighted/disabled classes all update reliably. (💻)
  • Confirm that all components transition smoothly, e.g. tabs, accordions. (💻)
  • Test interactions on touch devices (swipe, scroll, long-press). (💻)
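The form-testing item above is mostly about edge cases, which are easy to enumerate up front. A toy sketch of the idea (the validator and its rules are illustrative only; a real project would exercise its own validation logic):

```python
def validate_email(value):
    """Toy validator used to illustrate edge-case coverage, not a real
    email check: trims input, then applies two crude shape rules."""
    value = value.strip()
    if not value:
        return "required"
    if "@" not in value or value.startswith("@") or value.endswith("@"):
        return "invalid"
    return None  # no error

# Edge cases worth throwing at every form field: empty, whitespace-only,
# malformed, and a plausibly valid input.
cases = {"": "required", "   ": "required", "@x.com": "invalid", "a@b.com": None}
for given, expected in cases.items():
    assert validate_email(given) == expected
```

The same case matrix (empty, whitespace, malformed, valid) applies regardless of field type; only the "malformed" examples change.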

Cross-Browser & Cross-Device Testing

The goal here is to confirm a consistent, working interface across different browsers and devices. This essentially means repeating the steps outlined in the "Design & UI Consistency" and "Functional quality" sections, but on different browsers and devices. The responsibility for this falls upon the personnel already assigned to those tasks. The only exception is copy, as the content is expected to be the same on all devices.

It is the responsibility of the developer working on the project to provide the list of browsers and devices that will be subject to testing. (💻)

After the developer provides this list, the responsibility to test and report falls to the personnel assigned to the specific tasks. If anyone is missing tools that would enable them to carry out testing, they should contact their lead, who will act to provide the necessary tools.

Because browser market share is subject to change, instead of creating a static list and running the risk of it going obsolete, we defer to the following tools and use current data as the basis for testing. We should always cover a minimum of 95% of users worldwide, as that is the industry standard, and strive for 97%, with graceful degradation above that:
  • StatCounter: use its current browser, platform, and device market-share data as the basis for the testing list.
  • Can I Use and Browserslist's calculator: if some of the features or functionality provided by design, or discussed as part of the UI, are not supported by certain browsers or platforms, which would make coverage fall below 95%, the project developer should discuss this with the project designer in order to secure a solution or identify workarounds. If no such fix can reasonably be made, communicate this clearly and specifically to the project manager, who can then decide on the course of action. (🎨💻) The report on any unsupported feature should consist of:
    • List of unsupported browsers and devices, with usage percentage.
    • Described visual or functional failure.
    • Proposed workarounds, or proposed degradation for listed devices.
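As a concrete starting point, Browserslist queries can encode the coverage target directly. A sketch of a possible `.browserslistrc` (the query below is an example only, to be tuned per project against current StatCounter data):

```
# Example query only; tune per project so coverage stays at or above 95%.
> 0.5%
last 2 versions
not dead
```

Running `npx browserslist --coverage` then reports the share of global users the configured query covers, which can be checked against the 95% floor before testing begins.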

Accessibility 

The goal is to ensure the interface is fully usable and perceivable for all users. This is achieved by verifying semantic structure, keyboard accessibility, proper ARIA usage, and sufficient color contrast through automated and manual evaluation checks.
  • Run automated audit with WAVE (🎨💻)
  • Check keyboard navigation across all focus-able elements (💻)
  • Confirm proper ARIA labels and roles for interactive components (💻)
  • Confirm functionality in w3m to get a rough sense of how screen readers will experience the page. (💻)
  • Validate semantic structure (🎨💻)
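Part of the semantic-structure check can be automated: heading levels should never skip downward (an h1 followed directly by an h3 is a common failure). A minimal sketch of that rule (helper name hypothetical; full audits still go through WAVE or manual review):

```python
def heading_levels_ok(levels):
    """True if the page's heading sequence never skips a level downward:
    h1 -> h3 is a skip; jumping back up (h3 -> h1) is always allowed."""
    previous = 0
    for level in levels:
        if level > previous + 1:
            return False
        previous = level
    return True

print(heading_levels_ok([1, 2, 3, 2, 3]))  # True
print(heading_levels_ok([1, 3]))           # False: h2 was skipped
```

The input list can be scraped from the rendered page (e.g. by collecting h1–h6 tags in document order).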

Performance testing

The goal here is to ensure the site loads efficiently and without errors, to identify performance bottlenecks and oversized or under-optimized assets, and generally to find and repair all issues that degrade stability or speed.