28 Testing

CONTENT

  • Why Testing Is Essential
  • Types of Testing
  • Designing for Smooth Interactions
  • Best Practices

SLIDE DECKS

Some of the material presented in this chapter will be discussed in class. It is your responsibility to ensure you cover all the concepts presented both in class and in this textbook.

Testing is often associated with coding and debugging, but it’s much more than that. And it begins the second you start a project. We test by asking questions of our users or clients to ensure we understand what needs to be completed. We test our requirements by critically assessing them. We test our solutions by developing prototypes to collect feedback. Testing is much “bigger” than coding and debugging.

Testing in software design is a crucial process for ensuring that a system not only functions correctly from a technical perspective but also meets user expectations. In short, we test to ensure that our system 1) prevents errors from happening (by supporting the user as they interact with our system), and 2) gracefully handles unexpected scenarios. A robust testing strategy goes beyond validating code correctness; it also ensures the system’s usability, reliability, and resilience in real-world scenarios.

Why Testing Is Essential

Testing is not just about finding and fixing bugs in code; it is an iterative process aimed at improving both functionality and user experience. Key reasons to test include:

  1. Ensuring Functionality: Verify that the system performs its intended tasks under expected conditions.
  2. User-Centric Design: Identify and address potential issues that could frustrate or alienate users, such as unclear interfaces or unhelpful error messages.
  3. Graceful Failure: Develop systems that fail in a predictable and user-friendly manner, preserving data integrity and minimizing user disruption.
  4. Data Protection: Safeguard the system and database from invalid, unexpected, or malicious input, ensuring both security and usability.

Types of Testing

Testing can be categorized into three types based on scope and focus: 1) closed testing, 2) semi-open testing, and 3) open testing. Each is described below.

Closed testing focuses on basic input and output verification, ensuring that user-visible outputs align with expected results. It doesn’t involve evaluating code or algorithms. The test is simply: if I put data into my system, will it process the data properly and provide consistent, accurate output? Think of it from the point of view of using a calculator. If you enter 2 times 5, you expect the result to be 10 every time, no matter how often you do it. You don’t know what is happening behind the calculator’s user interface; you just know it’s working. Another example: checking whether a login form correctly validates email addresses and passwords. This type of testing is often described as testing from the “user view”.
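The login-form example above can be sketched as a small closed test suite. The `validate_email` function and its regular expression are illustrative assumptions (not from the text); the key idea is that the tests only compare inputs against expected outputs, never inspecting how the function works internally.

```python
import re
import unittest

def validate_email(address: str) -> bool:
    """Hypothetical login-form validator: accepts well-formed addresses."""
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[a-zA-Z]{2,}", address) is not None

class TestLoginFormValidation(unittest.TestCase):
    """Closed ('user view') tests: input goes in, output is checked."""

    def test_valid_address_is_accepted(self):
        self.assertTrue(validate_email("student@example.com"))

    def test_missing_at_sign_is_rejected(self):
        self.assertFalse(validate_email("student.example.com"))

    def test_same_input_gives_same_output(self):
        # Like the calculator: 2 times 5 must be 10 every single time.
        results = {validate_email("student@example.com") for _ in range(100)}
        self.assertEqual(results, {True})
```

A suite like this could be run with `python -m unittest` without knowing anything about the validator’s implementation.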

Semi-open testing examines internal functionality and interactions that might be related to network connections, memory usage, database connections, ports, etc. We consider this because we want to ensure that our system doesn’t fail completely when, for example, a database connection is lost. We want to ensure that our system detects the issue, protects whatever data has been entered, and informs the user of the situation.
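The lost-database-connection scenario above can be sketched as follows. This is a minimal, hypothetical example (the function name, table, and messages are assumptions): when the connection fails, the system detects the problem, protects the user’s entry by queueing it locally, and returns a plain-language explanation instead of crashing.

```python
import sqlite3

def save_form_data(db_path: str, name: str, pending_queue: list) -> str:
    """Try to save a form entry; on database failure, keep the data safe."""
    try:
        conn = sqlite3.connect(db_path, timeout=2)
        conn.execute("CREATE TABLE IF NOT EXISTS people (name TEXT)")
        conn.execute("INSERT INTO people (name) VALUES (?)", (name,))
        conn.commit()
        conn.close()
        return "Saved."
    except sqlite3.Error:
        # Detect the issue, protect whatever data was entered,
        # and inform the user of the situation.
        pending_queue.append(name)
        return ("We couldn't reach the database. Your entry has been kept "
                "and will be saved when the connection is restored.")
```

A semi-open test would deliberately break the connection (here, an unreachable database path) and verify that the data survives and the user is informed.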

Open testing is a complete test of everything related to the system. This includes testing all branches of the code, error handling, documented functionality, and so on. It is often considered the “developer view” of testing. This could include challenging the system under extreme scenarios such as higher-than-expected data loads, considering edge cases, or exploring how the system will respond given low-bandwidth data connections.
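Testing “all branches of the code, error handling” can be illustrated with a small function. The function below is purely illustrative (not from the text); note how the developer-view tests exercise every branch, every boundary between branches, and the error-handling path.

```python
def categorize_age(age: int) -> str:
    """Illustrative function with several branches to exercise."""
    if age < 0:
        raise ValueError("Age cannot be negative")
    if age < 18:
        return "minor"
    if age < 65:
        return "adult"
    return "senior"

def test_all_branches():
    """Open ('developer view') tests: cover every branch and boundary."""
    assert categorize_age(0) == "minor"    # lower boundary
    assert categorize_age(17) == "minor"   # just below a branch boundary
    assert categorize_age(18) == "adult"   # exactly on the boundary
    assert categorize_age(64) == "adult"
    assert categorize_age(65) == "senior"
    # The error-handling branch must be tested too:
    try:
        categorize_age(-1)
        assert False, "expected ValueError for negative age"
    except ValueError:
        pass
```

Closed testing might only check a few typical ages; open testing deliberately aims for every path through the code.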

Designing for Smooth Interactions

A significant part of testing focuses on how users interact with the system and how the system handles their input. Good design can minimize errors and enhance the overall user experience. For this reason, designers should consider ways to prevent system failures. This can be done in numerous ways, including:

  • Input validation: To prevent bad data from entering the system, design the user interface with elements that force data into the proper format (e.g., date pickers, drop-down menus).
  • Real-time feedback: When things go wrong (or even before they do), keep the user informed. Highlight errors in the data so they can be corrected, and give the user an opportunity to learn and correct mistakes without losing any of the work they have completed with your system.
  • Graceful handling of “weird” data: When you test code, you often consider edge cases; in design, the concept is similar. For example, how might you handle an edge case such as a person entering an age of 102 when you are collecting age data? Is age truly essential for your system? Are you protecting the database? Implement strict validation and sanitization processes to prevent malformed or malicious data from being stored.
  • Clear error messaging: Error messages should be informative. Communicate errors in a helpful, non-technical manner. Avoid generic messages like “Something went wrong.” Instead, provide actionable advice, such as “Please enter a valid email address.” Ensure the messages are easy to see and find; don’t place them at the bottom of the screen in a tiny font.
  • Failing gracefully: Design your system to handle failures in a predictable and stable manner. Never let the system crash, and never lose user data. For example, if a payment process fails, ensure the user is informed without losing the items in their shopping cart.
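The validation and error-messaging ideas above can be sketched together. The function name, fields, and messages below are illustrative assumptions; the point is that the validator returns specific, actionable, non-technical messages rather than a generic “Something went wrong.”

```python
from datetime import date

def validate_registration(email: str, birth_year: int) -> list:
    """Hypothetical form validator that returns actionable error messages."""
    errors = []

    # A deliberately simple structural check; a real system might do more.
    if "@" not in email or "." not in email.split("@")[-1]:
        errors.append("Please enter a valid email address, "
                      "such as name@example.com.")

    # Guard against "weird" data, like a birth year in the distant past.
    current_year = date.today().year
    if not (current_year - 120 <= birth_year <= current_year):
        errors.append(f"Please enter a birth year between "
                      f"{current_year - 120} and {current_year}.")

    return errors  # an empty list means the input can be safely stored
```

Returning a list of messages (rather than raising an exception on the first problem) lets the interface highlight every field that needs fixing at once, so the user can correct their work without losing it.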

Best Practices

Start testing early in the project design to identify issues before they become costly to fix. Talk to users to gather feedback on usability and functionality (i.e., conduct paper prototyping and/or wireframing sessions). Document all testing processes and results to track progress and ensure repeatability. And of course, continuously refine tests to include new use cases, edge cases, and scenarios.

License

Community-Engaged Systems Analysis & Software Design Copyright © 2024 by Daniel Gillis and Nicolas Durish. All Rights Reserved.
