Test Case Planning
Test cases should be documented based
on the original business requirements to verify the features of the
individual elements and customizations. Test case documentation should
be written at a level of detail that allows people who are neither
technical nor part of the implementation phase to perform the testing
by following the documented test cases.
Where possible, some of the test cases can be
recorded as web tests using Visual Studio Team System and executed
automatically as part of your daily build. This way, you can reduce the
overall resources required for manual testing.
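Recorded web tests live in the Visual Studio tooling itself, but the kind of check they automate can be illustrated with a small script. The following is a minimal sketch, not the Visual Studio tooling, written in Python with the requests library; the site URL, the expected text, and the omitted authentication are placeholders you would replace with your own values.

```python
"""Minimal smoke-test sketch: verify that a page responds and contains the
expected content, roughly what a recorded web test replays on every build."""
import sys

import requests

SITE_URL = "http://intranet.contoso.local/sites/portal"   # placeholder site URL
EXPECTED_TEXT = "Welcome"                                  # placeholder expected content


def test_home_page():
    # Authentication (NTLM, Kerberos, forms) depends on the farm and is omitted here.
    response = requests.get(SITE_URL, timeout=30)
    assert response.status_code == 200, f"Unexpected status: {response.status_code}"
    assert EXPECTED_TEXT in response.text, "Expected content missing from the page"


if __name__ == "__main__":
    try:
        test_home_page()
        print("PASS: home page smoke test")
    except AssertionError as error:
        print(f"FAIL: {error}")
        sys.exit(1)
```

A check like this can be wired into the daily build so that a broken page fails the build immediately instead of waiting for manual testing.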
Test cases should always be created for your
development customizations. At the highest priority, they should
concentrate on verifying the customizations, not out-of-the-box
features, which have already been tested and are supported by
Microsoft. If customizations include heavily customized master pages,
however, it is good practice to also verify the standard out-of-the-box
features, because such master pages may break them.
A common mistake in test case creation is writing
the cases based on already developed customizations. This results in
low-quality test cases that verify the current outcome rather than the
original business requirement. Always test against the original
business requirement!
Each test case should cover both the main and the
alternative outcomes of the individual feature. Another common issue is
overly automating the setting of properties (for example, web part
properties) in your test scripts. This can lead to false verification
because each test exercises only the success case and never verifies
how the feature handles wrong property values (see the sketch after the
following list).
You should consider the following when planning test cases:
- Missing configurations in SharePoint for checking error handling of the web part
- Invalid entries as configuration values for the web part with expected error handling
- Using a web part in alternative places, not only in the planned location
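As an illustration of covering failure cases as well as the happy path, the sketch below parametrizes one test over a valid value, a missing value, and an invalid value. The render_web_part function is a hypothetical stand-in for however your harness drives the web part; it is not a SharePoint API.

```python
"""Sketch: test the failure cases of a configurable component, not only the
success case, so wrong or missing property values are verified as well."""
from typing import Optional

import pytest


def render_web_part(list_url: Optional[str]) -> str:
    """Hypothetical harness: returns the rendered output or a friendly error."""
    if not list_url:
        return "Please configure the source list in the web part properties."
    if not list_url.startswith("http"):
        return "The configured source list URL is not valid."
    return "<ul><li>News item</li></ul>"


@pytest.mark.parametrize(
    "list_url, expected_fragment",
    [
        ("http://intranet/sites/portal/Lists/News", "<ul>"),  # main success case
        (None, "Please configure"),                           # missing configuration
        ("not-a-url", "not valid"),                           # invalid configuration value
    ],
)
def test_web_part_configuration(list_url, expected_fragment):
    assert expected_fragment in render_web_part(list_url)
```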
Each test case should include a clear definition
of what is and is not tested. This helps a tester focus on the relevant
issues. For example, UI consistency across a number of customizations
should be checked in a separate test rather than inside a test case
focused on the features provided by one customization. Each test case
should also have clear passing or failing criteria, which requires the
expected outcome to be defined in detail.
Because many SharePoint customizations are built
on out-of-the-box features or services, each test case should also list
its prerequisites from an environment and resource point of view. For
example, testing a custom search results web part requires the correct
search configuration in your testing environment.
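One way to make such prerequisites explicit is to check them before the test run starts. The sketch below probes the SharePoint 2013 search REST endpoint to confirm that search actually answers queries in the test environment; the site URL is a placeholder, and authentication is omitted because it depends on the farm configuration.

```python
"""Prerequisite check sketch: verify the Search service responds before
running tests for a custom search results web part."""
import requests

SITE_URL = "http://intranet.contoso.local/sites/portal"  # placeholder site URL


def search_service_is_ready() -> bool:
    # SharePoint 2013 search REST endpoint; a simple query is enough as a probe.
    url = f"{SITE_URL}/_api/search/query?querytext='test'"
    headers = {"Accept": "application/json;odata=verbose"}
    # Authentication depends on the farm configuration and is omitted here.
    response = requests.get(url, headers=headers, timeout=30)
    return response.status_code == 200


if __name__ == "__main__":
    if not search_service_is_ready():
        raise SystemExit("Prerequisite failed: search is not configured; "
                         "skipping search web part tests.")
    print("Search prerequisite satisfied; search web part tests can run.")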
Performance Testing
Performance testing
can be considered from either the IT professional or the development
point of view. From the IT professional point of view, performance
testing ensures that the hardware is adequate for the planned usage and
identifies performance bottlenecks. From the development perspective,
performance testing focuses on reducing the impact of each page request
on server resources, reducing the page payload size for first and
subsequent requests, and, lastly, improving the efficiency of
client-side code.
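A simple way to track these developer-facing metrics is to measure the response time and payload size of a first (cold) request and a subsequent (warm) request. The sketch below uses Python and the requests library purely for illustration; the page URL is a placeholder, and a full web test would also download the linked CSS, script, and image resources.

```python
"""Cold-versus-warm request measurement sketch."""
import time

import requests

PAGE_URL = "http://intranet.contoso.local/sites/portal/default.aspx"  # placeholder


def measure(session):
    """Return (elapsed seconds, payload bytes) for a single page request."""
    start = time.perf_counter()
    response = session.get(PAGE_URL, timeout=60)
    return time.perf_counter() - start, len(response.content)


if __name__ == "__main__":
    with requests.Session() as session:
        cold_time, cold_bytes = measure(session)   # first request: caches are empty
        warm_time, warm_bytes = measure(session)   # subsequent request: caches primed
    print(f"First request:      {cold_time:.2f}s, {cold_bytes} bytes")
    print(f"Subsequent request: {warm_time:.2f}s, {warm_bytes} bytes")
    # Note: a complete test would also fetch linked CSS, script, and image files.
```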
A common mistake made by many development teams
is failing to use .NET code-performance profiling tools to proactively
analyze and optimize the efficiency of their code during the project
development cycle; instead, profiling happens reactively when an issue
is reported by the IT professional team or, even worse, in production.
Following are some other performance testing considerations:
- Mature test environments — To get
repeatable results, the environment should be stabilized and documented
so that in subsequent releases a similar setup can be created. Ensure
that the environment does not have any other load so that results and
metrics are comparable to previous test results.
- Population of test data set and information
— Create scripts and tools to populate the required information so that
it mimics production usage. There's no point in testing intranet
performance if the environment doesn't have any content or site structures.
- Deciding the adequate stress level for testing
— Plan your stress test usage models based on the capabilities of the
tools you use and on how many concurrent users are expected to access
the site (a minimal concurrency sketch follows this list).
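The following is a very small concurrency sketch that illustrates the "how many concurrent users" decision; it is not a substitute for a dedicated load testing tool such as the Visual Studio load test tooling. The URL, user count, and request count are placeholders.

```python
"""Tiny concurrent-load sketch: simulate N concurrent users and report basic
latency statistics, purely to illustrate choosing a stress level."""
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

import requests

PAGE_URL = "http://intranet.contoso.local/sites/portal/default.aspx"  # placeholder
CONCURRENT_USERS = 25                                                  # placeholder
REQUESTS_PER_USER = 10                                                 # placeholder


def simulate_user(_):
    """One simulated user issuing sequential page requests in its own session."""
    timings = []
    with requests.Session() as session:
        for _ in range(REQUESTS_PER_USER):
            start = time.perf_counter()
            session.get(PAGE_URL, timeout=60)
            timings.append(time.perf_counter() - start)
    return timings


if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        all_timings = [t for user in pool.map(simulate_user, range(CONCURRENT_USERS))
                       for t in user]
    print(f"requests: {len(all_timings)}")
    print(f"median:   {statistics.median(all_timings):.2f}s")
    print(f"p95:      {sorted(all_timings)[int(len(all_timings) * 0.95)]:.2f}s")
```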
Performance testing activities depend on the
life-cycle stage of your project and deployment. You can conduct
performance testing in the production environment before the initial or
public release is made. Because you most likely cannot repeat a
performance test in your production environment for later releases,
it's beneficial to also conduct the test in an alternative environment,
which can then serve in future phases as your baseline test
environment. The assumption is that if performance decreases by 10
percent in your reference environment in a following phase, it will
decrease similarly in production.
Test results from this kind of baseline testing
are not precise, but they can give you a clear indication of the
performance impact of the changes applied in a particular version.
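A baseline comparison of this kind can be automated by storing the metrics from the reference environment and comparing each new run against them. The sketch below assumes a simple JSON file of named metrics and a 10 percent tolerance; the file names, file format, and threshold are all illustrative assumptions.

```python
"""Baseline comparison sketch: flag metrics that regressed beyond a tolerance."""
import json

REGRESSION_TOLERANCE = 0.10  # flag anything more than 10% slower than the baseline


def compare(baseline_path, current_path):
    with open(baseline_path) as f:
        baseline = json.load(f)   # e.g. {"home_page_seconds": 1.2, "search_seconds": 0.8}
    with open(current_path) as f:
        current = json.load(f)

    regressions = []
    for metric, baseline_value in baseline.items():
        current_value = current.get(metric)
        if current_value is None:
            continue  # metric not measured in this run
        change = (current_value - baseline_value) / baseline_value
        if change > REGRESSION_TOLERANCE:
            regressions.append(
                f"{metric}: {baseline_value:.2f} -> {current_value:.2f} (+{change:.0%})")
    return regressions


if __name__ == "__main__":
    # File names are placeholders for wherever your test runs store their results.
    for line in compare("baseline_v1.json", "current.json"):
        print("REGRESSION:", line)
```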
Multiple simulated performance tests should be
performed before the implementation phase of the project starts.
Identify performance bottlenecks as early as possible to avoid
development rework in later iterations. A good practice is to conduct
performance testing as soon as you have a feature-ready release.
Continue to repeat performance tests to demonstrate improvements
against your initial performance benchmark, and keep repeating them for
the maintenance releases done after the initial release of the
customizations.
For example, say a previous intranet project followed the release cycle defined in Figure 2. As you can see from Figure 2,
there were five iterative releases during development, and after that,
development moved to a quarterly release mode with optional bug-fix
releases between the quarterly releases.
Version 1.0 of the performance tests was created
at the same time as the feature-ready release, meaning a release in
which all functionality is available at least at a high level based on
the requirements, but the implementation has not yet been polished for
actual production usage.
These tests were updated and performed three
times before the actual production release to identify possible issues
as early as possible and to ensure that fixes did not degrade
performance. Each new major release received updated performance tests.
More important, the original performance tests used against Version 1.0
were also run to compare previous and current results.
By including performance tests in the maintenance
phase of the portal life-cycle model, you can test the impact of
changes to your production environment (such as operating system
patches, SQL Server updates, SharePoint cumulative updates, and service
packs) and to your customizations.
Functional Testing
Functional testing
should be performed in an environment that simulates your production
environment to ensure that your features and customizations are working
properly. Even though SharePoint 2013 can be deployed to a client
operating system, do not use this as your testing platform because the
behavior will differ from a server-side test environment.
For midsize or large deployments, you should use
a separate QA environment that mimics the production environment.
Realize that such a testing environment should be based on multiple
servers, not a single server. For example, multiple Web Front Ends
(WFEs) and load balancing cause a user's page request to behave
differently than it would on a single server.
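When the QA environment does include multiple WFEs, it can be worth exercising each server directly rather than only through the load balancer, so that a server-specific problem (for example, a missing deployment artifact on one WFE) is not hidden by the others. The sketch below sends the same request to each WFE with the public host name in the Host header; the server addresses, host name, and page path are placeholders.

```python
"""Per-WFE check sketch: request the same page from each Web Front End directly."""
import requests

HOST_HEADER = "intranet.contoso.local"                      # placeholder public host name
WFE_ADDRESSES = ["http://10.0.0.11", "http://10.0.0.12"]    # placeholder WFE servers
PAGE_PATH = "/sites/portal/default.aspx"                    # placeholder page

for wfe in WFE_ADDRESSES:
    # Override the Host header so each server serves the same web application.
    response = requests.get(wfe + PAGE_PATH,
                            headers={"Host": HOST_HEADER},
                            timeout=30)
    print(f"{wfe}: HTTP {response.status_code}, {len(response.content)} bytes")
```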
Functional testing should be based on the test
cases representing the business requirements to ensure features are
properly verified before moving to the next stage in the deployment
process. If automated tests are used in the project, manual functional
testing should concentrate on areas and features that cannot be tested
reliably using automation.
In SharePoint deployments, functional testing
covers both solution testing and UI styling verification. UI rendering,
for example, is an area that is quite often not tested precisely in
projects.
User Acceptance Testing
User acceptance testing
is the final verification of the version or deployment before it is
deployed to the production environment. Quite often, business and key
stakeholders are involved in the execution of the tests. This involves
a combination of manual execution of “use cases” and test cases
produced in previous steps of the project life cycle. User acceptance
testing is a key milestone that helps the business and project
stakeholders decide whether to move forward with your latest solution.
User acceptance testing should always be
conducted for any solutions moving from preproduction to the production
environment. It should also be conducted by the “customer” of the
project and not by the developers. You must document the findings to
enable project stakeholders to decide on the next actions. For example,
these might include signoff or acceptance of your release and possible
remaining issues to be fixed.
From a project management point of view, always
remember that a completely bug-free solution is rare. In most projects,
the project and business teams agree on the maximum number of open bugs
allowed at each severity level. Ensure that you provide enough time to
respond to issues that may be picked up in user acceptance testing: do
not schedule user acceptance testing too close to your release date,
and make sure you have a buffer.
Defect Tracking
Numerous methods exist to manage defect tracking,
and many different tools can be used. At a minimum, all relevant
project team members should have access to enter and edit defects in
one centralized location. In SharePoint projects, defects are usually
tracked within SharePoint or using Team Foundation Server.
SharePoint provides issue tracking lists that can
be further customized based on project needs. The challenge with a
SharePoint-based tracking list is that developers would have to work
with two different tools. Team Foundation Server is therefore the
preferred approach because it provides centralized task lists for
developers directly in Visual Studio, while all other project team
members can use Team Foundation Server web access to manage issues and
bugs.
When testing is planned, it's also important to
agree on the process for handling defects and how they should be
documented. Following are some key considerations for the creation of
defects (a sketch of a minimal defect record follows the list):
- Priorities — Each defect should
be prioritized so that bug fixing can start from the most-critical
issues and move to less-important issues. Prioritization should be
agreed on by team members for the project to avoid all defects being
prioritized too high.
- Descriptions — Each defect should
have a detailed description of the issue. There should always be a
pointer to the business requirement or specification that justifies why
it is a defect. Defects shouldn't be used to sneak enhancements into
your project scope; only use defects for existing features. The
description should provide enough detail to reproduce the issue;
otherwise, it will get lobbed back to the tester as Unable to
Reproduce. If the issue cannot be reproduced, there's no way to ensure
that it's fixed after code changes.
- Screen shots — Screen shots provide a simple and efficient way to provide more information on the encountered issues.
- Time — SharePoint has extensive
logging, which provides additional information about an encountered
error and can even be used directly to find the root cause of the
defect. If there's no exact time for when the bug or issue was
encountered, there's no way to use this valuable information. Remember
that your development team may work in different time zones.
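Whatever tool is used, a defect entry should carry at least the information listed above in a structured form. The sketch below shows one possible minimal record; the field names are illustrative and do not represent a Team Foundation Server or SharePoint list schema.

```python
"""Sketch of the minimum information a defect entry should carry."""
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List


@dataclass
class Defect:
    title: str
    priority: int                  # 1 = critical ... 4 = low, scale agreed by the team
    requirement_reference: str     # pointer to the business requirement or specification
    reproduction_steps: List[str]  # enough detail to reproduce the issue
    screenshot_paths: List[str] = field(default_factory=list)
    # Record the exact time in UTC so it can be correlated with ULS log entries,
    # even when team members work in different time zones.
    observed_at_utc: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


example = Defect(
    title="News web part shows raw HTML in item titles",
    priority=2,
    requirement_reference="REQ-042: News roll-up on intranet front page",
    reproduction_steps=[
        "Browse to the front page as a visitor",
        "Observe the second news item title",
    ],
    screenshot_paths=["screenshots/news-webpart-raw-html.png"],
)
print(example.observed_at_utc.isoformat())
```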
Other Testing Considerations
Testing should be planned carefully to
ensure that the required quality level is met. Testing should be a
clear phase in the overall project plan and not be treated as a buffer
for development.
Because testing is based on the requirements of the
project, test planning can start at the same time as the planning of
the technical architecture or customization architecture.
When testing is conducted, ensure that user
accounts with different permission levels are used to identify any
permission issues in the code or configuration. Also ensure that your
developers verify that their code works in the development environment
with different users and permission levels before checking in their
code.
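One way to make permission coverage systematic is to run the same functional check under several accounts with different permission levels. The sketch below assumes an NTLM-secured on-premises site and uses the requests-ntlm package; the account names, passwords, and expected status codes are placeholders and should come from a secure test-data store rather than source code.

```python
"""Permission coverage sketch: run the same page check under different accounts."""
import pytest
import requests
from requests_ntlm import HttpNtlmAuth  # assumes an NTLM-secured on-premises farm

PAGE_URL = "http://intranet.contoso.local/sites/portal/default.aspx"  # placeholder

# Placeholder accounts; real credentials must not be hard-coded in test code.
TEST_ACCOUNTS = [
    ("CONTOSO\\visitor.user", "...", 200),    # read-only visitor should see the page
    ("CONTOSO\\member.user", "...", 200),     # contributor should see the page
    ("CONTOSO\\noaccess.user", "...", 401),   # user without access should be denied
]


@pytest.mark.parametrize("username, password, expected_status", TEST_ACCOUNTS)
def test_page_respects_permissions(username, password, expected_status):
    response = requests.get(PAGE_URL, auth=HttpNtlmAuth(username, password), timeout=30)
    assert response.status_code == expected_status
```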