Release Notes v2.28: API report, Azure API Management and AWS API Gateway lifecycles (2026-02-12)

API Report

A new button has been added to generate a 360 API report. The report includes:

  • Global information
  • API briefing information
  • Definition quality information
  • Functional test information
  • Performance test information
  • Security test information
  • Mocking information
  • Comments from the person who generates the report.

AWS API Gateway

A new lifecycle has been created for AWS API Gateway.

  • AWS: Initializer: Configures the API so that it can be deployed on AWS API Gateway.
  • AWS: Deployer: Deploys the API on AWS.
  • AWS: Promoter: Moves the project to another environment.
  • AWS: Synchronizer: Synchronizes the API with apiquality.

The following image shows how it is configured:

Azure API Management

A new lifecycle has been created for Azure API Management.

  • Azure: Initializer: Configures the API so that it can be deployed on Azure API Management.
  • Azure: Deployer: Deploys the API on Azure.
  • Azure: Promoter: Moves the project to another environment.
  • Azure: Synchronizer: Synchronizes the API with apiquality.

The following image shows how it is configured:

Mapping of properties of the API tab

A new feature has been added to the API briefing (API sheet) that allows mapping fields to API properties.

New calculation of test scoring

From now on, quality scoring will be separated into two sub-scores: test density and success rate.

Success Rate

This metric evaluates the actual behavior of the defined scenarios:

  • 100% – Verified (A): All scenarios behave as expected.
  • 95% – 99% – Stable (B): There are minor issues, but the core functionality is solid.
  • 80% – 94% – Degraded (C): Significant functions fail; not recommended for production.
  • 50% – 79% – Unstable (D): The API is unreliable; it requires deep development work.
  • < 50% – Failing (E): More things are failing than working.

Test Density score

Formula: $\text{Density} = \frac{\text{Total number of test cases}}{\text{Total number of API endpoints}}$

  • < 1.0 -> Critical (D): Insufficient testing. Some endpoints have zero coverage.
  • 1.0 – 2.5 -> Basic (C): Probably only the “Happy Paths” (ideal flows) are covered.
  • 3.0 – 5.0 -> Healthy (B): Good coverage of success, error, and validation states.
  • > 5.0 -> Robust (A): High confidence; includes edge cases and safety boundary testing.

Final Score

Calculation: $\text{Final Score} = \text{Success Rate} \times \text{Density Factor}$

  • 95 – 100 -> A (Ready for Production): Highly reliable with in-depth testing coverage.
  • 85 – 94 -> B (Stable): Good coverage, but with minor glitches or insufficient testing in isolated areas.
  • 70 – 84 -> C (Warning): Significant gaps in testing or too many failures in “edge cases”.
  • 50 – 69 -> D (Unstable): High risk; the API does not have enough testing and fails frequently.
  • < 50 -> F (Failing): Critically low test level or most tests fail.
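Under the stated formula, the final grade could be derived as in the sketch below. Note that `density_factor` is a placeholder argument: the notes do not specify how the Density Factor is obtained from the density score, so its derivation is left out here.

```python
def final_score(success_rate: float, density_factor: float) -> float:
    """Final Score = Success Rate x Density Factor (success_rate on a 0-100 scale)."""
    return success_rate * density_factor

def final_grade(score: float) -> str:
    """Map the final score to the letter bands described above."""
    if score >= 95:
        return "A"  # Ready for Production
    if score >= 85:
        return "B"  # Stable
    if score >= 70:
        return "C"  # Warning
    if score >= 50:
        return "D"  # Unstable
    return "F"      # Failing
```

For instance, a 98% success rate with a density factor of 1.0 gives a final score of 98, landing in the A band.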

Results of functional tests in the scoring dashboard

Functional tests and their results can now be viewed directly from the scoring screen.

Performance scoring

The performance scoring has been revised to work as follows:

Default rules in the style guide

The style rules have been revised so that only the following rules are activated when creating a new organization:

Bugs and minor improvements

Minor improvements

  • The OpenApidiff section of the 360 scoring has been removed.
  • The model listing screen has been improved so that the URL can be copied when it overflows.
  • The branch naming algorithm has been revised.

Bugs

  • A minor bug that hid the Import APIs button in the catalog when filters were applied has been fixed.
  • A bug that caused importing an API from the API Hub to fail has been fixed.
  • A bug that caused the repository view to get stuck in an infinite spinner when the repository had not yet been created has been fixed.