# List of All Metrics

<table><thead><tr><th width="139">Metric Type</th><th width="191">Metric</th><th>Definition</th></tr></thead><tbody><tr><td>Issue</td><td>Issue Cycle Time</td><td>Total time an issue spends in the "In-Progress" status category</td></tr><tr><td></td><td>Issue Lead Time</td><td>Time from issue creation to the issue being moved to the "Done" status category</td></tr><tr><td></td><td>Issue Time in Status</td><td>Time an issue spends in each status category, broken down by status</td></tr><tr><td></td><td>Issues Created</td><td>Number of issues created</td></tr><tr><td></td><td>Issues Completed</td><td>Number of issues moved to the "Done" status category</td></tr><tr><td></td><td>Points Completed</td><td>Number of story points associated with issues moved to the "Done" status category. Issues without story points are counted as 0.</td></tr><tr><td></td><td>Issues Injected</td><td>Number of issues added to the sprint after it started</td></tr><tr><td></td><td>Points Injected</td><td>Number of story points associated with issues injected into the sprint. Issues without story points are counted as 0.</td></tr><tr><td></td><td>Issues Completed Per Member</td><td>Number of issues moved to the "Done" status category per team member</td></tr><tr><td></td><td>Points Completed Per Member</td><td>Number of story points moved to the "Done" status category per team member. Issues without story points are counted as 0.</td></tr><tr><td></td><td>Issues Created Per Member</td><td>Number of issues created per team member</td></tr><tr><td>PR</td><td>PR Review Time</td><td>Time from pull request open to merging. Time in draft states is excluded from the review time calculation.</td></tr><tr><td></td><td>PR Review Time Breakdown</td><td>Breakdown of review time into three metrics: first response, rework, and idle completion time. The sum of these three equals the review time metric.</td></tr><tr><td></td><td>First Response Time</td><td>Time from pull request open to the first comment. If no comments exist, first response time is 0.</td></tr><tr><td></td><td>Rework Time</td><td>Time from first comment to last commit. If no comments exist, pull request open is considered the start of rework time.</td></tr><tr><td></td><td>Idle Completion Time</td><td>Time from rework completion to merging a pull request</td></tr><tr><td></td><td>PRs Merged</td><td>Number of pull requests merged</td></tr><tr><td></td><td>PRs Unlinked</td><td>Number of pull requests not linked to a Jira issue</td></tr><tr><td></td><td>PRs Merged Per Member</td><td>Number of pull requests merged per member</td></tr><tr><td>Sprint</td><td>Sprint Completion Rate (Issues)</td><td>Number of issues completed over the total number of issues at the end of the sprint</td></tr><tr><td></td><td>Sprint Completion Rate (Points)</td><td>Number of points completed over the total number of points at the end of the sprint</td></tr><tr><td></td><td>Commitment Reliability Rate (Issues)</td><td>% of issues completed out of the total sprint scope (excluding injected issues)</td></tr><tr><td></td><td>Commitment Reliability Rate (Points)</td><td>% of story points completed out of the total sprint scope (excluding injected issues)</td></tr><tr><td>Deployment</td><td>Deployments Count</td><td>Number of deployments made</td></tr></tbody></table>

## Customizations <a href="#h_695626fdd7" id="h_695626fdd7"></a>

You can group by or filter any metric based on its metric type.

<table><thead><tr><th width="174">Metric Type</th><th>Group by and Filter options</th></tr></thead><tbody><tr><td><strong>Issue or Sprint</strong></td><td>Assignee</td></tr><tr><td></td><td>Priority</td></tr><tr><td></td><td>Issue Type</td></tr><tr><td></td><td>Label</td></tr><tr><td></td><td>Project</td></tr><tr><td></td><td>SLA Status</td></tr><tr><td></td><td>Investment</td></tr><tr><td></td><td>Epic</td></tr><tr><td></td><td>Story Points</td></tr><tr><td></td><td>Custom JIRA Field</td></tr><tr><td></td><td>Team</td></tr><tr><td><strong>PR</strong></td><td>Author</td></tr><tr><td></td><td>Reviewer</td></tr><tr><td></td><td>Label</td></tr><tr><td></td><td>Repository</td></tr><tr><td></td><td>Team</td></tr><tr><td><strong>Deployment</strong></td><td>Service</td></tr><tr><td></td><td>Environment</td></tr><tr><td></td><td>Repository</td></tr><tr><td></td><td>Team</td></tr></tbody></table>

## FAQ <a href="#h_695626fdd7" id="h_695626fdd7"></a>

<details>

<summary>Can I track DORA metrics?</summary>

Haystack provides the building blocks for each DORA metric; you'll need to create your own dashboard.

* [Change Lead Time](#h_410aa1d99b) (Issue Cycle Time)
* [Deployment Frequency](#h_4f830cc15c) (Deployments Count)
* [Mean Time to Recovery](#h_d824c0c5b3) (Issue Lead Time)
* [Change Failure Rate](#h_60ef6662ed)

If you are searching for which metrics to track, we highly recommend the following posts.

* [first-principles-of-engineering-metrics](https://help.usehaystack.io/guides/first-principles-of-engineering-metrics "mention")
* [engineering-metrics-video-series](https://help.usehaystack.io/guides/engineering-metrics-video-series "mention")

</details>

<details>

<summary>When should I use the average vs. the 85th percentile calculation method?</summary>

When deciding between using the average or the 85th percentile for calculations, consider these points:

* **Averages** are useful when your data is consistent and you don't expect outliers. However, they can be distorted by extreme values.
* **85th Percentile** is better for data with potential outliers. It gives a value below which 85% of the data falls, reducing the impact of extreme values.

For time-based metrics, which often have outliers, the 85th percentile is usually more appropriate. This method provides a clearer view of typical performance. Use the 85th percentile for metrics like:

* Issue Cycle Time
* Issue Lead Time
* PR Review Time
* Review Time Breakdown
* First Response Time
* Rework Time
* Idle Completion Time

## Why 85th percentile? <a href="#h_f3a0da9493" id="h_f3a0da9493"></a>

We'll focus on the Issue Cycle Time metric and its distribution, but the same logic applies to all the metrics mentioned above.

Issue Cycle Time is the total time spent in the `in-progress` status category. Due to the nature of how developers work, Issue Cycle Time tends to follow a long-tail distribution:

![](https://933147321-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FbiIqB8vq91hNrILQ1MAF%2Fuploads%2FXChpXWRitJc8aJwmIJhP%2Fimage.png?alt=media\&token=85d53439-9370-40d4-a6b6-1b2bebe13b12)

If we use the average Issue Cycle Time, we will get **1.62 days**.

The average is a good indicator of total time spent and, via trends, of whether the team is improving.

However, if we want to answer the question "How long would a new issue take?", the average falls short: it is skewed by the many very short (a few minutes) and very long issues.

If we look at the median, we can say with 50% confidence that a new issue will take **0.91 days** or less. Using the median is less practical, though, since it ignores all the bigger issues.

To answer the question "How long would a new issue take to resolve?", we should use the 85th percentile. Essentially, this means we can say with 85% confidence that a given issue will be completed within **3.26 days**.

Using the 85th percentile gives us:

1. A practical view of how long it takes for an issue to be completed
2. A result that doesn't get skewed by the many shorter issues, which most likely have lower business value
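To make the comparison concrete, here is a minimal sketch showing how the three statistics diverge on a long-tail sample. The cycle times below are made up for illustration; they are not Haystack's actual data.

```python
import statistics

# Hypothetical issue cycle times in days (long tail: many short issues, a few long).
# These numbers are illustrative only -- they are not Haystack's actual data.
cycle_times = [0.2, 0.3, 0.4, 0.5, 0.6, 0.8, 0.9, 1.0, 1.2, 1.5,
               1.8, 2.1, 2.5, 3.0, 3.5, 4.0, 5.5, 7.0, 9.0, 14.0]

mean = statistics.mean(cycle_times)      # pulled upward by the long tail
median = statistics.median(cycle_times)  # ignores the bigger issues entirely

# 85th percentile: 85% of issues finish at or below this value.
# quantiles(n=20) returns 19 cut points at 5%, 10%, ..., 95%; index 16 is 85%.
p85 = statistics.quantiles(cycle_times, n=20, method="inclusive")[16]

print(f"mean={mean:.2f}  median={median:.2f}  p85={p85:.2f}")
# → mean=2.99  median=1.65  p85=5.73
```

Note how the mean sits well above the median while only the 85th percentile captures the tail: quoting p85 answers "how long will a new issue take?" far more honestly than either of the other two.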

## Conclusion <a href="#h_a9aed35f14" id="h_a9aed35f14"></a>

At Haystack, we want to give more actionable insights and decrease the amount of noise in the data. Looking at the 85th percentile is an [industry-standard](https://www.scrum.org/resources/blog/getting-85-agile-metrics-actionableagile-part-1) practice among Product Managers.

We recommend using the 85th percentile when calculating time metrics, allowing you to understand and act on your software delivery.<br>

</details>

<details>

<summary>Can I see Mean Time to Recovery (MTTR)?</summary>

Recommended Read: [How to Improve Quality](https://usehaystack.notion.site/Improving-Quality-bcc88c70311b4290b278d5227d92ecbd?pvs=4)

### Option 1: Track via Haystack <a href="#h_7b51c52d06" id="h_7b51c52d06"></a>

Haystack has a tight integration with Jira, but not with incident management systems. This means you'll need to use Jira to track incidents inside Haystack.

This is also a best practice: all work the engineering team does should be tracked in its issue tracker.

**1. Create Jira Issues**

* Automatically create issues in Jira using [OpsGenie](https://support.atlassian.com/opsgenie/docs/view-team-alert-mttar-analytics/)
* Automatically create issues in Jira using [Pagerduty](https://support.pagerduty.com/docs/jira-cloud)

**2. Create Graph In Haystack**

Once you have a consistent way to create tickets in Jira whenever an incident happens, the next step is to track MTTR.

1. Go to [Reports](https://delivery.usehaystack.io/reports) page
2. Click the [New Graph](https://cdn.zappy.app/e174e0699b568acb2ee26d75f3836b6f.png) icon
3. Select `Issue Lead Time` as the metric
4. Select the filters that represent an incident
   1. Typically teams use
      1. `Issue Type: Bug` & `Priority: Highest`
      2. `Issue Type: Incident`
5. Select the visualization type
   1. Use `value` to see the raw number
   2. Use `line` to see it over time
6. Click Save

<img src="https://downloads.intercomcdn.com/i/o/1014748306/8c4cb9d093067ad6a02d50aa/Zappy+App+Screenshot+(3).png" alt="" data-size="original">

### Option 2: Track via Incident Management Systems <a href="#h_0c56373676" id="h_0c56373676"></a>

MTTR is a commodity metric, meaning almost all Incident Management Systems support it out of the box.

Below are links for the most common Incident Management Systems:

* [Pagerduty](https://support.pagerduty.com/docs/insights)
* [OpsGenie](https://support.atlassian.com/opsgenie/docs/view-team-alert-mttar-analytics/)

</details>

<details>

<summary>Can I see Change Failure Rate (CFR)?</summary>

CFR is a metric that can be phrased as "the percentage of deployments that caused incidents".

As long as you have a definition of an incident, you can track CFR using either the Issues Completed or Deployments Count metric, filtered by an indicator that represents incidents.
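The arithmetic behind CFR is simply the ratio of incident-causing deployments to total deployments. A minimal sketch with made-up counts (in practice you would read these off your filtered graphs):

```python
# Minimal sketch of the Change Failure Rate calculation. The counts below are
# hypothetical; they do not come from any Haystack API.
deployments_total = 40   # e.g. the Deployments Count metric for the period
incident_causing = 3     # e.g. Issues Completed filtered by Issue Type: Incident

change_failure_rate = incident_causing / deployments_total * 100
print(f"CFR = {change_failure_rate:.1f}%")  # → CFR = 7.5%
```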

Want more actionable metrics for quality? Check out [how-to-improve-quality](https://help.usehaystack.io/guides/how-to-improve-quality "mention")

</details>

<details>

<summary>Can I see the Development Time metric?</summary>

No.

As an alternative, you can use the **Issue time in status** metric, which gives more reliable data.

The development time metric typically refers to the time it takes to open a pull request. We have deprecated this metric as it was unreliable; it's only available for older accounts.

There are two ways this metric can be calculated, both of which have reliability issues:

1. **JIRA**: Mixing Version Control and Jira timestamps for the start and end of development time leads to large inconsistencies in the data, especially given unlinked pull requests, developers moving issues to the in-progress state chaotically, and other outliers that reduce data quality.
2. **Version Control**: Using Git timestamps for the start and end of development time also leads to large inconsistencies, especially given squashing, where developers who squash commits can easily show 0 development time.

</details>

<details>

<summary>How do I verify that the metrics are correct?</summary>

You can always inspect the raw data behind any metric by clicking the graph. A drill-in modal will appear with all the data points that were used to calculate that metric.

### How are my team metrics calculated? <a href="#h_f956d5820d" id="h_f956d5820d"></a>

The following settings affect how team metrics are calculated:

1. [Board Settings](https://cdn.zappy.app/3dac4e5ee40fb83f8f7f2e21798e14ed.png)
   1. **If Kanban**: Shows all issues in your board based on your settings
   2. **If Sprint**: Shows all issues in your sprint based on your settings.
      1. *Note: If an issue is in-progress but not in your sprint, Haystack will not show that issue.*
2. [Sprint Filter](https://cdn.zappy.app/93b090fc40d18bce29b06dc710806d05.png) (only for sprint based boards)
3. [Member Settings](https://cdn.zappy.app/f6a3b7c457c1eea86d54e64e0f303d77.png)
4. [Advanced Board Settings](https://cdn.zappy.app/9db1697d88b8fdd0edb7b034e151f76e.png)

</details>

<details>

<summary>Do PR metrics count unlinked PRs?</summary>

Yes. All pull requests merged by team members are included in calculations.

If you want to check specifically for unlinked PRs, use the `PRs Unlinked` metric.

</details>

<details>

<summary>Are Tasks/Stories/Epics/Subtasks calculated in Issue based metrics?</summary>

Haystack applies an implicit filter to metrics to ensure the data looks correct at first glance.

`Subtask` and `Epic` issue types are not counted as part of any metric. All other issue types are included in the metrics.

You can validate this by using the "[Issue Type](https://help.usehaystack.io/features/list-of-all-metrics)" filter.

</details>

<details>

<summary>Does Haystack exclude weekends and holidays?</summary>

Haystack does not exclude any particular date ranges like holidays, PTO, weekends, etc.

While this may initially feel like your metrics are artificially inflated, we think about metrics from the perspective of the customer. In this case, the customer is still waiting during the weekend, and the increased metrics continue to be a great signal for where bottlenecks might lie.

</details>
