Last week I spent 3 hours clicking through SonarQube trying to copy-paste coverage data into a spreadsheet for our sprint planning.
Three. Hours.
My PM wanted a simple list: which files need tests? Which lines aren't covered?
SonarQube has all this data. But getting it out in a format humans can actually use? Pain in the ass.
So I built export scripts that run in your browser console. No installs. No auth tokens. Just paste and go.
Why SonarQube's Export Sucks
Look, SonarQube is great at showing you code quality metrics. The dashboard is nice. The drill-downs work.
But try to:
- Export all issues with file paths and line numbers
- Get a list of files below 80% coverage
- See exactly which lines need tests
- Share this with your team in a readable format
You're fucked.
The built-in export gives you JSON or CSV that's basically useless. No proper formatting. Missing context. Can't sort by what matters.
I needed something better.
The Hack: Browser Console Scripts
Here's the thing about SonarQube - it's all browser-based. Which means the data is already there in your browser. You just need to grab it.
Browser console + fetch API = instant access to everything.
No backend needed. No API tokens. No complex setup.
Just open DevTools (F12), paste a script, hit Enter. Done.
Script 1: Export All Issues with File and Line
This one gets every issue - bugs, vulnerabilities, code smells - with exact file paths and line numbers.
```javascript
const PROJECT_KEY = 'your-project-key';

const exportIssuesWithLines = async () => {
  console.log('📋 Fetching issues...\n');

  // SonarQube caps page size at 500, so page through until a short page.
  let allIssues = [];
  let page = 1;
  let hasMore = true;
  while (hasMore) {
    const response = await fetch(
      `/api/issues/search?componentKeys=${PROJECT_KEY}&statuses=OPEN,CONFIRMED&ps=500&p=${page}`
    );
    if (!response.ok) break;
    const data = await response.json();
    allIssues = allIssues.concat(data.issues || []);
    hasMore = data.issues && data.issues.length === 500;
    page++;
    console.log(`  📥 Fetched ${allIssues.length} issues...`);
  }

  let report = 'ISSUES REPORT\n';
  report += `Total: ${allIssues.length}\n\n`;
  allIssues.forEach((issue, index) => {
    // Component keys look like "PROJECT:path/to/file.ts" - keep just the path.
    const filePath = issue.component.replace(`${PROJECT_KEY}:`, '');
    const line = issue.line || 'N/A';
    report += `${index + 1}. ${issue.severity} - ${issue.type}\n`;
    report += `   File: ${filePath}\n`;
    report += `   Line: ${line}\n`;
    report += `   Message: ${issue.message}\n`;
    report += `   Effort: ${issue.effort || 'N/A'}\n\n`;
  });

  // Trigger a download straight from the browser - no server round-trip.
  const blob = new Blob([report], { type: 'text/plain' });
  const link = document.createElement('a');
  link.href = URL.createObjectURL(blob);
  link.download = `${PROJECT_KEY}-issues.txt`;
  link.click();
  console.log(`✅ Exported ${allIssues.length} issues`);
};

exportIssuesWithLines();
```
What this does:
The pagination loop is key. SonarQube caps API responses at 500 items per page, so we keep calling with an increasing page number until a page comes back short. One caveat: the issues endpoint won't page past the first 10,000 results, so narrow the search (by severity, type, or directory) if your project is over that.
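The same loop works for any SonarQube list endpoint. Here it is as a standalone helper (hypothetical name, not part of the scripts above) with the page-fetching function injected, so the termination condition is easy to see and test:

```javascript
// Generic pagination sketch: keep requesting pages until a short page arrives.
// `getPage` is any async function (pageNumber) => array of items.
const fetchAllPages = async (getPage, pageSize = 500) => {
  let all = [];
  let page = 1;
  let hasMore = true;
  while (hasMore) {
    const items = await getPage(page);
    all = all.concat(items);
    // A full page means there may be more; a short page means we're done.
    hasMore = items.length === pageSize;
    page++;
  }
  return all;
};
```

In the script above, `getPage` would wrap the `fetch('/api/issues/search?...&p=' + page)` call and return `data.issues`.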
The replace call strips the project key from the component path. SonarQube returns "PROJECT:path/to/file.ts" but we just want "path/to/file.ts".
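One subtlety: `String.replace` with a string argument only replaces the first occurrence, which happens to be what you want here. Slicing off the known prefix makes the intent explicit — a small helper sketch (hypothetical name):

```javascript
// Strip the "projectKey:" prefix from a SonarQube component key,
// leaving just the file path. Returns the input unchanged if the
// prefix isn't there.
const stripProjectKey = (component, projectKey) => {
  const prefix = `${projectKey}:`;
  return component.startsWith(prefix)
    ? component.slice(prefix.length)
    : component;
};
```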
The Blob and createElement('a') trick downloads the file directly. No server needed. The browser does all the work.
Script 2: Files That Need Coverage
This one answers: "Which files need tests?"
```javascript
const PROJECT_KEY = 'your-project-key';

const exportFilesNeedingCoverage = async () => {
  console.log('📋 Fetching coverage data...\n');

  let allFiles = [];
  let page = 1;
  let hasMore = true;
  while (hasMore) {
    const response = await fetch(
      `/api/measures/component_tree?ps=500&asc=true&metricSort=coverage&s=metric&metricSortFilter=withMeasuresOnly&p=${page}&component=${PROJECT_KEY}&metricKeys=coverage,uncovered_lines,lines_to_cover`
    );
    if (!response.ok) break;
    const data = await response.json();
    if (data.components && data.components.length > 0) {
      allFiles = allFiles.concat(data.components);
      hasMore = data.components.length === 500;
      page++;
    } else {
      hasMore = false;
    }
  }

  // Keep only files under the coverage threshold (80% here).
  const filesBelow80 = allFiles.filter(file => {
    const coverage = file.measures?.find(m => m.metric === 'coverage');
    return coverage && parseFloat(coverage.value) < 80;
  });

  let report = `FILES NEEDING COVERAGE (Below 80%)\n`;
  report += `Total: ${filesBelow80.length} files\n\n`;
  filesBelow80.forEach((file, index) => {
    let coverage = 0;
    let uncoveredLines = 0;
    let linesToCover = 0;
    file.measures?.forEach(measure => {
      if (measure.metric === 'coverage') coverage = parseFloat(measure.value);
      if (measure.metric === 'uncovered_lines') uncoveredLines = parseInt(measure.value, 10);
      if (measure.metric === 'lines_to_cover') linesToCover = parseInt(measure.value, 10);
    });
    const filePath = file.path || file.name;
    report += `${index + 1}. ${filePath}\n`;
    report += `   Coverage: ${coverage.toFixed(2)}%\n`;
    report += `   Uncovered Lines: ${uncoveredLines}\n`;
    report += `   Lines to Cover: ${linesToCover}\n\n`;
  });

  const blob = new Blob([report], { type: 'text/plain' });
  const link = document.createElement('a');
  link.href = URL.createObjectURL(blob);
  link.download = `${PROJECT_KEY}-coverage.txt`;
  link.click();
  console.log(`✅ Exported ${filesBelow80.length} files`);
};

exportFilesNeedingCoverage();
```
The breakdown:
metricSort=coverage sorts files by coverage percentage (lowest first). So the files that need the most work show up first.
metricSortFilter=withMeasuresOnly skips files without coverage data. Config files, type definitions, etc.
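That query string is long enough to be error-prone when hand-concatenated. Building it with `URLSearchParams` handles the encoding for you — a sketch (hypothetical helper name):

```javascript
// Build the component_tree query from a plain object instead of a
// hand-concatenated string; URLSearchParams encodes each value.
const buildCoverageQuery = (projectKey, page) =>
  new URLSearchParams({
    component: projectKey,
    metricKeys: 'coverage,uncovered_lines,lines_to_cover',
    metricSort: 'coverage',
    s: 'metric',
    metricSortFilter: 'withMeasuresOnly',
    asc: 'true',
    ps: '500',
    p: String(page),
  }).toString();

// Usage in the loop:
// fetch(`/api/measures/component_tree?${buildCoverageQuery(PROJECT_KEY, page)}`)
```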
The filter for files below 80% is arbitrary. Change it to whatever threshold your team uses.
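Since the threshold varies by team, a small pure helper (hypothetical name) makes it a parameter and keeps the filter logic testable on its own:

```javascript
// Filter component_tree results to files whose coverage measure is
// below `threshold`. Files without a coverage measure are skipped.
const filesBelowThreshold = (files, threshold = 80) =>
  files.filter((file) => {
    const coverage = file.measures?.find((m) => m.metric === 'coverage');
    return coverage !== undefined && parseFloat(coverage.value) < threshold;
  });
```

Swap `filesBelowThreshold(allFiles, 90)` into the script if your team targets 90%.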
Script 3: Get Exact Uncovered Line Numbers
This is the nuclear option. For each file, it fetches the actual source code and identifies which specific lines aren't covered.
```javascript
const PROJECT_KEY = 'your-project-key';

const exportUncoveredLineNumbers = async () => {
  console.log('📋 Fetching files...\n');

  let allFiles = [];
  let page = 1;
  let hasMore = true;
  while (hasMore) {
    const response = await fetch(
      `/api/measures/component_tree?ps=500&asc=true&metricSort=coverage&p=${page}&component=${PROJECT_KEY}&metricKeys=coverage,uncovered_lines`
    );
    if (!response.ok) break;
    const data = await response.json();
    if (data.components && data.components.length > 0) {
      allFiles = allFiles.concat(data.components);
      hasMore = data.components.length === 500;
      page++;
    } else {
      hasMore = false;
    }
  }

  const filesBelow80 = allFiles.filter(file => {
    const coverage = file.measures?.find(m => m.metric === 'coverage');
    return coverage && parseFloat(coverage.value) < 80;
  });

  console.log(`Found ${filesBelow80.length} files. Fetching line details...\n`);

  let report = `UNCOVERED LINES REPORT\n`;
  report += `Total Files: ${filesBelow80.length}\n\n`;

  for (const file of filesBelow80) {
    let coverage = 0;
    file.measures?.forEach(m => {
      if (m.metric === 'coverage') coverage = parseFloat(m.value);
    });
    const filePath = file.path || file.name;
    report += `File: ${filePath}\n`;
    report += `Coverage: ${coverage.toFixed(2)}%\n`;
    try {
      // Pull per-line coverage data for the first 5000 lines of the file.
      const linesResponse = await fetch(
        `/api/sources/lines?key=${encodeURIComponent(file.key)}&from=1&to=5000`
      );
      if (linesResponse.ok) {
        const linesData = await linesResponse.json();
        const uncoveredLines = [];
        linesData.sources?.forEach(line => {
          // lineHits === 0: the line is executable but never ran under test.
          if (line.lineHits === 0 || line.coverageStatus === 'uncovered') {
            uncoveredLines.push(line.line);
          }
        });
        if (uncoveredLines.length > 0) {
          report += `Uncovered Lines: ${uncoveredLines.join(', ')}\n`;
        }
      }
    } catch (error) {
      report += `Could not fetch line details\n`;
    }
    report += '\n';
  }

  const blob = new Blob([report], { type: 'text/plain' });
  const link = document.createElement('a');
  link.href = URL.createObjectURL(blob);
  link.download = `${PROJECT_KEY}-uncovered-lines.txt`;
  link.click();
  console.log('✅ Export complete!');
};

exportUncoveredLineNumbers();
```
Why this matters:
When you tell a dev "AuthController.ts needs more coverage," they have no idea where to start.
When you say "AuthController.ts lines 42, 67, 89-105 aren't covered," they can fix it in 10 minutes.
The sources/lines endpoint returns every line of code with its coverage status. We filter for lineHits === 0, which means the line is executable but never ran during tests.
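The raw list gets long for big files. Collapsing consecutive numbers into ranges (the "89-105" style above) is one small pure function — a sketch with a hypothetical name:

```javascript
// Collapse a sorted array of line numbers into a compact range string,
// e.g. [42, 67, 89, 90, 91] -> "42, 67, 89-91".
const compressLineRanges = (lines) => {
  const ranges = [];
  for (const n of lines) {
    const last = ranges[ranges.length - 1];
    if (last && n === last.end + 1) {
      last.end = n; // extend the current run
    } else {
      ranges.push({ start: n, end: n }); // start a new run
    }
  }
  return ranges
    .map((r) => (r.start === r.end ? `${r.start}` : `${r.start}-${r.end}`))
    .join(', ');
};
```

Drop it into Script 3 in place of `uncoveredLines.join(', ')` for tighter reports.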
CSV Export (For Excel People)
Some teams want spreadsheets. Fine.
```javascript
const PROJECT_KEY = 'your-project-key';

const exportIssuesCSV = async () => {
  let allIssues = [];
  let page = 1;
  let hasMore = true;
  while (hasMore) {
    const response = await fetch(
      `/api/issues/search?componentKeys=${PROJECT_KEY}&statuses=OPEN,CONFIRMED&ps=500&p=${page}`
    );
    if (!response.ok) break;
    const data = await response.json();
    allIssues = allIssues.concat(data.issues || []);
    hasMore = data.issues && data.issues.length === 500;
    page++;
  }

  let csv = 'File,Line,Severity,Type,Message,Effort\n';
  allIssues.forEach(issue => {
    const filePath = issue.component.replace(`${PROJECT_KEY}:`, '');
    // Double embedded quotes so quoted fields stay well-formed.
    const message = (issue.message || '').replace(/"/g, '""');
    csv += `"${filePath}","${issue.line || 'N/A'}","${issue.severity}","${issue.type}","${message}","${issue.effort || 'N/A'}"\n`;
  });

  const blob = new Blob([csv], { type: 'text/csv' });
  const link = document.createElement('a');
  link.href = URL.createObjectURL(blob);
  link.download = `${PROJECT_KEY}-issues.csv`;
  link.click();
};

exportIssuesCSV();
```
The double-quote escaping (.replace(/"/g, '""')) keeps the CSV well-formed: if an issue message contains quotes or commas, Excel won't split the row in the wrong place. (Note this is about parsing, not "CSV injection" - formula injection is a separate concern if your messages can start with = or +.)
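The escaping is easy to factor into one helper that covers commas, newlines, and missing values too — a sketch (hypothetical names):

```javascript
// Wrap a value for CSV: always quote, double any embedded quotes.
// Handles commas, newlines, and quotes inside the field; null/undefined
// become an empty quoted cell.
const csvField = (value) => `"${String(value ?? '').replace(/"/g, '""')}"`;

// Join a row of values into one CSV line.
const csvRow = (values) => values.map(csvField).join(',');
```

Then each row in the script becomes `csv += csvRow([filePath, issue.line || 'N/A', issue.severity, issue.type, issue.message, issue.effort || 'N/A']) + '\n';`.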
How to Use These
- Open your SonarQube project in browser
- Press F12 to open DevTools
- Go to Console tab
- Paste one of these scripts
- Change PROJECT_KEY to your project key
- Hit Enter
- File downloads automatically
No npm install. No build step. No deployment. Just works.
Why This Matters
Code quality tools are pointless if the data stays trapped in a dashboard.
Your PM needs to see which features are risky. Your team lead needs to plan test coverage sprints. Your devs need specific line numbers to fix.
These scripts give you that. In seconds.
I use them every sprint planning. Export issues on Monday. Assign them Tuesday. Done by Friday.
SonarQube finally becomes useful instead of just another dashboard nobody looks at.
The Complete Toolkit
I've got like 10 variations of these now:
- Export only CRITICAL/BLOCKER issues
- Group issues by file
- Export duplicated code blocks
- Export security hotspots
- Quality gate status reports
All browser console. All instant.
Want them? They're all riffs on the scripts above - swap the endpoint, keep the pagination loop, and customize the PROJECT_KEY.
Or build your own. The SonarQube API is actually pretty straightforward once you realize you can just... call it from the browser.
Stop wasting hours on manual exports. Automate this shit and get back to building.