Sheet: KPI Descriptions (the most data-rich of the 3 sheets in this workbook).

SN Internal Publishing KPIs Dashboard (Q1 2026)

Monthly quality and turnaround KPIs for copyediting, typesetting, corrections, RFT, and TAT across production stages, Jan–Mar 2026.
Each KPI entry lists: # | KPI Name | Stage | KPI Description | Analysis Process | Input Materials | Calculation Methodology | Validation | Target | Output | Data (Jan, Feb, Mar)
KPI 1a: Source of Corrections – Copyediting Quality (Stage: S300)

Description: Measures copyediting quality by analyzing author corrections received at the S300 stage and classifying each as an MPS (copyediting) error or an author preference. MPS errors are categorized as grammar, language, style, references, or consistency.

Analysis Process:
• Retrieve the dispatch report from iTrak and randomly select 20% of chapters
• Extract all author corrections from the S300 stage
• Analyze each correction against the manuscript, CE file, email instructions, and stylesheet
• Classify each correction as an MPS (CE) error or an author preference
• Categorize MPS (CE) errors as grammar, style, references, or consistency
• Validate the errors through CFT discussion
• Identify root causes and actions

Input Materials: Author corrections file (S300); author manuscript; copyedited file/stylesheet; job sheet; email instructions

Calculation Methodology: CE Quality = EXP(−CE errors / total # corrections audited)

Validation:
• Errors identified by the CoOE team are reviewed and validated by the CE Team Lead
• Classification accuracy is cross-checked (CE errors vs. author preference)
• Validation is performed against the stylesheet and project-specific guidelines
• Stakeholder meetings are conducted post-validation to confirm findings and agree on action items

Target: 0.95

Output: CE Quality %; error distribution (Pareto); top error categories

Data (Jan / Feb / Mar):
Corrections: 106 / 479 / 738
Errors: 2 / 20 / 26
Quality: 0.9813 / 0.9591 / 0.9654
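The exponential quality formula can be checked against the monthly figures. A minimal Python sketch (the helper name `quality_score` is illustrative, not from the source):

```python
import math

def quality_score(errors: int, corrections_audited: int) -> float:
    """Source of Corrections quality score:
    Quality = exp(-errors / total corrections audited)."""
    return math.exp(-errors / corrections_audited)

# Jan-Mar copyediting figures from the dashboard
for month, corrections, errors in [("Jan", 106, 2), ("Feb", 479, 20), ("Mar", 738, 26)]:
    print(month, round(quality_score(errors, corrections), 4))
# Jan 0.9813, Feb 0.9591, Mar 0.9654 -- matching the reported values
```

The same formula reproduces the typesetting (KPI 1b) values, e.g. exp(−1/106) ≈ 0.9906 for January.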
KPI 1b: Source of Corrections – Typesetting Quality (Stage: S300)

Description: Evaluates typesetting quality by analyzing author corrections across the S300 and S650 stages, identifying layout, formatting, and design-related errors. This KPI also helps assess errors introduced between stages and ensures adherence to design specifications.

Analysis Process:
• Retrieve the dispatch report from WMS and randomly select 20% of chapters
• Capture corrections from both the S300 and S650 stages
• Map corrections against the design spec, CE file, and proofs
• Classify each correction as an author preference or a TS error (layout, alignment, figures, tables, equations, pagination)
• Perform a stage-wise comparison (S300 vs. S650)

Input Materials: Author corrections (S300 & S650); author manuscript; job sheet; design spec

Calculation Methodology: TS Quality = EXP(−TS errors / total # corrections audited)

Validation:
• Errors identified by the CoOE team are reviewed and validated by the Pagination Team Lead
• Classification accuracy is cross-checked (TS errors vs. author preference)
• Validation is performed against the design spec and project-specific guidelines
• Stakeholder meetings are conducted post-validation to confirm findings and agree on action items

Target: 0.95

Output: TS Quality %; stage-wise error trend; top error categories

Data (Jan / Feb / Mar):
Corrections: 106 / 479 / 738
Errors: 1 / 4 / 5
Quality: 0.9906 / 0.9917 / 0.9932
KPI 2: Correctly Implemented Corrections (Stage: S300)

Description: Tracks the accuracy of implementing author corrections across stages by measuring how effectively corrections are incorporated into revised proofs. This KPI ensures that all author inputs are correctly addressed and minimizes missed or partially implemented corrections.

Analysis Process:
• Retrieve the dispatch report from WMS and randomly select 20% of chapters
• Track all author corrections across revisions
• Compare author corrections against the revised proofs
• Mark each correction as correctly implemented or missed
• Perform root cause analysis for missed corrections
• Identify team- and stage-wise gaps

Input Materials: Author corrections (S300 & S650); revised proofs; job sheet; design spec

Calculation Methodology: Correction Accuracy = 1 − (missed corrections / total # corrections audited), reported as a proportion

Validation:
• Missed corrections identified by the CoOE team are reviewed and validated by the Pagination Team Lead
• Validation is performed against the design spec and project-specific guidelines
• Stakeholder meetings are conducted post-validation to confirm findings and agree on action items

Target: 0.99

Output: Correction accuracy %; missed corrections count

Data (Jan / Feb / Mar):
Corrections: 368 / 1,782 / 956
Errors: 3 / 1 / 0
Quality: 0.9918 / 0.9994 / 1
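The reported accuracy values are proportions of corrections that were not missed; a small Python sketch reproduces the monthly figures (the function name is illustrative):

```python
def correction_accuracy(missed: int, audited: int) -> float:
    """Correction Accuracy = 1 - (missed corrections / total corrections audited),
    reported as a proportion against the 0.99 target."""
    return 1 - missed / audited

# Jan-Mar figures from the dashboard
for month, audited, missed in [("Jan", 368, 3), ("Feb", 1782, 1), ("Mar", 956, 0)]:
    print(month, round(correction_accuracy(missed, audited), 4))
# Jan 0.9918, Feb 0.9994, Mar 1.0
```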
KPI 3: RFT (Right First Time) – S200 & S600 (Stage: S200)

Description: Measures the capability to achieve 'Right First Time' quality without rework. Directly contributes to improved efficiency, reduced cost of quality, and consistent delivery excellence.

Analysis Process:
• Capture errors identified during QC/audit
• Map errors to stage (S200 / S600)
• Calculate RFT based on error-free output
• Perform trend analysis across titles
• Identify top defect drivers

Input Materials: Source of corrections results; QC findings; error counts

Calculation Methodology: RFT = EXP(−errors / total audited pages), consistent with the reported values below

Validation:
• QA validation (sample-based)
• Cross-check with SOC and QC reports

Target: 0.95

Output: RFT %; stage-wise trend

Data (Jan / Feb / Mar):
Pages: 106 / 479 / 738
Errors: 3 / 24 / 31
Quality: 0.9721 / 0.9511 / 0.9589
KPI 4: Proof Quality Disapprovals (PQD) – S650 (Stage: S650)

Description: Monitors final proof file quality by tracking print approval rejections. Acts as a critical indicator of end-to-end process effectiveness and minimizes rework.

Analysis Process:
• Track all titles submitted for final proof approval
• Identify rejected vs. approved titles
• Categorize rejection reasons (technical / quality / spec deviation)
• Perform RCA for each PQD title
• Identify recurring defects and high-risk workflows

Input Materials: PQD logs; final proof approval/rejection data; defect classification

Calculation Methodology: PQD Quality = 1 − (# disapproved titles / total # titles delivered)

Validation:
• Validate PQD classification with SN feedback / approval logs
• RCA review by the CoOE team
• Validation with PM/PE (if required)

Target: 1

Output: PQD %; root cause analysis; recurring issues; high-risk titles

Data (Jan / Feb / Mar):
Delivered titles: 3 / 0 / 7
Affected titles: 0 / 0 / 0
Quality: 1 / - / 1
KPI 5: Post Pub Corrections (Stage: S650)

Description: Monitors the quality of final published titles by tracking corrections raised after publication. Acts as a key indicator of gaps in upstream processes (pre-editing, CE, QC, and proof stages) and helps minimize rework, client dissatisfaction, and reputational risk.

Analysis Process:
• Track all titles published and identify those with post-publication corrections
• Categorize corrections (content, formatting, technical, author preference, missed corrections)
• Perform RCA for each post-publication correction
• Map defects to originating stages (pre-editing / CE / MC / QC)
• Identify recurring issues and high-risk workflows
• Establish preventive actions to avoid recurrence

Input Materials: Post-publication correction logs; final published files (PDF/ePub); proof/revises versions

Calculation Methodology: Post-Pub Quality = 1 − (# titles with post-pub corrections / total # titles published), consistent with the reported values below

Validation:
• Validate post-pub correction classification with SN feedback / approval logs
• RCA review by the CoOE team
• Validation with PM/PE (if required)

Target: 1

Output: Post-Pub Correction %; root cause analysis (RCA) summary; recurring issue trends; stage-wise insights; high-risk titles/workflows identified; corrective & preventive actions (CAPA)

Data (Jan / Feb / Mar):
Delivered titles: 3 / 0 / 7
Post-pub correction count: 1 / 0 / 0
Quality: 0.67 / - / 1
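The reported title-level Quality% values for both PQD and post-pub corrections (e.g. 0.67 for January, with 1 affected title out of 3 delivered) correspond to 1 − affected/delivered, with months of zero deliveries shown as '-'. A minimal sketch (helper name illustrative):

```python
def title_quality(affected: int, delivered: int):
    """Title-level quality = 1 - (affected titles / titles delivered).
    Returns None when no titles were delivered (shown as '-' on the dashboard)."""
    if delivered == 0:
        return None
    return round(1 - affected / delivered, 2)

# Post-publication corrections, Jan-Mar
for month, delivered, affected in [("Jan", 3, 1), ("Feb", 0, 0), ("Mar", 7, 0)]:
    print(month, title_quality(affected, delivered))
# Jan 0.67, Feb None, Mar 1.0
```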
KPI 6: VOC – Feedback Analysis & Action Tracking (Stage: All)

Description: Transforms SN feedback into actionable insights by tracking trends, root causes, and closure effectiveness. Enables continuous improvement, enhances client experience, and ensures alignment with customer expectations.

Analysis Process:
• Collect feedback from all channels (PPC, emails, reviews)
• Categorize feedback as positive or negative
• Map feedback to process/stage/department/operator
• Track RCA and action items
• Monitor closure timelines and recurrence

Input Materials: Client feedback (PPC/emails); feedback categories; RCA tracker; actions implementation tracker

Calculation Methodology: NA

Validation:
• Validate feedback categorization with the QA lead
• RCA effectiveness review
• Action closure verification (evidence-based)
• Monthly governance review

Output: VOC dashboard; feedback trends; RCA effectiveness; action closure %; recurring issue trend

Data (Jan / Feb / Mar):
Feedback count: 0 / 0 / 0
KPI 7: TAT for Each Stage (S200, S300, S650) (Stage: All)

Description: Evaluates operational efficiency by measuring adherence to turnaround-time commitments across production stages. Supports proactive decision-making to address bottlenecks, optimize resource utilization, and ensure on-time delivery performance.

Analysis Process:
• Capture received and delivered dates for each stage
• Calculate TAT at the chapter/title level
• Compare against SLA benchmarks
• Identify delay reasons: internal delays vs. dependency delays (author/client)
• Perform trend and bottleneck analysis

Input Materials: Job received date; completion date; SLA benchmarks

Calculation Methodology: On-time = # chapters delivered on time / total # chapters delivered, reported as a proportion

Validation:
• System-generated due-date validation (no manual override)
• Random audit of job logs
• Cross-check with tracker and delivery records

Output: Stage-wise TAT; on-time %; delay %; avg TAT vs. SLA

Data (Jan / Feb / Mar):
Delivered titles: 3 / 0 / 7
Average TAT (days): 94 / 0 / 171
Overall TAT: 1 / - / 0
Stage-wise TAT schedule (days):

Stage   Days  Cumulative
S50      4      4
S200    30     34
S300A    7     41
S300    12     53
S600     7     60
S65A     5     65
S650    14     79

Client Target: 94
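The second number beside each stage is the running total of the stage durations; a quick Python check confirms the arithmetic and compares the 79-day planned total against the 94-day client target (the 15-day gap presumably covers time outside the listed stages, such as author/client review):

```python
# Stage-wise schedule (durations in days) as listed on the dashboard
stages = [("S50", 4), ("S200", 30), ("S300A", 7), ("S300", 12),
          ("S600", 7), ("S65A", 5), ("S650", 14)]

running = 0
for stage, days in stages:
    running += days
    print(f"{stage:6} {days:3} {running:3}")

client_target = 94
print(f"Planned total: {running} days; client target: {client_target} days "
      f"({client_target - running} days of buffer)")
```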