Every analytical decision in a GMP laboratory relies on the validity of the reference standard behind it. Whether it is calculating assay results, establishing calibration curves, or confirming identity, the confidence in the analytical output data is only as strong as the reference material used.
This is where the choice between a primary and secondary standard directly affects the reliability of the reported value. Primary standards are established by pharmacopoeial authorities and carry legal and metrological certainty, making them the point of origin for traceability. Secondary standards are used for routine analyses, but only when their value has been scientifically demonstrated against a specific primary standard batch and controlled throughout their lifecycle.

This article outlines the key characteristics, qualification processes, and uses of both types of standards, together with the risks and regulatory considerations associated with their application.
Definition and Characteristics of Primary Standards
Primary standard definition: “A standard designated or widely acknowledged as having the highest metrological qualities and whose property value is accepted without reference to other standards of the same property or quantity, within a specified context.” – European Pharmacopoeia 11.3, Chapter 5.12
The primary reference standard carries the highest degree of confidence in measurement and serves as the definitive reference for all comparative testing. Its value is not derived from or assigned against other standards; it is self-contained, scientifically established, and accepted without reference to any other standard within the specified analytical context.
Origin
Primary reference standards are supplied by official bodies such as the European Pharmacopoeia Commission, the United States Pharmacopeial Convention (USP), the British Pharmacopoeia (BP), and other widely recognized pharmacopoeial authorities and national control laboratories.
Where a European Pharmacopoeia reference standard is referred to in a monograph or general chapter, it represents the official standard that is alone authoritative in case of doubt or dispute.
This means that when a monograph prescribes a Ph. Eur. reference standard, its use is non-substitutable unless specifically justified and documented. These standards are integral to ensuring pharmaceutical products comply with compendial requirements.
Key characteristics of primary standards:
- High purity and chemical integrity
- Characterized using multiple analytical techniques
- Established through interlaboratory collaboration and statistical analysis
- Stored under controlled conditions and continuously monitored for stability
Qualification Process
The qualification (standardization) of primary reference standards is a multi-step procedure that ensures each standard meets the regulatory requirements for pharmaceutical use.

1. Substance Characterisation
The identity of the substance is confirmed using analytical methods, such as nuclear magnetic resonance (NMR), mass spectrometry (MS), infrared spectroscopy (IR), and elemental analysis, each of which verifies a specific structural feature. This ensures the material is chemically consistent with the reference compound.
2. Purity Assessment and Confirmation
Purity is evaluated by determining related substances, water content, residual solvents, and inorganic impurities. Where applicable, loss on drying may substitute for separate moisture and residual solvent determinations. Results are confirmed by an independent method such as quantitative nuclear magnetic resonance (qNMR), differential scanning calorimetry (DSC), or titration, particularly when the purity result is borderline or requires orthogonal confirmation.
3. Assigned Content and Uncertainty
The content is calculated using the mass balance approach, where all quantified impurities are subtracted from 100%. An uncertainty is then established and must fall within scientifically accepted limits to ensure the result is reliable and defensible for pharmacopoeial use.
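As a simplified illustration of the mass balance approach described above (the figures below are hypothetical and not taken from any real standard), the assigned content can be sketched as follows:

```python
# Simplified mass balance sketch with hypothetical figures.
# Assigned content (%) = 100 % minus all quantified impurities,
# reported on the "as is" basis of the material.

related_substances = 0.35    # % total related substances (hypothetical)
water = 0.20                 # % water content (hypothetical)
residual_solvents = 0.10     # % residual solvents (hypothetical)
inorganic_impurities = 0.05  # % inorganic impurities / sulfated ash (hypothetical)

assigned_content = 100.0 - (related_substances + water
                            + residual_solvents + inorganic_impurities)

print(f"Assigned content (mass balance): {assigned_content:.2f}%")  # 99.30%
```

The uncertainty of this assigned value is then estimated from the uncertainties of the individual impurity determinations and must remain within scientifically accepted limits.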
4. Interlaboratory Validation
When needed, interlaboratory studies are performed to ensure the reproducibility of the assigned content. Multiple qualified labs analyse the same material under a shared protocol, and only compliant results are used, particularly for assay standards or complex substances.
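A minimal sketch of how interlaboratory results might be combined is shown below. This is a simplified illustration only, not the EDQM's actual statistical procedure, and the laboratory values are hypothetical.

```python
import statistics

# Hypothetical assay results (%) reported by participating laboratories,
# each value being the mean of that laboratory's replicate determinations.
lab_means = [99.28, 99.35, 99.22, 99.41, 99.30, 99.95]

# Simple screening: flag laboratories whose mean deviates strongly from the
# overall median (robust MAD criterion) before computing the consensus value.
median = statistics.median(lab_means)
mad = statistics.median(abs(x - median) for x in lab_means)
compliant = [x for x in lab_means if abs(x - median) <= 3 * 1.4826 * mad]

consensus = statistics.mean(compliant)
std_uncertainty = statistics.stdev(compliant) / len(compliant) ** 0.5

print(f"Consensus value: {consensus:.2f}%")
print(f"Standard uncertainty of the mean: {std_uncertainty:.3f}%")
```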
5. Establishment Report
A formal report is compiled by EDQM and approved by the European Pharmacopoeia Commission. It documents test results, methods, uncertainty calculations, and scientific justifications. This report supports traceability, regulatory scrutiny, and lifecycle control of the standard.
Intended Use
Primary reference standards are used where the highest level of measurement confidence is required. Because they are compendial and legally authoritative, they serve as the reference point in regulated pharmaceutical testing.
They are typically applied in:
- Pharmacopoeial tests, where the standard is cited in a monograph or general chapter
- Method validation, particularly during assay development or regulatory submissions
- Content assignment, where high certainty is required
- In-house preparation and qualification of Secondary and Working Standards
- Regulatory compliance or dispute resolution, where test results may be challenged or audited
| Use Case | Description / Why Primary Standard Is Required |
|---|---|
| Pharmacopoeial Tests | Used when prescribed in a monograph or general chapter; legally authoritative reference for compliance. |
| Method Validation | Applied during assay development, validation, or regulatory submissions to ensure highest measurement confidence. |
| Content Assignment | Required when assigning content with high certainty, especially for mass balance or purity calculations. |
| Preparation of Secondary/Working Standards | Serves as the reference point for qualifying secondary and working standards. |
| Regulatory Compliance & Dispute Resolution | Provides defensible results in audits, investigations, or regulatory challenges due to compendial authority. |
Secondary Reference Standards
ICH Q7 Secondary Standard Definition: A substance of established quality and purity, as shown by comparison to a primary reference standard, used as a reference standard for routine laboratory analysis.
ICH Q7 Guideline – Note: The suitability of each batch of the secondary reference standard should be determined before first use by comparing it with a primary reference standard. Each batch of the secondary reference standard should be requalified periodically in accordance with a written protocol.
Secondary standards are generally introduced to limit the routine use of primary standards, particularly in quality control settings where frequent testing is required. To be acceptable, a secondary standard must demonstrate the same analytical property or properties as the primary standard it is qualified against, ensuring accurate and traceable results.

Origin of Secondary Standards
Secondary reference standards are not simply materials assigned a value by comparison to a primary standard. Their credibility depends on the competence of the organisation that produces them, and this competence must be demonstrated under recognised quality frameworks. ISO 17034:2016 sets the global requirements for reference material producers (RMPs), ensuring that any secondary standard is homogeneous, stable, correctly value-assigned, traceable, and supported by full documentation and lifecycle control.
In the pharmaceutical context, this means that when a manufacturer, contract testing laboratory, or specialised supplier prepares a secondary standard for routine use, the process should follow ISO 17034 (or an equivalent accredited system).
Qualification Process
Secondary standards must be qualified through traceable comparison to an appropriate primary standard. This qualification ensures that the secondary standard is linked to the official standard for its intended use.
1. Qualification Against Primary Standard
The secondary standard must have the same property as the primary standard it is compared to, for example, its assay value or identity. Qualification must be supported by:
- A validated analytical method appropriate to the property being evaluated
- A documented protocol outlining the comparison, acceptance criteria, and traceability to a specific primary standard batch
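As a simplified sketch of such a comparison, the example below assigns a value to a candidate secondary standard from an HPLC assay against the primary standard. All peak areas, weights, and values are hypothetical, and real protocols use replicate preparations and predefined acceptance criteria.

```python
# Hypothetical single-point comparison of a candidate secondary standard
# against a primary standard by HPLC assay (detector response assumed
# proportional to content).

primary_assigned_content = 99.3   # % (from the primary standard's leaflet/CoA, hypothetical)
primary_weight_mg = 25.12         # weighed amount of primary standard (hypothetical)
secondary_weight_mg = 25.05       # weighed amount of candidate secondary standard (hypothetical)

primary_peak_area = 1_254_300     # mean peak area, primary preparation (hypothetical)
secondary_peak_area = 1_248_900   # mean peak area, secondary preparation (hypothetical)

# Content of the secondary standard, traceable to the primary standard's assigned value
secondary_content = (secondary_peak_area / primary_peak_area) \
    * (primary_weight_mg / secondary_weight_mg) * primary_assigned_content

print(f"Assigned content of secondary standard: {secondary_content:.2f}%")
```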
2. Documentation and Lifecycle Control
Once qualified, the secondary standard must be managed under the site’s quality system, including:
- Assignment of a batch number, defined storage conditions, and a requalification or expiry date
- Issuance of a Certificate of Analysis (CoA) detailing the analytical method, results, uncertainty (if applicable), and reference to the qualified primary standard
Intended Use
Secondary reference standards are primarily used in routine analytical testing, where the daily use of primary standards would be impractical due to cost, limited availability, or conservation concerns.
They are typically used for:
- Routine quality control: assay, identification, or impurity testing
- In-process testing and stability studies
- Working standards, used repeatedly in validated methods
- System suitability testing, where traceable consistency is essential
Traceability of Secondary Standards
Traceability is fundamental to the acceptance and reliability of secondary reference standards. Since compendial authorities do not establish these materials, their analytical value must be demonstrated by a validated comparison to a primary reference standard. This traceability must be documented, transparent, and supported by full analytical characterization to ensure equivalence to the property of interest, most often assay or purity.
A properly qualified secondary standard is usually a certified reference material (CRM) produced under ISO 17034 and ISO/IEC 17025. These standards include a Certificate of Analysis (CoA) that documents:
- The assigned value (e.g., assay content),
- Its measurement uncertainty,
- The method(s) of comparison to a recognized primary standard (e.g., USP or Ph. Eur.), and
- The traceability chain linking the secondary material to the SI unit or to the compendial source.
Traceability must be based on metrologically sound principles, meaning the assigned value of the secondary standard must match the property established in the primary standard through a scientifically valid comparison. This includes ensuring the analytical method is appropriate and the reference batch of the primary standard is clearly documented.
| Traceability Element | Requirement | Evidence |
|---|---|---|
| Link to Primary Standard | Must reference exact primary batch | CoA listing batch number |
| Validated Method | Method suitable for property tested | Method validation report |
| Measurement Uncertainty | Assigned value must include uncertainty | Uncertainty calculation |
| Comparison Data | Raw data demonstrating equivalence | Chromatograms, spectra, calculations |
| Ongoing Maintenance | Requalification after changes | Updated CoA or requalification report |
Furthermore, traceability is not a one-time process. Requalification is required if any change occurs in the compendial method, material status, or batch conditions. Without active maintenance of traceability, the use of secondary standards poses compliance and quality risks in regulated testing environments.
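As a minimal illustration of how these traceability elements might be captured in a structured record, the sketch below defines a simple data structure. The field names and example values are illustrative only and are not taken from any particular LIMS, standard, or supplier.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class SecondaryStandardRecord:
    """Illustrative traceability record for a secondary reference standard."""
    material: str
    batch_number: str
    primary_standard_batch: str        # exact primary batch used for qualification
    comparison_method: str             # validated method used for the comparison
    assigned_value_percent: float
    expanded_uncertainty_percent: float
    qualification_date: date
    requalification_due: date
    comparison_data_refs: list[str] = field(default_factory=list)  # chromatograms, spectra, calculations

record = SecondaryStandardRecord(
    material="Example API",                          # hypothetical
    batch_number="SS-2024-001",                      # hypothetical
    primary_standard_batch="Ph. Eur. CRS batch 5",   # hypothetical batch identifier
    comparison_method="HPLC assay, method VAL-123",  # hypothetical method reference
    assigned_value_percent=99.2,
    expanded_uncertainty_percent=0.4,
    qualification_date=date(2024, 3, 1),
    requalification_due=date(2026, 3, 1),
    comparison_data_refs=["CHR-0456", "CALC-0789"],
)
```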
Differences Between Primary and Secondary Reference Standards
| Criteria | Primary Standard | Secondary Standard |
|---|---|---|
| Source | Issued by official pharmacopoeial bodies (e.g., Ph. Eur., USP) | Produced by accredited manufacturers of reference standards (ISO 17034) |
| Official Status | Compendial and authoritative when referenced in a monograph | Not compendial; must be demonstrated as suitable through qualification |
| Traceability | Self-contained (scientifically established) | Traceability must be demonstrated to the specific property of a primary standard |
| Use in Testing | Compendial tests, validation, and regulatory compliance | Routine QC testing; official tests only when traceability to primary is established |
| Qualification Requirement | No user-side qualification needed | Qualification must be demonstrated with a validated method, protocol, and acceptance criteria |
| Uncertainty | Low | Usually higher than primary standards |
| Legal Standing | Legally binding when cited in pharmacopoeial procedures | Not legally binding; acceptance depends on traceable qualification |
Uncertainty Between Primary and Secondary Standards
Confidence in test results depends not only on method performance but also on the integrity of the reference standard used. As a general rule, secondary standards carry greater uncertainty compared to primary standards. This variability must be acknowledged, controlled, and reflected in the overall measurement uncertainty, especially when reporting official or regulatory results in the laboratory.

Increased Uncertainty with Secondary Standards
Uncertainty in analytical results originates from the entire measurement system: the method, the instrument, the analyst, the environment, and the reference standard itself. Unlike primary standards, secondary standards are not independently certified; their values are established by comparison against a primary standard. This extra step in the traceability chain introduces an additional source of uncertainty, which accumulates through the measurement chain and can affect the reliability of the final results.
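A minimal sketch of how the extra contribution can be combined with other sources by root-sum-of-squares is shown below; the individual contributions are hypothetical values for illustration only.

```python
# Root-sum-of-squares combination of standard uncertainty contributions
# (hypothetical values, expressed as relative standard uncertainties in %).

method_precision = 0.30        # repeatability / intermediate precision
weighing_and_dilution = 0.10   # gravimetric and volumetric steps
primary_standard_value = 0.15  # uncertainty of the primary standard's assigned value
secondary_comparison = 0.20    # extra contribution from qualifying the secondary standard

combined = (method_precision**2 + weighing_and_dilution**2
            + primary_standard_value**2 + secondary_comparison**2) ** 0.5
expanded = 2 * combined  # expanded uncertainty, coverage factor k = 2

print(f"Combined standard uncertainty: {combined:.2f}%")
print(f"Expanded uncertainty (k=2):    {expanded:.2f}%")
```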
Uncertainty Safety Measures
When laboratories use secondary standards, they should consider compensating for increased uncertainty by tightening internal acceptance ranges or applying guard bands. These safety margins ensure that reported values remain within the true specification limits.
For example, a specification range of 98.0–102.0% may be internally reduced to 99.0–101.0% when using a secondary standard with higher uncertainty to prevent borderline results from being incorrectly accepted.
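A short sketch of the guard-banding idea from this example follows. The expanded uncertainty figure is hypothetical, and the size of the guard band actually applied should be justified in the laboratory's own procedures.

```python
# Guard banding: tighten internal acceptance limits by the expanded
# uncertainty so that results near the specification edge are not
# accepted when the true value could lie outside specification.

spec_low, spec_high = 98.0, 102.0   # registered specification (%)
expanded_uncertainty = 1.0          # hypothetical expanded uncertainty (k=2), %

internal_low = spec_low + expanded_uncertainty
internal_high = spec_high - expanded_uncertainty

print(f"Internal acceptance range: {internal_low:.1f}-{internal_high:.1f}%")  # 99.0-101.0%
```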
Regulatory Expectations
While global regulatory authorities use slightly different terminology, their core expectation remains the same: whether a laboratory uses a primary or secondary standard, the standard must be qualified, traceable to an authoritative source, scientifically justified, and used within a controlled system.
What changes between USP, Ph. Eur., FDA, or WHO is not the principle, but the way they express it. Some emphasize compendial authority, others focus on traceability or GMP control.
United States Pharmacopeia (USP)
The United States Pharmacopeia (USP) designates its reference standards as official and mandatory when cited in compendial monographs. If a secondary or in-house standard is used in place of a USP Reference Standard, it must be fully characterized, directly compared to the USP standard, and supported by validated analytical methods. Users may employ traceable secondary standards to support testing and ensure ongoing compliance with compendial requirements, especially when such testing is intended to confirm conformance (General Chapter <1010>).
European Pharmacopoeia (Ph. Eur.)
According to Ph. Eur. 5.12, primary standards are the sole authoritative reference materials whenever they are cited in a monograph or general chapter. Secondary standards may be used for routine testing only when they have been established through direct comparison with the corresponding primary standard using a scientifically justified and traceable procedure. The qualified characteristic (such as assay value or identity) must align with that of the primary standard, and the use of the secondary material must remain within the validated scope of that qualification.
World Health Organization (WHO)
WHO guidance permits the use of secondary reference materials for routine testing when their value is established through a validated comparison with a recognized primary standard. The assigned property, analytical procedure, and storage/retest criteria must be documented in detail, and the secondary standard should be controlled in accordance with GMP principles (WHO TRS 996, Annex 2).
United States Food and Drug Administration (FDA)
According to the FDA’s Guidance for Industry: Analytical Procedures and Methods Validation for Drugs and Biologics (2015), reference standards used in analytical procedures should be suitable for their intended purpose and properly supported by qualification data. Official reference standards are commonly obtained from recognized pharmacopoeial or regulatory sources such as the USP, European Pharmacopoeia, Japanese Pharmacopoeia, WHO, NIST, or CBER. When a laboratory uses an in-house standard, it must be qualified against the current official reference standard to establish traceability and confirm that it provides equivalent performance for the measured property. This qualification typically includes appropriate characterization, comparison testing, and review of the impurity profile, ensuring that the in-house material remains scientifically justified and suitable for ongoing routine use.
ICH Q7 – GMP Guideline for APIs
ICH Q7 provides clear definitions for both primary and secondary reference standards in the context of API manufacturing and sets expectations for how they should be established, maintained, and controlled under GMP.
A primary reference standard under ICH Q7 must be authentic, highly pure, and supported by an extensive set of analytical data. It may come from an official compendial source, but it may also be prepared in-house through independent synthesis or further purification of high-quality production material.
Regardless of origin, the laboratory must document the source, storage conditions, and usage history of every primary reference standard to maintain traceability throughout its lifecycle.
When no compendial or officially recognised primary standard exists, an in-house primary standard must be created. ICH Q7 emphasises that full identity and purity testing is required to justify its status, and the supporting analytical package must be retained as part of the GMP documentation.
Secondary standards under ICH Q7 must be prepared, identified, tested, approved, and stored under defined conditions. Before first use, each batch must be demonstrated to be suitable by direct comparison with a primary standard, and periodic requalification must be performed according to a written protocol. This establishes a continuous link to the primary reference and ensures that the secondary material maintains appropriate analytical performance for routine use.
ISO 17034: General Requirements for the Competence of Reference Material Producers
ISO 17034 defines the quality framework for organisations that produce certified reference materials (CRMs), including secondary standards used in pharmaceutical testing. While not explicitly written for GMP laboratories, it is the globally recognised benchmark for establishing that a reference material producer operates with the necessary scientific, technical, and quality competence.
Under ISO 17034, reference materials must be demonstrated as homogeneous, stable, and correctly value-assigned. Any secondary standard produced under this framework must include traceability to an authoritative source, typically a compendial primary standard, and the assigned value must be supported by validated analytical methods and uncertainty calculations. The certification must also include documented storage conditions, shelf life or re-test intervals, and instructions for correct use.
For GMP laboratories, ISO 17034 accreditation is highly relevant to supplier qualification. It provides confidence that the secondary standard has been produced under a controlled system and that its value assignment can withstand scientific and regulatory scrutiny. This means many GMP labs qualify secondary standards based on ISO 17034/ISO 17025 accreditation and the accompanying CoA, without needing to experimentally re-establish the entire comparison in-house.
FAQ
Can I Use a Secondary Standard in Place of a Primary Standard for a Compendial Test?
Primary standards are the preferred choice for compendial testing because they are directly established and certified by official bodies such as the USP or Ph. Eur. However, secondary standards can also be used in official testing, provided they are adequately qualified against the corresponding primary standard.
This qualification must demonstrate traceability, comparable purity or assigned value, and suitability for the intended method. In practice, this means secondary standards are acceptable, especially for routine use, so long as their use is scientifically justified, documented, and controlled within the validated scope.
What Is the Difference Between a Working Standard and a Secondary Standard?
A secondary standard is a reference material whose value has been assigned by comparison to a primary standard. A working standard is a type of reference standard that the laboratory qualifies for routine use. It must be traceable to a primary standard, but because it is established under the lab’s own conditions and equipment, it generally carries higher uncertainty than certified secondary standards.
Can a Laboratory Use a Secondary Standard Without Qualifying It Against a Primary Standard?
Yes. Most GMP laboratories do not perform experimental qualification of every secondary standard. Instead, they qualify the supplier (e.g., ISO 17034/ISO 17025 accredited reference standard producer) and review the Certificate of Analysis to confirm traceability to a primary standard. This supplier-based qualification approach is acceptable, provided it is documented, justified, and controlled within the quality system. In-house testing is only necessary when the laboratory prepares its own working standard, uses a non-certified material, or no primary standard is available.
Does Using a Secondary Standard in Method Validation Affect Regulatory Acceptance or Measurement Uncertainty?
Using a secondary standard is acceptable in method validation, as long as its traceability and suitability are clearly documented. However, because a secondary standard introduces one more step in the traceability chain, it may contribute a slightly higher uncertainty compared to a primary standard. This does not make the method unacceptable, but laboratories should be aware of the uncertainty contribution and ensure it remains within validated limits. Regulators focus less on which type of standard is used and more on whether it is traceable, scientifically justified, and properly controlled.
What Documentation Should Be Available During an Inspection to Prove Traceability of Reference Standards?
Inspectors expect to see that the laboratory knows where the standard came from, how it was accepted, and that it remains suitable for use. This typically includes the Certificate of Analysis, supplier qualification records, internal approval and receipt logs, storage and usage records, and—if applicable—requalification or retest documentation. Experimental qualification data are only required if the laboratory prepares its own working standard or uses a non-certified supplier. The key expectation is not re-testing, but traceability and controlled use within the quality system.
Related Article: Good Documentation Practices In Pharma Industry
Do Primary Reference Standards Have Expiry Dates?
Official primary standards (e.g., Ph. Eur., USP) typically do not carry user-visible expiry dates. Instead, their suitability is maintained through ongoing stability monitoring by the issuing authority. Users must always check the current status on the supplier’s website before use.
Final Thoughts
The distinction between primary and secondary reference standards shapes every stage of analytical decision-making in a GMP laboratory. Primary standards provide the fixed point of metrological certainty on which pharmacopoeial methods, regulatory expectations, and scientific comparability are built. Secondary standards, in turn, make day-to-day testing feasible, but only when their assigned value can be transparently traced back to an authoritative source and supported by documented qualification.
Both types of standards exist within the same control framework: scientifically justified characterization, validated comparison, documented traceability, defined storage and requalification practices, and alignment with the method for which they are used. When that structure is weakened, uncertainty increases, and data becomes difficult to defend.
Ultimately, the reliability of GMP data does not depend on the label “primary” or “secondary,” but on the integrity of the system that governs how each standard is sourced, qualified, monitored, and used. Laboratories that treat reference standards as controlled, lifecycle-managed materials, not consumables, create an analytical environment in which results are consistent, reproducible, and ready to withstand scientific and regulatory scrutiny. In this context, the reference standard represents the primary source of metrological certainty in the analytical system.