# Understanding Factbase's Institution-Level Research Metrics

## Overview

Institution-level research metrics require special methodological considerations that differ from both country-level and author-level analysis. This guide focuses specifically on how research metrics are calculated and interpreted for universities, research institutes, laboratories, and other organisational entities.

Understanding these institutional nuances is essential for accurate benchmarking, strategic planning, and capability assessment.


## What Makes Institution Metrics Different?

### Unique Challenges

Institution-level metrics face complexities not present at other levels:

**1. Author Affiliation Ambiguity**

  • Authors may list multiple institutional affiliations
  • Institutional hierarchies (department → school → university)
  • Historical name changes and institutional mergers
  • Visiting scholars and joint appointments

**2. Collaboration Patterns**

  • High intra-institutional collaboration (same university, different departments)
  • Inter-institutional collaboration within same country
  • International institutional partnerships

**3. Organisational Structure**

  • Parent-child relationships (MIT → MIT Computer Science)
  • Federated systems (University of California system)
  • Research institutes vs teaching universities
  • Corporate research labs vs academic institutions

**4. Temporal Instability**

  • Researchers move between institutions
  • Credit attribution depends on affiliation at time of publication
  • Institutional mergers/splits change historical attribution

These factors require careful methodological choices to ensure fair, accurate assessment.


## Fractional Author Methodology for Institutions

### Basic Principle

Fractional credit ensures each paper's total attribution equals 1.0 globally, with credit divided among all contributing institutions proportionally.

### Step-by-Step Calculation

**Step 1: Identify institutional affiliations for each author**

**Example paper**:

Title: "Advances in Quantum Error Correction"
Authors:

  • Author 1: MIT (USA)
  • Author 2: Stanford University (USA)
  • Author 3: MIT (USA)
  • Author 4: University of Oxford (UK)
  • Author 5: MIT (USA)

Total: 5 authors

**Step 2: Calculate fractional credit per institution**

Institution fractional credit = (Number of authors affiliated) ÷ (Total authors)

MIT: 3 ÷ 5 = 0.6
Stanford: 1 ÷ 5 = 0.2
Oxford: 1 ÷ 5 = 0.2
Total: 1.0 ✓


**Interpretation**: 
- MIT receives 0.6 paper credits (60% contribution)
- Stanford receives 0.2 paper credits (20% contribution)
- Oxford receives 0.2 paper credits (20% contribution)
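
As a minimal illustration (not Factbase's production code), the calculation above can be sketched in Python, assuming each author record carries a single resolved institution name:

```python
from collections import Counter

def fractional_credit(author_institutions):
    """Split one paper's total credit of 1.0 across institutions,
    proportional to how many of the paper's authors each one has."""
    counts = Counter(author_institutions)
    total_authors = len(author_institutions)
    return {inst: n / total_authors for inst, n in counts.items()}

# The example paper: authors 1, 3 and 5 at MIT, author 2 at Stanford, author 4 at Oxford
print(fractional_credit(["MIT", "Stanford", "MIT", "Oxford", "MIT"]))
# {'MIT': 0.6, 'Stanford': 0.2, 'Oxford': 0.2}
```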

---

### Multi-Institutional Author Affiliations

**When one author lists multiple institutions**:

**Example**:

Author: Dr Jane Smith
Affiliations:

  • Primary: MIT (USA)
  • Secondary: Max Planck Institute (Germany)

**Standard approach**: Divide author's credit equally among listed affiliations

Dr Smith's contribution per institution:

  • MIT: 0.5 author-credits
  • Max Planck: 0.5 author-credits

Total: 1.0 author-credit ✓

**In full paper calculation**:

Paper with 4 authors:

  • Author 1: Harvard only = 1.0 credit
  • Author 2: MIT only = 1.0 credit
  • Author 3: MIT + Max Planck = 0.5 MIT, 0.5 Max Planck
  • Author 4: Oxford only = 1.0 credit

Total author-credits: 4.0

Fractional credit by institution:

  • Harvard: 1.0 ÷ 4 = 0.25
  • MIT: (1.0 + 0.5) ÷ 4 = 0.375
  • Max Planck: 0.5 ÷ 4 = 0.125
  • Oxford: 1.0 ÷ 4 = 0.25

Total: 1.0 ✓
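
A sketch of the same logic extended to multi-affiliation authors, assuming each author record lists all of their declared affiliations (illustrative helper, not Factbase's internal code):

```python
from collections import defaultdict

def fractional_credit_multi(authors):
    """Each author carries 1/N of the paper's credit, split equally
    across that author's listed affiliations."""
    per_author = 1.0 / len(authors)
    credits = defaultdict(float)
    for affiliations in authors:
        share = per_author / len(affiliations)
        for institution in affiliations:
            credits[institution] += share
    return dict(credits)

# The four-author example above
paper = [["Harvard"], ["MIT"], ["MIT", "Max Planck"], ["Oxford"]]
print(fractional_credit_multi(paper))
# {'Harvard': 0.25, 'MIT': 0.375, 'Max Planck': 0.125, 'Oxford': 0.25}
```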

---

### Institutional Hierarchies

**Challenge**: How to handle department/school/university relationships?

**Example affiliations**:

Author affiliation string: "Computer Science Department, School of Engineering, Massachusetts Institute of Technology, Cambridge, MA, USA"


**Factbase approach**: Attribute to **most specific verifiable entity** in Research Organization Registry (ROR)

**ROR hierarchy**:
  • Massachusetts Institute of Technology (ROR: 042nb2s44)
    └── No sub-units in ROR

**Result**: Credit to "MIT" as single entity (no separate credit for CS Department)

**Why**: 
- ROR provides standardised institutional identifiers
- Prevents double-counting within single institution
- Enables consistent cross-institutional comparison
- Sub-departmental data often unreliable/incomplete

**Exception**: Some large institutions have ROR entries for major research units:
  • University of Cambridge (ROR: 013meh722)
    └── MRC Laboratory of Molecular Biology (ROR: 03xez0e59)

**In this case**: Papers can be attributed separately to either entity based on author declaration.

---

### Institutional Name Standardisation

**Challenge**: Same institution appears with many name variations

**Example variations for one institution**:
  • MIT
  • M.I.T.
  • Massachusetts Institute of Technology
  • Mass Inst Technol
  • Massachusetts Inst Tech
  • Massachusetts Institute of Tech

**Factbase solution**: Map all variations to single ROR identifier

All variations → ROR: 042nb2s44 → "Massachusetts Institute of Technology"


**Benefits**:
- Eliminates duplicate entries
- Ensures complete paper count
- Enables accurate historical analysis
- Facilitates name change handling
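
A simplified illustration of the mapping step, with a toy alias table standing in for the much larger ROR-backed registry (the lookup logic here is illustrative only):

```python
# Illustrative alias table; the real registry is far larger and uses fuzzy matching
ALIAS_TO_ROR = {
    "mit": "042nb2s44",
    "m.i.t.": "042nb2s44",
    "massachusetts institute of technology": "042nb2s44",
    "mass inst technol": "042nb2s44",
    "massachusetts inst tech": "042nb2s44",
    "massachusetts institute of tech": "042nb2s44",
}
ROR_TO_CANONICAL = {"042nb2s44": "Massachusetts Institute of Technology"}

def canonical_institution(raw_name):
    """Map a raw affiliation string to its canonical ROR name, if known."""
    ror_id = ALIAS_TO_ROR.get(raw_name.strip().lower())
    if ror_id is None:
        return None  # would fall through to fuzzy matching or manual review
    return ROR_TO_CANONICAL[ror_id]

print(canonical_institution("M.I.T."))  # Massachusetts Institute of Technology
```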

---

### Historical Institutional Changes

**Challenge**: Institutions merge, split, or change names over time

**Example 1 – Merger**:

Pre-2007: Imperial College London (separate)
Pre-2007: Imperial College School of Medicine (separate)
→ 2007: Merged into Imperial College London


**Factbase approach**:
- Papers 2007+: Attributed to merged "Imperial College London"
- Papers pre-2007: Attributed to original entities (if ROR records exist)
- Historical analysis accounts for structural change

**Example 2 – Name Change**:
  • "Univerza v Ljubljani" (Slovenian name)
  • "University of Ljubljana" (English name) → Same ROR, consolidated under English canonical name

**Result**: Complete historical record under single identifier.

---

## Fractional Weighting for Topic Attribution

### How Topics Are Assigned to Institutions

**Each paper** is classified into one or more strategic topics (e.g., "Quantum Computing", "Artificial Intelligence", "CRISPR Gene Editing").

**Institution receives fractional topic credit** proportional to paper contribution:

**Example**:

Paper: "Quantum Machine Learning Algorithms" Topics:

  • Quantum Computing (primary)
  • Artificial Intelligence (secondary)
  • Machine Learning (tertiary)

Institutions:

  • MIT: 0.6 paper credit
  • Oxford: 0.4 paper credit

Topic attribution:

  • MIT in Quantum Computing: 0.6 papers
  • MIT in Artificial Intelligence: 0.6 papers
  • MIT in Machine Learning: 0.6 papers

  • Oxford in Quantum Computing: 0.4 papers
  • Oxford in Artificial Intelligence: 0.4 papers
  • Oxford in Machine Learning: 0.4 papers


**Result**: Institution's topic profile reflects its actual research contribution weighted by authorship.

### Aggregating Topic Contributions

**To find MIT's total papers in "Quantum Computing" (5Y)**:

MIT_QC_papers = Σ(Fractional_credits) for all quantum computing papers with MIT authors

Example across 3 papers:

  • Paper A: 0.6 credit (MIT 3 of 5 authors)
  • Paper B: 1.0 credit (MIT 2 of 2 authors)
  • Paper C: 0.25 credit (MIT 1 of 4 authors)

Total: 0.6 + 1.0 + 0.25 = 1.85 fractional papers in Quantum Computing


**Interpretation**: MIT contributed the equivalent of **1.85 papers** to quantum computing research (from these 3 papers).
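
A sketch of this aggregation, assuming each paper record carries its topic tags and precomputed institutional credits (the non-MIT shares below are filler values for illustration):

```python
def topic_fractional_papers(papers, institution, topic):
    """Sum an institution's fractional credits over papers tagged with a topic."""
    return sum(
        paper["credits"].get(institution, 0.0)
        for paper in papers
        if topic in paper["topics"]
    )

papers = [
    {"topics": {"Quantum Computing"}, "credits": {"MIT": 0.6, "Oxford": 0.4}},    # Paper A
    {"topics": {"Quantum Computing"}, "credits": {"MIT": 1.0}},                   # Paper B
    {"topics": {"Quantum Computing"}, "credits": {"MIT": 0.25, "Harvard": 0.75}}, # Paper C
]
print(topic_fractional_papers(papers, "MIT", "Quantum Computing"))  # 1.85
```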

---

## Fractional Weighting for TMCM Calculation

### Individual Paper TMCM

Each paper has a single TMCM value (regardless of how many institutions contributed):

Paper_TMCM = (Citations to paper) ÷ (Median citations for topic-year)


**Example**:

Paper in Quantum Computing (2023):

  • Citations: 20
  • Topic median: 5
  • Paper TMCM: 20 ÷ 5 = 4.0×

This TMCM applies to **all institutions** that contributed to the paper.
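
In code, the per-paper value is a single ratio (illustrative sketch):

```python
def paper_tmcm(citations, topic_year_median):
    """Topic Median Citation Multiple for one paper."""
    return citations / topic_year_median

print(paper_tmcm(20, 5))  # 4.0 — the same value applies to every contributing institution
```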

---

### Institution-Level TMCM Aggregation

**Formula**:

Institution_TMCM = Σ(Paper_TMCM × Institutional_fraction) ÷ Σ(Institutional_fraction)


**This is a weighted average** where each paper's TMCM is weighted by the institution's fractional contribution.

**Example – MIT in Quantum Computing (2023)**:

| Paper | Citations | TMCM | MIT authors | Total authors | MIT fraction | Weighted contribution |
|-------|-----------|------|-------------|---------------|--------------|---------------------|
| A | 20 | 4.0× | 3 | 5 | 0.6 | 2.4 |
| B | 15 | 3.0× | 2 | 2 | 1.0 | 3.0 |
| C | 10 | 2.0× | 1 | 4 | 0.25 | 0.5 |
| D | 5 | 1.0× | 2 | 3 | 0.67 | 0.67 |

MIT TMCM = (2.4 + 3.0 + 0.5 + 0.67) ÷ (0.6 + 1.0 + 0.25 + 0.67) = 6.57 ÷ 2.52 = 2.61×


**Interpretation**: On average, MIT's quantum computing papers from 2023 received **2.61× the median citations** for the topic, accounting for MIT's varying levels of contribution across different papers.
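
A sketch of the weighted average, reproducing the table above (illustrative helper, not Factbase's internal implementation):

```python
def institution_tmcm(paper_records):
    """Fraction-weighted average of paper TMCMs for one institution.

    paper_records: list of (paper_tmcm, institution_fraction) pairs.
    """
    weighted_sum = sum(tmcm * fraction for tmcm, fraction in paper_records)
    total_fraction = sum(fraction for _, fraction in paper_records)
    return weighted_sum / total_fraction

# MIT in Quantum Computing (2023), papers A–D from the table above
mit_papers = [(4.0, 0.6), (3.0, 1.0), (2.0, 0.25), (1.0, 0.67)]
print(round(institution_tmcm(mit_papers), 2))  # 2.61
```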

---

### Why Fractional Weighting Matters for TMCM

**Without fractional weighting** (incorrect):

Simple average: (4.0 + 3.0 + 2.0 + 1.0) ÷ 4 = 2.5×

Problem: Treats a minor contribution (Paper C, 0.25 fraction) the same as a major contribution (Paper B, 1.0 fraction)


**With fractional weighting** (correct):

Weighted average: 2.61×

Benefit: Papers where MIT had a larger role (more authors) have more influence on the institutional TMCM, which more accurately reflects the quality of MIT's output


---

## Periodic TMCM for Institutions

### Annual Institutional TMCM

**For each year**, calculate institution's TMCM using fractional weighting:

Institution_Year_TMCM = Σ(Paper_TMCM × Institution_fraction) ÷ Σ(Institution_fraction)


**Example – MIT Quantum Computing Annual TMCMs**:

| Year | Fractional papers | Weighted TMCM sum | Annual TMCM |
|------|-------------------|-------------------|-------------|
| 2020 | 52.3 | 168.2 | 3.2× |
| 2021 | 58.7 | 176.1 | 3.0× |
| 2022 | 64.2 | 218.3 | 3.4× |
| 2023 | 71.8 | 258.5 | 3.6× |
| 2024 | 78.5 | 219.8 | 2.8× |

---

### Multi-Year Period TMCM

**Average annual TMCMs** to get period TMCM:

Period_TMCM = (Σ Annual_TMCM) ÷ (Number of years)

MIT 5Y TMCM = (3.2 + 3.0 + 3.4 + 3.6 + 2.8) ÷ 5 = 3.2×


**Interpretation**: Over 2020-2024, MIT's quantum computing research averaged **3.2× the median citations** for the topic.

See: [Understanding Periodic Research Impact Analysis](#) for detailed methodology on multi-year windows.
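
A minimal sketch of the period calculation, using the annual values from the table above:

```python
def period_tmcm(annual_tmcms):
    """Unweighted mean of annual TMCM values over a multi-year window."""
    return sum(annual_tmcms) / len(annual_tmcms)

# MIT quantum computing, annual TMCMs for 2020-2024
print(round(period_tmcm([3.2, 3.0, 3.4, 3.6, 2.8]), 2))  # 3.2
```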

---

## Special Issues with Institution-Level Metrics

### Issue 1: Intra-Institutional Collaboration

**Challenge**: Papers with multiple authors from same institution

**Example**:

Paper with 5 authors:

  • 3 from MIT Computer Science
  • 2 from MIT Physics

Total: 5 authors, all MIT

Standard fractional: MIT gets 5 ÷ 5 = 1.0 credit ✓


**Not a problem**: Fractional methodology handles this correctly. Institution gets full credit because all authors are affiliated, regardless of internal departmental distribution.

**Why this matters**:
- Encourages cross-departmental collaboration
- Doesn't inflate counts (still 1.0 paper)
- Accurately reflects institutional output

---

### Issue 2: Inter-Institutional Collaboration (Same Country)

**Challenge**: Papers with authors from multiple institutions in same country

**Example**:

Paper with 6 authors:

  • 3 from MIT (USA)
  • 2 from Stanford (USA)
  • 1 from Harvard (USA)

Fractional credits:

  • MIT: 3 ÷ 6 = 0.5
  • Stanford: 2 ÷ 6 = 0.33
  • Harvard: 1 ÷ 6 = 0.17

Total: 1.0 ✓

Country-level (USA): 6 ÷ 6 = 1.0 ✓


**Result**: 
- Institution-level totals (0.5 + 0.33 + 0.17 = 1.0) equal country-level total
- Fair attribution among collaborating institutions
- No inflation at any level

**Interpretation**: Domestic inter-institutional collaboration properly credited.
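
An illustrative sketch of this consistency property, assuming a simple institution-to-country lookup (not part of Factbase's documented API):

```python
from collections import defaultdict

def institution_and_country_fractions(author_institutions, institution_country):
    """Compute per-institution fractions and roll them up to country level;
    the country totals should match a country-level fractional count."""
    per_author = 1.0 / len(author_institutions)
    by_institution = defaultdict(float)
    for institution in author_institutions:
        by_institution[institution] += per_author
    by_country = defaultdict(float)
    for institution, fraction in by_institution.items():
        by_country[institution_country[institution]] += fraction
    return dict(by_institution), dict(by_country)

# The 6-author example: 3 MIT, 2 Stanford, 1 Harvard, all USA
institutions = ["MIT", "MIT", "MIT", "Stanford", "Stanford", "Harvard"]
countries = {"MIT": "USA", "Stanford": "USA", "Harvard": "USA"}
print(institution_and_country_fractions(institutions, countries))
# institution fractions ≈ 0.5 / 0.33 / 0.17; USA total ≈ 1.0 (up to float rounding)
```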

---

### Issue 3: Corporate vs Academic Affiliations

**Challenge**: How to handle corporate research affiliations?

**Example**:

Paper with 4 authors:

  • 2 from MIT (academic)
  • 1 from Google Research (corporate)
  • 1 from DeepMind (corporate, UK)

Fractional credits:

  • MIT: 2 ÷ 4 = 0.5
  • Google Research: 1 ÷ 4 = 0.25
  • DeepMind: 1 ÷ 4 = 0.25

**Factbase approach**: 
- Corporate research labs treated as distinct institutions (have ROR IDs)
- Enables tracking corporate research contribution
- Facilitates academic-industry collaboration analysis

**Example corporate research entities with ROR**:
- Google Research (ROR: 03v8tnc06)
- Microsoft Research (ROR: 04nx51t37)
- IBM Research (ROR: 03wnrjx87)
- Bell Labs (ROR: 00dwxcc56)

---

### Issue 4: Visiting Scholars and Joint Appointments

**Challenge**: Authors with temporary or dual institutional affiliations

**Example**:

Author: Dr Alice Chen
Publication affiliation (2023):

  • Primary: Stanford University (permanent position)
  • Visiting: Max Planck Institute for Quantum Optics (1-year fellowship)

**Factbase approach**: Credit both institutions as declared on publication

Dr Chen contributes:

  • 0.5 credit to Stanford
  • 0.5 credit to Max Planck

In 4-author paper:

  • Stanford: 0.5 ÷ 4 = 0.125
  • Max Planck: 0.5 ÷ 4 = 0.125

**Rationale**:
- Reflects actual research collaboration at time of publication
- Visiting scholars bring expertise and contribute to host institution
- Historical record shows where research actually occurred

**Implication**: Institutions benefit from hosting visiting scholars in metrics.

---

### Issue 5: Student Affiliations

**Challenge**: Graduate students publishing with multiple advisors from different institutions

**Example**:

PhD student enrolled at MIT, working with:

  • Primary advisor at MIT
  • Co-advisor at Harvard Medical School
  • External collaborator at ETH Zurich

Student lists all three affiliations on paper


**Factbase approach**: Credit all listed affiliations per author declaration

Student contributes:

  • 0.33 to MIT
  • 0.33 to Harvard
  • 0.33 to ETH Zurich

**Why**: Publication affiliation represents student's research context at time of work, even if primary enrolment is at one institution.

---

### Issue 6: Institutional Mergers and Splits

**Challenge**: Handling structural changes over time

**Example – Merger**:

2010: Institution A publishes 100 papers/year
2010: Institution B publishes 50 papers/year
2011: A and B merge into Institution C

How to count Institution C's historical output?


**Factbase approach**:

**Post-merger** (2011+):
- Papers attributed to "Institution C" (merged entity)

**Pre-merger** (before 2011):
- Papers remain attributed to "Institution A" and "Institution B" (original entities)
- Historical totals: A + B ≠ C (different entities)

**For trend analysis**:
- Flag structural change year
- Report "combined historical" as A + B + C if user requests
- But primary metrics keep entities separate to reflect reality

**Example historical view**:

Institution C 10Y analysis (2015-2024):

  • 2015-2024: Papers as "Institution C" post-merger
  • Note: Institution C formed 2011 from merger of A and B

Historical context available but not automatically aggregated


---

### Issue 7: Federated Institutional Systems

**Challenge**: Multi-campus systems (e.g., University of California, CNRS France)

**Example**:

University of California system:

  • UC Berkeley (ROR: 01an7q238)
  • UCLA (ROR: 046rm7j60)
  • UC San Diego (ROR: 0168r3w48)
  • UC System (parent ROR: 00pjdza24)

**Factbase approach**: Attribute to **most specific campus** declared by author

Author declaration: "University of California, Berkeley" → Credit to UC Berkeley (not UC System level)

Author declaration: "University of California" → Credit to UC System (if ROR exists, otherwise map to nearest specific campus)


**System-level aggregation**: Available on request but not default

UC System total = Sum(UC Berkeley + UCLA + UC San Diego + ... + UC System direct)


**Why separate**:
- Campuses compete independently in rankings
- Different research profiles (Berkeley ≠ UCLA)
- Enables fair campus-level benchmarking

---

### Issue 8: Small Institution Volatility

**Challenge**: Small institutions have volatile metrics due to low paper counts

**Example**:

Small Institution X: 15 fractional papers in quantum computing (5Y)

  • One breakthrough paper (TMCM = 25×) dominates
  • Other 14 papers average TMCM = 1.2×

Institution TMCM = (25 + 1.2×14) ÷ 15 = 2.8×

Next year:

  • Breakthrough paper ages out of 5Y window
  • Institution TMCM drops to 1.2×

**Factbase approach**:

**Minimum threshold flags**:
- Institutions with <20 fractional papers in period: "Low sample size"
- Institutions with 20-49 fractional papers: "Moderate confidence"
- Institutions with ≥50 fractional papers: "High confidence"

**Reporting**:

Small Institution X – Quantum Computing – 5Y
Papers: 15 (⚠️ Low sample size)
TMCM: 2.8× (⚠️ Low confidence)

Interpretation: Metrics volatile due to small output; interpret with caution.
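
A sketch of the flagging rule; treating exactly 50 fractional papers as high confidence is an assumption here:

```python
def confidence_flag(fractional_papers):
    """Sample-size flag attached to institutional metrics for a period."""
    if fractional_papers < 20:
        return "Low sample size"
    if fractional_papers < 50:
        return "Moderate confidence"
    return "High confidence"  # assumed: exactly 50 falls in the high-confidence band

print(confidence_flag(15))  # "Low sample size" — interpret TMCM with caution
```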


---

### Issue 9: Missing or Incomplete Affiliation Data

**Challenge**: Some papers lack clear institutional affiliations

**Example scenarios**:

Scenario 1: Author affiliation missing entirely
Scenario 2: Affiliation string too vague ("University, USA")
Scenario 3: Affiliation not in ROR registry (small/new institution)


**Factbase approach**:

**Scenario 1** – Missing affiliation:
- Exclude author from institutional attribution
- Recalculate fractions among authors with known affiliations
- Flag paper as "incomplete metadata"

**Scenario 2** – Vague affiliation:
- Attempt fuzzy matching against ROR
- Manual review for high-profile papers
- Otherwise, exclude from institutional counts

**Scenario 3** – Institution not in ROR:
- Add to Factbase custom institution registry
- Create canonical name and unique ID
- Map future papers to same entity
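
A sketch of the Scenario 1 handling, where an unresolvable affiliation is represented as `None` and fractions are recalculated over the remaining authors (illustrative only):

```python
def fractional_credit_with_missing(author_institutions):
    """Drop authors with unknown affiliations (None) and recalculate
    fractions over the remaining authors; flag the paper if any were dropped."""
    known = [inst for inst in author_institutions if inst is not None]
    if not known:
        return {}, True  # no attributable institutions; incomplete metadata
    credits = {}
    for institution in known:
        credits[institution] = credits.get(institution, 0.0) + 1.0 / len(known)
    incomplete = len(known) < len(author_institutions)
    return credits, incomplete

# Four-author paper where one author's affiliation cannot be resolved
print(fractional_credit_with_missing(["MIT", None, "Oxford", "MIT"]))
# ({'MIT': 0.666..., 'Oxford': 0.333...}, True) — flagged "incomplete metadata"
```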

**Impact on coverage**:
- ~85-90% of papers have complete, mappable affiliations
- Coverage higher for recent papers (2015+) than historical
- Coverage higher for well-indexed journals than conferences

---

## Institution-Specific Metrics Summary

### Core Institutional Metrics

**Volume**:
- Fractional paper count (institutional output)
- Paper share within topic (% of global topic output)
- Annual growth rate (expanding or contracting?)

**Quality**:
- Mean TMCM (average research quality)
- Periodic TMCM (3Y, 5Y, 10Y for trajectory)
- Median TMCM (typical paper quality)

**Excellence**:
- Top 1% share (breakthrough research concentration)
- Top 10% share (high-quality research breadth)
- Excellence counts (absolute breakthrough output)

**Collaboration**:
- International collaboration rate (% papers with foreign co-authors)
- Domestic collaboration rate (% papers with other national institutions)
- Solo institutional papers (% with no external collaborators)

**Diversity**:
- Topic diversity (how many topics institution contributes to)
- Citation diversity (international vs domestic recognition)
- Partner institution count (collaboration breadth)

---

## Comparing Institutions Fairly

### Appropriate Comparisons

**Consider when comparing institutions**:
- Institutional size (staff, students, budget)
- Research intensity and mission
- Geographic and funding context
- Historical trajectory and age

**Factbase enables filtering and peer group selection** to ensure appropriate comparisons.

---

## Best Practices for Institution-Level Analysis

### Do:

✅ **Use fractional counting** for fair multi-institutional comparison

✅ **Check sample size** before interpreting metrics (flag <20 papers)

✅ **Consider institutional context** (size, mission, age, funding model)

✅ **Examine trends over time** (3Y vs 5Y vs 10Y) not just snapshots

✅ **Combine volume and quality** (papers + TMCM, not just one)

✅ **Account for collaboration patterns** (solo vs collaborative research)

✅ **Flag structural changes** (mergers, splits, name changes)

✅ **Use appropriate peer groups** for benchmarking

### Don't:

❌ **Don't use whole counting** for institutional comparisons (inflates collaborative institutions)

❌ **Don't ignore sample size warnings** (small N = volatile metrics)

❌ **Don't rely on single metric** (TMCM alone insufficient)

❌ **Don't assume linear relationships** (2× papers ≠ 2× capability)

❌ **Don't ignore missing data** (check coverage before concluding)

❌ **Don't over-interpret small changes** (±5% could be noise)

❌ **Don't use for individual evaluation** (institutional metrics ≠ researcher quality)

---

## Example: Complete Institutional Profile

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
MASSACHUSETTS INSTITUTE OF TECHNOLOGY (MIT)
QUANTUM COMPUTING – 5Y (2020-2024, 2025 YTD)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

INSTITUTIONAL IDENTIFIERS
ROR ID: 042nb2s44
Country: United States
Established: 1861

VOLUME METRICS
Fractional papers: 325.7
Whole count papers: 487 (collaborative research)
Topic paper share: 3.9% (rank #4 globally among institutions)
Annual growth: +6.8% per year

FRACTIONAL WEIGHTING DETAILS
Average authors/paper: 4.3
Average MIT fraction: 0.67 (67% authorship when involved)
Solo institutional: 28% (no external collaborators)
Domestic collab: 45% (other US institutions)
International collab: 27% (foreign institutions)

QUALITY METRICS (TMCM)
5Y TMCM: 3.8×
Median TMCM: 3.2×
3Y TMCM: 4.1× (improving trajectory)
10Y TMCM: 3.5× (sustained excellence)

TMCM WEIGHTING ANALYSIS
Papers with high MIT fraction (>0.8): TMCM = 4.2×
Papers with low MIT fraction (<0.3): TMCM = 3.1×
→ Higher quality when MIT has major role

EXCELLENCE DISTRIBUTION
Top 1%: 12.5% of papers (12.5× over-representation)
Top 5%: 28.3% of papers (5.7× over-representation)
Top 10%: 41.2% of papers (4.1× over-representation)
Top 1% count: 40.7 papers (rank #3 globally)

CITATION ANALYSIS
Total citations: 28,450
Citations/paper: 87.3 (well above topic average)
International rate: 88% (strong global recognition)
Domestic rate: 12% (healthy, low self-citation)
HHI: 0.08 (very diverse citation network)

COLLABORATION NETWORK
Partner institutions: 147 unique institutions worldwide
Top collaborators:
  Stanford (42 joint papers)
  Harvard (38 joint papers)
  Caltech (31 joint papers)

TRAJECTORY
Quality trend: Improving (3Y > 5Y > 10Y)
Volume trend: Growing (+6.8% annually)
Share trend: Stable (±0.3% over 5Y)
Excellence trend: Increasing (top 1% share rising)

CONFIDENCE LEVEL
Sample size: ✓ High (>300 papers)
Affiliation quality: ✓ High (96% papers with verified ROR)
Temporal coverage: ✓ Complete (all 5 years well-represented)

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
ASSESSMENT: Elite global leader in quantum computing research with exceptional quality (3.8× median), high excellence concentration (12.5% in top 1%), improving trajectory, and strong international recognition. High-quality output particularly strong when MIT has major authorship role (>80% contribution).
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━


---

## Summary

**Institution-level research metrics** require careful methodological treatment of:

**1. Fractional Author Attribution**
- Credit divided proportionally among all contributing institutions
- Multi-affiliation authors split equally among listed institutions
- Ensures global totals equal actual paper counts

**2. Fractional Topic Weighting**
- Institution's topic contribution reflects weighted authorship
- Enables fair topic-level comparisons
- Aggregates correctly across all topics

**3. Fractional TMCM Weighting**
- Papers weighted by institutional contribution fraction
- Higher institutional role = more influence on institutional TMCM
- Accurately reflects quality of institutional output

**4. Special Institutional Considerations**
- Hierarchies (departments vs universities)
- Name standardisation (ROR identifiers)
- Historical changes (mergers, splits)
- Collaboration patterns (intra/inter-institutional)
- Sample size volatility (small institutions)

**Best practices**:
- Always use fractional counting for comparisons
- Check sample sizes before interpreting
- Examine trends, not just snapshots
- Combine volume, quality, and excellence metrics
- Flag structural changes and mergers
- Account for collaboration patterns

**These institutional research metrics are one component of Factbase's capability assessment.** Combine with actor intelligence (key researchers) and asset intelligence (infrastructure, patents) for comprehensive institutional capability analysis.

---

*For related methodology guides, see:*
- *[Understanding TMCM: Topic Median Citation Multiple](#)*
- *[Understanding Fractional Credit in Research Output Attribution](#)*
- *[Understanding Periodic Research Impact Analysis](#)*
- *[Research Metrics Guide](#) (comprehensive metric reference)*

*For technical specifications, API documentation, or questions about institutional analysis, contact the Factbase team.*
