Laboratories and clinical research programs routinely run three different categories of system side by side: a LIMS, a LIS, and a CTMS. The acronyms get used interchangeably in vendor decks and request-for-proposal documents, and the boundaries get even blurrier when one product markets itself as “all of the above.” That ambiguity is usually harmless during day-to-day operations. It becomes expensive at audit time, when an inspector asks a simple-sounding question: “Which system is the system of record for this piece of data, and what is the audit trail for the change you just showed me?”

This post lays out which system normally owns which data, where overlaps are legitimate and where they are warning signs, and the questions a sponsor, central lab, or hospital lab should answer before an FDA, CAP, or ICH inspection arrives.

The three systems at a glance

The labels describe roles, not products. A vendor can ship a single platform that plays two of these roles, but the roles themselves are stable.

  • LIMS — Laboratory Information Management System. Built for research, contract, central, environmental, manufacturing QC, and public health labs. The unit of work is a sample.
  • LIS — Laboratory Information System. Built for clinical diagnostic labs that report patient results. The unit of work is a patient order and a resulted test.
  • CTMS — Clinical Trial Management System. Built for sponsors, contract research organizations, and central labs running interventional studies. The unit of work is a study, a site, a subject, a visit, and a milestone.

The same word, “test,” means three different things across these systems. In a LIMS, a test is a method run on a sample. In a LIS, a test is an orderable analyte tied to a patient. In a CTMS, a test is an assessment scheduled at a visit. The data each system authoritatively owns follows from that difference.
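That three-way split can be made concrete as data. The sketch below models the three meanings as separate types; every class and field name is hypothetical, chosen only to illustrate the distinction, not taken from any real product's schema.

```python
from dataclasses import dataclass

# Illustrative only: hypothetical types showing how "test" differs by system.

@dataclass
class LimsTest:
    """A method run on a sample (the LIMS view)."""
    sample_id: str
    method_id: str      # e.g. an assay method
    sop_version: str    # the SOP version applied to this sample

@dataclass
class LisTest:
    """An orderable analyte tied to a patient (the LIS view)."""
    patient_id: str
    order_id: str
    analyte_code: str   # e.g. a LOINC code on the order

@dataclass
class CtmsTest:
    """An assessment scheduled at a visit (the CTMS view)."""
    study_id: str
    subject_id: str
    visit_name: str
    assessment_name: str
```

The point of the sketch is that the three records share almost no fields: a LIMS test knows nothing about a subject or a visit, and a CTMS assessment knows nothing about a method or an SOP version.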

What a LIMS owns

A LIMS is the system of record for the sample and what was done to it. Concretely, that usually means:

  • Sample registration, identifiers, accessioning, parent/aliquot lineage, and chain of custody.
  • Storage location, freezer/box/position, and movement history.
  • Sample status (received, in test, on hold, disposed) and the events that changed it.
  • Method, test, and panel definitions; the SOP version applied to a sample.
  • Instrument runs, raw and processed results bound to the sample, reagent and lot traceability.
  • Worklist execution, review and approval signatures, and the audit trail behind both.
  • Stability program data, environmental monitoring data, and QC trends where the lab runs them.

A LIMS does not natively own a patient medical record, a clinical visit schedule, or a study protocol. It may import or reference them, but it is not where they originate or where they are reconciled. When a research LIMS pretends to own subject demographics — typically because no other system in the program is doing it — that is a flag worth raising, because the LIMS audit trail was not designed for the kind of attribution that subject data requires.

What a LIS owns

A LIS is the system of record for a patient’s laboratory order and result. Its scope sits inside a regulated clinical care workflow:

  • Patient demographics and orders, usually received from an EHR via HL7 v2 or FHIR.
  • Test order entry, accessioning of the specimen tied to the patient, and resulting.
  • Reflex testing rules, delta checks, critical-value flags, and notification workflows.
  • Result release, amendment, and corrected-report history.
  • Reporting back to the ordering physician’s EHR, billing system, and the patient where applicable.
  • Quality control runs, Levey-Jennings tracking, proficiency testing samples, and the records CAP and CLIA inspectors expect to see.
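Orders and results move between the EHR and the LIS as structured messages. As a rough illustration of what an HL7 v2 result message (ORU^R01) looks like on the wire, the snippet below splits one into segments and fields; real interfaces use an integration engine and full conformance profiles, and the message content here is fabricated for the example.

```python
# Minimal sketch of reading an HL7 v2 result message (ORU^R01) of the kind
# a LIS exchanges with an EHR. Segments are separated by carriage returns,
# fields by '|'. The message below is fabricated for illustration.

RAW = "\r".join([
    "MSH|^~\\&|LIS|LAB|EHR|HOSP|202401150830||ORU^R01|MSG0001|P|2.5.1",
    "PID|1||123456^^^HOSP^MR||DOE^JANE",
    "OBR|1||LAB789|57021-8^CBC panel^LN",
    "OBX|1|NM|718-7^Hemoglobin^LN||13.2|g/dL|12.0-16.0|N|||F",
])

def parse_segments(message: str) -> dict[str, list[list[str]]]:
    """Group HL7 v2 segments by segment ID, splitting fields on '|'."""
    segments: dict[str, list[list[str]]] = {}
    for line in message.split("\r"):
        fields = line.split("|")
        segments.setdefault(fields[0], []).append(fields)
    return segments

msg = parse_segments(RAW)
# OBX-5 is the observation value and OBX-6 the units; since index 0 holds
# the segment ID, HL7 field n lands at list index n.
obx = msg["OBX"][0]
print(obx[5], obx[6])  # prints "13.2 g/dL"
```

The same segment grammar carries orders in the other direction (ORM/OML messages), which is why the LIS, not the research LIMS, is the natural home for patient-tied order traffic.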

The LIS does not own the protocol-level schedule that says “draw a CBC at visit 4” — that lives in a CTMS or in the EDC system. A LIS also does not, by itself, satisfy 21 CFR Part 11 audit trail requirements for research data; its audit trail is built for the diagnostic care record, which is governed by CLIA, CAP accreditation, and HIPAA, not by Part 11.

This distinction is the source of half the confusion in the central lab world. Central labs run patient-tied diagnostic workflows under CLIA while at the same time acting as research labs under 21 CFR Part 11 for sponsor studies. Many run both a LIS and a research LIMS for exactly that reason. Some hybrid platforms try to do both — and they can — but the audit boundary has to be explicit.

For a longer treatment of where the LIMS/LIS line falls and how to choose, see our LIMS Selection Services page.

What a CTMS owns

A CTMS is the system of record for how a clinical trial is run, not for the data the trial collects. That difference is what catches sponsors out. A CTMS authoritatively owns:

  • The study definition, protocol version, schedule of activities, and visit windows.
  • The site list, site activation status, principal investigators, IRB approvals, and regulatory document expirations.
  • Subject enrollment counts and screening, randomization, and dropout status — usually populated by feeds from EDC and IRT/RTSM, not entered by hand.
  • Monitoring visit plans, monitoring reports, action items, and signatures.
  • Milestone, contract, and payment tracking for sites and vendors.
  • Risk indicators (KRIs/QTLs) and the actions taken on them.

A CTMS does not own the case report form (eCRF) data — that belongs to the EDC. It does not own randomization assignments — those belong to the IRT/RTSM. It does not own the actual lab results from the central lab — those belong to the LIMS or LIS and are exchanged into the EDC or the sponsor’s data warehouse as a separate feed. When a CTMS is asked to hold those datasets because “we just need them in one place,” the program has either built a data warehouse and called it a CTMS, or it is about to fail an audit on data lineage.

For sponsors and CROs trying to draw the right line between CTMS, EDC, IRT, and supporting platforms, our Clinical Trials CTMS and Central Lab CTMS pages walk through how the responsibilities tend to split in practice.

Where the overlaps are legitimate, and where they are not

Some overlap is inevitable. The question is whether each overlapping field has one authoritative source and one or more downstream consumers, or whether two systems both think they are the master.

Legitimate overlaps usually look like this:

  • Sample identifiers appear in the LIMS, the LIS (for patient-tied diagnostic specimens), and the EDC (as a lab result reference). The LIMS or LIS issues them; the others reference them.
  • Subject identifiers appear in the CTMS, the EDC, and the central lab LIMS. The EDC normally issues them; the CTMS and LIMS reference them.
  • Test/assay names appear in the LIMS, the LIS, and the EDC. The catalogue lives in one place — usually the LIMS or LIS — and the others map to it.
  • Site identifiers appear in the CTMS and the central lab LIMS. The CTMS issues them.

Warning-sign overlaps look like this:

  • Subject demographics are entered twice — once in the EDC and once in the LIMS — with no reconciliation feed.
  • Visit dates live in the CTMS and again in the EDC and disagree by hours or days.
  • Sample status is updated in both the LIMS and a sponsor spreadsheet, and the spreadsheet is the one people actually look at.
  • Result corrections happen in the LIS but never propagate to the EDC, so the locked study database disagrees with the patient’s medical record.

A useful exercise for any program: draw a one-page diagram with the four or five real systems your trial uses, list every shared field, and write the name of the system of record next to each. If two names appear, you have an audit finding waiting to happen.
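That one-page exercise can also be captured as data and checked mechanically. In the sketch below (system and field names are illustrative, not a recommendation), each shared field maps to exactly one system of record, and any field with two claimed masters is surfaced as a finding.

```python
# A sketch of the one-page exercise as data: every shared field mapped to
# the system that masters it and the systems that reference it.
# All names are illustrative, not a recommendation.

shared_fields = {
    "subject_id": {"record": "EDC",  "references": ["CTMS", "LIMS"]},
    "sample_id":  {"record": "LIMS", "references": ["EDC"]},
    "site_id":    {"record": "CTMS", "references": ["LIMS"]},
    # A warning-sign entry: two systems both claim to be the master.
    "visit_date": {"record": ["CTMS", "EDC"], "references": []},
}

def audit_findings(fields: dict) -> list[str]:
    """Return the fields where more than one system claims to be master."""
    return [
        name for name, spec in fields.items()
        if not isinstance(spec["record"], str)
    ]
```

Running `audit_findings(shared_fields)` flags `visit_date`, which is exactly the “two names next to one field” situation the paragraph above describes.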

Why all of this matters at audit time

Auditors do not ask, “Do you have a LIMS?” They ask harder questions:

  • Attribution. Who created this record, who changed it, when, and why? The answer is in the audit trail of the system of record. If two systems both think they own the field, the auditor will ask for both audit trails and look for divergence.
  • Data integrity (ALCOA+). Attributable, Legible, Contemporaneous, Original, Accurate — plus Complete, Consistent, Enduring, and Available. Each principle ties back to a system having a clear boundary for what it records and how it records it. Spreadsheets that “supplement” the system of record are a recurring source of 483s for this reason.
  • 21 CFR Part 11. Electronic records and electronic signatures need controlled access, audit trails, and signature meaning. The boundary matters because a Part 11 audit will follow the data — from instrument to LIMS to EDC to submission dataset — and ask for a controlled chain of custody at each hop.
  • EU Annex 11 and ICH E6(R3). Annex 11 mirrors much of Part 11 for European-regulated processes. ICH E6(R3), now in force, explicitly expects sponsors to know which systems are part of their critical-to-quality data flow, to manage them risk-proportionately, and to be able to explain the data lineage end to end. A muddled CTMS/EDC/LIMS boundary is exactly the kind of thing E6(R3) inspectors are now looking for.
  • CLIA and CAP. For LIS-resident diagnostic results, CAP inspections will pull on quality control, proficiency testing, and result-amendment records. Pretending a research LIMS is also a CLIA-compliant LIS — or vice versa — is the fastest path to a deficient inspection.

The systems were not designed to enforce these boundaries for you. They were designed to do their own job well. The boundary is an architectural choice, made (or not made) by the people who own the program. When the boundary is explicit, audits are about confirming the controls. When it is implicit, audits are about reconstructing what actually happened, which is the version no one enjoys.

Drawing the boundaries before the inspection

A short checklist that tends to surface the right conversations:

  1. List every system that holds protocol, subject, sample, or result data. Include spreadsheets and Access databases. Especially those.
  2. For each shared field, name the system of record. One name only.
  3. For each integration between two systems, write down: direction, frequency, transport (HL7, FHIR, REST, SFTP, manual), and which side handles conflicts.
  4. Confirm each system’s audit trail captures who, what, when, and old-vs-new value for every change to a regulated field. If it does not, decide whether you are accepting the gap or remediating it.
  5. Validate the integrations, not just the systems. A CTMS that imports enrollment from an EDC is only as compliant as the integration that moves the data.
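Item 4 above has a concrete minimum shape. The sketch below shows the fields an audit trail entry needs in order to answer who, what, when, why, and old-versus-new; the structure is hypothetical, not any particular system's schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Sketch of the minimum an audit trail entry needs to answer checklist
# item 4: who, what, when, why, and old-vs-new value. Illustrative only.

@dataclass(frozen=True)
class AuditEntry:
    who: str            # authenticated user, never a shared account
    when: datetime      # server-side UTC timestamp
    field: str          # the regulated field that changed
    old_value: str
    new_value: str
    reason: str         # required for changes to regulated fields

def record_change(trail: list[AuditEntry], who: str, field: str,
                  old: str, new: str, reason: str) -> None:
    # Append-only: corrections create new entries, never edits to old ones.
    trail.append(AuditEntry(who, datetime.now(timezone.utc),
                            field, old, new, reason))
```

If a system cannot produce entries with all six of those elements for a regulated field, that is the gap to either accept in writing or remediate before the inspection.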

Most programs that do this exercise find at least one warning-sign overlap they did not know they had. That finding is a gift — it is cheaper to fix the boundary now than to defend it during an inspection.

If you are working through that exercise on a real trial or lab program — sample chain of custody, LIS/LIMS split, CTMS scope, validation effort — that is the kind of work our Computer System Validation, LIMS Selection Services, and LIMS Professional Services teams do every week. Contact us if it would help to compare notes.