<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en-US"><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://kennethjbrandt.com/feed.xml" rel="self" type="application/atom+xml" /><link href="https://kennethjbrandt.com/" rel="alternate" type="text/html" hreflang="en-US" /><updated>2026-01-24T04:10:44+00:00</updated><id>https://kennethjbrandt.com/feed.xml</id><title type="html">Kenneth Brandt</title><subtitle>GCP+GxP quality systems and quality operations,inspection readiness,vendor oversight,and practical execution.</subtitle><author><name>Kenneth Brandt</name></author><entry><title type="html">Alert+Action Limits: A Starter Checklist for GxP Monitoring</title><link href="https://kennethjbrandt.com/gxp/ai/quality/2026/01/06/alert+action-limits-starter-checklist.html" rel="alternate" type="text/html" title="Alert+Action Limits: A Starter Checklist for GxP Monitoring" /><published>2026-01-06T00:00:00+00:00</published><updated>2026-01-06T00:00:00+00:00</updated><id>https://kennethjbrandt.com/gxp/ai/quality/2026/01/06/alert+action-limits-starter-checklist</id><content type="html" xml:base="https://kennethjbrandt.com/gxp/ai/quality/2026/01/06/alert+action-limits-starter-checklist.html"><![CDATA[<p>Alert limits and action limits are one of the fastest ways to make a monitoring program defendable. They also prevent the two failure modes auditors see all the time: dashboards with no decisions, and decisions with no thresholds.</p>

<h2 id="what-good-looks-like">What good looks like</h2>
<ul>
  <li>Alert limit = investigate the signal and document the assessment</li>
  <li>Action limit = execute a defined response (change control, CAPA, rollback, process controls), as sketched below</li>
</ul>
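
<p>A minimal sketch of how this split can be wired into a monitoring check. The metric name, limit values, and response wording below are illustrative assumptions, not taken from any specific system:</p>

<pre><code class="language-python"># Illustrative sketch only: the metric, limits, and response wording are hypothetical.
from dataclasses import dataclass

@dataclass
class Limits:
    alert: float   # investigate the signal and document the assessment
    action: float  # execute the defined response

def evaluate(metric_name: str, value: float, limits: Limits) -&gt; str:
    """Route a monitored value to the documented response tier."""
    if value &gt;= limits.action:
        return f"{metric_name}: ACTION - execute defined response (change control / CAPA / rollback)"
    if value &gt;= limits.alert:
        return f"{metric_name}: ALERT - investigate and document the assessment"
    return f"{metric_name}: within limits - record the routine review"

# Example: a hypothetical false-negative rate for a release-support model
print(evaluate("false_negative_rate", 0.07, Limits(alert=0.05, action=0.10)))
</code></pre>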

<h2 id="starter-checklist">Starter checklist</h2>
<ol>
  <li>Define the baseline (validated reference)</li>
  <li>Pick acceptance metrics (what “acceptable” means)</li>
  <li>Separate signals (data drift, concept drift, performance drift)</li>
  <li>Set limits by risk (patient impact drives tightness and SLA)</li>
  <li>Assign owner + backup (named accountability)</li>
  <li>Define response SLAs (review, escalation, closure)</li>
  <li>Document decisions (review minutes matter)</li>
  <li>Link actions to change control (threshold updates, retraining, rollback)</li>
</ol>

<h2 id="visual-checklist">Visual checklist</h2>
<p><img src="/assets/img/checklists/alert-action-limits-a-starter-checklist-02.png" alt="Alert+Action limits starter checklist" /></p>

<p>If you want an editable template version, reach out via the contact page.</p>]]></content><author><name>Kenneth Brandt</name></author><category term="gxp" /><category term="ai" /><category term="quality" /><category term="Monitoring" /><category term="Alert Limits" /><category term="Action Limits" /><category term="CPV" /><category term="Validation" /><category term="Drift" /><summary type="html"><![CDATA[A practical checklist for setting alert+action limits that are risk-based, operational, and audit-friendly.]]></summary></entry><entry><title type="html">Audit Trails+Log Retention: A Starter Checklist for Inspection Readiness</title><link href="https://kennethjbrandt.com/gxp/quality/data-integrity/2026/01/06/audit-trails+log-retention-starter-checklist.html" rel="alternate" type="text/html" title="Audit Trails+Log Retention: A Starter Checklist for Inspection Readiness" /><published>2026-01-06T00:00:00+00:00</published><updated>2026-01-06T00:00:00+00:00</updated><id>https://kennethjbrandt.com/gxp/quality/data-integrity/2026/01/06/audit-trails+log-retention-starter-checklist</id><content type="html" xml:base="https://kennethjbrandt.com/gxp/quality/data-integrity/2026/01/06/audit-trails+log-retention-starter-checklist.html"><![CDATA[<p>Most inspection pain around audit trails is not about the audit trail feature. It’s about governance: what is captured, who reviews it, where evidence lives, and how long you can produce it on demand.</p>

<h2 id="starter-checklist">Starter checklist</h2>
<ol>
  <li>Define scope (systems and records that support GxP decisions)</li>
  <li>Confirm event coverage (create, modify, delete, override, access); see the sketch below</li>
  <li>Set review requirements (who reviews, how often, what triggers review)</li>
  <li>Define retention (records + metadata + raw logs + reports)</li>
  <li>Validate retrieval (can you produce it quickly, consistently, completely)</li>
  <li>Ensure traceability (signal → review → decision → action)</li>
  <li>Tie to SOPs (roles, escalation, investigation paths)</li>
</ol>
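
<p>As an illustration of item 2, here is a minimal sketch of spot-checking event coverage against a log extract. The field names and the sample extract are assumptions for the example, not the format of any particular system:</p>

<pre><code class="language-python"># Illustrative sketch: field names and the sample extract are hypothetical.
REQUIRED_EVENTS = {"create", "modify", "delete", "override", "access"}

def coverage_gaps(log_entries):
    """Return required event types that never appear in the extract."""
    seen = {entry["event_type"].lower() for entry in log_entries}
    return sorted(REQUIRED_EVENTS - seen)

sample_extract = [
    {"event_type": "create", "user": "jdoe", "timestamp": "2026-01-05T10:02:00Z"},
    {"event_type": "modify", "user": "jdoe", "timestamp": "2026-01-05T10:15:00Z"},
    {"event_type": "access", "user": "qa_reviewer", "timestamp": "2026-01-06T08:40:00Z"},
]

missing = coverage_gaps(sample_extract)
print("Missing event types:", missing or "none")  # ['delete', 'override']
</code></pre>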

<h2 id="visual-checklist">Visual checklist</h2>
<p><img src="/assets/img/checklists/audit-trails-log-retention-a-starter-checklist-02.png" alt="Audit trails and log retention starter checklist" /></p>

<p>A simple rule: if you can’t retrieve the audit trail record package quickly, you don’t have audit trails as a control. You have a hope.</p>]]></content><author><name>Kenneth Brandt</name></author><category term="gxp" /><category term="quality" /><category term="data-integrity" /><category term="Data Integrity" /><category term="Audit Trails" /><category term="Log Retention" /><category term="ALCOA+" /><category term="Inspection Readiness" /><summary type="html"><![CDATA[A practical checklist for audit trails and log retention that supports defensible decisions and faster inspections.]]></summary></entry><entry><title type="html">Control Strategy Without the Jargon: Control Layers That Survive an Audit</title><link href="https://kennethjbrandt.com/gxp/quality/qbd/2026/01/02/control-strategy-without-the-jargon.html" rel="alternate" type="text/html" title="Control Strategy Without the Jargon: Control Layers That Survive an Audit" /><published>2026-01-02T00:00:00+00:00</published><updated>2026-01-02T00:00:00+00:00</updated><id>https://kennethjbrandt.com/gxp/quality/qbd/2026/01/02/control-strategy-without-the-jargon</id><content type="html" xml:base="https://kennethjbrandt.com/gxp/quality/qbd/2026/01/02/control-strategy-without-the-jargon.html"><![CDATA[<p>Control strategies get overcomplicated fast. Teams drown in jargon, giant trace matrices, and “one more spreadsheet.”
But auditors (and operators) care about something much simpler:</p>

<p><strong>Can you show quickly and consistently what you control, who owns it, and where the evidence lives?</strong></p>

<p>This post lays out a practical approach that scales from development into commercial manufacturing.</p>

<h2 id="the-core-idea-in-one-line">The core idea (in one line)</h2>
<p><strong>CQA → CPP/CMA → control layers → owner/approvals → evidence</strong></p>

<p>If you can’t trace your strategy end-to-end, it won’t scale—and it won’t hold up under scrutiny.</p>

<hr />

<h2 id="step-1-start-with-cqas-what-must-be-true-for-the-patient">Step 1: Start with CQAs (what must be true for the patient)</h2>
<p>CQAs are the outcomes you’re protecting: identity, strength, purity, sterility assurance, content uniformity, etc.</p>

<p><strong>Practical rule:</strong> write each CQA in plain language:</p>
<ul>
  <li>What does “good” look like?</li>
  <li>What would “bad” mean for patient impact?</li>
</ul>

<p>If you can’t explain a CQA without acronyms, the map will turn into paperwork instead of control.</p>

<hr />

<h2 id="step-2-link-cppscmas-what-drives-those-cqas">Step 2: Link CPPs/CMAs (what drives those CQAs)</h2>
<p>CPPs and CMAs are the levers that meaningfully move CQAs.</p>

<p><strong>Keep it honest:</strong></p>
<ul>
  <li>If it doesn’t move a CQA, it’s not a CPP/CMA—it’s noise.</li>
  <li>If it moves a CQA, it needs a control layer you can defend.</li>
</ul>

<hr />

<h2 id="step-3-define-control-layers-controls-exist-in-layers-not-one-test">Step 3: Define control layers (controls exist in layers, not one test)</h2>
<p>Most failures come from “single-point control thinking”:</p>
<blockquote>
  <p>“We test it at the end, so we’re fine.”</p>
</blockquote>

<p>That doesn’t scale. Robust control strategies use <strong>layers</strong>, for example:</p>
<ul>
  <li><strong>Material controls</strong> (supplier qualification, incoming acceptance)</li>
  <li><strong>Process controls</strong> (setpoints, alarms, in-process checks)</li>
  <li><strong>Procedural controls</strong> (SOPs, training, line clearance)</li>
  <li><strong>Analytical controls</strong> (IPC testing, release testing)</li>
  <li><strong>System controls</strong> (access, audit trails, data integrity)</li>
</ul>

<p><strong>Goal:</strong> multiple independent ways to prevent/detect a bad outcome.</p>
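
<p>As a purely hypothetical illustration, the layering for a single CQA can be written down as a simple structure; the CQA, drivers, and controls below are invented examples, not a recommended set:</p>

<pre><code class="language-python"># Hypothetical layered controls for one CQA; every name here is illustrative only.
control_layers = {
    "cqa": "content uniformity",
    "drivers": ["blend time (CPP)", "API particle size (CMA)"],
    "layers": {
        "material":   ["supplier qualification", "incoming particle-size acceptance"],
        "process":    ["blend time setpoint and alarm", "in-process blend check"],
        "procedural": ["blending SOP", "operator training", "line clearance"],
        "analytical": ["IPC stratified sampling", "release content uniformity test"],
        "system":     ["restricted recipe access", "audit trail on parameter changes"],
    },
}

# The layering test: more than one independent layer should be able to catch a failure.
populated = [name for name, controls in control_layers["layers"].items() if controls]
assert len(populated) &gt;= 2, "single-point control thinking"
</code></pre>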

<hr />

<h2 id="step-4-assign-owners--approvals-raci-that-isnt-theater">Step 4: Assign owners + approvals (RACI that isn’t theater)</h2>
<p>A control strategy fails when ownership is ambiguous.</p>

<p>At minimum, capture:</p>
<ul>
  <li><strong>Owner:</strong> accountable for control performance &amp; upkeep</li>
  <li><strong>Approvers:</strong> who approves changes (QA, MSAT, Validation, etc.)</li>
  <li><strong>Escalation path:</strong> who gets called when limits are breached</li>
</ul>

<p>If an auditor asks “who owns this control?” your answer should be immediate.</p>

<hr />

<h2 id="step-5-link-evidence-where-it-lives-how-to-retrieve-it">Step 5: Link evidence (where it lives, how to retrieve it)</h2>
<p>Controls don’t exist unless you can produce evidence.</p>

<p>For each control layer, link:</p>
<ul>
  <li>The <strong>record type</strong> (batch record section, log, system report)</li>
  <li>The <strong>system of record</strong> (eQMS, MES, LIMS, historian, DCS, etc.)</li>
  <li>The <strong>retention expectation</strong> (as applicable)</li>
  <li>The <strong>retrieval path</strong> (who pulls it, how long it takes)</li>
</ul>

<p><strong>Fast test:</strong> can your team retrieve a representative record in under 10 minutes?</p>
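
<p>A minimal sketch of what an evidence link per control can look like, and of turning the 10-minute test into a check. The record types, systems, owners, and times are invented for illustration:</p>

<pre><code class="language-python"># Hypothetical evidence links; record types, systems, and owners are illustrative.
evidence_links = [
    {
        "control": "in-process blend check",
        "record_type": "batch record section",
        "system_of_record": "MES",
        "retention": "per the record retention schedule",
        "retrieval": {"who": "production supervisor", "target_minutes": 10},
    },
    {
        "control": "audit trail on parameter changes",
        "record_type": "system audit trail report",
        "system_of_record": "DCS / historian",
        "retention": "per the record retention schedule",
        "retrieval": {"who": "automation engineer", "target_minutes": 30},
    },
]

# Flag any control whose retrieval target misses the fast test.
slow = [e["control"] for e in evidence_links if e["retrieval"]["target_minutes"] &gt; 10]
print("Controls failing the 10-minute test:", slow or "none")
</code></pre>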

<hr />

<h2 id="step-6-review-cadence--triggers-change-control-is-your-friend">Step 6: Review cadence + triggers (change control is your friend)</h2>
<p>A control strategy should be reviewed on purpose, not “when someone remembers.”</p>

<p>Use:</p>
<ul>
  <li><strong>Periodic review</strong> (quarterly or annually, depending on risk)</li>
  <li><strong>Triggered review</strong> at change control events:
    <ul>
      <li>material/supplier change</li>
      <li>parameter range change</li>
      <li>equipment/software change</li>
      <li>deviation trend/complaint trend</li>
      <li>method change</li>
    </ul>
  </li>
</ul>

<p>This is how you keep the strategy aligned to the validated state.</p>

<hr />

<h2 id="common-failure-modes-what-breaks-first">Common failure modes (what breaks first)</h2>
<p>In practice, traceability usually breaks at:</p>
<ol>
  <li><strong>Handoffs</strong> (development → tech transfer → commercial)</li>
  <li><strong>Ownership gaps</strong> (everyone is involved, nobody is accountable)</li>
  <li><strong>Evidence sprawl</strong> (records scattered across systems)</li>
  <li><strong>“One test” thinking</strong> (weak layering)</li>
  <li><strong>Unmanaged change</strong> (strategy doesn’t update when reality changes)</li>
</ol>

<hr />

<h2 id="a-simple-audit-question-to-pre-answer">A simple audit question to pre-answer</h2>
<p><strong>“Show me how this CQA is controlled, and prove the controls are working.”</strong></p>

<p>If you can answer that with:</p>
<ul>
  <li>the map</li>
  <li>named owners</li>
  <li>linked evidence</li>
  <li>review history</li>
</ul>

<p>…you’re in a strong position.</p>

<hr />

<h2 id="question-for-you">Question for you</h2>
<p>Where does your traceability usually break first: <strong>process controls, test strategy, or documentation handoffs</strong>?</p>]]></content><author><name>Kenneth Brandt</name></author><category term="gxp" /><category term="quality" /><category term="qbd" /><category term="QbD" /><category term="Control Strategy" /><category term="CQA" /><category term="CPP" /><category term="CMA" /><category term="GMP" /><category term="Data Integrity" /><summary type="html"><![CDATA[A practical way to map CQAs to CPPs/CMAs, define layered controls, assign ownership, and link evidence so the strategy scales—and stands up in audits.]]></summary></entry><entry><title type="html">Model Drift in GxP: What It Is (and Isn’t) + A Practical Monitoring Playbook</title><link href="https://kennethjbrandt.com/gxp/ai/quality/2026/01/02/model-drift-in-gxp-practical-monitoring.html" rel="alternate" type="text/html" title="Model Drift in GxP: What It Is (and Isn’t) + A Practical Monitoring Playbook" /><published>2026-01-02T00:00:00+00:00</published><updated>2026-01-02T00:00:00+00:00</updated><id>https://kennethjbrandt.com/gxp/ai/quality/2026/01/02/model-drift-in-gxp-practical-monitoring</id><content type="html" xml:base="https://kennethjbrandt.com/gxp/ai/quality/2026/01/02/model-drift-in-gxp-practical-monitoring.html"><![CDATA[<p>“Model drift” in GxP isn’t a vibe. It’s a <strong>measurable change from a validated baseline</strong> that you detect, assess, and control.</p>

<p>This matters because the moment a model touches a GxP decision, monitoring becomes part of the <strong>validated state</strong>, not a nice-to-have.</p>

<h2 id="what-drift-is-and-isnt">What drift is (and isn’t)</h2>
<p><strong>Drift is:</strong> model behavior changing over time relative to a validated baseline.</p>

<p><strong>Drift is not:</strong></p>
<ul>
  <li>a single bad prediction</li>
  <li>random noise</li>
  <li>“the model auto-updating” unless you actually deploy changes</li>
</ul>

<p>The point is defensibility: can you explain what changed, how you detected it, and what you did about it?</p>

<hr />

<h2 id="the-three-drifts-to-monitor-separately">The three drifts to monitor (separately)</h2>
<p>Most programs fail because they blend signals and can’t interpret the cause.</p>

<p>1) <strong>Data drift</strong><br />
Inputs shift (sensor calibration, upstream process changes, new suppliers, different operators, etc.)</p>

<p>2) <strong>Concept drift</strong><br />
The relationship between inputs and outcomes changes (process physics/biology changes, new failure modes)</p>

<p>3) <strong>Performance drift</strong><br />
Model metrics degrade (AUROC, precision/recall, MAE, calibration, false positives/negatives)</p>

<p><strong>Practical rule:</strong> treat these as three dashboards, not one number.</p>
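
<p>A minimal sketch (NumPy only) of keeping two of the signals separate: data drift as a Population Stability Index on one input feature, and performance drift as the change in MAE versus the validated baseline. The feature, metric, and numbers are placeholders; concept drift is only noted because it needs labeled outcomes:</p>

<pre><code class="language-python"># Illustrative sketch; the feature, metric, and baseline values are placeholders.
import numpy as np

def psi(baseline, current, bins=10):
    """Population Stability Index as a simple data-drift signal."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    b = np.histogram(baseline, bins=edges)[0] / len(baseline)
    c = np.histogram(current, bins=edges)[0] / len(current)
    b, c = np.clip(b, 1e-6, None), np.clip(c, 1e-6, None)
    return float(np.sum((c - b) * np.log(c / b)))

def performance_drift(y_true, y_pred, baseline_mae):
    """Performance drift as the change in MAE versus the validated baseline."""
    return float(np.mean(np.abs(y_true - y_pred)) - baseline_mae)

rng = np.random.default_rng(0)
baseline_feature = rng.normal(0.0, 1.0, 5000)   # inputs seen at validation
current_feature = rng.normal(0.3, 1.1, 5000)    # this month's inputs
print("Data drift (PSI):", round(psi(baseline_feature, current_feature), 3))

y_true = rng.normal(0.0, 1.0, 200)
y_pred = y_true + rng.normal(0.0, 0.3, 200)
print("Performance drift vs baseline MAE:",
      round(performance_drift(y_true, y_pred, baseline_mae=0.20), 3))
# Concept drift is assessed against recent labeled outcomes, not from inputs alone.
</code></pre>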

<hr />

<h2 id="step-1-define-the-validated-baseline">Step 1: Define the validated baseline</h2>
<p>You need a frozen reference point:</p>
<ul>
  <li>training dataset version + lineage</li>
  <li>model version+configuration</li>
  <li>decision thresholds</li>
  <li>intended use statement</li>
  <li>validation summary metrics (what “acceptable” means)</li>
</ul>

<p>If you can’t anchor to baseline, you can’t prove drift.</p>
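
<p>One way to make the baseline concrete is a small, frozen manifest stored alongside the validation package. Every value in this sketch (versions, identifiers, thresholds, metrics, intended use) is a made-up placeholder:</p>

<pre><code class="language-python"># Hypothetical baseline manifest; all versions, identifiers, and metrics are placeholders.
import json

baseline = {
    "model_version": "classifier-2.3.1",
    "model_config_hash": "sha256-placeholder",
    "training_dataset": {"version": "ds-2025-10", "lineage": "extract of 2025-09-30"},
    "decision_threshold": 0.62,
    "intended_use": "supportive screening of batch-record entries for QA review",
    "validation_metrics": {"auroc": 0.91, "recall": 0.88, "false_positive_rate": 0.07},
}

# Freeze this with the validation summary so every drift signal is measured against it.
print(json.dumps(baseline, indent=2))
</code></pre>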

<hr />

<h2 id="step-2-set-acceptance-metrics--limits-alert-vs-action">Step 2: Set acceptance metrics + limits (alert vs action)</h2>
<p>Borrow the mindset from process validation:</p>
<ul>
  <li><strong>Alert limit:</strong> investigate trend/signal</li>
  <li><strong>Action limit:</strong> formal response (change control, model rollback, CAPA, etc.)</li>
</ul>

<p>Limits should map to risk:</p>
<ul>
  <li>higher patient-impact decisions → tighter limits and faster response SLAs</li>
  <li>low-risk supportive tools → wider limits and scheduled review</li>
</ul>

<hr />

<h2 id="step-3-define-triggers-owners-and-response-slas">Step 3: Define triggers, owners, and response SLAs</h2>
<p>A drift program fails when “someone should look at it” is the plan.</p>

<p>Minimum:</p>
<ul>
  <li><strong>Trigger:</strong> metric threshold, scheduled review, or change-control event</li>
  <li><strong>Owner:</strong> accountable for review+decision</li>
  <li><strong>SLA:</strong> how fast you review/escalate when triggered</li>
  <li><strong>Approvals:</strong> who signs off on major actions (QA, Validation, etc.)</li>
</ul>
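
<p>A sketch of this minimum captured as one monitoring-plan entry, plus a trivial SLA check. The roles, timings, and approvers are hypothetical:</p>

<pre><code class="language-python"># Hypothetical monitoring-plan entry; roles, timings, and approvers are illustrative.
drift_response_plan = {
    "signal": "performance drift on a release-support model",
    "trigger": {"type": "metric threshold", "metric": "recall", "action_limit": 0.80},
    "owner": "model owner",
    "backup": "data science lead",
    "sla": {"initial_review_days": 2, "escalation_days": 5},
    "approvals_for_major_actions": ["QA", "Validation"],
}

def review_overdue(days_open: int, sla_days: int) -&gt; bool:
    """Flag an open drift signal whose initial review has blown its SLA."""
    return days_open &gt; sla_days

print(review_overdue(days_open=4, sla_days=drift_response_plan["sla"]["initial_review_days"]))
</code></pre>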

<hr />

<h2 id="step-4-evidence-that-survives-an-audit">Step 4: Evidence that survives an audit</h2>
<p>Auditors don’t want your dashboard screenshot. They want the story and the record.</p>

<p>Keep:</p>
<ul>
  <li>monitoring logs + reports (with timestamps)</li>
  <li>review minutes (signals discussed, decisions made)</li>
  <li>investigations when signals cross thresholds</li>
  <li>change control records for updates/retraining/threshold changes</li>
  <li>traceability from signal → decision → action</li>
</ul>

<p>If it isn’t documented, it didn’t happen.</p>

<hr />

<h2 id="step-5-change-control-events-that-should-trigger-review">Step 5: Change control events that should trigger review</h2>
<p>At minimum, consider drift review when:</p>
<ul>
  <li>upstream process parameters or ranges change</li>
  <li>supplier/material changes occur</li>
  <li>instrument or software updates occur</li>
  <li>labeling/method changes affect outcomes</li>
  <li>complaint/deviation trends shift</li>
</ul>

<p>This is how monitoring stays connected to the validated state.</p>

<hr />

<h2 id="a-simple-audit-question-to-pre-answer">A simple audit question to pre-answer</h2>
<p><strong>“How do you know the model is still performing as validated, and what happens when it isn’t?”</strong></p>

<p>If you can show:</p>
<ul>
  <li>baseline definition</li>
  <li>monitoring plan + thresholds</li>
  <li>owner/SLA</li>
  <li>decision records</li>
  <li>change control linkage</li>
</ul>

<p>…you’re in good shape.</p>

<hr />

<h2 id="question-for-you">Question for you</h2>
<p>What’s your current drift trigger: <strong>metric thresholds, periodic review, or change-control events</strong>?</p>]]></content><author><name>Kenneth Brandt</name></author><category term="gxp" /><category term="ai" /><category term="quality" /><category term="Model Drift" /><category term="GxP" /><category term="AI Validation" /><category term="Monitoring" /><category term="Data Integrity" /><category term="Audit Trails" /><summary type="html"><![CDATA[A practical, audit-friendly approach to defining drift from a validated baseline, setting triggers, assigning owners, and documenting decisions.]]></summary></entry></feed>