How to maintain AI systems after deployment and ensure long‑term performance
Deploying a model is not the end of the journey. Real‑world data changes, user behavior evolves, and model performance can degrade over time. A complete AI lifecycle includes monitoring, evaluation, retraining, and continuous improvement.
Two major reasons models degrade over time:
Data drift: the statistical distribution of incoming inputs shifts away from the data the model was trained on.
Concept drift: the relationship between inputs and correct outputs changes, so patterns the model learned no longer hold.
Retraining keeps the model aligned with current data and user needs.
Retrain on a fixed schedule (weekly, monthly, or quarterly), depending on how quickly the domain changes.
Retrain when performance drops below a threshold or drift is detected.
Automated pipelines retrain the model as new data arrives (common in large‑scale systems).
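Drift-triggered retraining needs a concrete drift signal. Below is a minimal sketch using the Population Stability Index (PSI) on a single feature; the function names and the 0.2 threshold are illustrative choices (0.2 is a common rule of thumb, not a universal standard):

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a training-time sample and a live sample."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def bucket_fracs(values):
        counts = [0] * bins
        for v in values:
            i = sum(v > e for e in edges)  # index of the bucket v falls into
            counts[i] += 1
        # Smooth empty buckets so the log term stays finite
        return [max(c / len(values), 1e-6) for c in counts]

    e, a = bucket_fracs(expected), bucket_fracs(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

def should_retrain(train_sample, live_sample, threshold=0.2):
    """Flag retraining when the live feature distribution has shifted."""
    return psi(train_sample, live_sample) > threshold
```

Identical distributions yield a PSI near zero; a clearly shifted distribution produces a large value and trips the retraining flag.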
# Pseudocode for a retraining workflow
1. Collect new labeled data
2. Validate and clean the data
3. Retrain the model on combined old + new data
4. Evaluate performance on a validation set
5. Run safety and bias checks
6. Deploy the new model version
7. Monitor again
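The numbered steps above can be sketched as a toy Python loop. Everything here is illustrative: the "model" is just a majority-label classifier so the example stays dependency-free, the helper names are made up, and the safety/bias check (step 5) is omitted; in practice you would swap in a real training call and evaluation suite.

```python
from collections import Counter

def validate(rows):
    """Step 2: drop rows with missing features or invalid labels."""
    return [(x, y) for x, y in rows if x is not None and y in (0, 1)]

def train(rows):
    """Step 3 (toy): 'train' by memorizing the majority label."""
    majority = Counter(y for _, y in rows).most_common(1)[0][0]
    return lambda x: majority

def accuracy(model, rows):
    """Step 4: fraction of correct predictions on a validation set."""
    return sum(model(x) == y for x, y in rows) / len(rows)

def retrain_and_maybe_deploy(old_model, old_data, new_data, val_data):
    data = validate(old_data + new_data)   # steps 1-2: combine and clean
    candidate = train(data)                # step 3: retrain on old + new
    # Step 6: deploy only if the candidate matches or beats the old model
    # on the validation set; otherwise keep serving the old version.
    if accuracy(candidate, val_data) >= accuracy(old_model, val_data):
        return candidate
    return old_model
```

The key design choice is the gate before deployment: a retrained model is never promoted automatically unless it performs at least as well as the current one.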
Every deployed model should have a version number. If a new version performs poorly, you must be able to roll back instantly.
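Versioning plus instant rollback can be captured in a small registry abstraction. This is a minimal in-memory sketch (the class and method names are illustrative, not a real registry API); production systems typically persist artifacts and metadata externally.

```python
class ModelRegistry:
    """Track deployed model versions and allow instant rollback."""

    def __init__(self):
        self._versions = {}   # version number -> model artifact
        self._active = None   # currently serving version

    def register(self, version, model):
        self._versions[version] = model

    def deploy(self, version):
        if version not in self._versions:
            raise ValueError(f"unknown version {version}")
        self._active = version

    def rollback(self, to_version):
        # Rollback is just switching the pointer back to an older,
        # still-registered version -- no retraining required.
        self.deploy(to_version)

    @property
    def active(self):
        return self._active
```

Because every version stays registered, reverting a bad release is a pointer swap rather than an emergency retrain.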
Shadow deployment: run the new model alongside the old one without affecting users. Compare the two models' predictions to ensure safety before a full rollout.
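A minimal sketch of shadow mode, assuming both models are plain callables: users always receive the old model's answer, while disagreements with the candidate are logged for offline review.

```python
def shadow_predict(primary, shadow, x, disagreements):
    """Serve the primary model; run the shadow model silently for comparison."""
    served = primary(x)       # the only output users ever see
    candidate = shadow(x)     # computed but never served
    if candidate != served:
        disagreements.append((x, served, candidate))
    return served
```

The disagreement log is the whole point: it shows where the new model would have behaved differently, without any user ever being exposed to it.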
Canary release: deploy the new model to a small percentage of users first. If results look good, expand the rollout gradually.
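Canary routing is often done by hashing a stable user identifier, so the same user always lands in the same group and the canary fraction can be widened by changing one number. A sketch, with illustrative function names:

```python
import hashlib

def in_canary(user_id, percent):
    """Deterministically assign `percent`% of users to the canary group."""
    digest = hashlib.sha256(str(user_id).encode()).hexdigest()
    return int(digest, 16) % 100 < percent

def predict(user_id, old_model, new_model, x, percent=5):
    model = new_model if in_canary(user_id, percent) else old_model
    return model(x)
```

Hash-based assignment beats random sampling here: a user's experience stays consistent across requests, and expanding the rollout (5% to 20% to 100%) only ever moves users from old to new, never back and forth.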
For high‑risk applications (medical, legal, financial), humans review or approve model outputs. This improves safety and provides high‑quality data for retraining.
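A human-in-the-loop gate can be as simple as a confidence threshold: confident predictions go out automatically, while uncertain ones are queued for a reviewer. This sketch assumes the model returns a (label, confidence) pair; the threshold and names are illustrative.

```python
def predict_with_review(model, x, review_queue, threshold=0.9):
    """Return a label only when confident; otherwise defer to a human."""
    label, confidence = model(x)
    if confidence < threshold:
        review_queue.append(x)   # a human will label this case later
        return None              # no automatic answer for risky inputs
    return label
```

The queued cases serve double duty: they protect users from low-confidence outputs now, and once labeled by reviewers they become exactly the high-quality retraining data the section describes.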
Now that you understand the full model lifecycle, you're ready for the final topic in this series: Lesson 40: Ethics, Safety, and Responsible AI.