Just curious... I see a lot of people stating that they use check standards every so often during chromatographic runs. However, what is the acceptable % recovery for such standards, and, if the check standard falls outside this range, do you invalidate the entire run and start all over again?
By Anonymous on Wednesday, May 10, 2000 - 01:00 pm:
98.0-102.0% agreement between standards is acceptable, which I think is quite generous. And yes, it does invalidate your run and you'll have to repeat it; that's the whole point of having a check standard. If it's not within the specified range, then either you have a method problem, which will affect your samples just as much, or you can't reproducibly prepare standard solutions, which will also affect your samples.
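A minimal sketch of that pass/fail decision, assuming a single-point check standard with a known expected (nominal) response; the function name, limits as defaults, and the numbers in the example are illustrative only:

```python
# Flag a check standard that falls outside the 98.0-102.0% recovery window.
# Assumes a single-point check standard with a known expected response.

def check_standard_recovery(measured_response: float,
                            expected_response: float,
                            low: float = 98.0,
                            high: float = 102.0) -> tuple[float, bool]:
    """Return the % recovery and whether it passes the acceptance window."""
    recovery = 100.0 * measured_response / expected_response
    return recovery, low <= recovery <= high

# Example: 1.015e6 area counts against an expected 1.000e6 gives 101.5%
# recovery and passes; 97.5% would fail and invalidate the run.
recovery, passed = check_standard_recovery(1.015e6, 1.000e6)
print(f"Recovery: {recovery:.1f}% -> {'pass' if passed else 'fail - repeat the run'}")
```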
By SteveL on Saturday, May 13, 2000 - 12:10 am:
In general, 98.0-102.0% is an acceptable tolerance. The main purpose of check standards is to demonstrate the stability of system performance over the course of a data set. All of our data sets require that standard solutions be injected both before and after a series of sample injections (i.e., they "bracket" the sample injections). If the check standard is outside the 98-102% range, we normally do not invalidate the run. The assumption is that the instrument response drifted over time. We simply average the responses of the bracketing standards and use that average response factor to quantify the samples, as sketched below.
Of course, this is not an acceptable practice when performing assays of high-purity materials. A potential swing of 2% one way or the other is very significant at that level. In cases where you're checking impurity levels or assaying low-concentration ingredients in a formulation, the practice provides very acceptable quantitation and is much more practical than invalidating the entire run.
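A minimal sketch of the bracketing calculation described above, assuming single-point external-standard calibration; the function names, concentration, and peak areas are illustrative, not from any particular method:

```python
# Bracketed quantitation: average the pre- and post-run standard responses
# and use the resulting response factor to quantify the sample injections.

def bracketed_response_factor(pre_area: float, post_area: float,
                              standard_conc: float) -> float:
    """Average the bracketing standard responses and convert to a
    response factor (area per unit concentration)."""
    mean_area = (pre_area + post_area) / 2.0
    return mean_area / standard_conc

def quantify(sample_area: float, response_factor: float) -> float:
    """Concentration of a sample injection from its peak area."""
    return sample_area / response_factor

# Example: standards at 0.50 mg/mL injected before and after the samples.
rf = bracketed_response_factor(pre_area=1.980e6, post_area=2.020e6,
                               standard_conc=0.50)
for area in (1.55e6, 1.62e6):
    print(f"Sample area {area:.3e} -> {quantify(area, rf):.3f} mg/mL")
```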