How a drift baseline affects integration

Chromatography Forum: LC Archives: How a drift baseline affects integration
By Anonymous on Tuesday, December 11, 2001 - 12:50 pm:

Dear all:

Every chromatographer out there knows that when you use a gradient method, depending on the wavelength you use, you are most likely going to get a drifting baseline. My question is: how does the integration of a peak sitting on top of a rising baseline compare with the same peak sitting on a flat baseline (as you would obtain from an isocratic run)? In other words, when the baseline rises for some reason, does the peak keep the same size or not? Either way, I would appreciate some reasoning behind the answer. Thanks a lot.


By Tom Mizukami on Tuesday, December 11, 2001 - 05:35 pm:

Yes, your peak should have the same area. However, the slope of the baseline may make it more difficult for the software to accurately determine the beginning and end of the peak, especially if your peak is tailing.

Absorbances are additive and Beer's Law would still apply to your compound even in the presence of background absorbance.
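Tom's claim is easy to check numerically. Below is a minimal sketch (my own illustration, not from the thread), assuming an ideal Gaussian peak and a perfectly linear drift: if the integrator draws a straight baseline between the true peak start and end, the baseline-corrected area comes out the same on a flat and on a drifting baseline.

```python
# Sketch: integrate the same Gaussian peak on a flat baseline and on a
# linearly drifting baseline. With the baseline drawn correctly under the
# peak (a straight line from peak start to peak end), the areas match.
import math

def gaussian(t, height=1.0, center=5.0, sigma=0.2):
    return height * math.exp(-((t - center) ** 2) / (2 * sigma ** 2))

ts = [i * 0.01 for i in range(1001)]          # 0..10 min, 0.01-min steps
flat = [gaussian(t) for t in ts]              # peak on a flat baseline
drift = [gaussian(t) + 0.05 * t for t in ts]  # same peak plus gradient drift

def corrected_area(ts, ys, t_start, t_end, dt=0.01):
    """Trapezoidal area above a straight baseline from peak start to end."""
    pts = [(t, y) for t, y in zip(ts, ys) if t_start <= t <= t_end]
    (t0, y0), (t1, y1) = pts[0], pts[-1]
    slope = (y1 - y0) / (t1 - t0)
    corr = [y - (y0 + slope * (t - t0)) for t, y in pts]
    return sum((a + b) / 2 * dt for a, b in zip(corr, corr[1:]))

area_flat = corrected_area(ts, flat, 4.0, 6.0)
area_drift = corrected_area(ts, drift, 4.0, 6.0)
print(area_flat, area_drift)  # the two areas agree closely
```

The area only stays the same when the baseline endpoints are placed correctly; a sloped baseline under a tailing peak makes that placement harder for the software, which is the practical problem.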


By Anonymous on Wednesday, December 12, 2001 - 12:24 pm:

Tom:
I would agree with you about additive nature of absorbances. But my experience told me the other way around. We juat got some data recently: when using a bigger bandwiths in the method, a higher baseline was obtained. All those peaks that sitting on top of the raised baseline kept approximately same size (supposed to be bigger when the bandwidth increased); however, those peaks that sitting on the flat part of the baseline increased the size as expected. You may argue that the UV profiles of all peaks differ from each other, but actually they don't. My previous experience also told me that the peak area decreased when the baseline was somehow raised. Do you have any explaination?
Thanks a lot for the valuable disccusions


By Tom Mizukami on Wednesday, December 12, 2001 - 05:35 pm:

Peak areas do not necessarily increase if you increase the bandwidth.

What is going on in a diode array detector is that the light from the D2 lamp is going through the flow cell and is being reflected off a diffraction grating and onto a photodiode chip (array). When you select a larger bandwidth the output from more diodes is being averaged.

If your detection wavelength is centered on the lambda max of your analyte, increasing the bandwidth will decrease the signal. With multiple species there can be subtle shifts in the spectra that, depending on the wavelength setting and bandwidth, can cause one to increase and another to decrease.

However, by averaging the output from multiple diodes, the noise should decrease by much more than the signal, improving the signal-to-noise ratio if an appropriate bandwidth is selected. Good luck.
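The averaging Tom describes can be sketched with a toy model (my own illustration with made-up numbers, not any instrument's actual optics): a Gaussian-shaped absorbance band, diodes at 1 nm spacing, and independent diode noise that averages down as 1/sqrt(N).

```python
# Toy model: averaging diodes across a bandwidth centered on lambda-max
# lowers the signal slightly (the spectrum curves down on both sides),
# but cuts random noise by roughly sqrt(N), so S/N can still improve.
import math

def spectrum(lam, lmax=254.0, width=15.0):
    """Gaussian-shaped absorbance band peaking at lmax (made-up numbers)."""
    return math.exp(-((lam - lmax) ** 2) / (2 * width ** 2))

def mean_signal(bandwidth_nm, step=1.0):
    """Mean absorbance over the diodes spanning lmax +/- bandwidth/2."""
    half = int(bandwidth_nm // 2)
    lams = [254.0 + k * step for k in range(-half, half + 1)]
    return sum(spectrum(l) for l in lams) / len(lams)

sig_narrow = mean_signal(1)   # one diode, centered on lambda-max
sig_wide = mean_signal(13)    # 13 diodes averaged

# independent diode noise averages down roughly as 1/sqrt(N)
noise_narrow = 0.01
noise_wide = 0.01 / math.sqrt(13)

print(sig_wide < sig_narrow)                              # signal drops a little
print(sig_wide / noise_wide > sig_narrow / noise_narrow)  # S/N still improves
```

The averaged signal drops slightly because the spectrum falls off on both sides of lambda-max, but in this model the noise drops faster, matching Tom's point about choosing an appropriate bandwidth.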


By Anonymous on Wednesday, December 12, 2001 - 07:44 pm:

Very recently, I completed a series of experiments to better understand the effect of changing bandwidth on signal and noise. As mentioned in the previous message, peak signals can increase or decrease depending on the shape of your spectra, the size of the BW, and the detection wavelength. As for noise, it becomes a bit trickier to predict what will happen. Using an ACN/water mobile phase, noise was very low and did not always decrease across a BW range. If I had to predict a trend, my data indicate that S/N decreased slightly over the BW range I tested. I think if I had used a less transparent mobile phase (MeOH/water), perhaps the trend would have been different.


By Anonymous on Monday, January 7, 2002 - 12:15 pm:

Tom:
So the output from a PDA (or DAD) is averaged across the wavelength range we set; what about a VWD (variable wavelength detector)? Agilent's VWD has a default bandwidth of 6.5 nm. Does that mean that if I run the same method on a PDA using a 6.5 nm bandwidth, I would get the same result as on a VWD?

Thanks a lot for your help.


By Tom on Tuesday, January 22, 2002 - 02:42 pm:

Sorry, I didn't see this thread for a long time.

It is hard to compare the VWD and the DAD; they have completely different optical benches. I'm not sure what you mean by the "same result". If you mean peak area or height, these will be affected by many factors, only one of which is bandwidth.

The one area where I think bandwidth is critical is if you are attempting to quantitate impurities based on response factors. Good luck.


By Anonymous on Wednesday, January 30, 2002 - 08:42 am:

Dear All,

I am currently running an IC gradient method and having problems with band broadening and tailing. I have looked at the usual things but am open to suggestions. Can anyone help me?


By Chris Pohl on Monday, February 4, 2002 - 12:50 pm:

Anonymous,

Can you supply a bit more information? What is the column, column dimensions, eluent concentration range, gradient program details, eluent source (proportioning valve or eluent generator), sample loop size, analyte concentrations?


By Jim Gorum on Monday, February 4, 2002 - 06:44 pm:

Noname,
Integration algorithms were developed for mini/microcomputers about 30 years ago, when memory and CPU time were at a premium. A few small improvements have occurred since, but very few; most of the work has gone into display, data management, and reporting.
Baselines are selected by derivative methods, some using the first derivative and some the second. With high noise, first-derivative methods rule; with low noise, second-derivative methods rule. Noise causes the start and end of integration to occur at different times for a series of runs on a single sample.
A sloping baseline interferes with the selection of the baseline, especially with first-derivative methods. The integrator must usually be optimized by the user to get good integrations. The computers of today could do it well, but they do not. When you go from an isocratic method's baseline to a sloping one, you must reset the slope at which the integrator declares peak start or end. This problem occurs mostly with first-derivative methods.
The cure would be second-derivative methods, except for noise. Your tailing peaks make that even worse. In chromatography all peaks tail, just some worse than others; if you noticed the tailing, it is severe. A second-derivative method has a difficult time telling when the rate of change of the peak's rate of change goes to zero, the condition of peak end for ideal peaks. Again, you have to set integration parameters, sometimes even telling it to stop integrating at a specific volume.
The point is that you cannot compare the performance of the chromatography while leaving the integrator at the same settings. You will get extra or too little area with the change in baseline. Overall, one method might have better accuracy or precision with the best integrator settings for its baseline.
Jim
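Jim's point about first-derivative peak detection on a sloping baseline can be illustrated with a short sketch (hypothetical data and threshold values of my own, not any vendor's actual algorithm): the integrator declares "peak start" when the slope exceeds a threshold, and a threshold tuned for a flat isocratic baseline fires immediately on a gradient drift.

```python
# Sketch: first-derivative peak-start detection. A slope threshold tuned
# for a flat baseline triggers at once on a drifting baseline, so the
# threshold must be raised above the drift slope for a gradient run.
import math

def slope(ys, i, dt=0.01):
    """Central-difference first derivative at point i."""
    return (ys[i + 1] - ys[i - 1]) / (2 * dt)

def find_peak_start(ys, threshold, dt=0.01):
    """Time at which the slope first exceeds the threshold (or None)."""
    for i in range(1, len(ys) - 1):
        if slope(ys, i, dt) > threshold:
            return i * dt
    return None

ts = [i * 0.01 for i in range(1001)]
peak = [math.exp(-((t - 5.0) ** 2) / (2 * 0.2 ** 2)) for t in ts]
drift = [y + 0.05 * t for y, t in zip(peak, ts)]   # 0.05 AU/min drift

# threshold tuned for the flat run: on the drifting run it fires at t ~ 0
print(find_peak_start(peak, 0.02), find_peak_start(drift, 0.02))
# raising the threshold above the drift slope recovers a sensible start
print(find_peak_start(drift, 0.2))
```

The threshold that works for the isocratic run is useless on the gradient, which is exactly why the slope sensitivity must be re-optimized whenever the baseline changes.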


By Beppe on Monday, February 4, 2002 - 11:51 pm:

If you have integration difficulties with ramping baselines, just try blank subtraction.
If your HPLC system is of good quality and in good shape, you can get impressive results.
A key point is to keep the same (and minimum) time between injections (a delay may cause build-up of mobile-phase component A and then varying baseline humps).
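A minimal sketch of the blank-subtraction idea (hypothetical data, no particular software assumed): record a blank gradient run, then subtract it point-for-point from the sample run so the gradient drift cancels and the peaks sit on a near-flat baseline.

```python
# Sketch: point-for-point blank subtraction. The drift is identical in
# the blank and the sample run here, so it cancels exactly.
import math

ts = [i * 0.01 for i in range(1001)]
drift = [0.05 * t + 0.02 * math.sin(t) for t in ts]   # reproducible background
peak = [0.5 * math.exp(-((t - 5.0) ** 2) / 0.08) for t in ts]

blank = drift[:]                                   # mobile phase only
sample = [d + p for d, p in zip(drift, peak)]      # same run plus the analyte

corrected = [s - b for s, b in zip(sample, blank)] # drift cancels

print(max(corrected))  # ~0.5, the peak height, on a flat baseline
```

In practice the subtraction only works this cleanly if the injection-to-injection timing is reproducible, which is exactly Beppe's point about keeping the same delay between injections.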


By Anonymous on Saturday, July 20, 2002 - 03:06 am:

My method is validated against an impurity specification of 0.5% (linearity was shown from the LOQ to 150%, i.e., after the LOQ the next data point is the 50% level relative to the specification, and r2 is 0.99; accuracy was done from 50 to 150%; accuracy and precision were good at the LOQ of 0.03%). But based on the batch trend, the limit has been tightened to 0.1%. My doubt is whether the validation still holds for this limit or whether revalidation is needed (at this limit, the existing linearity data do not cover +/- 20% of the new level). If revalidation is not required, how do I justify that? If it is required, what parameters need to be considered?


By Anonymous on Monday, July 22, 2002 - 06:53 am:

Keep in mind that an r2 of 0.99 doesn't mean a thing over the concentration range you have (0.5 to 150%). Try this: input a "perfect" linearity for the 50-150% data points ((50,50), (75,75), (100,100), etc.), then add (0.5,0), as if you got no response at all for a 0.5% solution. Surprise: you get r2 > 0.99999! Try (0.5,10), as though you got 20 times the response you expect at 0.5%; now r2 = 0.998. My point is that with conventional r2-type specs, it's inappropriate to try to establish linearity over such a huge range. Better to evaluate response factors at each concentration to get meaningful data.
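The arithmetic is easy to reproduce. A small sketch using the post's own example points and a plain least-squares r-squared:

```python
# Reproducing the poster's point: over a 0.5-150% range, r^2 barely
# notices a grossly wrong response at the low end of the curve.
def r_squared(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy ** 2 / (sxx * syy)

xs = [0.5, 50, 75, 100, 125, 150]
perfect_high = [50, 75, 100, 125, 150]

r2_no_response = r_squared(xs, [0] + perfect_high)    # no peak at 0.5%
r2_huge_response = r_squared(xs, [10] + perfect_high) # ~20x expected at 0.5%

print(r2_no_response)    # > 0.9999 despite a completely missing peak
print(r2_huge_response)  # ~ 0.998
```

A point that is catastrophically wrong at 0.5% moves r2 by almost nothing, because the high-concentration points dominate the sums; per-level response factors expose the problem immediately.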

