Could someone please clarify the practical application of using a delayed injection in a gradient method?
I've swapped out a 4.6 mm for a 2.1 mm i.d. column (same length) and dropped the flow from 1.0 to 0.2 mL/min. How long do I now delay the injection on the 2.1 mm column? Must I already know the system dwell volume?
By Anonymous on Tuesday, December 16, 2003 - 07:46 am:
The rule of thumb is to equilibrate for 5X column volume + 3X system volume.
4.6 x 150 mm @ 1.0 mL/min on a 650 uL system:
12.5 minutes for the column, 2.0 minutes for the system.
2.1 x 150 mm @ 0.20 mL/min, same 650 uL system:
13.0 minutes for the column, 10.0 minutes for the system.
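That rule of thumb is easy to script. A minimal sketch, taking the column volume as the empty tube (pi * r^2 * L, no porosity correction) - the helper name and the rounding are mine:

```python
import math

def equilibration_time_min(column_id_mm, column_len_mm, flow_ml_min,
                           system_vol_ml, col_volumes=5, sys_volumes=3):
    """Rule-of-thumb gradient re-equilibration: 5x column volume plus
    3x system volume, with the column volume taken as the empty tube."""
    r_cm = column_id_mm / 10.0 / 2.0            # radius in cm
    l_cm = column_len_mm / 10.0                 # length in cm
    col_vol_ml = math.pi * r_cm**2 * l_cm       # empty-tube volume (1 cm^3 = 1 mL)
    col_time = col_volumes * col_vol_ml / flow_ml_min
    sys_time = sys_volumes * system_vol_ml / flow_ml_min
    return col_time, sys_time

# 4.6 x 150 mm @ 1.0 mL/min, 650 uL system -> ~12.5 min column, ~2.0 min system
print(equilibration_time_min(4.6, 150, 1.0, 0.65))
# 2.1 x 150 mm @ 0.20 mL/min, same system -> ~13.0 min column, ~9.8 min system
print(equilibration_time_min(2.1, 150, 0.20, 0.65))
```

Because the flow was scaled almost exactly with the cross-section, the column part comes out nearly the same for both; it's the system part that blows up at low flow.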
By Anonymous on Tuesday, December 16, 2003 - 05:11 pm:
Anon 1 back again, and still a bit confused.
Since the flow rate was scaled, it should take the same amount of time to flush 5 column volumes through either column - I can see how you need the extra time for 3x system volume on the low-flow method, but re-equilibration at the end of a gradient wasn't really my problem...
If I have a system with a 650 uL dwell volume and switch from a 4.6 mm to a 2.1 mm column, scaling the flow from 1.0 to 0.2 mL/min, the separation should be equivalent (as the linear velocity through the column is the same) but delayed by the extra time the gradient takes to reach the column (39 s vs 195 s), i.e. everything shifts to higher RT by just over 2½ min.
John Dolan's LCGC article from October 2002 seemed to indicate the delayed-injection technique would let me bring the RTs back in line with the 4.6 mm column and keep an equivalent separation, but whether you delay or not, you're still at the mercy of extracolumn broadening, which is more significant with the 2.1 mm column.
I'm basically coming to the conclusion that if I can tolerate some loss of resolution, then lowering the column i.d. is only going to save me some mobile phase. If there's a critical pair in the separation, then just don't bother?
please add your thoughts to this waffle :-)
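For what it's worth, the delay itself is just the difference in dwell times between the two set-ups. A quick sketch with the numbers from this thread (650 uL dwell, 1.0 vs 0.2 mL/min) - the function name is mine:

```python
def injection_delay_s(dwell_vol_ml, flow_old_ml_min, flow_new_ml_min):
    """Delayed injection: start the gradient, then inject late by the
    extra time the gradient needs to traverse the dwell volume on the
    low-flow method compared with the original method."""
    t_old = dwell_vol_ml / flow_old_ml_min * 60.0  # dwell time (s), original
    t_new = dwell_vol_ml / flow_new_ml_min * 60.0  # dwell time (s), scaled down
    return t_new - t_old

print(injection_delay_s(0.65, 1.0, 0.20))  # 156.0 s, i.e. just over 2.5 min
```

So yes, you need to know (or measure) the dwell volume before you can set the delay.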
By Anonymous on Wednesday, December 17, 2003 - 05:13 am:
Those calculations of equilibration time are based on an empty tube (pi * r^2 * L).
Given a similar particle size, you would expect the separation to be very similar, but delayed, as you say. Using an injection delay helps overcome this problem (it goes by several names depending on the maker of the system). You never increase resolution by scaling down (keeping the same particle size); what you do get is higher sensitivity. The peaks of interest elute in a smaller volume (i.e. at higher concentration), therefore bigger peaks.
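The sensitivity gain follows from simple geometry: for the same injected mass, the peak elutes in a volume that shrinks with the column cross-section, so concentration rises by the inverse area ratio. A rough sketch (assuming the same injected mass and fully scaled flow; real gains are usually somewhat less because of extracolumn broadening):

```python
def sensitivity_gain(id_old_mm, id_new_mm):
    """Approximate concentration (peak-height) gain when the same injected
    mass is moved to a narrower column: inverse ratio of cross-sectional
    areas, i.e. (d_old / d_new)^2."""
    return (id_old_mm / id_new_mm) ** 2

print(round(sensitivity_gain(4.6, 2.1), 1))  # ~4.8x going from 4.6 to 2.1 mm i.d.
```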
By RH on Wednesday, December 17, 2003 - 06:36 am:
Dear Anon,
the problem with gradients and dwell volume is not just the time it takes the gradient to reach the column but also the extracolumn effects. The gradient profile at the column head is never the same as programmed; it looks much more like a "smoothed" gradient curve. This is due to mixing during the passage through the extracolumn volume and is much more significant at low flow rates (such as with microbore columns). So you should try to minimise this extracolumn volume by using smaller mixing chambers, short low-i.d. connections and, for low-pressure systems, even microbore check valves.
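One way to picture that "smoothed" gradient is to treat the mixer/dwell volume as a single well-stirred tank with time constant tau = V_mix / F, so a programmed step arrives at the column head exponentially rounded. This single-tank model (and reusing the 650 uL figure as the mixing volume) is purely an illustrative assumption, not a characterisation of any real system:

```python
import math

def smoothed_step(t_min, v_mix_ml, flow_ml_min):
    """Fraction of strong solvent at the column head after a programmed
    0 -> 100% step at t = 0, modelling the extracolumn/mixing volume as
    one well-stirred tank (exponential washout)."""
    tau = v_mix_ml / flow_ml_min  # mixer time constant, min
    return 1.0 - math.exp(-t_min / tau)

# Same 650 uL mixing volume, looked at 2 min into the step:
for flow in (1.0, 0.2):
    print(flow, round(smoothed_step(2.0, 0.65, flow), 2))
# At 1.0 mL/min the composition has essentially reached the set point (~0.95);
# at 0.2 mL/min it is still less than halfway there (~0.46) - hence the much
# more rounded gradient at low flow.
```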
By Anonymous on Wednesday, December 17, 2003 - 03:34 pm:
Thanks RH. From the comments, can I presume that the slight differences in selectivity noted when a column (and flow rate) is scaled down are due to mixing effects in the pre-column volume, while band broadening becomes more significant because the analytes move more slowly through the post-column tubing?
The sensitivity point is a good tip - I didn't pick that up from the LCGC article.