We are interested in GCxGC not as a quantitative tool, but ONLY for fingerprinting of complex samples. Many chromatographers claim that GCxGC has exceptionally high separating power. Compared to a single-dimension separation this is certainly true, but I am worried that the power of the technique is over-emphasised. I am sure that traditional heart-cutting MUST generate many more plates and will therefore provide much better resolution FOR A SINGLE CUT. Thus if one's interest lies only in a specific compound, or a specific region (peaks) of a single-dimensional separation, I will always use a heartcut and not GCxGC. GCxGC may provide better detection limits, but this too is perhaps an overstatement, because the capacity of the columns used in heartcut separations is much higher, which allows the detection of trace components. The linearity of the second column in GCxGC seems very limited (column overload occurs very easily), which is why one often sees horrible peak shapes in GCxGC applications. At the end of the day, it seems to me that the two techniques are complementary, but GCxGC will never replace heartcut separations - that is, if maximum resolution is required and speed of analysis is not so critical.
By HW Mueller on Wednesday, April 23, 2003 - 12:41 am:
Not having done GC for about one year I am apparently already getting rusty. What is the diff between GCxGC and "heartcut" methods?
By Anonymous on Wednesday, April 23, 2003 - 01:30 am:
GCxGC means that you have a second column at the end of the first column which runs continuously prior to detection, thus obtaining "slices" of the chromatogram that are separated again, whereas heart-cutting means doing this only once, methinks.
By Anonymous on Wednesday, April 23, 2003 - 04:23 am:
Heartcut GC-MS works fine.
GCxGC/MS would be a challenge.
Adjust your detector sampling rate!
By HW Mueller on Wednesday, April 23, 2003 - 06:36 am:
That qualifies as a diff now? ... I'm beginning to understand all the new names coming up for stuff that has been done for the last 30 years.
By Ian on Wednesday, April 23, 2003 - 12:45 pm:
To Anonymous (April 23): GCxGC/MS is no longer a challenge - see www.leco.com. They have a commercial instrument that can do this. I asked for comments on my statement that heartcut separations MUST provide better resolution than GCxGC FOR A SINGLE CUT. Reason: the typical 2nd column length in GCxGC is about 50 cm, while heartcut separations are done on conventional capillary columns, typically 50 metres. The rule of thumb for resolution states that resolution is doubled if the column length is increased by a factor of four. Thus 25 times better resolution on a conventional column!!!
I would love to see the comments of anyone who works with GCxGC.
By Anonymous on Thursday, April 24, 2003 - 07:11 am:
I don't understand why you are asking this question. I would think it is self-evident that you are right: a single heart-cut with an optimised second column of long length is going to give you a better result for that cut than a GCxGC system. But the whole idea of GCxGC, as far as I understand it, is that fractions from the whole of the first run are transferred to the second column. Most of the workers in the field seem to refer to the technique as a "comprehensive" two-dimensional separation for this reason. The second column necessarily has to work fast to cope with each fraction as it comes off the first column. Your question is like asking whether a marathon runner on the last 400 m of the run can beat a 400 m runner who only runs that last 400 m.
By Uberto on Tuesday, April 29, 2003 - 06:02 pm:
Does anyone have successful experience working on GCxGC/MS using quadrupole mass selective detector? I understand that the quadrupole MSD may not have enough acquisition speed for fast GC.
By Anonymous on Wednesday, April 30, 2003 - 12:53 am:
GCxGC produces peak widths of less than 100 milliseconds. In order to have enough data points on a peak, you will need an MS operating at a minimum of 50 spectra/second, and preferably even higher (up to 200 spectra/second). No quadrupole will provide this speed. Only a time-of-flight MS can acquire data at these rates.
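The arithmetic behind these rates is straightforward and can be sketched as follows (the function name is illustrative; the 50 spectra/second minimum quoted above corresponds to roughly 5 data points across a 100 ms peak, while ~10 points per peak is the more comfortable target assumed here):

```python
def min_acquisition_rate(peak_width_s, points_per_peak=10):
    """Minimum detector acquisition rate (Hz) needed to place
    a given number of data points across a chromatographic peak."""
    return points_per_peak / peak_width_s

# A 100 ms GCxGC peak needs about 100 spectra/second for 10 points
print(min_acquisition_rate(0.1))   # 100.0
# The bare-minimum 50 spectra/second gives only ~5 points on that peak
print(min_acquisition_rate(0.1, points_per_peak=5))  # 50.0
# A 50 ms peak pushes the 10-point requirement to 200 spectra/second
print(min_acquisition_rate(0.05))  # 200.0
```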
By Marcel van Duyn on Friday, May 30, 2003 - 05:15 am:
Having worked with chromatography for over 20 years (of which 10 years with MS) and being involved in GCxGC now, I can say that you are all (more or less) right. When looking for specific components, a heart-cut (so-called 2D) with optimised columns gives you the most resolving power, thus the best separation. If, however, you are not looking for the needle in the haystack but trying to describe the haystack itself, GCxGC cannot be beaten! Even then things can be optimised. There is no reason not to go for a very long first-dimension column (say 60 metres) and a long second dimension, in combination with a very long "modulation" time. This of course leaves you with extremely long runtimes, but also with the same information you would get out of a heart-cut system!
In our experience, however (oil industry), the standard GCxGC setup (with the short second-dimension column) usually gives enough separation for the majority of the separations from C1-C20 (at least).
Bear in mind the dimensions of the second column (100 micron with a 0.1 or 0.05 micron film) and realise that N per metre is quite a bit larger for those columns than for the traditional ones.
PS: I think that a 50 m column over a 0.5 m one gives you (at the same column dimensions) a 100 times longer column, thus a sqrt(100) = 10 times better resolution, and not 25 times as stated.
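That square-root scaling is quick to verify numerically (a sketch, assuming plate number N scales linearly with column length and resolution scales with sqrt(N); the function name is illustrative):

```python
import math

def resolution_gain(long_length_m, short_length_m):
    """Relative resolution gain from a longer column, assuming the
    plate number N scales linearly with length and resolution
    scales with sqrt(N)."""
    return math.sqrt(long_length_m / short_length_m)

# 50 m heartcut column vs 0.5 m GCxGC second-dimension column
print(resolution_gain(50, 0.5))  # 10.0, not 25
```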
As for the comment about hooking up an MS to the system, the TOF is the best way to do it, but the latest-generation quadrupoles give scan speeds of up to about 10-15 Hz, which will do if you slow down your experiment. Just make sure that the second-dimension peak width is > 1 sec and you will have the desired 10 points per peak!
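Marcel's rule of thumb for the quadrupole case can be turned into a quick check by solving the same points-per-peak arithmetic for peak width (a sketch; the function name is illustrative):

```python
def min_peak_width(scan_rate_hz, points_per_peak=10):
    """Narrowest peak width (s) that still yields the desired
    number of data points at a given MS scan rate."""
    return points_per_peak / scan_rate_hz

# A 10 Hz quadrupole needs second-dimension peaks of at least 1 s
print(min_peak_width(10))  # 1.0
# At 15 Hz the peaks may be somewhat narrower
print(min_peak_width(15))
```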
By Anonymous on Monday, June 2, 2003 - 04:17 am:
Marcel- I agree with most of what you say. However, I do not understand your point about having a very long second column in GCxGC analysis, and a long modulation time. A long first column would be no problem. However, if you have a long second column, where do the peaks from the first column go while the second column is performing its long run? If you just store them in the modulation system by having a long modulation time then you will destroy the separation from the first long column. So you would get a better separation from the second column but a worse separation from the first column.
By Marcel van Duyn on Tuesday, June 10, 2003 - 12:21 am:
Anonymous (June 2):
The trick is to make sure that you have 3 cuts over each first-dimension peak, so you have to slow down the first-dimension separation (flow and/or oven ramp) to obtain this. In doing so you gain in first-dimension separation while gaining in the 2nd dimension as well. The limit lies (as expected) within the van Deemter curve, because at too low a flow the first-dimension separation "dies". In the case you describe (just holding up the components eluting from the first-dimension column) you are killing your first-dimension separation, which makes it "non-comprehensive", and is a waste of time.
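The "3 cuts per peak" rule fixes an upper bound on the modulation period, which can be sketched as follows (the 12 s peak width is purely an illustrative number for a deliberately slowed-down first dimension; the function name is an assumption):

```python
def max_modulation_period(first_dim_peak_width_s, cuts_per_peak=3):
    """Longest modulation period (s) that still samples each
    first-dimension peak the desired number of times."""
    return first_dim_peak_width_s / cuts_per_peak

# A slowed-down first dimension giving 12 s wide peaks
# still tolerates a modulation period of up to 4 s
print(max_modulation_period(12))  # 4.0
```

The design point is the trade-off Marcel describes: slowing the first dimension widens its peaks, which buys time for a longer second-dimension run per modulation without dropping below 3 cuts per peak.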