Why Hybrid-composite Mosaics? 
LRGB versus LOSC Comparison (mouse over image) 
So why do I often image with two different telescope-camera setups and then combine the results into a completed image called a hybrid-composite mosaic? The answer is fairly simple: I live in a region of the USA where the weather is very uncooperative for astroimaging! The Rocky Mountain West has many attractions for astroimaging, such as low light pollution, low humidity (excellent transparency), and high elevations. But it has two major drawbacks: relatively poor seeing, or air stability (due to the ever-present jet stream and mountain-induced air turbulence), and uncooperative weather -- i.e., clouds! I combat the seeing by imaging with short-focal-length instruments. To combat the weather, I decided a few years ago to try capturing all the data for a given object in one night. That way I would at least be able to finish some images without interruption by clouds.  
To get the kind of data I like, I need fairly long total exposure times. This was often a problem: one night I would get all the luminance data, but the next night I might only get the R and B data (often not even that!). Many times the weather would be poor for imaging for weeks, and I could not finish the image. Sure, I could make a synthetic G layer, but that was not what I wanted. So after facing this issue many times, I decided to double my nightly data. 
In making a hybrid-composite mosaic (a term coined by Rob Gendler in an online article for Sky & Telescope), I get high-resolution luminance data with an Astro-Physics Riccardi-Honders astrograph (RHA) while simultaneously getting color data with a Takahashi FSQ-106. The telescopes' respective focal lengths are 1159mm and 530mm. With the RHA I use an FLI Proline-16803 camera, and with the FSQ an SBIG STL-11002XCM OSC camera. The image scales are 1.60 arcsec/pixel and 3.49 arcsec/pixel, respectively. The OSC (one-shot color) camera lets me get all the color data without using filters and thus losing data to clouds. Each image frame contains all the color data -- hence the name "one-shot color"! 
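Those image-scale numbers come straight from the standard plate-scale formula: scale (arcsec/pixel) = 206.265 × pixel size (µm) / focal length (mm). A quick sketch, assuming the nominal 9 µm pixel pitch of the KAF-16803 and KAI-11002 sensors in these cameras:

```python
# Plate scale: 206265 arcsec per radian, with the um-to-mm
# conversion folded in gives the familiar constant 206.265.
PLATE_SCALE_CONST = 206.265

def image_scale(focal_mm: float, pixel_um: float = 9.0) -> float:
    """Return image scale in arcseconds per pixel."""
    return PLATE_SCALE_CONST * pixel_um / focal_mm

print(round(image_scale(1159), 2))  # RHA + Proline-16803
print(round(image_scale(530), 2))   # FSQ-106 + STL-11002XCM
```

The FSQ result computes to about 3.50 arcsec/pixel with a nominal 530mm focal length; the small difference from the quoted 3.49 presumably reflects the exact effective focal length of the system.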
So what does this get me? Since I image with two setups each night, I get all the data needed to complete an image that same night. I also essentially get two images of the same region -- one with higher resolution and more detail, and one with a much wider FOV. See the image below of my fields of view (FOV) for each setup. 
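The FOV difference can be estimated the same way, from sensor size and focal length. A sketch, assuming the published sensor geometries (KAF-16803: 4096 × 4096 pixels at 9 µm; KAI-11002: 4008 × 2672 pixels at 9 µm):

```python
import math

def fov_deg(sensor_mm: float, focal_mm: float) -> float:
    """Angular field of view (degrees) along one sensor dimension."""
    return math.degrees(2 * math.atan(sensor_mm / (2 * focal_mm)))

# Sensor dimensions from pixel count x 9 um pitch (assumed spec values)
kaf16803 = 4096 * 0.009     # ~36.9 mm, square sensor
kai11002_w = 4008 * 0.009   # ~36.1 mm wide
kai11002_h = 2672 * 0.009   # ~24.0 mm tall

print(round(fov_deg(kaf16803, 1159), 2))   # RHA luminance FOV (each side)
print(round(fov_deg(kai11002_w, 530), 2))  # FSQ color FOV, width
print(round(fov_deg(kai11002_h, 530), 2))  # FSQ color FOV, height
```

Under those assumptions the FSQ frame is roughly 3.9 × 2.6 degrees versus about 1.8 degrees square for the RHA -- which is why the wider field puts the target into such nice context.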
Getting these two images has been fun. I am often surprised by how seeing a wider FOV puts the object I was targeting into perspective! 
Are there any downsides? Sure. The cost of two setups (two mounts, two scopes, two cameras, software, etc.) -- that is obvious. There are also two sets of data to process -- OK, that may not be an actual downside, but it might be for some. 
What about image fidelity? That is, does the hybrid image suffer from poor color detail, since the scope/camera combo I use for the color data has a much larger image scale than the luminance setup? In general, no, not really. Mouse over the image at the top of the screen. As you can see, the LOSC image compares favorably with the LRGB (all data taken through the RHA using filters). The color is good. The detail is good too. One thing is obvious, though -- the LRGB stars are tighter; that is, less bloated. The star bloat can be improved by reducing the star sizes in the OSC image before combining it with the luminance image (I did not do this in the Sh2-140 example -- call me lazy). But in general, the images are pretty close. The nebulosity (which is what I am after) does not suffer from this technique. 
Since adopting this technique my image production rate has increased, which has been good: I can image more of my desired targets without compromising image quality. If you suffer from similar imaging limitations, I would encourage you to give it a try!