Sunday, September 27, 2020

First Light: M51

We chose M51 as the first image with our new scope:
(click on image for full resolution)

M51 is a spiral galaxy at a distance of 23 million light years and has a diameter of 76,000 light years - which means it's slightly smaller than our own galaxy (105,700 light years). It was discovered by Charles Messier on October 13, 1773 - though he only discovered the main galaxy. The smaller companion galaxy (NGC 5195) was discovered eight years later by Pierre Méchain.
The most prominent feature of this galaxy is its encounter with the smaller galaxy (at the bottom of the image). Although it looks like a head-on collision, the smaller galaxy (NGC 5195) is actually passing behind the larger one. This process has been going on for hundreds of millions of years.
Another interesting aspect is the large number of supernovae in M51. There were supernovae in 1994, 2005 and 2011 - three supernovae in 17 years is far more than what we see in other galaxies. It's not clear what causes this - or whether the encounter with NGC 5195 has something to do with it.

It took me a long time to figure out how to process images from it:
  1. Normal Calibration of the individual Frames
    We realized that the shutter on the ML50100 camera doesn't close completely and lets some light in. In order to take bias and dark frames, we had to cover the scope (I used a TeleGizmos cover for a Dobsonian that we could pull over the entire scope, finder scope and mount):
  2. Using the DefectMap process to correct dark (cold) and white (hot) columns.
  3. Equalizing all images.
  4. Using the SubframeSelector process to create weights for all images and mark the best images.
    And just so that I don't forget the parameters (the weighting expression is broken down right after this list):
    Scale: 1x1: 0.48 arcsec/pixel, 2x2: 0.96 arcsec/pixel
    Gain: 0.632
    Weighting: (15*(1-(FWHM-FWHMMin)/(FWHMMax-FWHMMin)) + 15*(1-(Eccentricity-EccentricityMin)/(EccentricityMax-EccentricityMin)) + 20*(SNRWeight-SNRWeightMin)/(SNRWeightMax-SNRWeightMin))+50
  5. Align (register) the images (I registered all frames against the best Ha image)
  6. Use the LocalNormalization process to improve normalization of all frames
  7. Use ImageIntegration to stack the images
  8. Use DrizzleIntegration to improve the stacked images
  9. Use LinearFit on all images
  10. Use DynamicBackgroundExtraction to remove any remaining gradients
  11. Use ChannelCombination on the Red, Green and Blue images to create a color image
  12. Use StarNet to remove the stars from the Ha image
  13. Use the NRGBCombination script to add the Ha data to the color image
  14. Use PhotometricColorCalibration to create the right color balance
  15. Use BackgroundNeutralization to create an even background
  16. Use SCNR to remove any green residues
  17. Stretch and process the image (no more noise reduction - the advantage of dark skies!!!)
  18. Enhance the feature contrast (the Pixinsight tutorials from LightVortexAstronomy are awesome!!!)
  19. Use the Convolution process on the RGB data to remove any processing noise in the colors.
  20. Process the Luminance image the same way
  21. Sharpen the image just slightly
  22. Use LRGBCombination to apply the luminance image to the RGB image
  23. Do some final processing (usually just CurvesTransformation to drop the background a little and maybe adjust Lightness and Saturation to bring out the object better).
  24. Done!!!
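For reference, here is the weighting expression from step 4 again, laid out so the three terms are easier to read. Each frame starts with a base weight of 50 and can earn up to 15 points for FWHM (smaller is better), 15 for eccentricity (rounder stars are better) and 20 for SNR - so the best possible frame gets a weight of 100:

(15*(1 - (FWHM - FWHMMin)/(FWHMMax - FWHMMin))
 + 15*(1 - (Eccentricity - EccentricityMin)/(EccentricityMax - EccentricityMin))
 + 20*(SNRWeight - SNRWeightMin)/(SNRWeightMax - SNRWeightMin))
+ 50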
Interestingly, M51 was also the first image I took with my Celestron EDGE scope 7(!!) years ago: https://mstriebeck-astrophotography.blogspot.com/2013/06/first-image-with-new-setup-m51.html. What a difference darker skies, better equipment and 7 years of experience make :-)

Removing stars with StarNet++

To cleanly fold Ha data into our LRGB images, we want to remove all stars (otherwise the star colors will be distorted). In the past I tried various options:
  • Creating a star mask and subtracting it from the image. This leaves black holes in the image - which isn't too bad, as the holes are where the stars were. But if the stars in the star mask aren't the same size as the real stars, some artifacts are left.
  • I tried Straton - but could never get a clean image out of it.
So, I was excited when I read about StarNet++ - an ML-based algorithm that removes stars. And it even comes with the latest version of Pixinsight:

The first time I used it, I wasn't sure what these "weights" files are... It turns out that these are StarNet files that DON'T ship with Pixinsight. Weird.

If you go to the SourceForge project of StarNet, you can download the standalone software, unpack it and find the files in there:
    mono_starnet_weights.pb
    rgb_starnet_weights.pb

You copy them into the Pixinsight library folder (or a subfolder), click on "Preferences" and select them:

Now, one challenge is that this process only works on non-linear images - but we want to fold the Ha data into the RGB data in the linear stage.

I found this video where somebody explained how to create a "reversible stretch". We use the ScreenTransferFunction:

And click on "Edit Instance Source Code" (the little square icon at the bottom):

The middle value in the first row (here: 0.00051) is the midtone stretch factor. We use that to create a PixelMath stretch function: mtf(0.00051,$T)

When we apply this to our image instead of the ScreenTransferFunction stretch we get a stretched linear image that looks like this:

Not the best contrast, but enough to let StarNet do its work. If we now apply the StarNet process to this image, we get this (this can take quite a while!):

It's a little hard to tell, but the stars are gone. We now change the expression in the PixelMath window to mtf(1-0.00051,$T), which reverses the stretch. If we stretch the image again with ScreenTransferFunction, we can see that the stars are gone:

The larger stars leave some shadow behind, but that won't matter too much. We can use this image to fold the Ha data into the RGB data. I use the NRGBCombination script (under Scripts->Utilities) for this.
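Here is the same reversible-stretch trick as a minimal PJSR sketch (assuming the 0.00051 midtones value read off the STF instance above - use the value from your own image). It is just the two PixelMath expressions described above, driven from a script:

var stretch = new PixelMath();
stretch.expression = "mtf(0.00051,$T)";      // forward: reversible midtone stretch so StarNet has something to work on
stretch.executeOn(ImageWindow.activeWindow.mainView);

// ... apply the StarNet process to the stretched image here ...

var unstretch = new PixelMath();
unstretch.expression = "mtf(1-0.00051,$T)";  // inverse: undo the stretch to get back to (star-less) linear data
unstretch.executeOn(ImageWindow.activeWindow.mainView);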

Saturday, September 26, 2020

Equalizing the ML50100 images

The ML50100 is basically two chips sitting right next to each other. Each half has its own readout - and with that its own characteristics.

Here is a 1x1 bias frame:

The different levels are clearly visible. It gets even more obvious when zooming into the Master Bias:

There are even different patterns on each side. Again, it's basically two separate chips...

Now, one question was: can we calibrate these differences out? By taking very good dark and flat frames, would the resulting light frames come out clean (i.e. without different levels on the left and right hand side)?

My answer is: no! I tried everything (using darks with vs. without bias, correcting flats with darks with/without bias...) and I always ended up with different levels. Taking very high quality calibration frames can probably reduce the effect. So, I went back to Dark Skies New Mexico to cover the scope and take dark and bias frames, making sure that absolutely no light gets in:
(I know, it looks horrible - but it did the trick).

It's easy to imagine a PixelMath expression that equalizes both halves: get the mean of a background area on the left and on the right side, and multiply one side by the quotient of the two. You can get the mean by creating a preview of an area without stars and using the Statistics process.
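As a sketch, with a hypothetical correction factor of 1.0123 (right background divided by left background), such an expression would divide every pixel in the right half by the correction and leave the left half untouched:

iif(x() >= w()/2, $T[0]/1.0123, $T[0])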

The problem is that you need to do this for each frame!!! If you do it on the stacked image, you end up with a seam in the middle, because the frames were aligned before they were stacked. And doing this for each frame individually is a royal PITA.

So, I wrote a simple Pixinsight script that does this across many individual images. First, you load one image (after calibration and applying the DefectMap) and put two previews on it - near the center and both covering mostly background:


The script uses these two previews, applies them to all other images, measures the medians on both sides and corrects the right-hand side:

The good thing: this made me learn the Javascript-based Pixinsight scripting (the bad thing: there is no real reference or tutorial!!!)

Two things to remember:
  1. I have to choose previews that have some empty space around them. Because we dither our images, each frame is offset a little in some direction - i.e. the preview area shifts around.
  2. It's better to equalize images from each side of the meridian separately as they will be rotated by 180 degrees.
With this, I can easily equalize all frames before stacking!!!!

Here is the code for the script (I know it's not the cleanest ...):

#include <pjsr/TextAlign.jsh>
#include <pjsr/Sizer.jsh>
#include <pjsr/StdIcon.jsh>
#include <pjsr/StdButton.jsh>

// Equalize the right half of each image in FileList against the left half,
// using the medians of the two reference previews to compute the correction.
function equalize_images(LeftPreview, RightPreview, FileList) {
   Console.writeln("Equalizing " + FileList.length + " images against Reference Image " +
                   LeftPreview.fullId.substr(0, LeftPreview.fullId.lastIndexOf("->")));

   var ReferenceImageWindow = LeftPreview.window;
   Console.writeln("Dimension of Reference Image: Height=" + ReferenceImageWindow.mainView.image.height +
                   " Width=" + ReferenceImageWindow.mainView.image.width);

   var success = true;
   for (var i = 0; i < FileList.length; i++) {
      var CurrentImage = ImageWindow.open(FileList[i]);
      if (CurrentImage.length < 1) {
         (new MessageBox("Unable to open file: " + FileList[i], "Error",
                         StdIcon_Error, StdButton_Ok)).execute();
         throw new Error("Unable to open file: " + FileList[i]);
      } else if (CurrentImage.length > 1) {
         (new MessageBox("Multi-Image files are not supported (" + FileList[i] + ")", "Error",
                         StdIcon_Error, StdButton_Ok)).execute();
         throw new Error("Multi-Image files are not supported (" + FileList[i] + ")");
      } else if (CurrentImage[0].mainView.image.height != ReferenceImageWindow.mainView.image.height ||
                 CurrentImage[0].mainView.image.width != ReferenceImageWindow.mainView.image.width) {
         (new MessageBox("Image " + FileList[i] +
                         " does not have same dimension as reference image.", "Error",
                         StdIcon_Error, StdButton_Ok)).execute();
         throw new Error("Image " + FileList[i] +
                         " does not have same dimension as reference image.");
      } else {
         Console.writeln("Processing " + (i+1) + " of " + FileList.length +
                         " image: " + FileList[i]);

         // Re-create the two reference previews on the current image and
         // measure the background median on each side.
         var CurrentImageLeftPreview =
            CurrentImage[0].createPreview(ReferenceImageWindow.previewRect(LeftPreview));
         var CurrentImageLeftMedian = CurrentImageLeftPreview.image.median();
         var CurrentImageRightPreview =
            CurrentImage[0].createPreview(ReferenceImageWindow.previewRect(RightPreview));
         var CurrentImageRightMedian = CurrentImageRightPreview.image.median();
         var Correction = CurrentImageRightMedian / CurrentImageLeftMedian;
         Console.writeln("   Median of left Preview = " + CurrentImageLeftMedian);
         Console.writeln("   Median of right Preview= " + CurrentImageRightMedian);
         Console.writeln("   Correction= " + Correction);

         // Divide the right half of the image by the correction factor.
         var PixelMathProcess = new PixelMath();
         PixelMathProcess.expression = "iif(x()>=w()/2,$T[0]/" + Correction + ",$T[0])";
         PixelMathProcess.executeOn(CurrentImage[0].mainView);

         // Save the result next to the original with an "_equalized" suffix.
         var DotPosition = CurrentImage[0].filePath.lastIndexOf(".");
         var PathWithoutExtension = CurrentImage[0].filePath.substring(0, DotPosition);
         var Extension = CurrentImage[0].filePath.substring(DotPosition);
         var NewFilePath = PathWithoutExtension + "_equalized" + Extension;
         Console.writeln("Saving equalized image as " + NewFilePath);
         CurrentImage[0].saveAs(NewFilePath, false, false, false, false);
         CurrentImage[0].close();
         Console.writeln();
      }
   }

   if (success) {
      (new MessageBox("Done!", "Done",
                      StdIcon_Information, StdButton_Ok)).execute();
   }
}

// Earlier version that works on all currently open images instead of a file
// list; not called by the dialog anymore, kept here for reference.
function apply_previews(LeftPreview, RightPreview, FileList) {
   var image_list = ImageWindow.windows;
   var active_window = ImageWindow.activeWindow;

   Console.writeln("Found " + image_list.length + " images.");

   for (var i = 0; i < image_list.length; i++) {
      console.write("" + i + " Image " + image_list[i].filePath);
      var file_path = image_list[i].filePath;

      if ((image_list[i].height != active_window.height) ||
          (image_list[i].width != active_window.width)) {
         console.writeln(" - has different resolution than active window - skipping!");
      } else {
         console.writeln(" - creating previews");
         var new_left_preview = image_list[i].createPreview(LeftPreview);
         var left_median = new_left_preview.computeProperty("Median").median();
         var new_right_preview = image_list[i].createPreview(RightPreview);
         var right_median = new_right_preview.computeProperty("Median").median();
         var correction = right_median / left_median;
         Console.writeln("Left=" + left_median + " Right=" + right_median +
                         " correction=" + correction);

         var pixelMath = new PixelMath();
         pixelMath.createNewImage = true;
         pixelMath.expression = "iif(x()>=w()/2,$T[0]/" + correction + ",$T[0])";
         pixelMath.executeOn(image_list[i].mainView);

         var dot_pos = file_path.lastIndexOf(".");
         var all_but_extension = file_path.substring(0, dot_pos);
         var extension = file_path.substring(dot_pos);
         var new_file_path = all_but_extension + "_equalized" + extension;
         var new_image = ImageWindow.activeWindow;
         Console.writeln("Saving as " + new_file_path);
         new_image.saveAs(new_file_path, false, false, false, false);
         new_image.close();
      }
   }
}

// Dialog: select the left/right reference previews and the list of images to equalize.
function equalize_dialog() {
   this.__base__ = Dialog;
   this.__base__();

   this.LeftPreviewLabel = new Label();
   this.LeftPreviewLabel.text = "Left Preview";
   this.LeftPreviewLabel.minWidth = 30;
   this.LeftPreviewViewList = new ViewList();
   this.LeftPreviewViewList.getPreviews();
   this.LeftPreviewSizer = new HorizontalSizer();
   this.LeftPreviewSizer.add(this.LeftPreviewLabel);
   this.LeftPreviewSizer.add(this.LeftPreviewViewList);

   this.RightPreviewLabel = new Label();
   this.RightPreviewLabel.text = "Right Preview";
   this.RightPreviewLabel.minWidth = 30;
   this.RightPreviewViewList = new ViewList();
   this.RightPreviewViewList.getPreviews();
   this.RightPreviewSizer = new HorizontalSizer();
   this.RightPreviewSizer.add(this.RightPreviewLabel);
   this.RightPreviewSizer.add(this.RightPreviewViewList);

   this.inputFiles = new Array;

   this.files_Label = new Label(this);
   this.files_Label.text = "Images to equalize";
   this.files_Label.minWidth = 100;

   this.files_TreeBox = new TreeBox( this );
   this.files_TreeBox.multipleSelection = true;
   this.files_TreeBox.rootDecoration = false;
   this.files_TreeBox.alternateRowColor = true;
   this.files_TreeBox.setScaledMinSize( 500, 200 );
   this.files_TreeBox.numberOfColumns = 1;
   this.files_TreeBox.headerVisible = false;

   for ( let i = 0; i < this.inputFiles.length; ++i )
   {
      let node = new TreeBoxNode( this.files_TreeBox );
      node.setText( 0, this.inputFiles[i] );
   }

   this.filesAdd_Button = new PushButton( this );
   this.filesAdd_Button.text = "Add";
   this.filesAdd_Button.icon = this.scaledResource( ":/icons/add.png" );
   this.filesAdd_Button.toolTip = "<p>Add image files to the input images list.</p>";
   this.filesAdd_Button.onClick = function()
   {
      let ofd = new OpenFileDialog;
      ofd.multipleSelections = true;
      ofd.caption = "Select Images";
      ofd.loadImageFilters();

      if ( ofd.execute() )
      {
         this.dialog.files_TreeBox.canUpdate = false;
         this.dialog.inputFiles.length = 0;
         for ( let i = 0; i < ofd.fileNames.length; ++i )
         {
            let node = new TreeBoxNode( this.dialog.files_TreeBox );
            node.setText( 0, ofd.fileNames[i] );
            this.dialog.inputFiles.push( ofd.fileNames[i] );
         }
         this.dialog.files_TreeBox.canUpdate = true;
      }
   };

   this.filesClear_Button = new PushButton( this );
   this.filesClear_Button.text = "Clear";
   this.filesClear_Button.icon = this.scaledResource( ":/icons/clear.png" );
   this.filesClear_Button.toolTip = "<p>Clear the list of input images.</p>";
   this.filesClear_Button.onClick = function()
   {
      this.dialog.files_TreeBox.clear();
      this.dialog.inputFiles.length = 0;
   };

   this.filesButtons_Sizer = new HorizontalSizer;
   this.filesButtons_Sizer.spacing = 4;
   this.filesButtons_Sizer.add( this.filesAdd_Button );
   this.filesButtons_Sizer.addStretch();
   this.filesButtons_Sizer.add( this.filesClear_Button );
   this.filesButtons_Sizer.addStretch();

   // "Equalize" button: validate the selections and run the equalization.
   var ReadPreviewsButton = new PushButton();
   ReadPreviewsButton.text = "Equalize";
   ReadPreviewsButton.onClick = function() {
      var LeftPreview = this.dialog.LeftPreviewViewList.currentView;
      var RightPreview = this.dialog.RightPreviewViewList.currentView;
      if (LeftPreview.isNull) {
         (new MessageBox("Must set Left Preview", "Error",
                         StdIcon_Error, StdButton_Ok)).execute();
      } else if (RightPreview.isNull) {
         (new MessageBox("Must set Right Preview", "Error",
                         StdIcon_Error, StdButton_Ok)).execute();
      } else if (LeftPreview.fullId.substr(0, LeftPreview.fullId.lastIndexOf("->")) !=
                 RightPreview.fullId.substr(0, RightPreview.fullId.lastIndexOf("->"))) {
         (new MessageBox("Select both Previews from same Image.", "Error",
                         StdIcon_Error, StdButton_Ok)).execute();
      } else if (this.dialog.inputFiles.length == 0) {
         (new MessageBox("Must select at least one image to equalize.", "Error",
                         StdIcon_Error, StdButton_Ok)).execute();
      } else {
         equalize_images(LeftPreview, RightPreview, this.dialog.inputFiles);
      }
   }

   this.sizer = new VerticalSizer();
   this.sizer.add(this.LeftPreviewSizer);
   this.sizer.add(this.RightPreviewSizer);
   this.sizer.add(this.files_Label);
   this.sizer.add(this.files_TreeBox);
   this.sizer.add(this.filesButtons_Sizer);
   this.sizer.add(ReadPreviewsButton);

   this.adjustToContents();
}

equalize_dialog.prototype = new Dialog();

function main() {
   var dialog = new equalize_dialog();
   dialog.execute();
}

main();


A hair on the Blue Filter !!!!

When we took flat images, we could see a weird artifact on one side:

It looks like something that is not quite on the lens or the filter - but slightly in front of it.

And even worse: it doesn't seem to be stable. Here is a different flat:

And of course these artifacts also show up in the actual images. I tried various things but couldn't find a way to eliminate them :-( So for now, we have to live with them - which means cropping out the left-hand side of the images...

Creating Defect Maps

The ML50100 camera that we are using in our Namid observatory has quite a number of defective columns:

As visible in this zoom, there are both black and white defects:

Some of the defects span the entire column, some only go part of the way up.

Initially, I ignored these, but there are too many of them and they cause artifacts in the stacked and processed images. So, I looked into DefectMap and CosmeticCorrection. I didn't have much luck with CosmeticCorrection - it didn't clean up my images, and it has A LOT of parameters - too many for my impatience. DefectMap, on the other hand, is nice and simple. Clearly I might leave some data on the table: DefectMap only knows "pixel defect" vs. "pixel OK", and a defective pixel simply gets replaced by an interpolation of the pixels around it.

I then tried to figure out how to create a defect map (an image where pixels are either 1 = OK or 0 = bad). I ended up creating them manually:

First, I would stack many images of the same binning (no alignment! they can be from different filters) - with no pixel rejection!!!


Then I would zoom into the lower part of the image and slowly scan from one side to the other:


Once I reach a defect column, I scroll up until I see where it ends (or all the way up if the entire column is defective). I can then use Pixinsight's readout to see at which coordinates I am. I would do that for all defect columns.

Finally, I put these coordinates into a PixelMath expression of the form iif(x()==<bad column>, 0, 1) - see the full expressions below. Select "Create New Image" under Destination and apply this PixelMath process to the image that we created.

The DefectMap process doesn't have a file/view list, so I create an ImageContainer with all the images that I want to fix and then apply the process to that container. Using the defect maps from below and stacking the same images as before results in this:

The vertical streaks are gone - yay!

Here are the exact expressions that I used. I put them here mostly as a reference for myself in case I need to change or regenerate my defect maps.

1x1: 
iif(x()==212 | 
x()==220 | 
(x()==308 & y()>=5457) | 
(x()==405 & y()>=5686) | 
(x()==419 & y()>=2404) | 
(x()==675 & y()>=5015) | 
(x()==855 & y()>=4662) | 
x()==954 | 
(x()==1444 & y()>=476) | 
(x()==1660 & y()>=594) | 
(x()==1999 & y()>=451) | 
(x()==2609 & y()>=316) | 
(x()==3042 & y()>=5611) | 
(x()==3162 & y()>=4475) | 
(x()==3216 & y()>=4798) | 
(x()==3231 & y()>=381) | 
(x()==3461 & y()>=3868) | 
(x()==3509 & y()>=5645) | 
(x()==3583 & y()>=2380) | 
(x()==3631 & y()>=4101) | 
(x()==4161 & y()>=1949) | 
(x()==4331 & y()>=5718) | 
(x()==4409 & y()>=2365) | 
(x()==4569 & y()>=450) | 
(x()==4778 & y()>=2613) | 
(x()==4992 & y()>=2957) | 
(x()==5461 & y()>=5106) | 
x()==5541 | 
(x()==5763 & y()>=3177) | 
(x()==5859 & y()>=534) | 
(x()==6058 & y()>=5447) | 
x()==6098 | 
(x()==6110 & y()>=3363) | 
(x()==6336 & y()>=4897) | 
x()==6393 | 
(x()==6866 & y()>=2828) | 
(x()==6877 & y()>=5863) | 
x()==7217 | 
(x()==7222 & y()>=1135) | 
(x()==7448 & y()>=3435) | 
x()==7703 | 
x()==7952 ,0,1) 

2x2: 
iif(x()==106 | 
x()==110 | 
x()==453 | 
x()==477 | 
x()==617 | 
x()==632 | 
x()==948 | 
x()==999 | 
x()==1608 | 
x()==1702 | 
x()==1791 | 
x()==2284 | 
x()==2477 | 
x()==2770 | 
x()==3049 | 
x()==3193 | 
x()==3196 | 
x()==3269 | 
x()==3433 | 
x()==3588 | 
x()==3608 | 
x()==3851 | 
x()==3974 | 
x()==3976 | 
x()==4058 | 
(x()==154 & y()>=2741) | 
(x()==202 & y()>=2843) | 
(x()==209 & y()>=1180) | 
(x()==427 & y()>=2341) | 
(x()==722 & y()>=239) | 
(x()==802 & y()>=1249) | 
(x()==830 & y()>=332) | 
(x()==915 & y()>=2321) | 
(x()==1457 & y()>=1887) | 
(x()==1521 & y()>=2807) | 
(x()==1581 & y()>=2237) | 
(x()==1615 & y()>=191) | 
(x()==1754 & y()>=2822) | 
(x()==1815 & y()>=2050) | 
(x()==2165 & y()>=2859) | 
(x()==2080 & y()>=974) | 
(x()==2389 & y()>=1209) | 
(x()==2399 & y()>=1244) | 
(x()==2496 & y()>=1482) | 
(x()==2637 & y()>=2688) | 
(x()==2730 & y()>=2553) | 
(x()==2881 & y()>=1585) | 
(x()==3027 & y()>=2351) | 
(x()==3055 & y()>=1681) | 
(x()==3168 & y()>=2448) | 
(x()==3438 & y()>=2932) | 
(x()==3531 & y()>=1832) | 
(x()==3611 & y()>=567) | 
(x()==3615 & y()>=2041) | 
(x()==3724 & y()>=1709) | 
(x()==3823 & y()>=863) ,0,1)