Sunday, September 27, 2020

First Light: M51

We chose M51 as the first image of our new scope:
(click on image for full resolution)

M51 is a spiral galaxy at a distance of 23 million light years with a diameter of 76,000 light years - which means it's slightly smaller than our own galaxy (105,700 light years). It was discovered by Charles Messier on October 13, 1773 - though he only discovered the main galaxy. The smaller galaxy (NGC 5195) was discovered in 1781 by Pierre Méchain.
The most prominent feature of this galaxy is the encounter with the smaller galaxy (at the bottom of the image). Although it looks like a frontal encounter, the smaller galaxy (NGC 5195) is actually passing behind the larger one. This process has been going on for hundreds of millions of years.
Another interesting aspect is the large number of supernovae in M51. There have been supernovae in 1994, 2005 and 2011 - three supernovae in 17 years is much more than what we see in other galaxies. It's not clear what causes this - and whether the encounter with NGC 5195 has something to do with it.

It took me a long time to figure out how to process images from this camera:
  1. Normal Calibration of the individual Frames
    We realized that the shutter on the ML50100 camera isn't completely shut and lets some light in. In order to take bias and dark frames, we had to cover the scope (I used a Telegizmo cover for a Dobsonian that we could pull over the entire scope, finder scope and mount):
  2. Using the DefectMap process to correct dark (cold) and white (hot) columns.
  3. Equalizing all images.
  4. Using the SubframeSelector process to create weights for all images and mark the best images.
  5. Align (register) the images (I registered all images against the best Ha image)
  6. Use the LocalNormalization process to improve normalization of all frames
  7. Use ImageIntegration to stack the images
  8. Use DrizzleIntegration to improve the stacked images
  9. Use LinearFit on all images
  10. Use DynamicBackgroundExtraction to remove any remaining gradients
  11. Use ColorCombine on the Red, Green and Luminance images to create a color image
  12. Use StarNet to remove the stars from the Ha image
  13. Use the NRGBCombination script to add the Ha data to the color image
  14. Use PhotometricColorCalibration to create the right color balance
  15. Use BackgroundNeutralization to create an even background
  16. Use SCNR to remove any green residues
  17. Stretch and process the image (no more noise reduction - the advantage of dark skies!!!)
  18. Enhancing the Feature Contrast (the Pixinsight tutorials from LightVortexAstronomy are awesome!!!)
  19. I then use the Convolution process on the RGB data to remove any processing noise in the colors.
  20. Process the Luminance image the same way
  21. Sharpen the image just so slightly
  22. Use LRGBCombination to apply the luminance image to the RGB image
  23. Do some final processing (usually just CurvesTransformation to drop the background a little and maybe adjust Lightness and Saturation to bring out the object better).
  24. Done!!!
Interestingly, M51 was also the first image I took with my Celestron EDGE scope 7(!!) years ago: What a difference darker skies, better equipment and 7 years of experience make :-)

Removing stars with StarNet++

To cleanly fold Ha data into our LRGB images, we want to remove all stars (otherwise the star colors will be distorted). In the past I tried various options:
  • Creating a star mask and subtracting it from the image. This leaves black holes in the image - which isn't too bad, as these holes are where the stars were. But if the stars in the star mask are not the same size as the real stars, some artifacts are left.
  • I tried Straton - but could never get a clean image out of it.
So, I was excited when I read about StarNet++ - an ML-based algorithm that removes stars. And it even comes with the latest version of Pixinsight:

The first time, I wasn't sure what these "weights" files are... Turns out that these are StarNet files that DON'T ship with Pixinsight. Weird.

If you go to the SourceForge project of StarNet, you can download the whole standalone software, unpack it and find the files in there:

You copy them into the Pixinsight library folder (or a subfolder), click on "Preferences" and select them:

Now, one challenge is that this process only works on non-linear images - but we want to fold the Ha data into the RGB data in the linear stage.

I found this video where somebody explained how to create a "reversible stretch". We use the ScreenTransferFunction:

And click on "Edit Instance Source Code" (the little square icon at the bottom):

The middle value in the first row (here: 0.00051) is the midtone stretch factor. We use it to create a PixelMath stretch function: mtf(0.00051,$T)

When we apply this to our image instead of the ScreenTransferFunction stretch we get a stretched linear image that looks like this:

Not the best contrast, but enough to let StarNet do its work. If we now apply the StarNet process to this image, we get this (this can take quite a while!):

It's a little bit hard to tell, but the stars are gone. We change the expression in the PixelMath window to: mtf(1-0.00051,$T) which reverses the stretch. If we now stretch the image again with ScreenTransferFunction we see that the stars are gone:

The larger stars leave some shadow behind, but that won't matter too much. We can use this image to fold the Ha data into the RGB data. I use the NRGBCombination script (under Scripts->Utilities) for this.
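The reversible stretch works because the midtone transfer function is its own inverse under m → 1-m. A quick sketch in Python (the mtf function below is my own reimplementation of the standard midtone transfer formula, not Pixinsight code):

```python
def mtf(m, x):
    # Midtone transfer function for x in [0, 1] with midtone balance m.
    # This is the same formula PixelMath's mtf() implements.
    if x <= 0.0 or x >= 1.0:
        return max(0.0, min(1.0, x))
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

# Stretch with m, then "unstretch" with 1 - m: the original value comes back.
m = 0.00051
x = 0.1
stretched = mtf(m, x)          # strongly brightened, ready for StarNet
restored = mtf(1.0 - m, stretched)  # back to the linear value
```

This is why applying mtf(1-0.00051,$T) after star removal returns the image to its linear state.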

Saturday, September 26, 2020

Equalizing the ML50100 images

The ML50100 is basically two chips that are just next to each other. Each side has its own readout - and with that its own characteristics.

Here is a 1x1 bias frame:

The different levels are clearly visible. But even more, if zooming into the Master Bias:

There are even different patterns on each side. Again, it's basically two separate chips...

Now, one question was: can we calibrate these differences out? By taking very good dark and flat frames, would the resulting light frames be clean (i.e. not have different levels on the left- and right-hand sides)?

My answer is: no! I tried everything (darks with vs. without bias, correcting flats with darks with/without bias...) and always ended up with different levels. Taking very high quality frames can probably reduce the effect, though. So, I went back to Dark Skies New Mexico to cover the scope and take dark and bias frames, making sure that absolutely no light comes in:
(I know, it looks horrible - but it did the trick).

It's easy to imagine a PixelMath expression that equalizes both halves: get the mean of a background part on the left and right side and multiply one side by the quotient of the two. You can get the mean by creating a preview of an area without stars and using the Statistics process.
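As a sketch of what that PixelMath expression does (numpy and the function name here are my own illustration, not Pixinsight):

```python
import numpy as np

def equalize_halves(img, left_box, right_box):
    # left_box / right_box: (y0, y1, x0, x1) background regions without stars,
    # one on each side of the seam in the middle of the sensor
    left_mean = np.mean(img[left_box[0]:left_box[1], left_box[2]:left_box[3]])
    right_mean = np.mean(img[right_box[0]:right_box[1], right_box[2]:right_box[3]])
    correction = right_mean / left_mean
    out = img.astype(float).copy()
    # equivalent of the PixelMath expression iif(x()>=w()/2, $T/correction, $T)
    out[:, img.shape[1] // 2:] /= correction
    return out
```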

The problem is that you need to do this for each frame!!! If you do it on the stacked image, you end up with a seam in the middle because the various frames were first aligned and then stacked. And doing it for each frame individually is a royal PITA.

So, I wrote a simple Pixinsight script that does this across many individual images. First, you load one image (after calibration and applying the DefectMap) and put two previews on it - near the center and both covering mostly background:

The script uses these two previews, applies them to all other images, measures the means on both sides and corrects the right-hand side:

The good thing is: this made me learn the Javascript-based Pixinsight scripting (the bad thing: there is no real reference or tutorial!!!)

Two things to remember:
  1. I have to choose previews that have some empty space around them. Because we are dithering our images, they will all be offset a little in some direction, i.e. the preview will shift around.
  2. It's better to equalize images from each side of the meridian separately as they will be rotated by 180 degrees.
With this, I can easily equalize all frames before stacking!!!!
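For reference, item 2 above exists because a preview defined on one side of the meridian lands at the mirrored position in the 180-degree rotated frames. If one wanted to reuse a single preview across both sides, the rectangle would map like this (a hypothetical helper, not part of my script):

```python
def rotate_preview_180(rect, width, height):
    # rect = (x0, y0, w, h) in the reference frame; returns where the same
    # patch of sky sits in a frame rotated by 180 degrees
    x0, y0, w, h = rect
    return (width - x0 - w, height - y0 - h, w, h)
```

Applying it twice gives back the original rectangle, as a 360-degree rotation should.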

Here is the code for the script (I know it's not the cleanest ...):

#include <pjsr/TextAlign.jsh>
#include <pjsr/Sizer.jsh>
#include <pjsr/StdIcon.jsh>
#include <pjsr/StdButton.jsh>

function equalize_images(LeftPreview, RightPreview, FileList) {
   Console.writeln("Equalizing " + FileList.length + " images against Reference Image " +
                   LeftPreview.fullId.substr(0, LeftPreview.fullId.lastIndexOf("->")));
   var ReferenceImageWindow = LeftPreview.window;
   Console.writeln("Dimension of Reference Image: Height=" + ReferenceImageWindow.mainView.image.height +
                   " Width=" + ReferenceImageWindow.mainView.image.width);
   for (var i = 0; i < FileList.length; i++) {
      var CurrentImage = ImageWindow.open(FileList[i]);
      if (CurrentImage.length < 1) {
         (new MessageBox("Unable to open file: " + FileList[i], "Error",
                         StdIcon_Error, StdButton_Ok)).execute();
         throw new Error("Unable to open file: " + FileList[i]);
      } else if (CurrentImage.length > 1) {
         (new MessageBox("Multi-Image files are not supported (" + FileList[i] + ")", "Error",
                         StdIcon_Error, StdButton_Ok)).execute();
         throw new Error("Multi-Image files are not supported (" + FileList[i] + ")");
      } else if (CurrentImage[0].mainView.image.height != ReferenceImageWindow.mainView.image.height ||
                 CurrentImage[0].mainView.image.width != ReferenceImageWindow.mainView.image.width) {
         (new MessageBox("Image " + FileList[i] +
                         " does not have same dimension as reference image.", "Error",
                         StdIcon_Error, StdButton_Ok)).execute();
         throw new Error("Image " + FileList[i] +
                         " does not have same dimension as reference image.");
      } else {
         Console.writeln("Processing " + (i+1) + " of " + FileList.length +
                         " image: " + FileList[i]);
         // Re-create the two reference previews on the current image
         var CurrentImageLeftPreview = CurrentImage[0].createPreview(ReferenceImageWindow.previewRect(LeftPreview));
         var CurrentImageLeftMedian = CurrentImageLeftPreview.image.median();
         var CurrentImageRightPreview = CurrentImage[0].createPreview(ReferenceImageWindow.previewRect(RightPreview));
         var CurrentImageRightMedian = CurrentImageRightPreview.image.median();
         var Correction = CurrentImageRightMedian / CurrentImageLeftMedian;
         Console.writeln("   Median of left Preview= " + CurrentImageLeftMedian);
         Console.writeln("   Median of right Preview=" + CurrentImageRightMedian);
         Console.writeln("   Correction= " + Correction);
         // Divide the right half by the correction factor
         var PixelMathProcess = new PixelMath();
         PixelMathProcess.expression = "iif(x()>=w()/2,$T[0]/" + Correction + ",$T[0])";
         PixelMathProcess.executeOn(CurrentImage[0].mainView);
         // Save as <name>_equalized.<extension>
         var DotPosition = CurrentImage[0].filePath.lastIndexOf(".");
         var PathWithoutExtension = CurrentImage[0].filePath.substring(0, DotPosition);
         var Extension = CurrentImage[0].filePath.substring(DotPosition);
         var NewFilePath = PathWithoutExtension + "_equalized" + Extension;
         Console.writeln("Saving equalized image as " + NewFilePath);
         CurrentImage[0].saveAs(NewFilePath, false, false, false, false);
         CurrentImage[0].forceClose();
      }
   }
   (new MessageBox("Done!", "Done",
                   StdIcon_Information, StdButton_Ok)).execute();
}




function apply_previews(LeftPreview, RightPreview, FileList) {
   // Earlier variant that works on all currently open image windows instead of a file list
   var image_list = ImageWindow.windows;
   var active_window = ImageWindow.activeWindow;
   Console.writeln("Found " + image_list.length + " images.");
   for (var i = 0; i < image_list.length; i++) {
      Console.write("" + i + " Image " + image_list[i].filePath);
      var file_path = image_list[i].filePath;
      if ((image_list[i].mainView.image.height != active_window.mainView.image.height) ||
          (image_list[i].mainView.image.width != active_window.mainView.image.width)) {
         Console.writeln(" - has different resolution than active window - skipping!");
      } else {
         Console.writeln(" - creating previews");
         var new_left_preview = image_list[i].createPreview(active_window.previewRect(LeftPreview));
         var left_median = new_left_preview.image.median();
         var new_right_preview = image_list[i].createPreview(active_window.previewRect(RightPreview));
         var right_median = new_right_preview.image.median();
         var correction = right_median / left_median;
         Console.writeln("Left=" + left_median + " Right=" + right_median +
                         " correction=" + correction);
         var pixelMath = new PixelMath();
         pixelMath.createNewImage = true;
         pixelMath.expression = "iif(x()>=w()/2,$T[0]/" + correction + ",$T[0])";
         pixelMath.executeOn(image_list[i].mainView);
         var dot_pos = file_path.lastIndexOf(".");
         var all_but_extension = file_path.substring(0, dot_pos);
         var extension = file_path.substring(dot_pos);
         var new_file_path = all_but_extension + "_equalized" + extension;
         // with createNewImage the result becomes the active window
         var new_image = ImageWindow.activeWindow;
         Console.writeln("Saving as " + new_file_path);
         new_image.saveAs(new_file_path, false, false, false, false);
      }
   }
}






function equalize_dialog() {
   this.__base__ = Dialog;
   this.__base__();

   this.LeftPreviewLabel = new Label();
   this.LeftPreviewLabel.text = "Left Preview";
   this.LeftPreviewLabel.minWidth = 30;
   this.LeftPreviewViewList = new ViewList();
   this.LeftPreviewViewList.getPreviews();

   this.LeftPreviewSizer = new HorizontalSizer();
   this.LeftPreviewSizer.add( this.LeftPreviewLabel );
   this.LeftPreviewSizer.add( this.LeftPreviewViewList );

   this.RightPreviewLabel = new Label();
   this.RightPreviewLabel.text = "Right Preview";
   this.RightPreviewLabel.minWidth = 30;
   this.RightPreviewViewList = new ViewList();
   this.RightPreviewViewList.getPreviews();

   this.RightPreviewSizer = new HorizontalSizer();
   this.RightPreviewSizer.add( this.RightPreviewLabel );
   this.RightPreviewSizer.add( this.RightPreviewViewList );

   this.inputFiles = new Array;
   this.files_Label = new Label(this);
   this.files_Label.text = "Images to equalize";
   this.files_Label.minWidth = 100;
   this.files_TreeBox = new TreeBox( this );
   this.files_TreeBox.multipleSelection = true;
   this.files_TreeBox.rootDecoration = false;
   this.files_TreeBox.alternateRowColor = true;
   this.files_TreeBox.setScaledMinSize( 500, 200 );
   this.files_TreeBox.numberOfColumns = 1;
   this.files_TreeBox.headerVisible = false;
   for ( let i = 0; i < this.inputFiles.length; ++i )
   {
      let node = new TreeBoxNode( this.files_TreeBox );
      node.setText( 0, this.inputFiles[i] );
   }

   this.filesAdd_Button = new PushButton( this );
   this.filesAdd_Button.text = "Add";
   this.filesAdd_Button.icon = this.scaledResource( ":/icons/add.png" );
   this.filesAdd_Button.toolTip = "<p>Add image files to the input images list.</p>";
   this.filesAdd_Button.onClick = function()
   {
      let ofd = new OpenFileDialog;
      ofd.multipleSelections = true;
      ofd.caption = "Select Images";
      if ( ofd.execute() )
      {
         this.dialog.files_TreeBox.canUpdate = false;
         this.dialog.inputFiles.length = 0;
         for ( let i = 0; i < ofd.fileNames.length; ++i )
         {
            let node = new TreeBoxNode( this.dialog.files_TreeBox );
            node.setText( 0, ofd.fileNames[i] );
            this.dialog.inputFiles.push( ofd.fileNames[i] );
         }
         this.dialog.files_TreeBox.canUpdate = true;
      }
   };

   this.filesClear_Button = new PushButton( this );
   this.filesClear_Button.text = "Clear";
   this.filesClear_Button.icon = this.scaledResource( ":/icons/clear.png" );
   this.filesClear_Button.toolTip = "<p>Clear the list of input images.</p>";
   this.filesClear_Button.onClick = function()
   {
      this.dialog.files_TreeBox.clear();
      this.dialog.inputFiles.length = 0;
   };

   this.filesButtons_Sizer = new HorizontalSizer;
   this.filesButtons_Sizer.spacing = 4;
   this.filesButtons_Sizer.add( this.filesAdd_Button );
   this.filesButtons_Sizer.addStretch();
   this.filesButtons_Sizer.add( this.filesClear_Button );

   var ReadPreviewsButton = new PushButton();
   ReadPreviewsButton.text = "Equalize";
   ReadPreviewsButton.onClick = function() {
      var LeftPreview = this.dialog.LeftPreviewViewList.currentView;
      var RightPreview = this.dialog.RightPreviewViewList.currentView;
      if (LeftPreview.isNull) {
         (new MessageBox("Must set Left Preview", "Error",
                         StdIcon_Error, StdButton_Ok)).execute();
      } else if (RightPreview.isNull) {
         (new MessageBox("Must set Right Preview", "Error",
                         StdIcon_Error, StdButton_Ok)).execute();
      } else if (LeftPreview.fullId.substr(0, LeftPreview.fullId.lastIndexOf("->")) !=
                 RightPreview.fullId.substr(0, RightPreview.fullId.lastIndexOf("->"))) {
         (new MessageBox("Select both Previews from same Image.", "Error",
                         StdIcon_Error, StdButton_Ok)).execute();
      } else if (this.dialog.inputFiles.length == 0) {
         (new MessageBox("Must select at least one image to equalize.", "Error",
                         StdIcon_Error, StdButton_Ok)).execute();
      } else {
         equalize_images(LeftPreview, RightPreview, this.dialog.inputFiles);
      }
   };

   this.sizer = new VerticalSizer();
   this.sizer.margin = 6;
   this.sizer.spacing = 4;
   this.sizer.add( this.LeftPreviewSizer );
   this.sizer.add( this.RightPreviewSizer );
   this.sizer.add( this.files_Label );
   this.sizer.add( this.files_TreeBox );
   this.sizer.add( this.filesButtons_Sizer );
   this.sizer.add( ReadPreviewsButton );

   this.windowTitle = "Equalize Left/Right Halves";
   this.adjustToContents();
}









equalize_dialog.prototype = new Dialog();

function main() {
   var dialog = new equalize_dialog();
   dialog.execute();
}

main();




A hair on the Blue Filter !!!!

When we took flat images, we could see a weird artifact on one side:

It looks like something not quite on the lens or the filter - but slightly in front or so.

And even worse!!! It doesn't seem to be stable. Here is a different flat:

And of course these also show up on the actual images. I tried various things but couldn't find a way to eliminate them :-( So for now, we have to live with them - which means cutting out the left hand side of the images...

Creating Defect Maps

The ML50100 camera that we are using in our Namid observatory has quite a number of defect columns:

As visible in this zoom, there are black and white defects:

Some of the defects span the entire column, some only go some way up.

Initially, I ignored these, but they are too many and too prevalent and cause artifacts in the stacked and processed images. So, I looked into DefectMap and CosmeticCorrection. I didn't have much luck with CosmeticCorrection - it didn't clean up my images. And it has A LOT of parameters - too many for my impatience. DefectMap, on the other hand, is nice and simple. Clearly I might leave some data on the table: DefectMap just knows "Pixel Defect" vs. "Pixel OK", and if a pixel is defective, it simply gets replaced by an interpolation of the pixels around it.

I then tried to figure out how to create a defect map (an image where pixels are either 1 = OK or 0 = bad). I ended up creating them manually:

First, I would stack many images of the same binning (no alignment! can be different filters) - no pixel rejection!!!

Then I would zoom into the lower part of the image and slowly scan from one side to another:

Once I reach a defect column, I scroll up until I see its end (or all the way up if the entire column is defective). Then I use Pixinsight's readout to see at which coordinates I am. I do that for all defect columns.

Finally, I put these coordinates into a PixelMath expression - iif(x()==<bad column>, 0, 1) - see the full expressions below. Select "Create New Image" under Destination and apply this PixelMath process to the image that we created.
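The defect map is just a 0/1 image; outside of Pixinsight the same construction could be sketched like this (a numpy illustration with made-up column lists, not the actual PixelMath):

```python
import numpy as np

def make_defect_map(width, height, full_columns, partial_columns):
    # Build a defect map: 1 = pixel OK, 0 = defective.
    # full_columns: x values that are bad over the whole height
    # partial_columns: (x, y_start) pairs, bad for all y >= y_start
    # (matching the y()>=... terms in the PixelMath expressions below)
    dmap = np.ones((height, width), dtype=np.uint8)
    for x in full_columns:
        dmap[:, x] = 0
    for x, y_start in partial_columns:
        dmap[y_start:, x] = 0
    return dmap
```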

The DefectMap process doesn't have a file/view list, i.e. I create an ImageContainer with all the images that I want to fix and then apply the process to that container. Using the defect maps from below and stacking the same images as before results in this:

The vertical streaks are gone - yay!

Here are the exact expressions that I used. I put them here mostly as a reference for myself in case I need to change or regenerate my defect maps.

iif(x()==212 | 
x()==220 | 
(x()==308 & y()>=5457) | 
(x()==405 & y()>=5686) | 
(x()==419 & y()>=2404) | 
(x()==675 & y()>=5015) | 
(x()==855 & y()>=4662) | 
x()==954 | 
(x()==1444 & y()>=476) | 
(x()==1660 & y()>=594) | 
(x()==1999 & y()>=451) | 
(x()==2609 & y()>=316) | 
(x()==3042 & y()>=5611) | 
(x()==3162 & y()>=4475) | 
(x()==3216 & y()>=4798) | 
(x()==3231 & y()>=381) | 
(x()==3461 & y()>=3868) | 
(x()==3509 & y()>=5645) | 
(x()==3583 & y()>=2380) | 
(x()==3631 & y()>=4101) | 
(x()==4161 & y()>=1949) | 
(x()==4331 & y()>=5718) | 
(x()==4409 & y()>=2365) | 
(x()==4569 & y()>=450) | 
(x()==4778 & y()>=2613) | 
(x()==4992 & y()>=2957) | 
(x()==5461 & y()>=5106) | 
x()==5541 | 
(x()==5763 & y()>=3177) | 
(x()==5859 & y()>=534) | 
(x()==6058 & y()>=5447) | 
x()==6098 | 
(x()==6110 & y()>=3363) | 
(x()==6336 & y()>=4897) | 
x()==6393 | 
(x()==6866 & y()>=2828) | 
(x()==6877 & y()>=5863) | 
x()==7217 | 
(x()==7222 & y()>=1135) | 
(x()==7448 & y()>=3435) | 
x()==7703 | 
x()==7952 ,0,1) 

iif(x()==106 | 
x()==110 | 
x()==453 | 
x()==477 | 
x()==617 | 
x()==632 | 
x()==948 | 
x()==999 | 
x()==1608 | 
x()==1702 | 
x()==1791 | 
x()==2284 | 
x()==2477 | 
x()==2770 | 
x()==3049 | 
x()==3193 | 
x()==3196 | 
x()==3269 | 
x()==3433 | 
x()==3588 | 
x()==3608 | 
x()==3851 | 
x()==3974 | 
x()==3976 | 
x()==4058 | 
(x()==154 & y()>=2741) | 
(x()==202 & y()>=2843) | 
(x()==209 & y()>=1180) | 
(x()==427 & y()>=2341) | 
(x()==722 & y()>=239) | 
(x()==802 & y()>=1249) | 
(x()==830 & y()>=332) | 
(x()==915 & y()>=2321) | 
(x()==1457 & y()>=1887) | 
(x()==1521 & y()>=2807) | 
(x()==1581 & y()>=2237) | 
(x()==1615 & y()>=191) | 
(x()==1754 & y()>=2822) | 
(x()==1815 & y()>=2050) | 
(x()==2165 & y()>=2859) | 
(x()==2080 & y()>=974) | 
(x()==2389 & y()>=1209) | 
(x()==2399 & y()>=1244) | 
(x()==2496 & y()>=1482) | 
(x()==2637 & y()>=2688) | 
(x()==2730 & y()>=2553) | 
(x()==2881 & y()>=1585) | 
(x()==3027 & y()>=2351) | 
(x()==3055 & y()>=1681) | 
(x()==3168 & y()>=2448) | 
(x()==3438 & y()>=2932) | 
(x()==3531 & y()>=1832) | 
(x()==3611 & y()>=567) | 
(x()==3615 & y()>=2041) | 
(x()==3724 & y()>=1709) | 
(x()==3823 & y()>=863) ,0,1)

Sunday, July 5, 2020

Automating Startup and Shutdown

When we start imaging, we have to:
  1. Turn on the Power Supply (and the mount, though we normally don't turn it off)
  2. Check if the mount is still up, and if not, start it through the MGBox v2 pulse (this only sends a pulse that will turn on the mount if it's off and turn it off if it's on, that's why we have to check first).
  3. Start the Planewave Heater Control Program (PWI3)
  4. Make sure that Dimension4 is running (to keep computer time exact)
  5. Check if the clocksync program is running, and if not, start it (to keep mount time synchronized to the exact computer time)
And at the end, do the opposite:
  1. Close clocksync
  2. Close PWI3
  3. Shutdown mount
  4. Turn off power supply
I found a couple of useful utilities/libraries:
  • dlipower - a python library to interact with the power web switch
  • MountCMD - a Java library to interact with the mount
With these, I could whip up some Python code to automate both:
import dlipower
import time
import subprocess
import socket

# Site-specific constants are redacted here: POWER_SWITCH_IP, POWER_SWITCH_USERNAME,
# POWER_SWITCH_PASSWORD, MOUNT_IP_ADDRESS, MOUNT_PORT, MGBOX_COM_PORT and the
# outlet numbers MOUNT_OUTLET / EQUIPMENT_OUTLET have to be filled in.

MGBOX_PATH = "C:\\Program Files (x86)\\MGBox V2\\MGBox.exe"

PWI_PATH = "C:\\Program Files (x86)\\PlaneWave Instruments\\PWI3\\PWI3.exe"

DIMENSION4_EXE = "D4.exe"
DIMENSION4_PATH = "C:\\Program Files (x86)\\D4\\D4.exe"

CLOCKSYNC_EXE = "clocksync_w.exe"
CLOCKSYNC_PATH = "C:\\Program Files (x86)\\10micron\\ClockSyncW\\clocksync_w.exe"

def process_exists(process_name):
    call = 'TASKLIST', '/FI', 'imagename eq %s' % process_name
    # use builtin check_output right away
    output = subprocess.check_output(call).decode()
    # check in last line for process name
    last_line = output.strip().split('\r\n')[-1]
    # because Fail message could be translated
    return last_line.lower().startswith(process_name.lower())

# --------- Turn on Mount and Power Supply ---------

# Connection parameters to web UI of power switch
switch = dlipower.PowerSwitch(hostname=POWER_SWITCH_IP, userid=POWER_SWITCH_USERNAME,
                              password=POWER_SWITCH_PASSWORD)

print("Turning on Mount (it should not have been off, but just in case)")
switch.on(MOUNT_OUTLET)

print("Turning on the Power Supply for all equipment")
switch.on(EQUIPMENT_OUTLET)

print("Wait a little to make sure...")
time.sleep(10)

# --------- Starting Mount ----------

# We can only send an on/off pulse, i.e. we first have to check if the mount is already powered on

print("Checking if mount is started by pinging IP address")
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(5)
mount_is_running = sock.connect_ex((MOUNT_IP_ADDRESS, MOUNT_PORT))
sock.close()
if mount_is_running != 0:
    print("Starting mount")[MGBOX_PATH, "-P", MGBOX_COM_PORT, "-T", "1500"])
    print("Wait quite a while ...")
    time.sleep(120)
else:
    print("Mount is already started!")

# --------- Starting Planewave Heater control app ---------

# The planewave app is re-entrant, i.e. we don't have to check if it's already running
print("Starting Planewave Heater Control app")
subprocess.Popen([PWI_PATH])

# --------- Making sure that Dimension4 is up and running ---
print("Check if Dimension4 is running")
if not process_exists(DIMENSION4_EXE):
    print("Start Dimension4")
    subprocess.Popen([DIMENSION4_PATH])
    print("Wait a little time to make sure ...")
    time.sleep(5)
else:
    print("Dimension4 already running. moving along ...")

# --------- Starting mount time sync -------------------------

# First, check if it's already running
print("Check if time sync program is already running")
if not process_exists(CLOCKSYNC_EXE):
    print("Starting time sync program")
    subprocess.Popen([CLOCKSYNC_PATH])
    print("Wait a little to make sure ...")
    time.sleep(5)
else:
    print("Time Sync program is already started")

import dlipower
import time
import subprocess

# Site-specific constants are redacted here: POWER_SWITCH_IP, POWER_SWITCH_USERNAME,
# POWER_SWITCH_PASSWORD, MGBOX_COM_PORT and the outlet number EQUIPMENT_OUTLET
# have to be filled in.

MGBOX_PATH = "C:\\Program Files (x86)\\MGBox V2\\MGBox.exe"

PWI_EXE = "PWI3.exe"

CLOCKSYNC_EXE = "clocksync_w.exe"

def process_exists(process_name):
    call = 'TASKLIST', '/FI', 'imagename eq %s' % process_name
    # use builtin check_output right away
    output = subprocess.check_output(call).decode()
    # check in last line for process name
    last_line = output.strip().split('\r\n')[-1]
    # because Fail message could be translated
    return last_line.lower().startswith(process_name.lower())

# --------- Stopping mount time sync -------------------------

# First, check if it's running
print("Check if time sync program is running")
if process_exists(CLOCKSYNC_EXE):
    print("Stopping time sync program")["taskkill", "/im", CLOCKSYNC_EXE])
else:
    print("Time Sync program isn't running ...  moving on ...")

# --------- Stopping Planewave Heater control app ---------

# First, check if it's running
print("Check if Planewave Heater Control app is running")
if process_exists(PWI_EXE):
    print("Stopping Planewave Heater Control app")["taskkill", "/im", PWI_EXE])
else:
    print("Planewave Heater Control app isn't running ...  moving on ...")

# --------- Shutdown Mount -------------------------
print("Shutdown mount")[MGBOX_PATH, "-P", MGBOX_COM_PORT, "-T", "1500"])
print("Wait to make sure that the mount is really turned off")
time.sleep(30)

# --------- Turn off Power Supply ---------------------

# Connection parameters to web UI of power switch
switch = dlipower.PowerSwitch(hostname=POWER_SWITCH_IP, userid=POWER_SWITCH_USERNAME,
                              password=POWER_SWITCH_PASSWORD)

print("Turning off the Power Supply for all equipment")


Works like a charm! That was a fun activity!!!

Wednesday, July 1, 2020

The ML50100

The FLI ML50100 is an amazing camera - with a pixel size of 6 microns and 8176 x 6132 pixels!!!

But, boy, is it tricky to calibrate the images. There are several challenges:
  • Because of the large sensor, the shutter is also very large - and hence slow. Flats need to be at least 8-10 seconds - otherwise the shutter will show up.
  • Being front illuminated, the chip has RBI (residual bulk image).
  • The chip is so big, that it actually has two readouts - one for each half. And each readout has slightly different characteristics (bias, gain)
  • Richard thinks that there could be bias drift depending on Ambient temperature
Here is a (stretched) 10 min dark master frame:

And a bias master frame:

The seams are clearly visible. 

First, calibrate light frames using the master dark:

Pre-dark calibration:
If you look closely, you can see the seam in the middle ...

Post-dark calibration:

Looks pretty clean. But if I now stack my (40) luminance frames (without alignment - I just want to check if the seam is gone):

The seam is clearly still there, i.e. the individual frames weren't completely cleaned up. And this got even more complicated with the flat frames: because our flat panel broke, we had to use sky flats, i.e. the exposure time is different for each. And because of the two different halves, we can't scale our darks, i.e. we have to take darks of the same duration as the flats ...

We have three theories where this could come from:
  1. The RBI issue - though Richard said at 10 minute exposure and -15C we shouldn't see much if any
  2. The bias is dependent on the ambient temperature
  3. Our darks aren't clean (we had some darks with clear light in them - maybe they all suffer from minor light leaks ...)
I ran two experiments to check this:

1. Darks with roof closed vs. open
I took several 10 min dark frames with the roof closed and with the roof open (with some moonlight):
(six frames, alternating: Roof Open / Roof Closed)

The levels are just so slightly higher when the roof was open...

2. Bias at different ambient temperatures
I took a stack of 20 bias frames at various ambient temperatures:

Ambient Temp [C] | Level - Overall | Level - Left | Level - Right

Not much difference - except for the first step from 28 to 26.3 degrees. The difference here was that the 28 degree bias frames were taken with the roof closed, the others when it was open...

This seems to indicate that we have some light leaks, but not so much that ambient temperature plays a role.

If this is indeed the case, then there isn't much that we can do about it now... But in a month, I'll be at the site again (for some leftover work). At that time I could take better darks by either covering the scope (in particular the front) or taking the camera off completely and taking darks with the front fully covered.