Another update to AstroMC

Hmm, in the day or so since I updated the Foster Systems software to 2.4.1, they released 2.5.1. So, I went ahead and updated to that one as well. Firmware is now B9.


Update AstroMC, FocusMax

Updated AstroMC/B6 from 2.3.1 to 2.4.1/B8. I am hoping this will eliminate the recent spate of ACP disconnects due to "dome is already slewing" errors. This began when I installed the replacement dome controller (the new-format unit in the gray box). We'll see.

Also updated FocusMax to 4.1.0.25. When I ran the new version it attached to Maxim (already running) and immediately clobbered the STF8300 USB port; Maxim no longer showed the camera in its camera list.

Tried repowering the 8300 and reseating the USB cable, with no luck. Finally rebooted Thor, and everything seems to work again.

PixInsight versus Maxim: Subframe Calibration Comparison

Introduction

While trying to process some images taken with my SBig STF-8300M in PixInsight I ran into problems: PI would not correctly read the images, either the original image collected by Maxim or the calibrated image. This seems to come down to whether the images are saved in 16-bit signed or unsigned format; there is a lot of discussion in the PI forum about how this should be done, Maxim using non-standard FITS files, etc. In the end it appears the "fix" is to manually reformat all of the calibrated light subs in Maxim into 32 bit floating point format.
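Incidentally, whether a 16-bit FITS is meant to be read as signed or unsigned comes down to the BITPIX/BZERO/BSCALE header keywords (BZERO = 32768 is the standard convention for storing unsigned data in signed 16-bit integers). A quick way to peek at these, as a hypothetical sketch using Python with astropy (the file name is made up), would be:

<code>
# Hypothetical check (Python + astropy), not part of my normal workflow:
# dump the keywords that control how 16-bit FITS pixel data is interpreted.
from astropy.io import fits

with fits.open("light_sub.fit") as hdul:     # example file name
    hdr = hdul[0].header
    print("BITPIX:", hdr.get("BITPIX"))      # 16 = stored as signed 16-bit integers
    print("BZERO :", hdr.get("BZERO"))       # 32768 means "interpret as unsigned 0..65535"
    print("BSCALE:", hdr.get("BSCALE"))      # usually 1
</code>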

During the forum discussion people made statements like

      It’s a really bad idea to let maxim touch your files at all… the less maxim does, the better.
      Stop letting maxim calibrate images and do it all in PI. the results are superior anyway.
      Avoid MaximDL like the plague once it has captured the image.

Interesting. Until now, I have been happy with Maxim calibrating the images. It is easy, happens automatically under ACP/Maxim, and the only problems I have encountered were caused by Some Idiot using bad flat or dark subs. Googling around, I could not find any studies comparing the performance of Maxim versus PI in calibrating images, so I don't know the source of these statements about the superior performance of PI.

On the other hand, my earlier tests on image alignment showed that PI was superior to Maxim and Registar. Maybe PI also has better performance in calibration? Would that offset the extra manual work required to prepare and calibrate the images in PI (compared to almost no effort in Maxim)?

So, here goes my test. Does PixInsight calibrate light subs better than Maxim?

Test data

I assembled a test data set of bias, dark, flat, and light frames and used the same frames for calibration under both PI and Maxim.

  • 1 light frame (an OIII light sub, 30 minute exposure). This is the image to be calibrated. The camera is trying to get to -10C, but the ambient is high enough that the actual temperature is -4C. I just grabbed a sub from last night’s imaging run. Maxim calibrates using the original file. Since PI cannot open this file correctly, it uses a copy of the same file saved as 32 bit floating point.
  • 20 Dark frames at -10C, 30 minute subs each. Collected in Maxim.
  • 25 dusk sky flats with OIII filter collected in Maxim under ACP’s control. These flats are between -5 and -10 degrees C; at dusk the ambient temperature is still around 100 Fahrenheit so the camera can’t quite cool to -10C.
  • 40 Bias frames at -10C collected in Maxim.

In PI, I set the probe to display values in 16 bit integer mode. Hopefully this allows comparison of “ADU” units with the
corresponding Maxim values.
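(If I understand the readout options correctly, the 16-bit display simply shows the normalized [0,1] pixel value multiplied by 65535, so a value of 0.0071, for example, reads as about 465 "ADU".)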

The Maxim calibration involved using “Set Calibration” to load the various calibration individual subs into Maxim and clicking
the button that creates Masters for the groups of files (Bias, Dark, and Flat). I then opened the raw file and clicked “Calibrate”.
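As a reminder (my paraphrase, not either program's documentation), the basic calibration arithmetic both programs perform is roughly calibrated = (light - master dark) / master flat, with the master flat itself bias-subtracted and normalized to a mean of about one. That helps explain why a bad flat shows up as residual vignetting or dust motes, and why over-subtraction shows up as a black, zeroed-out background in the failed cases below.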

Maxim Calibration results showing the nebula (upper left) and the 3 master files. Click for full size.

In the fight to get PI to calibrate the images, I tried several different cases, listed below. The cases are variations on using floating point versions of the calibration files for PI.

Note that only Case 4 adequately calibrated the images.

Case 1 – All files float

+ all files (bias, dark, flat, light) are exported in floating format from Maxim
+ FITS configured to rescale with 0-65535

FITS file config used to access floating point files.

Result: the PI conversion runs, but is not correct.

All individual subs (light, dark, flat, bias) exported from Maxim as 32 bit float. PI builds its own masters. Click for full size.

  • Calibrated image is too dark: zeros in the background, no nebulosity.
  • Flat is wrong; it looks a lot like the bias.
  • Values in the PI-created calibration master files differ from Maxim's:
    Bias ranges 40000..15000..40000 across the frame; Maxim => 1950..1900..1950.
    Flat ranges 0..6000..0 across the frame; Maxim => 15500..18000..15500.
    Dark values are around 1000 across the frame; Maxim => around 300.

Case 2 – FITS configuration at 32767

This was the same as Case 1, except that the FITS configuration is set to 0..32767 instead of 0..65535. This was tried in an attempt to address image values being off by a factor of 2.

Result: PI Conversion fails to run.

After PI calibrates the individual flats the integration process tries to stack the flats to make the master. This process fails because one of the calibrated flats is “open in another process” and cannot be opened. On different runs, it will be a different file that is open in another process.

Case 3 – Use Maxim calibration masters

It turns out the Maxim calibration masters are already in float format. This case uses those masters directly in PI, rather than converting the individual subs to float and having PI create its own masters.

FITS is configured as 65535 again (for the light frame).

  • Result: Calibration runs, but it looks like the flats are not correctly applied.
  • Dust motes are visible.
  • Darker center of image. Looks like flats are too strong?
  • Values (flat, dark, bias) of the masters when opened in PI match those reported by Maxim.

Case 4 – Use original individual 16 bit Maxim calibration files

Another weirdness: it turns out the individual calibration subs created by Maxim are readable by PI, even though they are in the same 16-bit format that doesn't work for the light frames. Why the calibration files can be read remains a mystery.

So, in this case I let PI use the original Maxim individual subs to build its masters. The only difference between the PI and Maxim processing in this case is that PI starts with the float version of the light sub.

+ Calibration files (bias, dark, flat) are left in the 16-bit state as created by Maxim. No conversion performed.
+ Light file is exported in 32 bit float format. This is necessary because PI does not open the 16-bit light file correctly.
+ FITS configured to rescale with 0..65535

Result: As seen below, the calibration looks pretty good, very similar to the Maxim result.

PI calibration using Maxim individual calibration subs directly to create its own masters. Click for full size.

  • Dark master displays around 2000 “ADU”, versus Maxim’s 300.
  • Bias master is slightly lower than Maxim: 1870..1800..1870
  • Flat master is similar to Maxim, 15800..18000..15800
  • Bias has much larger values than dark. I don’t understand this. However, the same thing is true in Maxim, so OK.

Comparison of PI versus Maxim calibrated light sub

So, how does the case 4 calibration of the light frame differ between the Maxim process and the PI process? The image below shows the two images, where the left image is the PI result and the right image is the Maxim result (converted to float).

PI (left) and Maxim (right) calibration results. Click for full size.

Visual results:

  • The Maxim file is smaller, 2 MB versus 8 MB for the float form. The PI result is also an 8 MB XISF file. I don't know how Maxim got the file down to 2 MB; maybe that is a clue to why PI can't open it?
  • PI result has more hot pixels.
  • PI result has more noise (much grainier to my eye).
  • Stars look the same (roundness, brightness) even when zoomed in significantly.
  • Maxim result may have more dark pixels; a little hard to tell because of the increased noise in the PI image.

Tool Measurements

I used several PI tools to measure the noise and SNR for the converted images. Measurements for the Maxim result used the Maxim calibrated image converted to 32 bit float.

Statistics tool results:

            Maxim (K)    PI (K)
count (%)   99.75290     99.95980
count (px)  2116573      2120963
mean        0.0072616    0.0075407
median      0.0071210    0.0072508
stdDev      0.0068079    0.0073814
avgDev      0.0011528    0.0013096
MAD         0.0009753    0.0009751
minimum     0.0000006    0.0000476
maximum     0.9739980    0.9766412

  • PI has more pixels?
  • PI has slightly higher background (median)
  • PI has larger deviations

Noise comparison script:

The NoiseEvaluation tool was used after performing a linear fit on the PI image to match it to the Maxim result.

PI    = 8.6e-4
Maxim = 9.34e-4

  • Maxim has slightly more noise? This does not jibe with what I see visually.

SNR Calculation:

I calculated an SNR ratio as MAD / Noise:
PI = 1.13   Maxim = 1.044

  • Again, the PI result seems to have a slightly better SNR ratio (by about 0.09), although visually I wouldn't have thought so.
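Checking the arithmetic with the values above: PI = 0.0009751 / 8.6e-4 ≈ 1.13 and Maxim = 0.0009753 / 9.34e-4 ≈ 1.04, using the MAD values from the Statistics table.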

ContrastBackgroundNoiseRatio tool:

Parameters: background percentile = 100th, cycle-spins = 8

                    Maxim     PI
Median              466.48    475.15
Contrast             61.31     68.67
Background Mean     466.41    481.45
Background Noise     66.63     77.95
CBNR                  0.92      0.88

  • This tool indicates the Maxim result has a better contrast-to-background-noise ratio (CBNR 0.92 vs 0.88) and less background noise (66.63 vs 77.95).
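For what it's worth, CBNR appears to be simply Contrast divided by Background Noise: 61.31 / 66.63 ≈ 0.92 for Maxim and 68.67 / 77.95 ≈ 0.88 for PI.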

Conclusions:

Visually it seems clear the Maxim result is better. The various PI tools indicate that the two calibrations are fairly close; some tools favor the Maxim image, some favor the PI image. One might suspect I am not running the tools correctly. :)

Case 5 – Case 4 plus Superbias and optimized dark frames

I turned on the optimize dark frames parameter (0.004) in the dark frames section. I also created two Superbias frames, one based on the Case 1 bias master (from the float-format bias frames) and one based on the Case 4 bias master (created from the original Maxim bias subs).

Adding Superbias and dark frame optimization. Click for full size.

This did not work. The background is over-subtracted (lots of zero values); somehow the superbias is not working correctly. Both superbias files gave the same result.

I also tried setting only the Optimize Darks parameter to 0.004; that is the only difference between this run and Case 4. The result is essentially indistinguishable from Case 4, so optimizing the darks didn't seem to help.

Thus, it seems that

  • the superbias makes things worse, and
  • optimizing dark frames does not help appreciably.

Final Conclusions

  1. The Maxim calibration appears to be at least as good as the PI calibration; I think it is better.
  2. There may be further fine tuning possible in PI to improve the results.
  3. Using PI for calibration requires a lot of manual operations compared to the almost automatic operation of Maxim. I see no reason to change to PI for calibration.

Accessing my internal system using brew.my-sky.com

ACP provides a neat feature where you can get a dynamic DNS hostname that lets you access your observatory by a unique name (mine is brew.my-sky.com).

I have the problem of not being able to access my observatory via brew.my-sky.com from within my home network. Access works fine from outside the home, but within my LAN I have to address the observatory as 192.168.x.x. This leads to awkward coding of web pages/twiddlers, bookmarks that don't always work, etc.

Solutions
1) Supposedly, buying a fancier router that supports NAT loopback can resolve this issue. I was preparing to do this; the model I was looking at costs about $130. I did not test this out since solution 2 worked.

2) Change the etc/hosts file on your home computer. This file resolves name addresses before the system goes out to the Internet for DNS resolution.

In my case, the observatory computer 192.168.1.10 hosts the ACP server. My home computer is where I mostly work from. I want my twiddlers, and my blog entries, to use brew.my-sky.com to access a couple of web pages I have created. These do things like show current weather, forecasts, and operate the power controller in the observatory.

The hosts file is located in c:\Windows\system32\drivers\etc. I added the line
<code>
192.168.1.10 brew.my-sky.com
</code>
There should be no leading spaces. A single space or tab separates the IP address and the name.

Caveats:
1) When you edit the hosts file, Windows will not let you save it back in the etc directory. Save it into your Documents directory, remove the .txt extension Notepad insists on adding, then COPY (don’t move) the file to etc.

2) In theory the system immediately starts using the new hosts file. To make sure, use a cmd.exe window and enter “ipconfig /flushdns” to clear the cached entries.

3) Test by pinging your domain name: "ping brew.my-sky.com" (see the example after this list).
a) If the hosts file is not active, you will see your external IP address (something like 75.67.23.110).
b) If it is working, you will see the local address 192.168.1.10.

4) You may need to restart your browser. I did not need to, but several net posts indicated this was necessary; Firefox in particular may cache DNS entries.

5) One system continued to have problems (a laptop with Windows 8.1). I found that I needed to add permissions for Users to the hosts file (Properties/Security on file hosts). It would only let me add a particular setting, but that worked fine.
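For reference, with the hosts entry active a successful ping should look roughly like this (standard Windows ping output; times and TTL will vary):

<code>
C:\> ping brew.my-sky.com

Pinging brew.my-sky.com [192.168.1.10] with 32 bytes of data:
Reply from 192.168.1.10: bytes=32 time<1ms TTL=128
</code>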

Maxim hangs STF8300

I have been having an intermittent problem where ACP imaging fails because Maxim hangs while reading the camera. I start an imaging run and it fails after several images.

Symptoms:
– The cooling on the camera reports as being off. When the camera is restarted, it often gives a Filter Wheel err 7, or Maxim crashes.
– The exposing tab says “Reading”. Guiding appears to be continuing.
– Sometimes Maxim gets a filter wheel error 7.

I set up some tests using only Maxim and the camera (no FocusMax, ACP, SkyX), running sets of 40-50 exposures, and found that
a) the problem occurs if I guide on the camera simulator rather than the STi;
b) the problem occurs if I have no guiding;
c) the problem does NOT occur if the exposures are 5 minutes or less. The original problem occurred with 30 minute exposures; it still occurs with 10 minutes.

Via SbigDriverChecker I found that the camera drivers were out of date again. I updated the Windows drivers and also updated the camera firmware.

Initial test indicates that the problem has gone away; will need to keep an eye on this.

I also found a scary web post indicating there is a setting in the Power Settings tab which causes Windows to turn off power to a USB port if it hasn’t been used for a few minutes (unclear how many). I will turn this setting off, by default it is on.

Update: turned off the power setting and started to run a series of darks. It keeps failing, similar to the previous problems, though with a slight difference: the progress bar shows "300 of 300" rather than "Reading". So, I will have to keep looking into this. :(

Update: well, it appears the problem was a slightly loose USB cable. I went out and re-seated the cable on both ends; it may have been slightly loose on the computer end. After that, a couple of tests worked OK. Crossing my fingers…

FocusMax resolved?

For months, I have been having FocusMax / ACP interaction problems.

The Problem

FocusMax (3.8.0.15) keeps crashing with the typical "Object not set" message. This only occurs when ACP creates a FocuserControl object to adjust the focuser position when a filter gets changed. The crash is intermittent and occurs at different points in the processing: ACP creates the focuser object, issues perhaps 8-10 commands against it, then releases the object; the crash may occur after any one of those commands.

The ACP AutoFocus.vbs script would often fail; the FilterOffsets.vbs script would always fail after 3-5 filters. Typical ACP imaging scripts do regular filter changes for pointing and focusing, and these would typically fail after an hour or two. The problems occur with either Maxim 5 or Maxim 6, with real cameras or with simulators.

I tried a number of things and eventually rebuilt the entire observatory computer: I restored the original system from the backup partition, then reloaded all of the software. It continued to fail; now I was also seeing a problem where the Optec was failing to report its position to FM.

We (Steve Brady, Bob Denny, myself) were leaning toward some type of hardware issue. So, I went out to map out the USB cables so I could

1) understand how everything was connected,

2) ensure that appropriate USB 1.0 versus 2.0 hubs are being used. I sometimes get a message “This device can run faster” when the STF8300 is powered up.

The map is posted elsewhere. I found a very nice utility, USBView, a Microsoft tool which unfortunately is not included in the base OS. It draws a useful tree structure showing where everything is attached and what USB version it uses.

First, I discovered that when I rebuilt the system I had installed an old version of the Optec driver (mislabeled folder). I installed the correct version, and now the Optec problem is gone.

Second, two weird things happened when I was tracing the USB cables. I was unplugging and re-plugging in cables to make sure I had the right cable/device pairing.

a) Twice, when I plugged a cable back into the computer, the OS re-installed the device drivers. Weird; the device was already installed and operating, so the system should not have needed to re-install drivers that were already there.

b) When I plugged in the Edgeport cable, I got a Blue Screen of Death. The Edgeport supplies 4 RS-232 ports off a single USB cable; I use 2 of these to operate the two focusers (Optec and LazyFocus). When the system was rebooted the Edgeport showed up, but none of the ports were there. I uninstalled/reinstalled the Edgeport driver, rebooted, and the COM ports were back.

After weirdness (b), everything is now working. Somehow the USB drivers were "messed up" but got themselves straightened out. I can now run AutoFocus.vbs and FilterOffsets.vbs with no problems, using either the simulators, the STF8300, or the ST2000M. Everything looks peachy now :)

I moved to FocusMax version 4, the new (non-free) version. Problems continued, with a new wrinkle.

More Thoughts on Drizzling

I have been thinking about the effects of combining SubFrameSelector-approved frames (versus using all the eye-balled frames), and the effect of drizzling. I was looking specifically at the integrated image noise, but maybe a separate metric indicating the signal-to-noise ratio would be more telling: perhaps the noise is greater in the "better" frames, but the overall sharpness or contrast of the image is better.

So, I wandered through PixInsight and pulled in some more metrics as shown in the table below.

The column N indicates the number of subframes combined to make the image. For example, for the Green subs I combined the 7 subs approved by SubFrameSelector for the first image, and all 9 available (eye-balled OK) frames for the second image.

The FWHMEccentricity script provided the Median Eccentricity column.

The NoiseEvaluation script provided Noise StdDev (it shows up on the console).

SubFrameSelector provided FWHM, SNRWeight, and Noise. When images are drizzled, I divided the resulting FWHM by 2 for comparison.

The Statistics tool provided MAD.

ContrastBackgroundNoiseRatio tool provided the CBNR value, which I think is fairly analogous to SNR.

A PixInsight forum discussion indicated that the ratio of avgDev to Noise gives an SNR analog. I used MAD instead of avgDev since I happened to have those values, and MAD seemed to follow the same pattern as avgDev.
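As a concrete example, in the first Green row of the table below, SNR = MAD / Noise = 0.002620 / 9.45E-04 ≈ 2.77, which is the value shown in the SNR=MAD/Noise column.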

ImageIntegration Effects

                        N   MAD       FWHM     Median Ecc.  CBNR  Noise StdDev  SNR=MAD/Noise  SNRWeight  Noise
Green
  SubFrames             7   0.002620  1.221    0.5315       0.19  9.45E-04      2.77           34.44      61.91
  All Frames            9   0.002459  1.243    0.5422       0.26  8.44E-04      2.91           36.84      55.32
  SubFrames & Drizzle   7   0.002304  1.36     0.4657       4.50  5.88E-04      3.92           56.23      38.51
  All Frames & Drizzle  9   0.002603  1.3945   0.4893       5.12  6.22E-04      4.18           66.71      40.85
  SubFrames & Drizzle4  9   0.002300  1.326    0.4771       5.50  3.42E-04      6.72           172.7      22.44
Red
  SubFrames             9   0.004080  1.182    0.5098       0.25  9.91E-04      4.12           45.97      64.93
  All Frames           10   0.004120  1.19     0.519        0.31  9.55E-04      4.32           48.58      62.58
  SubFrames & Drizzle   9   0.003550  1.299    0.4212       6.72  5.87E-04      6.04           83.94      38.49
  All Frames & Drizzle 10   0.003150  1.302    0.4363       7.36  4.99E-04      6.31           92.05      32.72
Halpha
  SubFrames            18   0.000649  1.252    0.5035       0.24  1.52E-04      4.26           39.05      9.977
  SubFrames & Drizzle  18   0.000524  1.3945   0.4352       6.97  7.94E-05      6.60           78.98      5.206
  SubFrames & Drizzle4 18   0.000527  1.35225  0.4446       7.66  4.61E-05      11.43          237.8      3.021
OIII
  SubFrames            11   0.001290  1.765    0.4085       0.44  4.68E-04      2.76           32.69      30.68
  All Frames           16   0.001201  1.775    0.3931       0.21  4.19E-04      2.87           37.7       27.43
  SubFrames & Drizzle  11   0.001220  1.847    0.3936       6.18  3.08E-04      3.96           61.19      20.2
  All Frames & Drizzle 16   0.001060  1.8765   0.3656       6.66  2.57E-04      4.12           71.67      16.85
UHC
  SubFrames            22   0.004610  2.004    0.402        0.28  6.09E-04      7.57           115.3      39.9
  All Frames           28   0.004426  2.05     0.3505       0.41  5.84E-04      7.58           114        38.28
  SubFrames & Drizzle  22   0.003934  2.1115   0.4063       9.24  3.61E-04      10.91          211.1      23.63
  All Frames & Drizzle 28   0.003540  2.1205   0.3273       9.21  3.25E-04      10.89          212.2      21.3

Conclusions

  1. Drizzle really helps the SNR ratios. I suppose this shouldn’t be a big surprise – I imagine that is the point of the whole process.
  2. Looking at Drizzling by 4 instead of 2, the noise only changes slightly but the SNR goes up significantly.
  3. Drizzling really helps the eccentricity of the stars. PI forums indicate that ideally, stars should have about 0.44 or less for eccentricity. My raw frames tend to be about 0.5, but after drizzling the eccentricity drops to 0.42 or better! Somehow the stars are rounder.

I looked at the Eccentricity plots produced by the FWHMEccentricity tool. The eccentricity is distinctly better in the drizzled image.

 

Eccentricity map for non-drizzled Green image.

Eccentricity map for same Green images drizzled.

PixInsight SubFrameSelector and Drizzle

I was reading about the PixInsight SubFrameSelector tool and wanted to see if it would be more useful for eliminating poor images from my workflow. So, I decided to try integrating with and without the bad frames to see the effect. Two issues were on my mind:

  1. Historically, I have simply eyeballed the images and removed obviously bad subframes (trailed stars, guiding errors, etc.). Perhaps a more rigorous approach would yield sharper results.
  2. I read the presentation "Image integration techniques: Increasing Signal/Noise Ratio and outlier rejection with PixInsight" by Jordi Gallego. Most of the discussion was typical and reasonable. However, at the end he showed a shocking example where he added an obviously bad subframe with guide errors and got a better integrated image! Hmmm, maybe I shouldn't be eliminating the bad frames!

I had some images awaiting processing, taken with an SBig ST2000XM through a Tak Sky90 with various filters. I haven't finished taking all the images (Blue is missing, and the HAlpha needs to be redone). Also, I have been experimenting with using a UHC filter for my Luminance frames; I know I am not supposed to use UHC for astrophotography, although I don't know why.

I used the expression

(FWHMSigma < 2) && (SNRWeightSigma > -2) && (EccentricitySigma < 2.5)

to approve subframes. I should note that I had already removed obviously bad frames, so this was rejecting frames that “looked” OK.
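(As I understand the Sigma-suffixed properties, they express each frame's value in sigma units relative to the whole set, so this approves frames whose FWHM is less than 2 sigma above typical, whose SNRWeight is no more than 2 sigma below, and whose eccentricity is less than 2.5 sigma above.)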

At the same time, I wanted to play with the Drizzle tool. The criteria for using this are

  1. Lots of frames, on the order of 20. My sub counts range from 7 to 28 (15 minute) subs, depending on which selection criteria I use, so I will also get to see the effect of drizzling on smaller sub counts.
  2. Undersampled frames. My setup gives 3.74 arcseconds per pixel, so I think I meet this criterion; the FWHM reported by FWHMEccentricity is less than 2 pixels.
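As a quick check: at 3.74 arcseconds per pixel, a FWHM of under 2 pixels corresponds to under about 7.5 arcseconds, so the star profiles really are spread over only a couple of pixels.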

I also want to see the effect of drizzling with a scale of 2 versus 4. If 2 works, maybe 4 would be better, other than the huge file size?

The table below summarizes the results. I either integrated the subs recommended by SubFrameSelector, or used all of the frames. In the HAlpha case SubFrameSelector accepted all 18 frames, so there isn’t a smaller subset case.

I used the MAD value reported by the Statistics tool as my measure of noise. I also looked at average deviation (Statistics) and the NoiseEvaluation script; these values showed the same patterns as the MAD results, so I didn’t put them in the table.

ImageIntegration Effects: SubFrameSelector and Drizzle

                        N (frames)  MAD
Green
  SubFrames              7          0.002620
  All Frames             9          0.002459
  SubFrames & Drizzle    7          0.002304
  All Frames & Drizzle   9          0.002603
  SubFrames & Drizzle4   9          0.002300
Red
  SubFrames              9          0.004080
  All Frames            10          0.004120
  SubFrames & Drizzle    9          0.003550
  All Frames & Drizzle  10          0.003150
Halpha
  SubFrames             18          0.000649
  SubFrames & Drizzle   18          0.000524
  SubFrames & Drizzle4  18          0.000527
OIII
  SubFrames             11          0.001290
  All Frames            16          0.001201
  SubFrames & Drizzle   11          0.001220
  All Frames & Drizzle  16          0.001060
UHC
  SubFrames             22          0.004610
  All Frames            28          0.004426
  SubFrames & Drizzle   22          0.003934
  All Frames & Drizzle  28          0.003540

Conclusions

1. Rejecting subframes seems to be counterproductive.

Adding the additional subframes, rather than excluding the SubFrameSelector rejects, generally gave a somewhat lower noise result. The exception was the Red filter, where adding the one rejected frame made the noise slightly higher.

2. Drizzling works well, even with small frame counts.

Even with only 7 subs, drizzling gives lower noise. The noise is reduced, and the image is clearly better visually. Many of the dark pixels are gone, and the stars are much rounder and softer. See the zoomed previews below.

Drizzling with a scale of 4 versus 2 did not give appreciable improvement, so I don’t need to deal with the huge files:)

 

7 Green subs, no drizzling.

7 Green subs with drizzle.

 

Arduino Temperature Project

My Arduino temperature probe is about done. A small Arduino Uno uses two TMP36 temperature sensors to log temperature. The temperatures are retrieved by ACCycle.exe, newly written in VB.NET; the previous version was an HTA, but Microsoft has kindly removed access to COM ports (MSCOMM.OCX) in Windows 8, so I needed to rewrite everything.

temperature_tmp36fritz.gif

The above image from the Adafruit site article summarizes the connection to the TMP36 sensor. Very easy.

I wired two 1/8 inch stereo jacks for the sensors. One sensor is on a very short (4″) wire, the other is on a longer (about 6′) cable. The plugs go into two jacks mounted on the side of the project box.

JackWiring

Wiring of the 1/8 Stereo plug and jack. The sensor is inserted into the TO-92 socket; it can be pulled out if needed (maybe it will fry itself or something?).

The project box was $1 from Walmart, found in the business items section (pens, stationery, tape, etc). I cut a hole for the USB port and drilled holes for the jacks and the Arduino mounting screws.

The project box with Arduino and jacks. The two probes are shown plugged in.

The Arduino checks every second for any characters transmitted on the Serial line (USB). As seen in the image, the small yellow LED lights when the serial input is checked. If it receives a “T” character it reads the two sensors and returns a string of the form

196.07 192.01 F 91.15 88.90 C 1.41 1.39 V

The first two fields are the port 1 and 2 Fahrenheit temperatures, the next two fields are the Centigrade values, and the last two are the actual voltages read.
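The voltages make a handy sanity check: the TMP36 outputs 10 mV per °C with a 500 mV offset, so (1.41 - 0.5) × 100 ≈ 91 °C, matching the Centigrade field. And for anyone who wants to poll the probe without ACCycle, a minimal host-side reader might look like the sketch below (hypothetical; it assumes Python with pyserial, the Arduino on COM3, and 9600 baud; adjust to match your sketch):

<code>
# Hypothetical host-side reader (Python + pyserial); the real client is ACCycle (VB.NET).
# Assumes COM3 and 9600 baud -- change both to match the Arduino sketch.
import time
import serial

with serial.Serial("COM3", 9600, timeout=2) as port:
    time.sleep(2)                                 # the Uno resets when the port opens
    port.write(b"T")                              # request a reading
    fields = port.readline().decode().split()     # "196.07 192.01 F 91.15 88.90 C 1.41 1.39 V"
    f1, f2 = float(fields[0]), float(fields[1])   # Fahrenheit, ports 1 and 2
    c1, c2 = float(fields[3]), float(fields[4])   # Centigrade
    v1, v2 = float(fields[6]), float(fields[7])   # raw TMP36 voltages
    print(f"Port 1: {f1:.1f} F / {c1:.1f} C ({v1:.2f} V)")
    print(f"Port 2: {f2:.1f} F / {c2:.1f} C ({v2:.2f} V)")
</code>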

Main screen of ACCycle, showing 3 operational modes: Temperature Control, Timer Control, and Manual Control.

Options screen for ACCycle

The ACCycle program displays the two probe temperatures, and optionally uses one of the probes to turn the Air Conditioner on/off.

The temperature data is logged into two files:

  1. A Boltwood one-line file. I feed this to Weather Display so I can track the temperature of the controlling probe.
  2. A log file of the temperatures in csv format so I can plot the temperature over time in Excel.

If someone is interested in the source to either the Arduino sketch or the ACCycle vb.NET program, drop me a line at eridanibrew – AT – gmailDOTcom.

 

FocusMax fixed

Finally, success with the FocusMax problem. Steve made some changes to the constructor/destructor logic and cleaned up a memory leak. The new version 3.8.0.5 installed and ran fine for 320 iterations under ACP, Maxim, etc., using the FocusFilterTest plan.