Kernel Filters in HTML+JavaScript



Kernel filters are a common approach for modifying images in various image processing applications. They can be used to sharpen an image, blur it, or extract attributes from a picture for further processing. Implementation of the filters is simple and straightforward. I wanted to do some experiments with kernel filters on my phone, but to my surprise there weren't many options available, so I decided to make my own. Before developing something for my phone I started off in the browser, since my Chromebook was handy. Here I'm sharing the results.

What is a Kernel

Kernels are known by many names: kernel, convolution matrix, and mask all refer to the same thing. Convolution is the process of combining the value of a pixel and the values of its neighboring pixels, applying a weight to each. The weights, or kernel, are often expressed using matrix notation. For each pixel in an image, the kernel is applied to the pixel and its neighbors to determine the pixel's new intensity.
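
To make that concrete, here is a small made-up example. Suppose a pixel and its eight neighbors have the intensities on the left, and we apply the 3×3 sharpening kernel on the right:

10 20 30        0 -1  0
40 50 60       -1  5 -1
70 80 90        0 -1  0

The pixel's new intensity is the sum of each intensity multiplied by the weight in the same position (the zero weights contribute nothing): (-1×20) + (-1×40) + (5×50) + (-1×60) + (-1×80) = 50. On a smooth gradient like this one the sharpening kernel leaves the center value unchanged; near an edge the same arithmetic exaggerates the difference between neighbors.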

Manipulating Images in HTML and JavaScript

In HTML and JavaScript the image element doesn't give direct access to its pixels for manipulation. Instead, the canvas element can be used to read and write pixels. Well, not directly: the canvas's context has a method named getImageData() that returns a structure containing a number array of the pixel intensities. After the elements have been manipulated, the result can be copied back into the canvas with putImageData().

Visually we see the pixel data as being organized in rows and columns; in memory it is organized as a single dimensional array. To read and write the correct pixel you'll need to know how it's organized. A single pixel is composed of 4 numbers: 3 of the numbers are intensities for red, green, and blue, and the fourth is for transparency. These 4 elements make up a single pixel. Pixel data is stored contiguously, starting with the upper-left pixel of the image and moving to the right from there. Once the end of a row is reached, the encoding continues with the leftmost pixel of the next row.

Pretend that you had an image that was 10 pixels wide and 10 pixels tall. If you wanted to read the pixel on the third row and fourth column (keeping in mind that zero-based addressing is being used), you would need to move 20 pixels into the array to get to the third row and then 3 more pixels to get to the fourth column. In other words, you need to read the pixel at index 23. Since pixels are composed of four elements, this works out to reading starting at index 92 of the array for the red portion of the pixel and indices 93, 94, and 95 for the green, blue, and transparency portions. Given an X and Y coordinate, the equation for determining what address to start reading at is as follows.

PixelIndex = (y*imageWidth+x)*4;

Since the application of the kernel can overlap pixels that are outside the bounds of the image, I needed to decide how to deal with attempts to read pixels that are out of range. I could have a constant value returned (like zero for all elements), have the read address wrap around to the other side of the image, or cap the read coordinates. I chose to cap the read coordinates: an attempt to read a coordinate that is less than zero results in the coordinate being changed to zero, and an attempt to read beyond the edge of the image results in the edge of the image being read.

I've covered enough theory for us to build our first kernel filter in JavaScript. Now to get to building. Kernel filters are arrays of multipliers and can be of any dimension. The basic pieces of information that we'll need are the dimensions of the kernel and an array holding the values for each of its elements. We also need to mark which position in the filter represents the center pixel.

function kernel(width, height, centerX, centerY) {
	this.width = width;
	this.height = height;
	this.centerX = centerX || Math.floor(width/2);
	this.centerY = centerY || Math.floor(height/2);
	this.weightArray = [];
	for(var h=0;h<height;++h) {
		this.weightArray.push([]);
		for(var w=0;w<width;++w) {
		 	this.weightArray[h].push(0);
		}
	}
}
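
As an example of how this constructor gets used, the 3×3 edge detection kernel shown later in this post could be built like this:

var edgeKernel = new kernel(3, 3);
edgeKernel.weightArray = [
	[-1, -1, -1],
	[-1,  8, -1],
	[-1, -1, -1]
];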

Given an image we need to get its data into the canvas. The canvas's 2D context has a method named drawImage that will do this.

var width  = imageElement.naturalWidth;
var height = imageElement.naturalHeight;
var canvas = $('<canvas></canvas>')[0];	// jQuery; document.createElement('canvas') works just as well
canvas.width  = width;
canvas.height = height;
var ctx = canvas.getContext('2d');
ctx.drawImage(imageElement, 0, 0);
var image = ctx.getImageData(0, 0, width, height);
var pix = image.data;

To apply the filter, we will need to have a structure that contains the source data and another for writing the results. The results cannot be written to the same structure that we are reading from as this would overwrite some of the pixels that still need to be read for other processing.
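
The canvas context can create a blank pixel buffer of the right dimensions for this purpose. This is where the resultPixelData array used below comes from:

var result = ctx.createImageData(width, height);
var resultPixelData = result.data;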

var getPix = function(x, y) {
	// Cap the coordinates so that reads beyond the edges return the edge pixel.
	x = Math.max(0, Math.min(x, width  - 1));
	y = Math.max(0, Math.min(y, height - 1));
	var address = (y * width + x) * 4;
	return [pix[address + 0], pix[address + 1], pix[address + 2], pix[address + 3]];
}

var getFilteredPix = function(x, y, kernelFilter) {
	var retVal = [0, 0, 0, 0];
	for (var fy = 0; fy < kernelFilter.height; ++fy) {
		for (var fx = 0; fx < kernelFilter.width; ++fx) {
			var m = kernelFilter.weightArray[fy][fx];
			var p = getPix(x + fx - kernelFilter.centerX, y + fy - kernelFilter.centerY);
			retVal[0] += p[0] * m;
			retVal[1] += p[1] * m;
			retVal[2] += p[2] * m;
		}
	}
	// Carry the source pixel's transparency through unchanged.
	retVal[3] = getPix(x, y)[3];
	return retVal;
}

// kernelFilter is the kernel to apply (e.g., the edgeKernel built above).
for (var yp = 0; yp < height; ++yp) {
	for (var xp = 0; xp < width; ++xp) {
		var newVal = getFilteredPix(xp, yp, kernelFilter);
		var address = (yp * width + xp) * 4;
		resultPixelData[address + 0] = newVal[0];
		resultPixelData[address + 1] = newVal[1];
		resultPixelData[address + 2] = newVal[2];
		resultPixelData[address + 3] = newVal[3];
	}
}
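
Once every pixel has been processed, the result needs to be copied back to the canvas to become visible. With the result ImageData object created earlier, this is a single call:

ctx.putImageData(result, 0, 0);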

With that in place we can now view the results of various kernel filters. Using the same source image, here are a few filters and the results of applying them. This is the original image that I'll be working with.


Identity

0 0 0
0 1 0
0 0 0

As suggested by the name, the identity filter does not result in any change to the image, much like other identity operations in math, such as adding 0 to a number or multiplying or dividing it by 1.


Edge Detection

-1 -1 -1
-1 8 -1
-1 -1 -1

The edge detection filter highlights high contrast areas of an image, resulting in lines showing where those areas meet. If you wanted to produce an outline of a subject, this would be one of your go-to filters.

Emboss

-2 -1 0
-1 1 1
0 1 2

The Emboss filter produces an image with a 3D effect, making it look like the image has been pressed into a material. Various areas of the image will appear to be raised or depressed.


Box Blur

0.111 0.111 0.111
0.111 0.111 0.111
0.111 0.111 0.111

The Box Blur simply averages the pixels in an area together; each weight is 1/9 ≈ 0.111 so that the weights sum to one. Here I show a 3×3 filter. For the image shown here I actually used a 10×10 filter for the sake of exaggerating the effect to make it more visible.
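
Rather than typing out 100 identical weights for a 10×10 blur, the kernel can be generated. Here is a small sketch using the kernel constructor from earlier; any size will work:

function makeBoxBlur(size) {
	var blur = new kernel(size, size);
	var weight = 1 / (size * size);	// weights sum to 1 so overall brightness is preserved
	for (var y = 0; y < size; ++y) {
		for (var x = 0; x < size; ++x) {
			blur.weightArray[y][x] = weight;
		}
	}
	return blur;
}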


This gives me something quick I can use for testing out image filters. It could be better though. Right now, to apply a different filter I need to modify code. Wouldn’t it be nice if the filter data were externalized allowing for filters to be saved and shared? I’ll look at that the next time I revisit this project.

GTX 1050, WDDM 2.2, and Windows Mixed Reality

I've got some Windows Mixed Reality immersive headsets in hand. The experience is pretty cool, but I wanted to figure out what the minimum requirements are to use them so that we could get new hardware for some of the other developers. Microsoft has the minimum requirements listed on a page. Not being one to take such a thing at face value (especially not for a new product) I decided to validate these requirements. The item I was questioning was the video card. The requirements list the NVidia GTX 1050 as the minimum video card, so I made my way over to my local Best Buy and picked one up.

It was installed into a machine that already had the Windows 10 Creators Update on it. When I started the Mixed Reality application, I was told that the PC couldn't run Mixed Reality.


I tried several driver versions, from the ones released in April (version 381.65, the first with VR support) to the most recent at the time of this writing (385.28).

Digging a little deeper, I received a rather cryptic message from the NVidia GeForce software on Virtual Reality support. The software told me that this video card didn't meet the requirements for Virtual Reality: I needed to have at least an NVidia GTX 1050, and the card in the machine was only an NVidia GTX 1050. That's not a typo; it showed the same card for both the required minimum and what was installed. I get the impression that there was an intention to support VR on this card but it just never happened.

As of yet the consumer release of the Mixed Reality features has not occurred. We are still in a time frame in which things could change rapidly, and this card might be supported by then. From some exchanges with others, though: if you are looking to get a card that supports the Windows Mixed Reality headsets, start with NVidia's GTX 1060 as a minimum.

Using PowerShell to Setup SpectatorView


I have some down time and spent some of it clearing the drive on a computer, reinstalling Windows, and rebuilding my development environment. While I was doing this I decided to try out SpectatorView for the HoloLens. For those unfamiliar, SpectatorView is a solution for creating recordings of what one sees through a HoloLens. The HoloLens does have a recording feature built in, but that feature is low resolution. Using SpectatorView one can produce a high resolution recording: with a high resolution camera mounted to a HoloLens and a video capture card, a computer takes the motion data stream from the HoloLens and overlays objects from a HoloLens program onto the video stream. I tried it out last week with a 1K camera (Canon 5D Mark III) and it works great!

One of my personal goals is, when possible, to automate the setup steps for a development environment, especially if I may need to do them again. I expect a fast return on that investment, either through time saved when I need to set up an environment for myself again or when coworkers are able to save time by using the scripts that I've made. Up until now my use of PowerShell has been light, but it looked to be the perfect scripting language for this task. For the most part PowerShell gives access to COM, WMI, and .Net objects in a scripted environment.

Software and Files Needed for SpectatorView

To set up SpectatorView, a number of software components are needed:

    • Visual Studio
    • Unity
    • BlackMagic DeckLink SDK
    • HoloLens Companion Kit
    • HoloLens Toolkit
    • OpenCV 3.2
    • Canon SDK (Optional)

Unity and Visual Studio are frequently used within my team, so I'm starting off with the assumption that these two components are already present. Getting the other components is easy enough, so I don't expect scripting their acquisition and installation to save much time. But I also feel that initial attempts at scripting are better applied to something simple so that problems can be found before moving on to more complex scenarios. The Canon SDK can only be downloaded by someone who registers with Canon, requests access to the SDK, and then downloads it after receiving approval. Since there are manual steps involved in getting access to the Canon SDK, I did not script the acquisition of that file. Similarly, the BlackMagic DeckLink SDK requires registration to download. While I could not script the acquisition of these two files, I was still able to handle them post-download in my script. Each version of these SDKs has a slightly different name, since the version number is part of the file name. To keep the script easy to use, it figures out the actual name of the file when run. If a new version of one of the SDKs were to be used, it would only be necessary to replace the ZIP file being used.

Cloning the HoloLens Companion Kit with Git

The HoloLens Companion Kit is the easiest of the components to acquire through a script: it can be downloaded using Git. So I won't spend much time talking about how to download it; I mention it at all only because it is a necessary component.

$holoLensCompanionKitURL = "https://github.com/Microsoft/HoloLensCompanionKit";
$companionKitFilePath = $PSScriptRoot + "\HoloLensCompanionKit";
$kitIsDownloaded = Test-Path $companionKitFilePath;
if(!$kitIsDownloaded) {
	git clone $holoLensCompanionKitURL;
}

If you are not familiar with what the Test-Path command means, don't worry about it just yet. I'll explain its use when it comes up for another component.

File Paths

To leave nothing ambiguous: many of the file locations referenced are relative to the location of the PowerShell script file. In PowerShell there is a variable named $PSScriptRoot whose value is the full path of the folder containing the script. While not absolutely necessary, I build paths to various files and folders using this variable.

Downloading OpenCV

For downloading the OpenCV source I've placed the URL for OpenCV from one of the mirrors in a variable at the top of my script. If I ever wanted to change the version of OpenCV used, I would only need to change the value of this variable. There are several ways to download a file in PowerShell. I decided to use .Net's WebClient because of speed and predictability. I considered using BITS, but when using BITS to download you can't know when the service will get around to downloading the file; it will do so on its own schedule. Downloading the file with the WebClient is easy, but it provides no feedback while it works. Just so that someone using the script doesn't think something is wrong, I print a message letting them know to hold on for a moment.

$openCVUrl = "https://downloads.sourceforge.net/project/opencvlibrary/opencv-win/3.2.0/opencv-3.2.0-vc14.exe?r=http%3A%2F%2Fopencv.org%2Freleases.html&ts=1501614414&use_mirror=iweb";
$webClient = New-Object System.Net.WebClient;
$openCVFolder = "${PSScriptRoot}\openCV\openCV3.2"
New-Item $openCVFolder -type directory	# the destination folder must exist before WebClient can write to it
$openCVArchivePath = $openCVFolder + "\archive.exe"
Write-Host "Downloading OpenCV. This is going to take a while..." -foreground "Green"
$webClient.DownloadFile($openCVUrl,$openCVArchivePath );

If you were to take the above, put it in a file with a "ps1" extension, and run it, you'll find that a file named archive.exe downloads. OpenCV for Windows is distributed in a self-extracting archive (which is why it has an EXE extension instead of ZIP). Once the file is downloaded, if you were to run it you would be greeted with a prompt asking where you want the files extracted. For automating the setup I don't want the archive to show these prompts, so I'll invoke it passing the location it should use on the command line. Adding those arguments to the above script we end up with the following.

$openCVUrl = "https://downloads.sourceforge.net/project/opencvlibrary/opencv-win/3.2.0/opencv-3.2.0-vc14.exe?r=http%3A%2F%2Fopencv.org%2Freleases.html&ts=1501614414&use_mirror=iweb";
$webClient = New-Object System.Net.WebClient;
$openCVFolder = "${PSScriptRoot}\openCV\openCV3.2"
New-Item $openCVFolder -type directory
$openCVArchivePath = $openCVFolder + "\archive.exe"
Write-Host "Downloading OpenCV. This is going to take a while..." -foreground "Green"
$webClient.DownloadFile($openCVUrl,$openCVArchivePath );
Write-Host "Download complete";
& "${openCVArchivePath}" -o "${openCVFolder}" -y

This works. But I wanted it to be possible to run this script more than once if it failed for some reason. To prevent the script from reinstalling OpenCV if it had been installed before, I check for the existence of the OpenCV folder. This is a less than thorough test, since it would not detect conditions such as an archive that was only partially unpacked before failing, but it is sufficient for my purposes. The Test-Path command can be used to determine whether or not a file object exists at some path. I wrapped the above code in a block that checks for the existence of the OpenCV folder first.

$openCVIsDownloaded = Test-Path $openCVFolder
if(!$openCVIsDownloaded) {
	## Download code goes here
}

Unpacking the Canon and BlackMagic SDKs

The BlackMagic and Canon SDKs are both in ZIP files, and unpacking them is about the same, so I'll only talk about the Canon SDK; everything here applies equally to both. As with OpenCV, I check whether the folder for the Canon SDK exists before trying to unpack it. The script requires that the Canon SDK ZIP file be in the same folder as the script. The Canon SDK versions all start with the same prefix, EDSDK. To find the file I use the Get-ChildItem command to get a list of files. If there is more than one version of the SDK in the folder, sorting the results and taking the last one should select the most recent. PowerShell allows the use of negative index numbers to address an item from the end of a list; index -1 is the last item. Taking the last item and getting its FullName value gives the path to the ZIP file to be unzipped. The Expand-Archive command will unzip the file to a specified path.

################################################
# Unpacking the Canon SDK
################################################
$CanonSDKIsPresent = Test-Path "${PSScriptRoot}\CanonSDK";
if(!$CanonSDKIsPresent)
{
	#Find the Canon SDK Zip(s) present
	$canonSdkArchiveList = Get-ChildItem "${PSScriptRoot}\EDSDK*.zip" | Sort;
	$canonSDKZip = $canonSdkArchiveList[-1].FullName
	Expand-Archive  -path $canonSDKZip -DestinationPath "${PSScriptRoot}\CanonSDK";
} 
################################################
# Unpacking the Black Magic SDK
################################################
$BlackMagicSDKIsPresent = Test-Path "${PSScriptRoot}\BlackMagicSDK";
if(!$BlackMagicSDKIsPresent) 
{
	#Find the Black Magic SDKs present
	$blackMagicSDKList = Get-ChildItem "${PSScriptRoot}\Blackmagic_Decklink_SDK*.zip";
	$blackMagicZip = $blackMagicSDKList[-1].FullName;
	Expand-Archive  -path $blackMagicZip -DestinationPath "${PSScriptRoot}\BlackMagicSDK";
}

Modifying the Visual Studio Dependencies File

The Visual Studio project that is part of the SpectatorView software requires some updates so that it knows where the various SDKs are located; the paths to the BlackMagic SDK, the Canon SDK, and OpenCV must be added to it. The dependencies file, dependencies.props, is an XML file. The Common Language Runtime has classes for manipulating XML files, and I use one of these to update this file. The exact path of each component could differ depending on which SDK version is used, so rather than hard-code the paths I use the Get-ChildItem command again to query for the folder names. For the Canon SDK, the line of script code to get the path looks like the following.

$canonPath = (Get-ChildItem "${PSScriptRoot}\CanonSDK")[-1].FullName+"\Windows";

There is a little more nesting with the other two SDKs, but the lines for getting their paths are similar.

$openCVPath = ((Get-ChildItem (Get-ChildItem "${PSScriptRoot}\opencv" | ?{$_.PsIsContainer} )[-1].FullName)|?{$_.PsIsContainer})[-1].FullName+"\sources\include";
$blackMagicPath = (Get-ChildItem "${PSScriptRoot}\BlackMagicSDK" | ?{$_.PsIsContainer})[-1].FullName +"\Windows";

With those paths populated I now need to load the XML file, update the values, and write them back. The Common Language Runtime's XML classes are available to PowerShell. Given a string that contains XML, casting it to [xml] parses it and makes all of the nodes available through properties. To get the XML string from the contents of the dependencies file, the Get-Content command is used; given a file path, it returns the contents as a string.

$dependencies = [xml] (Get-Content -Path "${PSScriptRoot}\HoloLensCompanionKit\SpectatorView\dependencies.props");
$dependencies.Project.PropertyGroup[0]."OpenCV_vc14" = $openCVPath;
$dependencies.Project.PropertyGroup[0]."DeckLink_inc" = $blackMagicPath;
$dependencies.Project.PropertyGroup[0]."Canon_SDK" = $canonPath;
$dependencies.Save( "${PSScriptRoot}\HoloLensCompanionKit\SpectatorView\dependencies.props" );

Next Steps

The script is at the bottom of this post. Running it doesn't result in SpectatorView being 100% set up. It is still necessary to go through calibration (a step that requires you to physically do some things in front of the camera) and to copy the libraries from the sample project into your own project. There are opportunities for further automating the setup, but I felt this was a good time to write about what is working at this moment (if I wait until it is perfect it may never get posted).

clear;
################################################
# A few download URLs
################################################
$openCVUrl = "https://downloads.sourceforge.net/project/opencvlibrary/opencv-win/3.2.0/opencv-3.2.0-vc14.exe?r=http%3A%2F%2Fopencv.org%2Freleases.html&ts=1501614414&use_mirror=iweb";
$holoLensCompanionKitURL = "https://github.com/Microsoft/HoloLensCompanionKit";
$webClient = New-Object System.Net.WebClient;
Write-Host "Running from ${PSScriptRoot}";


$companionKitFilePath = $PSScriptRoot + "\HoloLensCompanionKit";
$openCVFolder = "${PSScriptRoot}\openCV\openCV3.2"
$openCVArchivePath = $openCVFolder + "\archive.exe"

$kitIsDownloaded = Test-Path $companionKitFilePath;
if(!$kitIsDownloaded) {
	git clone $holoLensCompanionKitURL;
}

$openCVIsDownloaded = Test-Path $openCVFolder
if(!$openCVIsDownloaded) {
	New-Item "$openCVFolder"  -type directory
	Write-Host "Downloading OpenCV. This is going to take a while..." -foreground "Green"
	$webClient.DownloadFile($openCVUrl,$openCVArchivePath );	
	Write-Host "Download complete";
	& "${openCVArchivePath}" -o "${openCVFolder}" -y
}
################################################
# Unpacking the Black Magic SDK
################################################
$BlackMagicSDKIsPresent = Test-Path "${PSScriptRoot}\BlackMagicSDK";
if(!$BlackMagicSDKIsPresent) 
{
	#Find the Black Magic SDKs present
	$blackMagicSDKList = Get-ChildItem "${PSScriptRoot}\Blackmagic_Decklink_SDK*.zip";
	$blackMagicZip = $blackMagicSDKList[-1].FullName;
	Expand-Archive  -path $blackMagicZip -DestinationPath "${PSScriptRoot}\BlackMagicSDK";
}
################################################
# Unpacking the Canon SDK
################################################
$CanonSDKIsPresent = Test-Path "${PSScriptRoot}\CanonSDK";
if(!$CanonSDKIsPresent)
{
	#Find the Canon SDK Zip(s) present
	$canonSdkArchiveList = Get-ChildItem "${PSScriptRoot}\EDSDK*.zip" | Sort;
	$canonSDKZip = $canonSdkArchiveList[-1].FullName;
	Expand-Archive  -path $canonSDKZip -DestinationPath "${PSScriptRoot}\CanonSDK";
} 
else 
{
	Write-Host "Canon SDK already present";
}
################################################
# Modifying the Dependencies File
################################################
$canonPath = (Get-ChildItem "${PSScriptRoot}\CanonSDK")[-1].FullName+"\Windows";
$openCVPath = ((Get-ChildItem (Get-ChildItem "${PSScriptRoot}\opencv" | ?{$_.PsIsContainer} )[-1].FullName)|?{$_.PsIsContainer})[-1].FullName+"\sources\include";
$blackMagicPath = (Get-ChildItem "${PSScriptRoot}\BlackMagicSDK" | ?{$_.PsIsContainer})[-1].FullName +"\Windows";

$dependencies = [xml] (Get-Content -Path "${PSScriptRoot}\HoloLensCompanionKit\SpectatorView\dependencies.props");
$dependencies.Project.PropertyGroup[0]."OpenCV_vc14" = $openCVPath;
$dependencies.Project.PropertyGroup[0]."DeckLink_inc" = $blackMagicPath;
$dependencies.Project.PropertyGroup[0]."Canon_SDK" = $canonPath;
$dependencies.Save( "${PSScriptRoot}\HoloLensCompanionKit\SpectatorView\dependencies.props" );

Resolving Problems Connecting to the Gear S2/S3 for Development

On occasion I develop for the Gear S2/S3 watches from Samsung (from a development perspective these watches are nearly identical, so I will collectively refer to them as the Gear S watches). When returning to development after a period away from them, there are a few mistakes I sometimes make. Looking in some support forums, I see others make these mistakes too. To help out others that run into this (and as a note to myself) I've made this post to cover some of the necessary checks.

  1. Ensure Debugging is Enabled
  2. Ensure Wifi is Always Enabled
  3. Check the watch’s IP address
  4. Ensure the watch is unlocked
  5. Connect to the watch from SDB
  6. Redeploy the development certificate

 

Ensure Debugging is Enabled

Before anything else will work, debugging must be enabled on the watch. This setting will be cleared if you've done a hard reset on the watch or if you have connected it to a different phone. You can change the setting by navigating to Settings ➜ Gear Info ➜ Debugging and ensuring that the setting is checked.

Ensure WiFi is Always Enabled

You'll want to have WiFi set to always on. If you have it set to "Auto" you might not be able to connect; if it is set to "Off" then you invariably will not be able to connect. Setting WiFi to "Always On" will cause the battery to drain excessively, so when developing you'll want to have the charging cradle close by. To set WiFi to always be on, navigate to Settings ➜ Connections ➜ WiFi ➜ WiFi and select "Always On."

 

Check the Watch’s IP Address

You need to know the watch's IP address to attach to it for debugging and deployment. Remember that the IP address will be different if you go to a different wireless network, and could be different when you reconnect to the same network. To see the watch's IP address navigate to Settings ➜ Connections ➜ Wi-Fi ➜ Wi-Fi networks ➜ select your network ➜ scroll down to the IP address.

Ensure the Watch is Unlocked

The watch must be unlocked for the initial connection. While this may be obvious, what is less obvious is how quickly the watch can become locked again. The heart rate monitor on the back of the watch also acts as a presence detection sensor; the watch is aware of when it's been removed from your wrist and will go into a locked state almost immediately if you have a lock code/pattern on it. When handling the watch, if your finger passes over this sensor the watch may lock. You could unlock the watch, set it down in the cradle, and find it locked again because your finger came close to the sensor.

Heart Rate Monitor on the back of the Gear S2

Connect to the Watch using SDB

Before opening Tizen Studio, connect to the watch using SDB. From the command line on your computer (or Terminal if you are on a Mac), navigate to the folder that contains Tizen Studio and then into the tools folder inside of it. Type the following, substituting your own IP address.

sdb connect 192.168.1.181

If this is the first time the watch has connected to the machine from which you are typing the command the watch will prompt you to accept an RSA key. If you don’t accept it the connection attempt will fail. Sometimes when you attempt to connect the command line tool will print a failure message the first time even though it has actually connected. Run the command a second time and you’ll get a message that the watch is already connected.

Redeploy your Development Certificate

You only need to do this if the watch has been reset since the last time you've done development on it (or if you've never developed on the watch before). Certificate management is a topic of its own; I won't go into it here. Provided that you have a handle on development certificates, the above should be enough to get your watch connected to your computer for development.

 

Working Around the Missing Real Time Clock in Windows IoT

I've got a project planned involving Windows IoT for which I need the system to have the correct time. The Raspberry Pi running Windows IoT has no real time clock; it initializes the time over NTP when connected to a network, but when not connected the time will be wrong. That's no good. I searched for how people have worked around this. A frequent solution was to add a real time clock IC and have one's solution communicate directly with the chip, ignoring the native APIs built around time. I don't like this solution. I'd like to maintain compatibility with other systems and not make something that is dependent on a specific implementation of a clock. I wanted a way to get the Raspberry Pi to initialize from the RTC instead of NTP.

It took a while to create a solution because the APIs needed to manipulate the system time are not available to UWP applications. I managed to create a solution with a PowerShell script, a program that I made, and a real time clock. The complete solution is available at CodeProject.com; here is a summary. The standalone application I made does only two things: it can read the time from the real time clock chip that I used and print it out as a string, and it can look at the system time and set the time in the real time clock to match. I use the latter capability to set the time on the real time clock. Whenever the system boots up, the first capability is used to expose the RTC's current time to the PowerShell environment. From there I can use the Set-Date command to update the system time. I've saved a PowerShell script to run every time the system turns on to do just this. Now when I turn on my Raspberry Pi off network, within a few seconds it has the right time. 😊
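
The startup script itself amounts to very little. Here is a minimal sketch of the idea; RTCReader.exe is a hypothetical name standing in for my program that prints the RTC's time, and the project on CodeProject.com has the real details:

# Run the (hypothetically named) program that prints the RTC's time as a string
$rtcTimeString = & "${PSScriptRoot}\RTCReader.exe";
# Parse the string into a DateTime and make it the system time
$rtcTime = [DateTime]::Parse($rtcTimeString);
Set-Date -Date $rtcTime;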

ViewModelBase and DelegateCommands in Sample Code

I'm working on some sample code for some upcoming blog posts. There are two classes that are used throughout the examples and will probably be used in future samples, so I wanted to mention them here for future posts. This is applicable to both UWP and WPF projects. The first is the base class that I use for all of my ViewModel classes.

using System;
using System.ComponentModel;
using System.Linq.Expressions;
using System.Threading;


namespace Common
{
    public class ViewModelBase : INotifyPropertyChanged
    {

        public static System.Threading.SynchronizationContext SyncContext;
        protected void OnPropertyChanged(string propertyName)
        {
            if (PropertyChanged != null)
            {
                SendOrPostCallback  a = (o) => { PropertyChanged(this, new PropertyChangedEventArgs(propertyName)); };
                if (SyncContext == null)
                    a(null);
                else
                    SyncContext.Send(a, null);
                
            }
        }

        protected void OnPropertyChanged<T>(Expression<Func<T>> expression)
        {
            OnPropertyChanged(((MemberExpression)expression.Body).Member.Name);
        }

        public event PropertyChangedEventHandler PropertyChanged;
    }
}

The field SyncContext is needed for code that runs asynchronously on another thread (for WPF projects this is a Dispatcher instead). If the code attached to PropertyChanged interacts with the UI, an exception will occur if this interaction happens on a different thread than the one on which the UI controls were created. When the OnPropertyChanged method is called, the SyncContext is used to marshal control back to the UI thread. One of the OnPropertyChanged overloads takes an expression as its argument. I prefer to use this form when passing the name of the property being updated because it provides compile-time checking against typos in the name and will be updated if the Rename command is used on a property. The other frequently used classes are the DelegateCommand classes.
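
To illustrate how the base class gets used, here is a hypothetical view model; the class and property names are made up for this example:

public class PersonViewModel : ViewModelBase
{
    private string _name;

    public string Name
    {
        get { return _name; }
        set
        {
            _name = value;
            // The expression overload gives compile-time checking of the property name.
            OnPropertyChanged(() => Name);
        }
    }
}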

using System;
using System.Windows.Input;

namespace Common
{
    public class DelegateCommand : ICommand
    {
        public DelegateCommand(Action execute)
            : this(execute, null)
        {
        }

        public DelegateCommand(Action execute, Func<bool> canExecute)
        {
            _execute = execute;
            _canExecute = canExecute;
        }

        public bool CanExecute(object parameter)
        {
            if (_canExecute != null)
                return _canExecute();

            return true;
        }

        public void Execute(object parameter)
        {
            _execute();
        }

        public void RaiseCanExecuteChanged()
        {
            if (CanExecuteChanged != null)
                CanExecuteChanged(this, EventArgs.Empty);
        }

        public event EventHandler CanExecuteChanged;

        private Action _execute;
        private Func<bool> _canExecute;
    }

    public class DelegateCommand<T> : ICommand
    {
        public DelegateCommand(Action<T> execute)
            : this(execute, null)
        {
        }

        public DelegateCommand(Action<T> execute, Func<T, bool> canExecute)
        {
            _execute = execute;
            _canExecute = canExecute;
        }

        public bool CanExecute(object parameter)
        {
            if (_canExecute != null)
            {

                return _canExecute((T)parameter);
            }

            return true;
        }

        public void Execute(object parameter)
        {
            _execute((T)parameter);
        }

        public void RaiseCanExecuteChanged()
        {
            if (CanExecuteChanged != null)
                CanExecuteChanged(this, EventArgs.Empty);
        }

        public event EventHandler CanExecuteChanged;

        private Action<T> _execute;
        private Func<T, bool> _canExecute;
    }
}

The DelegateCommand classes are used to make commands that can be bound to a button, allowing us to associate code with the button through data binding.
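
As a quick sketch of how the class gets used (the command, method, and class names here are hypothetical), a ViewModel exposes a DelegateCommand through an ICommand property and a button's Command property is bound to it:

public class DocumentViewModel : ViewModelBase
{
    private readonly DelegateCommand _saveCommand;

    public DocumentViewModel()
    {
        // The first delegate runs when the button is clicked;
        // the second decides whether the button is enabled.
        _saveCommand = new DelegateCommand(Save, () => CanSave);
    }

    public ICommand SaveCommand
    {
        get { return _saveCommand; }
    }

    public bool CanSave { get; set; }

    private void Save()
    {
        // Persist the document here.
    }
}

In XAML the binding would then look something like <Button Content="Save" Command="{Binding SaveCommand}"/>.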

XNA Animated Sprite code uploaded to CodeProject.com

I’ve uploaded some code I was working on to animate sprites in XNA.

Animating a sprite isn't difficult, but I wanted a way to animate them while reducing the coupling between the code and the animation. The Content Pipeline is perfect for this. So I created a component that handles the animation scenarios that I need, along with a content extension so that I can load these animations as content. Right now the animation information is in an XML file. This is a stepping stone towards having a graphical tool for handling this.

You can read about the code here or see a brief description of it in the video below.

Welcome to the Site Mirror!

It seems that my hosting provider has gone through some changes for the worse; once a week for the past three weeks this site has gone offline because of some failure or data loss at my provider's location. Because of the decrease in reliability I'll be looking for a new provider. In the meantime I've started mirroring my content here. Any new content I write will also be published here (I may just use WordPress as the primary host for this site; I'm still undecided). If the main site ever goes down, remember you can see the content here too.

Mango Beta 2 Available for Phones Today!

The Beta 2 Mango Windows Phone Tools are available to developers today! Included with the beta is the ability for developers registered with the AppHub to flash their retail devices.

I know there are some non-developers out there that want to flash their phones too, and they may wonder how they can get their phones reflashed with the Mango beta. For the time being they cannot. There is an inherent risk in reflashing the phone; you could end up with a bricked phone if something goes bad. If this happens, Microsoft has budgeted to take care of repairing up to one phone per developer. But Microsoft doesn't see this risk as being appropriate for user audiences. [Some] developers, on the other hand, are willing to risk their device's life and limb to have early access to something new. If you brick your device today, Microsoft won't be prepared to act on it for another couple of weeks. That's not the best case scenario, but the alternative was to wait another couple of weeks before the Mango tools were released. If you don't feel safe walking the tightrope without a safety net, then don't re-flash your device yet.

According to the Windows Phone Developer site if you are a registered developer you will receive an e-mail inviting you to participate in early access to Mango.

Changing the Pitch of a Sound

I got a tweet earlier today from someone asking me how to change the pitch of a wave file. The person asking was aware that SoundEffectInstance has a setting to alter pitch, but it wasn't sufficient for his needs; he needed to be able to save the modified WAV to a file. It's easy to do, so I made a quick example.

Video Example

I used a technique that comes close to linear interpolation. It gets the job done but isn't the best technique because of the opportunity for certain types of distortion to be introduced; methods with less distortion are available at the cost of more CPU cycles. For the example I made, no matter what the original sample rate was, I play back at a fixed rate and adjust my interpolation step accordingly so that no unintentional changes in pitch are introduced.

To do the work I've created a class named AdjustedSoundEffect. It has a Play() method that takes as its argument the factor by which the pitch should be adjusted, where 1 plays the sound at the original pitch, 2 plays it at twice its pitch, and 0.5 plays it at half its pitch.

If you are interested the code I used is below.

using System;
using System.IO;
using System.Net;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Documents;
using System.Windows.Ink;
using System.Windows.Input;
using System.Windows.Media;
using System.Windows.Media.Animation;
using System.Windows.Shapes;
using Microsoft.Xna.Framework.Audio;

namespace J2i.Net.VoiceRecorder.Utility
{
    public class AdjustedSoundEffect
    {
        //I will always play back at a fixed sample rate (16 kHz here) regardless
        //of the original sample rate. I'm making appropriate adjustments to
        //prevent this from resulting in the pitch being shifted.
        private const int PlaybackSampleRate = 16000;
        private const int BufferSize = PlaybackSampleRate*2;

        private int _channelCount = 1;
        private int _sampleRate;
        private int _bytesPerSample = 16;
        private int _byteCount = 0;
        private float _baseStepRate = 1;
        private float _adjustedStepRate;
        private float _index = 0;
        private int playbackBufferIndex = 0;
        private int _sampleStep = 2;

        private bool _timeToStop = false;

        private byte[][] _playbackBuffers;

        public bool IsPlaying { get; set;  }

        public object SyncRoot = new object();


        private DynamicSoundEffectInstance _dse;

        public static AdjustedSoundEffect FromStream(Stream source)
        {
            var retVal = new AdjustedSoundEffect(source);
            return retVal;
        }

        public AdjustedSoundEffect()
        {
            _playbackBuffers = new byte[3][];
            for (var i = 0; i < _playbackBuffers.Length;++i )
            {
                _playbackBuffers[i] = new byte[BufferSize];
            }
            _dse = new DynamicSoundEffectInstance(PlaybackSampleRate, AudioChannels.Stereo);
            _dse.BufferNeeded += new EventHandler<EventArgs>(_dse_BufferNeeded);
        }

        void SubmitNextBuffer()
        {
            if(_timeToStop)
            {
                Stop();
                return; // don't queue more audio after reaching the end of the source
            }
            lock (SyncRoot)
            {
                byte[] nextBuffer = _playbackBuffers[playbackBufferIndex];
                playbackBufferIndex = (playbackBufferIndex + 1)%_playbackBuffers.Length;
                int i_step = 0;
                int i = 0;

                int endOfBufferMargin = 2*_channelCount;
                for (;
                    i < (nextBuffer.Length / 4) && (_index < (_sourceBuffer.Length - endOfBufferMargin));
                    ++i, i_step += 4)
                {

                    int k = _sampleStep*(int) _index;
                    if (k > _sourceBuffer.Length - endOfBufferMargin)
                        k = _sourceBuffer.Length -endOfBufferMargin ;
                    nextBuffer[i_step + 0] = _sourceBuffer[k + 0];
                    nextBuffer[i_step + 1] = _sourceBuffer[k + 1];
                    if (_channelCount == 2)
                    {
                        nextBuffer[i_step + 2] = _sourceBuffer[k + 2];
                        nextBuffer[i_step + 3] = _sourceBuffer[k + 3];
                    }
                    else
                    {
                        nextBuffer[i_step + 2] = _sourceBuffer[k + 0];
                        nextBuffer[i_step + 3] = _sourceBuffer[k + 1];

                    }
                    _index += _adjustedStepRate;
                }

                if ((_index >= _sourceBuffer.Length - endOfBufferMargin))
                    _timeToStop = true;
                for (; i < (nextBuffer.Length/4); ++i, i_step += 4)
                {
                    nextBuffer[i_step + 0] = 0;
                    nextBuffer[i_step + 1] = 0;
                    if (_channelCount == 2)
                    {
                        nextBuffer[i_step + 2] = 0;
                        nextBuffer[i_step + 3] = 0;
                    }
                }
                _dse.SubmitBuffer(nextBuffer);
            }
        }

        void _dse_BufferNeeded(object sender, EventArgs e)
        {
            SubmitNextBuffer();
        }

        private byte[] _sourceBuffer;
        

        public AdjustedSoundEffect(Stream source): this()
        {
            byte[] header = new byte[44];
            source.Read(header, 0, 44);

            // I'm assuming you passed a proper wave file so I won't bother 
            // verifying  that  the  header  is properly formatted and will 
            // accept it on faith :-)

            _channelCount = header[22] + (header[23] << 8);
            _sampleRate = header[24] | (header[25] << 8) | (header[26] << 16) | (header[27] << 24);
            _bytesPerSample = header[34]/8;
            _byteCount = header[40] | (header[41] << 8) | (header[42] << 16) | (header[43] << 24);
            _sampleStep = _bytesPerSample*_channelCount;
            _sourceBuffer = new byte[_byteCount];
            source.Read(_sourceBuffer, 0, _sourceBuffer.Length);


            _baseStepRate = ((float)_sampleRate) / PlaybackSampleRate;
        }

        /// <summary>
        /// Starts playback with the pitch adjusted by the given factor.
        /// </summary>
        /// <param name="pitchFactor">Factor by which pitch will be adjusted. 2 doubles the frequency,
        /// 1 is normal speed, 0.5 halves the frequency</param>
        public void Play(float pitchFactor)
        {
            _timeToStop = false;

            _index = 0;
            lock (SyncRoot)
            {
                _adjustedStepRate = _baseStepRate * pitchFactor;
                _index = 0;
                playbackBufferIndex = 0;
            }
            if(!IsPlaying)
            {
                SubmitNextBuffer();
                SubmitNextBuffer();
                SubmitNextBuffer();
                _dse.Play();
                IsPlaying = true;
            }
        }

        public void Stop()
        {
            if(IsPlaying)
            {
                _dse.Stop();
                IsPlaying = false; // allow a later Play() to restart buffer submission
            }
        }
    }
}