About the Author

Danny is a Senior Software Engineer at InterKnowlogy in Carlsbad, CA. Danny began acquiring his expertise in software engineering at Neumont University in Salt Lake City, Utah, where he graduated with a BSCS. Danny’s passion for technology has led him throughout the Microsoft stack, including .NET, C#, XAML, and F#. Danny has expertise in NUI (the Natural User Interface), having built numerous multi-touch and gesture-based interfaces for software applications across a broad spectrum of devices. Currently his passion includes building Windows Universal Apps and WPF applications driven by gesture interaction using the Microsoft Kinect. Danny is an Alumnus Microsoft MVP for Windows Platform Development and speaks all over the country, including at That Conference, Spark Conference, and SoCal Code Camp. When not building beautiful software, Danny is an outdoorsman and a devoted husband and father. He loves to camp, hike, and mountain bike. Follow him on Twitter @dannydwarren

Windows Phone 7 – Tombstoning

I looked all over the internet to find a good, solid article on Tombstoning in WP7, but it seemed like everyone had a different perspective and each came up a little short of what I wanted. The biggest problem is defining what Tombstoning really is, so let’s begin with that.

Definition:

Tombstoning – Using the application’s Deactivated event to save state that will be restored during the Activated event.

It’s as simple as that. Now how can we save state? There are a few options to accomplish this.

Local Database (Persistent)

Working with Paul Rohde and Travis Schilling, I helped build an application that used the local database. It was pretty nice. As far as we can tell, it’s accessed entirely through LINQ to SQL. Travis was the man behind all of that work, and he said it wasn’t difficult once you understand how the DB is created.

IsolatedStorage (Persistent)

In Isolated Storage you have the option to save key-value pairs of rich objects in IsolatedStorageSettings.ApplicationSettings. This dictionary has a TryGetValue<T>(string key, out T value) method that is very useful and makes for very quick saves of complex object graphs. There is also the option to save files using IsolatedStorageFile and IsolatedStorageFileStream. I have not used these, but they seem straightforward enough. Microsoft, however, does not recommend using any of the IsolatedStorage options for Tombstoning. Why? It’s slower than the alternative, the PhoneApplicationService.Current.State dictionary.
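Here is a minimal sketch of that pattern (the key name and value are made up for illustration; IsolatedStorageSettings lives in System.IO.IsolatedStorage):

// Saving a value to the persistent application settings dictionary
var settings = IsolatedStorageSettings.ApplicationSettings;
settings["LastSearchTerm"] = "pizza";
settings.Save(); // flushes the dictionary to isolated storage

// Reading it back with the generic TryGetValue<T>
string lastSearchTerm;
if (settings.TryGetValue<string>("LastSearchTerm", out lastSearchTerm))
{
	// use the restored value
}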

PhoneApplicationService (Transient)

The PhoneApplicationService is the preferred way to Tombstone your application, according to a post from Microsoft, and it’s faster than any kind of persistent storage according to that same post. You use PhoneApplicationService.Current.State to add and read transient data. The downside to this dictionary is that it has no generic TryGetValue method out of the box, so we’ll have to create an extension method to make it work the same as IsolatedStorageSettings’ TryGetValue<T>().
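Here is roughly what such an extension method could look like. This is my own sketch over the IDictionary<string, object> that backs PhoneApplicationService.Current.State, not framework code:

public static class StateDictionaryExtensions
{
	// Generic TryGetValue over an object-valued dictionary, mirroring
	// IsolatedStorageSettings.TryGetValue<T>(string key, out T value)
	public static bool TryGetValue<T>(this IDictionary<string, object> state, string key, out T value)
	{
		object raw;
		if (state.TryGetValue(key, out raw) && raw is T)
		{
			value = (T)raw;
			return true;
		}

		value = default(T);
		return false;
	}
}

With this in place, reading transient state looks just like the IsolatedStorageSettings version: PhoneApplicationService.Current.State.TryGetValue<MyState>("key", out state), where MyState and the key are whatever your app uses.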

What data do I save?

This question, I think, used to be more cut and dried. I remember creating an application pre-Mango (7.5), and if you left the application and then came back, no data would be persisted because the application was closed, not Tombstoned. However, in Mango I’m seeing that the page being viewed when the application is deactivated is not reliably lost. A strange statement, but true. When I deactivate my application I’d expect the visual tree and object graph to be lost, requiring me to save all of the required data during the Deactivated event so it could be restored during the Activated event. I’m not experiencing this. I’m seeing the object graph being kept around, as well as the associated VMs. Do I need to save this data since I can see it’s not being lost? After talking with Tim Askins, we’ve decided it’s best to save the data even if we don’t need to restore it. The saying “Better safe than sorry” comes to mind.

Best Practices and Cautions

  • Save all data that can be treated as session data in PhoneApplicationService, and store all permanent data in your preferred form of persistent storage (database or IsolatedStorage).
  • Don’t try to call web services during deactivation that you would normally use to store data while the application is running. If data needs to be sent to a web service, enable your application with a background task.
  • Remember you only have 10 SECONDS MAX to be fully Tombstoned before the OS will close your app.
  • Note that you should make sure all of the objects you store in PhoneApplicationService or IsolatedStorage are marked with DataContract and each property with DataMember (see the sketch after this list). We saw this was not really enforced, but I think it’s best practice to do this. Again, “Better safe than sorry.” Plus, Microsoft says that it is required.
  • Save persistent data as it’s changed rather than waiting for the application to be deactivated.
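Here is a minimal example of what I mean by marking session objects (the type and its members are hypothetical; the attributes come from System.Runtime.Serialization):

[DataContract]
public class SessionState
{
	[DataMember]
	public string CurrentSearchTerm { get; set; }

	[DataMember]
	public int SelectedContactId { get; set; }
}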

Launching/Closing vs. Activating/Deactivating

The Launching and Closing events should be used to load and save persistent data. The Activated and Deactivated events should be used to load and save both persistent and transient data. The less persistent data you have to handle during activation and deactivation, the better: persistent storage is slower than transient storage and therefore degrades the user experience.
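To make the split concrete, here is roughly how the Activated/Deactivated half could look in App.xaml.cs. SessionState, the "Session" key, and BuildSessionState() are placeholders carried over from the sketches above, and TryGetValue<T> is the extension method from earlier:

private void Application_Activated(object sender, ActivatedEventArgs e)
{
	// Restore the transient session data saved during Deactivated
	SessionState session;
	if (PhoneApplicationService.Current.State.TryGetValue<SessionState>("Session", out session))
	{
		// push the session back into the view models
	}
}

private void Application_Deactivated(object sender, DeactivatedEventArgs e)
{
	// Save transient session data; persistent data should already have been
	// saved as it changed, so there is little else to do here
	PhoneApplicationService.Current.State["Session"] = BuildSessionState();
}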

Special Thanks

After scouring the web I’ve found the following links very helpful. Each has a different perspective and insights that I think are worth mentioning as part of the greater whole called Tombstoning.

Installer Woes-Installing Dependencies

For the last week I have been working with a user of Informant to help them get the application installed. It seemed like they had all of the prerequisites and were following all of the correct installation steps. Today I finally had RECESS time to devote to this. After finding an unused laptop and installing a fresh copy of Win7 (no SP1) and the .NET Framework 4 Client Profile, I found that Informant failed to install. This was a surprise to me, but I decided to see if installing the .NET 4 Extended Profile would solve the problem, and it did. My coworker Travis Schilling then helped me make the Informant installer require the .NET 4 Framework Extended Profile. I then refreshed the dependencies for the Installer project, which caused one of the common libraries to move back into its default location. The library had originally been moved in order to make use of probing paths, so to fix this I only needed to move the library into the correct folder again. I have now released Informant v0.1.6.0 with the fixed installer.

How to force the dependency:

  1. Right-click the Installer project
  2. Select Properties
  3. In the Properties window click the button Prerequisites…
  4. Check all of the correct prerequisites for your application
  5. Click OK
  6. Click Apply then OK
  7. Expand the Installer project
  8. Expand Detected Dependencies
  9. Double-click Microsoft .NET Framework
  10. In the Properties pane change the Version value to .NET Framework 4
  11. Fix the location of any common libraries that may have been moved in the installer package

Informant Architecture

With the release of Informant v0.1.5.0 today I wanted to take a minute and talk about some of the changes and challenges I faced with this iteration. Let me first tell you what my goal was: I wanted to create a single code base that could be reused in the Windows Service world as well as in WPF4, Silverlight 4, and Windows Phone 7. The challenge was finding which kind of class library can be shared with other kinds of applications. I discovered that Silverlight 3 was the appropriate base library type to use; any .NET solution can reference SL3 libraries. I hoped to be able to put all of my model objects in SL3 assemblies and share them across all applications. However, I ran into a showstopper with serialization. The model objects would be used in WCF, so they needed DataContract and DataMember attributes applied to them, and the System.Runtime.Serialization assembly cannot be shared between SL3 and WPF4. I got the following exception when I tried: “Could not load file or assembly ‘System.Runtime.Serialization, Version=2.0.5.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e’ or one of its dependencies. The system cannot find the file specified.” WPF4 does not understand and is not compatible with this version of the SL3 framework. Super big bummer! This meant that I could only use interfaces to enforce my model object APIs, and since I wanted consistency that is exactly what I did. Only one SL3 library is shared between all of the UI applications and the Windows Service.

The next hurdle was handling a WCF service reference that would work in all scenarios. I found that this was also not possible, or rather was not ideal for my situation. I wanted my WPF4 application to be very rich and have duplex channels, which are not supported in SL3/4 or WP7. This meant that every framework had to have its own non-reusable reference to the service. These references live in the DataProviders.[Framework] assemblies. Even though I know that the model objects coming from the service implement the interfaces defined in the shared library, I cannot treat these objects as those interfaces when they come out of the service, and I must be able to do that in order for my UI, ViewModels, and other DataProvider-dependent assemblies to be shared across frameworks. To overcome this I made a partial class for each model object coming from the service and had my partial implement the correct interface. This caused two problems for me. First, the interfaces were using IEnumerable<T>, which is not supported in WCF; the models from WCF all use List<T> when they come across the wire. To solve this I had to tell WCF to rename the IEnumerable<T> properties when they were sent across the wire, and I had to make each IEnumerable<T> property in my partial use the List<T> property from the WCF partial class as its backing field. I preferred this over changing the interfaces to match the wire behavior. Second, after I implemented the interface there was an error saying the WCF partial and my partial had different base classes. This was very confusing, and it took me a while to figure out. It turns out Visual Studio 2010 (and perhaps prior versions) specifies each generated model as inheriting from object:

public partial class Contact: object

This explicit inheritance is unnecessary and redundant, since every class in C# already inherits from object by default. I had to go through the auto-generated code and remove all explicit inheritance from object. After I did this the application compiled just fine. Strangely enough, whenever I update my service reference now, the explicit inheritance is added back in, but it does not cause an error as it did the first time. While I’d love to understand exactly what was going on with the error, it seems to have resolved itself and is no longer causing me issues, so I’ve stopped looking into it.
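To illustrate the partial class pattern described above, here is a rough sketch. IContact, PhoneNumbersList, and the other names are hypothetical stand-ins; the assumption is that the generated WCF proxy type already exposes a public Name property and a List<string> property (renamed on the wire) called PhoneNumbersList:

// Shared SL3 interface uses IEnumerable<T>
public interface IContact
{
	string Name { get; }
	IEnumerable<string> PhoneNumbers { get; }
}

// Hand-written partial that sits alongside the generated WCF proxy class
public partial class Contact : IContact
{
	// The generated List<string> property acts as the backing field
	// for the interface's IEnumerable<string> property
	IEnumerable<string> IContact.PhoneNumbers
	{
		get { return PhoneNumbersList; }
	}
}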

The UI presented the final difficulty. We use DelegateCommands all over in our ViewModels here at InterKnowlogy and I wanted to do the same in my VMs. The problem is that the underlying implementations of the DelegateCommand class, and other classes, are so different that a VM specifically designed for each framework must exist. Much of the core functionality of each VM can be shared in a base VM, but each framework must still have a framework-specific VM implementation. The idea that VMs can be reused without modification is false, but a good deal of each VM can be shared, and that’s exactly what I took advantage of.
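A rough sketch of what that sharing looks like; the type and member names are illustrative rather than the actual Informant code, and DelegateCommand stands in for whichever implementation the particular framework uses:

// Shared core logic, compiled for each framework
public abstract class SendMessageViewModelBase
{
	public ICommand SendCommand { get; protected set; }

	protected void Send()
	{
		// shared send logic lives here
	}
}

// WPF4-specific subclass wires up that framework's DelegateCommand
public class SendMessageViewModel : SendMessageViewModelBase
{
	public SendMessageViewModel()
	{
		SendCommand = new DelegateCommand(Send);
	}
}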

With all of that said and done, I now have a working version of Informant for WPF4 referencing a Windows Service that is installed on the same machine. The service is required so users can close the application, leave their computer on, and have Informant still send all of the desired SMS messages. The WP7.5 (Mango) version is next on my plate. This version will require me to refactor the Windows Service to support a RESTful API, and the service will need to be hosted on a publicly accessible server. My goal is to enable WP7 users to send SMS messages to their contacts and contact groups. An SL5 version will then be made available from the same server as the RESTful service.

The ability to reference SL3 libraries from the different .NET frameworks has greatly decreased the amount of work required to develop framework-specific applications that fulfill the same purpose. It’s quite amazing and awesome. However, a small word of caution: the new Windows 8 WinRT framework is NOT compatible with this pattern. Each library must be a WinRT library compiled against WinRT, so I will not be able to reuse the libraries, as is, in Windows 8 WinRT. This is a huge bummer, but it makes sense as I’ve learned more about what WinRT really is.

Installer Woes – Probing Paths

Today for RECESS I was working on Informant. I need a single installer to lay down my Windows Service and my WPF application. I’ve done this once before in VS2008, so I figured it wouldn’t be too hard to do in VS2010. I did not anticipate any breaking changes between the two versions; however, in VS2010 the way that dependencies are tracked is different than in VS2008. I have two projects that share an assembly, but in VS2010 the dependency is included only once, for the first project output that claims it. My coworker Dan Hanan introduced me to probing paths. This allowed me to put the WPF application in the Application Folder of the installer and the Windows Service application in a folder named Service as a sibling of the WPF application. In the WPF App.config I added the following probing path:


<runtime>
	<assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
		<probing privatePath="Service"/>
	</assemblyBinding>
</runtime>

This allowed me to have the common assembly dependency in the Service folder and still allow the WPF application to access it. This way the dependency is included only once in the installer project, but used multiple times.

Google ClientLogin URL Changed

Since Feb. 2010 I have been working on a pet project called Informant. The project interfaces with Google Voice and allows the user to send SMS messages to a group of contacts from their Google Contacts. The application had been working great since its first release in Mar. 2010. However, in the last month I received reports that it was not working: users have been unable to authenticate with Google, which was most concerning. I started to look into a solution and found nothing online that could help me. Without time to fix the issue, the problem persisted until today. I decided to take a different approach and try to find another .NET implementation of the Google Voice API. After looking for a while I found one on SourceForge called GVoiceDotNet. Their implementation authenticates exactly the same way as I do in Informant; both projects use the ClientLogin API for Google Accounts. I didn’t notice any differences until I started doing a side-by-side comparison of the URLs we use to authenticate. The change, while small, made a big difference.

My Original URL: https://www.google.com/accounts/ServiceLoginAuth?service=grandcentral

GVoiceDotNet URL: https://accounts.google.com/ServiceLoginAuth

There is only one breaking change between these two URLs. No, it’s not the service=grandcentral; that part is apparently not required and is ignored. The URL www.google.com/accounts is no longer valid; it is now accounts.google.com.

I am totally surprised that Google changed this URL and even more surprised that I can’t find any documentation about this change. Perhaps I’m just looking in the wrong place? It doesn’t really matter, I’m just stoked to have Informant back up and running!

If you don’t use Google Voice, you should definitely check it out! And check out Informant on Codeplex!

Dynamic Actions in MVC 3

In MVC 3 you get a lot of stuff for free. Convention determines a lot of the basic stuff. This is very awesome and helps alleviate some of the more mundane and boring work involved with setting up an ASP.NET site. However, with magic come limitations. In my particular case I need an action with an arbitrary number of parameters, without needing an overload in my controller for each parameter count. Let’s look at what I’m talking about.

My example is a bit of a stretch, but it should help explain what I’m doing. If I have a controller named ContactsController with an action named Find that takes one parameter, a name, then I can call http://site/Contacts/Find/Jon and it will return the contact named Jon. However, if I want an action named CreateGroup that takes in a list of names and creates a group including all of the people whose names were passed in, I’d need to call http://site/Contacts/CreateGroup/Jon/Holly/Drew/Brian/. What I will address in this blog is how to do this without needing an overload of CreateGroup for 2, 3, 4, etc. names as parameters.

In the Global.asax file we will need to map the correct routes. If we use the MVC 3 Internet Template in VS2010 then we’ll have the following default route:

routes.MapRoute(
	"Default", // Route name
	"{controller}/{action}/{id}", // URL with parameters
	new { controller = "Home", action = "Index", id = UrlParameter.Optional } // Parameter defaults
);

This default route handles the Find action just fine; however, it will not work with CreateGroup. We need a way to specify an unlimited number of parameters. To accomplish this we need the following route:

//NOTE: This prevents creating routes after this route that have a static number of parameters
routes.MapRoute(
	"DynamicActionRoute",
	"{controller}/{action}/{*rawParams}"
);

What we did here is say that for any given controller and any given action, everything after the action segment should be captured into a single catch-all parameter named rawParams, regardless of how many segments there are. So in our controller we will now use:

public class ContactsController : Controller
{
	public ActionResult Find(string id)
	{
		return View();
	}

    public ActionResult CreateGroup(string rawParams)
	{
		var names = rawParams.Split('/');
		//TODO: process names
		return View();
	}
}

As long as the name of the parameter in the route matches the name of the parameter in the Action we will receive our string of names. After we parse it out we’ll be good to go!

Some Words of Caution

When doing this you cannot have an overloaded action named CreateGroup. If CreateGroup is overloaded you will get the following ambiguity error:

The current request for action ‘CreateGroup’ on controller type ‘ContactsController’ is ambiguous between the following action methods:
System.Web.Mvc.ActionResult CreateGroup(System.String) on type Mvc3RazorPoC.Controllers.ContactsController
System.Web.Mvc.ActionResult CreateGroup() on type Mvc3RazorPoC.Controllers.ContactsController

This makes sense in our situation, because we always want to handle all parameters in this one Action.

Also, this should be the last route registered in the Global.asax file, because it will prevent later routes with a fixed number of parameters from firing correctly. If you have an action that takes two parameters and its route is placed after our dynamic route, the dynamic route will kick in and the parameters will not be filled correctly, even though the action itself will be called. If the two-parameter route is registered above the dynamic route, all will be well and work as expected.
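In other words, register the more specific routes first, something like this (the TwoParamRoute name and its URL pattern are just for illustration):

// Specific routes first...
routes.MapRoute(
	"TwoParamRoute",
	"{controller}/{action}/{first}/{second}"
);

// ...and the dynamic catch-all route last
routes.MapRoute(
	"DynamicActionRoute",
	"{controller}/{action}/{*rawParams}"
);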

Windows 8 WinRT Development 101

Let there be no mistake: it’s pretty exciting that Microsoft is starting to release early bits of some of its products for us to test out. Windows 7, Visual Studio 2008 (Orcas), VS2010, VS11, Win8, etc. are just a few that come to mind. These early releases are full of bugs and are definitely not production quality, nor should they be. So I’m going to preface everything I write with, “The Win8 WinRT framework is pre-beta and therefore should not be expected to work exactly as prescribed.” With that said, there are a few key features that don’t work as expected, and I’ll be touching on some of those.

Motivation
I’m currently on a project with two other developers here at InterKnowlogy, Dan Hanan and Carmine Sampogna. We are at the tail end of porting what we call our Kiosk Framework to Windows 8 on the WinRT framework. The project has been a ton of fun and we’ve learned a lot, but it has not been as straightforward as we would have hoped. WinRT XAML is similar in capabilities to Silverlight 3 and 4. Sometimes you still feel like you’re in SL2, but again, it is pre-beta software.
Environment
Carmine and I are running Win8 from bootable VHDs that we made following the steps in a post by another coworker Travis Schilling. Dan received one of the coveted Win8 Tablets from BUILD. And we’ve installed the full developer preview of VS11.

WinRT Gotchas
Only one merged resource dictionary supported: We like to organize our XAML resources into resource dictionaries to help with readability. It also allows multiple controls to share a single resource and multiple developers to work on different resources at the same time without the need to merge XAML files. So naturally we went down this path, only to find that this is not supported yet in WinRT and fails with an error.

We found some help from someone who had already encountered the problem: http://metrotrenches.wordpress.com/2011/10/07/tailored/.

DependencyProperties are not typed: DependencyProperties are very important when designing UserControls that you intend to use in different contexts with different DataContexts. In the current version of WinRT, DPs are not typed, which is a big bummer. A workaround can be found at: http://social.msdn.microsoft.com/Forums/en-US/winappswithcsharp/thread/4fed80a8-1343-41ca-8c27-b20a00689f65.

ElementName bindings only sorta-kinda work: ElementName bindings help simplify and centralize values. One element contains the values and a bunch of others can grab those values. They are very important when it comes to attempting a MultiBinding workaround. However, we have found that in many cases ElementName bindings don’t work, while in other cases they do. Not sure why.

Can’t retrieve non-template resources such as double, GridLength, Thickness, etc.: I discovered this because ElementName bindings were failing in a very specific case.


var myGridLength = App.Current.Resources["MyGridLength"];                     // fails
var myDataTemplate = App.Current.Resources["MyDataTemplate"] as DataTemplate; // works

Retrieval works just fine if you are accessing a DataTemplate, ControlTemplate, or Style; however, it seems to fail for structs, value types, and complex types.

DataTemplates cannot contain ContentControls that use ContentTemplateSelectors in some cases: I was trying to refactor a DataTemplate into App.xaml (because we can’t have multiple ResourceDictionaries) and I would continually get an ExecutionEngineException.

No implicit DataTemplates: There are, however, implicit Styles.

Other Thoughts
One of the most annoying parts about development is that every run of our application is cut short by a FatalExecutionException, which claims it may be a bug in the CLR, or by an ExecutionEngineException with no details. We can’t seem to get around these. Also, any exceptions thrown by bad XAML at runtime have zero details about what is wrong. Sometimes we get exceptions that have nothing to do with XAML or our UserControl, but the exception is still thrown by a UserControl. Very strange.
My coworker Dan has a post with more gotchas and his take on Win8. Overall, Win8 is cool, but the preview is severely lacking. We’re hoping it doesn’t take another 6+ months to get new bits!

Timers and Alarms in Windows Phone 7.5 (Mango)

One thing I really wish Microsoft had built into Windows Phone 7 from the beginning is a simple timer. The iPhone has one. Android has one. Windows Phone 7.5 still has yet to add this simple functionality. So I took the task upon myself to develop one. The first thing I didn’t understand was which scheduler was the correct one to use; a lot of them were introduced with WP7.5, found here. Before I even looked at that page I opened up Visual Studio, found the “Windows Phone Scheduled Task Agent” project template, and started to play with that. My immediate thought was “PeriodicTask is for scheduling when a timer should run.” This is definitely not the case, since PeriodicTasks only run about once every 30 minutes. Once I discovered Alarm notifications it was very simple to sort all of that out.

The WP7.5 SDK provides a lot of great controls. However, I found it lacking a way to specify hours, minutes, and seconds through a picker. Luckily Coding4Fun took care of that for me with the TimeSpanPicker. Armed with an Alarm and the TimeSpanPicker, I was able to create a super simple timer application.
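The core ended up being only a few lines, roughly like the following. Here timerPicker is assumed to be a Coding4Fun TimeSpanPicker on the page whose Value is a nullable TimeSpan, the alarm name is arbitrary, and Alarm and ScheduledActionService come from Microsoft.Phone.Scheduler:

// Schedule an Alarm for the duration chosen in the TimeSpanPicker
var duration = timerPicker.Value ?? TimeSpan.Zero;

var alarm = new Alarm("SimpleTimer")
{
	BeginTime = DateTime.Now.Add(duration),
	Content = "Time's up!"
};

// Replace any previous alarm with the same name before adding the new one
if (ScheduledActionService.Find("SimpleTimer") != null)
{
	ScheduledActionService.Remove("SimpleTimer");
}
ScheduledActionService.Add(alarm);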

A few notes about the outcome of the application:

  1. It is not possible to reuse the sounds built into the phone for your custom timer. I still don’t understand why Microsoft will not allow this. Why can’t I have my alarm play one of the built-in sounds, let alone my favorite song, when the alarm finishes?
  2. I’ve found that the alarm is accurate to within about 30 seconds; the alarm notification would not always fire right away. The longest delay I’ve seen is 30 seconds, but usually it’s within 10 seconds. This is acceptable, but again disappointing.
  3. Sometimes the alarm notification will not display if the application that created it is running. Perhaps this is by design? One could assume that an application setting an alarm doesn’t intend to be running when the alarm fires, and if it is running then it should be tracking the time on its own.

Remote Debugging

Recently we’ve been developing an application that runs on a Win7 PC and has a slimmed-down version that runs on Win7 tablets; in our case we’re using the HP Slate. Our development machines run Win7 and VS2010 and are on our domain, while the HP Slates are not. The project has been going for some months now and we’ve been relying heavily on logging, using Enterprise Library, to know what errors we’re up against. However, we all know that as the complexity of an application increases, so does the likelihood that we may log incorrect, useless, or unrelated data. Enter the need for remote debugging. I thought this would be straightforward and easy, but I was very much mistaken.

Getting Ready to Remote Debug
I started on MSDN (http://msdn.microsoft.com/en-us/library/y7f5zaaa.aspx) in the section How to: Set Up Remote Debugging (http://msdn.microsoft.com/en-us/library/bt727f1t.aspx). Here I learned that firewalls and permissions are the first huge hurdle called out by Microsoft. I must say, when time is limited, firewalls are out; so instead of fighting my way through configuring the firewalls correctly, I just disabled them on all machines involved for this remote debugging session. Later in the same article it explains how to install the software required on the remote machine in order to debug remotely. There is a broken link to the download center for that software, but an alternate location is on the VS2010 installation media in the folder (VS Media)\Remote Debugger. Be sure to install the correct version on the remote machine; instructions for this are in the article.

Running the Remote Debugging Monitor
STOP! Why? Because you really should figure out which users you’re going to use on both the VS2010 host machine and the remote machine before trying to run the Debugging Monitor.
Selecting the Appropriate User Accounts for Remote Debugging
I tried to remote debug at this point and found there was one more piece of information I was missing. Going back to the list of articles on MSDN (http://msdn.microsoft.com/en-us/library/y7f5zaaa.aspx), the next important section is Remote Debugging Across Domains (http://msdn.microsoft.com/en-us/library/9y5b4b4f.aspx). This section lists the restrictions relating to user accounts right away. I will copy the first section here for review:
To connect to msvsmon, you must run Visual Studio under the same user account as msvsmon or under an administrator account. (You can also configure msvsmon to accept connections from other users.)
Visual Studio accepts connections from msvsmon if msvsmon is running as a user who can be authenticated on the Visual Studio computer. (The user must have a local account on the Visual Studio computer.)
With these restrictions, remote debugging works in various scenarios, including the following:

  • Two domains without two-way trust.
  • Two computers on a workgroup.
  • One computer on a workgroup, and the other on a domain.
  • Running the Remote debugging monitor (msvsmon) or Visual Studio as a local account.

Therefore, you must have a local user account on each computer, and both accounts must have the same user name and password. If you want to run msvsmon and Visual Studio under different user accounts, you must have two user accounts on each computer.

You can run Visual Studio under a domain account if the domain account has the same name and password as a local account. You must still have local accounts that have the same user name and password on each computer.

End Quote.

We will cover the last part in detail, since that is the scenario I was faced with. In simple terms, if your domain is named us and the user account is danny (us\danny), then the VS2010 host machine must have a local user account named danny, with admin privileges, with the same password as the us\danny account. Also, the remote machine, in my case the HP Slate, must have a local user account named danny with the same password as the other two danny accounts, and that account must also be an admin on the machine. After all accounts are created and have been logged into for the first time, restart all machines and boot the VS2010 host machine into the domain account (us\danny) and the remote machine into the local account with the same name as the domain account (machine name\danny).

This essentially means that three (3) accounts named danny must exist: us\danny, vs2010 machine name\danny, and remote machine name\danny. All of them must be admins on their respective boxes, and all must use the same password.

Running the Remote Debugger (For Real)
Again returning to the list of articles on MSDN (http://msdn.microsoft.com/en-us/library/y7f5zaaa.aspx), the next article is How to: Run the Remote Debugging Monitor (http://msdn.microsoft.com/en-us/library/xf8k2h6a.aspx). You have two choices at this point: run the monitor as needed, or run the monitor as a service. Because we will be constantly debugging on the HP Slate, I decided to run the monitor as a service, which requires a little extra work. You will need to run Local Security Policy; under Local Policies\User Rights Assignment there is a policy named Log on as a service that must be granted to the user that will be running the debugging monitor.
Performing a Remote Debug
I’m still struggling with this a little. I cannot figure out how to have VS2010 compile and launch the application I want to debug on the remote machine. Instead, copy the compiled project files from the bin\debug folder of your project to the remote machine (I suggest a location that is easy to access and replace files in). Run the executable on the remote machine, then in VS2010 on the host machine open the Attach to Process dialog (Ctrl+Alt+P). There is a drop-down named Qualifier that, when opened, will detect the machine you’re trying to remote debug. Select the remote machine, then attach to the process on the remote machine just as you would if the process were on the VS2010 host machine.

Thoughts
It only took a few hours to figure this all out. The part that kept killing me was the fact that even after you create a user account, if you don’t log on as that user it will not be available for use. Remote debugging is definitely cool! It helped us figure out quite a few issues in our remote code very quickly. Now that I know how to set up a remote debugging environment, I will be using it more often.

No more IIS7 WCF REST 404 Error!

Recently I’ve been trying to host a WCF REST .NET 4.0 service in IIS7. I downloaded the VS2010 template for creating REST services, as explained here. Running the sample application found on that website worked fine from VS2010. I then switched to hosting the service in IIS7. The result, however, was a 404 error, not exactly what I was hoping for. My coworker Dan Hanan tried the same thing on his machine with complete success. We looked at the service configuration, IIS7 settings, and other obscure settings without success. Google wasn’t especially helpful, since most of the results kept saying things like “add an *.svc file” (which is not required or used in .NET 4.0 REST services); others mentioned changing an IIS7 handler mapping so that the *.svc file would not be required, but none of these solutions helped. Tracing didn’t get me any closer either.

Still, the problem felt like a server issue, so I tried to narrow the search. I eventually came across this blog. There was a link to a question on the MSDN forums that mentions a handler mapping named “svc-Integrated-4.0”. When I looked at my handler mappings in IIS7, it did not contain that mapping. The last post on that question contained a link to the MSDN article One-Time Setup Procedure for the Windows Communication Foundation Samples, explaining some steps that need to be taken in order to use .NET 4.0 WCF services in IIS7. When I tried to run the aspnet_regiis and ServiceModelReg.exe commands explained in the post, I only ran into more errors.

Frustrated, I talked to my coworker Travis Schilling to see if he had any ideas and to see if his IIS7 contained the mapping. It did, and he was pretty sure it had been installed by VS2010. After a little discussion we decided it would be best to do a repair install of VS2010. After the repair installation the mapping showed up in IIS7 and the REST service ran as expected. I’m not sure why the handler mapping wasn’t installed before; I can only assume it’s because I installed VS2010 before I installed IIS7.