Wednesday, December 9, 2009

Cincinnati Bell Wireless and 3G

After some years of using a Nokia 6300 purchased through my carrier, Cincinnati Bell Wireless, I bought a completely unlocked Nokia 5800 from Dell.

I get good GSM 2G coverage around home, but where I work (50 miles away) I've had coverage issues. At my desk in the building, I usually have one or zero bars. Sometimes, though, I see a 3.5G or 3G icon where my usual GSM tower icon lives, but when I attempt to use voice or data in that 3G mode, it fails.

Through some Internet searching, I came across some information posted by "mdo77" at HowardForums:

1) Cincinnati Bell's 3G frequencies are 1700/2100 MHz, which is actually the same as T-Mobile's. AT&T uses the 850/1900 frequencies. All three companies (CBW, ATT, TMO) have the same 2G (EDGE) voice/data frequencies, however.

2) Any CBW phone that has any chance of 3G working has the 1700/2100 bands.

3) CBW's 3G footprint is local only. They have no 3G roaming agreements (at least any that are active). If you're outside of the Cincy/Dayton area, you're EDGE only.

4) Even if/when roaming in 3G is available for CBW customers, it will only be on the 1700/2100 (TMO) bands.

I suspect what's happening is that my phone is picking up the AT&T 3G signal and favoring it over any standard 2G GSM signal (there's actually an AT&T wireless building nearby). However, AT&T doesn't let me through because I'm not an AT&T subscriber. I suspect that if I tried a SIM card from an AT&T Wireless friend, I would get AT&T 3G right at my desk.

According to Nokia's specifications (http://www.nokiausa.com/find-products/phones/nokia-5800-Xpressmusic/specifications), the Nokia 5800 operates at 850/1900. It would seem, then, that the Nokia 5800 would achieve 3G on AT&T Wireless but not on Cincinnati Bell Wireless.

What troubles me, though, is that Cincinnati Bell Wireless sells the Nokia 5800. If the above is all true, then why is Cincinnati Bell Wireless selling a 3G phone that won't work on their own 3G network? Is there any hope of the Nokia 5800 ever working at 3G speeds on their network?

Meanwhile, I've taken my phone out of Dual Mode so that it restricts itself to the GSM 2G network.

Monday, December 7, 2009

Another Brick in the Wall

Moments (ok, hours) ago, I posted about my difficulties with WCF authentication and identity propagation. I've now made some progress on the message-level security front. However, this progress has only removed some of the bricks from the WCF wall I keep hitting.

I've learned now that makecert.exe needs to be run with full administrative privileges (not just a Windows SDK command prompt) when running Vista or Windows 7 (I'm on the latter). It can make certificates all day long, but when it comes to saving them to the local certificate stores, it will fail. By running an administrative command prompt (in my case, I run the Windows SDK command prompt as Administrator), makecert.exe can successfully write certificates to the local store, e.g.:

makecert.exe -sr LocalMachine -ss My -a sha1 -n CN=LocalDevServerCert -sky exchange -pe


Doing this, I now have my externally-accessible outer WCF service communicating with my internal WCF service. The inner service is using wsHttpBinding, a custom UserNamePasswordValidator and the newly installed custom certificate:


<wsHttpBinding>
  <binding name="customServiceToFacadeBinding">
    <security mode="Message">
      <message clientCredentialType="UserName"/>
    </security>
  </binding>
</wsHttpBinding>
...
<serviceBehaviors>
  <behavior name="...">
    <serviceCredentials>
      <userNameAuthentication
        userNamePasswordValidationMode="Custom"
        customUserNamePasswordValidatorType="..., ..." />
      <serviceCertificate findValue="LocalDevServerCert" storeLocation="LocalMachine" storeName="My" x509FindType="FindBySubjectName" />
    </serviceCredentials>
  </behavior>
</serviceBehaviors>


Likewise, in the outer-service's endpoint configuration to the inner-service, I'm using related configuration:


<wsHttpBinding>
  <binding name="customServiceToFacadeBinding">
    <security mode="Message">
      <message clientCredentialType="UserName"/>
    </security>
  </binding>
</wsHttpBinding>
...
<client>
  <endpoint name="WSHttpBinding_AttorneyFacade"
            address="..."
            binding="wsHttpBinding"
            bindingConfiguration="customServiceToFacadeBinding"
            contract="..."
            behaviorConfiguration="ClientCredentialsBehavior">
    <identity>
      <!-- Usually, this is 'localhost', but in cert mode, it needs to match the subject(?) of the certificate -->
      <dns value="LocalDevServerCert" />
    </identity>
  </endpoint>
</client>


In this way, I can now explicitly set the ClientCredentials.UserName.UserName in the outer-service's WCF client and invoke the inner-service's operation and the identity flows through.
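For reference, here's a rough sketch of what that outer-to-inner call looks like in code. The generated proxy type (AttorneyFacadeClient) and the operation name are illustrative -- only the endpoint name comes from the configuration above:

public class FacadeCaller
{
    public string CallAsUser(string userName, string password, int attorneyId)
    {
        var client = new AttorneyFacadeClient("WSHttpBinding_AttorneyFacade");
        client.ClientCredentials.UserName.UserName = userName;   // the identity to propagate
        client.ClientCredentials.UserName.Password = password;   // checked by the custom UserNamePasswordValidator
        return client.GetAttorneyName(attorneyId);               // the inner service sees the caller's identity
    }
}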

But wait, there's more!

I'm still stuck where Silverlight calls the outer-service. I'm limited to the customBinding where I specify a transport (e.g. httpTransport or httpsTransport) or basicHttpBinding (i.e. HTTP, HTTPS). Either way, if I attempt to use transport-level security, such as UserNameOverTransport on customBinding or TransportWithMessageCredential on basicHttpBinding, I'm left with errors indicating that WCF won't send credentials over a non-secure transport -- that is, HTTP. Again, HTTPS is not supported by Cassini, and I can't get Visual Studio to work with IIS.
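To make that failure concrete, here's the same sort of binding expressed in code rather than config (a sketch, not my actual setup): with TransportWithMessageCredential, WCF insists on an https base address, which Cassini can't provide.

using System.ServiceModel;
using System.ServiceModel.Channels;

class SilverlightFacingBinding
{
    static Binding Create()
    {
        // Credentials ride inside the message, but the transport must still be secure.
        var binding = new BasicHttpBinding(BasicHttpSecurityMode.TransportWithMessageCredential);
        binding.Security.Message.ClientCredentialType = BasicHttpMessageCredentialType.UserName;
        return binding; // hosted over plain http://, this fails before any credentials are ever sent
    }
}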

Next I'm going to investigate using certificates for message-level protection between the Silverlight client and the outer-service. In production, I'll probably use transport-level security via SSL on IIS; however, for local development, I could accept certificate-based message-level protection.

Still, why can't Cassini just support SSL? Or, why can't WCF allow credentials to be sent over an unsecured transport when bound to localhost? Either solution would make developers' lives easier!

Butting Heads with WCF Development

I'm currently working on a project where there is a Silverlight client calling an outer, externally accessible WCF service, which in turn calls an internal WCF service. My goal is to have the user authenticate and have their identity propagated on each WCF operation call, without some sneaky reliance on ASP.NET's FormsAuthentication. I've actually had such a sneaky method half-working three times now, but I feel bad about it.

The most likely "good" solution I keep encountering out in the wild entails sending the username and password credentials from the Silverlight client to the outer WCF service as ClientCredentials, validating them in a custom UserNamePasswordValidator and hooking up my own custom MembershipProvider. Then, passing just the username down to the inner WCF service on the outer service's ClientCredentials, trusting that username implicitly, and again hooking up my own custom MembershipProvider.
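For the curious, a minimal sketch of such a validator might look like the following. The delegation to Membership.ValidateUser is just one assumption about how the custom MembershipProvider could be hooked in:

using System.IdentityModel.Selectors;
using System.IdentityModel.Tokens;
using System.Web.Security;

public class CustomUserNameValidator : UserNamePasswordValidator
{
    public override void Validate(string userName, string password)
    {
        // Delegate to whatever MembershipProvider is configured in web.config.
        if (!Membership.ValidateUser(userName, password))
        {
            throw new SecurityTokenException("Unknown username or incorrect password.");
        }
    }
}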

However, to use any form of credentials passing in WCF, you must either have transport-level security or message-level security. Out of the box, transport-level security means SSL while message-level security means certificates. Here the fun begins.

I'll gladly use transport-level SSL security. It's good enough for now, and it's relatively easy. Mind you, I don't necessarily need SSL between the outer-service and inner-service, but I can live with that. Unfortunately, Visual Studio's default web environment, Cassini (WebDev.WebServer.exe), does not support SSL. I suspect this limitation isn't just to torture developers trying to do legitimate things, but to prevent cheapskates from trying to run their public production web applications on it.

Without SSL, WCF refuses to even attempt to propagate any credentials, giving quite descriptive error messages like "Could not find a base address that matches scheme https for the endpoint with binding BasicHttpBinding. Registered base address schemes are [http]" and "Give up all hope." It restricts this to protect us from ourselves. I would argue they could at least waive the SSL requirement for localhost. This would still thwart the cheapskates while exonerating most of the developers wrongly punished by this limitation.

So I can't accomplish what I want to accomplish with my current development workstation and configuration. Temporarily peeling myself off of that brick wall, I downloaded the WCF Samples and started investigating message-level security. After much tinkering, I encountered a missing test certificate, so I followed the happy, simple instructions to execute their provided batch file. I was greeted with a handful of errors, including some Access Denied errors. I tried again from a command prompt with greater privileges and got a different set of errors.

So I turned around and went back to the brick wall that was SSL, this time aiming to have Visual Studio use IIS instead of Cassini. I went to the Web project properties of my WCF project and selected "Use Local IIS Web Server". When I attempted to save, I was slapped with this ominous error:

"To access local IIS Web Sites, you must install the following IIS components: IIS 6 Metabase and IIS 6 Configuration Compatibility, Windows Authentication. In addition, you must run Visual Studio in the context of an administrator account. For more information, press F1."

With little expectation, I pressed F1. I was not disappointed -- only because I had such low expectations to begin with: nothing happened.

Now I suspect part of this problem is me. I'm still new to WCF, having only worked with it for a few months. I've not been to any professional training courses on WCF, have only one book about it, and have read only a couple hundred blogs, articles and MSDN references.

But is there also a lack of support for simple developer environments?

Tuesday, December 1, 2009

Soggy Cell Phone

As a result of my own poor judgement, my Nokia 5800 XpressMusic was drowned in a hot tub. The phone itself was only under water momentarily, but the battery was submerged overnight. The next morning, I had to brush away a teal-green corrosive coating that had grown on one of the battery's contacts, caused by a small current flowing across the terminals underwater, no doubt aided by the highly chemically treated hot tub water.



My first action was to take off the cover of the phone, remove the battery, SIM card and microSD card and set them on a window sill to dry. After hours of drying and no visible signs of wetness, I attempted to install the battery and turn it on, but nothing happened. Thinking the battery might have been fully discharged, I plugged it into the charger to charge for a half-hour or so to get some juice in it.



Afterwards, it turned halfway on one time. By halfway, I mean the screen lit up and displayed "Nokia", but the handshake and tones never arrived. I turned it off and back on, and it was worse: only the red light and the omni button lit up, steady and glowing. I even tried the battery from my son's Nokia 5800, but it yielded the same results.



Fearing the worst, I did some research online. I learned some things, like:


  1. Put a drowned cellphone in a bowl of uncooked rice to dry it thoroughly.

  2. Never plug in a cell phone that may still have water in it.


Oops! I had killed my phone! My son was upset, and it wasn't even his phone (this time!). He even offered to let me use his phone (who says teenagers can't be sweet?).



I reverted to my scotch-taped Nokia 6300 (great phone -- just not a smartphone). After several days of no GPS, no touch screen, no WiFi, etc., my son started hopping around me, bright-eyed and bursting at the seams with a secret. Just before he would've exploded, he spilled to me that he "fixed my phone." On his own, he had tried his battery in my dead phone, still sitting in pieces on the window sill. This time, though, it turned on! I presume that when I had tried his battery in my phone, it had not yet dried, giving a false negative.



I'm now waiting with fingers crossed for my replacement battery I ordered online from Radio Shack (best price from a name I recognize). I'll post an update with the result after it arrives.

Thursday, October 8, 2009

Dark Code

I've recently encountered several articles dealing with the mysteries of modern physics. Despite most things in physics having names describing what they are, the mysteries are all named to describe that we don't know what they are: Dark Matter, Dark Energy and -- the latest member of the mysterious Darks gang -- Dark Flow.

Darks are more than just a mystery, though. Having a name makes the Darks tangible. They can stand as an answer on their own. Why do the galaxies not fly apart? Dark Matter.

Some of my fellow programmers and I started lamenting that there isn't a mystery name we can apply to something in our line of work. So that programmers, too, can wield this power, we hereby dub the term Dark Code.

When someone is working with a system they helped build, but they encounter a behavior they can't account for, no longer must they moan "I'm not touching the record, but something is updating it." Instead, the answer is Dark Code. When you encounter an irreproducible bug: Dark Code.

So go forth and close all of those open issues in your issue management system, for they now have an explanation: Dark Code.

Tuesday, August 11, 2009

Separating MVC Projects in a WCF World

I'm working on a new ASP.NET MVC project where the business logic is all in a separate WCF services layer. Thus, our controllers orchestrate the UI and call the WCF services for the business logic.

Most simple MVC books and tutorials show a single project with separate folders for the Models, Views and Controllers. If you read long enough, you'll find suggestions of splitting the Models, Views and Controllers into their own projects/assemblies. Normally, a strongly-typed View would depend on the Model. A Controller would prepare the Model for the View, so it too would depend on the Model. But the Model never depends on the View nor the Controller. We've attempted this on our project, but we hit a snag.

The Controller project has service references to our WCF services. Service references conveniently generate client proxies for the services and their data transfer objects right inside the Controller project. These DTOs often suffice as a Model for our Views. However, there are many places where our Views need a more View-friendly Model, so we adapt some of those DTOs into UI-specific Model classes housed in our Model project.

The problem comes when our UI-specific Model needs to contain some of the DTOs. Because the UI-specific Model classes are in the Model project, but the classes our service references conveniently generated for us are in the Controller project, we can't do this without adding an assembly reference from Model to Controller. That breaks the rule that the Model should not depend upon anything.

My short-term solution has been to do away with the separate Model project and just house the UI-specific Model classes in the Controller project, alongside the generated classes. I've not yet settled on a long-term solution. Splitting up the generated code (that is, splitting the generated service client and the generated data classes) seems futile. Perhaps the generated data classes shouldn't be used as a Model; instead, each one could have its own UI-specific counterpart in a separate Model project. Time to ponder...
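To illustrate that last idea, here's a hypothetical sketch: a plain view model lives in the Model project, and the mapping from the generated DTO happens in the Controller project (which already references both). All of the type names here are made up for illustration:

// Stand-in for a class the service reference would generate in the Controller project.
public class AttorneyDto
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string PhoneNumber { get; set; }
}

// Lives in the Model project; no dependency on the generated DTOs.
public class AttorneyViewModel
{
    public string DisplayName { get; set; }
    public string Phone { get; set; }
}

// Lives in the Controller project, alongside the service reference.
public static class AttorneyMapper
{
    public static AttorneyViewModel ToViewModel(AttorneyDto dto)
    {
        return new AttorneyViewModel
        {
            DisplayName = dto.LastName + ", " + dto.FirstName,
            Phone = dto.PhoneNumber
        };
    }
}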

Monday, August 3, 2009

ASP.NET Multi-environment Deployment Configuration

I'm still new to ASP.NET, fresh off of the JEE bandwagon. In many ways, deploying an ASP.NET application to IIS can be blazingly simple (e.g. "just copy the built artifacts to the site folder"). Like JEE, though, there doesn't seem to be a good, widely agreed-upon solution for separating environment-specific configuration from application-specific configuration.


Like many shops, we have several environments. We have two "debug" environments and two "non-debug" environments where we want things quiet and locked down. The "debug" environments can show full exception detail, verbose logging, etc. The "non-debug" environments -- including production -- do not. Our UAT environment is configured as a "non-debug" environment so that it's as close to how the application will be experienced in production as possible. Furthermore, the databases, internal URLs and such also vary from environment to environment.


Most of the solutions I came across suggested maintaining multiple web.configs, one for each environment. I dislike such solutions because they require duplicate maintenance (I'm lazy, and humans inevitably fail to do the "dual" part). I did find one suggestion I liked: use XSL transformation to modify points in your web.config for each environment.


Thus far, I have crafted an XML stylesheet for our dev and uat environments. Each of these is designed to transform our web.config from source control (set up for local environments) into a configuration specific to that environment. The stylesheet itself largely clones the existing XML, replacing specific values or attributes, such as:



  • Connection strings (substituting variables where necessary)
  • Exception detail (toggled on or off)
  • Debuggable ASPX compilation (toggled on or off)
  • WCF service endpoint addresses (URLs specific to each environment)
  • Logging listener log file paths
  • etc.


For example, to control whether debugging symbols are compiled into the pages, the following template overrides the debug attribute (this version, for a "debug" environment, turns it on):


    <!-- Set the debug compilation to enable/disable insertion of debugging symbols into the compiled page. Because this affects performance, set this value to true only during development. -->
    <xsl:template match="/configuration/system.web/compilation/@debug">
      <xsl:attribute name="debug">true</xsl:attribute>
    </xsl:template>


I've also broken my stylesheets into a base one for the web.config (e.g. my AppServices project's stylesheet is AppServices.xsl) as well as an environment-specific stylesheet that sets the variables for it (e.g. AppServices-dev.xsl, AppServices-uat.xsl):


    <xsl:variable name="logRoot" select="'\inetpub\logs\AppLogs\CIN2010\Dev\AppServices'" />

    <xsl:include href="./AppServices.xsl" />


Thus far, the plan is working quite well. The Hudson CI server is building and automatically deploying the applications, transforming the web.config as it goes. From time to time I have to alter the stylesheets to extract another environment-specific setting, but this is rare and straightforward.
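For anyone curious, here's a minimal sketch of applying one of these stylesheets from .NET -- not necessarily how the Hudson job does it, just one way, with illustrative file names:

using System.Xml.Xsl;

class TransformWebConfig
{
    static void Main()
    {
        var xslt = new XslCompiledTransform();
        xslt.Load("AppServices-dev.xsl");                        // env-specific stylesheet, which includes AppServices.xsl
        xslt.Transform("web.config", "web.transformed.config");  // source config -> environment-specific output
    }
}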

Damn DRM

With my new Nokia 5800 XpressMusic came a composite video and stereo audio output cable. This wasn't a feature I had required of my new phone, but it certainly earns it some geek points. It also came with a $50 gift credit for Amazon's Unbox video service. I eagerly downloaded Dr. Horrible, which I had heard so much about.
After watching Dr. Horrible on my phone, I next wanted to try playing it to my TV via TVersity. However, TVersity couldn't handle transcoding whatever format Amazon delivered it in. A week later, I tried plugging the A/V output cable into my TV directly, only to find a black screen with an icon in the middle: a key with a slash through it -- no doubt a culturally agnostic way of telling me the content was locked.
So I plan to let the rest of my $50 gift credit go to waste because it turns out $50 of Amazon Unbox video content isn't worth a dime.

Wednesday, June 24, 2009

Fulfilling a Dream

Before I started my new job, I had many conversations with the Senior Software Architect discussing domain-driven design, isolating the business layer, and so on. He had faith in this approach and selected it for their new architecture. Now I get to participate in that new architecture.

It has been a good experience so far. The business layer is a host of separate projects that expose themselves as internal business services with data transfer objects matching the domain. The separate presentation layer is an ASP.NET MVC application that consumes these services. More specifically, the ASP.NET MVC controllers communicate with the business layer's application services through WCF proxies, wired together using the Unity DI container. These technologies have made it relatively easy to adopt this architecture.
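As a rough sketch of that wiring (all names here are made up -- the real services and controllers are different, and a Unity-aware controller factory, not shown, resolves the controllers):

using Microsoft.Practices.Unity;
using System.Web.Mvc;

public interface IOrderService
{
    string GetOrderStatus(int orderId);
}

// Stand-in for the proxy a WCF service reference would generate
// (the real one derives from ClientBase<IOrderService>).
public class OrderServiceClient : IOrderService
{
    public OrderServiceClient(string endpointConfigurationName) { }
    public string GetOrderStatus(int orderId) { return "Pending"; }
}

public class OrderController : Controller
{
    private readonly IOrderService _orders;

    // Unity supplies the WCF proxy via constructor injection.
    public OrderController(IOrderService orders) { _orders = orders; }

    public ActionResult Status(int id)
    {
        ViewData["Status"] = _orders.GetOrderStatus(id);
        return View();
    }
}

public static class ContainerConfig
{
    public static IUnityContainer Build()
    {
        var container = new UnityContainer();
        // Map the contract to the generated client, passing the endpoint name from config.
        container.RegisterType<IOrderService, OrderServiceClient>(
            new InjectionConstructor("WSHttpBinding_IOrderService"));
        return container;
    }
}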

It's a breath of fresh air from JEE's stateless session beans, Struts 1.1 and a host of anti-patterns. To be fair, my ex-colleagues and I recognize the error of our ways and the architectural limitations (or at least impedances) imposed by the technologies; however, that application is already into its maintenance cycle, and there are not enough people there with the faith to purposefully move towards a better architecture.

Monday, June 22, 2009

Losing My Religion

Many programmers defend their languages religiously, and I've done the same for my language of choice, Java, at least for the sake of starting a fight. However, just today I started a new job as a .NET developer. I'm quickly overcoming the hurdles of switching language, libraries, platform, source control, issue management -- not to mention an entirely new business space for me.

It's amazing how little difference a language can make. I had always theorized as much, but now I'm experiencing it first hand, without the safety and controls of a laboratory experiment. The concept of source control is largely the same between Subversion and Vault. The language keywords are finite enough to be easily translatable between Java and C#. Many of the libraries from Java also exist in .NET (Spring.NET, NUnit, MemCache, etc.). The biggest difference so far has been the IDE -- Visual Studio 2008 vs. eclipse.

I'm finding many of the features I love in eclipse to be absent in Visual Studio 2008 (or, at least, undiscoverable to me as of yet). For example, what I affectionately call the "God Key" in eclipse (Ctrl+1) -- because it can do anything -- doesn't seem to have an equivalent in Visual Studio. On the other hand, because Microsoft controls so much of the stack, things that were frustratingly difficult in eclipse are a breeze. For example, to run my project on a server, I simply click "Run" and it's got a local test server all prepped for me. In eclipse, it's a nightmare of getting various ports and Server Runtimes all configured correctly (the price of freedom, I suppose).

So far, it's been easier than I thought to switch, but this switch has only just begun.

Tuesday, June 9, 2009

Nokia 5800 XpressMusic

I recently switched from a Nokia 6300 S40 to the Nokia 5800 XpressMusic, an S60v5 all-touch phone. This is despite my ranting that I'd never use an on-screen keyboard and had to have a tactile keyboard. But, after a week or so, I'm finding that the on-screen keyboard with haptic feedback isn't that bad. My accuracy has already improved. With my Zagg Invisible Shield screen protector, the screen is a bit less slick, so my fingertips don't slide as I type (but my fingertip also doesn't slide as well when I drag). And having an S60 OS means there's even more software available. Early verdict: win!

Tuesday, January 20, 2009

Beside Myself with Sidebars

Since I finally updated to Vista (it's not as bad as everyone said), I've had to choose between the Vista Sidebar and Google Sidebar.  The Google Sidebar is re-sizable and integrates nicely with Google Desktop.  But the Vista Sidebar has some gadgets I just can't find for Google.  The best Google Sidebar gadget I can find for playing music is Music Player, but it doesn't seem to compare to Imp's Player for Vista.  Plus, Logitech makes some cool gadgets for Vista, but not for Google.  And for Google, I can't find just a simple digital clock with no seconds, no date (I use Google Calendar 2 for that), etc.

I know, I know... As a developer, I should just make my own gadget.  But sometimes, I just want to be a user!

Friday, January 16, 2009

Kogan Agora, There You Aren't!

After waiting so long to blog about this, perhaps I jinxed it when I finally did. The Kogan Agora is indefinitely delayed. It's one of the top news stories in the mobile/Android community this morning.

This is the source of much personal disappointment and future caution. With my pre-order, I would've become a new Kogan customer. Alas, that didn't happen, so I have no loyalty. If something better comes along in the meantime (you know -- before "indefinitely" in the future), I won't hold out for Kogan. In fact, I won't order from Kogan again based on any evidence short of a shipping product. Furthermore, I suspect that overcoming the design shortcomings that spurred this last-minute delay will affect the price of the phone, making it a far less clear winner.

Sad, too, as I had just started making a Kogan Agora skin for the Android Emulator last night!

Wednesday, January 14, 2009

Kogan Agora, Where Are You?

I've been excited about Google Android since I first heard about it, but then I was disappointed by the T-Mobile G1 from HTC. When I then heard about the completely unlocked Kogan Agora, I was once again excited about the possibilities.

After I rambled on and on about it, my wife picked up on it and secretly ordered a Kogan Agora Pro for me in December (thanks, dear!). Now I'm patiently awaiting the official ship date of January 29, 2009, eagerly following the news and official Kogan blog.

There are naysayers out there, and I recognize that the phone has some unfortunate compromises and there's a fear of the date slipping. Still, this phone is an Android phone, and it will far exceed my Nokia 6300.

If..., no, when it finally arrives, I'll write a review here.