Wednesday, December 9, 2009
Cincinnati Bell Wireless and 3G
I get good GSM 2G coverage around home, but where I work (50 miles away) I've had coverage issues. At my desk in the building, I usually have one or zero bars, though sometimes I see a 3.5G or 3G icon where my usual GSM tower icon lives. But when I attempt to use voice or data in that 3G mode, it fails.
Through some Internet searching, I came across some information posted by "mdo77" at HowardForums:
1) Cincinnati Bell's 3G frequencies are 1700/2100 MHz, which is actually the same as T-Mobile's. AT&T uses the 850/1900 frequencies. All three companies (CBW, ATT, TMO) have the same 2G (EDGE) voice/data frequencies, however.
2) Any CBW phone that has any chance of 3G working has the 1700/2100 bands.
3) CBW's 3G footprint is local only. They have no 3G roaming agreements (at least any that are active). If you're outside of the Cincy/Dayton area, you're EDGE only.
4) Even if/when roaming in 3G is available for CBW customers, it will only be on the 1700/2100 (TMO) bands.
I suspect what's happening is that my phone is picking up the AT&T 3G signal and favoring it over any standard 2G GSM signal (there's actually an AT&T wireless building nearby). However, AT&T doesn't let me through because I'm not an AT&T subscriber. I suspect if I tried a SIM card from an AT&T Wireless friend, I would get AT&T 3G right at my desk.
According to Nokia's specifications (http://www.nokiausa.com/find-products/phones/nokia-5800-Xpressmusic/specifications), the Nokia 5800 operates at 850/1900. It would seem, then, that the Nokia 5800 would achieve 3G on AT&T Wireless but not on Cincinnati Bell Wireless.
What troubles me, though, is that Cincinnati Bell Wireless sells the Nokia 5800. If the above is all true, then why is Cincinnati Bell Wireless selling a 3G phone that won't work on their own 3G network? Is there any hope of the Nokia 5800 ever working at 3G speeds on their network?
Meanwhile, I've now taken my phone out of Dual Mode so that it restricts itself to the GSM 2G network.
Monday, December 7, 2009
Another Brick in the Wall
I've now learned that makecert.exe needs to be run with full administrative privileges (not just a Windows SDK command prompt) when running Vista or Windows 7 (I'm on the latter). It can make certificates all day long, but when it comes to saving them to the local certificate stores, it will fail. By running an administrative command prompt (in my case, I run the Windows SDK command prompt as Administrator), makecert.exe can successfully write certificates to the local store, e.g.:
makecert.exe -sr LocalMachine -ss My -a sha1 -n CN=LocalDevServerCert -sky exchange -pe
Doing this, I now have my externally-accessible outer WCF service communicating with my internal WCF service. The inner service is using wsHttpBinding, a custom UserNamePasswordValidator and the now-installed custom certificate:
<wsHttpBinding>
<binding name="customServiceToFacadeBinding">
<security mode="Message">
<message clientCredentialType="UserName"/>
</security>
</binding>
</wsHttpBinding>
...
<serviceBehaviors>
<behavior name="...">
<serviceCredentials>
<userNameAuthentication
userNamePasswordValidationMode="Custom"
customUserNamePasswordValidatorType="..., ..."
/>
<serviceCertificate findValue="LocalDevServerCert" storeLocation="LocalMachine" storeName="My" x509FindType="FindBySubjectName" />
</serviceCredentials>
</behavior>
</serviceBehaviors>
Likewise, in the outer-service's endpoint configuration to the inner-service, I'm using related configuration:
<wsHttpBinding>
<binding name="customServiceToFacadeBinding">
<security mode="Message">
<message clientCredentialType="UserName"/>
</security>
</binding>
</wsHttpBinding>
...
<client>
<endpoint name="WSHttpBinding_AttorneyFacade"
address="..."
binding="wsHttpBinding"
bindingConfiguration="customServiceToFacadeBinding"
contract="..."
behaviorConfiguration="ClientCredentialsBehavior">
<identity>
<!-- Usually, this is 'localhost', but in cert mode, it needs to match the subject(?) of the certificate -->
<dns value="LocalDevServerCert" />
</identity>
</endpoint>
</client>
In this way, I can now explicitly set the ClientCredentials.UserName.UserName in the outer-service's WCF client and invoke the inner-service's operation and the identity flows through.
But wait, there's more!
I'm still stuck where Silverlight calls the outer-service. I'm limited to the customBinding where I specify a transport (e.g. httpTransport or httpsTransport) or basicHttpBinding (i.e. HTTP, HTTPS). Either way, if I attempt to use transport-level security, such as UserNameOverTransport on customBinding or TransportWithMessageCredential on basicHttpBinding, I'm left with errors indicating that WCF won't send credentials over a non-secure transport -- that is, HTTP. Again, HTTPS is not supported by Cassini, and I can't get Visual Studio to work with IIS.
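For reference, the Silverlight-facing customBinding I'm wrestling with looks roughly like this (a sketch; the binding name is mine, and I've shown the plain-HTTP form that triggers the problem):

```xml
<customBinding>
  <binding name="silverlightToOuterBinding">
    <!-- Silverlight limits you to binary or text encoding over HTTP(S) -->
    <binaryMessageEncoding />
    <!-- swapping this for httpsTransport is exactly what Cassini can't serve -->
    <httpTransport />
  </binding>
</customBinding>
```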
Next I'm going to investigate using Certificates for message-level protection between the Silverlight client and the outer-service. In production, I probably will use transport-level security via SSL on IIS; however, for local development, I could accept certificate-based message-level protection.
Still, why can't Cassini just support SSL? Or, why can't WCF allow credentials to be sent over an unsecured transport when bound to localhost? Either solution would make developers' lives easier!
Butting Heads with WCF Development
The most likely "good" solution I keep encountering on the web entails sending the username and password credentials from the Silverlight client to the outer WCF service as ClientCredentials, validating them in a custom UserNamePasswordValidator and hooking up my own custom MembershipProvider. Then, passing just the username down to the inner WCF service on the outer service's ClientCredentials, trusting that username implicitly, and again hooking up my own custom MembershipProvider.
However, to use any form of credentials passing in WCF, you must either have transport-level security or message-level security. Out of the box, transport-level security means SSL while message-level security means certificates. Here the fun begins.
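Concretely, the transport-plus-credentials combination would look something like the following on basicHttpBinding (a sketch; the binding name is illustrative):

```xml
<basicHttpBinding>
  <!-- TransportWithMessageCredential: SSL secures the channel,
       while the UserName credential rides inside the message -->
  <binding name="sslWithUserNameBinding">
    <security mode="TransportWithMessageCredential">
      <message clientCredentialType="UserName" />
    </security>
  </binding>
</basicHttpBinding>
```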
I'll gladly use transport-level SSL security. It's good enough for now and it's relatively easy. Mind you, I don't necessarily need SSL between the outer-service and inner-service, but I can live with that. Unfortunately, Visual Studio's default web environment, Cassini (WebDev.WebServer.exe), does not support SSL. I suspect this limitation isn't just to torture developers trying to do legitimate things, but to prevent cheapskates from trying to run their public production web applications on it.
Without SSL, WCF refuses to even attempt to propagate any credentials, giving quite descriptive error messages like "Could not find a base address that matches scheme https for the endpoint with binding BasicHttpBinding. Registered base address schemes are [http]" and "Give up all hope." It restricts this to protect us from ourselves. I would argue they could at least accept SSL from localhost. This would still thwart the cheapskates while exonerating most of the developers wrongly punished by this limitation.
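For what it's worth, the "base address" that first error refers to would, in a self-hosted scenario, be registered like this (a self-hosting sketch with made-up names; when hosted in IIS or Cassini, the base addresses come from the web server's bindings instead, which is the whole problem):

```xml
<service name="MyApp.OuterService">
  <host>
    <baseAddresses>
      <!-- Cassini can never supply an https scheme here;
           a self-host or IIS with an SSL binding could -->
      <add baseAddress="https://localhost:8443/OuterService" />
    </baseAddresses>
  </host>
</service>
```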
So I can't accomplish what I want to accomplish in my current development workstation and configuration. Temporarily peeling myself off of that brick wall, I downloaded the WCF Samples and started investigating message-level security. After much tinkering, I encountered a missing test certificate, so I followed the happy, simple instructions to execute their provided batch file. I was greeted with a handful of errors, including some Access Denied. I tried again from a command prompt with greater privileges and got a different set of errors.
So I turned around and went back to the brick wall that was SSL, this time aiming to have Visual Studio use IIS instead of Cassini. I went to the Web project properties of my WCF project and selected "Use Local IIS Web Server". When I attempted to save, I was slapped with this ominous error:
"To access local IIS Web sites, you must install the following IIS components: IIS 6 Metabase and IIS 6 Configuration Compatibility, Windows Authentication. In addition, you must run Visual Studio in the context of an administrator account. For more information, press F1." With little expectation, I pressed F1. I was not disappointed -- only because I had such low expectations to begin with: nothing happened.
Now I suspect part of this problem is me. I'm still new to WCF, having only worked with it for a few months. I've not been to any professional training courses on WCF, I have only one book about it, and I've only read a couple hundred blogs, articles and MSDN references.
But is there also a lack of support for simple developer environments?
Tuesday, December 1, 2009
Soggy Cell Phone
As a result of my own poor judgment, my Nokia 5800 XpressMusic was drowned in a hot tub. The phone itself was only under water momentarily, but the battery was submerged overnight. The next morning, I had to brush away a teal-green corrosive coating that had grown on one of the battery's contacts, caused by a small current flowing across the terminals underwater, no doubt aided by the highly chemically treated hot tub water.
My first action was to take off the cover of the phone, remove the battery, SIM card and microSD card and set them on a window sill to dry. After hours of drying and no visible signs of wetness, I attempted to install the battery and turn it on, but nothing happened. Thinking the battery might have been fully discharged, I plugged it into the charger to charge for a half-hour or so to get some juice in it.
Afterwards, it turned half-way on one time. By half-way, I mean the screen lit up and displayed "Nokia", but the handshake and tones never arrived. I turned it off and back on, and it was worse: only the red light and the omni button lit up, steady and glowing. I even tried the battery from my son's Nokia 5800, but it yielded the same results.
Fearing the worst, I did some research online. I learned some things, like:
- Put a drowned cellphone in a bowl of uncooked rice to dry it thoroughly.
- Never plug in a cell phone that may still have water in it.
Oops! I had killed my phone! My son was upset, and it wasn't even his phone (this time!). He even offered to let me use his phone (who says teenagers can't be sweet?).
I reverted to my scotch-taped Nokia 6300 (great phone -- just not a smartphone). After several days of no GPS, no touch screen, no WiFi, etc., my son started hopping around me, bright-eyed and bursting at the seams with a secret. Just before he would've exploded, he spilled to me that he had "fixed my phone." On his own, he had tried his battery in my dead phone, still sitting in pieces on the window sill. This time, though, it turned on! I presume that when I had tried his battery in my phone, it had not yet dried, giving a false negative.
I'm now waiting with fingers crossed for my replacement battery I ordered online from Radio Shack (best price from a name I recognize). I'll post an update with the result after it arrives.
Thursday, October 8, 2009
Dark Code
Tuesday, August 11, 2009
Separating MVC Projects in a WCF World
Monday, August 3, 2009
ASP.NET Multi-environment Deployment Configuration
I'm still new to ASP.NET, fresh off of the JEE bandwagon. In many ways, deploying an ASP.NET application to IIS can be blazingly simple (e.g. "just copy the built artifacts to the site folder"). Like JEE, though, there doesn't seem to be a good, widely agreed-upon solution for separating environment-specific configuration from application-specific configuration.
Like many shops, we have several environments. We have two "debug" environments and two "non-debug" environments where we want things quiet and locked down. The "debug" environments can show full exception detail, verbose logging, etc. The "non-debug" environments -- including production -- do not. Our UAT environment is configured as a "non-debug" environment so that it's as close to how it will be experienced in production as possible. Furthermore, the databases, internal URLs and such also vary from environment to environment.
Most of the solutions I came across suggested maintaining multiple web.configs, one for each environment. I dislike such solutions because it requires duplicate maintenance (I'm lazy, and humans inevitably fail to do the "dual" part). I did find one suggestion I liked: use XSL transformation to modify points in your web.config for each environment.
Thus far, I have crafted an XML stylesheet for our dev and uat environments. Each of these is designed to transform our web.config from source control (set up for local environments) into a configuration specific to that environment. The stylesheet itself largely clones the existing XML, replacing specific values or attributes, such as:
- Connection strings (substituting variables where necessary)
- Exception detail (toggled on or off)
- Debuggable ASPx compilation (toggled on or off)
- WCF service endpoint addresses (URLs specific to each environment)
- Logging listener log file paths
- etc.
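The "clone everything, override specifics" approach hinges on XSLT's identity template; mine is essentially the standard one (a minimal sketch):

```xml
<!-- Identity template: copy every element, attribute, comment and text node
     unchanged, letting more specific templates override individual values -->
<xsl:template match="@* | node()">
  <xsl:copy>
    <xsl:apply-templates select="@* | node()" />
  </xsl:copy>
</xsl:template>
```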
For example, to turn off debug information for the pages, the following snippet is used:
<!-- Set the debug compilation to enable/disable insertion of debugging symbols into the compiled page. Because this affects performance, set this value to true only during development. -->
<xsl:template match="/configuration/system.web/compilation/@debug">
<xsl:attribute name="debug">false</xsl:attribute>
</xsl:template>
I've also broken my stylesheets into a base one for the web.config (e.g. my AppServices project's stylesheet is AppServices.xsl) as well as an environment-specific stylesheet that sets the variables for it (e.g. AppServices-dev.xsl, AppServices-uat.xsl):
<xsl:variable name="logRoot" select="'\inetpub\logs\AppLogs\CIN2010\Dev\AppServices'" />
<xsl:include href="./AppServices.xsl" />
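The base stylesheet can then reference that variable wherever a path needs to vary per environment. For example, a template along these lines (the match path is illustrative -- yours should match whatever your logging configuration actually uses) rewrites a listener's log file path:

```xml
<!-- $logRoot is defined by the environment-specific stylesheet
     that includes this base stylesheet -->
<xsl:template match="/configuration/loggingConfiguration/listeners/add/@fileName">
  <xsl:attribute name="fileName">
    <xsl:value-of select="concat($logRoot, '\trace.log')" />
  </xsl:attribute>
</xsl:template>
```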
Thus far, the plan is working quite well. The Hudson CI server is building and automatically deploying the applications, transforming the web.config as it goes. From time to time I have to alter the stylesheets to extract another environment-specific setting, but this is rare and straightforward.