Friday, October 16, 2009

How long to write 1 line of code? The registry gets complex!

So I want to stick a simple value into the registry such as "FolderLocation" at HKLM\Software\Company.

Then I want developers to be able to read that *EASILY* using PowerShell, C#/VB.NET or VBScript.

This is simple to configure and very simple to code. In most of these languages it's one line of code (with maybe a little object instantiation beforehand).
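For the record, that one line looks something like this in C# (just a sketch, using the example key and value names from above):

    using System;
    using Microsoft.Win32;

    class ReadFolderLocation
    {
        static void Main()
        {
            // The "one line" - read the shared folder location from HKLM.
            string folderLocation = (string)Registry.GetValue(
                @"HKEY_LOCAL_MACHINE\Software\Company", "FolderLocation", null);

            Console.WriteLine(folderLocation ?? "<not found>");
        }
    }

Registry.GetValue returns null if the key or value isn't there. Simple - or it should be.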

Except if we are running on Vista or Windows 7, where we now have this new thing called "Registry Virtualisation" to contend with - for a while, but not forever...

"This form of virtualization is an interim application compatibility technology; Microsoft intends to remove it from future versions of the Windows operating system as more applications are made compatible with Windows Vista. Therefore, it is important that your application does not become dependent on the behavior of registry virtualization in the system." http://msdn.microsoft.com/en-us/library/aa965884(VS.85).aspx

This joyous bit of technology redirects registry writes/reads on the fly if they involve locations no longer deemed usable by "applications" (which I'm taking to mean "anything non-Microsoft").

For example, if my C# application reads the key mentioned at the top, it will, apparently, be redirected on the fly to read HKEY_CURRENT_USER\Software\Classes\VirtualStore\Machine\Software\Company.

Sure enough, this key is there. But the Company key I created above using regedit isn't. When I read HKLM\software\company with reg.exe (the command line tool), it's there. If I read it with my C# application, it's not.
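If you want to see the discrepancy for yourself, a quick check is to read both locations from the same process and compare (again just a sketch, using the example paths above):

    using System;
    using Microsoft.Win32;

    class WhereIsMyKey
    {
        static void Main()
        {
            // The "real" location, as created with regedit and seen by reg.exe.
            object real = Registry.GetValue(
                @"HKEY_LOCAL_MACHINE\Software\Company", "FolderLocation", null);

            // The per-user VirtualStore location a virtualised process gets redirected to.
            object redirected = Registry.GetValue(
                @"HKEY_CURRENT_USER\Software\Classes\VirtualStore\Machine\Software\Company",
                "FolderLocation", null);

            Console.WriteLine("HKLM copy        : " + (real ?? "<not found>"));
            Console.WriteLine("VirtualStore copy: " + (redirected ?? "<not found>"));
            Console.WriteLine("64-bit process   : " + (IntPtr.Size == 8));
        }
    }

Run it compiled as x86 and again as x64 and you may well get different answers, which is rather the point.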

Better still, this behaviour isn't consistent.
Registry Virtualisation will occur for:
32 bit interactive processes.
Keys in HKEY_LOCAL_MACHINE\Software.
Keys that an administrator can write to.

But it's disabled for:
64 bit processes
Non-interactive processes (e.g. services)
Processes that impersonate a user
and a few others

My dilemma is that I will eventually create these keys using a 32 bit or 64 bit service (so no virtualisation), but they will be read by 32 bit or 64 bit interactive processes. The service will be running under a service account and the scripts under a user account - hence the need to use the shared HKLM space.

How on earth do you communicate to a developer where he's supposed to go and read, and how much code is this going to require to figure out the platform, x86 or x64, etc.?

So much for someone's idea of making things better. One line of code that would've taken me seconds to write has now turned into a 2 hour slogathon just to uncover the issue. I've still to figure out a solution :-(
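For what it's worth, the least ugly idea I've got so far is to hide the mess behind a small helper that tries the real location first and then falls back to the other known spots. This is only a sketch of the idea, not a tested solution - and the Wow6432Node path is my assumption about where a key written from the 32-bit side lands on x64, so adjust to suit:

    using Microsoft.Win32;

    static class FolderConfig
    {
        // Candidate locations, most authoritative first:
        //  - the real HKLM key the service writes to
        //  - the 32-bit view of HKLM\Software on x64 (assumption - where 32-bit writers end up)
        //  - the per-user VirtualStore redirect described above
        static readonly string[] CandidateKeys =
        {
            @"HKEY_LOCAL_MACHINE\Software\Company",
            @"HKEY_LOCAL_MACHINE\Software\Wow6432Node\Company",
            @"HKEY_CURRENT_USER\Software\Classes\VirtualStore\Machine\Software\Company"
        };

        public static string GetFolderLocation()
        {
            foreach (string key in CandidateKeys)
            {
                string value = Registry.GetValue(key, "FolderLocation", null) as string;
                if (!string.IsNullOrEmpty(value))
                    return value;
            }
            return null; // not found anywhere - let the caller decide what to do
        }
    }

At least then the developers get their one line back (FolderConfig.GetFolderLocation()) and the ugliness lives in a single place.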

For people using batch files, I also store the folder value in an environment variable. I really don't want to go creating loads of environment variables, but it's sorely tempting.

Monday, September 07, 2009

Oldest Trick in the Book

Remember the Anna Kournikova virus - that was the one that arrived as an e-mail inviting the recipient (presumably mostly males) to click to see pictures of the famous tennis star? That was back in February 2001.

The latest trick, while totally different on a technology level, uses the same old tried and tested social engineering to pull the target in.

This one uses MSN. The target gets a chat from what appears to be a "hot girl". They start chatting and very quickly this girl is suggesting the target go take a look at her web site where there's all sorts of interesting things to see.

Unfortunately, the web site is nothing more than a payload delivery mechanism for some sort of malware.

The really clever bit (and you have to give these people some little degree of credit for how easily they socially manipulate people) is that this "hot girl" on MSN is little more than a computer program designed to hoodwink people into visiting the site.

Mind you - it probably doesn't need to be that sophisticated.
MSN: "Hi handsome - what's your name?"
Target: Fred
MSN: "Wanna see some pictures on my web site?"
Target: absolutely

Thursday, October 04, 2007

ID Theft - asking for trouble?

A few weeks ago at work I placed an order for a bit of equipment for a customer. We don't order very often from this supplier and in fact, this was just the second time. First time round we had to fax a copy of the cheque and present the courier with the cheque (cash on delivery). Fair enough.

Given we wouldn't be buying from them frequently, getting a credit account in place didn't seem worth the effort, so we were quite happy to pay by credit card - nothing unusual there.

They faxed through a form for the credit card details which we had to fax back - sensible security, at least, in not asking for the form to be e-mailed back. However, they also wanted a photocopy of my credit card - front and back. To me, this seemed a bit "off", given we're all trying to be so security conscious about our personal data and here's a company wanting an exact image of my card.

Now this isn't some little two-bit independent del-boy type trading company, but rather a pan-European company with some 15 years of trading behind it.

Still, I really couldn't fathom why they needed this photocopy and they couldn't really give me a solid explanation. Nor could they convince me that my photocopy would be kept safe. Best they could come up with was "it'll be kept on our server for future use". They also compared it to the fact we faxed a cheque through to them without any problems. My comment that I was sending the physical cheque to them anyway (as that's pretty much how cheques work) and I certainly wouldn't be posting them my credit card didn't sway them from the company line in the slightest.

I could picture a future conversation with the bank though, having perhaps reported some fraudulent transactions on my account. "Do you take all possible precautions to keep your credit card safe? Sure I do, except for all the suppliers I fax a photocopy to, over which I've then got no control".

Needless to say, we cancelled the order and went elsewhere.

This is definitely something to consider though. Any time someone's asking for information that just doesn't seem normal, the alarm bells should start ringing. Even if it's a big company, you've got no idea how good or bad their security is or who that person at the end of the phone really is.

Offline Files Redux

I figured I'd revisit Offline Files (or Client Side Caching), primarily just to clarify what the problems are and the scenarios in which they occur.

Consider the following:

You take your laptop and travel to another branch/division of your organisation. You plug in and immediately your laptop is able to see your server back in your own office over the VPN. So, it pretty much says to itself "OK, operating online, My Documents is located on the server".

It's supposed to do bandwidth analysis and decide that the link is sufficiently slow that it should go offline, but it seldom does. What constitutes a slow link is configured in the registry (i.e. a bit of brain surgery required) and isn't particularly well documented. I've certainly never been able to get anything sensible to happen from making any changes.

So, the "experience" you get in this scenario is everything you open from your My Documents takes eons to appear, because it gets dragged from the remote server. Worse, any time you save, the save goes back to the remote server (so forget working on that big 50Mb Power Point file). Even worse, applications like Word which autosave, will autosave the work file into the same folder where the document resides - yup, the remote server. That's the start-stop stutter you get right in the middle of your typing.

OK, so you get smart, acquire the Client Side Cache utility (csccmd.exe) and run csccmd /disconnect to force yourself offline. Suddenly My Documents becomes much more sprightly as the files are now being read/written from the copies (client side cache) on your laptop. Life is wonderful again! Open that 50MB PowerPoint, add a few words and the save goes back to the laptop.

Now, you realise you need to go grab another picture from your server to add to the presentation. No problem. A quick jump into Network Places, Explorer, mapped drive etc and you can quickly drag it across the VPN? Nope, fraid not. Remember that "csccmd /disconnect" you got smart with above. Well, now the laptop throws a bit of a hissy fit and says "You forced me offline against my wishes and better judgement, so if your My Documents is offline, then so is the whole damned server, so stick that in your pipe and smoke it!".

Well, you're a kick-ass sort of person, so you have "mobsync /synchronise" or "mobsync /logon" up your sleeve to force your laptop back online so you can grab that pesky file. Unfortunately, the laptop still prevails. The conversation sort of goes "OK, you can force me back online if you like, but I'll only allow you to get connected once I've completed a full synchronisation process".

So the laptop proceeds to start checking through all the files to see which ones need to be pushed to the other end. Remember that 50MB PowerPoint file - yup, it's got to go the distance to the remote server.

Now that might just about be bearable on a reasonable speed VPN. But imagine you are in a hotel and the VPN is just a touch flaky with all the people in the hotel doing their stuff. Or worse, you've got no broadband or wifi connection, so you've had to resort to the mobile phone over GPRS or GSM, maybe as low as a 9.6k connection. At that stage, you pretty much have to give up and accept the laptop gave you a kicking.

All in, Offline Files is a very frustrating process, and that's when you know how it works and what you are doing. Your average business computer user just wants the damned thing to work with the least amount of techno mumbo-jumbo possible.

That's why Adaptive Backup came into existence.

Wednesday, April 26, 2006

OWA + FBA + EAS Continued

Well, finally cracked it tonight!

OWA + EAS running under its own IIS root, on a DC with no broken DNS, no scary security tweaks and no need to create alternative virtual directories (per mskb Q234022).

OWA is using FBA (Forms Based Authentication) which is by far the best option for OWA and the mobile devices synchronise with EAS (Exchange Active Sync) on the same root.

This was another funny exercise - lots of brain busting, going round in circles, seeing loads of people on the web having the same problem. Once it was all solved though, the solution turned out to be relatively simple.

Post me a comment if you want to discuss the solution!

Tuesday, April 18, 2006

Offline Files - broken technology

I'm going to start with a copy and paste from the blog of a Microsoft guy called Jonathan Hardwick (http://blogs.msdn.com/jonathanh/archive/2004/10/06/239025.aspx). I'm doing this because I e-mailed the text below in response to one of his blog posts. Now whenever I do a Google search relating to Offline Files problems, I spot this and get excited because I think someone else is seeing the same thing - then I realise it's just me. However, he closed the page to comments before I got a chance to go back and reply to his answer to my e-mail.

Q: For ages we've been battling with the fact that when a laptop user goes to a remote site, they work quite happily with their My Documents directory cached offline. However, because they are offline, the whole server is flagged as offline. They can go online and access the server, but then the My Documents files start getting dragged across the line - not good if you were on a 9.6k GSM mobile connection from the other side of the planet! I finally found this documented in Q320819. My reading is that before April 2002, it didn't work the way it's now designed. We've basically got to start looking for alternatives but I was wondering if you had any idea or can find someone who knows why.

A: Yes, the offline files algorithm maintains connection state on a per-server basis instead of a per-share basis. This is to prevent hidden dependencies between files on the same server manifesting themselves as inconsistencies between different shares. Having said that, there are two possible solutions I would try:
Turn off all automatic synchronization, and force users to synchronize manually. Of course, this may be unacceptable for user-experience reasons, i.e. they forget to ever synchronize and then bitch because "the server lost my files" :-)
Use the new slow-link behavior in XP SP2, or alternatively the QFE for XP SP1, WinSE bug 37222. The earlier behavior from KB263097 was that after going offline it would auto-reconnect if the link speed was above 64 KB, set by HKCU\Software\Microsoft\Windows\CurrentVersion\NetCache\SlowLinkSpeed. However, this only affected reconnections rather than the initial connection, so users had to use "csccmd /disconnect" to force files offline on slow links, and it used reported NIC speed, instead of actual end-to-end speed. Not good. With the new behavior, you can set slow-link policy as before, create HKLM\Software\Microsoft\Windows\CurrentVersion\NetCache\GoOfflineOnSlowLink and set it to 1, and reboot. Now, whenever the user logs in, if the connection speed to that server is below their slow link speed setting, they'll remain offline as far as their offline files are concerned.
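Purely to save anyone else the registry spelunking, the two values described in that answer boil down to something like this (a sketch based only on the key names quoted above - check KB263097 for the exact units of SlowLinkSpeed, and note the HKLM write needs admin rights plus the reboot mentioned):

    using Microsoft.Win32;

    class SlowLinkTweak
    {
        static void Main()
        {
            // Per-user slow link threshold (placeholder value - see KB263097 for the units).
            Registry.SetValue(
                @"HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\NetCache",
                "SlowLinkSpeed", 640, RegistryValueKind.DWord);

            // The newer XP SP2 behaviour: stay offline when the link to the server is slow.
            Registry.SetValue(
                @"HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\NetCache",
                "GoOfflineOnSlowLink", 1, RegistryValueKind.DWord);

            // Reboot for the change to take effect.
        }
    }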


Now the problem we find is you head offsite to a remote office. You get online and Windows says "OK, slow link, let's go offline". My Documents - nice-n-fast. So you merrily sit and update a few large files, let's say a nice 30MB PowerPoint file.

Then you want to go to your main fileserver back in head office and grab some files to stick on your local drive (maybe to work on them, or perhaps just to refer to them). So you browse off to the share on the server and you see nothing. Windows says "hey bud, I told you - slow link - you're offline and that's the way it is".

So we can get back online - just force a sync of the offline files. Once it successfully completes, we are back online to the server and we can copy the files. However, there's just the small matter of getting that 30MB PowerPoint file back to the copy of your My Documents folder on the server. And therein lies the problem. Until you get that successful offline sync complete, you are high and dry (or offline and disconnected).

Now put yourself in the position of the sales guy, half way around the world, working on a GSM 9.6k connection. He's been away a few days, working on a presentation and a quotation, all in his My Documents folder. Just before the big meeting he realises he needs a file from the server. Just a tiny little file (it's all he can realistically manage on a GSM phone without bankruptcy). Well, "we are Windows and Windows say - NO".

Heard a rumour this will be fixed in Vista, but that's still close to a year away before we even have the option to use it. Realistically we'll want another 6 months on that to let the early adopters cry over the spilt milk. Oh - and everyone will need a new computer. Yup, that'll go down a treat.

There are a few third party apps around. We used SecondCopy for a while back in pre-Offline Files (NT4) days. It was OK, but difficult to administer centrally and to monitor. I'm currently trying out Peersoftware's Sync-n-Save application. It's better than SecondCopy, but currently crashing a bit (after a hibernation of the laptop) and it's still difficult to configure centrally. Neither application has the capability to pop up and say "OK, you are away from base, but I can still see your server - do you want me to run, or suspend and automatically resume when you get back to base".

It's usually around this time I pick on the Office Assistants - nice slick bit of programming. I can imagine the team that created them are a pretty clever bunch of people. THEY SHOULD HAVE BEEN PROGRAMMING STUFF THAT MATTERED!

Hmm - I should have an Office Assistants / Waste of Space blog entry, then all I need do is link to it :-)

OWA + FBA + EAS + New Root = Brain Bust

Take Exchange Server 2003, Outlook Web Access (OWA), Forms Based Authentication (FBA), Exchange Active Sync (EAS) {also sometimes called Server Active Sync - SAS} and you have one helluva configuration nightmare. This is especially the case if you have other applications hanging under the Default Web root in IIS. Publishing this lot securely on the Internet is fraught with complexity, pretty difficult to get right and very easy to break.

When it breaks, it can go two ways. You lock everyone out, or they start getting IIS Integrated Security dialogs instead of the nice FBA stuff. Bit of a pain, but very quickly noticeable. The other way is when someone or something accidentally lessens the security on the various roots and that often goes unnoticed for a long time.

Suddenly you are NATing outside traffic into something you really don't want published on the web. I saw someone do this once - not with Exchange specifically, but just a little bit of wrong configuration and they published their company Intranet, with anonymous access to all and sundry. Hey ho - all that very valuable private IPR was suddenly on view to all. Ran like this for about 2-3 months until someone questioned a lot of public connections to an internal ftp server (also hanging off that same IIS box). Like I said - not so easy to spot.

The obvious solution (well, it was to me anyway) was to create a new IIS root plus the relevant Exchange virtual roots and NAT the external traffic into that. We want to use FBA (because quite frankly, using IIS Integrated security for OWA is rather prone to the next person opening up the browser and getting your e-mail) and we want it to work with EAS. Not asking for much!

However, this then takes this relatively complex exercise and turns it into a complete and utter brain bust.

I've been running this sort of configuration in the office for about a year now. EAS works fine (had an XDA, now on an iMate SP5 and neither caused too many problems, over and above the standard raft of problems). However, at the time I didn't get enough time to sort out the FBA, so we use IIS Integrated for the OWA component. The number of people using this is pretty limited and they are all IT people, so they know the issues and logoff, clear cache etc.

I've been back on trying to get FBA working with this config off and on for about a month now. Today (and tonight) it's been getting an onslaught. It's been a pretty dismal experience - there are rafts of people in the newsgroups trying this and almost getting there, but coming up short.

Tonight however, I now have a new root with a new Exchange virtual root, with FBA published over SSL, NATed through the firewall, on a separate IP Address, with the Internal DNS not answering with the wrong IP addresses!

Now all I need to do is factor in the EAS, do some decent testing and then rip the whole thing out to properly document the build process. In the overall scale of OWA/FBA/EAS, the process looks like it should be relatively painless and, surprisingly(?), not configured in quite the way you might expect.

More to come....