Here is something I learned just yesterday. It was one of those fun things where I knew every piece of the puzzle, but had never "made the connection" between all of them.
If you aren’t sure what extension methods are, I wrote a blog post about them back in 2008 that you can check out here.
Here is an example for today:
public static class ExtensionMethods
{
    public static bool IsEmptyStringArray(this string[] input)
    {
        if (input == null) return true;
        return input.Length == 0;
    }
}
What I’ve done is just create a method that allows you to call .IsEmptyStringArray() on any string array to find out if it has any items in it. I realize that this is a fairly useless example, but it is contrived for the sake of the demonstration.
Now, if I call a “framework” method on a null string array, I get an error. So, doing something like this:
string[] nullArray = null;
var hasItems = nullArray.Any();
Results in the error “Unhandled Exception: System.ArgumentNullException: Value cannot be null.”
However, I *CAN* call my extension method on that null array.
string[] nullArray = null;
var hasItems = !nullArray.IsEmptyStringArray();
This code runs without complaint: no exception is thrown, and hasItems simply ends up false.
How does that work? This is what I had neglected to put together in my mind. When you write an extension method, what actually gets compiled is this:
call bool CodeSandbox2010.ExtensionMethods::IsEmptyStringArray(string)
The “syntactic sugar” part is that you aren’t actually calling a method on the null object at all. You are just calling your method and passing in the parameter, just like any other method. I really like that because it gives you a concise way to write your code without the same null check over and over and over again throughout your codebase. You can just check in the method and then get on with what you’re doing.
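To make the desugaring concrete, here is a self-contained version of the example (the Length check is my addition so the method returns on all paths). The extension-call syntax and an explicit static call are interchangeable, because both compile down to the same static `call` instruction:

```csharp
using System;

public static class ExtensionMethods
{
    // An extension method is just a static method; the "this" modifier
    // only enables the instance-call syntax at the call site.
    public static bool IsEmptyStringArray(this string[] input)
    {
        if (input == null) return true;
        return input.Length == 0;
    }
}

public static class Program
{
    public static void Main()
    {
        string[] nullArray = null;

        // Extension syntax: looks like an instance call, but nullArray
        // is never actually dereferenced.
        Console.WriteLine(nullArray.IsEmptyStringArray());                 // prints True

        // The call the compiler actually emits: a plain static call with
        // nullArray passed as the first argument.
        Console.WriteLine(ExtensionMethods.IsEmptyStringArray(nullArray)); // prints True
    }
}
```

Contrast this with nullArray.Any(): that is also a static call under the hood (to Enumerable.Any), but its implementation throws ArgumentNullException when its source argument is null. Whether a null "receiver" is safe depends entirely on what the static method does with its first parameter.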
Raymond Chen recently wrote a blog post where he talks about blocking shutdown in Windows versions since XP, and – being Raymond Chen – also the history and the why of certain decisions coming out of Redmond. This blog post was picked up on Reddit and people are slamming Windows for every possible thing.
What is interesting is that these people, who suggest that "Linux never had to do this", just don't get it. Linux is getting better, but it is *still* not user friendly. Ninety percent of the people at the companies I've worked for could not run Linux without a lot of help. Neither could my grandparents. Windows is generally easy to use.
If I generalize a bit, it almost seems like operating systems have their own “magic triangle”. You can have inexpensive, stable, or easy to use… pick only two. Linux is inexpensive and stable. It is free for the operating system and it runs on almost any hardware you can get, but it is NOT easy to use for the average “non geek”. Mac is stable and easy to use. It is known for all of its “user experience” and “it just works”, but it is not inexpensive. Once you own Mac hardware, upgrades to the OS are inexpensive, but to run the OS, you need expensive hardware. There is no good $300 Mac option.
Windows, on the other hand, is inexpensive and easy to use. It is growing more stable, but it still has a lot of quirks, particularly due to being able to support tons of hardware and tons of decades-old software. But easy and cheap is a tradeoff that many users are going to take. Because of that, Windows is going to have a place in the market for years to come, even if its marketshare will continue to erode as the marketshare of the desktop itself erodes.
But my theories about operating systems aren't the point of this post; they are merely the backdrop. The point is that "nerds" (programmers, sys-admins, geeks, and all computer-savvy types) don't sympathize enough with the average user. The computer elite just dismiss the average user as "dumb" and wonder why they can't just remember to type "sudo apt-get install flashplugin-installer" to install Flash on their system.
Remember, there are users who take classes on how to use Microsoft Word! They need lessons in "Saving a Document", "Performing Cut and Paste", and "Changing the Document's Font". I'm not mocking them for this; I'm pointing out the reality that these people are dealing with. To ask them to understand "sudo" and "apt-get", or to scavenge the web for some "driver" for their video card ("what's a driver?", "what's a video card?"), is asking too much. They just want to get on Facebook, do their taxes, check their email, and watch movies or YouTube. What makes sense for us nerds does not make sense for them.
Building up that sensitivity to the plight of the average user will make you a better IS/IT person. As long as the prevailing opinion of computer geeks is that the user should be able to perform these <my_sarcasm>easy</my_sarcasm> tasks, people who sympathize with the user are always going to have an easy time finding employment.
I’ve said this before, but I feel like it is one of the most important things I can say to the professional developer/IT pro: “We are in the business of solving other people’s problems”.
Solving other people’s problems doesn’t mean solving them with what works for us. It means giving the best solution for them. It doesn’t matter if you work for a product company or in-house enterprise development. You need to create solutions that meet your customers where they are. The sooner we realize that these things are not always in alignment, the sooner we will delight our customers with the solutions that we suggest and build.