Month: July 2015

Podcasts

Podcast Episode 38 – A Stream of Consciousness Rant: The Development Community

It has been a little while since I posted a new episode, so when inspiration struck me while waiting in the car, I didn’t pass up the opportunity to record Episode 38. I used my phone as a voice recorder and shared a kind of stream-of-consciousness rant. Some things had really piled up on me recently, not the least of which was my disgust with much of the development community at large. Major targets of my focus include judgement of new developers, judgement of people by their technology of choice, my dislike of “Why I’m Leaving X” posts, and how being yourself doesn’t mean that you need to be a douche when interacting with others.


Links Mentioned in this Show:
21 Management Things I Learned at Imgur
Asciinema.org

You can also subscribe to the podcast at any of these places:
iTunes Link
RSS Feed
Listen on Stitcher

Thanks to all the people who listen, and a special thanks to those who have rated me. I really appreciate it.

The episodes have been archived. Click Here to see the archive page.

Swift

The Swift 2.0 Defer Keyword

At WWDC 2015, Apple announced some substantial updates to the Swift Programming Language for version 2.0. Since its announcement, Swift has been undergoing a lot of updates: version 1.2 of the language was released in April 2015, and by June we had the first preview of 2.0.

I decided to blog about some of the changes to the language that really caught my attention. The one that I like the most is the “defer” keyword. In .Net (and other languages), we have the concept of try/finally. Sometimes you can include a catch in there, but it isn’t required. The idea is that once the code in the try portion has run, whether it completed normally or threw an exception, the code in the finally block is guaranteed to execute. That’s the perfect place to make sure that you’ve disposed of objects or otherwise cleaned up after yourself. Pretty much every tutorial that I’ve ever seen in .Net has something like this:

SqlConnection connection = null;

try
{
  connection = new SqlConnection(connectionString);
  connection.Open();
  // Do Some Stuff
}
finally
{
  // Runs whether the try block completed normally or threw an exception
  if (connection != null)
  {
    connection.Dispose();
  }
}

Sharp .Netters might point out that coding a “using” statement accomplishes essentially the same thing with anything that implements IDisposable, but I’m just trying to demonstrate a point 😉

There are a few problems with this try/finally style of coding, however. First, what tends to happen is that developers wrap large blocks of code in try blocks. The problem with that is that you don’t know what has actually been created by the time you get to the finally block. Maybe you returned early from the method, or maybe an exception was thrown partway through. Now you’ve got to litter your entire finally block with “if this exists, set it to null” or “if this is open, close it” kind of stuff. That’s where “defer” comes in. Let’s take a look at some code that I ran in an Xcode 7 Beta 2 Playground:

class Foo {
    var Bar : String = ""
    var Baz : Int = 0
}

func showOffDefer() {
    var originalFoo: Foo?
    
    defer {
        print ("Bar was \(originalFoo!.Bar)")
        originalFoo = nil
        print ("Now it is \(originalFoo?.Bar)")
    }
    
    originalFoo = Foo()
    originalFoo!.Bar = "Lorem Ipsum"
    originalFoo!.Baz = 7

    print("We are doing other work")
}

showOffDefer()

Remember, the defer block isn’t called until the function is exited. So, what is written to the console is:

We are doing other work
Bar was Lorem Ipsum
Now it is nil

Do you see the power in that? Now, after I declare an object, I can write a deferred block and it is guaranteed to execute when the function exits. That exit can come from any number of early return statements, from an exception being thrown, or from the function simply running out of code and returning automatically (like mine here). These defer blocks are also managed like a stack, so the most recently declared defer block runs first. Let’s see that in action:

class Foo {
    var Bar : String = ""
    var Baz : Int = 0
}

func showOffDefer() {
    var originalFoo: Foo?
    
    defer {
        print ("Original Foo's Bar was \(originalFoo!.Bar)")
        originalFoo = nil
        print ("Now it is \(originalFoo?.Bar)")
    }
    
    originalFoo = Foo()
    originalFoo!.Bar = "Lorem Ipsum"
    originalFoo!.Baz = 7

    print("We are doing other work")
    
    var newFoo: Foo?
    
    defer {
        print ("New Foo's Bar was \(newFoo!.Bar)")
        newFoo = nil
        print("Now it is \(newFoo?.Bar)")
    }
    
    newFoo = Foo()
    newFoo!.Bar = "Monkeys"
    newFoo!.Baz = 42
    
    print("We are doing even more work")
}

showOffDefer()

This gives us:

We are doing other work
We are doing even more work
New Foo's Bar was Monkeys
Now it is nil
Original Foo's Bar was Lorem Ipsum
Now it is nil

Hopefully, that example helps make some sense of it. So what did we get? First, we got our two print statements (“other work” and then “even more work”), showing that the entire body of the function executes before any defer blocks are called. Then newFoo’s defer block executes first, and originalFoo’s block runs last.
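To isolate just that ordering rule, here is a minimal sketch of my own (nothing but two defer blocks and a single print) that you could paste into the same Playground:

func deferOrdering() {
    defer { print("registered first, runs last") }
    defer { print("registered second, runs first") }
    print("function body")
}

deferOrdering()

That should print “function body”, then “registered second, runs first”, then “registered first, runs last”.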

That seems pretty awesome to me. I realize that Swift isn’t starting anything terribly new here; the Go Programming Language already has this exact concept with its own defer statement. That doesn’t mean it isn’t a good idea.

Swift was created from the very beginning to be a “safe” language, and I think defer blocks support that goal in a few ways. First, they ensure that cleanup code gets executed. Second, they make sure that only appropriate code is executed (a defer block won’t try to clean up objects that were never created, because it isn’t registered until execution reaches it). Third, they keep the cleanup right near the declaration, so readability and discoverability are improved. Why is that safe? If code is easy to read and understand, and the places that need modification are well-known and well-understood, then the code is going to be of higher quality.
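To see why keeping the cleanup next to the declaration matters, here is a small sketch of my own (the Resource type and useResource function are made up purely for illustration). The cleanup is written on the very next lines after the object is created, and an early return added later still triggers it:

// A made-up resource type, purely for illustration
class Resource {
    func close() { print("Resource closed") }
}

func useResource(bailEarly: Bool) {
    let resource = Resource()
    defer {
        // The cleanup sits right next to the line that created the resource
        resource.close()
    }

    if bailEarly {
        // Even with this early return, the defer block still runs
        return
    }

    print("Doing real work with the resource")
}

useResource(true)    // prints "Resource closed"
useResource(false)   // prints "Doing real work with the resource", then "Resource closed"

Whoever edits this function later doesn’t have to scroll to the bottom to find the cleanup; it is right where the resource was acquired.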

I’m excited for what Swift 2.0 is bringing to the table.