Tag: Dumb Mistakes

Code Tips

Async Void Cannot Access a Disposed Object

Recently, I was trying to throw a quick method on a controller to create a user on the fly. I was pretty new to a .Net Core Web API project and I just needed to add a quick user so that I could start testing some of the authenticated API calls that I was creating. So, I wrote code that was very similar to this:

[HttpGet("~/peteonsoftware")]
public async void CreateUser()
{
    var user = new ApplicationUser()
    {
        Email = "me@myserver.com",
        UserName = "me@myserver.com"
    };
    var result = await _userManager.CreateAsync(user, "SomePasswordThatIUsed,YouKnowHowItIs...");
}

When I started it up and made a call to http://localhost:56617/peteonsoftware, I got an error that read:

System.ObjectDisposedException occurred
  HResult=0x80131622
  Message=Cannot access a disposed object. A common cause of this error is disposing a context that was resolved from dependency injection and then later trying to use the same context instance elsewhere in your application. This may occur if you are calling Dispose() on the context, or wrapping the context in a using statement. If you are using dependency injection, you should let the dependency injection container take care of disposing context instances.

I could not figure this out. I would debug and stop on the line. My dependency injection was working, my _userManager was populated, and according to the documentation, what I was calling should work. I lost a lot of time to this. It turns out that I was the victim of the way .Net treats this kind of method. Notice that my method is async. Notice also that my method returns void. So, as far as .Net is concerned, the action is finished as soon as it fires off that CreateAsync() call, because there is no Task to wait on and the method isn’t returning anything anyway.

I naively thought that my await would keep everything holding on until the method finished. NOPE! I found through research that control was returning from my method before the work was done, and everything started cleaning up and disposing. Therefore, by the time CreateAsync() actually got around to using the injected context, everything was disposed, resulting in my error. When I made the small change of no longer returning void, everything worked.

[HttpGet("~/peteonsoftware")]
public async Task<IdentityResult> CreateUser()
{
    var user = new ApplicationUser()
    {
        Email = "me@myserver.com",
        UserName = "me@myserver.com"
    };
    var result = await _userManager.CreateAsync(user, "SomePasswordThatIUsed,YouKnowHowItIs...");

    // Yes, I know I could just return above, but I like it to be separate sometimes, don't judge me!
    return result;
}

Now when I call, I get back

{"succeeded":true,"errors":[]}

So, lesson to learn… async void is BAD! Lots of unexpected consequences. Always return a Task (or Task<T>) from an async method so that the framework can actually wait on it and nothing gets cleaned up early.
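If you want to see the same race outside of a controller, here is a tiny console sketch (my own illustration, not code from the project; FakeContext is just a stand-in for the DI-scoped context) that shows the caller disposing before the async void method ever gets back to work:

using System;
using System.Threading.Tasks;

class FakeContext : IDisposable
{
    private bool _disposed;
    public void Dispose() => _disposed = true;

    public void Save()
    {
        if (_disposed) throw new ObjectDisposedException(nameof(FakeContext));
        Console.WriteLine("Saved!");
    }
}

class Program
{
    // async void: the caller gets no Task back, so it has no way to wait for this to finish.
    static async void CreateUserFireAndForget(FakeContext context)
    {
        await Task.Delay(100); // stands in for the real awaited work (the CreateAsync call)
        try
        {
            context.Save();    // by now the caller has long since disposed the context
        }
        catch (ObjectDisposedException ex)
        {
            Console.WriteLine($"Too late: {ex.Message}");
        }
    }

    static async Task Main()
    {
        using (var context = new FakeContext())
        {
            CreateUserFireAndForget(context); // nothing to await, so control falls straight through
        }                                     // context is disposed right here

        await Task.Delay(500); // give the orphaned continuation time to hit the disposed object
    }
}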

Dumb Mistakes

iOS – TableView Row Height Wrong on 64-bit Only

I had a fun little problem pop up at my client site today. We had a tableview that was displaying data that looked perfect when running on everything except for an iPhone 5s. After a little bit of DuckDuckGo-ing, we came across these Stack Overflow questions here and here that suggested that the heightForRowAtIndexPath method might be the culprit.

When I went and dug in, I found this warning (I did do some cut and paste to get the warning just over near the code):

float to CGFloat Warning

If you can’t read it, that warning says, “Conflicting return type in implementation of ‘tableView:heightForRowAtIndexPath:’: ‘CGFloat’ (aka ‘double’) vs ‘float’”

The code was the following:

- (float) tableView:(UITableView *)tableView heightForRowAtIndexPath:(NSIndexPath *)indexPath
{
    if (indexPath.row % 2 == 0) {
        return 247.0;
    }
    else {
        return 315.0;
    }
}

The signature for that method means that the code should have looked like this:

- (CGFloat) tableView:(UITableView *)tableView heightForRowAtIndexPath:(NSIndexPath *)indexPath
{
    if (indexPath.row % 2 == 0) {
        return 247.0;
    }
    else {
        return 315.0;
    }
}

So what is the explanation? Basically, CGFloat was originally typedef’ed as float, but Apple wanted you to use CGFloat in case they ever changed what it stood for. With the move to 64-bit, Apple did just that: on 64-bit, CGFloat is typedef’ed as double. In 32-bit builds, the two signatures were identical, so the mistake was invisible. In 64-bit land, however, because the method was returning the wrong type, iOS treated the return value as 0, breaking our layout. Changing nothing but the return type in the signature to the proper CGFloat made the app lay out properly again.

Keep this in mind the next time you might think about being smarter than the language designers and consider not using their typedefs.

Classic POS

Dumb Things I’ve Done in the Name of Crypto (part 2)

Last time, I showed you guys a method of encoding and decoding values that I created and used to send “secret” messages back and forth. It was stupid and naive, but didn’t hurt anyone because it was only used privately. However, I did step it up a notch the next time and it turns out that I knew just enough to be dangerous.

In a production system (albeit an internal one), we had to do our own authentication. I was “smart” enough to know not to store passwords in plain text in the DB. I also knew that storing them with my weak system wouldn’t be good enough. Somewhere I had come across the idea that you store the passwords as the result of some one way mechanism and then when you want to authenticate, you perform your mechanism on the input and compare the results.

That was all well and good.

What I didn’t know was that this was basically what hashing was. What I also didn’t know was that I had several built-in ways to hash values. So, what I did was modify my original encoding code to make it so that I could no longer reverse the process to get the original values. I figured that I could just do some multiplication or division and ditch the remainder, which would ensure that I could never actually recreate the original value.

I don’t remember exactly what I did, but this code below follows the same general idea and is just as dumb.
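Something along these lines (a fresh sketch in C#, so the HorribleHash name and the exact arithmetic are stand-ins; it won’t reproduce the numbers from the original screenshot, but it collides in exactly the same way):

using System;
using System.Numerics;
using System.Text;

class HorribleHasher
{
    // The encoder's "ASCII times 4" chunks, mashed together into one big number...
    public static BigInteger HorribleHash(string input)
    {
        var sb = new StringBuilder();
        foreach (char c in input)
        {
            sb.Append(((int)c * 4).ToString("D3")); // every printable character becomes 3 digits
        }

        var concatenated = BigInteger.Parse(sb.ToString());

        // ...then integer division ditches the remainder so it "can't be reversed".
        // It also ditches information, which is exactly why different passwords collide.
        return concatenated / 1000;
    }

    static void Main()
    {
        Console.WriteLine(HorribleHash("Abcdef1")); // 260392396400404408
        Console.WriteLine(HorribleHash("Abcdef2")); // 260392396400404408 -- same "hash", different password
    }
}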

The results of the horrible 'hash' function

In this case, the values Abcdef1 and Abcdef2 both “hash” out to 6199818961914390671, which is called a “collision” and which is BAD. When done this way, it means that someone with a password of Abcdef1 could also use Abcdef2 to get into their account. Any number of valid passwords greater than 1 is a FAIL!

I realize that there are collisions in MD5 and SHA1, but even those would have been more secure than my nonsense. However, at this time, I had SHA256 available to me and could have been reasonably safe (given the limits of computing power at that given time). The worst part is that my “solution” was audited. We explained that we were one-way hashing and that was good enough. The auditors didn’t know enough to realize that errors could be there.
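For comparison, using what was already built in would have been about this much code (a sketch, not the actual code from that system; these days you would reach for a salted, slow algorithm like PBKDF2 or bcrypt instead, but even plain SHA256 beats home-grown arithmetic):

using System;
using System.Security.Cryptography;
using System.Text;

class PasswordHashing
{
    // Hash the password with the framework's built-in SHA256 instead of home-grown arithmetic.
    public static string Hash(string password)
    {
        using (var sha = SHA256.Create())
        {
            byte[] bytes = sha.ComputeHash(Encoding.UTF8.GetBytes(password));
            return BitConverter.ToString(bytes).Replace("-", "");
        }
    }

    // Authenticate by hashing the attempt and comparing it against the stored hash.
    public static bool Verify(string attempt, string storedHash)
    {
        return Hash(attempt) == storedHash;
    }
}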

The moral of the story is that you should NEVER try to write your own cryptography or cryptographic hashes. You probably aren’t smart enough. Even the people who are smart enough publish their work and their very very smart peers try like crazy to break their work. I mean, if Bruce Schneier wouldn’t even use his own algorithms without strenuous peer review, then you shouldn’t either.

Be smart and learn from my mistakes. Use safe, tested, tried and true solutions and never ever roll your own crypto.

Classic POS

Dumb Things I’ve Done in the Name of Crypto (part 1)

Let’s just begin with the obvious. I’m an idiot. Fortunately, (I believe) that I’m less of an idiot now than I was over a decade ago. I mean, I see why this stuff is dumb, so that has to count for something, right? I sleep better at night believing that that is the case.

I’ve always been fascinated by security and encoded/encrypted messages ever since I was little, even before I was interested in programming computers. I used to play the game Hacker on my Commodore 64 and pretend that was me doing things for real. I used to pretend that I was a spy who could get into anything. I used to make up “unbreakable” secret codes so that my friends and I could pass “secret messages” at drops around the neighborhood and school. You get the point.

Well, as soon as I learned anything about programming when I was older, one of the first things I did was “invent” a way to encode messages back and forth. I decided to take a page out of the old A=1, B=2 code book and use the ASCII values for characters. The problem was that if they were left as a string of 2 and 3 digit numbers, it would soon become obvious what they were. I decided that I would just mash them all together and make one long string of numbers to kind of disguise what they were (yay, security through obscurity!).

My first issue was that while A is 65 and Z is 90, a is 97 and z is 122, so I couldn’t easily look at a long string of numbers and figure out how they should be chunked. I needed every character to land in a chunk of a predictable size. I figured out that if I multiplied the ASCII value by 4, every character that I cared about would become a 3-digit number. Finally, I had my chunking.

I created a VB6 program that had two textboxes and two associated buttons that encoded and decoded messages for you. I don’t have the source code for that program handy (I’m sure it is on a backup somewhere), but it was easy enough to recreate the important methods here below:
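It went something like this (recreated from memory in C#, so the Encode and Decode names and the sample message are stand-ins for whatever I actually called them):

using System;
using System.Text;

class NaiveEncoder
{
    // Multiply each character's ASCII value by 4 so it lands in a tidy 3-digit chunk,
    // then mash all of the chunks together into one long string of numbers.
    public static string Encode(string message)
    {
        var sb = new StringBuilder();
        foreach (char c in message)
        {
            sb.Append(((int)c * 4).ToString("D3"));
        }
        return sb.ToString();
    }

    // Decoding is the exact reverse: take 3 digits at a time, divide by 4, cast back to a character.
    public static string Decode(string encoded)
    {
        var sb = new StringBuilder();
        for (int i = 0; i < encoded.Length; i += 3)
        {
            int chunk = int.Parse(encoded.Substring(i, 3));
            sb.Append((char)(chunk / 4));
        }
        return sb.ToString();
    }

    static void Main()
    {
        string encoded = Encode("Secret Message");
        Console.WriteLine(encoded);         // one long, "mysterious" string of 3-digit chunks
        Console.WriteLine(Decode(encoded)); // Secret Message
    }
}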

The results of running that program are here:

The results of my encoding/decoding program.

You see that it basically works as advertised. I used it over IM with my brother-in-law a few times to prove the concept and was pretty happy with myself for the results.

Any of you who have your thinking caps on are already starting to see several problems here. If someone got ahold of the program, they could try a few inputs, look for a predictable pattern, and find one immediately. For instance, A always shows up as 260. Once you know that, you can easily work out any message with a simple decoder key. You don’t even need a computer at any point. Even if you don’t know that, the encoded messages are still vulnerable (for the same reason) to frequency analysis and every other basic code breaking trick.

Pretty harmless exercise as it stands now, but next time I’ll cover how I parlayed this into something that was actually colossally stupid.

Part 2 is located here