bookmarks for the week ending February 25, 2006

Bookmarks added by user jmanning between February 19, 2006 and February 25, 2006

SQL Server semantics: autocommit (every statement is a transaction) vs. implicit transactions

I’ve seen these get confused, so I wanted to give a quick definition:

1) autocommit mode: each statement is its own transaction (selects don’t need to be committed, of course)
2) implicit transaction: any “the state of the data matters to me” statement implicitly starts a transaction (that does *not* commit automatically at the end of the statement).  All it does is save you from having to “BEGIN TRAN” explicitly – you still need to commit/rollback.
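A quick T-SQL sketch of the difference (the table name and values here are made up for illustration):

```sql
-- autocommit (the default): each statement is its own transaction
INSERT INTO MyTable VALUES (1);   -- already committed; nothing to ROLLBACK

-- implicit transactions: the statement silently starts a transaction...
SET IMPLICIT_TRANSACTIONS ON;
INSERT INTO MyTable VALUES (2);   -- did a BEGIN TRAN for you behind the scenes
ROLLBACK;                         -- ...but the commit/rollback is still on you
```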

A couple of msdn pages that may help:

one nit-pick about the LoginStatus control

The LoginStatus control is very useful, but I wish it defaulted to redirecting back to the login page instead of making me set that behavior using the LogoutAction property.

        <asp:LoginStatus LogoutAction="RedirectToLoginPage" ID="LoginStatus1" runat="server" />

On a related note, I really like WSAT’s ability to secure parts of the site (create “access rules”) based on roles.  That certainly saves me from having to write that code! 🙂

how to remove from Dictionary while iterating over it

Since we’re in C#/.NET and not in Java, we don’t have Iterator.remove() (I really miss that method at times), and since we’re operating on Dictionary instead of List, we can’t use a RemoveAll instance method with a Predicate delegate.

However, as it does with many, many other questions about using and handling generic collections, the good ol’ Power Collections project comes to the rescue. Specifically, the Algorithms.RemoveWhere method.

Ok, technically we’re not iterating over it ourselves (at least not in our code), but it keeps us from having to manually do the whole “store the keys to remove in a List, then iterate over that list calling .Remove() on the Dictionary” dance.
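For comparison, here’s a minimal sketch of that manual two-pass dance in plain .NET 2.0, with no Power Collections (the dictionary contents are just for illustration):

```csharp
using System;
using System.Collections.Generic;

class ManualRemoveDemo
{
    static void Main()
    {
        Dictionary<int, char> dict = new Dictionary<int, char>();
        for (int i = 0; i < 10; i++)
            dict.Add(i, Convert.ToChar(i + 'a'));

        // pass 1: collect the keys to remove (we can't mutate the
        // dictionary while enumerating it, or the enumerator throws)
        List<int> keysToRemove = new List<int>();
        foreach (KeyValuePair<int, char> pair in dict)
        {
            if (pair.Key % 2 == 0)
                keysToRemove.Add(pair.Key);
        }

        // pass 2: now it's safe to remove them
        foreach (int key in keysToRemove)
            dict.Remove(key);

        Console.WriteLine("Remaining count: {0}", dict.Count);
    }
}
```

Algorithms.RemoveWhere collapses both passes into one call, which is why it’s the nicer option below.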

This also shows off the super-spiffy Algorithms.ToString method – incredibly useful stuff!

The output:

Original: {0->a, 1->b, 2->c, 3->d, 4->e, 5->f, 6->g, 7->h, 8->i, 9->j}
Kept:     {1->b, 3->d, 5->f, 7->h, 9->j}
Removed:  {[0, a],[2, c],[4, e],[6, g],[8, i]}

        static void Main(string[] args)
        {
            Dictionary<int, char> dict = new Dictionary<int, char>();
            for (int i = 0; i < 10; i++)
            {
                dict.Add(i, Convert.ToChar(i + 'a'));
            }
            Console.WriteLine("Original: {0}", Algorithms.ToString(dict));
            ICollection<KeyValuePair<int, char>> removed =
                Algorithms.RemoveWhere(dict, delegate(KeyValuePair<int, char> pair)
                {
                    return pair.Key % 2 == 0;
                });
            Console.WriteLine("Kept:     {0}", Algorithms.ToString(dict));
            Console.WriteLine("Removed:  {0}", Algorithms.ToString(removed));
        }

Atlas January CTP is out, why AJAX is like performance profiling, and how to be a good developer

Atlas is, basically, the preferred AJAX framework for ASP.NET 2.0 – it’s being developed by Microsoft and the most recent bits focused a good bit on (IMHO) THE key usage of Atlas and AJAX in general – making pages more interactive after they’ve already been developed.  For some more detail on the most recent bits, go read Nikhil’s blog post Atlas M1 Refresh – Some More Goodies

Why do I think this is such a key focus?

To me, applying techniques like AJAX is very much akin to applying the techniques you reach for when a stand-alone app isn’t meeting particular perf/usability goals.  Specifically, they should be applied in the ways that actually make sense and actually make for a better app.  What do I mean by that?

Performance optimizations are a funny kind of code change – as opposed to the “other” category of code changes (correctness fixes, which the initial implementation can be considered a part of, especially from a TDD perspective), they’re typically designed not to change the behavior of the code, but only the time it takes to run.  Why does run time matter, though?  Like many other issues, it’s all a matter of human perception.  If an app seems sluggish and doesn’t respond quickly enough based on user perception, the app is deemed slow.  The typical way of attacking this issue is trying to speed up the actual actions going on (add indexes, use materialized views, denormalize tables, change algorithms, batch operations).

Another technique, though, is to not actually try to make the action any faster (usually because it’s been decided there’s not much additional work that can be done to speed up the action under investigation), but instead try to keep the user from perceiving the slowness.  One of the most common instances of this technique is doing expensive operations in the background (for instance, background compilation, background indexing, etc.).  This way, the user isn’t sitting there waiting on a long operation to occur, so they don’t perceive the “slowness” of the operation.
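A minimal sketch of that idea in .NET 2.0 terms – the “expensive operation” here is just a Sleep standing in for indexing, compilation, or whatever the real work would be:

```csharp
using System;
using System.ComponentModel;
using System.Threading;

class BackgroundDemo
{
    static void Main()
    {
        BackgroundWorker worker = new BackgroundWorker();
        worker.DoWork += delegate(object sender, DoWorkEventArgs e)
        {
            Thread.Sleep(500); // stand-in for the expensive operation
            e.Result = "done";
        };
        worker.RunWorkerCompleted += delegate(object sender, RunWorkerCompletedEventArgs e)
        {
            Console.WriteLine("Background work finished: {0}", e.Result);
        };
        worker.RunWorkerAsync();

        // meanwhile, the main thread is free to keep responding to the user
        Console.WriteLine("Still responsive while the work runs...");
        Thread.Sleep(1000); // keep the process alive long enough to see the result
    }
}
```

The work takes just as long either way – the user just isn’t forced to sit and watch it.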

While it may seem a little odd to do so at first, I put AJAX into that category of technique.  Sure, by doing a “partial” instead of a full page load, you can save some bandwidth, but the real gain of AJAX isn’t those few bytes – it’s the change in human perception of what’s going on.  Since the overall page isn’t doing a full load, the user isn’t shoved into that “waiting for a page to load” state that they’ve learned to hate so much.  There still may be an expensive operation going on, but from a user-perception POV, we’ve switched to a much less invasive operation – very similar to making it a background operation.

How does this relate to performance optimization in normal stand-alone apps?  Ok, I’ll try to get back to that topic.  If you’re still reading at this point, you’re to be commended (or, perhaps, pitied), so I’ll try to make it worth your while.

The best developers approach implementation with these steps:

  1. Generate (or, more likely, be handed) performance goals for given scenarios (typically the most critical and most common operations).
  2. Write the clearest, most readable, most maintainable code possible to implement the given feature/item/thing.  (In an ideal world, do so after you’ve gotten a battery of unit tests available to know when you’re done, but that’s a different discussion entirely).  It should pass whatever available tests you have at that point (you may still need to iterate with more tests to be developed, but that is also a separate discussion).
  3. Deploy the bits to the target environment and do performance and stress testing to determine current performance.
  4. *IF* needed (performance numbers aren’t meeting goals), then profile and start attacking the perf problems.  Your best changes will be algorithmic, batching, and similar kinds of higher-level changes – they typically won’t be re-coding some inner loop in assembly (I’ve done so twice in my entire development life as one data point).

Bad developers do the whole premature optimization thing (if you haven’t, you’ll want to read the page at that link – it’s worth it).  They try to get cute (or, sadly, try to prove how smart they are, which typically exposes the ones that aren’t… or at least the ones that haven’t read or haven’t understood Kernighan’s quote) and write ghastly code full of micro-optimizations that make the code much harder to read, debug, or modify, which means it’s hard (in some cases, impossible, since it’s less work to throw it out and rewrite it) to maintain.

Great – thanks for the perf rant – weren’t we talking about AJAX at some point?

Oh, right – thanks for reminding me.  AJAX, like doing expensive operations in the background for a standalone app (Eclipse does this really well, for instance), is a perf (or, arguably, perceived-perf, which is all that matters) improvement **that shouldn’t be applied from the start of a project** (bold applied to help group the idea together).  It will typically complicate things (although some frameworks try to minimize that effect) and lead to code (and, worse, typically javascript at that) that’s larger (more lines of code) and harder to maintain/debug/etc.  (The frameworks are coming along, but they minimize this effect, not prevent it, IMHO.)

To me, a smart project starts with no AJAX at all, because like profiling and subsequent performance techniques applied to stand-alone apps, it’s a technique applied as needed to meet perf goals (which, again, are really perf perception goals in human-interactive apps).

AJAX is a wonderful new tool to have in the toolbox of web developers.  However, like many other entities in that toolbox (even ones that have been with us for over a decade), over-applying the tool leads to poor results (much like the whole “when all you have is a hammer” line).

Another trait I’ve noticed of good developers to go along with the previous?  Application of the KISS principle.  One more?  Being extremely pragmatic.  Computers are tools – treating any of the choices involved with religious fervor tends to reflect a lack of pragmatism in my view.

Do your code, your career, your co-workers, and, ultimately, your users a huge favor and be a good developer.