Investigating the performance of your SSRS reports

SQL Server Reporting Services keeps a nice view around for looking at the performance stats – ExecutionLog2 (new to 2008; obviously the ExecutionLog view predated it 🙂)

Before you can optimize particular reports or your entire system, you need metrics and an understanding of what they tell you.  In this posting, I want to focus on how to effectively interpret and utilize the data present in the new ExecutionLog2 view in the Reporting Services 2008 catalog database.  In summary, I am covering the following topics:

  • Description of ExecutionLog2 columns, with tips on how to interpret values
  • ExecutionLog2.AdditionalInfo and some interesting pieces of information it provides
  • Tips for analyzing ExecutionLog2 information
  • Tips for optimizing reports

When you go to query it, just keep in mind that Parameters is an ntext column, so you’ll want to use ‘like’ instead of trying ‘=’ with a particular string.  Not sure why they didn’t make it nvarchar(max) since that was introduced in 2005, although it may be just to minimize the change between the 2 views.

If you’ve used the ExecutionLog view that predates this one, you’re likely to appreciate the case statements they added in the view definition of ExecutionLog2 to change the RequestType and Source to human-readable values 🙂

Also note that Robert’s blog has tons of good info on SSRS in general and performance in particular – go subscribe 🙂

Reflector 6.1 bug decompiling Google.GData.Client.Feed<T>’s Entries property of type IEnumerable<T> (iterator block)

Edit [2010-03-01] – turns out Reflector can’t handle reversing any iterator blocks.  Ouch.

I’m not sure how helpful this post will be for others, but I needed a place to stick screen shots since I can’t attach files to Reflector’s in-app “Send Feedback” dialog nor when creating posts in their forum.


I was decompiling the Google.GData.Client.dll that comes with the (binary-only in the YouTube SDK msi) YouTubeUploader.exe – normally this wouldn’t make much sense, since the source is right here, but ignore that for the purpose of this post, please. 🙂

Feed<T> in that assembly has a nice Entries property of type IEnumerable<T> (starts @ line 202 in the current trunk version of the file) implemented as an iterator block (using “yield return” / “yield break”).  It certainly has more logic than the average iterator block you’ll find out there, so I’m guessing part of the work the compiler had to do when implementing it confused Reflector.  Whatever the cause, Reflector doesn’t currently handle reversing it back to the iterator block in the source.

Reflector still shows you the generated code, of course – it just failed in whatever pass it does to identify these bits as the generated pattern and reverse it back.

Here’s the inner class of Feed<T> that’s generated:


And here’s the Entries property:


Most of the heavy lifting is done in the generated inner class’s MoveNext method (too long and of too little value to put inline 🙂)
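If it helps make the pattern concrete, here’s a minimal sketch (mine, not Google’s code) of an iterator block – the compiler rewrites the method body into a hidden nested class whose MoveNext implements the states:

```csharp
using System;
using System.Collections.Generic;

static class IteratorDemo
{
    // The compiler turns this body into a generated nested class (named
    // something like <OneTwoThree>d__0) implementing IEnumerator<int>;
    // that generated state machine is what Reflector shows you when its
    // pattern-matching pass fails to reverse it back to "yield return".
    public static IEnumerable<int> OneTwoThree()
    {
        yield return 1;
        yield return 2;
        yield return 3;
    }

    static void Main()
    {
        foreach (var i in OneTwoThree())
            Console.Write(i); // prints 123
    }
}
```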

If the above looks bizarre and/or you’re still curious about how the iterator pattern is implemented by the compiler (as state machines), read more here:

YouTubeUploader, Google.GData.Client.dll, and the UTF-8 Byte Order Mark

EDIT: as it turns out, this was just caused by a bug that snuck into the server side – the normal GData.Client.dll that sends the BOM works fine now, as does the modified one that doesn’t.

Background: As per this post, I’m making a simple upload-my-videos-to-YouTube app.  Step 1 was figuring out how we were going to get the info out of FlipShare, which we did in this post.  Step 2 is figuring out how we’re going to actually upload to YouTube, which is this post 🙂

Investigating available options

Just a couple of days before it was announced on their blog, I had started searching for available APIs (ideally already in .NET) and ran across their YouTube SDK and the larger GData SDK on their project download page.  Awesome!  Their wiki also described a sample app that would be perfect for me to learn from – YouTubeUploader!

One thing I noticed, though, was that looking around the filesystem, I didn’t see the source for YouTubeUploader.  I could certainly run Reflector on it (and did), and that’s certainly a lot better than nothing, but it was odd that all the other samples had source but not this one (the later announcement would explicitly say you had to get the source via subversion).

So, I went to the Source tab to try and figure out where it might be in the tree, but at the time there was only 1 hit in an unrelated unit test.  I noticed the UI had a typo in it (uplads instead of uploads), so I searched for that string too – no hits.  I filed a bug that I couldn’t find the source, which Frank from the GData team closed after noting the location in the source tree.  Checking again as I write this, whatever process indexes the source tree has now picked up the source and both searches return parts of the project 🙂

Trying to use YouTubeUploader

At that point in time, I actually had 5 videos left that hadn’t successfully uploaded with FlipShare, so I tried to use the YouTubeUploader app to get them uploaded.  Its interface to the user is a CSV file you need to create to define title/tags/category/path/etc.  No biggie – I open Excel and make it, then save as csv (I’m too lazy to make the CSV by hand, worrying about quoting strings with commas and the like).

However, when I actually run it, the uploads all fail (doh!) with “400 Bad Request” (that’s what shows up in the UI).  I figured I’ve just got something misconfigured, so I try a bunch of different things in the UI, in a YouTubeUploader.exe.config, etc., but no such luck.

At this point I think about just ditching it here rather than go down the investigatory rabbit hole, but since it’s from the GData team and it’s the sample I’d really like to start with, I press on.


So, time to dig in to figure out what’s going on.  I ran my current go-to tool for HTTP debugging, Fiddler, then restarted the app and had it try again.  As I should have expected, it does the calls over https, and Fiddler in the middle was breaking all the calls since it’s not a trusted CA.  I hadn’t actually added Fiddler as a trusted root CA before, but there are nice simple instructions for doing so.

With that working, I look at the request and response of a failed call.  You can see them in the bug I filed, but the problem didn’t jump out at me at first.  I googled the error message (“Content is not allowed in prolog”) and much like you’d expect, it comes from having stuff show up before the xml prolog (“<?xml … ?>”), breaking the xml parsing (from the hits, it appears to be in Java XML parsing, not that it matters).  So I look back at the request again and I didn’t see anything before the prolog.

However, Fiddler has a ton of different ways of showing the request and response, including a handy-dandy hex view.  That shows that there are indeed 3 bytes between the HTTP-spec 2 newlines (separating headers from body) and the xml prolog itself.  The bytes are 0xEF, 0xBB, and 0xBF.  Those bytes seemed oddly familiar, and Google reminded me why – it’s the UTF-8 Byte Order Mark.  Ah, yes, it all starts to make sense.
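As a quick sanity check, those 3 bytes are exactly what Encoding.UTF8 reports as its preamble:

```csharp
using System;
using System.Text;

static class BomCheck
{
    static void Main()
    {
        // The same 3 bytes Fiddler's hex view showed before the xml prolog
        byte[] bom = Encoding.UTF8.GetPreamble();
        Console.WriteLine(BitConverter.ToString(bom)); // EF-BB-BF
    }
}
```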

Who’s to blame?

So, more googling and I run across the post that confirms what’s going on with XmlTextWriter along with a fix (Thanks Rick!).

Looking at YouTubeUploader’s source, it’s just using the ResumableUploader class in Google.GData.Client.dll, so it seems clear the bug isn’t YouTubeUploader’s fault.  Since it presumably was working for others, I wondered if maybe it was something in the BCL (having 4.0 RC on the same box), although it should run side-by-side fine, and I had checked that YouTubeUploader was still running under 2.0/3.5.

Stepping through in the debugger, I see the offender – AtomBase.SaveToXml(Stream) does exactly what Rick’s post said – creating the XmlTextWriter with Encoding.UTF8, which writes the BOM.
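The fix from Rick’s post boils down to constructing the encoding as new UTF8Encoding(false) (the encoderShouldEmitUTF8Identifier parameter) instead of passing Encoding.UTF8.  A little repro sketch of my own (not the GData code) showing the difference:

```csharp
using System;
using System.IO;
using System.Text;
using System.Xml;

static class BomRepro
{
    public static bool StartsWithBom(Encoding encoding)
    {
        var stream = new MemoryStream();
        var writer = new XmlTextWriter(stream, encoding);
        writer.WriteStartDocument();
        writer.WriteElementString("entry", "hi");
        writer.Flush();
        byte[] bytes = stream.ToArray();
        return bytes.Length >= 3 &&
               bytes[0] == 0xEF && bytes[1] == 0xBB && bytes[2] == 0xBF;
    }

    static void Main()
    {
        // Encoding.UTF8 writes the BOM before the prolog...
        Console.WriteLine(StartsWithBom(Encoding.UTF8));           // True
        // ...while UTF8Encoding(false) skips it
        Console.WriteLine(StartsWithBom(new UTF8Encoding(false))); // False
    }
}
```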

Confirming the problem

At the time I was having problems rebuilding things (long story with no value – PEBKAC 🙂), so I wanted to verify that was the problem by modifying the requests.  I knew Fiddler had this capability, but I had never used it before.  Looking through the cookbook samples, though, it seemed pretty straightforward.

The biggest hurdle ended up being how to get those 3 bytes into a string, although for no good reason.  I knew about GetString off of Encoding, but for some bizarre reason I thought that, given the UTF8 BOM, it would just discard it (since I thought of it as ‘reading’ the UTF8 bytes) and leave me with an empty string.  Of course, once I finally tried it, it worked just fine.  Figures. 🙂

The BOM isn’t going to change, but rather than put the bytes in directly to the rule, I referenced GetPreamble, which resulted in this simple addition to the OnBeforeRequest handler:

	var utf8: System.Text.Encoding = System.Text.Encoding.UTF8;
	var bom: String = utf8.GetString(utf8.GetPreamble());
	oSession.utilReplaceInRequest(bom + "<?xml", "<?xml");

I ran YouTubeUploader again with that rule in place, and sure enough, the uploads start working fine!  Yay!

Trying the fix

Once I woke up and finally realized that I didn’t need to rebuild the full YouTubeUploader app, just the Google.GData.Client.dll (hey, it was late :), I made the one-line change (svn patch attached to the bug), rebuilt the dll, then dropped it in to Google YouTube SDK for .NET\Samples and tried YouTubeUploader again.  Sure enough, it worked fine (and faster, since it didn’t have to go through Fiddler and its request rewriting 🙂)

Where are we?

So, while we (well, I) have spent an unfortunate amount of time yak shaving, we have a working sample for how to upload to YouTube.  The sample is pretty incestuous between logic and UI (which is fine, it’s a sample :), but it’s working code, uses the resumable uploader API, already supports multiple simultaneous uploads, and already supports configurable numbers of retries.  It’s already doing the hard parts, so it makes my life much easier writing my little uploader object model and apps. 🙂

getting video info out of flipshare.db

WARNING: this shows an implementation detail of FlipShare 5.0, NOT a public interface – it could break with upcoming versions

Goal – get the info directly from flipshare.db.

As has been noted in many places, FlipShare uses a SQLite database for storing its data.  When you add titles (or, well, change them from “Untitled” to something else), it doesn’t modify the names of your video files on disk, it just updates a row in the database.  Same for moving to a new logical folder within FlipShare (and lots of other data changes, of course).  It’s a great decision on their part, IMHO, even if it makes interacting with your videos in the filesystem a bit more painful 🙂

If you’re trying to interact with the actual videos on disk, you’ll notice there’s no description in them at all – look under Videos\FlipShare Data\Videos and you see VIDnnnnn.mp4 files (where nnnnn appears to be a normal monotonically increasing integer, at least at first glance unrelated to any of the PK id’s in the database).

Since we want to upload our videos with the (arguably, meta)data that’s shown in the FlipShare UI (logical folders, titles, etc), we need to access that flipshare.db sitting in your Videos\FlipShare Data folder.  We could use FlipShare’s export feature, but that would mean taking more disk space than required and we’d have to re-export if we wanted to ‘sync out’ changes to titles, and AFAICT it wouldn’t reflect the logical folders (at least not automatically), just the titles.

Schema investigation

The closest I found to an existing schema explanation was this post, which admittedly does give you the 3 tables that hold the pieces of data you need – the title, the video location on disk, and (kinda optional if you just want to upload the video by title) the logical folder it’s in.

The problem for me was figuring out how to join these together.  SQLite supports foreign keys (well, at least 3.x does, not sure if that’s new though), but flipshare.db doesn’t use them, so I started to look at the data in my db’s tables to see if I could just notice it from the data.  The 2 tables I cared the most about were MediaElement and MediaElementSource since they hold the title and video file path, respectively.

  • I looked at the data of each to try and find the join condition, but nothing matched up.
    • so, it’ll be more than a simple 2-table join
  • At this point I wanted to look for a simple mapping table (so, a 3-table join)
    • I picked a particular video in MediaElement based on title,
    • then looked it up in the FlipShare UI to get its length
    • then sorted the VID*.mp4 files by length and looked to find the right video (the one that matches what’s shown in the FlipShare UI)
    • then found the MediaElementSource row that matched up
    • Now I had the PK values for the 2 tables I cared about for a single video.
    • Now I looked for data in the database (using .output and .dump in sqlite3.exe) that referenced those 2 values (in either order, of course), hoping there would be a single mapping table involved.  No such luck.
  • At that point, knowing it must be a join of more than 3 tables, I gave up on trying to inspect my way to the join conditions.
  • I ran findstr /sim on Program Files\Flip Video\FlipShare to find which places referenced those tables, which had hits for Core.dll and FlipShare.dll
  • Since Core.dll seemed the more likely place for the DAL I then ran strings.exe on the file, dumped it out to a text file, then started searching for the table names in the output.
  • I found a query that joined MediaElementSourceGraph (which I had already noticed during the manual schema inspection was linked to MediaElementSource by its mediaSourceId – nice, friendly FK name, even if not defined strictly as a foreign key) and MediaElementRendition (which I had similarly noticed was linked to MediaElement via mediaElementId, also clearly an FK name).
  • So, the final 4-table join would link MediaElementSource to MediaElementSourceGraph to MediaElementRendition to MediaElement
  • In retrospect, the schema makes sense, of course – I just couldn’t see the forest for the trees 🙂
  • As a bonus, we could add in the logical folder location for the video, which was much easier to figure out since UserFolderMediaElements is a simple mapping of videos (MediaElement rows) to logical folders (UserFolders rows) as you could tell by its name, so it was just adding those 2 more tables (6 total) to get all the info we’re looking for.

Building the query

Using LinqPad 2.x with the IQ driver (so we can query SQLite) we can use linq’s support for joining (even without foreign keys in place).  It defaults to a couple of common conventions I’ve seen in other OR mappers (linq-to-X and otherwise), making the table names plural (but not the row entity singular if the table is already plural, oddly enough IMHO) and init-cap’ing the property names.  You could certainly turn those off, the query would just need to be slightly different.

I hit one gotcha while constructing the query and adding in the 2 folder-related tables, though:


It took me a while to figure out because I incorrectly took the error message to mean it couldn’t figure out the type of ‘folderElem’ (and adding the explicit type didn’t change the error, of course :), but I eventually figured out it was because of a type mismatch between the members involved in the join condition (elem.Id equals folderElem.MediaElementId).  The UserFolderMediaElements table, oddly enough, doesn’t store the mediaElementId column as an int or similar numeric column – instead, it’s a string.  Not sure why that is (perhaps hysterical raisins), but it is.  Fixing it was simple enough, though – just ToString the elem.Id so it’s comparing 2 strings.  (You can’t use Convert.ToInt32 on the folderElem.MediaElementId since that method isn’t supported by the driver, at least not yet 🙂)

One optional where clause (commented out below) is checking the folder’s ParentId – for the FlipShare logical folders (under their ‘Computer’ node), the parent id is 8 (at least in my database 🙂 – i don’t see this in another table, so I’m guessing it’s a defined constant in the code (looks like folder id’s under 1000 are likely that way).  I don’t need this at the moment since I don’t have any videos in other areas (like the ‘Flip Channels’ node), but if you do, you might want that filter.

LINQ Query to get the video info

from source in this.MediaElementSources
    join graph in this.MediaElementSourceGraphs
        on source.Id equals graph.MediaSourceId
    join rend in this.MediaElementRenditions
        on graph.RenditionId equals rend.Id
    join elem in this.MediaElements
        on rend.MediaElementId equals elem.Id
    join folderElem in this.UserFolderMediaElements
        on elem.Id.ToString() equals folderElem.MediaElementId
    join folder in this.UserFolders
        on folderElem.Id equals folder.Id
    //where folder.ParentId == 8
select new
{
    elem.Name,
    elem.PreviewImagePath,
    source.Uri,
    rend.VideoFormat,
    folder.FolderName
}

NOTE: the VideoFormat isn’t really all that useful to have in the output, I had just included it because I was curious, but feel free to kill it if you don’t need it.  It’s “free” in that you have to join on MediaElementRendition anyway, of course 🙂

SQL Query to get the video info

Since this is LinqPad, I just need to click on the SQL button to get the query that the LINQ query was transformed into, which I include here since it’s more likely to be applicable/usable for people running across this post 🙂

Admittedly, the generated SQL doesn’t have the nicest table alias names, but that’s pretty easy to search-and-replace (or just remove them) if someone wants to 🙂

SELECT t0.[name], t0.[PreviewImagePath], t1.[uri], t2.[videoFormat], t3.[folderName]
FROM [MediaElementSource] AS t1
INNER JOIN [MediaElementSourceGraph] AS t4
  ON (t1.[id] = t4.[mediaSourceId])
INNER JOIN [MediaElementRendition] AS t2
  ON (t4.[renditionId] = t2.[id])
INNER JOIN [MediaElement] AS t0
  ON (t2.[mediaElementId] = t0.[id])
INNER JOIN [UserFolderMediaElements] AS t5
  ON (t0.[id] = t5.[mediaElementId])
INNER JOIN [UserFolders] AS t3
  ON (t5.[id] = t3.[id])

Obligatory Screenshot

Here’s the results from LinqPad, showing it’s the desired output 🙂


And, to finish off the post, a repeat of the warning we saw at the beginning, just in case someone skipped down to get the queries 🙂

WARNING: this shows an implementation detail of FlipShare 5.0, NOT a public interface – it could break with upcoming versions

One nit-pick with FlipShare – YouTube uploading

I got asked what my impressions are of the Flip UltraHD I’m testing out.  The hardware itself is fine, although a little mediocre in 2010 since the bar’s constantly going up (720p instead of 1080p, no image stabilization, but pretty good in low light, and the “candy bar” form factor is more usable than I expected).  FlipShare, the software that comes with it, is the linchpin for the overall user experience, which is really the main selling point, IMHO (something akin to Apple product experiences).

The box came with version 4.5 of FlipShare which had a good chunk of things I found annoying, but after the upgrade to their 5.0 version ( at the moment), the majority of those went away – it’s a nice, solid, very usable interface.  Good WAF, too.

It can upload to multiple services (facebook, myspace, youtube) – I picked youtube as a destination mainly for popularity – since it’s extended family that will be viewing these, I wanted a site they’d most likely already be somewhat familiar with.  I know a good chunk of them still aren’t on facebook, and most of them probably never even heard of myspace 🙂

The upload behavior has enough problems that I’ve started my own app that’ll upload the videos (to be covered more in posts to come), especially after I threw in the various “bonus” bits and realized it’d be pretty unlikely that all this gets taken care of by Cisco+LinkSys+Flip in a timeframe I’d be ok with :)  It also gives me a chance to take advantage of some of the new/better out-of-browser capabilities in Silverlight 4, although there will be cmdline and desktop-app versions as well 🙂

Problems with FlipShare’s uploading

  • I always upload to YouTube – let me say "always this", or at least default the radio button to what was last selected so I don’t have to keep clicking it (similar to how the visibility defaults to public instead of private later in the "wizard")
  • When uploading to YouTube, there are 2 phases involved:
    1. For a LONG time, FlipShare isn’t actually uploading anything and is just hogging CPU. Is it transcoding or something? It doesn’t (or at least, shouldn’t) need to – the files are already H.264 MP4 coming off the device.  I can upload the files directly with no transcoding using other youtube uploaders.
    2. After that it actually does the real upload – I can tell when it starts because the CPU fan in my laptop quiets down considerably 🙂
  • No X retries (would be very helpful when the laptop sleeps+resumes, since it’ll definitely fail during that cycle and will need to retry)
  • Related, apparently no resumable uploads?
  • If an upload job fails, all the videos in that upload job appear to fail
  • When a job fails, especially if you had multiple upload jobs going in parallel, you can’t tell which videos didn’t make it without inspecting what’s on youtube.
  • No ability to view the details of the actively running upload jobs, like:
    • Rate of transfer
    • List of files and their individual progress (completed/transferring/pending/etc)
    • Estimated time of completion (per file and overall), both as X hours and potentially as a time of day for users that don’t want to do the math themselves 🙂
  • No notifications of progress/finishing
    • Toast on desktop (like Outlook)
    • Emails
    • Bonus:
      • Facebook, Twitter, whatever notifications so family and friends find out about new videos (do per batch, not per video!)
      • NOTE: YouTube has this natively via AutoShare as a workaround for now, but I’d rather have it in the FlipShare UI so I can more easily customize the notifications (recipients, added text, etc) beforehand
  • Bonus: allow "syncing" of videos up to YouTube such that it checks whether a file is already up there (name, length, hash, whatever) and skips it if so.
    • This lets people just worry about maintaining a local copy and let it push up automatically, something akin to SyncToy
    • Target is eventually running on WHS where you get your videos onto your \\server\videos share and a WHS plugin syncs them up to YouTube for you
  • Bonus: allow creation of YouTube playlists based on folder structure within FlipShare
    • Similarly needs to be kept in sync – if you move videos around to different folders, it should modify the playlists

attaching a collection of strings to an email as a text file

Remember that previous post about List<string>.Add?  Well, one of the uses of the messages was to get into an email.

Unfortunately, there’s not an Attachment ctor that takes the contents of a file – they’re all oriented around passing in a filename or a stream.  Since I don’t want to have to write this out to disk, a stream’s the way to go.

NOTE: you would be forgiven if you read the example code in the string-param ctor and thought it actually *did* take content as the param.  I’m not sure the author of the example knew that the param was a filename, to be honest (although it’s possible the example was written during the 2.0 cycle when that ctor really did take content).  The example’s been busted for a while. :)  Even the second comment here is wrong – it’s not in the body, it’s an attachment! (API perspective, ignore mime encoding behavior 🙂)

// Attach the message string to this e-mail message.
Attachment data = new Attachment(textMessage);
// Send textMessage as part of the e-mail body.


Creating the Content

Now, on to getting the messages into file content.  Sure you could foreach and WriteLine something (for instance, new StreamWriter(new MemoryStream) then Position = 0 the stream afterwards for the ctor or whatever), but I like more declarative-ish constructs and like to actually be able to easily see (my infinitives are very flexible – they can do a split!) the file content in the debugger, so:

var testingContent = String.Concat(testingMessages
    .Select(s => s + Environment.NewLine) // append a newline to each
    .ToArray()); // make array so Concat can take it

Admittedly that last line is annoying, but it’s only there because this is currently built against 3.5 – in 4.0 String.Concat (and String.Join) thankfully added IEnumerable overloads
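For reference, on 4.0 the Concat+Select version above collapses to the String.Join overload (note Join separates items rather than terminating each, hence the trailing NewLine if you want identical content):

```csharp
using System;
using System.Collections.Generic;

static class JoinDemo
{
    static void Main()
    {
        var testingMessages = new List<string> { "first message", "second message" };

        // 4.0's IEnumerable<string> overload – no ToArray() needed.
        // Join puts separators between items, so append a trailing
        // NewLine to match the append-to-each version above.
        string testingContent =
            String.Join(Environment.NewLine, testingMessages) + Environment.NewLine;

        Console.Write(testingContent);
    }
}
```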

Then it’s a matter of constructing a stream (I’d use UTF8 or Unicode normally, but wanted ASCII for this)

new MemoryStream(Encoding.ASCII.GetBytes(testingContent))

then passing it to the ctor (and then adding that to the message)

message.Attachments.Add(new Attachment(testingMemoryStream, "text/plain")
{
    Name = "testingMessages.txt",
});

Attachments and Disposed Streams

One minor gotcha you may run across – the Attachment (smartly) doesn’t pull anything out of the stream at ctor time – the data will be fetched when you actually send the message (since at that point it needs the data to actually write the outgoing email), so while you should do a using() or similar to dispose the MemoryStream you create, you need to make sure that doesn’t happen until after the email is sent – otherwise you’ll get an exception when it tries to read the disposed stream :) 

You may be tempted to using (new MemoryStream) { message.Attachments.Add(…); } smtpClient.Send(message); but don’t do it! 🙂
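In other words, keep the Send inside the using – a sketch of the safe ordering (the names here are made up for illustration, not from any particular app):

```csharp
using System.IO;
using System.Net.Mail;
using System.Text;

static class AttachmentSender
{
    public static void SendWithAttachment(SmtpClient smtpClient, MailMessage message, string testingContent)
    {
        // The Attachment reads the stream lazily when the message is sent,
        // so the stream must stay alive until after Send returns.
        using (var stream = new MemoryStream(Encoding.ASCII.GetBytes(testingContent)))
        {
            message.Attachments.Add(new Attachment(stream, "testingMessages.txt", "text/plain"));
            smtpClient.Send(message); // stream is read here, still inside the using
        }
    }
}
```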

Memory Usage?

Since someone will likely point it out – I know the memory behavior of this isn’t great – we end up with the same content in 3 places: 1) the List<string> 2) the file content 3) the memory stream’s byte buffer.  You could certainly have created the MemoryStream, wrapped it in a StreamWriter, then written the messages to it instead of the List<string>, reset the position before passing to the ctor, and been down to 1 copy of the contents in memory instead of 3.

Mitigating factors are: 1) This code path isn’t used in production and 2) we’re not keeping the copies around very long – the first 2 copies are free to be GC’d after the Add of the attachment since they’re local vars that go out of scope.

Blinking is for losers


Looks like cocaine is still, apparently, a hell of a drug. 😉

Of course, maybe it’s just caffeine that’s got him wired.

That leads to an obligatory Chris Knight quote:

If you think that by threatening me you can get me to do what you want… Well, that’s where you’re right. But – and I am only saying that because I care – there’s a lot of decaffeinated brands on the market that are just as tasty as the real thing.

another extension method: GetValueOrDefault for Dictionary

As many of you remember, Hashtable did (well, still does) store objects (just like ArrayList and the other non-generic collections).  When a key wasn’t found, it would return null back as a way of saying “not found”.  This was kind of painful as an approach, since many times applications ended up with null refs after doing a failed lookup that didn’t throw.

Dictionary fixed that such that a lookup that fails to find the key will throw.  Since it’s generic, the value type (as in, the TValue type in the generic type params) might be, well, a value type (as in, not a reference type, so it can’t be null), so you can’t use null to differentiate failed-lookup from lookup-succeeded-and-found-a-value-of-null.  If you want to try and get the value but not have an exception to deal with if it’s not there, Dictionary gives you TryGetValue, which will try to fetch the value (into an out variable you have to declare first) and then return a bool for whether or not the lookup succeeded.  It gives you everything you need.

If you hate that change in behavior, don’t worry, it was hashed out (internally and externally) to death – here’s a related blog post about the change from during the Whidbey cycle – the comments should give you an idea of the sides involved.

The really annoying thing, though, is the resulting verbosity of code that wants to use a Dictionary (for type safety, to avoid casts, etc).  You’ll often end up with something like:

string mimeType;
if (MimeMap.TryGetValue(extension, out mimeType))
    return mimeType;
return "binary/octet-stream";

Kind of annoying, hunh?  It’s a lot of text and control flow for the relatively simple concept of “return the value for a successful lookup, return this default otherwise.”

So, extension methods to the rescue, of course. 🙂

Since the above pattern will be the same for all such scenarios, we can genericize it for Dictionary:

public static TValue GetValueOrDefault<TKey, TValue>(this Dictionary<TKey, TValue> self, TKey key, TValue defaultValue)
{
    TValue val;
    if (self.TryGetValue(key, out val))
        return val;
    return defaultValue;
}

The 2 generic type params might seem scary, but since they’re the same ones in the dictionary that’s passed in, the compiler can infer them, so you won’t have to specify them yourself.

Now the original snippet of code reduces to the easier-to-read-and-understand:

return MimeMap.GetValueOrDefault(extension, "binary/octet-stream");

I’m sure this will cause twitching in some readers, but I like it 🙂

List<string> should format args for me

I was building up a list of status messages which I was going to have to build with String.Concat or (more likely) String.Format calls, so instead I just added an overload for the case where additional args are passed in.  Much nicer 🙂

public static void Add(this List<string> self, string format, params object[] args)
{
    self.Add(String.Format(format, args));
}
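Usage then looks like this (with made-up messages) – note a plain Add(string) still resolves to the instance method, since instance methods win over extensions:

```csharp
using System;
using System.Collections.Generic;

static class ListStringExtensions
{
    // the format-args overload from above
    public static void Add(this List<string> self, string format, params object[] args)
    {
        self.Add(String.Format(format, args));
    }
}

static class Program
{
    static void Main()
    {
        var messages = new List<string>();
        messages.Add("Processed {0} of {1} files", 3, 10); // extension overload
        messages.Add("done");                              // normal instance Add
        Console.WriteLine(messages[0]); // Processed 3 of 10 files
    }
}
```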

answer to C# quiz

Original quiz post is here:

  • it does compile
  • it gives no compiler warnings
  • the output is:
  • System.Linq.Enumerable+WhereEnumerableIterator`1[System.Char]
  • more obviously, the output is *not* “123456” as you might have expected
  • I posted the quiz because I had hit this behavior and it surprised me.  I wasn’t sure if I could “override” (in a weird way) something that’s already on the type (like ToString()).  I figured if it wouldn’t work, the compiler would at least warn me (but, of course, it didn’t).  I filed a bug about it, but I’d be surprised if the behavior changed 🙂

    What really strikes me as odd is that this means when you add extension methods to a type (even part of a public API), you’re taking a gamble that the type won’t add the same method+params itself – if/when it does, your next build will start calling a totally different method with likely different behavior, return values, side effects, etc.  All without ever doing anything in your code to say you wanted to call this new method.  And without ever getting warned about this potential ‘conflict’/’redirect’ situation.

    A real-world example of hitting this is the upcoming .NET 4.0 methods – I had added a CopyTo extension method awhile back to Stream (so I could take streams from SSRS and send them to the Response.Output stream in some pages).

    Now it turns out that in 4.0 the BCL team nicely added CopyTo to Stream!  When I upgraded the app to 4.0, I got no warnings about this situation (I only knew about it from keeping up with the 🙂

    Admittedly in this particular case the behavior is likely the same, but what if my version had been doing a .Position = 0 on the source after the copy to the destination and I relied on that?

    IOW, without the compiler at least warning about this situation, you have what amounts to a breaking change (from my app’s perspective) brought in by a new framework that made a nice, innocuous, should-not-break-anything change.

    Perhaps I’m overreacting but it seems like the spirit of the C# compiler (at least, my very vague understanding of it 🙂 is being violated with the current behavior.
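To make the ‘silent redirect’ concrete, here’s a contrived sketch – pretend the instance method below is the one a new framework version adds after you shipped the extension:

```csharp
using System;

class Widget
{
    // pretend this instance method shows up in "v2" of the library...
    public string Describe() { return "instance method"; }
}

static class WidgetExtensions
{
    // ...after you had already shipped this extension with the same signature
    public static string Describe(this Widget self) { return "extension method"; }
}

static class Program
{
    static void Main()
    {
        // The instance method silently wins – no warning that the extension
        // is now dead code, and no change needed in your calling code.
        Console.WriteLine(new Widget().Describe()); // instance method
    }
}
```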