Enabling jQuery IntelliSense in an MVC project with Visual Studio 2010


Heads up!

The blog has moved!
The new URL to bookmark is http://blog.valeriogheri.com/

 

Hello,
today I’d like to share a quick trick that enables IntelliSense when writing jQuery code in a View.
If we follow the out-of-the-box convention of MVC 3, our layout code lives in Views/Shared/_Layout.cshtml, and inside it we can put all of our global script and stylesheet references.
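As a reminder, the global references in _Layout.cshtml typically look something like this (a minimal sketch; the jQuery version and the Content/Scripts paths are the MVC 3 defaults and may differ in your project):

```html
<head>
    <title>@ViewBag.Title</title>
    <link href="@Url.Content("~/Content/Site.css")" rel="stylesheet" type="text/css" />
    <script src="@Url.Content("~/Scripts/jquery-1.7.1.min.js")" type="text/javascript"></script>
</head>
```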

Then, when we write jQuery code in our own View, we notice that, contrary to our expectations, we have no IntelliSense support.
The problem lies in the way Visual Studio works: the IntelliSense engine cannot see the references we made in _Layout, because it doesn’t know which layout will be used to render the View until the code is actually running (just imagine a scenario where we dynamically choose a layout based on the user profile to understand why Visual Studio works this way).

The solution

The first solution I came up with was to add the vsdoc script reference (jquery-X.Y.Z-vsdoc.js) to the View again, and I had IntelliSense back.

Enabling Intellisense

Habemus IntelliSense!

The downside of this is a duplicate reference, which produces a duplicate request when browsing the site, and this is bad!

So I googled and found a pretty nice trick to enable IntelliSense while avoiding the downside (link):

@if (false)
{
<script src="../../Scripts/jquery-1.7.1-vsdoc.js" type="text/javascript"></script>
}

By surrounding the script reference with an @if (false) block, the reference will never be rendered to the browser, because the condition is always false, but Visual Studio will still parse it and give us IntelliSense!

Handling transactions in .Net using TransactionScope



For one of the projects I’m currently working on, I had to refactor a (sort of) data access layer and move it from NHibernate to stored procedures.
During the code review I noticed some macro logical operations, made up of several short-lived and concise activities, that were perfect to group together inside transactions.
I didn’t want to group them inside some bloated stored procedure, moving logic away from the code and into the DB, so I decided to look at what the framework had to offer, and this is how I ended up in the TransactionScope world.

When using transactions is a good idea and when it isn’t

Transactions are great when you can group a set of short and quick activities together: if any of those activities fails, the whole transaction fails and rolls back all the work already done. Everything is taken care of automatically by the framework, ensuring data consistency.
While the transaction is executing, the database engine keeps the required resources locked, which means no one else can use the locked data (the degree of freedom depends on the isolation level the transaction is running under; more on this later).
Now imagine your transaction taking many minutes or hours (or more) to complete… this could create deadlocks, starvation or timeouts for other operations! Definitely not a good scenario, is it?

TransactionScope Class: the framework to the rescue 

Before you start, I strongly suggest reading the fundamentals here if you haven’t already.
Let’s start with some examples to see it in action:
private static void BaseCaseSuccessTest()
{
    //Let's get the connection to SQL Server
    var connection = GetConnection();
    try
    {
        using (TransactionScope rootScope = new TransactionScope())
        {
            using (connection)
            {
                connection.Open();
                // All code placed here will take part in the transaction
                connection.Close();
            }
            rootScope.Complete();
        }
    }
    catch (TransactionAbortedException tae)
    {
        Console.WriteLine("Test aborted: " + tae.Message);
    }
}
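The examples in this post call a GetConnection() helper that isn’t shown; here is a minimal sketch of what it might look like, assuming a connection string named "MyDb" in the app configuration (both the name and the location are hypothetical):

```csharp
// Requires System.Data.SqlClient and System.Configuration.
// Hypothetical helper assumed by the examples: it just builds an
// unopened SqlConnection from a configured connection string.
private static SqlConnection GetConnection()
{
    // "MyDb" is a placeholder name; use your own connection string
    var connectionString =
        ConfigurationManager.ConnectionStrings["MyDb"].ConnectionString;
    return new SqlConnection(connectionString);
}
```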
All the code placed inside the TransactionScope using statement will be executed inside the same ambient transaction, even if it’s code belonging to an external dll!
Furthermore, when using the default constructor we are implicitly asking the transaction manager to join the existing ambient transaction if one is present, or to create a new one otherwise.
We are also asking the framework to create a transaction with the default timeout (set to 1 minute) and with isolation level set to Serializable (more on this later).
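Spelled out explicitly, the default constructor above is equivalent to the following sketch (TransactionManager.DefaultTimeout reflects the configured default, 1 minute unless overridden):

```csharp
// Equivalent to new TransactionScope(): join (or create) the ambient
// transaction, with Serializable isolation and the default timeout.
var defaults = new TransactionOptions
{
    IsolationLevel = IsolationLevel.Serializable,
    Timeout = TransactionManager.DefaultTimeout
};
using (var scope = new TransactionScope(TransactionScopeOption.Required, defaults))
{
    // transactional work here
    scope.Complete();
}
```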
Now, what if we want to do some non transactional work in the middle of a transaction, without leaving it?
One of TransactionScope’s constructors accepts the TransactionScopeOption enumeration as a parameter, and we can set it to Suppress, like I do in the following example:
private static void SuppressTest()
{
    var connection = GetConnection();
    try
    {
        using (TransactionScope scope1 = new TransactionScope())
        {
            using (connection)
            {
                connection.Open();
                // Do some work
                using (TransactionScope scope2 = new TransactionScope(TransactionScopeOption.Suppress))
                {
                    // Non transactional work here

                    var current = Transaction.Current; // current is equal to null

                    // This will create a new ambient transaction, as the enclosing scope (scope2) is currently running with no ambient transaction
                    using (TransactionScope scope3 = new TransactionScope(TransactionScopeOption.Required))
                    {
                        current = Transaction.Current;  // current is now not null
                        scope3.Complete();
                    }

                    // Calling scope2.Complete() is not mandatory, given that the operations here are non transactional, but it is
                    // nevertheless recommended for the sake of code consistency
                    scope2.Complete();
                }
                connection.Close();
            }
            scope1.Complete();
        }
    }
    catch (TransactionAbortedException tae)
    {
        Console.WriteLine("Test aborted: " + tae.Message);
    }
}
We can even create a new transaction inside the Suppress block of code!
It is important to note that if for any reason the code executed inside the Suppress block of code fails, the ambient transaction created with scope1 will not be aborted!
It’s also worth noting that calling scope2.Complete() in this case is not mandatory, given that inside the Suppress block we are not in a transaction, but it’s nevertheless recommended for the sake of code consistency (scope3.Complete(), on the other hand, is required: scope3 creates a real ambient transaction that would otherwise roll back).

Transaction isolation level

I will start citing MSDN here:
“By default, the transaction executes with isolation level set to Serializable. Selecting an isolation level other than Serializable is commonly used for read-intensive systems. […]
Every isolation level besides Serializable is susceptible to inconsistency resulting from other transactions accessing the same information.”

Now, what does it mean in terms of performance that the default isolation level is Serializable? Let’s see what Wikipedia has to say about it:
“This is the highest isolation level. It specifies that all transactions occur in a completely isolated fashion, or, in other words, as if all transactions in the system had executed serially, one after the other. The DBMS may execute two or more transactions at the same time only if the illusion of serial execution can be maintained.
With a lock-based concurrency control DBMS implementation, serializability requires read and write locks (acquired on selected data) to be released at the end of the transaction.“

As pointed out in this MSDN blog, the default behaviour can be harmful if you don’t know exactly what you’re doing and what usage patterns your system will have, because you might end up having deadlock and timeout problems, without a clue about why this is happening!

Let’s now see an example that shows how to create a non default TransactionScope object, manually setting the isolation level to ReadCommitted and the timeout for our transaction:

private static void TransactionWithIsolationAndTimeout()
{
    var transactionScopeOptions = new TransactionOptions();
    // The default isolation level value is Serializable
    // Here we explicitly ask the framework to create a transaction with isolation level ReadCommitted:
    // Volatile data cannot be read during the transaction, but can be modified.
    transactionScopeOptions.IsolationLevel = System.Transactions.IsolationLevel.ReadCommitted;
    transactionScopeOptions.Timeout = TimeSpan.MaxValue;
    var connection = GetConnection();
    try
    {
        using (TransactionScope scope1 = new TransactionScope(TransactionScopeOption.Required, transactionScopeOptions))
        {
            using (connection)
            {
                connection.Open();
                // Do transactional work using DB connection
                connection.Close();
            }
            scope1.Complete();
        }
    }
    catch (TransactionAbortedException tae)
    {
        Console.WriteLine("Test aborted: " + tae.Message);
    }
}

To end this topic, it’s important to note that when using nested TransactionScope objects, all nested scopes must be configured to use exactly the same isolation level if they want to join the ambient transaction. If a nested TransactionScope object tries to join the ambient transaction yet it specifies a different isolation level, an ArgumentException is thrown.
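A minimal sketch of that failure mode (the ArgumentException is thrown when the nested scope is constructed, not when it completes):

```csharp
using (var outer = new TransactionScope()) // Serializable by default
{
    var options = new TransactionOptions { IsolationLevel = IsolationLevel.ReadCommitted };
    try
    {
        // Trying to join the ambient Serializable transaction while asking
        // for ReadCommitted: this constructor call throws ArgumentException.
        using (var inner = new TransactionScope(TransactionScopeOption.Required, options))
        {
            inner.Complete();
        }
    }
    catch (ArgumentException ex)
    {
        Console.WriteLine("Cannot join ambient transaction: " + ex.Message);
    }
    outer.Complete();
}
```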

Local transaction vs. Distributed one

Up until now we’ve seen examples of just one SQL connection shared amongst several TransactionScope objects (and sometimes transactions), opened and closed only once.
This means that only one SQL server and one database is concerned.
This is what is called a local transaction, also known as lightweight.
A distributed transaction is a local transaction that has been escalated to MSDTC (Microsoft Distributed Transaction Coordinator).
Lightweight transactions should be the preferred solution when possible, because escalating to MSDTC adds overhead to the whole process (and avoiding it saves you the headache of configuring MSDTC!).
So, how to avoid escalating? There is a lot about this topic on the internet because it’s not easy to understand exactly when a transaction is escalated and how to configure MSDTC to correctly handle your requests.
Browsing Stack Overflow I found a very interesting post that sheds some light on the problem and that you can read for yourself.
Anyway, to summarize and remain practical, SQL Server 2005 and 2008 behave differently when handling and escalating transactions:

SQL2008:

  • Allows multiple connections, not simultaneously open, within a single TransactionScope without escalating to MSDTC.
  • If those multiple SqlConnections are nested, that is, two or more SqlConnections are opened at the same time, TransactionScope will immediately escalate to DTC.
  • If an additional SqlConnection is opened to a different ‘durable resource’ (i.e. a different SQL Server instance or a different database inside the same SQL Server instance) it will immediately escalate to DTC

SQL2005:

  • Does not allow multiple connections within a single TransactionScope. It will escalate when a second SqlConnection is opened, even if the previous one has been already closed.

For example, the following code will escalate on SQL Server 2005:

using (TransactionScope transactionScope = new TransactionScope())
{
    using (SqlConnection connection = new SqlConnection(connectionString))
    {
        connection.Open();
        connection.Close();
        connection.Open(); // escalates to DTC
    }
}
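If you want to verify at runtime whether escalation has actually happened, you can inspect the ambient transaction’s DistributedIdentifier: it stays Guid.Empty as long as the transaction remains lightweight. A small sketch:

```csharp
using (var scope = new TransactionScope())
{
    // Open connections and do some work here...

    // Guid.Empty means the transaction is still local (lightweight);
    // a non-empty value means it has been escalated to MSDTC.
    bool escalated =
        Transaction.Current.TransactionInformation.DistributedIdentifier != Guid.Empty;
    Console.WriteLine("Escalated to DTC: " + escalated);

    scope.Complete();
}
```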

This ends my article on how to handle transactions with the TransactionScope object in the .NET Framework; I hope it will save you some time and headaches!

Cheers,

Valerio