Introduction

This article explains the concepts behind implementing multi-threaded applications in .NET through a working code example. It briefly covers the following topics:

  1. Concepts of threading
  2. How to implement multi-threading in .NET
  3. Concepts behind implementing Thread Safe applications
  4. Deadlocks

What is a Process?

A process is an Operating System context in which an executable runs. It is used to segregate virtual address space, threads, object handles (pointers to resources such as files), and environment variables. Processes have attributes such as base priority class and maximum memory consumption.

Meaning…

  1. A process is a memory slice that contains resources
  2. An isolated task performed by the Operating System
  3. An application that is being run
  4. A process owns one or more Operating System threads

Technically, on a 32-bit system a process is given a private virtual address space of 4 GB. This memory is private to the process and cannot be accessed directly by other processes.

What is a Thread?

A thread is an instruction stream executing within a process. All threads execute within a process and a process can have multiple threads. All threads of a process use their process’ virtual address space. The thread is a unit of Operating System scheduling. The context of the thread is saved / restored as the Operating System switches execution between threads.

What is Multi-Threading?

Multi-threading is when a process has multiple threads active at the same time. This allows for either the appearance of simultaneous thread execution (through time slicing) or actual simultaneous thread execution on hyper-threaded and multi-processor systems.

Multi-Threading – Why and Why Not

Why multi-thread:

  • To keep the UI responsive.
  • To improve performance (for example, concurrent operation of CPU bound and I/O bound activities).

Why not multi-thread:

  • Overhead can reduce actual performance.
  • It complicates code, increases design time, and increases the risk of bugs.

Thread Pool

The thread pool provides your application with a pool of worker threads that are managed by the system. The threads in the managed thread pool are background threads. A ThreadPool thread will not keep an application running after all foreground threads have exited. There is one thread pool per process. The thread pool has a default size of 25 threads per available processor. The number of threads in the pool can be changed by the SetMaxThreads method. Each thread uses the default stack size and runs at the default priority.
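
As a minimal sketch (not part of the demo project), work can be queued to the managed thread pool with ThreadPool.QueueUserWorkItem; the class and method names below are illustrative placeholders:

using System;
using System.Threading;

class ThreadPoolDemo
{
    static void PrintThreadId(object state)
    {
        // Runs on a background thread supplied by the pool.
        Console.WriteLine("Working on pool thread " +
                          Thread.CurrentThread.ManagedThreadId);
    }

    static void Main()
    {
        ThreadPool.QueueUserWorkItem(new WaitCallback(PrintThreadId));

        // Pool threads are background threads, so keep the foreground
        // thread alive long enough for the work item to run.
        Console.ReadLine();
    }
}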

Threading in .NET

In .NET, threading can be achieved by one of three methods:

  1. Thread class
  2. Delegates
  3. Background Worker

In the sections below, we will see how threading can be implemented by each of these methods.

In a nutshell, multi-threading is a technology by which any application can be made to run multiple tasks concurrently, thereby utilizing the maximum computing power of the processor and keeping the UI responsive. An example of this can be expressed by the block diagram below:

The code

The project is a simple WinForms application which demonstrates the use of threading in .NET by three methods:

  1. Delegates
  2. Thread class
  3. Background Worker

The application executes a heavy operation asynchronously so that the UI is not blocked. The same heavy operation is performed using each of the three approaches above to demonstrate their use.

The “Heavy” Operation

In the real world, a heavy operation can be anything from polling a database to streaming a media file. For this example, we simulate a heavy operation by repeatedly appending values to a string. Because strings are immutable, each append creates a new string object and discards the old one (this is handled by the CLR). Done a huge number of times, this consumes a lot of resources, which is why we would normally use StringBuilder.Append instead (see the sketch below). In the UI, the up-down counter specifies the number of times the string is appended.
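
For comparison only (the demo deliberately keeps the slow path), a StringBuilder version of the same loop might look like this; the name LoadDataFast is illustrative:

using System.Text;

public static string LoadDataFast(int max)
{
    // StringBuilder mutates an internal buffer instead of allocating
    // a new string on every iteration.
    StringBuilder sb = new StringBuilder();

    for (int i = 0; i < max; i++)
    {
        sb.Append(i);
    }

    return sb.ToString();
}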

We have a Utility class in the backend with a LoadData() method. It also declares a delegate whose signature matches that of LoadData().

class Utility
{
    public delegate string delLoadData(int number);
    public static delLoadData dLoadData;

    public Utility()
    {
    }

    public static string LoadData(int max)
    {
        string str = string.Empty;

        for (int i = 0; i < max; i++)
        {
            str += i.ToString();
        }

        return str;
    }
}

The Synchronous Call

When you click the “Get Data Sync” button, the operation runs on the UI thread itself (a blocking call). Hence, for as long as the operation is running, the UI remains unresponsive.

private void btnSync_Click(object sender, EventArgs e)
{
    this.Cursor = Cursors.WaitCursor;
    this.txtContents.Text = Utility.LoadData(upCount);
    this.Cursor = Cursors.Default;
}

The Asynchronous Call

Using Delegates

If you choose the radio button “Delegates”, the LoadData() method is called asynchronously using a delegate. We first initialize an instance of delLoadData with the address of Utility.LoadData(). Then we call the delegate's BeginInvoke() method. By convention in .NET, method pairs named BeginXXX/EndXXX follow the Asynchronous Programming Model: delegate.Invoke() calls the method on the current thread, while delegate.BeginInvoke() calls it on a thread-pool thread.

BeginInvoke() takes three arguments:

  1. The parameter to be passed to the Utility.LoadData() method
  2. The address of the callback method
  3. A state object (available to the callback through IAsyncResult.AsyncState)

Utility.dLoadData = new Utility.delLoadData(Utility.LoadData);
Utility.dLoadData.BeginInvoke(upCount, CallBack, null);

The Callback

Once we spawn an operation on another thread, we need to know what is happening in that operation; in other words, we should be notified when it has completed. There are three ways of knowing whether the operation has completed:

  1. Callback
  2. Polling
  3. Wait until done

In our project, we use a callback method to trap the completion of the thread. This is simply the method whose name you passed when calling BeginInvoke(); the thread comes back and invokes that method once it has finished doing what it was supposed to do.

Once a method is fired on a separate thread, you may or may not be interested in what that method returns. If you are not, it is a “fire and forget” call; in that case you would not need the callback and would pass null for the callback parameter.

Utility.dLoadData.BeginInvoke(upCount, CallBack, null);

In our case, we need a callback method and hence we have passed the name of our callback method, which is coincidentally CallBack().

private void CallBack(IAsyncResult asyncResult)
{
    string result = string.Empty;

    if (this.cancelled)
        result = "Operation Cancelled";
    else
        result = Utility.dLoadData.EndInvoke(asyncResult);

    object[] args = { this.cancelled, result };
    this.BeginInvoke(dUpdateUI, args);
}

The signature of a callback method is – void MethodName(IAsyncResult asyncResult).

The IAsyncResult contains the necessary information about the asynchronous call. The returned data can be retrieved as follows:

result = Utility.dLoadData.EndInvoke(asyncResult);

The polling approach (not used in this project) looks like the following; note that EndInvoke() is given the IAsyncResult returned by BeginInvoke():

IAsyncResult r = Utility.dLoadData.BeginInvoke(upCount, null, null);
while (!r.IsCompleted)
{
    // do other work while the operation runs
}
result = Utility.dLoadData.EndInvoke(r);

The wait-until-done approach, as the name suggests, waits until the operation is completed; EndInvoke() itself blocks until the result is available.

IAsyncResult r = Utility.dLoadData.BeginInvoke(upCount, null, null);

// do other work, then block until LoadData() finishes
result = Utility.dLoadData.EndInvoke(r);

Updating the UI

Now that we have trapped the end of the operation and retrieved the result that LoadData() returned, we need to update the UI with that result. But there is a problem: the text box to be updated belongs to the UI thread, while the result arrives in the callback, which runs on the thread-pool thread that executed the operation. In other words, the text box cannot be updated as shown below:

this.txtContents.Text = text;

Executing this line in the callback method results in a cross-thread InvalidOperationException. We have to form a bridge between the background thread and the UI thread to push the result into the text box. That is done using the form's Invoke() or BeginInvoke() methods.

I have defined a method which will update the UI:

private void UpdateUI(bool cancelled, string text)
{
    this.btnAsync.Enabled = true;
    this.btnCancel.Enabled = false;
    this.txtContents.Text = text;
}

Define a delegate to the above method:

private delegate void delUpdateUI(bool value, string text);
dUpdateUI = new delUpdateUI(UpdateUI);

Call the BeginInvoke() method of the form:

object[] args = { this.cancelled, result };
this.BeginInvoke(dUpdateUI, args);

One thing to note here is that once an operation is started through a delegate, the thread-pool thread it runs on cannot be cancelled, suspended, or aborted; we have no handle on that thread. The best we can do is cooperative cancellation, as the sketch below suggests.
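
Here is a minimal sketch, assuming the form uses a simple boolean field (the cancelled flag seen in CallBack()) together with a Cancel button; the handler name btnCancel_Click is assumed:

private volatile bool cancelled;

private void btnCancel_Click(object sender, EventArgs e)
{
    // We cannot abort the pool thread itself; we only record the request.
    // CallBack() checks this flag and reports "Operation Cancelled".
    this.cancelled = true;
}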

Using the Thread Class

The same operation can be achieved using the Thread class. The advantage is that the Thread class gives you more power over suspending and cancelling the operation. The Thread class resides in the namespace System.Threading.

We have a private method LoadData() which is a wrapper to our Utility.LoadData().

private void LoadData()
{
    string result = Utility.LoadData(upCount);
    object[] args = { this.cancelled, result };
    this.BeginInvoke(dUpdateUI, args);
}

We need this wrapper because Utility.LoadData() requires an argument, while the ThreadStart delegate used to initialize the thread takes no parameters.

doWork = new Thread(new ThreadStart(this.LoadData));
doWork.Start();

The ThreadStart delegate has a void, parameterless signature. If we need to pass an argument, we have to use the ParameterizedThreadStart delegate instead. That delegate takes a single object parameter, so the target method must cast it back to the required type (an int in our case).

doWork = new Thread(new ParameterizedThreadStart(this.LoadData));
doWork.Start(parameter);
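
A minimal sketch of what the parameterized version might look like; the LoadData(object) overload shown here is hypothetical and not part of the demo project:

private void LoadData(object state)
{
    // ParameterizedThreadStart only passes an object, so cast it back.
    int count = (int)state;

    string result = Utility.LoadData(count);
    object[] args = { this.cancelled, result };
    this.BeginInvoke(dUpdateUI, args);
}

doWork = new Thread(new ParameterizedThreadStart(this.LoadData));
doWork.Start(upCount);   // upCount is boxed into an object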

The Thread class also gives you a lot of control over the thread through members such as Suspend, Abort, Interrupt, and ThreadState.

Using BackgroundWorker

The BackgroundWorker is a component that helps make threading simple. Its main feature is that it can report progress asynchronously, which can be used to update a progress bar, keeping the user visually informed about the progress of the operation.

To do this, we need to set the following properties to true (they are false by default), as shown below:

  • WorkerReportsProgress
  • WorkerSupportsCancellation
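
A minimal initialization sketch (assumed; in the demo project this would typically be done in the designer or the form's constructor):

this.bgCount.WorkerReportsProgress = true;
this.bgCount.WorkerSupportsCancellation = true;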

The component has three main events: DoWork, ProgressChanged, and RunWorkerCompleted. We need to register handlers for these events at initialization:

this.bgCount.DoWork += new DoWorkEventHandler(bgCount_DoWork);
this.bgCount.ProgressChanged += 
     new ProgressChangedEventHandler(bgCount_ProgressChanged);
this.bgCount.RunWorkerCompleted += 
     new RunWorkerCompletedEventHandler(bgCount_RunWorkerCompleted);

The operation can be started by invoking the RunWorkerAsync() method as shown below:

this.bgCount.RunWorkerAsync();

Once this is invoked, the following method is invoked for processing the operation:

void bgCount_DoWork(object sender, DoWorkEventArgs e)
{
    string result = string.Empty;

    for (int i = 0; i < this.upCount; i++)
    {
        // Check for a pending cancellation on every iteration; checking it
        // only once before the loop would miss a cancel request made while
        // the operation is running.
        if (this.bgCount.CancellationPending)
        {
            e.Cancel = true;
            return;
        }

        result += i.ToString();

        // Multiply before dividing, otherwise integer division always yields 0.
        this.bgCount.ReportProgress((i * 100) / this.upCount);
    }

    e.Result = result;
}

The CancellationPending property can be checked to see if the operation has been cancelled. The operation can be cancelled by calling:

this.bgCount.CancelAsync();

The below line reports the percentage progress:

this.bgCount.ReportProgress((i * 100) / this.upCount);

Once this is called, the below method is invoked to update the UI:

void bgCount_ProgressChanged(object sender, ProgressChangedEventArgs e)
{
    if (this.bgCount.CancellationPending)
        this.txtContents.Text = "Cancelling....";
    else
        this.progressBar.Value = e.ProgressPercentage;
}

Finally, the bgCount_RunWorkerCompleted method is called to complete the operation:

void bgCount_RunWorkerCompleted(object sender, RunWorkerCompletedEventArgs e)
{
    this.btnAsync.Enabled = true;
    this.btnCancel.Enabled = false;

    // e.Result must not be read when the operation was cancelled
    // (it throws an InvalidOperationException), so check e.Cancelled first.
    if (e.Cancelled)
        this.txtContents.Text = "Operation Cancelled";
    else
        this.txtContents.Text = e.Result.ToString();
}

Web Applications

Threading in ASP.NET web applications can be achieved by sending an AJAX request from the client to the server. The client requests data from the server without blocking the UI; when the data is ready, the client is notified via a callback and only the part of the page concerned is updated, keeping the client agile and responsive.

The most common way to achieve this is by implementing ICallbackEventHandler. Refer to the project Demo.Threading.Web. It has the same interface as the Windows version: a text box to enter a number and a text box to show the data. The Load Data button performs the previously discussed “heavy” operation.

<div>
    <asp:Label runat="server" >Enter Number</asp:Label>
    <input type="text" id="inputText" /><br /><br />
    <asp:TextBox ID="txtContentText" runat="server" TextMode="MultiLine" /><br /><br />
    <input type="button" id="LoadData" title="LoadData" 
           onclick="LoadHeavyData()" value="LoadData" />
</div>

I have a JavaScript function LoadHeavyData() which is called on the click event of the button. This function calls the function CallServer with parameters.

<script type="text/ecmascript">
    function LoadHeavyData() {

        var lb = document.getElementById("inputText");
        CallServer(lb.value.toString(), "");
    }

    function ReceiveServerData(rValue) {
        document.getElementById("txtContentText").innerHTML = rValue;
    }
</script>

The CallServer function is defined and registered with the server in a script block emitted from the page's Page_Load event:

protected void Page_Load(object sender, EventArgs e)
{
    String cbReference = Page.ClientScript.GetCallbackEventReference(this, 
                         "arg", "ReceiveServerData", "context");
    
    String callbackScript;
    callbackScript = "function CallServer(arg, context)" + 
                     "{ " + cbReference + ";}";
    
    Page.ClientScript.RegisterClientScriptBlock(this.GetType(),
                      "CallServer", callbackScript, true);
}

The above script defines and registers the CallServer function. Calling CallServer invokes RaiseCallbackEvent of ICallbackEventHandler on the server. This method invokes the LoadData() method, which performs the heavy operation and stores the returned data in Result.

public void RaiseCallbackEvent(string eventArgument)
{
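    // "Result" is assumed to be a string field or property declared on the page class.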
    if (eventArgument!=null)
    {
        Result = this.LoadData(Convert.ToUInt16(eventArgument));
    }
}

private string LoadData(int num)
{
    // call Heavy data
    return Utility.LoadData(num);
}

Once LoadData() is executed, the GetCallbackResult() method of ICallbackEventHandler is executed, which returns the data:

public string GetCallbackResult()
{
    return Result;
}

Finally, the ReceiveServerData() function is called to update the UI. The ReceiveServerData function is registered as the callback for the CallServer() function in the page load event.

function ReceiveServerData(rValue) {
    document.getElementById("txtContentText").innerHTML = rValue;
}

Thread Safety

A talk on threads is never complete without talking about thread safety. Consider a resource being used by multiple threads concurrently. Without synchronization, the threads can interleave their reads and writes, the resource behaves in a non-deterministic way, and the results go haywire. That is why we implement “thread-safe” code, so that a shared resource is available to only one thread at any point in time.

The following are the ways of implementing thread safety in .NET:

  • Interlocked – The Interlocked class performs an operation atomically. Simple increment and decrement operations, for example, are really three steps inside the processor (read, modify, write). When multiple threads perform such an operation on the same variable, one thread can be preempted after the first two steps; another thread then executes all three steps, and when the first thread resumes it overwrites the variable, so the effect of the second thread's operation is lost. The Interlocked class treats these operations as atomic, making them thread safe. E.g.: Increment, Decrement, Add, Read, Exchange, CompareExchange.
  • System.Threading.Interlocked.Increment(ref counter);  // counter is an int field
  • Monitor – The Monitor class is used to lock an object which might be vulnerable to the perils of multiple threads accessing that object concurrently.
  • if (Monitor.TryEnter(this, 300)) {
        try {
            // code protected by the Monitor here.
        }
        finally {
            Monitor.Exit(this);
        }
    }
    else {
        // Code if the attempt times out.
    }

    The most popular example is the GetInstance() method of a Singleton class. The method can be called by various modules concurrently, so thread safety is implemented by locking the critical block of code on an object syncLock.

    static object syncLock = new object();
    
    if (_instance == null)
    {
        lock (syncLock)
        {
            if (_instance == null)
            {
                _instance = new LoadBalancer();
            }
        }
    }
  • Reader-Writer Lock - The lock can be acquired by an unlimited number of concurrent readers, or exclusively by a single writer. This can provide better performance than a Monitor if most accesses are reads while writes are infrequent and of short duration.
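
A minimal sketch of the reader-writer pattern using the ReaderWriterLock class; the SharedCache type and its members are illustrative and not part of the demo project:

using System.Threading;

class SharedCache
{
    private static readonly ReaderWriterLock rwLock = new ReaderWriterLock();
    private static string data = string.Empty;

    public static string Read()
    {
        // Any number of readers may hold the lock at the same time.
        rwLock.AcquireReaderLock(Timeout.Infinite);
        try
        {
            return data;
        }
        finally
        {
            rwLock.ReleaseReaderLock();
        }
    }

    public static void Write(string value)
    {
        // Only one writer at a time, and no readers while it holds the lock.
        rwLock.AcquireWriterLock(Timeout.Infinite);
        try
        {
            data = value;
        }
        finally
        {
            rwLock.ReleaseWriterLock();
        }
    }
}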

There are other ways of implementing thread safety as well but a detailed discussion of those techniques is beyond the scope of this article. Please refer to MSDN for further information. A discussion on how to create a thread safe application can never be complete without touching on the concept of deadlocks. Let’s look at what that is.

What is a Deadlock?

A deadlock is a situation in which two or more threads each hold a lock that the other needs, and each waits for the other to let go. The operation is then stuck indefinitely. Deadlocks can be avoided by careful programming, for example by always acquiring locks in the same order.

Example (see the sketch after this list):

  1. Thread A locks object A
  2. Thread B locks object B
  3. Thread A waits to lock object B (held by Thread B)
  4. Thread B waits to lock object A (held by Thread A)
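
A minimal sketch of that scenario; the names lockA, lockB, ThreadA, and ThreadB are illustrative:

using System;
using System.Threading;

class DeadlockDemo
{
    private static readonly object lockA = new object();
    private static readonly object lockB = new object();

    static void ThreadA()
    {
        lock (lockA)                    // step 1: Thread A locks object A
        {
            Thread.Sleep(100);          // give Thread B time to take lockB
            lock (lockB)                // step 3: waits forever for object B
            {
                Console.WriteLine("Thread A acquired both locks");
            }
        }
    }

    static void ThreadB()
    {
        lock (lockB)                    // step 2: Thread B locks object B
        {
            Thread.Sleep(100);
            lock (lockA)                // step 4: waits forever for object A
            {
                Console.WriteLine("Thread B acquired both locks");
            }
        }
    }

    static void Main()
    {
        new Thread(ThreadA).Start();
        new Thread(ThreadB).Start();
        // Both threads now block on each other's lock: a deadlock.
    }
}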

Summary

In this article, I demonstrated how to build fast, agile, responsive applications by harnessing .NET's multi-threading capabilities. We also took a brief look at the importance of making shared resources thread safe so that our applications do not return non-deterministic results.
