
WilliaBlog.Net

I dream in code

About the author

Robert Williams is an internet application developer for the Salem Web Network.

Disclaimer

The opinions expressed herein are my own personal opinions and do not represent my employer's view in any way.


Using System.Threading.Tasks and BlockingCollections to FTP multiple Files at the same time

I recently needed to write an application that would loop through a queue of files and FTP them to our Content Delivery Network for streaming. Users upload files, and our administrators can mark some of them as urgent. Urgent ones need to jump to the front of the queue; otherwise everything should be ordered by broadcast date. My initial code was basically a loop that looked something like this:

while (GetFtpQueue().Count > 0)
{
    // Gather all the info from the db, FTP the file, then clean up (move the source file, update the db, etc.)
}
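
(GetFtpQueue() isn't shown in the post; conceptually it returns the pending rows ordered urgent-first, then by broadcast date. A minimal sketch of that ordering, using assumed property names rather than the real schema, would be:)

// Hypothetical ordering of the FTP queue: urgent items first, then oldest broadcast date.
var pending = GetFtpQueue()
    .OrderByDescending(item => item.IsUrgent)
    .ThenBy(item => item.BroadcastDate)
    .ToList();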

It worked beautifully while we were just uploading small audio files, but as soon as we started adding a lot of video files to the queue it became so slow that it might take 2 hours or more to upload a single video file. So, we experimented with FileZilla to see how many concurrent uploads we could add before the overall speed of each upload started to drop. We found that at our location, 4 simultaneous FTP uploads seemed to hit the sweet spot: instead of uploading 1 file at 500 kb/s we could upload all four and each one would still be at that speed, quadrupling our throughput.

I read up on using the new Threading classes in .Net 4.0, and began refactoring my FTP application. I decided to use the Task Factory to manage the threads, in conjunction with a BlockingCollection to create a classic Producer/Consumer pattern. My first attempt looked a lot like this:

int maxThreads = 4;
var filesToFtp = new BlockingCollection<FtpItem>(maxThreads);
var processedFiles = new BlockingCollection<FtpItem>();

// Stage #1: Producer
var initFtp = Task.Factory.StartNew(() =>
{
    try
    {
        while (GetFtpQueue().Count > 0)
        {
            // Gather all the info from the db and use it to create FtpItem objects.
            // Add them to filesToFtp, which only allows maxThreads in at a time
            // (this lets an urgent item jump to the top while current items are still FTPing).
            filesToFtp.Add(new FtpItem { /* ... */ });
        }
    }
    finally { filesToFtp.CompleteAdding(); }
});

// Stage #2 Consumer of initFtpTask and Producer for Cleanup Task
var process = Task.Factory.StartNew(() =>
{
    try
    {
        foreach (var file in filesToFtp.GetConsumingEnumerable())
        {
            // Ftp the file
            // Add to list of processedFiles
            processedFiles.Add(file);
        }
    }
    finally { processedFiles.CompleteAdding(); }
});

// Stage #3
var cleanup = Task.Factory.StartNew(() =>
{
    foreach (var file in processedFiles.GetConsumingEnumerable())
    {
        // Clean up (move the source file, update the db, etc.)
    }
});

Task.WaitAll(initFtp, process, cleanup);
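
(A quick aside: the FtpItem class itself never appears in this post. For the purposes of these snippets you can picture it as a simple DTO along the following lines; the property names are my own guesses, not the real class.)

// Hypothetical shape of the FtpItem DTO used in the snippets above and below.
public class FtpItem
{
    public int QueueId { get; set; }             // primary key of the queue row
    public string SourcePath { get; set; }       // local path of the file to upload
    public string RemotePath { get; set; }       // destination path on the CDN
    public bool IsUrgent { get; set; }           // urgent items jump the queue
    public DateTime BroadcastDate { get; set; }  // used to order the rest of the queue
}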

Initially, this looked quite promising. I wrote a bare-bones version of it like the one above that just called Thread.Sleep to simulate work and iterated through a list of ints. I was able to verify that each "stage" was running on its own thread, that it never allowed more than 4 items through at a time, that I could add items to the front of the queue and get them processed next, and that it never tried to 'clean up' a file until that file had passed through both stage 1 and stage 2. However, I did notice that the elapsed time was the same as when I ran a similar unit test in a simple while loop. It might be obvious to you why this is, but at the time I put it down to a limitation of the unit test and pushed my new code to production. The first thing I noticed was that it wasn't any faster. Not even slightly. It took me hours of staring at the code to finally figure out why my multithreaded code was not running any faster, but the answer is simple: I only created one consumer of filesToFtp. I had incorrectly assumed that because I was creating up to 4 FtpItems at a time, and the FTP process was running on its own thread, it would consume as many as it could. In reality, while each of the three stages in the code above runs on its own thread, the whole process was still happening in series: stage 1 doesn't create 4 items at once, it creates them one after the other; stage 2 does begin working before stage 1 is complete (as soon as there is an item to consume), but it is then busy FTPing that first item until the upload finishes, and only then will it grab the next file.

To resolve this problem, I simply wrapped stage 2 in a for loop and created an IList of Tasks to wait on:

int maxThreads = 4;
var filesToFtp = new BlockingCollection<FtpItem>(maxThreads);
var processedFiles = new BlockingCollection<FtpItem>();
IList<Task> tasks = new List<Task>();

// Stage #1: Producer
tasks.Add(Task.Factory.StartNew(() =>
{
    try
    {
        while (GetFtpQueue().Count > 0)
        {
            // Gather all the info from the db and use it to create FtpItem objects.
            // Add them to filesToFtp, which only allows maxThreads in at a time
            // (this lets an urgent item jump to the top while current items are still FTPing).
            filesToFtp.Add(new FtpItem { /* ... */ });
        }
    }
    finally { filesToFtp.CompleteAdding(); }
}));

// Start multiple instances of the ftp process
for (int i = 0; i < maxThreads; i++)
{
    // Stage #2 Consumer of initFtpTask and Producer for Cleanup Task
    tasks.Add(Task.Factory.StartNew(() =>
    {
	try
	{
		foreach (var file in filesToFtp.GetConsumingEnumerable())
		{
			// Ftp the file
			// Add to list of processedFiles
			processedFiles.Add(file);
		}
	}
	finally { processedFiles.CompleteAdding(); }
	}));
}

// Stage #3
tasks.Add(Task.Factory.StartNew(() =>
{
	foreach (var file in processedFiles.GetConsumingEnumerable())
	{
		// Clean up (move the source file, update the db, etc.)
	}
}));

Task.WaitAll(tasks.ToArray());

I reran the unit test and it was faster! Very nearly 4 times faster, in fact. Wahoo! I updated the code, published my changes and sat back. Sure enough, the FTP process finally started to make up some ground. In the meantime, I went back to my unit test and began tweaking. The first thing I noticed was that sometimes I would get a "System.InvalidOperationException: The BlockingCollection<T> has been marked as complete with regards to additions." Luckily, this didn't take a lot of head scratching to figure out: the first thread to reach the 'finally' clause of stage 2 closed the processedFiles collection, leaving the other three threads hanging. A final refactoring resolved the issue:

int maxThreads = 4;
var filesToFtp = new BlockingCollection<FtpItem>(maxThreads);
var processedFiles = new BlockingCollection<FtpItem>();
IList<Task> tasks = new List<Task>();

// maintain a separate list of wait handles for the FTP Tasks, 
// since we need to know when they all complete in order to close the processedFiles blocking collection
IList<Task> ftpProcessTasks = new List<Task>();

// Stage #1: Producer
tasks.Add(Task.Factory.StartNew(() =>
{
	try
	{
		while (GetFtpQueue().Count > 0)
		{
			// Gather all the info from the db and use it to create FtpItem objects.
			// Add them to filesToFtp, which only allows maxThreads in at a time
			// (this lets an urgent item jump to the top while current items are still FTPing).
			filesToFtp.Add(new FtpItem { /* ... */ });
		}
	}
	finally { filesToFtp.CompleteAdding(); }
}));

// Start multiple instances of the ftp process
for (int i = 0; i < maxThreads; i++)
{
	// Stage #2 Consumer of initFtpTask and Producer for Cleanup Task
	ftpProcessTasks.Add(Task.Factory.StartNew(() =>
	{
		foreach (var file in filesToFtp.GetConsumingEnumerable())
		{
			// Ftp the file
			// Add to list of processedFiles
			processedFiles.Add(file);
		}
	}));
}

// Stage #3
tasks.Add(Task.Factory.StartNew(() =>
{
	foreach (var file in processedFiles.GetConsumingEnumerable())
	{
		// Clean up (move the source file, update the db, etc.)
	}
}));


// Wait until all the FTP tasks have completed
Task.WaitAll(ftpProcessTasks.ToArray());

// Notify the stage #3 cleanup task that there is no need to wait, there will be no more processedFiles.
processedFiles.CompleteAdding();

// Make sure all the other tasks are complete too.
Task.WaitAll(tasks.ToArray());
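
As an aside (this is my own sketch, not part of the original code), the same requirement, closing processedFiles only once every FTP task has finished, could also be expressed with a continuation instead of the blocking WaitAll in the middle:

// Sketch: let a continuation close the collection when the FTP tasks finish,
// so the main thread only blocks once at the end.
Task.Factory.ContinueWhenAll(
    ftpProcessTasks.ToArray(),
    finishedFtpTasks => processedFiles.CompleteAdding());

// A single WaitAll now covers everything (Concat needs a "using System.Linq;" directive).
Task.WaitAll(tasks.Concat(ftpProcessTasks).ToArray());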

Download a working example (Just enter your FTP Server details prior to running):

ProducerConsumer.zip (11.18 mb)


Posted by Williarob on Monday, April 18, 2011 11:47 AM

Working with Metafile Images in .Net

What is a Metafile Image?

The Windows Metafile (WMF) is a graphics file format on Microsoft Windows systems, originally designed in the 1990s.

Internally, a metafile is an array of variable-length structures called metafile records. The first records in the metafile specify general information such as the resolution of the device on which the picture was created, the dimensions of the picture, and so on. The remaining records, which constitute the bulk of any metafile, correspond to the graphics device interface (GDI) functions required to draw the picture. These records are stored in the metafile after a special metafile device context is created. This metafile device context is then used for all drawing operations required to create the picture. When the system processes a GDI function associated with a metafile DC, it converts the function into the appropriate data and stores this data in a record appended to the metafile.

After a picture is complete and the last record is stored in the metafile, you can pass the metafile to another application by:

  • Using the clipboard
  • Embedding it within another file
  • Storing it on disk
  • Playing it repeatedly

A metafile is played when its records are converted to device commands and processed by the appropriate device.

There are two types of metafiles: standard Windows Metafiles (WMF) and Enhanced Metafiles (EMF).

I had worked with Metafiles in Visual Basic 6 many years ago, when I worked for Taltech.com, a company that strives to produce the highest quality barcode images that Windows can create. As I remember it, this involved making lots of Windows API calls, and something called "Hi Metric Map Mode" (MM_HIMETRIC). "Basically, the mapping mode system enables you to equate an abstract, logical drawing surface with a concrete and constrained display surface.  This is good in principle but GDI had a major drawback inasmuch as the logical drawing area coordinates were based upon signed integers.  This meant that creating drawing systems based upon some real-world measurement system such as inches or millimeters required you to use a number of integer values to represent a single unit of measure; for example, in the case of MM_LOMETRIC mapping there are ten integer values to each linear millimeter and in the case of MM_LOENGLISH there are 100 integer values to each linear inch." - Bob Powell. Bob has written a great article, Comparing GDI mapping modes with GDI+ transforms, for anyone wanting to learn more about this.

Bob goes on to say that "Given the fact that matrix transformations have been recognized as the only sensible method to manipulate graphics for many years, GDI mapping modes were a very limited alternative and always a bit of a kludge", and he's probably right. To be honest, all that matrix stuff went way over my head. Luckily, today, the simplicity of matrix transformations is built into GDI+, and most of those API calls have been integrated into the System.Drawing namespaces of the .Net Framework. Having already found a way to draw a barcode as a bitmap using the .Net Framework, I wanted to see how easy it would be to create a barcode as a metafile, since bitmaps lose quality when scaled, and barcodes need to be as high quality as possible to ensure that the scanners read them correctly.

You might think that creating a metafile would be as easy as using the Save() Method of System.Drawing.Image and giving the file a .wmf or .emf extension, but sadly this is not the case. If you do that, what you actually get, is a Portable Network Graphics (PNG) file, with a .wmf or .emf extension. Even if you use the ImageFormat overload, and pass in the filename and ImageFormat.Emf or ImageFormat.Wmf, you still end up with a PNG. It doesn't matter whether you create a Bitmap and call Save() or you go to the trouble of creating an in memory Metafile (more on that later) and then call Save(), you will never get a true Metafile. If you visit the MSDN documentation on the Metafile Class, you can see under 'Remarks' it casually states:

When you use the Save method to save a graphic image as a Windows Metafile Format (WMF) or Enhanced Metafile Format (EMF) file, the resulting file is saved as a Portable Network Graphics (PNG) file instead. This behavior occurs because the GDI+ component of the .NET Framework does not have an encoder that you can use to save files as .wmf or .emf files.

This is confirmed in the documentation for the System.Drawing.Image.Save Method:

If no encoder exists for the file format of the image, the Portable Network Graphics (PNG) encoder is used. When you use the Save() method to save a graphic image as a Windows Metafile Format (WMF) or Enhanced Metafile Format (EMF) file, the resulting file is saved as a Portable Network Graphics (PNG) file. This behavior occurs because the GDI+ component of the .NET Framework does not have an encoder that you can use to save files as .wmf or .emf files.

Saving the image to the same file it was constructed from is not allowed and throws an exception.
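
It is easy to convince yourself of this: the first eight bytes of every PNG file are a fixed signature, so a file saved with a .emf extension can be inspected to show that it is really a PNG. A small sketch (the path is only an example, and SequenceEqual needs a using System.Linq; directive):

// Sketch: check whether a file starts with the 8-byte PNG signature.
byte[] pngSignature = { 0x89, 0x50, 0x4E, 0x47, 0x0D, 0x0A, 0x1A, 0x0A };
byte[] header = new byte[8];

using (FileStream fs = File.OpenRead(@"d:\temp\test.emf"))
{
    fs.Read(header, 0, header.Length);
}

Console.WriteLine(header.SequenceEqual(pngSignature)
    ? "This 'emf' file is actually a PNG."
    : "This file does not have a PNG header.");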

In order to save your in memory metafile as a true metafile, you must make some old fashioned API calls, and I will show you how to do this in due course, but first you need to know how to create an in memory Metafile. Let's assume that, like me, you already have some code that generates a bitmap image which looks just the way you want it. Here is some sample code, distilled from a nice BarCode Library project written by Brad Barnhill:

        static void Main(string[] args)

        {

            int width = 300;

            int height = 100;

 

            Bitmap b = new Bitmap(width, height);

            int pos = 0;

            string encodedValue =

                "1001011011010101001101101011011001010101101001011010101001101101010100110110101010011011010110110010101011010011010101011001101010101100101011011010010101101011001101010100101101101";

            int barWidth = width / encodedValue.Length;

            int shiftAdjustment = (width % encodedValue.Length) / 2;

            int barWidthModifier = 1;

 

            using (Graphics g = Graphics.FromImage(b))

            {

                // clears the image and colors the entire background

                g.Clear(Color.White);

 

                // lines are barWidth wide so draw the appropriate color line vertically

                using (Pen pen = new Pen(Color.Black, (float)barWidth / barWidthModifier))

                {

                    while (pos < encodedValue.Length)

                    {

                        if (encodedValue[pos] == '1')

                        {

                            g.DrawLine(

                                pen,

                                new Point(pos * barWidth + shiftAdjustment + 1, 0),

                                new Point(pos * barWidth + shiftAdjustment + 1, height));

                        }

 

                        pos++;

                    } // while

                } // using

            } // using

 

            b.Save(@"d:\temp\test.png", ImageFormat.Png);

        }

As you can see, this code creates a new Bitmap image, creates a Graphics object from it, draws on it using the Pen class then saves it as a .png. The resulting image looks like this:

So far so good. As we have already established, simply rewriting the last line as

b.Save(@"d:\temp\test.emf", ImageFormat.Emf);

is not enough to convert this image to a metafile. Sadly, substituting the word "Metafile" for "Bitmap" is not all it takes to create an in memory metafile. Instead, you will need to have a device context handle and a stream handy. If you are working on a Windows Forms application you can create a Graphics object easily by simply typing Graphics g = this.CreateGraphics(); but if you are writing a class library or a console application you have to be a bit more creative and use an internal method (FromHwndInternal) to create the Graphics object out of nothing:

            Graphics offScreenBufferGraphics;

            Metafile m;

            using (MemoryStream stream = new MemoryStream())

            {

                using (offScreenBufferGraphics = Graphics.FromHwndInternal(IntPtr.Zero))

                {

                    IntPtr deviceContextHandle = offScreenBufferGraphics.GetHdc();

                    m = new Metafile(

                        stream,

                        deviceContextHandle,

                        EmfType.EmfPlusOnly);

                    offScreenBufferGraphics.ReleaseHdc();

                }

            }

OK, so now your code looks like this:

        static void Main(string[] args)

        {

            int width = 300;

            int height = 100;

 

            Graphics offScreenBufferGraphics;

            Metafile m;

            using (MemoryStream stream = new MemoryStream())

            {

                using (offScreenBufferGraphics = Graphics.FromHwndInternal(IntPtr.Zero))

                {

                    IntPtr deviceContextHandle = offScreenBufferGraphics.GetHdc();

                    m = new Metafile(

                        stream,

                        deviceContextHandle,

                        EmfType.EmfPlusOnly);

                    offScreenBufferGraphics.ReleaseHdc();

                }

            }

 

            int pos = 0;

            string encodedValue =

                "1001011011010101001101101011011001010101101001011010101001101101010100110110101010011011010110110010101011010011010101011001101010101100101011011010010101101011001101010100101101101";

            int barWidth = width / encodedValue.Length;

            int shiftAdjustment = (width % encodedValue.Length) / 2;

            int barWidthModifier = 1;

 

            using (Graphics g = Graphics.FromImage(m))

            {

                // clears the image and colors the entire background

                g.Clear(Color.White);

 

                // lines are barWidth wide so draw the appropriate color line vertically

                using (Pen pen = new Pen(Color.Black, (float)barWidth / barWidthModifier))

                {

                    while (pos < encodedValue.Length)

                    {

                        if (encodedValue[pos] == '1')

                        {

                            g.DrawLine(

                                pen,

                                new Point(pos * barWidth + shiftAdjustment + 1, 0),

                                new Point(pos * barWidth + shiftAdjustment + 1, height));

                        }

 

                        pos++;

                    } // while

                } // using

            } // using

 

            m.Save(@"d:\temp\test2.png", ImageFormat.Png);

         }

But wait, what happened to my barcode? It's all off center, yet the code used to draw it hasn't changed:

Luckily this is easy to fix. We need to use a different overload when creating the metafile, so that we can specify a width and height, and a unit of measure:

            Graphics offScreenBufferGraphics;

            Metafile m;

            using (MemoryStream stream = new MemoryStream())

            {

                using (offScreenBufferGraphics = Graphics.FromHwndInternal(IntPtr.Zero))

                {

                    IntPtr deviceContextHandle = offScreenBufferGraphics.GetHdc();

                    m = new Metafile(

                        stream,

                        deviceContextHandle,

                        new RectangleF(0, 0, width, height),

                        MetafileFrameUnit.Pixel,

                        EmfType.EmfPlusOnly);

                    offScreenBufferGraphics.ReleaseHdc();

                }

            }

 

Now it looks the same when saved as a .png, but it may still look all wrong (and more importantly be completely unreadable by a barcode scanner) if printed and the resolution of the printer does not match that of your desktop when you created the metafile. Furthermore, if I save this as a real EMF file and email it to you, when you view it you may see a different rendering, because the desktop I created it on has a resolution of 1920x1080, but if your desktop has a higher or lower resolution it will affect how it is displayed. Remember a metafile is a stored set of instructions on how to render the image and by default it will use the stored resolution for reference. To correct this, we have to add some additional code to the Graphics object to ensure this doesn't happen (thanks go to Nicholas Piasecki and his blog entry for pointing this out):

 

                MetafileHeader metafileHeader = m.GetMetafileHeader();

                g.ScaleTransform(metafileHeader.DpiX / g.DpiX, metafileHeader.DpiY / g.DpiY);

                g.PageUnit = GraphicsUnit.Pixel;

                g.SetClip(new RectangleF(0, 0, width, height));

So how can we save it as a real Metafile anyway?

Well, first we need to declare some old fashioned Win API calls:

        [DllImport("gdi32.dll")]

        static extern IntPtr CopyEnhMetaFile(  // Copy EMF to file

            IntPtr hemfSrc,   // Handle to EMF

            String lpszFile // File

        );

 

        [DllImport("gdi32.dll")]

        static extern int DeleteEnhMetaFile(  // Delete EMF

            IntPtr hemf // Handle to EMF

        );

Then we can replace the m.Save(...); line with this:

            // Get a handle to the metafile

            IntPtr iptrMetafileHandle = m.GetHenhmetafile();

 

            // Export metafile to an image file

            CopyEnhMetaFile(iptrMetafileHandle, @"d:\temp\test2.emf");

 

            // Delete the metafile from memory

            DeleteEnhMetaFile(iptrMetafileHandle);

and finally we have a true metafile to share. Why Microsoft failed to encapsulate this functionality within the framework as an image encoder is a mystery. Windows Metafiles and Enhanced Metafiles are, after all, their own creation. So our final version of the code looks like this:

        static void Main(string[] args)

        {

            int width = 300;

            int height = 100;

 

            Graphics offScreenBufferGraphics;

            Metafile m;

            using (MemoryStream stream = new MemoryStream())

            {

                using (offScreenBufferGraphics = Graphics.FromHwndInternal(IntPtr.Zero))

                {

                    IntPtr deviceContextHandle = offScreenBufferGraphics.GetHdc();

                    m = new Metafile(

                        stream,

                        deviceContextHandle,

                        new RectangleF(0, 0, width, height),

                        MetafileFrameUnit.Pixel,

                        EmfType.EmfPlusOnly);

                    offScreenBufferGraphics.ReleaseHdc();

                }

            }

 

            int pos = 0;

            string encodedValue =

                "1001011011010101001101101011011001010101101001011010101001101101010100110110101010011011010110110010101011010011010101011001101010101100101011011010010101101011001101010100101101101";

            int barWidth = width / encodedValue.Length;

            int shiftAdjustment = (width % encodedValue.Length) / 2;

            int barWidthModifier = 1;

 

            using (Graphics g = Graphics.FromImage(m))

            {

                // Set everything to high quality

                g.SmoothingMode = SmoothingMode.HighQuality;

                g.InterpolationMode = InterpolationMode.HighQualityBicubic;

                g.PixelOffsetMode = PixelOffsetMode.HighQuality;

                g.CompositingQuality = CompositingQuality.HighQuality;

 

                MetafileHeader metafileHeader = m.GetMetafileHeader();

                g.ScaleTransform(

                    metafileHeader.DpiX / g.DpiX,

                    metafileHeader.DpiY / g.DpiY);

 

                g.PageUnit = GraphicsUnit.Pixel;

                g.SetClip(new RectangleF(0, 0, width, height));

 

                // clears the image and colors the entire background

                g.Clear(Color.White);

 

                // lines are barWidth wide so draw the appropriate color line vertically

                using (Pen pen = new Pen(Color.Black, (float)barWidth / barWidthModifier))

                {

                    while (pos < encodedValue.Length)

                    {

                        if (encodedValue[pos] == '1')

                        {

                            g.DrawLine(

                                pen,

                                new Point(pos * barWidth + shiftAdjustment + 1, 0),

                                new Point(pos * barWidth + shiftAdjustment + 1, height));

                        }

 

                        pos++;

                    } // while

                } // using

            } // using

 

            // Get a handle to the metafile

            IntPtr iptrMetafileHandle = m.GetHenhmetafile();

 

            // Export metafile to an image file

            CopyEnhMetaFile(iptrMetafileHandle, @"d:\temp\test2.emf");

 

            // Delete the metafile from memory

            DeleteEnhMetaFile(iptrMetafileHandle);

        }
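
If you want to double-check that the exported file really is an enhanced metafile this time, the ENHMETAHEADER structure stores the ASCII signature " EMF" at byte offset 40, so a check along the same lines as the PNG test earlier works (again, just a sketch with an example path; Skip, Take and SequenceEqual need System.Linq):

// Sketch: bytes 40-43 of an enhanced metafile are the signature " EMF" (0x20 0x45 0x4D 0x46).
byte[] emfSignature = { 0x20, 0x45, 0x4D, 0x46 };
byte[] header = new byte[44];

using (FileStream fs = File.OpenRead(@"d:\temp\test2.emf"))
{
    fs.Read(header, 0, header.Length);
}

bool isEmf = header.Skip(40).Take(4).SequenceEqual(emfSignature);
Console.WriteLine(isEmf ? "Confirmed: enhanced metafile." : "Not an EMF.");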

There is one more Metafile gotcha I'd like to share. As part of my original Bitmap generating code, I had a boolean option to generate a label, that is, the human-readable text that appears beneath the barcode. If this option was selected, before returning the bitmap object I would pass it to another method that looked something like this:

        static Image DrawLabel(Image img, int width, int height)

        {

            Font font = new Font("Microsoft Sans Serif", 10, FontStyle.Bold);

 

            using (Graphics g = Graphics.FromImage(img))

            {

                g.DrawImage(img, 0, 0);

                g.SmoothingMode = SmoothingMode.HighQuality;

                g.InterpolationMode = InterpolationMode.HighQualityBicubic;

                g.PixelOffsetMode = PixelOffsetMode.HighQuality;

                g.CompositingQuality = CompositingQuality.HighQuality;

 

                StringFormat f = new StringFormat();

                f.Alignment = StringAlignment.Center;

                f.LineAlignment = StringAlignment.Near;

                int LabelX = width / 2;

                int LabelY = height - font.Height;

 

                //color a background color box at the bottom of the barcode to hold the string of data

                g.FillRectangle(new SolidBrush(Color.White), new RectangleF((float)0, (float)LabelY, (float)width, (float)font.Height));

 

                //draw datastring under the barcode image

                g.DrawString("038000356216", font, new SolidBrush(Color.Black), new RectangleF((float)0, (float)LabelY, (float)width, (float)font.Height), f);

 

                g.Save();

            }

 

            return img;

        }

When passing the bitmap, this works great, but when passing the metafile, the line using (Graphics g = Graphics.FromImage(img)) would throw a System.OutOfMemoryException every time. As a workaround, I copied the label generating code into the main method that creates the barcode. Another option might be to create a new metafile (not by calling m.Clone() - I tried that and still got the out of memory exception), send that to the DrawLabel() method, then when it comes back, create a third Metafile, and call g.DrawImage() twice (once for each metafile that isn't still blank) and return this new composited image. I think that will work, but I also think it would use a lot more resources and be grossly inefficient, so I think copying the label code into both the DrawBitmap() and DrawMetafile() methods is a better solution.


Categories: C# | CodeProject | Windows
Posted by Williarob on Monday, April 04, 2011 6:51 AM

Checking for a Running Instance

Sometimes you may not want a user to launch multiple instances of your program, or you may have a processor or I/O intensive scheduled task that runs every few minutes and you want to make sure that if it is started again before the last instance has finished it simply exits immediately. One way to check to see if your program is already running is to look at the list of running processes:

 

namespace RunningInstance

{

    using System;

    using System.Diagnostics;

    using System.Reflection;

 

    class Program

    {

        static void Main(string[] args)

        {

            if (RunningInstance())

            {

                Console.WriteLine("Another instance of this process was already running, exiting...");

                return;

            }

 

            // ...

        }

 

        static bool RunningInstance()

        {

            Process current = Process.GetCurrentProcess();

            Process[] processes = Process.GetProcessesByName(current.ProcessName);

 

            // Loop through the running processes with the same name

            foreach (Process p in processes)

            {

                // Ignore the current process

                if (p.Id != current.Id)

                {

                    // Make sure that the other process is running from the same exe file.

                    if (Assembly.GetExecutingAssembly().Location.Replace("/", @"\") == p.MainModule.FileName)

                    {

                        return true;

                    }

                }

            }

 

            return false;

        }

    }

}

 

However, suppose you have a multi-function console application that accepts a dozen different command line arguments to perform different jobs, all of which run as separate scheduled tasks at overlapping intervals. If this is the case, then checking the list of running processes may well find your executable already running, but it will have no idea which command line argument was used to start it. I tried adding:

 

Console.WriteLine("Instance started with args: '{0}'", p.StartInfo.Arguments);

above the "return true" statement in RunningInstance(), but it will not print the command line args used to start it. Let's suppose we add 2 classes to our project: Task1 and Task2. For the sake of simplicity, they both look something like this:

namespace RunningInstance

{

    using System;

    using System.Threading;

 

    public class Task1

    { 

        public void Start()

        {

            Console.WriteLine("Starting Task 1");

        }

    }

}

 

Task 2 is exactly the same, except it prints "Starting Task 2". If we keep our RunningInstance check in place, Main() now looks like this:

 

        static void Main(string[] args)

        {

            if (RunningInstance())

            {

                Console.WriteLine("An instance of this application is already running. Exiting.");

                Console.ReadLine();

                return;

            }

 

            if(args.Length < 1)

            {

                Console.WriteLine("Unrecognized Command.");

                return;

            }

 

            switch (args[0])

            {

                case "-task1":

                    var t1 = new Task1();

                    t1.Start();

                    break;

                case "-task2":

                    var t2 = new Task2();

                    t2.Start();

                    break;

                default:

                    Console.WriteLine("Unrecognized Command.");

                    break;

            }

        }

Task 2 will not run if task 1 is still running, and vice versa. However, suppose the two tasks are completely unrelated. The first is only a minor chore that simply counts the number of items marked as "queued" in a database table and sends out an email if the number is too high, while task 2 is a much lengthier process that FTPs files. We may need both of these tasks to run as scheduled, but not want multiple instances of either task to run at the same time. How can we achieve this? We use a Mutex. Mutex is an abbreviation of "mutual exclusion" and is traditionally used in multithreaded applications to avoid the simultaneous use of a common resource, such as a global variable. After adding a mutex to each task, our final code looks like this:


        static void Main(string[] args)

        { 

            if(args.Length < 1)

            {

                Console.WriteLine("Unrecognized Command.");

                return;

            }

 

            switch (args[0])

            {

                case "-task1":

                    var t1 = new Task1();

                    t1.Start();

                    break;

                case "-task2":

                    var t2 = new Task2();

                    t2.Start();

                    break;

                default:

                    Console.WriteLine("Unrecognized Command.");

                    break;

            }

        }

 

 

namespace RunningInstance

{

    using System;

    using System.Threading;

 

    public class Task1

    {

        /// <summary>Gets the mutex that prevents multiple instances of this code running at once.</summary>

        private static Mutex mutex1;

 

        public void Start()

        {

            bool createdNew;

            mutex1 = new Mutex(true, "RunningInstance.Task1", out createdNew);

            if (!createdNew)

            {

                // Instance already running; exit.

                Console.WriteLine("Exiting: Instance already running");

                return;

            }

 

            Console.WriteLine("Starting Task 1");

        }

    }

}

 

 

namespace RunningInstance

{

    using System;

    using System.Threading;

 

    public class Task2

    {

        /// <summary>Gets the mutex that prevents multiple instances of this code running at once.</summary>

        private static Mutex mutex1;

 

        public void Start()

        {

            bool createdNew;

            mutex1 = new Mutex(true, "RunningInstance.Task2", out createdNew);

            if (!createdNew)

            {

                // Instance already running; exit.

                Console.WriteLine("Exiting: Instance already running");

                return;

            }

            Console.WriteLine("Starting Task 2");

        }

    }

}

 

Each Task has its own Mutex, uniquely named. If the mutex already exists, then that code must already be running, so exit; otherwise, run. Couldn't be simpler. By using this particular overload of the constructor, we don't even need to worry about releasing the mutex at the end of the program.
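
If you end up with more than a couple of tasks, the check can be factored into a small helper so that each task only needs a one-line guard. This is just a sketch of that idea (the class and method names are mine, not from the original code); the mutexes are kept in a static list so they stay alive for the life of the process:

namespace RunningInstance
{
    using System.Collections.Generic;
    using System.Threading;

    public static class SingleInstance
    {
        // Hold references so the mutexes are not garbage collected before the process exits.
        private static readonly List<Mutex> heldMutexes = new List<Mutex>();

        /// <summary>Returns true if no other holder of the named mutex exists.</summary>
        public static bool TryAcquire(string name)
        {
            bool createdNew;
            Mutex mutex = new Mutex(true, name, out createdNew);

            if (createdNew)
            {
                heldMutexes.Add(mutex);
            }

            return createdNew;
        }
    }
}

Usage inside Task1.Start() would then be a single line: if (!SingleInstance.TryAcquire("RunningInstance.Task1")) { return; }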

Categories: C# | CodeProject
Posted by Williarob on Monday, October 04, 2010 5:57 AM

Override Configuration Manager

Recently I have been working on ways to solve configuration issues in large, multi-environment solutions. In the beginning, I simply wanted to store shared app settings and connection strings in a class library so I didn't have to keep copying common configuration settings from project to project within the same solution. Taking that a step further, I thought it would be great to auto detect the runtime environment and use the right app settings and connection strings from that shared configuration file. This all works great, but it has two major drawbacks: firstly, third party tools such as Elmah, and built-in tools such as the Membership, Profile and Role Providers, look no further than the built-in ConfigurationManager object for appSettings and connection strings, which forces us to subclass (Dynamically setting the Elmah connection string at runtime) or override their initialization (Setting Membership-Profile-Role provider's connection string at runtime) in order for them to work with our new settings. Not all third party tools will be as easy to fix. Secondly, all the developers working on the project must be trained to use the new techniques and always remember to use Core.Configuration.AppSettings["key"] instead of ConfigurationManager, because ConfigurationManager.AppSettings["key"] may be null or hold the wrong value.

With that in mind, the next logical step was to find a way to override the built in ConfigurationManager ensuring that the Core.Configuration settings are fully integrated. In short: any call to ConfigurationManager.AppSettings or ConfigurationManager.ConnectionStrings should return the correct setting, whether that setting comes from the local web/app.config or the Core.Config. In order to do this it is assumed that if a setting appears both in the local app/web.config and the Core.Config files, then the value in the Core.Config file will be the value returned.

Download the latest version of the Williablog.Core project:

Williablog.Core.zip (110.11 kb)

Add a reference to it from your project (either to the project or to the dll in the bin folder), and the first line in void Main() of your console application or, for a web application, in Application_Start() in Global.asax should be:

Williablog.Core.Configuration.ConfigSystem.Install();

This will reinitialize the ConfigurationManager, forcing it to rebuild its static cache of values, but this time we are in control, and as a result we are able to effectively override the ConfigurationManager. Here is the code:

namespace Williablog.Core.Configuration

{

    using System;

    using System.Collections.Specialized;

    using System.Configuration;

    using System.Configuration.Internal;

    using System.Reflection;

 

    using Extensions;

 

    public sealed class ConfigSystem : IInternalConfigSystem

    {

        private static IInternalConfigSystem clientConfigSystem;

 

        private object appsettings;

 

        private object connectionStrings;

 

        /// <summary>

        /// Re-initializes the ConfigurationManager, allowing us to merge in the settings from Core.Config

        /// </summary>

        public static void Install()

        {

            FieldInfo[] fiStateValues = null;

            Type tInitState = typeof(System.Configuration.ConfigurationManager).GetNestedType("InitState", BindingFlags.NonPublic);

 

            if (null != tInitState)

            {

                fiStateValues = tInitState.GetFields();

            }

 

            FieldInfo fiInit = typeof(System.Configuration.ConfigurationManager).GetField("s_initState", BindingFlags.NonPublic | BindingFlags.Static);

            FieldInfo fiSystem = typeof(System.Configuration.ConfigurationManager).GetField("s_configSystem", BindingFlags.NonPublic | BindingFlags.Static);

 

            if (fiInit != null && fiSystem != null && null != fiStateValues)

            {

                fiInit.SetValue(null, fiStateValues[1].GetValue(null));

                fiSystem.SetValue(null, null);

            }

 

            ConfigSystem confSys = new ConfigSystem();

            Type configFactoryType = Type.GetType("System.Configuration.Internal.InternalConfigSettingsFactory, System.Configuration, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a", true);

            IInternalConfigSettingsFactory configSettingsFactory = (IInternalConfigSettingsFactory)Activator.CreateInstance(configFactoryType, true);

            configSettingsFactory.SetConfigurationSystem(confSys, false);

 

            Type clientConfigSystemType = Type.GetType("System.Configuration.ClientConfigurationSystem, System.Configuration, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a", true);

            clientConfigSystem = (IInternalConfigSystem)Activator.CreateInstance(clientConfigSystemType, true);

        }

 

        #region IInternalConfigSystem Members

 

        public object GetSection(string configKey)

        {

            // get the section from the default location (web.config or app.config)

            object section = clientConfigSystem.GetSection(configKey);

 

            switch (configKey)

            {

                case "appSettings":

                    if (this.appsettings != null)

                    {

                        return this.appsettings;

                    }

 

                    if (section is NameValueCollection)

                    {

                        // create a new collection because the underlying collection is read-only

                        var cfg = new NameValueCollection((NameValueCollection)section);

 

                        // merge the settings from core with the local appsettings

                        this.appsettings = cfg.Merge(Core.Configuration.ConfigurationManager.AppSettings);

                        section = this.appsettings;

                    }

 

                    break;

                case "connectionStrings":

                    if (this.connectionStrings != null)

                    {

                        return this.connectionStrings;

                    }

 

                    // create a new collection because the underlying collection is read-only

                    var cssc = new ConnectionStringSettingsCollection();

 

                    // copy the existing connection strings into the new collection

                    foreach (ConnectionStringSettings connectionStringSetting in ((ConnectionStringsSection)section).ConnectionStrings)

                    {

                        cssc.Add(connectionStringSetting);

                    }

 

                    // merge the settings from core with the local connectionStrings

                    cssc = cssc.Merge(ConfigurationManager.ConnectionStrings);

 

                    // Cannot simply return our ConnectionStringSettingsCollection as the calling routine expects a ConnectionStringsSection result

                    ConnectionStringsSection connectionStringsSection = new ConnectionStringsSection();

 

                    // Add our merged connection strings to the new ConnectionStringsSection

                    foreach (ConnectionStringSettings connectionStringSetting in cssc)

                    {

                        connectionStringsSection.ConnectionStrings.Add(connectionStringSetting);

                    }

 

                    this.connectionStrings = connectionStringsSection;

                    section = this.connectionStrings;

                    break;

            }

 

            return section;

        }

 

        public void RefreshConfig(string sectionName)

        {

            if (sectionName == "appSettings")

            {

                this.appsettings = null;

            }

 

            if (sectionName == "connectionStrings")

            {

                this.connectionStrings = null;

            }

 

            clientConfigSystem.RefreshConfig(sectionName);

        }

 

        public bool SupportsUserConfig

        {

            get { return clientConfigSystem.SupportsUserConfig; }

        }

 

        #endregion

    }

}

The code to actually merge our collections is implemented as Extension methods:

namespace Williablog.Core.Extensions

{

    using System;

    using System.Collections.Generic;

    using System.Collections.Specialized;

    using System.Configuration;

    using System.Linq;

    using System.Linq.Expressions;

    using System.Text;

 

    public static class IEnumerableExtensions

    {

        /// <summary>

        /// Merges two NameValueCollections.

        /// </summary>

        /// <param name="first"></param>

        /// <param name="second"></param>

        /// <remarks>Used by <see cref="Williablog.Core.Configuration.ConfigSystem">ConfigSystem</see> to merge AppSettings</remarks>

        public static NameValueCollection Merge(this NameValueCollection first, NameValueCollection second)

        {

            if (second == null)

            {

                return first;

            }

 

            foreach (string item in second)

            {

                if (first.AllKeys.Contains(item))

                {

                    // if first already contains this item, update it to the value of second

                    first[item] = second[item];

                }

                else

                {

                    // otherwise add it

                    first.Add(item, second[item]);

                }

            }

 

            return first;

        }

 

        /// <summary>

        /// Merges two ConnectionStringSettingsCollections.

        /// </summary>

        /// <param name="first"></param>

        /// <param name="second"></param>

        /// <remarks>Used by <see cref="Williablog.Core.Configuration.ConfigSystem">ConfigSystem</see> to merge ConnectionStrings</remarks>

        public static ConnectionStringSettingsCollection Merge(this ConnectionStringSettingsCollection first, ConnectionStringSettingsCollection second)

        {

            if (second == null)

            {

                return first;

            }

 

            foreach (ConnectionStringSettings item in second)

            {

                ConnectionStringSettings itemInSecond = item;

                ConnectionStringSettings existingItem = first.Cast<ConnectionStringSettings>().FirstOrDefault(x => x.Name == itemInSecond.Name);

 

                if (existingItem != null)

                {

                    first.Remove(item);

                }

 

                first.Add(item);

            }

 

            return first;

        }

    }

}
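
As a quick sanity check of the precedence rule (values from the second collection win on key collisions), here is a minimal usage sketch of the NameValueCollection overload; it assumes using directives for System.Collections.Specialized and Williablog.Core.Extensions, and the values mirror the test below:

// Sketch: the second collection's values overwrite the first's on key collisions.
var local = new NameValueCollection
{
    { "SmtpServer", "smtp.yourmailserver.com" },
    { "LocalOnly", "This is from the local app.config" }
};

var core = new NameValueCollection
{
    { "SmtpServer", "smtp.yourlocalmailserver.com" }
};

NameValueCollection merged = local.Merge(core);

Console.WriteLine(merged["SmtpServer"]); // smtp.yourlocalmailserver.com (from core)
Console.WriteLine(merged["LocalOnly"]);  // This is from the local app.config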

If we create a console application to test with, complete with its own app.config file that looks like this:

<?xml version="1.0" encoding="utf-8" ?>

<configuration>

  <appSettings>

    <add key="WebServiceUrl" value="http://webservices.yourserver.com/YourService.asmx"/>

    <add key="SmtpServer" value="smtp.yourmailserver.com"/>

    <add key="LocalOnly" value="This is from the local app.config"/>

  </appSettings>

  <connectionStrings>

    <add name="AppData" connectionString="data source=Audi01;initial catalog=MyDB;User ID=User;Password=Password;" providerName="System.Data.SqlClient"/>

    <add name="ElmahDB" connectionString="Database=ELMAH;Server=Audi02;User=User;Pwd=Password;" providerName="System.Data.SqlClient"/>

  </connectionStrings>

</configuration>

And run it with the following code:

        static void Main(string[] args)

        {

            ConfigSystem.Install();

 

            Console.WriteLine(System.Configuration.ConfigurationManager.AppSettings["SmtpServer"]);

            Console.WriteLine(System.Configuration.ConfigurationManager.AppSettings["LocalOnly"]);

            Console.WriteLine(System.Configuration.ConfigurationManager.ConnectionStrings["AppData"]);

        }

 

The output is:

smtp.yourlocalmailserver.com

This is from the local app.config
data source=Ford01;initial catalog=MyDB;User ID=User;Password=Password;

With the exception of the middle one (LocalOnly), all of these settings come from Williablog.Core.Config, not the local app.config, proving that the config files were successfully merged.

The ConfigSystem class could be modified to retrieve the additional appSettings from the registry, from a database, or from any other source you care to use.
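
For example, here is a hedged sketch of one such modification, pulling overrides from environment variables that use an agreed-upon prefix; the prefix and method name are my own invention, not part of the downloadable project:

// Hypothetical extra settings source: environment variables prefixed with "MYAPP_".
private static NameValueCollection GetEnvironmentOverrides()
{
    var overrides = new NameValueCollection();

    foreach (System.Collections.DictionaryEntry entry in Environment.GetEnvironmentVariables())
    {
        string key = (string)entry.Key;

        if (key.StartsWith("MYAPP_", StringComparison.OrdinalIgnoreCase))
        {
            overrides.Add(key.Substring("MYAPP_".Length), (string)entry.Value);
        }
    }

    return overrides;
}

// Inside GetSection, after merging the Core settings, you could chain another Merge:
// this.appsettings = cfg.Merge(Core.Configuration.ConfigurationManager.AppSettings)
//                       .Merge(GetEnvironmentOverrides());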

I'd like to thank the contributors/authors of the following articles, which I found very helpful:

http://stackoverflow.com/questions/158783/is-there-a-way-to-override-configurationmanager-appsettings

http://andypook.blogspot.com/2007/07/overriding-configurationmanager.html


Posted by Williarob on Monday, March 29, 2010 7:13 AM

Dynamically setting the Elmah connection string at runtime

If you have read my other articles about setting the SQL Membership provider's connection string at runtime, or automatically detecting the server name and using the appropriate connection strings, then it will come as no surprise to see that I also had to find a way to set the Elmah connection string property dynamically too. If you are reading this, I'll assume that you already know what Elmah is and how to configure it. The problem then is simply that the connection string is supplied in the <elmah><errorLog> section of the web.config using a connection string name, and that while the name may be the same in production as it is in development, chances are high that the connection string itself is different. The connection string property is read-only, so you can't change it at runtime. One solution is to create an elmah.config file and use FinalBuilder or a web deployment project to change the path to that file when publishing, but if you like the AdvancedSettingsManager class I created and want to use that to set it, you'll need to use a custom ErrorLog. Fortunately, Elmah is open source, so I simply downloaded the source, took a look at their SqlErrorLog class and then copied and pasted most of the code from that class into my own project, modifying it only slightly to suit my own needs.

In the end, the only changes I really needed to make were to pull the connection string by name from my AdvancedSettingsManager class and to copy a couple of helper functions locally into this class, since they were marked as internal and therefore unavailable outside of the Elmah solution. I also removed the conditional compilation flags that only applied to .Net 1.x, since this was a .Net 3.5 project.
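
For completeness, the custom class is wired up in the <elmah> section of web.config through the errorLog element's type attribute, something like this (the assembly name shown here is an assumption):

<elmah>
  <errorLog type="Williablog.Core.Providers.SqlErrorLog, Williablog.Core"
            connectionStringName="ErrorDB" />
</elmah>

The modified SqlErrorLog class itself looks like this: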

namespace Williablog.Core.Providers

{

    #region Imports

 

    using System;

    using System.Configuration;

    using System.Data;

    using System.Data.SqlClient;

    using System.Diagnostics;

    using System.Threading;

    using System.Xml;

 

    using Elmah;

 

    using ApplicationException = System.ApplicationException;

    using IDictionary = System.Collections.IDictionary;

    using IList = System.Collections.IList;

 

    #endregion

 

    public class SqlErrorLog : ErrorLog

    {

        private readonly string _connectionString;

 

        private const int _maxAppNameLength = 60;

 

        private delegate RV Function<RV, A>(A a);

 

        ///<summary>

        /// Initializes a new instance of the <see cref="SqlErrorLog"/> class

        /// using a dictionary of configured settings.

        ///</summary>

 

        public SqlErrorLog(IDictionary config)

        {

            if (config == null)

                throw new ArgumentNullException("config");

 

// Start Williablog changes

 

            string connectionStringName = (string)config["connectionStringName"] ?? string.Empty;

 

            string connectionString = string.Empty;

 

            if (connectionStringName.Length > 0)

            {

 

            //

            // Write your code here to get the connection string as a ConnectionStringSettings object

 

            //

                ConnectionStringSettings settings = Williablog.Core.Configuration.AdvancedSettingsManager.SettingsFactory().ConnectionStrings["ErrorDB"];

                if (settings == null)

                    throw new ApplicationException("Connection string is missing for the SQL error log.");

 

                connectionString = settings.ConnectionString ?? string.Empty;

            }

 

// End Williablog changes

 

            //

            // If there is no connection string to use then throw an

            // exception to abort construction.

            //

 

            if (connectionString.Length == 0)

                throw new ApplicationException("Connection string is missing for the SQL error log.");

 

            _connectionString = connectionString;

 

            //

            // Set the application name as this implementation provides

            // per-application isolation over a single store.

            //

 

            string appName = NullString((string)config["applicationName"]);

 

            if (appName.Length > _maxAppNameLength)

            {

                throw new ApplicationException(string.Format(

                    "Application name is too long. Maximum length allowed is {0} characters.",

                    _maxAppNameLength.ToString("N0")));

            }

 

            ApplicationName = appName;

        }

 

        ///<summary>

        /// Initializes a new instance of the <see cref="SqlErrorLog"/> class

        /// to use a specific connection string for connecting to the database.

        ///</summary>

 

        public SqlErrorLog(string connectionString)

        {

            if (connectionString == null)

                throw new ArgumentNullException("connectionString");

 

            if (connectionString.Length == 0)

                throw new ArgumentException(null, "connectionString");

 

            _connectionString = connectionString;

        }

 

        ///<summary>

        /// Gets the name of this error log implementation.

        ///</summary>

 

        public override string Name

        {

            get { return "Microsoft SQL Server Error Log"; }

        }

 

        ///<summary>

        /// Gets the connection string used by the log to connect to the database.

        ///</summary>

 

        public virtual string ConnectionString

        {

            get { return _connectionString; }

        }

 

        ///<summary>

        /// Logs an error to the database.

        ///</summary>

        ///<remarks>

        /// Use the stored procedure called by this implementation to set a

        /// policy on how long errors are kept in the log. The default

        /// implementation stores all errors for an indefinite time.

        ///</remarks>

 

        public override string Log(Error error)

        {

            if (error == null)

                throw new ArgumentNullException("error");

 

            string errorXml = ErrorXml.EncodeString(error);

            Guid id = Guid.NewGuid();

 

            using (SqlConnection connection = new SqlConnection(this.ConnectionString))

            using (SqlCommand command = Commands.LogError(

                id, this.ApplicationName,

                error.HostName, error.Type, error.Source, error.Message, error.User,

                error.StatusCode, error.Time.ToUniversalTime(), errorXml))

            {

                command.Connection = connection;

                connection.Open();

                command.ExecuteNonQuery();

                return id.ToString();

            }

        }

 

        ///<summary>

        /// Returns a page of errors from the database in descending order

        /// of logged time.

        ///</summary>

 

        public override int GetErrors(int pageIndex, int pageSize, IList errorEntryList)

        {

            if (pageIndex < 0)

                throw new ArgumentOutOfRangeException("pageIndex", pageIndex, null);

 

            if (pageSize < 0)

                throw new ArgumentOutOfRangeException("pageSize", pageSize, null);

 

            using (SqlConnection connection = new SqlConnection(this.ConnectionString))

            using (SqlCommand command = Commands.GetErrorsXml(this.ApplicationName, pageIndex, pageSize))

            {

                command.Connection = connection;

                connection.Open();

 

                XmlReader reader = command.ExecuteXmlReader();

 

                try

                {

                    ErrorsXmlToList(reader, errorEntryList);

                }

                finally

                {

                    reader.Close();

                }

 

                int total;

                Commands.GetErrorsXmlOutputs(command, out total);

                return total;

            }

        }

 

        ///<summary>

        /// Begins an asynchronous version of <see cref="GetErrors"/>.

        ///</summary>

 

        public override IAsyncResult BeginGetErrors(int pageIndex, int pageSize, IList errorEntryList,

            AsyncCallback asyncCallback, object asyncState)

        {

            if (pageIndex < 0)

                throw new ArgumentOutOfRangeException("pageIndex", pageIndex, null);

 

            if (pageSize < 0)

                throw new ArgumentOutOfRangeException("pageSize", pageSize, null);

 

            //

            // Modify the connection string on the fly to support async

            // processing otherwise the asynchronous methods on the

            // SqlCommand will throw an exception. This ensures the

            // right behavior regardless of whether the configured

            // connection string sets the Async option to true or not.

            //

 

            SqlConnectionStringBuilder csb = new SqlConnectionStringBuilder(this.ConnectionString);

            csb.AsynchronousProcessing = true;

            SqlConnection connection = new SqlConnection(csb.ConnectionString);

 

            //

            // Create the command object with input parameters initialized

            // and setup to call the stored procedure.

            //

 

            SqlCommand command = Commands.GetErrorsXml(this.ApplicationName, pageIndex, pageSize);

            command.Connection = connection;

 

            //

            // Create a closure to handle the ending of the async operation

            // and retrieve results.

            //

 

            AsyncResultWrapper asyncResult = null;

 

            Function<int, IAsyncResult> endHandler = delegate

            {

                Debug.Assert(asyncResult != null);

 

                using (connection)

                using (command)

                {

                    using (XmlReader reader = command.EndExecuteXmlReader(asyncResult.InnerResult))

                        ErrorsXmlToList(reader, errorEntryList);

 

                    int total;

                    Commands.GetErrorsXmlOutputs(command, out total);

                    return total;

                }

            };

 

            //

            // Open the connection and execute the command asynchronously,

            // returning an IAsyncResult that wraps the downstream one. This

            // is needed to be able to send our own AsyncState object to

            // the downstream IAsyncResult object. In order to preserve the

            // one sent by the caller, we need to maintain and return it from

            // our wrapper.

            //

 

            try

            {

                connection.Open();

 

                asyncResult = new AsyncResultWrapper(

                    command.BeginExecuteXmlReader(

                        asyncCallback != null ? /* thunk */ delegate { asyncCallback(asyncResult); } : (AsyncCallback)null,

                        endHandler), asyncState);

 

                return asyncResult;

            }

            catch (Exception)

            {

                connection.Dispose();

                throw;

            }

        }

 

        ///<summary>

        /// Ends an asynchronous version of <see cref="ErrorLog.GetErrors"/>.

        ///</summary>

 

        public override int EndGetErrors(IAsyncResult asyncResult)

        {

            if (asyncResult == null)

                throw new ArgumentNullException("asyncResult");

 

            AsyncResultWrapper wrapper = asyncResult as AsyncResultWrapper;

 

            if (wrapper == null)

                throw new ArgumentException("Unexepcted IAsyncResult type.", "asyncResult");

 

            Function<int, IAsyncResult> endHandler = (Function<int, IAsyncResult>)wrapper.InnerResult.AsyncState;

            return endHandler(wrapper.InnerResult);

        }

 

        private void ErrorsXmlToList(XmlReader reader, IList errorEntryList)

        {

            Debug.Assert(reader != null);

 

            if (errorEntryList != null)

            {

                while (reader.IsStartElement("error"))

                {

                    string id = reader.GetAttribute("errorId");

                    Error error = ErrorXml.Decode(reader);

                    errorEntryList.Add(new ErrorLogEntry(this, id, error));

                }

            }

        }

 

        ///<summary>

        /// Returns the specified error from the database, or null

        /// if it does not exist.

        ///</summary>

        public override ErrorLogEntry GetError(string id)

        {

            if (id == null)

                throw new ArgumentNullException("id");

 

            if (id.Length == 0)

                throw new ArgumentException(null, "id");

 

            Guid errorGuid;

 

            try

            {

                errorGuid = new Guid(id);

            }

            catch (FormatException e)

            {

                throw new ArgumentException(e.Message, "id", e);

            }

 

            string errorXml;

 

            using (SqlConnection connection = new SqlConnection(this.ConnectionString))

            using (SqlCommand command = Commands.GetErrorXml(this.ApplicationName, errorGuid))

            {

                command.Connection = connection;

                connection.Open();

                errorXml = (string)command.ExecuteScalar();

            }

 

            if (errorXml == null)

                return null;

 

            Error error = ErrorXml.DecodeString(errorXml);

            return new ErrorLogEntry(this, id, error);

        }

 

// These utility functions were marked as internal, so I had to copy them locally

        public static string NullString(string s)

        {

            return s ?? string.Empty;

        }

 

        public static string EmptyString(string s, string filler)

        {

            return NullString(s).Length == 0 ? filler : s;

        }

 

// End

 

        private sealed class Commands

        {

            private Commands() { }

 

            public static SqlCommand LogError(

                Guid id,

                string appName,

                string hostName,

                string typeName,

                string source,

                string message,

                string user,

                int statusCode,

                DateTime time,

                string xml)

            {

                SqlCommand command = new SqlCommand("ELMAH_LogError");

                command.CommandType = CommandType.StoredProcedure;

 

                SqlParameterCollection parameters = command.Parameters;

 

                parameters.Add("@ErrorId", SqlDbType.UniqueIdentifier).Value = id;

                parameters.Add("@Application", SqlDbType.NVarChar, _maxAppNameLength).Value = appName;

                parameters.Add("@Host", SqlDbType.NVarChar, 30).Value = hostName;

                parameters.Add("@Type", SqlDbType.NVarChar, 100).Value = typeName;

                parameters.Add("@Source", SqlDbType.NVarChar, 60).Value = source;

                parameters.Add("@Message", SqlDbType.NVarChar, 500).Value = message;

                parameters.Add("@User", SqlDbType.NVarChar, 50).Value = user;

                parameters.Add("@AllXml", SqlDbType.NText).Value = xml;

                parameters.Add("@StatusCode", SqlDbType.Int).Value = statusCode;

                parameters.Add("@TimeUtc", SqlDbType.DateTime).Value = time;

 

                return command;

            }

 

            public static SqlCommand GetErrorXml(string appName, Guid id)

            {

                SqlCommand command = new SqlCommand("ELMAH_GetErrorXml");

                command.CommandType = CommandType.StoredProcedure;

 

                SqlParameterCollection parameters = command.Parameters;

                parameters.Add("@Application", SqlDbType.NVarChar, _maxAppNameLength).Value = appName;

                parameters.Add("@ErrorId", SqlDbType.UniqueIdentifier).Value = id;

 

                return command;

            }

 

            public static SqlCommand GetErrorsXml(string appName, int pageIndex, int pageSize)

            {

                SqlCommand command = new SqlCommand("ELMAH_GetErrorsXml");

                command.CommandType = CommandType.StoredProcedure;

 

                SqlParameterCollection parameters = command.Parameters;

 

                parameters.Add("@Application", SqlDbType.NVarChar, _maxAppNameLength).Value = appName;

                parameters.Add("@PageIndex", SqlDbType.Int).Value = pageIndex;

                parameters.Add("@PageSize", SqlDbType.Int).Value = pageSize;

                parameters.Add("@TotalCount", SqlDbType.Int).Direction = ParameterDirection.Output;

 

                return command;

            }

 

            public static void GetErrorsXmlOutputs(SqlCommand command, out int totalCount)

            {

                Debug.Assert(command != null);

 

                totalCount = (int)command.Parameters["@TotalCount"].Value;

            }

        }

 

        ///<summary>

        /// An <see cref="IAsyncResult"/> implementation that wraps another.

        ///</summary>

 

        private sealed class AsyncResultWrapper : IAsyncResult

        {

            private readonly IAsyncResult _inner;

            private readonly object _asyncState;

 

            public AsyncResultWrapper(IAsyncResult inner, object asyncState)

            {

                _inner = inner;

                _asyncState = asyncState;

            }

 

            public IAsyncResult InnerResult

            {

                get { return _inner; }

            }

 

            public bool IsCompleted

            {

                get { return _inner.IsCompleted; }

            }

 

            public WaitHandle AsyncWaitHandle

            {

                get { return _inner.AsyncWaitHandle; }

            }

 

            public object AsyncState

            {

                get { return _asyncState; }

            }

 

            public bool CompletedSynchronously

            {

                get { return _inner.CompletedSynchronously; }

            }

        }

    }

}

Finally, all you need to do is modify the web.config file to use this SqlErrorLog instead of the built-in one:

  <elmah>  

    <errorLog type="Williablog.Core.Providers.SqlErrorLog, Williablog.Core"

            connectionStringName="ErrorDB" />

<!--

            Other elmah settings omitted for clarity

-->

  </elmah>

Note: You will still need to reference the Elmah dll in your project, because all we have done here is subclass the ErrorLog type; all of the remaining Elmah goodness is still locked up inside the Elmah dll. You could, of course, make these changes directly inside the Elmah source code and recompile it to produce your own version of the dll, but these changes were project specific and I didn't want to end up one day with dozens of project-specific versions of the Elmah dll. This way, the project-specific code stays with the project and the Elmah dll remains untouched.

Edit: As Stan Shillis points out on the CodeProject version of this article, there is a cleaner, simpler approach that will allow you to keep up with new versions of Elmah without editing the source of each release:

Instead of fully rewriting Elmah's SqlErrorLog you can inherit it and override just the ConnectionString property. This way you don't lose the benefits of Elmah code updates.
 
Sample code:

public class CustomSqlErrorLog : Elmah.SqlErrorLog
{
	protected string connectionStringName;
	public CustomSqlErrorLog(IDictionary config) : base(config)
	{
		connectionStringName = (string)config["connectionStringName"];
	}
 
	public override string ConnectionString {
		get { return CustomConfigManager.ConnectionStrings[connectionStringName]; }
	}
}

 
The only caveat is that you still have to have that connection string entry in your web.config connectionStrings section, because the SqlErrorLog base class checks for its existence. It won't actually use the connection string from the config file, but it needs to be there for it to work properly.
 
Sample config:
 

<elmah>
<errorLog type="YourNameSpace.CustomSqlErrorLog, YourAssembly" connectionStringName="Elmah" applicationName="CustomApp" />
</elmah>
 
<connectionStrings>
    <add name="Elmah" connectionString="do.not.change.or.remove.this" providerName="System.Data.SqlClient" />
</connectionStrings>

Categories: ASP.Net | C# | CodeProject
Posted by Williarob on Thursday, March 18, 2010 12:10 PM
Permalink | Comments (0) | Post RSSRSS comment feed

Auto detect the runtime environment and use the right app settings and connection strings

There are many ways to manage the problem of connection string and app setting substitution in the web.config / app.config files when publishing to different environments (e.g. QA and production servers). In the past I have made use of the Web Deployment project's ability to replace the appSettings and connectionStrings sections, and I have experimented with batch files, build events, conditional compilation and the extremely powerful FinalBuilder. However, my preferred solution is to have a single shared .config file with all the possible settings in it (so you only have to open one file to change any of the settings) and then have the executing application automatically detect the environment and use the correct settings every time.

The technique discussed below builds on that of an earlier article, which described how to centralize your shared application settings and connection strings in a common class library. It also assumes that you know the machine names of your development, QA and production servers. Obviously servers get replaced from time to time and websites sometimes get moved from one server to another, but in my experience there is usually some sort of common naming convention used on servers and web farms, and knowing that convention should be good enough. Even if this is not the case, the development, QA and production server names are stored in an app setting, so you can easily change them at any time if necessary. For this example, the assumption is that the development servers are all named something like "Squirrel01" or "Squirrel02", the QA boxes are "Fox01" and "Fox02", and the production (farm) boxes are "Rabbit01x", "Rabbit01y", "Rabbit02x", "Rabbit02y", etc. With this in mind, it is only necessary to look for the words "Rabbit", "Fox" or "Squirrel" in the machine name we are running on to identify the current environment and know which section of our config file to use. If none of these names is found, we shall assume the app is running on the localhost of a developer's computer, and use those settings. I should point out that it is possible for a server to be configured in such a way as to prevent Environment.MachineName from returning a value, in which case this technique simply will not work. So before you start trying to integrate this code into your solution, I recommend you create a quick test.aspx page or console app that simply does a Response.Write(Environment.MachineName) / Console.WriteLine(Environment.MachineName) and run it on your servers.
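
Such a check can be as small as a throwaway console app (a minimal sketch; the class name is arbitrary):

using System;

class MachineNameCheck
{
    static void Main()
    {
        // Print the name Windows reports for this machine so you can confirm
        // it matches your naming convention before relying on auto detection.
        Console.WriteLine(Environment.MachineName);
    }
}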

First, let's setup our .config file:

<?xml version="1.0" encoding="utf-8" ?>

<configuration>

  <configSections>

    <sectionGroup name="Localhost" type="Williablog.Core.Configuration.EnvironmentSectionGroup, Williablog.Core">

      <section name="appSettings" type="System.Configuration.AppSettingsSection, System.Configuration, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" restartOnExternalChanges="false" requirePermission="false" />

      <section name="connectionStrings" type="System.Configuration.ConnectionStringsSection, System.Configuration, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" requirePermission="false" />

    </sectionGroup>

 

    <sectionGroup name="Dev" type="Williablog.Core.Configuration.EnvironmentSectionGroup, Williablog.Core">

      <section name="appSettings" type="System.Configuration.AppSettingsSection, System.Configuration, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" restartOnExternalChanges="false" requirePermission="false" />

      <section name="connectionStrings" type="System.Configuration.ConnectionStringsSection, System.Configuration, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" requirePermission="false" />

    </sectionGroup>

 

    <sectionGroup name="Qa" type="Williablog.Core.Configuration.EnvironmentSectionGroup, Williablog.Core">

      <section name="appSettings" type="System.Configuration.AppSettingsSection, System.Configuration, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" restartOnExternalChanges="false" requirePermission="false" />

      <section name="connectionStrings" type="System.Configuration.ConnectionStringsSection, System.Configuration, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" requirePermission="false" />

    </sectionGroup>

 

    <sectionGroup name="Production" type="Williablog.Core.Configuration.EnvironmentSectionGroup, Williablog.Core">

      <section name="appSettings" type="System.Configuration.AppSettingsSection, System.Configuration, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" restartOnExternalChanges="false" requirePermission="false" />

      <section name="connectionStrings" type="System.Configuration.ConnectionStringsSection, System.Configuration, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" requirePermission="false" />

    </sectionGroup>

  </configSections>

 

  <Localhost>

    <appSettings>

      <add key="WebServiceUrl" value="http://webservices.squirrel01.yourserver.com/YourService.asmx"/>

      <add key="SmtpServer" value="smtp.yourlocalmailserver.com"/>

    </appSettings>

    <connectionStrings>

      <add name="AppData" connectionString="data source=Ford01;initial catalog=MyDB;User ID=User;Password=Password;" providerName="System.Data.SqlClient"/>

      <add name="ElmahDB" connectionString="Database=ELMAH;Server=Ford02;User=User;Pwd=Password;" providerName="System.Data.SqlClient"/>

    </connectionStrings>

  </Localhost>

 

  <Dev>

    <appSettings>

      <add key="WebServiceUrl" value="http://webservices.squirrel01.yourserver.com/YourService.asmx"/>

      <add key="SmtpServer" value="smtp.yourlocalmailserver.com"/>

    </appSettings>

    <connectionStrings>

      <add name="AppData" connectionString="data source=Ford01;initial catalog=MyDB;User ID=User;Password=Password;" providerName="System.Data.SqlClient"/>

      <add name="ElmahDB" connectionString="Database=ELMAH;Server=Ford02;User=User;Pwd=Password;" providerName="System.Data.SqlClient"/>

    </connectionStrings>

  </Dev>

 

  <Qa>

    <appSettings>

      <add key="WebServiceUrl" value="http://webservices.Fox01.yourserver.com/YourService.asmx"/>

      <add key="SmtpServer" value="smtp.yourlocalmailserver.com"/>

    </appSettings>

    <connectionStrings>

      <add name="AppData" connectionString="data source=BMW01;initial catalog=MyDB;User ID=User;Password=Password;" providerName="System.Data.SqlClient"/>

      <add name="ElmahDB" connectionString="Database=ELMAH;Server=BMW02;User=User;Pwd=Password;" providerName="System.Data.SqlClient"/>

    </connectionStrings>

  </Qa>

 

  <Production>

    <appSettings>

      <add key="WebServiceUrl" value="http://webservices.yourserver.com/YourService.asmx"/>

      <add key="SmtpServer" value="smtp.yourmailserver.com"/>

    </appSettings>

    <connectionStrings>

      <add name="AppData" connectionString="data source=Audi01;initial catalog=MyDB;User ID=User;Password=Password;" providerName="System.Data.SqlClient"/>

      <add name="ElmahDB" connectionString="Database=ELMAH;Server=Audi02;User=User;Pwd=Password;" providerName="System.Data.SqlClient"/>

    </connectionStrings>

  </Production>

 

  <appSettings>

    <!-- Global/common appsettings can go here -->

    <add key="Test" value="Hello World"/>

 

    <add key="DevelopmentNames" value="SQUIRREL"/>

    <add key="ProductionNames" value="RABBIT"/>

    <add key="QANames" value="FOX"/>

    <add key="EnvironmentOverride" value=""/>

    <!-- /Dev | /Localhost | /Production | (blank)-->

 

  </appSettings>

</configuration>

As you can see, the first thing we do in the config file is declare four section groups: "Localhost", "Dev", "Qa" and "Production". I chose to create a custom section group since this allowed me to strongly type the expected sections within it, greatly simplifying the code required to access those sections. All the EnvironmentSectionGroup class does is inherit ConfigurationSectionGroup and declare two properties:

namespace Williablog.Core.Configuration

{

    using System.Configuration;

 

    public class EnvironmentSectionGroup : ConfigurationSectionGroup

    {

 

        #region Properties

 

        [ConfigurationProperty("appSettings")]

        public AppSettingsSection AppSettings

        {

            get

            {

                return (AppSettingsSection)Sections["appSettings"];

            }

        }

 

        [ConfigurationProperty("connectionStrings")]

        public ConnectionStringsSection ConnectionStrings

        {

            get

            {

                return (ConnectionStringsSection)Sections["connectionStrings"];

            }

        }

 

        #endregion

 

    }

}

Next, we create the sections for localhost, development, QA and production, each of which has its own appSettings and connectionStrings sections. These are of the same type as the connectionStrings and appSettings sections found in any .config file, meaning we don't need to write any additional code to fully utilise them - no traversing of primitive XmlNodes or anything like that to get the connection strings out. Finally, we add the expected, normal appSettings section, which in this case provides the global or common app settings shared by all environments. It is here that we store the server names that will help us identify where the app is currently executing. The EnvironmentOverride setting is an added bonus - it allows you to use all of the QA or production settings while running on localhost, which helps you debug those "well, it works on my machine" situations without having to manually change all of the settings for localhost.

Building on the BasicSettingsManager we built earlier we simply add some code to determine the machine name we are running on and return the appSettings and connectionStrings sections appropriate to that environment:

namespace Williablog.Core.Configuration

{

    using System;

    using System.Collections.Specialized;

    using System.Configuration;

    using System.IO;

    using System.Linq;

 

    public class AdvancedSettingsManager

    {

        #region fields

 

        private const string ConfigurationFileName = "Williablog.Core.config";

 

        /// <summary>

        /// default path to the config file that contains the settings we are using

        /// </summary>

        private static string configurationFile;

 

        /// <summary>

        /// Stores an instance of this class, to cut down on I/O: No need to keep re-loading that config file

        /// </summary>

        /// <remarks>Cannot use system.web.caching since agents will not have access to this by default, so use static member instead.</remarks>

        private static AdvancedSettingsManager instance;

 

        /// <summary>

        /// Settings Environment

        /// </summary>

        private static string settingsEnvironment;

 

        private static EnvironmentSectionGroup currentSettingsGroup;

 

        #endregion

 

        #region Constructors

 

        private AdvancedSettingsManager()

        {

            ExeConfigurationFileMap fileMap = new ExeConfigurationFileMap();

 

            fileMap.ExeConfigFilename = configurationFile;

 

            Configuration config = ConfigurationManager.OpenMappedExeConfiguration(fileMap, ConfigurationUserLevel.None);

 

            settingsEnvironment = "Localhost"; // default to localhost

 

            // get the name of the machine we are currently running on

            string machineName = Environment.MachineName.ToUpper();

 

            // compare to known environment machine names

            if (config.AppSettings.Settings["ProductionNames"].Value.Split(',').Where(x => machineName.Contains(x)).Count() > 0)

            {

                settingsEnvironment = "Production";

            }

            else if (config.AppSettings.Settings["QANames"].Value.Split(',').Where(x => machineName.Contains(x)).Count() > 0)

            {

                settingsEnvironment = "Qa";

            }

            else if (config.AppSettings.Settings["DevelopmentNames"].Value.Split(',').Where(x => machineName.Contains(x)).Count() > 0)

            {

                settingsEnvironment = "Dev";

            }

 

            // If there is a value in the EnvironmentOverride appsetting, ignore results of auto detection and set it here

            // This allows us to hit production data from localhost without monkeying with all the config settings.

            if (!string.IsNullOrEmpty(config.AppSettings.Settings["EnvironmentOverride"].Value))

            {

                settingsEnvironment = config.AppSettings.Settings["EnvironmentOverride"].Value;

            }

 

            // Get the name of the section we are using in this environment & load the appropriate section of the config file

            currentSettingsGroup = config.GetSectionGroup(SettingsEnvironment) as EnvironmentSectionGroup;

        }

 

        #endregion

 

        #region Properties

 

        /// <summary>

        /// Returns the name of the current environment

        /// </summary>

        public string SettingsEnvironment

        {

            get

            {

                return settingsEnvironment;

            }

        }

 

        /// <summary>

        /// Returns the ConnectionStrings section

        /// </summary>

        public ConnectionStringSettingsCollection ConnectionStrings

        {

            get

            {

                return currentSettingsGroup.ConnectionStrings.ConnectionStrings;

            }

        }

 

        /// <summary>

        /// Returns the AppSettings Section

        /// </summary>

        public NameValueCollection AppSettings

        {

            get

            {

                NameValueCollection settings = new NameValueCollection();

                foreach (KeyValueConfigurationElement element in currentSettingsGroup.AppSettings.Settings)

                {

                    settings.Add(element.Key, element.Value);

                }

 

                return settings;

            }

        }

 

        #endregion

 

        #region static factory methods

 

        /// <summary>

        /// Public factory method

        /// </summary>

        /// <returns></returns>

        public static AdvancedSettingsManager SettingsFactory()

        {

            // If there is a bin folder, such as in web projects look for the config file there first

            if (Directory.Exists(AppDomain.CurrentDomain.BaseDirectory + @"\bin"))

            {

                configurationFile = string.Format(@"{0}\bin\{1}", AppDomain.CurrentDomain.BaseDirectory, ConfigurationFileName);

            }

            else

            {

                // agents, for example, won't have a bin folder in production

                configurationFile = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, ConfigurationFileName);

            }

 

            // If we still cannot find it, quit now!

            if (!File.Exists(configurationFile))

            {

                throw new FileNotFoundException(configurationFile);

            }

 

            return CreateSettingsFactoryInternal();

        }

 

        /// <summary>

        /// Overload that allows you to pass in the full path and filename of the config file you want to use.

        /// </summary>

        /// <param name="fullPathToConfigFile"></param>

        /// <returns></returns>

        public static AdvancedSettingsManager SettingsFactory(string fullPathToConfigFile)

        {

            configurationFile = fullPathToConfigFile;

            return CreateSettingsFactoryInternal();

        }

 

        /// <summary>internal Factory Method

        /// </summary>

        /// <returns>ConfigurationSettings object

        /// </returns>

        internal static AdvancedSettingsManager CreateSettingsFactoryInternal()

        {

            // If we havent created an instance yet, do so now

            if (instance == null)

            {

                instance = new AdvancedSettingsManager();

            }

 

            return instance;

        }

 

        #endregion

    }

}

As before you can then access the appSettings of the Core.Config from any of your projects like so:

Console.WriteLine(Williablog.Core.Configuration.AdvancedSettingsManager.SettingsFactory().AppSettings["Test"]);
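
Connection strings for the detected environment come from the same factory. A minimal sketch, assuming the "AppData" entry defined in the sample config above:

string appDataConnection = Williablog.Core.Configuration.AdvancedSettingsManager
    .SettingsFactory()
    .ConnectionStrings["AppData"]
    .ConnectionString;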

To make this work, you will need to add a reference to System.Configuration. If the config file and settings manager code are part of a class library, you will also need to set the "Copy to Output Directory" property of your .config file to "Copy always" and add a reference to System.Configuration to each of your projects.

Download the Williablog.Core project: Williablog.Core.zip (100.77 kb)


Posted by Williarob on Thursday, March 18, 2010 9:00 AM
Permalink | Comments (0) | Post RSSRSS comment feed

How to store shared app settings and connection strings with your class library

When working on enterprise-level, multi-tiered .Net applications it is not uncommon to want to create a shared class library that may be used in multiple related projects. For example, let's suppose you are building a public website, a separate private intranet website used by company staff to manage the public site, and one or more console applications that run as scheduled tasks related to both sites. You may have a console application that creates and emails reports about sales and other data, and another app that encodes video or audio that is uploaded to your site. Finally, you probably have another project for unit tests.

Since all of these projects will be working with the same database, you also have a class library in your solution acting as your data layer, and perhaps another Core library that contains other shared components. Each of these projects has its own web.config or app.config file, and you had to copy and paste your connection string, SMTP server data, and various other appSettings required by all the projects into every .config file. You may be inspired to add a new .config file to your Core library and store all of the shared app settings and connection strings in that one central location. If you then delete all of these settings from the other .config files, you'll quickly realize that everything breaks. Even setting the "Copy to Output Directory" property of your Core.config file to "Copy always" doesn't fix this. The reason, of course, is that .Net always looks to the host application for its settings.

The solution is to add some code to your Core project that explicitly loads the Core.config file, reads in the data and makes the results available to all the other projects. That code might look something like this:

namespace Williablog.Core.Configuration

{

    using System;

    using System.Collections.Specialized;

    using System.Configuration;

    using System.IO;

 

    public class BasicSettingsManager

    {

        #region fields

 

        private const string ConfigurationFileName = "Williablog.Core.config";

 

        /// <summary>

        /// default path to the config file that contains the settings we are using

        /// </summary>

        private static string configurationFile;

 

        /// <summary>

        /// Stores an instance of this class, to cut down on I/O: No need to keep re-loading that config file

        /// </summary>

        /// <remarks>Cannot use system.web.caching since agents will not have access to this by default, so use static member instead.</remarks>

        private static BasicSettingsManager instance;

 

        private static Configuration config;

 

        #endregion

 

        #region Constructors

 

        private BasicSettingsManager()

        {

            ExeConfigurationFileMap fileMap = new ExeConfigurationFileMap();

            fileMap.ExeConfigFilename = configurationFile;

            config = ConfigurationManager.OpenMappedExeConfiguration(fileMap, ConfigurationUserLevel.None);

        }

 

        #endregion

 

        #region Properties

 

        /// <summary>

        /// Returns the ConnectionStrings section

        /// </summary>

        public ConnectionStringSettingsCollection ConnectionStrings

        {

            get

            {

                return config.ConnectionStrings.ConnectionStrings;

            }

        }

 

        /// <summary>

        /// Returns the AppSettings Section

        /// </summary>

        public NameValueCollection AppSettings

        {

            get

            {

                NameValueCollection settings = new NameValueCollection();

                foreach (KeyValueConfigurationElement element in config.AppSettings.Settings)

                {

                    settings.Add(element.Key, element.Value);

                }

 

                return settings;

            }

        }

 

        #endregion

 

        #region static factory methods

 

        /// <summary>

        /// Public factory method

        /// </summary>

        /// <returns></returns>

        public static BasicSettingsManager SettingsFactory()

        {

            // If there is a bin folder, such as in web projects look for the config file there first

            if (Directory.Exists(AppDomain.CurrentDomain.BaseDirectory + @"\bin"))

            {

                configurationFile = string.Format(@"{0}\bin\{1}", AppDomain.CurrentDomain.BaseDirectory, ConfigurationFileName);

            }

            else

            {

                // agents, for example, won't have a bin folder in production

                configurationFile = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, ConfigurationFileName);

            }

 

            // If we still cannot find it, quit now!

            if (!File.Exists(configurationFile))

            {

                throw new FileNotFoundException(configurationFile);

            }

 

            return CreateSettingsFactoryInternal();

        }

 

        /// <summary>

        /// Overload that allows you to pass in the full path and filename of the config file you want to use.

        /// </summary>

        /// <param name="fullPathToConfigFile"></param>

        /// <returns></returns>

        public static BasicSettingsManager SettingsFactory(string fullPathToConfigFile)

        {

            configurationFile = fullPathToConfigFile;

            return CreateSettingsFactoryInternal();

        }

 

        /// <summary>internal Factory Method

        /// </summary>

        /// <returns>ConfigurationSettings object

        /// </returns>

        internal static BasicSettingsManager CreateSettingsFactoryInternal()

        {

            // If we havent created an instance yet, do so now

            if (instance == null)

            {

                instance = new BasicSettingsManager();

            }

 

            return instance;

        }

 

        #endregion

    }

}

You can then access the appSettings of Core.Config from any of your projects like so:

Console.WriteLine(Williablog.Core.Configuration.BasicSettingsManager.SettingsFactory().AppSettings["Key"]);

To make this work, you will need to set the "Copy to Output Directory" property of your Core.config file to "Copy always" and add a reference to System.Configuration to each of your projects.

We shall take this a step further next time and expand on this technique to enable your Core project to automatically detect whether it is running on localhost, a development environment, QA, or production, and to return the appropriate connection strings and settings for that environment.


Posted by Williarob on Thursday, March 18, 2010 8:08 AM
Permalink | Comments (0) | Post RSSRSS comment feed

Mock a database repository using Moq

The concept of unit testing my code is still fairly new to me; I was introduced to it when I started writing applications with the Microsoft MVC Framework in Visual Studio 2008.

Intimidated somewhat by the Moq library's heavy reliance on lambdas, I initially wrote full mock classes myself, each implementing the same interface as my real database repositories. I'd only write the code for the methods I needed; all other methods would simply throw a NotImplementedException. However, I quickly discovered the problem with this approach: whenever a new method was added to the interface, my test project would no longer build (since the new method was not implemented in my mock repository) and I would have to manually add a new method that threw another NotImplementedException. After doing this for the 5th or 6th time I decided to face my fears and get to grips with using the Moq library instead. Here is a simple example of how you can mock a database repository class using the Moq library.
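
For context, a hand-written fake along those lines might look like this (a minimal sketch, assuming the IProductRepository interface and Product class shown later in this post):

namespace MoqRepositorySample
{
    using System;
    using System.Collections.Generic;

    // Hand-rolled fake: only the members a particular test needs are implemented,
    // everything else throws, and every new interface member breaks the build.
    public class FakeProductRepository : IProductRepository
    {
        private readonly IList<Product> products = new List<Product>();

        public IList<Product> FindAll()
        {
            return this.products;
        }

        public Product FindByName(string productName)
        {
            throw new NotImplementedException();
        }

        public Product FindById(int productId)
        {
            throw new NotImplementedException();
        }

        public bool Save(Product target)
        {
            throw new NotImplementedException();
        }
    }
}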

Let's assume that your database contains a table called Product, and that either you or Linq, or LLBLGen, or something similar has created the following class to represent that table as an object in your class library:

The Product Class

namespace MoqRepositorySample

{

    using System;

 

    public class Product

    {

        public int ProductId { get; set; }

 

        public string Name { get; set; }

 

        public string Description { get; set; }

 

        public double Price { get; set; }

 

        public DateTime DateCreated { get; set; }

 

        public DateTime DateModified { get; set; }

    }

}

 

Your Product Repository class might implement an interface similar to the following, which offers basic database functionality: retrieving a product by id or by name, fetching all products, and a Save method that handles both inserting and updating products.

 

The IProductRepository Interface

 

namespace MoqRepositorySample

{

    using System.Collections.Generic;

 

    public interface IProductRepository

    {

        IList<Product> FindAll();

 

        Product FindByName(string productName);

 

        Product FindById(int productId);

 

        bool Save(Product target);

    }

}

 

The test class that follows demonstrates how to use Moq to set up a mock Products repository based on the interface above. The unit tests shown here focus primarily on testing the mock repository itself, rather than on testing how your application uses the repository, as they would in the real world.

 

Microsoft Unit Test Class

 

namespace TestProject1

{

    using System;

    using System.Collections.Generic;

    using System.Linq;

    using Microsoft.VisualStudio.TestTools.UnitTesting;

 

    using Moq;

 

    using MoqRepositorySample;

 

    ///<summary>

    /// Summary description for UnitTest1

    ///</summary>

    [TestClass]

    public class UnitTest1

    {

        ///<summary>

        /// Constructor

        ///</summary>

        public UnitTest1()

        {

            // create some mock products to play with

            IList<Product> products = new List<Product>

                {

                    new Product { ProductId = 1, Name = "C# Unleashed", Description = "Short description here", Price = 49.99 },

                    new Product { ProductId = 2, Name = "ASP.Net Unleashed", Description = "Short description here", Price = 59.99 },

                    new Product { ProductId = 3, Name = "Silverlight Unleashed", Description = "Short description here", Price = 29.99 }

                };

 

            // Mock the Products Repository using Moq

            Mock<IProductRepository> mockProductRepository = new Mock<IProductRepository>();

 

            // Return all the products

            mockProductRepository.Setup(mr => mr.FindAll()).Returns(products);

 

            // return a product by Id

            mockProductRepository.Setup(mr => mr.FindById(It.IsAny<int>())).Returns((int i) => products.Where(x => x.ProductId == i).Single());

 

            // return a product by Name

            mockProductRepository.Setup(mr => mr.FindByName(It.IsAny<string>())).Returns((string s) => products.Where(x => x.Name == s).Single());

 

            // Allows us to test saving a product

            mockProductRepository.Setup(mr => mr.Save(It.IsAny<Product>())).Returns(

                (Product target) =>

                {

                    DateTime now = DateTime.Now;

 

                    if (target.ProductId.Equals(default(int)))

                    {

                        target.DateCreated = now;

                        target.DateModified = now;

                        target.ProductId = products.Count() + 1;

                        products.Add(target);

                    }

                    else

                    {

                        var original = products.Where(q => q.ProductId == target.ProductId).Single();

 

                        if (original == null)

                        {

                            return false;

                        }

 

                        original.Name = target.Name;

                        original.Price = target.Price;

                        original.Description = target.Description;

                        original.DateModified = now;

                    }

 

                    return true;

                });

 

            // Complete the setup of our Mock Product Repository

            this.MockProductsRepository = mockProductRepository.Object;

        }

 

        ///<summary>

        /// Gets or sets the test context which provides

        /// information about and functionality for the current test run.

        ///</summary>

        public TestContext TestContext { get; set; }

 

        ///<summary>

        /// Our Mock Products Repository for use in testing

        ///</summary>

        public readonly IProductRepository MockProductsRepository;

 

        ///<summary>

        /// Can we return a product By Id?

        ///</summary>

        [TestMethod]

        public void CanReturnProductById()

        {

            // Try finding a product by id

            Product testProduct = this.MockProductsRepository.FindById(2);

 

            Assert.IsNotNull(testProduct); // Test if null

            Assert.IsInstanceOfType(testProduct, typeof(Product)); // Test type

            Assert.AreEqual("ASP.Net Unleashed", testProduct.Name); // Verify it is the right product

        }

 

        ///<summary>

        /// Can we return a product By Name?

        ///</summary>

        [TestMethod]

        public void CanReturnProductByName()

        {

            // Try finding a product by Name

            Product testProduct = this.MockProductsRepository.FindByName("Silverlight Unleashed");

 

            Assert.IsNotNull(testProduct); // Test if null

            Assert.IsInstanceOfType(testProduct, typeof(Product)); // Test type

            Assert.AreEqual(3, testProduct.ProductId); // Verify it is the right product

        }

 

        ///<summary>

        /// Can we return all products?

        ///</summary>

        [TestMethod]

        public void CanReturnAllProducts()

        {

            // Try finding all products

            IList<Product> testProducts = this.MockProductsRepository.FindAll();

 

            Assert.IsNotNull(testProducts); // Test if null

            Assert.AreEqual(3, testProducts.Count); // Verify the correct Number

        }

 

        ///<summary>

        /// Can we insert a new product?

        ///</summary>

        [TestMethod]

        public void CanInsertProduct()

        {

            // Create a new product; note I do not supply an id

            Product newProduct = new Product

                { Name = "Pro C#", Description = "Short description here", Price = 39.99 };

 

            int productCount = this.MockProductsRepository.FindAll().Count;

            Assert.AreEqual(3, productCount); // Verify the expected Number pre-insert

 

            // try saving our new product

            this.MockProductsRepository.Save(newProduct);

 

            // demand a recount

            productCount = this.MockProductsRepository.FindAll().Count;

            Assert.AreEqual(4, productCount); // Verify the expected Number post-insert

 

            // verify that our new product has been saved

            Product testProduct = this.MockProductsRepository.FindByName("Pro C#");

            Assert.IsNotNull(testProduct); // Test if null

            Assert.IsInstanceOfType(testProduct, typeof(Product)); // Test type

            Assert.AreEqual(4, testProduct.ProductId); // Verify it has the expected productid

        }

 

        ///<summary>

        /// Can we update a product?

        ///</summary>

        [TestMethod]

        public void CanUpdateProduct()

        {

            // Find a product by id

            Product testProduct = this.MockProductsRepository.FindById(1);

 

            // Change one of its properties

            testProduct.Name = "C# 3.5 Unleashed";

 

            // Save our changes.

            this.MockProductsRepository.Save(testProduct);

 

            // Verify the change

            Assert.AreEqual("C# 3.5 Unleashed", this.MockProductsRepository.FindById(1).Name);

        }

    }

}
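
In the real world, the mock would be injected into the class under test rather than exercised directly. A minimal sketch, assuming a hypothetical ProductService class that is not part of the sample download:

namespace MoqRepositorySample
{
    using System.Linq;

    // Hypothetical consumer of IProductRepository, shown only to illustrate
    // where the mocked repository would be used in a real test.
    public class ProductService
    {
        private readonly IProductRepository repository;

        public ProductService(IProductRepository repository)
        {
            this.repository = repository;
        }

        public bool IsInCatalog(string productName)
        {
            return this.repository.FindAll().Any(p => p.Name == productName);
        }
    }
}

A test method added to the UnitTest1 class above could then exercise the service against the mock:

        [TestMethod]
        public void ProductServiceFindsKnownProduct()
        {
            var service = new ProductService(this.MockProductsRepository);
            Assert.IsTrue(service.IsInCatalog("C# Unleashed"));
        }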

 

Download the Sample project and run the tests yourself:

MoqRepositorySample.zip (691.96 kb)


Categories: ASP.Net | C# | CodeProject | Moq | MVC | Unit Testing
Posted by Williarob on Tuesday, December 15, 2009 8:17 AM
Permalink | Comments (0) | Post RSSRSS comment feed

How to get the length (duration) of a media File in C# on Windows 7

If you have ever looked at a media file (audio or video) in the explorer window on a Windows 7 PC, you may have noticed that it displays additional information about that media file that previous versions of Windows didn't seem to have access to, for example the length/duration of a Quicktime Movie Clip:

 

Even right clicking the file and choosing Properties > Details does not give me this information on my Vista Ultimate PC. Of course, now that Windows has the ability to fetch this information, so do we as developers, through the Windows API (The DLL to Import by the way is "propsys.dll"):

        internal enum PROPDESC_RELATIVEDESCRIPTION_TYPE

        {

            PDRDT_GENERAL,

            PDRDT_DATE,

            PDRDT_SIZE,

            PDRDT_COUNT,

            PDRDT_REVISION,

            PDRDT_LENGTH,

            PDRDT_DURATION,

            PDRDT_SPEED,

            PDRDT_RATE,

            PDRDT_RATING,

            PDRDT_PRIORITY

        }

 

 

        [DllImport("propsys.dll", CharSet = CharSet.Unicode, SetLastError = true)]

        internal static extern int PSGetNameFromPropertyKey(

            ref PropertyKey propkey,

            [Out, MarshalAs(UnmanagedType.LPWStr)] out string ppszCanonicalName

        );

 

        [DllImport("propsys.dll", CharSet = CharSet.Unicode, SetLastError = true)]

        internal static extern HRESULT PSGetPropertyDescription(

            ref PropertyKey propkey,

            ref Guid riid,

            [Out, MarshalAs(UnmanagedType.Interface)] out IPropertyDescription ppv

        );

 

        [DllImport("propsys.dll", CharSet = CharSet.Unicode, SetLastError = true)]

        internal static extern int PSGetPropertyKeyFromName(

            [In, MarshalAs(UnmanagedType.LPWStr)] string pszCanonicalName,

            out PropertyKey propkey

        );

However, before you rush off to play with these, you may be interested to know that Microsoft has created a great Library that showcases this and many of the other new API features of Windows 7. It's called the WindowsAPICodePack and you can get it here.

If you open the WindowsAPICodePack solution and compile the Shell project, it creates a nice wrapper around all the neat new system properties available through propsys.dll. Adding a reference to WindowsAPICodePack.dll and WindowsAPICodePack.Shell.dll in a console application will allow you to get the duration of just about any media file that Windows recognizes. (Of course, the more codec packs you install, the more types it will recognize; I recommend The Combined Community Codec Pack to maximize your range of playable files.)

Here is a simple example showing how to get the duration of a media file in C# using this library:

namespace ConsoleApplication1

{

    using System;

 

    using Microsoft.WindowsAPICodePack.Shell;

 

    class Program

    {

        static void Main(string[] args)

        {

            if(args.Length < 1)

            {

                Console.WriteLine("Usage: ConsoleApplication1.exe [Filename to test]");

                return;

            }

 

            string file = args[0];

            ShellFile so = ShellFile.FromFilePath(file);

            double nanoseconds;

            double.TryParse(so.Properties.System.Media.Duration.Value.ToString(), out nanoseconds);

            Console.WriteLine("NanaoSeconds: {0}", nanoseconds);

            if (nanoseconds > 0)

            {

                double seconds = Convert100NanosecondsToMilliseconds(nanoseconds) / 1000;

                Console.WriteLine(seconds.ToString());

            }

        }

 

        public static double Convert100NanosecondsToMilliseconds(double nanoseconds)

        {

            // One million nanoseconds in 1 millisecond, but we are passing in 100ns units...

            return nanoseconds * 0.0001;

        }

    }

}

As you can see, the System.Media.Duration Property returns a value in 100ns units so some simple math will turn it into seconds. Download the Test Project which includes the prebuilt WindowsAPICodePack.dll and WindowsAPICodePack.Shell.dll files in the bin folder:

ConsoleApplication1.zip (218.76 kb)
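
Incidentally, since the value is in 100-nanosecond units (the same unit as .NET's TimeSpan ticks), the conversion can also be expressed with TimeSpan.FromTicks. A minimal sketch, reusing the ShellFile variable from the sample above:

            // 'so' is the ShellFile from the sample above; a .NET tick is also
            // 100 ns, so FromTicks converts the raw value straight to a TimeSpan.
            double hundredNsUnits;
            double.TryParse(so.Properties.System.Media.Duration.Value.ToString(), out hundredNsUnits);
            TimeSpan duration = TimeSpan.FromTicks((long)hundredNsUnits);
            Console.WriteLine("Duration: {0}", duration);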

For the curious, I tested this on Windows XP and as you'd expect, it didn't work:

Unhandled Exception: System.DllNotFoundException: Unable to load DLL 'propsys.dll': The specified module could not be found. (Exception from HRESULT: 0x8007007E)

On Vista Ultimate SP2, it still didn't work - nanoseconds was always 0, though it didn't throw any exceptions.

For the older systems I guess we are limited to using the old MCI (Media Control Interface) API:

        using System.Runtime.InteropServices;
        using System.Text;

 

        [DllImport("winmm.dll")]

        public static extern int mciSendString(string lpstrCommand, StringBuilder lpstrReturnString, int uReturnLength, int hwndCallback);

 

        [DllImport("winmm.dll")]

        private static extern int mciGetErrorString(int l1, StringBuilder s1, int l2);

 

        private void FindLength(string file)

        {

            string cmd = "open " + file + " alias voice1";

            StringBuilder mssg = new StringBuilder(255);

            int h = mciSendString(cmd, null, 0, 0);

            int i = mciSendString("set voice1 time format ms", null, 0, 0);

            int j = mciSendString("status voice1 length", mssg, mssg.Capacity, 0);

            Console.WriteLine(mssg.ToString());
            // Close the device so the file is released when we are done with it
            mciSendString("close voice1", null, 0, 0);

        }

This works fine for .mp3, .avi and other formats that play natively in Windows Media Player, but even with a codec pack installed it doesn't work on QuickTime or .mp4 files, where the new Windows 7 API did.


Categories: C# | CodeProject | Windows | Windows 7
Posted by Williarob on Wednesday, October 21, 2009 12:14 PM
Permalink | Comments (0) | Post RSSRSS comment feed

Using Expression Encoder 2 Silverlight 2 Templates in your project

Some time ago, I wrote a popular article on how to create a scrolling Silverlight 1.x playlist using Microsoft Expression Encoder output. Well, I finally found some time to revisit that application to see how I might upgrade it to Silverlight 2. As you are probably aware, Expression Encoder 2 Service Pack 1 is now out, and it ships with a handful of player templates just for Silverlight 2. Among these new templates are two which already have built-in scrolling playlists, and I thought I would test one out.

However, I already had all my videos encoded - including all the chapter point thumbnails, etc. - so I didn't want to start inside Encoder, have it build my project and work from there like I did last time. This time, I created a new Silverlight 2.0 UserControl project in Visual Studio 2008 and worked for a week or two on the new look and feel for the site before I decided it was time to merge my project with the Expression Encoder template. I found Tim Heuer's blog entry on integrating these new templates very helpful, though I feel it is important to add that the template itself (meaning the look and feel of the player) is not stored in the dlls, and it does not matter which of the template projects you open and compile: just referencing the dlls and following Tim's instructions will always give you the standard Silverlight 2 player. In order to add the look and feel, you must copy (or merge) the contents of the Page.xaml file in that template into your own UserControl.

For my project I wanted the player to be a separate UserControl, so I went to Project > New Item > Silverlight User Control, and called it MediaPlayer.xaml. Next I pasted everything from C:\Program Files\Microsoft Expression\Encoder 2\Templates\en\FrostedGallery\Source\Page.xaml into my new MediaPlayer.xaml file, then changed the x:Class at the very top to reflect my original namespace and class name. (If you forget what it was, just undo your paste, make a note of it and redo the paste.)

I also wanted my player control to be available on multiple pages, and I wanted it hidden most of the time, only popping up in a pseudo-modal 'Lightbox' format with a close button. To achieve this effect I simply used the techniques described by Scott Guthrie in his excellent tutorial on creating a Silverlight Digg application. So now I had my player, and I could show and hide it with the click of a button. Basically, my site showcases the James Bond movies, and I have customized the Yet Another Carousel control so that you can click on the selected box art and read the synopsis, watch the trailer and buy it on Amazon.com. My initial idea, therefore, was to create a new PlayListItem programmatically and add it to the player's PlayListCollection in the MouseLeftButtonUp event of the button. While this worked quite nicely, I ultimately ended up with a playlist slowly being generated in whatever order the user clicked on the movies, which made it harder to keep track of which item in the collection was which movie; more importantly, I couldn't figure out how to take full advantage of all the chapter information and thumbnails I had created for the original project. I could create chapter item objects and add them to my own PlayListCollection object, but I could not bind that new object to my player, since the Playlist property is read only.

Reading more of Tim's blog, I saw that you could add some xml to the InitParams of the control, but I have 24 videos, each with thumbnail paths for at least 4 chapter points, and I didn't need to start typing it all into a single line to understand what a nightmare that would become: not only would it be hard to read and maintain, but it also goes against the whole MVC ethic of separating code, data and presentation layers that we have all grown so attached to.

More Google searches led me to this solution, which does allow you to move the creation of the parameters into the code behind, but it seems to require tinkering with the original template code. I am not opposed to that, but I'd already thought of a cleaner solution: what I wanted was a way to create an xml file containing my entire playlist in this format:

<?xml version="1.0" encoding="utf-8" ?>
<playList>
  <playListItems>
    <playListItem title="Dr No" description="Trailer" mediaSource="ClientBin/01_dr_no.wmv" adaptiveStreaming="False" thumbSource="ClientBin/01_dr_no_Thumb.jpg" >
      <chapters>
        <chapter position="29.176" thumbnailSource="ClientBin/01_dr_no_MarkerThumb 00.00.29.1760677.jpg" title="1" />
        <chapter position="49.374" thumbnailSource="ClientBin/01_dr_no_MarkerThumb 00.00.49.3748838.jpg" title="2" />
        <!-- etc -->
      </chapters>
    </playListItem>
  </playListItems>
</playList>

and simply pass that file to my player. The solution turned out to be trivially easy: I created a new class that inherits from ExpressionMediaPlayer.MediaPlayer and added a method that accepts my file:

 

using System;
using System.Diagnostics;
using System.Net;
using System.Windows;
using System.Windows.Browser;
using System.Windows.Controls;
using System.Windows.Documents;
using System.Windows.Ink;
using System.Windows.Input;
using System.Windows.Media;
using System.Windows.Media.Animation;
using System.Windows.Shapes;
using System.Xml.Linq;

namespace Bond_Silverlight2
{
    public class MI6MediaPlayer : ExpressionMediaPlayer.MediaPlayer
    {
        public MI6MediaPlayer() : base()
        {
        }

        // Load the supplied xml playlist file and hand it to the base player.
        public void OnStartup(string xmlPlayList)
        {
            XDocument document = XDocument.Load(xmlPlayList);
            try
            {
                Playlist.Clear();
                Playlist.ParseXml(HtmlPage.Document.DocumentUri, document.ToString());
            }
            catch (System.Xml.XmlException xe)
            {
                Debug.WriteLine("XML Parsing Error:" + xe.ToString());
            }
            catch (NullReferenceException)
            {
                // Intentionally ignored.
            }
        }
    }
}

 


This required some minor changes to the MediaPlayer.xaml file, in order to make it use my version of the player:

First of all, I replaced the <expression:ExpressionPlayer> tags with <Bond_Silverlight2:MI6MediaPlayer> tags; any static resource styles with a target type of ExpressionPlayer:ExpressionPlayer also needed to be replaced, and then everything used my new player. Obviously, there was one more step: initializing my new player and passing it my xml playlist. First, I created my xml file in the format outlined above. It is important to note that my video files and associated JPEG files are stored on the web server inside the ClientBin folder, rather than inside the xap file as resources or content. In the code-behind of my MediaPlayer.xaml (that is, the UserControl I pasted Page.xaml into earlier, not the MI6MediaPlayer class shown above), I call the startup method with a link to my xml file:

 

 

using System;
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Documents;
using System.Windows.Input;
using System.Windows.Media;
using System.Windows.Media.Animation;
using System.Windows.Shapes;

namespace Bond_Silverlight2
{
    public partial class MediaPlayer : UserControl
    {
        public MediaPlayer()
        {
            InitializeComponent();
            this.Loaded += new RoutedEventHandler(MediaPlayer_Loaded);
        }

        void MediaPlayer_Loaded(object sender, RoutedEventArgs e)
        {
            // Pass the playlist file to the custom player as soon as the control loads.
            Player1.OnStartup("Playlist.xml");
        }

        private void Button_Click(object sender, RoutedEventArgs e)
        {
            // The close button stops playback and hides the 'Lightbox' player.
            Player1.Stop();
            Visibility = System.Windows.Visibility.Collapsed;
        }
    }
}

 

The xml file ("Playlist.xml") is stored as Content within the xap file. This should be the default behavior if you created the xml file inside Visual Studio via the Project > Add New Item menu, but if you didn't, you can check by right-clicking the xml file, choosing Properties, and verifying that the Build Action is "Content" and Copy to Output Directory is set to "Do not copy".

Now when my Player is first loaded into memory, my full playlist is immediately available.
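Incidentally, if you would rather keep the playlist on the web server next to the media files, so that it can be edited without recompiling the xap, something like the following sketch should work inside MI6MediaPlayer. It reuses the same Playlist.ParseXml call from the OnStartup method above and only assumes that a playlist file is reachable at a URL relative to the page; OnStartupFromServer is a hypothetical name.

// Alternative approach (not the one used above): download the playlist from
// the web server at runtime, then feed it to the same parser.
public void OnStartupFromServer(string playlistUrl)
{
    var client = new WebClient();
    client.DownloadStringCompleted += (s, e) =>
    {
        if (e.Error != null)
        {
            Debug.WriteLine("Playlist download failed: " + e.Error);
            return;
        }
        Playlist.Clear();
        Playlist.ParseXml(HtmlPage.Document.DocumentUri, e.Result);
    };
    client.DownloadStringAsync(new Uri(playlistUrl, UriKind.Relative));
}

Calling Player1.OnStartupFromServer("ClientBin/Playlist.xml") from MediaPlayer_Loaded, instead of OnStartup("Playlist.xml"), would then let the playlist be updated without redeploying the application.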

 


Posted by Williarob on Wednesday, April 08, 2009 1:47 PM