A Way of Organizing Project Documentation

by Ventsy Popov

Have you ever wondered whether the way you organize your documents works for you and your team? Recently I had to sit down and think (and research) about a common structure that, on one hand, is general enough to work (with minor adjustments) for most of the projects I participate in, and on the other hand, is logical and self-explanatory.

Here is what I came up with:

  1. Project Planning
    1. Brief – an overview of the project setting one's focus on the subject.
    2. Specification – definition of the scope and work that has to be done.
    3. Plan – time schedule describing when and what should be done in a sequential manner along with specific milestones.
    4. Roles & Responsibilities – definition of a person's/team's involvement in the project.
  2. Environment
    1. Technical Documentation
      1. Network Access – what credentials and software are needed and how to use them.
      2. Systems and architecture – any specifics about interacting servers or applications.
    2. Rules & Procedures – description of things such as way of time tracking, question escalation, access requests, etc.
    3. Guides/Manuals – any documentation such as user manuals, installation guides, or other information regarding environment set-up or usage.
  3. Project Areas (Modules)
    1. Area 1
      1. Requirements – expectations that have to be met by persons/teams involved in this area.
      2. Processes – description of the steps that are taken for fulfilling the above requirements.
      3. Issues & Solutions – any issue that is found during the process of work and its corresponding workaround (if there is no workaround, it belongs to “Questions & Concerns”).
    2. Area 2
      1. ...
  4. Questions & Concerns – any open questions that need to be resolved along with obstacles/assumed difficulties or recommendations.
  5. Meetings & Discussions – description of meetings (discussed topics, decisions taken, responsibilities distributed, deadlines that were set, etc.).
  6. Contacts – contact information of the persons taking part in the project.


Reader, do share your thoughts or experience on this matter :).

Organization | Management

How to Access a Local Web Site or Service, Using Host Header

by Ventsy Popov

If we are part of a software development team (as opposed to being the one-man-does-it-all guy), quite often we find ourselves in a situation in which we have to set up a new development environment. One of the issues we have to cope with is adjusting configuration to match the new machine's specifics. A useful trick is to implement aliases (SQL Server aliases, host aliases, etc.) on each developer machine, so that there are no differences in the configuration. This way, if the configuration settings are kept in a version control system, they are used by each developer without anything needing to be changed. Instead, what happens behind the curtains is that all the settings are interpreted with the help of the mentioned aliases to match the working machine's environment.

Following the above-described practice, recently we (the team and I) had to deal with a security issue that did not allow us to take full advantage of alias configuration. My point in this post is to describe the problem and the solution, hoping that it can be useful to someone other than just me :).

What was required

We had an MS SQL Reporting Services solution with a couple of projects in it. The projects had to be configured so that:
    - each one used one and the same "Target Server URL" for deployment;
    - behind this "Target Server URL" actually stood the developer's local machine.

What had to be done

In order for these things to work, every developer machine had to include two changes:
1) To have a record in the C:\Windows\System32\Drivers\Etc\hosts file saying that the "fixedname" server actually resolves to the local machine;
2) And in the Report Server Web Service configuration, we had to add http://fixedname/ as a new URL from which the service could be accessed.
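For illustration, the two changes could look roughly like this (the IP address and the report server path are assumptions on my part; your values may differ):

```
# C:\Windows\System32\Drivers\Etc\hosts — point the fixed name at this machine
127.0.0.1    fixedname

# Reporting Services Configuration Manager -> Web Service URL:
# add http://fixedname/ReportServer as an additional URL the service listens on
```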

What was the problem

After doing what logically seemed to be enough, we tried to open Reporting Services through the newly added URL, only to find out that Windows authentication did not work for this URL, and we could not use it at all.

What was the reason

After digging for some time, it turned out that IIS 5.1 and above, in combination with Windows Server 2003 SP1 (or Windows XP SP2) and later versions of Windows, introduced a security check. This check did not allow us to authenticate when the request was fired from the local machine using a host header that matched a host header configured for the same machine. If we fired the request from another machine within the network, things worked perfectly. But in our situation, requests had to be made and serviced by the same environment.

What was the solution

Well, it so happened that there was a trick we could use to get past the security check – disabling it. Disabling it was justified in our case, but do consider the implications carefully whenever you turn off a security feature. All we had to do was:
1. Click Start, click Run, type regedit, and then click OK.
2. In Registry Editor, locate and then click the following registry key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa
3. Right-click Lsa, point to New, and then click DWORD Value.
4. Type DisableLoopbackCheck, and then press ENTER.
5. Right-click DisableLoopbackCheck, and then click Modify.
6. In the Value data box, type 1, and then click OK.
7. Quit Registry Editor, and then restart your computer.
... as instructed in the following Microsoft KB Article: http://support.microsoft.com/kb/896861
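If you prefer, the same registry change can be made in one line from an elevated command prompt (equivalent to the manual steps above; a restart is still needed afterwards):

```shell
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v DisableLoopbackCheck /t REG_DWORD /d 1 /f
```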

This scenario applies not only to Reporting Services, but whenever you need a web site requiring a host header to be accessed from the same machine that hosts the site.

Reporting Services | IIS

Sofia University - ASP.NET State Management

by Ventsy Popov

As developers we are quite often put in the situation of trying to find the right solution to a currently emerging problem. The immense amount of information the internet puts at our fingertips might not always be useful; if you think about it, it can even be a bit wicked… Wicked in that it tempts us to copy and reuse ready-made fragments, which leads to our tasks being finished as soon as possible. What I am trying to say is that if we want to reuse our own knowledge, we should be aware of the fundamentals of the technologies we utilize.

Some of the basic, yet important aspects of ASP.NET state management a developer should have in mind: 

  1. Try to see the whole picture of state management in web applications in general (cookies, hidden fields, parameterized addresses).
  2. Then, above all – know the page lifecycle. It comes in very handy when you wonder what is executed first: the button click event handler or the page load one :).
  3. Next, see what goodies we have in ASP.NET in particular – view state, application state, session state.
  4. Last – check how to manipulate request/response objects in .NET.
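As a small, framework-neutral illustration of point 1 (Python's standard library here, purely for brevity): a cookie is nothing more than a header the server sets and the browser echoes back on the next request, which is how state survives stateless HTTP:

```python
from http import cookies

# server side: put a bit of state into a Set-Cookie header
jar = cookies.SimpleCookie()
jar["visits"] = "3"
print(jar.output())  # Set-Cookie: visits=3

# client side: the browser sends the cookie back; the server parses it
incoming = cookies.SimpleCookie()
incoming.load("visits=3")
print(incoming["visits"].value)  # 3
```

View state, session state, and the rest of the ASP.NET goodies are, in the end, conveniences built over exactly this kind of round trip.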

A presentation from a Sofia University course referring to the above matters:  ASP.NET-State-Management.pdf (12.48 mb)
…and the demos for it: State-Management-Demos.rar (44.68 kb)

Presentations | ASP.NET

Backing up your web site in a few steps

by Ventsy Popov
Yesterday I decided to deal with something I’ve been postponing since the moment I installed the blog here. I assume every hosting company has a way to back up the data it hosts, mine included. Yet, I am a bit more cautious when it comes to archiving my information in case it has to be recovered later… Along with prayers to the computer gods not to allow such emergencies, I usually want to be prepared as well. Here is a pretty easy way to regularly back up a web site to a local machine:


First, go and get BestSync 2011 (link here).
It’s a freeware app, and once installed it will let you define a sync procedure between folders (local or not). First you have to set the folders you need to sync, and the sync direction.
Then you have to define the desired schedule for the sync, and you can even mark the task to be run as a Windows service, so there is no need to run BestSync at Windows startup.
So far so good, but I wanted my backup to be more than just a sync process between two folders. My goal was to have daily backup archives. Following this idea, I added an .exe file as an application to be run every time the sync task finishes.
The interesting part is how I created the .exe file, which had to do the simplest thing – get all the data from the latest backup folder and copy it to a brand new folder named with the current date and time. I wanted a super simple way to do it – a script, for example. So I prepared the following .vbs:
Dim fso, objFolder
strComputer = "."

' Get the current local date and time through WMI
Set objWMIService = GetObject("winmgmts:\\" & strComputer & "\root\cimv2")
Set colItems = objWMIService.ExecQuery("Select * from Win32_OperatingSystem")

For Each objItem in colItems
    dtmLocalTime = objItem.LocalDateTime
    dtmMonth = Mid(dtmLocalTime, 5, 2)
    dtmDay = Mid(dtmLocalTime, 7, 2)
    dtmYear = Left(dtmLocalTime, 4)
    dtmHour = Mid(dtmLocalTime, 9, 2)
    dtmMinutes = Mid(dtmLocalTime, 11, 2)
    dtmSeconds = Mid(dtmLocalTime, 13, 2)
Next

' Create a folder named with the current date/time and copy the latest backup into it
strDirectory = "D:\Backups\ventsypopov.com\" & dtmYear & dtmMonth & dtmDay & "_" & dtmHour & "." & dtmMinutes
Set fso = CreateObject("Scripting.FileSystemObject")
Set objFolder = fso.CreateFolder(strDirectory)
fso.CopyFolder "D:\Backups\ventsypopov.com\LatestBak", strDirectory, True
I tested it to be sure it does the job correctly. And here came the first obstacle – BestSync only allowed a .bat, .cmd, or .exe file to be executed after a sync task. A batch or cmd file was not suitable, because I wanted everything to run silently. So here is the trick – I used a little-known application called IExpress, which can be found in C:\Windows\System32, to convert my .vbs to an .exe. Here is a more in-depth description of how it works: http://social.technet.microsoft.com/Forums/en/ITCG/thread/c0e7575a-6983-453b-8959-c6d889ccc01f
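For the curious, the same archive step can be sketched in a few lines of Python (the folder layout mirrors the post; the paths and the function name are illustrative, not part of the original setup):

```python
import shutil
from datetime import datetime
from pathlib import Path

def archive_latest(backup_root):
    """Copy <backup_root>/LatestBak into a new folder named with the
    current date and time, e.g. 20240131_14.05 (as the .vbs above does)."""
    root = Path(backup_root)
    stamp = datetime.now().strftime("%Y%m%d_%H.%M")
    target = root / stamp
    shutil.copytree(root / "LatestBak", target)
    return target
```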

…well, this is no more than a 20-minute procedure to make sure you have taken some disaster measures :). If you try it out, do not forget to test a restore scenario… backups usually work pretty well, but when it comes to restoring… right?



SQL Server Integration Services - example of error handling

by Ventsy Popov

There is a saying about a student who constantly complained to his teacher of always having so much work to do that time was never enough for him. “If you never stop working, when do you actually think about the things you should do and how you should do them?” asked the teacher. Following this thought, when my daily task flow is at a high level of load, I usually try to stop once in a while and think of the bigger picture: what I am actually trying to achieve and whether I am doing it effectively.

For a couple of days now I have been trying to sit down and play a bit with Microsoft SQL Server Integration Services (SSIS) error handling. The reason was that my colleagues and I were working on an SSIS project, and at a certain point the error analysis of migrated data became something of a pain in the triple-letter word :).

We identified the following issues:

  1. We spent more time analyzing errors than actually transferring the data.
  2. While doing the above, we tended to make mistakes in the statistics we provided for the client.

Then we tried to nail the actual reasons for these things:

  1. We had too many error destination components, and we had to go over each one every time the migration finished. This was slowing us down.
  2. We did not have a common way of outputting the error content in the destination components. As a result, we had a different approach to analysis for each one, which is an error-prone way of doing things.

Having this figured out, the next step was to set the goals:

  1. We needed as few error destinations to log data to as possible. One error output per data flow task was an excellent optimization for our case.
  2. We wanted a data format that is relatively easy to analyze. So we targeted at least an Excel sheet to output into, not just a flat file.

So I put some effort into making an example of a problematic (in the context of our case) package. In it we had two types of error outputs (all pointing to flat file destinations) – a) business logic errors and b) data manipulation errors:


Starting almost from scratch:


The data flow task was transformed into this:


The key points here are:

  1. We have multiple error outputs (both business logic and data manipulation) united into a single path flow.
  2. For every business logic type of error we add a custom reason (i.e. “Add Reason” component on the image).
  3. We have a Script component extracting the actual description of errors with the following code:
    public override void Input0_ProcessInputRow(Input0Buffer Row)
    {
        if (Row.ErrorCode_IsNull == false)
        {
            // a data manipulation error – look up the engine's description for the code
            Row.ErrorDescription = ComponentMetaData.GetErrorDescription(Row.ErrorCode);
            try
            {
                Row.ErrorColumnName = ComponentMetaData.InputCollection[0].InputColumnCollection[Row.ErrorColumn].Name;
            }
            catch (Exception ex)
            {
                Row.ErrorColumnName = "Column Name retrieval failure";
            }
        }
        else
        {
            // a business logic error – use the custom reason added upstream
            Row.ErrorDescription = Row.Reason;
        }
    }
  4. All information is gathered into an Excel sheet for later analysis.

If someone wants to dig deeper, here is the source code of the package:

ErrorTest.zip (28.83 kb)

Integration Services

Elieff Center for Education and Culture - PHP on IIS

by Ventsy Popov
A while ago my boss, Vladimir Tchalkov, called me on the phone with an offer to cover for him on a talk he had to give. I say an "offer", since I have to be honest and admit that it was more of an opportunity for me than a favour to Vlado, who had to make an urgent trip at the time... In other words – something of a win-win situation for both of us. The topic was "PHP Applications Hosted on IIS", and although I had literally no time to prepare, most of the work was already done for me. With some counseling from Vlado and some fooling around with a test application, I was ready to launch :).

The talk was actually part of a small Microsoft event about PHP integration with Microsoft technologies. The other presentation on this topic was given by Svetlin Nakov. Since the audience consisted more of PHP guys than .NET ones, we kind of had to convince them that there is a real deal in using PHP along with Microsoft products... I hope we at least inclined them toward giving it a shot :).

Here are some strokes of the raw material from my side of the talk:

Although CGI is a relatively easy way to delegate the generation of a web page to an executable, it comes at a certain cost. Every time a command is called, we pay the price of creating a new process, which can be a bit of a performance drawback. ISAPI extensions, on the other hand, can be really fast (provided they were developed properly), but require thread safety to be taken into account separately. On top of that, we cannot use scripting languages to create ISAPI extensions (or filters).


Here comes FastCGI
It combines, we can say, the good sides of both of the above:
 - A process is created on the first request and then reused. Hence it is very fast.
 - It has single-threaded execution, which is recommended for non-thread-safe PHP applications. This way we can count on stability as well.
How to Install
1) You can use the Web Platform Installer and, with just a few clicks, have your environment ready;
2) Or you can go through the more tedious process of doing it yourself: enabling FastCGI on IIS, downloading the latest version of PHP for Windows, and configuring IIS to handle PHP requests.

Good To Have in Mind  
After installing, you might want to check out the web.config and configure the maximum number of requests served before a process is recycled, and adjust the maximum number of instances of the process run on a single processor:

<application fullPath="C:\PHP\php-cgi.exe" instanceMaxRequests="10000" maxInstances="4" />
Here is the full presentation for PHP Applications on IIS.


Sofia University - Design Patterns course (Facade)

by Ventsy Popov

I intend to start something of a personal initiative and upload materials from talks I have given. Hopefully they can be useful to others. One of the courses I eagerly participated in, a couple of years ago, was on the subject of Design Patterns.

Façade was the first topic I had to cover, and the talk was very well received by the students at Sofia University, even though I had very little experience in public speaking back then. Thinking about it, good preparation plus some funny moments played the main role in that. As for the more technical aspect, I will try to lead you through the main idea of the pattern:
From time to time we come across a complicated system (in terms of being highly coupled) that we should somehow plug into and use its features. If, God forbid, the system was not developed by us, the misery is doubled :). Let me illustrate what I mean:


If we are in the tough position of creating the client, we will bump into:
   - the complexity of such a base system;
   - the high coupling of the classes;
   - the difficulties we can hardly foresee in supporting such a monster :).
So what we can do is expose only the functionality we need for our client and, in a way, forget about the rest of the hairy base system, reshaping the sketch like this:


This way we create a separate level of abstraction, which is much more easily “absorbable” by us developers.
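To make the idea concrete, here is a minimal sketch (Python for brevity; the subsystem names are purely illustrative and not taken from the course demos): three coupled subsystem classes hidden behind one facade that exposes only the operation the client actually needs.

```python
# Three subsystem classes the client should not have to know about.
class Inventory:
    def reserve(self, item):
        return f"reserved {item}"

class Billing:
    def charge(self, item):
        return f"charged for {item}"

class Shipping:
    def dispatch(self, item):
        return f"dispatched {item}"

class OrderFacade:
    """The single, simple entry point the client sees."""
    def __init__(self):
        self._inventory = Inventory()
        self._billing = Billing()
        self._shipping = Shipping()

    def place_order(self, item):
        # the facade coordinates the subsystems, so the client
        # never deals with their coupling directly
        steps = [
            self._inventory.reserve(item),
            self._billing.charge(item),
            self._shipping.dispatch(item),
        ]
        return "; ".join(steps)

# client code touches only the facade
print(OrderFacade().place_order("book"))
# → reserved book; charged for book; dispatched book
```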

Here is the presentation FacadePattern.pdf and the demos FacadeDemos.zip. Keep in mind the source code is a very minimalistic example of a Façade pattern implementation – enjoy :).


Sofia .NET User Group - Open Data Protocol (OData)

by Ventsy Popov
Recently I gave a talk for the Sofia .NET User Group. It was actually my first talk for the user group, and I was happy to see that, after a couple of years of irregular gatherings, folks are still interested in these events. Nadya Atanasova, who has lately been the main engine behind organizing us geeks, is doing a great job of promoting the meetings.

The topic of my talk was the Open Data Protocol and WCF Data Services. Despite a few surprises – for example, video recording with Microsoft Live Meeting (which I happened to have no experience with) – we were able to start and cut to the chase :). First we travelled a bit in time, tracking the history of OData through Astoria and ADO.NET Data Services until it became a separate web protocol. After that we pulled some data out of Netflix (a super-exploited jQuery and OData example, by the way :)):
    $(document).ready(function () {
        // the feed URL is omitted in the original snippet;
        // netflixODataUrl is a hypothetical variable holding the OData query URL
        $.getJSON(netflixODataUrl,
            function (data) {
                $.each(data.d.results, function (key, val) {
                    var str = '<h3>' + val.Title + '</h3>' + val.Abstract + '<br/>';
                    $('#results').append(str); // append to some container element
                });
            });
    });


And of course, in honor of Microsoft, we played with a little C# code, implementing a simple WCF Data Service with exception handling:
    public static void InitializeService(DataServiceConfiguration config)
    {
        config.SetEntitySetAccessRule("*", EntitySetRights.All);
        // config.SetServiceOperationAccessRule("MyServiceOperation", ServiceOperationRights.All);
        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
    }

    protected override void HandleException(HandleExceptionArgs args)
    {
        // wrap the original error so the client gets a clean service exception
        args.Exception = new DataServiceException(500, args.Exception.Message);
    }

You can check out the slides and demos: