It’s important to understand which kind of DSN you have. On an x64 system, you can create an ODBC connection (DSN) on either the 32-bit or the 64-bit side of the system.
32-bit applications will only see ODBC connections created on the 32-bit side, and 64-bit applications will only see ODBC connections from the 64-bit side; each kind of application has its own registry view (32-bit System DSNs are stored under HKLM\SOFTWARE\Wow6432Node\ODBC\ODBC.INI). To set up a DSN for a 32-bit application you must use:
%WINDIR%\SysWOW64\odbcad32.exe
and for 64-bit application you must use:
%WINDIR%\System32\odbcad32.exe
There is no 32-bit edition of Windows XP on the XP Pro x64 media.
http://support.microsoft.com/kb/942976/en-us
A 64-bit version of the Microsoft Windows operating system includes the following versions of the Microsoft Open Database Connectivity (ODBC) Data Source Administrator tool (Odbcad32.exe):
• The 32-bit version of the Odbcad32.exe file is located in the %systemdrive%\Windows\SysWoW64 folder.
• The 64-bit version of the Odbcad32.exe file is located in the %systemdrive%\Windows\System32 folder.
The Odbcad32.exe file displays the following types of data source names (DSNs):
• System DSNs
• User DSNs
Wednesday, December 21, 2011
Using Excel in a Windows service on a Windows 2008 64-bit machine
Received the following error message when trying to open an Excel file from a Windows service on 2008 R2 64-bit:
exception from hresult 0x800a03ec excel
Solution
1. Create the following 2 folders and make sure the service account has R/W access:
C:\Windows\System32\config\systemprofile\Desktop
C:\Windows\SysWOW64\config\systemprofile\Desktop
2. For any Automation client to be able to access the VBA object model programmatically, the user running the code must explicitly grant access. To turn on access, the user must follow these steps.
Office 2003 and Office XP
- Open the Office 2003 or Office XP application in question. On the Tools menu, click Macro, and then click Security to open the Macro Security dialog box.
- On the Trusted Sources tab, click to select the Trust access to Visual Basic Project check box to turn on access.
- Click OK to apply the setting. You may need to restart the application for the code to run properly if you automate from a Component Object Model (COM) add-in or template.
Office 2007
- Open the 2007 Microsoft Office system application in question. Click the Microsoft Office button, and then click Application Options.
- Click the Trust Center tab, and then click Trust Center Settings.
- Click the Macro Settings tab, click to select the Trust access to the VBA project object model check box, and then click OK.
- Click OK.
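For service accounts that never log on interactively, the same "Trust access" option can be set in the registry. The sketch below assumes Excel 2007 (version key 12.0) and the per-user hive; adapt both to your setup:

```reg
Windows Registry Editor Version 5.00

; "Trust access to the VBA project object model" for Excel 2007
[HKEY_CURRENT_USER\Software\Microsoft\Office\12.0\Excel\Security]
"AccessVBOM"=dword:00000001
```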
Tuesday, December 6, 2011
Power Shell - equivalent of a Windows CMD or MS-DOS batch file
Run a PowerShell script
A PowerShell script should be saved with a .ps1 extension, e.g. MyScript.ps1.
Before running any scripts on a new PowerShell installation, you must first set an appropriate execution policy, e.g. Set-ExecutionPolicy RemoteSigned
There are two ways to run a PowerShell script.
The most common (default) way to run a script is by calling it:
PS C:\> & "C:\Belfry\My first Script.ps1"
If the path does not contain any spaces, then you can omit the quotes and the '&' operator
PS C:\> C:\Belfry\Myscript.ps1
If the script is in the current directory, you must indicate this using .\
PS C:\> .\Myscript.ps1
When you invoke a script using the syntax above, variables and functions defined in the script will disappear when the script ends.1
Dot Sourcing
When you dot source a script, all variables and functions defined in the script will persist even when the script ends.
Run a script by dot-sourcing it:
PS C:\> . "C:\Belfry\My first Script.ps1"
Dot-sourcing a script in the current directory:
PS C:\> . .\Myscript.ps1
The System Path
If you run a script (or even just enter a command) without specifying the fully qualified path name, PowerShell will search for it as follows:
Firstly it will look at currently defined aliases, then currently defined functions and lastly commands located in the system path.
1 unless they are explicitly defined as globals: Function SCOPE:GLOBAL or Filter SCOPE:GLOBAL or Set-Variable -scope "Global"
# ------------------------------------------------------------------------------
function writelog
{
param([string]$LogFile, [string]$data)
Write-Host $data
# append to the log file only when a path was supplied
if ($LogFile -ne "")
{
$data >> $LogFile
}
}
# ------------------------------------------------------------------------------
function ZIPFolder
{
param( [string]$sourcefolder, [string]$outputfolder, [int]$retention, [string]$LogFolder )
$CompareDate=(Get-Date).AddDays(-$retention)
writelog $LogFolder ""
writelog $LogFolder "-- Search file old or equal this date: $CompareDate"
$a = Get-ChildItem -recurse $sourcefolder | where-object {$_.LastWriteTime -le $CompareDate}
foreach($x in $a)
{
writelog $LogFolder " evaluating file: $x"
# try/catch requires PowerShell 2.0 or later
try
{
# $y = ((Get-Date) - $x.CreationTime).Days
$y = ((Get-Date) - $x.LastWriteTime).Days
# if ($y -gt $retention -and $x.PsISContainer -ne $True)
if ($x.PsISContainer -ne $True)
{
# $FileDate =Get-Date -format "dd-MMM-yyyy"
# $FileDate=($x.LastWriteTime).tostring("yyyy-mm-dd")
$FileDate=($x.LastWriteTime).tostring("yyyy-MM-dd")
$outputFile="$outputfolder$FileDate-output.zip"
writelog $LogFolder " Start ZIP file $x to package $outputFile"
$x |ZIPFile $outputFile
writelog $LogFolder "      deleting file: $x"
# $x.delete()   # NOTE: the actual delete is commented out; the log lines below assume it ran
writelog $LogFolder "      Finished ZIP $x --> $outputFile, source file deleted"
writelog $LogFolder ""
}
}
catch
{
writelog $LogFolder $_.Exception.ToString()
}
}
writelog $LogFolder "-- END Search file old or equal this date: $CompareDate"
}
# ------------------------------------------------------------------------------
function ZIPFile
{
param([string]$zipfilename)
if(-not (test-path($zipfilename)))
{
set-content $zipfilename ("PK" + [char]5 + [char]6 + ("$([char]0)" * 18))
(dir $zipfilename).IsReadOnly = $false
}
$shellApplication = new-object -com shell.application
$zipPackage = $shellApplication.NameSpace($zipfilename)
# $Path=$zipfilename.Remove($zipfilename.LastIndexOf("\")+1)
foreach($file in $input)
{
$zipPackage.CopyHere($file.FullName,0x16)
Start-sleep -milliseconds 500
}
}
# ------------------------------------------------------------------------------
1. Create a New Zip
function New-Zip
{
param([string]$zipfilename)
set-content $zipfilename ("PK" + [char]5 + [char]6 + ("$([char]0)" * 18))
(dir $zipfilename).IsReadOnly = $false
}
usage: new-zip c:\demo\myzip.zip
2. Add files to a zip via a pipeline
function Add-Zip
{
param([string]$zipfilename)
if(-not (test-path($zipfilename)))
{
set-content $zipfilename ("PK" + [char]5 + [char]6 + ("$([char]0)" * 18))
(dir $zipfilename).IsReadOnly = $false
}
$shellApplication = new-object -com shell.application
$zipPackage = $shellApplication.NameSpace($zipfilename)
foreach($file in $input)
{
$zipPackage.CopyHere($file.FullName)
Start-sleep -milliseconds 500
}
}
usage: dir c:\demo\files\*.* -Recurse | add-Zip c:\demo\myzip.zip
3. List the files in a zip
function Get-Zip
{
param([string]$zipfilename)
if(test-path($zipfilename))
{
$shellApplication = new-object -com shell.application
$zipPackage = $shellApplication.NameSpace($zipfilename)
$zipPackage.Items() | Select Path
}
}
usage: Get-Zip c:\demo\myzip.zip
4. Extract the files from the zip
function Extract-Zip
{
param([string]$zipfilename, [string] $destination)
if(test-path($zipfilename))
{
$shellApplication = new-object -com shell.application
$zipPackage = $shellApplication.NameSpace($zipfilename)
$destinationFolder = $shellApplication.NameSpace($destination)
$destinationFolder.CopyHere($zipPackage.Items())
}
}
usage: extract-zip c:\demo\myzip.zip c:\demo\destination
So, how do we package the Vista Sidebar Gadget?
dir <path_to_gadget_files> | add-Zip <path_to_gadget_zip>
Rename-Item <path_to_gadget_zip> <path_to_gadget_zip>.Gadget
Friday, October 28, 2011
TFS Command Line
tf checkin /author:mcole04 /recursive
tf workspaces /server:tfs.bmogc.net /owner:mcole04
tf workspace IMATBCCWDVAPP03;mcole04
tf workspace /delete OCDT70302103;mcole04 /server:tfs.bmogc.net
tf undo /server:tfs.bmogc.net /workspace:OCDT70302103;mcole04 /recursive $/
tf undo /server:tfs.bmogc.net /workspace:IMATBCCWDVSCH01;mcole04 /recursive $/
tf undo /server:tfs.bmogc.net /workspace:IMATBCCWDVAPP03;mcole04 /recursive $/
tf destroy $/<team project>/<branch dir> /startcleanup /noprompt [/collection:uri]
tf shelvesets /owner:mcole04
$/Horseshoe/AVTLAPP2-DEV-PCGIBG
tf checkin /server:tfs.bmogc.net /shelveset:IMATBCCWDVAPP03 /author:mcole04 $/AVTLAPP2 /validate
tf checkin /workspace:OCDT70302103;mcole04 /author:mcole04 /recursive $/AVTLAPP2
Thursday, October 13, 2011
Monday, September 26, 2011
Wednesday, September 21, 2011
TFS Branching
What happens if:
- you and your team are working on a new set of features and the customer wants a change to his current version?
- you are working on two features and the customer decides to abandon one of them?
- you have two teams working on different feature sets and their changes start interfering with each other?
- I just use labels instead of branches?
- A key benefit of branching for development is to isolate changes from the stable Main branch; branching adds sanity more than it adds complexity. The primary cost is the effort to do merges and resolve conflicts. The key benefit is that you keep a stable code base in Main and accept changes into Main only after they pass quality gates.
- Branch the WHOLE “Main” line. Branching only parts of your code can make integration a nightmare.
Some even go as far as to add the environments used; I prefer to think of deploying to environments (such as from Main to QA) rather than branching for QA.
- Always do a Forward Integration from Main into the Dev branch before you do a Reverse Integration from the Dev branch back into Main.
- After branching Main to Release, we generally recommend not doing any subsequent merging (FI) from Main into the Release branch.
- In TFS, labels are not immutable. That does not mean they are not useful, but labels do not provide a good development isolation mechanism. Branching allows separate code sets to evolve separately (e.g. Current with hotfixes, and vNext with new development); labels do not help here. Labels are also not auditable, so if a dispute were raised by the customer you could not produce a verifiable version of the source code for an independent party to check.
- Branching is a tool; it is not a silver bullet. Don’t overuse it, and avoid “anti-patterns” where possible. There is a happy medium between no-branching and too-much-branching.
Tools:
Team Foundation Server Power Tools v1.2 Attrice Team Foundation SideKicks
Reference:
Visual Studio TFS Branching Guide 2010
Guidance: A Branching strategy for Scrum Teams
SSW Rules to Better Source Control with TFS
When should I use Areas in TFS instead of Team Projects
Wednesday, August 31, 2011
The breakpoint will not currently be hit. No symbols have been loaded for this document.
- While debugging in Visual Studio, click on Debug > Windows > Modules. The IDE will dock a Modules window, showing all the modules that have been loaded for your project.
- Look for your project's DLL, and check the Symbol Status for it.
- If it says Symbols Loaded, then you're golden. If it says something like Cannot find or open the PDB file, right-click on your module, select Load Symbols, and browse to the path of your PDB.
If the symbols still will not load:
- stop the debugger
- close the IDE
- close the hosting application
- nuke the obj and bin folders
- restart the IDE
- rebuild the project
- go through the Modules window again
Wednesday, August 24, 2011
Load Text File into SQL Server
SQL Bulk insert
ADO.NET SQLBulkCopy
In ADO.NET, SqlBulkCopy is the object that helps you perform a bulk copy. You can use a DataReader or DataTable as the source data store (you can easily load data from a SQL database, Access database, XML or ... into these objects) and copy the rows to a destination table in the database.
Copying only updated rows, Mapping Columns
Using a DataReader - most efficient way to bulk copy data between SQL Servers using .NET
SQL Server Integration Services
Running SSIS package programmatically
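SqlBulkCopy itself is .NET-specific, but the pattern it implements — stream rows from a source reader and insert them into a destination table in one bulk call, instead of a round trip per row — can be sketched in a few lines. The following Python/sqlite3 stand-in uses made-up staging/target tables:

```python
import sqlite3

# sqlite3 stand-in for the SqlBulkCopy pattern: read from a source,
# then bulk-insert into a destination table with one executemany call.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE staging (id INTEGER, name TEXT)")
src.executemany("INSERT INTO staging VALUES (?, ?)", [(1, "a"), (2, "b")])

dst = sqlite3.connect(":memory:")
dst.execute("CREATE TABLE target (id INTEGER, name TEXT)")

rows = src.execute("SELECT id, name FROM staging").fetchall()          # plays the DataReader role
dst.executemany("INSERT INTO target (id, name) VALUES (?, ?)", rows)   # plays the WriteToServer role
dst.commit()

print(dst.execute("SELECT COUNT(*) FROM target").fetchone()[0])  # → 2
```

The column list in the destination INSERT plays the same role as SqlBulkCopy's column mappings: it decides which source column lands in which destination column.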
Wednesday, August 17, 2011
dynamic sql - From Clause Table name Variable
Create procedure s_ProcTable
@TableName varchar(128)
as
declare @sql varchar(4000)
select @sql = 'select rows = count(*) from [' + @TableName + ']'
exec (@sql)
go
Executing the procedure now returns the row count for the given table.
Note the [] around the name in case it contains characters that are invalid in an unquoted identifier.
You may also have to deal with the owner (schema) prefix.
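The bracket-quoting rule can be sketched outside T-SQL as well. Here quotename is an illustrative Python helper mirroring the behaviour of T-SQL's built-in QUOTENAME function:

```python
def quotename(identifier: str) -> str:
    # Mirror T-SQL's QUOTENAME(): wrap the identifier in brackets and
    # escape an embedded ']' by doubling it.
    return "[" + identifier.replace("]", "]]") + "]"

print(quotename("My Table"))  # → [My Table]
```

Inside the stored procedure itself, the built-in QUOTENAME(@TableName) is the safer choice.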
Using EXECUTE 'tsql_string' with a variable
The following example shows how EXECUTE handles dynamically built strings that contain variables. This example creates the tables_cursor cursor to hold a list of all user-defined tables in the AdventureWorks2008R2 database, and then uses that list to rebuild all indexes on the tables.
USE AdventureWorks2008R2;
GO
DECLARE tables_cursor CURSOR
FOR
SELECT s.name, t.name
FROM sys.objects AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
WHERE t.type = 'U';
OPEN tables_cursor;
DECLARE @schemaname sysname;
DECLARE @tablename sysname;
FETCH NEXT FROM tables_cursor INTO @schemaname, @tablename;
WHILE (@@FETCH_STATUS <> -1)
BEGIN;
EXECUTE ('ALTER INDEX ALL ON ' + @schemaname + '.' + @tablename + ' REBUILD;');
FETCH NEXT FROM tables_cursor INTO @schemaname, @tablename;
END;
PRINT 'The indexes on all tables have been rebuilt.';
CLOSE tables_cursor;
DEALLOCATE tables_cursor;
GO
Tuesday, August 9, 2011
Installation Package Tools
Visual Studio Installer
Will be discontinued after Visual Studio 2012; a new InstallShield (Limited Edition) is introduced as the replacement.
The WiX distribution includes Votive, a Visual Studio add-in that allows creating and building WiX setup projects using the Visual Studio IDE. Votive supports syntax highlighting and IntelliSense for .WXS source files and adds a WiX setup project type to Visual Studio.
Installing it can make the VS IDE extremely slow when editing files.
Thursday, August 4, 2011
Unit Test Best Practise
The Test Driven Development (TDD) process is a robust way of designing software components (“units”) iteratively so that their behaviour is specified through unit tests.
Unit tests are also valuable when you’re refactoring, i.e., restructuring a unit’s code without meaning to change its behaviour. In this case, unit tests can often tell you if the unit’s behaviour has changed.
Unit Test in Visual Studio 2010
Walkthrough: Creating and Running Unit Tests
Walkthrough: Run Tests and View Code Coverage
Walkthrough: Using the Command-line Test Utility
Walkthrough: Create And Run Unit Tests As Part of a Team Build
MSTest Issues in 2008:
1. MSTest has limited support for parameterized tests, and what support exists is cumbersome and constraining.
2. MSTest is tied to the current release of VS, so you have to wait for the next full release of VS to get new unit-testing features, whereas with NUnit and the like you get new features and bug fixes quickly.
3. There are limitations on how third-party tools like TeamCity can work with MSTest.
| Goal | Strongest technique |
| Finding bugs (things that don’t work as you want them to) | Manual testing (sometimes also automated integration tests) |
| Detecting regressions (things that used to work but have unexpectedly stopped working) | Automated integration tests (sometimes also manual testing, though time-consuming) |
| Designing software components robustly | Unit testing (within the TDD process) |
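As a minimal illustration of behaviour pinned down by a unit test surviving refactoring, here is a sketch with Python's unittest standing in for MSTest/NUnit (reverse_words is a made-up unit):

```python
import unittest

def reverse_words(sentence: str) -> str:
    # The unit under test (a made-up example): reverse the word order.
    return " ".join(reversed(sentence.split()))

class ReverseWordsTests(unittest.TestCase):
    # These tests specify the unit's behaviour; if a later refactoring
    # of reverse_words changes that behaviour, the suite fails.
    def test_two_words(self):
        self.assertEqual(reverse_words("hello world"), "world hello")

    def test_single_word_is_unchanged(self):
        self.assertEqual(reverse_words("hello"), "hello")

# Run the suite programmatically (same effect as `python -m unittest`).
suite = unittest.TestLoader().loadTestsFromTestCase(ReverseWordsTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # → True
```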
Schedule Task Table Design
CREATE TABLE [dbo].[Schedule](
[ScheduleID] [int] NOT NULL,[ScheduleTaskID] [int] NULL,
[ScheduleFrequency] [varchar](20) NOT NULL, /*Once, Daily, Weekly, Biweekly, Monthly, Yearly*/
[ScheduleStatus] [varchar](50) NULL, /*A-Enabled, X-Disabled*/
[ScheduleStart] [smalldatetime] NULL, /*Full value used by Once and Biweekly, Time part only used by others*/
[ScheduleEnd] [smalldatetime] NULL, /**/
[MonthWeekDay] [smallint] NULL, /*This will override Start and End column*/
[LastModifyUser] [varchar](50) NULL,
[LastModifyDate] [smalldatetime] NULL,
CONSTRAINT [PK_Schedule] PRIMARY KEY CLUSTERED
( [ScheduleID] ASC)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
CREATE TABLE [dbo].[ScheduleException](
[ScheduleID] [int] NOT NULL,[ExceptionDateStart] [smalldatetime] NOT NULL,
[ExceptionDateEnd] [smalldatetime] NOT NULL
) ON [PRIMARY]
CREATE TABLE [dbo].[ScheduleExecution](
[ScheduleID] [int] NOT NULL,[ScheduleTaskID] [varchar](50) NOT NULL,
[ExecuteStart] [datetime] NOT NULL,
[ExecuteEnd] [datetime] NULL,
[ExecuteStatus] [varchar](50) NOT NULL,
[RequestUser] [varchar](50) NULL,
[RequestTime] [datetime] NULL
) ON [PRIMARY]
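Illustrative logic for interpreting a Schedule row (a hypothetical helper; Monthly, Yearly and MonthWeekDay handling are omitted, and the column semantics are taken from the comments above):

```python
from datetime import datetime, timedelta

def next_run(frequency: str, schedule_start: datetime, after: datetime) -> datetime:
    # Hypothetical interpreter for the ScheduleFrequency column:
    # return the first scheduled time strictly after 'after'.
    if frequency == "Once":
        return schedule_start
    step = {"Daily": timedelta(days=1),
            "Weekly": timedelta(weeks=1),
            "Biweekly": timedelta(weeks=2)}[frequency]
    run = schedule_start
    while run <= after:
        run += step
    return run

print(next_run("Daily", datetime(2011, 8, 1, 2, 0), datetime(2011, 8, 3, 12, 0)))
# → 2011-08-04 02:00:00
```

Rows in ScheduleException would then be applied on top, skipping any computed run that falls inside an exception window.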
1). Read "Developing Time-Oriented Database Applications in SQL" by Richard Snodgrass (http://www.cs.arizona.edu/people/rts/tdbbook.pdf). This is a free PDF book which is perhaps the best resource on developing temporal databases.
2). Review some existing data models that relate to scheduling, like:
http://www.databaseanswers.org/data_models/services_job_scheduling/index.htm
http://www.databaseanswers.org/data_models/hairdressers/index.htm
Look at these resources to see if they provide any ideas.
http://code.msdn.microsoft.com/SQLExamples/Wiki/View.aspx?title=FirstAvailableTimeslot&referringTitle=Home
Tibor's article on working with date, time, and datetime values is excellent.
http://www.karaszi.com/SQLServer/info_datetime.asp
Eralper's discussion on Calendar tables is worth reading:
http://www.kodyaz.com/articles/sql-server-dates-table-using-tsql-cte-calendar-table.aspx
Another discussion of using a Calendar table:
http://sqlserver2000.databases.aspfaq.com/why-should-i-consider-using-an-auxiliary-calendar-table.html
Wednesday, August 3, 2011
Copy database from sql 2008 to sql express
You can NOT use the Copy Database Wizard when the destination server is SQL Server Express Edition.
Use backup and restore instead, but keep in mind the database size must be less than 4 GB.
Try copying with DB scripts, Database >> Tasks >> Generate Scripts >>
Refer this link for additional info
http://blog.sqlauthority.com/2009/07/29/sql-server-2008-copy-database-with-data-generate-t-sql-for-inserting-data-from-one-table-to-another-table/
Try using DATABASE publishing wizard to generate scripts with schema and data.
http://www.microsoft.com/downloads/details.aspx?FamilyId=56E5B1C5-BF17-42E0-A410-371A838E570A&displaylang=en
Thursday, July 14, 2011
Interactive Voice Response system
The IVR system uses a Dialogic voice board; a COM component developed in MFC communicates with the Dialogic SDK driver, and a VB6 program provides the user interface and manages all the customized voice flow charts.
HSBC stock trading web service
Web page front end to capture orders and send them to MSMQ, written in ASP.NET
A scheduled task picks orders up from MSMQ and places them through the HSBC web service
A scheduled task checks order status and updates the system
ESPP Enrollment Process
The project includes web pages written in ASP.NET, an IVR system in VB6 and a scheduled service in VB.NET. The back-end database is SQL Server 2000.
Tuesday, June 28, 2011
Monday, June 20, 2011
Windows Communication Foundation
Windows Communication Foundation (WCF) is designed for Service Oriented Architecture
WCF’s most important aspects:
[ServiceContract]
class RentalReservations
{
[OperationContract]
public bool Check(int vehicleClass, int location, string dates)
{
bool availability;
// code to check availability goes here
return availability;
}
public int GetStats()
{
int numberOfReservations;
// code to get the current reservation count goes here
return numberOfReservations;
}
}
Using explicit interfaces like this is slightly more complicated, but it allows more flexibility. For example, a class can implement more than one interface, which means that it can also implement more than one service contract.
using System.ServiceModel;
[ServiceContract]
interface IReservations
{
[OperationContract]
bool Check(int vehicleClass, int location, string dates);
}
class RentalReservations : IReservations
{
public bool Check(int vehicleClass, int location, string dates)
{
bool availability;
// logic to check availability goes here
return availability;
}
public int GetStats()
{
int numberOfReservations;
// logic to determine reservation count goes here
return numberOfReservations;
}
}
Hosting a Service Using IIS or WAS
reserve.svc
<%@ ServiceHost Language="C#" Service="RentalReservations" %>
WCF’s most important aspects:
- Unification of the original .NET Framework communication technologies
- Interoperability with applications built on other technologies
- Explicit support for service-oriented development.
- ASMX, also called ASP.NET Web Services, would be an option for communicating with the Java to achieve cross-vendor interoperability.
- .NET Remoting is a natural choice for .NET-to-.NET communication
- Distributed Transactions
- Web Services Enhancements (WSE) might be used along with ASMX to communicate with the Java EE-based reservation application and with the partner applications.
- System.Messaging, which provides a programming interface to Microsoft Message Queuing (MSMQ), could be used to communicate with Windows-based partner applications that weren’t always available. The persistent queuing that MSMQ provides is typically the best solution for intermittently connected applications.
- System.Net might be used to communicate with partner applications or perhaps in other ways. Representational State Transfer (REST)
using System.Runtime.Serialization;
[DataContract]
struct ReservationInfo {
[DataMember] public int vehicleClass;
[DataMember] public int location;
[DataMember] public string dates;
}
- IIS-hosted WCF services can only be accessed using SOAP over HTTP. No other transport protocols are supported.
- Although WAS doesn’t require a Web server to be installed on the system, WCF services hosted in IIS obviously do.
Defining Endpoints
- An address indicating where this endpoint can be found. Addresses are URLs that identify a machine and a particular endpoint on that machine.
- A binding determining how this endpoint can be accessed. The binding determines what protocol combination can be used to access this endpoint along with other things, such as whether the communication is reliable and what security mechanisms can be used.
- A contract name indicating which service contract this WCF service class exposes via this endpoint. A class marked with ServiceContract that implements no explicit interfaces, such as RentalReservations in the first example shown earlier, can expose only one service contract. In this case, all its endpoints will expose the same contract. If a class explicitly implements two or more interfaces marked with ServiceContract, however, different endpoints can expose different contracts, each defined by a different interface.
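Putting the three elements together, an endpoint might be declared in the service's configuration file like this (the address is illustrative, and the names assume the RentalReservations/IReservations example above):

```xml
<system.serviceModel>
  <services>
    <service name="RentalReservations">
      <!-- address + binding + contract = one endpoint -->
      <endpoint address="http://localhost:8000/reservations"
                binding="basicHttpBinding"
                contract="IReservations" />
    </service>
  </services>
</system.serviceModel>
```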
Creating a WCF Client
Creating a proxy requires knowing what contract is exposed by the target endpoint, and then using the contract’s definition to generate the proxy. In WCF, this process can be performed using either Visual Studio or the command-line svcutil tool.
Windows Communication Foundation
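A client can also skip proxy generation entirely. As a minimal sketch, ChannelFactory&lt;T&gt; builds a proxy directly from the contract interface; the interface matches the IReservations example shown earlier, and the address is illustrative:

```csharp
using System;
using System.ServiceModel;

[ServiceContract]
interface IReservations
{
    [OperationContract]
    bool Check(int vehicleClass, int location, string dates);
}

class Client
{
    static void Main()
    {
        // Build a typed proxy from the contract, binding, and address
        var factory = new ChannelFactory<IReservations>(
            new BasicHttpBinding(),
            new EndpointAddress("http://localhost:8000/reservations"));
        IReservations proxy = factory.CreateChannel();

        bool available = proxy.Check(2, 14, "2011-12-24");
        Console.WriteLine(available);

        factory.Close();
    }
}
```

The binding and address on the client must match those of the target endpoint; svcutil simply automates generating this kind of proxy code from the service's metadata.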
Workflow in the .NET Framework 4
Goal
creating unified application logic
creating scalable applications with simple state management.
Advantage
One obvious advantage of this approach is that the workflow doesn’t hang around in memory blocking a thread and using up a process while it’s waiting for input.
Another advantage is that a persisted workflow can potentially be re-loaded on a machine other than the one it was originally running on. Because of this, different parts of the workflow might end up running on different systems.
Other advantages: it can make coordinating parallel work easier, the runtime provides automatic tracking of the workflow’s execution, and the main control flow of a workflow can be assembled graphically.
Overview
Base Activity Library (BAL), custom activities
Custom activities can be written directly in code, using C# or Visual Basic or another language. They can also be created by combining existing activities, which allows some interesting options. For example, it might be possible for less technical people to create WF applications using these pre-packaged chunks of custom functionality.
A workflow's state and control flow are typically described in eXtensible Application Markup Language (XAML), while custom activities can be written in code.
The workflow model in the .NET Framework 4 is not compatible with the earlier model in version 3.5.
WF workflows can run in pretty much any process. You’re free to create your own host, even replacing some of WF’s basic services (like persistence) if you’d like.
A simpler option is to host a WF workflow in a worker process provided by Internet Information Services (IIS). While this works, it provides only a bare-bones solution. Microsoft is providing a technology code-named “Dublin”. Implemented as extensions to IIS and the Windows Process Activation Service (WAS), a primary goal of “Dublin” is to make IIS and WAS more attractive hosts for workflow services.
- Sequence: Executes activities in sequence, one after another. The sequence can contain If activities, While activities, and other kinds of control flow. It’s not possible to go backwards, however—execution must always move forward.
- Flowchart: Executes activities one after another, like a Sequence, but also allows control to return to an earlier step. This more flexible approach, new in the .NET Framework 4 release of WF, is closer both to how real processes work and to the way most of us think.
- Implementing services that do parallel work is straightforward: just drop activities into a Parallel activity.
- Tracking is provided by the runtime.
- Depending on the problem domain, it might be possible to create reusable custom activities for use in other services.
- The workflow can be created graphically, with the process logic directly visible in the WF workflow designer.
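The points above can be sketched in code, though real workflows are usually authored as XAML in the designer. This minimal WF4 example assembles a Sequence in C# and runs it synchronously:

```csharp
using System.Activities;
using System.Activities.Statements;

class Program
{
    static void Main()
    {
        // A Sequence executes its child activities one after another
        Activity workflow = new Sequence
        {
            Activities =
            {
                new WriteLine { Text = "Checking availability..." },
                new WriteLine { Text = "Reservation recorded." }
            }
        };

        // WorkflowInvoker runs the workflow on the calling thread
        // and blocks until it completes
        WorkflowInvoker.Invoke(workflow);
    }
}
```

For long-running workflows that must persist and resume, WorkflowApplication is used instead of WorkflowInvoker.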
Microsoft Windows Workflow Foundation
Monday, May 16, 2011
Wednesday, March 9, 2011
VB.Net VS C#
- XML Literals
- Optional parameters
- Catch filters
- Static variables (not Shared)
- Handles events
- RaiseEvent
- With
Wednesday, February 23, 2011
Package "Visual Web Developer Trident designer package " has failed to load properly.
This usually happens when a wrong gdiplus.dll is on the path.
This may lead to design mode hanging in ASP.NET web applications.
We can solve this issue by copying gdiplus.dll from "Framework\v2.0.xxx" to "Program Files\Visual Studio 8\Common7\Packages".
Thursday, January 20, 2011
Table Variable VS Temporary Table
Table variables were introduced in SQL Server 2000. They can offer performance benefits and flexibility compared to temporary tables, and the server cleans them up automatically afterwards.
Table Variable
1. Scoped to the stored procedure, batch, or user-defined function; no need to clean up.
2. Cannot be used as an input or output parameter.
3. Uses fewer resources, with less locking and logging overhead.
4. Supports constraints, just as a temporary table does:
DECLARE @MyTable TABLE
(
ProductID int UNIQUE,
Price money CHECK(Price < 10.0)
)
5. You cannot create a non-clustered index on a table variable, unless the index is a side effect of a PRIMARY KEY or UNIQUE constraint on the table
6. Using a temporary table inside of a stored procedure may result in additional recompilations of the stored procedure. Table variables can often avoid this recompilation hit. For more information on why stored procedures may recompile, see Microsoft Knowledge Base article 243586 (INF: Troubleshooting Stored Procedure Recompilation).
When to use Temporary Table
1. Nested stored procedures that need to share the result set.
2. You need transaction rollback support.
3. Large result sets, or when explicit indexes are required to improve query performance.
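As a sketch of points 1 and 3, a temporary table supports an explicit non-clustered index, which a table variable does not (table and column names here are illustrative):

```sql
CREATE TABLE #Orders
(
    OrderID int PRIMARY KEY,
    CustomerID int,
    Total money
);

-- Not possible on a table variable (outside of PRIMARY KEY/UNIQUE constraints)
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID ON #Orders (CustomerID);

-- A local temp table is also visible to stored procedures
-- called from this batch, unlike a table variable.
-- Changes to #Orders participate in transaction rollback;
-- changes to a table variable would survive a ROLLBACK.

DROP TABLE #Orders;
```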