Showing posts with label sql2000. Show all posts

Sunday, March 25, 2012

Determine ANSI_PADDING for a table?

1) How do I determine the ANSI_PADDING setting for an existing table? (SQL
2000 or SQL 2005.)
OBJECTPROPERTY doesn't show ANSI_PADDING, although it shows ANSI_NULLS.
2) How do I change the ANSI_NULLS or ANSI_PADDING setting for an existing
table?
These settings are just set with a seemingly "global" SET statement;
apparently the environment (or something in the ether) is set with this
setting, and then that is used when tables are created. The same is true
of options like NUMERIC_ROUNDABORT.
Thanks.
David Walker
"DWalker" <none@.none.com> wrote in message
news:eWsmaohUIHA.4752@.TK2MSFTNGP05.phx.gbl...
> 1) How do I determine the ANSI_PADDING setting for an existing table?
> (SQL
> 2000 or SQL 2005.)
>
You can either use:
EXEC sp_help Foo (look at TrimTrailingBlanks)
or
SELECT [name], typestat
FROM syscolumns
WHERE id = OBJECT_ID('Foo')
ANSI_PADDING is ON when typestat is 2, and OFF when it is 0.
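If you prefer a single scalar check per column, COLUMNPROPERTY with the 'UsesAnsiTrim' property should also report the padding setting a column was created with (the table and column names here are just placeholders from the example below):

```sql
-- Returns 1 if ANSI_PADDING was ON when the column was created, 0 if OFF.
-- 'Foo' and 'col1' are placeholder names.
SELECT COLUMNPROPERTY(OBJECT_ID('Foo'), 'col1', 'UsesAnsiTrim') AS UsesAnsiTrim
```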

> 2) How do I change the ANSI_NULLS or ANSI_PADDING setting for an existing
> table?
>
You can use ALTER TABLE, setting the corresponding option before running the
ALTER statement. Here is an example:
SET ANSI_PADDING OFF
GO
CREATE TABLE Foo(col1 VARCHAR(10) NOT NULL)
GO
INSERT Foo VALUES ('Test ')
GO
SET ANSI_PADDING ON
GO
ALTER TABLE Foo ALTER COLUMN col1 VARCHAR(10) NOT NULL
GO
INSERT Foo VALUES ('Test ')
GO
SELECT '|' + col1 + '|'
FROM Foo
GO
DROP TABLE Foo
And here is a note from BOL:
"In a future version of SQL Server ANSI_PADDING will always be ON and any
applications that explicitly set the option to OFF will produce an error.
Avoid using this feature in new development work, and plan to modify
applications that currently use this feature."
HTH,
Plamen Ratchev
http://www.SQLStudio.com

Thursday, March 22, 2012

Determine #transaction per second per db

Whats the best method of determining #trans per sec per db in a SQL2000 Enterprise sp3? Thx in adv.
Depends on your view of what a transaction is, but how about monitoring the
Transactions/sec counter in Perfmon under SQLServer:Databases.
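On SQL 2000 the same counters are also exposed through master.dbo.sysperfinfo, so you can sample them from T-SQL. Note that per-second counters are stored as cumulative counts, so you have to take two samples and divide the difference by the elapsed time. A rough sketch (the database name is a placeholder and the 10-second window is arbitrary):

```sql
-- Sample the cumulative Transactions/sec counter twice and compute the rate.
DECLARE @v1 bigint, @v2 bigint

SELECT @v1 = cntr_value
FROM master.dbo.sysperfinfo
WHERE counter_name = 'Transactions/sec'
  AND instance_name = 'MyDatabase'   -- placeholder database name

WAITFOR DELAY '00:00:10'             -- arbitrary sampling window

SELECT @v2 = cntr_value
FROM master.dbo.sysperfinfo
WHERE counter_name = 'Transactions/sec'
  AND instance_name = 'MyDatabase'

SELECT (@v2 - @v1) / 10.0 AS TransactionsPerSec
```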
--
Andrew J. Kelly SQL MVP
"LIZ" <anonymous@.discussions.microsoft.com> wrote in message
news:2C79D01A-6D04-4D5F-8FE5-2A6C0E1795DA@.microsoft.com...
> Whats the best method of determining #trans per sec per db in a SQL2000
Enterprise sp3? Thx in adv.|||You could also use the Transactions events in SQL Profiler, which also
captures the database... You could save the trace into a SQL table and do a
query grouped by database
--
Wayne Snyder, MCDBA, SQL Server MVP
Computer Education Services Corporation (CESC), Charlotte, NC
www.computeredservices.com
(Please respond only to the newsgroups.)
I support the Professional Association of SQL Server (PASS) and its
community of SQL Server professionals.
www.sqlpass.org
"LIZ" <anonymous@.discussions.microsoft.com> wrote in message
news:2C79D01A-6D04-4D5F-8FE5-2A6C0E1795DA@.microsoft.com...
> Whats the best method of determining #trans per sec per db in a SQL2000
Enterprise sp3? Thx in adv.

Friday, March 9, 2012

Detach sql2000 msde database using SMO.

Hello,

I am trying to detach a database from an instance of msde 2000 using smo. When I call the Server.DetachDatabase function using my valid server object I get the following error.

"This method or property is accessible only while working against SQL Server 2005 or later."

Does anyone know if it is possible to detach an MSDE database using the .NET SMO objects?

What I am trying to do is detach the database and then re-attach it to an instance of SQL Express. I thought about just stopping the server and copying the .mdf file to a new location, but I cannot figure out how to stop and restart the server using SMO.

Thanks,

Oh, you don't have to go through all of that trouble; just use the Transfer class. Be sure to look at the information in MSDN regarding the Transfer class (specifically regarding its limitations). But going from a 2000 to a 2005 DB should be well within its capabilities. Use the overloaded constructor that takes an existing database object to get started.

Hope this helps.

Michael

|||I suspect this may be a bug that you are running into. Detaching a SQL Server 2000 Database is supported by SMO. Could you post a small repro source code here so I can take a look?|||

Here is the main part of the code. I get an exception on the detach function that tells me that this function is valid on sql2005 databases only. The server object is valid and connected and the function will work on a 2005 database.

Dim KBMSS As Server = Nothing
Dim svc As New ServerConnection

With svc
    .ApplicationName = Application.ProductName
    .ServerInstance = ServerName
    .StatementTimeout = 30
    .LoginSecure = False
    .Login = "sa"
    .Password = "kbmsa"
End With

KBMSS = New Server(svc)
KBMSS.DetachDatabase("KBM", False)
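As a workaround while the SMO issue is open, you could fall back to the SQL 2000 system procedures and run them over the same connection (for example via ServerConnection.ExecuteNonQuery). A sketch, using the database name from the snippet above; the file paths are placeholders:

```sql
-- Detach on the MSDE 2000 instance.
EXEC sp_detach_db @dbname = 'KBM'

-- After copying the files, attach on the SQL Express instance.
-- The paths below are placeholders; use the actual MDF/LDF locations.
EXEC sp_attach_db @dbname = 'KBM',
    @filename1 = 'C:\Data\KBM.mdf',
    @filename2 = 'C:\Data\KBM_log.ldf'
```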

|||

I've had a similar message when attempting to iterate through triggers in a database

connects =

m_srvconnSelectedServerConnection.ServerInstance = comboServer.Text;
m_srvconnSelectedServerConnection.LoginSecure = false;
m_srvconnSelectedServerConnection.Login = this.textUser.Text;
m_srvconnSelectedServerConnection.Password = this.textPassword.Text;
m_serverSelectedServer = new Server(m_srvconnSelectedServerConnection);

iterate triggers =

Database SelectedDatabase = m_serverSelectedServer.Databases[comboDB.Text];

foreach (Trigger trgTRG in SelectedDatabase.Triggers)
{
}

version of connected server reports =

?m_serverSelectedServer.Information.Version
{8.0.760}

cheers


Wednesday, March 7, 2012

Desperate with SQL2000 Log Shipping , Out Of Sync

After trial and error for several months, my SQL2000 LS finally works.

I gave myself a little celebration for that.

But after running well for a day, just one day!

This morning it gave me Out of Sync.

error messages:

failure /4305

activity : Load

[Microsoft SQL-DMO (ODBC SQLState: 42000)]
Error 4305: [Microsoft][ODBC SQL Server Driver][SQL Server]The log in this backup set
begins at LSN 7000000026200001, which is too late to apply to the database. An earlier
log backup that includes LSN 6000000015100001 can be restored.
[Microsoft][ODBC SQL Server Driver][SQL Server]RESTORE LOG is terminating abnormally.

activity : Copy

failure / 32

The process cannot access the file because it is being used by another process.

I just don't understand how the load & copy processes failed after working fine for a day.

Hi toni,

Are you using SQL 2000 Standard or Enterprise? In short are you using the Enterprise log shipping feature or carrying out log shipping through scripts?

sk

|||

Hi SK,

I'm using the SQL2000 Enterprise Edition feature, via the Log Shipping Wizard.

The database for this LS is about 40GB, so in this case we restore the database to the target server.

We did the restore using OVDP (OpenView Data Protector), and it worked fine for a day until it gave us Out of Sync.

any troubleshooting steps will be appreciated.

thx

|||

Not sure I can help you out on this, as I am using the Standard edition of SQL Server 2000. However, I was interested in log shipping (by script, that is) but failed to make it work, so I was curious to know about your setup...

I take it that your transaction log backups are created on a shared drive and then restored by the receiving (target) SQL Server. Could it be that the network connection was lost and it missed one of the logs in between?

This guess is based on the fact that it worked for a day, so there shouldn't be any reason why it fails unless there is a communication problem...

Hope someone else could point out the technicalities behind it...

sk

|||

//Could it be that the network connection was lost and it missed out on one of the logs in between?//

I'll observe that, but I wonder if the system itself already has a mechanism to react to such problems?

I'll try to reconfigure the LS and set the interval to 15 mins; let's see...

thx

|||

It failed again, but this time I think I found it.

There is another job running transaction log backups to a tape device, so the missing log backup went to tape.

rgds
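When log shipping reports an LSN gap like this, the backup history in msdb on the primary can usually show where the missing log backup went (for instance, to a tape job). A sketch of such a query; the database name is a placeholder:

```sql
-- List recent log backups for the shipped database, with the device each
-- one was written to. A backup whose device is a tape rather than the
-- log shipping share explains the missing LSN range.
SELECT bs.backup_start_date,
       bs.first_lsn,
       bs.last_lsn,
       bmf.physical_device_name
FROM msdb.dbo.backupset bs
JOIN msdb.dbo.backupmediafamily bmf
  ON bmf.media_set_id = bs.media_set_id
WHERE bs.database_name = 'MyShippedDB'   -- placeholder
  AND bs.type = 'L'                      -- log backups only
ORDER BY bs.backup_start_date
```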


Friday, February 17, 2012

Design problem

Hello. I'm using SQL2000, and I have a design problem.
I know that every time I make a primary key, sql server makes it by default
a clustered index. Since I have a large composite key in the table I don't
know if it's smart to leave it as a clustered index. This is the table:

CREATE TABLE [InputOutputItems] (
[FromStorage] [smallint] NOT NULL ,
[ToStorage] [smallint] NOT NULL ,
[DocumentTypeID] [int] NOT NULL ,
[DocumentNumber] [int] NOT NULL ,
[StorageDocYear] [int] NOT NULL ,
[ProductID] [int] NOT NULL ,
[SerialNumber] [varchar] (50) COLLATE Croatian_CI_AI NOT NULL ,
[PartnerID] [int] NULL ,
[Barcode] [varchar] (50) COLLATE Croatian_CI_AI NULL ,
[DaysOfExpiration] [int] NULL ,
[DateOfValidation] [datetime] NULL ,
[Row] [varchar] (20) COLLATE Croatian_CI_AI NULL ,
[Column] [varchar] (20) COLLATE Croatian_CI_AI NULL ,
[Level] [varchar] (20) COLLATE Croatian_CI_AI NULL ,
[UnitDimensionID] [int] NULL ,
[UnitPack] [decimal](18, 4) NULL ,
[TotalWeight] [decimal](18, 4) NULL ,
[PackageMachineID] [int] NOT NULL ,
[PackageEmployeeID] [int] NULL ,
[UserIDCreated] [int] NULL ,
[DateCreated] [datetime] NULL ,
[UserIDChanged] [int] NULL ,
[DateChanged] [datetime] NULL ,
[PaleteNumber] [int] NULL ,
[LinkedDocument] [int] NULL ,
CONSTRAINT [PK_InputOutputItems] PRIMARY KEY CLUSTERED
(
[FromStorage],
[ToStorage],
[DocumentTypeID],
[DocumentNumber],
[StorageDocYear],
[ProductID],
[SerialNumber],
[PackageMachineID]
) WITH FILLFACTOR = 90 ON [PRIMARY] ,
CONSTRAINT [FK_InputOutputItems_InputOutput] FOREIGN KEY
(
[FromStorage],
[ToStorage],
[DocumentTypeID],
[DocumentNumber],
[StorageDocYear]
) REFERENCES [InputOutput] (
[FromStorage],
[ToStorage],
[DocumentTypeID],
[DocumentNumber],
[StorageDocYear]
)
) ON [PRIMARY]

InputOutPutItems is actually an Item table for documents, here is the header
table for documents:

CREATE TABLE [InputOutput] (
[FromStorage] [smallint] NOT NULL ,
[ToStorage] [smallint] NOT NULL ,
[DocumentTypeID] [int] NOT NULL ,
[DocumentNumber] [int] NOT NULL ,
[StorageDocYear] [int] NOT NULL ,
[AppUserID] [smallint] NULL ,
[ManufactureIndent] [int] NULL ,
[DeliveryDate] [datetime] NULL ,
[OrgUnitID] [int] NULL ,
[TypeOfTransferID] [int] NULL ,
[Note] [varchar] (500) COLLATE Croatian_CI_AI NULL ,
[Status] [int] NULL ,
[UserIDCreated] [int] NULL ,
[DateCreated] [datetime] NULL ,
[UserIDChanged] [int] NULL ,
[DateChanged] [datetime] NULL ,
[TypeOfIndent] [char] (2) COLLATE Croatian_CI_AI NULL ,
[TMOrgUnitID] [smallint] NULL ,
CONSTRAINT [PK_InputOutput] PRIMARY KEY CLUSTERED
(
[FromStorage],
[ToStorage],
[DocumentTypeID],
[DocumentNumber],
[StorageDocYear]
) WITH FILLFACTOR = 90 ON [PRIMARY] ,
CONSTRAINT [FK_InputOutput_DocumentTypes] FOREIGN KEY
(
[DocumentTypeID]
) REFERENCES [DocumentTypes] (
[DocumentTypeID]
),
CONSTRAINT [FK_InputOutput_OrgUnits] FOREIGN KEY
(
[FromStorage]
) REFERENCES [OrgUnits] (
[OrgUnitID]
),
CONSTRAINT [FK_InputOutput_OrgUnits1] FOREIGN KEY
(
[ToStorage]
) REFERENCES [OrgUnits] (
[OrgUnitID]
)
) ON [PRIMARY]

These tabels are used for documents in a Warehouse Managament System with
barcode scanners.
Because of the way that the system works the primary keys cannot be any
smaller. I never use the whole
primary key in a where clause, I just use parts of it.
Should I make something else a clustered index, or leave the clustered index
on the primary key?

Thanks in advance.
Drazen Grabovac.
Grabi (drazen@.git.hr) writes:
> Hello. I'm using SQL2000, and I have a design problem. I know that every
> time I make a primary key, sql server makes it by default a clustered
> index. Since I have a large composite key in the table I don't know if
> it's smart to leave it as a clustered index. This is the table:

This question already has answers in microsoft.public.sqlserver.programming.
Please do not post the same question independently to several newsgroups,
as this can result in people wasting time on a problem that has
already been addressed.

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx

Design problem

Hello. I'm using SQL2000, and I have a design problem.
I know that every time I make a primary key, sql server makes it by default
a clustered index. Since I have a large composite key in the table I don't
know if it's smart to leave it as a clustered index. This is the table:
CREATE TABLE [InputOutputItems] (
[FromStorage] [smallint] NOT NULL ,
[ToStorage] [smallint] NOT NULL ,
[DocumentTypeID] [int] NOT NULL ,
[DocumentNumber] [int] NOT NULL ,
[StorageDocYear] [int] NOT NULL ,
[ProductID] [int] NOT NULL ,
[SerialNumber] [varchar] (50) COLLATE Croatian_CI_AI NOT NULL ,
[PartnerID] [int] NULL ,
[Barcode] [varchar] (50) COLLATE Croatian_CI_AI NULL ,
[DaysOfExpiration] [int] NULL ,
[DateOfValidation] [datetime] NULL ,
[Row] [varchar] (20) COLLATE Croatian_CI_AI NULL ,
[Column] [varchar] (20) COLLATE Croatian_CI_AI NULL ,
[Level] [varchar] (20) COLLATE Croatian_CI_AI NULL ,
[UnitDimensionID] [int] NULL ,
[UnitPack] [decimal](18, 4) NULL ,
[TotalWeight] [decimal](18, 4) NULL ,
[PackageMachineID] [int] NOT NULL ,
[PackageEmployeeID] [int] NULL ,
[UserIDCreated] [int] NULL ,
[DateCreated] [datetime] NULL ,
[UserIDChanged] [int] NULL ,
[DateChanged] [datetime] NULL ,
[PaleteNumber] [int] NULL ,
[LinkedDocument] [int] NULL ,
CONSTRAINT [PK_InputOutputItems] PRIMARY KEY CLUSTERED
(
[FromStorage],
[ToStorage],
[DocumentTypeID],
[DocumentNumber],
[StorageDocYear],
[ProductID],
[SerialNumber],
[PackageMachineID]
) WITH FILLFACTOR = 90 ON [PRIMARY] ,
CONSTRAINT [FK_InputOutputItems_InputOutput] FOREIGN KEY
(
[FromStorage],
[ToStorage],
[DocumentTypeID],
[DocumentNumber],
[StorageDocYear]
) REFERENCES [InputOutput] (
[FromStorage],
[ToStorage],
[DocumentTypeID],
[DocumentNumber],
[StorageDocYear]
)
) ON [PRIMARY]
The table is used for document items in a WMS system. I never use the whole
primary key in a WHERE clause, I just use parts of it.
Should I make something else a clustered index, or leave the clustered index
on the primary key?
Drazen Grabovac.
If you use EM (Enterprise Manager) and set the PK first, then by default a
clustered index will be created on the PK column(s). You can first create the
clustered index as desired and then set the PK.
When defining the clustered index, take into consideration how you will be
querying data. If you are going to have range queries, i.e. by dates, then
consider a clustered index on the relevant datetime column.
For single-row access SQL will use your PK, and for range queries the
clustered index.
In addition, you may create other nonclustered indexes as required to serve
data retrieval.
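To illustrate the point, you can declare the primary key NONCLUSTERED and put the clustered index on a range-query column instead. A minimal sketch; the table and column names are invented for the example:

```sql
CREATE TABLE Orders (
    OrderID    int      NOT NULL,
    CustomerID int      NOT NULL,
    OrderDate  datetime NOT NULL,
    CONSTRAINT PK_Orders PRIMARY KEY NONCLUSTERED (OrderID)
)

-- Cluster on the column used in range queries instead of the PK.
CREATE CLUSTERED INDEX IX_Orders_OrderDate ON Orders (OrderDate)
```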
"Grabi" <drazen@.git.hr> wrote in message news:dvjpor$5nq$1@.ss405.t-com.hr...
> Hello. I'm using SQL2000, and I have a design problem.
> I know that every time I make a primary key, sql server makes it by
> default a clustered index. Since I have a large composite key in the table
> I don't know if it's smart to leave it as a clustered index. This is the
> table:
> CREATE TABLE [InputOutputItems] (
> [FromStorage] [smallint] NOT NULL ,
> [ToStorage] [smallint] NOT NULL ,
> [DocumentTypeID] [int] NOT NULL ,
> [DocumentNumber] [int] NOT NULL ,
> [StorageDocYear] [int] NOT NULL ,
> [ProductID] [int] NOT NULL ,
> [SerialNumber] [varchar] (50) COLLATE Croatian_CI_AI NOT NULL ,
> [PartnerID] [int] NULL ,
> [Barcode] [varchar] (50) COLLATE Croatian_CI_AI NULL ,
> [DaysOfExpiration] [int] NULL ,
> [DateOfValidation] [datetime] NULL ,
> [Row] [varchar] (20) COLLATE Croatian_CI_AI NULL ,
> [Column] [varchar] (20) COLLATE Croatian_CI_AI NULL ,
> [Level] [varchar] (20) COLLATE Croatian_CI_AI NULL ,
> [UnitDimensionID] [int] NULL ,
> [UnitPack] [decimal](18, 4) NULL ,
> [TotalWeight] [decimal](18, 4) NULL ,
> [PackageMachineID] [int] NOT NULL ,
> [PackageEmployeeID] [int] NULL ,
> [UserIDCreated] [int] NULL ,
> [DateCreated] [datetime] NULL ,
> [UserIDChanged] [int] NULL ,
> [DateChanged] [datetime] NULL ,
> [PaleteNumber] [int] NULL ,
> [LinkedDocument] [int] NULL ,
> CONSTRAINT [PK_InputOutputItems] PRIMARY KEY CLUSTERED
> (
> [FromStorage],
> [ToStorage],
> [DocumentTypeID],
> [DocumentNumber],
> [StorageDocYear],
> [ProductID],
> [SerialNumber],
> [PackageMachineID]
> ) WITH FILLFACTOR = 90 ON [PRIMARY] ,
> CONSTRAINT [FK_InputOutputItems_InputOutput] FOREIGN KEY
> (
> [FromStorage],
> [ToStorage],
> [DocumentTypeID],
> [DocumentNumber],
> [StorageDocYear]
> ) REFERENCES [InputOutput] (
> [FromStorage],
> [ToStorage],
> [DocumentTypeID],
> [DocumentNumber],
> [StorageDocYear]
> )
> ) ON [PRIMARY]
> The tabel is used for document items in a WMS system. I never use the
> whole primary key in a where clause, I just use parts of it.
> Should I make somethig else a clustered index, or leave the clustered
> index as a primary key?
>
> Drazen Grabovac.
>|||Your design looks wrong.
Can you explain what a "type_id" means? An attribute can be a type or
an identifier, but NEVER both (see ISO-11179).
Can you give an example of a varying-length 50-character barcode?
Why is a user_id history being kept in this table? Where is the User
table that should have that kind of information?
What is "InputOutputItems" -- other than very vague.
Some of the other names also suggest LOTS of problems, which might be
why you have such a large key.|||Hello.
DocumentTypeID means ID, but it's named that way because it's a FOREIGN KEY
to this table:
CREATE TABLE [DocumentTypes] (
[DocumentTypeID] [int] NOT NULL ,
[AppUserID] [smallint] NULL ,
[Name] [varchar] (50) COLLATE Croatian_CI_AI NULL ,
[Description] [varchar] (500) COLLATE Croatian_CI_AI NULL ,
[OweInput] [int] NULL ,
[LookupInput] [int] NULL ,
[ActiveUntil] [datetime] NULL ,
[Label] [varchar] (20) COLLATE Croatian_CI_AI NULL ,
[ChargeTypeID] [int] NULL ,
[UserIDCreated] [int] NULL ,
[DateCreated] [datetime] NULL ,
[UserIDChanged] [int] NULL ,
[DateChanged] [datetime] NULL ,
CONSTRAINT [PK_DocumentTypes] PRIMARY KEY CLUSTERED
(
[DocumentTypeID]
) WITH FILLFACTOR = 90 ON [PRIMARY]
) ON [PRIMARY]
The remark about the Barcode is OK. It's a varying-length 50-character barcode
because we made a lot of changes to the structure of the barcode.
This is the user table:
CREATE TABLE [Users] (
[UserID] [MyKey] NOT NULL ,
[EmployeeID] [MyKey] NULL ,
[PartnerID] [MyKey] NULL ,
[OrgUnitID] [smallint] NULL ,
[Type1] [MyKey] NULL ,
[Type2] [MyKey] NULL ,
[UserName] [MyString] NOT NULL ,
[Password] [MyString] NOT NULL ,
[LogTime] [smalldatetime] NULL ,
[LogCount] [MyKey] NULL ,
[LastLog] [datetime] NULL ,
[Active] [bit] NULL ,
[Activation] [datetime] NULL ,
[ActivationNote] [MyLongString] NULL ,
[LanguageID] [MyKey] NULL ,
[Sms] [MyKey] NULL ,
[SmsSent] [MyKey] NULL ,
[Note] [MyText] NULL ,
[PiccoLinkPassword] [varchar] (10) COLLATE Croatian_CI_AI NULL ,
[PActive] [char] (1) COLLATE Croatian_CI_AI NULL ,
CONSTRAINT [PK_Users] PRIMARY KEY CLUSTERED
(
[UserID]
) WITH FILLFACTOR = 90 ON [PRIMARY] ,
CONSTRAINT [FK_Users_Employees] FOREIGN KEY
(
[EmployeeID]
) REFERENCES [Employees] (
[EmployeeID]
) NOT FOR REPLICATION ,
CONSTRAINT [FK_Users_Languages] FOREIGN KEY
(
[LanguageID]
) REFERENCES [Languages] (
[LanguageID]
) NOT FOR REPLICATION ,
CONSTRAINT [FK_Users_OrgUnits] FOREIGN KEY
(
[OrgUnitID]
) REFERENCES [OrgUnits] (
[OrgUnitID]
) NOT FOR REPLICATION ,
CONSTRAINT [FK_Users_Partners] FOREIGN KEY
(
[PartnerID]
) REFERENCES [Partners] (
[PartnerID]
) NOT FOR REPLICATION
) ON [PRIMARY]
InputOutPutItems is actually an Item table for documents, here is the header
table for documents:
CREATE TABLE [InputOutput] (
[FromStorage] [smallint] NOT NULL ,
[ToStorage] [smallint] NOT NULL ,
[DocumentTypeID] [int] NOT NULL ,
[DocumentNumber] [int] NOT NULL ,
[StorageDocYear] [int] NOT NULL ,
[AppUserID] [smallint] NULL ,
[ManufactureIndent] [int] NULL ,
[DeliveryDate] [datetime] NULL ,
[OrgUnitID] [int] NULL ,
[TypeOfTransferID] [int] NULL ,
[Note] [varchar] (500) COLLATE Croatian_CI_AI NULL ,
[Status] [int] NULL ,
[UserIDCreated] [int] NULL ,
[DateCreated] [datetime] NULL ,
[UserIDChanged] [int] NULL ,
[DateChanged] [datetime] NULL ,
[TypeOfIndent] [char] (2) COLLATE Croatian_CI_AI NULL ,
[TMOrgUnitID] [smallint] NULL ,
CONSTRAINT [PK_InputOutput] PRIMARY KEY CLUSTERED
(
[FromStorage],
[ToStorage],
[DocumentTypeID],
[DocumentNumber],
[StorageDocYear]
) WITH FILLFACTOR = 90 ON [PRIMARY] ,
CONSTRAINT [FK_InputOutput_DocumentTypes] FOREIGN KEY
(
[DocumentTypeID]
) REFERENCES [DocumentTypes] (
[DocumentTypeID]
),
CONSTRAINT [FK_InputOutput_OrgUnits] FOREIGN KEY
(
[FromStorage]
) REFERENCES [OrgUnits] (
[OrgUnitID]
),
CONSTRAINT [FK_InputOutput_OrgUnits1] FOREIGN KEY
(
[ToStorage]
) REFERENCES [OrgUnits] (
[OrgUnitID]
)
) ON [PRIMARY]
So now that you have an idea of how the document tables look in my database,
could you give me some advice?
Should I keep the primary key as a clustered index or not? InputOutputItems
is pretty large and has millions of records,
and queries can sometimes be slow.
"--CELKO--" <jcelko212@.earthlink.net> wrote in message
news:1142780014.593696.168280@.g10g2000cwb.googlegroups.com...
> Your design looks wrong.
> Call you explain what a "type_id" means? An attribute can be a type or
> an identifer, but NEVER both (see ISO-11179).
> Can you give an example of varyign length 50 character barcode?
> What is a user_id history being kept in this table? Where is the User
> table that should have that kind of information?
> What is "InputOutputItems" -- other than very vague.
> Some of the other names also suggest LOTS of problems, which might be
> why you have such a large key.
>
|||Grabi (drazen@.git.hr) writes:
> Hello. I'm using SQL2000, and I have a design problem. I know that every
> time I make a primary key, sql server makes it by default a clustered
> index. Since I have a large composite key in the table I don't know if
> it's smart to leave it as a clustered index. This is the table:
Whether it is a good idea or not that the PK by default is clustered
is disputed. Some people think it's bad. Personally, I think it's good
for the simple reason that many tables have no other index than the PK,
and in the vast majority of cases it's a good idea that all tables have a
clustered index.
Whatever, in very many cases, the clustered index should not be on the
primary key. Typically you put the clustered index on a column that
appears commonly in range queries. For instance in an Orders table,
you would probably cluster on OrderDate or CustomerID depending on what
you most often want to view queries by.
What you should cluster on in InputOutputItems, I can't tell, because
I don't know the business. If most queries will by ProductID, maybe that
is the guy. But I don't know.
One thing to keep in mind, is that in non-clustered index, the clustered
index key is used as row locator. This means that a wide clustered key, also
increases the size of the non-clustered indexes.
As for the general design I note that all your non-key columns are nullable,
but is it really so? Could there really exist a row where DateCreated
and UserIDCreated are NULL? Or for that matter the BarCode? Strictly
specifying which columns that may be nullable and which may be not, will
help you to get better quality of the data.
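Tightening nullability on an existing column is a one-line change, provided no NULLs are already stored. A sketch using the DateCreated column mentioned above:

```sql
-- Fails if any row already has DateCreated = NULL; backfill those first.
ALTER TABLE InputOutputItems
    ALTER COLUMN DateCreated datetime NOT NULL
```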
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx