
Thursday, March 29, 2012

Determine table constraints for database upgrade/migration project

I have a project to upgrade one of our program's databases from one version
to another. Additionally, I will be migrating the data from one database to
the other.
Here's an example:
mydb7.1
mydb8.0
Both databases exist in the same database server.
So mydb7.1 contains production data for an online store. I need to migrate
this data into the mydb8.0 database. The mydb8.0 database uses a newer
schema, so there will be some work to convert the data to the newer layout.
Here's my problem: the mydb7.1 database contains hundreds of tables and
foreign key relationships. Is there some way, or a tool, that can help me
determine the table migration order? That way my code knows which tables to
migrate first while still preserving referential integrity.
Would it be easier to create the mydb8.0 database with just the tables, then
migrate the data, and only set up the foreign key relations after the data
is migrated? That way I would not need to order the tables for migration.
Just a thought.
Thanks
Richard,
Just a thought: script out the table definitions, alter them as needed, run
the script to create the tables in the new DB, transfer the data over (views
or DTS), then script out the PK and FK constraints and apply them to the new
tables in the new DB.
HTH
Jerry
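A rough sketch of how the load order Richard asks about could be derived
from the source database's metadata. This uses only the standard
INFORMATION_SCHEMA views (available in SQL Server 2000 and later); the
column aliases are just for readability:

-- List every foreign key with the table that holds it and the table it
-- references; referenced (parent) tables must be loaded first.
SELECT rc.CONSTRAINT_NAME,
       fk.TABLE_NAME AS referencing_table,  -- load this one later
       pk.TABLE_NAME AS referenced_table    -- load this one first
FROM INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS rc
JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS fk
  ON fk.CONSTRAINT_NAME = rc.CONSTRAINT_NAME
JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS pk
  ON pk.CONSTRAINT_NAME = rc.UNIQUE_CONSTRAINT_NAME
ORDER BY referenced_table, referencing_table;

A topological sort over those parent/child pairs gives a valid migration
order; self-referencing tables and circular FK chains still need special
handling (which is one argument for Jerry's add-the-constraints-last
approach).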
"Richard" <Richard@.discussions.microsoft.com> wrote in message
news:69ACA9B4-CB56-417B-8015-B3A168F82F36@.microsoft.com...
>I have a project to update one of our program's database from one version
>to
> another. Additionally, I will be migrating the data from one database to
> another.
> Here's an example:
> mydb7.1
> mydb8.0
> Both databases exist in the same database server.
> So mydb7.1 contains production data for an Online store. I need to
> migrate
> this data in to the mydb8.0 database. The mydb8.0 database uses a newer
> schema, so there will some work to convert the data to the newer layout.
> Here's my problem mydb7.1 database contains hundeds of tables and foreign
> key relationships. Is there some way or tool, that can help me determine
> the table migration order? That way my code know's which tables to
> migrate
> first, and still preserving the ref integrity.
> Will it be easier to create the mydb8.0 database, just with tables, and
> then
> migrate the data. After the data is migrated, then setup the foreign
> key
> relations. That way, I will not need order the tables for migration?
> Just
> a thought.
> Thanks

Sunday, February 19, 2012

Design Question - SQL 2000 to 2005 App Upgrade (CLR?)

I'm upgrading an app I wrote from SQL 2000 to 2005 and am looking for some
high-level design advice, just to determine how much should be T-SQL vs.
CLR, and the best approach to take for performance. For background, I'm a
strong C# developer, but with only average SQL skills, and I'm just getting
into SQL 2005.
My Current (SQL 2000) Setup:
--
My C# app regularly collects performance information, then inserts it into a
table. On insert into that table I have a trigger that kicks off a couple of
stored procs that compare the new data to other tables where thresholds are
set. As a simple example, it essentially does a "SELECT * FROM Thresholds
WHERE @NewlyInsertedValue > ThresholdValue" to determine whether the current
value is in violation of any thresholds. (There are a couple more columns
that are considered that make the query a little more intensive, but this is
the general idea.)
If thresholds are violated, it inserts a record into an 'alerts' table and
generates an email (by another stored proc). If no thresholds are violated,
it runs a stored proc to clear any alerts that might have been created
previously since we're not in violation anymore. Since this happens EVERY
time a new value is collected, I really want to optimize this as much as
possible.
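For illustration, a minimal sketch of the trigger pattern described above;
all table and column names here are made up:

CREATE TRIGGER trg_PerfData_CheckThresholds
ON dbo.PerfData AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- Compare every newly inserted value to its threshold
    -- (set-based, so multi-row inserts are handled too).
    INSERT INTO dbo.Alerts (CounterId, CollectedValue, AlertTime)
    SELECT i.CounterId, i.Value, GETDATE()
    FROM inserted AS i
    JOIN dbo.Thresholds AS t ON t.CounterId = i.CounterId
    WHERE i.Value > t.ThresholdValue;
END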
My Desired (SQL 2005) Setup:
--
I'm going to replace certain obvious functions, like the "send email"
function, moving them from old SQL objects to .NET CLR objects. However, I
want to further optimize the system that compares incoming data with
thresholds and generates alerts as necessary. It seems very wasteful to do
selects every time a new value is inserted, and as the tables grow, even
with proper indexing, it is going to get expensive. Is there a way I can
take advantage of the CLR to maybe keep these thresholds in memory to
compare against? Any other ideas about how to accomplish this most
efficiently?
Craig
PS - I can post DDL if you really want it; I was just trying to get
high-level design ideas from someone who's done a lot of SQL CLR work
already.
|||
For general data manipulation, especially inserts, updates, deletes and
normal selects, T-SQL will always be the best choice. The CLR is useful for
heavy calculations/aggregations or when you simply can't do something with
T-SQL. Of course, any time you need to visit the filesystem or need the
equivalent of an extended stored procedure, the CLR is prime for that as
well. As for the mail component, you can use the new Database Mail, which
works pretty well and is a lot less troublesome than the MAPI version in
2000. But in general I suggest you use stored procedures for this instead of
triggers. You have a fair amount of processing to do, and you should keep
triggers as trim as possible. I don't see any reason this logic cannot be
put into a standard T-SQL stored procedure. As for checking the values, why
not do that in the front end or middle tier before you even attempt the
insert? That kind of information is pretty much static and can be cached
very easily in the middle tier or front end. If there are many of these
inserts, you can save a lot of processing on the back end by validating the
data before you send it in.
Andrew J. Kelly SQL MVP
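A minimal sketch of the stored-procedure approach described above; the
object and column names are hypothetical:

CREATE PROCEDURE dbo.InsertPerfValue
    @CounterId int,
    @Value     decimal(18, 4)
AS
BEGIN
    SET NOCOUNT ON;
    -- One round trip: insert the sample, then check it against thresholds.
    INSERT INTO dbo.PerfData (CounterId, Value, CollectedAt)
    VALUES (@CounterId, @Value, GETDATE());

    IF EXISTS (SELECT * FROM dbo.Thresholds
               WHERE CounterId = @CounterId AND @Value > ThresholdValue)
        INSERT INTO dbo.Alerts (CounterId, CollectedValue, AlertTime)
        VALUES (@CounterId, @Value, GETDATE());
    ELSE
        -- Back under the threshold: clear any alert raised earlier.
        DELETE FROM dbo.Alerts WHERE CounterId = @CounterId;
END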
"Craig S" <cscheets@.dontspam.kc.rr.com> wrote in message
news:OIG$CfXDGHA.1544@.TK2MSFTNGP10.phx.gbl...
> I'm upgrading an app I wrote from SQL 2000 to 2005 looking for some
> high-level design advice, just to determine how much should be T-SQL vs.
> CLR, and the best approach to take for performance. For background, I'm a
> strong c# developer, but with only average SQL skills and I'm just getting
> into SQL 2005.
> My Current (SQL 2000) Setup:
> --
> My C# app regularly collects performance information, then inserts it into
> a table. On insert in that table I have a trigger to kick off a couple
> stored procs that compare the new data to other tables where thresholds
> are set. As a simple example, it essentially does a "SELECT * FROM
> Thresholds WHERE @.NewlyInsertedValue > ThresholdValue" to determine if the
> current value is in violation of any thresholds. (There are a couple more
> columns that are considered that makes the query a little more intensive,
> but this is the general idea)
> If thresholds are violated, it inserts a record into an 'alerts' table and
> generates an email (by another stored proc). If no thresholds are
> violated, it runs a stored proc to clear any alerts that might have been
> created previously since we're not in violation anymore. Since this
> happens EVERY time a new value is collected, I really want to optimize
> this as much as possible.
> My Desired (SQL 2005) Setup:
> --
> I'm going to replace certain obvious functions like the "send email"
> function from old SQL objects to net CLR objects. However, I want to
> further optimize the system that compares incoming data with thresholds
> and generates alerts as necessary. It seems very wasteful to do selects
> every time a new value is inserted, and as the tables grow even with
> proper indexing it is going to get expensive. Is there a way I can take
> advantage of CLR to maybe keep these thresholds in memory to compare them
> against? Any other ideas about how to accomplish this most efficiently?
> Craig
> PS - I can post DDL if you really want it, I was just trying to get high
> level design ideas from someone that's done a lot of SQL CLR work already.
|||
First, you're right: everything I'm doing can be done in T-SQL (it is today
in SQL 2000). I just wanted to see if the CLR would offer any optimization.
Secondly, I think you're right about caching the data and performing the
checks for violations in the middle tier (I'm using web services). From what
I understand, I can actually subscribe to changes to a table/view in 2005,
so I could cache that info and just rebuild it any time the thresholds table
is updated. The thresholds data is pretty static, so this could definitely
work.
I guess this is basically what I was trying to do, but for some reason I
thought it needed to live in the DB. Logic is always best left in the
middle/front tier anyway, I guess.
So, good call Andrew. Anyone else have anything to add/recommend?
Craig
"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:O9eKD9XDGHA.3444@.TK2MSFTNGP10.phx.gbl...
> For general data manipulation especially Inserts, Updates, Deletes and
> normal Selects TSQL will always be the best choice. The CLR is useful for
> heavy calculations / aggregations or when you simply can't do something
> with TSQL. Of coarse anytime you need to visit the filesystem or need the
> equivalent of an extended sp the CLR is prime for that as well. AS for the
> mail component you can use the new DatabaseMail which works pretty well
> and a lot less troublesome than the MAPI version in 2000. But in general
> I suggest you use stored procedures for this instead of triggers. You have
> a fair amount of processing to do and you should keep triggers as trim as
> possible. I don't see any reason this logic can not be put into a
> standard TSQL stored procedure. But as for checking the values why not do
> that in the front end or middle tier before you even attempt the insert?
> That kind of information is pretty much static and can be cached very
> easily in the middle tier or front end. If these inserts are many you can
> save a lot of processing etc. on the back end by validating the data
> before you send it in.
> --
> Andrew J. Kelly SQL MVP
>
> "Craig S" <cscheets@.dontspam.kc.rr.com> wrote in message
> news:OIG$CfXDGHA.1544@.TK2MSFTNGP10.phx.gbl...
|||
Craig S (cscheets@.dontspam.kc.rr.com) writes:
> My C# app regularly collects performance information, then inserts it
> into a table. On insert in that table I have a trigger to kick off a
> couple stored procs that compare the new data to other tables where
> thresholds are set. As a simple example, it essentially does a "SELECT *
> FROM Thresholds WHERE @NewlyInsertedValue > ThresholdValue" to determine
> if the current value is in violation of any thresholds. (There are a
> couple more columns that are considered that makes the query a little
> more intensive, but this is the general idea)
As long as you have full control over the data, using a stored procedure
to insert is better. Triggers are more difficult to write, and there
is one ugly thing about them: they can disappear without notice. If
a stored procedure is dropped inadvertently, your application will
cry out, but if the trigger is missing, you will only get incorrect
results.

> It seems very wasteful to do selects every time a new value is inserted,
> and as the tables grow even with proper indexing it is going to get
> expensive. Is there a way I can take advantage of CLR to maybe keep
> these thresholds in memory to compare them against?
Maybe there is, but it would be a bad idea. Don't worry, that data
is in memory already. SQL Server takes care of that, and keeps as
much data in cache as possible.

> I guess this is basically what I was trying to do, but for some reason
> thought it needed to live in the db. Logic is always best left in the
> middle/front tier anyway I guess.
Personally, I'm a strong believer in having the business logic where
the database is.
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx

Tuesday, February 14, 2012

Design a Database for Daily News Network website

Hello.

I want to upgrade my website to ASP.NET 2.0, and I want to add a daily news
section to the site. 10 to 15 news items will be added daily, so after a
year we will have almost 4000 entries in the database.

In designing this DB, what is your advice on how to store the news?
Is it better to create the same table for each year? For example:
tblNews_2005
tblNews_2006

Or just make one table and have all the news stored in it? It's going to be huge after a few years; won't that cause any problems?

Thanks
Regards.

Nah, just use a simple table... SQL Server is highly optimized for large tables. I suppose, however, that you will have a primary key there. Moreover, you should add good indexes to the table :)
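A sketch of what that single-table layout could look like; the column names
are guesses, and the date column gets the index since that is what most news
queries would filter on:

CREATE TABLE tblNews (
    NewsId      int IDENTITY(1, 1) NOT NULL PRIMARY KEY, -- clustered by default
    PublishedOn datetime      NOT NULL,
    Title       nvarchar(200) NOT NULL,
    Body        nvarchar(max) NOT NULL
);

-- Most lookups will be by date (archives, "today's news"), so index it.
CREATE INDEX IX_tblNews_PublishedOn ON tblNews (PublishedOn);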

|||

Thanks. I wonder, how large could the table get? I have an archive of 40,000 records so far, and it will get bigger every year. Regards
|||

I've seen SQL Server responding with the speed of light :P with one million records :D

And I've seen it responding poorly with 5,000 rows (somebody had forgotten to use indexes...)

It all comes down to good database design, my friend! :)

|||

Thanks, so the size doesn't matter :D

The point is a good design :)