Showing posts with label advice. Show all posts

Monday, March 26, 2012

Multiple Insertions

Hi everyone,

I just need a bit of advice as to where to start tackling a problem, if
that's possible - thanks very much.

I need a single stored procedure to make several inserts into my MSDE
database. There will be two arguments to the stored proc. The first
is a title argument which needs to be inserted into the first table,
after which the autonumbered primary key is captured with @@IDENTITY.

The second argument is a delimited list of foreign keys which need to
be inserted into the second table along with the new key from the first
statement. This table is a link table with two columns - both foreign
keys - i.e. it's the link table in a many-to-many relationship.

My problem is that, as far as I know, I can't use arrays in SQL Server
because it doesn't support them. And this has come about because I don't
know how many rows need to be inserted into the link table, though there
will always be at least one.

I know I need to do this in a loop, but how do I split up the
second argument so that I can?

Thanks,

Mark

On 27 Jun 2005 08:22:10 -0700, Mark wrote:


Hi Mark,

Check out this site for a wealth of possible solutions:

http://www.sommarskog.se/arrays-in-sql.html
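One of the iterative approaches described on that page could be wired into your procedure roughly like this - a sketch only, with "Titles" and "TitleLinks" standing in for your real tables and columns:

```sql
CREATE PROCEDURE dbo.AddTitleWithLinks
    @title NVARCHAR(255),
    @fk_list VARCHAR(8000)   -- e.g. '12 47 301' (space-delimited)
AS
BEGIN
    DECLARE @new_id INT, @pos INT, @item VARCHAR(10)

    INSERT INTO Titles (title) VALUES (@title)
    -- SCOPE_IDENTITY() is safer than @@IDENTITY: it is not affected
    -- by identity values generated inside triggers on Titles.
    SET @new_id = SCOPE_IDENTITY()

    -- Append a trailing delimiter so every value ends with a space.
    SET @fk_list = LTRIM(RTRIM(@fk_list)) + ' '
    SET @pos = CHARINDEX(' ', @fk_list)
    WHILE @pos > 0
    BEGIN
        SET @item = LEFT(@fk_list, @pos - 1)
        IF @item <> ''
            INSERT INTO TitleLinks (title_id, other_id)
            VALUES (@new_id, CONVERT(INT, @item))
        SET @fk_list = SUBSTRING(@fk_list, @pos + 1, LEN(@fk_list))
        SET @pos = CHARINDEX(' ', @fk_list)
    END
END
```

The two inserts run in the same scope, so the captured identity value is the one generated by the first insert even if other sessions are inserting concurrently.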

Best, Hugo
--

(Remove _NO_ and _SPAM_ to get my e-mail address)|||>> I just need a bit of advice as to where to start tackling a problem, if that is possible - thanks very much. <<

You need to go back to foundations. You missed the major points of
RDBMS and are trying to write a 1950's file system in SQL.

1) Autonumbering is totally non-relational and cannot be a key by
definition. This is foundations, not fancy stuff. A key is a subset
of attributes that makes a row unique within a table; it has to do with
the data model and not the current state of the hardware on which the
data is stored.

2) An INSERT INTO statement works on one and only one base table.

3) There are only scalar value parameters; there are no lists, arrays,
etc. There are a bunch of kludges where you write a parser in T-SQL,
if you do not care about maintaining or porting your code.

4) There is no such term as "link table" -- link is a term from
navigational databases and assembly language. It is a many-to-many
relationship.

5) We do not like to write procedural code, so you should avoid loops.

6) You do not insert keys into a table; you insert rows. WHOLE rows.
You probably have more (non-key) columns in the second table to fill
in.

7) Please post DDL, so that people do not have to guess what the keys,
constraints, Declarative Referential Integrity, data types, etc. in
your schema are. Sample data is also a good idea, along with clear
specifications. It is very hard to debug code when you do not let us
see it.

Your code should look something like this, assuming a vanilla invoice
schema:

BEGIN
INSERT INTO Invoices (invoice_nbr, ..)
VALUES (@new_invoice_nbr, .. );

INSERT INTO InvoiceDetails (invoice_nbr, sku..)
SELECT @new_invoice_nbr, sku..
FROM WorkingTable;
END;

Invoice_nbr should have a CHECK() constraint to validate it, of course.|||This is the version of the split-array return-table function from Sommarskog. I use this
to pass in arrays of keys (space separated). It's more efficient than
populating a temporary table with keys programmatically every time you need
to join (as Celko seems to suggest). Although in general Celko is correct
from a purist point of view, I do believe that a small function like this
allowing you to pass and split arrays in a stored procedure has more utility
than it does downsides. For example, my client needs to periodically check
a set of rows to see if they have been changed. These records are in no
particular order (whichever rows the user happens to be viewing). Instead
of writing each one to a working table one by one and then executing an SP
to check their timestamps, I pass in an array of keys, split it and join
with the split table to return the update state.

However, in your example, perhaps a working table would be a better idea. I
would only advise using array splitting algorithms server-side if they are
just autonumber unique keys, rather than whole rows of information. It's
just a quickish method for fetching arbitrary rows from your tables.

CREATE FUNCTION dbo.func_Split_Array_Return_Table (@list NTEXT)
RETURNS @Table TABLE ( listpos INT IDENTITY(1, 1) NOT NULL, number INT NOT
NULL) AS

BEGIN
DECLARE @pos INT, @textpos INT, @chunklen SMALLINT, @str NVARCHAR(4000),
@tmpstr NVARCHAR(4000), @leftover NVARCHAR(4000)

SET @textpos = 1
SET @leftover = ''

-- Walk the NTEXT input in chunks of up to 4000 characters
-- (datalength() returns bytes, so divide by 2 for character counts).
WHILE @textpos <= datalength(@list) / 2
BEGIN

    SET @chunklen = 4000 - datalength(@leftover) / 2
    SET @tmpstr = ltrim(@leftover + substring(@list, @textpos,
@chunklen))
    SET @textpos = @textpos + @chunklen
    SET @pos = charindex(' ', @tmpstr)

    -- Peel off one space-delimited value at a time.
    WHILE @pos > 0
    BEGIN

        SET @str = substring(@tmpstr, 1, @pos - 1)

        INSERT @Table (number)
        VALUES
        (convert(int, @str))

        SET @tmpstr = ltrim(substring(@tmpstr, @pos + 1, len(@tmpstr)))
        SET @pos = charindex(' ', @tmpstr)

    END

    -- Whatever follows the last space carries over into the next chunk.
    SET @leftover = @tmpstr
END

-- The final value has no trailing delimiter, so flush it separately.
IF ltrim(rtrim(@leftover)) <> ''
    INSERT @Table (number)
    VALUES
    (convert(int, @leftover))

RETURN

END
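With that function in place, the stored procedure itself needs no loop at all; the link-table rows come from one set-based insert ("Titles" and "TitleLinks" below are invented names for illustration):

```sql
DECLARE @new_id INT, @keys NVARCHAR(4000)
SET @keys = N'12 47 301'   -- space-separated keys passed in by the client

INSERT INTO Titles (title) VALUES (N'Some title')
SET @new_id = SCOPE_IDENTITY()

-- One set-based insert instead of a row-by-row loop.
INSERT INTO TitleLinks (title_id, other_id)
SELECT @new_id, s.number
FROM dbo.func_Split_Array_Return_Table(@keys) AS s
```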

"Mark" <mjmather@gmail.com> wrote in message
news:1119885730.339475.84280@g47g2000cwa.googlegroups.com...

Wednesday, March 21, 2012

Multiple Filegroups

Hi experts,

I'm new to SQL Server 2000 and would like some advice on filegroups.

Is there any advantage to separating the filegroups for different types of data?

For example:

Data_1 for tables
Index_1 for Indexes
Audit_1 for Audit tables

The files for these filegroups would be placed on RAID Disk.

Thanks for any suggestions.

Any comments?

I need to know whether there is any performance gain in separating types of data into different filegroups, or whether it is good enough to set a default secondary filegroup for user data objects.

Please comment.|||Hi,
you can save a lot of system time if you store your tables, indexes, etc. smartly.
This is part from SQL books:

Placing Tables on Filegroups

A table can be created on a specific filegroup rather than the default filegroup. If the filegroup comprises multiple files spread across various physical disks, each with its own disk controller, then queries for data from the table will be spread across the disks, thereby improving performance. The same effect can be accomplished by creating a single file on a RAID (redundant array of independent disks) level 0, 1, or 5 device.

If the computer has multiple processors, Microsoft® SQL Server™ 2000 can perform parallel scans of the data. Multiple parallel scans can be executed for a single table regardless of the number of files that are in its filegroup. Additionally, any text, ntext, or image columns within a table can be created on a filegroup other than the one that contains the base table.

Eventually, there is a saturation point when there are too many outstanding I/Os causing bottlenecks in the disk I/O subsystem. These bottlenecks can be identified by using Windows NT® Performance Monitor to monitor the PhysicalDisk object and Disk Queue Length counter. If the Disk Queue Length counter is greater than three, consider spreading the file across more disk drives. For more information, see Monitoring Disk Activity.

It is advantageous to get as much data spread across as many physical drives as possible in order to improve throughput through parallel data access. To spread data evenly across all disks, you can place a single file across striped disks or maintain each disk separately and place a file on each disk.
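As a sketch in DDL, spreading a table and its index across two filegroups might look like this (the database name, file paths, and sizes are all invented for the example):

```sql
ALTER DATABASE Sales ADD FILEGROUP Data_1
ALTER DATABASE Sales ADD FILEGROUP Index_1

ALTER DATABASE Sales ADD FILE
    (NAME = SalesData1, FILENAME = 'D:\mssql\data\SalesData1.ndf', SIZE = 100MB)
TO FILEGROUP Data_1
ALTER DATABASE Sales ADD FILE
    (NAME = SalesIndex1, FILENAME = 'E:\mssql\data\SalesIndex1.ndf', SIZE = 100MB)
TO FILEGROUP Index_1

-- Base table on one filegroup, its nonclustered index on another,
-- so scans of the two structures hit different spindles.
CREATE TABLE dbo.Orders (
    order_id INT NOT NULL PRIMARY KEY,
    customer_id INT NOT NULL
) ON Data_1

CREATE NONCLUSTERED INDEX IX_Orders_customer
    ON dbo.Orders (customer_id) ON Index_1
```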


I hope it will help you. It's a very good text.
Bye


No, there is no appreciable gain. Filegroups are mainly for ease of administration across multiple volumes, not performance.

Monday, March 12, 2012

Multiple databases using different collations on same instance

Hi all,
I'm in need of some advice. I currently have a SQL 2005 64-bit instance
running SQL_Latin1_General_CP1_CI_AS, which is the collation for my databases and
suits me fine; however, I now need to create and populate a new database which
requires Latin1_General_BIN.
Is this possible, and are there any pitfalls I should watch out for?
Thanks

Hi
The biggest problem with creating non-system collation columns is probably
the collation conflicts that you may get when doing out-of-database
operations, such as using temporary tables; but you can usually get around
this by always specifying a collation using the COLLATE clause.
If you are only looking at using this collation for binary comparisons, you
may want to look at using the same collation but using the COLLATE clause to
force a binary collation in your WHERE clause. This approach may be riskier,
as it will not give you an error when you forget the clause.
Check out Books Online for more about COLLATE.
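For example (the tables and columns here are hypothetical):

```sql
-- Column-level binary collation fixed at design time:
CREATE TABLE dbo.Codes (
    code NVARCHAR(50) COLLATE Latin1_General_BIN NOT NULL
)

-- Or forcing a binary comparison per query on an existing column:
SELECT *
FROM dbo.Customers
WHERE surname = @surname COLLATE Latin1_General_BIN

-- Resolving a conflict when joining to a temp table
-- (tempdb columns default to the server collation):
SELECT c.code
FROM dbo.Codes AS c
JOIN #incoming AS i
  ON c.code = i.code COLLATE Latin1_General_BIN
```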
John
"Padraig" wrote: