Monday, March 26, 2012

Multiple INSERTs into a single table scaling / performance problem

Hello all ...
This one has me a bit confused. I'd really appreciate any and all
suggestions. This may be pilot error on my part...
My requirement is to have multiple programmatic database applications, each
on a separate database connection insert data into a single table as fast as
possible. My hope is that I'll see some scaling of rows-per-second inserted
by adding additional processes, connections and simultaneous inserts.
I'm not able to use BCP for this as the data is coming from an application
style feed.
I'm using Windows 2003 Server R2 and SQL Server 2000 SP4. I'm running on an
AMD 64 X2 Dual Core Processor (fast box) with 4 GB of memory. SQL Server
tends to get up around 2.9GB at times ... it is the "only" service on the
box.
Here's my table:
CREATE TABLE MyTable
(myID numeric(10,0) not null,
myChar char(8000) null,
myRowversion rowversion)
ON A_TABLE_FILEGROUP
-- No indexes, just a simple heap. I know that each row will require 8K of
-- storage ... about a page each.
My database transaction log is on a different disk/file group on a separate
controller.
I used SQL Query Analyzer for my test. I opened up a single connection and
ran a simple batch along the lines of:
begin tran
while @i <= 10000
begin
insert into MyTable ...
values ...
select @i = @i + 1
end
commit
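Spelled out, a complete version of that test batch might look something like
this; the counter and inserted values are placeholders (a dummy 8000-character
payload), not the real application feed, and the rowversion column is left to
populate itself:
declare @i int
declare @start datetime
select @i = 1, @start = getdate()
begin tran
while @i <= 10000
begin
insert into MyTable (myID, myChar)
values (@i, replicate('x', 8000)) -- dummy 8000-character payload
select @i = @i + 1
end
commit
-- rough throughput check: rows inserted per second
select 10000 * 1000.0 / nullif(datediff(ms, @start, getdate()), 0) as [rows per second]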
OK, when I run the above and check the start and finish times I can insert
about 1200 rows per second. I checked the disks with perf mon and I have
very little Average Disk Write Queues on the table and transaction log disks
(on avg less than 1.)
Sooooo.....
When I open up two additional sessions in SQL Query Analyzer ... and kick
off three of the same loops "at the same time" I average ... a sum total of
about 1200 rows per second. No scaling at all. Note that I am running all
three windows from within the same SQL Query Analyzer session, each with a
different connection into the database.
Again, the disks and the whole box are pretty much snoozing.
Any suggestions I might try? I'm sure I'm missing something here.
Thanks so much!!!!
DB

"Jeje" wrote:

Well...
Have you tried varchar instead of char?
Why a numeric and not an integer?
Separate disks are not enough.
What is your disk config?
Do you use RAID 5 or RAID 0+1?
How many disks are on each controller? (Do you use 15k rpm SCSI disks?)
Do you have any write cache on your controllers?
Are your files set to a good initial size? When SQL Server needs to expand a
file, it takes some resources, so if your files are pre-sized larger than your
requirements you'll improve performance.
Have you tested your disk subsystem using the SQLIOStress and SQLIO tools?
Using 4 local IDE drives (SATA; no cache; RAID 0 stripe) I reach 960 IO/s
and 60 MB/s (SQLIO results):
sqlio -kW -b64 -frandom -LS
With that result I achieve about 3,333 inserts/sec using a script much like
yours (100,000 rows inserted in 30 seconds; no explicit transaction).
I insert into a table like yours (int, char(8000), rowversion).
Doing all the inserts in one transaction reduces the run to 16 seconds,
and finally, adding the SET NOCOUNT ON option reduces it to 7 seconds.
My log and database files are on the same disk.
It's a dual-core Opteron server with 4 GB of memory and the SQL Server 2005
x64 version.
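A sketch of that fastest variant, with an illustrative table name and payload:
everything in one transaction (a single log flush at commit instead of one per
autocommitted insert), plus SET NOCOUNT ON to suppress the per-row row-count
messages:
set nocount on
declare @i int
set @i = 1
begin tran
while @i <= 100000
begin
insert into TestTable (myID, myChar) -- a table shaped like (int, char(8000), rowversion)
values (@i, replicate('x', 8000))
select @i = @i + 1
end
commit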
"Doug" <Doug@.discussions.microsoft.com> wrote in message
news:50F8E5B3-2B49-4572-B2AC-89A1E77A55CC@.microsoft.com...
> Hello all ...
> This one has me a bit confused. I'd really appreciate any and all
> suggestions. This may be pilot error on my part...
> My requirement is to have multiple programatic database applications, each
> on a separate database connection insert data into a single table as fast
> as
> possible. My hope is that I'll see some scaling of rows-per-second
> inserted
> by adding additional processes, connections and simultaneous inserts.
> I'm not able to use BCP for this as the data is coming from an application
> style feed.
> I'm using Windows 2003 Server R2 and SQL Server 2000 SP4. I'm running on
> an
> AMD 64 X2 Dual Core Processer (fast box) with 4 GB of memory. SQL Server
> tends to get up around 2.9GB at times ... it is the "only" service on the
> box.
> Here's my table:
> CREATE TABLE MyTable
> (myID numeric(10,0) not null,
> myChar char(8000) null,
> myRowversion rowversion)
> ON A_TABLE_FILEGROUP
> -- No indexes, just a simple heap. I know that each row will require 8K
> of
> -- storage ... about a page each.
> My database transaction log is on a different disk/file group on a
> separate
> controller.
> I used SQL Query Analyzer for my test. I opened up a single connection
> and
> ran a simple batch along the lines of:
> begin tran
> while @.i <= 10000
> begin
> insert into MyTable ...
> values ...
> select @.i = @.i + 1
> end
> commit
> OK, when I run the above and check the start and finish times I can insert
> about 1200 rows per second. I checked the disks with perf mon and I have
> very little Average Disk Write Queues on the table and transaction log
> disks
> (on avg less than 1.)
> Sooooo.....
> When I open up two additional sessions in SQL Query Analyzer ... and kick
> off three of the same loops "at the same time" I average ... a sum total
> of
> about 1200 rows per second. No scaling at all. Note that I am running
> all
> three windows from within the same SQL Query Analyzer session, each with a
> different connection into the database.
> Again, the disks and the whole box are pretty much snoozing.
> Any suggestions I might try? I'm sure I'm missing something here.
> Thanks so much!!!!
> DB

"Doug" wrote:

Thank you very much for your comments ... very useful information and ideas!
Char is the requirement I'm stuck with. I'm working on a "worst case
scenario" so a char(8000) is pretty tough. I know this uses a lot more
storage space than Varchar and yes I'm only storing a single row per page.
It is a long story.
I could switch from a numeric to an int, if that would make a tremendous
scaling difference.
I'm not using any RAID or striped disk setup. The two disks I have are on
two separate controllers. One is a SATA controller and the other is a
Firewire 2.0 controller. Both disks have 16 MB of cache turned on. The data
table is on one and the transaction log is on the other. When I look at the
disk IO stats using Perfmon I don't see a lot of heavy disk IO activity.
There's virtually no Average Disk Queue on either one, and the number of
writing I/Os per second is pretty low. Perhaps my controllers are not
pushing the disks fast enough. I'm not sure. Would faster/striped disks
make a difference if Perfmon doesn't show a lot of disk activity? Are there
a couple of other Perfmon settings I should look at? Note that I'm not
having any hard page faults either.
The SQLIO is a good suggestion ... I'll try that as well.
There is a lot of pre-allocated space in both the database and transaction
log files. No problems there.
I also used SET NOCOUNT ON and put the work into a single transaction ...
this did double my speed to get to the 1200 per second.
Any more ideas out there? I suppose I am making a BIG assumption that
multiple threads writing into a single table on one disk/file would be
faster. Is this a poor assumption, unless the table is set up across
multiple physical disks/files? Would the relative scaling results be similar
if the rowsize was much smaller, resulting in more rows per disk I/O? If the
disk is overworked, wouldn't the Average Write Disk Queue length be really
high?
I don't see any locking/blocking problems ... perhaps I'm missing something
here.
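For reference, on SQL Server 2000 one quick way to double-check while the three
loops are running is to look at sysprocesses and the server-wide wait
statistics, roughly like this:
-- any spid with a non-zero "blocked" value is being blocked by that spid;
-- lastwaittype shows what each insert session is currently waiting on
select spid, blocked, waittype, lastwaittype, waittime, physical_io
from master..sysprocesses
where spid > 50 -- user sessions only

-- server-wide waits; a large WRITELOG total would point at transaction log
-- flush latency rather than the data disk
dbcc sqlperf(waitstats)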
Thanks so much ...
DB
Doug
"Jeje" wrote:
> well...
> have you try varchar instead of char?
> why a numeric and not an integer?
> separated disks is not enough.
> What is your disk config?
> do you use RAID 5 or Raid 0+1?
> how many disks on each controller? (do you use SCSI 15krpm disks?)
> have you some write cache on your controllers?
> does your files are an good initial size? when SQL server need to expand a
> file, this takes some ressources, so if your files are larger then your
> requirements you'll improve the performance.
> have you tested you disks subsystem using the SQLIOStress & SQLIO tools?
> Using 4 local IDE drives (SATA; no cache; Raid 0 (strip)) I reach 960io/s
> and 60MB/s (SQLIO results):
> sqlio -kW -b64 -frandom -LS
> with this result I achieve 3333 insert/sec using a script near like you (100
> 000 rows inserted in 30sec; no transactions)
> I insert into a table like yours (int, char(8000), rowversion)
> doing all the inserts into 1 transaction reduce the time of the process to
> 16sec.
> and finally adding the set nocount on option, I reduce the process to 7sec
> my log & database files are on the same disk
> Its an Opteron dual core server with 4Gb and SQL 2005 x64 version.
>
> "Doug" <Doug@.discussions.microsoft.com> wrote in message
> news:50F8E5B3-2B49-4572-B2AC-89A1E77A55CC@.microsoft.com...
> > Hello all ...
> >
> > This one has me a bit confused. I'd really appreciate any and all
> > suggestions. This may be pilot error on my part...
> >
> > My requirement is to have multiple programatic database applications, each
> > on a separate database connection insert data into a single table as fast
> > as
> > possible. My hope is that I'll see some scaling of rows-per-second
> > inserted
> > by adding additional processes, connections and simultaneous inserts.
> >
> > I'm not able to use BCP for this as the data is coming from an application
> > style feed.
> >
> > I'm using Windows 2003 Server R2 and SQL Server 2000 SP4. I'm running on
> > an
> > AMD 64 X2 Dual Core Processer (fast box) with 4 GB of memory. SQL Server
> > tends to get up around 2.9GB at times ... it is the "only" service on the
> > box.
> >
> > Here's my table:
> >
> > CREATE TABLE MyTable
> >
> > (myID numeric(10,0) not null,
> >
> > myChar char(8000) null,
> >
> > myRowversion rowversion)
> >
> > ON A_TABLE_FILEGROUP
> >
> > -- No indexes, just a simple heap. I know that each row will require 8K
> > of
> >
> > -- storage ... about a page each.
> >
> > My database transaction log is on a different disk/file group on a
> > separate
> > controller.
> >
> > I used SQL Query Analyzer for my test. I opened up a single connection
> > and
> > ran a simple batch along the lines of:
> >
> > begin tran
> >
> > while @.i <= 10000
> >
> > begin
> >
> > insert into MyTable ...
> >
> > values ...
> >
> > select @.i = @.i + 1
> >
> > end
> >
> > commit
> >
> > OK, when I run the above and check the start and finish times I can insert
> > about 1200 rows per second. I checked the disks with perf mon and I have
> > very little Average Disk Write Queues on the table and transaction log
> > disks
> > (on avg less than 1.)
> >
> > Sooooo.....
> >
> > When I open up two additional sessions in SQL Query Analyzer ... and kick
> > off three of the same loops "at the same time" I average ... a sum total
> > of
> > about 1200 rows per second. No scaling at all. Note that I am running
> > all
> > three windows from within the same SQL Query Analyzer session, each with a
> > different connection into the database.
> >
> > Again, the disks and the whole box are pretty much snoozing.
> >
> > Any suggestions I might try? I'm sure I'm missing something here.
> >
> > Thanks so much!!!!
> >
> > DB
> >
>

"Jeje" wrote:

Well... what happens if you use only the local drive (log + data)?
The latency of an external drive is not good.
For the price of a drive, add 3 local drives and, if you don't care about
crashes, put the 4 disks in RAID 0 (bad design, but good performance), or put
the 3 new drives in RAID 0 for data and keep the current drive for the log.
Your bottleneck is at the disk level, for sure.
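One way to check that from inside SQL Server is fn_virtualfilestats, which
reports cumulative I/O counts and I/O stall time per database file; a sketch,
where the database name and file ids are only examples:
declare @db int
set @db = db_id('MyDatabase') -- example database name

-- IoStallMS is the cumulative time (ms) spent waiting on I/O for each file;
-- if it climbs quickly during the test, that file's disk is the bottleneck
select DbId, FileId, NumberWrites, BytesWritten, IoStallMS
from ::fn_virtualfilestats(@db, 1) -- data file (file id 1 assumed)
union all
select DbId, FileId, NumberWrites, BytesWritten, IoStallMS
from ::fn_virtualfilestats(@db, 2) -- log file (file id 2 assumed)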
"Doug" <Doug@.discussions.microsoft.com> wrote in message
news:055220B6-7A4B-4CF9-976A-71FEA412AECD@.microsoft.com...
> Thank you very much for your comments ... very useful information and
> ideas!
> Char is the requirement I'm stuck with. I'm working on a "worst case
> scenario" so a char(8000) is pretty tough. I know this uses a lot more
> storage space than Varchar and yes I'm only storing a single row per page.
> It is a long story.
> I could switch from a numeric to an int, if that would make a tremendous
> scaling difference.
> I'm not using any RAID or striped disk setup. The two disks I have are on
> two separate controllers. One is a SATA controller and the other is a
> Firewire 2.0 controller. Both disks have 16 MB of cache turned on. The
> data
> table is on one and the transaction log is on the other. When I look at
> the
> disk IO stats using Perfmon I don't see a lot of heavy disk IO activity.
> There's virtually no Average Disk Queue on either one, and the number of
> writing I/Os per second is pretty low. Perhaps my controllers are not
> pushing the disks fast enough. I'm not sure. Would faster/striped disks
> make a difference if Perfmon doesn't show a lot of disk activity? Are
> there
> a couple of other Perfmon settings I should look at? Note that I'm not
> having any hard page faults either.
> The SQLIO is a good suggestion ... I'll try that as well.
> There is a lot of pre-allocated space in both the database and transaction
> log files. No problems there.
> I also used the NOCOUNT ON and put the work into a single transaction ...
> this did double my speed to get to the 1200 per second.
> Any more ideas out there? I suppose I am making a BIG assumption that
> multiple threads writing into a single table on one disk/file would be
> faster. Is this a poor assumption, unless the table is set up across
> multiple physical disks/files? Would the relative scaling results be
> similar
> if the rowsize was much smaller, resulting in more rows per disk I/O? If
> the
> disk is overworked, wouldn't the Average Write Disk Queue length be really
> high?
> I don't see any locking/blocking problems ... perhaps I'm missing
> something
> here.
> Thanks so much ...
> DB
>
> --
> Doug
>
> "Jeje" wrote:
>> well...
>> have you try varchar instead of char?
>> why a numeric and not an integer?
>> separated disks is not enough.
>> What is your disk config?
>> do you use RAID 5 or Raid 0+1?
>> how many disks on each controller? (do you use SCSI 15krpm disks?)
>> have you some write cache on your controllers?
>> does your files are an good initial size? when SQL server need to expand
>> a
>> file, this takes some ressources, so if your files are larger then your
>> requirements you'll improve the performance.
>> have you tested you disks subsystem using the SQLIOStress & SQLIO tools?
>> Using 4 local IDE drives (SATA; no cache; Raid 0 (strip)) I reach 960io/s
>> and 60MB/s (SQLIO results):
>> sqlio -kW -b64 -frandom -LS
>> with this result I achieve 3333 insert/sec using a script near like you
>> (100
>> 000 rows inserted in 30sec; no transactions)
>> I insert into a table like yours (int, char(8000), rowversion)
>> doing all the inserts into 1 transaction reduce the time of the process
>> to
>> 16sec.
>> and finally adding the set nocount on option, I reduce the process to
>> 7sec
>> my log & database files are on the same disk
>> Its an Opteron dual core server with 4Gb and SQL 2005 x64 version.
>>
>> "Doug" <Doug@.discussions.microsoft.com> wrote in message
>> news:50F8E5B3-2B49-4572-B2AC-89A1E77A55CC@.microsoft.com...
>> > Hello all ...
>> >
>> > This one has me a bit confused. I'd really appreciate any and all
>> > suggestions. This may be pilot error on my part...
>> >
>> > My requirement is to have multiple programatic database applications,
>> > each
>> > on a separate database connection insert data into a single table as
>> > fast
>> > as
>> > possible. My hope is that I'll see some scaling of rows-per-second
>> > inserted
>> > by adding additional processes, connections and simultaneous inserts.
>> >
>> > I'm not able to use BCP for this as the data is coming from an
>> > application
>> > style feed.
>> >
>> > I'm using Windows 2003 Server R2 and SQL Server 2000 SP4. I'm running
>> > on
>> > an
>> > AMD 64 X2 Dual Core Processer (fast box) with 4 GB of memory. SQL
>> > Server
>> > tends to get up around 2.9GB at times ... it is the "only" service on
>> > the
>> > box.
>> >
>> > Here's my table:
>> >
>> > CREATE TABLE MyTable
>> >
>> > (myID numeric(10,0) not null,
>> >
>> > myChar char(8000) null,
>> >
>> > myRowversion rowversion)
>> >
>> > ON A_TABLE_FILEGROUP
>> >
>> > -- No indexes, just a simple heap. I know that each row will require
>> > 8K
>> > of
>> >
>> > -- storage ... about a page each.
>> >
>> > My database transaction log is on a different disk/file group on a
>> > separate
>> > controller.
>> >
>> > I used SQL Query Analyzer for my test. I opened up a single connection
>> > and
>> > ran a simple batch along the lines of:
>> >
>> > begin tran
>> >
>> > while @.i <= 10000
>> >
>> > begin
>> >
>> > insert into MyTable ...
>> >
>> > values ...
>> >
>> > select @.i = @.i + 1
>> >
>> > end
>> >
>> > commit
>> >
>> > OK, when I run the above and check the start and finish times I can
>> > insert
>> > about 1200 rows per second. I checked the disks with perf mon and I
>> > have
>> > very little Average Disk Write Queues on the table and transaction log
>> > disks
>> > (on avg less than 1.)
>> >
>> > Sooooo.....
>> >
>> > When I open up two additional sessions in SQL Query Analyzer ... and
>> > kick
>> > off three of the same loops "at the same time" I average ... a sum
>> > total
>> > of
>> > about 1200 rows per second. No scaling at all. Note that I am running
>> > all
>> > three windows from within the same SQL Query Analyzer session, each
>> > with a
>> > different connection into the database.
>> >
>> > Again, the disks and the whole box are pretty much snoozing.
>> >
>> > Any suggestions I might try? I'm sure I'm missing something here.
>> >
>> > Thanks so much!!!!
>> >
>> > DB
>> >

"Doug" wrote:

Thank you so much again for helping me.
With a small investment, I can set up a configuration with two internal SATA
disk drives, both plugged into the computer motherboard. I can place the
table on one, and the transaction log on the other. For this application I
don't have to worry about a disk crash/recovery via RAID.
With a much larger investment (which I can make if this is the real
bottleneck) I suppose I could pick up some type of RAID array. If I
understand you, you're saying a RAID array with the table and transaction log
striped over several disks would run faster?
Thanks again!
DB
--
Doug
"Jeje" wrote:
> well... what's appends if the you only use the local drive? (log + data)
> the latency of an external drive is not good.
> and for the price of a drive, add 3 local drives, and, if you don't care
> about crash, put the 4 disks in Raid 0 (bad design, but good performance)
> or the 3 new drives in raid 0 for data and the current drive for the log.
> your bottleneck is at the disk level for sure.
>
> "Doug" <Doug@.discussions.microsoft.com> wrote in message
> news:055220B6-7A4B-4CF9-976A-71FEA412AECD@.microsoft.com...
> > Thank you very much for your comments ... very useful information and
> > ideas!
> >
> > Char is the requirement I'm stuck with. I'm working on a "worst case
> > scenario" so a char(8000) is pretty tough. I know this uses a lot more
> > storage space than Varchar and yes I'm only storing a single row per page.
> > It is a long story.
> >
> > I could switch from a numeric to an int, if that would make a tremendous
> > scaling difference.
> >
> > I'm not using any RAID or striped disk setup. The two disks I have are on
> > two separate controllers. One is a SATA controller and the other is a
> > Firewire 2.0 controller. Both disks have 16 MB of cache turned on. The
> > data
> > table is on one and the transaction log is on the other. When I look at
> > the
> > disk IO stats using Perfmon I don't see a lot of heavy disk IO activity.
> > There's virtually no Average Disk Queue on either one, and the number of
> > writing I/Os per second is pretty low. Perhaps my controllers are not
> > pushing the disks fast enough. I'm not sure. Would faster/striped disks
> > make a difference if Perfmon doesn't show a lot of disk activity? Are
> > there
> > a couple of other Perfmon settings I should look at? Note that I'm not
> > having any hard page faults either.
> >
> > The SQLIO is a good suggestion ... I'll try that as well.
> >
> > There is a lot of pre-allocated space in both the database and transaction
> > log files. No problems there.
> >
> > I also used the NOCOUNT ON and put the work into a single transaction ...
> > this did double my speed to get to the 1200 per second.
> >
> > Any more ideas out there? I suppose I am making a BIG assumption that
> > multiple threads writing into a single table on one disk/file would be
> > faster. Is this a poor assumption, unless the table is set up across
> > multiple physical disks/files? Would the relative scaling results be
> > similar
> > if the rowsize was much smaller, resulting in more rows per disk I/O? If
> > the
> > disk is overworked, wouldn't the Average Write Disk Queue length be really
> > high?
> >
> > I don't see any locking/blocking problems ... perhaps I'm missing
> > something
> > here.
> >
> > Thanks so much ...
> >
> > DB
> >
> >
> >
> > --
> > Doug
> >
> >
> > "Jeje" wrote:
> >
> >> well...
> >>
> >> have you try varchar instead of char?
> >> why a numeric and not an integer?
> >>
> >> separated disks is not enough.
> >> What is your disk config?
> >> do you use RAID 5 or Raid 0+1?
> >> how many disks on each controller? (do you use SCSI 15krpm disks?)
> >> have you some write cache on your controllers?
> >>
> >> does your files are an good initial size? when SQL server need to expand
> >> a
> >> file, this takes some ressources, so if your files are larger then your
> >> requirements you'll improve the performance.
> >>
> >> have you tested you disks subsystem using the SQLIOStress & SQLIO tools?
> >> Using 4 local IDE drives (SATA; no cache; Raid 0 (strip)) I reach 960io/s
> >> and 60MB/s (SQLIO results):
> >> sqlio -kW -b64 -frandom -LS
> >>
> >> with this result I achieve 3333 insert/sec using a script near like you
> >> (100
> >> 000 rows inserted in 30sec; no transactions)
> >> I insert into a table like yours (int, char(8000), rowversion)
> >> doing all the inserts into 1 transaction reduce the time of the process
> >> to
> >> 16sec.
> >> and finally adding the set nocount on option, I reduce the process to
> >> 7sec
> >> my log & database files are on the same disk
> >> Its an Opteron dual core server with 4Gb and SQL 2005 x64 version.
> >>
> >>
> >>
> >> "Doug" <Doug@.discussions.microsoft.com> wrote in message
> >> news:50F8E5B3-2B49-4572-B2AC-89A1E77A55CC@.microsoft.com...
> >> > Hello all ...
> >> >
> >> > This one has me a bit confused. I'd really appreciate any and all
> >> > suggestions. This may be pilot error on my part...
> >> >
> >> > My requirement is to have multiple programatic database applications,
> >> > each
> >> > on a separate database connection insert data into a single table as
> >> > fast
> >> > as
> >> > possible. My hope is that I'll see some scaling of rows-per-second
> >> > inserted
> >> > by adding additional processes, connections and simultaneous inserts.
> >> >
> >> > I'm not able to use BCP for this as the data is coming from an
> >> > application
> >> > style feed.
> >> >
> >> > I'm using Windows 2003 Server R2 and SQL Server 2000 SP4. I'm running
> >> > on
> >> > an
> >> > AMD 64 X2 Dual Core Processer (fast box) with 4 GB of memory. SQL
> >> > Server
> >> > tends to get up around 2.9GB at times ... it is the "only" service on
> >> > the
> >> > box.
> >> >
> >> > Here's my table:
> >> >
> >> > CREATE TABLE MyTable
> >> >
> >> > (myID numeric(10,0) not null,
> >> >
> >> > myChar char(8000) null,
> >> >
> >> > myRowversion rowversion)
> >> >
> >> > ON A_TABLE_FILEGROUP
> >> >
> >> > -- No indexes, just a simple heap. I know that each row will require
> >> > 8K
> >> > of
> >> >
> >> > -- storage ... about a page each.
> >> >
> >> > My database transaction log is on a different disk/file group on a
> >> > separate
> >> > controller.
> >> >
> >> > I used SQL Query Analyzer for my test. I opened up a single connection
> >> > and
> >> > ran a simple batch along the lines of:
> >> >
> >> > begin tran
> >> >
> >> > while @.i <= 10000
> >> >
> >> > begin
> >> >
> >> > insert into MyTable ...
> >> >
> >> > values ...
> >> >
> >> > select @.i = @.i + 1
> >> >
> >> > end
> >> >
> >> > commit
> >> >
> >> > OK, when I run the above and check the start and finish times I can
> >> > insert
> >> > about 1200 rows per second. I checked the disks with perf mon and I
> >> > have
> >> > very little Average Disk Write Queues on the table and transaction log
> >> > disks
> >> > (on avg less than 1.)
> >> >
> >> > Sooooo.....
> >> >
> >> > When I open up two additional sessions in SQL Query Analyzer ... and
> >> > kick
> >> > off three of the same loops "at the same time" I average ... a sum
> >> > total
> >> > of
> >> > about 1200 rows per second. No scaling at all. Note that I am running
> >> > all
> >> > three windows from within the same SQL Query Analyzer session, each
> >> > with a
> >> > different connection into the database.
> >> >
> >> > Again, the disks and the whole box are pretty much snoozing.
> >> >
> >> > Any suggestions I might try? I'm sure I'm missing something here.
> >> >
> >> > Thanks so much!!!!
> >> >
> >> > DB
> >> >
> >>
> >>
> >>
>
>|||--
Doug
"Jeje" wrote:
> well... what's appends if the you only use the local drive? (log + data)
> the latency of an external drive is not good.
> and for the price of a drive, add 3 local drives, and, if you don't care
> about crash, put the 4 disks in Raid 0 (bad design, but good performance)
> or the 3 new drives in raid 0 for data and the current drive for the log.
> your bottleneck is at the disk level for sure.
>
> "Doug" <Doug@.discussions.microsoft.com> wrote in message
> news:055220B6-7A4B-4CF9-976A-71FEA412AECD@.microsoft.com...
> > Thank you very much for your comments ... very useful information and
> > ideas!
> >
> > Char is the requirement I'm stuck with. I'm working on a "worst case
> > scenario" so a char(8000) is pretty tough. I know this uses a lot more
> > storage space than Varchar and yes I'm only storing a single row per page.
> > It is a long story.
> >
> > I could switch from a numeric to an int, if that would make a tremendous
> > scaling difference.
> >
> > I'm not using any RAID or striped disk setup. The two disks I have are on
> > two separate controllers. One is a SATA controller and the other is a
> > Firewire 2.0 controller. Both disks have 16 MB of cache turned on. The
> > data
> > table is on one and the transaction log is on the other. When I look at
> > the
> > disk IO stats using Perfmon I don't see a lot of heavy disk IO activity.
> > There's virtually no Average Disk Queue on either one, and the number of
> > writing I/Os per second is pretty low. Perhaps my controllers are not
> > pushing the disks fast enough. I'm not sure. Would faster/striped disks
> > make a difference if Perfmon doesn't show a lot of disk activity? Are
> > there
> > a couple of other Perfmon settings I should look at? Note that I'm not
> > having any hard page faults either.
> >
> > The SQLIO is a good suggestion ... I'll try that as well.
> >
> > There is a lot of pre-allocated space in both the database and transaction
> > log files. No problems there.
> >
> > I also used the NOCOUNT ON and put the work into a single transaction ...
> > this did double my speed to get to the 1200 per second.
> >
> > Any more ideas out there? I suppose I am making a BIG assumption that
> > multiple threads writing into a single table on one disk/file would be
> > faster. Is this a poor assumption, unless the table is set up across
> > multiple physical disks/files? Would the relative scaling results be
> > similar
> > if the rowsize was much smaller, resulting in more rows per disk I/O? If
> > the
> > disk is overworked, wouldn't the Average Write Disk Queue length be really
> > high?
> >
> > I don't see any locking/blocking problems ... perhaps I'm missing
> > something
> > here.
> >
> > Thanks so much ...
> >
> > DB
> >
> >
> >
> > --
> > Doug
> >
> >
> > "Jeje" wrote:
> >
> >> well...
> >>
> >> have you try varchar instead of char?
> >> why a numeric and not an integer?
> >>
> >> separated disks is not enough.
> >> What is your disk config?
> >> do you use RAID 5 or Raid 0+1?
> >> how many disks on each controller? (do you use SCSI 15krpm disks?)
> >> have you some write cache on your controllers?
> >>
> >> does your files are an good initial size? when SQL server need to expand
> >> a
> >> file, this takes some ressources, so if your files are larger then your
> >> requirements you'll improve the performance.
> >>
> >> have you tested you disks subsystem using the SQLIOStress & SQLIO tools?
> >> Using 4 local IDE drives (SATA; no cache; Raid 0 (strip)) I reach 960io/s
> >> and 60MB/s (SQLIO results):
> >> sqlio -kW -b64 -frandom -LS
> >>
> >> with this result I achieve 3333 insert/sec using a script near like you
> >> (100
> >> 000 rows inserted in 30sec; no transactions)
> >> I insert into a table like yours (int, char(8000), rowversion)
> >> doing all the inserts into 1 transaction reduce the time of the process
> >> to
> >> 16sec.
> >> and finally adding the set nocount on option, I reduce the process to
> >> 7sec
> >> my log & database files are on the same disk
> >> Its an Opteron dual core server with 4Gb and SQL 2005 x64 version.
> >>
> >>
> >>
> >> "Doug" <Doug@.discussions.microsoft.com> wrote in message
> >> news:50F8E5B3-2B49-4572-B2AC-89A1E77A55CC@.microsoft.com...
> >> > Hello all ...
> >> >
> >> > This one has me a bit confused. I'd really appreciate any and all
> >> > suggestions. This may be pilot error on my part...
> >> >
> >> > My requirement is to have multiple programatic database applications,
> >> > each
> >> > on a separate database connection insert data into a single table as
> >> > fast
> >> > as
> >> > possible. My hope is that I'll see some scaling of rows-per-second
> >> > inserted
> >> > by adding additional processes, connections and simultaneous inserts.
> >> >
> >> > I'm not able to use BCP for this as the data is coming from an
> >> > application
> >> > style feed.
> >> >
> >> > I'm using Windows 2003 Server R2 and SQL Server 2000 SP4. I'm running
> >> > on
> >> > an
> >> > AMD 64 X2 Dual Core Processer (fast box) with 4 GB of memory. SQL
> >> > Server
> >> > tends to get up around 2.9GB at times ... it is the "only" service on
> >> > the
> >> > box.
> >> >
> >> > Here's my table:
> >> >
> >> > CREATE TABLE MyTable
> >> >
> >> > (myID numeric(10,0) not null,
> >> >
> >> > myChar char(8000) null,
> >> >
> >> > myRowversion rowversion)
> >> >
> >> > ON A_TABLE_FILEGROUP
> >> >
> >> > -- No indexes, just a simple heap. I know that each row will require
> >> > 8K
> >> > of
> >> >
> >> > -- storage ... about a page each.
> >> >
> >> > My database transaction log is on a different disk/file group on a
> >> > separate
> >> > controller.
> >> >
> >> > I used SQL Query Analyzer for my test. I opened up a single connection
> >> > and
> >> > ran a simple batch along the lines of:
> >> >
> >> > begin tran
> >> >
> >> > while @.i <= 10000
> >> >
> >> > begin
> >> >
> >> > insert into MyTable ...
> >> >
> >> > values ...
> >> >
> >> > select @.i = @.i + 1
> >> >
> >> > end
> >> >
> >> > commit
> >> >
> >> > OK, when I run the above and check the start and finish times I can
> >> > insert
> >> > about 1200 rows per second. I checked the disks with perf mon and I
> >> > have
> >> > very little Average Disk Write Queues on the table and transaction log
> >> > disks
> >> > (on avg less than 1.)
> >> >
> >> > Sooooo.....
> >> >
> >> > When I open up two additional sessions in SQL Query Analyzer ... and
> >> > kick
> >> > off three of the same loops "at the same time" I average ... a sum
> >> > total
> >> > of
> >> > about 1200 rows per second. No scaling at all. Note that I am running
> >> > all
> >> > three windows from within the same SQL Query Analyzer session, each
> >> > with a
> >> > different connection into the database.
> >> >
> >> > Again, the disks and the whole box are pretty much snoozing.
> >> >
> >> > Any suggestions I might try? I'm sure I'm missing something here.
> >> >
> >> > Thanks so much!!!!
> >> >
> >> > DB
> >> >
> >>
> >>
> >>
>
>
