Friday, March 23, 2012

Multiple files/filegroups

Hi,
We have a database which has reached our disk capacity of 340GB and we
are looking at our options on how to resolve the disk space issue. The
problem is I'm not too familiar with setting up additional filegroups
and/or secondary files, or how they function.
The database consists of only one table which stores all our product
hi-res photography, each image being about 20MB. The table is not
heavily accessed - either read or write - but it is our central storage
location for our studio photographs.
We could delete all images for discontinued product, but we would then
have to rely on a tape backup if we needed access to them in the
future.
Other options I think we have are:
1. Create a new database on a separate disk and have a scheduled
archive task to move any discontinued product images to a table in this
database. Place this DB on a separate disk. This means our 'live' db
would be much smaller and quicker to backup
2. Create a new filegroup for existing DB and create an archive table
on this group (file on different disk). Archive periodically and use a
view to join the two tables. Only perform regular database backups on
the 'live' filegroup
3. Create a secondary file on the same filegroup (primary) as the
'live' table and let the system start to fill this up with data as new
records are added. Downside of this is the backup size will not reduce.
On point 3. I'm not sure how this works - will the primary file start
to reduce in size, or will it remain at capacity and the secondary file
will start to increase in size? Is there something I can do to start
balancing the file sizes?
Also would be interested to know opinions on what the best approach
would be and if I have any misconceptions about the above options (or
if I've missed any)!
Thanks,
Andrew Duncan

Some thoughts:

> We could delete all images for discontinued product, but we would then
> have to rely on a tape backup if we needed access to them in the
> future.
Sounds like a lot of manual work.

> 1. Create a new database on a separate disk and have a scheduled
> archive task to move any discontinued product images to a table in this
> database. Place this DB on a separate disk. This means our 'live' db
> would be much smaller and quicker to backup
This is probably what I would look for, but it requires that your code
adapts for this.
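A minimal sketch of such an archive job (all names here are made up for
illustration: LiveDb, ArchiveDb, dbo.ProductImages with an ImageId key and a
Discontinued flag; adjust to your schema). With ~20MB rows you would probably
also want to move them in batches rather than all at once:

```sql
-- Move discontinued images to the archive database, then remove them
-- from the live table. The transaction keeps the two tables consistent
-- if anything fails halfway.
BEGIN TRANSACTION;

INSERT INTO ArchiveDb.dbo.ProductImages (ImageId, Photo)
SELECT ImageId, Photo
FROM   LiveDb.dbo.ProductImages
WHERE  Discontinued = 1;

DELETE FROM LiveDb.dbo.ProductImages
WHERE  Discontinued = 1;

COMMIT TRANSACTION;
```

Run it from a scheduled SQL Server Agent job and the live database stays
small on its own.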

> 2. Create a new filegroup for existing DB and create an archive table
> on this group (file on different disk). Archive periodically and use a
> view to join the two tables. Only perform regular database backups on
> the 'live' filegroup
You still need to adapt your code, but you now have a more complex backup
strategy compared to 1. If you are on 2005, you can archive, set the
filegroup to read-only, and back up the filegroup. If you then restore that
filegroup backup, you don't need to restore all subsequent log backups. In
2000, you *do* need to restore all subsequent log backups, even if the
filegroup has been read-only since the filegroup backup was performed.
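The mechanics look roughly like this (database, file, and path names are
made up; size the archive file for your data):

```sql
-- New filegroup with its own file on the second disk.
ALTER DATABASE YourDb ADD FILEGROUP ArchiveFG;
ALTER DATABASE YourDb
    ADD FILE (NAME = ArchiveData,
              FILENAME = 'E:\Data\YourDb_Archive.ndf',
              SIZE = 51200MB)
    TO FILEGROUP ArchiveFG;

-- Archive table, with its image data, placed on that filegroup.
CREATE TABLE dbo.ProductImagesArchive (
    ImageId int NOT NULL PRIMARY KEY,
    Photo   varbinary(max) NOT NULL
) ON ArchiveFG TEXTIMAGE_ON ArchiveFG;

-- On 2005: after archiving a batch, freeze the filegroup and take a
-- one-off filegroup backup.
ALTER DATABASE YourDb MODIFY FILEGROUP ArchiveFG READ_ONLY;
BACKUP DATABASE YourDb FILEGROUP = 'ArchiveFG'
    TO DISK = 'F:\Backup\YourDb_ArchiveFG.bak';
```

On 2000 the column would be image rather than varbinary(max), and, as noted
above, the read-only filegroup backup shortcut doesn't help you there.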

> 3. Create a secondary file on the same filegroup (primary) as the
> 'live' table and let the system start to fill this up with data as new
> records are added. Downside of this is the backup size will not reduce.
> On point 3. I'm not sure how this works - will the primary file start
> to reduce in size, or will it remain at capacity and the secondary file
> will start to increase in size? Is there something I can do to start
> balancing the file sizes?
Space will be used from the files according to the free space in each file.
So you want to pre-allocate the size of the new file, and then shrink the
old file to push data over to the new file (and then possibly grow the old
file again so you have some free space).
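For example (names and sizes are made up; pick a target that leaves the old
file the free space you want):

```sql
-- Add a pre-sized second file to the PRIMARY filegroup on the new disk.
ALTER DATABASE YourDb
    ADD FILE (NAME = Data2,
              FILENAME = 'E:\Data\YourDb_2.ndf',
              SIZE = 102400MB);

-- Shrinking the original file relocates its pages into the new file,
-- since that is where the free space in the filegroup now is.
DBCC SHRINKFILE (YourDb_Data, 204800);  -- target size for the old file, in MB
```

DBCC SHRINKFILE with the EMPTYFILE option would move everything out of the
old file, but with 340GB of data a partial shrink, done in steps, is kinder
to the server.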
I'd go for 1 or 2. If you go for 2, you will always get a snapshot of all of
the data when you restore, but backup/restore is a bit more complex. 2
doesn't really buy you anything if you are on 2000.
If you go for 1, you can end up with data from different points in time when
you restore (as you have two databases).
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
<andrew@.duncanhcmc.com> wrote in message
news:1163749909.367437.223570@.e3g2000cwe.googlegroups.com...

Hi Tibor,
Thanks for the advice. Option 1 does seem the simplest, so I'll be
opting for that.
Rgds,
Andrew
