I've got a performance issue.
Here's roughly what the SP (let's call it SP_A) does:
Step 1: SP_A calls another SP (let's call it SP_B) and stores the output in a variable.
Step 2: SP_B runs a SELECT statement that returns 1 value.
Step 3: SP_A uses this value as a parameter in a SELECT statement.
Step 4: The result of SP_A is the result of that SELECT statement (always 744 rows).
All tables used in SP_A and SP_B are temp tables.
Total performance of SP_A is between 0.090 and 0.140 seconds.
The problem is that this SP is called 180 times from outside SQL Server, which means the total processing time is somewhere between 21 and 25 seconds.
When I move the entire processing to within SQL Server I gain only 2 seconds, so I lose about 2 seconds connecting to the database 180 times.
Can someone give me some pointers on where to look for performance wins?
If you like, I can add the SPs.
Regards,
Sander
Call it one time instead of 180 times =;o)
But seriously, it does indeed look like a 'looping' symptom.
Your problem is 'how do I tune 180 calls', not 'how do I tune this one single procedure', if I understand it right.
Have you considered - if possible - making fewer calls? Ideal would probably be just one instead of 180. It's a bit hard to come up with something tangible without knowing more. Why is it 180 calls? Are they all parts of something that is complete once all 180 are done?
/Kenneth
|||"Your problem is 'how do I tune 180 calls', not 'how do I tune this one single procedure', if I understand it right."
Completely correct! And I cannot make fewer calls.
180 = 15 years * 12 months.
I'm now working on filling several tables. These tables would contain the output of SP_A (in normalized form). That way the users would only need a SELECT for the dates required... but I do not know if that will work.
So I'm working on this workaround on the side.
Do you know what possibilities I've got for tuning the 180 calls?
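As a sketch of that precompute idea (the table and column names below are my own invention for illustration): fill a table once with SP_A's output for all 180 months, then users query it by date range.

```sql
-- Hypothetical cache table holding SP_A's output for every month,
-- keyed on the same parameters SP_A takes (all names are assumed).
CREATE TABLE dbo.HourlyFactorCache (
    MarketID      int            NOT NULL,
    ProductTypeID int            NOT NULL,
    [Date]        datetime       NOT NULL,
    [Hour]        int            NOT NULL,
    Flag          int            NOT NULL,
    Factor        numeric(25,20) NOT NULL,
    PRIMARY KEY (MarketID, ProductTypeID, [Date], [Hour])
)

-- Users then need only one range query instead of 180 proc calls:
SELECT [Date], [Hour], Flag, Factor
FROM dbo.HourlyFactorCache
WHERE MarketID = @InMarketID
  AND ProductTypeID = @InProductTypeID
  AND [Date] >= @StartDate
  AND [Date] < DATEADD(dd, 1, @EndDate)
ORDER BY [Date], [Hour]
```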
|||If you could provide some details about what exactly your SP is doing, what you are calculating in general, and maybe the SP code and the caller code too, that would be nice - we could be more specific.
|||Here is the source code. By the way: SP_A and SP_B cannot be combined (technically they can, of course...).
This is SP_A (uspRetrieveHourlyFactor)
ALTER PROCEDURE uspRetrieveHourlyFactor
@StartDate2 varchar(10),
@EndDate2 varchar(10),
@InMarket nvarchar(50),
@InProductType int,
@InWeekDay int,
@Normalise bit
AS
SET NOCOUNT ON
DECLARE @StartDate as datetime
DECLARE @EndDate as datetime
DECLARE @InProductTypeID as int
DECLARE @InMarketID as int
DECLARE @RC as numeric(25,20)
DECLARE @CurrDate as datetime
DECLARE @WeightedAverage AS numeric(25,20)
SELECT @InMarketID = ...WHERE MarketPlace = @InMarket
SELECT @InProductTypeID = ...WHERE ProductTypeID = @InProductType
IF @Normalise = 0
BEGIN
--No normalisation required!
SET @WeightedAverage = 1
END
ELSE
BEGIN
EXEC @RC = uspCalcWeightedAverage @StartDate2, @EndDate2, @InMarket, @InProductType, 1, @WeightedAverage OUTPUT
END
SET @StartDate = CAST(@StartDate2 as datetime)
SET @EndDate = CAST(@EndDate2 as datetime)
SET DATEFIRST 1
CREATE TABLE #DatesBetweenInterval ([Date] [datetime] NULL)
SET @CurrDate = @StartDate
WHILE @CurrDate < dateadd(hh,24,@EndDate)
BEGIN
INSERT INTO #DatesBetweenInterval VALUES (@CurrDate)
SET @CurrDate = dateadd(hh,1,@CurrDate)
END
SELECT
DBI.DATE [DATE],
[PDF].[HOUR] [HOUR],
FLAG [FLAG],
ISNULL((HHF.Factor * flag) / @WeightedAverage,0.0) [FACTOR]
FROM ##TBL_PRODUCTDEFS PDF
INNER JOIN #DATESBETWEENINTERVAL DBI ON DATEPART(HH, [DBI].[DATE]) = [PDF].[HOUR] - 1
INNER JOIN ##tbl_historichourlyfactors HHF ON DATEPART(dw, DATEPART(D,[DBI].[DATE])) = [HHF].[DayID]
AND [PDF].[HOUR] = [HHF].[HOUR]
AND DATEPART(M,[DBI].[DATE]) = [HHF].[Month]
WHERE PDF.MARKETID = @InMarketID
AND PDF.PRODUCTTYPEID = @InProductTypeID
AND
(([PDF].[WD-WE] = 1 AND DATEPART(dw, [DBI].[DATE] ) <= 5) OR
([PDF].[WD-WE] = 0 AND DATEPART(dw, [DBI].[DATE] ) > 5)
)
AND HHF.MARKETID = @InMarketID
ORDER BY DBI.DATE
DROP TABLE #DatesBetweenInterval
This is SP_B (uspCalcWeightedAverage)
ALTER PROCEDURE dbo.uspCalcWeightedAverage
@StartDate2 varchar(10),
@EndDate2 varchar(10),
@InMarket nvarchar(50),
@InProductType int,
@InWeekDay int,
@WeightedAverage numeric(25,20) OUTPUT
AS
SET NOCOUNT ON
DECLARE @StartDate as datetime
DECLARE @EndDate as datetime
DECLARE @InProductTypeID as int
DECLARE @InMarketID as int
DECLARE @CurrDate as datetime
DECLARE @helpfloat as numeric(25,20)
--Get IDs for selected parameters
SELECT @InMarketID = ...WHERE MarketPlace = @InMarket
SELECT @InProductTypeID = ...WHERE ProductTypeID = @InProductType
SET @StartDate = CAST(@StartDate2 as datetime)
SET @EndDate = CAST(@EndDate2 as datetime)
SET DATEFIRST 1
--Create temp table
CREATE TABLE #DatesBetweenInterval ([Date] [datetime] NULL)
SET @CurrDate = @StartDate
WHILE @CurrDate < dateadd(hh,24,@EndDate)
BEGIN
INSERT INTO #DatesBetweenInterval VALUES (@CurrDate)
SET @CurrDate = dateadd(hh,1,@CurrDate)
END
SELECT @WeightedAverage = (SUM(HHF.FACTOR) / COUNT(PDF.FLAG))
FROM
##TBL_PRODUCTDEFS PDF
INNER JOIN #DATESBETWEENINTERVAL DBI ON DATEPART(HH, [DBI].[DATE]) = [PDF].[HOUR]
INNER JOIN ##tbl_historichourlyfactors HHF ON DATEPART(D,DBI.DATE) = HHF.DayID
AND [PDF].[HOUR] = [HHF].[HOUR]
AND DATEPART(M,DBI.DATE) = [HHF].[Month]
WHERE
PDF.MARKETID = @InMarketID
AND PDF.PRODUCTTYPEID = @InProductTypeID
AND --[PDF].[WD-WE] = @InWeekDay
(([PDF].[WD-WE] = 1 AND DATEPART(dw, DBI.DATE ) <= 5) OR
([PDF].[WD-WE] = 0 AND DATEPART(dw, DBI.DATE ) > 5)
)
AND HHF.MARKETID = @InMarketID
AND PDF.FLAG = 1
GROUP BY FLAG
DROP TABLE #DatesBetweenInterval
|||
SDerix wrote:
Completely correct! And I cannot make fewer calls.
180 = 15 years * 12 months. I'm now working on filling several tables. These tables would contain the output of SP_A (in normalized form). That way the users would only need a SELECT for the dates required... but I do not know if that will work.
So I'm working on this workaround on the side. Do you know what possibilities I've got for tuning the 180 calls?
Hmmmm... I'm still not convinced that you have to do 180 calls, even though I don't doubt your word on it =;o)
On the other hand, it looks more or less like the overall result is grouped by year and month, so it may be doable all at once anyway... at least in theory. Depending on the data volume, hardware may constrain performance if resources aren't available for the 'full' set.
It seems like the proc itself isn't really the problem, since 0.14 sec exec time seems quite acceptable. Though 180 * 0.14 = 25.2 seconds... and that's the problem.
Would it be possible to rethink the current 'single-month-at-a-time' strategy into something that involves the entire range all at once?
Perhaps you could consider replacing the temporary date-hour table that gets created and thrown away 360 times each run with a permanent table to join against instead?
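A minimal sketch of that, assuming the 15-year window is known up front (the dates and table name below are placeholders): build the hour-level rows once, then filter them per call instead of re-inserting them.

```sql
-- Hypothetical permanent hour table, built once instead of 360 times per run.
CREATE TABLE dbo.CalendarHours ([Date] datetime NOT NULL PRIMARY KEY)

DECLARE @d datetime
SET @d = '19900101'            -- placeholder start of the 15-year window
WHILE @d < '20050101'          -- placeholder end
BEGIN
    INSERT INTO dbo.CalendarHours VALUES (@d)
    SET @d = DATEADD(hh, 1, @d)
END

-- In SP_A/SP_B the temp-table join then becomes a range filter, e.g.:
-- INNER JOIN dbo.CalendarHours DBI
--     ON DBI.[Date] >= @StartDate
--    AND DBI.[Date] < DATEADD(hh, 24, @EndDate)
--    AND DATEPART(hh, DBI.[Date]) = PDF.[Hour] - 1
```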
SP_A ends with an ORDER BY - is that necessary?
(It would only serve its ordering purpose if the result is sent to the client, or inserted into a table with some other ordering attribute.)
In any case, I believe that the best tuning would be to lower the number of calls from 180 to some lower number, but that would probably involve some rethinking/redesigning of what these procs do...
So... why just a single month each call for a 15 year period? Would it be possible to produce the same result for all 12 months within a year? Or for all months and years in just a single call?
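One possible shape for such a single call - purely a sketch, assuming a permanent hour-level date table (named dbo.CalendarHours here, which would have to exist) and assuming the weighted average is meant per calendar month; the weekday filter from the original procs is omitted for brevity:

```sql
-- Hypothetical set-based version of uspCalcWeightedAverage for the full range:
-- all 180 monthly averages from one grouped query instead of 180 proc calls.
SELECT DATEPART(yy, DBI.[Date]) AS [Year],
       DATEPART(mm, DBI.[Date]) AS [Month],
       SUM(HHF.Factor) / COUNT(PDF.Flag) AS WeightedAverage
FROM ##TBL_PRODUCTDEFS PDF
INNER JOIN dbo.CalendarHours DBI            -- assumed permanent hour table
        ON DATEPART(hh, DBI.[Date]) = PDF.[Hour]
INNER JOIN ##tbl_historichourlyfactors HHF
        ON DATEPART(d, DBI.[Date]) = HHF.DayID
       AND PDF.[Hour] = HHF.[Hour]
       AND DATEPART(m, DBI.[Date]) = HHF.[Month]
WHERE PDF.MarketID = @InMarketID
  AND PDF.ProductTypeID = @InProductTypeID
  AND HHF.MarketID = @InMarketID
  AND PDF.Flag = 1
GROUP BY DATEPART(yy, DBI.[Date]), DATEPART(mm, DBI.[Date])
```

The per-month results could then be joined back against the detail rows in one pass, rather than calling SP_B once per month.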
/Kenneth
|||I think you can do without the temp table #DatesBetweenInterval and use a BETWEEN clause on the input start and end dates instead.
Did you try creating indexes on the global temp tables?
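For instance, indexes matching the join and filter columns used in the procs might look like this (column choices are inferred from the posted code, so treat this as a starting point, not a tuned design):

```sql
-- Hypothetical indexes for the global temp tables, covering the
-- MarketID/ProductTypeID filters and the Hour/DayID/Month join keys.
CREATE INDEX IX_ProductDefs
    ON ##TBL_PRODUCTDEFS (MarketID, ProductTypeID, [Hour])

CREATE INDEX IX_HistoricHourlyFactors
    ON ##tbl_historichourlyfactors (MarketID, [Month], DayID, [Hour])
```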