Batch job taking longer than expected
Hi All,
We have a scheduled batch job that runs at 11:30 pm daily.
When users tested it in the UAT environment, it took 56 hours to complete. But now, when they run the same batch job in the production system, it takes more than 80 hours.
FYI: the production server has far more RAM (approx. 40 GB) than the UAT server (4 GB).
Can anyone please help us investigate?
Thanks in advance.
Please post:
The exact version of Oracle (10gR2 is not a version, 10.2.0.5 is a version).
The platform and OS you are using.
Any differences in init.ora parameters (including double underscore parameters you see in create pfile from spfile).
Any differences in kernel parameters.
Any differences in hardware (including network). For that matter, what hardware, how is swap defined.
Any differences in how the data was originally loaded (for example, production data entered over time online, UAT imported).
Any differences in what else is running.
How and when you've collected statistics.
It's not even twice as long, so it could be a relatively obscure difference that is your bottleneck. You have more ram, so it could be something like, you are cpu bound because you are thrashing a larger SGA, and not letting the cpu service i/o when it needs to. Statspack or AWR may give a clue about that, as can OS tools.
Remember, you can see what is happening on your system; we can't. So you have to tell us for us to help you. Cut and paste is more believable than retyping. Use the code tag before and after any output you post.
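To make the parameter comparison above concrete, here is a minimal sketch, assuming SQL*Plus access and the standard V$PARAMETER dictionary view (the spool file names are just examples); run it on both UAT and production and diff the spool files:

```sql
-- Sketch: list non-default init.ora parameters so the two systems can be diffed.
SET PAGESIZE 0 LINESIZE 200 TRIMSPOOL ON
SPOOL params_prod.txt
SELECT name || '=' || value
FROM   v$parameter
WHERE  isdefault = 'FALSE'
ORDER  BY name;
SPOOL OFF
-- Alternatively: CREATE PFILE='/tmp/init_prod.ora' FROM SPFILE;
-- the pfile also exposes the double underscore parameters mentioned above.
```

Spool to a second file on the other system and diff the two; the pfile route needs SYSDBA privileges.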
Similar Messages
-
Urgent: compression job taking a long time
Does anybody know how long compression of a cube should take for around 15,000 records? For us it is taking around 3-4 hours. How can we finish it earlier?
We have around 1,900 requests in the cube, each with around 10,000 records.
So at this rate it will be very time-consuming and will degrade the performance of other loads.
Please give your suggestions.
Thanks in advance.
Hi Sonika,
Please find my answer below each of your questions:
Please check the
1. Check the availability of all the background processes in SM50.
Ans: only one job is running.
2. Check ST04 -> Detail Analysis Menu -> Oracle Session for any locked memory.
Ans: no locked memory.
3. Check in SM12 whether your cube is locked.
Ans: not locked.
4. Check in DB12 whether any backup is running (if you are authorized).
Ans: no backup is running.
5. Check the tablespaces in DB02.
Ans: which tablespace do you mean, i.e. which table name? -
hi Team,
We are running SLD on separate hardware with Windows 2003. For the last couple of months we have been observing that a full GC takes longer than it used to. What are the possible parameters and configuration items I need to look at to fine-tune this issue?
Any good reference docs on fine-tuning GC would be much appreciated.
Thanks
Sekhar
Hi,
Hope this might help.
http://www.ibm.com/developerworks/library/i-gctroub/
http://my.safaribooksonline.com/0596003773/javapt2-CHP-3-SECT-4#X2ludGVybmFsX1NlY3Rpb25Db250ZW50P3htbGlkPTA1OTYwMDM3NzMvamF2YXB0Mi1DSFAtMy1TRUNULTE=
http://java.sun.com/docs/hotspot/gc5.0/gc_tuning_5.html
Also check out the pdfs
https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/7fdca26e-0601-0010-369d-b3fc87d3a2d9
https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/22baa590-0201-0010-26a3-f1cfa2469973
Rgds
joel -
Why is this query taking much longer than expected?
Hi,
I need the experts' support on the issue below:
Why is this query taking much longer than expected? Sometimes I get a connection timeout error. Is there a better way to achieve the result in the shortest time? Please find the DDL and DML below:
DDL
BHDCollections
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[BHDCollections](
[BHDCollectionid] [bigint] IDENTITY(1,1) NOT NULL,
[GroupMemberid] [int] NOT NULL,
[BHDDate] [datetime] NOT NULL,
[BHDShift] [varchar](10) NULL,
[SlipValue] [decimal](18, 3) NOT NULL,
[ProcessedValue] [decimal](18, 3) NOT NULL,
[BHDRemarks] [varchar](500) NULL,
[Createdby] [varchar](50) NULL,
[Createdon] [datetime] NULL,
CONSTRAINT [PK_BHDCollections] PRIMARY KEY CLUSTERED
(
[BHDCollectionid] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
BHDCollectionsDet
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[BHDCollectionsDet](
[CollectionDetailid] [bigint] IDENTITY(1,1) NOT NULL,
[BHDCollectionid] [bigint] NOT NULL,
[Currencyid] [int] NOT NULL,
[Denomination] [decimal](18, 3) NOT NULL,
[Quantity] [int] NOT NULL,
CONSTRAINT [PK_BHDCollectionsDet] PRIMARY KEY CLUSTERED
(
[CollectionDetailid] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
Banks
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[Banks](
[Bankid] [int] IDENTITY(1,1) NOT NULL,
[Bankname] [varchar](50) NOT NULL,
[Bankabbr] [varchar](50) NULL,
[BankContact] [varchar](50) NULL,
[BankTel] [varchar](25) NULL,
[BankFax] [varchar](25) NULL,
[BankEmail] [varchar](50) NULL,
[BankActive] [bit] NULL,
[Createdby] [varchar](50) NULL,
[Createdon] [datetime] NULL,
CONSTRAINT [PK_Banks] PRIMARY KEY CLUSTERED
(
[Bankid] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
Groupmembers
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[GroupMembers](
[GroupMemberid] [int] IDENTITY(1,1) NOT NULL,
[Groupid] [int] NOT NULL,
[BAID] [int] NOT NULL,
[Createdby] [varchar](50) NULL,
[Createdon] [datetime] NULL,
CONSTRAINT [PK_GroupMembers] PRIMARY KEY CLUSTERED
(
[GroupMemberid] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
ALTER TABLE [dbo].[GroupMembers] WITH CHECK ADD CONSTRAINT [FK_GroupMembers_BankAccounts] FOREIGN KEY([BAID])
REFERENCES [dbo].[BankAccounts] ([BAID])
GO
ALTER TABLE [dbo].[GroupMembers] CHECK CONSTRAINT [FK_GroupMembers_BankAccounts]
GO
ALTER TABLE [dbo].[GroupMembers] WITH CHECK ADD CONSTRAINT [FK_GroupMembers_Groups] FOREIGN KEY([Groupid])
REFERENCES [dbo].[Groups] ([Groupid])
GO
ALTER TABLE [dbo].[GroupMembers] CHECK CONSTRAINT [FK_GroupMembers_Groups]
BankAccounts
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[BankAccounts](
[BAID] [int] IDENTITY(1,1) NOT NULL,
[CustomerID] [int] NOT NULL,
[Locationid] [varchar](25) NOT NULL,
[Bankid] [int] NOT NULL,
[BankAccountNo] [varchar](50) NOT NULL,
CONSTRAINT [PK_BankAccounts] PRIMARY KEY CLUSTERED
(
[BAID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
ALTER TABLE [dbo].[BankAccounts] WITH CHECK ADD CONSTRAINT [FK_BankAccounts_Banks] FOREIGN KEY([Bankid])
REFERENCES [dbo].[Banks] ([Bankid])
GO
ALTER TABLE [dbo].[BankAccounts] CHECK CONSTRAINT [FK_BankAccounts_Banks]
GO
ALTER TABLE [dbo].[BankAccounts] WITH CHECK ADD CONSTRAINT [FK_BankAccounts_Locations1] FOREIGN KEY([Locationid])
REFERENCES [dbo].[Locations] ([Locationid])
GO
ALTER TABLE [dbo].[BankAccounts] CHECK CONSTRAINT [FK_BankAccounts_Locations1]
Currency
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[Currency](
[Currencyid] [int] IDENTITY(1,1) NOT NULL,
[CurrencyISOCode] [varchar](20) NOT NULL,
[CurrencyCountry] [varchar](50) NULL,
[Currency] [varchar](50) NULL,
CONSTRAINT [PK_Currency] PRIMARY KEY CLUSTERED
(
[Currencyid] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
CurrencyDetails
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[CurrencyDetails](
[CurDenid] [int] IDENTITY(1,1) NOT NULL,
[Currencyid] [int] NOT NULL,
[Denomination] [decimal](15, 3) NOT NULL,
[DenominationType] [varchar](25) NOT NULL,
CONSTRAINT [PK_CurrencyDetails] PRIMARY KEY CLUSTERED
(
[CurDenid] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
QUERY
WITH TEMP_TABLE AS
(
SELECT 0 AS COINS, BHDCollectionsDet.Quantity AS BN, BHDCollections.BHDDate AS CollectionDate, BHDCollectionsDet.Currencyid,
(BHDCollections.BHDCollectionid) AS DSLIPS, Banks.Bankname
FROM BHDCollections INNER JOIN
BHDCollectionsDet ON BHDCollections.BHDCollectionid = BHDCollectionsDet.BHDCollectionid INNER JOIN
GroupMembers ON BHDCollections.GroupMemberid = GroupMembers.GroupMemberid INNER JOIN
BankAccounts ON GroupMembers.BAID = BankAccounts.BAID INNER JOIN
Currency ON BHDCollectionsDet.Currencyid = Currency.Currencyid INNER JOIN
CurrencyDetails ON Currency.Currencyid = CurrencyDetails.Currencyid INNER JOIN
Banks ON BankAccounts.Bankid = Banks.Bankid
GROUP BY BHDCollectionsDet.Quantity, BHDCollections.BHDDate, BankAccounts.Bankid, BHDCollectionsDet.Currencyid, CurrencyDetails.DenominationType,
CurrencyDetails.Denomination, BHDCollectionsDet.Denomination, Banks.Bankname,BHDCollections.BHDCollectionid
HAVING (BHDCollections.BHDDate BETWEEN @FromDate AND @ToDate) AND (BankAccounts.Bankid = @Bankid) AND (CurrencyDetails.DenominationType = 'Currency') AND
(CurrencyDetails.Denomination = BHDCollectionsDet.Denomination)
UNION ALL
SELECT BHDCollectionsDet.Quantity AS COINS, 0 AS BN, BHDCollections.BHDDate AS CollectionDate, BHDCollectionsDet.Currencyid,
(BHDCollections.BHDCollectionid) AS DSLIPS, Banks.Bankname
FROM BHDCollections INNER JOIN
BHDCollectionsDet ON BHDCollections.BHDCollectionid = BHDCollectionsDet.BHDCollectionid INNER JOIN
GroupMembers ON BHDCollections.GroupMemberid = GroupMembers.GroupMemberid INNER JOIN
BankAccounts ON GroupMembers.BAID = BankAccounts.BAID INNER JOIN
Currency ON BHDCollectionsDet.Currencyid = Currency.Currencyid INNER JOIN
CurrencyDetails ON Currency.Currencyid = CurrencyDetails.Currencyid INNER JOIN
Banks ON BankAccounts.Bankid = Banks.Bankid
GROUP BY BHDCollectionsDet.Quantity, BHDCollections.BHDDate, BankAccounts.Bankid, BHDCollectionsDet.Currencyid, CurrencyDetails.DenominationType,
CurrencyDetails.Denomination, BHDCollectionsDet.Denomination, Banks.Bankname,BHDCollections.BHDCollectionid
HAVING (BHDCollections.BHDDate BETWEEN @FromDate AND @ToDate) AND (BankAccounts.Bankid = @Bankid) AND (CurrencyDetails.DenominationType = 'COIN') AND
(CurrencyDetails.Denomination = BHDCollectionsDet.Denomination)),
TEMP_TABLE2 AS
(
SELECT CollectionDate, Bankname, DSLIPS AS DSLIPS, SUM(BN) AS BN, SUM(COINS) AS COINS FROM TEMP_TABLE GROUP BY CollectionDate, DSLIPS, Bankname
)
SELECT CollectionDate,Bankname,count(DSLIPS) AS DSLIPS,sum(BN) AS BN,sum(COINS) AS coins FROM TEMP_TABLE2 Group By CollectionDate,Bankname
HAVING COUNT(DSLIPS)<>0;
Without seeing an execution plan of the query it is hard to suggest something useful. Try inserting the result of the UNION ALL into a temporary table and then performing the aggregation on that table, not on a CTE.
Just
SELECT CollectionDate,Bankname,DSLIPS AS DSLIPS,SUM(BN) AS BN,SUM(COINS)AS COINS FROM
#tmp Group By CollectionDate,DSLIPS,Bankname
HAVING COUNT(DSLIPS)<>0;
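Expanding on that suggestion, here is a hedged sketch of the full rewrite, using the table and column names from the DDL above. It materializes one pass over the joins into a temp table, moves the row-level filters from HAVING into WHERE so they are applied before grouping, and collapses the two UNION ALL branches (which differ only in the DenominationType literal) into conditional sums. It is a sketch, not a drop-in replacement: the original branches also deduplicate rows via their GROUP BY, so verify that the row counts match.

```sql
-- Sketch: single pass over the joins, filter early, COIN vs Currency via CASE.
SELECT  CASE WHEN cd.DenominationType = 'COIN'     THEN d.Quantity ELSE 0 END AS COINS,
        CASE WHEN cd.DenominationType = 'Currency' THEN d.Quantity ELSE 0 END AS BN,
        c.BHDDate         AS CollectionDate,
        d.Currencyid,
        c.BHDCollectionid AS DSLIPS,
        bk.Bankname
INTO    #tmp
FROM    BHDCollections c
JOIN    BHDCollectionsDet d ON c.BHDCollectionid = d.BHDCollectionid
JOIN    GroupMembers gm     ON c.GroupMemberid   = gm.GroupMemberid
JOIN    BankAccounts ba     ON gm.BAID           = ba.BAID
JOIN    Currency cur        ON d.Currencyid      = cur.Currencyid
JOIN    CurrencyDetails cd  ON cur.Currencyid    = cd.Currencyid
                           AND cd.Denomination   = d.Denomination
JOIN    Banks bk            ON ba.Bankid         = bk.Bankid
WHERE   c.BHDDate BETWEEN @FromDate AND @ToDate   -- row filters belong in WHERE,
AND     ba.Bankid = @Bankid                       -- not in HAVING
AND     cd.DenominationType IN ('Currency', 'COIN');

SELECT  CollectionDate, Bankname,
        COUNT(DISTINCT DSLIPS) AS DSLIPS,
        SUM(BN) AS BN, SUM(COINS) AS COINS
FROM    #tmp
GROUP BY CollectionDate, Bankname;
```

A nonclustered index on BHDCollections (BHDDate) and on BHDCollectionsDet (BHDCollectionid) is an assumption worth testing against the actual execution plan; neither exists in the posted DDL.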
Best Regards,Uri Dimant SQL Server MVP,
http://sqlblog.com/blogs/uri_dimant/
-
My Time Capsule is taking too much time indexing the backup and then even longer (207 days or more!) to back up. What shall I do?
Try the 10.7.5 supplemental update.
This update seems to have solved this problem for many.
Best. -
Jobs running program RBDMIDOC taking a long time
Hi experts,
Hope all are doing well. I have an issue with jobs that are taking a long time (40 hours), so please can anybody help me with it? Information about the issue is given below.
Job name: "J4674-sd-pric-cond-chng-for-vncl". This job runs program RBDMIDOC.
Job name: "j2378-fi-auto-clear-wo-curr-all". This job runs program SAPFI24.
Can anybody tell me why these are taking so long, or whether any OSS note is required to improve performance?
Please suggest a solution.
Awaiting a quick response.
Regards
Nisha A
Dear Rob,
Thanks for your quick response. You gave an OSS note, but I discussed it with the ABAPers and they say it is not supported. So please can you give me another note number, so that we can do something to wipe out the problem? I am awaiting your valuable response.
Regds
Nisha A -
Master data load is taking a long time to complete
Hi
In the last couple of weeks we have experienced delays in the master data load job. These loads should take 5-8 minutes to complete, but they are taking more than 5 hours. We are using a process chain for the master data load.
Any idea? We don't know where to start to check the root cause of this problem.
Regards
Amar
Hi Friends,
Some updates on our master data load: still not resolved.
It seems our problem is related to hierarchies on the BW server; updating the hierarchies there is taking a long time. As I said, the problem occurs only when loading master data. For transaction data loads, no IDoc gets stuck in the R/3 (NPR) server, and there are enough resources for tRFC in both the R/3 and BW servers.
There are 9 hierarchies in total in the master data load that are delaying the update. If you have seen this type of problem for any of the following hierarchies, please update me.
A 0ACCOUNT (InfoSource)
Hierarchy
1 Fin statement vers FSV5
2 Cost Element by fun area
B 0COSTCENTER (InfoSource)
Hierarchy
3 Mitel Network
4 NA MNS Total
5 CEO
6 Global Support Cost center
C 0COORDER (infosource)
Hierarchy
7 Total R&D (Ron Wellard)
8 RD YRD1
9 RD YRD2
Regards
Amarjit -
Process chain taking a long time to load data into an InfoCube
Dear Expert,
We load data through a process chain into the AR cube:
PSA -> DSO -> Activation -> Index Deletion -> DTP (load InfoCube) -> Index Creation -> Create Aggregates.
The index creation step takes a long time, around 9 to 10 hours, every day.
When we go into RSRV and repair the InfoCube, data loading is fast afterwards; we are doing this (RSRV) every day. In DB02 we have seen that 96% of the tablespace is used.
Please suggest a permanent solution.
Please also tell us whether this is a BI issue or a Basis issue.
Regards,
Ankit
Hi,
We load data through a process chain into the AR cube:
PSA -> DSO -> Activation -> Index Deletion -> DTP (load InfoCube) -> Index Creation -> Create Aggregates.
In the above steps, instead of "Create Aggregates" it should be the roll-up process of the aggregates.
You can ask the Basis team to check the tablespace in transaction DB02OLD/DB02.
Check whether there is a long-running job in SM66/SM50 and kill that job.
Check that there are enough batch processes to perform the steps.
Hope this helps.
"Assigning points is the way to say thanks on SDN."
Br
Alok -
I am extracting data from ECC to BW, but the data load is taking a long time
Hi All,
I am extracting data from ECC to the BI system, but the data load is taking a long time; the InfoPackage has been running for the last 6 hours and is still showing yellow. I manually set it to red, deleted the request, and applied a repeat of the last delta, but the same problem occurs. The status shows that the background job is not finished in the source system. We asked Basis, and they killed that job; we scheduled the chain again, and the same problem came back. How can I solve this issue?
Thanks,
Chandu
Hi,
There are different places to track your job. Once your job is triggered in BW, you can track where exactly the load is taking more time, and why. Follow the steps below:
1) After the InfoPackage is triggered, take the request number and go to the source system to check the extraction job status.
You can get the job status by taking the request number from BW and going to transaction SM37 in ECC. Give the request number with a beginning '' and ending '', and give '*' as the user name.
Job name: REQ_XXXXXX
User Name: *
Check whether the job is completed, cancelled, or short-dumped. If the job is still running, check in SM66 whether you can see any process; if not, check ST22 or SM21 in ECC accordingly. If the job is complete, check the same on the BW side.
2) Check whether the data arrived in PSA; if not, check whether the transfer routines or start routines have bad SQL or code. Similarly for the update rules.
3) Once it is through the source system (ECC), transfer rules, and update rules, the next task, updating the data, might sometimes take more time depending on some parameters (e.g. the number of parallel processes updating the database). Check whether updating the database is taking more time; you may also have to check with the DBA.
At all times you should see at least one process running in SM66 until your job completes. If not, you will see a log in ST22.
Let me know if you still have questions.
Assigning points is the only way of saying thanks in SDN.
Thanks,
Kumar. -
SSRS reports taking a long time to load
Hello,
Problem: SSRS reports are taking a long time to load
My System environment : Visual Studio 2008 SP1 and SQL Server 2008 R2
Production Environment : Visual Studio 2008 SP1 and SQL Server 2008 R2
I have created a parameterized report (6 parameters) that fetches data from one table. The table has 1 year and 6 months of data, and I select parameters for only 1 month (about 2,500 records). It takes almost 2 minutes and 30 seconds to load the report.
The report runs efficiently on my system (5 to 6 seconds to load), but in production it takes 2 minutes 30 seconds.
I checked the execution log from production and found these timings:
Data retrieval: ~10 sec; Processing: ~15 sec; Rendering: ~2 min 5 sec.
The confusing point is that if I run the same report at a different time, the overall time is the same (approx. 2 min 30 sec), but the breakdown differs:
Data retrieval: more than 1 min; Processing: ~15 sec; Rendering: more than 1 min.
So, first question: why are the timings different?
My doubts are:
1) If the query (the procedure that retrieves the data) were the problem, it should always take more time.
2) If the report structure were the problem, rendering should also always take a long time.
For point 2, I read on a blog that rendering depends on the environment: network bandwidth, RAM, CPU usage, and the number of users accessing the same report at a time.
So I tested the report when no other user was working on any report, but got the same result (2 min 30 sec).
The network team found no issue or overload in CPU usage or RAM, and no issue with network bandwidth.
The production database server and report server are different machines (on the same network).
I found that SQL Server on the database server is using almost all of the RAM (23 GB out of 24 GB).
I tried reducing the allocated memory to as little as 2 GB (a trial solution I got from blogs), but this also failed.
One hint I got from a colleague was to change the memory setting for SQL Server from static to dynamic (I guess this is the same point), but I could not find a static/dynamic memory option.
I did the steps below:
Connected to the SQL Server instance.
Right-clicked the instance, went to Properties, then the Memory tab.
I found 1) server memory options, 2) other memory options, and 3) a section for "Configured values and Running values".
Then I reduced Maximum server memory to 2 GB (as mentioned above).
All trials failed, and I cannot find the root of this issue.
Can anyone please help? It's a bit urgent.
Hi UdayKGR,
According to your description, your report takes too long to load on your production environment. Right?
In this scenario, since the report runs quickly in the development environment, we initially suspected an issue with data retrieval. However, based on the information in the execution log, the rendering part takes the longest time. So we suggest you optimize the report itself to reduce the rendering time. Please refer to the link below:
My report takes too long to render
Here is another article about overall performance optimization for Reporting Services:
Reporting Services Performance and Optimization
If you have any question, please feel free to ask.
Best Regards,
Simon Hou -
The ODS activation is taking a long time
Hi,
We are on SAP NetWeaver BI 701 (Support Package 5).
We created a Z DSO; it will contain a lot of data (180,000,000 records at month-end) and we want to generate specific reports on it.
The activation is taking a long time; I assume this is because we checked the flag "SIDs Generation upon Activation". I am confused about this flag. Do I really need it? Is this flag the only problem?
Thanks for your help.
Victoria
Hi Victoria:
If your Z DSO is used only for staging purposes (you don't have queries based on this DSO and you send the data to another DSO or to an InfoCube) then you don't need to check the "SIDs Generation Upon Activation" box.
Even more, to achieve better performance during data loads in this scenario, you might consider using a Write Optimized DSO instead of a Standard DSO, but if you decide to take this alternative don't forget to select the "Do Not check Uniqueness of Data" box if you need to write several records with the same Semantic Key.
Regards,
Francisco Milán. -
Updating a Z table is taking a long time
Hi All,
I ran 5 jobs with the same program at the same time, but when we check the DB trace, ZS01 is taking a long time, as shown below, even though ZS01 holds only a small amount of data.
In the DB trace below, the FETCH on ZS01 shows a duration of 2,315,485 (the trace duration column is in microseconds, i.e. about 2.3 seconds). How can this be reduced?
HH:MM:SS.MS Duration Program ObjectName Op. Curs Array Rec RC Conn
2:36:15 AM 2,315,485 SAPLZS01 ZS01 FETCH 294 1 1 0 R/3
The code is shown below; you can check it in program SAPLZS01, include LZS01F01.
FORM UPDATE_ZS01.
IF ZS02-STATUS = '3'.
IF Z_ZS02_STATUS = '3'. "previous status is ERROR
EXIT.
ELSE.
SELECT SINGLE FOR UPDATE * FROM ZS01
WHERE PROC_NUM = ZS02-PROC_NUM.
CHECK SY-SUBRC = 0.
ADD ZS02-MF_AMT TO ZS01-ERR_AMT.
ADD 1 TO ZS01-ERR_INVOI.
UPDATE ZS01.
ENDIF.
ENDIF.
My question is: why does updating the Z table take such a long time, and how can the update be made faster?
Thanks in advance,
regards
Suni
Try the code like this:
DATA: wa_zs01 TYPE zs01.
FORM update_zs01.
  IF zs02-status = '3'.
    IF z_zs02_status = '3'. "previous status is ERROR
      EXIT.
    ELSE.
      SELECT SINGLE FOR UPDATE * FROM zs01
        INTO wa_zs01                       "change: fetch into a work area
        WHERE proc_num = zs02-proc_num.
      CHECK sy-subrc = 0.
      ADD zs02-mf_amt TO wa_zs01-err_amt.
      ADD 1 TO wa_zs01-err_invoi.
      UPDATE zs01 FROM wa_zs01.
    ENDIF.
  ENDIF.
ENDFORM.
I also think this SELECT on ZS01 is inside the SELECT loop over ZS02, which might also slow the process.
For database access, always fetch the data into a work area or internal table and work with that.
Accessing the database like this, or with SELECT ... ENDSELECT, is inefficient programming.
------Load Dataset into Temp table---------------
SELECT
z.SYSTEMNAME
--,Case when ZXC.[Subsystem Name] <> 'NULL' Then zxc.[SubSystem Name]
--else NULL
--End AS SubSystemName
, CASE
WHEN z.PROV_TAX_ID IN
(SELECT DISTINCT zxc.TIN
FROM dbo.SQS_Provider_Tracking zxc
WHERE zxc.[SubSystem Name] <> 'NULL')
THEN
(SELECT DISTINCT [Subsystem Name]
FROM dbo.SQS_Provider_Tracking zxc
WHERE z.PROV_TAX_ID = zxc.TIN)
End As SubSYSTEMNAME
,z.PROVIDERNAME
,z.STATECODE
,z.PROV_TAX_ID
,z.SRC_PAR_CD
,SUM(z.SEQUEST_AMT) Actual_Sequestered_Amt
, CASE
WHEN z.SRC_PAR_CD IN ('E','O','S','W')
THEN 'Nonpar Waiver'
--**Amendment Mailed**
--WHEN z.PROV_TAX_ID IN
When EXISTS
(SELECT DISTINCT b.PROV_TIN
FROM dbo.SQS_Mailed_TINs_010614 b WITH (NOLOCK )
where not exists (select * from dbo.sqs_objector_TINs t where b.PROV_TIN = t.prov_tin))
THEN
(SELECT DISTINCT b.Mailing
FROM dbo.SQS_Mailed_TINs_010614 b WITH (NOLOCK )
WHERE z.PROV_TAX_ID = b.PROV_TIN)
-- --**Amendment Mailed Wave 3 and 4**
--WHEN z.PROV_TAX_ID In
When EXISTS
(SELECT
qz.PROV_TIN
FROM
[SQS_Mailed_TINs] qz WITH (NOLOCK )
where qz.Mailing = 'Amendment Mailed (3rd Wave)'
and not exists (select * from dbo.sqs_objector_TINs t WITH (NOLOCK ) where qz.PROV_TIN = t.prov_tin))
THEN 'Amendment Mailed (3rd Wave)'
WHEN EXISTS
(SELECT
qz.PROV_TIN
FROM
[SQS_Mailed_TINs] qz WITH (NOLOCK )
where qz.Mailing = 'Amendment Mailed (4th Wave)'
and not exists (select * from dbo.sqs_objector_TINs t WITH (NOLOCK ) where qz.PROV_TIN = t.prov_tin))
THEN 'Amendment Mailed (4th Wave)'
-- --Is Puerto Rico of Lifesynch
WHEN EXISTS
(SELECT DISTINCT a.PROV_TAX_ID
FROM PACT.dbo.SQS_NonPar_PR_LS_TINs a WITH (NOLOCK )
WHERE a.Bucket <> 'Nonpar')
THEN
(SELECT DISTINCT a.Bucket
FROM PACT.dbo.SQS_NonPar_PR_LS_TINs a WITH (NOLOCK )
WHERE a.PROV_TAX_ID = z.PROV_TAX_ID)
-- --**Top Objecting Systems**
WHEN z.SYSTEMNAME IN
('ADVENTIST HEALTH SYSTEM','ASCENSION HEALTH ALLIANCE','AULTMAN HEALTH FOUNDATION','BANNER HEALTH SYSTEM','BERT FISH MEDICAL CENTER','BETHESDA MEMORIAL HOSPITAL','BJC HEALTHCARE','BLOUNT MEMORIAL HOSPITAL','BOCA RATON REGIONAL HOSPITAL','CAROMONT HEALTH SYSTEM','CATHOLIC HEALTH INITIATIVES','CATHOLIC HEALTHCARE PARTNERS','CHRISTUS HEALTH',/*'CLEVELAND CLINIC HEALTH SYSTEM',*/'COLUMBUS REGIONAL HEALTHCARE SYSTEM','COMMUNITY HEALTH SYSTEMS, INC','COXHEALTH','HCA','HEALTH MANAGEMENT ASSOCIATES','HUNTSVILLE HOSPITAL HEALTH SYSTEM','INTEGRIS HEALTH','JUPITER MEDICAL CENTER','LEE MEMORIAL HEALTH SYSTEM','MARTIN MEMORIAL HEALTH SYSTEM','MERCY','MT SINAI MEDICAL CENTER (MIAMI)','MUNROE REGIONAL MEDICAL CENTER','NORMAN REGIONAL HEALTH SYSTEM','NORTHSIDE HEALTH SYSTEM','SHANDS HEALTHCARE','SISTERS OF MERCY - SPRINGFIELD, MO','SSM HEALTH CARE','ST LUKES HEALTH SYSTEM','SUMMA HEALTH SYSTEM','SUSQUEHANNA HEALTH SYSTEM','TBD -- TRINITY HEALTH - CATHOLIC HEALTH EAST','UNIVERSITY OF MISSOURI HEALTH SYSTEM','UNIVERSITY OF NEW MEXICO HOSPITALS','UNIVERSITY OF UTAH HEALTH CARE')
THEN 'Top Objecting Systems'
WHEN EXISTS
(SELECT
h.PROV_TAX_ID
FROM
#HIHO_Records h
INNER JOIN SQS_Provider_Tracking obj WITH (NOLOCK )
ON h.PROV_TAX_ID = obj.TIN
AND obj.[Objector?] = 'Top Objector'
WHERE z.PROV_TAX_ID = h.PROV_TAX_ID
OR h.SMG_ID IS NOT NULL
)and z.LCLM_RSTMT_TREND_CAT_CD IN ('HO','HI')
THEN 'Top Objecting Systems'
-- --**Other Objecting Hospitals**
WHEN EXISTS
(SELECT
h.PROV_TAX_ID
FROM
#HIHO_Records h
INNER JOIN SQS_Provider_Tracking obj WITH (NOLOCK )
ON h.PROV_TAX_ID = obj.TIN
AND obj.[Objector?] = 'Objector'
WHERE z.PROV_TAX_ID = h.PROV_TAX_ID
OR h.SMG_ID IS NOT NULL
)and z.LCLM_RSTMT_TREND_CAT_CD IN ('HO','HI')
THEN 'Other Objecting Hospitals'
-- --**Objecting Physicians**
WHEN EXISTS
(SELECT z.PROV_TAX_ID
FROM SQS_EDW_Source z WITH (NOLOCK)
WHERE EXISTS
(SELECT DISTINCT
obj.TIN
FROM SQS_Provider_Tracking obj WITH (NOLOCK )
WHERE obj.[Objector?] in ('Objector','Top Objector')
and z.PROV_TAX_ID = obj.TIN
and z.LCLM_RSTMT_TREND_CAT_CD not IN ('HO','HI')))
THEN 'Objecting Physicians'
--****Rejecting Hospitals****
WHEN EXISTS
(SELECT
h.PROV_TAX_ID
FROM
#HIHO_Records h
INNER JOIN SQS_Provider_Tracking obj WITH (NOLOCK )
ON h.PROV_TAX_ID = obj.TIN
AND obj.[Objector?] = 'Rejector'
WHERE z.PROV_TAX_ID = h.PROV_TAX_ID
OR h.SMG_ID IS NOT NULL
)and z.LCLM_RSTMT_TREND_CAT_CD IN ('HO','HI')
THEN 'Rejecting Hospitals'
--****Rejecting Physciains****
WHEN EXISTS
(SELECT obj.TIN
FROM SQS_Provider_Tracking obj WITH (NOLOCK )
WHERE z.PROV_TAX_ID = obj.TIN
AND obj.[Objector?] = 'Rejector')
and z.LCLM_RSTMT_TREND_CAT_CD NOT IN ('HO','HI')
THEN 'REjecting Physicians'
----**********ALL OBJECTORS SHOULD HAVE BEEN BUCKETED AT THIS POINT IN THE QUERY**********
-- --**Non-Objecting Hospitals**
WHEN EXISTS
(SELECT DISTINCT
h.PROV_TAX_ID
FROM
#HIHO_Records h WITH (NOLOCK )
WHERE
(z.PROV_TAX_ID = h.PROV_TAX_ID)
OR h.SMG_ID IS NOT NULL
)and z.LCLM_RSTMT_TREND_CAT_CD IN ('HO','HI')
THEN 'Non-Objecting Hospitals'
-- **Outstanding Contracts for Review**
WHEN EXISTS
(SELECT qz.PROV_TIN
FROM
[PACT].[HUMAD\ARS3766].[SQS_Mailed_TINs] qz WITH (NOLOCK )
where qz.Mailing = 'Non-Objecting Bilateral Physicians'
AND z.PROV_TAX_ID = qz.PROV_TIN)
Then 'Non-Objecting Bilateral Physicians'
When EXISTS
(select
p.prov_tax_id
from dbo.SQS_CoC_Potential_Mail_List p WITH (NOLOCK )
where p.amendmentrights <> 'Unilateral'
AND z.prov_tax_id = p.prov_tax_id)
THEN 'Non-Objecting Bilateral Physicians'
WHEN EXISTS
(SELECT
qz.PROV_TIN
FROM
[SQS_Mailed_TINs] qz WITH (NOLOCK )
where qz.Mailing = 'More Research Needed'
AND qz.PROV_TIN = z.PROV_TAX_ID)
THEN 'More Research Needed'
WHEN EXISTS (SELECT qz.PROV_TIN FROM [SQS_Mailed_TINs] qz WITH (NOLOCK ) where qz.Mailing = 'Objector' AND qz.PROV_TIN = z.PROV_TAX_ID)
THEN 'ERROR'
else 'Market Review/Preparing to Mail'
END AS [Updated Bucket]
,COALESCE(q.INDdesc, f.IND_desc) AS INDdesc
,f.Time_Period_for_Dispute
,f.Renew_Term_Ind
,f.Renewal_Date
,z.SMG_ID
,'' AS OrderedRank
INTO SQS_Bucketed_Details_SMG_on_SMGXXX
From #SQS_EDW_SOURCE_WithSMG z
left join #F f ON f.PROV_TAX_ID = z.PROV_TAX_ID
AND z.SYSTEMNAME = f.SYSTEM_NAME
AND z.PROVIDERNAME = f.Provider
Left join #Q q ON z.PROV_TAX_ID = q.TIN
GROUP BY z.SYSTEMNAME
--,Z.[SubsystemName]
,z.PROVIDERNAME
,z.STATECODE
,z.PROV_TAX_ID
,z.SRC_PAR_CD
,q.INDdesc
,f.IND_Desc
,f.Time_Period_for_Dispute
,f.Renew_Term_Ind
,f.Renewal_Date
,z.SMG_ID
,z.LCLM_RSTMT_TREND_CAT_CD
As I am a developer, I do not have access to SQL Profiler or the Database Engine Tuning Advisor to optimize my query. I have used some joins over the temp table #HIHO_Records, which in turn pulls records from table EDW_Source, which has 5 million records. I also added nonclustered indexes on prov_ID, SMG_ID, and incurred month for this table, but it is still taking a long time. Need help.
Hi, it needs some more tweaks, but please try this one:
USE
Go
--****Create sqs_objector_TINs (Objections and Rejections)****
--Drop table .dbo.sqs_objector_TINs
select distinct a.TIN as Prov_TIN
Into #sqs_objector_TINs
from .dbo.sqs_provider_tracking as a with (nolock)
where a.[Objector?] in ('Top Objector','Objector','Rejector')
/*********** Query for SQS_TINtoSyst***********/
--DROP TABLE .dbo.SQS_TINtoSystem
select distinct
b.SRC_PROV_ID
--,case
-- when a.SYSTEM_NAME is null
-- then
-- case
-- when a.CTRCT_GRP_NAME is null
-- then a.PROV_SMG_NAME
-- else a.CTRCT_GRP_NAME
-- end
-- else a.SYSTEM_NAME
--end as SYSTEM_NAME
,COALESCE(a.SYSTEM_NAME, a.CTRCT_GRP_NAME, a.PROV_SMG_NAME) AS SYSTEM_NAME
INTO #SQS_TINtoSystem
from
PARE.dbo.EDW_PROD_HOSPITAL_MASTER a with (nolock)
Inner Join PARE.dbo.EDW_PROD_HOSPITAL_ID_XREF b with (nolock)
on a.SMG_ID = b.SMG_ID
-- Inner Join .dbo.SQS_EDW_Source q
--on b.SRC_PROV_ID = q.PROV_TAX_ID
where b.SRC_PLATFORM_CD = 'TX'
and exists
(select
SMG_ID
from PARE.dbo.EDW_PROD_HOSPITAL_ID_XREF as t1 with (nolock)
where SRC_PLATFORM_CD = 'TX'
and exists (select q.PROV_TAX_ID from .dbo.SQS_EDW_Source q with (nolock) where q.PROV_TAX_ID = b.SRC_PROV_ID)
and a.SMG_ID = t1.SMG_ID)
/************** Query for SQS_Bucketed_Details_SMG*****************/
DROP TABLE .dbo.SQS_Bucketed_Details_SMG
--Create temp table
SELECT z.SYSTEMNAME
,Z.PROV_TAX_ID
,z.PROVIDERNAME
,z.STATECODE
,z.SRC_PAR_CD
,z.SEQUEST_AMT
,case when Z.LCLM_RSTMT_TREND_CAT_CD IN ('HI','HO') Then 'H' else 'P' end as Hosp_Ind
,Z.SMG_ID
INTO #SQS_EDW_SOURCE_WithSMG
FROM dbo.SQS_EDW_SOURCE_WithSMG z with (nolock)
WHERE (Z.Incurred_Mth >= convert(datetime,'01/01/2013')) and (Z.Incurred_Mth < convert(datetime, '1/1/2014'))
--between convert(datetime,'01/01/2013') and convert(datetime, '12/31/2013 23:59:59.996')
--YEAR(Z.Incurred_Mth)=2013
-- Create Temp table Q
select
x.TIN,
case when max(x.IND) = 'NYN'
then 'Standard'
when max(x.IND) = 'YNN'
then 'Express'
when max(x.IND) = 'NNY'
then 'Non_Standard' else 'Mixed'
end as INDdesc
Into #Q
FROM
(SELECT
a.tin,
MAX(a.express) + MAX(a.StandardInd) + MAX(NonstandardIND) as IND
from
(select r.TIN,
case when MAX(r.Express) like 'Y%' then 'Y' else 'N' end As Express,
case when MAX(r.Standard) = 'Y' then 'Y' else 'N' end As StandardInd,
case when MAX(r.[Non-Standard]) = 'Y' then 'Y' else 'N' end AS NonstandardIND
FROM DBO.SQS_Objectors_01032014 r with (nolock)
GROUP BY r.TIN) a
group by a.TIN) x
group by x.TIN
--Create Temp table F
Select *
INTO #F
FROM(
SELECT distinct g.prov_tax_id
,g.system_name
,g.provider
,case when g.reimburse_mixed = 'Y' then 'Mixed'
when g.reimburse_express = 'Y' then 'Express'
when g.reimburse_standard = 'Y' then 'Standard'
when g.reimburse_NonStandard = 'Y' then 'NonStandard'
end as IND_Desc
,g.Time_Period_for_Dispute
,case when g.Renewal_Date = 'N' and g.Expiration_Date = 'N'
then 'Unclear'
when g.Renewal_Date = 'N' and g.Expiration_Date <> 'N'
then 'Termination'
when g.Renewal_Date <> 'N' and g.Expiration_Date = 'N'
then 'Evergreen'
when g.Renewal_Date <> 'N' and g.Expiration_Date <> 'N'
then 'Termination'
else 'Unknown'
end as 'Renew_Term_Ind'
,g.Renewal_Date
FROM
(select distinct
bb.PROV_TAX_ID1 as prov_tax_id
,aa.*
from
[dbo].[Top_600_Hospitals3] aa with (nolock)
left join pare.dbo.EDW_PROD_HOSPITAL_MASTER bb with (nolock)
on --a.CTRCT_GRP_NAME = b.CTRCT_GRP_NAME
aa.Provider = bb.PROV_SMG_NAME
-- and (a.SYSTEM_NAME = b.SMG_SYS_NAME or a.SYSTEM_NAME = b.SYSTEM_NAME)
--and a.ADDR_LINE1 = b.ADDR_LINE1
and aa.STATE_CD = bb.STATE_CD
--and a.ZIP_CD = b.ZIP_CD
and aa.City1 = bb.CITY_NAME
where aa.SYSTEM_NAME <> 'SEE ABOVE') g
where g.system_name <> 'SEE ABOVE') h
where h.ind_Desc is not null
SELECT DISTINCT z.PROV_TAX_ID
, z.SMG_ID
INTO #HIHO_Records
FROM dbo.SQS_EDW_SOURCE_WithSMG z with (nolock)
WHERE z.LCLM_RSTMT_TREND_CAT_CD IN ('HO', 'HI')
AND Z.Incurred_Mth >=convert(datetime, '1/1/2013') and Z.Incurred_Mth <convert(datetime, '1/1/2014')
--YEAR(Z.Incurred_Mth)=2013
---------------------------------Load Dataset into Temp table---------------
SELECT
z.SYSTEMNAME
--,Case when ZXC.[Subsystem Name] <> 'NULL' Then zxc.[SubSystem Name]
--else NULL
--End AS SubSystemName
, CASE
WHEN z.PROV_TAX_ID IN
(SELECT zxc.TIN
FROM dbo.SQS_Provider_Tracking zxc with (nolock)
WHERE zxc.[SubSystem Name] <> 'NULL')
THEN
(SELECT top 1 [Subsystem Name]
FROM dbo.SQS_Provider_Tracking zxc with (nolock)
WHERE z.PROV_TAX_ID = zxc.TIN)
End As SubSYSTEMNAME
,z.PROVIDERNAME
,z.STATECODE
,z.PROV_TAX_ID
,z.SRC_PAR_CD
,SUM(z.SEQUEST_AMT) Actual_Sequestered_Amt
, CASE
WHEN z.SRC_PAR_CD IN ('E','O','S','W')
THEN 'Nonpar Waiver'
-- --Is Puerto Rico of Lifesynch
WHEN z.PROV_TAX_ID IN
(SELECT a.PROV_TAX_ID
FROM .dbo.SQS_NonPar_PR_LS_TINs a with (nolock)
WHERE a.Bucket <> 'Nonpar')
THEN
(SELECT top 1 a.Bucket
FROM .dbo.SQS_NonPar_PR_LS_TINs a with (nolock)
WHERE a.PROV_TAX_ID = z.PROV_TAX_ID)
--**Amendment Mailed**
WHEN z.PROV_TAX_ID IN
(SELECT b.PROV_TIN
FROM dbo.SQS_Mailed_TINs_010614 b WITH (NOLOCK )
where not exists (select * from dbo.sqs_objector_TINs t with (nolock) where b.PROV_TIN = t.prov_tin))
and z.Hosp_Ind = 'P'
THEN
(SELECT top 1 b.Mailing
FROM dbo.SQS_Mailed_TINs_010614 b with (nolock)
WHERE z.PROV_TAX_ID = b.PROV_TIN)
-- --**Amendment Mailed Wave 3-5**
WHEN z.PROV_TAX_ID In
(SELECT
qz.PROV_TIN
FROM
[SQS_Mailed_TINs] qz
where qz.Mailing = 'Amendment Mailed (3rd Wave)'
and not exists (select * from dbo.sqs_objector_TINs t with (nolock) where qz.PROV_TIN = t.prov_tin))
and z.Hosp_Ind = 'P'
THEN 'Amendment Mailed (3rd Wave)'
WHEN z.PROV_TAX_ID IN
(SELECT
qz.PROV_TIN
FROM
[SQS_Mailed_TINs] qz
where qz.Mailing = 'Amendment Mailed (4th Wave)'
and not exists (select * from dbo.sqs_objector_TINs t with (nolock) where qz.PROV_TIN = t.prov_tin))
and z.Hosp_Ind = 'P'
THEN 'Amendment Mailed (4th Wave)'
WHEN z.PROV_TAX_ID IN
(SELECT
qz.PROV_TIN
FROM
[SQS_Mailed_TINs] qz
where qz.Mailing = 'Amendment Mailed (5th Wave)'
and not exists (select * from dbo.sqs_objector_TINs t with (nolock) where qz.PROV_TIN = t.prov_tin))
and z.Hosp_Ind = 'P'
THEN 'Amendment Mailed (5th Wave)'
-- --**Top Objecting Systems**
WHEN z.SYSTEMNAME IN
('ADVENTIST HEALTH SYSTEM','ASCENSION HEALTH ALLIANCE','AULTMAN HEALTH FOUNDATION')
THEN 'Top Objecting Systems'
WHEN z.PROV_TAX_ID IN
(SELECT
h.PROV_TAX_ID
FROM
#HIHO_Records h
INNER JOIN .dbo.SQS_Provider_Tracking obj with (nolock)
ON h.PROV_TAX_ID = obj.TIN
AND obj.[Objector?] = 'Top Objector'
WHERE z.PROV_TAX_ID = h.PROV_TAX_ID
OR h.SMG_ID IS NOT NULL
)and z.Hosp_Ind = 'H'
THEN 'Top Objecting Systems'
-- --**Other Objecting Hospitals**
WHEN (z.PROV_TAX_ID IN
(SELECT
h.PROV_TAX_ID
FROM
#HIHO_Records h
INNER JOIN .dbo.SQS_Provider_Tracking obj with (nolock)
ON h.PROV_TAX_ID = obj.TIN
AND obj.[Objector?] = 'Objector'
WHERE z.PROV_TAX_ID = h.PROV_TAX_ID
OR h.SMG_ID IS NOT NULL
)and z.Hosp_Ind = 'H')
THEN 'Other Objecting Hospitals'
-- --**Objecting Physicians**
WHEN (z.PROV_TAX_ID IN
(SELECT
obj.TIN
FROM .dbo.SQS_Provider_Tracking obj with (nolock)
WHERE obj.[Objector?] in ('Objector','Top Objector')
and z.PROV_TAX_ID = obj.TIN)
and z.Hosp_Ind = 'P')
THEN 'Objecting Physicians'
--****Rejecting Hospitals****
WHEN (z.PROV_TAX_ID IN
(SELECT
h.PROV_TAX_ID
FROM
#HIHO_Records h
INNER JOIN .dbo.SQS_Provider_Tracking obj with (nolock)
ON h.PROV_TAX_ID = obj.TIN
AND obj.[Objector?] = 'Rejector'
WHERE z.PROV_TAX_ID = h.PROV_TAX_ID
OR h.SMG_ID IS NOT NULL
)and z.Hosp_Ind = 'H')
THEN 'Rejecting Hospitals'
--****Rejecting Physicians****
WHEN
(z.PROV_TAX_ID IN
(SELECT
obj.TIN
FROM .dbo.SQS_Provider_Tracking obj with (nolock)
WHERE z.PROV_TAX_ID = obj.TIN
AND obj.[Objector?] = 'Rejector')
and z.Hosp_Ind = 'P')
THEN 'Rejecting Physicians'
----**********ALL OBJECTORS SHOULD HAVE BEEN BUCKETED AT THIS POINT IN THE QUERY**********
-- --**Non-Objecting Hospitals**
WHEN z.PROV_TAX_ID IN
(SELECT
h.PROV_TAX_ID
FROM
#HIHO_Records h
WHERE
(z.PROV_TAX_ID = h.PROV_TAX_ID)
OR h.SMG_ID IS NOT NULL)
and z.Hosp_Ind = 'H'
THEN 'Non-Objecting Hospitals'
-- **Outstanding Contracts for Review**
WHEN z.PROV_TAX_ID IN
(SELECT
qz.PROV_TIN
FROM
[SQS_Mailed_TINs] qz with (nolock)
where qz.Mailing = 'Non-Objecting Bilateral Physicians'
AND z.PROV_TAX_ID = qz.PROV_TIN)
Then 'Non-Objecting Bilateral Physicians'
When z.prov_tax_id in
(select
p.prov_tax_id
from dbo.SQS_CoC_Potential_Mail_List p with (nolock)
where p.amendmentrights <> 'Unilateral'
AND z.prov_tax_id = p.prov_tax_id)
THEN 'Non-Objecting Bilateral Physicians'
WHEN z.PROV_TAX_ID IN
(SELECT
qz.PROV_TIN
FROM
[SQS_Mailed_TINs] qz
where qz.Mailing = 'More Research Needed'
AND qz.PROV_TIN = z.PROV_TAX_ID)
THEN 'More Research Needed'
WHEN z.PROV_TAX_ID IN (SELECT qz.PROV_TIN FROM [SQS_Mailed_TINs] qz with (nolock) where qz.Mailing = 'Objector' AND qz.PROV_TIN = z.PROV_TAX_ID)
THEN 'ERROR'
else 'Market Review/Preparing to Mail'
END AS [Updated Bucket]
,COALESCE(q.INDdesc, f.IND_desc) AS INDdesc
,f.Time_Period_for_Dispute
,f.Renew_Term_Ind
,f.Renewal_Date
,z.SMG_ID
,'' AS OrderedRank
INTO dbo.SQS_Bucketed_Details_SMG
From #SQS_EDW_SOURCE_WithSMG z
left join #F f ON f.PROV_TAX_ID = z.PROV_TAX_ID
AND z.SYSTEMNAME = f.SYSTEM_NAME
AND z.PROVIDERNAME = f.Provider
Left join #Q q ON z.PROV_TAX_ID = q.TIN
GROUP BY z.SYSTEMNAME
--,Z.[SubsystemName]
,z.PROVIDERNAME
,z.STATECODE
,z.PROV_TAX_ID
,z.SRC_PAR_CD
,q.INDdesc
,f.IND_Desc
,f.Time_Period_for_Dispute
,f.Renew_Term_Ind
,f.Renewal_Date
,z.SMG_ID
,z.Hosp_Ind
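The main readability tweak above is replacing the nested CASE at the top of the SQS_TINtoSystem query with COALESCE, which returns its first non-NULL argument. A minimal sketch of the equivalence, run through SQLite via Python (table and values are hypothetical, not taken from your data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE hospital_master (
    system_name TEXT, ctrct_grp_name TEXT, prov_smg_name TEXT)""")
conn.executemany(
    "INSERT INTO hospital_master VALUES (?, ?, ?)",
    [("SysA", "GrpA", "ProvA"),   # system_name present       -> SysA
     (None,  "GrpB", "ProvB"),    # falls back to group name  -> GrpB
     (None,  None,   "ProvC")])   # falls back to SMG name    -> ProvC

# Nested CASE, as in the original query
case_sql = """SELECT CASE
    WHEN system_name IS NULL THEN
        CASE WHEN ctrct_grp_name IS NULL THEN prov_smg_name
             ELSE ctrct_grp_name END
    ELSE system_name END
FROM hospital_master"""

# COALESCE, as in the rewrite: first non-NULL argument wins
coalesce_sql = """SELECT COALESCE(system_name, ctrct_grp_name, prov_smg_name)
FROM hospital_master"""

# Both forms return exactly the same rows
assert conn.execute(case_sql).fetchall() == conn.execute(coalesce_sql).fetchall()
print([r[0] for r in conn.execute(coalesce_sql)])  # ['SysA', 'GrpB', 'ProvC']
```

Note this is mainly a readability change: SQL Server internally expands COALESCE into a CASE expression anyway, so the real performance wins in this script are more likely the EXISTS rewrites and supporting indexes, not the COALESCE itself.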
/************************** Drop temp tables*********************/
--DROP TABLE #SQS_EDW_SOURCE_WithSMG
--DROP TABLE #Q
--DROP TABLE #F
--DROP TABLE #HIHO_Records
--DROP TABLE #SQS_TINtoSystem
--DROP TABLE #SQS_EDW_SOURCE_WithSMG
--DROP TABLE #sqs_objector_TINs -
Queries taking long time to run
Hi BW Folks,
I am working on the virtual cube 0bcs_vc10 for BCS (Business Consolidation); the base cube is 0bcs_c10. We compressed and partitioned the base cube. The queries I developed are running fine and are in production.
Then a request came in for some more queries; after developing them and running them in Dev, they take 20 to 25 minutes.
Surprisingly, the queries already running in production are now also taking a long time, even though we haven't touched the performance tuning we did earlier.
Can anyone share their experience on how to tackle this? Will assign full points.
Thanks
Hi Nick,
Do you have a lot of navigational attributes? That could be slowing you down. If it's still too slow, try caching and pre-calculation (which pre-loads the cache), although I'm not sure that works with a virtual cube. By their nature, virtual cubes are much slower than physical cubes, so the worst-case option for me would be to load the VC data into a regular cube. If the query is still slow there, at least you know it's a query issue and not the VC.
Brian -
Releasing transport request taking long time
Hi All,
I am releasing a transport request in SE09; releasing the child request takes a long time, and the parent request takes much longer.
In SM50, I didn't find any processes running.
Can anyone tell me the solution?
Thanks & Regards
Uday
Hi Uday,
>> I am releasing a transport request in SE09; releasing the child request takes a long time, and the parent request takes much longer.
You didn't note which release you are running, but you can check Note 1541334 ("Database connect takes two minutes").
>> In SM50, I didn't find any processes running.
That is normal: the system exports the transport request with the "tp" command at the OS level after TMS completes its part on a DIALOG work process.
Best regards,
Orkun Gedik