Long time creating execution plan on wide table

When executing a SELECT on a wide table with a column set and sparse columns, it consistently takes 12 seconds or longer to create the execution plan, even on an empty table. We set the database option PARAMETERIZATION FORCED (via ALTER DATABASE), but it does not resolve the issue. We also tried indexed views and filtered indexes without success. A covering index resolves the problem, but this cannot be implemented as a general solution. We reproduced the behaviour on SQL Server 2008, 2008 R2, 2012 and 2014.
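To isolate where the time goes, plan-compilation cost can be measured directly with SET STATISTICS TIME ON; a minimal sketch, using the short query from below:

SET STATISTICS TIME ON;
GO
SELECT [ISN], [FIRMEN_NR]
FROM [IAM800_A]
WHERE [IAM880_KEY1_N3] = 0x3033383830303532
ORDER BY [ISN] ASC;
GO
-- The Messages tab reports "SQL Server parse and compile time" separately
-- from "SQL Server Execution Times"; in our case the 12+ seconds show up
-- under parse and compile time, not execution.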
The following queries (with their actual execution plans captured as pictures) all exhibit the problem:
SELECT TOP (1)
  [ISN], [TIMESTAMP], [FIRMEN_NR], [SATZART], [TAB_SPERR_KZ], [DATUM_AEND], [UHRZEIT_AEND], [LFDNR],
  [BEARBEITER], [STEUERUNG], [UMSATZSCHLUESSEL], [US_W_S_KZ], [US_G_L_KZ], [VL_KZ], [RABATT_KZ],
  [INPUT_KZ], [ZAST_PFLICHTIGER_US], [BEWEG_SCHL1_1], [BEWEG_SCHL1_2], [BEWEG_SCHL1_3], [BEWEG_SCHL1_4], [BEWEG_SCHL1_5],
  [BEWEG_SCHL2_1], [BEWEG_SCHL2_2], [BEWEG_SCHL2_3], [BEWEG_SCHL2_4], [BEWEG_SCHL2_5],
  [WERT_KZ_1], [WERT_KZ_2], [WERT_KZ_3], [WERT_KZ_4], [WERT_KZ_5],
  [PREIS_KZ_1], [PREIS_KZ_2], [PREIS_KZ_3], [PREIS_KZ_4], [PREIS_KZ_5],
  [US_BEZ_1], [US_BEZ_2], [US_BEZ_3], [US_BEZ2_1], [US_BEZ2_2], [US_BEZ2_3],
  [GEGEN_KONTO], [EIN_AUSZAHLUNGS_KZ], [ABR_DRUCK_KZ], [UPFRONT_FEE], [LARG]
FROM [IAM800_A]
WHERE [IAM880_KEY1_N3] = 0x3033383830303532
ORDER BY [ISN] ASC

SELECT -- now without TOP (1)
  [ISN], [TIMESTAMP], [FIRMEN_NR], [SATZART], [TAB_SPERR_KZ], [DATUM_AEND], [UHRZEIT_AEND], [LFDNR],
  [BEARBEITER], [STEUERUNG], [UMSATZSCHLUESSEL], [US_W_S_KZ], [US_G_L_KZ], [VL_KZ], [RABATT_KZ],
  [INPUT_KZ], [ZAST_PFLICHTIGER_US], [BEWEG_SCHL1_1], [BEWEG_SCHL1_2], [BEWEG_SCHL1_3], [BEWEG_SCHL1_4], [BEWEG_SCHL1_5],
  [BEWEG_SCHL2_1], [BEWEG_SCHL2_2], [BEWEG_SCHL2_3], [BEWEG_SCHL2_4], [BEWEG_SCHL2_5],
  [WERT_KZ_1], [WERT_KZ_2], [WERT_KZ_3], [WERT_KZ_4], [WERT_KZ_5],
  [PREIS_KZ_1], [PREIS_KZ_2], [PREIS_KZ_3], [PREIS_KZ_4], [PREIS_KZ_5],
  [US_BEZ_1], [US_BEZ_2], [US_BEZ_3], [US_BEZ2_1], [US_BEZ2_2], [US_BEZ2_3],
  [GEGEN_KONTO], [EIN_AUSZAHLUNGS_KZ], [ABR_DRUCK_KZ], [UPFRONT_FEE], [LARG]
FROM [IAM800_A]
WHERE [IAM880_KEY1_N3] = 0x3033383830303532
ORDER BY [ISN] ASC

SELECT [ISN], [FIRMEN_NR]
FROM [IAM800_A]
WHERE [IAM880_KEY1_N3] = 0x3033383830303532
ORDER BY [ISN] ASC
Execution plans for the above queries were attached as screenshots (not reproduced here).

Sorry, my mistake. As these definitions fit within the size limit of a reply, I am posting them directly.
Note the use of separate filegroups for data and indexes.
PRINT 'Adding indexes to table [dbo].[IAM800_A]...'
GO
SET ANSI_NULLS              ON
SET ANSI_PADDING            ON
SET ANSI_WARNINGS           ON
SET ARITHABORT              ON
SET CONCAT_NULL_YIELDS_NULL ON
SET NUMERIC_ROUNDABORT      OFF
SET QUOTED_IDENTIFIER       ON
GO
-- Primary Key (ISN)
ALTER TABLE [dbo].[IAM800_A] ADD CONSTRAINT [IAM800_A:ISN]
  PRIMARY KEY CLUSTERED ( [ISN] )
  ON [FG001_Data]
GO
-- Non-Clustered Indexes (Descriptors)
CREATE UNIQUE NONCLUSTERED INDEX [IAM800_A:SUPER-DESCRIPTOR:IAM505-KEY1]
  ON [dbo].[IAM800_A] ( [IAM505_KEY1], [ISN] )
  ON [FG002_Index]
GO
CREATE UNIQUE NONCLUSTERED INDEX [IAM800_A:SUPER-DESCRIPTOR:IAM810-KEY1]
  ON [dbo].[IAM800_A] ( [IAM810_KEY1], [ISN] )
  ON [FG002_Index]
GO
CREATE UNIQUE NONCLUSTERED INDEX [IAM800_A:SUPER-DESCRIPTOR:IAM820-KEY1]
  ON [dbo].[IAM800_A] ( [IAM820_KEY1], [ISN] )
  ON [FG002_Index]
GO
CREATE UNIQUE NONCLUSTERED INDEX [IAM800_A:SUPER-DESCRIPTOR:IAM830-KEY1]
  ON [dbo].[IAM800_A] ( [IAM830_KEY1], [ISN] )
  ON [FG002_Index]
GO
CREATE UNIQUE NONCLUSTERED INDEX [IAM800_A:SUPER-DESCRIPTOR:IAM835-KEY1]
  ON [dbo].[IAM800_A] ( [IAM835_KEY1], [ISN] )
  ON [FG002_Index]
GO
CREATE UNIQUE NONCLUSTERED INDEX [IAM800_A:SUPER-DESCRIPTOR:IAM840-KEY1]
  ON [dbo].[IAM800_A] ( [IAM840_KEY1], [ISN] )
  ON [FG002_Index]
GO
CREATE UNIQUE NONCLUSTERED INDEX [IAM800_A:SUPER-DESCRIPTOR:IAM841-KEY1-ALT]
  ON [dbo].[IAM800_A] ( [IAM841_KEY1_ALT], [ISN] )
  ON [FG002_Index]
GO
CREATE UNIQUE NONCLUSTERED INDEX [IAM800_A:SUPER-DESCRIPTOR:IAM842-KEY1]
  ON [dbo].[IAM800_A] ( [IAM842_KEY1], [ISN] )
  ON [FG002_Index]
GO
CREATE UNIQUE NONCLUSTERED INDEX [IAM800_A:SUPER-DESCRIPTOR:IAM843-KEY1]
  ON [dbo].[IAM800_A] ( [IAM843_KEY1], [ISN] )
  ON [FG002_Index]
GO
CREATE UNIQUE NONCLUSTERED INDEX [IAM800_A:SUPER-DESCRIPTOR:IAM845-KEY1]
  ON [dbo].[IAM800_A] ( [IAM845_KEY1], [ISN] )
  ON [FG002_Index]
GO
CREATE UNIQUE NONCLUSTERED INDEX [IAM800_A:SUPER-DESCRIPTOR:IAM850-KEY1]
  ON [dbo].[IAM800_A] ( [IAM850_KEY1], [ISN] )
  ON [FG002_Index]
GO
CREATE UNIQUE NONCLUSTERED INDEX [IAM800_A:SUPER-DESCRIPTOR:IAM860-KEY1]
  ON [dbo].[IAM800_A] ( [IAM860_KEY1], [ISN] )
  ON [FG002_Index]
GO
CREATE UNIQUE NONCLUSTERED INDEX [IAM800_A:SUPER-DESCRIPTOR:IAM870-KEY1]
  ON [dbo].[IAM800_A] ( [IAM870_KEY1], [ISN] )
  ON [FG002_Index]
GO
CREATE UNIQUE NONCLUSTERED INDEX [IAM800_A:SUPER-DESCRIPTOR:IAM875-KEY1]
  ON [dbo].[IAM800_A] ( [IAM875_KEY1], [ISN] )
  ON [FG002_Index]
GO
CREATE UNIQUE NONCLUSTERED INDEX [IAM800_A:SUPER-DESCRIPTOR:IAM880-KEY1-N3]
  ON [dbo].[IAM800_A] ( [IAM880_KEY1_N3], [ISN] )
  ON [FG002_Index]
GO
CREATE UNIQUE NONCLUSTERED INDEX [IAM800_A:SUPER-DESCRIPTOR:IAM885-KEY1]
  ON [dbo].[IAM800_A] ( [IAM885_KEY1], [ISN] )
  ON [FG002_Index]
GO
CREATE UNIQUE NONCLUSTERED INDEX [IAM800_A:SUPER-DESCRIPTOR:IAM899-KEY1]
  ON [dbo].[IAM800_A] ( [IAM899_KEY1], [ISN] )
  ON [FG002_Index]
GO
CREATE UNIQUE NONCLUSTERED INDEX [IAM800_A:SUPER-DESCRIPTOR:IAM899-KEY2]
  ON [dbo].[IAM800_A] ( [IAM899_KEY2], [ISN] )
  ON [FG002_Index]
GO
CREATE UNIQUE NONCLUSTERED INDEX [IAM800_A:SUPER-DESCRIPTOR:IAM899-KEY3]
  ON [dbo].[IAM800_A] ( [IAM899_KEY3], [ISN] )
  ON [FG002_Index]
GO
CREATE UNIQUE NONCLUSTERED INDEX [IAM800_A:SUPER-DESCRIPTOR:IAM800-KEY1]
  ON [dbo].[IAM800_A] ( [IAM800_KEY1], [ISN] )
  ON [FG002_Index]
GO
CREATE UNIQUE NONCLUSTERED INDEX [IAM800_A:SUPER-DESCRIPTOR:IAM841-KEY1]
  ON [dbo].[IAM800_A] ( [IAM841_KEY1], [ISN] )
  ON [FG002_Index]
GO
CREATE UNIQUE NONCLUSTERED INDEX [IAM800_A:SUPER-DESCRIPTOR:IAM886-KEY1]
  ON [dbo].[IAM800_A] ( [IAM886_KEY1], [ISN] )
  ON [FG002_Index]
GO
CREATE UNIQUE NONCLUSTERED INDEX [IAM800_A:SUPER-DESCRIPTOR:IAM876-KEY1]
  ON [dbo].[IAM800_A] ( [IAM876_KEY1], [ISN] )
  ON [FG002_Index]
GO
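For completeness, the covering index that works around the issue looks roughly like this (a sketch only: the index name is invented here, and the INCLUDE list would have to enumerate every selected column, which is why it is not viable as a general solution):

CREATE NONCLUSTERED INDEX [IAM800_A:COVERING:IAM880-KEY1-N3]
  ON [dbo].[IAM800_A] ( [IAM880_KEY1_N3], [ISN] )
  INCLUDE ( [TIMESTAMP], [FIRMEN_NR], [SATZART], [TAB_SPERR_KZ] /* ...and the remaining selected columns */ )
  ON [FG002_Index]
GO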

Similar Messages

  • How to find the time taken for creating execution plan alone

    Hi,
Is there any way to find out the division between the time taken for query parsing, creating the execution plan, and actual data retrieval separately? If I enable 'set timing on' I see the elapsed time, which is the total time taken for all three.
Some of my queries take a long time when I run them the first time, so I want to know where the time goes: is it parsing the query or creating the execution plan? (Since my queries run faster the second time, I assume the major part was parsing or creating the plan, not the data retrieval.)
Also, where does Oracle keep the execution plan? Is it in the BUFFER_CACHE? If so: I tried flushing the buffer_cache and restarting the DB as well, but the query execution still seems faster compared to the first time. How long does Oracle keep the execution plan in the cache, and is there any way to increase this cache size?
    Thanks in advance!

    user13169027 wrote:
    Hi,
Is there any way to find out the division between the time taken for query parsing, creating the execution plan, and actual data retrieval separately? If I enable 'set timing on' I see the elapsed time, which is the total time taken for all three.
Some of my queries take a long time when I run them the first time, so I want to know where the time goes: is it parsing the query or creating the execution plan? (Since my queries run faster the second time, I assume the major part was parsing or creating the plan, not the data retrieval.)
The ideal way to answer your questions would be to perform a SQL trace of your query executions. To see the difference in the trace files, trace the first execution in one session and the second execution in another session; that way you get two different trace files, which you can separately tkprof and investigate.
Also, where does Oracle keep the execution plan? Is it in the BUFFER_CACHE? If so: I tried flushing the buffer_cache and restarting the DB as well, but the query execution still seems faster compared to the first time. How long does Oracle keep the execution plan in the cache, and is there any way to increase this cache size?
Execution plans are held in the shared pool, not the buffer cache. As far as I know they are kept in memory in an LRU (least recently used) fashion, just like db blocks in the buffer pool (I know this is not entirely correct, but for all practical purposes, think of it this way).
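A minimal sketch of the trace session suggested above (assuming ALTER SESSION privilege; the tracefile identifier is arbitrary):

ALTER SESSION SET tracefile_identifier = 'first_exec';
ALTER SESSION SET sql_trace = TRUE;
-- run the slow query here
ALTER SESSION SET sql_trace = FALSE;
-- then format the trace file on the server:
--   tkprof <your_tracefile>.trc first_exec_report.txt sys=no

The tkprof output shows parse, execute and fetch counts and times separately, which answers the parse-versus-retrieval question directly.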

  • We are running a report and it is taking a long time to execute; what steps can we take?

We are running a report and it is taking a long time to execute. What steps can we take to reduce the execution time?

Hi,
Performance can be improved in many ways.
First, try to select based on the key fields if it is a very large table.
If that is not possible, create a secondary index for the selection.
Don't perform SELECTs inside a loop; instead use FOR ALL ENTRIES IN.
Try to perform many operations in one loop rather than running different loops over the same internal table.
All of these steps, and many more, can be implemented to improve performance.
We would need to look at your code to see how it can be improved in your case.
    Regards,
    Vivek Shah

  • DAC: Clearing Failed Execution Plans and BAW Tables

    Hi all,
    Thank you for taking the time to review this post.
    Background
    Oracle BI Applications 7.9.6 Financial Analytics
    OLTP Source: E-Business Suite 11.5.10
    Steps Taken
1. In DAC I have created a New Source Container based on Oracle 11.5.10.
    2. I have updated the parameters in the Source System parameters
    3. Then I created a new Execution Plan as a copy of the Financials_Oracle 11.5.10 record and checked Full Load Always
    4. Added new Financials Subject Areas so that they have the new Source System
    5. Updated the Parameters tab with the new Source System and Generated the Parameters
6. Built a new Execution Plan - it fails.
    Confirmation for Rerun
    I want to confirm that the correct steps to Rerun an Execution Plan are as follows. I want to ensure that the OLTP (BAW) tables are truncated. I am experiencing duplicates in the W_GL_SEGMENTS_D (and DS) table even though there are no duplicates on the EBS.
    In DAC under the EXECUTE window do the following:
    - Navigate to the 'Current Run' tab.
    - Highlight the failed execution plan.
- Right click and select 'Mark as completed.'
    - Enter the numbers/text in the box.
    Then:
    - In the top toolbar select Tools --> ETL Management --> Reset Data Sources
- Enter the numbers/text in the box.
    Your assistance is greatly appreciated.
    Kind Regards,
    Gary.

    Hi HTH,
    I can confirm that I do not have duplicates on the EBS side.
    I got the SQL Statement by:
    1. Open Mapping SDE_ORA_GL_SegmentDimension in the SDE_ORA11510_Adaptor.
    2. Review the SQL Statement in the Source Qualifier SQ_FND_FLEX_VALUES
3. I ran this SQL command against my EBS 11.5.10 source OLTP, and the duplicates that are appearing in the W_GL_SEGMENT_DS table do not exist there.
    SELECT
    FND_FLEX_VALUE_SETS.FLEX_VALUE_SET_ID,
    FND_FLEX_VALUE_SETS.FLEX_VALUE_SET_NAME,
    FND_FLEX_VALUES.FLEX_VALUE,
    MAX(FND_FLEX_VALUES_TL.DESCRIPTION),
    MAX(FND_FLEX_VALUES.LAST_UPDATE_DATE),
    MAX(FND_FLEX_VALUES.LAST_UPDATED_BY),
    MAX(FND_FLEX_VALUES.CREATION_DATE),
    MAX(FND_FLEX_VALUES.CREATED_BY),
    MAX(FND_FLEX_VALUES.START_DATE_ACTIVE),
    MAX(FND_FLEX_VALUES.END_DATE_ACTIVE),
    FND_FLEX_VALUE_SETS.LAST_UPDATE_DATE LAST_UPDATE_DATE1
    FROM
    FND_FLEX_VALUES, FND_FLEX_VALUE_SETS, FND_FLEX_VALUES_TL
    WHERE
    FND_FLEX_VALUES.FLEX_VALUE_SET_ID = FND_FLEX_VALUE_SETS.FLEX_VALUE_SET_ID AND FND_FLEX_VALUES.FLEX_VALUE_ID = FND_FLEX_VALUES_TL.FLEX_VALUE_ID AND
    FND_FLEX_VALUES_TL.LANGUAGE = 'US' AND
    (FND_FLEX_VALUES.LAST_UPDATE_DATE > TO_DATE('$$LAST_EXTRACT_DATE', 'MM/DD/YYYY HH24:MI:SS') OR
    FND_FLEX_VALUE_SETS.LAST_UPDATE_DATE > TO_DATE('$$LAST_EXTRACT_DATE', 'MM/DD/YYYY HH24:MI:SS'))
    GROUP BY
    FND_FLEX_VALUE_SETS.FLEX_VALUE_SET_ID,
    FND_FLEX_VALUE_SETS.FLEX_VALUE_SET_NAME,
    FND_FLEX_VALUES.FLEX_VALUE,
FND_FLEX_VALUE_SETS.LAST_UPDATE_DATE
However, one thing I noticed is that I wanted to validate what the parameter $$LAST_EXTRACT_DATE is being populated with.
    My investigation took me along the following route:
Checked what was set up in the DAC (DAC Build AN 10.1.3.4.1.20090415.0146, build date: April 15 2009):
1. Design View -> Source System Parameters -> $$LAST_EXTRACT_DATE = Runtime Variable = @DAC_SOURCE_PRUNED_REFRESH_TIMESTAMP (haven't been able to track this variable down!)
    2. Setup View -> DAC System Properties -> InformaticaParameterFileLocation -> $INFA_HOME/server/infa_shared/SrcFiles
    Reviewing one of the log files for my failing Task:
    $INFA_HOME/server/infa_shared/SrcFiles/ORA_11_5_10.DWRLY_OLAP.SDE_ORA11510_Adaptor.SDE_ORA_GLSegmentDimension_Full.txt
I noticed that several variables near the bottom (including $$LAST_EXTRACT_DATE) have not been populated. This variable gets populated at runtime, but is there a log file that shows the value that gets populated? I would also have expected a substitution variable in place of a static value.
    [SDE_ORA11510_Adaptor.SDE_ORA_GLSegmentDimension_Full]
    $$ANALYSIS_END=01/01/2011 12:59:00
    $$ANALYSIS_END_WID=20110101
    $$ANALYSIS_START=12/31/1979 01:00:00
    $$ANALYSIS_START_WID=19791231
    $$COST_TIME_GRAIN=QUARTER
    $$CURRENT_DATE=03/17/2010
    $$CURRENT_DATE_IN_SQL_FORMAT=TO_DATE('2010-03-17', 'YYYY-MM-DD')
    $$CURRENT_DATE_WID=20100317
    $$DATASOURCE_NUM_ID=4
    $$DEFAULT_LOC_RATE_TYPE=Corporate
    $$DFLT_LANG=US
    $$ETL_PROC_WID=21147868
    $$FILTER_BY_SET_OF_BOOKS_ID='N'
    $$FILTER_BY_SET_OF_BOOKS_TYPE='N'
    $$GBL_CALENDAR_ID=WPG_Calendar~Month
    $$GBL_DATASOURCE_NUM_ID=4
    $$GLOBAL1_CURR_CODE=AUD
    $$GLOBAL1_RATE_TYPE=Corporate
    $$GLOBAL2_CURR_CODE=GBP
    $$GLOBAL2_RATE_TYPE=Corporate
    $$GLOBAL3_CURR_CODE=MYR
    $$GLOBAL3_RATE_TYPE=Corporate
    $$HI_DATE=TO_DATE('3714-01-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
    $$HI_DT=01/01/3714 12:00:00
    $$HR_ABSNC_EXTRACT_DATE=TO_DATE('1980-01-01 08:19:00', 'YYYY-MM-DD HH24:MI:SS')
    $$HR_WRKFC_ADJ_SERVICE_DATE='N'
    $$HR_WRKFC_EXTRACT_DATE=01/01/1970
    $$HR_WRKFC_SNAPSHOT_DT=TO_DATE('2004-01-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
    $$HR_WRKFC_SNAPSHOT_TO_WID=20100317
    $$Hint1=
    $$Hint_Tera_Post_Cast=
    $$Hint_Tera_Pre_Cast=
    $$INITIAL_EXTRACT_DATE=06/27/2009
    $$INVPROD_CAT_SET_ID=27
    $$INV_PROD_CAT_SET_ID10=
    $$INV_PROD_CAT_SET_ID1=27
    $$INV_PROD_CAT_SET_ID2=
    $$INV_PROD_CAT_SET_ID3=
    $$INV_PROD_CAT_SET_ID4=
    $$INV_PROD_CAT_SET_ID5=
    $$INV_PROD_CAT_SET_ID6=
    $$INV_PROD_CAT_SET_ID7=
    $$INV_PROD_CAT_SET_ID8=
    $$INV_PROD_CAT_SET_ID9=
    $$LANGUAGE=
    $$LANGUAGE_CODE=E
    $$LAST_EXTRACT_DATE=
    $$LAST_EXTRACT_DATE_IN_SQL_FORMAT=
    $$LAST_TARGET_EXTRACT_DATE_IN_SQL_FORMAT=
    $$LOAD_DT=TO_DATE('2010-03-17 19:27:10', 'YYYY-MM-DD HH24:MI:SS')
    $$LOW_DATE=TO_DATE('1899-01-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
    $$LOW_DT=01/01/1899 00:00:00
    $$MASTER_CODE_NOT_FOUND=
    $$ORA_HI_DATE=TO_DATE('4712-12-31 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
    $$PROD_CAT_SET_ID10=
    $$PROD_CAT_SET_ID1=2
    $$PROD_CAT_SET_ID2=
    $$PROD_CAT_SET_ID3=
    $$PROD_CAT_SET_ID4=
    $$PROD_CAT_SET_ID5=
    $$PROD_CAT_SET_ID6=
    $$PROD_CAT_SET_ID7=
    $$PROD_CAT_SET_ID8=
    $$PROD_CAT_SET_ID9=
    $$PROD_CAT_SET_ID=2
    $$SET_OF_BOOKS_ID_LIST=1
    $$SET_OF_BOOKS_TYPE_LIST='NONE'
    $$SOURCE_CODE_NOT_SUPPLIED=
    $$TENANT_ID=DEFAULT
    $$WH_DATASOURCE_NUM_ID=999
    $DBConnection_OLAP=DWRLY_OLAP
    $DBConnection_OLTP=ORA_11_5_10
$PMSessionLogFile=ORA_11_5_10.DWRLY_OLAP.SDE_ORA11510_Adaptor.SDE_ORA_GLSegmentDimension_Full.log
The following snippet was discovered in the DAC logs for the failed Task (log file: SDE_ORA_GLSegmentDimension_Full_DETAIL.log):
    First error code [7004]
First error message: [TE_7004 Transformation Parse Warning [FLEX_VALUE_SET_ID || '~' || FLEX_VALUE]; transformation continues...]
Finally, I can confirm that there was a Truncate Table task for W_GL_LINKAGE_INFORMATION_GS that executed successfully in task SDE_ORA_GL_LinkageInformation_Extract.
    Any further guidance greatly appreciated.
    Kind Regards,
    Gary.
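A grouped count against the warehouse table is a quick way to confirm the duplicates independently of the extract SQL; a hypothetical sketch (the column names are assumed from the mapping above):

SELECT FLEX_VALUE_SET_ID, FLEX_VALUE, COUNT(*)
FROM W_GL_SEGMENT_DS
GROUP BY FLEX_VALUE_SET_ID, FLEX_VALUE
HAVING COUNT(*) > 1;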

  • Using Word Easy Table Under Report Generation takes long time to add data points to table and generate report

    Hi All,
We used the Report Generation Toolkit to generate the report in Word, and with the other APIs under it we get good reports.
But when there are more data points (> 100 on all channels) it takes a long time to write all the data, create a table in Word, and generate the report.
Any suggestions on how to make this happen in a few seconds?
    Please assist.

    Well, I just tried my suggestion.  I simulated a 24-channel data producer (I actually generated 25 numbers -- the first number was the row number, followed by 24 random numbers) and generated 100 of these for a total of 2500 double-precision values.  I then saved this table to Excel and closed the file.  I then opened Word (all using RGT), wrote a single text line "Text with Excel", inserted the previously-created "Excel Object", and saved and closed Word.
    First, it worked (sort of).  The Table in Word started on a new page, and was in a very tiny font (possibly trying to fit 25 columns on a page?  I didn't inspect it very carefully).  This is probably "too much data" to really try to write the whole table, unless you format it for, say, 3 significant figures.
    Now, timing.  I ran this four times, two duplicate sets, one with Excel and Word in "normal" mode, one in "minimized".  To my surprise, this didn't make a lot of difference (minimized was less than 10% faster).  Here are the approximate times:
         Generate the data -- about 1 millisecond.
         Write the Excel Report -- about 1.5 seconds
         Write the Word Report -- about 10.5 seconds
    Seems to me this is way faster than trying to do this directly in Word.
    Bob Schor

  • How to create execution plan in DAC and how to start ETL steps

    Hi,
For ETL configuration I have installed and configured:
1. Oracle 10g DB
2. OBIEE and OBIA 7.9.6
3. Informatica server (here I created the repository and integration services)
4. DAC server 10g (DAC setup is also configured, e.g. warehouse tables created, etc.)
The same installation was done on Windows for the client.
Now I'm stuck at execution plan creation, and then: how do I start the ETL, and where?
Source: Oracle EBIZ R12.1.1
Target: data warehouse (10g DB)
Please help me with the steps up to the ETL start.
    Thanks,
    Srik

    Hi Srik
I am assuming you have followed the steps required before a load in the library for DAC config, CSV lookups, etc.
    Did you check out the example in the documentation?
    Here is the link
    http://download.oracle.com/docs/cd/E14223_01/bia.796/e14217/windows_ic.htm#BABFGIHE
If you follow those steps and run the execution plan, you can monitor the progress under the Current Run tab in the DAC and in the Informatica Workflow Monitor.
    Regards
    Nick

  • SSIS package takes a long time when inserting data into temp tables

Querying records from one server and inserting them into temp tables is taking a long time.
Are there any settings in the package which would enhance performance?

Will a local temp table (#temp) enhance the performance?
If you're planning to use #temp tables in SSIS, make sure you read this:
    http://consultingblogs.emc.com/jamiethomson/archive/2006/11/19/SSIS_3A00_-Using-temporary-tables.aspx
    Please Mark This As Answer if it helps to solve the issue Visakh ---------------------------- http://visakhm.blogspot.com/ https://www.facebook.com/VmBlogs
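The gist of that approach, as I understand it, is that a local temp table only survives across tasks when the same connection is reused (RetainSameConnection = True on the connection manager); a minimal T-SQL sketch with placeholder names:

-- Created in one Execute SQL Task; visible to later tasks only while the
-- connection stays open (RetainSameConnection = True), since #temp tables
-- are dropped when their creating session closes.
IF OBJECT_ID('tempdb..#staging') IS NOT NULL
    DROP TABLE #staging;
CREATE TABLE #staging
(
    id      INT           NOT NULL,
    payload NVARCHAR(200) NULL
);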

  • Sharepoint Designer workflow takes long time for execution of action

    Hi All ,
I have created a declarative workflow using SharePoint Designer 2010, which executes successfully but takes a lot of time to run.
Details below:
The workflow contains only one activity, "Assign Task to User", and it starts automatically after a document is uploaded.
The workflow takes 10 minutes to create the task for the user, 10 minutes to claim the task, and 10 minutes to execute any action (Approve or Reject) taken on the task.
There is no error in the log file or event log related to the workflow.
Options tried:
1. I have tried the options suggested in this article (http://www.codeproject.com/Articles/251828/How-to-Improve-Workflow-Performance-in-SharePoint#_rating), but no luck.
2. Reduced the interval of the workflow timer job from 5 to 1. Still no luck.
    Any thoughts regarding this would be appreciated.
    ragava_28

    Hi Thuan,
I have a similar issue posted here:
    http://social.msdn.microsoft.com/Forums/sharepoint/en-US/82410142-31bc-43a2-b8bb-782c99e082d3/designer-workflow-with-takes-time-to-execute?forum=sharepointcustomizationprevious
    Regards,
    SPGeek03

  • Long Time for execution of select query

    Hi,
    I have a select query
    select * from Table where Time1 and time 2;
The table has a much larger number of columns than rows, and the query is taking a lot of time to execute.
Is there any way we can reduce the time taken by the query?
    Thanks
    Jit

select * from Table where Time1 and time 2;
I doubt this query runs for a long time. More or less 1 millisecond: the time for Oracle to check the query syntax and return an error.
    Nicolas.
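The predicate as written is not valid SQL, which is the point of the reply; presumably a range filter was intended, something like this hypothetical reconstruction (table, column, and bind names invented):

SELECT *
FROM some_table
WHERE event_time BETWEEN :time1 AND :time2;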

  • Every 3rd data package taking long time for execution

    Hi Everyone
    We are facing a strange situation. Our scenario involves doing a full load from DSO to CUBE.
Start routines are not very database intensive and care has been taken to write them in an optimized way.
But strangely, every 3rd data package takes exceptionally longer than the other data packages.
a) The DTP has 3 parallel processes.
b) Time spent in extraction, rules, and update is constant for every data package.
c) Start routine time is larger for every 3rd data package and keeps increasing, e.g. 5 mins, 10 mins, 24 mins, 33 mins; it grows with each 3rd package.
I tried to analyze the data that took so much time but found no difference between the normal and the slow data packages (i.e. there was no logical difference in the data that would make the start routine behave like this).
I was wondering what the possible reasons might be; maybe some external system factors are responsible. If someone can help in this regard it will be highly appreciated.

    Hi Hemanth,
In your start routine, are you by any chance adding to or multiplying the number of records in the source_package? Something like: copy the source package into an internal table, add records to the internal table, and then copy it back to the source package? If logic of this sort is in your start routine, you need to refresh the internal table. Otherwise, the internal table's records keep increasing with every data package, so the processing time grows as the load progresses. This is one common mistake I have seen. Please check your code for something like that and refresh the internal tables; see if this makes any difference.
    Thanks and Regards
    Subray Hegde

  • Taking long time to initialize planning area.

    Hi All,
A few questions on initializing the planning area. We have about 500,000 CVCs and 8 million records in it. We have recently upgraded to SCM 7.0. I am noticing now that every time we run the background job to create CVCs and initialize the planning area, job SAPAPO/TS_PAREA_INITIALIZE takes a long time, about 1.5 hours. I don't know why; is it something to do with memory, or can we change a setting to improve performance?
Also, please advise whether we have to initialize the planning area every time we create CVCs or run realignment, or just during the month-end job.
    Thanks
    KV

    Hi KV,
1) Housekeeping of your existing data, by archiving or deleting unwanted data and keeping your system free from inconsistencies, could be your primary solution to this issue.
2) To keep the size of time series objects to a manageable minimum, it makes sense to initialize planning areas by scheduling background jobs that run periodically. This is sometimes known as working with rolling horizons.
3) If you want the system to roll over periodically without scheduling the same job repeatedly, you should use selection variables to enter relative dates. The system then calculates the start and end dates each time the report is executed. For instance, you specify that the initialization period should be from 12 months in the past to 12 months in the future. If today's date is July 1, 2003, this means that the initialization period starts on July 1, 2002 and ends on June 30, 2004. If the report is started anew on August 1, 2003, the dates change to August 1, 2002 and July 31, 2004.
    Regards
    R. Senthil Mareeswaran.

  • Long time for loading Planning books/data views

    Hi
Could someone offer some pointers/solutions for the extremely long time taken for a planning view (data view) to load and populate for certain selection profiles that already have a large number of transactional records? In my case it is specifically the runtime dumps thrown for a popular combination (a large number of transactions).
Urgent suggestions/solutions required; even a few keywords, tcodes, etc. would help.
    Thanks

I hope you don't have too many macros hogging memory in the interactive book. The other thing is to be on the latest LC (liveCache) build. Also try to use the default units of measure (keep the data view settings for UoM blank).
Can you confirm whether you built the TLB view brand new in 5.0, or did you migrate the books? The first opening of a book takes longer due to compilation of certain objects. In SM50, try to identify where the process takes longest: at the client end, in the LC database procedure, or at the application level.

  • Long time for execution for scheduled CIF background jobs

    Hi,
We have scheduled a CIF background job to run daily around 10:30 PST.
There is a large variation in the time required to execute this job.
On 26 Dec it took approximately 48,000 seconds, while the regular average is only 120 seconds.
Today, after more than 6,000 seconds, the job is still in ACTIVE status.
Does anyone know the reason for such long delays in these jobs?
How can I reduce the execution time (this happens about once a week)?
    rgds/Jay

    Hi Jay,
    A few obvious things to look for:
    1.  Multiple CIF activation jobs running at the same time
2.  Large changes in the master data, e.g. new plant, new material masters, new customers, etc.
    3.  Conflicts with other non CIF programs that may be going after the same data
    4.  Communication degradation between the OLTP and SCM clients
    Normally you refer such questions to someone on your Basis team, or perhaps your DBA.  They can turn on tracing tools that can track the changes in your environment that may be contributing to the changes in run time.
    Regards,
    DB49

  • Query takes a long time to execute

    Hi,
I am working on Oracle 9.2.0.7.0 installed on HP-UX 11.23. I just installed it and created 2 instances yesterday. Before I installed 9.2.0.7, 9.2.0.5 was already installed there with one instance running. When I try to connect as any user or execute any query in 9.2.0.7, it takes quite some time to execute. May I know what the reason is, and how to improve the performance?
    eg:SQL>select * from v$instance;
    SQL>conn scott/tiger;
    SQL>select * from tab;
Even these statements take quite some time to execute.
    My RAM size is 6GB
    swap size is 9 GB
The SGA configuration for both instances running in 9.2.0.7 is as follows:
    SGA_MAX_SIZE=257MB
    Shared_pool_size 160MB
    Buffer_cache_size 32MB
    Java_pool_size 0
    large_pool_size 16MB
    log_buffer 512K
    PGA_AGGREGATE_TARGET 24MB
The SGA configuration for the instance running in 9.2.0.5 is as follows:
    SGA_MAX_SIZE=305MB
    Shared_pool_size 117MB
    Buffer_cache_size 32MB
    Java_pool_size 117MB
    large_pool_size 16MB
    log_buffer 512K
    PGA_AGGREGATE_TARGET 24MB
Can anyone help me with this issue, please?

    Hi,
You say that your RAM size is 6 GB, yet the SGA is configured with only:
SGA_MAX_SIZE=257MB
Shared_pool_size 160MB
Buffer_cache_size 32MB
First of all I would increase the buffer cache.
You should look in the Oracle docs for how to configure these parameters.
    Regs,
    Acr
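As an illustration only (the values are placeholders to be sized against the 6 GB of RAM, and on 9i this requires the instance to use an spfile), the cache could be raised like so:

-- Raise the SGA ceiling first, then the buffer cache; restart required.
ALTER SYSTEM SET sga_max_size  = 1024M SCOPE = SPFILE;
ALTER SYSTEM SET db_cache_size = 512M  SCOPE = SPFILE;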

  • Long time creating Exchange 2007 mailbox

    Hi everybody.
I'm creating a mailbox in Exchange 2007 when provisioning users in AD, and it works fine. The problem is the time spent doing it (approximately 2:30 minutes). I've been researching but have not solved it.
    Any idea of what could be the cause?
    Regards.

    Hi Rookie,
As a matter of fact, you are quite ahead of the rest of us.
I have tried over and over to create mailboxes using Exchange 2007 in IDM, but so far unsuccessfully.
Creating users in AD works fine. For some reason, IDM refuses to create mailboxes.
If you have actually succeeded in accomplishing this (albeit slowly), I would love to know how you did it.
    Thanks
