Performance issue on Date filter

In my WHERE condition I want a date filter on the sale date: all sales after 9/1/2014.
Case 1
If I explicitly use a literal date filter like
SaleDate > '2014-09-01 00:00:00:000' I get the result in 2 seconds.
Case 2
Since I may need to use this date value again, I created a date table with a single column "date" and loaded the value '2014-09-01 00:00:00:000'.
So now my WHERE clause is
SaleDate > (SELECT date FROM dateTable)
When I run this, the result does not show up even after 10 minutes. Both columns are of type datetime. I am baffled. Why does this query not return a result?

As Erland mentioned, the two situations are very different for the optimizer. With a literal, the optimizer can properly estimate the number of qualifying rows and adapt the query plan appropriately. With a scalar subquery, the value is unknown at compile time, and the optimizer has to use heuristics that accommodate any value. In this case, the selection of all rows more recent than September 1st, 2014 is probably a small percentage of the table.
I can't explain why the optimizer or engine goes awry, because the subquery's result is a scalar and shouldn't lead to such a long runtime. If you are unlucky, the optimizer expanded the query and actually joins the two tables. That would make the indexes on dateTable relevant, as well as the distribution and cardinality of dateTable's rows. If you want to know, you would have to inspect the (actual) query plan.
In general, I don't think your approach is a smart thing to do. I don't know why you want to keep your selection date in a table (as opposed to passing it as a parameter of a stored procedure), but if you want to stick with it, you could break the query up into something like the following. The optimizer would still have to use heuristics (instead of more reliable estimates), but some unintended consequences could disappear.
DECLARE @min_date datetime;
SET @min_date = (SELECT date FROM dateTable);

SELECT ...
FROM ...
WHERE SaleDate > @min_date
If you use a parameter (or an appropriate query hint), you will probably get performance close to your first case.
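For illustration, here is a minimal sketch of the variable-plus-hint variant; the table name Sales is made up, only the pattern matters:

-- Sketch only: Sales/SaleDate stand in for the real table and column.
DECLARE @min_date datetime;

SET @min_date = (SELECT [date] FROM dateTable);

SELECT *
FROM   Sales
WHERE  SaleDate > @min_date
OPTION (RECOMPILE);   -- compile the plan using the actual value in @min_date

With OPTION (RECOMPILE) the statement is recompiled at each execution, so the optimizer can use the actual date value much like it does for a literal, at the cost of a recompile per run.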
Gert-Jan

Similar Messages

  • Performance issue and data getting interchanged in BO Webi report + SAP BW

    Hi,
    We are using SAP BW queries as the source for creating some BO reports.
    Environments :
    SAP - SAP BI 7.1
    BO - BO XI 3.1
    Issues :
The reports were working fine in Dev and Q with less data. But when we point the universes to BW prod (where we have much more data), the reports take quite a long time to refresh and get timed out. This query has some key figures with customer exits defined to show only one month of data, and BW accelerators are updated for the InfoCubes pertaining to this query. The BO report returns data if we apply a filter in the 'Query Panel' of Webi to show only current-month dates, but then the issue is that values get interchanged for many objects. For example, there are two objects, ABS version and Market region: their values get interchanged at the BO level.
Please let us know if anything needs to be done in BO or BW to fix this issue, if anyone has faced the same.
Also, please let us know if customer exits and accelerators work fine with BO.
    Thanks
    Sivakami

    Hi,
    Thanks Roberto. We'll check the notes
    @Ingo,
We were able to solve the performance issue by removing unused key figures and dimensions from the query, but the column value interchange issue still persists.
    The build version is  - 12.3.0
    Query Stripping
Where should we enable query stripping? When I went through some documentation it said it is enabled automatically from XI 3.1 SP3. Can you please let us know if that is so and what we need to do to enable it?
The column interchange happens when we use dimensions in a certain order: when Product type is used along with Market region, Market region shows the values of Product type in the Webi report.
    Thanks & Regards,
    Sivakami

  • Performance issues with data warehouse loads

We have performance issues with our data warehouse load ETL process. I have run
analyze and dbms_stats and checked the database environment. What else can I do to optimize performance? I cannot use Statspack since we are running Oracle 8i. Thanks
    Scott

    Hi,
    you should analyze the db after you have loaded the tables.
Do you use sequences to generate PKs? Do you have a lot of indexes and/or triggers on the tables?
If yes:
make sure your sequence caches values (ALTER SEQUENCE s CACHE 10000).
Drop all unneeded indexes while loading and disable triggers if possible.
How big is your redo log buffer? When loading a large amount of data it may be an option to enlarge this buffer.
Do you have more than one DBWR process? Writing in parallel can speed things up when a checkpoint is needed.
Is it possible to use a direct load? Or do you already load direct-path?
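As a rough illustration of the sequence-cache and direct-load points (the sequence name s comes from the reply above; the staging and target table names are hypothetical):

-- Enlarge the sequence cache so PK generation is less of a bottleneck
ALTER SEQUENCE s CACHE 10000;

-- Direct-path (append) insert from a staging table, writing above the high-water mark
INSERT /*+ APPEND */ INTO fact_sales
SELECT * FROM stg_sales;
COMMIT;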
    Dim

  • Performance issue loading data out of AS400

    Hi,
For loading data out of AS400, I have created a view containing a join between three AS400 tables connected with a database link (plus some changes in tnsnames.ora and the listener; a hell of a job with Oracle, but it finally works).
    When I use the tool Toad, the results of this query will be shown in about 20 seconds.
    When I use this view in OWB to load this data into a target table, then the load takes about 15 MINUTES!
    Why is this so slow?
    Do I have to configure something in OWB to make this load faster?
    Other loads when I'm using views (to Oracle tables) to load data are running fast.
It seems that Oracle internally does more than just run the view.
    Who knows?
    Regards,
    Maurice

    Maurice,
    OWB generates optimized code based on whether sources are local or remote. With remote sources, Warehouse Builder will generate code that uses inline views in order to minimize network traffic.
In your case, you confuse the code generation by creating a view that performs remote/local joins while telling OWB that the object is local (which is only partly true).
    Perhaps what you could do is create one-to-one views and leave it up to OWB to join the objects. One additional advantage you gain with this approach is that you can keep track of the impact analysis based on your source tables rather than views that are based on the tables with flat text queries.
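For example, instead of one view that already joins the three AS400 tables over the database link, you could expose each remote table through its own one-to-one view and let OWB generate the join. A sketch, with made-up table and database-link names:

-- One-to-one pass-through views over the remote AS400 tables (names are hypothetical)
CREATE OR REPLACE VIEW v_as400_orders    AS SELECT * FROM orders@as400_link;
CREATE OR REPLACE VIEW v_as400_customers AS SELECT * FROM customers@as400_link;
CREATE OR REPLACE VIEW v_as400_items     AS SELECT * FROM order_items@as400_link;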
    Mark.

  • Urgent : general abap performance issue

Hi folks,
I did some development in a new Smartform and it is working fine, but I have an issue: database performance is at 76%. I use code similar to the below, with various conditions, in 12 different places. Is it possible to improve the performance of this type of code? Please check it and let me know how I can do it, as quickly as possible. What percentage is acceptable for this type of performance issue?
    DATA : BEGIN OF ITVBRPC OCCURS 0,
           LV_POSNR LIKE VBRP-POSNR,
           END OF ITVBRPC.
    DATA : BEGIN OF ITKONVC OCCURS 0,
            LV_KNUMH LIKE KONV-KNUMH,
            LV_KSCHL LIKE KONV-KSCHL,
           END OF ITKONVC.
    DATA:  BEGIN OF ITKONHC OCCURS 0,
           LV_KNUMH LIKE KONH-KNUMH,
           LV_KSCHL LIKE KONH-KSCHL,
           LV_KZUST LIKE KONH-KZUST,
           END OF ITKONHC.
    DATA: BEGIN OF ITKONVC1 OCCURS 0,
           LV_KWERT LIKE KONV-KWERT,
           END OF ITKONVC1.
    DATA :  BEGIN OF ITCALCC OCCURS 0,
           LV_KWERT LIKE KONV-KWERT,
           END OF ITCALCC.
    DATA: COUNTC(3) TYPE n,
           TOTALC LIKE KONV-KWERT.
    SELECT POSNR FROM VBRP INTO ITVBRPC
      WHERE VBELN = INV_HEADER-VBELN AND ARKTX = WA_INVDATA-ARKTX .
    APPEND ITVBRPC.
    ENDSELECT.
    LOOP AT ITVBRPC.
    SELECT KNUMH KSCHL FROM KONV INTO ITKONVC WHERE KNUMV =
    LV_VBRK-KNUMV AND KPOSN = ITVBRPC-LV_POSNR AND KSCHL = 'ZLAC'.
    APPEND ITKONVC.
    ENDSELECT.
    ENDLOOP.
    SORT ITKONVC BY LV_KNUMH.
    DELETE ADJACENT DUPLICATES FROM ITKONVC.
    LOOP AT ITKONVC.
    SELECT KNUMH KSCHL KZUST FROM KONH INTO ITKONHC WHERE KNUMH = ITKONVC-LV_KNUMH AND KSCHL = 'ZLAC' AND KZUST = 'Z02'.
    APPEND ITKONHC.
    ENDSELECT.
    ENDLOOP.
    LOOP AT ITKONHC.
    SELECT KWERT FROM KONV INTO ITKONVC1 WHERE KNUMH = ITKONHC-LV_KNUMH AND
    KSCHL = ITKONHC-LV_KSCHL AND KNUMV = LV_VBRK-KNUMV.
    MOVE ITKONVC1-LV_KWERT TO ITCALCC-LV_KWERT.
    APPEND ITCALCC.
    ENDSELECT.
    endloop.
    LOOP AT ITCALCC.
    COUNTC = COUNTC + 1.
    TOTALC = TOTALC + ITCALCC-LV_KWERT.
      ENDLOOP.
    MOVE ITKONHC-LV_KSCHL TO LV_CKSCHL.
    MOVE TOTALC TO LV_CKWERT.
It's urgent.
Thanks and bye,
Suresh

You need to use FOR ALL ENTRIES instead of a SELECT inside the loop.
    Try this:
    DATA : BEGIN OF ITVBRPC OCCURS 0,
    VBELN LIKE VBRP-VBELN,
    LV_POSNR LIKE VBRP-POSNR,
    END OF ITVBRPC.
    DATA: IT_VBRPC_TMP like ITVBRPC occurs 0 with header line.
    DATA : BEGIN OF ITKONVC OCCURS 0,
    LV_KNUMH LIKE KONV-KNUMH,
    LV_KSCHL LIKE KONV-KSCHL,
    END OF ITKONVC.
    DATA: BEGIN OF ITKONHC OCCURS 0,
    LV_KNUMH LIKE KONH-KNUMH,
    LV_KSCHL LIKE KONH-KSCHL,
    LV_KZUST LIKE KONH-KZUST,
    END OF ITKONHC.
    DATA: BEGIN OF ITKONVC1 OCCURS 0,
    KNUMH LIKE KONV-KNUMH,
KSCHL LIKE KONV-KSCHL,
    LV_KWERT LIKE KONV-KWERT,
    END OF ITKONVC1.
    DATA : BEGIN OF ITCALCC OCCURS 0,
    LV_KWERT LIKE KONV-KWERT,
    END OF ITCALCC.
    DATA: COUNTC(3) TYPE n,
    TOTALC LIKE KONV-KWERT.
    *SELECT POSNR FROM VBRP INTO ITVBRPC
    *WHERE VBELN = INV_HEADER-VBELN AND ARKTX = WA_INVDATA-ARKTX .
    *APPEND ITVBRPC.
    *ENDSELECT.
    SELECT VBELN POSNR FROM VBRP INTO TABLE ITVBRPC
    WHERE VBELN = INV_HEADER-VBELN AND
                     ARKTX = WA_INVDATA-ARKTX .
    If sy-subrc eq 0.
      IT_VBRPC_TMP[] = ITVBRPC[].
      Sort IT_VBRPC_TMP by vbeln posnr.
      Delete adjacent duplicates from IT_VBRPC_TMP comparing vbeln posnr.
SELECT KNUMH KSCHL FROM KONV
               INTO TABLE ITKONVC
               FOR ALL ENTRIES IN IT_VBRPC_TMP
               WHERE KNUMV = LV_VBRK-KNUMV AND
               KPOSN = IT_VBRPC_TMP-LV_POSNR AND
                KSCHL = 'ZLAC'.
    if sy-subrc eq 0.
       SORT ITKONVC BY LV_KNUMH.
        DELETE ADJACENT DUPLICATES FROM ITKONVC COMPARING LV_KNUMH.
   SELECT KNUMH KSCHL KZUST FROM KONH
             INTO TABLE ITKONHC
             FOR ALL ENTRIES IN ITKONVC
             WHERE KNUMH = ITKONVC-LV_KNUMH AND
                           KSCHL = 'ZLAC' AND
                           KZUST = 'Z02'.
       if sy-subrc eq 0.
SELECT KNUMH KSCHL KWERT FROM KONV
               INTO TABLE ITKONVC1
               FOR ALL ENTRIES IN ITKONHC
               WHERE KNUMH = ITKONHC-LV_KNUMH AND
                              KSCHL = ITKONHC-LV_KSCHL AND
                               KNUMV = LV_VBRK-KNUMV.
        Endif.
    Endif.
    Endif.
    *LOOP AT ITVBRPC.
    *SELECT KNUMH KSCHL FROM KONV INTO ITKONVC WHERE KNUMV =
    *LV_VBRK-KNUMV AND KPOSN = ITVBRPC-LV_POSNR AND KSCHL = 'ZLAC'.
    *APPEND ITKONVC.
    *ENDSELECT.
    *ENDLOOP.
    *SORT ITKONVC BY LV_KNUMH.
    *DELETE ADJACENT DUPLICATES FROM ITKONVC.
    *LOOP AT ITKONVC.
*SELECT KNUMH KSCHL KZUST FROM KONH INTO ITKONHC WHERE KNUMH = ITKONVC-LV_KNUMH AND KSCHL = 'ZLAC' AND KZUST = 'Z02'.
    *APPEND ITKONHC.
    *ENDSELECT.
    *ENDLOOP.
    *LOOP AT ITKONHC.
    *SELECT KWERT FROM KONV INTO ITKONVC1 WHERE KNUMH = ITKONHC-LV_KNUMH *AND
    *KSCHL = ITKONHC-LV_KSCHL AND KNUMV = LV_VBRK-KNUMV.
    *MOVE ITKONVC1-LV_KWERT TO ITCALCC-LV_KWERT.
    *APPEND ITCALCC.
    *ENDSELECT.
    *endloop.
    LOOP AT ITCALCC.
    COUNTC = COUNTC + 1.
    TOTALC = TOTALC + ITCALCC-LV_KWERT.
    ENDLOOP.
    MOVE ITKONHC-LV_KSCHL TO LV_CKSCHL.
    MOVE TOTALC TO LV_CKWERT.

  • Smartform performance issue

Hi folks,
I did some development in a new Smartform and it is working fine, but I have an issue: database performance is at 76%. I use code similar to the below, with various conditions, in 12 different places. Is it possible to improve the performance of this type of code? Please check it and let me know how I can do it, as quickly as possible. What percentage is acceptable for this type of performance issue?
    DATA : BEGIN OF ITVBRPC OCCURS 0,
           LV_POSNR LIKE VBRP-POSNR,
           END OF ITVBRPC.
    DATA : BEGIN OF ITKONVC OCCURS 0,
            LV_KNUMH LIKE KONV-KNUMH,
            LV_KSCHL LIKE KONV-KSCHL,
           END OF ITKONVC.
    DATA:  BEGIN OF ITKONHC OCCURS 0,
           LV_KNUMH LIKE KONH-KNUMH,
           LV_KSCHL LIKE KONH-KSCHL,
           LV_KZUST LIKE KONH-KZUST,
           END OF ITKONHC.
    DATA: BEGIN OF ITKONVC1 OCCURS 0,
           LV_KWERT LIKE KONV-KWERT,
           END OF ITKONVC1.
    DATA :  BEGIN OF ITCALCC OCCURS 0,
           LV_KWERT LIKE KONV-KWERT,
           END OF ITCALCC.
    DATA: COUNTC(3) TYPE n,
           TOTALC LIKE KONV-KWERT.
    SELECT POSNR FROM VBRP INTO ITVBRPC
      WHERE VBELN = INV_HEADER-VBELN AND ARKTX = WA_INVDATA-ARKTX .
    APPEND ITVBRPC.
    ENDSELECT.
    LOOP AT ITVBRPC.
    SELECT KNUMH KSCHL FROM KONV INTO ITKONVC WHERE KNUMV =
    LV_VBRK-KNUMV AND KPOSN = ITVBRPC-LV_POSNR AND KSCHL = 'ZLAC'.
    APPEND ITKONVC.
    ENDSELECT.
    ENDLOOP.
    SORT ITKONVC BY LV_KNUMH.
    DELETE ADJACENT DUPLICATES FROM ITKONVC.
    LOOP AT ITKONVC.
    SELECT KNUMH KSCHL KZUST FROM KONH INTO ITKONHC WHERE KNUMH = ITKONVC-LV_KNUMH AND KSCHL = 'ZLAC' AND KZUST = 'Z02'.
    APPEND ITKONHC.
    ENDSELECT.
    ENDLOOP.
    LOOP AT ITKONHC.
    SELECT KWERT FROM KONV INTO ITKONVC1 WHERE KNUMH = ITKONHC-LV_KNUMH AND
    KSCHL = ITKONHC-LV_KSCHL AND KNUMV = LV_VBRK-KNUMV.
    MOVE ITKONVC1-LV_KWERT TO ITCALCC-LV_KWERT.
    APPEND ITCALCC.
    ENDSELECT.
    endloop.
    LOOP AT ITCALCC.
    COUNTC = COUNTC + 1.
    TOTALC = TOTALC + ITCALCC-LV_KWERT.
      ENDLOOP.
    MOVE ITKONHC-LV_KSCHL TO LV_CKSCHL.
    MOVE TOTALC TO LV_CKWERT.
It's urgent.
Thanks and bye,
Suresh

    Hi,
you can increase the performance by doing the following:
1. Change the OCCURS clause to a separate structure and table declaration.
   You need to declare it like this:
   DATA : BEGIN OF xVBRPC,
                  LV_POSNR LIKE VBRP-POSNR,
              END OF xVBRPC.
   DATA: ITVBRPC LIKE TABLE OF xVBRPC.
2. Do not use SELECT statements inside LOOP statements.
    Ashven

  • Date Performance issue

    hi Guru's,
I am using OBIEE 11.1.6.8. One of my reports has a performance issue. When I dug into it, I found that the date filter is not applied in the SQL generated and sent to the DB, so it does a table scan. The strange thing is that it still displays data based on the date range filter. This only happens with the date dimension; all other dimensions work fine. I am not sure what is missing.
    Thanks In advance.
    regards
    Mohammed.

Hi Saichand,
Thanks for taking the time to look into this.
The filter is applied in the logical query, but the physical query sent to the DB does not have the filter. Because of that it does a full table scan of the fact table and takes almost 30 minutes to display data. I am not sure why the physical query does not include the date filter; when I add a location or another type of filter, it is added to the physical query sent to the DB.
    regards
    @li

  • Expression Filter Performance Issues / Misuse?

    I'm currently evaluating the Expression Filter functionality for a new requirement. The basic idea of the requirement is that I have a logging table that I want to get "interesting" records from. The way I want to set it up is to exclude known, "uninteresting", records or record patterns.
    So as far as an implementation I was considering a table of expressions that contained expression filter entries for the "uninteresting" records and checking this against my logging table using the EVALUATE operator and looking for a 0 result.
In my testing I wanted to return results where the EVALUATE operator is equal to 1, to see if my expressions are correct. In doing this I experienced significant performance issues. For example, my test filter matches 72 rows out of 61657 possible entries, and it took Oracle almost 10 minutes to evaluate this expression. I tried it with and without an Expression Filter index, with no noticeable change in execution time. The test case and query are provided below.
    Is this the right use case for Expression Filter? Am I misunderstanding how it works? What am I doing wrong?
    Test Case:
    Version
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
    PL/SQL Release 11.2.0.1.0 - Production
    CORE    11.2.0.1.0      Production
    TNS for 32-bit Windows: Version 11.2.0.1.0 - Production
    NLSRTL Version 11.2.0.1.0 - Production
    Objects & Query:
    CREATE TABLE expressions( white_list VARCHAR2(200));
    CREATE TABLE data
    AS
    SELECT OBJECT_ID
         , OWNER
         , OBJECT_NAME
         , CREATED
         , LAST_DDL_TIME
FROM   DBA_OBJECTS;
    BEGIN
      -- Create the empty Attribute Set --
      DBMS_EXPFIL.CREATE_ATTRIBUTE_SET('exptype');
      -- Define elementary attributes of EXF$TABLE_ALIAS type --
      DBMS_EXPFIL.ADD_ELEMENTARY_ATTRIBUTE('exptype','data',
                                            EXF$TABLE_ALIAS('test_user.data'));
    END;
    BEGIN
      DBMS_EXPFIL.ASSIGN_ATTRIBUTE_SET('exptype','expressions','white_list');
    END;
    INSERT INTO expressions(white_list) VALUES('data.owner=''TEST_USER'' AND data.created BETWEEN TO_DATE(''08/03/2010'',''MM/DD/YYYY'') AND TO_DATE(''08/05/2010'',''MM/DD/YYYY'')');
    exec dbms_stats.gather_table_stats(USER,'EXPRESSIONS');
    exec dbms_stats.gather_table_stats(USER,'DATA');
    CREATE INDEX expIndex ON Expressions (white_list) INDEXTYPE IS EXFSYS.EXPFILTER
      PARAMETERS ('STOREATTRS (data.owner,data.object_name,data.created)
                   INDEXATTRS (data.owner,data.object_name,data.created)');
    SELECT /*+ gather_plan_statistics */ data.* FROM data, expressions WHERE EVALUATE(white_list,exptype.getVarchar(data.rowid)) = 1;
    DROP TABLE expressions PURGE;
    BEGIN
            DBMS_EXPFIL.DROP_ATTRIBUTE_SET(attr_set => 'exptype');
    END;
    DROP TABLE data PURGE;

    Hi,
If the queries you are already using are stable enough, then rather than modifying the query you can try other options to improve query performance, such as data compression of the cube, creation of aggregates, placing the cube on BIA, or creating a cache for the query.
    Best Regards,
    Prashant Vankudre.

  • Performance issues with Planning data load & Agg in 11.1.2.3.500

We recently upgraded from 11.1.1.3 to 11.1.2.3. Post-upgrade we face performance issues with one of our Planning jobs (e.g. Job E). It takes 3x the time to complete in our new environment (11.1.2.3) compared to the old one (11.1.1.3). This job loads the actual data and then does the aggregation. The pattern we noticed is: if we run a restructure on the application and execute this job immediately, it completes in the same time as on 11.1.1.3. However, in current production (11.1.1.3) the jobs run in the sequence Job A -> Job B -> Job C -> Job D -> Job E and complete on time, but if we do the same test in 11.1.2.3 in the above sequence it takes 3x the time. We don't have a window to restructure the application before running Job E every time in Prod. The specs of the new environment are much higher than the old one's.
We have Essbase clustering (MS active/passive) in the new environment and the files are stored on a SAN drive. Could this be the cause? Has anyone faced performance issues in a clustered environment?

Do you have exactly the same Essbase config settings and calculations performing the AGG? Remember, something very small like UPDATECALC ON/OFF can make a big difference in timing.

  • QUERY PERFORMANCE AND DATA LOADING PERFORMANCE ISSUES

What are the query performance issues we need to take care of? Please explain and let me know the T-codes.
What are the data-loading performance issues we need to take care of? Please explain and let me know the T-codes.
Will reward full points.
Regards
Guru

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows the option
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8). Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
You can upload data from a data target (InfoCube or ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9). Building secondary indexes on the tables for the selection fields optimizes these tables for reading, reducing extraction time. If your selection fields are not key fields on the table, primary indexes are not much of a help when accessing data. In this case it is better to create secondary indexes with selection fields on the associated table using ABAP Dictionary to improve better selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
    Hope it Helps
    Chetan
    @CP..

  • Performance issue of frequently data inserted tables

    Hi all,
We have a table named raw_trap_store with columns trap_id (NUMBER, PK), source_ip (VARCHAR2), oid (VARCHAR2), message (CLOB) and received_time (DATE).
The table is partitioned into 24 partitions, the partitioning column being received_time (every hour's data is stored in its own partition).
It gets 40-50 inserts/sec on average; the overall number of records for a day is around 2.8-3 million. Data is retained for 2 days.
No updates happen on this table.
Performance issue:
We need a report that selects records from this table based on certain values of source IP (filtering condition on the source_ip column).
We need a report that selects records from this table based on certain values of OID (filtering condition on the oid column).
But if I create indexes on the source_ip and oid columns, the inserts get slow. (I have created normal indexes, not partitioned indexes.)
Please help me address the above issue.

Given the nature of your report (based on source_ip and oid) and the nature of your table partitioning (range partitioned by received_time), you have already made a good decision to create these particular indexes as normal (b-tree, global) indexes rather than locally partitioned ones. If you had partitioned them locally, your reports would not eliminate partitions (because they do not include the partition key in their WHERE clause) and your index range scans would scan all 24 partitions, generating a lot of logical I/O.
That said, remember that generally we insert once and select many times; you have to balance that. If you are sure that it is the creation of your two indexes that has decreased the insert performance, then you may set them to an unusable state before the insert and rebuild them afterwards. But this is only good advice if the volume of data to be inserted is much bigger than the volume already in the table.
And if you are not deleting from the table and the table does not have triggers or integrity constraints (like FK constraints), then you can opt for a direct-path insert using the /*+ append */ hint.
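A minimal sketch of that approach; the index name and the staging table for the batch are made up, and the direct-path insert only applies to a set-based load, not to the row-by-row trap inserts:

ALTER INDEX raw_trap_store_srcip_idx UNUSABLE;
ALTER SESSION SET skip_unusable_indexes = TRUE;

-- Direct-path load of the batch from a hypothetical staging table
INSERT /*+ APPEND */ INTO raw_trap_store
SELECT * FROM stg_raw_trap;
COMMIT;

ALTER INDEX raw_trap_store_srcip_idx REBUILD;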
    Best regards
    Mohamed Houri

  • Performance issue with FDM when importing data

In the FDM Web console, a performance issue has been detected when importing data (.txt).
In less than 10 seconds the .txt and .log files are created in the INBOX folder (the .txt file) and in OUTBOX\Logs (the .log file).
At that moment, the system shows the message "Processing, please wait" for 10 minutes. Eventually the information is displayed; however, if we want to see the second page, we have to wait more than 20 seconds.
It seems to be a performance issue when the system tries to show the imported data in the web page.
It has also been noted that when a user tries to import a txt file directly by clicking the tab "Select File From Inbox", the user also has to wait another 10 minutes before the information is displayed on the web page.
    Thx in advance!
    Cheers
    Matteo

    Hi Matteo
How much data is being imported / displayed when users are interacting with the system?
    There is a report that may help you to analyse this but unfortunately I cannot remember what it is called and don't have access to a system to check. I do remember that it breaks down the import process into stages showing how long it takes to process each mapping step and the overall time.
    I suspect that what you are seeing is normal behaviour but that isn't to say that performance improvements are not possible.
    The copying of files is the first part of the import process before FDM then starts the import so that will be quick. The processing is then the time taken to import the records, process the mapping and write to the tables. If users are clicking 'Select file from Inbox' then they are re-importing so it will take just as long as it would for you to import it, they are not just asking to retrieve previously imported data.
    Hope this helps
    Stuart

  • Performance issue in selecting data from a view because of not in condition

    Hi experts,
I have a requirement to select data in a view which is not available in a fact table, with certain join conditions. But the fact table contains 2 crore (20 million) rows, so this view is not working at all; it runs for a long time. I'm pasting the query here; please help me tune it. The whole query, excluding the NOT IN at the end, executes in 15 minutes, but when I add the NOT IN condition it runs for many hours, as the second table has millions of records.
CREATE OR REPLACE FORCE VIEW EDWOWN.MEDW_V_GIEA_SERVICE_LEVEL11
(
       SYS_ENT_ID,
       SERVICE_LEVEL_NO,
       CUSTOMER_NO,
       BILL_TO_LOCATION,
       PART_NO,
       SRCE_SYS_ID,
       BUS_AREA_ID,
       CONTRACT,
       WAREHOUSE,
       ORDER_NO,
       LINE_NO,
       REL_NO,
       REVISED_DUE_DATE,
       REVISED_QTY_DUE,
       QTY_RESERVED,
       QTY_PICKED,
       QTY_SHIPPED,
       ABBREVIATION,
       ACCT_WEEK,
       ACCT_MONTH,
       ACCT_YEAR,
       UPDATED_FLAG,
       CREATE_DATE,
       RECORD_DATE,
       BASE_WAREHOUSE,
       EARLIEST_SHIP_DATE,
       LATEST_SHIP_DATE,
       SERVICE_DATE,
       SHIP_PCT,
       ALLOC_PCT,
       WHSE_PCT,
       ABC_CLASS,
       LOCATION_ID,
       RELEASE_COMP,
       WAREHOUSE_DESC,
       MAKE_TO_FLAG,
       SOURCE_CREATE_DATE,
       SOURCE_UPDATE_DATE,
       SOURCE_CREATED_BY,
       SOURCE_UPDATED_BY,
       ENTITY_CODE,
       RECORD_ID,
       SRC_SYS_ENT_ID,
       BSS_HIERARCHY_KEY,
   SERVICE_LVL_FLAG
)
AS
       SELECT SL.SYS_ENT_ID,
                 SL.ENTITY_CODE
              || '-'
              || SL.order_no
              || '-'
              || SL.LINE_NO
              || '-'
              || SL.REL_NO
                 SERVICE_LEVEL_NO,
              SL.CUSTOMER_NO,
              SL.BILL_TO_LOCATION,
              SL.PART_NO,
              SL.SRCE_SYS_ID,
              SL.BUS_AREA_ID,
              SL.CONTRACT,
              SL.WAREHOUSE,
              SL.ORDER_NO,
              SL.LINE_NO,
              SL.REL_NO,
              SL.REVISED_DUE_DATE,
              SL.REVISED_QTY_DUE,
              NULL QTY_RESERVED,
              NULL QTY_PICKED,
              SL.QTY_SHIPPED,
              SL.ABBREVIATION,
              NULL ACCT_WEEK,
              NULL ACCT_MONTH,
              NULL ACCT_YEAR,
              NULL UPDATED_FLAG,
              SL.CREATE_DATE,
              SL.RECORD_DATE,
              SL.BASE_WAREHOUSE,
              SL.EARLIEST_SHIP_DATE,
              SL.LATEST_SHIP_DATE,
              SL.SERVICE_DATE,
              SL.SHIP_PCT,
              0 ALLOC_PCT,
              0 WHSE_PCT,
              SL.ABC_CLASS,
              SL.LOCATION_ID,
              NULL RELEASE_COMP,
              SL.WAREHOUSE_DESC,
              SL.MAKE_TO_FLAG,
              SL.source_create_date,
              SL.source_update_date,
              SL.source_created_by,
              SL.source_updated_by,
              SL.ENTITY_CODE,
              SL.RECORD_ID,
              SL.SRC_SYS_ENT_ID,
              SL.BSS_HIERARCHY_KEY,
              'Y' SERVICE_LVL_FLAG
         FROM (  SELECT SL_INT.SYS_ENT_ID,
                        SL_INT.CUSTOMER_NO,
                        SL_INT.BILL_TO_LOCATION,
                        SL_INT.PART_NO,
                        SL_INT.SRCE_SYS_ID,
                        SL_INT.BUS_AREA_ID,
                        SL_INT.CONTRACT,
                        SL_INT.WAREHOUSE,
                        SL_INT.ORDER_NO,
                        SL_INT.LINE_NO,
                        MAX (SL_INT.REL_NO) REL_NO,
                        SL_INT.REVISED_DUE_DATE,
                        SUM (SL_INT.REVISED_QTY_DUE) REVISED_QTY_DUE,
                        SUM (SL_INT.QTY_SHIPPED) QTY_SHIPPED,
                        SL_INT.ABBREVIATION,
                        MAX (SL_INT.CREATE_DATE) CREATE_DATE,
                        MAX (SL_INT.RECORD_DATE) RECORD_DATE,
                        SL_INT.BASE_WAREHOUSE,
                        MAX (SL_INT.LAST_SHIPMENT_DATE) LAST_SHIPMENT_DATE,
                        MAX (SL_INT.EARLIEST_SHIP_DATE) EARLIEST_SHIP_DATE,
                        MAX (SL_INT.LATEST_SHIP_DATE) LATEST_SHIP_DATE,
                        MAX (
                           CASE
                              WHEN TRUNC (SL_INT.LAST_SHIPMENT_DATE) <=
                                      TRUNC (SL_INT.LATEST_SHIP_DATE)
                              THEN
                                 TRUNC (SL_INT.LAST_SHIPMENT_DATE)
                              ELSE
                                 TRUNC (SL_INT.LATEST_SHIP_DATE)
                           END)
                           SERVICE_DATE,
                        MIN (
                           CASE
                              WHEN TRUNC (SL_INT.LAST_SHIPMENT_DATE) >=
                                      TRUNC (SL_INT.EARLIEST_SHIP_DATE)
                                   AND TRUNC (SL_INT.LAST_SHIPMENT_DATE) <=
                                          TRUNC (SL_INT.LATEST_SHIP_DATE)
                                   AND SL_INT.QTY_SHIPPED = SL_INT.REVISED_QTY_DUE
                              THEN
                                 100
                              ELSE
                                 0
                           END)
                           SHIP_PCT,
                        SL_INT.ABC_CLASS,
                        SL_INT.LOCATION_ID,
                        SL_INT.WAREHOUSE_DESC,
                        SL_INT.MAKE_TO_FLAG,
                        MAX (SL_INT.source_create_date) source_create_date,
                        MAX (SL_INT.source_update_date) source_update_date,
                        SL_INT.source_created_by,
                        SL_INT.source_updated_by,
                        SL_INT.ENTITY_CODE,
                        SL_INT.RECORD_ID,
                        SL_INT.SRC_SYS_ENT_ID,
                        SL_INT.BSS_HIERARCHY_KEY
                   FROM (SELECT SL_UNADJ.*,
                                DECODE (
                                   TRIM (TIMA.DAY_DESC),
                                   'saturday',   SL_UNADJ.REVISED_DUE_DATE
                                               - 1
                                               - early_ship_days,
                                   'sunday',   SL_UNADJ.REVISED_DUE_DATE
                                             - 2
                                             - early_ship_days,
                                   SL_UNADJ.REVISED_DUE_DATE - early_ship_days)
                                   EARLIEST_SHIP_DATE,
                                DECODE (
                                   TRIM (TIMB.DAY_DESC),
                                   'saturday',   SL_UNADJ.REVISED_DUE_DATE
                                               + 2
                                               + LATE_SHIP_DAYS,
                                   'sunday',   SL_UNADJ.REVISED_DUE_DATE
                                             + 1
                                             + LATE_SHIP_DAYS,
                                   SL_UNADJ.REVISED_DUE_DATE + LATE_SHIP_DAYS)
                                   LATEST_SHIP_DATE
                           FROM (SELECT NVL (s2.sys_ent_id, '00') SYS_ENT_ID,
                                        cust.customer_no CUSTOMER_NO,
                                        cust.bill_to_loc BILL_TO_LOCATION,
                                        cust.early_ship_days,
                                        CUST.LATE_SHIP_DAYS,
                                        ord.PART_NO,
                                        ord.SRCE_SYS_ID,
                                        ord.BUS_AREA_ID,
                                        ord.BUS_AREA_ID CONTRACT,
                                        NVL (WAREHOUSE, ord.entity_code) WAREHOUSE,
                                        ORDER_NO,
                                        ORDER_LINE_NO LINE_NO,
                                        ORDER_REL_NO REL_NO,
                                        TRUNC (REVISED_DUE_DATE) REVISED_DUE_DATE,
                                        REVISED_ORDER_QTY REVISED_QTY_DUE,
                                        --   NULL QTY_RESERVED,
                                        --    NULL QTY_PICKED,
                                        SHIPPED_QTY QTY_SHIPPED,
                                        sold_to_abbreviation ABBREVIATION,
                                        --        NULL ACCT_WEEK,
                                        --      NULL ACCT_MONTH,
                                        --    NULL ACCT_YEAR,
                                        --  NULL UPDATED_FLAG,
                                        ord.CREATE_DATE CREATE_DATE,
                                        ord.CREATE_DATE RECORD_DATE,
                                        NVL (WAREHOUSE, ord.entity_code)
                                           BASE_WAREHOUSE,
                                        LAST_SHIPMENT_DATE,
                                        TRUNC (REVISED_DUE_DATE)
                                        - cust.early_ship_days
                                           EARLIEST_SHIP_DATE_UnAdj,
                                        TRUNC (REVISED_DUE_DATE)
                                        + CUST.LATE_SHIP_DAYS
                                           LATEST_SHIP_DATE_UnAdj,
                                        --0 ALLOC_PCT,
                                        --0 WHSE_PCT,
                                        ABC_CLASS,
                                        NVL (LOCATION_ID, '000') LOCATION_ID,
                                        --NULL RELEASE_COMP,
                                        WAREHOUSE_DESC,
                                        NVL (
                                           DECODE (MAKE_TO_FLAG,
                                                   'S', 0,
                                                   'O', 1,
                                                   '', -1),
                                           -1)
                                           MAKE_TO_FLAG,
                                        ord.CREATE_DATE source_create_date,
                                        ord.UPDATE_DATE source_update_date,
                                        ord.CREATED_BY source_created_by,
                                        ord.UPDATED_BY source_updated_by,
                                        ord.ENTITY_CODE,
                                        ord.RECORD_ID,
                                        src.SYS_ENT_ID SRC_SYS_ENT_ID,
                                        ord.BSS_HIERARCHY_KEY
                                   FROM EDW_DTL_ORDER_FACT ord,
                                        edw_v_maxv_cust_dim cust,
                                        edw_v_maxv_part_dim part,
                                        EDW_WAREHOUSE_LKP war,
                                        EDW_SOURCE_LKP src,
                                        MEDW_PLANT_LKP s2,
                                        edw_v_incr_refresh_ctl incr
                                  WHERE ord.BSS_HIERARCHY_KEY =
                                           cust.BSS_HIERARCHY_KEY(+)
                                        AND ord.record_id = part.record_id(+)
                                        AND ord.part_no = part.part_no(+)
                                        AND NVL (ord.WAREHOUSE, ord.entity_code) =
                                               war.WAREHOUSE_code(+)
                                        AND ord.entity_code = war.entity_code(+)
                                        AND ord.record_id = src.record_id
                                        AND src.calculate_back_order_flag = 'Y'
                                        AND NVL (cancel_order_flag, 'N') != 'Y'
                                        AND UPPER (part.source_plant) =
                                               UPPER (s2.location_code1(+))
                                        AND mapping_name = 'MEDW_MAP_GIEA_MTOS_STG'
                                      --  AND NVL (ord.UPDATE_DATE, SYSDATE) >=
                                      --         MAX_SOURCE_UPDATE_DATE
                                        AND UPPER (
                                               NVL (ord.order_status, 'BOOKED')) NOT IN
                                               ('ENTERED', 'CANCELLED')
                                        AND TRUNC (REVISED_DUE_DATE) <= SYSDATE) SL_UNADJ,
                                EDW_TIME_DIM TIMA,
                                EDW_TIME_DIM TIMB
                          WHERE TRUNC (SL_UNADJ.EARLIEST_SHIP_DATE_UnAdj) =
                                   TIMA.ACCOUNT_DATE
                                AND TRUNC (SL_UNADJ.LATEST_SHIP_DATE_Unadj) =
                                       TIMB.ACCOUNT_DATE) SL_INT
                  WHERE TRUNC (LATEST_SHIP_DATE) <= TRUNC (SYSDATE)
               GROUP BY SL_INT.SYS_ENT_ID,
                        SL_INT.CUSTOMER_NO,
                        SL_INT.BILL_TO_LOCATION,
                        SL_INT.PART_NO,
                        SL_INT.SRCE_SYS_ID,
                        SL_INT.BUS_AREA_ID,
                        SL_INT.CONTRACT,
                        SL_INT.WAREHOUSE,
                        SL_INT.ORDER_NO,
                        SL_INT.LINE_NO,
                        SL_INT.REVISED_DUE_DATE,
                        SL_INT.ABBREVIATION,
                        SL_INT.BASE_WAREHOUSE,
                        SL_INT.ABC_CLASS,
                        SL_INT.LOCATION_ID,
                        SL_INT.WAREHOUSE_DESC,
                        SL_INT.MAKE_TO_FLAG,
                        SL_INT.source_created_by,
                        SL_INT.source_updated_by,
                        SL_INT.ENTITY_CODE,
                        SL_INT.RECORD_ID,
                        SL_INT.SRC_SYS_ENT_ID,
                        SL_INT.BSS_HIERARCHY_KEY) SL
        WHERE (SL.BSS_HIERARCHY_KEY,
               SL.ORDER_NO,
               Sl.line_no,
               sl.Revised_due_date,
               SL.PART_NO,
               sl.sys_ent_id) NOT IN
                 (SELECT BSS_HIERARCHY_KEY,
                         ORDER_NO,
                         line_no,
                         revised_due_date,
                         part_no,
                         src_sys_ent_id
                    FROM MEDW_MTOS_DTL_FACT
                   WHERE service_lvl_flag = 'Y');
Thanks,
asn

Also, 'NOT IN' plus nullable columns can be an expensive combination, and may not give the expected results. For example, compare these:
    with test1 as ( select 1 as key1 from dual )
       , test2 as ( select null as key2 from dual )
    select * from test1
    where  key1 not in
           ( select key2 from test2 );
    no rows selected
    with test1 as ( select 1 as key1 from dual )
       , test2 as ( select null as key2 from dual )
    select * from test1
    where  key1 not in
           ( select key2 from test2
             where  key2 is not null );
      KEY1
         1

1 row selected.

Even if the columns do contain values, if they are nullable Oracle has to perform a resource-intensive filter operation in case they are null. An EXISTS construction is not concerned with null values and can therefore use a more efficient execution plan, which leads people to think it is inherently faster in general.
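As a sketch, the final filter of the posted view could be expressed with NOT EXISTS instead (column names are taken from the posted query; the alias f is illustrative, and NOT EXISTS treats NULLs differently from NOT IN, so verify the results):

 WHERE NOT EXISTS
       (SELECT 1
          FROM MEDW_MTOS_DTL_FACT f
         WHERE f.service_lvl_flag  = 'Y'
           AND f.bss_hierarchy_key = SL.bss_hierarchy_key
           AND f.order_no          = SL.order_no
           AND f.line_no           = SL.line_no
           AND f.revised_due_date  = SL.revised_due_date
           AND f.part_no           = SL.part_no
           AND f.src_sys_ent_id    = SL.sys_ent_id);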

  • How to get around a performance issue when dealing with a lot of data

    Hello All,
    This is an academic question really, I'm not sure what I'm going to do with my issue, but I have some options.  I was wondering if anyone would like to throw in their two cents on what they would do.
I have a report; the users want to see all agreements and all conditions related to the updating of rebates and the affected invoices. From a technical perspective that is ENT6038-KONV-KONP-KONA-KNA1; these are the tables I have to hit. The problem is that when they retroactively update rebate conditions they can hit thousands of invoices, which blossoms out to thousands of conditions... you see the problem. I simply have too much data to grab, and it times out.
    I've tried everything around the code.  If you have a better way to get price conditions and agreement numbers off of thousands of invoices, please let me know what that is.
    I have a couple of options.
1) Use shared memory to preload the data for the report. This would work, but I won't know what data needs to be loaded until report run time. They put in a date; I simply can't preload everything. I don't like this option much.
2) Write a function module to do this work. When the user clicks the button to get this particular data, it will launch the FM in the background and e-mail them the results. As you know, the background job won't time out. So far this is my favored option.
    Any other ideas?
Oh... nope, BI is not an option, we don't have it. I know, I'm not happy about it. We do have a data warehouse, but the prospect of working with that group makes me wince.

My two cents: firstly, I totally agree with Derick that it's probably a good idea to go back to the business and justify their requirement in regards to reporting and "whether any user can meaningfully process all those results in an aggregate". But having dealt with customers across industries over a long period of time, it would probably be a bit fanciful to expect them to change their requirements too much, as in my experience they neither understand (too much) technology nor want to hear about technical limitations of a system etc. They want what they want, if possible yesterday!
So, about dealing with performance issues within ABAP, I'm sure you must already be using efficient programming techniques like hashed internal tables with unique keys, accessing rows of the table using field symbols and all that, but what I was going to suggest is to look at using [Extracts|http://help.sap.com/saphelp_nw04/helpdata/en/9f/db9ed135c111d1829f0000e829fbfe/content.htm]. I've had to deal with this a couple of times in the past when handling massive amounts of data and I found it very efficient in regards to performance. A good point to remember when using Extracts is that, I quote from SAP Help, "The size of an extract dataset is, in principle, unlimited. Extracts larger than 500KB are stored in operating system files. The practical size of an extract is up to 2GB, as long as there is enough space in the filesystem."
    Hope this helps,
    Cheers,
    Sougata.

  • Data Filter issue with

    Hi,
    We have created a document from an Essbase ASO cube.
A data filter has been created in the Essbase ASO cube. The data filter is 'all Europe Region'. When this data filter is applied to the user, and the user logs in to WA and accesses the document, the error below is produced.
    ========================================================
    Loading this document produced errors.
    Unknown members or dimensions have been removed from the document in an attempt to fix the problem.
    Saving this document now will prevent these errors when the document is loaded again.
    [1033] Native: 1001005 [Fri Mar 06 16:59:20 2009]uxsy9dbt.cos.agilent.com/vip_main/vip_main//Error(1001005) Unknown Member [Americas] in Report
    ==============================================
    With no data filter assigned to the user, the report is fine...
Tested the same data filter in the Excel add-in and there are no issues/errors. Is there any setting that needs to be changed in Web Analysis?

Following up even though this is an old issue. Out of the blue my users began complaining that their security in Web Analysis had gone awry. Their security in the Excel add-in was fine, though.
Most users use a WA Shared Preferences file here. However, one user, unbeknownst to me, was able to hard-code their userID/password in the "database" connection for the Shared Preference. So anytime another user logged in to Web Analysis, if you opened the View Pane to see all the details, the database user was John Doe (which was wrong) but the Web Analysis user was John Smith (which was correct). The John Doe user, as I said, somehow managed to save his userID/pwd as the default, which caused all other users to pass through to Essbase as John Doe instead of as themselves, which means they were getting HIS filter, not their own.
I wasn't able to figure out how he did that, but to fix it I deleted his profile, then edited the Shared Preferences file | Databases tab | highlighted the database in question and chose Edit | changed "Default logon" back to "Use User's ID and Password" instead of "Enter User ID and Password...John Doe|Pwd".
It works fine now. Users get the same, correct security filters applied whether they go into Web Analysis or the Excel add-in. (In Web Analysis we have it set up to open a report at a certain location depending on their security filters, so this was causing us quite a headache until we figured out what the problem was.)
I opened a case with Tech Support to try and shed light on how the user was able to hard-code their login credentials into the Shared Preference. Everyone only has Read access to that file, so...?
    HTH,
    Karen S.
