Data manager jobs taking too long or hanging

Hoping someone here can provide some assistance with the 4.2 version. We are specifically using BPC/OutlookSoft 4.2 SP4 (and are in the process of upgrading to BPC 7.5). Three-server environment - SQL, OLAP and Web.
Problem: Data manager jobs in each application of our production appset (five applications) are either taking too long to complete for very small jobs (single entity/single period data copy/clear, under 1,000 records) or hanging completely for larger jobs. This has been an issue for the last 7 days. During normal operation, small DM jobs ran in under a minute and large ones took only a few minutes.
Failed attempts at resolution thus far:
1. Processed all applications from the OLAP server
2. Confirmed issue is specific to our appset and is not present in ApShell
3. Copied packages from ApShell to application to eliminate package corruption
4. Considered the Windows security updates that were applied to all three servers, but I assume those would also impact ApShell.
5. Cleared tblDTSLog history
6. Rebooted all three servers
7. Suspected antivirus; however, the problem persists with antivirus disabled on all three servers.
Other Observations
There are several tables in the SQL database named k2import# and several stored procedures named DMU_k2import#. My guess is these did not get removed because I killed the hung jobs. I'm not sure if their existence is causing any issues.
To make a long story short, how can I narrow down at which point the jobs are hanging, or which step is taking the longest? I have turned on Debug Script, but I don't have documentation to make sense of all this info. What exactly happens when I run a Clear package? At this point, my next step is to run SQL Profiler to get a look at what is going on behind the scenes on the SQL server. I also want to rule out the COM+ objects on the web server, but I'm not sure where to start.
Any help is greatly appreciated!!
Thank you,
Hitesh
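
Before reaching for SQL Profiler, a lighter-weight first check is to look at what the hung Data Manager job's session is actually doing on the SQL server. A minimal sketch using standard SQL Server commands (the SPID value 57 is only a placeholder; pick out the real one from the sp_who2 output by login or host name):

EXEC sp_who2 'ACTIVE';            -- find the long-running or blocked SPID belonging to the DM job
DBCC INPUTBUFFER (57);            -- show the statement that SPID is currently executing
SELECT spid, blocked, waittype, waittime, lastwaittype, cmd
FROM master..sysprocesses
WHERE spid = 57 OR blocked <> 0;  -- is it blocked, and by whom?

If the job's SPID is sitting idle or blocked rather than doing work, that points at blocking on the SQL side or at the COM+/web tier rather than at the query itself.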

Hi,
The problem seems to be related to the database. Do you have any maintenance plan for the database?
It is specific to your appset because each appset has its own database.
I suspect you have to run sp_updatestats (Update Statistics) against your database, and I think the issue with your jobs hanging will be solved.
The DMU_k2import# tables and procedures are left over from hung imports... you can delete them, because they just grow the size of the database and for sure are not used anymore.
Regards
Sorin Radulescu
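
A minimal sketch of the two steps Sorin describes, assuming the appset database is selected first (the name AppSetDB is only a placeholder) and that the leftover objects follow the k2import# / DMU_k2import# naming mentioned above:

USE AppSetDB;
-- Refresh optimizer statistics for every table in the appset database
EXEC sp_updatestats;

-- Generate DROP statements for the leftover import tables and procedures
-- (review the generated statements before running them)
SELECT 'DROP TABLE [' + name + ']' FROM sysobjects WHERE xtype = 'U' AND name LIKE 'k2import%';
SELECT 'DROP PROCEDURE [' + name + ']' FROM sysobjects WHERE xtype = 'P' AND name LIKE 'DMU_k2import%';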

Similar Messages

  • Job taking too long

    hello,
    I have set up a job and it has been running for a day now, and I don't know why it is taking such a long time.
    The job is simply writing transactional data to the newly created info structure.
    So I am not sure why it is taking so long.
    How can I make some checks on it?
    How should I do the trace, and what are the steps?
    Please help

    Hi Andy,
    yes... you will see the statistical record after
    - the program has finished, or
    - the program has been aborted (not on 4.0B, but in newer releases ...).
    While it is running... you can only use
    - SM50 (refresh several times) - to see what the program is doing NOW
    - SM50 (double-click) - to get some accumulated values with respect to DB accesses
    - SE30 / ST05 / ST12 - to trace the current execution
    - Debugger - to get the call stack and current memory consumption
    Kind regards,
    Hermann

  • Having trouble with the adobe extension manager---download taking too long?

    Hello, I'm currently in the process of downloading the last of my CS6 Design and Web Premium suite, which consists of Dreamweaver, Flash, Photoshop, Fireworks, Illustrator, Acrobat X and InDesign. Everything downloaded just fine except InDesign and Acrobat. The Extension Manager just keeps downloading; it's been 3 days. I've had this problem for a while and haven't had the time to sort it out. Does anyone know what is wrong? Can I download InDesign and Acrobat without the Adobe Extension Manager?

    great.  here's acrobat xi and a few others.
    p.s. please mark helpful/correct responses.
    Downloads available:
    Suites and Programs:  CC | CS6 | CS5.5 | CS5 | CS4 | CS3
    Acrobat:  XI, X | 9,8 | 9 standard
    Premiere Elements:  12 | 11, 10 | 9, 8, 7
    Photoshop Elements:  12 | 11, 10 | 9,8,7
    Lightroom:  5.4 (win) 5.4 (mac) | 5 | 4 | 3
    Captivate:  8 | 7 | 6 | 5
    Contribute:  CS5 | CS4, CS3
    Download and installation help for Adobe links
    Download and installation help for Prodesigntools links are listed on most linked pages.  They are critical; especially steps 1, 2 and 3.  If you click a link that does not have those steps listed, open a second window using the Lightroom 3 link to see those 'Important Instructions'.

  • I was backing up my iPhone by changing the location of the library because I don't have enough space. My phone was taking too long copying files so I cancelled it. The data is stored in the desired location, and now I can't delete that backup

    I was backing up my iPhone by changing the location of the library because I don't have enough space. My phone was taking too long copying files, so I cancelled it. The data is stored in the desired location, and now I can't delete that backup.
    Also, can you tell me about the performance of the iPhone 4 with iOS 7.1.1?
    T0X1C

    rabidrabbit wrote:
    Can I back up my iPhone 4S to my ipad 3 (64 gb)?
    no
    rabidrabbit wrote:
    However, now I don't have enough space in iCloud to backup either device. Why not?
    iCloud only gives you so much free storage; if you exceed the 5 GB limit you have to pay for additional storage.

  • R/3 Extraction taking too long to load data into BW

    HI There,
    I'm trying to extract the SAP standard extractor 0FI_AP_4 into BW, and it's taking an endless amount of time.
    Even the extractor checker RSA3 is taking too long to return the data. I don't know why it's taking so long to execute,
    since there is not much data to justify such a long time.
    I enhanced the datasource with three fields from BSEG using a user exit.
    Is that the reason why it's taking too long? Does a user exit slow down the extraction process?
    What measures should I take to speed up the process?
    Thanks for your time
    Vandana

    Thanks for all you replies.
    Please go through the steps I've gone through :
    - Installed the Business Content, which is at version 3.5
    - Changed the update rules and transfer rules and migrated the datasource to BI 7
    - Enhanced 0FI_AP_3 to include three fields from the BSEG table
    - Ran RSA3; the new fields are showing, but the loading is quite slow.
    - Commented out the enhancement code and ran RSA3; with little difference, the data is showing up
    - Removed the comments and ran it again; it's fine, though it takes a little more time than the previous step... but the data is showing up
    - Replicated the datasource into BW
    - Created the InfoPackage and started the init process (before this, deleted the previously stored init)
    - Data isn't loading; please see the error message below.
    Diagnosis: The data request was a full update. In this case, the corresponding table in the source system does not contain any data.
    System Response: Info IDoc received with status 8.
    Procedure: Check the data basis in the source system.
    - Checked the transformation between datasource 0FI_AP_4 and InfoSource ZFI_AP_4,
      and I did NOT find the three fields which I enhanced from the BSEG table in the 0FI_AP_4 datasource.
    - Replicated the datasource 0FI_AP_4 again, but no change.
    Now... I don't know what's happening here.
    When I check the datasource 0FI_AP_4 in RSA6, I can see the three new fields from BSEG.
    When I check RSA3, I can see the data getting populated with the three new fields from BSEG.
    When I check the fields in the datasource 0FI_AP_4 in BW, I can see the three new fields. That shows
    that the connection between BW and R/3 is fine, isn't it?
    Now... can anyone please suggest how to go forward from here?
    Thanks for your time
    Vandana

  • Data Archive Script is taking too long to delete a large table

    Hi All,
    We have data archive scripts that move data for a date range to a different table. The script has two parts: first copy data from the original table to the archive table, and second delete the copied rows from the original table. The first part executes very fast, but the deletion is taking too long, i.e. around 2-3 hours. The customer analysed the delete query and says the script is not using the index and is doing a full table scan, but the predicate itself is the primary key. Please help... more info below.
    CREATE TABLE "APP"."MON_TXNS"
       (    "ID_TXN" NUMBER(12,0) NOT NULL ENABLE,
        "BOL_IS_CANCELLED" VARCHAR2(1 BYTE) DEFAULT 'N' NOT NULL ENABLE,
        "ID_PAYER" NUMBER(12,0),
        "ID_PAYER_PI" NUMBER(12,0),
        "ID_PAYEE" NUMBER(12,0),
        "ID_PAYEE_PI" NUMBER(12,0),
        "ID_CURRENCY" CHAR(3 BYTE) NOT NULL ENABLE,
        "STR_TEXT" VARCHAR2(60 CHAR),
        "DAT_MERCHANT_TIMESTAMP" DATE,
        "STR_MERCHANT_ORDER_ID" VARCHAR2(30 BYTE),
        "DAT_EXPIRATION" DATE,
        "DAT_CREATION" DATE,
        "STR_USER_CREATION" VARCHAR2(30 CHAR),
        "DAT_LAST_UPDATE" DATE,
        "STR_USER_LAST_UPDATE" VARCHAR2(30 CHAR),
        "STR_OTP" CHAR(6 BYTE),
        "ID_AUTH_METHOD_PAYER" NUMBER(1,0),
        "AMNT_AMOUNT" NUMBER(23,0) DEFAULT 0,
        "BOL_IS_AUTOCAPTURE" VARCHAR2(1 BYTE) DEFAULT 'N' NOT NULL ENABLE,
        "ID_USE_CASE" NUMBER(4,0) NOT NULL ENABLE,
        "ID_AUTH_METHOD_PAYEE" NUMBER(2,0),
         CONSTRAINT "CKC_BOL_IS_CANCELLED_MON_TXNS" CHECK (BOL_IS_CANCELLED in ('Y','N')) ENABLE,
         CONSTRAINT "PK_MON_TXNS" PRIMARY KEY ("ID_TXN")
      USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "LARGE_INDEX"  ENABLE,
         CONSTRAINT "FK_MON_TXNS_CURRENCIES" FOREIGN KEY ("ID_CURRENCY")
          REFERENCES "APP"."CURRENCIES" ("ID_CURRENCY") ENABLE,
         CONSTRAINT "FK_MON_TXNS_TO_PAYER" FOREIGN KEY ("ID_PAYER")
          REFERENCES "APP"."CUSTOMERS" ("ID_CUSTOMER") ENABLE,
         CONSTRAINT "FK_MON_TXNS_TO_PAYEE" FOREIGN KEY ("ID_PAYEE")
          REFERENCES "APP"."CUSTOMERS" ("ID_CUSTOMER") ENABLE,
         CONSTRAINT "FK_MON_TXNS_REFERENCE_TXNS" FOREIGN KEY ("ID_TXN")
          REFERENCES "APP"."TXNS" ("ID_TXN") ENABLE,
         CONSTRAINT "FK_MON_TXNS_TO_PI_PAYER" FOREIGN KEY ("ID_PAYER_PI")
          REFERENCES "APP"."PIS" ("ID_PI") ENABLE,
         CONSTRAINT "FK_MON_TXNS_TO_PI_PAYEE" FOREIGN KEY ("ID_PAYEE_PI")
          REFERENCES "APP"."PIS" ("ID_PI") ENABLE,
         CONSTRAINT "FK_MON_TXNS_TO_AUTHMETHOD" FOREIGN KEY ("ID_AUTH_METHOD_PAYER")
          REFERENCES "APP"."AUTHENTICATION_METHODS" ("ID_AUTHENTICATION_METHOD") ENABLE,
         CONSTRAINT "FK_MON_TXNS_USE_CASE_ID" FOREIGN KEY ("ID_USE_CASE")
          REFERENCES "APP"."USE_CASES" ("ID_USE_CASE") ENABLE,
         CONSTRAINT "FK_MON_TXN_AUTH_PAYEE" FOREIGN KEY ("ID_AUTH_METHOD_PAYEE")
          REFERENCES "APP"."AUTHENTICATION_METHODS" ("ID_AUTHENTICATION_METHOD") ENABLE
      CREATE INDEX "APP"."IDX_MON_TXNS" ON "APP"."MON_TXNS" ("ID_PAYER")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "LARGE_INDEX" ;
      CREATE INDEX "APP"."IDX_PAYEE_MON_TXNS" ON "APP"."MON_TXNS" ("ID_PAYEE")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "LARGE_DATA" ;
      CREATE INDEX "APP"."IDX_PYE_PI_MON_TXNS" ON "APP"."MON_TXNS" ("ID_PAYEE_PI")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "LARGE_DATA" ;
      CREATE INDEX "APP"."IDX_PYR_PI_MON_TXNS" ON "APP"."MON_TXNS" ("ID_PAYER_PI")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "LARGE_DATA" ;
      CREATE INDEX "APP"."IDX_USE_CASE_MON_TXNS" ON "APP"."MON_TXNS" ("ID_USE_CASE")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "LARGE_DATA" ;
      CREATE UNIQUE INDEX "APP"."PK_MON_TXNS" ON "APP"."MON_TXNS" ("ID_TXN")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "LARGE_INDEX" ;
    Data is first moved to a table in schema3 (OTW), and then we delete all the rows that are in OTW from the original table. Below is the explain plan for the delete:
    SQL> explain plan for
      2  delete from schema1.mon_txns where id_txn in (select id_txn from schema3.OTW);
    Explained.
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 2798378986
    | Id  | Operation              | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | DELETE STATEMENT       |            |  2520 |   233K|    87   (2)| 00:00:02 |
    |   1 |  DELETE                | MON_TXNS   |       |       |            |          |
    |*  2 |   HASH JOIN RIGHT SEMI |            |  2520 |   233K|    87   (2)| 00:00:02 |
    |   3 |    INDEX FAST FULL SCAN| OTW_ID_TXN |  2520 | 15120 |     3   (0)| 00:00:01 |
    |   4 |    TABLE ACCESS FULL   | MON_TXNS   | 14260 |  1239K|    83   (0)| 00:00:02 |
    PLAN_TABLE_OUTPUT
    Predicate Information (identified by operation id):
    Please help,
    thanks,
    Banka Ravi

    'Best practice' is just what Oracle is already doing, as you have already been told: DELETE FROM myTable WHERE myDate BETWEEN myStart AND myEnd.
    Your use case is why many orgs elect to use partitioning, with that DATE column as the partition key. Then it is VERY FAST and VERY EASY to truncate or drop partitions that contain old data when you no longer need them.
    The other solution is to stop waiting so long to delete data, so that you don't have to delete large amounts at once. Instead of deleting data once a month, delete it once a week or even every night. Then the number of rows being deleted will be much smaller and, if the stats are kept current, Oracle may decide to use the index.
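    A minimal sketch of the partitioning approach described above, assuming the existing DAT_CREATION column is the date used for archiving (the partition names and boundary dates are only illustrative, and the real table would have to be rebuilt or redefined online to become partitioned):
    CREATE TABLE "APP"."MON_TXNS_PART"
    PARTITION BY RANGE (DAT_CREATION)
    ( PARTITION P_2012_01 VALUES LESS THAN (TO_DATE('2012-02-01','YYYY-MM-DD')),
      PARTITION P_2012_02 VALUES LESS THAN (TO_DATE('2012-03-01','YYYY-MM-DD')),
      PARTITION P_MAX     VALUES LESS THAN (MAXVALUE) )
    AS SELECT * FROM "APP"."MON_TXNS";
    -- Archiving a month then becomes a metadata operation instead of a multi-hour DELETE
    ALTER TABLE "APP"."MON_TXNS_PART" DROP PARTITION P_2012_01 UPDATE GLOBAL INDEXES;
    -- or, to keep the segment but empty it:
    -- ALTER TABLE "APP"."MON_TXNS_PART" TRUNCATE PARTITION P_2012_01;
    Any global indexes (such as the primary key index) need the UPDATE GLOBAL INDEXES clause, or they must be rebuilt afterwards.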

  • SQL Statement taking too long to get the data

    Hi,
    There are over 2500 records in a table, and when I retrieve all of them using 'SELECT * FROM Table' it is taking too long to get the data, i.e. 4.3 secs.
    Is there any possible way to shorten the processing time?
    Thanks

    Hi Patrick,
    Here is the sql statement and table desc.
    ID     Number
    SN     Varchar2(12)
    FN     Varchar2(30)
    LN     Varchar2(30)
    By     Varchar(255)
    Dt     Date(7)
    Add     Varchar2(50)
    Add1     Varchar2(30)
    Cty     Varchar2(30)
    Stt     Varchar2(2)
    Zip     Varchar2(12)
    Ph     Varchar2(15)
    Email     Varchar2(30)
    ORgId     Number
    Act     Varchar2(3)     
    select A."FN" || '' '' || A."LN" || '' ('' || A."SN" || '')'' "Name",
    A."By", A."Dt",
    A."Add" || ''
    '' || A."Cty" || '', '' || A."Stt" || '' '' || A."Zip" "Location",
    A."Ph", A."Email", A."ORgId", A."ID",
    A."SN" "OSN", A."Act"
    from "TBL_OPTRS" A where A."ID" <> 0 ';
    I'm displaying all rows in a report.
    if I use 'select * from TBL_OPTRS' , this also takes 4.3 to 4.6 secs.
    Thanks.
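    One way to see whether those 4.3 seconds are spent executing the query or simply fetching and rendering 2500 wide rows is to time it in SQL*Plus with autotrace; a small sketch against the same TBL_OPTRS table (the ARRAYSIZE value is just an example):
    SET TIMING ON
    SET AUTOTRACE TRACEONLY STATISTICS
    SET ARRAYSIZE 100   -- fetch 100 rows per round trip instead of the default 15
    SELECT * FROM TBL_OPTRS;
    If the elapsed time barely changes with the larger ARRAYSIZE and the consistent gets are low, the time is going into transferring and rendering the rows in the report rather than into reading the table.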

  • Network event is taking too long (100%)

    Hi everybody. We have a 10g DB on Windows. We're using OEM to manage the DB, and it has started to show an alert about database time spent waiting on the "Network" event class. It arises when we execute one module that updates several tables, which is taking too long. Before, we had this app on 8i, also on Windows, and that operation was much faster than it is now. The indexes on the tables are valid, and I've gathered statistics for the CBO, so I suppose the problem is, as OEM says, related to the network, but I don't know why, because the connection speed is the same as before and the two machines are on the same LAN.
    Any ideas??

    Here is the output requested:
    SQL> select * from v$system_event
    2 where event like 'SQL%';
    EVENT                          TOTAL_WAITS  TOTAL_TIMEOUTS  TIME_WAITED  AVERAGE_WAIT  TIME_WAITED_MICRO    EVENT_ID
    SQL*Net message to client          1159200               0          252             0            2516408  2067390145
    SQL*Net message to dblink             2234               0            1             0               5590  3655533736
    SQL*Net more data to client           5753               0          166             0            1657387   554161347
    SQL*Net more data to dblink             12               0            0             0                548  1958556342
    SQL*Net message from client        1159181               0    218341084           188         2.1834E+12  1421975091
    SQL*Net more data from client        23299               0       180602             8         1806015123  3530226808
    SQL*Net message from dblink           2234               0         3693             2           36934861  4093028837
    SQL*Net more data from dblink         4021               0           39             0             390002  1136294303
    SQL*Net break/reset to client       182986               0         2740             0           27397165  1963888671
    9 rows selected.
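    Note that 'SQL*Net message from client' (by far the largest TIME_WAITED above) is an idle wait: the server sitting and waiting for the application to send its next request. High values there usually point at the application making many small round trips rather than at raw network speed. A small query to put the per-event averages side by side, a sketch against the same view:
    SELECT event,
           total_waits,
           time_waited_micro / 1000000 AS seconds_waited,
           time_waited_micro / NULLIF(total_waits, 0) AS avg_wait_microsec
    FROM   v$system_event
    WHERE  event LIKE 'SQL*Net%'
    ORDER  BY time_waited_micro DESC;
    Comparing avg_wait_microsec for the "to client" and "more data" events against a simple ping between the two machines shows quickly whether the network itself is really slower than it was on the 8i setup.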

  • Moving 80 million records from the Conversion database to the System Test database (just for one transaction table) is taking too long

    Hello Friends,
    The background is that I am working as conversion manager: we move the data from Oracle to SQL Server using SSMA, then apply the conversion logic, and then move the data to System Test, UAT and Production.
    Scenario:
    Moving 80 million records from the Conversion database to the System Test database (just for one transaction table) is taking too long. Both databases are on the same server.
    Questions are…
    What is the best option?
    If we use SSIS it's very slow, taking 17 hours (sometimes it gets stuck and won't let us do any other processing).
    I am using my own script (stored procedure) and it's taking only 1 hour 40 min. I would like to know whether there is a better process to speed this up, and why SSIS is taking so long.
    When we move the data using SSIS, does it commit after a particular row count, or does it commit all the records together after writing them to the transaction log?
    Thanks
    Karthikeyan Jothi

    http://www.dfarber.com/computer-consulting-blog.aspx?filterby=Copy%20hundreds%20of%20millions%20records%20in%20ms%20sql
    Processing hundreds of millions of records can be done in less than an hour.
    Best Regards, Uri Dimant SQL Server MVP
    http://sqlblog.com/blogs/uri_dimant/
    MS SQL optimization: MS SQL Development and Optimization
    MS SQL Consulting: Large scale of database and data cleansing
    Remote DBA Services: Improves MS SQL Database Performance
    SQL Server Integration Services: Business Intelligence
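    On the commit question: a single INSERT ... SELECT (or one SSIS data flow with an unlimited commit size) runs as one transaction, which is what makes the log grow and the copy crawl. A minimal sketch of committing in batches instead, with hypothetical table and column names (Conversion.dbo.Txn, SystemTest.dbo.Txn, key column TxnId) and an example batch size:
    DECLARE @BatchSize INT = 500000;
    DECLARE @Rows INT = 1;
    WHILE @Rows > 0
    BEGIN
        BEGIN TRAN;
        -- copy the next batch of rows that are not yet in the target
        INSERT INTO SystemTest.dbo.Txn (TxnId, Amount, CreatedOn)
        SELECT TOP (@BatchSize) s.TxnId, s.Amount, s.CreatedOn
        FROM Conversion.dbo.Txn AS s
        WHERE NOT EXISTS (SELECT 1 FROM SystemTest.dbo.Txn AS t WHERE t.TxnId = s.TxnId);
        SET @Rows = @@ROWCOUNT;
        COMMIT TRAN;
    END;
    In SSIS the equivalent knobs are the OLE DB Destination's "Rows per batch" and "Maximum insert commit size" fast-load options, which control how often the destination commits.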

  • Browser times out when trying to view my website - says the server is taking too long. And no, I don't have a firewall.

    I can't view my website at www.artisancandies.com, even though it's working and everyone else seems to see it. No, I don't have a firewall, and it's not because of my internet provider - I have AT&T at work, and Comcast at home. My husband can see the site on his laptop. I tried dumping my cache in both Firefox and Safari, but it didn't work. I looked at it through proxify.com, and can see it that way, so I know it works. This is so frustrating, because I used to only see it when I typed in artisancandies.com - it would never work for me if I typed in www.artisancandies.com. Now it doesn't work at all. This is the message I get in Firefox:
    "The connection has timed out. The server at www.artisancandies.com is taking too long to respond."
    Please help!!!
    Kristen Scott

    Linc, here's what I've got from what you asked me to do. I hope you don't mind, but it was simple enough to leave everything in, so you could see the progression:
    Kristen-Scotts-Computer:~ kristenscott$ kextstat -kl | awk ' !/apple/ { print $6 $7 } '
    Kristen-Scotts-Computer:~ kristenscott$ sudo launchctl list | sed 1d | awk ' !/0x|apple|com\.vix|edu\.|org\./ { print $3 } '
    WARNING: Improper use of the sudo command could lead to data loss
    or the deletion of important system files. Please double-check your
    typing when using sudo. Type "man sudo" for more information.
    To proceed, enter your password, or type Ctrl-C to abort.
    Password:
    com.microsoft.office.licensing.helper
    com.google.keystone.daemon
    com.adobe.versioncueCS3
    Kristen-Scotts-Computer:~ kristenscott$ launchctl list | sed 1d | awk ' !/0x|apple|edu\.|org\./ { print $3 } '
    com.google.keystone.root.agent
    com.adobe.ARM.202f4087f2bbde52e3ac2df389f53a4f123223c9cc56a8fd83a6f7ae
    Kristen-Scotts-Computer:~ kristenscott$ ls -1A {,/}Library/{Ad,Compon,Ex,Fram,In,La,Mail/Bu,P*P,Priv,Qu,Scripti,Sta}* 2> /dev/null
    /Library/Components:
    /Library/Extensions:
    /Library/Frameworks:
    Adobe AIR.framework
    NyxAudioAnalysis.framework
    PluginManager.framework
    iLifeFaceRecognition.framework
    iLifeKit.framework
    iLifePageLayout.framework
    iLifeSQLAccess.framework
    iLifeSlideshow.framework
    /Library/Input Methods:
    /Library/Internet Plug-Ins:
    AdobePDFViewer.plugin
    Disabled Plug-Ins
    Flash Player.plugin
    Flip4Mac WMV Plugin.plugin
    Flip4Mac WMV Plugin.webplugin
    Google Earth Web Plug-in.plugin
    JavaPlugin2_NPAPI.plugin
    JavaPluginCocoa.bundle
    Musicnotes.plugin
    NP-PPC-Dir-Shockwave
    Quartz Composer.webplugin
    QuickTime Plugin.plugin
    Scorch.plugin
    SharePointBrowserPlugin.plugin
    SharePointWebKitPlugin.webplugin
    flashplayer.xpt
    googletalkbrowserplugin.plugin
    iPhotoPhotocast.plugin
    npgtpo3dautoplugin.plugin
    nsIQTScriptablePlugin.xpt
    /Library/LaunchAgents:
    com.google.keystone.agent.plist
    /Library/LaunchDaemons:
    com.adobe.versioncueCS3.plist
    com.apple.third_party_32b_kext_logger.plist
    com.google.keystone.daemon.plist
    com.microsoft.office.licensing.helper.plist
    /Library/PreferencePanes:
    Flash Player.prefPane
    Flip4Mac WMV.prefPane
    VersionCue.prefPane
    VersionCueCS3.prefPane
    /Library/PrivilegedHelperTools:
    com.microsoft.office.licensing.helper
    /Library/QuickLook:
    GBQLGenerator.qlgenerator
    iWork.qlgenerator
    /Library/QuickTime:
    AppleIntermediateCodec.component
    AppleMPEG2Codec.component
    Flip4Mac WMV Export.component
    Flip4Mac WMV Import.component
    Google Camera Adapter 0.component
    Google Camera Adapter 1.component
    /Library/ScriptingAdditions:
    Adobe Unit Types
    Adobe Unit Types.osax
    /Library/StartupItems:
    AdobeVersionCue
    HP Trap Monitor
    Library/Address Book Plug-Ins:
    SkypeABDialer.bundle
    SkypeABSMS.bundle
    Library/Internet Plug-Ins:
    Move_Media_Player.plugin
    fbplugin_1_0_1.plugin
    Library/LaunchAgents:
    com.adobe.ARM.202f4087f2bbde52e3ac2df389f53a4f123223c9cc56a8fd83a6f7ae.plist
    com.apple.FolderActions.enabled.plist
    com.apple.FolderActions.folders.plist
    Library/PreferencePanes:
    A Better Finder Preferences.prefPane
    Kristen-Scotts-Computer:~ kristenscott$

  • Attribute Change run taking too long time to complete.

    Hi all,
    The attribute change run has been taking too long to complete. It has to realign 50-odd aggregates, some by delta, some by reconstruction. In spite of all those aggregates it used to finish quickly, but for the last 4-5 days it has been taking an indefinite time to finish.
    Can anyone please suggest what might be causing this, and what the solution to the problem could be? It is becoming a big issue, so kindly help with your advice.
    Promise to reward your answer liberally.
    Regards,
    Pradyut.

    Hi,
    Check with your functional owners in R/3 whether there are mass changes/realignments or classification changes going on for master data, e.g. reassigning materials to other material groups. This causes a major realignment in BW for all the aggregates. Otherwise, check for parameter changes / patches or missing DB stats with your SAP Basis team.
    Kind regards, Patrick Rieken.

  • WebI Report is taking too long time to opening

    Hi All,
    When I am trying to open a WebI report in InfoView, it is taking a long time to open and refresh.
    Please suggest me a solution.
    Thanks in advance..
    Regards,
    Mahesh

    Hi,
    As the issue you are facing is that the WebI report is taking too long to open and refresh, I would recommend the steps below.
    1. Check whether the WebI report is set to "Refresh on Open"; if yes, you probably need to uncheck it, save the report and open it again.
    2. Try to run the same query in the backend database and see if it returns the data.
    3. Try to refresh the report with a smaller data selection.
    4. Make the report run on a specific WebI server, and when refreshing, have your BOBJ admin monitor that process to see if it goes into a hung state, uses high memory, etc.
    5. Restart the WebI process and run again.
    Thanks,
    aKs

  • Database open (recovery) taking too long

    Hi,
    I've been using your awesome BerkeleyDB Java Edition for a couple of years, and have been very happy with it.
    I am currently facing an issue with trying to open the database after a disk-full issue (which resulted in the database being unable to write, and hence not closed properly).
    While recovery seems to be operating, it has been taking an inordinate amount of time - 16 hours so far. My database has data of around 200GB, which inflated to over 450GB during deletion of entries, hence gobbling up all free space on disk.
    My questions are:
    * Should I continue to wait for recovery?
    * Is there any chance that recovery is looping?
    * Is there an easier way (DbDump?) to extract data from the database without having to perform recovery?
    Some other information that may help:
    * The recovery has decreased the size of the last significant file, and has created 3 new files since it started running.
    * I have been monitoring the open files (using lsof), and they change every now and then to other files, though a good amount of the time is spent near the end of the database log.
    Thus, I feel like recovery is running normally, just taking too long. Please let me know your opinion.
    A few other things I should mention regarding my issue:
    * The database was, until yesterday, running on BDB JE 3.3.75. After running several hours of recovery, I upgraded to 4.1.10 (since I read about a possible recovery looping bug in one of the versions).
    * Once 4.1.10 started recovery, it spat out errors regarding the last 2 files. Only after deleting those 2 files (the last being 0 bytes, the 2nd-last about 5 KB) did the recovery start. Note that the older 3.3.75 recovery never complained about those files. I can post the errors here if relevant.
    * Some of the .jdb files (about 500 of the 47,000 files that make up the database) are 100 MB files, since I had experimented with larger file sizes for a few days, then reverted the setting.
    Would any of these above affect a successful recovery?
    My setup is:
    OS:Linux CentOS 5.2, 64-bit, kernel 2.6.18-92.el5
    JVM: Sun Java 1.6.0_20, 64-bit
    Memory: 16 GB RAM, of which 8 GB is allocated to the java process (-Xmx8000M -Xms8000M)
    BDB cache set to use 6GB RAM (envconfig.setCacheSize(6000000000))
    Only the BDB basic API is being used (Environment, database, cursors). We do not use DPL, or HA features.
    Awaiting your kind response,
    Sushant A

    Hi Sushant,
    > Should I continue to wait for recovery?
    > Is there any chance that recovery is looping?
    I'm not aware of a bug that would cause recovery to loop; however, you may want to take thread dumps to see if it is progressing. It isn't easy to tell, since each phase of recovery is in fact a loop. What you can tell easily from the thread dumps is whether recovery is blocked (completely stopped) for some reason. I don't know of a bug that would cause this, but it's something I would check for.
    Assuming it is not blocked, I suggest that you leave recovery running, and additionally (in parallel) try to obtain some information about your log. While recovery is running you can run the DbPrintLog utility, which does not itself run recovery. I suggest running the following command, which will tell us in general what your log looks like and in particular how far apart the checkpoints are:
    java -jar je-x.y.z.jar DbPrintLog -h <envHome> -S > <output>
    Please post the output.
    If checkpoints are not running in your application for some reason, or they are running very infrequently, this can cause VERY long recoveries. Unfortunately, you may have such a problem in your app and not be aware of it, until you crash and have to recover. To guard against this sort of thing in the future, you should keep an eye on the checkpoint frequency. EnvironmentStats.getNCheckpoints and getEndOfLog can together be used to tell how much log is written between checkpoints. We will also be able to see this from the DbPrintLog -S output.
    > Is there an easier way (DbDump?) to extract data from the database without having to perform recovery?
    DbDump normally runs recovery. DbDump with the -r or -R option does not run recovery, but has other drawbacks. With -r, a large amount of memory may be necessary to dump an accurate representation of your data set. If this fails because you run out of memory, -R can be used, but this will dump multiple versions of each record and it will be up to you to interpret the output.
    If regular recovery does not succeed, then DbDump -r is the next thing to try.
    > Would any of these above affect a successful recovery?
    No, I don't believe so.
    --mark

  • SQL Update statement taking too long..

    Hi All,
    I have a simple update statement that goes through a table of 95,000 rows, and it is taking too long to update; here are the details:
    Oracle Version: 11.2.0.1 64bit
    OS: Windows 2008 64bit
    desc temp_person;
    Name                                                                                Null?    Type
    PERSON_ID                                                                           NOT NULL NUMBER(10)
    DISTRICT_ID                                                                     NOT NULL NUMBER(10)
    FIRST_NAME                                                                                   VARCHAR2(60)
    MIDDLE_NAME                                                                                  VARCHAR2(60)
    LAST_NAME                                                                                    VARCHAR2(60)
    BIRTH_DATE                                                                                   DATE
    SIN                                                                                          VARCHAR2(11)
    PARTY_ID                                                                                     NUMBER(10)
    ACTIVE_STATUS                                                                       NOT NULL VARCHAR2(1)
    TAXABLE_FLAG                                                                                 VARCHAR2(1)
    CPP_EXEMPT                                                                                   VARCHAR2(1)
    EVENT_ID                                                                            NOT NULL NUMBER(10)
    USER_INFO_ID                                                                                 NUMBER(10)
    TIMESTAMP                                                                           NOT NULL DATE
    CREATE INDEX tmp_rs_PERSON_ED ON temp_person (PERSON_ID,DISTRICT_ID) TABLESPACE D_INDEX;
    Index created.
    ANALYZE INDEX tmp_PERSON_ED COMPUTE STATISTICS;
    Index analyzed.
    explain plan for update temp_person
      2  set first_name = (select trim(f_name)
      3                    from ext_names_csv
      4                               where temp_person.PERSON_ID=ext_names_csv.p_id
      5                               and   temp_person.DISTRICT_ID=ext_names_csv.ed_id);
    Explained.
    @?/rdbms/admin/utlxpls.sql
    PLAN_TABLE_OUTPUT
    Plan hash value: 3786226716
    | Id  | Operation                   | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | UPDATE STATEMENT            |                | 82095 |  4649K|  2052K  (4)| 06:50:31 |
    |   1 |  UPDATE                     | TEMP_PERSON    |       |       |            |          |
    |   2 |   TABLE ACCESS FULL         | TEMP_PERSON    | 82095 |  4649K|   191   (1)| 00:00:03 |
    |*  3 |   EXTERNAL TABLE ACCESS FULL| EXT_NAMES_CSV  |     1 |   178 |    24   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       3 - filter("EXT_NAMES_CSV"."P_ID"=:B1 AND "EXT_NAMES_CSV"."ED_ID"=:B2)
    Note
       - dynamic sampling used for this statement (level=2)
    19 rows selected.
    By the looks of it, the update is going to take 6 hrs!!!
    ext_names_csv is an external table that has the same number of rows as the PERSON table.
    ROHO@rohof> desc ext_names_csv
    Name                                                                                Null?    Type
    P_ID                                                                                         NUMBER
    ED_ID                                                                                        NUMBER
    F_NAME                                                                                       VARCHAR2(300)
    L_NAME                                                                                       VARCHAR2(300)
    Can anyone help diagnose this, please?
    Thanks
    Edited by: rsar001 on Feb 11, 2011 9:10 PM

    Thank you all for the great ideas, you have been extremely helpful. Here is what we did and were able to resolve the query.
    We started with Etbin's idea to create a regular table from the external table, so that we can index and reference it more easily than an external table. We did the following:
    SQL> create table ext_person as select P_ID,ED_ID,trim(F_NAME) fst_name,trim(L_NAME) lst_name from EXT_NAMES_CSV;
    Table created.
    SQL> desc ext_person
    Name                                                                                Null?    Type
    P_ID                                                                                         NUMBER
    ED_ID                                                                                        NUMBER
    FST_NAME                                                                                     VARCHAR2(300)
    LST_NAME                                                                                     VARCHAR2(300)
    SQL> select count(*) from ext_person;
      COUNT(*)
         93383
    SQL> CREATE INDEX EXT_PERSON_ED ON ext_person (P_ID,ED_ID) TABLESPACE D_INDEX;
    Index created.
    SQL> exec dbms_stats.gather_index_stats(ownname=>'APPD', indname=>'EXT_PERSON_ED',partname=> NULL , estimate_percent=> 30 );
    PL/SQL procedure successfully completed.
    We had a look at the plan with the original SQL query that we had:
    SQL> explain plan for update temp_person
      2  set first_name = (select fst_name
      3                    from ext_person
      4                               where temp_person.PERSON_ID=ext_person.p_id
      5                               and   temp_person.DISTRICT_ID=ext_person.ed_id);
    Explained.
    SQL> @?/rdbms/admin/utlxpls.sql
    PLAN_TABLE_OUTPUT
    Plan hash value: 1236196514
    | Id  | Operation                    | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | UPDATE STATEMENT             |                | 93383 |  1550K|   186K (50)| 00:37:24 |
    |   1 |  UPDATE                      | TEMP_PERSON    |       |       |            |          |
    |   2 |   TABLE ACCESS FULL          | TEMP_PERSON    | 93383 |  1550K|   191   (1)| 00:00:03 |
    |   3 |   TABLE ACCESS BY INDEX ROWID| EXT_PERSON     |     9 |  1602 |     1   (0)| 00:00:01 |
    |*  4 |    INDEX RANGE SCAN          | EXT_PERSON_ED  |     1 |       |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       4 - access("EXT_PERSON"."P_ID"=:B1 AND "EXT_PERSON"."ED_ID"=:B2)
    Note
       - dynamic sampling used for this statement (level=2)
    20 rows selected.
    As you can see, the time has dropped to 37 min (from 6 hrs). Then we decided to change the SQL query and use donisback's suggestion (using MERGE); we explained the plan for the new query and here are the results:
    SQL> explain plan for MERGE INTO temp_person t
      2  USING (SELECT fst_name ,p_id,ed_id
      3  FROM  ext_person) ext
      4  ON (ext.p_id=t.person_id AND ext.ed_id=t.district_id)
      5  WHEN MATCHED THEN
      6  UPDATE set t.first_name=ext.fst_name;
    Explained.
    SQL> @?/rdbms/admin/utlxpls.sql
    PLAN_TABLE_OUTPUT
    Plan hash value: 2192307910
    | Id  | Operation            | Name         | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | MERGE STATEMENT      |              | 92307 |    14M|       |  1417   (1)| 00:00:17 |
    |   1 |  MERGE               | TEMP_PERSON  |       |       |       |            |          |
    |   2 |   VIEW               |              |       |       |       |            |          |
    |*  3 |    HASH JOIN         |              | 92307 |    20M|  6384K|  1417   (1)| 00:00:17 |
    |   4 |     TABLE ACCESS FULL| TEMP_PERSON  | 93383 |  5289K|       |   192   (2)| 00:00:03 |
    |   5 |     TABLE ACCESS FULL| EXT_PERSON   | 92307 |    15M|       |    85   (2)| 00:00:02 |
    Predicate Information (identified by operation id):
       3 - access("P_ID"="T"."PERSON_ID" AND "ED_ID"="T"."DISTRICT_ID")
    Note
       - dynamic sampling used for this statement (level=2)
    21 rows selected.
    As you can see, the update now takes 00:00:17 to run (need to say more?) :)
    Thank you all for your ideas that helped us get to the solution.
    Much appreciated.
    Thanks

  • Oracle - Query taking too long (Materialized view)

    Hi,
    I am extracting billing information and storing it in 3 different tables... in order to show total billing (80 to 90 columns, 1 million rows per month), I've used a materialized view... I do not have indexes on the 3 billing tables, but I do have 3 indexes on the materialized view...
    At the moment it's taking too long to query the data (running the query via Toad fails with an "Out of Memory" error message; running the query via APEX does return results, but takes way too long)...
    Please advise how to make the query efficient...

    tparvaiz,
    Is it possible, when building your materialized view, to summarize and consolidate the data?
    Out of a million rows, what would your typical user do with that amount of data if they could retrieve it readily? The answer to this question may indicate whether and how to summarize the data within the materialized view.
    Jeff
    Edited by: jwellsnh on Mar 25, 2010 7:02 AM
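    A minimal sketch of the kind of summarization Jeff is suggesting, with hypothetical table and column names (billing_detail, account_id, bill_date, amount), rolling the detail up to one row per account per month and letting the optimizer rewrite matching queries against it:
    CREATE MATERIALIZED VIEW mv_billing_summary
      BUILD IMMEDIATE
      REFRESH COMPLETE ON DEMAND
      ENABLE QUERY REWRITE
    AS
    SELECT account_id,
           TRUNC(bill_date, 'MM')  AS bill_month,
           COUNT(*)                AS line_count,
           SUM(amount)             AS total_amount
    FROM   billing_detail
    GROUP  BY account_id, TRUNC(bill_date, 'MM');
    With 80-90 detail columns reduced to a handful of aggregates, the view stays small enough to index and query interactively, while the full-width detail remains in the base tables for drill-down.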
