Siebel Prod App poor performance during EIM tables data load

Hi Experts,
I have a situation: Siebel production application performance becomes poor when I start loading data into the Siebel EIM tables during business hours. I'm not executing any EIM jobs during business hours, so why is the database becoming slow and the application getting affected?
I understand that the Siebel application fetches data from the Siebel base tables. In that case, why does the application get very slow when only the EIM tables are being loaded?
FYI - the Siebel production application server has a good hardware configuration.
Thanks,
Shaik

You have to talk with your DBA.
Let's say your DB is running from one hard disk (HD).
I guess you can imagine things will slow down when multiple processes start accessing the DB which is running from one HD.
When you start loading the EIM tables, your DB will spend a lot of time writing and have less time left to serve data to the Siebel application server.
The hardware for the Siebel application servers is not really relevant here.
See if you can put the EIM tables on their own partition/hard disks.
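If the database is Oracle, a minimal sketch of that idea (all names are placeholders; your DBA would choose the real datafile location, sizes and the full list of EIM tables) could look like this:

  -- Hypothetical dedicated tablespace on separate disks for the EIM staging tables
  CREATE TABLESPACE eim_data
    DATAFILE '/u05/oradata/siebel/eim_data01.dbf' SIZE 10G;

  -- Move an EIM table into that tablespace and rebuild its indexes there
  ALTER TABLE siebel.eim_contact MOVE TABLESPACE eim_data;
  ALTER INDEX siebel.eim_contact_u1 REBUILD TABLESPACE eim_data;

That keeps the heavy EIM insert I/O away from the disks that serve the base tables the application reads during business hours.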

Similar Messages

  • Crashing and poor performance during playback of a large project.

    Hi,
    I've been a regular user of iMovie for about 3 years and have edited several 50GB+ projects of DV-quality footage without too many major issues with lag or 'dropped frames'. I currently have an 80GB project that resides on a 95% full 320GB FireWire 400 external drive, and it has been getting very slow to open and near impossible to work with.
    Pair the bursting-at-the-seams external drive with an overburdened 90% full internal drive, and the poor performance wasn't unexpected. So I bought a 1TB FireWire 400 drive to free up some space on my Mac. My large iTunes library (150GB) was the main culprit and it was quickly moved onto the new drive.
    The iMovie project was then moved into my Mac's Movies folder - I figured that the project needs some "room" to work (not that I really understand how Macs use memory) and that having roughly 80GB free with 1.5GB RAM (which is more than I used to have) would make everything just that much smoother.
    Wrong.
    The project opened in roughly the same amount of time - but when I tried to play back the timeline, it plays like rubbish and then after 10-15 secs the Mac goes into 'sleep' mode. The screen goes off, the fans die down and the 'heartbeat' light goes on. A click of the mouse 'wakes' the Mac, only to find that if I try again, I get the same result.
    I've got so many variables going on here that it's probably hard to suggest what the problem might be, but all I could think of was repairing permissions (which I did, and none needed it).
    Stuck on this. Anyone have any advice?

    I understand completely, having worked with a 100 GB project once. I found that a movie bloated up to that size was just more difficult to work with, with jerky playback.
    I do have a couple of suggestions for you.
    You may need more than that 80GB of free space for this movie. Is there any reason you cannot move it to the 1TB drive? If you have only your iTunes library on it, you should have about 800 GB free.
    If you still need to have the project on your computer's drive, set your computer to never sleep.
    How close to finishing editing are you with this movie? If you are nearly done except for adding audio clips, you can export (share) it as a QuickTime Full Quality movie. The resulting QuickTime version of your iMovie will be smaller because it will contain only the clips actually used in the movie, not all the saved whole clips that iMovie keeps as part of its nondestructive editing feature. The QuickTime movie will be one continuous clip, incorporating all your edits and added audio. It CAN be further edited, but you cannot change the text of titles already there, change transitions, or remove already-added audio.
    I actually do this with nearly every iMovie. I create my movies by first importing videos, then adding still photos, then editing with titles, effects and transitions. I add audio last, and if it becomes too distorted in playback, I export the movie and then continue adding audio clips.
    My 100+ GB movie slimmed down to only 8 GB with this method. (The large size was due to having so many clips. The movie was from VHS footage of my son's little league all-star game, and the video had so many skipped segments that I had to split it into thousands of clips to remove the dropped ones. Very old VHS tape!).
    I haven't upgraded to QT 7.5.5, but I heard that the jerky playback issue is mostly resolved with this new upgrade. I am in mid-project with about 5 iMovies, so I will probably plod along with my work-around method, not wanting to upgrade in the middle of any of them.
    Hope this is helpful to you.

  • Main table data load - UNSPSC field is not loading

    I am new to SAP MDM
    I have the main table data that includes UNSPSC field. UNSPSC (hierarchy) table is already loaded.
    It works fine when I use import manager with field mapping and value mapping. (UNSPSC field value mapping is done).
    When I use the import server with the same map to load the main table data with the UNSPSC field (in this case the UNSPSC field value is different, but the UNSPSC lookup table has that value), the UNSPSC field is not loaded, while all the other fields are loaded with the new values, including images and PDFs.
    If I go to the import manager, do the value mapping again for the UNSPSC field with the new value, save the map, and then use the import server to load the data, it loads correctly.
    My question: when we use the import server, the UNSPSC code values in the main table data will be different each time, and it doesn't make sense to go to the import manager, do the value mapping, and save the import map before every load.
    What am I missing here? Can anyone help me?

    Could anyone clarify this?
    Issue: getting the UNSPSC field value mapped automatically by the import server while loading the main table.
    This issue was resolved yesterday and still works fine with the remote system MDC UNSPSC.
    Are there any settings in 'Set MDIS Unmapped Value Handling' (right-click on the Product Hierarchy field on the destination side)? By default it is set to 'Add' for both the working remote system and the non-working remote system.
    This is SAP MDM 5.5 SP6, and I am using the standard Product Master repository.
    I tried this with a different remote system, MDC R/3 & ERP, and it worked some of the time and didn't work later. When it works, the UNSPSC code field mapping automatically maps the values as well.
    On the destination side is the main table Products, and the destination [Remote Key] field is displayed.
    In the source file I have only 3 fields: Product No, Product Name and UNSPSC Category; UNSPSC Category is mapped to the destination field Product Hierarchy (lookup hierarchy).
    Do I have to map any field, or clone any field and map it to the [Remote Key] field on the destination side? If yes, what field do I have to clone and map to the Remote Key field? Are there any other settings necessary? I am not using any matching with this field or any other field.
    Steve.

  • How to decrease the dynamic table data loading time

    Hi,
    I have a problem with a dynamic table.
    When I execute the table by passing a query, it takes a long time to load the table data (about 30 seconds for every 100 rows).
    Please help me overcome this problem.
    Thanks in advance.

    Yes, this is an Oracle application...
    We can move it into another tablespace as well, but the concern is how to improve the performance of the ALTER TABLE MOVE command.
    Is there any specific parameter apart from NOLOGGING and parallel execution?
    If it is taking 8 hours, does anyone have experience with how much time NOLOGGING will save, and is there any risk in doing this in production?
    Regards
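    As a hedged illustration of the kind of commands being discussed (table, index and tablespace names are placeholders; note that a NOLOGGING move is not recoverable from the redo logs, so plan a backup afterwards):

      -- Illustrative only: move a large table with minimal redo and parallel execution
      ALTER SESSION ENABLE PARALLEL DDL;
      ALTER TABLE big_table MOVE TABLESPACE new_ts NOLOGGING PARALLEL 8;
      -- Indexes become UNUSABLE after the move and must be rebuilt
      ALTER INDEX big_table_pk REBUILD TABLESPACE new_ts NOLOGGING PARALLEL 8;
      -- Restore the logging and parallel attributes once the move is finished
      ALTER TABLE big_table LOGGING NOPARALLEL;
      ALTER INDEX big_table_pk LOGGING NOPARALLEL;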

  • BSEG table data load problem

    Hi friends,
    I have a problem where I need to load the BSEG table data, but when it has loaded only one million records it gets stuck and gives an error. There are more than 5 million records in the table. Is there any way to load the data to the PSA? I suspect it may be due to memory, but how do I increase the memory, or is there an enhancement for loading the data?
    Thanks
    Ahmed

    Hi Ahmed,
    Don't load all the data in a single go - split the load with selections. If it is a transaction data DataSource, copy the same InfoPackage, give a selection in the Data Selection field of each copy, and execute them in parallel, since fact tables never lock each other.
    If it is a master data DataSource, don't run the loads in parallel, since master data tables lock each other; just give the selections one by one and load the data.
    Hope this helps.
    Regards,
    Debjani

  • Oracle database table data - load it into Excel

    Hello All,
    Please, I need your help with this problem:
    I need to load Oracle database table data into Excel and save it in xls format.
    Example: select * from Slase and load the result into Excel.
    I would appreciate any sample code to help me do that. Please help me out. This is very urgent.
    Thanks alot and best regards,
    anbu

    >
    I need to load Oracle database table data into Excel and save it in xls format.
    Example: select * from Slase and load the result into Excel.
    I would appreciate any sample code to help me do that. Please help me out. This is very urgent.
    >
    Nothing in these forums is 'urgent'. If you have an urgent problem you should contact Oracle support or hire a consultant.
    You have proven over and over again that you are not a good steward of the forums. You continue to post questions that you say are 'urgent' but rarely take the time to mark your questions ANSWERED when they have been.
    Total Questions: 90 (78 unresolved)
    Are you willing to make a commitment to revisit your 78 unresolved questions and mark them ANSWERED if they have been?
    The easiest way to export Oracle data to Excel is to use SQL Developer. It is a free download, and this article by Jeff Smith shows how easy it is:
    http://www.thatjeffsmith.com/archive/2012/09/oracle-sql-developer-v3-2-1-now-available/
    >
    And One Last Thing
    Speaking of export, sometimes I want to send data to Excel. And sometimes I want to send multiple objects to Excel – to a single Excel file that is. In version 3.2.1 you can now do that. Let's export the bulk of the HR schema to Excel, with each table going to its own worksheet in the same workbook.
    >
    And you have previously been asked to read the FAQ at the top of the thread list. If you had done that, you would have seen that there is a FAQ entry with links to many ways, with code, to export data to Excel.
    5. How do I read or write an Excel file?
    SQL and PL/SQL FAQ
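    For completeness, a commonly used low-tech alternative is spooling a comma-separated file from SQL*Plus, which Excel opens directly. A rough sketch, with placeholder table and column names:

      -- SQL*Plus script: spool query results to a CSV file that Excel can open
      SET PAGESIZE 0 FEEDBACK OFF HEADING OFF TRIMSPOOL ON LINESIZE 1000
      SPOOL sales_data.csv
      SELECT col1 || ',' || col2 || ',' || col3 FROM your_table;
      SPOOL OFF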

  • Poor performance after altering tables to InnoDB

    I have an application using CF MX, IIS, and MySQL 5.0.37 running on Microsoft Windows Server 2003.
    When I originally built the application, access from login to start page and page to page was very good. But I started getting errors because tables were sometimes getting records added or deleted and sometimes not. I thought the "cftransaction" statements were protecting my transactions. Then I found out about MyISAM (the default) vs InnoDB.
    So, using MySQLAdmin, I altered the tables to InnoDB. Now the transactions work correctly on commits and rollbacks, but the performance of the application stinks. It now takes 20 seconds to log in.
    The first page involves a fairly involved select statement, but it hasn't changed at all. It just runs very slowly. Updates also run slowly.
    Is there something else I was supposed to do in addition to the "alter table" in this environment? The data tables used to be in /data/saf_data. Now the ibdata file and log files are in /data and only the ".frm" files are still in saf_data.
    I realize I'm asking this question in a CF forum. But people here are usually very knowledgeable and helpful, and I'm desperate. This is a CF application. Is there anything I need to do for a CF app to work well with MySQL InnoDB tables? Any configuration or location stuff to know about?
    Help, and Thanks!
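    (An aside, hedged: on MySQL 5.0 a frequent culprit after switching from MyISAM to InnoDB is that the InnoDB buffer pool is still at its small default and that query plans change. The statements below, with hypothetical table and column names, only show how to check; they are not a guaranteed fix.)

      -- Check how much memory InnoDB may use for caching (the 5.0 default is only 8 MB)
      SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
      -- Check whether the slow login query still uses an index under InnoDB
      -- (hypothetical table/column names; substitute the real login SELECT)
      EXPLAIN SELECT * FROM users WHERE username = 'jdoe';
      -- If the buffer pool is tiny, raise innodb_buffer_pool_size in my.ini and restart MySQL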

    The program was also ported in earlier versions; 1.5 years ago we used Forte 6.2 and the performance was OK.
    Possibly the program design was based on Windows features that are inappropriate for Unix. The principal design didn't change; the only thing is that we switched to the Boost libraries, where we use the thread, regex, filesystem and date-time libraries.
    Have you tried any other Unix-like system? Linux, AIX, HPUX, etc? If so, how does the performance compare to Solaris?
    Not at the moment, because the order is customer driven, but HP and Linux are also options.
    Also consider machine differences. For example, your old Ultra-80 system at 450MHz will not keep up with a modern x86 or x64 system at 3+ GHz. The clock speed could account for a factor of 6.
    That was my first thought, but as I wrote in an earlier post, the performance test case needs the same time on a 6x1GHz machine (a Sun Fire T1000).
    Also, how much real memory does the SPARC system have?
    4 GB! And during the test run the machine uses less than 30% of this memory.
    If the program is not multithreaded, the additional processors on the Ultra-80 won't help.
    But it is!
    If it is multithreaded, the default libthread or libpthread on Solaris 8 does not give the best performance. You can link with the alternative lwp-based thread library on Solaris 8 by adding the link-time option
    -R /usr/lib/lwp (for 32-bit applications)
    -R /usr/lib/lwp/64 (for 64-bit applications)
    The running application uses both the thread and the pthread library; can that be a problem? Is it right that the lwp path includes only the normal thread library?
    Is there a particular reason why you are using the obsolete Solaris 8 and the old Sun Studio 10?
    Because we have customers who do not upgrade. Can we develop on Solaris 10 with Sun Studio 11 and deploy on 5.8 without risk?
    regards
    Arno

  • Poor performance reading MBEWH table

    Hi,
    I'm getting serious performance problems when reading MBEWH table directly.
    I did the following tests:
      GET RUN TIME FIELD t1.
      SELECT mara~matnr
        FROM mara
        INNER JOIN mbewh ON mbewh~matnr = mara~matnr
        INTO TABLE gt_mbewh
        WHERE mbewh~lfgja = '2009'.
      GET RUN TIME FIELD t2.
      GET RUN TIME FIELD t3.
      SELECT mbewh~matnr
        FROM mbewh
        INTO TABLE gt_mbewh
        WHERE mbewh~lfgja = '2009'.
      GET RUN TIME FIELD t4.
    t2 = t2 - t1.
    t4 = t4 - t3.
    write: 'With join: ', t2.
    write /.
    write: 'Without join: ', t4.
    And as result I got:
    With join:      27.166
    Without join:  103970.297
    All MATNR in MBEWH are in MARA.
    MBEWH has 71.745 records and MARA has 705 records.
    I created an index for lfgja field in MBEWH.
    Why do I get better performance using the inner join?
    In the production client, MBEW has 68 million records, so any selection takes too much time.
    Thanks in advance,
    Frisoni

    Guilherme, Hermann, Siegfried,
    I have just seen this thread and read it from top to bottom, and I would say now is a good time to make a summary.
    This is what I got from Guilherme's comments:
    1) MBEWH has 71.745 records
    2) There are two hidden clients in the same server with 50 million records.
    3) Count Distinct mandt = 6
    4) In production client, MBEW has 68 million records
    First measurement
    With join               : 27.166
    Without join            :  103970.297
    Second measurement
    With join               : 96.217
    Without join            : 93.781            << now with hint
    The original question was to understand why using the JOIN made the query much faster.
    So the conclusions are:
    1) Execution times are now really much better (comparing only the case without the join, which is the one we are working on), and the original "mystery" is gone
    2) In this client, MANDT is actually much more selective than the optimizer thinks it is (and it's because of this uneven distribution, as Hermann mentioned, that forcing the index made such a difference)
    3) Bad news is that this solution was good because of the special case of your development system, but it will probably not help in the production system
    4) I suppose the index that Hermann suggested is the best possible thing to do (the table won't be read, assuming you really want only MATNR from MBEWH, and that it wasn't a simplification for illustration purposes); anyway, no one can really expect that getting all entries from MBEWH for a given year will be a fast thing...
    Rui Dantas
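    For reference, a sketch of the kind of secondary index discussed above (this is my reading of Hermann's suggestion; in an SAP system you would create it via transaction SE11 so the ABAP Dictionary stays consistent rather than issuing DDL directly, and the index name here is purely illustrative):

      -- Hypothetical secondary index allowing index-only access for
      -- "give me all MATNR from MBEWH for a given LFGJA"
      CREATE INDEX "MBEWH~Z01" ON MBEWH (MANDT, LFGJA, MATNR);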

  • Poor performance when changing the XML data source programmatically

    The XML file retrieved from the database is different each time, so I have to specify the XML path dynamically.
    I find that the DataBaseController.replaceConnection(conn1,conn2) process is very slow if the XML structure is more complex; in my case, the size of the XML is about 100K and there are more than 50 tables (hierarchy).
    I also tried using setDatasource(IXMLSource,'','') and setTableLocation; the result is the same, and this process takes almost 3 seconds or more.
    Any suggestions, or is there something wrong with this approach?

    The XML for the documents are generated by the product code using BC4J XML generation. No concurrent program is involved in it.
    As regards to including additional fields from Sourcing documents into the XML generated, you can use the BC4J extension fwk to extend the ViewObject that generates XML and modify the query to include any additional fields as needed.
    All the above assumes knowledge of Oracle Application Framework and the customization fwk. There is a Metalink Note (don't know the exact ID) which explains how to extend BC4J objects and deploy them.

  • Poor performance and out of date

    I'm sorry to have to say this, having been a big fan of RoboHelp in the past, but RoboHelp 8 is like something out of the dark ages. I last used it in 2003 and it doesn't seem to have been updated since then.
    These are the problems I've encountered so far:
    1. Compared with general authoring and editing in Dreamweaver it is slow and cumbersome.
    2. The CSS is very difficult to edit and still uses point sizes for fonts (no 'em' option appears on menus). The potential of CSS doesn't seem to be understood by the RoboHelp people.
    3. The glossary hotspot doesn't find all glossary entries, even though the text on the page is EXACTLY the same as the glossary entry. This makes the feature unusable, as I can't see any way of adding them by hand.
    4. There is no help as far as I can see on using CSS styles for Mini TOC
    5. The help is generally the type that states the obvious and describes what's on the screen but doesn't give in-depth information.
    I'm in a bit of a fix here as I recommended this product and it's really not up to the job.

    Dreamweaver creates HTML that is at once source and output. A HAT such as RoboHelp processes source code to generate and publish targeted output for multiple layouts, online and print. It might use points instead of ems, but that's a decision based on these business variables.
    If you double-click the mini-TOC placeholder, you're presented with a dialog for styling it.
    I've always used a separate Glossary topic, accessible from a toolbar button and from the TOC, instead of the default built-in utility, but that's just me.
    If you have any other questions on specific issues that you can't resolve, feel free to post here again.
    Good luck,
    Leon

  • Performance issues with Planning data load & Agg in 11.1.2.3.500

    We recently upgraded from 11.1.1.3 to 11.1.2.3. Post-upgrade we face performance issues with one of our Planning jobs (e.g. Job E). It takes 3x the time to complete in our new environment (11.1.2.3) compared to the old one (11.1.1.3). This job loads the actual data and then does the aggregation. The pattern we noticed is: if we run a restructure on the application and execute this job immediately, it completes in the same time as on 11.1.1.3. However, in current production (11.1.1.3) the jobs run in the sequence Job A -> Job B -> Job C -> Job D -> Job E and complete on time, but if we do the same test in 11.1.2.3 in the above sequence it takes 3x the time. We don't have a window to restructure the application before running Job E every time in Prod. The specs of the new environment are much higher than the old one.
    We have Essbase clustering (MS active/passive) in the new environment and the files are stored on a SAN drive. Could this be the cause? Has anyone faced performance issues in a clustered environment?

    Do you have exactly the same Essbase config settings and calculations performing the AGG? Remember, something very small like UPDATECALC ON/OFF can make a BIG difference in timing.

  • Question about table data loading at startup of .jspx page

    I'm trying to debug an intermittent user error - the user loads a .jspx page (JDev 10.1.3.1) that contains an af:table using a read-only view object with a SQL statement containing bind variables. The table uses a partial trigger linked to a command button that, when pressed, updates the data in a table by calling a service method on the application module. The pageDef file does not contain an invokeAction, so my assumption is that the table will not attempt to populate itself or execute the SQL in the view object until the user presses the command button?
    The error received is a JBO-271222 - it appears (though I'm not 100% sure) that when the page loads, the SQL in the view object attempts to execute without the user pressing the command button, which gives an error saying statement parameter 1 (a bind variable) is not set. I have not been able to replicate the issue, but a couple of users have experienced it. Is there any reason the table would attempt to execute the view object when the page is first loaded?
    Note one other twist is that the id for the af:table is also used as a partial trigger to another section of the page (so that when the table is populated and a row selected, the other section of the page is updated.) This should not be causing the view object to run at startup though.
    Thanks for any suggestions.

    Hi javaX,
    I'll take a shot at this one...
    Here's my guess: I think the answer may lie in your pageDef.xml file.
    1) You have no <invokeAction> in your file, so you're not running the <action> / <methodAction> to set the parameters.
    2) However, you do have an <iterator> for your viewObject. This <iterator> does in fact get executed during the startup of your application.
    My suggestion is to tweak the refresh= / refreshCondition= attributes for the iterator. This way you can control when the <iterator> gets executed. Also check the possible use of #{adfFacesContext.postback} in the refreshCondition. This may also help you out.
    The other suggestion is to set some arbitrary NDValue= for your <action>/<methodAction> that will cause the query to return nothing, and then <invokeAction> it before your iterator. The execution in pageDef.xml is sequential in the <executables> section.
    Hope this helps!
    Kenton

  • How autoextend affects the performance of a big data load

    I'm doing a bit of reorganization on a data warehouse, and I need to move almost 5 TB worth of tables and rebuild their indexes. I'm creating a tablespace for each month, using BIGFILE tablespaces, and assigning 600GB to each, which is the approximate size of the tables for each month. The process of just assigning the space takes a lot of time, so I decided to try a different approach: change the datafile to AUTOEXTEND ON NEXT 512M and then run the ALTER TABLE MOVE command to move the tables. The database is Oracle 11g Release 2, and it uses ASM. I was wondering which would be the better approach between these two:
    1. Create the tablespace, with AUTOEXTEND OFF, and assign 600GB to it, and then run the ALTER TABLE MOVE command. The space would be enough for all the tables.
    2. Create the tablespace, with AUTOEXTEND ON, and without assigning more than 1GB, run the ALTER TABLE MOVE command. The diskgroup has enough space for the expected size of the tablespace.
    With the first approach my database takes approximately 10 minutes to move each partition (there's one for each day of the month). Would this number be impacted in a big way if the database has to do an AUTOEXTEND every 512 MB?

    If you measure the performance as the time required to allocate the initial 600 GB data file plus the time to do the load, and compare that to allocating a small file and doing the load while letting the data file autoextend, it's unlikely that you'll see a noticeable difference. You'll get far more variation just in moving 600 GB around than you'll lose waiting on the data file to extend. If there is a difference, allocating the entire file up front will be slightly more efficient.
    More likely, however, is that you wouldn't count the time required to allocate the initial 600 GB data file since that is something that can be done far in advance. If you don't count that time, then allocating the entire file up front will be much more efficient.
    If you may need less than 600 GB, on the other hand, allocating the entire file at once may waste some space. If that is a concern, it may make sense to compromise and allocate a 500 GB file initially (assuming that is a reasonable lower bound on the size you'll actually need) and let the file extend in 1 GB chunks. That won't be the most efficient approach and you may waste up to a GB of space but that may be a reasonable compromise.
    Justin
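    A minimal sketch of the two variants being compared, assuming an ASM diskgroup named +DATA and purely illustrative tablespace, table and partition names:

      -- Approach 1: pre-allocate the full 600 GB up front
      CREATE BIGFILE TABLESPACE dwh_2012_01
        DATAFILE '+DATA' SIZE 600G AUTOEXTEND OFF;

      -- Approach 2: start small and let the datafile grow in 512 MB steps
      CREATE BIGFILE TABLESPACE dwh_2012_01
        DATAFILE '+DATA' SIZE 1G AUTOEXTEND ON NEXT 512M MAXSIZE UNLIMITED;

      -- Either way, the partitions are then moved into the new tablespace
      ALTER TABLE sales MOVE PARTITION sales_20120101 TABLESPACE dwh_2012_01;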

  • Struts 2 table data loading problem

    Hi
    I need to display contents from the DB in table format in a JSP using Struts 2.
    I am loading the data into a list to display it.
    DB data:
    No | Name | Country | Phone No | User ID
    1 aaa ind 9999 01
    2 aaa ind 2323 01
    3 bbb aus 7777 02
    4 ccc zim 2222 03
    5 ccc zim 5656 03
    currently i display the data in the below format
    aaa ind 9999
    aaa ind 2323
    bbb aus 7777
    ccc zim 2222
    ccc zim 5656
    My requirement has changed, so
    I need to get my output in the below format in a Struts 2 table in the JSP:
    aaa ind 9999
    2323
    bbb aus 7777
    ccc zim 2222
    5656
    From the list, I created a nested table: the outer table contains the name and country, and the inner table contains the phone numbers.
    But I am getting this result:
    aaa ind 9999
    2323
    7777
    2222
    5656
    bbb aus 9999
    2323
    7777
    2222
    5656
    ccc zim 9999
    2323
    7777
    2222
    5656
    Can you help me out?

    Oops, you bumped into the wrong forum, friend.
    However, if you are using HTML tables then it's very simple: give the table a border of 1, then you can see where it is going wrong and align it accordingly.

  • Getting error RSTSODSPART during flat file data load to datasource

    Hi SAP Gurus,
    In BI, I don't know why I am getting this error while loading data from a flat file to the DataSource using an InfoPackage. I always delete the data from the PSA before running it, and I still get this error. Could you please help me out with this issue? Thanks.
    Error when inserting in PSA table RSTSODSPART
    Message no. RSAODS206
    Diagnosis
    An error occurred with the insert into PSA table RSTSODSPART for InfoSource .
    System Response
    Your data was not updated in the PSA table.
    It is possible that data already exists for this request  and data packet .

    Hello,
    Do you have an MS SQL database? If yes, then the patch in note 1340371 should resolve the problem. If you have a different DB, please let me know your DB release, your BW release, and the support package level.
    Best Regards,
    Des
