Performing an HRMS Load

Hi friends,
I'm new to the Informatica/OBIA/DAC world and am still learning it. I'm about to perform an ETL load for HR Analytics.
I have an Oracle EBS R12 source instance on an Oracle 11g database, and my Oracle target database is 10.2.0.1.0. I need to perform an HRMS load from source to target using Informatica. As a first step, how do I connect to the R12 source instance and register its HRMS data in DAC so that the ETL can load my target database?
Hope that is clear.
Thanks in Advance.
Regards,
Saro

Dear Svee,
Thanks for the reply again. Yes, as you suggested, I checked the custom properties of my Integration Service.
They are as follows:
Name: Value
SiebelUnicodeDB: apps@test biapps@obia
overrideMpltVarWithMapVar: yes
ServerPort: 4006
SiebleUnicodeDBFlag: No
So overrideMpltVarWithMapVar is already set to 'Yes'.
For one of my failed workflows, "SDE_ORA_Flx_EBSValidationTableDataTmpLoad", I right-clicked it in the Workflow Monitor and selected Get Workflow Log, which gave me the following details:
2012-07-23 10:19:01 : INFO : (1164 | 2556) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_36435 : Starting execution of workflow [SDE_ORA_Flx_EBSSegDataTmpLoad] in folder [SDE_ORA11510_Adaptor] last saved by user [Administrator].
2012-07-23 10:19:01 : INFO : (1164 | 2556) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_44206 : Workflow SDE_ORA_Flx_EBSSegDataTmpLoad started with run id [463], run instance name [], run type [Concurrent Run Disabled].
2012-07-23 10:19:01 : INFO : (1164 | 2556) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_44195 : Workflow [SDE_ORA_Flx_EBSSegDataTmpLoad] service level [SLPriority:5,SLDispatchWaitTime:1800].
2012-07-23 10:19:01 : INFO : (1164 | 2556) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_44253 : Workflow started. Clients will be notified
2012-07-23 10:19:01 : INFO : (1164 | 2556) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_36330 : Start task instance [Start]: Execution started.
2012-07-23 10:19:01 : INFO : (1164 | 2556) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_36318 : Start task instance [Start]: Execution succeeded.
2012-07-23 10:19:01 : INFO : (1164 | 2556) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_36505 : Link [Start --> SDE_ORA_Flx_EBSSegDataTmpLoad]: empty expression string, evaluated to TRUE.
2012-07-23 10:19:01 : INFO : (1164 | 2556) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_36388 : Session task instance [SDE_ORA_Flx_EBSSegDataTmpLoad] is waiting to be started.
2012-07-23 10:19:01 : INFO : (1164 | 2556) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_36682 : Session task instance [SDE_ORA_Flx_EBSSegDataTmpLoad]: started a process with pid [4732] on node [node01_BIAPPS].
2012-07-23 10:19:01 : INFO : (1164 | 2556) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_36330 : Session task instance [SDE_ORA_Flx_EBSSegDataTmpLoad]: Execution started.
2012-07-23 10:19:02 : ERROR : (1164 | 1380) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : VAR_27086 : Cannot find specified parameter file [D:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\SDE_ORA11510_Adaptor.SDE_ORA_Flx_EBSSegDataTmpLoad.txt] for [session [SDE_ORA_Flx_EBSSegDataTmpLoad.SDE_ORA_Flx_EBSSegDataTmpLoad]].
2012-07-23 10:19:02 : INFO : (1164 | 1552) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_36488 : Session task instance [SDE_ORA_Flx_EBSSegDataTmpLoad] : [TM_6793 Fetching initialization properties from the Integration Service. : (Mon Jul 23 10:19:01 2012)]
2012-07-23 10:19:02 : INFO : (1164 | 1552) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_36488 : Session task instance [SDE_ORA_Flx_EBSSegDataTmpLoad] : [DISP_20305 The [Preparer] DTM with process id [4732] is running on node [node01_BIAPPS].
: (Mon Jul 23 10:19:01 2012)]
2012-07-23 10:19:02 : INFO : (1164 | 1552) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_36488 : Session task instance [SDE_ORA_Flx_EBSSegDataTmpLoad] : [PETL_24036 Beginning the prepare phase for the session.]
2012-07-23 10:19:02 : INFO : (1164 | 1552) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_36488 : Session task instance [SDE_ORA_Flx_EBSSegDataTmpLoad] : [TM_6721 Started [Connect to Repository].]
2012-07-23 10:19:02 : INFO : (1164 | 1552) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_36488 : Session task instance [SDE_ORA_Flx_EBSSegDataTmpLoad] : [TM_6722 Finished [Connect to Repository].  It took [0.21875] seconds.]
2012-07-23 10:19:02 : INFO : (1164 | 1552) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_36488 : Session task instance [SDE_ORA_Flx_EBSSegDataTmpLoad] : [TM_6794 Connected to repository [Oracle_BI_DW_Base] in domain [Domain_BIAPPS] as user [Administrator] in security domain [Native].]
2012-07-23 10:19:02 : INFO : (1164 | 1552) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_36488 : Session task instance [SDE_ORA_Flx_EBSSegDataTmpLoad] : [TM_6721 Started [Fetch Session from Repository].]
2012-07-23 10:19:02 : INFO : (1164 | 1552) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_36488 : Session task instance [SDE_ORA_Flx_EBSSegDataTmpLoad] : [TM_6722 Finished [Fetch Session from Repository].  It took [0.140625] seconds.]
2012-07-23 10:19:02 : INFO : (1164 | 1552) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_36488 : Session task instance [SDE_ORA_Flx_EBSSegDataTmpLoad] : [TM_6793 Fetching initialization properties from the Integration Service. : (Mon Jul 23 10:19:02 2012)]
2012-07-23 10:19:02 : ERROR : (1164 | 1552) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_36488 : Session task instance [SDE_ORA_Flx_EBSSegDataTmpLoad] : [CMN_1761 Timestamp Event: [Mon Jul 23 10:19:02 2012]]
2012-07-23 10:19:02 : ERROR : (1164 | 1552) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_36488 : Session task instance [SDE_ORA_Flx_EBSSegDataTmpLoad] : [PETL_24049 Failed to get the initialization properties from the master service process for the prepare phase [Session task instance [SDE_ORA_Flx_EBSSegDataTmpLoad]: Unable to read variable definition from parameter file [D:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\SDE_ORA11510_Adaptor.SDE_ORA_Flx_EBSSegDataTmpLoad.txt].] with error code [32694552].]
2012-07-23 10:19:04 : ERROR : (1164 | 364) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_36320 : Session task instance [SDE_ORA_Flx_EBSSegDataTmpLoad]: Execution failed.
2012-07-23 10:19:04 : WARNING : (1164 | 364) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_36331 : Session task instance [SDE_ORA_Flx_EBSSegDataTmpLoad] failed and its "fail parent if this task fails" setting is turned on.  So, Workflow [SDE_ORA_Flx_EBSSegDataTmpLoad] will be failed.
2012-07-23 10:19:04 : ERROR : (1164 | 364) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_36320 : Workflow [SDE_ORA_Flx_EBSSegDataTmpLoad]: Execution failed.
Is this log pointing to the actual reason the task failed? If so, what, according to the log above, is the reason for the workflow failure?
Kindly help me with this, Svee.
Thanks for your help.
Regards,
Saro
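
For what it's worth, the VAR_27086 / PETL_24049 errors in the log above say the Integration Service could not read the session parameter file that DAC normally generates under SrcFiles before the workflow starts. A quick, illustrative way to confirm this (the path below is simply copied from the log) is a small Python check:

    import os

    # Path reported in the workflow log above
    param_file = (r"D:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles"
                  r"\SDE_ORA11510_Adaptor.SDE_ORA_Flx_EBSSegDataTmpLoad.txt")

    if os.path.isfile(param_file):
        print("Parameter file found:", param_file)
    else:
        print("Parameter file is missing:", param_file)
        print("Check the parameter file location configured in DAC and re-run the task so it is regenerated.")

If the file is missing, the workflow will keep failing in the prepare phase no matter how often it is restarted.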

Similar Messages

  • Performance problem in loading the master data attributes 0Equipment_attr

    Hi Experts,
    We have a performance problem loading the master data attributes 0Equipment_attr. It runs as a pseudo delta (full update), and the same InfoPackage runs with different selections. The problem we are facing is that the load runs 2 to 4 hours in the US morning, but in the US night it runs for 12-22 hours before finishing successfully, even though it pulls fewer records (which are otherwise fine).
    When I checked the job log on the R/3 side (SM37), the job runs late there too. It shows the first and second IDocs arriving quickly, while the third and fourth IDocs reach BW only after a 5-7 hour gap, are saved into the PSA, and then go on to the InfoObject.
    We have user exits for the DataSource and ABAP routines, but they run quickly and the code is not very complex.
    Can you please explain and suggest steps on the R/3 side and the BW side? How can I fix this performance issue?
    Thanks,
    dp

    Hi,
    check this link for data load performance. Under "Extraction Performance" you will find many useful hints.
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3a699d90-0201-0010-bc99-d5c0e3a2c87b
    Regards
    Andreas

  • Can a router perform static route load balancing?

    Dear All
    I am not sure about something and need your ideas and help. The question is whether a router can perform static route load balancing. I tested it, and the result showed no. If you have any experience with this, could you share it with me? I have also posted my results here. Thank you.

    Normally they can, but you generally need different next hops. How did you "test" it?

  • QUERY PERFORMANCE AND DATA LOADING PERFORMANCE ISSUES

    What are the query performance issues we need to take care of? Please explain and let me know the transaction codes. This is urgent.
    What are the data loading performance issues we need to take care of? Please explain and let me know the transaction codes.
    Will reward full points.
    Regards,
    Guru

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows this option.
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8)Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from a data target (InfoCube and ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9)Building secondary indexes on the tables for the selection fields optimizes these tables for reading, reducing extraction time. If your selection fields are not key fields on the table, primary indexes are not much of a help when accessing data. In this case it is better to create secondary indexes with the selection fields on the associated table, using the ABAP Dictionary, to improve selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
    Hope it Helps
    Chetan
    @CP..

  • Performance issue of loading a report

    Post Author: satish_nair31
    CA Forum: General
    Hi,
    I am facing a performance-related problem in some of our reports, where we have to fetch some 1-2 lakh (100,000-200,000) records for display. Initially we passed a dataset as the data source of the report; it takes a very long time to load using this technique. To improve performance, we then passed only the filter condition through the report viewer object. This improved things a lot, but the reports still take some time to load, which is not acceptable. Is there any way to improve the performance?

    Post Author: synapsevampire
    CA Forum: General
    How could you possibly know if you're in the same situation? The original poster didn't include the software or version being used, whether it's the RDC, etc. That is very likely the reason they received no responses.
    They also referenced two different methods for retrieving the data; which are you using?
    The trick is to make sure that you are passing all of the WHERE conditions in the SQL to the database.
    You can probably check this using a database tool, but again, there is nothing technical in your post about the database either, so you shouldn't expect quality help.
    -k

  • Essbase Studio Performance Issue : Data load into BSO cube

    Hello,
    Having successfully built my outline by member load through Essbase Studio, I tried to load data into my application, again with Studio. However, I was never able to complete the data load because it takes forever. Each time I tried to work with Studio in streaming mode (hoping to increase the query speed), the load was terminated with the following error: Socket read timed out.
    In the Studio properties file I typed in oracle.jdbc.ReadTimeout=1000000000, but the result has not changed. Even if it did work, I am not sure streaming mode is going to provide a much faster alternative to non-streaming mode. What I'd like to know is which Essbase settings I can change (either on the Essbase or the Studio server) in order to speed up my data load. I am loading into a Block Storage database with 3 dense, 8 sparse and 2 attribute dimensions. I filtered some dimensions and tried to load data to see exactly how long it takes to create a certain number of blocks. With the ODBC setting in Essbase Studio, it took 2.15 hours to load data into my application, where only 153 blocks were created with a block size of 24B. Assuming that in my real application the number of blocks created will be at least 1000 times more than this, I need to make some changes to the settings. I am transferring the data from an Oracle database, with 5 tables joined to a fact table (view) from the same data source. All the cache settings in Essbase are at their defaults. Would changing cache settings, buffer size or multiple threads help increase the performance? Or what would you suggest I do?
    Thank you very much.

    Hello user13695196 ,
    (sorry I no longer remember my system number here)
    Before any optimisation attempts in the Essbase (and Studio) environment, you should definitely make sure that your source data query performs well on the Oracle DB.
    I would recommend:
    1. Create a view in your DB source schema from your SQL statement (the one behind your data load rule).
    2. Query this view with any GUI (SQL Developer, TOAD, etc.) to fetch all rows and measure the time it takes to complete. Also note the number of returned rows for your information and for future comparison of results (see the sketch after this post).
    If your query runs longer than you think is acceptable, then:
    a) check DB statistics,
    b) check and/or consider creating indexes,
    c) if you are unsure, kindly ask your DBA for help. Usually they can help you very quickly.
    (Don't be shy - a DBA is a human being like you and me :-) )
    Only when your SQL runs fast at the database (fast enough for you, or your DBA says it is the best you can achieve) should you move your effort over to Essbase.
    One additional hint:
    We have often had problems when using views for data loads (not only performance but also other strange behaviour). That is the reason I prefer to build directly on (persistent) tables.
    Just keep in mind: if nothing else helps, create a table from your view and then query your data from that table for your Essbase data load. Normally, however, this should be your last option.
    Best Regards
    (also to you Torben :-) )
    Andre
    Edited by: andreml on Mar 17, 2012 4:31 AM
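
    As an illustration of step 2, here is a minimal Python sketch (using the cx_Oracle driver; the connection details and view name are placeholders, not taken from this thread) that fetches every row from the load view and reports the elapsed time and row count:

      import time
      import cx_Oracle

      # Placeholder credentials and view name -- replace with your own
      conn = cx_Oracle.connect("src_user", "src_password", "dbhost:1521/ORCL")
      cur = conn.cursor()
      cur.arraysize = 10000                           # fetch in large batches, like a bulk load would

      start = time.time()
      cur.execute("SELECT * FROM my_dataload_view")   # the view built from the load rule's SQL
      row_count = 0
      for _ in cur:
          row_count += 1
      elapsed = time.time() - start

      print("Fetched %d rows in %.1f seconds" % (row_count, elapsed))
      cur.close()
      conn.close()

    If this alone already takes hours, the tuning effort belongs on the database side, exactly as Andre says.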

  • Enhancing performance of BLOB loading

    Hi,
    I am loading one row of BLOB data using the DBMS_LOB package. Loading 5 GB of BLOB data takes 8:53 minutes. Due to a restriction on the target database, SQL*Loader can't be used.
    Is there any other option/method to speed up the BLOB load described above? Any option except SQL*Loader is welcome. Thank you. :)

    user637568 wrote:
    Is there any other option/method to speed up the BLOB load described above? Any option except SQL*Loader is welcome.
    It does not matter what you use; there are a bunch of static layers here, and these mostly determine the performance.
    On the client side - the data of the LOB needs to be read from where? Disk? Mapped drive? The speed the client can read this data is largely dependent on how fast that storage can deliver the data to it.
    The client needs to ship that data across to the Oracle server process. What is the network latency between client and server? How many routes and hops does it take for a client packet to reach the server?
    The client's Oracle driver packages the LOB payload into TCP packets. How effective is this? Does it stuff each TCP packet as full as possible? Ideally, for network performance, one wants large packets carrying the LOB data to the server - and not a gazillion small packets each with only a couple of bytes of LOB data.
    On the server side, Oracle needs to write that LOB data stream it receives to disk and commit it. Just how effective is I/O on the storage used by Oracle? How many other processes are hitting that same I/O layer?
    You need to look at all these layers and ensure that each one of these is configured and used optimally. The elapsed time of the end-to-end process will be determined by the slowest layer. In such a case, one can consider using parallel processing (assuming that the layers have the capacity).
    For example, the client reads the source data, sees it is 8GB in size and decides to use 16 processes to each read and transfer 512MB concurrently to the server - where the PL/SQL code on the server side has the intelligence to re-assemble these 16 chunks (in 16 different client sessions) into a single 8GB chunk for final storage.
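
    As a rough sketch of the chunked, client-side streaming idea described above, using Python and cx_Oracle rather than the original poster's DBMS_LOB call (the table, column and file names are invented for illustration):

      import cx_Oracle

      CHUNK = 32 * 1024 * 1024   # 32 MB per write; tune to your network and storage

      conn = cx_Oracle.connect("user", "password", "dbhost:1521/ORCL")
      cur = conn.cursor()

      # Hypothetical table: CREATE TABLE blob_store (id NUMBER PRIMARY KEY, data BLOB)
      cur.execute("INSERT INTO blob_store (id, data) VALUES (:1, EMPTY_BLOB())", [1])
      cur.execute("SELECT data FROM blob_store WHERE id = :1 FOR UPDATE", [1])
      lob, = cur.fetchone()

      offset = 1                 # LOB offsets are 1-based
      with open("bigfile.bin", "rb") as f:
          while True:
              piece = f.read(CHUNK)
              if not piece:
                  break
              lob.write(piece, offset)
              offset += len(piece)

      conn.commit()
      cur.close()
      conn.close()

    To parallelise as suggested above, several such sessions could each write a different byte range of the source file into separate staging rows, with a final server-side DBMS_LOB.APPEND pass stitching them together.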

  • (php) Delete being performed on page load instead of on submit-button click

    I'm building a Delete Record page in PHP using Dreamweaver's server behaviors, following the instructions in the DW Help.
    It's working - except the delete seems to happen as soon as delete-carpet.php loads, without the Submit button on that page being clicked.
    I have a page (choose.php) which allows the user to locate the record he wants to delete. Clicking on that link is supposed to take him to the Delete page (delete-carpet.php), which I built following the directions in the DW Help.
    I can see that the URL parameter is being passed, and in fact, the delete is successful. BUT, the Delete page itself, with the hidden form field and server behavior, doesn't even appear - only the "success" page that it re-directs to.
    In other words, when I click the "Delete" link, which looks like this:
    <a href="delete-carpet.php?carpet_id=<?php echo $row_GetCarpets['carpet_id']; ?>
    I thought that I should be getting a URL like this:
    delete-carpet.php?carpet_id=9
    But instead, I get this:
    success.php?carpet_id=9
    It seems to be just performing the delete without needing the user to click the Submit button.
    I've read through the directions and re-created this so many times that either I'm still missing something, or else the instructions are missing something. Any help is much appreciated.
    Patty Ayers | www.WebDevBiz.com
    Free Articles on the Business of Web Development
    Web Design Contract, Estimate Request Form, Estimate Worksheet

    In other words, when I click the "Delete" link, which looks like this:
    <a href="delete-carpet.php?carpet_id=<?php echo $row_GetCarpets['carpet_id']; ?>
    I think your delete link is okay; maybe there are some mistakes on the delete-carpet.php page. On that page, choose Delete Record (Insert > Data Objects > Delete Record); a popup window will appear, and you choose the data as below.
    First check if variable is defined: Primary key value
    Connection: your database
    Table: the carpet table
    Primary key column: carpet_id
    Primary key value: URL parameter > carpet_id
    After deleting, go to: here you can choose the same page (choose.php) or any other page except delete-carpet.php.

  • Performance parameters - page load - adf pages

    I am developing a WebCenter Portal application. Most of its pages display ADF tables whose data comes from web services.
    The business has not given any numbers for system performance, and I need to put numbers in the requirements catalogue so the requirements can be measured later.
    We are in the development phase now; the services are not yet ready.
    I was wondering how I can come up with numbers like these:
    a 'simple' page should load in 2 sec?
    a 'medium' page should load in 4 sec?
    a 'complex' page should load in 6 sec?
    How is this determined?
    help appreciated.
    thanks.

    Hi,
    You can use a utility called HttpWatch (http://www.httpwatch.com/) to measure page performance. You can also see which files are cached and which are not, and so on.
    Based on that, you can tweak your pages to meet the baselines.
    Hope it helps,
    Zeeshan
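
    If a browser-side tool such as HttpWatch is not to hand, a very rough first measurement can also be scripted. The Python sketch below (the URL is a placeholder) times a single page fetch with the requests library and prints the caching headers; it only covers the initial HTML response, not the full browser rendering that HttpWatch measures:

      import time
      import requests

      url = "http://portal.example.com/faces/home"   # placeholder page URL

      start = time.time()
      resp = requests.get(url)
      elapsed = time.time() - start

      print("HTTP %s, %d bytes in %.2f seconds" % (resp.status_code, len(resp.content), elapsed))
      for header in ("Cache-Control", "Expires", "ETag", "Last-Modified"):
          print("%s: %s" % (header, resp.headers.get(header, "-")))

    Numbers gathered this way can then be compared against the 2/4/6-second targets proposed above.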

  • RTV-20003: Warning: Cannot perform Partition Exchange Loading

    Hi,
    We have a unique situation in which the data has to be loaded 4 times each month into the fact table. The fact table is partitioned by month. We have configured the mappings so that DIRECT is false and the REPLACE DATA option is also false (so that we don't lose existing data, if any). When the data is loaded into the fact table for the first time, PEL is performed, but when data is loaded for the 2nd, 3rd and 4th time into the current partition, OWB issues a warning (RTV-20003). We understand that in these cases the data is still loaded, but without using PEL. Is there any way to force OWB to perform PEL all 4 times without losing the existing data in the current partition?
    Thanks, Yashu

    Hi, you can't use PEL to add data to an existing partition: by definition PEL swaps the rows in a regular, non-partitioned table with the rows in 1 partition of a partitioned table. Or better said, it swaps the table data segment with the partition data segment, and possibly the corresponding index segments if the indexes match and the part.table indexes are local. This means you are not really moving any rows so you can't add any to the partition using PEL.
    As a safety net, OWB provides "replace data = false" by default so you don't trash your data.
    If the 4 loads per month are 1 per week, you can consider creating e.g. 4 weekly partitions instead of monthly partitions. OWB PEL can't handle weeks: if you follow this route you should disable OWB PEL and exchange partitions 'by hand' inside a stored procedure called in the postmapping. This can be worthwhile if you have really big volumes or if you want to feed your fact table very fast, staging new rows and then swapping them in almost instantaneously. You should evaluate your index rebuild speed requirements too.
    Hope this helps, Antonio
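
    For reference, the manual exchange Antonio mentions comes down to a single DDL statement fired from the post-mapping procedure. A sketch of the idea from Python/cx_Oracle, with made-up table and partition names (INCLUDING INDEXES is only valid when matching local indexes exist on both sides):

      import cx_Oracle

      conn = cx_Oracle.connect("dw_user", "dw_password", "dbhost:1521/DWH")
      cur = conn.cursor()

      # Hypothetical weekly partition P_2012_W30 of SALES_FACT swapped with staging table SALES_FACT_STAGE
      cur.execute("""
          ALTER TABLE sales_fact
          EXCHANGE PARTITION p_2012_w30 WITH TABLE sales_fact_stage
          INCLUDING INDEXES WITHOUT VALIDATION
      """)

      conn.close()

    In OWB itself the equivalent would live in a stored procedure called from the post-mapping process, as described above.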

  • Performance of ETL loads on Exadata

    Oracle prominently advertises the improvements in query performance (10-100x), but does anyone know whether the performance of data loads into the DW (ETL) will also improve?

    Brad_Peek wrote:
    In our case there are many Informatica sessions where the majority of the time is spent inside the database. Fortunately, Informatica sessions produce a summary at the bottom of each session log that breaks down where the time was spent.
    We are interested to find out how much improvement Exadata will provide from the following types of Informatica workloads:
    1) Batch inserts into a very large target table.
    -- We have found that inserts into large tables (e.g. 700 million rows plus) with high-cardinality indexes can be quite slow.
    -- Slowest when the ix is either non-partitioned or globally partitioned.
    -- Hoping that flash cache will improve the random IO associated with ix maintenance.
    -- In this case, Informatica just happens to be the program issuing the inserts. We have the same issue with batch inserts from any program.
    -- Note that Informatica can do direct-mode inserts, but even for normal inserts it does "array inserts". Just a bit of trivia.
    2) Batch updates to a large table by primary key where the updated key values are widely dispersed over the target table.
    -- Again, this leads to a large amount of small-block physical IO.
    -- We see a large improvement in elapsed time when we can order the updates to match the order of the rows in the table, but that isn't always possible.
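    To make the "array insert/update" point concrete, a batch update by primary key with cx_Oracle looks roughly like the sketch below (table and column names are invented). Sorting the batch by key before executemany is one simple way to approximate the "order the updates to match the order of the rows" idea:

      import cx_Oracle

      conn = cx_Oracle.connect("dw_user", "dw_password", "dbhost:1521/DWH")
      cur = conn.cursor()

      # (new_amount, primary_key) pairs as an ETL tool would stage them
      updates = [(120.50, 1001), (89.99, 1003), (45.00, 1002)]

      # Ordering by the key keeps index access roughly sequential instead of
      # scattering small random reads across a very large target table.
      updates.sort(key=lambda row: row[1])

      cur.executemany("UPDATE big_fact_table SET amount = :1 WHERE fact_id = :2", updates)
      conn.commit()
      conn.close()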

  • Native performance library not loading in weblogic 9.1. help

    I'm trying to review an application, and this is my first exposure to WebLogic 9.1. I've noticed in the logs that the native library failed to load, but even after adding the right path, I keep seeing the same error message.
    This surely affects the performance of the application running in the environment I'm reviewing. Can someone help me locate and resolve this problem?
    ####<Jun 27, 2007 3:40:13 PM EDT> <Info> <Server> <lydmtltst06> <AdminServer> <[STANDBY] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1182973213868> <BEA-002609> <Channel Service initialized.>
    ####<Jun 27, 2007 3:40:13 PM EDT> <Error> <Socket> <lydmtltst06> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1182973213916> <BEA-000438> <Unable to load performance pack. Using Java I/O instead. Please ensure that a native performance library is in: '/opt/java/1.5.0/jre/lib/sparc/server:/opt/java/1.5.0/jre/lib/sparc:/opt/java/1.5.0/jre/../lib/sparc::/usr/dt/lib:/usr/lib:/usr/ccs/lib:/usr/openwin/lib:/usr/ucb/lib:/usr/bin:/usr/ucb:/etc:/opt/bea/weblogic91/domains/lineage/server/native/solaris/sparc:/opt/bea/weblogic91/domains/lineage/server/native/solaris/sparc/oci920_8:/usr/lib'
    >
    ####<Jun 27, 2007 3:40:13 PM EDT> <Info> <Socket> <lydmtltst06> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1182973213932> <BEA-000447> <Native IO Disabled. Using Java IO.>
    ####<Jun 27, 2007 3:40:14 PM EDT> <Info> <IIOP> <lydmtltst06> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1182973214947> <BEA-002014> <IIOP subsystem enabled.>
    ####<Jun 27, 2007 3:40:22 PM EDT> <Debug> <Deployment> <lydmtltst06> <AdminServer> <[STANDBY] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1182973222439> <000000> <Add DeploymentEventListener weblogic.security.service.DeploymentListener@6cc2a4>
    ####<Jun 27, 2007 3:40:22 PM EDT> <Debug> <Deployment> <lydmtltst06> <AdminServer> <[STANDBY] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1182973222483> <000000> <Add VetoableDeploymentListener: weblogic.security.service.DeploymentListener@6cc2a4>
    The hardware used is a Sun Fire machine running Solaris 9 (32-bit).

    Stating the obvious, but it is in the mbeantypes folder, isn't it?
    Did you let the upgrade wizard automatically upgrade the security provider? I used the domain upgrade wizard, which converted my security provider; once I'd done that, I just took the new file (not sure what it changed) and have been using it without problems.
    Pete

  • GC performance and Class Loading/Unloading

    We have EP 6.0, SP11 on Solaris with JDK 1.4.8_02. We are running Web Dynpro version of MSS/ESS and Adobe Document Services. This is a Java stack only Web AS.
    We are experiencing very uneven performance on the Portal. Usually, when the Portal grinds to a halt, the server log shows GC entries or Class unloading entries for the entire time the Portal stops working.
    I am thinking about setting the initial and maximum heap sizes to the same value to try to eliminate sudden GC interruptions. Also, what parameter can I set so that as many classes as possible are loaded at startup and stay in memory for as long as possible?
    Thanks,
    Rob Bartlett

    Hi Robert
    Also, if the host running the WebAS is a multi processor machine, then setting the flags
    -XX:+UseConcMarkSweepGC and
    -XX:+UseParNewGC
    will help reduce the pause times during collections of the old generation and the young generation respectively, as the GC will run using multiple threads.
    I can also suggest that you check whether the GC performs minor or major collections by enabling the flags
    -verbose:gc   
    -XX:+PrintGCTimeStamps   
    -XX:+PrintGCDetails. Based on this, try to tune the young or old generation.
    Regards
    Madhu

  • Performance Tuning Data Load for ASO cube

    Hi,
    Can anyone help with how to fine-tune a data load on an ASO cube?
    We have an ASO cube that loads around 110 million records from a total of 20 data files.
    18 of the data files have 4 million records each, and the last two have around 18 million records each.
    On average, loading 4 million records takes 130 seconds.
    Each data file has 157 data columns representing the period dimension.
    With a BSO cube, sorting the data file normally helps, but with ASO it does not seem to have any impact. Any suggestions on how to improve the data load performance for an ASO cube?
    Thanks,
    Lian

    Yes TimG it sure looks identical - except for the last BSO reference.
    Well nevermind as long as those that count remember where the words come from.
    To the Original Poster and to 960127 (come on create a profile already will you?):
    The sort order WILL matter IF you are using a compression dimension. In this case the compression dimension acts just like a BSO Dense dimension. If you load part of it in one record then when the next record comes along it has to be added to the already existing part. The ASO "load buffer" is really a file named <dbname.dat> that is built in your temp tablespace.
    The most recent records that fit in the ASO cache are retained there, so if a record is still in the cache it does not have to be reread from the disk drive. So you could (instead of sorting) create an ASO cache as large as your final .dat file; then the record would still be available without rereading it from disk.
    BUT WAIT BEFORE YOU GO RAISING YOUR ASO CACHE. All operating systems use memory-mapped IO, so even if a page is not in the ASO cache it will likely still be close at hand in "Standby" memory (the dark blue memory as seen in Resource Monitor); this continues until the system runs out of "Free" memory (light blue in Resource Monitor).
    So in conclusion if your system still has Free memory there is no need (in a data load) to increase your ASO cache. And if you are out of Free memory then all you will do is slow down the other applications running on your system by increasing ASO Cache during a data load - so don't do it.
    Finally, if you have enough memory so that the entire data file fits in StandBY + Free memory then don't bother to sort it first. But if you do not have enough then sort it.
    Of course you have 20 data files so I hope that you do not have compression members spread out amongst these files!!!
    Finally, you did not say if you were using parallel load threads. If you need to have 20 files, read up on parallel load buffers and parallel load scripts; that will make it faster.
    But if you do not really need 20 files and just broke them up to load in parallel, then create one single file and raise your DLTHREADSPREPARE and DLTHREADSWRITE settings. These will help even if you do go parallel, and really help if you don't but still keep 20 separate files.

  • Performance issue with loading Proclarity Main Page..

    Hi All,
    I have Proclarity 6.3 installed on a Windows 2008 R2 OS. The Proclarity reports were working well until last week. For the last few days I have been seeing slow response times when loading the Proclarity main page.
    Loading the Proclarity main page in Internet Explorer 8 takes 150 seconds, while the same page loads in Google Chrome in 30 seconds.
    Have any of you faced a similar issue?
    I have already explored the following:
    1. Cleared the cache in the PAS tool
    2. Checked Event Viewer for any errors or warnings
    3. Tried browsing the Proclarity URL from the server itself (the performance is still slow)
    4. Validated memory consumption on the server side. MSSQLServer was consuming the most memory, so it was restarted. After the restart the issue remains (with loading the main page in IE only)
    5. Checked drive space; all drives have at least 1.5 GB of free space
    6. Cleared the Proclarity event logs
    The issue is not only with loading the main page; navigating to any further web pages in the Proclarity Standard and Professional versions is also very slow.
    The only other option I am considering now is restarting the Windows server, which may not be easy since it is a production server.
    But the page loads in 30 seconds on Chrome and 150 seconds on IE (i.e. 5 times longer), so does proposing to restart the server make sense?
    Any help, suggestions or thoughts on what I am facing? Thanks
    Regards,
    Aravind

    onInputProcessing for two pages
      DATA: event TYPE REF TO if_htmlb_data.
      event = cl_htmlb_manager=>get_event_ex( request ).
      IF event IS NOT INITIAL AND event->event_name = 'button'.
        navigation->goto_page( event->event_server_name ).
      ENDIF.
    page1.htm
      <%@page language="abap" otrTrim="true"%>
      <%@extension name="htmlb" prefix="htmlb"%>
      <htmlb:content design="design2003">
        <htmlb:page>
          <htmlb:form>
            <htmlb:button       text          = "next"
                                design        = "NEXT"
                                onClick       = "page2.htm" />
          </htmlb:form>
        </htmlb:page>
      </htmlb:content>
    page 2
    <%@page language="abap" otrTrim="true"%>
      <%@extension name="htmlb" prefix="htmlb"%>
      <htmlb:content design="design2003">
        <htmlb:page>
          <htmlb:form>
            <htmlb:button       text          = "Page 1"
                                design        = "PREVIOUS"
                                onClick       = "page1.htm" />
          </htmlb:form>
        </htmlb:page>
      </htmlb:content>
    The above will work fine.
    Another way:
    You can define a global variable in your application class and subsequently change its value, according to your requirement, to the name of the target page.
    Whenever you want to move to some page, just call, in the onClick event handling of the button:
    navigation->goto_page( global_variable ).
    where global_variable is the variable you have defined.
    Hope this works for you; if not, reply.
    regards,
    Hemendra

Maybe you are looking for

  • Phone Service disconnected over VPN

    Hello, I'm using Version 9.2.1 (147214) of Jabber for OS X and I'm using a VPN to connect to my work network with AnyConnect Secure Mobility Client. My issue is that Phone Services are disconnected while the Voicemail and Meeting Accounts are functio

  • Not enough memory to upgrade to Lion?

    I want to upgrade to Lion, but I'm told I don't have enough space. My computer has at least 50 GB free. What do I do?

  • The offset in the file is too big"? Other than, hey it's FAT formatted, etcetera!

    Does anyone have an answer for the "The offset in the file is too big" error while creating a file using the QT API "writeToFile"? I mean something other than it's FAT formatted, not enough space, et cetera. Anyone?

  • Set up procedure for fax on new HP envy 5530

    I need steps to set up my new HP Envy 5530 for fax services. Please reply ASAP.

  • Changing width of an element

    Hi I know it's a very simple solution but I know little bit javascript. so please help me, I want to change the width of <td> element using javascript or JQuery. I have the code like this, <table id="topemp"> <tr> <td id="empid" width="50" height="50