Maintain multi-lingual tables poor performance

hello all,
I am trying to install 4 new languages into one of my test environments.
The first stage is to add the languages via License Manager - I have done this OK.
The next stage is to run 'Maintain multi-lingual tables' via adadmin.
The problem I am having is that one process runs for over 48 hours and then fails with ORA-01555 (snapshot too old). Even though I have increased undo_retention from 10800 to 48000, the job JTFNLINS.sql still fails.
I have a TAR raised at the moment, but not much information is coming back.
Has anyone got any tips/hints on how I can get this job to complete successfully, and in a shorter time?
many thanks
rob
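For anyone hitting the same ORA-01555 here, a rough sanity check on the undo configuration (a sketch only - the undo tablespace and datafile names below are examples, not from this system):
SHOW PARAMETER undo_retention
SHOW PARAMETER undo_tablespace
-- undo_retention is only honoured if the undo tablespace is large enough to keep that much undo
SELECT tablespace_name, ROUND(SUM(bytes)/1024/1024) AS mb
  FROM dba_data_files
 WHERE tablespace_name = 'APPS_UNDOTS1'   -- replace with your undo tablespace
 GROUP BY tablespace_name;
ALTER SYSTEM SET undo_retention = 48000 SCOPE=BOTH;
ALTER TABLESPACE APPS_UNDOTS1 ADD DATAFILE '/u01/oradata/TEST/undo02.dbf' SIZE 2048M;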

Maybe you have already seen this note 199716.1
Sam
http://www.appsdbablog.com

Similar Messages

  • Failure while running Maintain Multi-lingual Tables

    Hi,
    Currently, the system is running with R12.1.1 on RHEL 5.7.
    All the prerequisite patches are installed, including the language performance patches.
    Then I am adding 4 additional languages. So, what has been done so far is...
    1. Activated languages in the license manager.
    2. Run Maintain Multi-lingual Tables in AD Admin.
    I assigned 16 workers for this, and the last two assignments failed with the following messages.
    FAILED: file JTFNLINS.sql on worker 2 for product jtf username JTF.
    Deferred: file JTFNLINS.sql on worker 2 for product jtf username JTF. (Deferment number 1 for this job)
    Assigned: file JTFNLINS.sql on worker 2 for product jtf username JTF.
    FAILED: file FNDNLINS.sql on worker 1 for product fnd username APPLSYS.
    Deferred: file FNDNLINS.sql on worker 1 for product fnd username APPLSYS. (Deferment number 1 for this job)
    Assigned: file FNDNLINS.sql on worker 1 for product fnd username APPLSYS.
    FAILED: file JTFNLINS.sql on worker 2 for product jtf username JTF.
    FAILED: file FNDNLINS.sql on worker 1 for product fnd username APPLSYS.
    Completed: file AKNLINS.sql on worker 6 for product ak username AK.
    There are now 0 jobs remaining (current phase=Done):
    0 running, 0 ready to run and 0 waiting.
    ATTENTION: All workers either have failed or are waiting:
    FAILED: file FNDNLINS.sql on worker 1.
    FAILED: file JTFNLINS.sql on worker 2.
    So, when I looked at the worker-specific logs, the following are the error messages.
    LANGUAGE=AMERICAN
    PACKAGE=JTF_GRID_COLS_PKG
    SQLERRM=ORA-01653: unable to extend table JTF.JTF_GRID_COLS_TL by 16 in tablespace APPS_TS_SEED
    select to_date('ERROR')
    ERROR at line 1:
    ORA-01858: a non-numeric character was found where a numeric was expected
    Time when worker failed: Tue Sep 06 2011 10:11:16
    LANGUAGE=AMERICAN
    PACKAGE=FND_LOOKUP_VALUES_PKG
    SQLERRM=ORA-01653: unable to extend table APPLSYS.FND_LOOKUP_VALUES by 16 in tablespace APPS_TS_SEED
    select to_date('ERROR')
    ERROR at line 1:
    ORA-01858: a non-numeric character was found where a numeric was expected
    Time when worker failed: Tue Sep 06 2011 10:11:36
    1. When I searched around, it seems that <not enough space> is causing this. Or is there any other reason for the failure?
    2. If <not enough space> is causing the failure, how can this be handled/solved?
    3. After the solution is applied, how is <Maintain Multilingual Tables> resumed? Currently I am paused at the following message:
    ATTENTION: All workers either have failed or are waiting:
    FAILED: file FNDNLINS.sql on worker 1.
    FAILED: file JTFNLINS.sql on worker 2.
    ATTENTION: Please fix the above failed worker(s) so the manager can continue.
    Can anyone help me out of this problem...?
    Thanks in advance,
    - SH

    Hi,
    When I searched around, it seems that <not enough space> is causing this. Or is there any other reason for the failure?
    I believe this is the only reason, and you need to add some space to the APPS_TS_SEED tablespace:
    ALTER TABLESPACE APPS_TS_SEED ADD DATAFILE '/<replace with the actual file name path>' SIZE 1024M;
    After the solution is applied, how is <Maintain Multilingual Tables> resumed? Currently I am paused at the following message:
    If you have not stopped adadmin, then use adctrl to restart the failed workers.
    If you have already stopped/quit adadmin, then run adadmin again.
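    As a rough sketch of the space check before restarting (the datafile path below is just an example - adjust it to your own layout):
    -- how much free space is left in the seed tablespace
    SELECT tablespace_name, ROUND(SUM(bytes)/1024/1024) AS free_mb
      FROM dba_free_space
     WHERE tablespace_name = 'APPS_TS_SEED'
     GROUP BY tablespace_name;
    ALTER TABLESPACE APPS_TS_SEED ADD DATAFILE '/u01/oradata/TEST/apps_ts_seed05.dbf' SIZE 1024M;
    Once space is added, adctrl's menu option to restart a failed job (or simply re-running adadmin) lets the workers pick up FNDNLINS.sql and JTFNLINS.sql again.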
    thanks

  • Error Maintaining multi-lingual tables.

    Hello, for the past few days I have been stuck at this error and can't find the solution, so I am bringing my problem here, hoping for an answer.
    Platform: Red Hat Enterprise Linux 5
    I am on R12.1.3 and the database is 11g Release 2.
    I need to install the Finnish language on EBS, so I licensed the Finnish language in EBS.
    As mentioned in the subject, my problem lies with Maintain Multi-lingual Tables.
    After I run adadmin and select Maintain Multi-lingual Tables, everything is fine until the last jobs, where 3 workers fail. Here is the error:
    sqlplus -s APPS/***** @/u01/ar121/VIS/apps/apps_st/appl/ibc/12.0.0/sql/IBCNLINS.sql
    Connected.
    PL/SQL procedure successfully completed.
    MESG
    LANGUAGE=AMERICAN
    PACKAGE=IBC_CITEM_VERSIONS_PKG
    SQLERRM=ORA-29875: failed in the execution of the ODCIINDEXINSERT routine
    ORA-20000: Oracle Text error:
    DRG-50857: oracle error in textindexmethods.ODCIIndexInsert
    ORA-20000: Oracle Text error:
    DRG-10607: index meta data is not ready yet for queuing DML
    DRG-50857: oracle error in drdmlv
    ORA-01426: numeric overflow
    ORA-30576: ConText Option dictionary loading error
    MESG
    ORA-06512: at "CTXSYS.DRUE", line 160
    ORA-06512: at "CTXSYS.TEXTINDEXMETHODS", line 752
    select to_date('ERROR')
    ERROR at line 1:
    ORA-01858: a non-numeric character was found where a numeric was expected
    And here is the worker failure:
    ATTENTION: All workers either have failed or are waiting:
    FAILED: file IBCNLINS.sql on worker 1.
    FAILED: file CSNLINS.sql on worker 2.
    FAILED: file IRCNLINS.sql on worker 3.
    The worker error is the same on all three workers; only the script changes.
    I am pretty new at EBS, and a newbie at DB administration too, so don't be too hard on me. Thx

    Was this instance upgraded from 11i and a 10g database? If so, a step in the upgrade may have been missed. Please see this MOS Doc:
    Applying The Patch 6678700 Worker 1 Failed: File Cskbcat.Ldt. ERRORS: ORA-20000: Oracle Text error: DRG-50857: oracle error in textindexmethods.ODCIIndexUpdate, DRG-13201: KOREAN_LEXER is desupported (Doc ID 1333659.1)
    If you are using 11.2.0.3, MOS Doc 1386945.1 (Oracle Text release 11.2.0.3.0 mandatory Patches) may be applicable.
    HTH
    Srini
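    A quick first check that often goes with these DRG errors (my own suggestion, not from the notes above) is whether CTXSYS ended up with invalid objects after the database upgrade:
    SELECT owner, object_name, object_type
      FROM dba_objects
     WHERE owner = 'CTXSYS'
       AND status = 'INVALID';
    If anything shows up, recompiling (utlrp.sql) or applying the Oracle Text patches from the notes above would be the next step.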

  • Maintain Multi-Lingual Tables Option Fails On Cznlins.Sql

    Hi all.
    I am trying to install the Albanian language in an 11.5.10.2 fresh installation.
    I applied all the necessary patches to reach the NLS patch application stage.
    I followed the link
    https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=316804.1
    for applying the NLS patch. I have to run the AD utility Maintain Multi-Lingual Tables before applying the NLS patch.
    But the workers fail on CZNLINS.sql.
    The worker's log shows:
    Start time for file is: Wed Jun 18 2008 14:13:08
    sqlplus -s APPS/***** @/oracle/prodappl/cz/11.5.0/sql/CZNLINS.sql
    PL/SQL procedure successfully completed.
    MESG
    LANGUAGE = AMERICAN
    PACKAGE= CZ_LOCALIZED_TEXTS_PKG
    SQLERRM= ORA-01400: cannot insert NULL into ("CZ"."CZ_LOCALIZED_TEXTS"."CREATED_BY")
    select to_date('ERROR')
    ERROR at line 1:
    ORA-01858: a non-numeric character was found where a numeric was expected
    Time when worker failed: Wed Jun 18 2008 14:13:08
    My questions are:
    1.) Has anyone encountered this problem before, on this specific CZNLINS.sql file?
    2.) Shall I skip Maintain Multi-Lingual Tables, install the NLS patch, and run Maintain Multi-Lingual Tables after the installation? Do you recommend doing this?
    Thanks in advance, Soni
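    In case it helps narrow things down: the ORA-01400 suggests the insert in CZNLINS.sql is sourcing rows whose mandatory audit column is NULL, so a quick (speculative) check against the table named in the error would be:
    SELECT COUNT(*)
      FROM cz.cz_localized_texts
     WHERE created_by IS NULL;
    If that returns rows, the existing seed data is the problem rather than the MLS step itself.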

    Hi hsawwan
    I am sending you the road map of patches I have applied in my OEL 4 server environment.
    1.     Apply patch 4320012
    2.     Apply patch 6502082
    3.     Apply patch 6323691
    4.     Apply patch 6831988
    5.     Apply patch 4948577
    6.     Apply patch 5713544
    7.     Apply patch 4261542
    8.     Apply patch 5216496
    9.     Apply patch 5753922
    10.     Apply patch 6195759
    11.     Apply patch 5938515
    12.     Apply patch 3830807
    13.     Apply patch 4586086
    14.     Apply patch 4888294
    15.     Apply patch 5989593
    16.     Apply patch 3218526
    17.     Apply patch 3761838
    18.     Apply patch 4206794
    19.     Apply patch 5903765
    20.     Apply patch 3865683
    21.     Apply patch 4619025
    22.     Apply patch 6372396
    23.     Apply patch 4653225
    24.     Installation of Oracle 10g R2 database, software only (different home).
    25.     Installation of Oracle 10g R2 Companion CD products.
    26.     Installation of Oracle 10g R2 Patch Set 2
    27.     Apply patch 5257698
    28.     Upgrading to Oracle 10g R2
    29.     Configuring AutoConfig on db tier
    30.     Apply patch 5753621
    31.     Prepare for NLS (failed on AD Maintain Multi-Lingual Tables on the CZNLINS.sql file)
    Is this the right way to prepare for applying NLS for the Albanian language?
    Do you think I have any conflicts in my list of patches?
    Thanks, Soni

  • Codepage conversion from multi-lingual table using JDBC

    Hello,
    I have an Oracle database that has been around since before Unicode was available. It is currently 9.2.0.7.0 and the charset is WE8ISO8859P1. I am trying to port all of the data in the 'Translation' table to a UTF-8 database. The problem is that each of the translated strings is stored in its own native MS Windows codepage. This table was populated over the years via OCI connections in C/C++, and this method worked. But now we need it moved to UTF-8 and are using Java. Obviously, all the cp1252 languages come over just fine, but all the other languages get converted to garbage by JDBC, since it is auto-converting them and assumes the source is 8859. I could do it with OCI, as I mentioned, but I am mandated to use Java and a web interface, so users can choose what data to move/convert to the new DB (on demand) from a web page.
    Is there a way to force the Oracle connection to think the database is in a different codepage so that the conversion comes out correct? OR
    Is there a way to get the data from Oracle as unconverted chars, or generic binary, or something, and then have JAVA convert it to UTF-8 from the correct codepage before writing it to the new table?
    Any help is appreciated.
    Monte

    ## Is there a way to force the Oracle connection to think the database is
    ## in a different codepage so that the conversion comes out correct?
    No.
    ## Is there a way to get the data from Oracle as unconverted chars,
    ## or generic binary, or something, and then have JAVA convert it
    ## to UTF-8 from the correct codepage before writing it to the new table?
    Yes. Use:
    SELECT UTL_RAW.CAST_TO_RAW(col) FROM tab;
    and retrieve the value with ResultSet.getBytes. Then convert the retrieved byte[] to java.lang.String using String(byte[] bytes, String charsetName) constructor. Note, charsetName is Java name. Then, you can use setString() on PreparedStatement containing the INSERT for the converted value.
    -- Sergiusz
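    A minimal sketch of that approach (the connection strings, table and column names, and the source code page are placeholders - adjust them to your own schema):
    import java.sql.*;

    public class CodepageCopy {
        public static void main(String[] args) throws Exception {
            try (Connection src = DriverManager.getConnection("jdbc:oracle:thin:@//oldhost:1521/OLDDB", "user", "pw");
                 Connection dst = DriverManager.getConnection("jdbc:oracle:thin:@//newhost:1521/UTF8DB", "user", "pw");
                 Statement stmt = src.createStatement();
                 // bypass the driver's WE8ISO8859P1 conversion by reading the raw bytes
                 ResultSet rs = stmt.executeQuery("SELECT id, UTL_RAW.CAST_TO_RAW(text_value) FROM translation");
                 PreparedStatement ins = dst.prepareStatement("INSERT INTO translation (id, text_value) VALUES (?, ?)")) {
                while (rs.next()) {
                    byte[] raw = rs.getBytes(2);
                    // decode with the real source code page of this row (example: windows-31j for Japanese rows)
                    String converted = new String(raw, "windows-31j");
                    ins.setLong(1, rs.getLong(1));
                    ins.setString(2, converted); // the driver handles conversion into the UTF-8 target database
                    ins.executeUpdate();
                }
            }
        }
    }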

  • Poor performances on tables FAGL_011ZC, FAGL_011PC and FAGL_011VC

    For some time now we have noticed very poor performance in some programs accessing these tables.
    The tables are very small and still standard.
    We see that these tables do not have any particular indexes, and their buffering is "allowed but disabled".
    In your opinion, can we activate buffering for these tables? Are there contraindications?
    Our system is ECC5, fairly up to date at the SP level.

    Client is looking upon TDMS (Test Data Migration Server) to replicate the PRD data into lower systems.

  • Maintaining default locale in multi-lingual application

    Hello,
    I have a multi-lingual application where the language can be changed at runtime.
    To make the following code work properly, the default locale has to be set each
    time the language changes. Why?
    Since I sometimes need to check the platform's original locale (I do this with
    "Locale.getDefault()"), I am looking for a way not to change the default locale,
    but still to change the resourceBundle.
    In the API I read under "ResourceBundle, Cache Management" that the bundles are cached. Is this the reason why redefining the resourceBundle has no effect? And if yes, how can it be avoided?
    import java.awt.*;
    import java.awt.event.*;
    import java.util.*;
    import javax.swing.*;
    public class Y extends JFrame {
      boolean toggle;
      Locale currentLocale;
      JButton b;
      ResourceBundle languageBundle;
      String country, userLanguage;
      public Y() {
        setSize(300, 300);
        setDefaultCloseOperation(WindowConstants.EXIT_ON_CLOSE);
        Container cp = getContentPane();
        b = new JButton();
        b.addActionListener(new ActionListener() {
          public void actionPerformed(ActionEvent evt) {
            // toggle between the German and British locale on every click
            toggle = !toggle;
            if (toggle)
              country = "DE";
            else
              country = "GB";
            setUserLanguage(country);
          }
        });
        cp.add(b, BorderLayout.SOUTH);
        setUserLanguage("GB");
        setVisible(true);
      }
      public static void main(String args[]) {
        java.awt.EventQueue.invokeLater(new Runnable() {
          public void run() {
            new Y();
          }
        });
      }
      void setUserLanguage(String country) {
        if (country.equals("DE"))
          userLanguage = "de";
        else
          userLanguage = "en";
        currentLocale = new Locale(userLanguage, country);
    //    System.out.println(currentLocale); // The locale changes ...
    //    Locale.setDefault(currentLocale); // Remove comment slashes and it works.
    //    languageBundle.clearCache(); // No effect.
        languageBundle = ResourceBundle.getBundle("MyBundle", currentLocale);
        System.out.println(languageBundle); // ... but the resourceBundle does not change.
        b.setText(languageBundle.getString("ButtonText"));
      }
    }
    The resource bundle files:
    MyBundle.properties
    ButtonText= Just a button
    MyBundle_de_DE.properties
    ButtonText= Nur ein Knopf

    What's your default locale? If your default locale is de_DE, that's the expected behavior. The reason is that the fallback mechanism searches the default locale's bundle before falling back to the base bundle, i.e., when searching for the en_GB bundle, the search order is:
    en_GB
    en
    de_DE
    de
    (base)
    So it will choose MyBundle_de_DE.
    If you do not want this default-locale fallback, you can specify a ResourceBundle.Control instance, returned from the ResourceBundle.Control.getNoFallbackControl() method, in your getBundle() call. Or, if you do not use JDK 6, you could copy the base bundle to MyBundle_en, which is ugly but should work.
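    As an illustration of that suggestion (a minimal sketch - only the getBundle() call in setUserLanguage() changes):
    // request the bundle for currentLocale without falling back to the default (de_DE) locale
    languageBundle = ResourceBundle.getBundle(
        "MyBundle",
        currentLocale,
        ResourceBundle.Control.getNoFallbackControl(ResourceBundle.Control.FORMAT_DEFAULT));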
    Naoto

  • Poor performance of the BDB cache

    I'm experiencing incredibly poor performance of the BDB cache and wanted to share my experience, in case anybody has any suggestions.
    Overview
    Stone Steps maintains a fork of a web log analysis tool - the Webalizer (http://www.stonesteps.ca/projects/webalizer/). One of the problems with the Webalizer is that it maintains all data (i.e. URLs, search strings, IP addresses, etc) in memory, which puts a cap on the maximum size of the data set that can be analyzed. Naturally, BDB was picked as the fastest database to maintain the analyzed data set on disk and produce reports by querying the database. Unfortunately, once the database grows beyond the cache size, overall performance goes down the drain.
    Note that the version of SSW available for download does not support BDB in the way described below. I can make the source available for you, however, if you find your own large log files to analyze.
    The Database
    Stone Steps Webalizer (SSW) is a command-line utility and needs to preserve all intermediate data for the month on disk. The original approach was to use a plain-text file (webalizer.current, for those who know anything about SSW). The BDB database that replaced this plain text file consists of the following databases:
    sequences (maintains record IDs for all other tables)
    urls - primary database containing URL data (record ID (key), the URL itself, and grouped data such as number of hits, transfer size, etc.)
    urls.values - secondary database that contains a hash of the URL (key) and the record ID linking it to the primary database; this database is used for value lookups
    urls.hits - secondary database that contains the number of hits for each URL (key) and the record ID to link it to the primary database; this database is used to order URLs in the report by the number of hits.
    The remaining databases are here just to indicate the database structure. They are the same in nature as the two described above. The legend is as follows: (s) will indicate a secondary database, (p) - primary database, (sf) - filtered secondary database (using DB_DONOTINDEX).
    urls.xfer (s), urls.entry (s), urls.exit (s), urls.groups.hits (sf), urls.groups.xfer (sf)
    hosts (p), hosts.values (s), hosts.hits (s), hosts.xfer (s), hosts.groups.hits (sf), hosts.groups.xfer (sf)
    downloads (p), downloads.values (s), downloads.xfer (s)
    agents (p), agents.values (s), agents.values (s), agents.hits (s), agents.visits (s), agents.groups.visits (sf)
    referrers (p), referrers.values (s), referrers.values (s), referrers.hits (s), referrers.groups.hits (sf)
    search (p), search.values (s), search.hits (s)
    users (p), users.values (s), users.hits (s), users.groups.hits (sf)
    errors (p), errors.values (s), errors.hits (s)
    dhosts (p), dhosts.values (s)
    statuscodes (HTTP status codes)
    totals.daily (31 days)
    totals.hourly (24 hours)
    totals (one record)
    countries (a couple of hundred countries)
    system (one record)
    visits.active (active visits - variable length)
    downloads.active (active downloads - variable length)
    All these databases (49 of them) are maintained in a single file. Maintaining a single database file is a requirement, so that the entire database for the month can be renamed, backed up and used to produce reports on demand.
    Database Size
    One of the sample Squid logs I received from a user contains 4.4M records and is about 800MB in size. The resulting database is 625MB in size. Note that there is no duplication of text data - only nodes and such values as hits and transfer sizes are duplicated. Each record also contains some small overhead (record version for upgrades, etc).
    Here are the sizes of the URL databases (other URL secondary databases are similar to urls.hits described below):
    urls (p):
    8192 Underlying database page size
    2031 Overflow key/data size
    1471636 Number of unique keys in the tree
    1471636 Number of data items in the tree
    193 Number of tree internal pages
    577738 Number of bytes free in tree internal pages (63% ff)
    55312 Number of tree leaf pages
    145M Number of bytes free in tree leaf pages (67% ff)
    2620 Number of tree overflow pages
    16M Number of bytes free in tree overflow pages (25% ff)
    urls.hits (s)
    8192 Underlying database page size
    2031 Overflow key/data size
    2 Number of levels in the tree
    823 Number of unique keys in the tree
    1471636 Number of data items in the tree
    31 Number of tree internal pages
    201970 Number of bytes free in tree internal pages (20% ff)
    45 Number of tree leaf pages
    243550 Number of bytes free in tree leaf pages (33% ff)
    2814 Number of tree duplicate pages
    8360024 Number of bytes free in tree duplicate pages (63% ff)
    0 Number of tree overflow pages
    The Testbed
    I'm running all these tests using the latest BDB (v4.6) built from the source on Win2K3 server (release version). The test machine is 1.7GHz P4 with 1GB of RAM and an IDE hard drive. Not the fastest machine, but it was able to handle a log file like described before at a speed of 20K records/sec.
    BDB is configured in a single file in a BDB environment, using private memory, since only one process ever has access to the database.
    I ran a performance monitor while running SSW, capturing private bytes, disk read/write I/O, system cache size, etc.
    I also used a code profiler to analyze SSW and BDB performance.
    The Problem
    Small log files, such as 100MB, can be processed in no time - BDB handles them really well. However, once the entire BDB cache is filled up, the machine goes into some weird state and can sit in this state for hours and hours before completing the analysis.
    Another problem is that traversing large primary or secondary databases is a really slow and painful process. It is really not that much data!
    Overall, the 20K rec/sec quoted above drop down to 2K rec/sec. And that's all after most of the analysis has been done, just trying to save the database.
    The Tests
    SSW runs in two modes, memory mode and database mode. In memory mode, all data is kept in memory in SSW's own hash tables and then saved to BDB at the end of each run.
    In memory mode, the entire BDB is dumped to disk at the end of the run. At first it runs fairly fast, until the BDB cache fills up. Then writing (disk I/O) goes at a snail's pace, at about 3.5MB/sec, even though this disk can write at about 12-15MB/sec.
    Another problem is that the OS cache gets filled up, chewing through all available memory long before completion. In order to deal with this problem, I disabled the system cache using the DB_DIRECT_DB/LOG options. I could see the OS cache left alone, but once the BDB cache was filled up, processing speed was as good as stopped.
    Then I flipped options and used the DB_DSYNC_DB/LOG options to disable OS disk buffering. This improved overall performance, and even though the OS cache was filling up, it was being flushed as well and, eventually, SSW finished processing this log, sporting 2K rec/sec. At least it finished, though - other combinations of these options led to never-ending tests.
    In the database mode, stale data is put into BDB after processing every N records (e.g. 300K rec). In this mode, BDB behaves similarly - until the cache is filled up, the performance is somewhat decent, but then the story repeats.
    Some of the other things I tried/observed:
    * I tried to experiment with the trickle option. In all honesty, I hoped that this would be the solution to my problems - trickle some, make sure it's on disk and then continue. Well, trickling was pretty much useless and didn't make any positive impact.
    * I disabled threading support, which gave me some performance boost during regular value lookups throughout the test run, but it didn't help either.
    * I experimented with page sizes, ranging from the default 8K to 64K. Using large pages helped a bit, but as soon as the BDB cache filled up, the story repeated.
    * The Db.put method, which was called 73557 times while profiling saving the database at the end, took 281 seconds. Interestingly enough, this method called ReadFile function (Win32) 20000 times, which took 258 seconds. The majority of the Db.put time was wasted on looking up records that were being updated! These lookups seem to be the true problem here.
    * I tried libHoard - it usually provides better performance, even in a single-threaded process, but libHoard didn't help much in this case.

    I have been able to improve processing speed up to 6-8 times with these two techniques:
    1. A separate trickle thread was created that would periodically call DbEnv::memp_trickle. This works especially well on multicore machines, but also speeds things up a bit on single-CPU boxes. This alone improved speed from 2K rec/sec to about 4K rec/sec.
    2. Maintaining multiple secondary databases in real time proved to be the bottleneck. The code was changed to create the secondary databases at the end of the run (calling Db::associate with the DB_CREATE flag), right before the reports are generated, which use these secondary databases. This improved speed from 4K rec/sec to 14K rec/sec.
    Hello Stone,
    I am facing a similar problem, and I too hope to resolve it with memp_trickle. I had these queries:
    1. What was the % of clean pages that you specified?
    2. At what interval were you calling memp_trickle from this thread?
    This would give me a rough idea about how to tune my app. I would really appreciate it if you can answer these queries.
    Regards,
    Nishith.
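    For anyone trying the first technique, a minimal sketch of such a trickle thread (the 20% clean-page target and 5-second interval are placeholder values, not the ones the poster actually used):
    #include <db_cxx.h>
    #include <atomic>
    #include <chrono>
    #include <thread>

    // Keep a portion of the BDB cache clean in the background so that
    // Db::put rarely has to evict dirty pages synchronously.
    void trickle_loop(DbEnv *env, std::atomic<bool> *stop) {
        while (!stop->load()) {
            int nwrote = 0;
            env->memp_trickle(20, &nwrote);   // aim for at least 20% clean pages (example value)
            std::this_thread::sleep_for(std::chrono::seconds(5));
        }
    }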

  • Multi-Lingual sorting

    Hi
    As our database tables contain multi-lingual data (English/Japanese/German/Spanish etc.), we need to implement multi-lingual sorting in our application. We are using Oracle 10g as the database for a .NET application.
    If anyone has already implemented multi-lingual sorting in an effective way, please guide me on how to proceed. I badly need a solution for this.
    Hi Scott Swank, I hope you can give me a better solution.
    Thanks & Regards,
    Ela

    You may use NLS_SORT to get proper linguistic sorting, for example:
    ALTER SESSION SET NLS_SORT='GERMAN';
    Note: linguistic sorting can prevent Oracle from using indexes, decreasing performance.
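    To expand on that note (a hedged sketch - the table and column names are made up for illustration):
    -- sort a specific query linguistically without changing the session
    SELECT cust_name FROM customers ORDER BY NLSSORT(cust_name, 'NLS_SORT=GERMAN');
    -- a function-based index gives the optimizer something to use for that ordering
    CREATE INDEX customers_name_de_i ON customers (NLSSORT(cust_name, 'NLS_SORT=GERMAN'));
    -- session-level alternative: linguistic comparison as well as linguistic sorting
    ALTER SESSION SET NLS_COMP = LINGUISTIC;
    ALTER SESSION SET NLS_SORT = GERMAN;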

  • Web Form Validation Message Language Setting at Runtime when working in a multi-lingual environment

    Business Catalyst uses the default culture language to display web form validation messages.
    When we are in a multi-lingual environment and are not using subdomains to handle the multilingual sites, we found that the validation messages appear in the default culture rather than the site language. To make this work, we need to add the script below to our template.
    <script type="text/javascript">
    $(document).ready(function(){
      var head = document.getElementsByTagName('head')[0];
      var script = document.createElement('script');
      // load Business Catalyst's validation messages for the required language
      script.src = '/BcJsLang/ValidationFunctions.aspx?lang=FR';
      script.charset = 'utf-8';
      script.type = 'text/javascript';
      head.appendChild(script);
    });
    </script>
    This assumes the template is in French; you can change the lang parameter in the script according to your language.

    After user 1 submits the page, it might not even be committed, so there is no way to have the pending data from user1 seen by user2.
    However, we do have a new feature in ADF 11g TP4 that I plan to blog more about called Auto-Refresh view objects. This feature allows a view object instance in a shared application module to refresh its data when it receives the Oracle 11g database change notification that a row that would affect the results of the query has been changed.
    The minimum requirements in 11g TP4 to experiment with this feature which I just tested are the following:
    1. Must use Database 11g
    2. Database must have its COMPATIBLE parameter set to '11.0.0.0.0' at least
    3. Set the "AutoRefresh" property of the VO to true (on the Tuning panel)
    4. Add an instance of that VO to an application module (e.g. LOVModule)
    5. Configure that LOVModule as an application-level shared AM in the project properties
    6. Define an LOV based on a view accessor that references the shared AM's VO instance
    7. DBA must have performed a 'GRANT CHANGE NOTIFICATION TO YOURUSER'
    8. Build an ADF Form for the VO that defined the LOV above and run the web page
    9. In SQLPlus, go modify a row of the table on which the shared AM VO is based and commit
    When the Database delivers the change notification, the shared AM VO instance will requery itself.
    However that notification does not arrive all the way out to the web page, so you won't see the change until the next time you repaint the list.
    Perhaps there is some way to take it even farther with the active data feature PaKo mentions, but I'm not familiar enough with that myself to say whether it would work for you here.

  • Poor performance and high number of gets on seemingly simple insert/select

    Versions & config:
    Database : 10.2.0.4.0
    Application : Oracle E-Business Suite 11.5.10.2
    2 node RAC, IBM AIX 5.3
    Here's the insert / select; I'm struggling to explain why it's taking 6 seconds, and why it needs to get > 24,000 blocks:
    INSERT INTO WF_ITEM_ATTRIBUTE_VALUES ( ITEM_TYPE, ITEM_KEY, NAME, TEXT_VALUE,
      NUMBER_VALUE, DATE_VALUE ) SELECT :B1 , :B2 , WIA.NAME, WIA.TEXT_DEFAULT,
      WIA.NUMBER_DEFAULT, WIA.DATE_DEFAULT FROM WF_ITEM_ATTRIBUTES WIA WHERE
      WIA.ITEM_TYPE = :B1
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          4           0
    Execute      2      3.44       6.36          2      24297        198          36
    Fetch        0      0.00       0.00          0          0          0           0
    total        3      3.44       6.36          2      24297        202          36
    Misses in library cache during parse: 1
    Misses in library cache during execute: 2
    Also from the tkprof output, the explain plan and waits - virtually zero waits:
    Rows     Execution Plan
          0  INSERT STATEMENT   MODE: ALL_ROWS
          0   TABLE ACCESS   MODE: ANALYZED (BY INDEX ROWID) OF 'WF_ITEM_ATTRIBUTES' (TABLE)
          0    INDEX   MODE: ANALYZED (RANGE SCAN) OF 'WF_ITEM_ATTRIBUTES_PK' (INDEX (UNIQUE))
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      library cache lock                             12        0.00          0.00
      gc current block 2-way                         14        0.00          0.00
      db file sequential read                         2        0.01          0.01
      row cache lock                                 24        0.00          0.01
      library cache pin                               2        0.00          0.00
      rdbms ipc reply                                 1        0.00          0.00
      gc cr block 2-way                               4        0.00          0.00
      gc current grant busy                           1        0.00          0.00
    ********************************************************************************
    The statement was executed 2 times. I know from slicing up the trc file that:
    exe #1 : elapsed = 0.02s, query = 25, current = 47, rows = 11
    exe #2 : elapsed = 6.34s, query = 24272, current = 151, rows = 25
    If I run just the select portion of the statement, using bind values from exe #2, I get small number of gets (< 10), and < 0.1 secs elapsed.
    If I make the insert into an empty, non-partitioned table, I get :
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.01       0.08          0        137         53          25
    Fetch        0      0.00       0.00          0          0          0           0
    total        2      0.01       0.08          0        137         53          25
    and the same explain plan - using an index range scan on WF_Item_Attributes_PK.
    This problem is part of testing of a database upgrade and country go-live. On a 10.2.0.3 test system (non-RAC), the same insert/select - using the real WF_Item_Attributes_Value table takes :
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.10         10         27        136          25
    Fetch        0      0.00       0.00          0          0          0           0
    total        2      0.00       0.10         10         27        136          25
    So I'm struggling to understand why the performance on the 10.2.0.4 RAC system is so much worse for this query, and why it's doing so many gets. Suggestions, thoughts, ideas welcomed.
    I've verified system level things - CPUs weren't/aren't max'd out, no significant paging/swapping activity, run queue not long. AWR report for the time period shows nothing unusual.
    further info on the objects concerned:
    query source table :
    WF_Item_Attributes_PK : unique index on Item_Type, Name. Index has 144 blocks, non-partitioned
    WF_Item_Attributes tbl : non-partitioned, 160 blocks
    insert destination table:
    WF_Item_Attribute_Values:
    range partitioned on Item_Type, and hash sub-partitioned on Item_Key
    both executions of the insert hit the partition with the most data : 127,691 blocks total ; 8 sub-partitions with 15,896 to 16,055 blocks per sub-partition.
    WF_Item_Attribute_Values_PK : unique index on columns Item_Type, Item_Key, Name. Range/hash partitioned as per table.
    Bind values:
    exe #1 : Item_Type (:B1) = OEOH, Item_Key (:B2) = 1048671
    exe #2 : Item_Type (:B1) = OEOL, Item_Key (:B2) = 4253168
    number of rows in WF_Item_Attribute_Values for Item_Type = OEOH : 1132587
    number of rows in WF_Item_Attribute_Values for Item_Type = OEOL : 18763670
    The non-RAC 10.2.0.3 test system (clone of Production from last night) has higher row counts for these 2.
    thanks and regards
    Ivan
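    For reference, one way to reproduce the select-only measurement with the exe #2 binds (a rough sqlplus sketch of the comparison described above):
    SET AUTOTRACE TRACEONLY STATISTICS
    VARIABLE B1 VARCHAR2(30)
    VARIABLE B2 VARCHAR2(30)
    EXEC :B1 := 'OEOL'
    EXEC :B2 := '4253168'
    SELECT :B1, :B2, WIA.NAME, WIA.TEXT_DEFAULT, WIA.NUMBER_DEFAULT, WIA.DATE_DEFAULT
      FROM WF_ITEM_ATTRIBUTES WIA
     WHERE WIA.ITEM_TYPE = :B1;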

    hi Sven,
    Thanks for your input.
    1) I guess so, but I haven't lifted the lid to delve inside the form to see which one. I don't think it's the cause though, as I got poor performance running the insert statement myself (same statement, using my own bind value).
    2) In every execution plan I've seen, checked, re-checked, it uses a range scan on the primary key. It is the most efficient I think, but the source table is small in any case - table 160 blocks, PK index 144 blocks. So I think it's the partitioned destination table that's the problem - but we only see this issue on the 10.2.0.4 pre-production (RAC) system. The 10.2.0.3 (RAC) Production system doesn't have it. This is why it's so puzzling to me - the source table read is fast, and does few gets.
    3) table storage details below - the Item_Types being used were 'OEOH' (fast execution) and 'OEOL' (slow execution). Both hit partition WF_ITEM49, hence I've only expanded the subpartition info for that one (there are over 600 sub-partitions).
    ============= From DBA_Part_Tables : Partition Type / Count =============
    PARTITI SUBPART PARTITION_COUNT DEF_TABLESPACE_NAME
    RANGE   HASH                 77 APPS_TS_TX_DATA
    1 row selected.
    ============= From DBA_Tab_Partitions : Partition Names / Tablespaces =============
    Partition Name       TS Name         High Value           High Val Len
    WF_ITEM1             APPS_TS_TX_DATA 'A1'                            4
    WF_ITEM2             APPS_TS_TX_DATA 'AM'                            4
    WF_ITEM3             APPS_TS_TX_DATA 'AP'                            4
    WF_ITEM47            APPS_TS_TX_DATA 'OB'                            4
    WF_ITEM48            APPS_TS_TX_DATA 'OE'                            4
    WF_ITEM49            APPS_TS_TX_DATA 'OF'                            4
    WF_ITEM50            APPS_TS_TX_DATA 'OK'                            4
    WF_ITEM75            APPS_TS_TX_DATA 'WI'                            4
    WF_ITEM76            APPS_TS_TX_DATA 'WS'                            4
    WF_ITEM77            APPS_TS_TX_DATA MAXVALUE                        8
    77 rows selected.
    ============= From dba_part_key_columns : Partition Columns =============
    NAME                           OBJEC Column Name                    COLUMN_POSITION
    WF_ITEM_ATTRIBUTE_VALUES       TABLE ITEM_TYPE                                    1
    1 row selected.
    PPR1 sql> @q_tabsubpart wf_item_attribute_values WF_ITEM49
    ============= From DBA_Tab_SubPartitions : SubPartition Names / Tablespaces =============
    Partition Name       SUBPARTITION_NAME              TS Name         High Value           High Val Len
    WF_ITEM49            SYS_SUBP3326                   APPS_TS_TX_DATA                                 0
    WF_ITEM49            SYS_SUBP3328                   APPS_TS_TX_DATA                                 0
    WF_ITEM49            SYS_SUBP3332                   APPS_TS_TX_DATA                                 0
    WF_ITEM49            SYS_SUBP3331                   APPS_TS_TX_DATA                                 0
    WF_ITEM49            SYS_SUBP3330                   APPS_TS_TX_DATA                                 0
    WF_ITEM49            SYS_SUBP3329                   APPS_TS_TX_DATA                                 0
    WF_ITEM49            SYS_SUBP3327                   APPS_TS_TX_DATA                                 0
    WF_ITEM49            SYS_SUBP3325                   APPS_TS_TX_DATA                                 0
    8 rows selected.
    ============= From dba_part_key_columns : Partition Columns =============
    NAME                           OBJEC Column Name                    COLUMN_POSITION
    WF_ITEM_ATTRIBUTE_VALUES       TABLE ITEM_KEY                                     1
    1 row selected.
    from DBA_Segments - just for partition WF_ITEM49  :
    Segment Name                        TSname       Partition Name       Segment Type     BLOCKS     Mbytes    EXTENTS Next Ext(Mb)
    WF_ITEM_ATTRIBUTE_VALUES            @TX_DATA     SYS_SUBP3332         TblSubPart        16096     125.75       1006         .125
    WF_ITEM_ATTRIBUTE_VALUES            @TX_DATA     SYS_SUBP3331         TblSubPart        16160     126.25       1010         .125
    WF_ITEM_ATTRIBUTE_VALUES            @TX_DATA     SYS_SUBP3330         TblSubPart        16160     126.25       1010         .125
    WF_ITEM_ATTRIBUTE_VALUES            @TX_DATA     SYS_SUBP3329         TblSubPart        16112    125.875       1007         .125
    WF_ITEM_ATTRIBUTE_VALUES            @TX_DATA     SYS_SUBP3328         TblSubPart        16096     125.75       1006         .125
    WF_ITEM_ATTRIBUTE_VALUES            @TX_DATA     SYS_SUBP3327         TblSubPart        16224     126.75       1014         .125
    WF_ITEM_ATTRIBUTE_VALUES            @TX_DATA     SYS_SUBP3326         TblSubPart        16208    126.625       1013         .125
    WF_ITEM_ATTRIBUTE_VALUES            @TX_DATA     SYS_SUBP3325         TblSubPart        16128        126       1008         .125
    WF_ITEM_ATTRIBUTE_VALUES_PK         @TX_IDX      SYS_SUBP3332         IdxSubPart        59424     464.25       3714         .125
    WF_ITEM_ATTRIBUTE_VALUES_PK         @TX_IDX      SYS_SUBP3331         IdxSubPart        59296     463.25       3706         .125
    WF_ITEM_ATTRIBUTE_VALUES_PK         @TX_IDX      SYS_SUBP3330         IdxSubPart        59520        465       3720         .125
    WF_ITEM_ATTRIBUTE_VALUES_PK         @TX_IDX      SYS_SUBP3329         IdxSubPart        59104     461.75       3694         .125
    WF_ITEM_ATTRIBUTE_VALUES_PK         @TX_IDX      SYS_SUBP3328         IdxSubPart        59456      464.5       3716         .125
    WF_ITEM_ATTRIBUTE_VALUES_PK         @TX_IDX      SYS_SUBP3327         IdxSubPart        60016    468.875       3751         .125
    WF_ITEM_ATTRIBUTE_VALUES_PK         @TX_IDX      SYS_SUBP3326         IdxSubPart        59616     465.75       3726         .125
    WF_ITEM_ATTRIBUTE_VALUES_PK         @TX_IDX      SYS_SUBP3325         IdxSubPart        59376    463.875       3711         .125
    sum                                                                                               4726.5
    [the @ in the TS Name is my shortcode, as Apps stupidly prefixes every ts with "APPS_TS_"]
    The tablespaces used for all subpartitions are UNIFORM extent mgmt, AUTO segment_space_management, LOCAL extent mgmt.
    regards
    Ivan

  • Poor performance and overheat

    Story
    I've been using arch for the past four months, dual booting with my old Windows XP. As I'm very fond of Flash games and make my own programs with a cross-platform language, I've found few problems with the migration. One of them was the Adobe Flash Player performance, which was stunningly bad. But everyone was saying that was normal, so I left it as is.
    However, one special error always worried me: a seemingly randomly started siren sound coming from the motherboard speaker. Thinking it was an alarm about some fatal kernel error, I had been solving it mostly with reboots.
    But then it happened. While playing a graphics-intensive game on Windows shortly after rebooting from Arch, the same siren sound started. It felt like a slap across the face: it was not a kernel error, it was a motherboard overheat alarm.
    The Problem
    Since the computer was giving overheat signs, I started looking at things from another angle. I noticed that some tasks take unusually long in Arch (e.g. building things from source, Firefox / OpenOffice startup, any graphics-intensive program), especially the Flash Player.
    A great example is the game Penguinz, which runs flawlessly in Windows but is unbearably slow in Arch. So slow that it alone caused said overheat twice. And when I tried to record another Flash game with XVidCap, things went so badly that the game halved its FPS and started ignoring key presses.
    Tech Info
    Dual Core 3.2 processor
    1 gb RAM
    256 mb Geforce FX 5500 video card
    Running Openbox
    Using proprietary NVIDIA driver
    TL;DR: poor performance on some tasks. Flash Player is so slow that it overheats the CPU and makes me cry. It's fine on Windows.
    Off the top of my head I can think of some possible reasons: a bad video driver, an unwanted background application messing things up, known Flash Player performance problems, or an ActionScript Linux/Arch-only bug.
    Where do you think is the problem?

    jwcxz wrote:Have you looked at your process table for any program with abnormal CPU usage?  That seems like the logical place to start.  You shouldn't be getting poor performance in anything with that system.  I have a 2.0GHz Core 2 Duo and an Intel GMA 965 and I've never had any problems with Flash.  It's much better than it used to be.
    Pidgin scared me for a while because it froze for no apparent reason. After fixing this, the process table contains these two entries:
    %CPU
    Firefox: 80%~100%
    X: 0~20%
    It is a graphics-intensive test, so I think the X usage is normal. It might be some oddity in the Firefox+Linux+Flash combination, maybe a conflict. I'll try another browser.
    EDIT:
    Did a Javascript benchmark to test both systems and browsers.
    Windows XP + Firefox = 4361.4ms
    Arch + Firefox = 5146.0ms
    So it's actually a lot slower without even taking Flash into account. If someone knows a platform-independent benchmark to test both systems completely, and not only the browser, feel free to point it out.
    I think that something is already wrong here and the lack of power saving systems only aggravated the problem, causing overheat.
    EDIT2:
    Browser performance fixed: migrated to Midori. Flash is still slower than on Windows, but now it's bearable. A pretty neat browser too; it goes better with the Arch Way. It shouldn't fix the temperature, however.
    Applied B's idea, but didn't test yet. I'm not into the mood of playing flash games for two straight hours today.

  • Poor performance and voltage fluctuation.

    I'm running two 280Xs in CrossFire, which I upgraded to from two 6970s. When I'm playing DayZ my FPS never goes above 30. With my 6970s I was well into the 70s. I never changed my video settings when I went from the 6970s to the 280Xs. For the most part I have a lot of the graphics settings low or disabled. Borderlands 2 is a nightmare on the 280X: I drop down into the 10 fps range, and that never happened on my 6970s.
    During games my core clock fluctuates between 500 MHz and 1020 MHz. I have ULPS disabled as well.
    My power supply is an Antec HCG-900.
    Happened on all these drivers 14.4, 14.6 Beta, and 14.6 RC (currently installed).


  • Poor Performance in ETL SCD Load

    Hi gurus,
    We are facing some serious performance problems during an UPDATE step, which is part of a SCD type 2 process for Assets (SIL_Vert/SIL_AssetDimension_SCDUpdate). The source system is Siebel CRM. The tools for ETL processing are listed below:
    Informatica PowerCenter 9.1.0 HotFix2 0902 357 (R181 D90)
    Oracle BI Data Warehouse Administration Console (Dac Build AN 10.1.3.4.1.patch.20120711.0516)
    The OOTB mapping for this step is a simple SELECT command - which retrieves historical records from the dimension to be updated - and the target table (W_ASSET_D), with no Update Strategy. The session is configured to always perform UPDATEs. We have also set $$UDATE_ALL_HISTORY to "N" in DAC: this way we only select the most recent records from the dimension history, and the only columns that are effectively updated are the SCD system columns (EFFECTIVE_FROM_DT, EFFECTIVE_TO_DT, CURRENT_FLG, ...).
    The problem is that the UPDATE command is executed individually by Informatica PowerCenter for each record in W_ASSET_D. For 2,486,000 UPDATEs we had ~2h of processing - very poor performance for a single ETL step. Our W_ASSET_D has ~150M records today.
    Some questions for the above:
    - Is this an expected average execution duration for this number of records?
    - Record-by-record updates are not optimal; this could easily be overcome by a BULK COLLECT/FORALL approach (see the sketch below). Is there a way to optimize the method used by Informatica, or do we need to write our own PL/SQL script and run it in DAC?
    Thanks in advance,
    Guilherme
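    A rough PL/SQL sketch of the BULK COLLECT/FORALL idea mentioned above (W_ASSET_DS and INTEGRATION_ID are assumptions about the staging table and natural key, not taken from this thread; a real script would also batch the fetch with a LIMIT clause):
    DECLARE
      TYPE t_rowid_tab IS TABLE OF ROWID;
      l_rows t_rowid_tab;
    BEGIN
      -- collect the current rows whose SCD columns need to be closed out
      SELECT d.rowid BULK COLLECT INTO l_rows
        FROM w_asset_d d
       WHERE d.current_flg = 'Y'
         AND EXISTS (SELECT 1 FROM w_asset_ds s
                      WHERE s.integration_id = d.integration_id);
      -- one set of bulk round trips instead of one UPDATE per row
      FORALL i IN 1 .. l_rows.COUNT
        UPDATE w_asset_d
           SET current_flg     = 'N',
               effective_to_dt = SYSDATE
         WHERE rowid = l_rows(i);
      COMMIT;
    END;
    /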

    Hi,
    Thank you for posting in Windows Server Forum.
    Initially please check the configuration & requirement part for RemoteFX. You can follow below article for further research.
    RemoteFX vGPU Setup and Configuration Guide for Windows Server 2012
    http://social.technet.microsoft.com/wiki/contents/articles/16652.remotefx-vgpu-setup-and-configuration-guide-for-windows-server-2012.aspx
    Hope it helps!
    Thanks.
    Dharmesh Solanki
    TechNet Community Support

  • Maintain acct determination (table T030B) for posting key IRX / IGX (M8395)

    Dear Guru,
    I am testing IS-Oil & Gas (Downstream) - Exchanges.
    I created an Exchange Agreement, a Purchase Contract and a Sales Contract; the Purchase Contract and Sales Contract are assigned to the Exchange Agreement.
    A PO is created with reference to the Purchase Contract.
    Upon post goods receipt in MIGO, I encountered the error message "Maintain account determination (table T030B) for posting key IRX (Message no. M8395)", which does not allow the PGR to go through.
    A call-off is created with reference to the Sales Contract.
    Similarly, when PGI is performed in VL01N, I encountered the error message "Maintain account determination (table T030B) for posting key IGX (Message no. M8395)", which does not allow the PGI to go through.
    I checked in OBYC; there are no transaction keys for IRX and IGX.
    Please advise. Thank you.
    Regards,
    WL

    Hi WL,
    Please use transaction code O54E to maintain the required configuration.
    I hope this helps you.
    Thanks & Regards
    Kalpesh Chavda
