[URGENT] Performance problem with BC4J and partitioned data

Hi all,
I have a big performance problem with BC4J and partitioned data. As a partitioned table shouldn't have a primary key like a sequence (or anything else), my partitioned table doesn't have a primary key at all.
When I debug my BC4J application I can see the message "ignoring row with no primary key" from EntityCache. It takes a long time to retrieve my data, even when I use the partition keys. A quick-and-dirty Forms application was several times faster!
Is this a bug in BC4J, or is BC4J not suitable for partitioned data? Can anyone give me a hint on what to do to make the BC4J application fast even with partitioned data? In a non-partitioned environment the application works quite well, so it seems there must be an "error" somewhere in this part.
Thanks,
Axel

Here's a SQL statement that creates the table.
CREATE TABLE SEARCH
(SEAR_PARTKEY_DAY            NUMBER(4)        NOT NULL
,SEAR_PARTKEY_EMP            VARCHAR2(2)      NOT NULL
,SEAR_ID                     NUMBER(20)       NOT NULL
,SEAR_ENTRY_DATE             TIMESTAMP        NOT NULL
,SEAR_LAST_MODIFIED          TIMESTAMP        NOT NULL
,SEAR_STATUS                 VARCHAR2(100)    DEFAULT '0'
,SEAR_ITC_DATE               TIMESTAMP        NOT NULL
,SEAR_MESSAGE_CLASS          VARCHAR2(15)     NOT NULL
,SEAR_CHIPHERING_TYPE        VARCHAR2(256)
,SEAR_GMAT                   VARCHAR2(1)      DEFAULT 'U'
,SEAR_NATIONALITY            VARCHAR2(3)      DEFAULT 'XXX'
,SEAR_MESSAGE_ID             VARCHAR2(32)     NOT NULL
,SEAR_COMMENT                VARCHAR2(256)    NOT NULL
,SEAR_NUMBER_OF              NUMBER(3)        NOT NULL
,SEAR_INTERCEPTION_SYSTEM    VARCHAR2(40)
,SEAR_COMM_PRIOD_H           NUMBER(5)        DEFAULT -1
,SEAR_PRIOD_R                NUMBER(5)        DEFAULT -1
,SEAR_INMARSAT_CES           VARCHAR2(40)
,SEAR_BEAM                   VARCHAR2(10)
,SEAR_DIALED_NUMBER          VARCHAR2(70)
,SEAR_TRANSMIT_NUMBER        VARCHAR2(70)
,SEAR_CALLED_NUMBER          VARCHAR2(40)
,SEAR_CALLER_NUMBER          VARCHAR2(40)
,SEAR_MATERIAL_TYPE          VARCHAR2(3)      NOT NULL
,SEAR_SOURCE                 VARCHAR2(10)
,SEAR_MAPPING                VARCHAR2(100)    DEFAULT '__REST'
,SEAR_DETAIL_MAPPING         VARCHAR2(100)
,SEAR_PRIORITY               NUMBER(3)        DEFAULT 255
,SEAR_LANGUAGE               VARCHAR2(5)      DEFAULT 'XXX'
,SEAR_TRANSMISSION_TYPE      VARCHAR2(40)
,SEAR_INMARSAT_STD           VARCHAR2(1)
,SEAR_FILE_NAME              VARCHAR2(100)    NOT NULL
)
PARTITION BY RANGE (SEAR_PARTKEY_DAY, SEAR_PARTKEY_EMP)
( PARTITION SEARCH_MAX VALUES LESS THAN (MAXVALUE, MAXVALUE)
  TABLESPACE MIRA4_SEARCH_EVEN  -- closing parenthesis and TABLESPACE keyword restored; the original post lost them
);
Of course SEAR_ID is filled by a sequence, but the column is not the primary key, as that would decrease the performance of the partitioned data.
We moved our application to native JDBC and the performance is better than we ever expected!
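Edit: an untested idea for anyone else who hits the "ignoring row with no primary key" message. A partitioned table can still carry a primary key without an expensive global index if the partition columns lead the constraint, because the supporting unique index can then be created LOCAL; that would give BC4J's EntityCache the row identity it wants. A sketch against the DDL above (the constraint name is made up):
ALTER TABLE SEARCH ADD CONSTRAINT SEARCH_PK
  PRIMARY KEY (SEAR_PARTKEY_DAY, SEAR_PARTKEY_EMP, SEAR_ID)
  USING INDEX LOCAL;  -- one index segment per partition, no global index maintenance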

Similar Messages

  • Performance problems with XMLTABLE and XMLQUERY involving relational data

    Hello-
    Is anyone out there using XMLTABLE or XMLQUERY with more than a toy set of data? I am running into serious performance problems trying to do basic things such as:
    * Combine records in 10 relational tables into a single table of XMLTYPE records using XMLTABLE. This hangs indefinitely for any more than 800 records. Oracle has confirmed that this is a problem and is working on a fix.
    * Combine a single XMLTYPE record with several relational code tables into a single XMLTYPE record using XMLQUERY and ora:view() to insert code descriptions after each code. Performance is 10 seconds for 10 records (terrible) when passing a batch of records, or 160 seconds for one record (unacceptable!). How can it take 10 times longer to process 1/10th the number of records? Ironically, the query plan says it will do a full table scan of records for the batch, but an index access for the one record passed to the XMLQUERY.
    I am rapidly losing faith in XML DB, and desperately need some hints on how to work around these performance problems, or at least some assurance that others have been able to get this thing to perform.

    <Note>Long post, sorry.</Note>
    First, thanks for the responses above. I'm impressed with the quality of thought put into them. (Do the forum rules allow me to offer rewards? :) One suggestion in particular made a big performance improvement, and I’m encouraged to hear of good performance in pure XML situations. Unfortunately, I think there is a real performance challenge in two use cases that are pertinent to the XML+relational subject of this post and probably increasingly common as XML DB usage increases:
    •     Converting legacy tabular data into XML records; and
    •     Performing code table lookups for coded values in XML records.
    There are three things I want to accomplish with this post:
    •     Clarify what we are trying to accomplish, which might expose completely different approaches than I have tried
    •     Let you know what I tried so far and the rationale for my approach to help expose flaws in my thinking and share what I have learned
    •     Highlight remaining performance issues in hopes that we can solve them
    What we are trying to accomplish:
    •     Receive a monthly feed of 10,000 XML records (batched together in text files), each containing information about an employee, including elements that repeat for every year of service. We may need to process an annual feed of 1,000,000 XML records in the future.
    •     Receive a one-time feed of 500,000 employee records stored in about 10 relational tables, with a maximum join depth of 2 or 3. This is inherently a relational-to-XML process. One record/second is minimally acceptable, but 10 records/sec would be better.
    •     Consolidate a few records (from different providers) for each employee into a single record. Given the data volume, we need to achieve a minimum rate of 10 records per second. This may be an XML-only process, or XML+relational if code lookups are done during consolidation.
    •     Allow the records to be viewed and edited, with codes resolved into user-friendly descriptions. Since a user is sitting there, code lookups done when a record is viewed (vs. during consolidation) should not take more than 3 seconds total. We have about 20 code tables averaging a few hundred rows each, though one has 450,000 rows.
    As requested earlier, I have included code at the end of this post for example tables and queries that accurately (but simply) replicate our real system.
    What we did and why:
    •     Stored the source XML records as CLOBs: We did this to preserve the records exactly as they were certified and sent from providers. In addition, we always access the entire XML record as a whole (e.g., when viewing a record or consolidating employee records), so this storage model seemed like a good fit. We can copy them into another format if necessary.
    •     Stored the consolidated XML employee records as “binary XML”. We did this because we almost always access a single, entire record as a whole (for view/edit), but might want to create some summary statistics at some point. Binary XML seemed the best fit.
    •     Used ora:view() for both tabular source records and lookup tables. We are not aware of any alternatives at this time. If it made sense, most code tables could be pre-converted into XML documents, but this seemed risky from a performance standpoint because the lookups use both code and date range constraints (the meaning of codes changes over time).
    •     Stored records as XMLTYPE columns in a table with other key columns on the table, plus an XMLTYPE metadata column. We thought this would facilitate pulling a single record (or a few records for a given employee) quickly. We knew this might be unnecessary given XML indexes and virtual columns, but were not experienced with those and wanted the comfort of traditional keys. We did not use XMLTYPE tables or the XML Repository for documents.
    •     Used XMLTABLE to consolidate XML records by looping over each distinct employee ID in the source batch. We also tried XMLQUERY and it seems to perform about the same. We can achieve 10 to 20 records/second if we do not do any code lookups during consolidation, just meeting our performance requirement, but still much slower than expected.
    •     Used PL/SQL with XMLFOREST to convert tabular source records to XML by looping over distinct employee IDs. We tried this outside PL/SQL both with XMLFOREST and XMLTABLE+ora:view(), but it hangs in both cases for more than 800 records (a known/open issue). We were able to get it to work by using an explicit cursor to loop over distinct employee IDs (rather than processing all records at once within the query). The performance is one record/second, which is minimally acceptable, but it interferes with other database activity.
    •     Used XMLQUERY plus ora:view() plus XPATH constraints to perform code lookups. When passing a single employee record, the response time ranges from 1 sec to 160 sec depending on the length of the record (i.e., number of years of service). We achieved a 5-fold speedup using an XMLINDEX (thank you Marco!!). The result may be minimally acceptable, but I’m baffled why the index would be needed when processing a single XML record. Other things we tried: joining code tables in the FOR...WHERE clauses, joining code tables using LET with XPATH constraints and LET with WHERE clause constraints, and looking up codes individually via JDBC from the application code at presentation time. All those approaches were slower. Note: the difference I mentioned above in equality/inequality constraint performance was due to data record variations not query plan variations.
    What issues remain?
    We have a minimally acceptable solution from a performance standpoint with one very awkward PL/SQL workaround. The performance of a mixed XML+relational data query is still marginal IMHO, until we properly utilize available optimizations, fix known problems, and perhaps get some new query optimizations. On the last point, I think the query plan for tabular lookups of codes in XML records is falling short right now. I’m reminded of data warehousing in the days before hash joins and star join optimization. I would be happy to be wrong, and just as happy for viable workarounds if I am right!
    Here are the details on our code lookup challenge. Additional suggestions would be greatly appreciated. I’ll try to post more detail on the legacy table conversion challenge later.
    -- The main record table:
    create table RECORDS (
      SSN varchar2(20),
      XMLREC sys.xmltype
    )  -- closing parenthesis was missing in the original post
    xmltype column XMLREC store as binary xml;
    create index records_ssn on records(ssn);
    -- A dozen code tables represented by one like this:
    create table CODES (
      CODE varchar2(4),
      DESCRIPTION varchar2(500)
    );
    create index codes_code on codes(code);
    -- Some XML records with coded values (the real records are much more complex of course):
    -- I think this took about a minute or two
    DECLARE
      ssn varchar2(20);
      xmlrec xmltype;
      i integer;
    BEGIN
      xmlrec := xmltype('<?xml version="1.0"?>
    <Root>
      <Id>123456789</Id>
      <Element>
        <Subelement1><Code>11</Code></Subelement1>
        <Subelement2><Code>21</Code></Subelement2>
        <Subelement3><Code>31</Code></Subelement3>
      </Element>
      <Element>
        <Subelement1><Code>11</Code></Subelement1>
        <Subelement2><Code>21</Code></Subelement2>
        <Subelement3><Code>31</Code></Subelement3>
      </Element>
      <Element>
        <Subelement1><Code>11</Code></Subelement1>
        <Subelement2><Code>21</Code></Subelement2>
        <Subelement3><Code>31</Code></Subelement3>
      </Element>
    </Root>');  -- closing quote of the XML literal was missing in the original post
      for i IN 1..100000 loop
        insert into records(ssn, xmlrec) values (i, xmlrec);
      end loop;
      commit;
    END;
    /
    -- Some code data like this (ignoring date ranges on codes):
    DECLARE
      description varchar2(100);
      i integer;
    BEGIN
      description := 'This is the code description ';
      for i IN 1..3000 loop
        insert into codes(code, description) values (to_char(i), description);
      end loop;
      commit;
    END;
    /
    -- Retrieve one record while performing code lookups. Takes about 5-6 seconds...pretty slow.
    -- Each additional lookup (times 3 repeating elements in the data) adds about 1 second.
    -- A typical real record has 5 Elements and 20 Subelements, meaning more than 20 seconds to display the record
    -- Note we are accessing a single XML record based on SSN
    -- Note also we are reusing the one test code table multiple times for convenience of this test
    select xmlquery('
      for $r in Root
      return
        <Root>
          <Id>123456789</Id>
          {for $e in $r/Element
           return
             <Element>
               <Subelement1>
                 {$e/Subelement1/Code}
                 <Description>
                   {ora:view("disaac","codes")/ROW[CODE=$e/Subelement1/Code]/DESCRIPTION/text()}
                 </Description>
               </Subelement1>
               <Subelement2>
                 {$e/Subelement2/Code}
                 <Description>
                   {ora:view("disaac","codes")/ROW[CODE=$e/Subelement2/Code]/DESCRIPTION/text()}
                 </Description>
               </Subelement2>
               <Subelement3>
                 {$e/Subelement3/Code}
                 <Description>
                   {ora:view("disaac","codes")/ROW[CODE=$e/Subelement3/Code]/DESCRIPTION/text()}
                 </Description>
               </Subelement3>
             </Element>
          }
        </Root>
    ' passing xmlrec returning content)
    from records
    where ssn = '10000';
    The plan shows the nested loop access that slows things down.
    By contrast, a functionally-similar SQL query on relational data will use a hash join and perform 10x to 100x faster, even for a single record. There seems to be no way for the optimizer to see the regularity in the XML structure and perform a corresponding optimization in joining the code tables. Not sure if registering a schema would help. Using structured storage probably would. But should that be necessary given we’re working with a single record?
    Operation Object
    |SELECT STATEMENT ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    With an xmlindex, the same query above runs in about 1 second, so is about 5x faster (0.2 sec/lookup), which is almost good enough. Is this the answer? Or is there a better way? I’m not sure why the optimizer wants to scan the code tables and index into the (one) XML record, rather than the other way around, but maybe that makes sense if the optimizer wants to use the same general plan as when the WHERE clause constraint is relaxed to multiple records.
    -- Add an xmlindex. Takes about 2.5 minutes
    create index records_record_xml ON records(xmlrec)
    indextype IS xdb.xmlindex;
    Operation Object
    |SELECT STATEMENT ()
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    Am I on the right path, or am I totally using the wrong approach? I thought about using XSLT but was unsure how to reference the code tables.
    I’ve done the best I can constraining the main record to a single row passed to the XMLQUERY. Given Mark’s post (thanks!) should I be joining and constraining the code tables in the SQL WHERE clause too? That’s going to make the query much more complicated, but right now we’re more concerned about performance than complexity.
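    P.S. An untested rewrite I still plan to try, sketched here in case it helps others: shred the record with XMLTABLE in the FROM clause and join CODES relationally, so the optimizer can consider hash joins instead of per-code XPath probes. Element names are taken from the test data above; reassembling the XML output is omitted:
    SELECT x.code1, c1.description AS desc1,
           x.code2, c2.description AS desc2,
           x.code3, c3.description AS desc3
    FROM records r
         CROSS JOIN XMLTABLE('/Root/Element' PASSING r.xmlrec
                      COLUMNS code1 VARCHAR2(4) PATH 'Subelement1/Code',
                              code2 VARCHAR2(4) PATH 'Subelement2/Code',
                              code3 VARCHAR2(4) PATH 'Subelement3/Code') x
         LEFT JOIN codes c1 ON c1.code = x.code1
         LEFT JOIN codes c2 ON c2.code = x.code2
         LEFT JOIN codes c3 ON c3.code = x.code3
    WHERE r.ssn = '10000';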

  • URGENT: timeout problems with BC4J

    Hello Jdev Team,
    We have a serious problem with BC4J. The situation is as follows:
    We use BC4J with JSP pages and run the whole thing on a J2EE container. We have written our own ApplicationPool class and ApplicationModule datatag because the users have to log in using different credentials. The login users are DB users. The application release mode is reserved.
    The application crashes frequently; two errors occur.
    First of all we get a JBO-30003: "The application pool, {AM Name}, failed to checkout an application module instance." after the BC4J container timeout (messages: "BC4J HTTP Container was timed out" and "The binding listener for {AM name} was timed out").
    We haven't found a way to alter the BC4J container timeout. Where do we customize the timeout? We have tried to use the HttpSessionTimeOut variable but it seems to have no effect. I hope you can help us with this one.
    The second problem is that the J2EE container stops functioning after a couple of requests. Even if another browser is started (on the same or a different machine), the container does not respond to any request. We have found a thread on Technet handling this kind of problem, but the solution doesn't work in our case. (The solution on Technet was to put synchronized(session) around each JSP page.)
    Now we run the application under Apache/JServ with the same runtime packages as used in the container, and the problem seems to have disappeared.
    Here follows the code of the ApplicationPool class (it is based on an example posted on Technet):
    package be.cronos.dbwise.jbo;
    import oracle.jbo.common.ampool.ApplicationPoolImpl;
    import oracle.jbo.ApplicationModule;
    import java.util.Properties;
    /**
     * @author Juan Oropeza
     */
    public class SeperateLoginApplicationPool extends ApplicationPoolImpl
    {
        private String vConnectURL;
        private String vUsername;
        private String vPassword;

        public SeperateLoginApplicationPool()
        {
        }

        public void setConnectInfo(String pUsername, String pPassword, String pConnectURL)
        {
            this.vUsername = pUsername;
            this.vPassword = pPassword;
            this.vConnectURL = pConnectURL;
        }

        protected void connect(ApplicationModule appModule)
        {
            if (!appModule.getTransaction().isConnected())
            {
                appModule.getTransaction().connect(vConnectURL, vUsername, vPassword);
            }
            // use optimistic locking as the default for all transactions
            appModule.getTransaction().setLockingMode(oracle.jbo.Transaction.LOCK_OPTIMISTIC);
        }

        /**
         * checkin
         * @param appModule
         */
        public synchronized void checkin(ApplicationModule appModule)
        {
            // release the instance regardless of the release mode;
            // this is necessary since we need a fresh instance each time
            this.releaseInstance(appModule);
            this.releaseInstances();
        }

        public void disconnect(ApplicationModule pAppModule, boolean pRetainState)
        {
            super.disconnect(pAppModule, pRetainState);
        }
    }
    Here is the code of the ApplicationModule datatag:
    package be.cronos.dbwise.datatags;
    import be.cronos.dbwise.jbo.SeperateLoginApplicationPool;
    import javax.servlet.jsp.tagext.TagSupport;
    import javax.servlet.jsp.JspException;
    import java.io.StringWriter;
    import java.io.PrintWriter;
    import oracle.jbo.html.jsp.ConnectionInfo;
    import oracle.jbo.common.ampool.PoolMgr;
    import oracle.jbo.common.ampool.ApplicationPool;
    import oracle.jbo.ApplicationModule;
    import oracle.jbo.html.jsp.JSPApplicationRegistry;
    import java.util.Properties;
    import java.util.Hashtable;
    import java.util.Enumeration;
    /**
     * @author Juan Oropeza
     * @author Ief Cuynen
     * @version 1.1
     */
    public class ApplicationModuleTag extends TagSupport
    {
        String fApplicationName;
        String fConfigName;
        String fUsername;
        String fPassword;
        String fConnectionURL;
        String fIiopUserName;
        String fIiopPassword;
        JSPApplicationRegistry fAppRegistry;

        public ApplicationModuleTag()
        {
        }

        public void setId(String pAppName)
        { this.fApplicationName = pAppName; }

        public void setConfigname(String pValue)
        { this.fConfigName = pValue; }

        /**
         * doEndTag
         * @return int
         * @exception javax.servlet.jsp.JspException
         */
        public int doEndTag() throws JspException
        {
            try
            {
                SeperateLoginApplicationPool pool = null;
                init();
                // Get an application module resource.
                fAppRegistry = JSPApplicationRegistry.getInstance();
                // This step will access the custom pool from the property file.
                fAppRegistry.registerApplicationFromPropertyFile(fApplicationName);
                // Since we don't want to use a property file per AM, we have
                // written our own registerApplicationModule method.
                this.registerApplicationModule();
                // Get an instance of the pool, which already exists.
                pool = (SeperateLoginApplicationPool) PoolMgr.getInstance().getPool(fApplicationName);
                // Get an instance of the application module.
                synchronized (pool)
                {
                    // Set up the connection information.
                    pool.setConnectInfo(fUsername, fPassword, fConnectionURL);
                    // This instance will be used by the rest of the DataWebBeans since it's part of the context.
                    try
                    {
                        // After the BC4J container timeout, this method will raise an exception.
                        ApplicationModule am = fAppRegistry.getAppModuleInstance(fApplicationName, pageContext);
                    }
                    catch (Exception e)
                    {
                        System.out.println("JspRegistry has failed to get application module instance");
                    }
                    setPageContextValues();
                }
            }
            catch (Exception ex)
            {
                StringWriter writer = new StringWriter();
                PrintWriter prn = new PrintWriter(writer);
                ex.printStackTrace(prn);
                prn.flush();
                throw new JspException(writer.toString());
            }
            return SKIP_BODY;
        }

        /** for internal use only */
        private void init()
        {
            fUsername = (String) pageContext.getSession().getValue("username");
            fPassword = (String) pageContext.getSession().getValue("password");
            fConnectionURL = (String) pageContext.getSession().getValue("connectionURL");
        }

        private void setPageContextValues()
        {
            // Place default renderers into the session; these are not exposed via the config file.
            pageContext.getSession().putValue("oracle_ord_im_OrdImageDomain_Renderer", "oracle.ord.html.OrdBuildURL");
            pageContext.getSession().putValue("oracle_ord_im_OrdAudioDomain_Renderer", "oracle.ord.html.OrdBuildURL");
            pageContext.getSession().putValue("oracle_ord_im_OrdVideoDomain_Renderer", "oracle.ord.html.OrdBuildURL");
            pageContext.getSession().putValue("oracle_ord_im_OrdVirDomain_Renderer", "oracle.ord.html.OrdBuildURL");
            pageContext.getSession().putValue("oracle_ord_im_OrdImageDomain_EditRenderer", "oracle.ord.html.FileUploadField");
            pageContext.getSession().putValue("oracle_ord_im_OrdAudioDomain_EditRenderer", "oracle.ord.html.FileUploadField");
            pageContext.getSession().putValue("oracle_ord_im_OrdVideoDomain_EditRenderer", "oracle.ord.html.FileUploadField");
            pageContext.getSession().putValue("oracle_ord_im_OrdVirDomain_EditRenderer", "oracle.ord.html.FileUploadField");
        }

        protected synchronized void registerApplicationModule()
        {
            PoolMgr vPoolMgr = PoolMgr.getInstance();
            try
            {
                if (!vPoolMgr.isPoolCreated(fApplicationName))
                {
                    // Parse the ConfigName.
                    String vConfigPackage = fConfigName.substring(0, fConfigName.lastIndexOf('.'));
                    String vConfigSection = fConfigName.substring(fConfigName.lastIndexOf('.') + 1);
                    // Strip out the AM class.
                    vConfigPackage = vConfigPackage.substring(0, vConfigPackage.lastIndexOf('.'));
                    Properties vProps = new Properties();
                    vProps.put("ConfigName", fConfigName);
                    ApplicationPool vAppPool = vPoolMgr.createPool(fApplicationName, vConfigPackage, vConfigSection, vProps);
                    vAppPool.setUserName(this.fUsername);
                    vAppPool.setPassword(this.fPassword);
                }
            }
            catch (Exception ex)
            {
                ex.printStackTrace();
                throw new RuntimeException(ex.toString());
            }
        }

        /** release() is called after doEndTag() to reset state */
        public void release()
        {
            super.release();
            fApplicationName = null;
            fConfigName = null;
            fUsername = null;
            fPassword = null;
            fConnectionURL = null;
            fIiopUserName = null;
            fIiopPassword = null;
            fAppRegistry = null;
        }
    }
    Another thing is that the JSPApplicationRegistry contains a bug (I think): it doesn't use the HttpSessionTimeOut variable at all (see the following code). This piece of code is a method from the JSPApplicationRegistry.java file, taken from package oracle.jbo.html.jsp:
    static synchronized public void registerApplicationFromPropertyFile(HttpSession session, String sPropFileName)
    {
        if (!mPoolManager.isPoolCreated(sPropFileName))
        {
            registerApplicationFromPropertyFile(sPropFileName);
        }
        if (!PropertyConstants.TRUE.equals((String) session.getValue(SESSION_INITIALIZED)))
        {
            Hashtable settings = getAppSettings(sPropFileName);
            /* The timeout variable is declared here */
            int nTimeOut = 300;
            if (settings != null)
            {
                // see if we have a setting for the session timeout
                String sTimeOut;
                /* get the HttpSessionTimeOut variable */
                if (settings.get("HttpSessionTimeOut") != null)
                {
                    sTimeOut = (String) settings.get("HttpSessionTimeOut");
                    if (sTimeOut != null)
                    {
                        /* Put the value in the variable... and that's the last thing it
                           does. nTimeOut isn't used anywhere else in the class. */
                        nTimeOut = Integer.parseInt(sTimeOut);
                    }
                }
                if (settings.get("ImageBase") != null)
                {
                    session.putValue("ImageBase", settings.get("ImageBase"));
                }
                else
                {
                    settings.put("ImageBase", "/webapp/images");
                    session.putValue("ImageBase", "/webapp/images");
                }
                if (settings.get("CSSURL") != null)
                {
                    session.putValue("CSSURL", settings.get("CSSURL"));
                }
                else
                {
                    settings.put("CSSURL", "/webapp/css/oracle.css");
                    session.putValue("CSSURL", "/webapp/css/oracle.css");
                }
            }
            // place default renderers into the session; these are not exposed via the config file
            session.putValue("oracle_ord_im_OrdImageDomain_Renderer", "oracle.ord.html.OrdBuildURL");
            session.putValue("oracle_ord_im_OrdAudioDomain_Renderer", "oracle.ord.html.OrdBuildURL");
            session.putValue("oracle_ord_im_OrdVideoDomain_Renderer", "oracle.ord.html.OrdBuildURL");
            session.putValue("oracle_ord_im_OrdVirDomain_Renderer", "oracle.ord.html.OrdBuildURL");
            session.putValue("oracle_ord_im_OrdImageDomain_EditRenderer", "oracle.ord.html.FileUploadField");
            session.putValue("oracle_ord_im_OrdAudioDomain_EditRenderer", "oracle.ord.html.FileUploadField");
            session.putValue("oracle_ord_im_OrdVideoDomain_EditRenderer", "oracle.ord.html.FileUploadField");
            session.putValue("oracle_ord_im_OrdVirDomain_EditRenderer", "oracle.ord.html.FileUploadField");
            session.putValue(SESSION_INITIALIZED, PropertyConstants.TRUE);
        }
    }
    Am I mistaken or is it a bug?
    Thank you for your fast response.
    Greetings,
    Ief Cuynen

    > First of all we get an JBO-30003: "The application pool, {AM Name}, failed to checkout an application module instance."
    The JBO-30003 exception is thrown whenever the pool cannot properly create or recycle an application module. Please see the exception details (scan the exception stack to find them) for more information regarding the "root" cause of the exception.
    > We haven't found a way to alter the BC4J container timeout. Where do we customize the timeout?
    The BC4J container is timed out when the HttpSession is timed out. The session timeout is configurable via the web.xml file for the J2EE application. Your servlet container may also include another mechanism for setting a session timeout.
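    For example, in web.xml (the value is in minutes):
    <!-- web.xml: HttpSession timeout for the web application -->
    <session-config>
        <session-timeout>30</session-timeout>
    </session-config>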
    > Second problem is that the J2EE Container stops functioning after a couple of requests. Even if another browser is started (on the same or a different machine) the Container does not respond to any request. We have found a thread on Technet handling this kind of problem but the solution doesn't work in our case.
    The issue sounds like a deadlock. Please use kill -3 (Solaris) or Ctrl-Break (NT) at the Java server console to print the thread stack trace to stdout. This will contain more information regarding which threads are blocked and where. If you would like, you can send the stack trace to me via mail and I can take a look at it.
    The solution of synchronizing your pages with the HttpSession context is required only if you are using the BC4J datatags with HTML frames (i.e. have multi-threaded application module access for a given session). Please note that this solution may have a performance impact and should not be implemented unless absolutely necessary.
    > Another thing is that the JSPApplicationRegistry contains a bug (I think). It doesn't use the HttpSessionTimeOut variable at all (see following code).
    This parameter was deprecated after 3.1. It looks like the code which used the parameter may have been removed prematurely. Sorry for the confusion. Please use the J2EE-compliant mechanisms mentioned above to configure the session timeout in 3.2.
    Finally, please note that JDeveloper9i includes new integrated support for the often-requested feature of different DB users sharing the same application pool! Please stay tuned.

  • Problem with item and/or data during page processing (PL/SQL)

    Greetings!
    On my page I have a custom report (from 2 tables) and a small form field that adds and edits data in the report. After generating the form with the wizard, I added an extra item to store the ID from one of the tables in the report data.
    Now, on submit, a calculation should take place that updates data according to user input, via a procedure in the page processes:
    declare
      a number;
    begin
      case :PLATZ
        when 1 then a := 100;
        when 2 then a := 50;
        else a := 25;
      end case;
      update TBL_MITGLIEDER
         set TURNIERPUNKTE = TURNIERPUNKTE + a
       where ID_MITGL = :P14_ID_MITGL;
    end;
    :PLATZ is user selected (1, 2, 3); :P14_ID_MITGL stores the reference to TBL_MITGLIEDER (and shows the change when I select another record).
    As I understand it, that process should also run when I submit a change, but nothing happens then.
    But when I try to save a new record (which worked without any problems before adding that process), I get this error message:
    ORA-06550: line 1, column 64: PL/SQL: ORA-00957: duplicate column name ORA-06550: line 1, column 7: PL/SQL: SQL Statement ignored
    Error: Unable to process row of table TBL_TURNIERSIEGER.
    Then, when I go back into the app builder and try to run the page again, I get this message:
    ORA-01403: no data found
    Error: Unable to fetch row.
    I am not sure if you guys have all the information you need to know what's going on. Maybe this has to do with the session ID and the way items are updated. I hope you can help me.
    Thanks, best regards,
    tobi


  • Problem with material and document date in sales order

    Hi All,
    I am working in ECC 6.0 and am facing a problem with two fields when I enter a sales order (transaction VA01):
    1. A material comes in automatically as a line item.
    2. The document date also comes up as 27.03.2008 instead of today's date in each order I create.
    I think it's not related to configuration; I think it's related to some modification through user exits or enhancement points. Could someone suggest where to check and how to solve this?
    Your suggestions are appreciated.

    Hi Sharad,
    Check the include MV45AFZZ in SE38. It is a user exit for basic validations on sales; we may be able to find some code there (if the screen data is predefined by code, as mentioned).
    Hope it helps
    Regards
    Byju

  • Problem with Import and Export Data Wizard

    I downloaded and installed SQL Server Express 2008 R2 today because I want to explore how Access interacts with SQL Server (using my home computer). I'm using Access 2010 (under Windows 7), so the 2008 version of SQL Server Express seemed to be the version to use.
    After a couple of false starts, installation appeared to go okay. After the installation, my Start menu listed Microsoft SQL Server 2008 and Microsoft SQL Server 2008 R2. The latter listed Import and Export Data (64-bit). When I clicked that, the first Import and Export Data Wizard page was displayed. I wasn't ready at that time to explore the wizard, so I closed it. An hour or so later I again attempted to open the Import and Export Data Wizard. This time the wizard didn't open. Instead, this error message was displayed: "The SSIS Runtime object could not be created. Verify that DTS.dll is available and registered."
    I found DTS.dll on my computer at C:\Program Files\Microsoft SQL Server\100\DTS\Binn, so the file is available, but I don't know whether it is registered.
    How can I correct this problem?

    First, can you please post all the log file errors?
    I can't really give you a solution or specific recommendation, since I have not seen this error myself, but at your own risk you can try the following:
    1. Try registering 'dts.dll' using regsvr32.exe, though this error may indicate a bigger problem with the setup. If you are running 64-bit SQL Server, try running this at the command prompt: %windir%\syswow64\regsvr32 "%ProgramFiles(x86)%\Microsoft SQL Server\90\dts\binn\dts.dll"
    2. Reinstall from scratch (in this case, make sure that you uninstall everything first).

  • Urgent: Performance problem with where clause using IN and an OR condition

    The select statement is:
    select fl.feed_line_id
      from ap_expense_feed_lines_all fl
     where ((:1 is not null
             and fl.feed_line_id in (select distinct r2.object_id
                                       from xxdl_pcard_wf_routing_lists r2,
                                            per_people_f hr2
                                      where upper(hr2.full_name) like upper(:1 || '%')
                                        and hr2.person_id = r2.person_id
                                        and r2.fyi_list is null
                                        and r2.sequence_number <> 0))
            or (:1 is null))
    If I modify the statement to remove the "or (:1 is null))" part at the bottom of the where clause, it returns in .16 seconds. If I modify the statement to only contain the "(:1 is null))" part of the where clause, it returns in .02 seconds. With the whole statement above, it returns in 477 seconds. Anyone have any suggestions?
    Explain plan for the whole statement is:
    (1) SELECT STATEMENT CHOOSE
    Est. Rows: 10,960 Cost: 212
    FILTER
    (2) TABLE ACCESS FULL AP.AP_EXPENSE_FEED_LINES_ALL [Analyzed]
    (2) Blocks: 8,610 Est. Rows: 10,960 of 209,260 Cost: 212
    Tablespace: APD
    (6) TABLE ACCESS BY INDEX ROWID HR.PER_ALL_PEOPLE_F [Analyzed]
    (6) Blocks: 4,580 Est. Rows: 1 of 85,500 Cost: 2
    Tablespace: HRD
    (5) NESTED LOOPS
    Est. Rows: 1 Cost: 4
    (3) TABLE ACCESS FULL XXDL.XXDL_PCARD_WF_ROUTING_LISTS [Analyzed]
    (3) Blocks: 19 Est. Rows: 1 of 1,303 Cost: 2
    Tablespace: XXDLD
    (4) UNIQUE INDEX RANGE SCAN HR.PER_PEOPLE_F_PK [Analyzed]
    Est. Rows: 1 Cost: 1
    Thanks in advance,
    Peter

    Thanks for the reply, but I have already checked what you are suggesting and I am pretty sure those are not causing the problem. The hr2.full_name column has an upper index and the (4) line of the explain plan shows that index being used. In addition, that part of the query executes on its own quickly.
    Because the sql is not displayed in an indented format on this page it is a little hard to understand the structure so I am going to restate what is happening.
    My sql is:
    select a_column
    from a_table
    where ((:1 is not null) and a_column in (sub-select statement)
    or
    (:1 is null))
    The :1 bind variable is set to a varchar2 entered on the screen of an application.
    If I execute either part of the sql without the OR condition, performance is good.
    If the :1 bind variable is null with the whole sql statement (so all rows or a_table are returned), performance is still good.
    If the :1 bind variable is a not-null value with the whole sql statement, performance stinks.
    As an example:
    where (('wa' is not null) and a_column in (sub-select statement))   -- fast
    where (('wa' is null))                                              -- fast
    where (('' is not null) and a_column in (sub-select statement)
           or ('' is null))                                             -- fast
    where (('wa' is not null) and a_column in (sub-select statement)
           or ('wa' is null))                                           -- slow
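    P.S. A workaround worth trying (untested sketch based on the original query): split the OR into two UNION ALL branches so the optimizer can plan each case separately; for a given bind value only one branch can return rows:
    select fl.feed_line_id
      from ap_expense_feed_lines_all fl
     where :1 is null
    union all
    select fl.feed_line_id
      from ap_expense_feed_lines_all fl
     where :1 is not null
       and fl.feed_line_id in (select r2.object_id
                                 from xxdl_pcard_wf_routing_lists r2,
                                      per_people_f hr2
                                where upper(hr2.full_name) like upper(:1 || '%')
                                  and hr2.person_id = r2.person_id
                                  and r2.fyi_list is null
                                  and r2.sequence_number <> 0);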

  • Performance problem with sproc and out parameter ref cursor

    Hi
    I have a sproc with a Ref Cursor as an OUT parameter. Looping over the ResultSet is extremely slow (the fetch goes record by record), so I have added setPrefetchRowCount(100) and setPrefetchMemorySize(6000).
    Pseudo code below:
    string sqlStmt = "BEGIN get_tick_data( :v1 , :v2 ); END;";
    Statement* s = connection->createStatement(sqlStmt);
    s->setString(1, i1);
    // cursor ( f1, f2, f3, f4, i1 ): four float columns and one integer column,
    // assuming about 40 bytes per record
    s->setPrefetchRowCount(100);
    s->setPrefetchMemorySize(6000);
    s->registerOutParam(2, OCCICURSOR);
    s->execute();
    ResultSet* rs = s->getCursor(2);
    while (rs->next()) {
        // do, and do very slowly!
    }

    Hi,
    I have the same problem. It seems, when retrieving a cursor, that setPrefetchRowCount is not taken into account by OCCI. A plain SQL statement like "SELECT STR1, STR2, STR3 FROM TABLE1" works fine, but if your SQL statement is a call to a stored procedure returning a cursor, each row fetched needs a round trip.
    To avoid this problem you need to use the setDataBuffer method of the ResultSet object for each column of your cursor. It's easy to use with the INT and STRING types, a little bit more complex with the DATE type. But until now, I have not been able to do the same thing with the REF type.
    Below is a sample with STRING type (it assumes the cursor returns only one column of STRING type):
    try
    {
      l_Statement = m_Connection->createStatement("BEGIN :1 := PACKAGE1.GetCursor1(:2); END;");
      l_Statement->registerOutParam(1, oracle::occi::OCCINUMBER, sizeof(l_CodeErreur));
      l_Statement->registerOutParam(2, oracle::occi::OCCICURSOR);
      l_Statement->executeQuery();
      l_CodeErreur = l_Statement->getNumber(1);
      if ((int) l_CodeErreur == 0)
      {
        char l_ArrayName[5][256];
        ub2  l_ArrayNameSize[5];
        l_ResultSet = l_Statement->getCursor(2);
        // bind a 5-row array buffer to column 1, then fetch 5 rows per round trip
        l_ResultSet->setDataBuffer(1, l_ArrayName, OCCI_SQLT_STR, sizeof(l_ArrayName[0]), l_ArrayNameSize, NULL, NULL);
        while (l_ResultSet->next(5))
        {
          for (int i = 0; i < l_ResultSet->getNumArrayRows(); i++)
          {
            l_Name = CString(l_ArrayName[i]);
          }
        }
        l_Statement->closeResultSet(l_ResultSet);
      }
      m_Connection->terminateStatement(l_Statement);
    }
    catch (SQLException &p_SQLException)
    {
      // handle the error
    }
    I hope that sample helps you.
    Regards

  • Performance problems with JDBC and mysql

    Hi,
    I have here a "one table" database which has 1,000,000 data rows. The problem is that deleting a specific entry (i.e. delete from mytable where da_value='foo1') takes between 10 and 60 seconds.
    Can anyone help me out of this performance trap?
    LoCal

    Is the table indexed, and are the keys used?
    I've always preferred Postgres! :)
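    If not, the usual first step (table and column names taken from the post) is an index on da_value, so each DELETE no longer scans all 1,000,000 rows:
    -- speeds up: delete from mytable where da_value = 'foo1'
    CREATE INDEX idx_da_value ON mytable (da_value);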

  • Having problem with adding and reading dates to/from database !!!

    Hi
    I am new to J2ME and am trying to code a simple piece of software.
    My problem is with dates.
    I have a DateField on my menu and the user will choose the date from there. By default, the DateField shows today's date. But when I try to write that date to the database using RMS, the date value comes out as java.util.Date@acfdb0fe.
    As I read in tutorials, this is a common problem with the Date class, so I tried to use the Calendar class. But with the Calendar class I cannot let the user choose the date from the screen like with a DateField, and DateField doesn't work with Calendar.
    Later, I will use that date for sorting records.
    Summary: I need sample code that reads a date from the screen (preferably with a DateField), writes it to a RecordStore, and then reads it from the RecordStore and writes it back to the screen.
    I have been searching the internet for sample code for days.
    Please help me
    Thanks

    Hi,
    The best I can suggest: instead of storing the date as "19 Jan 2004" or something like that, store the date in milliseconds.
    DateField df = new DateField("Date", DateField.DATE); // DateField needs a label and a mode
    Date d = df.getDate();
    long ms = d.getTime();
    Store the value of ms in RMS. This is the commonly used way to store a date in RMS for J2ME.
    You can get the date back using:
    Date d = new Date(ms);
    DateField df = new DateField("Date", DateField.DATE);
    df.setDate(d);
    Prabhu.
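    An untested sketch of the full round trip, in case it helps (the record store name is made up and error handling is kept short):
    import java.util.Date;
    import javax.microedition.rms.RecordStore;
    import javax.microedition.rms.RecordStoreException;
    // Stores a date as an 8-byte big-endian long in RMS and reads it back.
    public class DateStore {
        public static int saveDate(Date d) throws RecordStoreException {
            long ms = d.getTime();
            byte[] buf = new byte[8];
            for (int i = 0; i < 8; i++) {
                buf[i] = (byte) (ms >>> (8 * (7 - i))); // big-endian packing
            }
            RecordStore rs = RecordStore.openRecordStore("dates", true);
            try {
                return rs.addRecord(buf, 0, buf.length); // keep this record id for reading back
            } finally {
                rs.closeRecordStore();
            }
        }
        public static Date loadDate(int recordId) throws RecordStoreException {
            RecordStore rs = RecordStore.openRecordStore("dates", true);
            try {
                byte[] buf = rs.getRecord(recordId);
                long ms = 0;
                for (int i = 0; i < 8; i++) {
                    ms = (ms << 8) | (buf[i] & 0xFF); // unpack big-endian
                }
                return new Date(ms);
            } finally {
                rs.closeRecordStore();
            }
        }
    }
    Hook it to the screen with DateStore.saveDate(df.getDate()) and df.setDate(DateStore.loadDate(id)).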

  • Problems with N80 and usb data transfer

    Data transfer started off working fine (I think), but now the Pop-Port connection seems really flaky. It keeps dropping the connection and causing transfers to fail on Mac and PC. I hope it's the cable, but is anyone else having this problem?

    I'm okay with Bluetooth at the moment - I use iSync on the Mac, which seems to work okay. File transfer over Bluetooth also seems to work okay - just tested with a couple of MP3s. However, I need to use USB for large quantities of data (music, photos, etc.) and it doesn't seem to work. I have tried it on both a Mac and a PC with the same issue each time.
    The symptoms I have are exactly the same as you sysinit - if I touch the device then it seems to disconnect and reconnect rapidly. Unfortunately, even if I leave the device completely untouched and try to transfer 100MB of data, invariably it only gets halfway before conking out!
    Most of the other software issues I can deal with (I had a Sony Ericsson P900 before this which had its fair share of problems!), but this to me is a showstopper. If there is no fix for the problem then the handset is going back as the problem is so easy to reproduce.

  • Problem with BC4J and PostgreSQL 7.2

    I successfully configured a connection to a PostgreSQL DB.
    Then I created a simple table in the DB and created a BC4J app module.
    In the app module properties I changed jbo.sql92.JdbcDriverClass = org.postgresql.Driver
    I ran the test app module in the BC4J browser. I can see my row in the table, but when I try to put something into the table I get:
    JBO-27123: SQL error during call statement preparation. Statement: INSERT INTO CITIES(name,last) VALUES (?,?)
    Callable Statements are not supported at this time.
    The same thing happens on delete or update. Why?
    When I put some data into the table from a connection/SQL worksheet in JDeveloper, everything is OK.
    I use JDeveloper 9.0.2.829.

    REPOST

  • Problem with tooltips and spry data using startLoadInterval

    I am trying to use tooltips with a dynamic table using the spry data example at the bottom of the page:
    http://labs.adobe.com/technologies/spry/articles/tooltip_overview/index.html
    I have it working, except that it appears to have a memory leak, as Firefox's memory usage continues to grow.
    I did notice there is a 'new' inside the onPostUpdate function that is being called on every table update.
    So after looking at the tooltip code I modified the function to:
    <script type="text/javascript">
    var tt1;
    var observer = {
      onPostUpdate: function () {
        if (tt1 == null) {
          tt1 = new Spry.Widget.Tooltip('tooltip', '.trigger');
        } else {
          tt1.destroy();
          tt1.destroy();
          if (tt1.checkDestroyed()) alert("destroyed");
          else alert("not destroyed");
          delete tt1;
          tt1 = new Spry.Widget.Tooltip('tooltip', '.trigger');
        }
      }
    };
    Spry.Data.Region.addObserver('mainRegion', observer);
    </script>
    The tt1.checkDestroyed() always returns false and my memory usage continues to grow.
    Any help?

    Thanks for the reply. I found a way to get it to work by changing my observer function to the following:
    else {
      tt1.destroy();
      tt1.init('.trigger', 'tooltip', {});
      tt1.attachBehaviors();
      Spry.Widget.Tooltip.loadQueue.push(tt1);
    }
    Probably not very clean, but it doesn't grow memory and works correctly.
    I also put it back to the old way, as in the example with a startLoadInterval(1500), and I was over 500MB after ~30 minutes, whereas the code above ran with a startLoadInterval(1500) overnight and sits at about ~120MB.
    I am using a double repeatchildren loop to build a table with tooltips for each cell, which may affect it.
    Bottom line is I have a way to make it work, so I am moving forward. If you still want to see it, I might be able to get it up on the internet and give you a link.
    Thanks,
    Greg Wirth

  • Performance problems with combobox and datasource

    We have a performance problem when we connect a DataTable object (or something like it) to the DataSource property of a ComboBox. Below you'll find the source code. The SQL statement reads about 40,000 rows, and the result (all 40,000) should be listed in the combobox. It takes about 30 seconds before this process finishes. Any suggestions?
    Dim ds As New DataSet
    strSQL = "Select * from am.city"
    conn = New Oracle.DataAccess.Client.OracleConnection(Configuration.ConfigurationSettings.AppSettings("conORA"))
    comm = New Oracle.DataAccess.Client.OracleCommand(strSQL)
    da = New Oracle.DataAccess.Client.OracleDataAdapter(strSQL, conn)
    conn.Open()
    da.Fill(ds)
    conn.Close()
    Dim dt As New DataTable
    dt = ds.Tables(0)
    ComboBox1.DataSource = dt
    ComboBox1.ValueMember = dt.Columns("id").ColumnName
    ComboBox1.DisplayMember = dt.Columns("city").ColumnName

    But how long does it take to fill the DataTable?
    I can fill a 40000 row datatable in under 4 seconds.
    DataBinding a combo box to that many rows is pretty expensive, and not normally recommended.
    David
    Dim strConnection As String = "Data Source=oracle;User ID=scott;Password=tiger;"
    Dim conn As OracleConnection = New OracleConnection(strConnection)
    conn.Open()
    Dim cmd As New OracleCommand("select * from (select * from all_objects union all select * from all_objects) where rownum <= 40000", conn)
    Dim ds As New DataSet()
    Dim da As New OracleDataAdapter(cmd)
    Dim begin As Date = Now
    da.Fill(ds)
    Console.WriteLine(ds.Tables(0).Rows.Count & " rows loaded in " & Now.Subtract(begin).TotalSeconds & " seconds")
    outputs
    40000 rows loaded in 3.734375 seconds
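    If the Fill itself is fast for you too, the remaining time is almost all data binding. Two untested things worth trying: select only the two columns you actually bind instead of *, and suspend combobox redraws while binding:
    Dim strSQL As String = "select id, city from am.city" ' only the bound columns
    ComboBox1.BeginUpdate() ' stop repainting while the 40,000 items are added
    ComboBox1.DataSource = dt
    ComboBox1.ValueMember = "id"
    ComboBox1.DisplayMember = "city"
    ComboBox1.EndUpdate()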

  • Performance Problems with "For all Entries" and a big internal table

    We have big performance problems with the following statement:
    SELECT * FROM zeedmt_zmon INTO TABLE gt_zmon_help
      FOR ALL ENTRIES IN gt_zmon_help
        WHERE
        status = 'IAI200' AND
        logdat IN gs_dat AND
        ztrack = gt_zmon_help-ztrack.
    In the internal table gt_zmon_help are over 1000000 entries.
    Anyone an Idea how to improve the Performance?
    Thank you!

    >
    Matthias Weisensel wrote:
    > We have big Performance Problems with following Statement:
    >
    >  
    SELECT * FROM zeedmt_zmon INTO TABLE gt_zmon_help
    >   FOR ALL ENTRIES IN gt_zmon_help
    >     WHERE
    >     status = 'IAI200' AND
    >     logdat IN gs_dat AND
    >     ztrack = gt_zmon_help-ztrack.
    >
    > In the internal table gt_zmon_help are over 1000000 entries.
    > Anyone an Idea how to improve the Performance?
    >
    > Thank you!
    You can't expect miracles. With over a million entries in your itab, any select is going to take a bit of time. Do you really need all these records in the itab? How many records is the select bringing back? I'm assuming that you have, and are using, indexes on your ZEEDMT_ZMON table.
    In this situation, I'd first of all try to think of another way of running the query and restricting the amount of data, but if this were not possible I'd just run it in the background and accept that it is going to take a long time.
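    Two standard FOR ALL ENTRIES precautions are also worth adding, sketched below. Note that your SELECT reads INTO the same table that drives the FOR ALL ENTRIES, which overwrites the driver entries, so the sketch assumes a separate result table gt_zmon_result. The empty-itab case is the other classic trap: FOR ALL ENTRIES with an empty driver table selects every row.
    SORT gt_zmon_help BY ztrack.
    DELETE ADJACENT DUPLICATES FROM gt_zmon_help COMPARING ztrack. " shrink the driver table
    IF gt_zmon_help IS NOT INITIAL. " an empty driver table would select ALL rows
      SELECT * FROM zeedmt_zmon INTO TABLE gt_zmon_result
        FOR ALL ENTRIES IN gt_zmon_help
        WHERE status = 'IAI200'
          AND logdat IN gs_dat
          AND ztrack = gt_zmon_help-ztrack.
    ENDIF.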
