Unable to get stable set of records

I am getting an error in the statement below. Can someone help me find out what is wrong here? I need immediate help on this...
ORA-30926: unable to get a stable set of rows in the source tables
ORA-Error in executing meeting_occurrence table
     EXECUTE IMMEDIATE 'MERGE INTO meeting_occurrence m1
     USING (SELECT distinct     m.ma_start_date , -- start date
          m.ma_end_date , -- end date
          m.l_no , -- location no
          m.ma_no , -- meeting number
          m.dtc_no , -- day time code id
          m.dtc_time , -- dtc_time
          m.lob_no , -- line of business id
          m.msc_status , -- meeting occurance status id
          s.user_id -- leader id
     FROM '|| v_schema_name|| ' meetings_assigned m ,
     (SELECT u.user_id,sm.e_no , sm.ma_no, sm.l_no
     FROM '|| v_schema_name||' schedule_meeting sm,
     '|| v_schema_name||' employee e ,
     users u
     WHERE e.e_no = sm.e_no and e.e_no = u.employee_number and sm.FJC_NO =''L'') s
     WHERE m.l_no = s.l_no (+)
     AND m.ma_no = s.ma_no (+)
     AND EXISTS ( SELECT 1 FROM day_time_code dtc
     WHERE m.dtc_no =dtc.day_time_code_id ) ) m2
     ON (m1.meeting_number = m2.ma_no
     AND m1.location_id = m2.l_no)
     WHEN MATCHED THEN
     UPDATE
          SET m1.start_date = m2.ma_start_date ,
          m1.end_date = m2.ma_end_date ,
          -- m1.leader_employee_id = m2.user_id ,
          m1.day_time_code_id = m2.dtc_no ,
          m1.dtc_time = m2.dtc_time ,
          m1.line_of_business_id = m2.lob_no ,
          m1.meeting_occurrence_status_id = m2.msc_status ,
          m1.site_id = :v_site_id ,
          m1.last_upd_date = :v_sysdate ,
          m1.last_upd_by = :v_last_upd_by
     WHERE m1.meeting_number = m2.ma_no
     AND m1.location_id = m2.l_no
     WHEN NOT MATCHED THEN
     INSERT (
          meeting_occurrence_id ,
          start_date ,
          end_date ,
          location_id ,
          meeting_number ,
          day_time_code_id ,
          dtc_time ,
          line_of_business_id ,
          meeting_occurrence_status_id ,
          leader_employee_id ,
          site_id ,
          created_date ,
          created_by ,
          last_upd_date ,
          last_upd_by )
     VALUES
     ( meeting_occurrence_id_seq.nextval ,
          m2.ma_start_date , -- start date
          m2.ma_end_date , -- end date
          m2.l_no , -- location no
          m2.ma_no , -- meeting number
          m2.dtc_no , -- day time code id
          m2.dtc_time , -- dtc_time
          m2.lob_no , -- line of business id
          m2.msc_status , -- meeting occurance status id
          m2.user_id , -- leader employee id
          :v_site_id , -- site id
          :v_sysdate ,
          :v_created_by ,
          :v_sysdate ,
          :v_created_by
     )' USING v_site_id ,v_sysdate, v_last_upd_by,v_site_id,v_sysdate,v_created_by,v_sysdate,v_created_by;

No, there can be multiple records from M2; there is a one-to-many relation from M1 to M2.
If duplicate l_no and ma_no values exist in M2, it should update the same row in M1, since the combination of these columns is unique there.
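ORA-30926 means the USING query returns more than one source row for the same target row, so the MERGE cannot decide which of the duplicate (l_no, ma_no) rows in M2 should win. Since M1-to-M2 is one-to-many here, the usual fix is to collapse M2 to exactly one row per key before merging. Below is a minimal sketch of that pattern, with the schema prefixes dropped and the update/insert lists trimmed for brevity; the ORDER BY used to pick the surviving row is an assumption, so substitute whatever tie-break rule fits the data:

     MERGE INTO meeting_occurrence m1
     USING (SELECT ma_start_date, ma_end_date, l_no, ma_no, dtc_no,
                   dtc_time, lob_no, msc_status, user_id
            FROM (SELECT m.ma_start_date, m.ma_end_date, m.l_no, m.ma_no,
                         m.dtc_no, m.dtc_time, m.lob_no, m.msc_status, s.user_id,
                         -- keep exactly one row per (l_no, ma_no); the ORDER BY is an assumed tie-break
                         ROW_NUMBER() OVER (PARTITION BY m.l_no, m.ma_no
                                            ORDER BY m.ma_start_date DESC) AS rn
                  FROM meetings_assigned m
                  LEFT JOIN (SELECT u.user_id, sm.ma_no, sm.l_no
                             FROM schedule_meeting sm
                             JOIN employee e ON e.e_no = sm.e_no
                             JOIN users u ON u.employee_number = e.e_no
                             WHERE sm.fjc_no = 'L') s
                    ON m.l_no = s.l_no AND m.ma_no = s.ma_no
                  WHERE EXISTS (SELECT 1 FROM day_time_code dtc
                                WHERE dtc.day_time_code_id = m.dtc_no))
            WHERE rn = 1) m2
     ON (m1.meeting_number = m2.ma_no AND m1.location_id = m2.l_no)
     WHEN MATCHED THEN UPDATE
          SET m1.start_date = m2.ma_start_date,
              m1.end_date   = m2.ma_end_date
     WHEN NOT MATCHED THEN INSERT
          (meeting_occurrence_id, start_date, end_date, location_id, meeting_number)
          VALUES (meeting_occurrence_id_seq.NEXTVAL, m2.ma_start_date,
                  m2.ma_end_date, m2.l_no, m2.ma_no);

With rn = 1 each (l_no, ma_no) pair reaches the MERGE exactly once, which is precisely the "stable set of rows" Oracle is asking for.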

Similar Messages

  • Getting an error -unable to get stable set..i cant identify why??

    MERGE INTO aaa_interim ai
    USING (SELECT *
    FROM aaa_staging
    WHERE record_type = 3 or (record_type=2 and terminate_cause_id=1)) asi
    ON (ai.vendor_record_id = asi.vendor_record_id) ---remove the field for session_id
    WHEN MATCHED THEN
    UPDATE
    SET duration_seconds = asi.duration_seconds,
    download_bytes = asi.download_bytes,
    upload_bytes = asi.upload_bytes
    WHEN NOT MATCHED THEN
    INSERT (vendor_record_id ,
    process_time_utc,
    processed_flag,
    vendor_record_creation_time,
    vendor_session_id,
    -- nas_ip_address,
    nas_identifier,
    nas_port,
    user_id,
    user_realm,
    mac_address,
    start_time_utc,
    end_time_utc,-----will put null if its an interim record
    duration_seconds,
    utc_offset_seconds,
    download_bytes,
    upload_bytes,
    terminate_cause_id, ---session_cont,
    locationcategory,
    locationid,
    bsid,
    -- msid,
    -- MIN,
    carrier_id,
    pcf_id,
    aaa_ip_orig,
    aaa_ip_dest,
    ispeak)
    VALUES (asi.vendor_record_id,
    asi.process_time_utc,
    asi.processed_flag,
    asi.vendor_record_creation_time,
    asi.vendor_session_id,
    --asi.nas_ip_address,
    asi.nas_identifier,
    asi.nas_port,
    asi.user_id,
    asi.user_realm,
    asi.mac_address,
    asi.start_time_utc,
    asi.end_time_utc,
    asi.duration_seconds, -------for a stop record and session_cont=1
    asi.utc_offset_seconds,
    asi.download_bytes,
    asi.upload_bytes,
    asi.terminate_cause_id,
    asi.locationcategory,
    asi.locationid,
    asi.bsid,
    -- asi.msid,
    -- asi.MIN,
    asi.carrier_id,
    asi.pcf_id,
    asi.aaa_ip_orig,
    asi.aaa_ip_dest,
    asi.ispeak);

    I checked it;
    the driving query is proper.
    I am just supposed to update wherever there is a matching vendor_record_id.
    The problem is that it is not merging into the table wherever there is a matching vendor_record_id;
    it is inserting instead. Strange?
    When one record has already been inserted with a given vendor_record_id,
    it inserts another record again instead of merging.
    Still wondering how?
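    One likely explanation: MERGE decides MATCHED against the target as it was at the start of the statement, so if the USING query returns two rows with the same vendor_record_id that is not yet in aaa_interim, both rows take the NOT MATCHED branch and both are inserted. A quick duplicate check on the source, reusing the filter from the statement above:

    SELECT vendor_record_id, COUNT(*) AS dup_count
    FROM aaa_staging
    WHERE record_type = 3
       OR (record_type = 2 AND terminate_cause_id = 1)
    GROUP BY vendor_record_id
    HAVING COUNT(*) > 1;

    Any vendor_record_id this returns will be inserted twice by a single MERGE run, which matches the behaviour described.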

  • Unable to get account set right?

    I am trying to change my email address and think I'm following the directions, but it keeps saying I don't have the right phone format. It also wants an office number (I refuse to give that out), and I am unable to enter my state. It is a complete mess. I would like some help.
    Thanks


  • Unable to delete a set of records from PSA

    Hello All,
    I am trying to delete a PSA request (Init with data transfer) which had crores of records. But in my PSA, I can still see about 16000 records from the earlier deleted request.
    Please advise, as my next Init is not bringing in any records. The issue is very critical...
    Regards
    Sneha

    Hi,
    Check for any other historical Init request available in the PSA and delete it, or use the program RSAR_PSA_CLEANUP_DIRECTORY to clean up the PSA completely. Next, check in the InfoPackage menu, Scheduler -> Initialization for Source System, whether any successful initialization request is available. If available, delete the request by selecting "delete from all systems".
    Now, you can again use the Init Infopackage to initialize the delta.
    Regards,
    Geeta

  • How to fetch 2 set of records in MII from SQL procedure

    Hi Experts,
    I am invoking a SQL procedure from MII which returns 2 sets of records, but in MII I am able to get only the first set. Is there any configuration required on the MII side or the SQL side to get both sets of records in MII?
    Here is the SQL Query Structure
    CREATE PROCEDURE Sample_Proc
      @Param1 VARCHAR(10),
      @Param2 VARCHAR(10),
      @Param3 VARCHAR(20) OUT
    AS
    BEGIN
      SET NOCOUNT ON;
      -- selection statements
    END
    Executing the SP in MII:
    DECLARE @Param3 VARCHAR(20);
    EXEC Sample_Proc
      @Param1 = 'name',
      @Param2 = 'Id',
      @Param3 = @Param3 OUTPUT;
    SELECT @Param3;
    Our SP returns values (say Recordset1) based on input parameters 1 and 2, along with the Parameter3 value (say Recordset2), in MS SQL Server, but MII returns only the first values (Recordset1). How do I fetch the Recordset2 values in MII?
    I hope MII can return 2 sets of records (rowsets) after executing the procedure.
    MII version -> 12.2.3 Build(182)
    Thanks & Regards,
    Rajasekhar Kantepalli

    Hi Swaroop,
    With MII 14.0 SP5 Patch 11, in a transaction, I get the following XML output for a query that executes an SP (returning multiple result sets):
    And, results in this format can surely be used for further processing in an MII transaction.
    Thanks Rajasekhar, got to know about this because of your query.
    regards,
    Manisha

  • MERGE error : unable to get a stable set of rows in the source tables

    Hi,
    For an update, the following MERGE statement throws the error "unable to get a stable set of rows in the source tables":
    MERGE INTO table2t INT
    USING (SELECT DISTINCT NULL bdl_inst_id,.......
    FROM table1 ftp
    WHERE ftp.gld_business_date = g_business_date
    AND ftp.underlying_instrument_id IS NOT NULL) ui
    ON (   (INT.inst_id = ui.inst_id
            AND g_business_date BETWEEN INT.valid_from_date AND INT.valid_to_date)
        OR (INT.ric = ui.ric
            AND g_business_date BETWEEN INT.valid_from_date AND INT.valid_to_date)
        OR (INT.isin = ui.isin
            AND g_business_date BETWEEN INT.valid_from_date AND INT.valid_to_date)
        OR (INT.sedol = ui.sedol
            AND g_business_date BETWEEN INT.valid_from_date AND INT.valid_to_date)
        OR (INT.cusip = ui.cusip
            AND g_business_date BETWEEN INT.valid_from_date AND INT.valid_to_date))
    WHEN MATCHED THEN
    UPDATE
    SET INT.inst_id = ui.inst_id, INT.ric = ui.ric
    WHEN NOT MATCHED THEN
    INSERT (inst_key, ......)
    VALUES (inst_key, ......);
    To determine the existence of a record, we first check whether any match is found on the first key; if not, we search on the second key, and so on.
    Now two records with different first keys (inst_id) can have the same ric (the second key). On a rerun, with the target table already populated, the code fails because it finds duplicate entries for the second key.
    Any suggestions on how to make this work?
    Thanks in advance.
    Annie

    Annie
    You've spotted the problem (that two records have the same RIC). MERGE doesn't allow that; each record in the data being updated is only allowed to be updated once.
    Is there a PK column (or columns) that we can rely on?
    What you can try is to outer join FTP to INT. Something like:
    MERGE INTO INT int1
    USING (
        select columns_you_need
        from (
            select ftp.columns -- whatever they are
                   , int2.columns
                   , row_number() over (partition by int2.pk_columns order by int2.somecolumns) as rn
            from   ftp
            left join int int2
            on (the condition you used in your query)
        )
        where rn = 1
    ) s
    ON (the match condition on s's key columns)
    WHEN MATCHED THEN UPDATE ...
    WHEN NOT MATCHED THEN INSERT ...
    So if you can restrict the driving query so that only the first one of the possible updates actually gets presented to the MERGE operation, you might be in with a chance :-)
    And of course this error is nothing to do with any triggers.
    HTH
    Regards Nigel

  • ORA-30926: unable to get a stable set of rows in the source tables

    hi,
    I am loading data from a source table to a target table in an interface,
    using LKM Incremental Update.
    In the merge rows step, I am getting the below error.
    30926 : 99999 : java.sql.SQLException: ORA-30926: unable to get a stable set of rows in the source tables
    please help as what should be done to resolve this.

    Below is the query in the merge step.
    When I run it from SQL directly, I get the same error:
    SQL Error: ORA-30926: unable to get a stable set of rows in the source tables
    30926. 00000 - "unable to get a stable set of rows in the source tables"
    *Cause:    A stable set of rows could not be got because of large dml
    activity or a non-deterministic where clause.
    *Action:   Remove any non-deterministic where clauses and reissue the dml.
    merge into TFR.INVENTORIES T
    using TFR.I$_INVENTORIES S
    on (
            T.ORGANIZATION_ID = S.ORGANIZATION_ID
        and T.ITEM_ID = S.ITEM_ID
    )
    when matched
    then update set
        T.ITEM_TYPE = S.ITEM_TYPE,
        T.SEGMENT1 = S.SEGMENT1,
        T.DESCRIPTION = S.DESCRIPTION,
        T.LIST_PRICE_PER_UNIT = S.LIST_PRICE_PER_UNIT,
        T.CREATED_BY = S.CREATED_BY,
        T.DEFAULT_SO_SOURCE_TYPE = S.DEFAULT_SO_SOURCE_TYPE,
        T.MATERIAL_BILLABLE_FLAG = S.MATERIAL_BILLABLE_FLAG,
        T.LAST_UPDATED_BY = S.LAST_UPDATED_BY,
        T.ID = TFR.INVENTORIES_SEQ.NEXTVAL,
        T.CREATION_DATE = CURRENT_DATE,
        T.LAST_UPDATE_DATE = CURRENT_DATE
    when not matched
    then insert (
        T.ORGANIZATION_ID,
        T.ITEM_ID,
        T.ITEM_TYPE,
        T.SEGMENT1,
        T.DESCRIPTION,
        T.LIST_PRICE_PER_UNIT,
        T.CREATED_BY,
        T.DEFAULT_SO_SOURCE_TYPE,
        T.MATERIAL_BILLABLE_FLAG,
        T.LAST_UPDATED_BY,
        T.ID,
        T.CREATION_DATE,
        T.LAST_UPDATE_DATE
    )
    values (
        S.ORGANIZATION_ID,
        S.ITEM_ID,
        S.ITEM_TYPE,
        S.SEGMENT1,
        S.DESCRIPTION,
        S.LIST_PRICE_PER_UNIT,
        S.CREATED_BY,
        S.DEFAULT_SO_SOURCE_TYPE,
        S.MATERIAL_BILLABLE_FLAG,
        S.LAST_UPDATED_BY,
        TFR.INVENTORIES_SEQ.NEXTVAL,
        CURRENT_DATE,
        CURRENT_DATE
    )
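    The usual cause with ODI's Incremental Update is duplicate key pairs in the I$ flow table. A hypothetical diagnostic, using the key columns from the ON clause above:

    SELECT ORGANIZATION_ID, ITEM_ID, COUNT(*) AS dup_count
    FROM TFR.I$_INVENTORIES
    GROUP BY ORGANIZATION_ID, ITEM_ID
    HAVING COUNT(*) > 1;

    Every pair this returns matches one target row more than once, which is exactly what ORA-30926 complains about; fix the update key in the interface (or deduplicate the source) and re-run.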

  • ORA-30926: unable to get a stable set of rows in the source  table

    When users try to open a form, they get the below error.
    com.retek.platform.exception.RetekUnknownSystemException:ORA-30926: unable to get a stable set of rows in the source tables
    Please advise.
    Edited by: user13382934 on Jul 9, 2011 1:32 PM

    Please try this
    create table UPDTE_DEFERRED_MAILING_RECORDS nologging as
    SELECT distinct a.CUST_ID,
    a.EMP_ID,
    a.PURCHASE_DATE,
    a.drank,
    c.CONTACT_CD,
    c.NEW_CUST_CD,
    a.DM_ROW_ID
    FROM (SELECT a.ROWID AS DM_ROW_ID,
    a.CUST_ID,
    a.EMP_ID,
    a.PURCHASE_DATE,
    dense_rank() over(PARTITION BY a.CUST_ID, a.EMP_ID ORDER
    BY a.PURCHASE_DATE DESC, a.ROWID) DRANK
    FROM deferred_mailing a) a,
    customer c
    WHERE a.CUST_ID = c.CUST_ID
    AND a.EMP_ID = c.EMP_ID
    AND (a.PURCHASE_DATE <= c.PURCHASE_DATE OR
    c.PURCHASE_DATE IS NULL)
    and a.drank=1;
    The query you've posted is behaving as expected. The inner select returns one row and the outer returns two, because the
    WHERE a.CUST_ID = c.CUST_ID
    AND a.EMP_ID = c.EMP_ID
    AND (a.PURCHASE_DATE <= c.PURCHASE_DATE OR
    c.PURCHASE_DATE IS NULL)
    conditions see two rows in the customer table.
    I've added the a.drank=1 clause to skip the duplicates from the inner table and distinct in the final result to remove duplicates from the overall query result.
    For eg, if you have one more row in the deferred_mailing like this
    SQL> select * from DEFERRED_MAILING;
    CUST_ID EMP_ID PURCHASE_
    444 10 11-JAN-11
    444 10 11-JAN-11
    then the query without "a.drank=1" will return 4 rows like this from the outer query:
    CUST_ID EMP_ID PURCHASE_ DM_ROW_ID DRANK C N
    444 10 11-JAN-11 AAATi2AAGAAAACcAAB 2 Y Y
    444 10 11-JAN-11 AAATi2AAGAAAACcAAA 1 Y Y
    444 10 11-JAN-11 AAATi2AAGAAAACcAAB 2 Y Y
    444 10 11-JAN-11 AAATi2AAGAAAACcAAA 1 Y Y
    It'll return the below even if we use distinct on the same query (i.e. without a.drank=1):
    CUST_ID EMP_ID PURCHASE_ DM_ROW_ID DRANK C N
    444 10 11-JAN-11 AAATi2AAGAAAACcAAB 2 Y Y
    444 10 11-JAN-11 AAATi2AAGAAAACcAAA 1 Y Y
    which contains duplicates again.
    So we need a combination of distinct and dense_rank here.
    By the way, please mark the thread as 'answered' if you feel your question has been answered. This will save the time of others who search for open questions to answer.
    Regards,
    CSM

  • Ora-30926 : unable to get a stable set of rows in source table

    Dear All
    When I try to load my cube I get the error "ora-30926 : unable to get a stable set of rows in source table".
    Any idea? Googling for this error did not return any solutions.
    My env:
    source: Oracle 10g (10.2.x)
    Target: Oracle 11g (11.1.0.7)
    I am using warehouse builder 11.1.0.7 on Linux
    thank you

    Carsten / neashton
    Thanks for your help. Duplicate rows were the issue.
    I finally traced the problem to my time dimension.
    The OWB wizard-generated time dimension contains only a date, but no time.
    Unfortunately, for me to uniquely identify my data, I need to include time as well (detailed in "how do I include 'Time' in a time dimension?").
    Since Carsten was the first one to answer this, I am awarding the points to him.
    thanks a lot both of you
    Edited by: T2 on Jun 2, 2009 4:01 AM

  • MERGE Statement - unable to get a stable set of rows in the source tables

    OWB Client: 10.1.0.2.0
    OWB Repository: 10.1.0.1.0
    I am trying to create a MERGE in OWB.
    I get the following error:
    ORA-12801: error signaled in parallel query server P004 ORA-30926: unable to get a stable set of rows in the source tables
    I have read the other posts regarding this and can't seem to get a fix.
    The target table has a unique index on the field that I am matching on.
    The "incoming" data doesn't have a unique index, but I have checked and confirmed that it is unique on the appropriate key.
    The "incoming" data is created by a join and filter in the mapping and I'd rather avoid having to load this data into a new table and add a unique index on this.
    Any help would be great.
    Thanks
    Laura

    Hello Laura,
    The MERGE statement does not require any constraints on its target table or source table. The only requirement is that two input rows cannot update the same target row; in other words, every existing target row can be matched by at most one input row (otherwise the MERGE would be non-deterministic, since you wouldn't know which of the input rows you would end up with in the target).
    If a table takes ages to load (and is not really big) I suspect that your mapping is not running in set mode and that it performs a full table scan on source data for each target row it produces.
    If you ARE running in set mode you should run explain plan to get a hint on what is wrong.
    Regarding your original mapping, try to set the target operator property:
    Match by constraint=no constraints
    and then check the Loading properties on each target column.
    Regards, Hans Henrik
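    Hans Henrik's "at most one input row per target row" rule is easy to reproduce in isolation. A minimal, hypothetical demonstration (tables t and s are made up for the example):

    CREATE TABLE t (k NUMBER PRIMARY KEY, v NUMBER);
    CREATE TABLE s (k NUMBER, v NUMBER);

    INSERT INTO t VALUES (1, 0);
    INSERT INTO s VALUES (1, 10);
    INSERT INTO s VALUES (1, 20);  -- two source rows hit the same target row

    -- Raises ORA-30926, because t's single row would be updated twice
    MERGE INTO t USING s ON (t.k = s.k)
    WHEN MATCHED THEN UPDATE SET t.v = s.v;

    Delete either of the duplicate s rows and the same MERGE succeeds.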

  • Getting error Unable to perform transaction on the record.

    Hi,
    My requirement is to implement a custom attachment and store the data in a custom LOB table.
    My custom table structure is similar to that of the standard fnd_lobs table, and I have inserted the data through an EO-based VO.
    Structure of custom table
    CREATE TABLE XXAPL.XXAPL_LOBS (
        ATTACHMENT_ID NUMBER NOT NULL,
        FILE_NAME VARCHAR2(256 BYTE),
        FILE_CONTENT_TYPE VARCHAR2(256 BYTE) NOT NULL,
        FILE_DATA BLOB,
        UPLOAD_DATE DATE,
        EXPIRATION_DATE DATE,
        PROGRAM_NAME VARCHAR2(32 BYTE),
        PROGRAM_TAG VARCHAR2(32 BYTE),
        LANGUAGE VARCHAR2(4 BYTE) DEFAULT ( userenv ( 'LANG') ),
        ORACLE_CHARSET VARCHAR2(30 BYTE) DEFAULT ( substr ( userenv ( 'LANGUAGE') , instr ( userenv ( 'LANGUAGE') , '.') +1 ) ),
        FILE_FORMAT VARCHAR2(10 BYTE) NOT NULL
    );
    I have created a simple MessageFileUpload and a Submit button on my custom page and written the below code in the CO:
    Process Request Code:
    if (!pageContext.isBackNavigationFired(false))
    {
        TransactionUnitHelper.startTransactionUnit(pageContext, "AttachmentCreateTxn");
        if (!pageContext.isFormSubmission())
        {
            System.out.println("In ProcessRequest of AplAttachmentCO");
            am.invokeMethod("initAplAttachment");
        }
    }
    else
    {
        if (!TransactionUnitHelper.isTransactionUnitInProgress(pageContext, "AttachmentCreateTxn", true))
        {
            OADialogPage dialogPage = new OADialogPage(NAVIGATION_ERROR);
            pageContext.redirectToDialogPage(dialogPage);
        }
    }
    ProcessFormRequest Code:
    if (pageContext.getParameter("Upload") != null)
    {
        DataObject fileUploadData = (DataObject)pageContext.getNamedDataObject("FileItem");
        String strFileName = null;
        strFileName = pageContext.getParameter("FileItem");
        if (strFileName == null || "".equals(strFileName))
            throw new OAException("Please select a File for upload");
        fileName = strFileName;
        contentType = (String)fileUploadData.selectValue(null, "UPLOAD_FILE_MIME_TYPE");
        BlobDomain uploadedByteStream = (BlobDomain)fileUploadData.selectValue(null, fileName);
        String strItemDescr = pageContext.getParameter("ItemDesc");
        OAFormValueBean bean = (OAFormValueBean)webBean.findIndexedChildRecursive("AttachmentId");
        String strAttachId = (String)bean.getValue(pageContext);
        System.out.println("Attachment Id:" + strAttachId);
        int aInt = Integer.parseInt(strAttachId);
        Number numAttachId = new Number(aInt);
        Serializable[] methodParams = {fileName, contentType, uploadedByteStream, strItemDescr, numAttachId};
        Class[] methodParamTypes = {fileName.getClass(), contentType.getClass(), uploadedByteStream.getClass(), strItemDescr.getClass(), numAttachId.getClass()};
        am.invokeMethod("setUploadFileRowData", methodParams, methodParamTypes);
        am.invokeMethod("apply");
        System.out.println("Records committed in lobs table");
    }
    if (pageContext.getParameter("AddAnother") != null)
    {
        pageContext.forwardImmediatelyToCurrentPage(null,
            true, // retain AM
            OAWebBeanConstants.ADD_BREAD_CRUMB_YES);
    }
    if (pageContext.getParameter("cancel") != null)
    {
        am.invokeMethod("rollbackShipment");
        TransactionUnitHelper.endTransactionUnit(pageContext, "AttachmentCreateTxn");
    }
    Code in AM:
    public void apply()
    {
        getTransaction().commit();
    }
    public void initAplAttachment()
    {
        OAViewObject lobsvo = (OAViewObject)getAplLobsAttachVO1();
        if (!lobsvo.isPreparedForExecution())
            lobsvo.executeQuery();
        Row row = lobsvo.createRow();
        lobsvo.insertRow(row);
        row.setNewRowState(Row.STATUS_INITIALIZED);
    }
    public void setUploadFileRowData(String fName, String fContentType, BlobDomain fileData, String fItemDescr, Number fAttachId)
    {
        AplLobsAttachVOImpl VOImpl = (AplLobsAttachVOImpl)getAplLobsAttachVO1();
        System.out.println("In setUploadFileRowData method");
        System.out.println("In setUploadFileRowData method fAttachId: " + fAttachId);
        System.out.println("In setUploadFileRowData method fName: " + fName);
        System.out.println("In setUploadFileRowData method fContentType: " + fContentType);
        RowSetIterator rowIter = VOImpl.createRowSetIterator("rowIter");
        while (rowIter.hasNext())
        {
            AplLobsAttachVORowImpl viewRow = (AplLobsAttachVORowImpl)rowIter.next();
            viewRow.setFileContentType(fContentType);
            viewRow.setFileData(fileData);
            viewRow.setFileFormat("IGNORE");
            viewRow.setFileName(fName);
        }
        rowIter.closeRowSetIterator();
        System.out.println("setting on fndlobs done");
    }
    The attachment id is a sequence-generated number, and its defaulting logic is written in the EO:
    public void create(AttributeList attributeList)
    {
        super.create(attributeList);
        OADBTransaction transaction = getOADBTransaction();
        Number attachmentId = transaction.getSequenceValue("xxapl_po_ship_attch_s");
        setAttachmentId(attachmentId);
    }
    public void setAttachmentId(Number value)
    {
        System.out.println("In ShipmentsEOImpl value::" + value);
        if (getAttachmentId() != null)
        {
            System.out.println("In AplLobsAttachEOImpl AttachmentId::" + (Number)getAttachmentId());
            throw new OAAttrValException(OAException.TYP_ENTITY_OBJECT,
                getEntityDef().getFullName(), // EO name
                getPrimaryKey(), // EO PK
                "AttachmentId", // Attribute Name
                value, // Attribute value
                "AK", // Message product short name
                "FWK_TBX_T_EMP_ID_NO_UPDATE"); // Message name
        }
        if (value != null)
        {
            // Attachment ID must be unique. To verify this, you must check both the
            // entity cache and the database. In this case, it's appropriate
            // to use findByPrimaryKey() because you're unlikely to get a match,
            // and are therefore unlikely to pull a bunch of large objects into memory.
            // Note that findByPrimaryKey() is guaranteed to check all AplLobsAttachment.
            // First it checks the entity cache, then it checks the database.
            OADBTransaction transaction = getOADBTransaction();
            Object[] attachmentKey = {value};
            EntityDefImpl attachDefinition = AplLobsAttachEOImpl.getDefinitionObject();
            AplLobsAttachEOImpl attachment =
                (AplLobsAttachEOImpl)attachDefinition.findByPrimaryKey(transaction, new Key(attachmentKey));
            if (attachment != null)
            {
                throw new OAAttrValException(OAException.TYP_ENTITY_OBJECT,
                    getEntityDef().getFullName(), // EO name
                    getPrimaryKey(), // EO PK
                    "AttachmentId", // Attribute Name
                    value, // Attribute value
                    "AK", // Message product short name
                    "FWK_TBX_T_EMP_ID_UNIQUE"); // Message name
            }
        }
        setAttributeInternal(ATTACHMENTID, value);
    }
    Issue faced:
    When I run the page for the first time, data gets inserted into the custom table perfectly on clicking the Upload button.
    But when I click the Add Another button on the same page (which basically redirects to the same upload page and increments the attachment id by 1),
    I get the below error:
    Error
    Unable to perform transaction on the record.
    Cause: The record contains stale data. The record has been modified by another user.
    Action: Cancel the transaction and re-query the record to get the new data.
    I have spent the entire day trying to resolve this issue, but no luck.
    Any help on this will be appreciated; let me know if I am going wrong anywhere.
    Thanks and Regards
    Avinash

    Hi,
    After inserting the values, please re-execute the VO query.
    Also, try to redirect the page with no AM retention.
    Thanks,
    Gaurav

  • Unable to getting all records

    hi
    I am new to BO 4.1, and my source system is Oracle.
    At source level I have a table with 6 lakh records. When I use the same table in WebI, I am unable to get all the records. I don't have any filters at report or object level, and my table contains only dimensions.
    I changed the number of records to display at universe level and at report level as well.
    Source system (6 lakh) = target report level (6 lakh)?
    I am using SAP BO 4.1.
    Please help.

    Hi Sandeep Mishra,
    thanks for your response. Actually it executes successfully, but when I set the number of records to display to, say, 5000 or 10000, the records show up at report level; when I ask for all the records (around 6 lakh), it takes time but the output comes back empty.
    See my objects below.
    I need all the objects at a time. The source table has 6 lakh records, and I want the same number of records at report level.
    I changed the query properties for the number of records to display, as well as at universe level.
    Let me know in case you need any clarity.
    thanks

  • Unable to get the composite instance for the invocation. This could be because instance has not yet been created or because the audit level for the SOA infra has been set to Off

    I am on the Oracle 11.1.1.7 BPM Suite on Windows 8, 64-bit. I can't launch the flow trace and get the error "Unable to get the composite instance for the invocation. This could be because the instance has not yet been created or because the audit level for the SOA infra has been set to Off". I have set the audit level to Development at soa-infra > SOA Administration > Common Properties > Audit Level, and "Capture Composite Instance State" is checked.
    Can somebody advise?
    Thanks

    Can you please confirm the following steps:
    Log in to the EM console, expand soa-infra (soa_server1), go to the partition where your composite is deployed, and click on your composite. On the right, click on the Settings dropdown and choose Composite Audit Level; you can set the audit level for this composite there. If you choose Inherit, it will take the setting from the server; otherwise you can override it by choosing Off, Production, or Development.
    Make sure the setting for that composite is not Off; keep it at Inherit, Production, or Development.
    Thanks,
    N

  • I just restored my phone, however I have to sign into icloud and the phone is set to the previous owner's email and password. I am unable to get in contact with them. What can I do?

    I just restored my phone, however I have to sign into icloud and the phone is set to the previous owner's email and password. I am unable to get in contact with them. What can I do?
    Please help!

    Hello codyfromseminole,
    After reviewing your post, I believe your device is under Activation Lock. The following article will provide you with more information about the feature:
    iCloud: Activation Lock
    With iOS 7 or later, Find My iPhone includes a feature called Activation Lock, which is turned on automatically when you set up Find My iPhone. Activation Lock makes it harder for anyone to use or sell your iPhone, iPad, or iPod touch if it’s ever lost or stolen. 
    With Activation Lock, your Apple ID and password are required before anyone can:
    Turn off Find My iPhone on your device
    Sign out of iCloud on your device
    Erase and reactivate your device
    Important:    Make sure to remember your Apple ID and password, and that your password is unique and secure—someone else shouldn’t be able to guess it. For more information, see the Apple Support article Security and your Apple ID. 
    If you want to give away or sell your device, be sure to erase your content and settings (in Settings > General > Reset > Erase All Content and Settings). When you erase your content, Find My iPhone and Activation Lock are also turned off. 
    If you no longer have the device, follow the instructions to remove a device you no longer have. For more information, see the Apple Support article What to do before selling or giving away your iPhone, iPad, or iPod touch. 
    For more information about Activation Lock, see the Apple Support article Find My iPhone Activation Lock in iOS 7.
    Thank you for contributing to Apple Support Communities.
    Cheers,
    BobbyD

  • How to get the number of records of a streaming result set

    Hi guys.
    So if it wasn't a streaming result set, I would have done this:
    myResultSet.last();
    int numResults = myResultSet.getRow();
    myResultSet.beforeFirst();
    but being a streaming result set, beforeFirst() throws an exception...
    So how do you get the number of records in that result set? I want to avoid an extra count(*) query, so I would appreciate solutions other than that.

    JoachimSauer wrote:
        vanwil wrote:
            you see, for now I just use a count(*) query to get the number of records, but that's adding a lot of extra waiting time...
        Iterating over the result twice will surely be slower than doing the count(*).
    Great! So what I have now is actually the fastest way there is... awesome...
    JoachimSauer wrote:
        If you get an exception, then you surely have a stack trace. That should tell you what happens, or at least where.
    com.mysql.jdbc.MysqlIO.checkForOutstandingStreamingData(MysqlIO.java:2066)
    JoachimSauer wrote:
        Why do you need to know the number of elements beforehand, anyway?
    I need to know the number of elements because the incoming data goes into a table. Now of course, I could use ArrayList<String[]> or something, but wouldn't that require more memory resources than Object[][]?
    JoachimSauer wrote:
        No one can tell you that, at least not without more information (say, the stack trace for example).
    Here's the exception message:
    java.sql.SQLException: Streaming result set com.mysql.jdbc.RowDataDynamic@10c0ef2 is still active. Only one streaming result set may be open and in use per-connection. Ensure that you have called .close() on any active result sets before attempting more queries.
