Oracle MDS cache for B2B and BPEL performance issue

Hi All,
We have -Xmx set to 4 GB and b2b.mdsCache set to 400 MB.
Our process runs as JMS --> B2B --> JMS --> Composite app (Mediator and BPEL) --> OSB.
In the composite application, one of the steps is a Mediator that publishes events to EDN.
When we don't have the B2B MDS cache set to 400 MB, the composite process completes within a second.
But when b2b.mdsCache is set to 400 MB in b2b-config.xml, the composite app takes 20 seconds to complete. When we analyzed this, we found that the Mediator takes 10 seconds to publish the event to EDN, which in turn increases the overall processing time, as the message is published to EDN twice in this composite.
Even subscribing to the event is very slow, almost 9 seconds.
Env details:
SOA 11.1.1.5 on Linux 5.1, in dev mode with default settings
Xms - Xmx = 2 GB - 4 GB
We set b2b.mdsCache to 400 MB as per the Oracle doc, which says:
"A ratio of 5:1 is recommended for the xmx-to-mdsCache values. For example, if the xmx size is 1024, maintain mdsCache at 200 MB." (By that ratio, a 4096 MB heap would allow an mdsCache of up to roughly 819 MB, so our 400 MB is well within the recommendation.)
Regards
SVS

Unfortunately, the only way to tune the cache buffers chains latch is on the application side.
Look for ways to eliminate subqueries by replacing them with inline views and joins. Given the high fetch rate in the buffer cache, this would appear to be the problem.
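For instance, a generic illustration with the classic demo tables (not from this thread): a scalar subquery in the select list such as

select e.empno,
       (select d.dname from dept d where d.deptno = e.deptno) dname
from emp e;

is evaluated row by row against DEPT (subject to scalar subquery caching), whereas the equivalent join lets the optimizer pick a single efficient access path:

select e.empno, d.dname
from emp e
join dept d on d.deptno = e.deptno;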

Similar Messages

  • Agreement Configuration for B2B and BPEL integration

    I configured a listening channel on B2B to pick up custom xml files from a folder. I have a BPEL SOA composite that has a B2B adapter as a service to receive the custom xml files from B2B through direct integration. Since B2B has to send the xml file only to a SOA composite and not to another trading partner, how do I configure the agreement in B2B?
    Appreciate your help.

    Hi Naresh and Anuj,
    Thanks a lot for your responses.
    This is the problem that I am facing now.
As per your suggestion, I deployed an inbound agreement that specifies that the Sender is the Other TP (EPCOD) and the Receiver is the Host TP (EPCOZ). Also, I left the channel for the Host TP blank in the agreement - did not select anything from the drop-down.
    When I drop the file into the folder specified in the internal listening channel, with the following naming convention (EPCOZ_PayableInvoice_1.0_Custom_1234.xml), the file gets picked up, but I get the following error message:
    Description: Agreement not found for trading partners: FromTP EPCOZ, ToTP EPCOZ with document type PayableInvoice-1.0-OUTBOUND.
    For some reason, it thinks that both the sender and the receiver are the same. If I reverse the file name as EPCOD_PayableInvoice_1.0_Custom_1234.xml, then it thinks that the From TP is EPCOZ and the To TP is EPCOD.
    Any ideas? Appreciate your help.
    Thanks.
    Raja

  • FTP Adapter, B2b and SOA performance Test

    Hi All,
We have a requirement to process large XML files ranging from 1 MB to 200 MB. Our flow is: the FTP adapter picks the XML's repeatable nodes, and BPEL transforms and loops through each XML node and calls the B2B adapter in each loop. We are doing advanced EDI batching to aggregate all the nodes in one XML into one EDI. Files up to 7 MB work fine with the FTP adapter fileraise property = 1 and polling frequency = 300 s. Files of 14 MB fail with a JTA transaction timeout (timeout set = 500 s) with the server running in PROD mode. We are using SOA Suite 11.1.1.7 and HIPAA 834 transactions. Is there a payload size limitation for the FTP adapter or SOA Suite? Do we need to follow a different approach to achieve our functionality? Do we need to set any performance parameters? Please share your thoughts.
    Thanks In Advance!!!
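On the JTA timeout specifically, a general note rather than something from this thread: the domain-wide transaction timeout is an attribute of the JTA MBean and can be raised with WLST as well as through the console. A minimal sketch, with placeholder credentials, URL and domain name:

# WLST sketch: raise the domain JTA transaction timeout
connect('weblogic', 'welcome1', 't3://adminhost:7001')  # placeholders
edit()
startEdit()
cd('/JTA/mydomain')          # 'mydomain' = your WebLogic domain name
cmo.setTimeoutSeconds(3600)  # e.g. 1 hour for very large batches
save()
activate()
disconnect()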

Please do not post duplicates - FTP Adapter, B2b and SOA performance Test

  • Is Oracle 11g released for NW04s and ECC6.0?

    Is Oracle 11g released for NW04s and ECC6.0?
    Please help. Thanks!

    Plain and simple: No, it isn't.
Check the SAP on Oracle homepage in SDN and read the latest development news on this topic, including a rough schedule.
    regards,
    Lars

  • How to update this query and avoid performance issue?

    Hi, guys:
I wonder how to update the following query to make it aware of weekend days. My boss wants the query to consider business days only. Below is just a portion of the query:
select count(distinct cmv.invoicekey) total, '3' as type, 'VALID CALL DATE' as Category
FROM cbwp_mv2 cmv
where cmv.colresponse = 1
And Trunc(cmv.Invdate) Between (Trunc(Sysdate)-1)-39 And (Trunc(Sysdate)-1)-37
And Trunc(cmv.Whendate) Between cmv.Invdate+37 And cmv.Invdate+39
CBWP_MV2 is a materialized view created to tune the query. This query is written for a data warehouse application; CBWP_MV2 is refreshed every evening. My boss wants the conditions in the query to consider only business days. For example, if (Trunc(Sysdate)-1)-39 falls on a weekend, I need to move the start of the range to the next coming business day; if (Trunc(Sysdate)-1)-37 falls on a weekend, I need to move the end of the range to the next coming business day. But I should always keep the range within 3 business days. If there is an overlap with a weekend, always push to later business days.
Question: how can I implement this and avoid performance issues? I am afraid that if I use a function, it will greatly reduce performance. This view already contains more than 100K rows.
    thank you in advance!
    Sam

You are already using a function, since you're using TRUNC on invdate and whendate.
If you have indexes on those columns, they will not be used because of the TRUNC.
Consider omitting the TRUNC or testing with function-based indexes.
    Regarding business days:
    If you search this forum, you'll find lots of examples.
    Here's another 'golden oldie': http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:185012348071
    Regarding performance:
    Steps to take are explained from the links you find here: {message:id=9360003}
    Read them, they are more than worth it for now and future questions.
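As an illustration only (not from the replies above; the NLS parameter pins English day abbreviations): the weekend shift can be computed once per query in an inline view, so no PL/SQL function is called per row, and a function-based index lets the TRUNC predicate use an index:

-- Hypothetical function-based index so TRUNC(invdate) can be indexed
create index cbwp_mv2_trunc_inv_idx on cbwp_mv2 (trunc(invdate));

-- Push a boundary date that lands on a weekend to the next business day,
-- entirely in SQL; raw_start/raw_end come from the posted predicate
with bounds as (
  select (trunc(sysdate) - 1) - 39 as raw_start,
         (trunc(sysdate) - 1) - 37 as raw_end
  from dual
)
select raw_start
       + case to_char(raw_start, 'DY', 'NLS_DATE_LANGUAGE=ENGLISH')
           when 'SAT' then 2
           when 'SUN' then 1
           else 0
         end as adj_start
from bounds;

Because the boundary is adjusted once in the inline view rather than by a function applied to each of the 100K+ rows, the per-row cost of the query stays unchanged.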

  • What happened to PDF document 22040 – "PIX/ASA: Monitor and Troubleshoot Performance Issues"?

Hi, does anyone know what happened to the following PDF note on Cisco's site? The PDF file contains only 1 page, compared to the original note in HTML format, which runs to a few pages.
If there is an alternative link for this document, please let me know. Thanks.
    Document ID: 22040
    PIX/ASA: Monitor and Troubleshoot Performance Issues
    http://www.cisco.com/image/gif/paws/22040/pixperformance.pdf <PDF Notes, but 1 page only?>
    http://www.cisco.com/en/US/products/hw/vpndevc/ps2030/products_tech_note09186a008009491c.shtml  < HTML Notes>

    Hi experts / marcin
can any one of you let me know about my question related to VPN?
    Jayesh

  • TIMECODES are important for audio and video sync issues

    Hello, just wanted to pass on what I learned so that others can avoid the trouble that I've had to go through. Perhaps this may help someone who is stuck on the launch pad. :)
    BOTTOM LINE: Info for the beginner. Audio and video not in sync in Premiere Pro CS3 V3.2.0
    PROBLEM: Capture works great it seems. When I go to the folder that contains the captured file and view in Windows Media Player audio and video are in sync. BUT when viewing the video asset in the source and program monitors, the audio and video are not in sync.
    SOLUTION:
    Before capturing a tape make certain the following is checked:
    (1) Edit->Preferences->Capture->Use device control timecode
    (2) Edit->Preferences->Device Control->Options->Timecode Format
    (3) Project->Project Settings->General->Video->Display format
As for the device's timecode, choose something other than Auto Detect. Then match the project timecode with what was chosen during capture. The project's display format could of course be set to frames.
I searched everywhere for audio and video sync issues on Google, the Adobe forums, F1 help, and hv20.com, and everyone was talking about:
(1) Presets: 1080p30 vs. 1080i30 (60i).
(2) breaks in the tape, where the timecode for the audio and video gets misaligned during capture.
But choosing the correct hardware settings and timecodes to solve audio and video sync issues never popped up.
    MY HARDWARE: Canon HV30, HDV

    >Audio and video not in sync in Premiere Pro CS3 V3.2.0
Must be an HDV-only issue, because my sync is always perfect.

  • Oracle Workflow - Statement of Direction and BPEL

    Hi,
    I am starting to study Oracle BPEL and other people at my company are studying Oracle Workflow.
    Recently I found the following article at the OTN, regarding the Statement of Direction of Oracle Workflow:
    http://www.oracle.com/technology/products/ias/workflow/workflow_sod.html
    It finishes saying:
    "As Oracle BPEL Process Manager provides out-of-the-box features for building human-based workflows, rules-based process automation, and integration style business processes, further development on OW4J will not continue and OW4J will not be released.
    Any new or existing customers who wish to build business processes in the middle tier are recommended to use Oracle BPEL Process Manager."
So my question is: does Oracle BPEL provide (or will it provide) all the functionality implemented in Oracle Workflow? If not, what are the main differences?
    I will be waiting for your thoughts
    Thanks,
    Claudio.

    Site mentioned above is an internal site. External site is http://otn.oracle.com/bpel
I think the above highlighted SOD is about Oracle's initiative to re-write the current PL/SQL-based Workflow engine in Java so that it can be ported to the middle tier. Since the acquisition of Collaxa they seem to have dumped that idea and are positioning Oracle BPEL as the middle-tier process orchestration (BPM) solution.
Having said that, let me take a shot at comparing the Oracle Workflow and BPEL products...
Oracle Workflow
* No way it's getting dropped, as Oracle EBS has huge investments in it
* Oracle Fusion Applications has plans to use the Oracle Workflow BES functionality extensively
* Good for modeling business processes within a single DB/application instance. It's possible to integrate with external apps, but it is not an elegant solution
* Web services can be a functional activity within the workflow, but again not an elegant solution. Requires DB Java, queues and a listener
* Doesn't provide sophisticated modeling artifacts like TaskManager, faults, sensors etc. out of the box. It's possible to build your own library.
* Better performance if the entire process interaction is within a single instance, as with Oracle EBS account generators, approval flows etc.
* Based on proprietary technology
Oracle BPEL
* Based on standards like BPEL, WSDL and WSIF
* Whatever you can do in Workflow can be done in BPEL, but not the other way round
* Has adapters to interact with various transports like JDBC, AQ, JMS, JCA, HTTP. That means integration with existing workflows is very easy
* Interaction with Oracle AQs seems to be trivial, as there is a native JCA-based adapter for it
* You may take a performance hit because of the multiple connection points. You may be losing performance but gaining flexibility
Overall, Oracle BPEL seems to be the way to go if your business process involves multiple applications.
    HTH
    Rajesh

  • Fault policies  for Mediator and BPEL

    Hi
Can I use the same fault-policies.xml for both Mediator and BPEL, or do I need to create different fault-policies.xml files?

Yes, you can use the same file; you just need to mention the BPEL/Mediator name within the component tag, like below in the fault policy file:
<component faultPolicy="FusionMidFaults">
  <name>MediatorName</name>
  <name>BPELName</name>
</component>
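For context, that component element normally lives in fault-bindings.xml next to fault-policies.xml. A minimal sketch of the binding file (component names carried over from the snippet above; the wrapper element and namespace are assumed to follow the standard SOA 11g fault-policy binding schema):

<?xml version="1.0" encoding="UTF-8"?>
<faultPolicyBindings version="2.0.1"
                     xmlns="http://schemas.oracle.com/bpel/faultpolicy">
  <!-- One shared policy bound to both components by name -->
  <component faultPolicy="FusionMidFaults">
    <name>MediatorName</name>
    <name>BPELName</name>
  </component>
</faultPolicyBindings>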
You can refer to the URL below.
https://blogs.oracle.com/ateamsoab2b/entry/fault_management_framework_by_example
Give points - it is good etiquette to reward an answerer points (5 - helpful; 10 - correct) for their post if they answer your question.
    thanks,
    Vijay

  • Oracle Fusion Intelligence for E1 and the old EPM Data Marts

    I just received the latest edition of the "JD Edwards EnterpriseOne: The Complete Reference" book. Chapter 5 deals with Data Warehousing and OBI. This chapter mentions that Oracle has plans to make the old JDE EPM data marts compatible with OBIEE. The new offering will be called Oracle Fusion Intelligence for EnterpriseOne.
    I was wondering if anyone has more information about what this means as far as joining E1 to OBIEE. Does anyone have experience with the EPM data marts from JDE? I've never used them and was just wondering if I could get some specifics on the entire E1/OBIEE concept.
    I've also heard that Oracle is creating some adapters for E1 into OBIEE, so I'm assuming that those are along these same lines.
    Thanks,

    I am now hearing about finance analytics. I am more confused.

  • Which edition of Oracle is certified for Vista and Windows 7?

    Many thanks.

Windows 7 support is projected.
As for Windows Vista (10g and 11g):
Oracle Database (EE, SE, PE and Client) is supported on these Vista editions:
- Business Edition
- Enterprise Edition
- Ultimate Edition
    -Andy

  • SQL Server 2000 and BPEL configuration issues

I am attempting to get SQL Server 2000 to work with BPEL PM Server, and have followed a set of instructions similar to those provided in a previously posted document regarding the switch from Oracle Lite to Oracle production. I am following the OC4J route. I've seen a previous posting on this; however, I am elaborating a little more here on the configuration details and the difficulties that I am encountering.
I'm using the following software:
    1) SQL Server 2000 (w/ SP3)
    2) SQL Server 2000 JDBC Driver (SP3 latest version)
    3) BPEL PM (GA release)
    Here's what I've done:
1) Set up the database in SQL Server 2000 (named: ORABPEL), then ran the DDL scripts that came with the BPEL installation for SQL Server. There were two scripts, one for the domain and the other for the server. The command lines to run these scripts:
    sql -Uuser -Ppassword -ddatabase
    -i c:\orabpel\system\database\scripts\domain_sqlserver.ddl
    -o c:\orabpel\system\database\scripts\domain_sqlserver.out
2) Installed stored procedures for JTA. This is documented in the JDBC driver help file.
3) Modified the library paths in application.xml as follows:
    <!-- SQL2K JDBC LIBS -->
    <library path="C:\Program files\Microsoft SQL Server 2000 Driver for JDBC\lib\msbase.jar"/>
    <library path="C:\Program files\Microsoft SQL Server 2000 Driver for JDBC\lib\msutil.jar"/>
    <library path="C:\Program files\Microsoft SQL Server 2000 Driver for JDBC\lib\mssqlserver.jar"/>
4) Modified the datasources in data-sources.xml:
- first commented out the Oracle Lite data-source
- added datasources for MSSQL 2000:
<data-source class="com.evermind.sql.DriverManagerDataSource"
    name="BPELServerDataSource"
    location="loc/BPELServerDataSource"
    xa-location="BPELServerDataSource"
    ejb-location="jdbc/BPELServerDataSource"
    connection-driver="com.microsoft.jdbc.sqlserver.SQLServerDriver"
    url="jdbc:microsoft:sqlserver://127.0.0.1:1433;SelectMethod=cursor;User=<username>;Password=<password>;DatabaseName=ORABPEL">
</data-source>
<data-source class="com.evermind.sql.DriverManagerDataSource"
    name="BPELSamplesDataSource"
    location="jdbc/BPELSamplesDataSource"
    xa-location="BPELSamplesDataSource"
    ejb-location="jdbc/BPELSamplesDataSource"
    connection-driver="com.microsoft.jdbc.sqlserver.SQLServerDriver"
    url="jdbc:microsoft:sqlserver://127.0.0.1:1433;SelectMethod=cursor;User=<username>;Password=<password>;DatabaseName=ORABPEL">
</data-source>
<data-source class="com.evermind.sql.DriverManagerDataSource"
    name="AdminConsoleDateSource"
    location="jdbc/AdminConsoleDateSource"
    xa-location="AdminConsoleDateSource"
    ejb-location="jdbc/AdminConsoleDateSource"
    connection-driver="com.microsoft.jdbc.sqlserver.SQLServerDriver"
    url="jdbc:microsoft:sqlserver://127.0.0.1:1433;SelectMethod=cursor;User=<username>;Password=<password>;DatabaseName=ORABPEL">
</data-source>
After starting the BPEL PM Server, I got the following set of error messages:
Loading processes for BPEL domain "default" ...
<2005-06-02 09:36:44,482> <ERROR> <default.collaxa.cube.sensor> <PCException::<init>> Sensors not supported.
<2005-06-02 09:36:44,482> <ERROR> <default.collaxa.cube.sensor> <PCException::<init>> Sensors are not supported on this database platform.
<2005-06-02 09:36:44,482> <ERROR> <default.collaxa.cube.sensor> <PCException::<init>> If sensor functionality is required, please switch to a supported platform
    After this I went and changed the class tags to: com.microsoft.jdbcx.sqlserver.SQLServerDataSource
    restarted the server and got the following:
<2005-06-02 09:22:52,531> <INFO> <collaxa> <ConnectionFactoryImpl::init> Initialized connection factory jdbc/BPELServerDataSource
05/06/02 09:23:06 ORABPEL-04077
Cannot fetch a datasource connection.
The process domain was unable to establish a connection with the datasource with the connection URL "loc/BPELServerDataSource". The exception reported is: Cannot fetch a datasource connection.
The process domain was unable to establish a connection with the datasource with the connection URL "loc/BPELServerDataSource". The exception reported is: [Microsoft][SQLServer 2000 Driver for JDBC]Unable to connect. DataSource property serverName must be specified.
Please check that the machine hosting the datasource is physically connected to the network. Otherwise, check that the datasource connection parameters (user/password) is currently valid.
Please check that the machine hosting the datasource is physically connected to the network. Otherwise, check that the datasource connection parameters (user/password) is currently valid.
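A plausible reading of that last error, offered as an untested sketch rather than a confirmed fix: com.microsoft.jdbcx.sqlserver.SQLServerDataSource is a DataSource implementation, so it ignores the url attribute and instead expects bean properties such as serverName. In OC4J's data-sources.xml those can be supplied as property child elements, along these lines (all values are placeholders):

<data-source class="com.microsoft.jdbcx.sqlserver.SQLServerDataSource"
    name="BPELServerDataSource"
    location="loc/BPELServerDataSource"
    xa-location="BPELServerDataSource"
    ejb-location="jdbc/BPELServerDataSource">
  <!-- DataSource-style classes take bean properties, not a JDBC URL -->
  <property name="serverName" value="127.0.0.1"/>
  <property name="portNumber" value="1433"/>
  <property name="databaseName" value="ORABPEL"/>
  <property name="user" value="username"/>
  <property name="password" value="password"/>
  <property name="selectMethod" value="cursor"/>
</data-source>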

    Hi,
I just saw your post about configuring SQL Server 2000 with Oracle BPEL. Have you configured it successfully, or are you still encountering problems?
I am new to Oracle BPEL. I want to know if Oracle BPEL can use MSFT SQL Server 2000 as the repository entirely, so that we don't need Oracle (or the Oracle Lite database).
I would really appreciate it if you could share information and experience on configuring SQL 2000 with Oracle BPEL.
    Thank you so much in advance.
    Leey

  • WHERE LIKE% and ASP Performance Issue

    Hi,
I am facing an issue with my ASP application, which I use as a front-end web application to connect to a huge Oracle database.
Basically I use my queries within the ASP pages; one of them uses WHERE ... LIKE on more than one column.
Example: I have Col1 and Col2, and I have created the following indexes:
Index1 on Col1, Index2 on Col2 and Index3 on (Col1, Col2)
From the ASP page I have Field1 and Field2 and would like to use LIKE on both fields, but the process takes a long time to return a result, not to mention the resources it consumes.
My ASP query:
sqlstr = "Select * From TABLE Where COL1 Like '"&field1&"%' And COL2 Like '"&field2&"%' ORDER BY Num ASC"
Set Rs = Conn.Execute(Sqlstr)
What can I use instead of this query to get the same result but much faster (optimized)?
    Thanks.

If the ratio of data returned is appropriate for index access, the Oracle optimizer should choose to use it, but for further comment:
a. I couldn't see your query in the output you provided.
b. I need to know the data distribution: what is the ratio of the data returned to the table's total data with the literals you use? You can check it by taking a count of the indexed columns with a GROUP BY query.
c. I assume that your indexes are in VALID status, that you collected statistics with dbms_stats and cascaded to the indexes, and, depending on the question above, that your data is not skewed, which may create an extra need for histograms.
d. I also assume your LIKE does not start with '%'; in that case Oracle does not use indexes, and the Text option is what you need to read about, as advised. For another smart idea on making "like '%xxxx'" use an index in Oracle you may check: http://oracle-unix.blogspot.com/2007/07/performance-tuning-how-to-make-like.html
After you supply the query with literals included and the data distribution, maybe as a last resort we need to force index access with a hint and compare the statistics provided by the timing and autotrace options of SQL*Plus.
ps: You may also produce a 10053 event trace to understand the optimizer's decision: http://tonguc.wordpress.com/2007/01/20/optimizer-debug-trace-event-10053-trace-file/
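As an illustration (not from the thread): since both predicates are trailing-wildcard LIKEs, the composite index can be range-scanned on its leading column, and using bind variables instead of string-concatenated literals lets Oracle reuse the parsed plan (and avoids SQL injection). A minimal sketch with the thread's column names; the table name is a placeholder:

-- Composite index: the leading column drives the range scan
create index idx3 on my_table (col1, col2);

-- Trailing-wildcard LIKE can use the index; :f1 and :f2 are bind
-- variables supplied by the ASP page instead of concatenated literals
select *
from my_table
where col1 like :f1 || '%'
  and col2 like :f2 || '%'
order by num;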

  • CAT4900M and NetApp - Performance issue

    Hi,
I'm struggling with a performance issue between our two NetApp FAS3170 devices.
The setup is quite simple: each NetApp is connected via two TenGig interfaces to a CAT4900M. The 4900Ms are also connected to each other via two TenGig interfaces. Each pair of connections is bundled into a Layer 2 etherchannel, configured as a dot1q trunk. Mode is set to 'on' on both the 4900 and the NetApp. According to NetApp documentation, this configuration is supported. Across each etherchannel, VLANs 219 and 220 are allowed. Two partitions are configured on the NetApps, one active in our primary datacenter and another in our secondary datacenter. Vlan219 and Vlan220 are configured for each of the two partitions, using HSRP for gateway redundancy.
Neither the interfaces nor the etherchannels show any sign of misconfiguration. All links are up and the etherchannels are working as expected, well, almost. Nothing indicates packet loss, CRC errors, input/output queue drops or anything that would impact performance. Jumbo frames are not configured, although this has been discussed.
The problem is that we're unable to achieve satisfactory performance when, for instance, performing a volume copy between the two NetApp partitions. Even though we have a theoretical bandwidth of 20 Gbps end-to-end, we never climb above 75-80 MB/s of actual transfer rate between the two NetApps. Performance-wise, it almost looks as if we've been "scaled" down to a 1 Gig link. No QoS or other rate limiting has been implemented on the 4900s, so from a network point of view the NetApps can go full throttle. The NetApp software has been updated, and the configurations of both the NetApps and the 4900s have been reviewed by NetApp engineers and given a "clean bill of health".
    The configuration for the 4900->NetApp etherchannel/interfaces is as follows:
    interface TenGigabitEthernet1/5
    description *** Trunk NetAPP DC1 ***
    switchport trunk encapsulation dot1q
    switchport trunk allowed vlan 219,220
    switchport mode trunk
    udld port aggressive
    channel-group 2 mode on
    spanning-tree bpdufilter enable
    interface TenGigabitEthernet1/6
    description *** Trunk NetAPP DC1 ***
    switchport trunk encapsulation dot1q
    switchport trunk allowed vlan 219,220
    switchport mode trunk
    udld port aggressive
    channel-group 2 mode on
    spanning-tree bpdufilter enable
    interface Port-channel2
    description *** Trunk Etherchannel DC1 ***
    switchport
    switchport trunk encapsulation dot1q
    switchport trunk allowed vlan 219,220
    switchport mode trunk
    spanning-tree bpdufilter enable
    spanning-tree link-type point-to-point
    Configuration for 4900->4900 interfaces/etherchannel is as follows:
    interface TenGigabitEthernet1/1
    description *** Site-to-Site trunk ***
    switchport trunk encapsulation dot1q
    switchport trunk allowed vlan 10,219,220
    switchport mode trunk
    udld port aggressive
    channel-group 1 mode on
    interface TenGigabitEthernet1/2
    description *** Site-to-Site trunk ***
    switchport trunk encapsulation dot1q
    switchport trunk allowed vlan 10,219,220
    switchport mode trunk
    udld port aggressive
    channel-group 1 mode on
    interface Port-channel1
    description *** Site-to-Site trunk ***
    switchport
    switchport trunk encapsulation dot1q
    switchport trunk allowed vlan 10,219,220
    switchport mode trunk
    spanning-tree link-type point-to-point
    Vlan10 used for mngt-purpose.
Does anyone have similar experiences or suggestions as to why we're having these performance issues?
    Thanks
    /Ulrich
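A general etherchannel consideration worth checking here, offered as a sketch rather than a diagnosis: an etherchannel balances traffic per flow, not per packet, so a single stream such as one volume copy is hashed onto one member link, and the hash algorithm in use determines how flows spread across members. On Catalyst IOS the current hash can be inspected and, if appropriate, changed globally:

show etherchannel load-balance
show etherchannel summary
! hash on the source/destination IP pair instead of the default
configure terminal
 port-channel load-balance src-dst-ip
end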

    Hi,
    Thanks for your reply.
I take it that you mean baseline performance between the two NetApps. Well, that's really out of my hands, as another department is responsible for the NetApps. I'm not aware of any baseline performance figures, nor have I seen any benchmark tests or anything that could give me a hint.
Just as you suggest, I've gone through the switch setup systematically, basically starting with the physical layer and working my way up. So far, I've found nothing that would indicate a physical problem. The switchport/etherchannel setup has been verified by my peers and also verified by NetApp against the configuration on the NetApps, as well as the various best-practice documentation available. Furthermore, I haven't seen any signs of packet drops, CRC errors, massive retransmissions or anything like that, either on the switches or the NetApps.
Recently we had a status meeting with our NetApp partner, and it looks to me like they're pursuing the logical setup on the NetApps, as there are apparently a number of settings etc. that need adjustment. Also, we're waiting for NetApp tech support to comment on the traces, config dump etc. we've sent to them.
    /Ulrich

  • UNION ALL and UNION performance issue

    Hi All,
I am trying to figure out the data for which only a receive transaction has been done and further processing is pending. These transactions include all PO, RMA, ISO etc.
I have to use UNION ALL in this case, as for RMA and ISO the details I want cannot be gathered in a single query.
But the query is taking a lot of time: maybe around 30 mins with UNION ALL, while 6 to 7 mins with UNION.
To get all records I must use UNION ALL.
So kindly suggest a solution for this problem.
Thanks
Sachin
The query is given below:
    SELECT /* + FIRST_ROWS */ DECODE(rsl.SOURCE_DOCUMENT_CODE,'REQ',(SELECT org1.ORGANIZATION_NAME
                                                           FROM     org_organization_definitions org1
                                                           WHERE org1.ORGANIZATION_ID =
                                                           rsl.FROM_ORGANIZATION_ID)) Vendor_Name
    ,rsh.RECEIPT_NUM Receipt_Number
         ,TO_CHAR(rt3.TRANSACTION_DATE,'Mon-DD-YYYY HH:MM:SS') Receipt_Date_and_Time
         ,msi.SEGMENT1 Part_Number
         ,msi.DESCRIPTION Part_Name
         ,rt3.QUANTITY Quantity
         ,rt3.UNIT_OF_MEASURE UOM
         ,NULL ASL_Status
         --for ISO no asl flag ASL Flag
         ,TO_CHAR(TRUNC((((86400*(SYSDATE-rt3.TRANSACTION_DATE))/60)/60)/24))|| ' Days ' || TO_CHAR(TRUNC(((86400*(SYSDATE-rt3.TRANSACTION_DATE))/60)/60)-24*(TRUNC((((86400*(SYSDATE-rt3.TRANSACTION_DATE))/60)/60)/24)))|| ' Hours' Days_and_hours_passed
         ,DECODE(
                        NVL(msi.max_minmax_quantity,0) ,
                        0 , 0 ,
                        (NVL(msi.max_minmax_quantity,0) -
                        NVL(inmohqd.onhand,0))
                             * 100
                             / NVL(msi.max_minmax_quantity,0)
                        ) gap_percent
    FROM rcv_transactions rt3
         ,rcv_shipment_headers rsh
         ,rcv_shipment_lines rsl
         ,mtl_system_items msi
         ,org_organization_definitions org
         --,MTL_ONHAND_QUANTITIES_DETAIL moqhd
         ,(SELECT NVL(SUM(primary_transaction_quantity),0) onhand,INVENTORY_ITEM_ID item_id,ORGANIZATION_ID organization_id
         FROM      mtl_onhand_quantities_detail
         WHERE SUBINVENTORY_CODE NOT IN ('Wip_SF','Wip_Int','Reject','Scrap','FG Trading','FG')
         GROUP BY INVENTORY_ITEM_ID, ORGANIZATION_ID) inmohqd
    WHERE inmohqd.item_id(+) = msi.INVENTORY_ITEM_ID
         AND inmohqd.organization_id(+) = msi.ORGANIZATION_ID
         --AND inmoqhd.SUBINVENTORY_CODE NOT IN  ('Wip_SF','Wip_Int','Reject','Scrap','FG Trading','FG')
         AND msi.INVENTORY_ITEM_ID = rsl.ITEM_ID
         AND rsh.SHIPMENT_HEADER_ID = rsl.SHIPMENT_HEADER_ID
         AND org.ORGANIZATION_ID = rt3.ORGANIZATION_ID
         AND msi.ORGANIZATION_ID = rt3.ORGANIZATION_ID
         AND rsh.SHIPMENT_HEADER_ID = rt3.SHIPMENT_HEADER_ID
         AND rsl.SHIPMENT_HEADER_ID = rt3.SHIPMENT_HEADER_ID
         AND rsl.SHIPMENT_LINE_ID = rt3.SHIPMENT_LINE_ID
         AND rt3.PO_HEADER_ID IS NULL
         AND TRUNC(rt3.TRANSACTION_DATE) <= TRUNC(p_tilldate)
         AND rsl.TO_ORGANIZATION_ID = p_organization_id
         AND rsh.ORGANIZATION_ID = p_organization_id
         AND CONCAT(TRIM(rt3.SHIPMENT_HEADER_ID),TRIM(rt3.SHIPMENT_LINE_ID)) IN
     (SELECT CONCAT(TRIM(rt1.SHIPMENT_HEADER_ID),TRIM(rt1.SHIPMENT_LINE_ID))
         FROM     rcv_transactions rt1
         WHERE NOT EXISTS(
         SELECT 1
              FROM     rcv_transactions rt2
              WHERE     rt2.TRANSACTION_TYPE <> 'RECEIVE'
                        AND rt1.SHIPMENT_HEADER_ID = rt2.SHIPMENT_HEADER_ID
                        AND rt1.SHIPMENT_LINE_ID = rt2.SHIPMENT_LINE_ID
                    AND rt2.ORGANIZATION_ID = p_organization_id))
    UNION
    SELECT /* + FIRST_ROWS */ pv.VENDOR_NAME Vendor_Name
         ,rsh.RECEIPT_NUM Receipt_Number
         ,TO_CHAR(rt.TRANSACTION_DATE,'Mon-DD-YYYY HH:MM:SS') Receipt_Date_and_Time
         ,msi.SEGMENT1 Part_Number
         ,msi.DESCRIPTION Part_Name
         ,rt.QUANTITY Quantity
         ,rt.UNIT_OF_MEASURE UOM
         --start 001
         ,NVL((SELECT DISTINCT DECODE (ASL_STATUS_ID,1,'New',2,'Approved','To be checked')
                   FROM po_approved_supplier_list pasl
                   WHERE pasl.item_id=rsl.ITEM_ID
                             AND pasl.VENDOR_ID(+) = pv.VENDOR_ID
                             AND pasl.VENDOR_SITE_ID(+) = pvs.VENDOR_SITE_ID),'No_data') ASL_Status
              --end 001
              ,TO_CHAR(TRUNC((((86400*(SYSDATE-rt.TRANSACTION_DATE))/60)/60)/24))|| ' Days ' || TO_CHAR(TRUNC(((86400*(SYSDATE-rt.TRANSACTION_DATE))/60)/60)-24*(TRUNC((((86400*(SYSDATE-rt.TRANSACTION_DATE))/60)/60)/24)))|| ' Hours' Days_and_hours_passed          ,DECODE(
                   NVL(msi.max_minmax_quantity,0) ,
              0 , 0 ,
              (NVL(msi.max_minmax_quantity,0) -
              NVL(inmohqd.onhand,0))
                   * 100
                   / NVL(msi.max_minmax_quantity,0)
              ) gap_percent
    FROM rcv_transactions rt
         ,po_vendors pv
         ,po_vendor_sites_all pvs
         ,rcv_shipment_headers rsh
         ,rcv_shipment_lines rsl
         ,mtl_system_items msi
         ,org_organization_definitions org
         --,mtl_onhand_quantities_detail moqhd
         ,(SELECT NVL(SUM(primary_transaction_quantity),0) onhand,INVENTORY_ITEM_ID item_id,ORGANIZATION_ID organization_id
         FROM      mtl_onhand_quantities_detail
         WHERE SUBINVENTORY_CODE NOT IN ('Wip_SF','Wip_Int','Reject','Scrap','FG Trading','FG')
         GROUP BY INVENTORY_ITEM_ID, ORGANIZATION_ID) inmohqd
    WHERE inmohqd.item_id(+) = msi.INVENTORY_ITEM_ID
         AND inmohqd.ORGANIZATION_ID(+) = msi.ORGANIZATION_ID
         --AND inmoqhd.SUBINVENTORY_CODE NOT IN  ('Wip_SF','Wip_Int','Reject','Scrap','FG Trading','FG')
         AND msi.INVENTORY_ITEM_ID = rsl.ITEM_ID
         AND rsh.SHIPMENT_HEADER_ID = rsl.SHIPMENT_HEADER_ID
         AND pv.VENDOR_ID = pvs.VENDOR_ID
         AND org.ORGANIZATION_ID = rt.ORGANIZATION_ID
         AND msi.ORGANIZATION_ID = rt.ORGANIZATION_ID
         AND pvs.VENDOR_SITE_ID = rt.VENDOR_SITE_ID
         AND pv.VENDOR_ID = rt.VENDOR_ID
         AND rsh.SHIPMENT_HEADER_ID = rt.SHIPMENT_HEADER_ID
         AND rsl.SHIPMENT_HEADER_ID = rt.SHIPMENT_HEADER_ID
         AND rsl.SHIPMENT_LINE_ID = rt.SHIPMENT_LINE_ID
         AND TRUNC(rt.TRANSACTION_DATE) <= TRUNC(p_tilldate)
         AND rsl.TO_ORGANIZATION_ID = p_organization_id
         AND CONCAT(TRIM(rt.SHIPMENT_HEADER_ID),TRIM(rt.SHIPMENT_LINE_ID)) IN
          (SELECT CONCAT(TRIM(rt1.SHIPMENT_HEADER_ID),TRIM(rt1.SHIPMENT_LINE_ID))
              FROM RCV_TRANSACTIONS rt1
              WHERE rt1.TRANSACTION_TYPE = 'RECEIVE'
                   AND rt1.DESTINATION_TYPE_CODE = 'RECEIVING'
                   AND rt1.PO_HEADER_ID IS NOT NULL
                   AND NOT EXISTS(
                   SELECT 1
                        FROM     RCV_TRANSACTIONS rt2
                        WHERE     rt2.SHIPMENT_HEADER_ID = rt1.SHIPMENT_HEADER_ID
                                  AND rt2.SHIPMENT_LINE_ID = rt1.SHIPMENT_LINE_ID
                                  AND rt2.TRANSACTION_TYPE <> 'RECEIVE'
     ))

In this case, for the selected columns, all the data is the same for one of the RMAs with more than one line, so UNION skips one of the records. However, the shipment line IDs differ between the two records, so adding the shipment line ID to the select list solves the problem, and there is no need to use UNION ALL. But anyhow, UNION ALL should perform better than UNION, as it does not require a sort. So why am I facing this problem?
Kindly suggest
Regards,
Sachin
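One thing worth trying, offered as an untested sketch rather than a fix from the thread: the CONCAT(TRIM(...)) IN (SELECT CONCAT(TRIM(...)) ...) pattern builds a concatenated string for every row and prevents index use on the two ID columns. Oracle supports multi-column IN, which compares the pair directly; for the first branch it would look like:

-- Pair-wise IN: same semantics, no CONCAT/TRIM per row, and the
-- optimizer can use indexes on the shipment header/line ID columns
AND (rt3.SHIPMENT_HEADER_ID, rt3.SHIPMENT_LINE_ID) IN
    (SELECT rt1.SHIPMENT_HEADER_ID, rt1.SHIPMENT_LINE_ID
     FROM rcv_transactions rt1
     WHERE NOT EXISTS
           (SELECT 1
            FROM rcv_transactions rt2
            WHERE rt2.TRANSACTION_TYPE <> 'RECEIVE'
              AND rt1.SHIPMENT_HEADER_ID = rt2.SHIPMENT_HEADER_ID
              AND rt1.SHIPMENT_LINE_ID = rt2.SHIPMENT_LINE_ID
              AND rt2.ORGANIZATION_ID = p_organization_id))

Comparing the execution plans of the UNION and UNION ALL variants (for example with EXPLAIN PLAN) would also show whether the optimizer picked a different join order for each, which is the usual reason UNION ALL can come out slower even though it skips the sort.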
