How to avoid BPEL operations corrupting namespaces
Hi
I am new to BPEL and XML, and I'm having a problem with the seemingly automatic behaviour of the Oracle BPEL engine.
I invoke a Web service which returns a valid XML document with namespaces declared inside the SOAP wrapper.
When I execute the BPEL process outlined below, the generated reply has lost its namespace declarations, resulting in an invalid response to the BPEL service consumer.
I can achieve the correct namespaces by using a Transform step rather than the Copy, but should a transformation be required?
Is there an alternative approach?
Roy
The BPEL logic
<invoke name="retrievePolicy" partnerLink="AlphaCoreService"
portType="ns1:AlphaCoreService" operation="retrieveFullPolicy"
inputVariable="Invoke_1_retrieveFullPolicy_InputVariable"
outputVariable="Invoke_1_retrieveFullPolicy_OutputVariable"/>
<assign name="Assign_2">
<copy>
<from variable="Invoke_1_retrieveFullPolicy_OutputVariable"
part="parameters"
query="/ns2:retrieveFullPolicyResponseElement/ns2:result/ns2:pom"/>
<to variable="outputVariable" part="payload"
query="/client:getPolicyProcessResponse/client:pom"/>
</copy>
</assign>
<reply name="replyOutput" partnerLink="client" portType="client:getPolicy"
operation="process" variable="outputVariable"/>
The retrievePolicy response supplies this document to BPEL ....
<?xml version="1.0" encoding="windows-1252" ?>
<env:Envelope xmlns:env="http://schemas.xmlsoap.org/soap/envelope/"
xmlns:xsd="http://www.w3.org/2001/XMLSchema"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:ns0="http://service.ws.core.alpha.web.aif.cgi.com/types/"
xmlns:ns1="http://xpom.ws.core.alpha.web.aif.cgi.com">
<env:Body>
<ns0:retrieveFullPolicyResponseElement>
<ns0:result>
<ns0:errors xsi:nil="1"/>
<ns0:pom>
<ns1:contractId>7140</ns1:contractId>
<ns1:policyRef>POL7140</ns1:policyRef>
etc.
<ns1:version>
<ns1:systemStartDate>2009-06-10T15:33:48.000+01:00</ns1:systemStartDate>
<ns1:changeDescription xsi:nil="1"/>
etc.
</ns1:version>
<ns1:vehicleArray xsi:type="ns1:Vehicle">
<ns1:userModified>ROYC</ns1:userModified>
<ns1:uniqueReference>abc123A</ns1:uniqueReference>
etc.
<ns1:makeModel>17532001</ns1:makeModel>
<ns1:regNo>abc123A</ns1:regNo>
</ns1:vehicleArray>
</ns0:pom>
</ns0:result>
</ns0:retrieveFullPolicyResponseElement>
</env:Body>
</env:Envelope>
======================
The Copy activity results in an "outputVariable" of this shape (note the ns1 namespace re-declarations at the wrong levels) ....
<outputVariable>
<part xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" name="payload">
<getPolicyProcessResponse xmlns="http://xmlns.oracle.com/getPolicy">
<pom>
<ns1:contractId xmlns:ns1="http://xpom.ws.core.alpha.web.aif.cgi.com">7140</ns1:contractId>
<ns1:policyRef xmlns:ns1="http://xpom.ws.core.alpha.web.aif.cgi.com">POL7140</ns1:policyRef>
<ns1:version xmlns:ns1="http://xpom.ws.core.alpha.web.aif.cgi.com">
<ns1:systemStartDate>2009-06-10T15:33:48.000+01:00</ns1:systemStartDate>
<ns1:changeDescription xsi:nil="1"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"/>
</ns1:version>
<ns1:vehicleArray xsi:type="ns1:Vehicle"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:ns1="http://xpom.ws.core.alpha.web.aif.cgi.com">
<ns1:makeModel>17532001</ns1:makeModel>
<ns1:regNo>abc123A</ns1:regNo>
</ns1:vehicleArray>
</pom>
</getPolicyProcessResponse>
</part>
</outputVariable>
=================================
Finally, the Reply activity generates this XML to send back as a response - note the invalid ns1:Vehicle reference (the ns1 prefix is never declared)
<?xml version="1.0" encoding="windows-1252" ?>
<env:Envelope xmlns:env="http://schemas.xmlsoap.org/soap/envelope/">
<env:Header/>
<env:Body>
<getPolicyProcessResponse xmlns="http://xmlns.oracle.com/getPolicy">
<result>success message</result>
<extra>extra tag</extra>
<pom>
<ns1:contractId xmlns:ns1="http://xpom.ws.core.alpha.web.aif.cgi.com">7140</ns1:contractId>
<ns1:policyRef xmlns:ns1="http://xpom.ws.core.alpha.web.aif.cgi.com">POL7140</ns1:policyRef>
<version>
<systemStartDate>2009-06-10T15:33:48.000+01:00</systemStartDate>
<changeDescription nil="1"/>
</version>
<vehicleArray type="ns1:Vehicle">
<makeModel>17532001</makeModel>
<regNo>abc123A</regNo>
</vehicleArray>
</pom>
</getPolicyProcessResponse>
</env:Body>
</env:Envelope>
Hi Marc
Having consulted with a few colleagues, you are correct, this is a valid XML message after all.
The Oracle BPEL console however has a problem with it and generates the error
"Could not initiate the BPEL process because the input xml is not well formed, the reason is : Error parsing envelope Please correct the input xml."
The BPEL process has not only been initiated but successfully completed, so the message can only be referring to the SOAP message it got back from the BPEL process.
Oracle BPEL has also changed the namespaces, which is surely a corruption of the document that somebody should correct.
The Web service message had "pom" defined in the "http://service.ws.core.alpha.web.aif.cgi.com/types/" namespace and "contractId" and "vehicleArray" defined in the "http://xpom.ws.core.alpha.web.aif.cgi.com" namespace.
The BPEL reply message has "pom" and "vehicleArray" defined in the "http://xmlns.oracle.com/getPolicy" namespace, "contractId" correctly in the "http://xpom.ws.core.alpha.web.aif.cgi.com" namespace, and the "http://service.ws.core.alpha.web.aif.cgi.com/types/" namespace has completely disappeared.
I can't see what I've done to generate the corruption, so surely this must be an Oracle defect?
Roy
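The underlying issue is that XML namespace declarations are scoped: a prefix declared on env:Envelope is in scope for every descendant, so a subtree copied out of that document must re-declare any namespaces it uses, or its prefixes become unresolvable. A namespace-aware processor does this automatically when it serialises the subtree. As a rough illustration (not Oracle BPEL itself, just a minimal sketch using Python's standard library on a cut-down version of the response above):

```python
import xml.etree.ElementTree as ET

# Cut-down version of the retrieveFullPolicy SOAP response shown above.
SOAP = """\
<env:Envelope xmlns:env="http://schemas.xmlsoap.org/soap/envelope/"
    xmlns:ns0="http://service.ws.core.alpha.web.aif.cgi.com/types/"
    xmlns:ns1="http://xpom.ws.core.alpha.web.aif.cgi.com">
  <env:Body>
    <ns0:retrieveFullPolicyResponseElement>
      <ns0:result>
        <ns0:pom>
          <ns1:contractId>7140</ns1:contractId>
        </ns0:pom>
      </ns0:result>
    </ns0:retrieveFullPolicyResponseElement>
  </env:Body>
</env:Envelope>"""

ns = {"ns0": "http://service.ws.core.alpha.web.aif.cgi.com/types/"}

root = ET.fromstring(SOAP)
pom = root.find(".//ns0:pom", ns)

# Serialising the subtree re-declares every namespace the subtree uses,
# because element identity is (namespace URI + local name), not the prefix.
copied = ET.tostring(pom, encoding="unicode")
print(copied)
```

The point is that a namespace-aware copy carries the namespace URIs along with the ns0:pom subtree; a copy that moves only element names and text, as the Assign appears to do here, is what produces the broken prefixes in the reply.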
Similar Messages
-
How to avoid BPEL process deployment after server restart
Hi,
Message flow in my application is as below
BPEL1 --> ESB --> BPEL2
After every server restart the ESB fails to invoke a particular operation of the BPEL2 service. All other operations of BPEL2 work fine.
To fix this issue we are redeploying the BPEL2 process every time the server restarts. As a temporary fix, we are trying to bypass the ESB and invoke BPEL2 directly from BPEL1, but we are not sure whether this solution works.
Can someone please suggest how to avoid redeployment of the BPEL2 process?
Please let me know in case any additional information is required.
Thanks in advance.
Regards,
Manoj
Edited by: user11290555 on Jun 20, 2010 3:18 PM
Hi,
try the SOA forum. This one is for JDeveloper and ADF-related questions.
Frank -
How to avoid click/operation on page when an event is already in queue
Hi,
I have an app, in which we have a many fields which have patial submit.
How can we stop the user from taking any action on the page until the previous request has been responded to?
-Nagesh
In the Property Inspector, go to the Behavior section and set Blocking = true.
The Blocking attribute, if set to true, makes the component block user input when the action is initiated. The blocking stops when a response is received from the server. -
Hi..
How to avoid distinct operation in table view reports
In my database data are
id__Name__Salary
01_aaaaa__1000
01_aaaaa__1000
01_aaaaa__1000
02_aaaaa__1000
02_aaaaa__1000
My output
01_aaaaa__1000
02_aaaaa__1000
but I need
id__Name__Salary
01_aaaaa__1000
01_aaaaa__1000
01_aaaaa__1000
02_aaaaa__1000
02_aaaaa__1000
This may help, see the answer:
Re: OBIEE using distinct
Have you got an ID in the table that is unique?
Regards
Goran
http://108obiee.blogspot.com -
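For what it's worth, the effect Goran is pointing at is easy to reproduce outside OBIEE: a projection without a unique key collapses duplicate rows, and including any unique column (a row ID) brings them back. A hypothetical sketch with Python's built-in sqlite3 (table and column names invented to match the sample data):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE pay (id TEXT, name TEXT, salary INTEGER)")
db.executemany(
    "INSERT INTO pay VALUES (?, ?, ?)",
    [("01", "aaaaa", 1000)] * 3 + [("02", "aaaaa", 1000)] * 2,
)

# Without a unique key, the duplicate rows collapse to one row each.
collapsed = db.execute("SELECT DISTINCT id, name, salary FROM pay").fetchall()

# Including a unique column (sqlite's implicit rowid here) keeps every row.
full = db.execute("SELECT rowid, id, name, salary FROM pay").fetchall()
```

This mirrors the report behaviour: the "missing" rows were never removed, they were merely indistinguishable without a unique column in the projection.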
How to avoid timeouts from Enterprise Manager/BPEL console?
Occasionally I want to deploy (from JDev) some BPEL processes to the AppServer.
Surprisingly, I got some errors. I found that when I log back into
Enterprise Manager and try the deployment again, everything works fine.
So it seems to me all errors occur because of a forced logout from Enterprise Manager.
How can I avoid such situations?
Peter
Hi Peter,
See this forum link:
Re: How to avoid auto-logout (=timeout) BPEL console?
It contains your post only. See it.
Cheers,
Abhi... -
How to avoid version information in http response
Hi,
We have an SAP Java web application in the Web Dynpro framework, developed using SAP NetWeaver.
If I right-click in the browser and choose View Source on the page, it displays information about development components, the Java version, the SAP version, etc.
I am very new to SAP and would like to know how to avoid exposing this information.
I have already tried setting useServerHeader to false and DevelopmentMode to false in the SAP J2EE engine.
Below is the information displayed in the view source.
This page was created by SAP NetWeaver. All rights reserved.
Web Dynpro client:
HTML Client
Web Dynpro client capabilities:
User agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E), client type: msie7, client type profile: ie6, ActiveX: enabled, Cookies: enabled, Frames: enabled, Java applets: enabled, JavaScript: enabled, Tables: enabled, VB Script: enabled
Accessibility mode: false
Web Dynpro runtime:
Vendor: SAP, build ID: 7.0026.20120524121557.0000 (release=NW04S_26_REL, buildtime=2012-05-24:14:38:29[GMT+00:00], changelist=141071, host=VMW4330.wdf.sap.corp), build date: Fri Nov 16 03:56:13 CET 2012
Web Dynpro code generators of DC
SapDictionaryGenerationCore: 7.0021.20091119120521.0000 (release=NW04S_21_REL, buildtime=2009-12-11:15:55:08[UTC], changelist=76328, host=PWDFM114.wdf.sap.corp)
J2EE Engine:
7.00 PatchLevel 129925.450
Java VM:
SAP Java Server VM, version: 4.1.024 21.1-b02, vendor: SAP AG
Operating system:
Linux, version: 2.6.32-131.17.1.el6.x86_64, architecture: amd64
Hope I explained the issue, and thank you so much in advance.
Or
Have a look at these queries on WD and portal:
Refer: Disabling the Right click functionality in the Detailed Navigation?
Disable WD ABAP default context menu
Remove / Hide standard right click menu in Web Dynpro ABAP application
Regd Right click functionality in portal -
How to avoid performance problems in PL/SQL?
How to avoid performance problems in PL/SQL?
As per my knowledge, below are some points to avoid performance problems in PL/SQL.
Is there other point to avoid performance problems?
1. Use FORALL instead of FOR, and use BULK COLLECT to avoid looping many times.
2. EXECUTE IMMEDIATE is faster than DBMS_SQL
3. Use NOCOPY for OUT and IN OUT if the original value need not be retained; the overhead of keeping a copy of OUT is avoided.
Susil Kumar Nagarajan wrote:
1. Group functions or procedures into a PACKAGE
Putting related functions and procedures into packages is useful from a code organization standpoint. It has nothing whatsoever to do with performance.
2. Good to use collections in place of cursors that do DML operations on a large set of records
But using plain SQL is more efficient than using PL/SQL with bulk collects.
4. Optimize SQL statements if they need to
-> Avoid using IN, NOT IN conditions or those cause full table scans in queries
That is not true.
-> See that queries use indexes properly; sometimes the leading index column is missed out, which causes performance overhead
Even assuming queries use indexes "properly", it is entirely possible that a table scan is more efficient than using an index.
5. use Oracle HINTS if query can't be further tuned and hints can considerably help you
Hints should be used only as a last resort. It is almost certainly the case that if you can use a hint that forces a particular plan to improve performance, then there is some problem in the underlying statistics that should be fixed in order to resolve issues with many queries rather than just the one you're looking at.
Justin -
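Point 1 above (FORALL / BULK COLLECT) generalises beyond PL/SQL: the win comes from sending one statement with a whole batch of binds instead of issuing one statement per row. A rough analogue, sketched with Python's built-in sqlite3 (an in-process database, so the per-statement overhead is much smaller than over a network, but the shape of the fix is the same):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (id INTEGER, name TEXT)")
rows = [(i, "name%d" % i) for i in range(10_000)]

# Row-by-row loop: the analogue of a PL/SQL FOR loop doing one INSERT per row.
for r in rows[:100]:
    conn.execute("INSERT INTO emp VALUES (?, ?)", r)
conn.execute("DELETE FROM emp")

# Batched: bind the whole collection to one statement -- the FORALL analogue.
conn.executemany("INSERT INTO emp VALUES (?, ?)", rows)
count = conn.execute("SELECT COUNT(*) FROM emp").fetchone()[0]
```

The same idea underlies BULK COLLECT on the read side: fetch arrays of rows per call rather than a row per call.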
How to avoid Time out issues in Datapump?
Hi All,
I am loading one of our schemas from stage to the test server using Data Pump expdp and impdp. Its size is around 332 GB.
My Oracle server instance is on the Unix server rwlq52l1 and I am connecting to Oracle from my client instance (rwxq04l1).
I am running the expdp and impdp commands from the Oracle client as below:
expdp pa_venky/********@qdssih30 schemas=EVPO directory=PA_IMPORT_DUMP dumpfile=EVPO_Test.dmp CONTENT=all include=table
impdp pa_venky/********@qdsrih30 schemas=EVPO directory=PA_IMPORT_DUMP dumpfile=EVPO_Test.dmp CONTENT=all include=table table_exists_action=replace
The export completed, but the import is stuck at the index building below. After some time I see the following time-out in the log files.
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX.
Error:-
VERSION INFORMATION:
TNS for Linux: Version 11.1.0.7.0 - Production
Unix Domain Socket IPC NT Protocol Adaptor for Linux: Version 11.1.0.7.0 - Production
Oracle Bequeath NT Protocol Adapter for Linux: Version 11.1.0.7.0 - Production
TCP/IP NT Protocol Adapter for Linux: Version 11.1.0.7.0 - Production
Time: 13-JAN-2012 12:34:31
Tracing not turned on.
Tns error struct:
ns main err code: 12535
TNS-12535: TNS:operation timed out
ns secondary err code: 12560
nt main err code: 505
TNS-00505: Operation timed out
nt secondary err code: 110
nt OS err code: 0
Client address: (ADDRESS=(PROTOCOL=tcp)(HOST=170.217.82.86)(PORT=65069))
The above ip address is my unix client system(rwxq04l1) ip.
How can I see the Oracle client system's port number?
Please suggest how to avoid these time-out issues. It seems this time-out is between the Oracle server and client.
Thanks,
Venkat Vadlamudi.
Don't run from the client ... run from the server
or
if running from a client use the built-in DBMS_DATAPUMP package's API.
http://www.morganslibrary.org/reference/pkgs/dbms_datapump.html -
Hi,
After writing to the serial port, the same message gets bounced back into the in-queue as well. If anyone knows how to avoid this, please reply.
Thanks,
Ganesh
If you disconnect the cable going to the serial device, do you still get the echo? If so you have something going on in the port setup. If disconnecting the cable stops the echo then the device you're talking to is doing it - which would be my bet. One thing to check is whether this might not be normal operation. I have seen devices that if a command was successful, it simply echo'd back the command string you had sent. Also many serial devices have settings for specifying whether they are to echo commands.
Mike...
Certified Professional Instructor
Certified LabVIEW Architect
LabVIEW Champion
"... after all, He's not a tame lion..."
Be thinking ahead and mark your dance card for NI Week 2015 now: TS 6139 - Object Oriented First Steps -
How to avoid doubleclick event on a datagrid scrollbars?
Hello.
I've a datagrid.
I need a double-click event when clicking on a data grid row.
I've enabled the double-click event. It works fine: when the user double-clicks a row, an event happens! In my case I open a modal window. Great!
Now the problem:
The problem arises when the user clicks twice in a short time on a data grid scroll bar (horizontal or vertical). In this case a double-click event is dispatched.
But their intention is just to scroll the grid, no more.
Please notice that double-clicking a scrollbar is a well-known action that can be performed in all applications and operating systems.
I need the "double click" event just on the data grid rows and not on its scrollbars. Clicking twice or more on the scrollbar should just scroll the grid. How can I avoid dispatching the event?
Thank you
Pbesi
I was returning today to add something about custom item renderers but you beat me to it, Pbesi. I have a custom gridItemRenderer which I now have to check for just as you demonstrated above. What I don't understand is why I can't just do this:
if( event.target is IGridItemRenderer ) //Should be true for both default and custom
My custom renderer implements the IGridItemRenderer interface, but when I double click one in the grid, the event.target is not myCustomGridItemRenderer, it is GridLayer. So what I have to do is this:
if( (event.target is IGridItemRenderer) || (event.target is GridLayer) )
I presume this would work for all custom gridItemRenderers, but I only have one, so I haven't tested this. Any idea why GridLayer is the type of the event target? My custom renderer is very simple. It just renders Booleans as "Yes/No" rather than "True/False"
<s:GridItemRenderer xmlns:fx="http://ns.adobe.com/mxml/2009"
xmlns:s="library://ns.adobe.com/flex/spark"
xmlns:mx="library://ns.adobe.com/flex/mx"
clipAndEnableScrolling="true"
implements="spark.components.gridClasses.IGridItemRenderer">
<fx:Script>
<![CDATA[
override public function prepare(hasBeenRecycled:Boolean):void {
lblData.text = (data[column.dataField] == true) ? "Yes" : "No";
}
]]>
</fx:Script>
<s:Label id="lblData" top="9" left="7"/>
</s:GridItemRenderer> -
Hi, how to avoid nested loops in this program to improve the performance
Hi all
How do I avoid the nested loops in this program, and what is the replacement for the nested loops in this code?
LOOP AT itb_ekpo.
READ TABLE itb_marc WITH KEY
matnr = itb_ekpo-matnr
werks = itb_ekpo-werks BINARY SEARCH.
CHECK sy-subrc = 0.
* FAE 26446 end of replacement
itb_pca-ebeln = itb_ekpo-ebeln.
itb_pca-ebelp = itb_ekpo-ebelp.
itb_pca-lifnr = itb_ekko-lifnr. "-FAE26446
itb_pca-lifnr = itb_ekpo-lifnr. "+FAE26446
itb_pca-ekgrp = itb_ekpo-ekgrp. "+FAE26446
itb_pca-dispo = itb_ekpo-dispo. "+FAE26446
itb_pca-matnr = itb_ekpo-matnr.
itb_pca-werks = itb_ekpo-werks.
* Look up the material description
READ TABLE itb_makt
WITH KEY matnr = itb_ekpo-matnr
spras = text-fra
BINARY SEARCH.
IF sy-subrc = 0.
itb_pca-maktx = itb_makt-maktx.
ELSE.
READ TABLE itb_makt
WITH KEY matnr = itb_ekpo-matnr
spras = text-ang
BINARY SEARCH.
IF sy-subrc = 0.
itb_pca-maktx = itb_makt-maktx.
ENDIF.
ENDIF.
IF NOT itb_ekpo-bpumn IS INITIAL.
itb_pca-menge = itb_ekpo-menge * itb_ekpo-bpumz /
itb_ekpo-bpumn.
ENDIF.
* Select delivery dates and in-transit quantities from table EKES
CLEAR w_temoin_ar.
CLEAR w_etens.
LOOP AT itb_ekes
FROM w_index_ekes.
IF itb_ekes-ebeln = itb_ekpo-ebeln
AND itb_ekes-ebelp = itb_ekpo-ebelp.
IF itb_ekes-ebtyp = text-arn.
itb_pca-eindt = itb_ekes-eindt.
w_temoin_ar = 'X'.
ELSE.
* If it is an in-transit quantity, retrieve the quantity and the date.
IF itb_ekes-dabmg < itb_ekes-menge.
itb_pca-qtran = itb_pca-qtran + itb_ekes-menge -
itb_ekes-dabmg.
ENDIF.
IF itb_ekes-etens > w_etens.
w_etens = itb_ekes-etens.
itb_pca-dtran = itb_ekes-eindt.
ENDIF.
ENDIF.
ELSEIF itb_ekes-ebeln > itb_ekpo-ebeln
OR ( itb_ekes-ebeln = itb_ekpo-ebeln
AND itb_ekes-ebelp > itb_ekpo-ebelp ).
w_index_ekes = sy-tabix.
EXIT.
ENDIF.
ENDLOOP.
* If there is no acknowledgement (AR), get the delivery date from EKET.
LOOP AT itb_eket
FROM w_index_eket.
IF itb_eket-ebeln = itb_ekpo-ebeln
AND itb_eket-ebelp = itb_ekpo-ebelp.
IF w_temoin_ar IS INITIAL.
itb_pca-eindt = itb_eket-eindt.
ENDIF.
itb_pca-slfdt = itb_eket-slfdt.
* Compute the supplier backlog from the ordered quantity and the received quantity
itb_pca-attdu = itb_pca-attdu + itb_eket-menge -
itb_eket-wemng.
* Compute the item amount
itb_pca-netpr = itb_ekpo-netpr * itb_pca-attdu.
IF itb_ekpo-peinh NE 0.
itb_pca-netpr = itb_pca-netpr / itb_ekpo-peinh.
ENDIF.
* Compute the received quantity.
itb_pca-wemng = itb_pca-wemng + itb_eket-wemng.
* Compute the delay in calendar days; the calculation must not include the delivery day
ADD 1 TO itb_eket-eindt.
IF NOT itb_pca-attdu IS INITIAL
AND itb_eket-eindt LT sy-datum.
* Compute the delay in working days
CLEAR w_retard.
CALL FUNCTION 'Z_00_BC_WORKDAYS_PER_PERIOD'
EXPORTING
date_deb = itb_eket-eindt
date_fin = sy-datum
IMPORTING
jours = w_retard.
itb_pca-rtard = itb_pca-rtard + w_retard .
ENDIF.
ELSEIF itb_eket-ebeln > itb_ekpo-ebeln
OR ( itb_eket-ebeln = itb_ekpo-ebeln
AND itb_eket-ebelp > itb_ekpo-ebelp ).
w_index_eket = sy-tabix.
EXIT.
ENDIF.
ENDLOOP.
* Find the latest delivery date.
LOOP AT itb_mseg
FROM w_index_mseg.
IF itb_mseg-ebeln = itb_ekpo-ebeln
AND itb_mseg-ebelp = itb_ekpo-ebelp.
READ TABLE itb_mkpf
WITH KEY mblnr = itb_mseg-mblnr
mjahr = itb_mseg-mjahr
BINARY SEARCH.
IF sy-subrc = 0.
IF itb_mkpf-bldat > itb_pca-bldat.
itb_pca-bldat = itb_mkpf-bldat.
ENDIF.
ENDIF.
ELSEIF itb_mseg-ebeln > itb_ekpo-ebeln
OR ( itb_mseg-ebeln = itb_ekpo-ebeln
AND itb_mseg-ebelp > itb_ekpo-ebelp ).
w_index_mseg = sy-tabix.
EXIT.
ENDIF.
ENDLOOP.
APPEND itb_pca.
CLEAR itb_pca.
* FAE26446: removal of the following paragraph
ELSEIF itb_ekpo-ebeln > itb_ekko-ebeln.
w_index_ekpo = sy-tabix.
EXIT.
ENDIF.
ENDLOOP.
* End FAE26446
ENDLOOP.
Thanks in advance for all.
Hi,
here are some performance tips.
Instead of using nested Select loops it is often better to use subqueries.
SELECT * FROM SPFLI
INTO TABLE T_SPFLI
WHERE CITYFROM = 'FRANKFURT'
AND CITYTO = 'NEW YORK'.
SELECT * FROM SFLIGHT AS F
INTO SFLIGHT_WA
FOR ALL ENTRIES IN T_SPFLI
WHERE SEATSOCC < F~SEATSMAX
AND CARRID = T_SPFLI-CARRID
AND CONNID = T_SPFLI-CONNID
AND FLDATE BETWEEN '19990101' AND '19990331'.
ENDSELECT.
The above mentioned code can be even more optimized by using subqueries instead of for all entries.
SELECT * FROM SFLIGHT AS F INTO SFLIGHT_WA
WHERE SEATSOCC < F~SEATSMAX
AND EXISTS ( SELECT * FROM SPFLI
WHERE CARRID = F~CARRID
AND CONNID = F~CONNID
AND CITYFROM = 'FRANKFURT'
AND CITYTO = 'NEW YORK' )
AND FLDATE BETWEEN '19990101' AND '19990331'.
ENDSELECT.
Internal Tables
1. Table operations should be done using explicit work areas rather than via header lines.
2. Always try to use binary search instead of linear search. But don't forget to sort your internal table first.
3. A dynamic key access is slower than a static one, since the key specification must be evaluated at runtime.
4. A binary search using secondary index takes considerably less time.
5. LOOP ... WHERE is faster than LOOP/CHECK because LOOP ... WHERE evaluates the specified condition internally.
6. Modifying selected components using MODIFY itab TRANSPORTING f1 f2.. accelerates the task of updating a line of an internal table.
Point # 2
READ TABLE ITAB INTO WA WITH KEY K = 'X' BINARY SEARCH.
IS MUCH FASTER THAN USING
READ TABLE ITAB INTO WA WITH KEY K = 'X'.
If TAB has n entries, linear search runs in O( n ) time, whereas binary search takes only O( log2( n ) ).
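The O( n ) vs O( log2( n ) ) claim above is easy to demonstrate outside ABAP. A sketch in Python (names invented), where `read_binary` plays the role of READ TABLE ... BINARY SEARCH and returning None stands in for sy-subrc <> 0:

```python
from bisect import bisect_left

# A sorted "internal table" of keys (even numbers only).
table = list(range(0, 1_000_000, 2))

def read_linear(tab, key):
    """READ TABLE without BINARY SEARCH: O(n) scan."""
    for i, k in enumerate(tab):
        if k == key:
            return i
    return None

def read_binary(tab, key):
    """READ TABLE ... BINARY SEARCH: O(log2(n)) lookup on a sorted table."""
    i = bisect_left(tab, key)
    if i < len(tab) and tab[i] == key:
        return i
    return None
```

Both return the same index for a hit, but the binary version touches at most ~20 elements of a 500,000-entry table, versus up to 500,000 for the linear scan — which is exactly why the table must be sorted first.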
Point # 3
READ TABLE ITAB INTO WA WITH KEY K = 'X'. IS FASTER THAN USING
READ TABLE ITAB INTO WA WITH KEY (NAME) = 'X'.
Point # 5
LOOP AT ITAB INTO WA WHERE K = 'X'.
ENDLOOP.
The above code is much faster than using
LOOP AT ITAB INTO WA.
CHECK WA-K = 'X'.
ENDLOOP.
Point # 6
WA-DATE = SY-DATUM.
MODIFY ITAB FROM WA INDEX 1 TRANSPORTING DATE.
The above code is more optimized as compared to
WA-DATE = SY-DATUM.
MODIFY ITAB FROM WA INDEX 1.
7. Accessing the table entries directly in a "LOOP ... ASSIGNING ..." accelerates the task of updating a set of lines of an internal table considerably
8. If collect semantics is required, it is always better to use COLLECT rather than READ BINARY and then ADD.
9. "APPEND LINES OF itab1 TO itab2" accelerates the task of appending a table to another table considerably as compared to LOOP-APPEND-ENDLOOP.
10. DELETE ADJACENT DUPLICATES accelerates the task of deleting duplicate entries considerably as compared to READ-LOOP-DELETE-ENDLOOP.
11. "DELETE itab FROM ... TO ..." accelerates the task of deleting a sequence of lines considerably as compared to DO -DELETE-ENDDO.
Point # 7
Modifying selected components only makes the program faster as compared to Modifying all lines completely.
e.g,
LOOP AT ITAB ASSIGNING <WA>.
I = SY-TABIX MOD 2.
IF I = 0.
<WA>-FLAG = 'X'.
ENDIF.
ENDLOOP.
The above code works faster as compared to
LOOP AT ITAB INTO WA.
I = SY-TABIX MOD 2.
IF I = 0.
WA-FLAG = 'X'.
MODIFY ITAB FROM WA.
ENDIF.
ENDLOOP.
Point # 8
LOOP AT ITAB1 INTO WA1.
READ TABLE ITAB2 INTO WA2 WITH KEY K = WA1-K BINARY SEARCH.
IF SY-SUBRC = 0.
ADD: WA1-VAL1 TO WA2-VAL1,
WA1-VAL2 TO WA2-VAL2.
MODIFY ITAB2 FROM WA2 INDEX SY-TABIX TRANSPORTING VAL1 VAL2.
ELSE.
INSERT WA1 INTO ITAB2 INDEX SY-TABIX.
ENDIF.
ENDLOOP.
The above code uses BINARY SEARCH for collect semantics. READ BINARY runs in O( log2(n) ) time. The above piece of code can be more optimized by
LOOP AT ITAB1 INTO WA.
COLLECT WA INTO ITAB2.
ENDLOOP.
SORT ITAB2 BY K.
COLLECT, however, uses a hash algorithm and is therefore independent of the number of entries (i.e. O(1)).
Point # 9
APPEND LINES OF ITAB1 TO ITAB2.
This is more optimized as compared to
LOOP AT ITAB1 INTO WA.
APPEND WA TO ITAB2.
ENDLOOP.
Point # 10
DELETE ADJACENT DUPLICATES FROM ITAB COMPARING K.
This is much more optimized as compared to
READ TABLE ITAB INDEX 1 INTO PREV_LINE.
LOOP AT ITAB FROM 2 INTO WA.
IF WA = PREV_LINE.
DELETE ITAB.
ELSE.
PREV_LINE = WA.
ENDIF.
ENDLOOP.
Point # 11
DELETE ITAB FROM 450 TO 550.
This is much more optimized as compared to
DO 101 TIMES.
DELETE ITAB INDEX 450.
ENDDO.
12. Copying internal tables using ITAB2[] = ITAB1[] is faster as compared to LOOP-APPEND-ENDLOOP.
13. Specify the sort key as restrictively as possible to run the program faster.
Point # 12
ITAB2[] = ITAB1[].
This is much more optimized as compared to
REFRESH ITAB2.
LOOP AT ITAB1 INTO WA.
APPEND WA TO ITAB2.
ENDLOOP.
Point # 13
SORT ITAB BY K. makes the program runs faster as compared to SORT ITAB.
Internal Tables contd
Hashed and Sorted tables
1. For single read access hashed tables are more optimized as compared to sorted tables.
2. For partial sequential access sorted tables are more optimized as compared to hashed tables
Hashed And Sorted Tables
Point # 1
Consider the following example where HTAB is a hashed table and STAB is a sorted table
DO 250 TIMES.
N = 4 * SY-INDEX.
READ TABLE HTAB INTO WA WITH TABLE KEY K = N.
IF SY-SUBRC = 0.
ENDIF.
ENDDO.
This runs faster for single read access as compared to the following same code for sorted table
DO 250 TIMES.
N = 4 * SY-INDEX.
READ TABLE STAB INTO WA WITH TABLE KEY K = N.
IF SY-SUBRC = 0.
ENDIF.
ENDDO.
Point # 2
Similarly for Partial Sequential access the STAB runs faster as compared to HTAB
LOOP AT STAB INTO WA WHERE K = SUBKEY.
ENDLOOP.
This runs faster as compared to
LOOP AT HTAB INTO WA WHERE K = SUBKEY.
ENDLOOP.
Reward if useful. -
How to avoid repetition of code
hi
My code is as mentioned below.
if l_location = 'USA' then
insert into location
select f1,f2,f3,f4
from usa_tab;
elsif l_location = 'FRANCE' then
insert into location
select f1,f2,f3,f4
from france_tab f, x1_tab x
where f.id = x.id;
elsif l_location = 'UK' then
insert into location
select f1,f2,f3,f4
from uk_tab u, y1_tab y
where u.id = y.id;
end if;
How to avoid the repetition of code here?
954992 wrote:
It is an existing application. The tables cannot be changed.
Actually, here the insert and select statements are fixed; only the from and where clauses change.
How to avoid repetition of the fixed code?
Oracle supports features called "partition views" and "instead of triggers". These can be used to glue tables (of the same structure) together and select and insert against them via a view.
Basic example:
// tables that constitute the partition view - a check constraint on
// country is used to specify which cities are in which table, similar
// to a partition key
SQL> create table location_france(
2 country varchar2(10) default 'FRANCE' not null,
3 city varchar2(20) not null,
4 --
5 constraint chk_france check (country in 'FRANCE'),
6 constraint pk_location_france primary key
7 ( country, city )
8 ) organization index;
Table created.
SQL> create table location_uk(
2 country varchar2(10) default 'UK' not null,
3 city varchar2(20) not null,
4 --
5 constraint chk_uk check (country in 'UK'),
6 constraint pk_location_uk primary key
7 ( country, city )
8 ) organization index;
Table created.
SQL> create table location_spain(
2 country varchar2(10) default 'SPAIN' not null,
3 city varchar2(20) not null,
4 --
5 constraint chk_spain check (country in 'SPAIN'),
6 constraint pk_location_spain primary key
7 ( country, city )
8 ) organization index;
Table created.
A partition view is a view that uses union all to glue these tables together:
SQL> create or replace view locations as
2 select * from location_france
3 union all
4 select * from location_uk
5 union all
6 select * from location_spain
7 /
View created.To support inserts against the partition view, an instead-of trigger is used:
SQL> create or replace trigger insert_location
2 instead of insert on locations
3 begin
4 case :new.country
5 when 'FRANCE' then
6 insert into location_france values( :new.country, :new.city );
7 when 'UK' then
8 insert into location_uk values( :new.country, :new.city );
9 when 'SPAIN' then
10 insert into location_spain values( :new.country, :new.city );
11 else
12 raise_application_error(
13 -20000,
14 'Country name ['||:new.country||'] is not supported.'
15 );
16 end case;
17 end;
18 /
Trigger created.
Rows can now be inserted into the view and the trigger will ensure that the rows wind up in the correct table.
SQL> insert into locations values( 'FRANCE', 'PARIS' );
1 row created.
SQL> insert into locations values( 'UK', 'LONDON' );
1 row created.
SQL> insert into locations values( 'SPAIN', 'BARCELONA' );
1 row created.
As with a partitioned table, when a select on the partition view filters on the "partition column" (in this case, the COUNTRY column), the CBO can prune the non-relevant tables from the view and only select against the relevant table. In the following example, the UK is used as the country filter and the plan shows that only table LOCATION_UK is used.
SQL> set autotrace on explain
SQL> select * from locations where country = 'UK';
COUNTRY CITY
UK LONDON
Execution Plan
Plan hash value: 1608298493
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 19 | 1 (0)| 00:00:01 |
| 1 | VIEW | LOCATIONS | 1 | 19 | 1 (0)| 00:00:01 |
| 2 | UNION-ALL | | | | | |
|* 3 | FILTER | | | | | |
|* 4 | INDEX RANGE SCAN| PK_LOCATION_FRANCE | 1 | 19 | 2 (0)| 00:00:01 |
|* 5 | INDEX RANGE SCAN | PK_LOCATION_UK | 1 | 19 | 2 (0)| 00:00:01 |
|* 6 | FILTER | | | | | |
|* 7 | INDEX RANGE SCAN| PK_LOCATION_SPAIN | 1 | 19 | 2 (0)| 00:00:01 |
Predicate Information (identified by operation id):
3 - filter(NULL IS NOT NULL)
4 - access("COUNTRY"='UK')
5 - access("COUNTRY"='UK')
6 - filter(NULL IS NOT NULL)
7 - access("COUNTRY"='UK')
Note
- dynamic sampling used for this statement (level=2)
SQL>
Oracle provides a number of methods to address flawed data models and problematic client code. However, despite this flexibility on Oracle's part, you should still consider fixing the flawed design and code, as such flaws invariably mean reduced flexibility, performance and scalability.
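The same partition-view-plus-instead-of-trigger pattern can be tried in miniature with Python's built-in sqlite3, which also supports INSTEAD OF triggers on views (a sketch under invented names, not the Oracle syntax above):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE location_france (country TEXT CHECK (country = 'FRANCE'), city TEXT);
CREATE TABLE location_uk     (country TEXT CHECK (country = 'UK'),     city TEXT);

-- The "partition view": same-structure tables glued together with UNION ALL.
CREATE VIEW locations AS
  SELECT * FROM location_france
  UNION ALL
  SELECT * FROM location_uk;

-- Route inserts on the view into the matching base table.
CREATE TRIGGER insert_location INSTEAD OF INSERT ON locations
BEGIN
  INSERT INTO location_france
    SELECT NEW.country, NEW.city WHERE NEW.country = 'FRANCE';
  INSERT INTO location_uk
    SELECT NEW.country, NEW.city WHERE NEW.country = 'UK';
END;
""")

db.execute("INSERT INTO locations VALUES ('FRANCE', 'PARIS')")
db.execute("INSERT INTO locations VALUES ('UK', 'LONDON')")
```

Note that sqlite will not prune branches from the view the way Oracle's CBO does using the check constraints; the sketch only demonstrates the insert-routing half of the pattern.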
How to avoid international roaming iphone
Hey everyone,
So I travelled to Colombia recently and in general, had my phone either turned off, or on airplane mode the entire trip. I only used it for one five minute phone call and to listen to music for about five hours accumulated. I was there for about 8 days.
I was told to put the phone on airplane mode, so as to avoid any roaming charges, as my fellow travel companions (who all also owned iPhones), said "that way nothing comes through. No wi-fi, messages, nothing".
So, yes, went with that advice.
Get home, BAM! $255 phone bill.
Now, I was obviously expecting a larger bill than normal, but I have all together $130 in roaming charges. How the eff did this happen?
Excuse me if I'm just a dummy, but I have never owned an iPhone before and this was my first time outside of North America.
I am going to Uruguay next week and obviously, don't want the same to happen.
Any advice on how to avoid this again?
Thanks in advance everyone!
That's not surprising. AT&T's roaming charges are $2.89 per minute, and data is $19.95 per megabyte. Text messages are $0.50 each. If you turned on the cellular radio without turning off cellular data, then data could have been exchanged while you were using the phone to make a call. It should be itemized on your phone bill so you can see where the money went. They charge a flat fee for international roaming too.
You don't need to use "airplane mode", however, to prevent the charges. "Airplane mode" disables the phone's radios entirely, but you are only charged for using cellular voice and data. You'll not be charged for using WiFi (at a cafe or hotel, for instance).
The phone's settings give you some control over this. See Settings > General > Network and note the "Cellular Data" and "Data Roaming" options. Those should be set to "Off" when you travel. If they aren't off, whenever you are out of "Airplane mode" the phone will receive notifications, look for e-mail, or do whatever else you have configured that uses data semi-automatically. Because it's SO expensive (3333% of what AT&T charges for domestic data), you don't want ANY data charges.
You can safely leave the cellular service turned on as you are only charged for connected calls and text messages. They are cheaper and can be avoided by not answering the phone. -
Forest trust - security issues and how to avoid
Hi guys,
I have few questions.
1/ We are planning a forest trust. Our forest and domain functional levels are at Windows Server 2003.
In the case of a trust, what are the security issues and how do we avoid them? I mean things like browsing in AD, possible attacks from the new destination, etc.
2/ What if the trust cannot be created for security reasons (rejected by the other company)? What could be a workaround? I have an idea involving a resource forest or ADFS. Any other ideas?
Thanks in advance, or for a good link to study.
Petr Weiner
Other than broad general answers, it is difficult to answer this from the negative side. I work in a very large company where we have hundreds of domains with one-way trusts in place, and I don't believe we have any security issues. With that many domains we can't operate in any other fashion. We have a user forest and many resource forests. All of our domains and forests are operated and maintained within the company, but if you have domains operated by different departments, you can run into issues over who trusts whom. Also, if you need to trust other companies, you start to look at ADFS; you can use it internally for many applications as well as for cloud services. But as I already mentioned, you haven't detailed what exactly is going on, so it is hard to give you a concrete answer.
Paul Bergson
MVP - Directory Services
MCITP: Enterprise Administrator
MCTS, MCT, MCSE, MCSA, Security, BS CSci
2012, 2008, Vista, 2003, 2000 (Early Achiever), NT4
Twitter @pbbergs http://blogs.dirteam.com/blogs/paulbergson
Please no e-mails, any questions should be posted in the NewsGroup.
This posting is provided AS IS with no warranties, and confers no rights. -
I have one database table called "sms1". The table is updated on a daily basis and has the following fields:
SQL> desc sms1;
Name Null? Type
MOBILE NUMBER
RCSTCNATCNATCNATCNAWTHER VARCHAR2(39 CHAR)
SNO NUMBER
INDATE DATE
This table has one column, RCSTCNATCNATCNATCNAWTHER VARCHAR2(39 CHAR), which I am splitting into separate columns like:
SQL> desc smssplit;
Name Null? Type
R VARCHAR2(2 CHAR)
C VARCHAR2(2 CHAR)
S VARCHAR2(1 CHAR)
TC VARCHAR2(3 CHAR)
NA VARCHAR2(3 CHAR)
TC2 VARCHAR2(3 CHAR)
NA2 VARCHAR2(3 CHAR)
TC3 VARCHAR2(3 CHAR)
NA3 VARCHAR2(3 CHAR)
TC4 VARCHAR2(3 CHAR)
NA4 VARCHAR2(3 CHAR)
WTHER VARCHAR2(10 CHAR)
SNO NUMBER
INSERTDATA VARCHAR2(25 CHAR)
Now I have written a procedure to insert data from the "sms1" table into the "smssplit" table:
CREATE OR REPLACE PROCEDURE SPLITSMS
AS
BEGIN
INSERT INTO scott.SMSSPLIT ( R,C,S,TC,NA,TC2,NA2,TC3,NA3,TC4,NA4,WTHER,SNO)
SELECT SUBSTR(RCSTCNATCNATCNATCNAWTHER,1,2) R,
SUBSTR(RCSTCNATCNATCNATCNAWTHER,3,2) C,
SUBSTR(RCSTCNATCNATCNATCNAWTHER,5,1) S,
SUBSTR(RCSTCNATCNATCNATCNAWTHER,6,3) TC,
SUBSTR(RCSTCNATCNATCNATCNAWTHER,9,3) NA,
SUBSTR(RCSTCNATCNATCNATCNAWTHER,12,3) TC2,
SUBSTR(RCSTCNATCNATCNATCNAWTHER,15,3) NA2,
SUBSTR(RCSTCNATCNATCNATCNAWTHER,18,3) TC3,
SUBSTR(RCSTCNATCNATCNATCNAWTHER,21,3) NA3,
SUBSTR(RCSTCNATCNATCNATCNAWTHER,24,3) TC4,
SUBSTR(RCSTCNATCNATCNATCNAWTHER,27,3) NA4,
SUBSTR(RCSTCNATCNATCNATCNAWTHER,30,10) WTHER, SNO
FROM scott.SMS1 where SNO=(select MAX (sno) from SMS1);
END;
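The SUBSTR calls above amount to a fixed-width record layout. As a rough cross-check of the offsets (a sketch only, not part of the original procedure), the same split can be expressed in Python; the field names mirror the smssplit columns, and the sample string is assembled from the example row shown further down:

```python
# Fixed-width layout mirroring the SUBSTR(string, start, length) calls above.
# Oracle SUBSTR positions are 1-based; Python slices are 0-based.
FIELDS = [
    ("R", 1, 2), ("C", 3, 2), ("S", 5, 1),
    ("TC", 6, 3), ("NA", 9, 3), ("TC2", 12, 3), ("NA2", 15, 3),
    ("TC3", 18, 3), ("NA3", 21, 3), ("TC4", 24, 3), ("NA4", 27, 3),
    ("WTHER", 30, 10),
]

def split_sms(raw: str) -> dict:
    """Split the RCSTCNATCNATCNATCNAWTHER string into its component fields."""
    return {name: raw[start - 1:start - 1 + length]
            for name, start, length in FIELDS}

# Sample value reconstructed from the duplicate rows shown in the output below.
row = split_sms("33352123456789543241643243135RRRRRR")
```

Running this on the sample value yields R="33", TC="123", NA4="135" and WTHER="RRRRRR", matching the smssplit rows in the job output.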
Now, in order to update the second table with data from the first table on a regular basis, I have written a database job (I am using Oracle 9.0):
DECLARE
   X NUMBER;
BEGIN
   SYS.DBMS_JOB.SUBMIT
   ( job       => X
   , what      => 'scott.SPLITSMS;'
   , next_date => SYSDATE + 1/1440
   , interval  => 'SYSDATE + 1/1440'
   , no_parse  => FALSE
   );
   COMMIT;
   DBMS_OUTPUT.PUT_LINE('Job number: ' || TO_CHAR(X));
END;
Now this job is running properly and inserting data every minute, but it is also inserting duplicate values, for example:
R C S TC NA TC2 NA2 TC3 NA3 TC4 NA4 WTHER SNO
INSERTDATA
33 35 2 123 456 789 543 241 643 243 135 RRRRRR 55
06-SEP-2012 03:49:16
33 35 2 123 456 789 543 241 643 243 135 RRRRRR 55
06-SEP-2012 03:49:16
33 35 2 123 456 789 543 241 643 243 135 RRRRRR 55
06-SEP-2012 03:50:17
R C S TC NA TC2 NA2 TC3 NA3 TC4 NA4 WTHER SNO
INSERTDATA
33 35 2 123 456 789 543 241 643 243 135 RRRRRR 55
06-SEP-2012 03:50:17
33 35 2 123 456 789 543 241 643 243 135 RRRRRR 55
06-SEP-2012 03:51:19
33 35 2 123 456 789 543 241 643 243 135 RRRRRR 55
06-SEP-2012 03:51:19
R C S TC NA TC2 NA2 TC3 NA3 TC4 NA4 WTHER SNO
INSERTDATA
33 35 2 123 456 789 543 241 643 243 135 RRRRRR 55
06-SEP-2012 03:52:20
33 35 2 123 456 789 543 241 643 243 135 RRRRRR 55
06-SEP-2012 03:52:20
33 35 2 123 456 789 543 241 643 243 135 RRRRRR 55
06-SEP-2012 03:53:22
R C S TC NA TC2 NA2 TC3 NA3 TC4 NA4 WTHER SNO
INSERTDATA
33 35 2 123 456 789 543 241 643 243 135 RRRRRR 55
06-SEP-2012 03:53:22
33 35 2 123 456 789 543 241 643 243 135 RRRRRR 55
06-SEP-2012 03:54:45
33 35 2 123 456 789 543 241 643 243 135 RRRRRR 55
06-SEP-2012 03:54:45
Now I do not want the duplicate values to be inserted; I want them to be ignored. Please, I need help with this query: how to avoid the duplicate values?
Look at the posts closely: might not be needed if formatted ;)
create or replace procedure splitsms as
begin
insert into scott.smssplit (r,c,s,tc,na,tc2,na2,tc3,na3,tc4,na4,wther,sno)
select substr(rcstcnatcnatcnatcnawther,1,2) r,
substr(rcstcnatcnatcnatcnawther,3,2) c,
substr(rcstcnatcnatcnatcnawther,5,1) s,
substr(rcstcnatcnatcnatcnawther,6,3) tc,
substr(rcstcnatcnatcnatcnawther,9,3) na,
substr(rcstcnatcnatcnatcnawther,12,3) tc2,
substr(rcstcnatcnatcnatcnawther,15,3) na2,
substr(rcstcnatcnatcnatcnawther,18,3) tc3,
substr(rcstcnatcnatcnatcnawther,21,3) na3,
substr(rcstcnatcnatcnatcnawther,24,3) tc4,
substr(rcstcnatcnatcnatcnawther,27,3) na4,
substr(rcstcnatcnatcnatcnawther,30,10) wther,
sno
from scott.sms1 a
where sno = (select max(sno)
from sms1
where sno != a.sno
); ---------------> added where clause with table alias.
end;
Regards
Etbin
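A common way to keep a repeating job from re-inserting the same source row is to make the INSERT ... SELECT conditional on the key not already being present in the target (INSERT ... WHERE NOT EXISTS, or MERGE in later Oracle versions). Here is the idea sketched in Python with SQLite, purely to illustrate the logic; the table and column names are simplified stand-ins, not the original scott schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sms1 (sno INTEGER, raw TEXT)")
conn.execute("CREATE TABLE smssplit (sno INTEGER, r TEXT)")
conn.execute("INSERT INTO sms1 VALUES (55, '33352123456789543241643243135RRRRRR')")

def split_job(conn):
    # Insert the latest sms1 row only if its SNO is not already in smssplit,
    # so re-running the job every minute cannot create duplicates.
    conn.execute("""
        INSERT INTO smssplit (sno, r)
        SELECT a.sno, substr(a.raw, 1, 2)
        FROM sms1 a
        WHERE a.sno = (SELECT MAX(sno) FROM sms1)
          AND NOT EXISTS (SELECT 1 FROM smssplit t WHERE t.sno = a.sno)
    """)

# Running the job twice inserts the row only once.
split_job(conn)
split_job(conn)
count = conn.execute("SELECT COUNT(*) FROM smssplit").fetchone()[0]
```

A unique constraint on smssplit(sno) would enforce the same rule at the schema level; the NOT EXISTS guard just makes the repeated inserts silently idempotent instead of raising an error.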