Should we be 'Obfuscating'/wrapping our PL/SQL code?

Versions: 10gR2, 11g, 11gR2
We are a software development firm in the retail domain. We have around 35 packages and 80 procedures and functions. Currently none of our PL/SQL source code is hidden ('obfuscated'). Is this a professional approach?
If the client faces an issue with our code and sends us the exp dump file to reproduce it, we wouldn't even be able to see the code, let alone debug it. Right? Are there any other disadvantages to obfuscation?

Jiri in SF wrote:
I would really appreciate it if Oracle would provide the code to their own packages. For example, UTL_MAIL has issues for some SMTP servers; why can't I take the source code, improve it the way I want and then use it (of course I would not expect Oracle to provide support for changed code)?

UTL_MAIL is perhaps a bad example. The code can be unwrapped - and the resulting source does not look good. The API itself is designed poorly IMO.
Instead of rewriting UTL_MAIL, I would rather see it redesigned. For example, the existing API does not allow you to view the Mime payload to be sent via the DATA command at all. This is essential for debugging purposes.
What about wanting to create a valid e-mail (Mime) that you want to deliver via another protocol (e.g. IMAP)? The API should enable you to create that and then choose to use the payload without necessarily transmitting it via SMTP.
Despite my dislike for Microsoft the company, I've always found their API sets logical, sensible and easy to use. Unfortunately the same can often not be said of the PL/SQL package interfaces supplied by Oracle. :-(

Similar Messages

  • Need way to Wrap my PL/SQL code

    Hi
    How can I wrap my PL/SQL code?
    The current Wrap.exe can be hacked easily using this site: http://hz.codecheck.ch/UnwrapIt/Unwrap.jsp.
    What is the best way to hide/wrap my PL/SQL code in the DB?
    Thanx
    Rafeek Abd Elmonsef

    reemax wrote:
    please where can I download "plsql developer 10g version 2000 release 10.2.0.2"?
    There's no point downloading a GUI to do it for you; all that does is call the Oracle WRAP functionality.
    There's only the one way to wrap PL/SQL code. Just because it can be unwrapped doesn't mean that everybody will have the ability to unwrap it. If you're really that concerned about people getting your source code then you shouldn't release it in any format. At the end of the day, if it's packages that are being supplied to a customer, wrap them and have appropriate copyright and reverse-engineering agreements in place; then, if your code gets stolen, you can take legal action.
    You can get some tools (I've seen them but never used them) that help to anonymise your code by renaming all the variables etc. to nonsensical ones such as a, b, c, d (just like some people's coding I've seen LOL!). If you were to do that and then wrap the code, anyone who does unwrap it will find it much harder to understand what the code actually does.
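    For reference, wrapping itself is a one-liner with the command-line wrap utility that ships with the database; the file and connection names below are placeholders only. You produce the .plb from the clear-text source and then install it exactly as you would the original script:
    wrap iname=my_package.sql oname=my_package.plb
    sqlplus my_user@my_db @my_package.plb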

  • Wrapping the PL/SQL Code

    Can anybody pls advise on how to unwrap a wrapped SQL procedure?
    For example, I wrap one procedure and get a .plb object code file for the SQL file. How can I revert back to the code if I have only the .plb file?
    Thanking in Advance
    Rajeev

    Hi,
    Unwrapping cannot be done.
    Wrapping is like creating an exe file.
    Using the .plb file you cannot get the original procedure back.
    There is no unwrapping utility in Oracle.
    I don't think Oracle will ever provide an unwrapping procedure, even in the future.
    Regards
    Kiran
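    Worth noting: from 10g onwards you can also wrap at runtime with DBMS_DDL instead of the command-line utility, so the clear-text source never needs to leave your build environment. A minimal sketch with a throw-away procedure name:
    BEGIN
      -- The CREATE OR REPLACE passed in is compiled in wrapped form; only wrapped text is stored.
      DBMS_DDL.CREATE_WRAPPED(
        'CREATE OR REPLACE PROCEDURE demo_proc IS
         BEGIN
           NULL;
         END demo_proc;');
    END;
    /
    -- The data dictionary then shows only the wrapped text:
    SELECT text FROM user_source WHERE name = 'DEMO_PROC' ORDER BY line;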

  • How to 100% Protect PL/SQL Code By Wrapping in Oracle Database 10g R2

    Hello,
    Is it possible to 100% protect PL/SQL code by wrapping in Oracle 10g R2?
    If it is not possible by wrapping in Oracle 10g R2,
    please suggest how I can 100% protect PL/SQL code in Oracle Database 10g R2.
    I have a lot of functions, procedures & packages in my project,
    which are running in the field.
    So I need to protect them 100%.
    Also, should I convert all functions, procedures & packages to .pll files,
    and .pll files to .plx files?
    Is it possible to convert a .plx file back to a .pll file?
    Please let me know any better solutions in this case...
    Regards
    Mehedi

    Hello,
    No, wrapping is not a 100% secure method. It can protect your code from amateurs, but not from professional hackers. Look at the documentation: http://docs.oracle.com/cd/B28359_01/appdev.111/b28370/wrap.htm#BEHGBJAA
    It says: "Wrapping is not a secure method for hiding passwords or table names. Wrapping a PL/SQL unit prevents most users from examining the source code, but might not stop all of them."

  • The passive node should not be shared by any other SQL instance.

    Dears,
    The customer has a SQL cluster with multiple instances for many technologies. Can I install a new instance to host the Lync back-end database or not?
    I found the following in the URL below:
    "SQL Clustering support is for an active/passive configuration. For performance reasons, the passive node should not be shared by any other SQL instance."
    I want to understand why the Lync 2013 back-end SQL passive node should not be shared by any other SQL instance. Does this mean that I cannot install a new SQL instance to host the Lync back end on my current SQL cluster, which works with multiple instances?
    http://technet.microsoft.com/en-us/library/gg398990.aspx
    Amr Nassar

    Hi Amr,
    Yes, you can install a new instance on your cluster to host the Lync back-end database. I believe the statement about not sharing the passive node with any other SQL instance refers to a SQL instance outside of the A/P cluster. (Basically the passive node should be purely that, and able to handle the load in the event it becomes the active node without anything else hindering it.)
    Providing your SQL cluster is able to handle the load Lync places on it along with everything else you should not have any issues.
    Georg Thomas | Lync MVP

  • How to encrypt PL/SQL Code?

    Hi All,
    I want to share our application code with a third party. I don't want them to see our application PL/SQL code.
    I have tried the wrap utility provided by Oracle, however there are un-wrappers available.
    Please let me know the options available to hide my PL/SQL code.
    Thanks in advance
    Madhu

    As Billy says, the only proper way is through legal means.
    The next best thing is the wrap utility.  Yes, people have produced unwrappers out there, but most companies don't have their own software developers (otherwise they're less likely to be buying code from you), so won't be unwrapping it anytime soon.
    I've seen some 3rd party tools that try to obfuscate the code by turning all your variables and suchlike into meaningless names, making it hard for people to follow the code even if they can read it. So if you find a good one of those and then wrap the code as well, you're making it genuinely hard for people.
    DBMS_CRYPTO, as suggested by the first response on this thread, isn't an option as that is for encrypting or hashing etc. of data, not of PL/SQL code... at least not if you want Oracle to be able to execute the code still.
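    To illustrate why DBMS_CRYPTO doesn't apply here: it operates on RAW/LOB data that you pass to it, typically to hash or encrypt column values, not stored program units. A quick sketch, assuming EXECUTE on DBMS_CRYPTO has been granted:
    DECLARE
      l_hash RAW(20);
    BEGIN
      -- Hash a piece of application data; this protects data, not source code.
      l_hash := DBMS_CRYPTO.HASH(UTL_RAW.CAST_TO_RAW('customer@example.com'),
                                 DBMS_CRYPTO.HASH_SH1);
      DBMS_OUTPUT.PUT_LINE(RAWTOHEX(l_hash));
    END;
    /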

  • Apex SQL Code

    Hi guys:
    I'm not an expert in SQL, and I can see that what APEX is doing is very useful.
    Can I see the SQL code generated by APEX by working in the APEX environment?

    With a little research on tracing, you should be able to easily get back the SQL statements that APEX uses, but not the PL/SQL. The PL/SQL is wrapped, so you won't be able to see that. With that said, the SQL is nothing special. That's not a knock on the APEX team (I used to be on it), it's just nothing you couldn't find examples of throughout asktom.oracle.com.
    The APEX team follows some pretty simple and well known design principles that keep the code very efficient. Above all else, the principle of designing the data structures for how they'll be accessed, not how they make sense to the developers, is key. Since the nature of the APEX data structures is 95% read, they may make the edit of a page less efficient and do some extra work up front to make the display of that page as efficient and fast as possible.
    In keeping with that theme, they will create indexes on tables that makes the reads as fast as possible, but again may make the create / edit of a page much slower due to additional index maintenance.
    You'll notice that all of the tables have a single primary key column; none of this 3-column composite key stuff you see discussed all over this forum. This makes foreign keys, queries, updates, and deletes much easier on the programmers. I've stopped responding to the questions about DML on tables with 3+ composite key columns or tables with no primary key at all. IMHO it's just not good database design, but people get very defensive of their data models, so I've given up. My advice to you is to stick with a single, random primary key that will never ever be changed. Also please ignore the flame war this is likely to start in this thread ;)
    I'm also a big believer in instrumenting your code. This is definitely Tom Kyte's influence on me, but also the APEX team's. Notice that APEX has a debug mode with timing, so they've clearly instrumented their code. You can see an example of this in a wiki project I'm working on now. Take a look at this package. Notice how every procedure in there has 2 "$IF $$debug $THEN" blocks, 1 at the beginning and one at the end. This is using conditional compilation and allows me to recompile this package with debug on so I can see how much time each procedure is taking. I can't stress this enough: instrument your code.
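    A minimal sketch of that conditional-compilation pattern (the package name and timing calls here are just illustrative, not the poster's actual code):
    CREATE OR REPLACE PACKAGE demo_pkg AS
      PROCEDURE do_work;
    END demo_pkg;
    /
    CREATE OR REPLACE PACKAGE BODY demo_pkg AS
      PROCEDURE do_work IS
        l_start NUMBER;
      BEGIN
        $IF $$debug $THEN
          l_start := DBMS_UTILITY.GET_TIME;  -- hundredths of a second
        $END
        NULL;  -- real work goes here
        $IF $$debug $THEN
          DBMS_OUTPUT.PUT_LINE('do_work took ' ||
            (DBMS_UTILITY.GET_TIME - l_start) || ' cs');
        $END
      END do_work;
    END demo_pkg;
    /
    -- Recompile with the debug instrumentation switched on:
    ALTER PACKAGE demo_pkg COMPILE BODY PLSQL_CCFLAGS = 'debug:TRUE' REUSE SETTINGS;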
    Other good SQL principles to keep in mind include:
    - Don't call functions in SQL predicates (where clauses) as it's VERY slow. If you need to call a function such as UPPER, make sure you use a function-based index on that column (there's a small sketch after this list).
    - Do everything in bulk SQL operations that you can. A lot of people resort to PL/SQL loops for processing. Tom Kyte refers to this as "row-by-row or slow-by-slow".
    - Don't call functions in SQL at all if you are returning a lot of rows. That function will be called once for every row and you'll context switch between SQL and PL/SQL once for every row.
    - For your most popular queries, particularly on "wide" tables, if you index every column in the select and predicates section, the data can be returned from just the index without going back to the table.
    - Materialized views are your friend when you want to pre-compute the answers to your most expensive questions.
    - Buy Tom Kyte's books. This is not a commercial for them, but the philosophy contained in those books has absolutely influenced the philosophy of the APEX team... They all used to work together on a daily basis.
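    The function-based index sketch promised above, with hypothetical table and column names:
    -- Precompute UPPER(last_name) in a function-based index so the predicate below can use it.
    CREATE INDEX emp_upper_name_fbi ON employees (UPPER(last_name));
    -- The optimizer can now satisfy this predicate from the index rather than a full scan.
    SELECT employee_id, last_name
    FROM   employees
    WHERE  UPPER(last_name) = 'SMITH';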
    Hope this helps,
    Tyler

  • PL/SQL code not running

    Hi, please help me get the rows which have a character in the PFNUM column of this table:
    EMPNU PFNUM NAME
    100 222 rat
    101 a33 sanu
    102 4a4 rahul
    our PL/SQL is
    1 DECLARE
    2 i number;
    3 j number;
    4 t varchar2(10);
    5 CURSOR ratnesh_cur IS
    6 select PFNUM from employee;
    7 BEGIN
    8 OPEN ratnesh_cur;
    9 FETCH ratnesh_cur into t;
    10 FOR i IN 0..LENGTH(t)
    11 loop
    12 FOR j IN 0..9
    13 loop
    14 IF SUBSTR('t',i,1)= j THEN dbms_output.put_line(t);
    15 ENDIF;
    16 END LOOP;
    17 END LOOP;
    18 CLOSE ratnesh_cur;
    19* END
    20 /
    END LOOP;
    ERROR at line 16:
    ORA-06550: line 16, column 5:
    PLS-00103: Encountered the symbol "LOOP" when expecting one of the following:
    if
    ORA-06550: line 19, column 3:
    PLS-00103: Encountered the symbol "end-of-file" when expecting one of the
    following:
    loop

    TO APC ...
    <quote>Which solution is appropriate depends on the spec details.</quote>
    Indeed … that is why I provided a simple alternative answering the original question (my interpretation of it extrapolated from the actual data set provided) … I did not comment on your reply.
    But let me address your answers:
    <quote>The best way of checking whether a value is numeric is to test for failure of the Oracle built-in function TO_NUMBER()</quote>
    That would be too strong a statement, wouldn’t it?
    Let us assume a table with a varchar2 column which we know contains only alphanumeric characters … 2097152 rows … half containing only digits … half with a mix of digits, upper and lower alphabetical characters.
    flip@FLOP> select count(0) from apc
    2 where instr(translate(lower(v)
    3 ,'abcdefghijklmnopqrstuvwxyz'
    4 ,'xxxxxxxxxxxxxxxxxxxxxxxxxx'),'x') > 0
    5 ;
    COUNT(0)
    1048576
    Elapsed: 00:00:05.02
    flip@FLOP> select count(0) from apc
    2 where is_number(v) = 0
    3 ;
    COUNT(0)
    1048576
    Elapsed: 00:00:13.05
    “is_number” is your function modified to return 1 (true) or 0 (false) instead of the Boolean.
    Clearly a solution employing TO_NUMBER is not always the most performant way (as you seem to imply) … all those context switches between SQL and PL/SQL do add up.
    Soooo … there is no “most performant way” to check for numeric or non-numeric values for all categories of problems … in fact there isn’t even a universal solution … one has to have some knowledge of the data domain being checked and the environment context.
    So how about the ‘?’ and '@’? … Is the string ‘-1?123@99' numeric or not? Having modified your “is_number” function yet again to do “ln := to_number(pv_string,'9G999G999D99');” …
    flip@FLOP> select is_number('-1?123@99') from dual;
    IS_NUMBER('-1?123@99')
    0
    NO.
    flip@FLOP> alter session set nls_numeric_characters='@?';
    Session altered.
    flip@FLOP> select is_number('-1?123@99') from dual;
    IS_NUMBER('-1?123@99')
    1
    YES.
    But forget about nls settings … how about ‘-123e2’? … is this numeric or not ?
    flip@FLOP> select to_number('-123e2') from dual;
    TO_NUMBER('-123E2')
    -12300
    According to TO_NUMBER, it is numeric … for the person having the knowledge of the data domain being checked that well may be a false positive.
    Hope this proves the point about the universal solution and the qualities of TO_NUMBER.
    As for <quote>And if the character is uppercase?</quote> … this kind of flipped me … looking at the link supplied by you … and glancing over the implementation of “StringParse” one could well ask: “and if the string contains lowercase?” … but nobody did … would’ve been a bit too picky and outside the main technique being demonstrated.
    Gabe
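    For reference, the kind of "is_number" helper being benchmarked above looks roughly like this; it is a sketch only, since the original function isn't shown in the thread:
    CREATE OR REPLACE FUNCTION is_number (pv_string IN VARCHAR2)
      RETURN NUMBER
    IS
      ln NUMBER;
    BEGIN
      ln := TO_NUMBER(pv_string);  -- optionally with a format mask, e.g. '9G999G999D99'
      RETURN 1;                    -- conversion succeeded: treat as numeric
    EXCEPTION
      WHEN VALUE_ERROR OR INVALID_NUMBER THEN
        RETURN 0;                  -- conversion failed: not numeric
    END is_number;
    /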

  • Can I have a PL/SQL code for LOV

    Hi,
    I apologize for this dumb question. I have been so out of touch with dev (almost 8 years). Plus new to pl/sql.
    I am creating a status report application. On the dashboard, I currently have some metrics (horizontal charts). I wanted to expose these metrics based on filter.
    I have created 2 filters. The first one identifies the type (Week, Month, Qtr or Year) and the 2nd filter is based on that type. For example, if I select Month as the type, the 2nd filter should show May-12, April-12 etc. Once I select the 2nd filter, I should use them to show the appropriate metrics. The metric that I currently have would show all the projects that users have worked on during the week/month/qtr or year (depending on the 1st and 2nd filter).
    On the dashboard region, I added a condition to check if the value for both 1st and 2nd filter is not null. This allows me to show the dashboard only if the 2 filters have been selected.
    For the 2nd filter, I need to write a PL/SQL code to show the LOV.
    I am assuming that the pl/sql would return a SQL query. The SQL query will be based on week, month etc. Is that right?
    Thanks
    balaji

    rbalaji2026 wrote:
    For the 2nd filter, I need to write a PL/SQL code to show the LOV.
    Doesn't appear necessary. With filter 2 cascading from filter 1, why not:
    select
            /* Week query */
    from
    where
    and     :p1_filter_1 = 'WEEK'       
    union all
    select
            /* Month query */
    from
    where
    and     :p1_filter_1 = 'MONTH'       
    union all
    select
            /* Quarter query */
    from
    where
    and     :p1_filter_1 = 'QUARTER'       
    union all
    select
            /* Year query */
    from
    where
    and     :p1_filter_1 = 'YEAR'       
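    A concrete (and entirely hypothetical) version of that skeleton, assuming week/month/quarter/year lookup tables each with a label and a code column; an APEX LOV query just needs a display column and a return column, so adjust the names to your own data model:
    select week_label    as display_value, week_code as return_value
    from   xx_weeks
    where  :P1_FILTER_1 = 'WEEK'
    union all
    select month_label,   month_code
    from   xx_months
    where  :P1_FILTER_1 = 'MONTH'
    union all
    select quarter_label, quarter_code
    from   xx_quarters
    where  :P1_FILTER_1 = 'QUARTER'
    union all
    select year_label,    year_code
    from   xx_years
    where  :P1_FILTER_1 = 'YEAR'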

  • How to change the profile value in the pl/sql code without making change in the database

    How can I change a profile value in PL/SQL code without making a change in the database?

    I have a program where, if the profiles 'Printer' and 'Number of Copies' are set at the user level, the output is by default sent to the printer mentioned in the set-up when the report completes. What the user wants is this: even if these profiles are set for the user running this program, automatic printing should not be done.
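    In E-Business Suite this is usually handled with FND_PROFILE.PUT, which overrides the profile value for the current session only and does not update the stored profile option. A sketch, assuming the standard internal names 'PRINTER' and 'CONC_COPIES' (verify the exact internal option names in your instance):
    BEGIN
      -- Session-level overrides only; the values stored in the database are untouched.
      FND_PROFILE.PUT('PRINTER', 'noprint');
      FND_PROFILE.PUT('CONC_COPIES', '0');
    END;
    /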

  • APEX,PDF's, BI Publisher and SQL Query returning SQL code..

    I don't know if I should be posting this in this Forum or the BI Publisher forum, so I am posting in BOTH forums..
    I love APEX, let me say that first, and appreciate the support offered here by the group, but I am running into a confusing issue when BI Publisher tries to build a report from the above type of APEX report.
    Here is my dilemma:
    I have a number of reports that are part of an Oracle package. They return a SQL query back to a report region on a page. I am having to deal with the column names being returned as col01, col02..
    The issue I have is that, when building the application-level query to download the XML sample from (for building RTF layouts in Word), you cannot use this code; you MUST use a standard SQL select.
    I have taken the SQL from the function returning SQL and copied it into the application query, supplying the required data values for the bind variables used in the query.
    An XML file is produced, and I use this to build the RTF format file that I load back into APEX and try to use for the PDF rendering of the report. I can view the output as a PDF in the Word add-on, but when I try using it with the report, it returns an empty PDF file.
    Can anyone tell me what error log files on the bi publisher side I can look at to see what error is happening?
    Thank you,
    Tony Miller
    UTMB/EHN
    Title adjusted to allow people to know what I am talking about...

    Tony,
    You can find the log as follows:
    - go to http://[yourserver]:[yourport]/em
    - logon to OC4J EM: oc4jadmin/[yourpassword]
    - click on "logs" at the bottom of the page
    - in the hgrid/tree, expand OC4J->home->Application xmlpserver
    - click on the view log icon
    You can also observe what's going on in BI Publisher by going to the command prompt from where you started it.
    Or, as a third option, you can locate the file on your file system; depending on your setup, the path would be something similar to this:
    \oracle\product\10.2.0\bip\j2ee\home\application-deployments\xmlpserver\application.log
    With that said though, I don't expect you'll find much in there that would help with your particular problem. I suspect you either get no rows in your XML at runtime, due to some session state issues, or your XML structure does in fact not match your RTF template.
    I'm not quite following your problem description, i.e. when did you do what, and are you associating your report layout with a report query or a report region. So just some general notes: your query needs to be parseable at design time, when exporting the XML, so that you get the XML file with the proper column names derived from your query. If you want to use your RTF template with a standard report region, you must export the XML file first using the advanced XML structure option. And of course the column names in your report query need to match the column names in your report region.
    Perhaps this helps you further diagnose what's going on; if you have additional information that could help, let me know. And if you could stage this on apex.oracle.com, I'd be happy to take a look.
    Regards,
    Marc
    Marc,
    Thanks for looking at this issue. Below find my remarks to your questions..
    Re: "your query needs to be parseable at design-time, when exporting the XML, so that you get the XML file with the proper column names derived from your query."
    At the start of this process, the query code was a function in a package. The function was returning a SQL select statement for a report region on a page. I took the select statement and built an application query to produce a sample of the XML for BI Publisher Desktop (the add-on for Word). The code was producing the usual Col01, Col02.. since that is what the column names were at design time.
    I then took the XML from this and built the RTF for loading into my APEX application.
    When testing the application query with this RTF report layout, I am getting PDFs. When using it with the report region sending an XML feed to BI Publisher, I am getting nothing back.
    I have since taken the sql code and moved it back into the report region, and set the region to have a type of straight SQL Query. I have even tried to hard-code the parameters I was getting from the page to limit data returned.
    Is it possible to see the xml being produced by the APEX page?
    Re: stage this on apex.oracle.com.. I would love to, but we would have HIPAA issues if I posted the data on a public website.
    Can I send you the RTF file and the xml file that the application query is creating to see if there something weird about them?
    Thank you,
    Tony Miller
    UTMB/EHN

  • How to design BPEL process where BPEL is called by PL/SQL code?

    Hi,
    My BPEL process is called by a PL/SQL code given below.
    CREATE OR REPLACE PROCEDURE testd(errbuf OUT VARCHAR2,
    retcode OUT VARCHAR2)
    IS
    soap_request VARCHAR2(20000);
    soap_respond VARCHAR2(10000);
    http_req UTL_HTTP.REQ;
    http_resp UTL_HTTP.RESP;
    l_detail VARCHAR2(10000);
    endpoint VARCHAR2(130);
    begin
    endpoint := 'http://afsmlnx04.rheem.com:7105/soa-infra/services/default/HelloWorldPayload/bpelprocess1_client_ep';
    soap_request := '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"> <soap:Body xmlns:ns1="http://oracle.nl/HelloWorldPayload"><ns1:process><ns1:input>abc</ns1:input></ns1:process></soap:Body></soap:Envelope>';
    http_req := utl_http.begin_request(
    endpoint
    , 'POST'
    , 'HTTP/1.1');
    utl_http.set_header(http_req
    , 'Content-Type'
    , 'text/xml');
    utl_http.set_header(http_req
    , 'Content-Length'
    , length(soap_request));
    utl_http.set_header(http_req
    , 'SOAPAction'
    , 'process');
    utl_http.write_text(http_req, soap_request);
    http_resp := utl_http.get_response(http_req);
    utl_http.read_text(http_resp, soap_respond);
    utl_http.end_response(http_resp);
    dbms_output.put_line('soap'||soap_respond);
    EXCEPTION
    WHEN utl_http.end_of_body THEN
    utl_http.end_response(http_resp);
    WHEN utl_http.request_failed THEN
    DBMS_OUTPUT.PUT_LINE('Request Failed: ' || utl_http.get_detailed_sqlerrm);
    WHEN utl_http.http_server_error THEN
    DBMS_OUTPUT.PUT_LINE('Server Error: ' || utl_http.get_detailed_sqlerrm);
    WHEN utl_http.http_client_error THEN
    DBMS_OUTPUT.PUT_LINE('Client Error: ' || utl_http.get_detailed_sqlerrm);
    WHEN others THEN
    DBMS_OUTPUT.PUT_LINE(sqlerrm);
    END;
    The above procedure will be defined as a concurrent program in Oracle EBS. This concurrent program will call the BPEL process. My question is: how should I design the BPEL process so that BPEL will know it is called by a concurrent program?
    My BPEL process picks up a file via the FTP adapter and inserts data into a table.
    Please throw some light on this!!

    Option 1:
    You have to design the service as a synchronous BPEL process
    1. Do a synchronous ftp get to read the file.
    2. Transform and write it into database table
    3. Reply results back to plsql
    Disadvantage: your BPEL process has to complete before the BPEL timeout happens.
    Option 2:
    1. Enqueue the message into an AQ from the concurrent program
    2. From BPEL, monitor the AQ and start the process when the message arrives
    3. Do a synchronous ftp get to read the file.
    4. Transform and write it into the database table
    You cannot reply the results back to the concurrent program directly.
    5. However, you could have another AQ to send the results back to the concurrent program.
    6. Your concurrent program should listen to the results AQ to get the results back from BPEL.
    Option 2 is a reliable design.
    --Prasanna
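    For option 2, the enqueue from the concurrent program could look roughly like this; the queue name is hypothetical and the queue is assumed to already exist with a JMS text payload type:
    DECLARE
      l_enq_opts   dbms_aq.enqueue_options_t;
      l_msg_props  dbms_aq.message_properties_t;
      l_msgid      RAW(16);
      l_payload    SYS.AQ$_JMS_TEXT_MESSAGE;
    BEGIN
      l_payload := SYS.AQ$_JMS_TEXT_MESSAGE.construct;
      l_payload.set_text('process the file for request ' || fnd_global.conc_request_id);
      dbms_aq.enqueue(queue_name         => 'APPS.XX_BPEL_REQUEST_Q',  -- hypothetical queue
                      enqueue_options    => l_enq_opts,
                      message_properties => l_msg_props,
                      payload            => l_payload,
                      msgid              => l_msgid);
      COMMIT;  -- the message becomes visible to the BPEL AQ adapter on commit
    END;
    /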

  • Performance tuning in PL/SQL code

    Hi,
    I am working on already existing PL/SQL code, written by someone else, for validation and conversion of data from a temporary table to a base table. It usually has 3.5 million rows, and the procedure takes around 2.5 - 3 hrs to complete.
    Can I enhance the PL/SQL code for better performance ? or, is this OK to take so long to process these many rows?
    Thanks!
    Yogini

    Can I enhance the PL/SQL code for better performance?
    Probably you can enhance it.
    or, is this OK to take so long to process these many rows?
    It should take a few minutes, not several hours.
    But please provide some more details like your database version etc.
    I suggest you TRACE the session that executes the PL/SQL code, with WAIT events, so you'll see where and on what the time is spent, and you'll identify your problem statements very quickly (after you or your DBA have TKPROF'ed the trace file).
    SQL> alter session set events '10046 trace name context forever, level 12';
    SQL> execute your PL/SQL code here
    SQL> exit
    This will give you a .trc file in your udump directory on the server.
    http://www.oracle-base.com/articles/10g/SQLTrace10046TrcsessAndTkprof10g.php
    Also this informative thread can give you more ideas:
    HOW TO: Post a SQL statement tuning request - template posting
    as well as doing a search on 10046 at AskTom, http://asktom.oracle.com will give you more examples.
    and reading Oracle's Performance Tuning Guide: http://www.oracle.com/pls/db102/to_toc?pathname=server.102%2Fb14211%2Ftoc.htm&remark=portal+%28Getting+Started%29
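    Once you have the .trc file, turning it into a readable report with TKPROF is a one-liner; the trace file name below is a placeholder:
    tkprof orcl_ora_12345.trc tune_report.txt sys=no sort=exeela,fchela
    The report lists each statement with its parse/execute/fetch times and the wait events, so the slowest statements float to the top.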

  • Need PL/SQL code for this

    Hi,
    I need a PL/SQL code for this one...
    Let me know if something is not clear...
    1) The table CLOB_CLOBJECT_CDA has the columns described below...
    Explaining only those fields which are important in this context
    -- CDA_STEP_ID : Basically a Sequence
    -- CLOBJECT_SOURCE1_ID : Every id has got a set of records
    -- CLOBJECT_SOURCE2_ID : Every id has got a set of records
    -- LVL : There are a total of 8 levels..
    This is the main aim :
    1) There are a total of 16 million rows (limited to 10 rows here)
    2) We need to go through level by level (LVL column) & insert the intersection records (CLOBJECT_SOURCE1_ID intersect CLOBJECT_SOURCE2_ID)
    into another table...but this is how it goes..
    Level (LVL column) 3's basically have CLOBJECT_SOURCE1_ID as level (LVL column) 2 CDA_STEP_ID's..
    (consider the statement --** where CLOBJECT_SOURCE1_ID = 285 which is same as 1st insert statement step id)..
    The above process goes for next levels until 8..(so have to use loops)
    So for ex :
    We go through the first insert statement and insert the intersection records only when both CLOBJECT_SOURCE1_ID & CLOBJECT_SOURCE2_ID have got records ..
    If we don't find any records for both of them we should skip the corresponding step id when we go to the next levels...
    Let's go through the 1st insert statement...
    -- We have CDA_STEP_ID = 285 & two sources CLOBJECT_SOURCE1_ID as 19 & CLOBJECT_SOURCE2_ID as 74...
    -- We see the table CLOBJECT_COUNTS & check whether we have counts for both 19 & 74 ..(In fact we insert counts into this table only if they have records)
    -- If so, we insert the intersection records into CDA_MRN_RESULTS ( we do have counts for both of them..) with CDA_STEP_ID 285...
    -- Then we insert the step id which is 285 along with the count into CLOBJECT_COUNTS..
    Let's go through another insert statement...
    -- Consider CDA_STEP_ID = 288 which has two sources CLOBJECT_SOURCE1_ID as 19 & CLOBJECT_SOURCE2_ID as 92...
    -- We see the table CLOBJECT_COUNTS & check whether we have counts for both 19 & 92 ..(we have records for 19 but not for 92)
    -- So we should not proceed with this..& also skip all those records (future records with increasing levels..basically level 3's) which have got 288 as CLOBJECT_SOURCE1_ID..
    (As said earlier that the present CDA_STEP_ID will always be CLOBJECT_SOURCE1_ID in the next level)...
    I wrote the following code, which appears after the create & insert statements...
    Let me have the create & insert statements here..
    create table CLOB_CLOBJECT_CDA
    (
        CDA_STEP_ID           NUMBER,
        CDA_ID                NUMBER,
        CDA_SEQ_NUMBER        NUMBER,
        CLOBJECT_SOURCE1_TYPE VARCHAR2(3000),
        CLOBJECT_SOURCE1_ID   NUMBER,
        CLOBJECT_OPERATOR     VARCHAR2(3000),
        CLOBJECT_SOURCE2_TYPE VARCHAR2(3000),
        CLOBJECT_SOURCE2_ID   NUMBER,
        LVL                   NUMBER
    );
    insert into clob_clobject_cda (CDA_STEP_ID, CDA_ID, CDA_SEQ_NUMBER, CLOBJECT_SOURCE1_TYPE, CLOBJECT_SOURCE1_ID, CLOBJECT_OPERATOR, CLOBJECT_SOURCE2_TYPE, CLOBJECT_SOURCE2_ID, LVL)
    values (285, 285, 1, 'CLOBJECT', 19, 'INTERSECT', 'CLOBJECT', 74, 2);
    insert into clob_clobject_cda (CDA_STEP_ID, CDA_ID, CDA_SEQ_NUMBER, CLOBJECT_SOURCE1_TYPE, CLOBJECT_SOURCE1_ID, CLOBJECT_OPERATOR, CLOBJECT_SOURCE2_TYPE, CLOBJECT_SOURCE2_ID, LVL)
    values (286, 286, 1, 'CLOBJECT', 19, 'INTERSECT', 'CLOBJECT', 75, 2);
    insert into clob_clobject_cda (CDA_STEP_ID, CDA_ID, CDA_SEQ_NUMBER, CLOBJECT_SOURCE1_TYPE, CLOBJECT_SOURCE1_ID, CLOBJECT_OPERATOR, CLOBJECT_SOURCE2_TYPE, CLOBJECT_SOURCE2_ID, LVL)
    values (287, 287, 1, 'CLOBJECT', 19, 'INTERSECT', 'CLOBJECT', 91, 2);
    insert into clob_clobject_cda (CDA_STEP_ID, CDA_ID, CDA_SEQ_NUMBER, CLOBJECT_SOURCE1_TYPE, CLOBJECT_SOURCE1_ID, CLOBJECT_OPERATOR, CLOBJECT_SOURCE2_TYPE, CLOBJECT_SOURCE2_ID, LVL)
    values (288, 288, 1, 'CLOBJECT', 19, 'INTERSECT', 'CLOBJECT', 92, 2);
    insert into clob_clobject_cda (CDA_STEP_ID, CDA_ID, CDA_SEQ_NUMBER, CLOBJECT_SOURCE1_TYPE, CLOBJECT_SOURCE1_ID, CLOBJECT_OPERATOR, CLOBJECT_SOURCE2_TYPE, CLOBJECT_SOURCE2_ID, LVL)
    values (4869, 4869, 1, 'CDA_STEP', 285, 'INTERSECT', 'CLOBJECT', 91, 3);  -- **
    insert into clob_clobject_cda (CDA_STEP_ID, CDA_ID, CDA_SEQ_NUMBER, CLOBJECT_SOURCE1_TYPE, CLOBJECT_SOURCE1_ID, CLOBJECT_OPERATOR, CLOBJECT_SOURCE2_TYPE, CLOBJECT_SOURCE2_ID, LVL)
    values (4870, 4870, 1, 'CDA_STEP', 285, 'INTERSECT', 'CLOBJECT', 92, 3);
    insert into clob_clobject_cda (CDA_STEP_ID, CDA_ID, CDA_SEQ_NUMBER, CLOBJECT_SOURCE1_TYPE, CLOBJECT_SOURCE1_ID, CLOBJECT_OPERATOR, CLOBJECT_SOURCE2_TYPE, CLOBJECT_SOURCE2_ID, LVL)
    values (4871, 4871, 1, 'CDA_STEP', 285, 'INTERSECT', 'CLOBJECT', 93, 3);
    insert into clob_clobject_cda (CDA_STEP_ID, CDA_ID, CDA_SEQ_NUMBER, CLOBJECT_SOURCE1_TYPE, CLOBJECT_SOURCE1_ID, CLOBJECT_OPERATOR, CLOBJECT_SOURCE2_TYPE, CLOBJECT_SOURCE2_ID, LVL)
    values (4880, 4880, 1, 'CDA_STEP', 286, 'INTERSECT', 'CLOBJECT', 91, 3);
    insert into clob_clobject_cda (CDA_STEP_ID, CDA_ID, CDA_SEQ_NUMBER, CLOBJECT_SOURCE1_TYPE, CLOBJECT_SOURCE1_ID, CLOBJECT_OPERATOR, CLOBJECT_SOURCE2_TYPE, CLOBJECT_SOURCE2_ID, LVL)
    values (4881, 4881, 1, 'CDA_STEP', 286, 'INTERSECT', 'CLOBJECT', 92, 3);
    insert into clob_clobject_cda (CDA_STEP_ID, CDA_ID, CDA_SEQ_NUMBER, CLOBJECT_SOURCE1_TYPE, CLOBJECT_SOURCE1_ID, CLOBJECT_OPERATOR, CLOBJECT_SOURCE2_TYPE, CLOBJECT_SOURCE2_ID, LVL)
    values (4882, 4882, 1, 'CDA_STEP', 286, 'INTERSECT', 'CLOBJECT', 93, 3);
    create table CDA_MRN_RESULTS
    (
       CDA_STEP_ID      NUMBER,
       MRN              NUMBER,
       INSERT_DATE_TIME DATE
    );
    insert into cda_mrn_results (CDA_STEP_ID, MRN, INSERT_DATE_TIME)
    values (19, 1, to_date('19-10-2011', 'dd-mm-yyyy'));
    insert into cda_mrn_results (CDA_STEP_ID, MRN, INSERT_DATE_TIME)
    values (19,  2, to_date('19-10-2011', 'dd-mm-yyyy'));
    insert into cda_mrn_results (CDA_STEP_ID, MRN, INSERT_DATE_TIME)
    values (19,  3, to_date('19-10-2011', 'dd-mm-yyyy'));
    insert into cda_mrn_results (CDA_STEP_ID, MRN, INSERT_DATE_TIME)
    values (74,  1, to_date('19-10-2011', 'dd-mm-yyyy'));
    insert into cda_mrn_results (CDA_STEP_ID, MRN, INSERT_DATE_TIME)
    values (74,  2, to_date('19-10-2011', 'dd-mm-yyyy'));
    insert into cda_mrn_results (CDA_STEP_ID, MRN, INSERT_DATE_TIME)
    values (74,  4, to_date('19-10-2011', 'dd-mm-yyyy'));
    insert into cda_mrn_results (CDA_STEP_ID, MRN, INSERT_DATE_TIME)
    values (75,  1, to_date('19-10-2011', 'dd-mm-yyyy'));
    insert into cda_mrn_results (CDA_STEP_ID, MRN, INSERT_DATE_TIME)
    values (75,  2, to_date('19-10-2011', 'dd-mm-yyyy'));
    insert into cda_mrn_results (CDA_STEP_ID, MRN, INSERT_DATE_TIME)
    values (75,  6, to_date('19-10-2011', 'dd-mm-yyyy'));
    insert into cda_mrn_results (CDA_STEP_ID, MRN, INSERT_DATE_TIME)
    values (91,  2, to_date('19-10-2011', 'dd-mm-yyyy'));
    insert into cda_mrn_results (CDA_STEP_ID, MRN, INSERT_DATE_TIME)
    values (91,  3, to_date('19-10-2011', 'dd-mm-yyyy'));
    create table CLOBJECT_COUNTS
    (
      CDA_STEP_ID    NUMBER,
      CLOBJECT_COUNT NUMBER,
      DATE_TIME      DATE
    );
    Insert into CLOBJECT_COUNTS values (19,3, to_date('19-10-2011', 'dd-mm-yyyy'));
    Insert into CLOBJECT_COUNTS values (74,3, to_date('19-10-2011', 'dd-mm-yyyy'));
    Insert into CLOBJECT_COUNTS values (75,3, to_date('19-10-2011', 'dd-mm-yyyy'));
    Insert into CLOBJECT_COUNTS values (91,2, to_date('19-10-2011', 'dd-mm-yyyy'));
    The output goes into two tables...
    CDA_MRN_RESULTS : O/p of intersection records between source1 & source2 id
    CLOBJECT_COUNTS : Step id with counts ...(useful for skipping next level step id's if either of source id has "0" counts)
    Any help is appreciated..
    Thanks..

    I tried to code this..but looping takes a lot of time..I want to skip certain rows where source1_step_id & source_2_step_id are not in clobject_counts table as we proceed to the next levels..Not sure how to skip the rows..
    declare
    cursor c1 (p_level varchar2 ) is
      Select * from clob_clobject_cda
        where lvl = p_level    ;
       TYPE V_TT IS TABLE OF C1%ROWTYPE INDEX BY PLS_INTEGER;
        L_TT V_TT;
        v1 number;
        v2 number;
        v_step_id number;
        v_operator varchar2(100) := '';
    begin
    for i in 2..8 loop
      open c1(i);
      LOOP
           FETCH C1 BULK COLLECT INTO L_TT LIMIT 500;
            FOR indx IN 1 .. L_TT.COUNT
             LOOP
               v1 := L_TT(indx).clobject_source1_id;
               v2 := L_TT(indx).clobject_source2_id;
               v_step_id := L_TT(indx).cda_step_id;
               v_operator := L_TT(indx).clobject_operator;
      Execute Immediate ('Insert into cda_mrn_results Select --+ parallel (cm 128)
                                                      distinct ' || v_step_id || ', mrn, trunc(sysdate) dt from cda_mrn_results  cm
                        where cda_step_id = ' || v1 || '
                        and   cda_step_id in (Select cda_step_id from clobject_counts) ' ||
         v_operator ||
                    '  Select --+ parallel (cm 128)
                                                      distinct ' || v_step_id || ', mrn, trunc(sysdate) dt from cda_mrn_results  cm
                        where cda_step_id = ' || v2 || '
                        and   cda_step_id in (Select cda_step_id from clobject_counts)  ' );
    Insert --+ Append
           into clobject_counts Select cda_step_id, count(distinct mrn),
                       insert_date_time dt from cda_mrn_results  where cda_step_id =  v_step_id   group by cda_step_id,insert_date_time;
       COMMIT;                    
             END LOOP;
           EXIT WHEN L_TT.COUNT = 0;
         END LOOP;
      CLOSE C1;
    End Loop;    
    Commit;
    End;

  • How to insert sql code in module (not form) other than API?

    I generated a module as web PL/SQL in Oracle Design Editor 6i. I have different user types with different privileges. I want to do some permission checking before a user can reach the tables. All the help topics are related to the API and Forms. Is there a way to execute SQL code without using the API?

    Yes, you can add in your own user-defined PL/SQL (and JavaScript) at module component and item level. Select the module component in the Design Editor and expand the node until you see "Application Logic" -> Events. Now add your logic. For help on this use the context-sensitive help and you should find the PL/SQL help (or try the topic "About user-defined application logic and Web PL/SQL Generator").
    (Is this the piece you wanted to avoid?) You can also add user-defined PL/SQL to the Table API generated code. For this you need to use the Server Model tab. Navigate to the desired table, expand its node and find the Table TAPI/trigger Logic section. Again, make use of the context-sensitive help here.
    Regards
    Sue
