How do MQLs materialise in SFDC?

In theory, 'MQL' (Marketing Qualified Lead) is an individual's stage in the marketing funnel. In my organisation it is one of the key stages in the funnel, because it is the trigger for further qualification and opportunity detection. One might say that 'MQL' is the stage that all our demand generation activities are driving towards, so there is high interest in reporting and reviewing the number of MQLs that our marketing activities generate.
For such reporting most of our stakeholders will turn to SFDC, the CRM we use in connection with Eloqua.
Here we run into a real challenge: MQL is a stage, i.e. it passes/changes. So how could we report on something like the number of MQLs we generated in the past 2 quarters, or the number of MQLs generated by specific campaign(s)? We can report on how many MQLs we currently have, but how many we generated during a specific period and/or through specific campaigns...?
We are thinking of creating a new/custom object in SFDC which will represent this MQL: when the combined activities and profile of a person meet the thresholds of our lead scoring model(s), such an MQL record will be created in SFDC and will be linked to that individual's contact/lead record.
That MQL record will then be the basis for further telemarketing qualification and opportunity detection. When an opportunity gets detected, it is created as an opportunity record and linked to the MQL record. So we'll have Contact + MQL + Opportunity records which are interlinked and which have stages to identify their current place in the marketing/sales funnel. At some point an MQL will reach the end of its cycle. A specific stage will indicate this, and the stage on the Contact record will be reset to one of the earlier stages in the buying cycle. This way we'll have a permanent record of the MQLs that were generated, their current stage and the opportunities/revenue associated with them.
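If we build that custom object, historical MQL reporting could become a simple aggregate query over it. A sketch in SOQL, where `MQL__c` and its campaign lookup `Campaign__c` are hypothetical names for the custom object described above, not an existing implementation:

```sql
-- Count MQL records generated in the last two quarters, per campaign
-- (MQL__c / Campaign__c are illustrative custom object/field names)
SELECT Campaign__r.Name, COUNT(Id)
FROM MQL__c
WHERE CreatedDate = LAST_N_QUARTERS:2
GROUP BY Campaign__r.Name
```

Because the MQL records are never deleted, the same query answers "how many did we generate in period X" for any historical window, independent of where the person has since moved in the funnel.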
Of course this is quite a development and it introduces a new kind of (custom) object in SFDC which will need to be managed and related to all the other SFDC objects.
I was wondering: are there similar set-ups like this 'out there', or did you implement alternative solutions to have a more permanent representation of MQLs in your CRM (SFDC), so that you can historically report on marketing's performance in creating MQLs and their progression into opportunities/revenue?
Best regards
Roger

Thanks Adam, that is a very practical and effective solution. When considering this 'MQL date' field, did you ever look at just using the Lead Status field and SFDC's feature to store the values of that field historically? I am wondering whether that function would be of any use. If I understand things correctly, this 'history feature' could provide the records that at some point had a certain value in the Lead Status field, and it could track the different transitions from one value to another. But I am not sure this will suffice, or whether it may still cause reporting problems. Instead, a specific 'MQL' checkbox with a date stamp seems quite simple, and these simple solutions are often the most effective ;-)
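For comparison, the two options would look roughly like this in SOQL. `LeadHistory` is the standard field-history object (populated once history tracking is enabled for the Status field), while `MQL_Date__c` stands in for the date-stamp field; treat both queries as illustrative sketches:

```sql
-- Via field history: Status transitions in the last two quarters
SELECT LeadId, OldValue, NewValue, CreatedDate
FROM LeadHistory
WHERE Field = 'Status' AND CreatedDate = LAST_N_QUARTERS:2

-- Via a date-stamp field set at the moment the lead qualifies
SELECT COUNT(Id)
FROM Lead
WHERE MQL_Date__c = LAST_N_QUARTERS:2
```

A caveat on the history route: history rows are awkward to use in standard reports and their retention may be limited, which is one reason the simple date-stamp field tends to win in practice.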
Do I understand correctly that in your implementation the Lead record represents a response or combination of responses, rather than a person? If an existing person within your SFDC database shows the right behaviour/criteria, a Lead will be created for that person as well, and that Lead record is linked to the existing (Contact) record? And the Lead record is the item that is followed up on and qualified, by telemarketing or sales?
Thanks for your suggestions and insights.
Best regards
Roger

Similar Messages

  • How to truncate a partition of materialised view?

    Hi,
    I want to create a materialised view that holds 30 days of data. I want a fast refresh to load daily data, and I want the view to keep only 30 days of data.
    Will partitioning help me to truncate the 30-day-old partition? (Considering the materialised view has daily partitions.)
    How can this be done?
    This materialised view will have outer joins in it.
    Thanks,
    Vishu

    One way:
    ALTER TABLE <table_name> TRUNCATE PARTITION <partition_name> DROP STORAGE;
    http://www.psoug.org/reference/partitions.html
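    As a sketch of how this fits the 30-day requirement (table, partition and index names are illustrative; also note that truncating a partition of an MV container table can mark the MV as stale and force a complete refresh, so test it against your refresh strategy first):

```sql
-- With daily range partitions on the MV container table,
-- truncate the partition holding the 30-day-old data:
ALTER TABLE mv_sales_30d TRUNCATE PARTITION p_day_01 DROP STORAGE;

-- If the MV has global indexes, keep them usable in the same statement:
ALTER TABLE mv_sales_30d TRUNCATE PARTITION p_day_01
  DROP STORAGE UPDATE GLOBAL INDEXES;
```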

  • How long does it take for the custom field created in SFDC to show up in the field mapping list?

    How long does it take for the custom field created in SFDC to show up in the field mapping list? I hit the refresh field button, but it is not showing up after 5 min. Do I just need to have patience? 

    Hi,
    What do you have to do to the field in SFDC to make it accessible so that it shows up in the Eloqua field mapping area as a field to be mapped?   

  • How do I find out the actual status of my fibre or...

    Hi,
    I am posting here in despair because I cannot get an answer I can understand about why my fibre broadband order has been "delayed". 
    The call centre agents who ring me are not English and all they can do is repeat the same 5 sentences over and over again. 
    Apparently, there is no technical department that I can speak to either which means I am expected to just accept the fact that my order is "delayed" and be happy about it.
    My story is this.
    I placed my order with BT on the 12th of July and was assured that:
    - I could get fibre
    - My phone line would be activated by midnight on the 23rd July
    - An engineer would attend my property on the 24th July to install my fibre broadband
    I was very happy and booked the 24th off work.
    Then, two days ago, a person who I could barely understand because of a very thick accent told me that there was a "delay". 
    What delay I asked? 
    I was then told they would not know when I can have fibre until my phone line is activated. 
    Why I asked?  What has my phone line got to do with it?  If you know which exchange I am going to, why does my phone line have to be active before I can be told when an engineer can attend my property?
    I got absolutely no joy with getting an answer to this question.  Also, ominously, the agent mentioned something about there not being enough "slots at my exchange".  I asked when they were getting more.  At which point the guy repeated what he had said previously about there being a delay.
    My questions are very, very simple.
    When I placed my order, the sales person told me I could get fibre.  But, if there are not enough slots at my exchange then obviously I can't get fibre.  So which is it?  Can I get fibre or not?
    Regardless of whether or not I can get fibre, why can't BT tell me one way or the other now?  Why do I have to wait until my phone line is activated before they can tell me if fibre is supported or not?
    Why can't BT tell me when they will be getting more "slots" if this is indeed why my order is "delayed"?
    Assuming that I can get fibre (which should be a fair assumption, or BT should not have taken my order in the first place) and assuming that my phone line activates as planned, why can't BT leave my booking for an engineer for the 24th?  Why have they cancelled my booking and why are they now making me wait until my phone line is active before making a new booking?
    And finally, why can't anyone at BT actually give me any facts?  Either I can get fibre or I can't.  If there are not enough slots, then when are BT installing new ones?  Why should I be expected to wait around until the 24th for a magic date that may or may not materialise?  What kind of shop are BT running where a customer can place an order and be given no idea whatsoever of when that order will be fulfilled?
    Perhaps someone on this forum knows how the technical details work and can explain what is going on? 
    Or maybe someone can advise me on how to gain answers to my questions from BT themselves?
    Or maybe a BT representative could respond regarding why their order management team can't explain to a customer why their order has been delayed?
    At this moment in time I wish I had never signed up with them.
    Kind regards,
    Jean Milne

    I understand that an active phone line is required before the engineer can install my fibre.
    What I don't understand is why BT will only place the order for my fibre engineer on the date they say my phone line will activate.
    They have said my phone line will be activated "by midnight on the 23rd". 
    Therefore, there is *no reason* why BT should not leave the original engineer date of the 24th as it was.
    *If*, for whatever reason, the phone line does not activate on the 23rd as planned, *only then* should they cancel my engineer.
    Instead, they have cancelled my order already and will not re-issue it until my phone line is activated.
    And on top of that they have no idea how long the order will take once they do place it.  I am expected to sit here twiddling my thumbs until BT deign to bother to give me a date.
    And this is all aside from the fact that the sales person *lied* to me when he said I could have both my phone line and my broadband on the same day.  I *specifically* checked this with him at the time I placed my order. I told him that I need broadband as soon as possible or I can't do my job and I was assured that there would be no problems.
    And, this is all assuming there is not a fundamental reason that BT have kept hidden about why my order is delayed, e.g. not enough slots at the exchange.  One of the agents I spoke to said that and then back-tracked big-time when I asked why my order had been taken if BT did not have the infrastructure to fulfil it.
    All in all, the lies and the lack of information is completely unacceptable.
    I have had no response to my requests for *actual facts* from the so-called Order Management Team which is also unacceptable.
    I am left with no option but to sit here twiddling my thumbs waiting for BT to maybe or maybe not give me a date.
    Any other company that operated in this way would not be able to retain me as a customer but, unfortunately, in this case I have no choice but to suck up the bull BT are selling because they are the only company that can give me fibre.
    It is beyond a joke.
    At the very least, BT should have offered to set up ADSL for free in the interim for utterly messing me around.
    They have offered *nothing* except a measly text message stating that they "will be in touch".
    It is a disgrace.

  • How to stop bitmap conversion

    Hi All,
    Here is the situation.
    To get the reports one global temporary table has been created.
    Whenever a report has to be generated:
    1) It first inserts records into the temporary table from multiple tables based on a select statement (here the global temporary table is not analyzed after the insert of millions of records)
    Insert into temp_table (select * from table_a, table_b where <condition>);
    Number of records in temp_table = 5-10 million
    2) Now the reports are generated with a select statement that uses the temporary table along with other tables
    Select <column_list> from temp_table, table_c where <some_condition>;
    If I check the execution plan, it includes bitmap conversion to and from rowids.
    ID PARENT OPERATION OBJECT_NAME
    0 SELECT STATEMENT
    1 0 TABLE ACCESS BY INDEX ROWID LXRO_685993E3
    2 1 NESTED LOOPS
    3 2 INDEX FULL SCAN MXTEMPOID_MXOID_MXINDEX
    4 2 BITMAP CONVERSION TO ROWIDS
    5 4 BITMAP AND
    6 5 BITMAP CONVERSION FROM ROWIDS
    7 6 INDEX RANGE SCAN LXRO_685993E3_LXFROMLAT_INDEX
    8 5 BITMAP CONVERSION FROM ROWIDS
    9 8 INDEX RANGE SCAN LXRO_685993E3_LXTYPE_INDEX
    This query takes a long time to execute compared to the plan without bitmap conversion.
    3) Whenever I gather stats on the temp_table, the new execution plan looks like the following (no bitmap conversion). Gathering stats only affects subsequent queries, not the current query, which still runs with bitmap conversion. Here I want to stop the bitmap conversion for the current query, i.e. before it picks up the execution plan with bitmap conversion.
    ID PARENT OPERATION OBJECT_NAME
    0 SELECT STATEMENT
    1 0 TABLE ACCESS BY INDEX ROWID LXRO_685993E3
    2 1 NESTED LOOPS
    3 2 INDEX SKIP SCAN MXTEMPOID_MXOID_MXINDEX
    4 2 INDEX RANGE SCAN LXRO_685993E3_TYPFLATFID_INDEX
    4) Please explain how I can stop the bitmap conversion happening in the execution plan. (Had it been a permanent table, gathering stats once would be enough, but it is a temporary table.)
    5) After the report generation, records from the temp tables are deleted immediately.
    Thank you
    -Anurag

    user635138 wrote:
    Hi Jonathan,
    a) table is created as "on commit delete rows" (the default)
    b) user is getting rid of the temporary data by deleting it
    Not related to the index issue, but with "on commit delete rows", the users don't need to delete the GTT data; they can simply issue a "commit;" and their data will disappear. It's possible, of course, that the 3rd party application won't let you do this.
    c) everyone who uses this table inserts different volumes of data from different select queries
    So we need to know if there are any stats generated at any time - perhaps by program, possibly by dynamic sampling - that could cause plans to change when cursors were invalidated. Your early posts mentioned gathering stats on the GTT - but if you use any of the normal collection methods with an "on commit delete rows" table, you should get stats showing no data in the table, and that is likely to affect the execution plan. What method are you using to generate the plan, by the way?
    d) What indexes exist on this table? For this, see the syntax of the table and indexes; it might help to come to a solution.
    My mistake - I should have noticed that the bitmap conversion was happening on the other table, not the GTT. This suggests that you may need to consider the two-column index as a solution to the problem - but before you do that, take a look at the queries and data. You say that you get 5M to 10M rows in the GTT: that's quite a lot of "temporary" data - without looking at what the optimizer suggests, can you work out a sensible execution path for the query?
    In passing - you have "conditions" in the where clause, but how variable are these, and are they only join conditions or do you also have some filtering conditions on the non-GTT tables?
    Moreover, the user executes an MQL statement (this MQL is converted to SQL internally, which then reaches the database). I have very little knowledge of MQL and don't know what change has to be made in MQL to add hints to the SQL statement; we have no control over the SQL. So the only question is: is there any way to gather stats on the table before report generation?
    If you can load a real table with representative data, you could generate stats for it, then transfer the stats to the GTT. Another option - with the limitations of the dynamic_sampling hint - is to set the parameter optimizer_dynamic_sampling to the value 2, which will tell the optimizer to sample a few blocks from every table that has no stats (and this includes GTTs automatically). If you try this, remember that you seem to have collected zero stats on the GTT, so you may have to delete these before Oracle samples the table.
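    To make those two suggestions concrete, a sketch (the table name matches the thread; the row/block figures are invented placeholders - substitute values representative of a typical load):

```sql
-- Option 1: hand the optimizer representative stats for the GTT
BEGIN
  -- remove any existing "zero rows" stats first
  dbms_stats.delete_table_stats(user, 'TEMP_TABLE');
  dbms_stats.set_table_stats(
    ownname => user,
    tabname => 'TEMP_TABLE',
    numrows => 8000000,   -- placeholder: a typical load of ~8M rows
    numblks => 120000     -- placeholder block count
  );
END;
/

-- Option 2: let the optimizer sample unanalyzed tables (including GTTs) at parse time
ALTER SESSION SET optimizer_dynamic_sampling = 2;
```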
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.
    "Science is more than a body of knowledge; it is a way of thinking"
    Carl Sagan

  • JAVA Mapping for SFDC integration

    Hello Friends,
    I am working on an R/3-SFDC integration. While pushing data from PI to SFDC, SFDC expects a session ID and target URL, and through Java mapping we can achieve this.
    I found Uaruna's blog on Java mapping, which is very good:
    http://wiki.sdn.sap.com/wiki/display/XI/SFDCIntegrationusingPI7.1-HowtoaddSOAPEnvelopeinJava+Mapping
    I have tried that code but it is not working; it is building the SOAP envelope but not fetching the session ID from SFDC (it might not be able to log in to SFDC). Can anybody tell me how to pass the user ID and password for this code?
    Regards,
    Jayesh.

    Check 2 things in your code:
    a) Are you passing your communication component name and receiver communication channel name in the code?
    i.e. here:
    Channel channel = LookupService.getChannel("BC_SFDC","CC_Login");
    b) Check what namespace you use in the loginxml String - use the one required by the target system - and pass the credentials inside the login element. The element layout below follows the standard SFDC SOAP login call; username and password are placeholder variables:
    String loginxml = "<login xmlns=\"urn:enterprise.soap.sforce.com\">"
        + "<username>" + username + "</username>"
        // append your security token to the password if your org requires one
        + "<password>" + password + "</password>"
        + "</login>";

  • PI 7.0 as webservice with SFDC

    Hi Experts,
                 There is a requirement that
    1. SFDC sends a service request to PI 7.0 for an update in SAP r/3.
    2. PI 7.0 extracts the request and call an RFC on R/3 to do the update
    3. R/3 responds back to PI after the update.
    4. The response is sent back from PI to SFDC.
    All activities are to be synchronous.
    I am quite a novice to PI as well as SFDC; I need your kind help and guidance in this regard. Here are my queries:
    a. Is this possible to achieve? if yes then ...
    b. What all I need to know before I start with this project?
    c. How is each step mentioned above going to take place? What technologies are required for this?
    d. how do we extract data in step 2?
    e. How we put response back to SFDC in step 4?
    f. How  R/3 responds back to PI in step 3.
    g. Could PI be used as web-service provider?
    best wishes
    Anupam

    I am quite a novice to PI as well as SFDC; I need your kind help and guidance in this regard. Here are my queries
    a. Is this possible to achieve? if yes then ...
    yes
    b. What all I need to know before I start with this project?
    You need to know about SOAP and SOA in PI.
    You can refer SDN documents on how to publish an interface as web service
    *d. how do we extract data in step 2?*
    Using a mapping program between your SFDC request structure and the RFC structure.
    e. How we put response back to SFDC in step 4?
    You need to create a synchronous interface with request and response structures, and two mappings:
    SFDC request --> RFC request
    RFC response --> SFDC response
    f. How R/3 responds back to PI in step 3.
    RFC will have an RFC XML response structure.
    g. Could PI be used as web-service provider?
    Yes
    Edited by: Debashish on Jul 14, 2011 5:08 PM
    Edited by: Debashish on Jul 14, 2011 5:09 PM

  • How to handle Asynch Synch (IDoc - SOAP) interfce without using BPM

    Hi Experts,
    I have an IDoc to SOAP scenario where I need to handle the SOAP response, but I am unable to figure out how this can be handled without BPM. Specifically it is an interface with SFDC (salesforce.com) CRM: I need to handle the IDoc - SOAP interface, and the response needs to be captured in another IDoc which can be sent back to SAP.
    Any suggestion will be helpful.
    Regards,
    Nitin Patil

    Hi,
    without a BPM you cannot do it with the IDoc adapter unless you have PI 7.3
    you need to use a different adapter like ABAP proxy (on Java) or RFC
    hope it's clear,
    Regards,
    Michal Krawczyk

  • How do I update installed AIR applications? // General Questions

    Hi,
    first of all, I'm completely new to AIR. However, I made my way through your product documentation and I'm now able to deploy AIR and an AIR application (SFDC Chatter) and register it.
    Works fine so far, but some questions:
    - How can I update installed AIR applications? If I just try to re-install one, the installer fails with exit code "9"
         -> How can I update them directly? If this is not possible
          -> How can I uninstall AIR applications in a large enterprise environment?
    - Where is the ARH tool that is described in your product documentation?
    - When installing AIR as the SYSTEM user (I'm using SCCM), where is the logfile?
    - Do you recommend updating AIR on a regular basis? Are "older" applications compatible with newer AIR versions?
    Thanks for your replies!
    Ben

    Hi Ben,
    I'd recommend taking a good look at our captive runtime option that is available in AIR 3.0 and higher.  While you have to roll your own installer (and updater) this could be as simple as a zip/batch file combination.  It also allows you to use the runtime that you prefer and not be dependent on what the user installs or uninstalls.  You do have to be mindful of security implications, but it's definitely the way I'd go.
    http://www.adobe.com/devnet/air/articles/air3-install-and-deployment-options.html
    http://help.adobe.com/en_US/air/build/WSfffb011ac560372f709e16db131e43659b9-8000.html
    As for updating, it should be as simple as increasing your app descriptors version number and installing again.  This article explains how you can also use our update framework: http://www.adobe.com/devnet/air/articles/air_update_framework.html
    The ARH tool can be found here:
    Mac: http://airdownload.adobe.com/air/distribution/latest/mac/arh
    Win: http://airdownload.adobe.com/air/distribution/latest/win/arh.exe
    Linux: http://airdownload.adobe.com/air/distribution/latest/lin/arh
    Thanks,
    Chris
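    For reference, the version bump Chris mentions happens in the application descriptor XML. A minimal sketch, assuming an AIR 3.0 namespace and made-up id/version values:

```xml
<!-- app descriptor: raise versionNumber above the installed version -->
<application xmlns="http://ns.adobe.com/air/application/3.0">
    <id>com.example.SFDCChatter</id>        <!-- must match the installed app -->
    <versionNumber>1.0.1</versionNumber>    <!-- was 1.0.0 -->
    <filename>SFDCChatter</filename>
</application>
```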

  • How to use a function in a Where Clause?

    Hi,
    I've got a doubt. If MY_FUNC is a function that returns a boolean, can I use it in a where clause to write a query like this?:
    select ...
    from table a
    where ...
    and MY_FUNC (a.field) = true
    Thanks!
    Edited by: Mark1970 on 2-lug-2010 3.27

    Bear in mind that this could kill your performance.
    Depending on what you're doing, how many tables and other predicates are involved, you might want to try to eliminate all other data early before applying your function predicate otherwise your function might be called more times than you might have imagined. Strategies for this include subquery factoring and the old ROWNUM trick for materialising an inline view.
    If performance is impacted, you might also want to consider using a function-based index provided that the function is deterministic.
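    One point worth adding: classic Oracle SQL (unlike PL/SQL) has no BOOLEAN type, so `MY_FUNC(a.field) = true` will not even parse; the usual workaround is to return a flag value such as 'Y'/'N'. A sketch with illustrative names and illustrative function logic, plus the function-based index suggested above (which requires DETERMINISTIC):

```sql
CREATE OR REPLACE FUNCTION my_func (p_field IN VARCHAR2)
  RETURN VARCHAR2
  DETERMINISTIC
IS
BEGIN
  -- illustrative logic; the real test goes here
  RETURN CASE WHEN p_field LIKE 'A%' THEN 'Y' ELSE 'N' END;
END;
/

SELECT a.*
FROM   some_table a
WHERE  my_func(a.field) = 'Y';

-- Optional, if performance matters and the function stays deterministic:
CREATE INDEX some_table_func_idx ON some_table (my_func(field));
```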

  • Should I use a materialised view?

    I am using the following sql in my code. This takes a long time to execute and I need to tune this. Please review and suggest if I should be using a materialized view instead.
    Main Select statement for interest calculation
    SELECT intratechgcur.SECURITY, intratechgcur.srl_no, intratechgcur.schg_type,
           CASE WHEN intratechgcur.effective_date < ADt_Start_Date THEN ADt_Start_Date
                ELSE intratechgcur.effective_date END AS start_date,
           CASE WHEN intratechgcur.effective_date < ADt_End_Date
                     AND NVL(intratechgnext.effective_date, ADt_End_Date) > ADt_End_Date THEN ADt_End_Date
                ELSE NVL(intratechgnext.effective_date, ADt_End_Date) END AS end_date,
           intratechgcur.rate, intratechgcur.face_value, intratechgcur.listing_int, intratechgcur.comm_prod_int,
           intratechgcur.sec_create_int, intratechgcur.int_type, intratechgcur.interest_key, intratechgcur.margin,
           intratechgcur.FLOOR, intratechgcur.cap, intratechgcur.reset_freq, intratechgcur.cmpd_y_n, intratechgcur.cmpd_freq,
           intratechgcur.comp_type, intratechgcur.int_day, intratechgcur.int_day_1, intratechgcur.int_day_2, intratechgcur.int_dtls_yn
    FROM v_intratechg intratechgcur, v_intratechg intratechgnext
    WHERE intratechgcur.SECURITY = AS_Security
    AND intratechgcur.effective_date < ADt_End_Date
    AND intratechgnext.SECURITY (+) = intratechgcur.SECURITY
    AND intratechgnext.srl_no (+) = intratechgcur.srl_no + 1
    ORDER BY intratechgcur.SECURITY, intratechgcur.effective_date, intratechgcur.srl_no;
    The code for the view V_INTRATECHG is:
    CREATE OR REPLACE VIEW V_INTRATECHG AS
    SELECT     security,
         schg_type,
         effective_date,
         SUM(1) over (PARTITION BY security ORDER BY security, effective_date ASC, schg_type ASC) AS srl_no,
         face_value,
         rate,
         listing_int,
         comm_prod_int,
         sec_create_int,
         int_type,
         interest_key,
         margin,
         FLOOR,
         cap,
         NVL(reset_freq, 'DAILY') AS reset_freq,
         NVL(cmpd_y_n, 'N') AS cmpd_y_n,
         NVL(cmpd_freq, 'DAILY') AS cmpd_freq,
         NVL(comp_type, 'N') AS comp_type,
         int_day, int_day_1, int_day_2, int_dtls_yn
    FROM
         (SELECT     security.security, 'IM' AS schg_type,
              GREATEST(security.prv_int_dt, NVL(security.allot_date, security.prv_int_dt),
              NVL(security.first_int_date,security.prv_int_dt)) AS effective_date,
              DECODE(intday.int_day_1, 'ACD', NVL((SELECT interest_amt FROM securityschddtls A WHERE security.security = A.security
              AND A.adhoc_schd_date > GREATEST(security.prv_int_dt, NVL(security.allot_date, security.prv_int_dt),
              NVL(security.first_int_date,security.prv_int_dt))
              AND a.rectype ='L' AND A.ADHOC_SCHD_DATE = (SELECT MIN(ADHOC_SCHD_DATE) FROM securityschddtls
    WHERE      securityschddtls.adhoc_schd_date > GREATEST(security.prv_int_dt, NVL(security.allot_date, security.prv_int_dt), NVL(security.first_int_date,security.prv_int_dt))
         AND securityschddtls.security = A.security AND securityschddtls.rectype='L')),
              NVL(secchg.rate, security.interest)), NVL(secchg.rate, security.interest)) AS rate,
              NVL(secchg.face_value, security.face_value) AS face_value,
              NVL(secchg.listing_int, security.listing_int) AS listing_int,
              NVL(secchg.comm_prod_int, security.comm_prod_int) AS comm_prod_int,
              NVL(secchg.sec_create_int,security.sec_create_int) AS sec_create_int,
              NVL(secchg.int_type, security.int_type) AS int_type,
              NVL(secchg.interest_key, security.interest_key) AS interest_key,
              NVL(secchg.margin, security.margin) AS margin,
              NVL(secchg.FLOOR, security.FLOOR) AS FLOOR,
              NVL(secchg.cap, security.cap) AS cap,
              NVL(secchg.reset_freq, security.reset_freq) AS reset_freq,
              NVL(secchg.cmpd_y_n, security.cmpd_y_n) AS cmpd_y_n,
              NVL(secchg.cmpd_freq, security.cmpd_freq) AS cmpd_freq,
              NVL(secchg.comp_type, security.comp_type) AS comp_type,
              NVL(secchg.int_day, security.int_day) AS int_day, intday.int_day_1,
              intday.int_day_2, 'Y' AS int_dtls_yn
              FROM security, assetype, intday, securityschddtls secdtls,
                   (SELECT secchg.security AS security, secchg.call_date AS effective_date,
                           NVL(secchg.rate, 0) AS rate, secchg.face_value,
                           SUM(1) OVER (PARTITION BY secchg.security
                                        ORDER BY secchg.security, secchg.call_date ASC) AS srl_no,
                           NVL(secchg.listing_int, 0) AS listing_int, NVL(secchg.comm_prod_int, 0) AS comm_prod_int,
                           NVL(secchg.sec_create_int, 0) AS sec_create_int,
                           secchg.int_type, secchg.interest_key,
                           NVL(secchg.margin, 0) AS margin, NVL(secchg.FLOOR, 0) AS floor,
                           NVL(secchg.cap, 0) AS cap, secchg.reset_freq,
                           secchg.cmpd_y_n, secchg.cmpd_freq, secchg.comp_type, secchg.int_day
                    FROM secchg) secchg
              WHERE security.asset_type = assetype.asset_type
                   AND security.int_day = intday.int_day
                   AND assetype.int_y_n = 'Y'
                   AND security.rectype = 'L'
                   AND assetype.rectype = 'L'
                   AND intday.rectype = 'L'
                   AND secchg.security (+)= security.security
                   AND secchg.srl_no (+)= 1
                   AND secdtls.security (+)= security.security
                   AND secdtls.srl_no (+)= 1
                   AND secdtls.rectype (+)= 'L'
              UNION ALL
              SELECT     schedules.security,
                   DECODE(schedules.schd_past_yn, 'Y', 'RP', 'RS') AS schg_type,
                   DECODE(intday.int_day_1, 'ACD',security_cashflow.start_date,security_cashflow.inflow_date) AS effective_date,
                   --commented by vijai
                   -- DECODE(intday.int_day_1, 'ACD', intschdamt.amount, NVL(intratechg.rate,security.interest)) AS rate,
                   DECODE(intday.int_day_1, 'ACD', intschdamt.amount,decode(security_cashflow.start_Date,intratechg.value_Date, intratechg.rate, security.interest)) as rate,
                   decode(nvl(schedules.tot_face_value - schedules.cum_face_value,security.face_value),0,security.face_value,schedules.tot_face_value - schedules.cum_face_value,security.face_value) AS face_value,
                   NVL(intratechg.listing_int,security.listing_int) as listing_int,
                   NVL(intratechg.comm_prod_int,security.comm_prod_int) as comm_prod_int,
                   NVL(intratechg.sec_create_int,security.sec_create_int),
                   NVL(intratechg.int_type,security.int_type) as int_type,
                   nvl(intratechg.interest_key,security.interest_key) as interest_key,
                   nvl(intratechg.margin,security.margin) as margin,
                   nvl(intratechg.FLOOR,security.floor) as floor,
                   nvl(intratechg.cap,security.cap) as cap,
                   nvl(intratechg.reset_freq,security.reset_freq) as reset_freq,
                   nvl(intratechg.cmpd_y_n,security.cmpd_y_n) as cmpd_y_n,
                   nvl(intratechg.cmpd_freq,security.cmpd_freq) as cmpd_freq,
                   nvl(intratechg.comp_type,security.comp_type),
                   nvl(intratechg.int_day,security.int_day),
                   intday.int_day_1, intday.int_day_2,
                   DECODE(intratechg.security, NULL, 'N', 'Y') AS int_dtls_yn
              FROM     v_schedules schedules, security, intday, intratechg, v_schedules intschdamt, security_cashflow
              WHERE schedules.security = security.security
              AND schedules.red_yn = 'Y'
              AND security.int_day = intday.int_day
              AND security.rectype = 'L'
              AND intday.rectype = 'L'
              AND intratechg.security (+)= schedules.security
              AND intratechg.value_date (+)= schedules.schd_date
              AND intratechg.rectype (+)= 'L'
              AND intschdamt.security (+)= schedules.security
              AND intschdamt.schd_date (+)= schedules.schd_date
              AND intschdamt.red_yn (+)= 'N'
              AND security_cashflow.inflow_type = 'INT'
              AND security_cashflow.inflow_date = schedules.schd_date
              AND security.security = security_cashflow.security
              AND schedules.security = security_cashflow.security
              UNION ALL
              SELECT     intratechg.security, 'IR' AS schg_type,
                   intratechg.value_date AS effective_date,
                   NVL(intratechg.rate,security.interest),
                   security.face_value,
                   NVL(intratechg.listing_int,security.listing_int),
                   NVL(intratechg.comm_prod_int,security.comm_prod_int),
                   NVL(intratechg.sec_create_int,security.sec_create_int),
                   nvl(intratechg.int_type,security.int_type),
                   nvl(intratechg.interest_key,security.interest_key),
                   nvl(intratechg.margin,security.margin),
                   nvl(intratechg.FLOOR,security.floor),
                   nvl(intratechg.cap,security.cap),
                   nvl(intratechg.reset_freq,security.reset_freq),
                   nvl(intratechg.cmpd_y_n,security.cmpd_y_n),
                   nvl(intratechg.cmpd_freq,security.cmpd_freq),
                   nvl(intratechg.comp_type,security.comp_type),
                   nvl(intratechg.int_day,security.int_day),
                   intday.int_day_1, intday.int_day_2, 'Y' AS int_dtls_yn
              FROM     intratechg, security, intday
              WHERE intratechg.security = security.security
              AND security.int_day = intday.int_day
              AND intratechg.rectype = 'L'
              AND security.rectype = 'L'
              AND intday.rectype = 'L'
              AND NOT EXISTS (SELECT 1 FROM v_schedules schedules
                              WHERE schedules.security = intratechg.security
                              AND schedules.schd_date = intratechg.value_date
                              AND schedules.red_yn = 'Y'))
              ORDER BY security, srl_no;
    The code for the view V_schedules is:
    CREATE OR REPLACE VIEW V_SCHEDULES AS
    SELECT schdall.security,
           schdall.schd_date,
           schdall.schd_type,
           schdall.percent,
           schdall.units_o,
           schdall.units_n,
           schdall.amount,
           schdall.sequences,
           schdall.act_sch_dt,
           schdall.security_n,
           schdall.prior_act,
           schdall.red_amount,
           schdall.ben_refer,
           schdall.round_method,
           schdall.round_dec,
           schdall.average_y_n,
           schdall.schd_past_yn,
           CASE WHEN schd_type IN (sysschd.red, sysschd.disred) THEN 'Y' ELSE 'N' END AS red_yn,
           SUM(CASE WHEN schd_type IN (sysschd.red, sysschd.disred) THEN schdall.red_amount ELSE 0 END)
               OVER (PARTITION BY schdall.security) AS tot_face_value,
           SUM(CASE WHEN schd_type IN (sysschd.red, sysschd.disred) THEN schdall.red_amount ELSE 0 END)
               OVER (PARTITION BY schdall.security
                     ORDER BY schdall.security, schdall.schd_date ASC) AS cum_face_value,
           SUM(CASE WHEN schd_type IN (sysschd.red, sysschd.disred) THEN schdall.red_amount ELSE 0 END)
               OVER (PARTITION BY schdall.security
                     ORDER BY schdall.security, schdall.schd_date DESC) AS to_be_redeemed,
           SUM(CASE WHEN schd_type = sysschd.INT THEN 0 ELSE 1 END)
               OVER (PARTITION BY schdall.security, schdall.schd_date, schd_type) AS no_of_schd
    FROM
                   (SELECT      schedules.security_o AS security,
                                  schedules.schd_date,
                                  schedules.schd_type,
                                  schedules.percent,
                                  schedules.units_o,
                                  schedules.units_n,
                                  schedules.amount,
                                  schedules.sequences,
                                  schedules.act_sch_dt,
                                  schedules.security_n,
                                  schedules.prior_act,
                                  schedules.red_amount,
                                  schedules.ben_refer,
                                  schedules.round_method,
                                  schedules.round_dec,
                                  schedules.average_y_n,
                                  DECODE(schedules.schd_type,'RED','Y','Y') AS schd_past_yn
              FROM           schdpast schedules
              WHERE           prior_act = 'A'
              AND                rectype = 'L'
              UNION ALL
              SELECT schedules.security_o AS security,
                     schedules.schd_date,
                     schedules.schd_type,
                     schedules.percent,
                     schedules.units_o,
                     schedules.units_n,
                     schedules.amount,
                     schedules.sequences,
                     schedules.act_sch_dt,
                     schedules.security_n,
                     schedules.prior_act,
                     schedules.red_amount,
                     schedules.ben_refer,
                     schedules.round_method,
                     schedules.round_dec,
                     schedules.average_y_n,
                     'N' AS schd_past_yn
              FROM schedules
              WHERE prior_act = 'A'
              AND rectype = 'L'
              AND Process_date IS NULL) schdall,
              (SELECT MAX(redschdtype) AS red,
                      MAX(disredtype) AS disred,
                      MAX(intschdtype) AS INT
               FROM sysparamschd
               WHERE rectype = 'L') sysschd
    ORDER BY security, schd_date, schd_type;

    Too much SQL... makes me eyes hurt.
    I think you're running down the wrong alley here. The very first and fundamental principle of performance tuning is identifying the performance problem. Saying that there is a problem is not identifying the actual problem.
    You cannot run down the alley with a knife looking for a performance problem to kill if you do not know what it looks like. Good that you are running though - the old Klingon saying of "a running warrior can slit more throats" holds very true. :-)
    Why is the existing SQL slow? You first need to identify that. Sure, a materialised view can make the end-query much faster, as it no longer has to do all the work - that has now been done in batch by a DBMS_REFRESH job updating and maintaining that materialised view. But that work is still done... so have you actually fixed the cause of the performance problem, or merely hidden it by addressing the symptoms?
    How does one find and identify the underlying performance problem with too-much-SQL-that-makes-Billy's-eyes-hurt? Software Engineering 101. Take any complex problem. Break it down into lots of smaller little problems. Solve each one in turn.
    Take the SQL, break it down into simpler pieces, and check each piece for performance issues. Look at the execution plan and cost. Determine whether you (via the physical db design) are providing optimal I/O paths for the CBO to get to the required data with as little I/O as possible.
    Once you deal with the facts, you can make an informed decision on whether or not a materialised view will actually fix the cause of the performance problem.
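    In Oracle, that breakdown can be as simple as running each extracted piece on its own and eyeballing the plan the CBO chose (a generic sketch - the predicate and bind variable here are placeholders, not from the thread):

```sql
-- Check the plan and cost of one extracted piece of the big view.
EXPLAIN PLAN FOR
SELECT security, schd_date, red_amount
FROM   v_schedules
WHERE  security = :some_security;

-- Display the chosen plan, including estimated cost and cardinality.
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```

    Comparing the plans of the individual UNION ALL branches usually shows which branch (or which outer join) is responsible for most of the cost.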

  • What is the use of materialised view in ORACLE

    Hi All,
    What is the use of a materialised view in Oracle? Can anyone please help me out by giving a real-time example in a banking application (how MVs are used in banking applications)?
    Thanks & Regards,
    SB2013

    SB2013 wrote:
    What is the use of Materialised view in ORACLE.Can anyone please help me out by giving a real time example in banking application (How MV is used in banking applications).
    http://docs.oracle.com/cd/E11882_01/server.112/e25789/schemaob.htm#CNCPT411
    Just add "for example, in a banking application" at the end of each paragraph there.
    E.g.
    >
    In data warehouses, you can use materialized views to compute and store data generated from aggregate functions such as sums and averages, for example in a banking application
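    To make that concrete, a sketch of such an aggregate MV (the table and column names here are hypothetical, not from any real banking schema):

```sql
-- Hypothetical example: pre-aggregate posted transactions into
-- per-account daily totals so reports need not scan the detail table.
CREATE MATERIALIZED VIEW mv_daily_account_totals
  BUILD IMMEDIATE
  REFRESH COMPLETE ON DEMAND
AS
SELECT account_id,
       TRUNC(txn_date) AS txn_day,
       SUM(amount)     AS day_total,
       COUNT(*)        AS txn_count
FROM   account_transactions
GROUP BY account_id, TRUNC(txn_date);
```

    The MV is then refreshed in batch (e.g. via DBMS_MVIEW.REFRESH) on whatever schedule the reporting requirements allow.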

  • OLAP on 11g and Materialised Views with Multiple Value-Based Hierarchies

    Hello OLAPians
    I am trying to set up Oracle BIEE to report on an OLAP cube with pre-aggregated data. As OBIEE is not able to hook into OLAP directly, I have to create an SQL cube view.
    Currently I am on a 10g OLAP environment and am using the Oracle sample SQL cube view generator to create an SQL view of my cube.
    The cube itself has multiple dimensions, and these dimensions have multiple VALUE-based (ragged) hierarchies; dimension members can also be shared across hierarchies.
    Initially I had a problem running the view generator plugin because there is a bug in it that does not finish if there are multiple value-based hierarchies present. I was able to get around this by manually editing the limitmap for the cube view and manually creating the SQL view.
    The question I want to ask is: how robust are the 11g materialised views with multiple value-based hierarchies and the sharing of dimension members across different hierarchies?
    Has anyone successfully been able to create a cube view and import it into OBIEE without the hassle of manually editing the limitmap?
    A problem arises with the value-based setup whereby if the client creates a new depth in the ragged hierarchy, I need to create the limitmap and the cube view over again, and then re-map the BI Administration mappings.

    The simple answer to your question,
    how robust is the 11g materialised views with multiple value-based hierarchies...?
    is that materialized views are not supported on top of value-based hierarchies in 11g. The reason is that it is not possible to write a reasonable SQL statement that aggregates a fact over a value-based hierarchy. Such a SQL statement is necessary if we want to create a rewritable MV on top of the cube.
    But I suspect this is not what you are really asking. If you are trying to set up OBIEE on top of the cube in 10g using the view generator, then you will probably want to use the "ET VIEWS" that are generated automatically in 11g. These are generated whether or not you enable materialized views on top of your cube. I am not aware of any issues with the generated value-based hierarchy view support in 11g. Members may be shared between value hierarchies and you will not need to generate or modify limit maps.

  • How to provide joins between oracle tables and sql server tables

    Hi,
    I have a requirement that I need to generate a report from two different databases, i.e. Oracle and SQL Server.
    How can I provide joins between Oracle tables and SQL Server tables? Any help on this?
    Regards,
    Malli

    user10675696 wrote:
    I have a requirement that i need to generate a report form two different data base. i.e Oracle and Sql Server.
    Bad idea most times. Heterogeneous joins do not exactly scale, and performance can be severely degraded by network speed and bandwidth availability. And there is nothing you can do in the application and database layers to address a performance issue at the network level in this case - your code's performance is simply at the mercy of network performance. With a single glaring fact - network performance is continually degrading. All the time. Always. Until it is upgraded. When the performance degradation starts all over again.
    Unless the tables are small (a few 1000 rows each) and the row volumes static, I would not consider doing a heterogeneous join. Instead I would rather go for a materialised view on the Oracle side, use a proper table and index structure, and do a local database join.
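    A sketch of that approach (the link, table, and column names are hypothetical), assuming a heterogeneous database link to SQL Server has been set up, e.g. via the Database Gateway for ODBC:

```sql
-- Snapshot the SQL Server table on the Oracle side, then join locally.
CREATE MATERIALIZED VIEW mv_mssql_orders
  BUILD IMMEDIATE
  REFRESH COMPLETE ON DEMAND
AS
SELECT order_id, customer_id, order_date, amount
FROM   orders@mssql_link;  -- heterogeneous db link to SQL Server

-- Reports then join mv_mssql_orders to local Oracle tables,
-- with ordinary indexes created on the MV's container table.
CREATE INDEX mv_mssql_orders_cust_idx ON mv_mssql_orders (customer_id);
```

    The network cost is paid once per refresh, in batch, instead of on every report query.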

  • Insufficient priv error when creating a materialised view using DB link!!!!

    Hi,
    I have two databases, db1 and db2.
    I have created a database link from db2 to access user1 schema in db1.
    When I try to create a materialised view that accesses a user1 table, as user 'user2' in the db2 database, I get the error "ORA-01031: insufficient privileges".
    The user2 has the privileges "create view", "create any view", "create materialised view", etc.
    I am able to select data from the table using the database link, but creating the materialised view gives this error.
    I want to know if the user "user1" should be granting any privilege to user2; if so, how is that possible, as user2 doesn't exist in db1!
    i.e. should I give a command something like "grant select on user1.table to user2@db2"? (This doesn't work, as it says user2@db2 doesn't exist - obviously, as it takes user2@db2 as a username in the db1 schema.)
    Or is this a problem with user2's privileges in the db2 database? If so, what are all the privileges that have to be given to user2 in the db2 schema?
    regards,

    Hi,
    A user in db1 cannot grant privileges to a user in the db2 database. If a user wants to access objects in a remote database through a database link, the grant has to be made locally, in the remote database, to the user that the database link connects as. The objects on the remote database are then accessed through the link as that user.
    If user1 in db1 could grant privileges to user2 on objects in the db2 database, that would be a serious security issue. It is not permitted.
    Regards
    e.g.
    conn userx@db2
    grant select on userx.table1 to user2;
    conn user2@db2
    select count(*) from userx.table1;
    conn user1@db1
    select count(*) from userx.table1@db2_dblink;
    Edited by: skvaish1 on Apr 29, 2010 3:57 PM
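    One more thing worth checking (an assumption based on common causes of ORA-01031 during MV creation, not something stated in the thread): creating a materialized view also creates a container table, so the creating user typically needs the CREATE TABLE privilege, plus quota on its default tablespace, in addition to CREATE MATERIALIZED VIEW. A sketch (user, link, table, and tablespace names are hypothetical):

```sql
-- As a DBA in db2 (the database where the MV is being created):
GRANT CREATE TABLE TO user2;
GRANT CREATE MATERIALIZED VIEW TO user2;
ALTER USER user2 QUOTA UNLIMITED ON users;  -- tablespace name is an example

-- Then, connected as user2 in db2:
CREATE MATERIALIZED VIEW mv_remote_tab
  REFRESH COMPLETE ON DEMAND
AS
SELECT * FROM user1.some_table@db1_link;
```

    If SELECT over the link already works, the missing piece is usually one of these local privileges rather than anything on the remote side.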
