Conditional Filters

I need help getting a frame to disappear using conditional filters. Basically, I want the contents of the frame not to show in the printed job if there is no data in the fields inside that frame.
I'm new to the program and the scripting language used to create filters.
I thought if I used something like:
iif(empty(DB.Field)), deleted()
If I apply this condition to the frame, will it remove the frame from the page when there is no data in DB.Field?
Any help would be appreciated

You do have the correct forum for Label Toolbox questions.
Your conditional filter would just be empty(db.field).
Thanks,
Tammy Fritz
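In Label Toolbox terms the suppression filter is just a boolean test per record. As a rough illustration of the same logic outside the product (plain Python, not Label Toolbox's scripting language; the record shape and field name are made up):

```python
def frame_is_suppressed(record, field):
    """Return True when the frame should be hidden, i.e. when the bound
    field is missing or blank -- the equivalent of the conditional
    filter empty(db.field)."""
    value = record.get(field)
    return value is None or str(value).strip() == ""

# A frame bound to "address2" would print only for the first record.
records = [{"address2": "Suite 100"}, {"address2": ""}]
visible = [not frame_is_suppressed(r, "address2") for r in records]
```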

Similar Messages

  • OBIEE 11g Report Level Conditional Filters

    Is it possible to have conditional filtering of data in OBIEE Answers once the data has been retrieved,
    i.e. without hitting the database again?
    e.g. I have the sales details for a region. Once the report is displayed in the dashboard, can the users filter records based on, say, sales figures > 10,000?
    thanks

    Hi,
    I understand that you want the cache to have the latest data instead of stale data. For the BI Server to have the latest data, the only way is to get it from the database, so it has to hit the DB. We cannot avoid that, but we can control when it hits and how often, according to our usage and the changes to the DB.
    We can use the cache seeding option to keep the cache up to date (using Agents); the time interval is up to you (generally right after the ETL load).
    http://rampradeeppakalapati.blogspot.in
    Thanks,
    Ram

  • Conditional Filters using case statements

    Hello,
    I have a table view which displays Total Quota and Theatre Quota. Against the Total Quota, there are 2 values - Rollover Revenue and Theatre Revenue. Against the Theatre Quota, there is only Theatre Revenue.
    What I want to accomplish is to display only the Rollover Revenue Aggregated Quarterly number whenever there is a Total Quota number, and not display the Theatre Revenue number.
    In the table view,
    Year Name | Quarter Name | Quarter Name Sort | Person Region | Quota Name  | Quarterly Quota | Credit Amount | QTD Attainment | Credit Type Name
    YEAR-2012 | QTR-1-2012   | QTR-1-2012        | 750           | Total Quota | 6,128,500       | 5,492,081     | 89.62%         | Rollover Revenue Aggregated Quarterly
              |              |                   | 750           |             | 6,128,500       | 5,344,000     | 87.20%         | Theatre Revenue
              | QTR-2-2012   | QTR-2-2012        | 750           | Total Quota | 5,922,500       | 5,890,264     | 99.46%         | Rollover Revenue Aggregated Quarterly
              |              |                   | 750           |             | 5,922,500       | 6,120,000     | 103.33%        | Theatre Revenue
              | QTR-3-2012   | QTR-3-2012        | 750           | Total Quota | 5,716,500       | 0             | 0.00%          |
              | QTR-4-2012   | QTR-4-2012        | 750           | Total Quota | 5,510,500       | 0             | 0.00%          |
    I used an example in the following link:
    http://oraclebizint.wordpress.com/2008/02/06/oracle-bi-ee-101332-conditional-filters-using-case-statements-in-filters/
    and applied the example in my scenario:
    CASE WHEN Quota."Quota Name" = 'Total Quota' THEN "Credit Type"."Credit Type Name" ELSE 'Dummy' END != 'Theatre Revenue'
    I still get duplicate rows.
    Thanks.

    Could you suggest any solutions for this problem, where I can conditionally hide the number only for a certain type of data and not for all types of data?
    Thanks.
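    The CASE-based filter evaluates a simple per-row test; here is the same logic in plain Python (column values mirror the report, the rows themselves are illustrative). One subtlety it exposes: any row whose Quota Name arrives NULL or blank falls into the 'Dummy' branch and therefore passes the filter, which can look like duplicate rows in the table view.

```python
def keep_row(quota_name, credit_type):
    # Mirrors: CASE WHEN "Quota Name" = 'Total Quota'
    #          THEN "Credit Type Name" ELSE 'Dummy' END != 'Theatre Revenue'
    branch = credit_type if quota_name == "Total Quota" else "Dummy"
    return branch != "Theatre Revenue"

rows = [
    ("Total Quota", "Rollover Revenue Aggregated Quarterly"),  # kept
    ("Total Quota", "Theatre Revenue"),                        # filtered out
    (None, "Theatre Revenue"),                                 # kept via the 'Dummy' branch
]
kept = [r for r in rows if keep_row(*r)]
```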

  • Using multi-conditional filters (if/then/elseif/else)

    Hello,
    I need to create multiple policies for adding disclaimers.
    We have a default disclaimer and now two unique ones for domains that need to maintain their own.
    I created text resources for all the disclaimers and would like to make a content filter that does the following:
    if the senderdomain = UniqueDomain1 then
    add disclaimerDomain1
    Else if the senderdomain = UniqueDomain2 then
    add disclaimerDomain2
    Else
    Add default Disclaimer
    I created the following filter syntax:
    add_disclaimer: if (recv-listener == "relay") AND (mail-from == "^.*@domain1\\.com$")
    {add-footer("domain1Disclaimer");}
    elif (recv-listener == "relay") AND (mail-from == "^.*@domain2\\.com$")
    {add-footer("domain2Disclaimer");}
    elif (recv-listener == "relay")
    {add-footer("defaultDisclaimer");}
    When I submit the filter I get the error "An error occurred during processing: Syntax error at the end of filter".
    Does anyone know if IronPort filters support the Python-style "elif" keyword? If so, what am I doing wrong? If not, does anyone have suggestions for another way I can deliver this functionality?
    Thank you!
    Steven

    You can probably get the IF/ELSE IF/ELSE logic to work on the Ironport message filters, but I would suggest that you use a combination of Outgoing mail policies and outgoing content filters.
    1. Create all your different disclaimers in the [Mail Policies > Text Resources] section.
    2. Create a corresponding outgoing content filter for each sender domain and assign it.
    i.e.
    Let's say you have two domains and then the default domain:
    domain1, domain2, default domain
    if "mail-from" ends with "@domain1"
    then
    apply domain1-disclaimer-footer
    deliver();
    if "mail-from" ends with "@domain2"
    then
    apply domain2-disclaimer-footer
    deliver();
    if "mail-from" ends with "default-domain"
    then
    apply default-disclaimer-footer
    deliver();
    3. Then go to your outgoing mail policies > default policies > content filters and enable all three filters.
    Similarly, you can create two additional customized outgoing mail policies, and each policy can have its own outgoing content filters.
    Bottom line: you can definitely do it with message filters, but the outgoing mail policies/outgoing content filters may be easier to maintain.
    Let me know if you have any questions or concerns.
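    The first-match selection both approaches aim for can be sketched in ordinary code (plain Python, not IronPort filter syntax; the domains and footer names are the poster's examples):

```python
import re

# Ordered (pattern, footer) pairs: the first match wins,
# mirroring the if / elif / else chain.
DISCLAIMER_RULES = [
    (re.compile(r"^.*@domain1\.com$"), "domain1Disclaimer"),
    (re.compile(r"^.*@domain2\.com$"), "domain2Disclaimer"),
]
DEFAULT_FOOTER = "defaultDisclaimer"

def pick_footer(mail_from):
    for pattern, footer in DISCLAIMER_RULES:
        if pattern.match(mail_from):
            return footer
    return DEFAULT_FOOTER
```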

  • Output Condition filtering

    Hi All
    Is there a way I can set up separate output conditions per plant?
    Requirement:
    I wish to have 2 GR slip outputs in plant A for 101 movements only
    Plants B,C,D are to have only 1 GR Slip output for movement type 101
    Can this be done?
    Thanks in advance
    Darren

    Hi
    Maintain the condition records with the key combination Trans./Event Type / Print Version / Print Item / Plant in MN21; then you will be able to set your condition records based on plant.
    In the communication medium, maintain the number of copies based on the plant.
    In the standard system this key combination is not available, so you need to create a new condition table, then create a new access sequence and attach it to the output type (WE01, WE02, or WE03).
    Maintain the config in:
    SPRO-> IMG-> Materials Management-> Inventory Management and Physical Inventory-> Output Determination->
    Maintain Condition Tables
    Maintain Access Sequences
    Maintain Output Types
    Thanks & Regards
    Kishore
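    The effect of those plant-keyed condition records is essentially a lookup from (movement type, plant) to a number of copies; a minimal sketch of that idea (plain Python, with the plants and copy counts from the question; the key structure is illustrative, not SAP's):

```python
# Condition records keyed by (movement type, plant) -> number of GR slip copies.
# Plant A gets 2 copies for movement 101; plants B, C, D get 1 each.
CONDITION_RECORDS = {
    ("101", "A"): 2,
    ("101", "B"): 1,
    ("101", "C"): 1,
    ("101", "D"): 1,
}

def gr_slip_copies(movement_type, plant):
    # No matching condition record -> no output for that combination.
    return CONDITION_RECORDS.get((movement_type, plant), 0)
```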

  • Multiple condition filters for loading rowsources

    It looks like datasource-to-rowsource map filters are and'ed. Say I'm loading records with a plant location field. Then creating two filters, 'Plant equals US' and 'Plant equals Canada', will result in no records being loaded.
    Is there a way to cause filters to be or'd in IOP, or does this have to be done externally?
    Thanks.
    Edited by: matt on Dec 7, 2010 3:08 PM

    Yes, that's the current behavior. You can use two measures, if that makes sense. Our recommendation is to do all the filtering externally in the ETL layer, as this reduces the amount of data that gets loaded into rowsources and reduces load time. This doesn't work if there is a requirement to see all the data in the rowsource through an RSQL query; otherwise, there is no need to have the data in the rowsource at all, and it could be filtered out before it enters the IOP system.
    The filtering is typically used when there is a need to see all the data in the rowsource query but only push the filtered values into the cubes. It is typically used in two instances: (1) to represent different states, say, a column value that moves from booked to received to ordered, etc.; (2) the table (datasource/rowsource) has a single column that needs to be loaded into multiple measures, and a filtering column (like a state column) decides the measure to which the value gets loaded -- booked order, shipped order, etc.
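    The difference between and'ing and or'ing the two plant filters can be sketched as (plant values taken from the question, records made up):

```python
records = [{"plant": "US"}, {"plant": "Canada"}, {"plant": "Mexico"}]
filters = [lambda r: r["plant"] == "US", lambda r: r["plant"] == "Canada"]

# AND semantics (the current rowsource behavior): no single record can
# satisfy both equality filters at once, so nothing loads.
and_matches = [r for r in records if all(f(r) for f in filters)]

# OR semantics (the behavior being asked for): US and Canada both load.
or_matches = [r for r in records if any(f(r) for f in filters)]
```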

  • Conditional filtering

    Hi,
    Is this possible? I would like to have a form field where the user can enter their email address. This then checks a user XML file (generated from a PHP page) for the user and displays their personal data. I'd love this to happen without reloading the page, but I'm not sure how to go about it.
    Thanks,

    Hi DaggerLily,
    One way to accomplish this would be to use the non-destructive filter sample:
    http://labs.adobe.com/technologies/spry/samples/data_region/NonDestructiveFilterSample.html
    var dsStates = new Spry.Data.XMLDataSet("../../data/states/states.xml",
        "states/state");

    function FilterData()
    {
        var tf = document.getElementById("filterTF");
        if (!tf.value)
        {
            // If the text field is empty, remove any filter
            // that is set on the data set.
            dsStates.filter(null);
            return;
        }
        // Set a filter on the data set that matches any row
        // that begins with the string in the text field.
        var regExpStr = tf.value;
        if (!document.getElementById("containsCB").checked)
            regExpStr = "^" + regExpStr;
        var regExp = new RegExp(regExpStr, "i");
        var filterFunc = function(ds, row, rowNumber)
        {
            var str = row["name"];
            if (str && str.search(regExp) != -1
                && ds.getRowCount(false) == 1)
                return row;
            return null;
        };
        dsStates.filter(filterFunc);
    }

    function StartFilterTimer()
    {
        if (StartFilterTimer.timerID)
            clearTimeout(StartFilterTimer.timerID);
        StartFilterTimer.timerID = setTimeout(function() {
            StartFilterTimer.timerID = null; FilterData(); }, 100);
    }
    The additional part (the ds.getRowCount(false) == 1 check inside the filter function) is what I have added. It basically gets the number of rows in the filtered set and only displays a row when exactly one row is present. So when you type an email address and only one row matches, the details of that person can be displayed.
    I hope this helps.

  • SSRS Expression for Conditional Filtering using the "IN" operator

    Hello,
    I need to filter my dataset based on a parameter:
    If Period = 1, then Week must be in (W1, W2, W3); else Week must be in (W10, W20, W30).
    I tried using the "IN" operator but don't know how to create the expression for the "Value" field. I tried the following:
    iif(Parameters!Period.Value = 1,
    "W1, W2, W3",
    "W10, W20, W30")
    But it doesn't work.
    Expression: Week
    Operator: IN
    Value: ???
    Any help would be highly appreciated!

    Hi,
    Use split function.
    See this expression: IIF(Parameters!Period.Value = 1, SPLIT("W1,W2,W3",","), SPLIT("W10,W20,W30",","))
    http://social.msdn.microsoft.com/Forums/sqlserver/en-US/8da78c9b-7f0c-42f1-a9c4-82f065f317c9/using-the-in-operator-in-ssrs-expressions
    Thanks Shiven:) If Answer is Helpful, Please Vote
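    The SPLIT-then-IN pattern amounts to a simple membership test; here is the same logic in plain Python (week lists taken from the question):

```python
def weeks_for_period(period):
    # Mirrors: IIF(Parameters!Period.Value = 1,
    #              SPLIT("W1,W2,W3", ","), SPLIT("W10,W20,W30", ","))
    return ("W1,W2,W3" if period == 1 else "W10,W20,W30").split(",")

def keep(week, period):
    # Mirrors the dataset filter: Week IN <value list>
    return week in weeks_for_period(period)
```

    The key point of the SSRS answer is the same as here: the Value expression must produce an array of strings, not one comma-joined string, for the IN operator to test membership correctly.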

  • Query is taking much time (query optimization)

    Dear all,
    I have written the query below; can you please guide me on how to optimize it?
    SELECT distinct papf.employee_number,paaf.POSITION_ID, paaf.ASS_ATTRIBUTE3 ntn,
    paaf.payroll_id,
    paaf.assignment_id,paa.assignment_action_id,papf.EFFECTIVE_START_DATE pop_st_date,papf.EFFECTIVE_END_DATE pop_end_date,
    paaf.effective_start_date ass_st,
    paaf.effective_end_date ass_end,
    ptp.start_date pay_st,
    ptp.end_date pay_end,
    TO_CHAR (pac.effective_date, 'YYYY/MM/DD') eff_date,
    ppg.segment1 region,
    pac.payroll_action_id,
    hla.location_id,
    hla.location_code branch,
    substr(papf.full_name,0,35) NAME,
    pg.short_name CATEGORY,
    DECODE (paaf.employment_category,
    'CONT_EXPAT', 'Expatriate',
    'CONT_PR', 'Post Retirement',
    'CONT_PT', 'Part Time (Visiting Faculty)',
    'CONT_VISIT', 'Ad-Hoc(Regular Contract)',
    'Permanent'
    ) status,
    TO_CHAR (papf.original_date_of_hire, 'DD-MON-YYYY') joining_date,
    hr_payrolls.display_period_name (pac.payroll_action_id) period_name
    FROM per_all_assignments_f paaf,
    per_all_people_f papf,
    pay_people_groups ppg,
    hr_locations_all hla,
    per_grades pg,
    pay_assignment_actions paa,
    pay_run_results prr,
    pay_payroll_actions pac,
    per_time_periods ptp,
    hr_lookups hl
    WHERE paaf.person_id = papf.person_id
    AND paa.ASSIGNMENT_ACTION_ID = prr.ASSIGNMENT_ACTION_ID
    AND paaf.people_group_id = ppg.people_group_id
    AND pac.payroll_id = ptp.payroll_id
    AND paaf.location_id = hla.location_id
    AND paaf.grade_id = pg.grade_id
    AND paaf.employment_category = hl.lookup_code
    AND hl.lookup_type LIKE 'EMP_CAT%'
    AND hl.enabled_flag = 'Y'
    AND hl.lookup_code = NVL (:assignment_category, hl.lookup_code)
    AND pg.short_name = NVL (:grade, pg.short_name)
    AND paa.assignment_id = paaf.assignment_id
    AND pac.payroll_action_id = paa.payroll_action_id
    AND paaf.primary_flag = 'Y'
    AND paaf.payroll_id IS NOT NULL
    AND pac.action_type IN ('R', 'Q')
    and ptp.END_DATE between papf.EFFECTIVE_START_DATE and papf.EFFECTIVE_END_DATE
    AND pac.effective_date >= ptp.start_date
    AND pac.effective_date <= ptp.end_date
    AND ptp.period_name = :period_name
    AND pac.payroll_id = NVL (:p_payroll_id, pac.payroll_id)
    AND papf.employee_number = nvl(:P_EMP_NUM,papf.employee_number)
    AND ptp.end_date BETWEEN paaf.effective_start_date
    AND paaf.effective_end_date
    AND prr.assignment_action_id = ------------------------ This subquery filters the record to Max assignment action id with payroll run twice in a month
    (SELECT MAX (paa1.assignment_action_id)
    FROM per_all_assignments_f paaf1,
    per_all_people_f papf1,
    pay_people_groups ppg1,
    hr_locations_all hla1,
    per_grades pg1,
    pay_assignment_actions paa1,
    pay_run_results prr1,
    pay_payroll_actions pac1,
    per_time_periods ptp1,
    hr_lookups hl1
    WHERE paaf1.person_id = papf1.person_id
    AND paa1.assignment_action_id =
    prr1.assignment_action_id
    AND paaf1.people_group_id = ppg1.people_group_id
    AND pac1.payroll_id = ptp1.payroll_id
    AND paaf1.location_id = hla1.location_id
    AND paaf1.grade_id = pg1.grade_id
    AND paaf1.employment_category = hl1.lookup_code
    AND hl1.lookup_type LIKE 'EMP_CAT%'
    AND hl1.enabled_flag = 'Y'
    AND hl1.lookup_code =
    NVL (:assignment_category, hl1.lookup_code)
    AND pg1.short_name = NVL (:grade, pg1.short_name)
    AND paa1.assignment_id = paaf1.assignment_id
    AND pac1.payroll_action_id = paa1.payroll_action_id
    AND paaf1.primary_flag = 'Y'
    AND paaf1.payroll_id IS NOT NULL
    AND pac1.action_type IN ('R', 'Q')
    AND papf1.employee_number = papf.employee_number
    AND ptp1.end_date BETWEEN papf1.effective_start_date
    AND papf1.effective_end_date
    AND pac1.effective_date >= ptp1.start_date
    AND pac1.effective_date <= ptp1.end_date
    AND ptp1.period_name = :period_name
    AND pac1.payroll_id =
    NVL (:p_payroll_id, pac1.payroll_id)
    AND ptp1.end_date BETWEEN paaf1.effective_start_date
    AND paaf1.effective_end_date)
    -------- This code is added on 15-SEP 09 for advance salary payment
    ORDER BY region, branch
    Regards

    Maybe
    SELECT distinct
           employee_number,
           POSITION_ID,
           ntn,
           payroll_id,
           assignment_action_id,
           pop_st_date,
           pop_end_date,
           ass_st,
           ass_end,
           pay_st,
           pay_end,
           eff_date,
           region,
           payroll_action_id,
           location_id,
           branch,
           NAME,
           CATEGORY,
           status,
           joining_date,
           period_name
      from (select papf.employee_number,
                   paaf.POSITION_ID,
                    paaf.ASS_ATTRIBUTE3 ntn,
                   paaf.payroll_id,
                   paaf.assignment_id,
                   paa.assignment_action_id,
                   papf.EFFECTIVE_START_DATE pop_st_date,
                   papf.EFFECTIVE_END_DATE pop_end_date,
                    paaf.effective_start_date ass_st,
                    paaf.effective_end_date ass_end,
                   ptp.start_date pay_st,
                   ptp.end_date pay_end,
                   TO_CHAR(pac.effective_date,'YYYY/MM/DD') eff_date,
                   ppg.segment1 region,
                   pac.payroll_action_id,
                   hla.location_id,
                   hla.location_code branch,
                   substr(papf.full_name,0,35) NAME,
                   pg.short_name CATEGORY,
                   DECODE(paaf.employment_category,
                          'CONT_EXPAT','Expatriate',
                          'CONT_PR',   'Post Retirement',
                          'CONT_PT',   'Part Time (Visiting Faculty)',
                          'CONT_VISIT','Ad-Hoc (Regular Contract)',
                          'Permanent'
                         ) status,
                   TO_CHAR(papf.original_date_of_hire,'DD-MON-YYYY') joining_date,
                   hr_payrolls.display_period_name(pac.payroll_action_id) period_name,
    /* To enable filter on Max assignment action id with payroll run twice in a month */
                   max(paa.assignment_action_id) over
                      (partition by papf.employee_number) max_assignment_action_id
    /* This code is added on 15-SEP 09 for advance salary payment */
              FROM per_all_assignments_f  paaf,
                   per_all_people_f       papf,
                   pay_people_groups      ppg,
                   hr_locations_all       hla,
                   per_grades             pg,
                   pay_assignment_actions paa,
                   pay_run_results        prr,
                   pay_payroll_actions    pac,
                   per_time_periods       ptp,
                   hr_lookups             hl
             WHERE paaf.person_id           = papf.person_id
               AND paa.ASSIGNMENT_ACTION_ID = prr.ASSIGNMENT_ACTION_ID
               AND paaf.people_group_id     = ppg.people_group_id
               AND pac.payroll_id           = ptp.payroll_id
               AND paaf.location_id         = hla.location_id
               AND paaf.grade_id            = pg.grade_id
               AND paaf.employment_category = hl.lookup_code
               AND hl.lookup_type           LIKE 'EMP_CAT%'
               AND hl.enabled_flag          = 'Y'
               AND hl.lookup_code           = NVL(:assignment_category,hl.lookup_code)
               AND pg.short_name            = NVL(:grade,pg.short_name)
               AND paa.assignment_id        = paaf.assignment_id
               AND pac.payroll_action_id    = paa.payroll_action_id
               AND paaf.primary_flag        = 'Y'
               AND paaf.payroll_id          IS NOT NULL
               AND pac.action_type          IN ('R','Q')
               AND papf.employee_number     = nvl(:P_EMP_NUM,papf.employee_number)
               and ptp.END_DATE             between papf.EFFECTIVE_START_DATE
                                                and papf.EFFECTIVE_END_DATE
               AND pac.effective_date       >= ptp.start_date
               AND pac.effective_date       <= ptp.end_date
               AND ptp.period_name          = :period_name
               AND pac.payroll_id           = NVL(:p_payroll_id,pac.payroll_id)
               AND ptp.end_date             BETWEEN paaf.effective_start_date
                                                AND paaf.effective_end_date
           )
    /* This condition filters the record to Max assignment action id with payroll run twice in a month */
     where assignment_action_id = max_assignment_action_id
    /* This code is added on 15-SEP 09 for advance salary payment */
     ORDER BY region, branch
    Regards
    Etbin
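    Etbin's rewrite swaps the correlated MAX subquery (which re-joins all ten tables per row) for an analytic MAX over a partition, then filters on it in an outer query, so the data is scanned once. The same two-pass idea over illustrative rows (plain Python, made-up data):

```python
# Pass 1: compute MAX(assignment_action_id) OVER (PARTITION BY employee_number).
rows = [
    {"employee_number": "750", "assignment_action_id": 101},
    {"employee_number": "750", "assignment_action_id": 205},  # second run in the month
    {"employee_number": "812", "assignment_action_id": 150},
]
max_per_emp = {}
for r in rows:
    emp = r["employee_number"]
    max_per_emp[emp] = max(max_per_emp.get(emp, 0), r["assignment_action_id"])

# Pass 2: the outer query's
#   WHERE assignment_action_id = max_assignment_action_id
latest = [r for r in rows
          if r["assignment_action_id"] == max_per_emp[r["employee_number"]]]
```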

  • 2LIS_04_P_COMP load failure

    Hi,
    I am facing a problem while loading data from R/3 to BW using the datasource 2LIS_04_P_COMP.
    The data is extracted successfully but takes a lot of time in the update rules, around 10 hrs. Ultimately the load fails and I have to update it manually, after which the load becomes successful.
    When I check the job status on the R/3 side, it shows me an ARFCSTATE = SYSFAIL error.
    There are two source systems from which I load data: one is A6P and the other is ANP. The load from ANP is always successful, and the one from A6P always fails with a SYSFAIL error. The update rules for both systems are the same.
    Please can someone assist me with this problem.

    I will try debugging again, but I have a query: when I check the particular request on the R/3 side when the load happens for the first time and fails (that is, before it is manually updated), the job log on the R/3 side mentions ARFCSTATE = SYSFAIL. In what cases does this kind of error occur? Job log details in R/3:
    17:17:13  Job started                                                                               
    17:17:13  Step 001 started (program SBIE0001, variant &0000000036256, user name ADMDTLOAD)                     
    17:17:13  DATASOURCE = 2LIS_04_P_COMP                                                                          
    17:17:13  *************************************************************************                            
    17:17:13  *           Current values of selected profile parameter                *                            
    17:17:13  *************************************************************************                            
    17:17:13  * abap/heap_area_nondia......... 4000000000                              *                           
    17:17:13  * abap/heap_area_total.......... 4000000000                              *                           
    17:17:13  * abap/heaplimit................ 62914560                                *                           
    17:17:13  * zcsa/installed_languages...... 1MEFDST                                 *                           
    17:17:13  * zcsa/system_language.......... E                                       *                           
    17:17:13  * ztta/max_memreq_MB............ 64                                      *                           
    17:17:13  * ztta/roll_area................ 6500352                                 *                           
    17:17:13  * ztta/roll_extension........... 150994944                               *                           
    17:17:13  *************************************************************************                            
    17:18:03  37 LUWs confirmed and 37 LUWs to delete with FM RSC2_QOUT_CONFIRM_DATA                               
    17:18:21  Call up of customer enhancement BW_BTE_CALL_BW204010_E (BTE) with 52.524 records                     
    17:18:21  Result of customer enhancement: 52.524 records                                                       
    17:18:21  Call up of customer enhancement EXIT_SAPLRSAP_001 (CMOD) with 52.524 records                         
    17:18:21  Result of customer enhancement: 52.524 records                                                       
    17:18:21  Asynchronous send of data package 000001 in task 0002 (1 parallel tasks)                             
    17:18:26  Call up of customer enhancement BW_BTE_CALL_BW204010_E (BTE) with 51.032 records                     
    17:18:26  Result of customer enhancement: 51.032 records                                                       
    17:18:26  Call up of customer enhancement EXIT_SAPLRSAP_001 (CMOD) with 51.032 records                         
    17:18:26  Result of customer enhancement: 51.032 records                                                       
    17:18:27  Asynchronous send of data package 000002 in task 0003 (1 parallel tasks)                             
    17:18:30  Call up of customer enhancement BW_BTE_CALL_BW204010_E (BTE) with 14.966 records                     
    17:18:30  Result of customer enhancement: 14.966 records                                                       
    17:18:30  Call up of customer enhancement EXIT_SAPLRSAP_001 (CMOD) with 14.966 records                         
    17:18:30  Result of customer enhancement: 14.966 records                                                       
    17:18:30  Asynchronous send of data package 000003 in task 0004 (2 parallel tasks)                             
    17:18:32  Selection conditions filtered out a total of 0 records                                               
    20:18:44  tRFC: Data package = 000003, TID = 9B7CE88956FD47E8C3670107, duration = 03:00:06, ARFCSTATE = SYSFAIL
    20:18:44  tRFC: start = 25.03.2008 17:18:38, end = 25.03.2008 20:18:44                                            
    20:18:52  tRFC: Data package = 000001, TID = 9B7CE889236647E8C35F0176, duration = 03:00:08, ARFCSTATE = SYSFAIL   
    20:18:52  tRFC: start = 25.03.2008 17:18:44, end = 25.03.2008 20:18:52                                            
    20:18:57  tRFC: Data package = 000002, TID = 9B7CE88917F647E8C3640256, duration = 03:00:10, ARFCSTATE = SYSFAIL   
    20:18:57  tRFC: start = 25.03.2008 17:18:47, end = 25.03.2008 20:18:57                                            
    20:18:57  Job Finished

  • Query execution time

    Dear SCN,
    I am new to the BusinessObjects environment. I have created a Webi report on top of a BEx query using a BICS connection. The BEx query is built for Vendor Ageing Analysis. The BEx query takes very little time to execute (max 1 min), but the Webi report takes around 5 min when I click refresh. I have not applied any conditions, filters, or restrictions at the Webi level; all are done at the BEx level.
    Please let me know techniques to optimize query execution time in Webi. Currently we are on BO 4.0.
    Regards,
    PRK

    Hi Praveen
    Go through this document for performance optimization using a BICS connection:
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/d0e3c552-e419-3010-1298-b32e6210b58d?QuickLink=index&…

  • How to make saved IR available for all users

    Hi,
    I've created IRs and saved them to several tabs based on search conditions,
    but they're only visible to developers.
    How can I make these tabs available to all end users?
    Does version 4.0 support this option?
    Thank you!

    Hi
    At present this feature is not included, although I believe it may be in 4.0. Many people have provided workarounds, none of which I have tried. I cannot find the original thread, but here is a solution from a chap called Ruud:
    >
    One way to share your saved reports with others is to 'Publish' your report settings to a few intermediate tables in your application and have other users 'Import' your settings from there. The reason for using intermediate tables is so that not all your saved reports need to be 'visible' to other users (only those that you've chosen to publish).
    Basically you have available the following views and package calls that any APEX user can access:-
    - flows_030100.apex_application_pages (all application pages)
    - flows_030100.apex_application_page_ir_rpt (all saved reports - including defaults and all user saved reports)
    - flows_030100.apex_application_page_ir_cond (the associated conditions/filters for above saved reports)
    - wwv_flow_api.create_worksheet_rpt (package procedure that creates a new saved report)
    - wwv_flow_api.create_worksheet_condition (package procedure that creates a condition/filter for above saved report)
    The way I've done it is that I've created 2 tables in my application schema that are straightforward clones of the 2 above views.
    CREATE TABLE user_report_settings AS SELECT * FROM flows_030100.apex_application_page_ir_rpt;
    CREATE TABLE user_report_conditions AS SELECT * FROM flows_030100.apex_application_page_ir_cond;
    ( NB. I deleted any contents that may have come across to make sure we start with a clean slate. )
    These two tables will act as my 'repository'.
    To simplify matters I've also created 2 views that look at the same APEX views.
    CREATE OR REPLACE VIEW v_report_settings AS
    SELECT r.*,
    p.page_name
    FROM flows_030100.apex_application_page_ir_rpt r,
    flows_030100.apex_application_pages p
    WHERE UPPER ( r.application_name ) = <Your App Name>
    AND r.application_user <> 'APXWS_DEFAULT'
    AND r.session_id IS NULL
    AND p.application_id = r.application_id
    AND p.page_id = r.page_id;
    CREATE OR REPLACE VIEW v_report_conditions AS
    SELECT r.*,
    p.page_name
    FROM flows_030100.apex_application_page_ir_cond r,
    flows_030100.apex_application_pages p
    WHERE UPPER ( r.application_name ) = <Your App Name>
    AND r.application_user <> 'APXWS_DEFAULT'
    AND p.application_id = r.application_id
    AND p.page_id = r.page_id;
    I then built 2 screens:-
    1) Publish Report Settings
    This shows 2 report regions:-
    - Region 1 - Shows a list of all your saved reports from V_REPORT_SETTINGS (filtered to only show yours)
    SELECT apex_item.checkbox ( 1, report_id ) " ",
    page_name,
    report_name
    FROM v_report_settings
    WHERE application_user = :APP_USER
    AND ( page_id = :P27_REPORT OR :P27_REPORT = 0 )
    ORDER BY page_name,
    report_name
    Each row has a checkbox to select the required settings to publish.
    The region has a button called PUBLISH (with associated process) that when pressed will copy the settings from
    V_REPORT_SETTINGS (and V_REPORT_CONDITIONS) into USER_REPORT_SETTINGS (and USER_REPORT_CONDITIONS).
    - Region 2 - Shows a list of already published reports in table USER_REPORT_SETTINGS (again filtered for your user)
    SELECT apex_item.checkbox ( 10, s.report_id ) " ",
    m.label,
    s.report_name
    FROM user_report_settings s,
    menu m
    WHERE m.page_no = s.page_id
    AND s.application_user = :APP_USER
    AND ( s.page_id = :P27_REPORT OR :P27_REPORT = 0 )
    ORDER BY m.label,
    s.report_name
    Each row has a checkbox to select a setting that you would like to delete from the repository.
    The region has a button called DELETE (with associated process) that when pressed will remove the selected
    rows from USER_REPORT_SETTINGS (and USER_REPORT_CONDITIONS).
    NB: P27_REPORT is a "Select List With Submit" to filter the required report page first.
    Table MENU is my application menu table where I store my menu/pages info.
    2) Import Report Settings
    This again shows 2 report regions:-
    - Region 1 - Shows a list of all published reports in table USER_REPORT_SETTINGS (filtered to show only other users saved reports)
    SELECT apex_item.checkbox ( 1, s.report_id ) " ",
    m.label,
    s.report_name,
    s.application_user
    FROM user_report_settings s,
    menu m
    WHERE m.page_no = s.page_id
    AND s.application_user != :APP_USER
    AND ( s.page_id = :P28_REPORT OR :P28_REPORT = 0 )
    ORDER BY m.label,
    s.report_name,
    s.application_user
    Each row has a checkbox to select the setting(s) that you would like to import from the repository.
    The region has one button called IMPORT that when pressed will import the selected settings.
    It does this by using the 2 above-mentioned package procedures to create a new saved report for you
    with the information from the repository. Be careful to match the right column with the right procedure
    parameter and to 'reverse' any DECODEs that the view has.
    - Region 2 - Shows a list of all your saved reports from V_REPORT_SETTINGS (filtered to only show yours)
    SELECT page_name,
    report_name
    FROM v_report_settings
    WHERE application_user = :APP_USER
    AND ( page_id = :P28_REPORT OR :P28_REPORT = 0 )
    ORDER BY page_name,
    report_name
    This is only needed to give you some feedback as to whether the import succeeded.
    A few proviso's:-
    a) I'm sure there's a better way to do all this but this works for me :-)
    b) This does not work for Computations! I have not found an API call to create computations.
    They will simply not come across into the repository.
    c) If you import the same settings twice I've made it so that the name is suffixed with (2), (3) etc.
    I did not find a way to update existing report settings. You can only create new ones.
    d) Make sure you refer to your saved reports by name, not ID, when matching APEX stored reports and the
    reports in your repository as the ID numbers may change if you re-import an application or if you
    auto-generate your screens/reports (as I do).
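The suffixing behaviour in proviso (c) can be sketched as a small helper. This is a hypothetical function for illustration, not part of any APEX API; it assumes you already have the set of existing saved-report names for the page:

```python
def unique_report_name(name, existing):
    """Return name unchanged if unused, otherwise append ' (2)', ' (3)', ...
    until the result no longer collides with an existing saved-report name."""
    if name not in existing:
        return name
    n = 2
    while f"{name} ({n})" in existing:
        n += 1
    return f"{name} ({n})"

existing = {"Weekly Sales", "Weekly Sales (2)"}
print(unique_report_name("Weekly Sales", existing))  # -> Weekly Sales (3)
print(unique_report_name("Stock Levels", existing))  # -> Stock Levels
```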
    Ruud
    >
    To me this is a bit too much of a hack and I personally wouldn't implement it - it's just an example to show it can be done.
    Also if you look here in the help in APEX Home > Adding Application Components > Creating Reports > Editing Interactive Reports
    ...and go to the last paragraph, you can embed predicates in the URL.
    Cheers
    Ben
    http://www.munkyben.wordpress.com
    Don't forget to mark replies helpful or correct ;)
    Edited by: Munky on Jul 30, 2009 8:03 AM

  • How can I add "AND" "OR" criteria to smart folders?

    I want to create some smart folders in Finder that search for documents based on type and date last edited, opened or modified.  In some instances I want the result to show files that match some of the criteria and in other instances, I want results to return files that match all of the criteria.  I have read somewhere that you can add these conditional filters by pressing the Option key when clicking on the '+' sign to add a new criteria, but it doesn't seem to work in Mavericks.  Is there another key combination for this?
    Any help greatly appreciated.
    Many thanks
    Donna

    When adding criteria, hold down the Option key on the keyboard. The + button will turn into an ellipsis (…).
    You can make groups of Any, All, or None (Or, And, Not).

  • IS_Data Insight View Data Function Limitation

    Hi Experts,
    Is there any limitation on viewing a table's data using the View Data function in the Data Insight module of Information Steward? I came across a strange issue with this; the details are explained below.
    I am trying to perform data profiling on a table, so I imported the table into a Data Insight project. When I tried to view the table's data using the View Data function, it showed blank, i.e. (0 from 998987). I am able to see the data in the database and even in DS Designer.
    Then I created a view on top of this table selecting all columns and tried to view data; again it showed blank. When I removed some columns from the view, it showed data. The table contains 150 columns, and I used around 110 columns in the view.
    My question is: are there any limitations in Data Insight on viewing data, apart from the 500-record display limit? Does the View Data function consider the number of rows or the size of the data when displaying it? If it considers these, is there any option in IS to control these two parameters, i.e. increase/decrease the size or the number of rows?
    If anyone has come across this issue, could you please share any solutions to fix it?
    Thanks,
    Ramakrishna Kamurthy

    Hello Rama,
    In IS 4.2 this limitation is actually stated.
    See IS_421_user_en.pdf, Related Information section, p. 44, which states:
    The software displays only 500 records when you view data from an SAP table.
    More details are available in section 2.5.10.2, "Limit of 500 records when viewing data from SAP tables":
    The software displays only 500 records when you view data from an SAP table.
    Views that contain SAP tables have the potential to be quite large, especially when they are joined with other SAP tables. The limit of 500 records when viewing data prevents your computer from hanging or never completing the task because the tables were too large.
    In addition to the 500 records limit, you can take steps to enhance performance in the following ways:
    ● Reduce the size of the file by mapping fields, join conditions, filters, and so on to limit the data in the table to information that you really need.
    ● Use SAP ABAP-supported functions in forming expressions in views. Using non-supported functions is allowed, but doing so may adversely affect performance.
    ● Use the View Data filter tools when you view and export data from SAP tables.
    With the 500 records limit for viewing SAP table data, there is a potential for no records showing up in the View Data window.
    This could happen, for example, when the view contains a child view, the child view contains one or more SAP tables, and a join is set up to join the entire data set.
    A message appears at the top of the View Data window that instructs you to export the data to an external source (text file, CSV, or Excel file) to view all of the records.
    I hope this is helpful.
    Mike

  • Query of Query(cursor)

    Hi all,
    I want to implement something like "query of queries" in Oracle with the help of CURSORS.
    Is it possible to use a cursor in the from clause of a query?
    Please help me....
    Thanking u all in advance...
    Regards,
    Aswathy.

    > But can i know if this can be done with CURSORS.
    A cursor is not a data set or result set - it is not a copy of rows (or row identifiers) in memory.
    A cursor is (SQL) program parsed & compiled into an execution plan. This contains the instructions to fetch the applicable rows.
    A FETCH from a cursor is an instruction to Oracle to execute that SQL program to return the next row (or rows in the case of a bulk fetch).
    It seems to me that you are thinking of row-by-row processing - which is why you want a query of a query and use a cursor for that.
    Row-by-row processing is also aptly named slow-by-slow processing.
    The correct approach is to think data sets. And write queries to deal with data sets. And yes, you can create a data set using SQL and re-use it with the WITH syntax.
    For example (and a very poor example as analytical SQL should be used instead, but the intention is simply to show you how the WITH syntax works):
    with MY_DATA_SET as (
    select
    col1,
    col2,
    col3
    from table1
    where <some conditions & filters>
    )
    select
    'MAX C1' as NAME,
    max(col1)
    from my_data_set
    union all
    select
    'MAX C2' as NAME,
    max(col2)
    from my_data_set
    union all
    select
    'C1=C2' as NAME,
    count(*)
    from my_data_set
    where col1 = col2
    <etc>
    You can create multiple "data sets" like this and join them, further refine them, etc. Just remember to aim to write minimal SQL code... the less code there is, the less complexity, the less work to do (usually) and the better the likelihood of good performance and scalability.
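The WITH pattern above is easy to try outside Oracle as well; SQLite also supports common table expressions, so here is a runnable sketch of the same shape (table name, columns, and data invented for the demo):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE table1 (col1 INTEGER, col2 INTEGER, col3 INTEGER);
INSERT INTO table1 VALUES (1, 1, 10), (5, 2, 20), (3, 3, 30);
""")

# One named data set, referenced three times in the same statement.
rows = con.execute("""
    WITH my_data_set AS (
        SELECT col1, col2, col3 FROM table1
    )
    SELECT 'MAX C1' AS name, MAX(col1) FROM my_data_set
    UNION ALL
    SELECT 'MAX C2', MAX(col2) FROM my_data_set
    UNION ALL
    SELECT 'C1=C2', COUNT(*) FROM my_data_set WHERE col1 = col2
""").fetchall()
print(dict(rows))
```

The named data set is defined once and reused by each branch of the UNION ALL, which is exactly the set-oriented reuse the answer recommends over row-by-row cursor loops.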
