How to avoid seeing data for a particular member at the row level

Hello,
I am using Essbase, and I have members like "Task n/a" and "Demand n/a". The purpose of these members is: if I load data for Demand, Task won't have data, so I map it to "Task n/a", and the same for the Task load. I am using Alphablox for reporting and a report script for retrieving the data. So if I select "Task n/a" and "Demand n/a", some value will exist for them. Is there any possibility to remove these from the report? I mean, when viewing the report, it should not show these members and their values, and this data should also be excluded from any manipulation. I know RESTRICT for column-level data; is there anything for row-level restrictions?
Please help me on this.
Regards
R.Prasanna

Hi,
In the rule containing the Color metadata, you can restrict the values of the list.
In "Has restricted list and pane", list the values 1, 2 and 3 (one per line):
1
2
3
Romain.
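
One possible approach (a sketch only, not from this thread; the dimension layout and member names below are made up): report script row members are whatever you select, so list the real members explicitly and leave the n/a placeholders out of the <ROW selection. Because "Task n/a" and "Demand n/a" are then never selected, their rows and values never reach Alphablox or any report calculation.

    <PAGE ("Measures")
    <COLUMN ("Year")
    <ROW ("Task")
    "Task A"
    "Task B"
    !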

Similar Messages

  • How to apply a hyperlink for a particular cell in the column?

    Year
    2001
    2002
    2003
    2004
    I want to apply a hyperlink for 2001 only! How do I achieve this?
    Can we apply a number of hyperlinks for one column?

    You have to create a conditional variable.
    For example, create a variable with the hyperlink in the syntax for Year.
    Then use if-else logic: if YEAR = 2001, return the hyperlink, else return the YEAR object.
    Hope this helps.
    Thanks
    Gaurav
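
    For illustration, such a variable might look like this (a sketch only: it assumes Web Intelligence, a [Year] dimension, a made-up URL, and the cell's display property set to read contents as HTML):

        =If([Year]="2001"; "<a href='http://myserver/reports?year=2001'>2001</a>"; [Year])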

  • How to see data for a particular date from an alert log file

    Hi Experts,
    I would like to know how I can see data for a particular date from alert_db.log in a Unix environment. I'm using Oracle 9i on Unix.
    Right now I'm using tail -500 alert_db.log > alert.txt and then viewing the whole thing. But is there any easier way to see a particular date or time?
    Thanks
    Shaan

    Hi Jaffar,
    Here I have to pass the exact date and time. Is there any way to see records for, let's say, Nov 23 2007? Because when I used this:
    tail -500 alert_sid.log | grep " Nov 23 2007" > alert_date.txt
    it's not working. Here is a sample from the log file:
    Mon Nov 26 21:42:43 2007
    Thread 1 advanced to log sequence 138
    Current log# 3 seq# 138 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo3.log
    Mon Nov 26 21:42:43 2007
    ARCH: Evaluating archive log 1 thread 1 sequence 137
    Mon Nov 26 21:42:43 2007
    ARC1: Evaluating archive log 1 thread 1 sequence 137
    ARC1: Unable to archive log 1 thread 1 sequence 137
    Log actively being archived by another process
    Mon Nov 26 21:42:43 2007
    ARCH: Beginning to archive log 1 thread 1 sequence 137
    Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_137
    .dbf'
    ARCH: Completed archiving log 1 thread 1 sequence 137
    Mon Nov 26 21:42:44 2007
    Thread 1 advanced to log sequence 139
    Current log# 2 seq# 139 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo2.log
    Mon Nov 26 21:42:44 2007
    ARC0: Evaluating archive log 3 thread 1 sequence 138
    ARC0: Beginning to archive log 3 thread 1 sequence 138
    Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_138
    .dbf'
    Mon Nov 26 21:42:44 2007
    ARCH: Evaluating archive log 3 thread 1 sequence 138
    ARCH: Unable to archive log 3 thread 1 sequence 138
    Log actively being archived by another process
    Mon Nov 26 21:42:45 2007
    ARC0: Completed archiving log 3 thread 1 sequence 138
    Mon Nov 26 21:45:12 2007
    Starting control autobackup
    Mon Nov 26 21:45:56 2007
    Control autobackup written to SBT_TAPE device
    comment 'API Version 2.0,MMS Version 5.0.0.0',
    media 'WP0033'
    handle 'c-2861328927-20071126-01'
    Clearing standby activation ID 2873610446 (0xab47d0ce)
    The primary database controlfile was created using the
    'MAXLOGFILES 5' clause.
    The resulting standby controlfile will not have enough
    available logfile entries to support an adequate number
    of standby redo logfiles. Consider re-creating the
    primary controlfile using 'MAXLOGFILES 8' (or larger).
    Use the following SQL commands on the standby database to create
    standby redo logfiles that match the primary database:
    ALTER DATABASE ADD STANDBY LOGFILE 'srl1.f' SIZE 10485760;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl2.f' SIZE 10485760;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl3.f' SIZE 10485760;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl4.f' SIZE 10485760;
    Tue Nov 27 21:23:50 2007
    Starting control autobackup
    Tue Nov 27 21:30:49 2007
    Control autobackup written to SBT_TAPE device
    comment 'API Version 2.0,MMS Version 5.0.0.0',
    media 'WP0280'
    handle 'c-2861328927-20071127-00'
    Tue Nov 27 21:30:57 2007
    ARC1: Evaluating archive log 2 thread 1 sequence 139
    ARC1: Beginning to archive log 2 thread 1 sequence 139
    Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_139
    .dbf'
    Tue Nov 27 21:30:57 2007
    Thread 1 advanced to log sequence 140
    Current log# 1 seq# 140 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo1.log
    Tue Nov 27 21:30:57 2007
    ARCH: Evaluating archive log 2 thread 1 sequence 139
    ARCH: Unable to archive log 2 thread 1 sequence 139
    Log actively being archived by another process
    Tue Nov 27 21:30:58 2007
    ARC1: Completed archiving log 2 thread 1 sequence 139
    Tue Nov 27 21:30:58 2007
    Thread 1 advanced to log sequence 141
    Current log# 3 seq# 141 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo3.log
    Tue Nov 27 21:30:58 2007
    ARCH: Evaluating archive log 1 thread 1 sequence 140
    ARCH: Beginning to archive log 1 thread 1 sequence 140
    Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_140
    .dbf'
    Tue Nov 27 21:30:58 2007
    ARC1: Evaluating archive log 1 thread 1 sequence 140
    ARC1: Unable to archive log 1 thread 1 sequence 140
    Log actively being archived by another process
    Tue Nov 27 21:30:58 2007
    ARCH: Completed archiving log 1 thread 1 sequence 140
    Tue Nov 27 21:33:16 2007
    Starting control autobackup
    Tue Nov 27 21:34:29 2007
    Control autobackup written to SBT_TAPE device
    comment 'API Version 2.0,MMS Version 5.0.0.0',
    media 'WP0205'
    handle 'c-2861328927-20071127-01'
    Clearing standby activation ID 2873610446 (0xab47d0ce)
    The primary database controlfile was created using the
    'MAXLOGFILES 5' clause.
    The resulting standby controlfile will not have enough
    available logfile entries to support an adequate number
    of standby redo logfiles. Consider re-creating the
    primary controlfile using 'MAXLOGFILES 8' (or larger).
    Use the following SQL commands on the standby database to create
    standby redo logfiles that match the primary database:
    ALTER DATABASE ADD STANDBY LOGFILE 'srl1.f' SIZE 10485760;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl2.f' SIZE 10485760;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl3.f' SIZE 10485760;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl4.f' SIZE 10485760;
    Wed Nov 28 21:43:31 2007
    Starting control autobackup
    Wed Nov 28 21:43:59 2007
    Control autobackup written to SBT_TAPE device
    comment 'API Version 2.0,MMS Version 5.0.0.0',
    media 'WP0202'
    handle 'c-2861328927-20071128-00'
    Wed Nov 28 21:44:08 2007
    Thread 1 advanced to log sequence 142
    Current log# 2 seq# 142 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo2.log
    Wed Nov 28 21:44:08 2007
    ARCH: Evaluating archive log 3 thread 1 sequence 141
    ARCH: Beginning to archive log 3 thread 1 sequence 141
    Wed Nov 28 21:44:08 2007
    ARC1: Evaluating archive log 3 thread 1 sequence 141
    ARC1: Unable to archive log 3 thread 1 sequence 141
    Log actively being archived by another process
    Wed Nov 28 21:44:08 2007
    Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_141
    .dbf'
    Wed Nov 28 21:44:08 2007
    ARC0: Evaluating archive log 3 thread 1 sequence 141
    ARC0: Unable to archive log 3 thread 1 sequence 141
    Log actively being archived by another process
    Wed Nov 28 21:44:08 2007
    ARCH: Completed archiving log 3 thread 1 sequence 141
    Wed Nov 28 21:44:09 2007
    Thread 1 advanced to log sequence 143
    Current log# 1 seq# 143 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo1.log
    Wed Nov 28 21:44:09 2007
    ARCH: Evaluating archive log 2 thread 1 sequence 142
    ARCH: Beginning to archive log 2 thread 1 sequence 142
    Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_142
    .dbf'
    Wed Nov 28 21:44:09 2007
    ARC0: Evaluating archive log 2 thread 1 sequence 142
    ARC0: Unable to archive log 2 thread 1 sequence 142
    Log actively being archived by another process
    Wed Nov 28 21:44:09 2007
    ARCH: Completed archiving log 2 thread 1 sequence 142
    Wed Nov 28 21:44:36 2007
    Starting control autobackup
    Wed Nov 28 21:45:00 2007
    Control autobackup written to SBT_TAPE device
    comment 'API Version 2.0,MMS Version 5.0.0.0',
    media 'WP0202'
    handle 'c-2861328927-20071128-01'
    Clearing standby activation ID 2873610446 (0xab47d0ce)
    The primary database controlfile was created using the
    'MAXLOGFILES 5' clause.
    The resulting standby controlfile will not have enough
    available logfile entries to support an adequate number
    of standby redo logfiles. Consider re-creating the
    primary controlfile using 'MAXLOGFILES 8' (or larger).
    Use the following SQL commands on the standby database to create
    standby redo logfiles that match the primary database:
    ALTER DATABASE ADD STANDBY LOGFILE 'srl1.f' SIZE 10485760;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl2.f' SIZE 10485760;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl3.f' SIZE 10485760;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl4.f' SIZE 10485760;
    Thu Nov 29 21:36:44 2007
    Starting control autobackup
    Thu Nov 29 21:42:53 2007
    Control autobackup written to SBT_TAPE device
    comment 'API Version 2.0,MMS Version 5.0.0.0',
    media 'WP0206'
    handle 'c-2861328927-20071129-00'
    Thu Nov 29 21:43:01 2007
    Thread 1 advanced to log sequence 144
    Current log# 3 seq# 144 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo3.log
    Thu Nov 29 21:43:01 2007
    ARCH: Evaluating archive log 1 thread 1 sequence 143
    ARCH: Beginning to archive log 1 thread 1 sequence 143
    Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_143
    .dbf'
    Thu Nov 29 21:43:01 2007
    ARC1: Evaluating archive log 1 thread 1 sequence 143
    ARC1: Unable to archive log 1 thread 1 sequence 143
    Log actively being archived by another process
    Thu Nov 29 21:43:02 2007
    ARCH: Completed archiving log 1 thread 1 sequence 143
    Thu Nov 29 21:43:03 2007
    Thread 1 advanced to log sequence 145
    Current log# 2 seq# 145 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo2.log
    Thu Nov 29 21:43:03 2007
    ARCH: Evaluating archive log 3 thread 1 sequence 144
    ARCH: Beginning to archive log 3 thread 1 sequence 144
    Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_144
    .dbf'
    Thu Nov 29 21:43:03 2007
    ARC0: Evaluating archive log 3 thread 1 sequence 144
    ARC0: Unable to archive log 3 thread 1 sequence 144
    Log actively being archived by another process
    Thu Nov 29 21:43:03 2007
    ARCH: Completed archiving log 3 thread 1 sequence 144
    Thu Nov 29 21:49:00 2007
    Starting control autobackup
    Thu Nov 29 21:50:14 2007
    Control autobackup written to SBT_TAPE device
    comment 'API Version 2.0,MMS Version 5.0.0.0',
    media 'WP0280'
    handle 'c-2861328927-20071129-01'
    Thanks
    Shaan
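
    A hedged note on why that grep finds nothing: the timestamp lines carry the time of day between the day and the year ("Mon Nov 26 21:42:43 2007"), so the pattern " Nov 23 2007" can never match. Two sketches that work around this (Nov 23 2007 fell on a Friday; also drop the tail -500, which may not reach far enough back):

        # Match around the time-of-day instead of across it
        grep "Nov 23 ..:..:.. 2007" alert_sid.log

        # Print everything from the first Nov 23 stamp up to the first Nov 24 stamp
        awk '/^Fri Nov 23 .* 2007/,/^Sat Nov 24 .* 2007/' alert_sid.log > alert_date.txt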

  • How can I see the date and time of an iMovie event (new iMovie 10.0.4)? Thank you for your answer, and greetings from Switzerland.

    How can I see the date and time of an event in iMovie 10.0.4?
    Thank you for your answer.
    Greetings from Switzerland

    Managing Profiles
    http://support.mozilla.com/en-US/kb/Managing-profiles

  • How to see data for different customers with different dunning levels

    Hi,
    I want to make a BI report like this.
    How can I see the balance for different dunning levels in the ECC system?
    Customer Number (Payer)   Description         Level 0   Level 1   Level 2   Level 3   Level 4   Level 5   Total Balance
    7019706                   Carefour                 82        38        92        82        17         5             316
    8281554                   Sheffield Central        22        48        88        57        87        92             394
    8256558                   Liverpool Ents           65        66        17        26        26        64             264
    2920349                   The SUA                  52        95        40         3        99        98             387
    9376822                   Bar Central               1        96        43        52        40        66             298
    4777091                   Bar Local                62        78         0        68         9        79             296
    1910859                   Café Colombus            93        32         8        53         1        11             198
    Total                                             377       453       288       341       279       415            2153

    In FBL5N, you need to subtotal based on the customer number and dunning level to get the required data; however, to produce the specific output format you describe, you need to build the report from this data.
    Regards,
    Gaurav
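
    For illustration, the shape of that report in generic SQL (a sketch only; the table and column names are hypothetical, not the actual FBL5N/ECC fields):

        SELECT customer_no,
               MAX(description)                                        AS description,
               SUM(CASE WHEN dunning_level = 0 THEN amount ELSE 0 END) AS level_0,
               SUM(CASE WHEN dunning_level = 1 THEN amount ELSE 0 END) AS level_1,
               SUM(CASE WHEN dunning_level = 2 THEN amount ELSE 0 END) AS level_2,
               SUM(CASE WHEN dunning_level = 3 THEN amount ELSE 0 END) AS level_3,
               SUM(CASE WHEN dunning_level = 4 THEN amount ELSE 0 END) AS level_4,
               SUM(CASE WHEN dunning_level = 5 THEN amount ELSE 0 END) AS level_5,
               SUM(amount)                                             AS total_balance
        FROM   open_items
        GROUP  BY customer_no;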

  • How to generate test data for all the tables in oracle

    I am planning to use PL/SQL to generate test data in all the tables in a schema. The schema name is given as an input parameter, along with the minimum records per master table and the minimum records per child table. The data should be consistent in the columns which are used for constraints, i.e. using the same column value.
    I am planning to implement something like:
    execute sp_schema_data_gen (schemaname, minrecinmstrtbl, minrecsforchildtable);
    schemaname = owner,
    minrecinmstrtbl = minimum records to insert into each parent table,
    minrecsforchildtable = minimum records to enter into each child table of each master table;
    using all_tables where owner = schemaname,
    all_tab_columns and all_constraints where owner = schemaname,
    and the dbms_random package.
    Does anyone have a better idea for doing this? Is this functionality already there in the Oracle DB?

    Ah, damorgan, data, test data, metadata and table-driven processes. Love the stuff!
    There are two approaches you can take with this. I'll mention both and then ask which
    one you think you would find most useful for your requirements.
    One approach I would call the generic bottom-up approach which is the one I think you
    are referring to.
    This system is a generic test data generator. It isn't designed to generate data for any
    particular existing table or application but is the general case solution.
    Building on damorgan's advice, define the basic hierarchy (table collection, tables, data) and start at the data level.
    1. Identify/document the data types that you need to support. Start small (NUMBER, VARCHAR2, DATE) and add as you go along
    2. For each data type identify the functionality and attributes that you need. For instance for VARCHAR2
    a. min length - the minimum length to generate
    b. max length - the maximum length
    c. prefix - a prefix for the generated data; e.g. for an address field you might want a 'add1' prefix
    d. suffix - a suffix for the generated data; see prefix
    e. whether to generate NULLs
    3. For NUMBER you will probably want at least precision and scale but might want minimum and maximum values or even min/max precision,
    min/max scale.
    4. Store the attribute combinations in Oracle tables.
    5. Build functionality for each data type that can create the range and type of data that you need. These functions should take parameters that can be used to control the attributes and the amount of data generated (a sketch follows this list).
    6. At the table level you will need business rules that control how the different columns of the table relate to each other. For example, for ADDRESS information your business rule might be that ADDRESS1, CITY, STATE, ZIP are required and ADDRESS2 is optional.
    7. Add table-level processes, driven by the saved metadata, that can generate data at the record level by leveraging the data type functionality you have built previously.
    8. Then add the metadata, business rules and functionality to control the TABLE-TO-TABLE relationships; that is, the data model. You need the same DEPTNO values in the SCOTT.EMP table that exist in the SCOTT.DEPT table.
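
    For illustration, a minimal sketch of one step-5 data-type function built on DBMS_RANDOM (the function name and parameters are made up; the attributes mirror the VARCHAR2 list in step 2):

        -- Hypothetical generator for one data type (VARCHAR2), driven by
        -- min/max length, prefix/suffix, and a NULL percentage.
        CREATE OR REPLACE FUNCTION gen_varchar2 (
            p_min_len  IN PLS_INTEGER,
            p_max_len  IN PLS_INTEGER,   -- DBMS_RANDOM.STRING caps out at 60
            p_prefix   IN VARCHAR2 DEFAULT NULL,
            p_suffix   IN VARCHAR2 DEFAULT NULL,
            p_null_pct IN NUMBER   DEFAULT 0   -- percentage of NULLs to generate
        ) RETURN VARCHAR2
        IS
            l_len PLS_INTEGER;
        BEGIN
            -- Sometimes return NULL, per the requested percentage
            IF DBMS_RANDOM.VALUE(0, 100) < p_null_pct THEN
                RETURN NULL;
            END IF;
            -- Random length between the min and max attributes
            l_len := TRUNC(DBMS_RANDOM.VALUE(p_min_len, p_max_len + 1));
            -- 'U' = random uppercase letters (see DBMS_RANDOM.STRING)
            RETURN p_prefix || DBMS_RANDOM.STRING('U', l_len) || p_suffix;
        END gen_varchar2;
        /

    A table-level driver (step 7) would then read the stored attribute rows and call one such function per column.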
    The second approach I have used more often. I would call it the top-down approach and I use
    it when test data is needed for an existing system. The main use case here is to avoid
    having to copy production data to QA, TEST or DEV environments.
    QA people want to test with data that they are familiar with: names, companies, code values.
    I've found they aren't often fond of random character strings for names of things.
    The second approach I use for mature systems where there is already plenty of data to choose from.
    It involves selecting subsets of data from each of the existing tables and saving that data in a
    set of test tables. This data can then be used for regression testing and for automated unit testing of
    existing functionality and functionality that is being developed.
    QA can use data they are already familiar with and can test the application (GUI?) interface on that
    data to see if they get the expected changes.
    For each table to be tested (e.g. DEPT) I create two test system tables: a BEFORE table and an EXPECTED table.
    1. DEPT_TEST_BEFORE
         This table has all DEPT table columns and a TEST_CASE column.
         It holds DEPT-image rows for each test case that show the row as it should look BEFORE the
         test for that test case is performed.
         CREATE TABLE DEPT_TEST_BEFORE (
         TESTCASE NUMBER,
         DEPTNO NUMBER(2),
         DNAME VARCHAR2(14 BYTE),
         LOC VARCHAR2(13 BYTE)
         );
    2. DEPT_TEST_EXPECTED
         This table also has all DEPT table columns and a TEST_CASE column.
         It holds DEPT-image rows for each test case that show the row as it should look AFTER the
         test for that test case is performed.
    Each of these tables is a mirror image of the actual application table with one new column
    added that contains a value representing the TESTCASE_NUMBER.
    To create test case #3, identify or create the DEPT records you want to use for test case #3.
    Insert these records into DEPT_TEST_BEFORE:
         INSERT INTO DEPT_TEST_BEFORE
         SELECT 3, D.* FROM DEPT D WHERE DEPTNO = 20;
    Insert records for test case #3 into DEPT_TEST_EXPECTED that show the rows as they should
    look after test #3 is run. For example, if test #3 creates one new record, add all the
    records from the BEFORE data set and add a new one for the new record.
    When you want to run test case #3 the process is basically (ignore for this illustration that
    there is a foreign key between DEPT and EMP):
    1. Delete the records from SCOTT.DEPT that correspond to test case #3 DEPT records.
              DELETE FROM DEPT
              WHERE DEPTNO IN (SELECT DEPTNO FROM DEPT_TEST_BEFORE WHERE TESTCASE = 3);
    2. Insert the test data set records for SCOTT.DEPT for test case #3.
              INSERT INTO DEPT
              SELECT DEPTNO, DNAME, LOC FROM DEPT_TEST_BEFORE WHERE TESTCASE = 3;
    3. Perform the test.
    4. Compare the actual results with the expected results.
         This is done by a function that compares the records in DEPT with the records
         in DEPT_TEST_EXPECTED for test #3 (a sketch follows this list).
         I usually store these results in yet another table or just report them out.
    5. Report out the differences.
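
    For illustration, a minimal sketch of the step-4 comparison using MINUS (assuming the full DEPT column list is DEPTNO, DNAME, LOC):

        -- Rows in DEPT but not in the expected image, and vice versa;
        -- an empty result means test case #3 passed.
        SELECT 'MISSING FROM EXPECTED' AS diff, D.* FROM (
            SELECT DEPTNO, DNAME, LOC FROM DEPT
            MINUS
            SELECT DEPTNO, DNAME, LOC FROM DEPT_TEST_EXPECTED WHERE TESTCASE = 3
        ) D
        UNION ALL
        SELECT 'MISSING FROM ACTUAL' AS diff, E.* FROM (
            SELECT DEPTNO, DNAME, LOC FROM DEPT_TEST_EXPECTED WHERE TESTCASE = 3
            MINUS
            SELECT DEPTNO, DNAME, LOC FROM DEPT
        ) E;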
    This second approach uses data the users (QA) are already familiar with, is scalable, and
    makes it easy to add new data that meets business requirements.
    It is also easy to automatically generate the necessary tables and test setup/breakdown
    using a table-driven metadata approach. Adding a new test table is as easy as calling
    a stored procedure; the procedure can generate the DDL or create the actual tables needed
    for the BEFORE and AFTER snapshots.
    The main disadvantage is that existing data will almost never cover the corner cases.
    But you can add data for these. By corner cases I mean data that defines the limits
    for a data type: a VARCHAR2(30) name field should have at least one test record that
    has a name that is 30 characters long.
    Which of these approaches makes the most sense for you?

  • How to get the historical data for a newly added field in the cube?

    Hi Experts,
    I have a small doubt on remodeling the InfoCube.
    After adding a characteristic or key figure to a cube using the remodeling concept, how can I get the historical data for that particular field?
    I have searched in SDN also, but I didn't get proper information.
    Please excuse me if I posted a repeated question.
    A helpful answer will be awarded with points.
    Thanks & regards,
    Venkat.

    Hi,
    Depending on your customer's need you could use the remodeling functionality, but sometimes there is no way to retrieve what you want, so another option you should consider is the following.
    Advantages:
    it will cost less effort and guarantees the result.
    Drawbacks:
    data is redundant for a while;
    space (depending on the volume of historical data).
    So here are the steps:
    Step 1: Adjust your extraction process according to the fields you need to add to populate the cube.
    Step 2: Create a DSO (or even a cube) next to it, and feed the DSO with a full load using the enhanced extractor you adjusted with the new fields in step 1. This should be a one-shot load, done only once.
    Step 3: Copy the query to a multi-provider built on top of the new historical data from the DSO and the running live delta cube. Adjust the queries if necessary.
    Optional: if you then want to get rid of the DSO or the new cube for historical data, you can empty the actual one, push the data across from the new data provider, and that's all.
    Bye
    Boujema

  • In MDX, how to get the max date for all employees? Is it possible? Shall we use GROUP BY in MDX?

    In MDX, how do I get the max date for all employees? Is it possible? Shall we use GROUP BY in MDX?
    Example:
    empno   ename   date
    1       hari    12-01-1982
    1       hari    13-06-2000
    Using the above data, I want to get the max date.

    Hi Hari3109,
    According to your description, you want to get the max date for the employees, right?
    In your scenario, do you want to get the max date for all the employees or for each employee? In MDX, we have the Max function to achieve your requirement. You can refer to Naveen's link or the link below to see the details.
    http://www.sqldbpros.com/2013/08/get-the-max-date-from-a-cube-using-mdx/
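    For illustration, a minimal sketch of that Max approach (the cube, dimension, and measure names here are assumptions, not from your model):

        WITH MEMBER [Measures].[Max Date Key] AS
            Max(
                [Employee].[Employee].[Employee].Members,  -- the set to scan
                [Measures].[Date Key]                      -- the date held as a numeric key
            )
        SELECT [Measures].[Max Date Key] ON COLUMNS
        FROM [MyCube]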
    If this is not what you want, please provide us more information about the structure of your cube, so that we can make further analysis.
    Regards,
    Charlie Liao
    TechNet Community Support

  • How to find the condition type for a particular material document number

    Hi,
    How to find the condition type for a particular material document number?

    Hi,
    Condition types are maintained at PO level.
    Take the material document number (MBLNR), pass it to the MSEG table, and take the EBELN field; then from the EKKO table take the EKKO-KNUMV field.
    Pass EKKO-KNUMV to the KONV-KNUMV field and take the different condition type values from KONV.
    See table T685 for the different condition types (a sketch of these joins follows this reply).
    Reward points for useful answers.
    Regards
    Anji
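
    For illustration, the join path above as a single SQL sketch (simplified: the real MSEG key also includes MJAHR/ZEILE, and KONV is cluster data, so read this as a logical picture only):

        SELECT k.kschl AS condition_type,
               k.kbetr AS rate,
               k.waers AS currency
        FROM   mseg m
        JOIN   ekko e ON e.ebeln = m.ebeln
        JOIN   konv k ON k.knumv = e.knumv
        WHERE  m.mblnr = :material_document_no;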

  • How to avoid the column heading for only the total line in an ALV list display

    Hi,
    How to avoid the column heading for only the total line in an ALV list display?

    Hi,
    To change a column header, the field catalog is used.
    Look at the example below for changing column text headers; use this to change, hide, or alter the ALV columns.
    READ TABLE gt_fcat WITH KEY fieldname = 'TEXT1' "TEXT1 is your field name
       ASSIGNING <gtfcat>.
    IF sy-subrc = 0.
       <gtfcat>-coltext   = 'Date Type'.
       <gtfcat>-no_out    = ' '.
       <gtfcat>-tooltip   = 'Date Type Text from IT0019'.
       <gtfcat>-seltext   = 'IT0019'.
       "Keep seltext as '' if you want to hide the heading
    ENDIF.
    regards
    austin

  • How to trigger a BAdI for a particular screen of a transaction alone?

    Friends,
    I am trying to issue an error message in the PAI of screen SAPMQEVA (0102), which belongs to QA11 / usage decision. In the method PUT_DATA of the implementation QEVA_SUBSCREEN_1101 I am able to capture all the screen fields, but the problem is that there are two tabstrips (Defects, Inspection Lot Stock) and I want to trigger the message only in the PAI of SAPMQEVA (0102), i.e. the last tabstrip (Inspection Lot Stock). This BAdI triggers in every PAI of QA11 and issues the error message, yet the screen-field values of the last tabstrip are filled only in its own PAI. How can I trigger it for the PAI of the last tabstrip alone? No parameters in the method seem to have the screen number or OK code. I have tried the BAdI INSPECTIONLOT_UPDATE, which doesn't solve the purpose. Please suggest.
    regards
    Sathish. R

    Hi,
    Thanks for answering. I have tried all the user exits, including the one that you mentioned, but in none of them does the UD data or storage location data come in. Only in the BAdI that I mentioned do I get all the data of the screen fields. How can I make it trigger for that particular screen alone? Is that doable?
    regards
    Sathish. R

  • How to avoid invalid data entering a LOV through code

    Hi,
    1) I have developed a LOV in a table region, but the user can easily enter invalid data and save it into the database tables.
    2) I created one formValue and mapped the return item into it; it is still not working in the table region LOV.
    3) How to avoid invalid data entering the LOV through code? I have tried the code below in the EOImpl set-value method, but somehow it's not working:
    if (value != null)
    {
        throw new OAAttrValException(OAAttrValException.TYP_ENTITY_OBJECT,
                                     getEntityDef().getFullName(),
                                     getPrimaryKey(),
                                     "ProcurementCategory",
                                     getProcurementCategory(),
                                     "FND",
                                     "FND_LOV_SELECT_VALUE");
    }
    Thanks.
    krish.

    Thanks Reetesh and Gourav for your help.
    I followed the mapping details below:
    LOV Item Properties
    ID -PurcCommodity
    ViewInstance -VendorVO1
    ViewObject -PurcCommodity
    map1 properties
    LOV Region Item - segment1
    Return Item -PurcCommodity
    CriteriaItem -PurcCommodity
    Usefor Validation -Default
    map2 properties
    LOV Region Item - segment1
    Return Item -validation(formvalue)
    CriteriaItem -
    Usefor Validation -yes
    form value properties
    ID -validation
    ViewInstance -VendorVO1
    ViewObject -PurcCommodity
    Gourav, I double-checked: with multiple rows it is not working, and sometimes it is not working for a single row too.
    Thanks
    krish.

  • How to Get Missing Dates for Each Support Ticket In My Query?

    Hello -
    I'm really baffled as to how to get missing dates for each support ticket in my query.  I did a search for this and found several CTEs, however they only provide ways to find missing dates in a date table rather than missing dates for another column in a table.  Let me explain a bit further here -
    I have a query which has a list of support tickets for the month of January.  Each support ticket is supposed to be updated daily by a support rep, however that isn't happening, so the business wants to know for each ticket which dates have NOT been updated.  So, for example, I might have support ticket 44BS which was updated on 2014-01-01, 2014-01-05, 2014-01-07.  Each time the ticket is updated a new row is inserted into the table.  I need a query which will return the missing dates per each support ticket.
    I should also add that I DO NOT have any sort of admin nor write permissions to the database...none at all.  My team has tried and they won't give 'em.  So proposing a function or stored-procedure solution will not work.  I'm stuck with doing everything in a query.
    I'll try and provide some sample data as an example -
    CREATE TABLE #Tickets (
    TicketNo VARCHAR(4)
    ,DateUpdated DATE
    );
    INSERT INTO #Tickets VALUES ('44BS', '2014-01-01');
    INSERT INTO #Tickets VALUES ('44BS', '2014-01-05');
    INSERT INTO #Tickets VALUES ('44BS', '2014-01-07');
    INSERT INTO #Tickets VALUES ('32VT', '2014-01-03');
    INSERT INTO #Tickets VALUES ('32VT', '2014-01-09');
    INSERT INTO #Tickets VALUES ('32VT', '2014-01-11');
    So for ticket 44BS, I need to return the missing dates between January 1st and January 5th, again between January 5th and January 7th.  A set-based solution would be best.
    I'm sure this is easier than I'm making it.  However, after playing around for a couple of hours my head hurts and I need sleep.  If anyone can help, you'd be a job-saver :)
    Thanks!!

    CREATE TABLE #Tickets (
    TicketNo VARCHAR(4)
    ,DateUpdated DATETIME
    )
    GO
    INSERT INTO #Tickets VALUES ('44BS', '2014-01-01')
    INSERT INTO #Tickets VALUES ('44BS', '2014-01-05')
    INSERT INTO #Tickets VALUES ('44BS', '2014-01-07')
    INSERT INTO #Tickets VALUES ('32VT', '2014-01-03')
    INSERT INTO #Tickets VALUES ('32VT', '2014-01-09')
    INSERT INTO #Tickets VALUES ('32VT', '2014-01-11')
    GO
    SELECT *
    FROM #Tickets
    GO
    CREATE TABLE #tempDist (
    NRow INT
    ,TicketNo VARCHAR(4)
    ,MinDate DATETIME
    ,MaxDate DATETIME
    )
    GO
    CREATE TABLE #tempUnUsedDate (
    TicketNo VARCHAR(4)
    ,MissDate DATETIME
    )
    GO
    -- One row per ticket with its first and last update dates
    INSERT INTO #tempDist
    SELECT Row_Number() OVER (ORDER BY TicketNo) AS NRow
    ,TicketNo
    ,MIN(DateUpdated) AS MinDate
    ,MAX(DateUpdated) AS MaxDate
    FROM #Tickets
    GROUP BY TicketNo
    SELECT *
    FROM #tempDist
    GO
    -- Get the number of rows in the looping table
    DECLARE @RowCount INT
    SET @RowCount = (
    SELECT COUNT(TicketNo)
    FROM #tempDist
    )
    -- Declare and initialize an iterator
    DECLARE @I INT
    SET @I = 1
    -- Loop through the rows of #tempDist
    WHILE (@I <= @RowCount)
    BEGIN
    -- Variables to hold the current ticket's data
    DECLARE @MyDate DATETIME
    DECLARE @TicketNo VARCHAR(50)
    ,@MinDate DATETIME
    ,@MaxDate DATETIME
    -- Get the data from the table and set the variables
    SELECT @TicketNo = TicketNo
    ,@MinDate = MinDate
    ,@MaxDate = MaxDate
    FROM #tempDist
    WHERE NRow = @I
    -- Walk day by day from the first update to the last,
    -- recording every date that has no matching update row
    SET @MyDate = @MinDate
    WHILE @MaxDate > @MyDate
    BEGIN
    IF NOT EXISTS (
    SELECT *
    FROM #Tickets
    WHERE TicketNo = @TicketNo
    AND DateUpdated = @MyDate
    )
    BEGIN
    INSERT INTO #tempUnUsedDate
    VALUES (
    @TicketNo
    ,@MyDate
    )
    END
    SET @MyDate = DATEADD(d, 1, @MyDate)
    END
    SET @I = @I + 1
    END
    GO
    SELECT *
    FROM #tempUnUsedDate
    GO
    DROP TABLE #Tickets
    GO
    DROP TABLE #tempDist
    GO
    DROP TABLE #tempUnUsedDate
    Thanks, 
    Shridhar J Joshi 
    <If the post was helpful mark as 'Helpful' and if the post answered your query, mark as 'Answered'>
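
    For comparison, a set-based sketch of the same result (assuming the #Tickets table above; the recursive CTE generates each ticket's full date range, then an anti-join keeps the dates with no update row):

        ;WITH Ranges AS (
            SELECT TicketNo, MIN(DateUpdated) AS MinDate, MAX(DateUpdated) AS MaxDate
            FROM #Tickets
            GROUP BY TicketNo
        ), AllDates AS (
            SELECT TicketNo, MinDate AS D, MaxDate FROM Ranges
            UNION ALL
            SELECT TicketNo, DATEADD(DAY, 1, D), MaxDate FROM AllDates WHERE D < MaxDate
        )
        SELECT a.TicketNo, a.D AS MissingDate
        FROM AllDates a
        LEFT JOIN #Tickets t
          ON t.TicketNo = a.TicketNo AND t.DateUpdated = a.D
        WHERE t.TicketNo IS NULL
        ORDER BY a.TicketNo, a.D
        OPTION (MAXRECURSION 0);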

  • How to get the current date for the posting date

    Hi,
    How do I get the current date for the posting date? Any sample code?
    Thanks

    Hi,
    Use
    SELECT GETDATE()
    for the current date.
    Regards,
    Rahul
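
    For illustration (SQL Server syntax; the date type in the CONVERT form needs SQL Server 2008 or later):

        SELECT GETDATE();                 -- current date and time
        SELECT CONVERT(date, GETDATE()); -- date portion only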

  • How does AWR contribute data to ADDM?

    Hi,
    In Oracle 10g I understand that AWR is used to get information for ADDM.
    I am looking for a detailed explanation of how it does that, and how Oracle handles the data and decides on the recommendations.
    Regards
    MMU

    Hi,
    "I am looking for a detail explanation as how its doing that and How ORACLE is handling the data"
    As I understand it, the data for AWR is collected exactly the same way as a STATSPACK snapshot: by copying the contents of the in-memory accumulators (as exposed by the x$ fixed tables).
    When you take an AWR snapshot, your collection threshold settings determine the volume of data collected, especially SQL. The snapshot thresholds only apply to the SQL statements that are captured in the stats$sql_summary table. The stats$sql_summary table can easily become the largest tables in the STATSPACK schema because each snapshot might collect several hundred rows, one for each SQL statement that was in the library cache at the time of the snapshot.
    In the Automatic Workload Repository (AWR), the executions_th , parse_calls_th , disk_read_th , buffer_gets_th , sharable_mem_th , and version_count_th SQL collection threshold settings allow the DBA to set thresholds for SQL statements. If any of the thresholds are exceeded, the information will be stored by STATSPACK in the repository.
    http://www.dba-oracle.com/tips_oracle_statspack_snapshot_thresholds.htm
    "How ORACLE is handling the data and deciding the recommendation"
    That's a closely held secret, a proprietary feature that Oracle will never expose!
    Hope this helps. . . .
    Donald K. Burleson
    Author of "Oracle Tuning: The Definitive Reference"
    http://www.rampant-books.com/t_oracle_tuning_book.htm
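
    For illustration, the standard APIs for driving the snapshot/ADDM cycle by hand (a sketch; the interval and retention values are made up):

        -- Take an AWR snapshot on demand (MMON normally does this hourly)
        EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;

        -- Adjust retention and interval (values in minutes; illustrative)
        EXEC DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(retention => 43200, interval => 30);

        -- Then generate ADDM findings between two snapshots with the packaged
        -- script, which prompts for the begin/end snapshot IDs:
        -- SQL> @?/rdbms/admin/addmrpt.sql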
