Indexes: Inconsistent with DDIC source

Hi, I have an issue in DB02: it reports that some indexes are missing. When I check in SE14 I see the error
Database object for /BI0/E0BPM_C01 is inconsistent: (Secondary indexes)
and
Indexes: Inconsistent with DDIC source
The index exists on the database, but it does not exist in the Dictionary.
In SE14 I tried "Activate and adjust database" with "Save data", but the issue persists.
Any idea or support is welcome.
Regards

When I try to create an index with transaction SE11, I can't; I get the message "Index ID 0 is reserved for the primary index".
Regards
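
For what it's worth, on an Oracle-based system one way to see what actually exists on the database is to query the data dictionary directly, and then drop the orphaned index so that SE14 can recreate everything from the DDIC definition. A minimal sketch, assuming an Oracle backend; the index name below is hypothetical and should be taken from the DB02/SE14 detail view first:

    -- List the secondary indexes that exist on the database for the table
    SELECT index_name, status
      FROM user_indexes
     WHERE table_name = '/BI0/E0BPM_C01';

    -- Drop the orphaned index (name is hypothetical; verify it first)
    DROP INDEX "/BI0/E0BPM_C01~040";

Once the orphaned index is gone, "Activate and adjust database" in SE14 should be able to bring the table back in line with the DDIC source.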

Similar Messages

  • Error during Initial Provisioning/Modify: Inconsistency with address

    Hi all
During Initial Provisioning (and also after any modification) of some users I get the following errors:
    E:Failed storing DDIC
    E:Exception from Mod operation:Inconsistency with address
    I:MX_PERSON has valid company address assignment.
    I:The entry with MSKEY "69802" has MSKEYVALUE "COMPANY:SAP_IDM_DEFAULT".
    E:Failed storing SAP*
    E:Exception from Mod operation:Inconsistency with address
    I:MX_PERSON has valid company address assignment.
    I:The entry with MSKEY "69801" has MSKEYVALUE "COMPANY:SAP_IDM_DEFAULT".
    E:Failed storing SAPCPIC
    E:Exception from Mod operation:Inconsistency with address
    I:MX_PERSON has valid company address assignment.
    I:The entry with MSKEY "69802" has MSKEYVALUE "COMPANY:SAP_IDM_DEFAULT".
Surprisingly, they're all "special users"; standard users are provisioned without errors (so it does not seem related to the company address).
Unfortunately the logs do not mention which system has these inconsistencies, because provisioning mostly works well (even for these users).
    Questions:
1. Does anybody know how I can resolve this, or what combination of values leads to this error?
    2. Where can I add a Debug-Line to see which system(s) fail?
    Any help appreciated
    BR
    Michael

I solved this.
First I added an initialization script to the UpdateABAPUser pass which determined the repository and printed it as a warning to the job log.
Then I took a look at these ABAP systems using SU01 and realized that SAP*, DDIC and SAPCPIC don't have the mandatory "Last name" set, so SU01 says "Inconsistency with address" (German: "Schiefstand bei Adresse").
After I set this once on the ABAP side, the UpdateABAPUser task ran fine.
But I would still have expected IdM to set this attribute as long as it is set on the IdM side, especially since it is mandatory in the destination system. At the very least it could provide a more informative error message.
    Thanks for any help.
    BR
    Michael
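
A minimal sketch of the kind of initialization script described above, assuming SAP IdM's JavaScript task API (uWarning writes a warning to the job log) and the %$rep.$NAME% repository constant; the function name is illustrative:

    // Hypothetical init script attached to the UpdateABAPUser pass:
    // log which repository the pass is provisioning against, so that a
    // failing system can be identified in the job log.
    function z_logRepository(Par) {
        var repName = "%$rep.$NAME%"; // resolved by IdM before the script runs
        uWarning("UpdateABAPUser running against repository: " + repName);
        return Par;
    }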

  • Index issue with OR and BETWEEN when we set one partition of an index to unusable

    Need to understand why the optimizer is unable to use the index in the case of "OR" when we set one partition of the index to unusable, while the same query with BETWEEN does use the index.
    The "OR" condition fetches less data than "BETWEEN", yet the Oracle optimizer is unable to use the index in the "OR" case.
    1. Created a local index on a partitioned table
    2. Index partition t_dec_2009 set to unusable
    -- Partitioned local Index behavior with “OR” and with “BETWEEN”
    SQL> CREATE TABLE t (
      2    id NUMBER NOT NULL,
      3    d DATE NOT NULL,
      4    n NUMBER NOT NULL,
      5    pad VARCHAR2(4000) NOT NULL
      6  )
      7  PARTITION BY RANGE (d) (
      8    PARTITION t_jan_2009 VALUES LESS THAN (to_date('2009-02-01','yyyy-mm-dd')),
      9    PARTITION t_feb_2009 VALUES LESS THAN (to_date('2009-03-01','yyyy-mm-dd')),
    10    PARTITION t_mar_2009 VALUES LESS THAN (to_date('2009-04-01','yyyy-mm-dd')),
    11    PARTITION t_apr_2009 VALUES LESS THAN (to_date('2009-05-01','yyyy-mm-dd')),
    12    PARTITION t_may_2009 VALUES LESS THAN (to_date('2009-06-01','yyyy-mm-dd')),
    13    PARTITION t_jun_2009 VALUES LESS THAN (to_date('2009-07-01','yyyy-mm-dd')),
    14    PARTITION t_jul_2009 VALUES LESS THAN (to_date('2009-08-01','yyyy-mm-dd')),
    15    PARTITION t_aug_2009 VALUES LESS THAN (to_date('2009-09-01','yyyy-mm-dd')),
    16    PARTITION t_sep_2009 VALUES LESS THAN (to_date('2009-10-01','yyyy-mm-dd')),
    17    PARTITION t_oct_2009 VALUES LESS THAN (to_date('2009-11-01','yyyy-mm-dd')),
    18    PARTITION t_nov_2009 VALUES LESS THAN (to_date('2009-12-01','yyyy-mm-dd')),
    19    PARTITION t_dec_2009 VALUES LESS THAN (to_date('2010-01-01','yyyy-mm-dd'))
    20  );
    SQL> INSERT INTO t
      2  SELECT rownum, to_date('2009-01-01','yyyy-mm-dd')+rownum/274, mod(rownum,11), rpad('*',100,'*')
      3  FROM dual
      4  CONNECT BY level <= 100000;
    SQL> CREATE INDEX i ON t (d) LOCAL;
    SQL> execute dbms_stats.gather_table_stats(user,'T')
    -- Mark partition t_dec_2009 to unusable:
    SQL> ALTER INDEX i MODIFY PARTITION t_dec_2009 UNUSABLE;
    --- Let’s check whether the usable index partition can be used to apply a restriction: BETWEEN
    SQL> SELECT count(d)
        FROM t
        WHERE d BETWEEN to_date('2009-01-01 23:00:00','yyyy-mm-dd hh24:mi:ss')
                    AND to_date('2009-02-02 01:00:00','yyyy-mm-dd hh24:mi:ss');
    SQL> SELECT * FROM table(dbms_xplan.display_cursor(format=>'basic +partition'));
    | Id  | Operation               | Name | Pstart| Pstop |
    |   0 | SELECT STATEMENT        |      |       |       |
    |   1 |  SORT AGGREGATE         |      |       |       |
    |   2 |   PARTITION RANGE SINGLE|      |    12 |    12 |
    |   3 |    INDEX RANGE SCAN     | I    |    12 |    12 |
    --- Let’s check whether the usable index partition can be used to apply a restriction: OR
    SQL> SELECT count(d)
        FROM t
        WHERE
        (d >= to_date('2009-01-01 23:00:00','yyyy-mm-dd hh24:mi:ss') and d <= to_date('2009-01-01 23:59:59','yyyy-mm-dd hh24:mi:ss'))
        or
        (d >= to_date('2009-02-02 01:00:00','yyyy-mm-dd hh24:mi:ss') and d <= to_date('2009-02-02 02:00:00','yyyy-mm-dd hh24:mi:ss'))
    SQL> SELECT * FROM table(dbms_xplan.display_cursor(format=>'basic +partition'));
    | Id  | Operation           | Name | Pstart| Pstop |
    |   0 | SELECT STATEMENT    |      |       |       |
    |   1 |  SORT AGGREGATE     |      |       |       |
    |   2 |   PARTITION RANGE OR|      |KEY(OR)|KEY(OR)|
    |   3 |    TABLE ACCESS FULL| T    |KEY(OR)|KEY(OR)|
    The "OR" condition fetches less data than "BETWEEN", yet the Oracle optimizer is unable to use the index in the "OR" case.
    Regards,
    Sachin B.

    Hi,
    What is your database version?
    I ran the same test and the optimizer was able to pick the index for both queries.
    SQL> select * from v$version;
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Prod
    PL/SQL Release 10.2.0.4.0 - Production
    CORE    10.2.0.4.0      Production
    TNS for 32-bit Windows: Version 10.2.0.4.0 - Production
    NLSRTL Version 10.2.0.4.0 - Production
    SQL>
    SQL> set autotrace traceonly exp
    SQL>
    SQL>
    SQL>  SELECT count(d)
      2  FROM t
      3  WHERE d BETWEEN to_date('2009-01-01 23:00:00','yyyy-mm-dd hh24:mi:ss')
      4              AND to_date('2009-02-02 01:00:00','yyyy-mm-dd hh24:mi:ss');
    Execution Plan
    Plan hash value: 2381380216
    | Id  | Operation                 | Name | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
    |   0 | SELECT STATEMENT          |      |     1 |     8 |    25   (0)| 00:00:01 |       |       |
    |   1 |  SORT AGGREGATE           |      |     1 |     8 |            |          |       |       |
    |   2 |   PARTITION RANGE ITERATOR|      |  8520 | 68160 |    25   (0)| 00:00:01 |     1 |     2 |
    |*  3 |    INDEX RANGE SCAN       | I    |  8520 | 68160 |    25   (0)| 00:00:01 |     1 |     2 |
    Predicate Information (identified by operation id):
       3 - access("D">=TO_DATE(' 2009-01-01 23:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
                  "D"<=TO_DATE(' 2009-02-02 01:00:00', 'syyyy-mm-dd hh24:mi:ss'))
    SQL>  SELECT count(d)
      2  FROM t
      3  WHERE
      4  (
      5  (d >= to_date('2009-01-01 23:00:00','yyyy-mm-dd hh24:mi:ss') and d <= to_date('2009-01-01 23:59:59','yyyy-mm-dd hh24:mi:ss'))
      6  or
      7  (d >= to_date('2009-02-02 01:00:00','yyyy-mm-dd hh24:mi:ss') and d <= to_date('2009-02-02 02:00:00','yyyy-mm-dd hh24:mi:ss'))
      8  );
    Execution Plan
    Plan hash value: 3795917108
    | Id  | Operation                | Name | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
    |   0 | SELECT STATEMENT         |      |     1 |     8 |     4   (0)| 00:00:01 |       |       |
    |   1 |  SORT AGGREGATE          |      |     1 |     8 |            |          |       |       |
    |   2 |   CONCATENATION          |      |       |       |            |          |       |       |
    |   3 |    PARTITION RANGE SINGLE|      |    13 |   104 |     2   (0)| 00:00:01 |     2 |     2 |
    |*  4 |     INDEX RANGE SCAN     | I    |    13 |   104 |     2   (0)| 00:00:01 |     2 |     2 |
    |   5 |    PARTITION RANGE SINGLE|      |    13 |   104 |     2   (0)| 00:00:01 |     1 |     1 |
    |*  6 |     INDEX RANGE SCAN     | I    |    13 |   104 |     2   (0)| 00:00:01 |     1 |     1 |
    Predicate Information (identified by operation id):
       4 - access("D">=TO_DATE(' 2009-02-02 01:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
                  "D"<=TO_DATE(' 2009-02-02 02:00:00', 'syyyy-mm-dd hh24:mi:ss'))
       6 - access("D">=TO_DATE(' 2009-01-01 23:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
                  "D"<=TO_DATE(' 2009-01-01 23:59:59', 'syyyy-mm-dd hh24:mi:ss'))
           filter(LNNVL("D"<=TO_DATE(' 2009-02-02 02:00:00', 'syyyy-mm-dd hh24:mi:ss')) OR
                  LNNVL("D">=TO_DATE(' 2009-02-02 01:00:00', 'syyyy-mm-dd hh24:mi:ss')))
    SQL> set autotrace off
    SQL>
    Asif Momen
    http://momendba.blogspot.com
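
    If a given version or patch level still refuses the OR-expansion, it can be requested explicitly with Oracle's USE_CONCAT hint, which rewrites the OR into a concatenation (effectively a UNION ALL of the two ranges) so that each branch can prune to its own usable partition. A sketch against the table from the thread; whether it helps on the poster's exact release is an assumption:

        SQL> SELECT /*+ USE_CONCAT */ count(d)
             FROM t
             WHERE (d >= to_date('2009-01-01 23:00:00','yyyy-mm-dd hh24:mi:ss')
                AND d <= to_date('2009-01-01 23:59:59','yyyy-mm-dd hh24:mi:ss'))
                OR (d >= to_date('2009-02-02 01:00:00','yyyy-mm-dd hh24:mi:ss')
                AND d <= to_date('2009-02-02 02:00:00','yyyy-mm-dd hh24:mi:ss'));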

  • Where does Lightroom put HDR in the grid view? Is there any way to have Lightroom stack the HDR file with the source files?

    I can't decipher where (and why) the program is putting the HDR image in the grid. I stack all of my HDR source images so they are easy to track and manage. Other apps/plugins allow you to stack resulting images with their source image. That would be great if there's a way to set that in LR preferences.

    Thanks, but this doesn't really answer the question about stacking the HDR file with the source files. Yes, it does put the file in the same folder; however, many of my folders have hundreds of images (that often look similar), and as far as I can tell, LR places them randomly in the sort order. It doesn't appear to put them at the beginning or end of the sort (usually by date), but somewhere random in the middle. Even if it were made clear what method it uses to sort them, that would help locate one file among hundreds.
    Ideally, however, it should allow you to stack the HDR with the stacked source files. Is there any way to do this? If not, is it a feature that could be requested?

  • Creation of rules index failing with ORA-01652 exception

    I am trying to create a rules index in the following way,
    BEGIN
         SEM_APIS.CREATE_RULES_INDEX(
         'APPS_RDF_IDX',
         SEM_Models('SEMANTIC_SEARCH_MODEL'),
         SEM_Rulebases('OWLPRIME','SEMANTIC_SEARCH_RULEBASE'));
    END;
    with semantic_search_rulebase having about 5 rules and with 28839 triples in the model.
    When I try to run the index creation it fails after a long time with the exception
    ORA-01652: unable to extend temp segment by 128 in tablespace TEMP
    even though TEMP is allocated 5 GB.
    Please clarify the following questions:
    1. How much TEMP space should be allocated if the triples are going to number in the millions and the rules about 10 to 100? And why does indexing take so much TEMP space with such a small number of triples?
    2. How long would creating a rules index normally take with triple counts from the thousands to the millions?
    3. How can I make the rules index creation run faster?
    Thanks,
    Phani

    First of all, please start using the CREATE_ENTAILMENT API instead of the CREATE_RULES_INDEX API.
    Regarding 1), 5 GB of temp space is not a whole lot.
    It is hard to say exactly how much you need because you have user-defined rules.
    Regarding 2) and 3), please check out the following inference best-practices paper:
    http://www.oracle.com/technology/tech/semantic_technologies/pdf/semantic_infer_bestprac_wp.pdf
    Also, if you like, please post your rules and I may be able to help you model some of them using native OWL constructs.
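
    For reference, the equivalent call through the newer API might look like the following sketch (the entailment name here is hypothetical, and default parameters are assumed):

        BEGIN
          SEM_APIS.CREATE_ENTAILMENT(
            'APPS_RDF_ENT',   -- hypothetical name for the new entailment
            SEM_Models('SEMANTIC_SEARCH_MODEL'),
            SEM_Rulebases('OWLPRIME','SEMANTIC_SEARCH_RULEBASE'));
        END;
        /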

  • How to get an alert when a user logs in with "DDIC" in any of the systems?

    Hi all,
    Is it possible to get an alert whenever someone logs in with the DDIC user in any of the satellite systems, i.e. an alert such as "DDIC login attempt in system X"?
    Is this possible in CCMS or BPM or...?
    Regards,
    Neni

    Hi Srikrishna,
    The link which you have given is good, but when I log in with DDIC I am not getting alerts, and I am not able to add any satellite systems under the Security node.
    My configuration:
    Maximum values for list: 1 min
    When should an alert be triggered? From value: Red, Severity: 2
    Max. number of alerts for each message ID: 50
    Max. number of lines to be saved: 50
    SM19:
    Client: *
    User: DDIC, with "Dialog logon" selected
    Events: All
    Systems: all
    Please help me.
    Regards,
    Swaroop

  • Problem with setting Source Level in Sun Studio 2

    I've got a problem with setting the Source Level to 1.5 in Sun Studio 2. When I try to set it to 1.5 in Project Properties and click OK, everything seems to go well, but when I open Project Properties again the Source Level is back to 1.4. I need this to work because I recently started to learn Java and I want to use the foreach loop.
    Please help

    I'm just citing an example using Date().
    In fact, whether I use DateFormat or Calendar, it shows the same result.
    When I set the date to 1 Jan 1950, 0 hours 0 minutes 0 seconds,
    JDK 1.4.2 always returns 1 Jan 1950, 0 hours 10 minutes 0 seconds.
    It works correctly under JDK 1.3.1.

  • How to deal with multiple source files having the same filename...?

    Ahoy again.
    I'm currently trying to make a package for the recent version of Subversive for Eclipse Ganymede, and I'm almost finished.
    Some time ago the svn.connector components were split out of the official Subversive distribution and have to be packaged separately. And here is where my problem arises.
    The svn.connector consists (among other things) of two files which are named the same:
    http://www.polarion.org/projects/subversive/download/eclipse/2.0/update-site/features/org.polarion.eclipse.team.svn.connector_2.0.3.I20080814-1500.jar
    http://www.polarion.org/projects/subversive/download/eclipse/2.0/update-site/plugins/org.polarion.eclipse.team.svn.connector_2.0.3.I20080814-1500.jar
    At the moment makepkg downloads the first one, looks at its cache, and thinks that it already has the second file too, because it has the same name. As a result, I can neither fetch both files nor use both of them in the build() function...
    Are there currently any mechanisms in makepkg to deal with multiple source files having the same name?
    The only solution I see at the moment would be to include only the first file in the source array, install it in the build() function, and then manually download the second one via wget and install it after that (AKA quick & dirty).
    But of course I would prefer a nicer solution to this problem if possible. ^^
    TIA!
    G_Syme

    Allan wrote: I think you should file a bug report asking for a way to deal with this (but I'm not sure how to fix this at the moment...)
    OK, I've filed a bug report and have also included a suggestion how to solve this problem.
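
    For readers who hit the same problem later: newer makepkg versions can rename a downloaded file with the filename::url syntax in the source array, which avoids the cache clash entirely. A sketch using the two URLs from the post (the local filenames are illustrative):

        # PKGBUILD excerpt: give each identically-named remote file a unique local name
        source=('feature-svn.connector.jar::http://www.polarion.org/projects/subversive/download/eclipse/2.0/update-site/features/org.polarion.eclipse.team.svn.connector_2.0.3.I20080814-1500.jar'
                'plugin-svn.connector.jar::http://www.polarion.org/projects/subversive/download/eclipse/2.0/update-site/plugins/org.polarion.eclipse.team.svn.connector_2.0.3.I20080814-1500.jar')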

  • Dynamic columns with Excel Source?

    I have Excel file number 1 with columns A and B.
    I have a database table with columns A, B, C, D, E, F, G, H, etc. (there are 100 columns).
    I know how to import data normally with an Excel Source and an OLE DB Destination for Excel file 1.
    Now I have a new need:
    I should be able to dynamically import Excel files with any combination of columns.
    It should be an automatic import with a For Each container.
    When new files like the ones below are imported, I should not have to make any changes to the SSIS package:
    Excel 2) Columns A, G, X (so column A data must be added to column A in the database, etc.)
    Excel 3) Columns B, C, G, Y (so column B data must be added to column B in the database, etc.)
    Excel 4) Columns D, X
    Is this possible with SSIS? How?
    Is custom code needed? Any pointers to a solution?
    Kenny_I

    How are you going to deal with the rows here?
    For example:
    Excel 2) Columns A, G, X (so column A data must be added to column A in the database, etc.)
    When the Excel 3 files are processed, are you going to update the records that you added from Excel 2?
    Excel 3) Columns B, C, G, Y (so column B data must be added to column B in the database, etc.)
    Excel 4) Columns D, X
    When the Excel 4 files are processed, what is your scenario for column X, which you just added from Excel 2? Are you going to update it or add new rows?
    As Arthur has suggested, generating your package in code seems the way to go, but before that you need to think through all the scenarios.
    Vikash Kumar Singh || www.singhvikash.in

  • Inconsistency with document splitting

    Dear All,
    I activated the document splitting option after posting a few documents with open items. Now that document splitting is active, I cannot clear those line items which were posted before it was activated.
    Is there any way to remove this inconsistency with the old documents? Is there a program to be run... or any other solution?
    Please help to resolve the issue.
    Sap Frido.

    Hi,
    Is there any program which removes this inconsistency, just as we have a program to fix the inconsistency in withholding tax?
    Thanks and regards,
    Sap Frido

  • Received response from host (router IP address) with invalid source port 32784

    I replaced my old wireless router with a Cisco Linksys E4200, running firmware version 1.0.02 build 13  May 24, 2011.  About once a minute the router sends an unsolicited DNS message to the IPV4 multicast address 01:00:5e:00:00:fb with a destination IP address of 224.0.0.251.  The unsolicited message is a DNS response with source port 32784, transaction ID 0, flags 0x8400 (standard query response, no error), questions 0, answer RRs 2, authority RRs 0 and additional RRs 1.  The two answers both relate to the router itself: one has Name Cisco18738.local, type A (host address), class 1 (IN), cache flush true, time to live 1 minute, data length 4, and the address of the router.  The other is the reverse of the same address.  The additional record is for Cisco18738.local, type NSEC, class IN, cache flush true, time to live 1 minute, data length 5, next domain name Cisco18738.local, RR type A (host address).
    When my desktop computer receives these messages it logs an error, for example: "Jun 23 07:39:22 sauterws02 avahi-daemon[1067]: Received response from host 10.146.9.1 with invalid source port 32784 on interface 'eth0.0'"  The 10.146.9.1 is the router's IP address.  I also see these messages on the wireless link from my laptop.
    I suppose the E4200 is generating these DNS messages in a misguided attempt to make sure there is no stale information about its name. Is there a way to turn them off? If not, is there a way to report this to Cisco as a bug?

    gv wrote:
    1. To contact Linksys, call support.
    2. From the internet draft: "Multicast DNS implementations MUST
    silently ignore any Multicast DNS Responses they receive where the
    source UDP port is not 5353." Your avahi-daemon does not comply with this draft.
    Thank you for the reference.  For the sake of others who may read this thread, the current draft of multicast DNS is at http://www.ietf.org/id/draft-cheshire-dnsext-multicastdns-14.txt. 
    Here is the whole paragraph from which you quoted, from section 6 (Responding): "The source UDP port in all Multicast DNS Responses MUST be 5353 (the well-known port assigned to mDNS). Multicast DNS implementations MUST silently ignore any Multicast DNS Responses they receive where the source UDP port is not 5353."
    Thus, it appears that there are two errors here: the Cisco E4200 is not using 5353 as the source port, and the logger is not silently ignoring it.
    The message appears to be an announcement rather than an actual response to a query.  In section 8.3 (Announcing) I found this paragraph: "A Multicast DNS Responder MUST NOT send announcements in the absence of information that its network connectivity may have changed in some relevant way. In particular, a Multicast DNS Responder MUST NOT send regular periodic announcements as a matter of course."  Thus, it appears that there are three errors here.

  • How to modify a field symbol of type index table with another field symbol of type any

    Hello Experts,
    How is it possible to update a field-symbol table of type INDEX TABLE from another field-symbol table?
    e.g.
    Field symbol: <lt_table1> TYPE INDEX TABLE.
    Field symbol: <lt_table2> TYPE INDEX TABLE.
    After some code, at run time these tables are filled as follows:
    <lt_table1> has values for columns like C11, C12, C13
    <lt_table2> has values for columns like C11, C12, C13, C14, C15, i.e. some extra columns compared to <lt_table1>
    Now I want to modify <lt_table1> entries, e.g. column C12, with the C12 column of <lt_table2>.
    How can I achieve this?
    Regards,
    Chetan.

    Hi,
    Did you try ASSIGN COMPONENT xx OF STRUCTURE <row> TO <component> on each table row?
    xx contains the number (or name) of the column.
    Or maybe, if you have a description of the structure in a field catalog or similar, that will be easier.
    Regards,
    Fred
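
    A minimal sketch of that approach, assuming the two tables are matched row by row by index (the names C12, <lt_table1> and <lt_table2> come from the question; the row pairing is an assumption):

        FIELD-SYMBOLS: <ls_row1>  TYPE any,
                       <ls_row2>  TYPE any,
                       <lv_comp1> TYPE any,
                       <lv_comp2> TYPE any.

        LOOP AT <lt_table1> ASSIGNING <ls_row1>.
          " Pair each row of table1 with the row at the same index in table2
          READ TABLE <lt_table2> ASSIGNING <ls_row2> INDEX sy-tabix.
          CHECK sy-subrc = 0.
          " Address column C12 generically in both rows
          ASSIGN COMPONENT 'C12' OF STRUCTURE <ls_row1> TO <lv_comp1>.
          CHECK sy-subrc = 0.
          ASSIGN COMPONENT 'C12' OF STRUCTURE <ls_row2> TO <lv_comp2>.
          CHECK sy-subrc = 0.
          " Overwrite table1's C12 with table2's value
          <lv_comp1> = <lv_comp2>.
        ENDLOOP.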

  • IOS-XR 5.1.3 SP2 - %OS-RT_CHECK-3-INCONSISTENCY_DETECTED : ipv4-unicast detected inconsistency with

    I'm curious if anyone else has seen this message logged after an upgrade or downgrade to 5.1.3 with SP2:
    %OS-RT_CHECK-3-INCONSISTENCY_DETECTED : ipv4-unicast detected inconsistency with 1 entries for scan-id N
    We were told by TAC that this is a cosmetic issue only and not to worry. However, the engineer in me wants to know what the router is upset about and how to suppress the log message. I'd also like to ask Cisco to create a cosmetic bug fix for 5.1.3 to resolve the log message, if it is indeed truly cosmetic in nature.
    Thanks!
    -ben

    Hi Ben, the DDTS (bug) is fixed in 5.2.x onward. There is no SMU planned for prior releases.
    You can schedule a periodic selective clear of the log buffer via:
    RP/0/RSP0/CPU0:A9K-BNG#clear logging events delete [option] [field]
    As for the logging correlator, here is a little write-up on that (copy/paste from my kb):
    XR nag killer
    The short version  (i.e. here's the code to make it happen!)
    !! enter config mode... 
    conf t
    !! this removes the correlator so you can edit it...
    no logging correlator apply rule kill-annoyances all-of-router
    !!! define the rule
    logging correlator rule kill-annoyances type nonstateful
      timeout 600000
    !!! this is the "root cause" one... make sure you pick something that happens frequently
    rootcause PLATFORM ENVMON FAN_FAIL
    !!! these are all the NON root cause events. this is what gets squashed along with the root cause.
    !!! add things here that you want squashed.
      nonrootcause
      alarm PLATFORM ENVMON FAN_CLEAR
      alarm PLATFORM ENVMON FANTRAY_FAIL
      alarm PLATFORM ENVMON ENV_CONDITION
      alarm PLATFORM ENVMON FANTRAY_CLEAR
    !!! timeouts are currently maxed at ten minutes... (smu anyone?)
      timeout-rootcause 600000
    !!! this re-applies the correlator
     logging correlator apply rule kill-annoyances all-of-router
    !!! now commit the thing
    commit
    !!! done...
    On a somewhat related note, if anyone is not already familiar with the
     "logging correlator" function -- it can be used to greatly reduce the
     amount of "noise" generated by all these various little things that are
     broken (like single fan tray systems!)
     An example config that I have on my box is as follows:
     logging correlator rule fan type nonstateful
      timeout 600000
      rootcause PLATFORM ENVMON FAN_FAIL
      nonrootcause
      alarm PLATFORM ENVMON FAN_CLEAR
      alarm PLATFORM ENVMON FANTRAY_FAIL
      alarm PLATFORM ENVMON ENV_CONDITION
      alarm PLATFORM ENVMON FANTRAY_CLEAR
      timeout-rootcause 600000
    logging correlator apply rule fan
      all-of-router
    Which essentially says the following:
    1) a message of format "PLATFORM-ENVMON-FAN_FAIL" is a 'root cause' event.
    the timeout for root cause events is set to 600000ms (ten minutes), so
    no matter how many of these events I see, I will only actually throw a
    syslog every ten minutes.
    2) underneath this 'root cause' event are a number of 'nonrootcause'
    events.  If I see any of these events within the timeout specified (again,
    ten minutes) of a 'root cause' I will also suppress these messages -- the
    theory here is that I already know the root cause and don't want to clutter
    myself up with all the side effects.    In reality we're just hacking the
    correlator to get rid of messages, but hey -- it works.  ;-)
    3) this particular "correlator rule" is applied to the whole router (you
    *can* do all sorts of funky stuff with where you apply it if you want).
    4) in real environments the idea is to have lots of different correlators
    for different events... but what I do is basically maintain a great big
    list of known syslog messages that I don't want to have splattering my
    screen, and the correlator gobbles them all up for me.
    Limitations:
    5) UPDATE: you can now set timeouts up to 7200000 seconds (LONG time...)
    6) the only really annoying part is that you have to unapply the rule
    before you can edit it... so the process is "unapply rule, commit, change
    rule, apply rule, commit" instead of just "change rule".  But hey, it's
    better than nothing.
    7) if you want to see the messages that got suppressed/correlated, use the
    "show logging correlator buffer all-in-buffer" command -- and sit back and
    be amazed at how much console bandwidth you've saved.  ;-)
    Hope people find this helpful...
    config example courtesy of LJ Wobker.
    xander

  • Enterprise Portal integration with E-Sourcing Portal

    Hi,
    Has anyone done Enterprise Portal integration with the E-Sourcing portal? How can we use the E-Sourcing portal URL in the Enterprise Portal?
    Thanks,
    Rajani

    Hello Rajani,
    For connecting a web-based application you have to use the Application Integrator.
    Please go through the blog below and implement the same; with it you can connect to your Java portal:
    Integrating your Web Front-ends into the SAP Enterprise Portal using the Application Integrator
    Please also go through the following blog, which describes how to pass the user and password in the URL iView:
    SP12/SP20: Setting URL Dynamically in URL iView
    Thanks
    Chittya Bej

  • Without source system creation we have transported the entire flow

    Hi all,
    We transported the entire data flow to Quality without having created the source system there.
    Is there now any option to map the source system to the transported flow?
    Regards
    Kiran Kumar

    Hi,
    You can reimport the same transport request (TR) again once the source system mapping has been done in BW.
    If you have left the TRs in the import queue of BW for later import, you can reimport them again.
    This will then go through correctly.
    To do the mappings in Quality you will need development authorization.
    Thanks
    Ajeet
