INavigationNode siblings/parent, or a way to compare INavigationNodes

Hi,
I'm trying to code an iView that takes advantage of the portal navigation API.
My code (pseudo) does, or rather should do, this:
1) Obtain NavigationNodes tree.
2) Iterate tree, and compare with current selected node
3) If we have a match, all parents above the found node must be marked.
4) Write out everything to response object (AbstractPortalComponent).
However, I have an issue.
Since INavigationNode does not implement a valid .equals() method, I have no way of knowing whether the current node equals the one I am comparing it with.
I can compare .getName() and get dozens of false positives, or compare the references and get no matches at all.
Please help me determine the correct way of comparing two INavigationNodes...
I marked my problem with a "what??" below...
Thanks for helping.
<my code>
  NavigationEventsHelperService helperService =
      (NavigationEventsHelperService) PortalRuntime.getRuntimeResources()
          .getService(NavigationEventsHelperService.KEY);
  INavigationNode activeNode = helperService.getCurrentNavNode(request);
  NavigationNodes path = (NavigationNodes) helperService.getRealInitialNodes(request);
  Iterator iter = path.iterator();
  while (iter.hasNext()) {
    iterateAndWriteOutChilds((INavigationNode) iter.next(), activeNode);
  }

private void iterateAndWriteOutChilds(INavigationNode node, INavigationNode activeNode) {
  if (node.getSomeObscurePropertyToCompareWith() == activeNode.getSomeObscurePropertyToCompareWith()) { // <------ what???
    // Mark node...
  }
  if (node.hasChilds()) {
    // ... iterate these as well
  }
}
</my code>
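The thread has no answer here, but one way to avoid both the false positives from .getName() and the failed reference comparisons is to compare nodes by their full path from the root, which the tree walk already gives you for free. The sketch below is a hypothetical illustration of that algorithm using a stand-in Node type, not the real INavigationNode API (which may differ); it marks the matched node and every ancestor above it, as steps 2-3 require.

```java
import java.util.*;

public class NavMarker {
    // Stand-in node type; the real INavigationNode API may differ, so this is a
    // hypothetical sketch of the algorithm, not portal code.
    static class Node {
        final String name;
        final List<Node> children = new ArrayList<>();
        boolean marked;
        Node(String name) { this.name = name; }
        Node add(String childName) { Node c = new Node(childName); children.add(c); return c; }
    }

    // Compare by full root-to-node path instead of by name alone: names repeat
    // across branches, but the path is unique within one tree. Marks the matching
    // node and every ancestor above it, and returns whether the target was found.
    static boolean markPathTo(Node node, String targetPath, String prefix) {
        String myPath = prefix + "/" + node.name;
        boolean found = myPath.equals(targetPath);
        for (Node child : node.children) {
            found |= markPathTo(child, targetPath, myPath);
        }
        if (found) node.marked = true;
        return found;
    }

    public static void main(String[] args) {
        Node root = new Node("root");
        Node a = root.add("a");
        Node b = root.add("b");
        Node leaf = a.add("x");
        b.add("x"); // same name on a different branch -- must NOT be marked
        markPathTo(root, "/root/a/x", "");
        System.out.println(root.marked + " " + a.marked + " " + b.marked + " " + leaf.marked);
        // prints: true true false true
    }
}
```

Because the walk builds the path as it descends, no getParent() call is needed; only the active node's own path has to be known up front.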
Message was edited by: Klavs Birk

Thanks

Similar Messages

  • Is there a way to compare two .itl files?

I recently uninstalled and reinstalled iTunes on my Vista machine. I then re-imported all my music, which resulted in a new .itl file. I then realized I could use my old .itl file and preserve some data, such as date added, play count, etc. I notice, however, that the new .itl file says that there are seven more items in the library than were in the old one (out of over 11,000 items). I am curious as to what these items are, since I didn't make any additions to the library between the date of the old .itl file and the new one (they both have the same date stamp, in fact, though the new one is several hours newer than the old one). Is there any way to compare these .itl files and see what the additional items are? Thanks.
    Howard

You have to do it on a field-by-field basis in a formula using the Previous() function.  However, you cannot "nest" calls to either Previous() or Next() - in other words, you can't do something like Previous(Previous(<some field>)).  You'll also need to look at OnFirstRecord and/or PreviousIsNull() to work with data on the first row of the report.
    -Dell

  • Different ways to compare the java objects

    Hi,
    I want to compare - or rather, search for - one object in a collection of thousands of objects.
    Right now I am using a method that compares the unique ids of the objects, but that is still too time-consuming an operation in Java.
    So I want a better way to compare the objects that gives me high performance.

    If you actually tried using a HashMap, and (if necessary) if you correctly implemented certain important methods in your class (this probably wasn't necessary but might have been), and if you used HashMap in a sane way and not an insane way, then you should be getting very fast search times.
    So I conclude that one of the following is true:
    1) you really didn't try using HashMap.
    2) you tried using HashMap, but did so incorrectly (although it's actually pretty simple...)
    3) your searches are pretty fast, and any sluggishness you're seeing is caused by something else.
    4) your code isn't actually all that sluggish, but you assume it must be for some reason.
    Message was edited by:
    paulcw
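The HashMap approach the reply describes can be sketched as follows; the Item class and its id field are hypothetical stand-ins for the poster's objects. The key point is that a map keyed by the unique id gives O(1) average lookups, and that equals() and hashCode(), if overridden, must be overridden together and agree on the same fields:

```java
import java.util.*;

public class FastLookup {
    // Hypothetical object with a stable unique id (stand-in for the poster's class).
    static final class Item {
        final String id;
        final String data;
        Item(String id, String data) { this.id = id; this.data = data; }
        // equals() and hashCode() must be based on the same fields, or
        // hash-based collections will silently misbehave.
        @Override public boolean equals(Object o) {
            return o instanceof Item && ((Item) o).id.equals(id);
        }
        @Override public int hashCode() { return id.hashCode(); }
    }

    public static void main(String[] args) {
        // Index the collection once by unique id...
        Map<String, Item> byId = new HashMap<>();
        for (int i = 0; i < 100_000; i++) {
            byId.put("id-" + i, new Item("id-" + i, "payload " + i));
        }
        // ...then each search is a single hash lookup instead of a linear scan.
        Item hit = byId.get("id-73112");
        System.out.println(hit != null ? hit.data : "not found");
        // prints: payload 73112
    }
}
```

Building the map costs one pass over the collection; every search after that avoids comparing against thousands of objects.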

  • A Way To Compare iPod to iTunes?

    We have over 20,000 songs in iTunes. When we update our iPod, there are 131 songs that do not load onto the iPod, and these are not missing files. Is there an easy way to compare what is on the iPod to what is in iTunes without having to go through every single song?

    Hello johnsont423,
    And welcome to Apple Discussions!
    You could use a third party utility such as [YamiPod|http://support.apple.com/kb/SP569] to export a list of songs from your iPod that you can use to compare the songs on your iPod to the songs in your iTunes library.
    Are you comparing the song count on your iPod (by going to the Menu -> Settings -> About) to the count that can be found at the bottom of your iTunes window when you have your Music library selected?
    B-rock

  • Explain plan cardinality is way off compared to actual rows being returned

    Database version 11.2.0.3
    We have a small but rapidly growing data warehouse which has OBIEE as its front-end reporting tool. Our DBA has set up an automatic stats-gathering method in OEM, and we can see that it runs and gathers stats on stale objects on a regular basis, so we know the statistics are up to date.
    In checking some slow queries I can see that the cardinality reported in explain plans is way off compared to what is actually being returned.
    For example, the actual number of rows returned is 8000, but the cardinality estimate is > 300,000.
    Now, as per an Oracle white paper ("The Oracle Optimizer Explain the Explain Plan"), having "multiple single column predicates on a single table" can affect cardinality estimates, and that is true of our query. Here is the query, including its WHERE clause:
    SQL> select D1.c1  as c1,
      2         D1.c2  as c2,
      3         D1.c3  as c3,
      4         D1.c4  as c4,
      5         D1.c5  as c5,
      6         D1.c6  as c6,
      7         D1.c7  as c7,
      8         D1.c8  as c8,
      9         D1.c9  as c9,
    10         D1.c10 as c10,
    11         D1.c11 as c11,
    12         D1.c12 as c12,
    13         D1.c13 as c13,
    14         D1.c14 as c14,
    15         D1.c15 as c15,
    16         D1.c16 as c16
    17    from (select D1.c4 as c1,
    18                 D1.c5 as c2,
    19                 D1.c3 as c3,
    20                 D1.c1 as c4,
    21                 D1.c6 as c5,
    22                 D1.c7 as c6,
    23                 D1.c2 as c7,
    24                 D1.c8 as c8,
    25                 D1.c9 as c9,
    26                 D1.c10 as c10,
    27                 D1.c9 as c11,
    28                 D1.c11 as c12,
    29                 D1.c2 as c13,
    30                 D1.c2 as c14,
    31                 D1.c12 as c15,
    32                 'XYZ' as c16,
    33                 ROW_NUMBER() OVER(PARTITION BY D1.c2, D1.c3, D1.c4, D1.c5, D1.c6, D1.c7, D1.c8, D1.c9, D1.c10, D1.c11, D1.c12 ORDER BY D1.c2 ASC, D1.c3 ASC, D1.c4 ASC, D1.c5 ASC, D1.c6 ASC, D1.c7 ASC, D1.c8 ASC, D1.c9 ASC, D1.c10 ASC, D1.c11 ASC, D1.c12 ASC) as c17
    34            from (select distinct D1.c1 as c1,
    35                                  D1.c2 as c2,
    36                                  'CHANNEL1' as c3,
    37                                  D1.c3 as c4,
    38                                  D1.c4 as c5,
    39                                  D1.c5 as c6,
    40                                  D1.c6 as c7,
    41                                  D1.c7 as c8,
    42                                  D1.c8 as c9,
    43                                  D1.c9 as c10,
    44                                  D1.c10 as c11,
    45                                  D1.c11 as c12
    46                    from (select sum(T610543.GLOBAL1_EXCHANGE_RATE * case
    47                                       when T610543.X_ZEB_SYNC_EBS_FLG = 'Y' then
    48                                        T610543.X_ZEB_AIA_U_REVN_AMT
    49                                       else
    50                                        0
    51                                     end) as c1,
    52                                 T536086.X_ZEBRA_TERRITORY as c2,
    53                                 T526821.LEVEL9_NAME as c3,
    54                                 T526821.LEVEL1_NAME as c4,
    55                                 T577698.PER_NAME_FSCL_YEAR as c5,
    56                                 T577698.FSCL_QTR as c6,
    57                                 T31796.X_ZEBRA_TERRITORY as c7,
    58                                 T31796.X_OU_NUM as c8,
    59                                 T664055.TERRITORY as c9,
    60                                 T536086.X_OU_NUM as c10,
    61                                 T526821.LEVEL4_NAME as c11
    62                            from W_INT_ORG_D        T613144 /* Dim_ZEB_W_INT_ORG_D_POS_Client_Attr_Direct */,
    63                                 W_ZEBRA_REGION_D   T664055 /* Dim_ZEB_W_ZEBRA_REGION_D_POS_Client_Direct */,
    64                                 W_DAY_D            T577698 /* Dim_ZEB_W_DAY_D_Order_Invoice_Date */,
    65                                 WC_PRODUCT_HIER_DH T526821 /* Dim_WC_PRODUCT_HIER_DH */,
    66                                 W_PRODUCT_D        T32069 /* Dim_W_PRODUCT_D */,
    67                                 W_ORG_D            T31796,
    68                                 W_ORG_D            T536086 /* Dim_ZEB_W_ORG_D_Reseller */,
    69                                 W_ORDERITEM_TMP_F      T610543 /* Fact_ZEB_W_ORDERITEM_F_Direct */
    70                           where (T610543.PR_OWNER_BU_WID = T613144.ROW_WID and
    71                                 T577698.ROW_WID =
    72                                 T610543.X_ZEB_AIA_TRXN_DT_WID and
    73                                 T32069.ROW_WID = T526821.PROD_WID and
    74                                 T32069.ROW_WID = T610543.ROOT_LN_PROD_WID and
    75                                 T536086.ROW_WID = T610543.ACCNT_WID and
    76                                 T31796.DATASOURCE_NUM_ID =
    77                                 T610543.DATASOURCE_NUM_ID and
    78                                 T31796.INTEGRATION_ID = T610543.VIS_PR_BU_ID and
    79                                 T536086.DELETE_FLG = 'N' and
    80                                 T610543.X_ZEB_DELETE_FLG = 'N' and
    81                                 T613144.X_ZEB_REGION_WID = T664055.ROW_WID and
    82                                 T577698.FSCL_DAY_OF_YEAR < 97 and
    83                                 '2006' < T577698.PER_NAME_FSCL_YEAR and
    84                                 T536086.X_OU_NUM <> '11073' and
    85                                 T536086.X_ZEBRA_TERRITORY !=
    86                                 'XX23' and
    87                                 T536086.X_OU_NUM != '56791647728774' and
    88                                 T536086.X_OU_NUM != '245395890' and
    89                                 T536086.X_ZEBRA_TERRITORY !=
    90                                 'STRATEGIC ACCTS 2' and
    91                                 T526821.LEVEL2_NAME != 'Charges' and
    92                                 T526821.LEVEL9_NAME != 'Unspecified' and
    93                                 T536086.X_ZEBRA_TERRITORY !=
    94                                 'XX1' and T536086.X_ZEBRA_TERRITORY !=
    95                                 'XX2' and T536086.X_ZEBRA_TERRITORY !=
    96                                 'XX3' and T536086.X_ZEBRA_TERRITORY !=
    97                                 'XX4' and
    98                                 (T536086.X_ZEBRA_TERRITORY in
    99                                 ( ... In List of 22 values )) and
    125                                 T32069.X_ZEB_EBS_PRODUCT_TYPE is null)
    126                           group by T31796.X_ZEBRA_TERRITORY,
    127                                    T31796.X_OU_NUM,
    128                                    T526821.LEVEL1_NAME,
    129                                    T526821.LEVEL4_NAME,
    130                                    T526821.LEVEL9_NAME,
    131                                    T536086.X_OU_NUM,
    132                                    T536086.X_ZEBRA_TERRITORY,
    133                                    T577698.FSCL_QTR,
    134                                    T577698.PER_NAME_FSCL_YEAR,
    135                                    T664055.TERRITORY) D1) D1) D1
    136   where (D1.c17 = 1)
    137  /
    Elapsed: 00:00:35.19
    Execution Plan
    Plan hash value: 3285002974
    | Id  | Operation                                         | Name               | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
    |   0 | SELECT STATEMENT                                  |                    |  2145M|  2123G|       |   612K  (1)| 03:03:47 |       |       |        |      |            |
    |   1 |  PX COORDINATOR                                   |                    |       |       |       |            |          |       |       |        |      |            |
    |   2 |   PX SEND QC (RANDOM)                             | :TQ10012           |  2145M|  2123G|       |   612K  (1)| 03:03:47 |       |       |  Q1,12 | P->S | QC (RAND)  |
    |*  3 |    VIEW                                           |                    |  2145M|  2123G|       |   612K  (1)| 03:03:47 |       |       |  Q1,12 | PCWP |            |
    |*  4 |     WINDOW NOSORT                                 |                    |  2145M|   421G|       |   612K  (1)| 03:03:47 |       |       |  Q1,12 | PCWP |            |
    |   5 |      SORT GROUP BY                                |                    |  2145M|   421G|   448G|   612K  (1)| 03:03:47 |       |       |  Q1,12 | PCWP |            |
    |   6 |       PX RECEIVE                                  |                    |  2145M|   421G|       |  1740  (11)| 00:00:32 |       |       |  Q1,12 | PCWP |            |
    |   7 |        PX SEND HASH                               | :TQ10011           |  2145M|   421G|       |  1740  (11)| 00:00:32 |       |       |  Q1,11 | P->P | HASH       |
    |*  8 |         HASH JOIN BUFFERED                        |                    |  2145M|   421G|       |  1740  (11)| 00:00:32 |       |       |  Q1,11 | PCWP |            |
    |   9 |          PX RECEIVE                               |                    |   268K|  7864K|       |    93   (2)| 00:00:02 |       |       |  Q1,11 | PCWP |            |
    |  10 |           PX SEND HASH                            | :TQ10009           |   268K|  7864K|       |    93   (2)| 00:00:02 |       |       |  Q1,09 | P->P | HASH       |
    |  11 |            PX BLOCK ITERATOR                      |                    |   268K|  7864K|       |    93   (2)| 00:00:02 |       |       |  Q1,09 | PCWC |            |
    |  12 |             TABLE ACCESS FULL                     | W_ORG_D            |   268K|  7864K|       |    93   (2)| 00:00:02 |       |       |  Q1,09 | PCWP |            |
    |  13 |          PX RECEIVE                               |                    |   345K|    59M|       |  1491   (2)| 00:00:27 |       |       |  Q1,11 | PCWP |            |
    |  14 |           PX SEND HASH                            | :TQ10010           |   345K|    59M|       |  1491   (2)| 00:00:27 |       |       |  Q1,10 | P->P | HASH       |
    |* 15 |            HASH JOIN BUFFERED                     |                    |   345K|    59M|       |  1491   (2)| 00:00:27 |       |       |  Q1,10 | PCWP |            |
    |  16 |             PX RECEIVE                            |                    |  1321 | 30383 |       |     2   (0)| 00:00:01 |       |       |  Q1,10 | PCWP |            |
    |  17 |              PX SEND BROADCAST                    | :TQ10006           |  1321 | 30383 |       |     2   (0)| 00:00:01 |       |       |  Q1,06 | P->P | BROADCAST  |
    |  18 |               PX BLOCK ITERATOR                   |                    |  1321 | 30383 |       |     2   (0)| 00:00:01 |       |       |  Q1,06 | PCWC |            |
    |  19 |                TABLE ACCESS FULL                  | W_ZEBRA_REGION_D   |  1321 | 30383 |       |     2   (0)| 00:00:01 |       |       |  Q1,06 | PCWP |            |
    |* 20 |             HASH JOIN                             |                    |   345K|    52M|       |  1488   (2)| 00:00:27 |       |       |  Q1,10 | PCWP |            |
    |  21 |              JOIN FILTER CREATE                   | :BF0000            |  9740 |   114K|       |     2   (0)| 00:00:01 |       |       |  Q1,10 | PCWP |            |
    |  22 |               PX RECEIVE                          |                    |  9740 |   114K|       |     2   (0)| 00:00:01 |       |       |  Q1,10 | PCWP |            |
    |  23 |                PX SEND HASH                       | :TQ10007           |  9740 |   114K|       |     2   (0)| 00:00:01 |       |       |  Q1,07 | P->P | HASH       |
    |  24 |                 PX BLOCK ITERATOR                 |                    |  9740 |   114K|       |     2   (0)| 00:00:01 |       |       |  Q1,07 | PCWC |            |
    |  25 |                  TABLE ACCESS FULL                | W_INT_ORG_D        |  9740 |   114K|       |     2   (0)| 00:00:01 |       |       |  Q1,07 | PCWP |            |
    |  26 |              PX RECEIVE                           |                    |   344K|    47M|       |  1486   (2)| 00:00:27 |       |       |  Q1,10 | PCWP |            |
    |  27 |               PX SEND HASH                        | :TQ10008           |   344K|    47M|       |  1486   (2)| 00:00:27 |       |       |  Q1,08 | P->P | HASH       |
    |  28 |                JOIN FILTER USE                    | :BF0000            |   344K|    47M|       |  1486   (2)| 00:00:27 |       |       |  Q1,08 | PCWP |            |
    |* 29 |                 HASH JOIN BUFFERED                |                    |   344K|    47M|       |  1486   (2)| 00:00:27 |       |       |  Q1,08 | PCWP |            |
    |  30 |                  JOIN FILTER CREATE               | :BF0001            | 35290 |   964K|       |    93   (2)| 00:00:02 |       |       |  Q1,08 | PCWP |            |
    |  31 |                   PX RECEIVE                      |                    | 35290 |   964K|       |    93   (2)| 00:00:02 |       |       |  Q1,08 | PCWP |            |
    |  32 |                    PX SEND HASH                   | :TQ10004           | 35290 |   964K|       |    93   (2)| 00:00:02 |       |       |  Q1,04 | P->P | HASH       |
    |  33 |                     PX BLOCK ITERATOR             |                    | 35290 |   964K|       |    93   (2)| 00:00:02 |       |       |  Q1,04 | PCWC |            |
    |* 34 |                      TABLE ACCESS FULL            | W_ORG_D            | 35290 |   964K|       |    93   (2)| 00:00:02 |       |       |  Q1,04 | PCWP |            |
    |  35 |                  PX RECEIVE                       |                    |   344K|    38M|       |  1392   (2)| 00:00:26 |       |       |  Q1,08 | PCWP |            |
    |  36 |                   PX SEND HASH                    | :TQ10005           |   344K|    38M|       |  1392   (2)| 00:00:26 |       |       |  Q1,05 | P->P | HASH       |
    |  37 |                    JOIN FILTER USE                | :BF0001            |   344K|    38M|       |  1392   (2)| 00:00:26 |       |       |  Q1,05 | PCWP |            |
    |* 38 |                     HASH JOIN BUFFERED            |                    |   344K|    38M|       |  1392   (2)| 00:00:26 |       |       |  Q1,05 | PCWP |            |
    |  39 |                      PX RECEIVE                   |                    | 93791 |  4671K|       |     7   (0)| 00:00:01 |       |       |  Q1,05 | PCWP |            |
    |  40 |                       PX SEND HASH                | :TQ10001           | 93791 |  4671K|       |     7   (0)| 00:00:01 |       |       |  Q1,01 | P->P | HASH       |
    |  41 |                        PX BLOCK ITERATOR          |                    | 93791 |  4671K|       |     7   (0)| 00:00:01 |       |       |  Q1,01 | PCWC |            |
    |* 42 |                         TABLE ACCESS FULL         | WC_PRODUCT_HIER_DH | 93791 |  4671K|       |     7   (0)| 00:00:01 |       |       |  Q1,01 | PCWP |            |
    |* 43 |                      HASH JOIN                    |                    |   894K|    57M|       |  1384   (2)| 00:00:25 |       |       |  Q1,05 | PCWP |            |
    |  44 |                       JOIN FILTER CREATE          | :BF0002            |   243K|  1904K|       |    48   (3)| 00:00:01 |       |       |  Q1,05 | PCWP |            |
    |  45 |                        PX RECEIVE                 |                    |   243K|  1904K|       |    48   (3)| 00:00:01 |       |       |  Q1,05 | PCWP |            |
    |  46 |                         PX SEND HASH              | :TQ10002           |   243K|  1904K|       |    48   (3)| 00:00:01 |       |       |  Q1,02 | P->P | HASH       |
    |  47 |                          PX BLOCK ITERATOR        |                    |   243K|  1904K|       |    48   (3)| 00:00:01 |       |       |  Q1,02 | PCWC |            |
    |* 48 |                           TABLE ACCESS FULL       | W_PRODUCT_D        |   243K|  1904K|       |    48   (3)| 00:00:01 |       |       |  Q1,02 | PCWP |            |
    |  49 |                       PX RECEIVE                  |                    |   894K|    50M|       |  1336   (2)| 00:00:25 |       |       |  Q1,05 | PCWP |            |
    |  50 |                        PX SEND HASH               | :TQ10003           |   894K|    50M|       |  1336   (2)| 00:00:25 |       |       |  Q1,03 | P->P | HASH       |
    |  51 |                         JOIN FILTER USE           | :BF0002            |   894K|    50M|       |  1336   (2)| 00:00:25 |       |       |  Q1,03 | PCWP |            |
    |* 52 |                          HASH JOIN                |                    |   894K|    50M|       |  1336   (2)| 00:00:25 |       |       |  Q1,03 | PCWP |            |
    |  53 |                           PX RECEIVE              |                    |   292 |  3504 |       |   136   (0)| 00:00:03 |       |       |  Q1,03 | PCWP |            |
    |  54 |                            PX SEND BROADCAST LOCAL| :TQ10000           |   292 |  3504 |       |   136   (0)| 00:00:03 |       |       |  Q1,00 | P->P | BCST LOCAL |
    |  55 |                             PX BLOCK ITERATOR     |                    |   292 |  3504 |       |   136   (0)| 00:00:03 |       |       |  Q1,00 | PCWC |            |
    |* 56 |                              TABLE ACCESS FULL    | W_DAY_D            |   292 |  3504 |       |   136   (0)| 00:00:03 |       |       |  Q1,00 | PCWP |            |
    |  57 |                           PX BLOCK ITERATOR       |                    |  4801K|   215M|       |  1199   (2)| 00:00:22 |     1 |    11 |  Q1,03 | PCWC |            |
    |* 58 |                            TABLE ACCESS FULL      | W_ORDERITEM_TMP_F  |  4801K|   215M|       |  1199   (2)| 00:00:22 |     1 |    44 |  Q1,03 | PCWP |            |
    Note
       - dynamic sampling used for this statement (level=5)
    Statistics
            498  recursive calls
           2046  db block gets
        1193630  consistent gets
          74398  physical reads
              0  redo size
         655170  bytes sent via SQL*Net to client
          11761  bytes received via SQL*Net from client
            541  SQL*Net roundtrips to/from client
             64  sorts (memory)
              0  sorts (disk)
           8090  rows processed
    SQL>
    So my question is: if the cardinality estimates are way off, is that an indicator that the explain plans being generated are sub-optimal?
    Can you provide me with some tips, or links to blog posts or books, on how to approach tuning queries where the cardinality estimates are not good?
    Edited by: qqq on Apr 7, 2013 2:27 PM

    As already asked in your other thread:
    Please see the FAQ for how to post a tuning request and the information that you need to provide.
    Part of that information is:
    1. DDL for the table and indexes
    2. The query being used
    3. row counts for the table and for the predicates used in the query
    4. info about stats. You did update the table and index stats didn't you?
    5. The 'actual' execution plans.
    An explain plan just shows what Oracle 'thinks' it is going to do. The actual plans show what Oracle actually 'did' do. Just because Oracle expected to save doesn't mean the savings were actually achieved.
    When you post the plans, use code tags on the line before and on the line after to preserve formatting.
    Your partial code is virtually unusable because of the missing conditions in the predicates. You need to use '!=' for 'not equals' if that's what those missing conditions are.
    Please edit your post to use code tags, add the missing conditions and provide the other information needed for a tuning request.

  • Best way to compare data of 2 tables present on 2 different servers

    Hi,
    We are doing a data migration and I would like to compare data between 2 tables which are on 2 different servers. I know that to find the differences I can use MINUS or a full outer join after creating a database link.
    But my problem is the volume of the data. The tables under consideration have approximately 40-60 columns, and the number of rows in each table is around 60-70 million. Also, both tables are on 2 different servers.
    I would like to know
    1] What will be the best way to compare the data and print the differences from a performance perspective? I know that if I go for DB links it will definitely impact performance, as the tables are across 2 different servers.
    2] Is it advisable to use SQL and PL/SQL for this kind of scenario, or to dump the data to flat files and use C or C++ code to find the differences?
    Regards,
    Amol

    Check this at asktom.oracle.com: search for "Marco Stefanetti" and follow the few posts between Marco and Tom. As for your tables being on separate servers, you could consider dumping the data to file and using an external table, or using CTAS (create table as select) to get both tables locally.
    http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:2151582681236
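As a hedged illustration of option 2] (dumping to flat files and diffing outside the database): if both extracts are sorted by primary key, a single merge pass finds the rows unique to either side without holding 60-70 million rows in memory. This is a sketch only; keys are plain strings here for brevity, and a real run would stream from the dump files rather than from in-memory lists.

```java
import java.util.*;

public class MergeDiff {
    // One pass over two key-sorted streams: advance whichever side has the
    // smaller key, recording it as present only on that side; advance both
    // on a match. Memory use is O(1) plus the difference lists.
    static void diff(Iterator<String> a, Iterator<String> b,
                     List<String> onlyA, List<String> onlyB) {
        String x = a.hasNext() ? a.next() : null;
        String y = b.hasNext() ? b.next() : null;
        while (x != null || y != null) {
            int cmp = (x == null) ? 1 : (y == null) ? -1 : x.compareTo(y);
            if (cmp < 0)      { onlyA.add(x); x = a.hasNext() ? a.next() : null; }
            else if (cmp > 0) { onlyB.add(y); y = b.hasNext() ? b.next() : null; }
            else {
                x = a.hasNext() ? a.next() : null;
                y = b.hasNext() ? b.next() : null;
            }
        }
    }

    public static void main(String[] args) {
        List<String> source = Arrays.asList("k1", "k2", "k4");
        List<String> target = Arrays.asList("k1", "k3", "k4");
        List<String> onlyA = new ArrayList<>(), onlyB = new ArrayList<>();
        diff(source.iterator(), target.iterator(), onlyA, onlyB);
        System.out.println("only in source: " + onlyA + ", only in target: " + onlyB);
        // prints: only in source: [k2], only in target: [k3]
    }
}
```

For rows rather than bare keys, the same pass can compare the non-key columns on a key match (e.g. via a per-row hash) to also catch rows that exist on both sides but differ.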

  • What's the best way to compare projects?

    Is there a way of comparing two versions of a RH10 project and getting a list of any differences between them?
    I want to be able to provide our translator with an accurate list of what's changed in each new version of help.

    Hi there
    Since this is in house, there is one way you could set up that may help. It depends on how familiar you are with See Also Keywords and See Also Controls.
    What you could do is create a See Also keyword. Perhaps name it "Attention". Then as you edit or add new content, you would assign the affected topic(s) to the keyword. You would also have a topic containing a See Also control that is linked to the keyword. In order to "see" what needs attention, whoever is working with the content would just open that topic and click the See Also control; all the affected topics would then be listed. Essentially, you are just keeping track as you go along.
    Note that this isn't automated nor is it foolproof. It would require being diligent to assign topics to the keyword. But once everyone has become accustomed to it, it should be easier than running a comparison application and trawling through code.
    Cheers... Rick

  • Is there a way to compare the contents of a library by comparing itl files or some other files?

    Is there a way to compare the contents of a library by comparing itl files or some other files?
    I need to compare the contents of my current library to one that existed 2 months ago but ALL I have is the itl file from 2 months ago.  Can this be done?

    You are correct, many people have noticed over the years that Sound Check does not work as advertised.
    A more effective approach is to treat the files with a 3rd party volume normalization program. Although I have not personally used it, many users on this Forum have reported good results from an inexpensive program called iVolume.

  • Is there any way to compare between last two activation of a Transformation

    Hi,
    I activated a transformation two days ago. Later on I found that the same transformation had been activated by someone else; I knew that from the transport organizer. I want to know what changes were made after my activation. I have gone through each and every step, i.e. the start routine, field routines, everything, and didn't find any commented changes either. Is there any way to compare the versions between the last two activations of that transformation, or of any BW object?
    Is any version management option available?
    Thanks in advance
    Snehasish

    You can get the information about who made the change, and at what time, from the properties of the transformation:
    Menu --> Transformation --> Properties --> shows who made the change and when.
    If you want to compare the versions, you can follow the steps below:
    From the menu bar of the transformation --> Extras --> Display Generated Program --> Utilities --> Versions --> Version Management
    This will display the changes made, if any.
    If there are no changes, it will display only one version.

  • Best way to compare the effect of an adjustment

    In iPhoto, if I change any settings, such as exposure, I can press the shift key to quickly toggle back and forth between the 'before' and 'after' version. I'm trying to figure out if there is a way to do the same thing in Aperture.
    I know that I can press 'M' to flip back to the master image, but this has two problems: 1) If I've cropped the original image, then switching to the master doesn't keep me at the same place where I was zoomed into, and 2) I don't want to compare to the master, I just want to compare to the version just before I made the edit. So if I've made 4 adjustments, and then I make a 5th adjustment, I want to compare to the version with 4 adjustments, not the master (with no adjustments).
    I also know that I can make a duplicate version of any photo and put them side by side. This is handy for some purposes, but this has problems too: 1) I don't want to have to create a new version every time I make any adjustment, just so that I can compare before and after, 2) it's several steps to arrange the two images side by side at the same zoom on the same section of the photo, which is very annoying to have to do for each adjustment, and 3) I'm working on a 15" MacBook Pro, so I really don't get to see much of my image when I have two versions side-by-side.
    I've tried just using undo and redo, but that is so slow for large images that it's not useful for comparison purposes. I want to instantly flip back and forth between the two versions.
    I would just suck it up and get over it IF iPhoto didn't have this exact feature - just press shift to compare to the previous version.

    When I am adjusting an image using brushes I quite often compare the effect of the brush by toggling the "invert" action (from the cog wheel in the Brush HUD). This will switch between the image with the effect brushed in and the image with the effect applied to the parts of the image that have not been brushed. It's not quite the same as in iPhoto, but it helps to judge the effect of a brushed-in adjustment quickly.
    Regards
    Léonie

  • Fast way to compare strings in a log file

    I have a log file (containing more than 100,000 lines) of system statistics for a time period. For each time point I log 12 sets of data; each set starts with a tag /**START**/ and ends with a tag /**END**/. I would like to know the best and fastest way to read the log file and prepare records for each set for a specific time period.
    Currently I am reading from the file using BufferedReader.readLine(), comparing the tags, and distributing the data sets accordingly.
    any suggestions...?

    Yes, you are right: I am doing the string parsing after reading each line, and maybe that is what takes so much time. Since we provide an option to customize the chart list, I have to process all the lines and distribute the data accordingly before preparing the charts. Some data sets (between START and END) contain up to 256 lines, some a single line. Here is a sample format:
    @@LOG@@[email protected]_040507143739@@
    **/START1/**
    [3, 15, 1, 20, 0]
    [6, 2181452, 292, 7477, 7475]
    [11, 14, 2, 8, 0]
    [13, 14, 1, 12, 0]
    [14, 10568857, 320, 33068, 30223]
    [20, 0, 0, 3, 0]
    [54, 2, 2, 1, 0]
    [130, 17, 0, 3275, 0]
    [132, 3457, 0, 7303, 0]
    [133, 0, 0, 1, 0]
    [134, 400, 400, 1, 1]
    [139, 5, 5, 1, 0]
    [153, 0, 0, 2, 0]
    [174, 7, 1, 12, 0]
    [175, 0, 0, 7, 0]
    [176, 5, 0, 3280, 3274]
    [177, 0, 0, 78, 0]
    [184, 1047, 2, 558, 6]
    [187, 1062, 0, 3177, 0]
    [188, 111, 0, 581, 0]
    [189, 1, 0, 12, 0]
    [190, 0, 0, 888, 438]
    [212, 0, 0, 52, 0]
    [213, 0, 0, 24, 0]
    [242, 0, 0, 1, 0]
    [243, 2072050, 25269, 82, 71]
    [295, 2171328, 2918, 744, 744]
    [296, 2165224, 5820, 372, 372]
    [338, 4, 0, 15034, 0]
    [340, 1, 0, 140, 0]
    [342, 993643, 66, 14979, 0]
    [343, 0, 0, 27, 0]
    [346, 26, 0, 218, 0]
    [351, 2097798, 2984, 703, 703]
    end
    **/START2/**
    end
    **/START3/**
    [0.0,0.0,0.0]
    end
    **/START4/**
    end
    **/START5/**
    0=0,99
    1=0,11
    2=0,33280
    3=0,60
    4=0,328
    6=0,14913
    7=0,383150
    8=0,4505
    9=0,848635
    11=0,12152
    12=0,12167
    13=0,60698377887
    14=0,60698377887
    15=0,3429960
    16=0,34000240
    17=0,3289
    18=0,3290
    19=0,26217
    20=0,12276884
    21=0,14439988
    25=0,123226
    26=0,19
    27=0,123209
    40=0,169917
    41=0,678718
    42=0,9063
    43=0,235714
    44=0,33269
    45=0,89
    46=0,1525
    47=0,1384
    48=0,280
    49=0,1295
    50=0,122
    51=0,966
    53=0,3
    54=0,521
    55=0,3
    56=0,640
    57=0,640
    58=0,1
    71=0,841
    72=0,646
    73=0,1066
    75=0,28283
    76=0,28
    78=0,2226
    79=0,28
    86=0,54801
    87=0,54801
    90=0,962
    92=0,17308
    95=0,5177
    97=0,52
    98=0,24
    102=0,247531
    103=0,72296
    105=0,325
    107=0,51452
    110=0,237986
    114=0,120630
    115=0,27158624
    117=0,878432
    119=0,3280
    120=0,56544
    121=0,340
    163=0,33269
    164=0,293974
    165=0,56
    166=0,3766
    167=0,4
    171=0,8
    173=0,2604
    174=0,60
    175=0,33094
    176=0,83
    177=0,27
    178=0,100
    183=0,30506
    184=0,191
    188=0,2804828
    189=0,55236
    190=0,476465
    191=0,116
    192=0,59771
    193=0,81252
    194=0,88210
    195=0,10
    196=0,2
    203=0,152412
    204=0,70002
    207=0,378
    222=0,706016
    223=0,488746
    224=0,4
    227=0,29438
    230=0,1177
    231=0,1757
    232=0,39397
    233=0,1361
    234=0,19
    235=0,75451
    236=0,1925701
    237=0,1141694
    238=0,14802
    242=0,23470
    244=0,619811
    end
    **/START6/**
    teln
    [Filesystem, 1024-blocks, Used, Available, Capacity, Mounted, on]
    [dev/hda2, 18112140, 12153812, 5038276, 71%, /]
    [dev/hda1, 102454, 17303, 79861, 18%, /boot]
    [none, 256928, 0, 256928, 0%, /dev/shm]
    [dev/hdb1, 19228276, 1426104, 16825424, 8%, /hdb1]
    [dev/hdb2, 19235868, 6691572, 11567168, 37%, /hdb2]
    [dev/hdc1, 12822880, 3236, 12168276, 1%, /hdc1]
    [dev/hdc2, 12822912, 1204140, 10967404, 10%, /hdc2]
    [dev/hdc3, 12823416, 190408, 11981616, 2%, /hdc3]
    end
    **/START7/**
    [0.0,0.0,0.0,0.0]
    end
    **/START8/**
    [11,9]
    end
    **/START9/**
    [99.94,94.17]
    end
    **/START10/**
    [99.14,040507143739]
    end
    **/START11/**
    [1.0]
    end
    **/START12/**
    procs memory swap io system cpu
    r b w swpd free buff cache si so bi bo in cs us sy id
    0 0 0 8 49112 24376 364320 0 0 19 28 540 48 1 0 99
    0 0 0 8 49112 24376 364320 0 0 0 0 526 22 1 0 99
    end
    -- Next Time Point
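    The line-by-line tag comparison described above can be sketched roughly as follows. This is a minimal illustration in Python rather than the poster's Java, and it assumes the exact markers shown in the sample: a `**/STARTn/**` line opens section n and a bare `end` line closes it (the marker strings are taken from the sample, everything else is illustrative):

```python
def parse_sections(lines):
    """Split log lines into numbered sections delimited by **/STARTn/** ... end."""
    sections = {}
    current = None
    for raw in lines:
        line = raw.strip()
        if line.startswith("**/START") and line.endswith("/**"):
            # e.g. "**/START5/**" -> section number 5
            current = int(line[len("**/START"):-len("/**")])
            sections[current] = []
        elif line == "end":
            current = None               # close the open section
        elif current is not None:
            sections[current].append(line)  # data line inside a section
    return sections
```

    In the real program the `lines` argument would be the iterator over `BufferedReader.readLine()` results (or `open(path)` in Python); doing a single pass and bucketing lines as you go avoids re-scanning the file per chart.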

  • Is there any easy way to compare LIKE Addresses from one table, which contains 3rd party data, to another table, our database source

    We have a 3rd party that supplies us data, and we need to compare the addressing in the 3rd-party data against the addressing in our source database. I'd like to make it somewhat flexible, meaning I'd like to use a LIKE comparison rather than comparing the exact address values. (I have noticed that the 3rd-party addresses sometimes have a leading <space> at the beginning, which is why I'd prefer to use LIKE.)
    Is there an easy way to do this? Or does this dictate using a CURSOR, processing through the CURSOR of 3rd-party data and plugging the address into a LIKE as dynamic SQL?
    Please let me know your thoughts on this; I appreciate your review and am hopeful for a reply.

    Yes, it's possible, and there are a variety of options, but it may not be for the faint of heart.
    The last time I did it, I ended up taking several passes at the data.
    1st pass was a straight-up comparison with no modifications or functions.
    2nd pass was the same but with all special characters removed.
    3rd pass involved splitting off the numeric portion of the address, comparing just the street numbers, and using a double-metaphone function (kind of like Soundex on steroids) to compare the street names.
    Jason Long
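    The multi-pass idea above can be sketched like this. This is an illustrative Python version, not Jason's actual SQL; the function names are made up, and the phonetic comparison (double metaphone) is only marked with a comment because it needs a third-party library:

```python
import re

def normalize(addr):
    """Pass 2 preparation: lowercase, trim, drop special characters."""
    return re.sub(r"[^a-z0-9 ]", "", addr.strip().lower())

def split_number(addr):
    """Pass 3 preparation: separate the leading street number from the street name."""
    m = re.match(r"(\d+)\s+(.*)", addr)
    return (m.group(1), m.group(2)) if m else ("", addr)

def addresses_match(a, b):
    if a == b:                          # pass 1: straight-up comparison
        return True
    na, nb = normalize(a), normalize(b)
    if na == nb:                        # pass 2: special characters removed
        return True
    # pass 3: same street number, then compare street names
    # (a phonetic function such as double metaphone would go here)
    num_a, street_a = split_number(na)
    num_b, street_b = split_number(nb)
    return num_a == num_b and street_a == street_b
```

    Note how pass 2 already absorbs the leading-space problem the OP mentions, without resorting to LIKE or dynamic SQL.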

  • CF9 - Best way to compare LDAP data in .ldf file against database?

    Hello, everyone.
    I am tasked with creating a script that will take LDAP information from a .ldf file and compare each entry to a database, adding names that meet certain criteria.
    What is the best way to do this? I've never worked with LDFs, before.
    Thank you,
    ^_^

    JDBC concerns connecting to the database. I have only worked with Oracle and I am not sure whether this operator is Oracle-specific or SQL standard. Basically it works like this:
    The following statement combines results with the MINUS operator, which returns only rows returned by the first query but not by the second:
    SELECT product_id FROM inventories
    MINUS
    SELECT product_id FROM order_items;
    taken from : http://www.cs.ncl.ac.uk/teaching/facilities/swdoc/oracle9i/server.920/a96540/queries5a.htm
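    For a quick sense of what MINUS does, the same logic in plain Python is just set difference (toy values, not from the thread):

```python
# Toy "tables" represented as sets of product_id values.
inventories = {101, 102, 103, 104}
order_items = {102, 104}

# Rows returned by the first query but not the second -- the MINUS result.
in_inventory_never_ordered = inventories - order_items
print(sorted(in_inventory_never_ordered))  # → [101, 103]
```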

  • Is there an easy way to compare two null columns?

    I need to compare a significant number of columns between records in two tables for a data conversion. I really need a comparison that will return true if: 1) both columns are null; or 2) both columns are not null and equal. I want it to return false if: 3) one column is null and the other is not; or 4) both columns are not null and are not equal. I am trying to find records which are not exact matches.
    I found documentation at oracle-base.com about the SYS_OP_MAP_NONNULL function that would do what I want, but I don't want to use this since it's an undocumented feature and my code will be in production for a period of time.
    I would rather not have to use a construct like this for each and every column I'm comparing:
          (
               a.col is null
               and b.col is null
          )
          or (
               a.col = b.col
          )
     Also, I know about the NVL function, but I'm comparing columns which are entered by users, and I'm not comfortable substituting any values for null because those values might actually exist in the data.

    Performance wasn't the issue, but they will be about the same anyway. The issue was avoiding the messy syntax needed to do the job.
    Which of these looks easier to you if you had to understand and maintain the code or add new columns?
    >
    I would rather not have to use a construct like this for each and every column I'm comparing:
          (
               a.col is null
               and b.col is null
          )
          or (
               a.col = b.col
          )
    >
    or
          1 = DECODE(a.col, b.col, 1)
    So if, like the OP, you had to 'compare a significant number of columns', which syntax would you use?
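    The DECODE trick works because DECODE treats two NULLs as a match, which is exactly the null-safe equality the OP wants. The same behavior, written out explicitly (a Python sketch with None standing in for SQL NULL; the function name is illustrative):

```python
def null_safe_eq(a, b):
    """True if both values are NULL (None), or both are non-NULL and equal.
    False if exactly one is NULL, or both are non-NULL and unequal."""
    if a is None and b is None:
        return True
    if a is None or b is None:
        return False
    return a == b
```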

  • Best way to compare column values of 2 different records

    Hi,
    In my PL/SQL cursor, I want to store the column values of the first record and compare them with the column values of the next record to check whether they have duplicate column values. Should I store the column values of the first record in an array and fetch the next record through the cursor? Can you suggest best practices? Thanks.
    DECLARE
      CURSOR cs IS
        SELECT A,
               B
        FROM TABLE;
    BEGIN
      FOR rec IN cs LOOP
        -- this is where I would like to store the column values of the first record and
        -- then compare them with the column values of the 2nd record.
      END LOOP;
    END;

    The best practice I can recommend would be to not use PL/SQL at all. Cursor FOR LOOPs are one of the slowest forms of processing.
    You should investigate analytical functions such as LAG and LEAD at http://tahiti.oracle.com
    It is always helpful to provide the following:
    1. Oracle version (SELECT * FROM V$VERSION)
    2. Sample data in the form of CREATE / INSERT statements.
    3. Expected output
    4. Explanation of expected output (A.K.A. "business logic")
    5. Use code tags for #2 and #3. See the FAQ (link on the top right side) for details.
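    The row-to-previous-row comparison that LAG expresses in one SQL pass looks roughly like this. A Python sketch of the logic only (the function name and sample tuples are illustrative, not from the thread):

```python
def consecutive_duplicates(rows):
    """Return rows whose column values equal those of the previous row --
    the comparison LAG() OVER (ORDER BY ...) lets you do without a loop-per-row fetch."""
    dupes = []
    prev = None
    for row in rows:
        if prev is not None and row == prev:
            dupes.append(row)
        prev = row                    # current row becomes the "lagged" row
    return dupes
```

    The point of the reply stands either way: pushing this comparison into a single SQL statement with LAG/LEAD is normally much faster than fetching rows one at a time in a cursor FOR loop.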
