Best way to perform the same task with arrays of different Objects

Hello all.
This isn't really an XPath question; it's a much lower-level Java Object question, but it involves XPath. I hope that doesn't distract from the question or confuse anything.
I have 4-5 different types of Objects (call them A, B, C, etc.) that each contain an array of other Objects (call them a[], b[], c[], etc.).
a, b, c, etc., each have an XPath object and an XPathExpression. When I create each XPath object I assign it a NamespaceContext object which contains the namespaces needed for that XPath.
When I create, for example, an A object, I pass its constructor an array of a and an array of Namespaces. With each a object I need to:
1. create a NamespaceContext object
2. go through all the Namespace objects and, if a Namespace's route matches
(if Namespace.route == a.getRoute() ),
3. add that Namespace to the NamespaceContext Object,
4. assign the NamespaceContext to the XPath object, and finally
5. create the a object, passing it the XPath object
My problem / question is: I also have to do the same thing with B and b[], C and c[], etc. It's not that long a process, not that much code, but all the same, I was wondering what the best way would be to write this code once and have it work for all the different types of Objects.
In other words, I'd like to write a method, call it assignNamespaces(), that accepts an array of Objects (a[], b[], c[], etc.) and an array of Namespaces, creates the XPath object for each a, b, c, etc., and creates and returns the array of a[], b[], c[], etc., passing each one its XPath object as a parameter.
That way when I create, for example, an A object, I call:

    class A {
        ObjectsWithXpath[] objectsWithXPath;
        ...
        this.objectsWithXPath = assignNamespaces(objectsWithXPath, namespaces);
    }
Should the assignNamespaces() method simply use Class.forName() to see what type of Object it is and cast accordingly, or is there some other, more elegant way of doing this?
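To make the idea concrete, here is roughly the shape I'm imagining. This is only a sketch with invented names: Namespace is my own class, and SimpleNamespaceContext stands for whatever javax.xml.namespace.NamespaceContext implementation I end up writing (with an addNamespace() helper):

    import javax.xml.xpath.XPath;
    import javax.xml.xpath.XPathFactory;

    // Hypothetical common interface; a, b, c, etc. would each implement it.
    interface ObjectsWithXpath {
        String getRoute();
        void setXPath(XPath xpath);
    }

    final class NamespaceAssigner {
        private NamespaceAssigner() {}

        // One method for every element type; no Class.forName(), no casts.
        static void assignNamespaces(ObjectsWithXpath[] elements, Namespace[] namespaces) {
            XPathFactory factory = XPathFactory.newInstance();
            for (ObjectsWithXpath element : elements) {
                SimpleNamespaceContext ctx = new SimpleNamespaceContext();  // step 1
                for (Namespace ns : namespaces) {
                    if (ns.getRoute().equals(element.getRoute())) {         // step 2
                        ctx.addNamespace(ns);                               // step 3
                    }
                }
                XPath xpath = factory.newXPath();
                xpath.setNamespaceContext(ctx);                             // step 4
                element.setXPath(xpath);                                    // step 5, set on the existing
                                                                            // object instead of constructing it
            }
        }
    }

The interface would only need the two members all the element classes actually share, so each class would still keep its own getters and setters.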
I hope I've explained myself, and haven't bored you ...
Thanks in advance,
Bob

Thanks for your reply!
I poked around a bit looking into Factory classes. I've used them a million times but never actually made one. At any rate, they seem like a good idea when you have a bunch of different classes that are similar but not equal.
In my case my classes ONLY have the XPathExpression in common, which means that I'd have to make a base abstract class with a bunch of methods of which only a fraction are defined by any class that extends it. In other words, if I had 3 classes (a, b and c) that extend the base abstract class, and each had 5 getters and setters, the base abstract class would have 15 methods: 5 only used by a, 5 only used by b and 5 only used by c.
It seemed a bit ugly to me. Besides, I only have 3 classes. I decided to factor out the inner loop, which is about 70% of the code, stick it in a utility class, and call it from there. I am repeating some code still, but it isn't that bad, and it saves me having to define an interface, an abstract class, a Factory class, etc.
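Concretely, the factored-out inner loop ended up as a small static helper, roughly like this (the names are again just illustrative, and SimpleNamespaceContext is my own NamespaceContext implementation):

    import javax.xml.xpath.XPath;
    import javax.xml.xpath.XPathFactory;

    final class XPathUtil {
        private XPathUtil() {}

        // The roughly 70% that was repeated per type: match the Namespaces
        // by route, build the context, and return a ready-to-use XPath.
        static XPath buildXPath(String route, Namespace[] namespaces) {
            SimpleNamespaceContext ctx = new SimpleNamespaceContext();
            for (Namespace ns : namespaces) {
                if (ns.getRoute().equals(route)) {
                    ctx.addNamespace(ns);
                }
            }
            XPath xpath = XPathFactory.newInstance().newXPath();
            xpath.setNamespaceContext(ctx);
            return xpath;
        }
    }

Each of A, B and C still loops over its own array, but the loop body shrinks to a single XPathUtil.buildXPath(element.getRoute(), namespaces) call.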
It ain't perfect, but then nothing ever is.
Thanks for all the help! Viva the Java community!
Bob

Similar Messages

  • What was the exact functionality of "Device Control/Status.vi" (version 5)? Is there any VI in LabVIEW 6.1 which performs the same tasks?

    I have a VI developed in LabVIEW 5.1 and I want to upgrade it to LabVIEW 6.1. So I must replace "Device Control/Status.vi" with a newer one, but I do not know which VI performs the same tasks in v6.1.

    The Device Control/Status.vi is included with LabVIEW 6.1 as part of the serial compatibility VIs. You can find it by opening up and looking at
    Instrument I/O -> I/O Compatibility -> Serial Compatibility -> Bytes At Serial Port.vi
    Also, if you open up the VI found in
    vi.lib/platform/_sersup.llb/serial line ctrl.vi
    it will expose the functionality of Device Control/Status.
    Thanks,

  • How to link the same horizontal page to two different vertical pages without duplicating it?

    Hey guys,
    Is there a way to link the same horizontal page on two different vertical pages without duplicating the horizontal page?
    I have a double-page spread of a book split into two parts on different vertical pages, but I want to link the full-sized image on the horizontal page from both of them. Got that? hahaha
    Thank you all

    Confusing, but interesting.
    I think it's possible. I have an idea, but it will need one advanced trick.
    To explain it, I need to put together a simple test haha.

  • What is the best way to replace the Inline Views for better performance?

    Hi,
    I am using Oracle 9i.
    What is the best way to replace inline views for better performance? I see a lot of performance problems with inline views in my queries.
    Please suggest.
    Raj

    The WITH clause plus the /*+ MATERIALIZE */ hint can do you good.
    see below the test case.
    SQL> create table hx_my_tbl as select level id, 'karthick' name from dual connect by level <= 5
    2 /
    Table created.
    SQL> insert into hx_my_tbl select level id, 'vimal' name from dual connect by level <= 5
    2 /
    5 rows created.
    SQL> create index hx_my_tbl_idx on hx_my_tbl(id)
    2 /
    Index created.
    SQL> commit;
    Commit complete.
    SQL> exec dbms_stats.gather_table_stats(user,'hx_my_tbl',cascade=>true)
    PL/SQL procedure successfully completed.
    Now, this is a normal inline view:
    SQL> select a.id, b.id, a.name, b.name
    2 from (select id, name from hx_my_tbl where id = 1) a,
    3 (select id, name from hx_my_tbl where id = 1) b
    4 where a.id = b.id
    5 and a.name <> b.name
    6 /
    Execution Plan
    0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=7 Card=2 Bytes=48)
    1 0 HASH JOIN (Cost=7 Card=2 Bytes=48)
    2 1 TABLE ACCESS (BY INDEX ROWID) OF 'HX_MY_TBL' (TABLE) (Cost=3 Card=2 Bytes=24)
    3 2 INDEX (RANGE SCAN) OF 'HX_MY_TBL_IDX' (INDEX) (Cost=1 Card=2)
    4 1 TABLE ACCESS (BY INDEX ROWID) OF 'HX_MY_TBL' (TABLE) (Cost=3 Card=2 Bytes=24)
    5 4 INDEX (RANGE SCAN) OF 'HX_MY_TBL_IDX' (INDEX) (Cost=1 Card=2)
    Now I use WITH together with the MATERIALIZE hint:
    SQL> with my_view as (select /*+ MATERIALIZE */ id, name from hx_my_tbl where id = 1)
    2 select a.id, b.id, a.name, b.name
    3 from my_view a,
    4 my_view b
    5 where a.id = b.id
    6 and a.name <> b.name
    7 /
    Execution Plan
    0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=8 Card=1 Bytes=46)
    1 0 TEMP TABLE TRANSFORMATION
    2 1 LOAD AS SELECT
    3 2 TABLE ACCESS (BY INDEX ROWID) OF 'HX_MY_TBL' (TABLE) (Cost=3 Card=2 Bytes=24)
    4 3 INDEX (RANGE SCAN) OF 'HX_MY_TBL_IDX' (INDEX) (Cost=1 Card=2)
    5 1 HASH JOIN (Cost=5 Card=1 Bytes=46)
    6 5 VIEW (Cost=2 Card=2 Bytes=46)
    7 6 TABLE ACCESS (FULL) OF 'SYS_TEMP_0FD9D6967_3C610F9' (TABLE (TEMP)) (Cost=2 Card=2 Bytes=24)
    8 5 VIEW (Cost=2 Card=2 Bytes=46)
    9 8 TABLE ACCESS (FULL) OF 'SYS_TEMP_0FD9D6967_3C610F9' (TABLE (TEMP)) (Cost=2 Card=2 Bytes=24)
    Here you can see the table is accessed only once; after that, only the result set generated by the WITH clause is accessed.
    Thanks,
    Karthick.

  • How can I move a task from a Summary Task to another Summary Task in the same task list?

    Hey, I tried to move tasks through the SP UI from one Summary Task to another in the same task list, but I didn't find a way to do it.
    Then I spent time learning the SP Client Object Model and now I can read tasks from the list. I see every task has a "FileRef" and a "FileDirRef" field. If I'm right, these fields show the relation between list elements, for example between a Task and a Summary Task element.
    I changed these fields' values and I tried to Update the ListItem but I got this error message: "Invalid data has been used to update the list item. The field you are trying to update may be read only."
    I really need to move tasks that were created below the wrong Summary Task, so please explain a method or pattern for how I can do this. (I can create a new Task below any Summary Task and I can set field values, but I hope there is a way to really move tasks; I mean, I should change the right field values somehow.)
    I can reach the Task List both on the server and client side so I'm very interested in every solution. PowerShell solution is also good for me.
    I'm using SharePoint 2010 SP2.
    Thank you for your answer and your time. :)
    Csaba Marosi

    Hi,
    According to your post, my understanding is that you want to move a task from one summary task to another in the same task list.
    We can do it like this:
    We can create a Gantt View for this task list, then copy your tasks inside a summary task, then navigate to the other summary task and paste them, then go back to the original and delete them.
    Here is another way for your reference:
    SharePoint vs Powershell – Moving List Items between folders
    http://sharepointstruggle.blogspot.in/2010/07/sharepoint-vs-powershell-moving-list.html
    Best Regards
    Dennis Guo
    TechNet Community Support

  • Best way to Fetch the record

    Hi,
    Please suggest the best way to fetch records from the table designed below. It is Oracle 10gR2 on Linux.
    Whenever a client visits the office, a record is created for him. The company policy is to maintain 10 years of data in the transaction table, and the table accumulates about 3 million records per year.
    The table has the following key columns for the SELECT (sample table):
    Client_Visit
    ID Number(12,0) --sequence generated number
    EFF_DTE DATE --effective date of the customer (sometimes the client becomes invalid and he will be valid again)
    Create_TS Timestamp(6)
    Client_ID Number(9,0)
    Cascade Flg Varchar2(1)
    On most of the reports the records are fetched by Max(eff_dte) and Max(create_ts) and cascade flag ='Y'.
    I have the following queries, but both of them are not cost-effective and take 8 minutes to display the records.
    Code 1:
    SELECT au_subtyp1.au_id_k,
           au_subtyp1.pgm_struct_id_k
      FROM au_subtyp au_subtyp1
     WHERE au_subtyp1.create_ts =
              (SELECT MAX (au_subtyp2.create_ts)
                 FROM au_subtyp au_subtyp2
                WHERE au_subtyp2.au_id_k = au_subtyp1.au_id_k
                  AND au_subtyp2.create_ts < TO_DATE ('2013-01-01', 'YYYY-MM-DD')
                  AND au_subtyp2.eff_dte =
                         (SELECT MAX (au_subtyp3.eff_dte)
                            FROM au_subtyp au_subtyp3
                           WHERE au_subtyp3.au_id_k = au_subtyp2.au_id_k
                             AND au_subtyp3.create_ts < TO_DATE ('2013-01-01', 'YYYY-MM-DD')
                             AND au_subtyp3.eff_dte <= TO_DATE ('2012-12-31', 'YYYY-MM-DD')))
       AND au_subtyp1.exists_flg = 'Y'
    Explain Plan
    Plan hash value: 2534321861
    | Id  | Operation                | Name      | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT         |           |     1 |    91 |       | 33265   (2)| 00:06:40 |
    |*  1 |  FILTER                  |           |       |       |       |            |          |
    |   2 |   HASH GROUP BY          |           |     1 |    91 |       | 33265   (2)| 00:06:40 |
    |*  3 |    HASH JOIN             |           |  1404K|   121M|    19M| 33178   (1)| 00:06:39 |
    |*  4 |     HASH JOIN            |           |   307K|    16M|  8712K| 23708   (1)| 00:04:45 |
    |   5 |      VIEW                | VW_SQ_1   |   307K|  5104K|       | 13493   (1)| 00:02:42 |
    |   6 |       HASH GROUP BY      |           |   307K|    13M|   191M| 13493   (1)| 00:02:42 |
    |*  7 |        INDEX FULL SCAN   | AUSU_PK   |  2809K|   125M|       | 13493   (1)| 00:02:42 |
    |*  8 |      INDEX FAST FULL SCAN| AUSU_PK   |  2809K|   104M|       |  2977   (2)| 00:00:36 |
    |*  9 |     TABLE ACCESS FULL    | AU_SUBTYP |  1404K|    46M|       |  5336   (2)| 00:01:05 |
    Predicate Information (identified by operation id):
       1 - filter("AU_SUBTYP1"."CREATE_TS"=MAX("AU_SUBTYP2"."CREATE_TS"))
       3 - access("AU_SUBTYP2"."AU_ID_K"="AU_SUBTYP1"."AU_ID_K")
       4 - access("AU_SUBTYP2"."EFF_DTE"="VW_COL_1" AND "AU_ID_K"="AU_SUBTYP2"."AU_ID_K")
       7 - access("AU_SUBTYP3"."EFF_DTE"<=TO_DATE(' 2012-12-31 00:00:00', 'syyyy-mm-dd
                  hh24:mi:ss') AND "AU_SUBTYP3"."CREATE_TS"<TIMESTAMP' 2013-01-01 00:00:00')
           filter("AU_SUBTYP3"."CREATE_TS"<TIMESTAMP' 2013-01-01 00:00:00' AND
                  "AU_SUBTYP3"."EFF_DTE"<=TO_DATE(' 2012-12-31 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
       8 - filter("AU_SUBTYP2"."CREATE_TS"<TIMESTAMP' 2013-01-01 00:00:00')
       9 - filter("AU_SUBTYP1"."EXISTS_FLG"='Y')
    Code 2:
    I already raised a thread a week back and Dom suggested the following query; it is cost-effective, but the performance is the same and it used the same amount of temp tablespace:
    select au_id_k,pgm_struct_id_k from (
    SELECT au_id_k
          ,      pgm_struct_id_k
          ,      ROW_NUMBER() OVER (PARTITION BY au_id_k ORDER BY eff_dte DESC, create_ts DESC) rn,
          create_ts, eff_dte,exists_flg
          FROM   au_subtyp
          WHERE  create_ts < TO_DATE('2013-01-01','YYYY-MM-DD')
          AND    eff_dte  <= TO_DATE('2012-12-31','YYYY-MM-DD') 
          ) d  where rn =1   and exists_flg = 'Y'
    --Explain Plan
    Plan hash value: 4039566059
    | Id  | Operation                | Name      | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT         |           |  2809K|   168M|       | 40034   (1)| 00:08:01 |
    |*  1 |  VIEW                    |           |  2809K|   168M|       | 40034   (1)| 00:08:01 |
    |*  2 |   WINDOW SORT PUSHED RANK|           |  2809K|   133M|   365M| 40034   (1)| 00:08:01 |
    |*  3 |    TABLE ACCESS FULL     | AU_SUBTYP |  2809K|   133M|       |  5345   (2)| 00:01:05 |
    Predicate Information (identified by operation id):
       1 - filter("RN"=1 AND "EXISTS_FLG"='Y')
       2 - filter(ROW_NUMBER() OVER ( PARTITION BY "AU_ID_K" ORDER BY
                  INTERNAL_FUNCTION("EFF_DTE") DESC ,INTERNAL_FUNCTION("CREATE_TS") DESC )<=1)
       3 - filter("CREATE_TS"<TIMESTAMP' 2013-01-01 00:00:00' AND "EFF_DTE"<=TO_DATE('
               2012-12-31 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
    Thanks,
    Vijay

    Hi Justin,
    Thanks for your reply. I am running this in our Test environment, as I don't want to run it in the Production environment now. The test environment holds 2,809,605 records (about 2.8 million).
    The query output count is 281,699 records and the selectivity is 0.099. The count of distinct (create_ts, eff_dte, exists_flg) values is 2,808,905. I am sure the index scan is not going to help out much, as you said.
    The core problem is that both queries use a lot of temp tablespace. When we use this query to join to another table (the other table has the same design as below), the temp tablespace grows even bigger.
    Both the production and test environment are 3 Node RAC.
    First Query...
    CPU used by this session     4740
    CPU used when call started     4740
    Cached Commit SCN referenced     21393
    DB time     4745
    OS Involuntary context switches     467
    OS Page reclaims     64253
    OS System time used     26
    OS User time used     4562
    OS Voluntary context switches     16
    SQL*Net roundtrips to/from client     9
    bytes received via SQL*Net from client     2487
    bytes sent via SQL*Net to client     15830
    calls to get snapshot scn: kcmgss     37
    consistent gets     52162
    consistent gets - examination     2
    consistent gets from cache     52162
    enqueue releases     19
    enqueue requests     19
    enqueue waits     1
    execute count     2
    ges messages sent     1
    global enqueue gets sync     19
    global enqueue releases     19
    index fast full scans (full)     1
    index scans kdiixs1     1
    no work - consistent read gets     52125
    opened cursors cumulative     2
    parse count (hard)     1
    parse count (total)     2
    parse time cpu     1
    parse time elapsed     1
    physical write IO requests     69
    physical write bytes     17522688
    physical write total IO requests     69
    physical write total bytes     17522688
    physical write total multi block requests     69
    physical writes     2139
    physical writes direct     2139
    physical writes direct temporary tablespace     2139
    physical writes non checkpoint     2139
    recursive calls     19
    recursive cpu usage     1
    session cursor cache hits     1
    session logical reads     52162
    sorts (memory)     2
    sorts (rows)     760
    table scan blocks gotten     23856
    table scan rows gotten     2809607
    table scans (short tables)     1
    user I/O wait time     1
    user calls     11
    workarea executions - onepass     1
    workarea executions - optimal     9
    Second Query
    CPU used by this session     1197
    CPU used when call started     1197
    Cached Commit SCN referenced     21393
    DB time     1201
    OS Involuntary context switches     8684
    OS Page reclaims     21769
    OS System time used     14
    OS User time used     1183
    OS Voluntary context switches     50
    SQL*Net roundtrips to/from client     9
    bytes received via SQL*Net from client     767
    bytes sent via SQL*Net to client     15745
    calls to get snapshot scn: kcmgss     17
    consistent gets     23871
    consistent gets from cache     23871
    db block gets     16
    db block gets from cache     16
    enqueue releases     25
    enqueue requests     25
    enqueue waits     1
    execute count     2
    free buffer requested     1
    ges messages sent     1
    global enqueue get time     1
    global enqueue gets sync     25
    global enqueue releases     25
    no work - consistent read gets     23856
    opened cursors cumulative     2
    parse count (hard)     1
    parse count (total)     2
    parse time elapsed     1
    physical read IO requests     27
    physical read bytes     6635520
    physical read total IO requests     27
    physical read total bytes     6635520
    physical read total multi block requests     27
    physical reads     810
    physical reads direct     810
    physical reads direct temporary tablespace     810
    physical write IO requests     117
    physical write bytes     24584192
    physical write total IO requests     117
    physical write total bytes     24584192
    physical write total multi block requests     117
    physical writes     3001
    physical writes direct     3001
    physical writes direct temporary tablespace     3001
    physical writes non checkpoint     3001
    recursive calls     25
    session cursor cache hits     1
    session logical reads     23887
    sorts (disk)     1
    sorts (memory)     2
    sorts (rows)     2810365
    table scan blocks gotten     23856
    table scan rows gotten     2809607
    table scans (short tables)     1
    user I/O wait time     2
    user calls     11
    workarea executions - onepass     1
    workarea executions - optimal     5
    Thanks,
    Vijay
    Edited by: Vijayaraghavan Krishnan on Nov 28, 2012 11:17 AM
    Edited by: Vijayaraghavan Krishnan on Nov 28, 2012 11:19 AM

  • What is the best way to update the delivery time?

    Hi All,
    One of the tasks in my job is to update purchase orders with information found in the order confirmation we receive. In the order screen, ME22N, you can change the date on the item, where you have all positions listed. But you can also change it on the classifications tab.
    What is the best way to update the delivery time?
    Best Regards
    Praveen

    Hi
    It may be useful to you:
    If you change the delivery date after you have sent the PO, the statistical delivery date still contains the old delivery date; as long as the order has not been sent, this date is changed together with the delivery date.
    Vendor evaluation is performed based on the statistical delivery date.
    So if you are responsible for a date change, change both dates, so that the vendor does not get bad points. But if the vendor cannot deliver on the wished dates, change only the delivery date, so that everyone in your company and the MRP run can rely on the new delivery date while your vendor is evaluated against the old date, because the delay is his fault.
    regards
    Madhu

  • How to use two different boot images in the same task sequence

    I need to use two boot images in the same task sequence. The reason is that I'm deploying ZTI to McAfee-encrypted devices, which I have already done successfully with McAfee v6 Encryption.
    Here's where my problem comes in. We are about to deploy McAfee v7, which means we will have a mix of v6 and v7 in our environment. I must have two special boot images, one with v6 drivers and one with v7 drivers, in order for my process to work. I don't see a way right now to assign more than one boot image to a task sequence. Is there any way to do this?
    If I can't do that, then I will have to create two task sequences and target collections based on the Endpoint Encryption version to deploy to.
    If anyone has suggestions it would be much appreciated. Thank you

    Have you tested copying those files to %windir%\system32\drivers during the WinPE session (just enable command-line support in your boot image) to see if they need to be there before the OS starts? Test that, and if they don't need to be there during the boot-up of WinPE, then:
    Create two packages (v6 and v7)
    "Run command line", use the package created earlier and just use "copy /Y .\*.* %windir%\system32\drivers" ...again, this should be run according to your needs, with a variable or some other check like I said

  • Doing the same task for different data... Do I need queues? If yes, how do I use them?

    Hello all,
    I have created a VI which gets data from an FTP server and then, after comparing with a specified folder on the HDD, copies the missing data from the FTP ... A description is also in the VI. There are a few things I need to ask.
    1) The email-sending VI gives error 1172. What could be the reason? Is it the firewall, or is there a mistake in the code?
    2) As you people are experts, I would really appreciate any suggestions to improve the VI.
    3) Most important: currently this VI can only perform the whole task for one FTP folder. Actually, my task is that I need to check 4 different FTP folders on different servers; it's not 4 different folders on one FTP server, it's 4 different FTP servers. Now my question is how I can do this: first it compares and copies from FTP1, then FTP2, and so on. How can I change the data in the cluster for the different FTPs? Do I need to use queues? If yes, how? Because I don't have any experience with queues.
    I would really appreciate it if someone could either provide a relevant example or give me some idea.
    The main VI is 'TASK START.vi'; please find the attached files.
    Thanks
    Regards,
    Naqqash
    Attachments:
    Project.zip 151 KB

    Hi Peter,
    Thank you very much for your reply. I have understood your idea but there are few problems.
    Please see the attached "Final test.vi"; actually, my top-level VI should be like this. For that reason I need to build the cluster named "Settings" as shown in "Enum FTP Events.vi". In this cluster not all of the items are constants (though they can be set as constants), and they are not all of the same type, so whenever I try to create it the way you did, I get a broken-wire error. What do you think I should do? In this cluster there are different types of data: a string, a path, numerics, an array, and a cluster named "file properties" as well. I can't figure out what to do. I know things are a little scattered and weird but, due to lack of experience, I guess I am now a little bit confused about this matter. I hope the guys here in the forum will help me as you always have.
    Naqqash
    Attachments:
    Final test.vi 11 KB
    Enum FTP Events.vi 14 KB

  • What is the best way to mimic the data from production to other server?

    Hi,
    Here we use Streams and advanced replication to send the data for 90% of the tables from production to another production database server, so if one goes down we can use the other one. Is there any better option than using Streams and replication? We are having a lot of problems with Streams these days; they keep breaking and we get calls.
    I have heard about Data Guard but don't know what it is used for. Please advise on the best way to replicate the data.
    Thanks a lot.....

    RAC, Data Guard. The first one is active-active, that is, you have two or more nodes accessing the same database on shared storage and you get both HA and load balancing. The second is active-passive (unless you're on 11.2 with Active Standby or Snapshot Standby), that is one database is primary and the other is standby, which you normally cannot query or modify, but to which you can quickly switch in case primary fails. There's also Logical Standby - it's based on Streams and generally looks like what you seem to be using now (sort of.) But it definitely has issues. You can also take a look at GoldenGate or SharePlex.

  • What is the best way to get the new iPhone/ iPhone 5

    What is the best way to get the new iPhone / iPhone 5 when it is released? I want to get the phone as quickly as possible. If I order it on the first day of pre-orders, will I get it on the same day as the release in stores? I'm on an unlimited data plan with AT&T. Because the new iPhone will most likely have 4G data, will I lose the unlimited plan?

    Apple has not made any announcement.  We are not allowed to speculate on this forum.

  • Best way to apply the patch

    Hello,
    Could someone tell me what's the best way to apply the
    patch?
    Should I install/create the database after I apply the patch?
    Thanks!
    John

    Hey,
    The best way would be:
    Apply the patch on the secondary box first and then on the primary box.
    You don't really need to make any box a standalone one. You can apply the patch in the distributed environment.
    One more thing:
    If you apply the patch on the primary box and then perform replication, it will NOT replicate the patch to the secondary box.
    Here is the procedure:
    Steps to apply the patch
           Issue the following "acs patch" command in EXEC mode to install the ACS patch:
           "acs patch install patch-name.tar.gpg repository repository-name"
           ACS displays the following confirmation message:
           Installing an ACS patch requires a restart of ACS services.
           Would you like to continue? yes/no
           Enter yes.
    Please ensure that you use an FTP server. Do not use TFTP, as it is not supported.
    Regards
    Minakshi
    Do rate the helpful posts :)

  • Best way to perform vSync in a Java applet

    Hi (sorry for my bad English),
    I'm looking for the best way to perform vertical synchronization in a Java applet.
    I saw it is possible with the JOGL library, but I don't need a heavy 3D lib (it's just for 2D).
    Is there a simpler lib allowing vsync in applets (and, if possible, working on Windows, Mac, Linux...)?
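    For reference, the closest I've gotten without an external lib is plain AWT double buffering via BufferStrategy. This is only a sketch, and as far as I know page flipping is not guaranteed to be synchronized with the display refresh in plain AWT, which is exactly why I'm asking:

        import java.applet.Applet;
        import java.awt.BorderLayout;
        import java.awt.Canvas;
        import java.awt.Color;
        import java.awt.Graphics;
        import java.awt.Toolkit;
        import java.awt.image.BufferStrategy;

        public class VSyncDemo extends Applet implements Runnable {
            private Canvas canvas;

            public void init() {
                setLayout(new BorderLayout());
                canvas = new Canvas();
                add(canvas, BorderLayout.CENTER);
            }

            public void start() {
                new Thread(this).start();
            }

            public void run() {
                // Must be called once the canvas is displayable (after the applet starts).
                canvas.createBufferStrategy(2);
                BufferStrategy strategy = canvas.getBufferStrategy();
                int x = 0;
                while (isShowing()) {
                    Graphics g = strategy.getDrawGraphics();
                    g.setColor(Color.BLACK);
                    g.fillRect(0, 0, canvas.getWidth(), canvas.getHeight());
                    g.setColor(Color.GREEN);
                    x = (x + 2) % Math.max(1, canvas.getWidth());
                    g.fillRect(x, 50, 20, 20);           // simple moving square
                    g.dispose();
                    strategy.show();                     // present the back buffer
                    Toolkit.getDefaultToolkit().sync();  // flush drawing to the display
                    try { Thread.sleep(16); } catch (InterruptedException e) { return; }
                }
            }
        }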

    why do you need this?

  • Best way to check the data

    I have made some changes to an existing view. The change is adding a join and a new column from a new table. I have created a version 2 with all these changes. I want to make sure that the same data exists in both views. What is the best way to test the data?

    Bad wording; the query results could be:
    No rows selected: same number of rows on both views and the same data (considering view1's structure, and the view without the new column).
    At least one row selected: if the count(*) on both was different, the rows returned will be at least the difference. If the count was the same, then every returned row means there's no equivalent row on view2, and a data difference may exist in at least one of the fields, so you have to find the equivalent row on view2 to compare.
    On the query syntax: as view2 has a new column and I guess it's not a constant value, you have to specify every column for view1, and the same columns for view2, so the new field isn't included and compared.

  • Writing a SELECT statement in different ways is not the same thing!!!

    Let's take a look
    use AdventureWorks2012
    go
    dbcc freeproccache
    go
    and now select something
    select * from [HumanResources].[vEmployeeDepartmentHistory]
    go
    so, let's analyze the 'Compiled Plan' entries:
    SELECT usecounts, cacheobjtype, objtype, text
    FROM Sys.dm_exec_cached_plans
    CROSS APPLY sys.dm_exec_sql_text(plan_handle)
    where cacheobjtype = 'Compiled Plan'
    ORDER BY usecounts DESC;
    GO
    and now let's write the same SELECT in various "fuzzy case" spellings:
    dbcc freeproccache
    go
    select * from [HUMANResources].[vEmployeeDepartmentHistory]
    go
    select * from [HumanRESOURCES].[vEmployeeDepartmentHistory]
    go
    select * from [HumanResources].[VEmployeeDepartmentHistory]
    go
    select * from [HumanResources].[vEmployeeDEPARTMENTHistory]
    go
    select * from [HumanResources].[vEmployeeDepartmentHISTORY]
    go
    SELECT * from [HumanResources].[vEmployeeDepartmentHistory]
    go
    select * FROM [HumanResources].[vEmployeeDepartmentHistory]
    go
    and now let's see compiled plans again
    SELECT usecounts, cacheobjtype, objtype, text
    FROM Sys.dm_exec_cached_plans
    CROSS APPLY sys.dm_exec_sql_text(plan_handle)
    where cacheobjtype = 'Compiled Plan'
    ORDER BY usecounts DESC;
    GO
    So, writing a SELECT statement in different ways is not the same thing!!!
    Hope it'll be useful
    P.Ceglie

    Yes. There is little or no query normalization that happens in front of the query plan cache. It would be a performance tradeoff to parse and normalize the query before matching it to the cached plans. The benefit of reducing the number of plans in the cache probably wouldn't be worth it.
    David
    David http://blogs.msdn.com/b/dbrowne/
