Let all channels (with the same data structure) run one program

Hello,
I want to run the same program on all channels that have the same data structure. How can I program this?
Thanks!

Hi charleen,
You're going to need to provide more details for us to help you.  In general "channels" do not "run" programs.  You can, however, run a program which loops over channels, checks their properties, and decides whether to include them in the overall analysis.  What do you mean by "data structure"?
What do you want to actually happen in the "program"?
Brad Turpin
DIAdem Product Support Engineer
National Instruments

Similar Messages

  • Merging data from 2 schemas with same object structure into one schema

    Hi
    I want to merge data from 2 schemas in different environments (say Test1 and Test2) into one schema (say Test_final) for testing. Both schemas have the same structure; the data can be the same or different.
    What I did was take an export of the schema on Test1 and import it into Test_final. Now I need to merge/append the data from Test2 into Test_final.
    I cannot merge the data with import due to primary key constraints, and import doesn't support this feature, so I tried SQL*Loader to "append" the data, using a sequence to generate the primary key.
    But my worry is that since new primary keys are generated, the foreign keys will become invalid and the data will not be consistent.
    Is there any other way to do this task?
    Regards
    Raman

    This approach might be better...
    create table test_final as
    select *
    from schema1.test1;

    insert into test_final
    select t2.*
    from schema2.test1 t2
    where not exists (select null
                      from test_final tf
                      where tf.pk = t2.pk);
    ...assuming duplicate primary keys mean duplicate records. If that assumption is not the case then you have a more complex data migration exercise on your hands and you need to figure out some rules to determine which version of the data takes precedence.
    Cheers, APC
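    If the target database is Oracle 10g or later, a MERGE can do the same de-duplicated append in a single statement. This is only a sketch; pk, col1 and col2 are placeholders for the real primary key and column names:
    merge into test_final tf
    using schema2.test1 t2
    on (tf.pk = t2.pk)
    when not matched then
      insert (pk, col1, col2)
      values (t2.pk, t2.col1, t2.col2);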

  • G/L A/C OUTSTANDING SHOWING CLEARED IN ALL ITEMS ON SAME DATE

    Dear Expert,
    G/L account outstanding documents are showing as cleared under the "all items" category using the same date in FBL1N.
    To illustrate in detail: when I select the "open items" radio button with key date 17.11.2009 I get certain uncleared documents, but when I use the "all items" radio button with the same date, all those documents that were uncleared show as cleared. In the database the documents appear only in table BSAK.
    What could be the cause of this?
    Thanks - Viral

    The items that you are questioning most likely were cleared after 17.11.2009.  When you run FBL1N for open items with key date 17.11.2009, SAP will show you all items that were open as of that date - even if the items have since been cleared.  When you run FBL1N for all items, their clearing status is displayed as of the current date.  For example, if an item was cleared on 18.11.2009, then it was open as of 17.11.2009, but is cleared as of today.
    Regards,
    Shannon
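    A rough way to see the same logic in the data itself (only a sketch to illustrate the key-date reasoning, not how FBL1N works internally; BSAK holds cleared vendor items, BUDAT is the posting date and AUGDT the clearing date, both stored in YYYYMMDD format):
    SELECT BELNR, BUDAT, AUGDT
      FROM BSAK
     WHERE BUDAT <= '20091117'   -- posted on or before the key date
       AND AUGDT >  '20091117';  -- cleared only after it, so still open as of 17.11.2009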

  • Need to create a new row in table with same data as Primary key, but new PK

    Hello Gurus,
    I have a table with one column as the primary key. I need to create a new row in the table with the same data as one of the existing rows but with a different primary key - in short, a duplicate row with a different primary key.
    Any ideas of how it can be done without much complication?
    Thanks in advance for your reply.
    Regards,
    Swapneel Kale

    Something like this:
    insert into mytable (pk_col, non_pk_1, non_pk_2, non_pk_n)
    select 'literal for new pk',
           non_pk_1,
           non_pk_2,
           non_pk_n
      from mytable
     where pk_col = 'literal for existing pk';
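    A small variation, assuming the new key should come from a sequence instead of a hard-coded literal (mytable_seq is a hypothetical sequence name):
    insert into mytable (pk_col, non_pk_1, non_pk_2, non_pk_n)
    select mytable_seq.nextval,
           non_pk_1,
           non_pk_2,
           non_pk_n
      from mytable
     where pk_col = 'literal for existing pk';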

  • MRP split purchase requisition with same date

    Hi guru
    When I run MRP, if there are multiple requirements for the same material with the same date, the system creates one purchase requisition grouping the requirements.
    So, if I have two requirements of 3 and 2 pieces for material A, with date 18.04.2015, the system creates one purchase requisition of 5 pieces.
    I tried to use a BAdI in order to split the purchase requisition and get, in this case, one purchase requisition of 3 pieces and another of 2 pieces, but maybe I did something wrong because MRP no longer works correctly.
    Can anyone give me some tips about this requirement? Has anyone had the same problem?
    Thanks
    Regards
    Raffaele

    Hi,
    Hope your BAdI is working now.
    MD51 is used for project. It creates replenishment orders against a WBS element. After this, you can run MD01 which will create replenishment order by material/plant level.
    If you have a question related to PS, maybe you can post your thread in this space:
    SAP Project Systems (SAP PS)
    Kind Regards,
    Mariano

  • Show all Columns with datatype = date

    Is it possible to select all columns with datatype = date in the Repository?
    We would like to cast them as dd.mm.yyyy.
    Regards,
    Stefan

    Wouldn't it be easier to do it in the presentation services rather than the .rpd?
    Update: anyways, if you wanna be nasty, use something like this:
    CONCAT(
        CAST(DAYOFMONTH("Siebel Data Warehouse"."Catalog"."dbo"."Dim_Incident (W_INCIDENT_D)"."CLOSED_DATE") AS VARCHAR(2)),
        CONCAT('.',
            CONCAT(
                CAST(EXTRACT(MONTH FROM "Siebel Data Warehouse"."Catalog"."dbo"."Dim_Incident (W_INCIDENT_D)"."CLOSED_DATE") AS VARCHAR(2)),
                CONCAT('.',
                    CAST(EXTRACT(YEAR FROM "Siebel Data Warehouse"."Catalog"."dbo"."Dim_Incident (W_INCIDENT_D)"."CLOSED_DATE") AS VARCHAR(4))))))
    Message was edited by:
    ChrisBerg

  • Re-initialize the data during running the program

    Is there any function in LabVIEW that can help me re-initialize the data while the program is running?
    Thanks!
    Regards,
    Ivan

    Ivan,
    I am not sure if I understand your question. You can reinitialize your board and load different board parameters at any time as a part of your program. This will stop actions on the board and then start over with any movements. Use Initialize Controller.flx VI to do this.
    A. Talley
    Applications Engineering
    National Instruments

  • HT204053 I have an iPad and an iPhone 4 with the same Apple ID; one is at home and the other is with me. I want to use FaceTime between them - how is it possible?

    I have an iPad and an iPhone 4 with the same Apple ID; one is at home and the other is with me. I want to use FaceTime between them - how is it possible?

    I would assume it would be a Canon app.
    This article may help:
    https://discussions.apple.com/thread/4123312?start=0&tstart=0

  • OC4J: marshalling does not recreate the same data structure onthe client

    Hi guys,
    I am trying to use OC4J as an EJB container and have come across the following problem, which looks like a bug.
    I have a value object method that returns an instance of ArrayList with references to other value objects of the same class. The value objects have references to other value objects. When this structure is marshalled across the network, we expect it to be recreated as is but that does not happen and instead objects get duplicated.
    Suppose we have 2 value objects: ValueObject1 and ValueObject2. ValueObject1 references ValueObject2 via a private field and ValueObject2 references ValueObject1. Both value objects are returned by our method in an ArrayList structure. Here is how it looks (the number after @ represents an address in memory):
    Object[0] = com.cramer.test.SomeVO@1
    Object[0].getValueObject[0] = com.cramer.test.SomeVO@2
    Object[1] = com.cramer.test.SomeVO@2
    Object[1].getValueObject[0] = com.cramer.test.SomeVO@1
    We would expect to see the same (except exact addresses) after marshalling. Here is what we get instead:
    Object[0] = com.cramer.test.SomeVO@1
    Object[0].getValueObject[0] = com.cramer.test.SomeVO@2
    Object[1] = com.cramer.test.SomeVO@3
    Object[1].getValueObject[0] = com.cramer.test.SomeVO@4
    It can be seen that objects get unnecessarily duplicated – the instance of the ValueObject1 referenced by the ValueObject2 is not the same now as the instance that is referenced by the ArrayList instance.
    This does not only break referential integrity, structure and consistency of the data but dramatically increases the amount of information sent across the network. The problem was discovered when we found that a relatively small but complicated structure that gets serialized into a 142kb file requires about 20Mb of network communication. All this extra info is duplicated object instances.
    I have created a small test case to demonstrate the problem and let you reproduce it.
    Here is RMITestBean.java:
    package com.cramer.test;
    import javax.ejb.EJBObject;
    import java.util.*;
    public interface RMITestBean extends EJBObject
    {
        public ArrayList getSomeData(int testSize) throws java.rmi.RemoteException;
        public byte[] getSomeDataInBytes(int testSize) throws java.rmi.RemoteException;
    }
    Here is RMITestBeanBean.java:
    package com.cramer.test;
    import javax.ejb.SessionBean;
    import javax.ejb.SessionContext;
    import java.util.*;
    public class RMITestBeanBean implements SessionBean
    {
        private SessionContext context;
        SomeVO someVO;

        public void ejbCreate()
        {
            someVO = new SomeVO(0);
        }
        public void ejbActivate() {}
        public void ejbPassivate() {}
        public void ejbRemove() {}
        public void setSessionContext(SessionContext ctx)
        {
            this.context = ctx;
        }

        public byte[] getSomeDataInBytes(int testSize)
        {
            ArrayList someData = getSomeData(testSize);
            try {
                java.io.ByteArrayOutputStream byteOutputStream = new java.io.ByteArrayOutputStream();
                java.io.ObjectOutputStream objectOutputStream = new java.io.ObjectOutputStream(byteOutputStream);
                objectOutputStream.writeObject(someData);
                objectOutputStream.flush();
                System.out.println(" serialised output size: "+byteOutputStream.size());
                byte[] bytes = byteOutputStream.toByteArray();
                objectOutputStream.close();
                byteOutputStream.close();
                return bytes;
            } catch (Exception e) {
                System.out.println("Serialisation failed: "+e.getMessage());
                return null;
            }
        }

        public ArrayList getSomeData(int testSize)
        {
            // Create array of objects
            ArrayList someData = new ArrayList();
            for (int i=0; i<testSize; i++)
                someData.add(new SomeVO(i));

            // Interlink all the objects
            for (int i=0; i<someData.size()-1; i++)
                for (int j=i+1; j<someData.size(); j++)
                {
                    ((SomeVO)someData.get(i)).addValueObject((SomeVO)someData.get(j));
                    ((SomeVO)someData.get(j)).addValueObject((SomeVO)someData.get(i));
                }

            // print out the data structure
            System.out.println("Data:");
            for (int i = 0; i<someData.size(); i++)
            {
                SomeVO tmp = (SomeVO)someData.get(i);
                System.out.println("Object["+Integer.toString(i)+"] = "+tmp);
                System.out.println("Object["+Integer.toString(i)+"]'s some number = "+tmp.getSomeNumber());
                for (int j = 0; j<tmp.getValueObjectCount(); j++)
                {
                    SomeVO tmp2 = tmp.getValueObject(j);
                    System.out.println(" getValueObject["+Integer.toString(j)+"] = "+tmp2);
                    System.out.println(" getValueObject["+Integer.toString(j)+"]'s some number = "+tmp2.getSomeNumber());
                }
            }

            // Check the serialised size of the structure
            try {
                java.io.ByteArrayOutputStream byteOutputStream = new java.io.ByteArrayOutputStream();
                java.io.ObjectOutputStream objectOutputStream = new java.io.ObjectOutputStream(byteOutputStream);
                objectOutputStream.writeObject(someData);
                objectOutputStream.flush();
                System.out.println("Serialised output size: "+byteOutputStream.size());
                objectOutputStream.close();
                byteOutputStream.close();
            } catch (Exception e) {
                System.out.println("Serialisation failed: "+e.getMessage());
            }
            return someData;
        }
    }
    Here is RMITestBeanHome:
    package com.cramer.test;
    import javax.ejb.EJBHome;
    import java.rmi.RemoteException;
    import javax.ejb.CreateException;
    public interface RMITestBeanHome extends EJBHome
    {
        RMITestBean create() throws RemoteException, CreateException;
    }
    Here is ejb-jar.xml:
    <?xml version = '1.0' encoding = 'windows-1252'?>
    <!DOCTYPE ejb-jar PUBLIC "-//Sun Microsystems, Inc.//DTD Enterprise JavaBeans 2.0//EN" "http://java.sun.com/dtd/ejb-jar_2_0.dtd">
    <ejb-jar>
    <enterprise-beans>
    <session>
    <description>Session Bean ( Stateful )</description>
    <display-name>RMITestBean</display-name>
    <ejb-name>RMITestBean</ejb-name>
    <home>com.cramer.test.RMITestBeanHome</home>
    <remote>com.cramer.test.RMITestBean</remote>
    <ejb-class>com.cramer.test.RMITestBeanBean</ejb-class>
    <session-type>Stateful</session-type>
    <transaction-type>Container</transaction-type>
    </session>
    </enterprise-beans>
    </ejb-jar>
    And finally the application that tests the bean:
    package com.cramer.test;
    import java.util.*;
    import javax.rmi.*;
    import javax.naming.*;
    public class RMITestApplication
    {
        final static boolean HARDCODE_SERIALISATION = false;
        final static int TEST_SIZE = 2;

        public static void main(String[] args)
        {
            Hashtable props = new Hashtable();
            props.put(Context.INITIAL_CONTEXT_FACTORY, "com.evermind.server.rmi.RMIInitialContextFactory");
            props.put(Context.PROVIDER_URL, "ormi://lil8m:23792/alexei");
            props.put(Context.SECURITY_PRINCIPAL, "admin");
            props.put(Context.SECURITY_CREDENTIALS, "admin");
            try {
                // Get the JNDI initial context
                InitialContext ctx = new InitialContext(props);
                NamingEnumeration list = ctx.list("comp/env/ejb");
                // Get a reference to the Home Object which we use to create the EJB Object
                Object objJNDI = ctx.lookup("comp/env/ejb/RMITestBean");
                // Now cast it to an RMITestBeanHome object
                RMITestBeanHome testBeanHome = (RMITestBeanHome)PortableRemoteObject.narrow(objJNDI,RMITestBeanHome.class);
                // Create the remote interface
                RMITestBean testBean = testBeanHome.create();
                ArrayList someData = null;
                if (!HARDCODE_SERIALISATION)
                {
                    // ############################### Alternative 1 ##############################
                    // ## This relies on marshalling serialisation ##
                    someData = testBean.getSomeData(TEST_SIZE);
                    // ############################ End of Alternative 1 ##########################
                } else {
                    // ############################### Alternative 2 ##############################
                    // ## This gets a serialised byte stream and de-serialises it ##
                    byte[] bytes = testBean.getSomeDataInBytes(TEST_SIZE);
                    try {
                        java.io.ByteArrayInputStream byteInputStream = new java.io.ByteArrayInputStream(bytes);
                        java.io.ObjectInputStream objectInputStream = new java.io.ObjectInputStream(byteInputStream);
                        someData = (ArrayList)objectInputStream.readObject();
                        objectInputStream.close();
                        byteInputStream.close();
                    } catch (Exception e) {
                        System.out.println("Serialisation failed: "+e.getMessage());
                    }
                    // ############################ End of Alternative 2 ##########################
                }

                // Print out the data structure
                System.out.println("Data:");
                for (int i = 0; i<someData.size(); i++)
                {
                    SomeVO tmp = (SomeVO)someData.get(i);
                    System.out.println("Object["+Integer.toString(i)+"] = "+tmp);
                    System.out.println("Object["+Integer.toString(i)+"]'s some number = "+tmp.getSomeNumber());
                    for (int j = 0; j<tmp.getValueObjectCount(); j++)
                    {
                        SomeVO tmp2 = tmp.getValueObject(j);
                        System.out.println(" getValueObject["+Integer.toString(j)+"] = "+tmp2);
                        System.out.println(" getValueObject["+Integer.toString(j)+"]'s some number = "+tmp2.getSomeNumber());
                    }
                }

                // Print out the size of the serialised structure
                try {
                    java.io.ByteArrayOutputStream byteOutputStream = new java.io.ByteArrayOutputStream();
                    java.io.ObjectOutputStream objectOutputStream = new java.io.ObjectOutputStream(byteOutputStream);
                    objectOutputStream.writeObject(someData);
                    objectOutputStream.flush();
                    System.out.println("Serialised output size: "+byteOutputStream.size());
                    objectOutputStream.close();
                    byteOutputStream.close();
                } catch (Exception e) {
                    System.out.println("Serialisation failed: "+e.getMessage());
                }
            } catch(Exception ex){
                ex.printStackTrace(System.out);
            }
        }
    }
    The parameters you might be interested in playing with are HARDCODE_SERIALISATION and TEST_SIZE defined at the beginning of RMITestApplication.java. The HARDCODE_SERIALISATION is a flag that specifies whether Java serialisation should be used to pass the data across or we should rely on OC4J marshalling. TEST_SIZE defines the size of the object graph and the ArrayList structure. The bigger this size is the more dramatic effect you get from data duplication.
    The test case outputs the structure both on the server and on the client and prints out the size of the serialised structure. That gives us sufficient comparison, as both structure and its size should be the same on the client and on the server.
    The test case also demonstrates that the problem is specific to OC4J. The standard Java serialisation does not suffer the same flaw. However using the standard serialisation the way I did in the test case code is generally unacceptable as it breaks the transparency benefit and complicates interfaces.
    To run the test case:
    1) Modify provider URL parameter value on line 15 of the RMITestApplication.java for your environment.
    2) Deploy the bean to the server.
    4) Run RMITestApplication on a client PC.
    5) Compare the outputs on the server and on the client.
    I hope someone can reproduce the problem and give their opinion, and possibly point to the solution if there is one at the moment.
    Cheers,
    Alexei

    Hi,
    Eugene, wrong end user recovery. Alexey is referring to client desktop end user recovery, which is entirely different.
    Alexey - As noted in the previous post:
    http://social.technet.microsoft.com/Forums/en-US/bc67c597-4379-4a8d-a5e0-cd4b26c85d91/dpm-2012-still-requires-put-end-users-into-local-admin-groups-for-the-purpose-of-end-user-data?forum=dataprotectionmanager
    Each recovery point has user permissions tied to it, so it's not possible to retroactively give users permissions.  Implement the below and going forward all users can restore their own files.
    This is a hands off solution to allow all users that use a machine to be able to restore their own files.
     1) Make these two cmd files and save them in c:\temp
     2) Using windows scheduler – schedule addperms.cmd to run daily – any new users that log onto the machine will automatically be able to restore their own files.
    <addperms.cmd>
     Cmd.exe /v /c c:\temp\addreg.cmd
    <addreg.cmd>
     set users=
     echo Windows Registry Editor Version 5.00>c:\temp\perms.reg
     echo [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft Data Protection Manager\Agent\ClientProtection]>>c:\temp\perms.reg
     FOR /F "Tokens=*" %%n IN ('dir c:\users\*. /b') do set users=!users!%Userdomain%\\%%n,
     echo "ClientOwners"=^"%users%%Userdomain%\\bogususer^">>c:\temp\perms.reg
     REG IMPORT c:\temp\perms.reg
     Del c:\temp\perms.reg
    Please remember to click “Mark as Answer” on the post that helps you, and to click “Unmark as Answer” if a marked post does not actually answer your question. This can be beneficial to other community members reading the thread. Regards, Mike J. [MSFT]
    This posting is provided "AS IS" with no warranties, and confers no rights.

  • VIEW won't display channels with same names

    Hello,
    We have recently installed DIAdem 2010 and are beginning to find channel name problems that we didn't have before with 9.1.
    Traditionally we have always set the name-oriented channel references to 'only channel name' and we have never had any problems when displaying multiple traces in VIEW. Now, we find that when we display more than one trace in VIEW with channel names that are repeated we get problems with the axis system display dialogue box. This dialogue box tells me that the X and Y channel names AND NUMBERS are the same for all three traces when in fact the data displayed is different. If I try and change one of the channels it immediately reverts back to the first channel with that name.
    I have attached a screen dump to help explain. The image shows a VIEW window with three speed time histories from three separate data files that have been loaded together. The three X channels are all named 'time' but are channel numbers 1, 4 and 7. The Y channels are all 'speed km\h' and are numbered 2, 5 and 8. However, the dialogue box shows that all three traces are made from channels 1 and 2, despite the three traces appearing different. If I change one of the 'time' channels from number 1 to, say, number 4, then it reverts back to number 1 as soon as I move the focus somewhere else in the box. Version 9.1 never had this problem.
    If I switch the name-oriented channel references to '[group index]/channel name' then the problem goes away - even though the data displayed in the graph is the same!
    I'm not entirely clear on the reasons for settling on 'name only' references - probably historical - but I am even less clear on the effect of switching to '[group index]/channel name', particularly with the many autosequences that we use.
    Is this a bug that can be fixed?
    Regards, Si.
    Attachments:
    VIEW.jpg ‏236 KB

    Simon,
    The DataFinder is a database that DIAdem installs that integrates directly into the NAVIGATOR, so that you can easily query data files/groups/channels or compile properties from multiple files into correlated curves on a graph, etc.  You might be interested in the ability to look inside your data files in the NAVIGATOR to see the group/channel structure and their properties.  This is particularly interesting if you want to load selected groups/channels from one or more files.
    Here are the feature description files,
    Brad Turpin
    DIAdem Product Support Engineer
    National Instruments
    Attachments:
    New Features from DIAdem 9.1 to 2010.zip ‏69 KB

  • Help to fill BPS Cube with same data in a Cube with these conditions

    Hi,
    I need some help in implementing BPS in a small project. (Integrated Planning is not available).
    An existing cube, Cube1 has: Year/month, Year, char1, char2, keyfig1, keyfig2
    Keyfig1 is filled directly from R3 with actuals; keyfig2 (planned values) is filled manually with a monthly flat file load.
    Now, there is a change in direction to fill keyfig2 through BPS features and bring in additional key figures all based on keyfig1.
    Cube2 has been created only for the purpose of this BPS project. Cube2 was a copy of Cube1(with no data). For Cube1, I have created a Planningarea1 and PlanningLevel1; and for Cube2, Planningarea2 and PlanningLevel2 in BPS0.
    How do I fill the BPS Cube2 with the same data as in BPS Cube1 with the following conditions:
    keyfig1 : same as source value from R3 (not modifiable)
    keyfig2 : modifiable by users only on the first and second of the month.
    keyfig3 : keyfig1 * 1.1
    keyfig4 : keyfig1 of previous Year/month 
    keyfig5 : same as source value from R3 (But modifiable)
    keyfig6 : same as keyfig5 as of last day of 20th of the current month (not modifiable)
    The goal is to create a multi planning area to join the two cubes. Any hints will also be appreciated.
    Thanks

    Your thought of having a multi area is right.
    Create a multi area and bring the basic areas to which you have assigned Cube1 and Cube2 underneath the multi area.
    Under your planning package, create a function of type Formula (FOX) and create a parameter set like this:
    {KEYFIG3} = {KEYFIG1} * 1.1.
    Just this one line is enough.
    To get keyfig4 as the previous month's key figure, you need another FOX formula. To do this, you need a BPS variable to get the previous month and use this variable in the parameter set.
    Your FOX will be like this:
    DATA CURRMONTH TYPE 0CALMONTH.
    DATA PREMONTH TYPE 0CALMONTH.
    {KEYFIG4, CURRMONTH} = {KEYFIG1, PREMONTH}.
    To make users modify keyfig2 only on days 1 and 2, you need to define a data slice.
    Ravi Thothadri

  • Problem with different resultset with same data and same query in Oracle 8.1.7 and 9i

    Hello,
    I have been using this query in oracle 8.1.7
    SELECT
    ID,
    AREA_NO
    FROM MANAGER_AREA MGR
    WHERE COMPANY_ID = :id AND
    (:value < (SELECT COUNT(ROWID)
    FROM MANAGER_WORK MW
    WHERE MW.AREA_ID = MGR.ID AND
    (MW.END_WORK IS NULL OR MW.END_WORK >= SYSDATE)))
    order by AREA_NO;
    In the above query I want to see rows from the MANAGER_AREA table depending upon the date criteria in the MANAGER_WORK table and also upon the parameter :value, i.e. if I pass 0 I get the records for which there is at least 1 record in MANAGER_WORK meeting the date criteria, and if I pass -1 I get all the records, because the minimum value that COUNT(*) can return is 0. The result set was as expected in 8.1.7.
    A couple of days back I installed Personal Oracle 9i to test the basic functionality of our program with the same data. The query fails: irrespective of whether I pass -1 or 0, it returns the same result set that I would have got by passing 0.
    I do not know whether this is a bug introduced in 9i. Can anybody help me with this problem? It would be difficult for me to change the parameter sent to this query, as the query is called from many different places.
    Thanks in advance
    Amol.

    I cannot use a GROUP BY and a HAVING clause here. The problem with GROUP BY and HAVING is that I have to join the two tables, and with an inner join I only get rows that are linked to each other in both tables.
    If I use an outer join to solve that problem then I have to take the date condition into consideration. My previous query effectively discarded the correlated subquery result by using -1 as the value; that will not happen with the join query.
    Amol.
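    For what it is worth, here is a sketch of the outer-join / GROUP BY / HAVING rewrite being discussed (assuming ANSI join syntax, available from 9i onwards). With the date test in the ON clause, areas that have no qualifying MANAGER_WORK rows still appear with a count of 0, so passing -1 still returns every row:
    SELECT MGR.ID, MGR.AREA_NO
      FROM MANAGER_AREA MGR
      LEFT JOIN MANAGER_WORK MW
        ON MW.AREA_ID = MGR.ID
       AND (MW.END_WORK IS NULL OR MW.END_WORK >= SYSDATE)
     WHERE MGR.COMPANY_ID = :id
     GROUP BY MGR.ID, MGR.AREA_NO
    HAVING COUNT(MW.AREA_ID) > :value
     ORDER BY MGR.AREA_NO;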

  • Employee Information with same date

    Dear All,
    Can anyone tell me how to retrieve employees who were hired on the same date?
    Thanks in advance.

    /* Formatted on 7/30/2012 1:29:04 PM (QP5 v5.139.911.3011) */
    WITH t
            AS (SELECT 'a' empl, TO_DATE ('31-12-2012', 'dd-mm-yyyy') hire
                  FROM DUAL
                UNION ALL
                SELECT 'b' empl, TO_DATE ('31-12-2011', 'dd-mm-yyyy') hire
                  FROM DUAL
                UNION ALL
                SELECT 'c' empl, TO_DATE ('31-12-2011', 'dd-mm-yyyy') hire
                  FROM DUAL
                UNION ALL
                SELECT 'd' empl, TO_DATE ('5-12-2012', 'dd-mm-yyyy') hire
                  FROM DUAL)
    SELECT *
      FROM t t1, t t2
    WHERE t1.hire = t2.hire AND t1.empl != t2.empl

    c     12/31/2011     b     12/31/2011
    b     12/31/2011     c     12/31/2011
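    If the goal is just to find the hire dates that are shared (rather than listing the pairs), a grouped query against the real employee table also works; employees and hire_date below are placeholder names for your actual table and column:
    SELECT hire_date, COUNT(*) AS employees_hired
      FROM employees
     GROUP BY hire_date
    HAVING COUNT(*) > 1
     ORDER BY hire_date;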

  • Two Switches with same MAC+Priority running STP

    Hi
    What will happen if two switches with the same MAC address and the same STP priority run STP? How will the root bridge be elected? How will the STP process handle this situation?
    thanx

    Well, first off, no two switches (or switch ports) will have the same MAC (if they do, you're buying switches from the wrong, very bad place).
    Every port of the switch will have (at least) one MAC of its own.
    Given the same priority for all switches/bridges on the LAN, the lowest switch/bridge MAC on the LAN will win as the root.
    VLANs take on the MAC of the administrative interface ("CPU"), i.e. xxxx.xxxx.5000; the first port (fa0/1) is xxxx.xxxx.5001, the second (fa0/2) is xxxx.xxxx.5002, and so on.
    If you do a "show interface", the MAC / bia is displayed in the second line.
    Good Luck
    Scott

  • Use same report with same database structure to different database server.

    Hi,
    I have standard Crystal Reports which need to be copied to another database server.
    The origin database has exactly the same structure as the database on the other server.
    My only question is how to change the connection of the Crystal Report file from the old database server to the new one without affecting the linked tables in the report. Would that be possible?
    Note: the origin database and the new database have the same structure, tables and columns but differ in the data, and they are on different servers (MS SQL).
    Thanks,

    Hi Mark,
    Open the report in the CR Designer > select Database option on the top > select Set Datasource location.
    The pane on the top shows the current connection. Go ahead and create a new connection to the target database from the pane at the bottom. Once created, highlight one table from the top, highlight the corresponding table from the bottom pane and click Update. Do this for each table.
    -Abhilash
