Naming of data streams

I need the ability to name data streams. If you use the output of a function and bundle this data with other data, the bundled variable takes the name of the source VI output, which in many cases is not useful. This could be avoided by naming the data streams. I tried adding a description, but that does not work. Can someone help me find a solution, or, in case a developer reads this message, could someone implement this feature?

Let me first confess that I did not fully follow that question.
Have you looked at LVOOP (LabVIEW Object-Oriented Programming)?
Dynamic dispatch adapts to the class, and all downstream data also adapts to the class data being used.
Ben
Ben Rayner
I am currently active on.. MainStream Preppers
Rayner's Ridge is under construction

Similar Messages

  • Zone information on download files - no longer in alternate data stream on Windows 7 or IE 8 ?

    Hi,
In Windows XP and Vista, when IE (or Messenger) downloads a file, an alternate data stream named "Zone.Identifier" is created to store information about the zone the file originated from. This was how Windows knew which files should show the "This file came from another computer and might be blocked" message with an "Unblock" button in the file's properties. Of course, this only works if downloading to an NTFS volume.
    Vista also added the /R option for DIR, which allows us to see the alternate data streams.  I can easily confirm this behavior on my home PC, which is Vista.
    But on my Windows 7 desktop at the office, there do not seem to be any alternate data streams on downloaded files.  DIR /R shows no Zone.Identifier ADS for downloaded files, and "This file came from another computer and might be blocked" is not displayed
    in the downloaded file properties.
    Nonetheless, downloaded files are preserving zone information. If I download an HTML file from our Intranet, and then open the downloaded file in IE, it shows as being in the Intranet zone.   If I then copy that file to a FAT partition and back
    to my desktop, it then shows as being in the local Computer zone.  So there is metadata that was removed by copying to/from FAT. 
    Any ideas what is happening here?  How is the zone information being stored, if it is not in an ADS any more?

Hmm, I have the same problem. My research suggests that the Zone.Identifier ADS is still being used for the Internet zone, but not the intranet zone. I too can get an intranet location out of IE, but not by querying Zone.Identifier, e.g. using PowerShell. This issue is present in XP as well as Win 7, and it appears to extend to trusted zone IDs.
One hypothesis is that this is being stored in hidden form to prevent malware from applying zone IDs that decrease security.
    I had thought that maybe the intranet zone id was only present during download - but your experiment suggests otherwise.
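For anyone who wants to check this directly, here is a minimal sketch (Python, Windows-only, with a made-up path) of reading the stream: NTFS exposes alternate data streams through the ordinary file API using the "filename:streamname" syntax.

def read_zone_identifier(path):
    # Open the file's Zone.Identifier alternate data stream; OSError means
    # no such stream exists (FAT volume, or the zone is stored elsewhere).
    try:
        with open(path + ":Zone.Identifier") as ads:
            return ads.read()
    except OSError:
        return None

print(read_zone_identifier(r"C:\Users\me\Downloads\example.html"))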

  • SAP MII 14.0 SP5 Patch 11 - Error has occurred while processing data stream Dynamic Query role is not assigned to the Data Server

    Hello All,
    We are using a two tier architecture.
    Our Corp server calls the refinery server.
    Our CORP MII server uses user id abc_user to connect to the refinery data server.
    The user id abc_user has the SAP_xMII_Dynamic_Query role.
    The data server also has the checkbox for allow dynamic query enabled.
    But we are still getting the following error
    Error has occurred while processing data stream
    Dynamic Query role is not assigned to the Data Server; Use query template
Once we add the SAP_xMII_Dynamic_Query role to the data server, everything works fine. Is this behavior by design?
    Thanks,
    Kiran

    Thanks Anushree !!
    I thought that just adding the role to the user and enabling the dynamic query checkbox on the data server should work.
    But we even needed to add the role to the data server.
    Thanks,
    Kiran

  • Can we use 0INFOPROV as a selection in Load from Data Stream

    Hi,
    We have implemented BW-SEM BPS and BCS (SEM-BW - 602 and BI 7 ) in our company.
    We have two BPS cubes for Cost Center and Revenue Planning and we have Actuals Data staging cube, we use 0SEM_BCS_10 to load actuals.
    We created a MultiProvider on BPS cubes and Staging cube as a Source Data Basis for BCS.
    Issue:
When loading plan data or actuals data into the BCS (0BCS_C11) cube using the Load from Data Stream method, we have a performance issue. We automated the load process in a process chain; sometimes it takes about 20 hours just for the plan data load for 3 group currencies, followed by the elimination tasks.
What I noticed is that when loading plan data, for example, the system is also reading the Actuals cube, which is not required, and there is no selection available in the Mapping or Selection tab where I can restrict the data load to a particular cube.
I tried to add 0INFOPROV into the data basis, but then it doesn't show up as a selection option in the data collection tasks.
Is there a way to restrict the data load into BCS using this load option, so that I can restrict which cube I read data from?
    I know that there is a filter Badi available, but not sure how it works.
    Thanks !!
    Naveen Rao Kattela

    Thanks Eugene,
We do have other characteristics, like Value Type (10 = Actual and 20 = Plan) and Version (100 = USD Actual and 200 = USD Plan), but when I am loading data into BCS using the Load from Data Stream method, the request goes to all the underlying cubes, which in my case are the planning cubes and the Actuals cube. I don't want the request to go to the Actuals cube when I am running only the plan load; I think it is causing a performance issue.
For this reason I am wondering if I can use 0INFOPROV, as we do in BEx queries, to filter the InfoProvider, so that data load performance will improve.
I was able to bring 0INFOPROV into the data basis by adding it to the characteristics folder used by the data basis.
I am able to see this InfoObject in the Data Stream Fields tab. I check-marked it for use in the selection and regenerated the data basis.
I was expecting that this field would now be available for selection in the data collection method, but it is not.
So if it is confirmed that there is no way to use 0INFOPROV as a selection, I will suggest to my client a redesign of the data basis itself.
    Thanks,
    Naveen Rao Kattela

  • What does the Merge Data Stream processor do when there are multiple input streams to it from the same reader?

    Hi,
    I have a process with a reader of master data that outputs 5 records that feeds simultaneously into 3 different lookup and return processors.
    Each lookup and return processor brings some data back from a detail table. There can be multiple details so I follow each lookup processor with a split records from array processor. Hence I end up with 3 'streams' of data. Stream 1 has 8 records, stream 2 has 5 records and stream 3 has 6 records.
    I join all these streams to a Merge Data Streams processor.
I end up with 9 records, so although the help for the Merge Data Streams processor says 'Merge Data Streams does not perform any transformation, matching, or merging of records', there is clearly some merging going on.
    What is the behaviour of the merge data streams processor in this scenario?
I have added attributes and flags into each of the streams. How many records should I see, and what values should the added attributes/flags have? (Some records show attributes/flags from all 3 streams, whereas others show just those from one stream.)
    I have developed a test case simply to understand what the processor is doing but it's not obvious and furthermore it's probably unwise to develop EDQ processes where the processor behaviour is not documented and guaranteed to remain consistent. What I am trying to achieve is to bring all of a person's (the master data) various details (assignments, employers, etc.) together so we can check the data (some rules require data from multiple details).
    Thanks, Nik

    Cheers Mike - and for the explanation of the terms.
    I think I understand now how it's supposed to work.
What I'm finding, however, is that when I set a flag to Y at the beginning of a path (one that includes a lookup-and-return and then a split-records-from-array processor), that flag shows no value (i.e. an empty value) in SOME of the records in the subsequent MDS processor. (It's fine in the very last split processor before we get to the MDS, but then again there are fewer records in that split processor than in the MDS.)
In my case there are obviously more records in the MDS processor than there were in the original reader (because the lookup-and-returns are configured to have unlimited maximum matches). As mentioned, the different paths return different numbers of records before being combined in the MDS. Say a reader has 5 records, and path 1 returns 8 records in total including a path-specific flag (flag1, set to Y), but path 2 (which again adds its own path-specific flag, flag2, set to Y) returns just 5 records (since nothing was added from the lookups): are you saying that flag2 would show as 'Y' for all 8 records shown in the MDS?
    Hopefully you would be able to see what I mean if you try to create a process like the one I've described (or I can upload a package).
    Re. the purpose of the separate paths approach it is simply to allow the visualisation ('showing the working' as Neil puts it) of the different checks being carried out by the process.
    This is considered one of the benefits of the tool over writing SQL queries (with outer joins, query criteria, etc.).
    Also, as mentioned I was following an example that Neil put together for us to ensure that we are doing things in a 'proper' and supported way.
If we put all the lookups, etc. for all the checks into one data stream, then it is no longer so understandable, and the value of joining processors in a process over simply writing SQL becomes questionable; arguably the EDQ process in fact becomes less easy to understand than the SQL.
Also, to go down this route I will need to revise the processes I have already developed (which were substantially working until I revised them).
    Thanks, Nik

  • Create a continuous data stream from C++, and read it in LabView

    Hello all.
I'm working on a project which involves connecting to a motion tracker and reading position and orientation data from it in real time. The code to get the data is in C++, so I decided that the best way to do this would be to create a C++ DLL containing all the necessary functions to connect to the device and read the data from it, and to use the Call Library Function node to feed this data into LabVIEW.
I'm having trouble, though, since ideally I would like a continuous stream of data from the C++ code into LabVIEW, and I'm not sure how to achieve this. Putting the Call Library Function node in a while loop seems like an obvious solution, but done that way I would have to reconnect to the device every time I get the data, which is quite a bit too slow.
So my question is: if I created a C++ function which created a data stream, could I read it into LabVIEW without having to continually call a function? I'd prefer to call a function only once, and then read the data stream until a stop command is given.
I'm using LabVIEW 2010, version 10.0.
Apologies if the question is poorly phrased; many thanks for your help.
    Dave

    dr8086 wrote:
This method sounds like an excellent suggestion, but I do have a few questions where I don't think I've understood fully.
From what I understand, the basic premise is to use one Call Library Function node to access a DLL which creates an instance of the device object and passes a pointer to it into LabVIEW. Then a separate Call Library Function node would pass this pointer to another DLL function which could access the device object, update it and read the data. This part could be in a while loop and carry on reading the data until a stop command is given.
That's it. I'm including some skeleton code as an example, partly because I don't know how much experience you have with multithreading: it shows how you'd have to use critical sections to guard the interactions between the threads so that they don't lead to issues.
// exported functions to access the device
extern "C" __declspec(dllexport) int __stdcall init(uintptr_t *ptrOut)
{
    *ptrOut = (uintptr_t)new CDevice();
    return 0;
}

extern "C" __declspec(dllexport) int __stdcall get_data(uintptr_t ptr, double vals[], int size)
{
    return ((CDevice*)ptr)->get_data(vals, size);
}

extern "C" __declspec(dllexport) int __stdcall close(uintptr_t ptr, double last_vals[], int size)
{
    int r = ((CDevice*)ptr)->close();
    ((CDevice*)ptr)->get_data(last_vals, size);
    delete (CDevice*)ptr;
    return r;
}

// h file
// Represents a device
class CDevice
{
public:
    virtual ~CDevice();
    int init();
    int get_data(double vals[], int size);
    int close();
    // only called by the worker thread
    int ThreadProc();
private:
    CRITICAL_SECTION rBufferSafe; // needed for thread safety
    vhtTrackerEmulator *tracker;
    HANDLE hThread;
    double buffer[500];
    int buffer_used;
    bool done; // this HAS to be protected by the critical section since 2 threads access it; use a get/set method with critical sections inside
};

// cpp file
DWORD WINAPI DeviceProc(LPVOID lpParam)
{
    ((CDevice*)lpParam)->ThreadProc(); // call the member function to do the work
    return 0;
}

CDevice::~CDevice()
{
    DeleteCriticalSection(&rBufferSafe);
}

int CDevice::init()
{
    tracker = new vhtTrackerEmulator();
    InitializeCriticalSection(&rBufferSafe);
    buffer_used = 0;
    done = false;
    hThread = CreateThread(NULL, 0, DeviceProc, this, 0, NULL); // this thread will now be saving data to the internal buffer
    return 0;
}

int CDevice::get_data(double vals[], int size)
{
    EnterCriticalSection(&rBufferSafe);
    int len;
    if (vals)
    {
        len = min(size, buffer_used);
        memcpy(vals, buffer, len * sizeof(double)); // copy len doubles, not len bytes
        buffer_used = 0; // whatever wasn't read is discarded
    }
    else // passing NULL provides a way to query the current used buffer size
    {
        len = buffer_used;
    }
    LeaveCriticalSection(&rBufferSafe);
    return len;
}

int CDevice::close()
{
    done = true;
    WaitForSingleObject(hThread, INFINITE); // handle timeouts etc.
    delete tracker;
    tracker = NULL;
    return 0;
}

int CDevice::ThreadProc()
{
    while (!done) // the original post had "bdone", which doesn't compile
    {
        tracker->update();
        EnterCriticalSection(&rBufferSafe);
        if (buffer_used < 500)
            buffer[buffer_used++] = tracker->getRawData(0);
        LeaveCriticalSection(&rBufferSafe);
        Sleep(100);
    }
    return 0;
}
    dr8086 wrote:
My main concern is that the object may go out of memory or be deallocated, since it wouldn't be held in any namespace or anything.
Since you create the object with new, the object won't expire until either the DLL is unloaded or the process (LabVIEW) closes. So the object stays valid between DLL calls, provided LabVIEW didn't unload the DLL (which it does if the VIs are closed). When that happens I'm not exactly sure what happens to live objects (i.e. if you forgot to call close); I imagine the system reclaims the memory, but the device might still be open.
What I do to make sure everything gets closed if the DLL unloads before I could call close and delete the object: every time I create a new object in the DLL I add it to a list, and when the DLL unloads, any object still on the list gets deleted.
    dr8086 wrote:
    I also have a more general programming question about the purpose of the buffer. Would the buffer basically be a big table of position values, which are stored until they can be read into the rest of the VI? 
    Yes, see the example code.
However, depending on the frequency with which you need to collect data from the device, you might not need this buffer at all. I.e. if you collect a sample about every 100 ms, then you could remove all the threading- and buffer-related functions and instead read the data from the read function itself, like this:
double CDevice::get_data()
{
    tracker->update();
    return tracker->getRawData(0);
}
You'd only need a buffer and a separate thread if you collect data at a high frequency and cannot afford to lose any of it.
    Matt
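(For what it's worth, a rough way to exercise this calling pattern outside LabVIEW is a small Python ctypes harness; "device.dll" and the loop timing below are assumptions, and the exports are the ones sketched above. It mirrors what the Call Library Function nodes do: init once, poll get_data in a loop, close once.)

import ctypes, time

dll = ctypes.WinDLL("device.dll")       # hypothetical build of the DLL above
handle = ctypes.c_void_p()
dll.init(ctypes.byref(handle))          # connect once; starts the reader thread

buf = (ctypes.c_double * 500)()
for _ in range(10):
    n = dll.get_data(handle, buf, 500)  # drain whatever the thread has buffered
    print(list(buf[:n]))
    time.sleep(0.1)

dll.close(handle, buf, 500)             # stop the thread, fetch the tail, free the object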

  • Validation on Inventory and Supplier Data Stream

    Hi Experts Team,
My client wants validation on the Inventory and Supplier data streams. When I try to create a method, it shows only Total Records and Documents in the data stream field.
How can I get the above two streams in the method?
Can anyone suggest something?

Thanks, Dan, for your speedy reply.
I will do the validation according to document type.
    Thanks & Regards
    Madhu

  • Conversion of a binary data stream to decimal numbers

    Hi
I am having great difficulty working out how to convert my binary data stream to decimal numbers.
The data I am reading back is a binary string, starting with the most significant byte (MSB) of the first word, then the corresponding least significant byte (LSB), where a word is two bytes long. A carriage return indicates message termination. The return message starts with 'bin,' followed by the number of bytes requested. No delimiters are used to separate the data, but a carriage return is appended to the end of the data.
bin,<first word msb><first word lsb>...<last word lsb><CR>
e.g. bin,$ro¬z1;@*...etc
Does anybody know of an example VI that can help me convert this data from binary to decimal numbers?
    Many Thanks
    Ash

    Hi Ashley,
after getting the string you can strip the first 4 characters. Then try a Type Cast to an array of U16. If the numbers are not correct, you can add a Swap Bytes operation on the resulting array.
    Message Edited by GerdW on 09-13-2006 02:46 PM
    Best regards,
    GerdW
    CLAD, using 2009SP1 + LV2011SP1 + LV2014SP1 on WinXP+Win7+cRIO
    Kudos are welcome
    Attachments:
Convert.png (2 KB)
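(In text form, GerdW's recipe corresponds to something like the following Python sketch; the sample bytes are invented. LabVIEW's Type Cast interprets data big-endian, so ">" is the default reading and "<" plays the role of the Swap Bytes step.)

import struct

def decode(reply, byte_order=">"):
    payload = reply[4:-1]                # drop the "bin," prefix and the trailing <CR>
    n = len(payload) // 2                # number of 16-bit words
    return list(struct.unpack(byte_order + str(n) + "H", payload[:n * 2]))

print(decode(b"bin,\x12\x34\x56\x78\r"))  # -> [4660, 22136], i.e. 0x1234, 0x5678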

  • Data Streaming error in BPM workspace

    Hi All
We developed a BPM process and an ADF form with the BPM 11g suite and deployed them to Oracle WebLogic Server 10.3. When fetching data from the DB into the ADF form within the process, small amounts of data come through fine, but when fetching a large volume of data into the ADF form we get a popup saying "Data streaming".
Could you please guide me in resolving this issue?
    Thank you
    Balaji J

Please paste the exact error stack along with your JDev version so we can help you.

  • Problems Reading SSL  server socket  data stream using readByte()

Hi, I'm trying to read an SSL server socket stream using readByte(). I need to use readByte() because my program acts as an LDAP proxy (it receives LDAP messages from an LDAP client, then passes them on to an actual LDAP server). It works fine with normal LDAP data streams, but once an SSL data stream is introduced, readByte() just hangs! Here is my code...
    help!!! anyone?... anyone?
1. The SSL socket is first read into an InputStream, "input":

public void run()
{
    Authorization auth = new Authorization();
    try {
        InputStream input = client.getInputStream();
        while (true)
        {
            StandLdapCommand command;
            try
            {
                command = new StandLdapCommand(input);
                Authorization t = command.get_auth();
                if (t != null)
                    auth = t;
            }
            catch (SocketException e)
            {   // on socket error, drop the connection
                Message.Info("Client connection closed: " + e);
                close(e);
                break;
            }
            catch (EOFException e)
            {   // on end of stream, drop the connection
                Message.Info("Client connection closed: " + e);
                close(e);
                break;
            }
            catch (Exception e)
            {
                // Way too many of these to trace them!
                Message.Error("Command not processed due to exception");
                close(e);
                break;
                //continue;
            }
            processor.processBefore(auth, command);
            try
            {
                Thread.sleep(40); // yield to other threads
            }
            catch (InterruptedException ie) {}
        }
    }
    catch (Exception e)
    {
        close(e);
    }
}

2. The data is then handed to an intermediate constructor, called above as "command = new StandLdapCommand(input)":

public StandLdapCommand(InputStream in) throws IOException
{
    message = LDAPMessage.receive(in);
    analyze();
}

3. Finally, the read function, where it hangs at "int tag = (int)din.readByte();":

public static LDAPMessage receive(InputStream is) throws IOException
{
    /*  LDAP Message Format:
     *      1.  LBER_SEQUENCE                -- 1 byte
     *      2.  Length                       -- variable length
     *      3.  ID                           -- variable length
     *      4.  LDAP_REQ_msg                 -- 1 byte
     *      5.  Message specific structure   -- variable length
     */
    DataInputStream din = new DataInputStream(is);
    int tag = (int)din.readByte();      // sequence tag
    ...
}

    I suspect you are actually getting an Exception and not tracing the cause properly and then doing a sleep and then getting another Exception. Never ever catch an exception without tracing what it actually is somewhere.
    Also I don't know what the sleep is supposed to be for. You will block in readByte() until something comes in, and that should be enough yielding for anybody. The sleep is just literally a waste of time.

  • TSV_TNEW_PAGE_ALLOC_FAILED - BCS load from data stream task

    Hi experts,
We had a short dump when executing the BCS Load from Data Stream task. The message is: TSV_TNEW_PAGE_ALLOC_FAILED.
No storage space available for extending an internal table.
What happened? How can we solve this error?
    Thanks
    Marilia

    Hi,
Most likely, the remedy for your problem is the same as in my answer to your other question:
    Raise Exception when execute UCMON

  • SEM-BCS:Data Stream Upload

    Hi! All.
I am facing an issue in the data stream upload. The target field 0Company is 6 characters and the source field 0Company Code is 4 characters in length. The system gives an error: the value of the target field exceeds the source field; use an InfoObject with greater length. However, upon mapping the company to another 6-character InfoObject used instead of 0Comp_Code, the system returns yet another error: that it is not coming from the source system, or that the source system can't be determined (i.e. for the new InfoObject).
I was thinking of changing the target field length to four. Is there any way around this issue? If the target field (i.e. 0Company) is changed, what implications will it have, or will just changing the field in the data model do the trick?
    Thanks for your input....
    Victor

Hello Victor,
If I understood your problem correctly, I faced the same thing in the past.
When customizing the load from data stream, set the length to 4 characters, or try to include an offset of 2.
Hope this helps.
If yes, award points.
    Best regards,
    João Arvanas

  • Data Streaming Destination in SQL Server 2014?

I would like to publish an SSIS package as a SQL view. I'm working with SQL Server 2014 Developer Edition and SSDT-BI for Visual Studio 2013.
The instructions for SQL Server 2012 are here:
http://msdn.microsoft.com/en-us/library/dn600376(v=sql.110).aspx
The steps include downloading and installing the Microsoft SQL Server 2012 Integration Services Data Feed Publishing Components.
I'm not able to find a similar package for SQL Server 2014. I've tried all kinds of maneuvers (installing the SSIS 2012 and SSDT-BI for VS2012 components, attempting to upgrade SSIS 2012 packages with the Data Streaming Destination to SSIS 2014, etc.), but I've not found any way to use a Data Streaming Destination in SSDT-BI for VS2013 targeting SSIS 2014. Am I missing something?

It seems not to be supported in BIDS 2013.
    Regards, Leo

  • Using Digital I/O to generate serial data stream

    Hello All,
I need to generate a serial data stream. The hardware I use is the MIO-16E-10.
I am planning to use the digital line out to generate the serial data stream.
It seems that if I use the wait timer, the minimum pulse width I can get is 1 ms. But for my application the pulses have to be shorter than that. I was wondering about an alternative way to achieve this. Anyone who has worked with a similar application, please help!
    Thanks in advance!
    Anand.

    Anand,
    What you are looking to do is not possible with the digital lines on the board you have.
Depending on what other connections you have on the board, you could use one of the two analog outputs to generate your required serial data stream using pattern generation. Check the DAQ Solution Wizard => Custom DAQ applications => Analog Output => Generate continuous sine wave. This example should give you a baseline to get started. Substitute your desired digital stream for the sine wave generator: logic 0 = 0 V, logic 1 = 5 V.
If you need more than 2 lines, or some form of handshaking, I would suggest using the PCI-6534 (or DIO-32-HS, as it was previously called). This will give you the ability to generate serial data streams with timing and/or handshaking.
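(To make the substitution concrete, here is a small Python sketch of building that 0 V / 5 V sample pattern from a bit string; the bit string and samples_per_bit below are arbitrary examples, with samples_per_bit set by the ratio of the AO update rate to the desired bit rate.)

def bits_to_levels(bits, samples_per_bit=10):
    # Each bit becomes samples_per_bit consecutive output samples:
    # logic 0 -> 0.0 V, logic 1 -> 5.0 V.
    return [5.0 if b == "1" else 0.0
            for b in bits for _ in range(samples_per_bit)]

print(bits_to_levels("1011", samples_per_bit=2))  # [5.0, 5.0, 0.0, 0.0, 5.0, 5.0, 5.0, 5.0]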

  • How do you separate UDP data streams

Hi. I'm currently using LabVIEW 2013 to read a UDP data stream (packet) that contains various parameters. I am successfully receiving the UDP packet using the UDP Open and UDP Read VIs. I run the data output of UDP Read into the "Unflatten From String" function, and for the type I have created a cluster containing 32-bit floating-point (single-precision in LabVIEW) numeric constants, since that is the type of data I am receiving. To separate the different parameters coming in, I have created the same number of numeric constant blocks as there are parameters. Is there an easier way to separate the data from the UDP data out, so that I don't have to create so many numeric constants? Sometimes we will have way more parameters than what is listed below.
Any help will be appreciated. Thank you in advance.
    Attachments:
Example layout (Not actual).vi (11 KB)

    If all the data is the same type, then you can wire an array as the datatype instead of a cluster. Set the "data includes array or string size" input to False (the default is True). The output will be an array, and you can index out the values you want.
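(In text form the suggestion looks roughly like this Python sketch; LabVIEW flattens numeric data big-endian by default, hence the ">".)

import struct

def unflatten_singles(packet):
    # Treat the whole datagram as back-to-back 32-bit big-endian floats,
    # i.e. an array datatype with "data includes array or string size" = False.
    n = len(packet) // 4
    return list(struct.unpack(">" + str(n) + "f", packet[:n * 4]))

# e.g. a packet carrying the parameters 1.5 and -2.25
print(unflatten_singles(struct.pack(">2f", 1.5, -2.25)))  # [1.5, -2.25]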
