Incorrect data order during cluster unbundle

I get a mysterious error when I try to unbundle data inside my subVI.
The sequence order of data is changed. See the attached picture of the data structure of my clusters. The structure on the left shows what I put into the subVI. The structure on the right shows what I read inside the subVI.
The problem is that my data is no longer associated with the correct names. Inside my subVI, the element 'Active Relay' has moved up two positions, but it still contains the value of Temp_CH3. So although the name order has changed, the order of the data itself is unchanged. This is also confirmed by the fact that, inside my subVI, the last element in the cluster, Temp_CH4, still contains the data from 'Active Relay'.
Any good ideas as to why this is happening?
Attachments:
clusters1.gif ‏9 KB

What names were you expecting?
The names of the locals are only used the first time; when you change them later, the cluster type in the main VI is not updated.
Note that your data is coerced upon entering the subVI (see the red dot). I don't think the order has changed; could you verify that with some test data?
Secondly, I urge you to use a type def control:
1. Right-click on the control in the subVI and select Advanced, Customize.
2. In the new window, change the type from Control to Type Def (it's in the button bar).
3. Save the control.
4. Close the control and select 'use xxx instead of the original control'.
5. Right-click on the block diagram and add the control there.
6. Use Bundle by Name to store the data in the cluster.
This should remove the data coercion, and your code will be more readable.
Ton

Similar Messages

  • Weekly Lot-sizing yielding incorrect Planned Order dates

    Hi All,
    I need some help.
    Lot-sizing key = W2 (weekly)
    "Scheduling" on the config of the lot-sizing key = 3 (Period start := start and period end := delivery date). This data has been maintained for a material that is to be planned/produced on a weekly basis.
    Requirement --- Weekly planning on my material. MRP should return Planned Orders with Start Date = Reqt Date (of the Dependent reqt) & End date = Availability date (of the dep reqt).
    Currently, while creating planned orders during the MRP run, the system calculates only the hours on the basis of the routing data (I'm running Lead Time Scheduling), against my expectation of spanning the production order out over multiple days (per the lot-sizing config).
    Any clue would be of a great help
    cheers

    Thanks Mario,
    I had found that out during the course of my testing. I'm doing Lead Time Scheduling, so obviously my requirement can't be met.
    So, we can conclude that though a weekly lot-sizing procedure is/could be compatible with LTS, the Config within the lot-size (field Scheduling = 3) cannot be used to influence the dates on the created Planned Orders...
    Cheers

  • Why am I being charged data usage during the times my phone is not being used?

    I got a notice on my phone that my number has used all of its allowed data for the month. I looked at my current usage and saw that I was being charged for data usage during times when I was not using my phone. I added it up and it comes to more than the amount that you say I am over. Please explain.

    OK, what should I turn off on my 5s, and I should get any additional fees waived. I just got this new phone about a week before Thanksgiving and never got a notice until the allowance was used up. I have never gone over before. I have also never been told that I could be charged for data even when I am not using my phone, or that I should make these changes to prevent it.
    Also, the phone was two days late getting to me. So in addition to any additional fees I may incur from the overage, I should get a credit for the delay in sending my phone. It probably would have been even later had I not called to see where it was; the guy who helped me place the order was supposed to follow up the next day with a phone call to let me know the status, and never did.

  • Classification check of an internal order during KO01 KO02 transactions

    Hi All,
    I need to check whether certain classification fields of an internal order have been filled when the order is saved in transactions KO01 and KO02.
    I'm focused on the user exit EXIT_SAPLRKIO_002 because it is triggered at the save event.
    Can anyone help me?
    Thank you everybody!!
    Regards,
    Roberto

    Hi,
    try this inside your user-exit:
    DATA: c_progn(72) TYPE c VALUE '(program name of transaction)<table name>[]'. " fill in the real program and table name
    FIELD-SYMBOLS: <fs> TYPE ANY TABLE.
    ASSIGN (c_progn) TO <fs>.
    In this way you can access a table, structure or variable of the program that calls your user exit at run time.
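    As a rough, hedged illustration (not verified code) of what you could do once <fs> points at the caller's internal table: scan it generically and check the field you are interested in. 'CLASS_FIELD' below is a placeholder component name, not a real field.
    * sketch only - iterate the dynamically assigned table and check one field
    FIELD-SYMBOLS: <ls_row>   TYPE any,
                   <lv_value> TYPE any.
    LOOP AT <fs> ASSIGNING <ls_row>.
      ASSIGN COMPONENT 'CLASS_FIELD' OF STRUCTURE <ls_row> TO <lv_value>.
      IF sy-subrc = 0 AND <lv_value> IS INITIAL.
    *   classification value missing - react here, e.g. raise an error message
        MESSAGE e001(00) WITH 'Classification field not maintained'.
      ENDIF.
    ENDLOOP.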
    Regards,
    Leo.

  • Incorrect date getting determined

    Hi,
    I'm not sure what is happening. I have written the code below. The begda and endda are between 11.06.2012 and 24.06.2012, but the begda for this particular pernr comes out as 14.05.2012. gt_pa0008 has the correct values, but very strangely gs_pa0008 is fetching an incorrect date while the other fields are correct.
    PROVIDE FIELDS * FROM gt_pa0001 INTO gs_pa0001 VALID flag1
            BOUNDS begda AND endda
            FIELDS * FROM gt_pa0008 INTO gs_pa0008 VALID flag2
            BOUNDS begda AND endda
            FIELDS * FROM gt_pa0007 INTO gs_pa0007 VALID flag3
            BOUNDS begda AND endda
            WHERE begda <= lv_date AND endda >= wa_t549q-begda
            BETWEEN pn-begda AND pn-endda.
    Thank you.
    Regards,
    Narayani
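    One way to narrow this down (a hedged debugging sketch, not from the original post: it assumes gt_pa0008 carries begda/endda fields like PA0008 and that the lines run inside the PROVIDE ... ENDPROVIDE loop above) is to list the original PA0008 records that overlap the interval currently being served and compare them with gs_pa0008:
    * sketch only - print original vs. provided validity dates for comparison
    DATA: ls_pa0008 LIKE LINE OF gt_pa0008.
    LOOP AT gt_pa0008 INTO ls_pa0008
         WHERE begda <= gs_pa0008-endda AND endda >= gs_pa0008-begda.
      WRITE: / 'original:', ls_pa0008-begda, ls_pa0008-endda,
               'provided:', gs_pa0008-begda, gs_pa0008-endda.
    ENDLOOP.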

    This relates to an incorrect FX Rate determination when a specific condition record is used.
    Under normal circumstances this should not occur, but it looks like, if the ZHIV condition record is set up in EUR rather than (in this case) HRK, the values in the sales order / invoice and subsequently in finance are converted back to front (i.e. 1 EUR = 7 HRK rather than 7 HRK = 1 EUR).

  • Flickr Group RSS feed has incorrect dates in Mail - How do I fix this?

    Since I updated to Safari 3.1, RSS posts from Flickr group pools have had an incorrect 'Date Received'. Rather than the date they were added to the group pool (the date the RSS post was sent/received), new posts are now dated by the date the photo was uploaded, which could be significantly in the past. So my RSS mailbox is in completely the wrong order. I'm not 100% sure that this is down to the Safari 3.1 update, but it was definitely not a problem before I made the update.
    I've checked on other RSS readers and they all work correctly. I also posted on the Flickr Help Forum (http://www.flickr.com/help/forum/en-us/69088/) and it would appear that others have had the same problem with Mail.
    Any help would be much appreciated!

    It appears the behavior is caused by iCloud pushing the information, to include the password, to Mail.
    If I delete my iCloud account password and save the changes, the password reappears without me typing it in.
    If I go to System Preferences/iCloud and uncheck the sync Mail box, my iCloud account in Mail disappears. When I recheck the sync Mail box, the account reappears, complete with password.
    Based on my testing, you are stuck with things the way they are.
    If you want to provide feedback to Apple, you can do that in Mail under the Mail menu.

  • How to recover missing data from PCL4 cluster

    Hello experts,
    Is there any way to recover missing data from PCL4 cluster ?
    We recently found that some data related to the W2 production run for past years was missing in the PCL4 cluster. Tables T5UXX and T5UXY have entries with the filing dates, but certain data is missing in the cluster for those table entries.
    Would there be any specific reason for such a data loss ?
    Has anyone come across this issue earlier and found resolution on the same ?
    Any feedback is appreciated.
    Thanks,
    Dipesh.

    When you delete PCL4 entries for production runs, the corresponding control information in tables T5UXX and T5UXY will also be deleted. A control number may be used for more than one form number (in Tax Reporter control tables). Therefore, if a control number that is assigned to the form number to be deleted was also assigned to other form numbers, then this control number information from T5UXY will not be deleted. These details will be displayed in the program results. However, if all form numbers that were assigned to this particular control number were deleted, then the control information in table T5UXY for that control number will be deleted automatically as the last dependent form number is deleted.
    Following that, we have two questions:
    o Was PCL4 deleted?
    o Did the W2 generate without errors?
    Basically, if the PCL4 data is corrupted, then we have two options to rebuild the PCL4 cluster:
    1. Delete the existing PCL4 and rerun the production run with the same As-of date.
    2. Overwrite the production results without deleting the PCL4 data, but with a different As-of date.
    If the employee data has not changed since your actual production run, then overwriting the results will not cause any difference in the PCL4 data.
    Secondly, if you don't want PDF spools or magmedia layouts to be generated while overwriting the production run, then kindly uncheck the following on the PU19 screen, so that only the PCL4 data is rebuilt:
    1. Uncheck Employee Copy
    2. Uncheck Authority Copy
    3. Uncheck Magnetic table

  • Customer Defined Data Classes on Cluster Tables?

    Hi all,
    I noticed that there is no option within db13 to change the storage option to a customer defined data class, for cluster tables. I am sure this is by design but wanted to check to see if anyone has had any luck defining a data class on a cluster table.
    We could move a cluster table to a DB2 tablespace and define the new Data Class through dbacockpit/db02, but there is no option to change the data class definition in the data dictionary (se13) to use the new Data Class for the table itself.
    Mainly what we are aiming for is to be able to move 2 tables into their own tablespaces on the target server, during a migration. Sapinst is looking at the Data Classes to create DDLDB6.XML, which defines the tablespace assignments to the target server, so perhaps there is a preferred method of making this change, if it does exist.
    Thanks for any insights,
    Derek

    Hi guys,
    Thanks for the helpful answers. I should add some more detail.
    Our tables have already been moved to new tablespaces on the source system with DB6CONV, and we have updated our data classes through db02 with no problems. So sapinst uses the data class definitions to create the custom tablespaces, and then uses the data class assignments to assign each table to a tablespace.
    However, the cluster tables do not seem to support the data class assignment in se13, the option is not available as it is for normal tables. Let me see if I can post a screenshot somewhere with a cluster table versus a normal table. For example, there is no option to change the data class for table CDCLS in se13.
    I am wondering if this is a limitation of the version of R/3 we are using, or if the limitation for cluster tables is something that is intended on all versions of R/3.
    If we can't use db13 to make the assignment, we may try the custom definition in the STR file as Kiran mentioned.

  • Incorrect CS Orders Settlement

    A CS order was created with correct partner information; however, during revenue settlement to COPA, the correct partner profit center was not derived. The derived profit center is incorrect and can lead to incorrect data/figures.
    Which customizing settings do I need to check to find the exact source of the error?
    Which T-codes do I need to check to resolve this issue?
    Kindly provide a solution. Thanks,
    Amit Arora

    Hello,
    The profit center should be picked from the order, but the partner profit center is incorrectly taking the value of the profit center.
    Can you please help us out?
    regards,
    Prakash

  • SSAS Default Member Causing incorrect data in Excel Pivot Table using Multi-select in filter

    I have an Excel 2013 pivot table connected to an SSAS (2012) cube. One of my dimensions has a default member specified. When I drop this dimension in the filter area of my pivot table and select multiple members including the default member, then only data for the default member is shown. Selecting multiple members where the default member is not included does not cause the issue.
    I believe this may be an issue in how Excel builds the MDX, but wanted to see if there are any workarounds.
    This issue can be recreated with AdventureWorks using the following steps:
    1. Alter the Product dimension by setting the default member of the Product Line attribute to Mountain: [Product].[Product Line]&[M]
    2. Process the cube
    3. Connect to the cube via Excel 2013
    4. Drag Internet Gross Profit to the Values area (the value will be 4700437.22, which reflects the Mountain default filter)
    5. Drag Product Model Lines to the Filters area (you will see Mountain selected by default)
    6. Change the filter by checking the Select Multiple Items checkbox and checking Mountain and Road (you will see that the amount does not change)
    7. Change the filter again by selecting Road only (to demonstrate that Road has a value, 5602105.8, associated with it)
    8. Change the filter again to select Road and Touring (to demonstrate that the correct aggregation of the two selected members is performed)

    Hi Hirmando,
    According to your description, the default member causes incorrect data when you drag an attribute that contains a default member to the FILTERS area, right?
    I can reproduce this issue in my environment: when dropping this dimension in the filter area of my pivot table and selecting multiple members including the default member, only data for the default member is shown. Currently, it's hard to say what the root cause of this issue is. In order to narrow down this issue, please apply the latest service pack and cumulative update.
    Besides, you can submit feedback at http://connect.microsoft.com/SQLServer/Feedback so that Microsoft can confirm whether this is a known issue.
    Regards,
    Charlie Liao

  • Unpack data channels from cluster

    Hi all,
    I tried searching for this, it seems common enough, but I'm not sure I used the right set of words:
    I'm receiving data from a device in packets. Each packet contains multiple frames, each frame contains two sub-frames, and each sub-frame contains several data channels. In my vi, the packet is a cluster, the frames are clusters, and the sub-frames are clusters, all typedef'ed.
    What I want to do, is take the initial big cluster, and split the data into a cluster of arrays, one per data channel, without generating spaghetti.
    The "decimate array" function would work beautifully to do the job, but alas, it only operates on arrays, not clusters, and it seems my clusters cannot be converted to arrays.
    Is there something I'm missing? I've attached a high-level picture... note that I'm not an artist. The values "n" and "m" are fixed, so there is no variability.
    Attachments:
    ClusterToClusters.png ‏11 KB

    Right-click on Build Array and choose "Concatenate Inputs" so you'll get a 1-D array.
    If you get rid of the "Header" and "Footer" elements, you can wire your overall cluster to Cluster to Array, which will give you an array of Frame clusters, which you can then put through a For loop with Cluster to Array, which will give you a 2-D array, that you can reshape/decimate/extract columns or rows as necessary. Or, in newer versions of LabVIEW, you can right-click on the loop output tunnel and choose the option to concatenate, giving you a 1-D array. However, I suspect that the 2-D array will actually work well for you - you'll have frames on one axis, and data channels on the other.
    If you need the Header and Footer to remain part of the overall cluster, add one more level of nested cluster (there's no memory/speed penalty) containing all the frames, so that you can unbundle just the data.
    Is there any possibility of reading the data into an array in the first place, instead of a cluster? That might be easier.

  • 2012 R2 DPM: massive increase in data transferred during syncronisation of a Deduplicated protected volume

    I have been using 2012 R2 DPM to protect a 46 TB 2012 R2 deduplicated volume for the past 4 months without too much issue. The volume is a low access archive server. Current usage on the volume is 23 TB (if not deduplicated, this would be more like 38TB).
    Data transferred during a sync (twice daily) is normally in the order of a few 10's of GB. A Restore Point is set for 12am daily.
    However, on Saturday during a scheduled sync, it started to transfer very large amounts of data and got to 9 TB before failing early this Monday morning (it had reached the 90% threshold and couldn't increase the replica volume any more). At first I had thought that
    a. There had been a large amount of data suddenly put on the server,
    or
    b. A lot of files had been changed causing them to be un-deduplicated
    However on checking the volume stats ( I have an independent monitoring program), the data usage on the protected volume was fairly static throughout the week at 23 TB - and is still around that now. Besides, nobody generally uses this volume at the weekend.
    I am currently running a consistency check on this volume (this can take 36+ hours) in an attempt to get the sync and restore points working again; however, in the meantime, I need to find out what caused this.
    Has anyone experienced this sort of behavior before ? Where should I start looking ?
    DPM version 4.2.1292

    Hi,
    When was DPM UR5 installed ?
    DPM 2012 R2 UR5 (4.2.1292) introduced a fix for an issue where long-term tape-based backups or restores of deduped files would fail under the circumstance below.
    A backup job for a dedupe-enabled volume on Windows Server 2012 R2 fails. For example, a Data Protection Manager backup might fail for the following setup:
    Data Protection Manager 2012 R2 is installed on Windows Server 2012.
    Data Protection Manager is protecting a dedupe-enabled volume on Windows Server 2012 R2.
    If your DPM server is running on Windows Server 2012 and the protected dedup volume is on Windows Server 2012 R2, then after installing UR5, all data will be protected in a non-dedup state. This is necessary because the Windows dedup filter on Windows 2012 does not know how to read the Windows 2012 R2 dedup data structures, which would be the replica volume on the DPM server.
    Regards, Mike J. [MSFT]

  • How to get the data from a cluster table to BW

    Dear All,
    I want to extract the data from R/3 to BW by using 2 tables and one Cluster B2.
    Actually my report contains some fields from PA2001, PA2002 and one cluster table, B2 (table ZES). Can I create a view using these 3 tables? If that is not possible, how can I get the data from the cluster? Can I create a generic DataSource using cluster tables directly?
    In transaction SE11 the cluster table ZES shows as an invalid table.
    I have referred to some forums, but with no luck.
    Can anybody tell me the procedure to get the data from the cluster (table ZES)?
    Waiting for your replies.
    Thanks and regards
    Rajesh

    Hi Siggi,
    Thank you for your reply.
    I am also planning to write a function module to get the data, but the system says that the cluster table ZES does not exist (even though ZES is a standard table, it does not appear in SE11).
    How can I use the fields from that table?
    What can I do now? Can you please explain this point?
    Waiting for your reply.
    Thanks and Regards
    Rajesh
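    As a rough sketch of the function-module approach mentioned above (everything below is an assumption to verify in your system: the exact B2 cluster key layout and the line structure of ZES are defined in the standard B2 cluster includes, and the structure name pc2b6 used here is only a guess), the low-level read of cluster B2 from PCL2 could look roughly like this:
    * sketch only - read the ZES table of time-evaluation cluster B2 from PCL2
    DATA: BEGIN OF lv_b2_key,
            pernr TYPE pernr_d,      " personnel number
            pabrj(4) TYPE n,         " payroll year
            pabrp(2) TYPE n,         " payroll period
            cltyp(1) TYPE c,         " cluster type (assumption)
          END OF lv_b2_key.
    DATA: lt_zes TYPE STANDARD TABLE OF pc2b6.  " assumed line structure of ZES
    lv_b2_key-pernr = '00001234'.
    lv_b2_key-pabrj = '2007'.
    lv_b2_key-pabrp = '01'.
    lv_b2_key-cltyp = '1'.
    IMPORT zes TO lt_zes FROM DATABASE pcl2(b2) ID lv_b2_key.
    IF sy-subrc <> 0.
    * no B2 result stored for this employee / period
    ENDIF.
    If it is available on your release, the standard function module HR_TIME_RESULTS_GET is usually a cleaner way to read B2 results than importing from PCL2 directly, and either approach can be wrapped in a function module that a generic DataSource extractor calls.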

  • Photos. They are on my macBook, backed up on time machine. Copied them (hours and hours) to external hard drive. They are now in alphabetical order (19,000  of them) NOT IN THEIR EVENTS or date order-and have been taken with different cameras- help!!!!!!

    Photos.
    They are on my macBook, backed up on time machine. There are 19,000+ of them, some rescued from a pc crash- which I used the nifty iPhoto to put back into date order.    
    I want to take them all off my laptop now, as I need to use it for WORK!!
    Copied them (hours and hours) to another external hard drive.
    They are now in alphabetical order (all 19,000+ of them) NOT IN THEIR EVENTS or date order. (-They have also been taken with different cameras over the years and some of the generic camera numbering remains.)
    I have tried to copy them (only a few as an experiment)  one event at a time, but that again "opens up" the Event "folder" and tips them out as individuals and therefore just lists image letters and numbers alphabetically.
    How can I copy the whole library, still organised in "Events", to an external hard drive?
    How can I copy the folders/albums I have already made onto this hard drive?
    And how can I add to this backup monthly, again keeping events and folders separate and up to date?
    Mac is so user friendly - there must be a way.........
    Thanks

    UPDATE : I have re-installed from disk, various apps that were no longer functioning such as iLife, iWork etc. So, I now can access my photos again.
    Also, I had to re-install all the software for my printer ( Stylus Pro 4880 ) and reset it so the printer is working again.
    Photoshop CS4 won't open. I think I will have to get in touch with Adobe as basically, I guess they have a built-in "blocker" which prevents me from opening the app as the license is for only 1 user and having re-installed the OS, there are now, in effect, 2 users ( Me and Me 1, I think ).
    So, having added on a new external HD, Time Machine has made a copy of most of the files, folders, apps etc onto the external HD. The internal HD is still nearly full ( 220 GBs out of 232 GBs ).
    I am guessing the way to go now in order to free up space on the internal HD is to delete/trash older photos from my iPhoto library and hope that if needed, I will be able to access them on the external HD.
    Am I correct ? Please advise before I do something I will regret.
    Thanks, Sean.

  • Incorrect data in ODS

    Hi All,
    We are loading data from one ODS to another with delta processes. The second ODS got some incorrect data populated. After that I did a full repair load selectively, for one record only, from the first ODS to the other, and it went fine. But when I do the full repair load for the entire company code, one particular key figure gets doubled for most of the records. I've checked that there is no problem with the start routine.
    Please  let me know if I am missing anything in full repair load.
    Thanks in advance.

    Hi,
    Following up on this post, I am now getting a different and strange error.
    The data gets loaded from ODS1 (0FIA_DS11) to ODS2 (0FIA_DS12).
    But for some asset main numbers, CACQ_VL_YR and TACQ_VL_YR do not get populated.
    We have a start routine in between. The start routine populates CACQ_VL_YR; TACQ_VL_YR is populated from ACQ_VAL_TR.
    The routine is as follows:
    DATA: lt_trans LIKE TABLE OF DATA_PACKAGE,
          wa_trans LIKE LINE OF lt_trans.
    LOOP AT DATA_PACKAGE.
      CLEAR wa_trans.
    * take over non-value fields 1:1
      MOVE: DATA_PACKAGE-currency   TO wa_trans-currency,
            DATA_PACKAGE-comp_code  TO wa_trans-comp_code,
            DATA_PACKAGE-/BIC/ZASS_MAIN TO wa_trans-/BIC/ZASS_MAIN,
            DATA_PACKAGE-/BIC/Zdep_area TO wa_trans-/BIC/Zdep_area,
            DATA_PACKAGE-fiscyear   TO wa_trans-fiscyear,
            DATA_PACKAGE-fiscvarnt  TO wa_trans-fiscvarnt.
    * value fields with identical transfer logic for PLN and ***
      MOVE:
      DATA_PACKAGE-vaw_odp_tr TO wa_trans-vaw_odp_tr,
      DATA_PACKAGE-vaw_sdp_tr TO wa_trans-vaw_sdp_tr,
      DATA_PACKAGE-vaw_udp_tr TO wa_trans-vaw_udp_tr,
      DATA_PACKAGE-vaw_avr_tr TO wa_trans-vaw_avr_tr,
      DATA_PACKAGE-va_rapc_tr TO wa_trans-va_rapc_tr,
      DATA_PACKAGE-va_rodp_tr TO wa_trans-va_rodp_tr,
      DATA_PACKAGE-va_invs_tr TO wa_trans-va_invs_tr.
      CASE DATA_PACKAGE-transtype.
        WHEN '***'.
    * cumulated value fields
          MOVE:
          DATA_PACKAGE-acq_val_tr TO wa_trans-cacq_vl_yr,
          DATA_PACKAGE-inv_sup_tr TO wa_trans-cinv_gr_yr,
          DATA_PACKAGE-ord_dep_tr TO wa_trans-cor_dep_yr,
          DATA_PACKAGE-spc_dep_tr TO wa_trans-csp_dep_yr,
          DATA_PACKAGE-upl_dep_tr TO wa_trans-cup_dep_yr,
          DATA_PACKAGE-res_trf_tr TO wa_trans-cres_tr_yr,
          DATA_PACKAGE-interst_tr TO wa_trans-cinter_yr,
          DATA_PACKAGE-reval_tr   TO wa_trans-crev_rv_yr,
          DATA_PACKAGE-rev_odp_tr TO wa_trans-crev_od_yr.
          COLLECT wa_trans INTO lt_trans.
    * planned value fields
        WHEN 'PLN'.
          MOVE:
          DATA_PACKAGE-acq_val_tr TO wa_trans-acq_val_tr,
          DATA_PACKAGE-inv_sup_tr TO wa_trans-inv_sup_tr,
          DATA_PACKAGE-ord_dep_tr TO wa_trans-plo_dep_yr,
          DATA_PACKAGE-spc_dep_tr TO wa_trans-pls_dep_yr,
          DATA_PACKAGE-upl_dep_tr TO wa_trans-upl_dep_yr,
          DATA_PACKAGE-res_trf_tr TO wa_trans-pln_avr_yr,
          DATA_PACKAGE-interst_tr TO wa_trans-pln_int_yr,
          DATA_PACKAGE-reval_tr   TO wa_trans-prev_rv_yr,
          DATA_PACKAGE-rev_odp_tr TO wa_trans-prev_aodyr.
          COLLECT wa_trans INTO lt_trans.
    * ignore 'real' movements
        WHEN OTHERS.
      ENDCASE.
    ENDLOOP.
    DATA_PACKAGE[] = lt_trans[].
    * if ABORT is not equal to zero, the update process will be cancelled
    ABORT = 0.
    Can anyone throw some light on this?
    Regards,
    Dhanya.
