BPC NW10 - data collection - managing intercompany flows

Hi,
For the budgeting process, there are 2 budget forms.
First input form (Form 1): enter the total amount of the balance (ignoring the intercompany flows)
ENTITY      ACCOUNT     INTERCO      MEASURES
Company1    Account 1   NO_INTERCO   100
Second input form (Form 2): enter the intercompany flows
ENTITY      ACCOUNT     INTERCO      MEASURES
Company1    Account 1   Company2     20
In summary, unfortunately, the data records are:
ENTITY      ACCOUNT     INTERCO      MEASURES
Company1    Account 1   NO_INTERCO   100
Company1    Account 1   Company2     20
Target: when saving the intercompany flow data of Form 2, the system automatically reduces the 'NO_INTERCO' values
ENTITY      ACCOUNT     INTERCO      MEASURES
Company1    Account 1   NO_INTERCO   80
Company1    Account 1   Company2     20
We created an Excel formula on the 'NO_INTERCO' record: total amount - sum of intercompany flows = 100 - 20 = 80.
But after saving the data, the Excel formula disappears.
We changed the sheet options / refresh: we selected the following options:
Keep formula on data
Keep formulas static that reference report cells
But after saving the data, the Excel formula still disappears.
Do you have a solution for reducing the 'NO_INTERCO' value to 80?
Bastien

Hi Bastien,
Sorry, but this is absolutely not clear. Can you show simple screenshots of the input forms?
Vadim
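For reference, Bastien's intended netting rule (NO_INTERCO = the Form 1 total minus the sum of the Form 2 intercompany flows) is plain arithmetic, sketched below as a minimal Java illustration. The class and method names are invented; in a real BPC NW model an adjustment like this would typically be done in script/default logic on save rather than with worksheet formulas, which (as observed) do not survive a data save.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of the netting rule from the question above.
public class IntercoNetting {

    // Residual NO_INTERCO amount for one entity/account:
    // total balance (Form 1) minus the sum of intercompany flows (Form 2).
    static double noInterco(double totalBalance, Map<String, Double> intercoFlows) {
        double flowSum = intercoFlows.values().stream()
                                     .mapToDouble(Double::doubleValue)
                                     .sum();
        return totalBalance - flowSum;
    }

    public static void main(String[] args) {
        Map<String, Double> flows = new LinkedHashMap<>();
        flows.put("Company2", 20.0);                 // Form 2: Company1 / Account 1 / Company2 / 20
        System.out.println(noInterco(100.0, flows)); // Form 1 total was 100, so 100 - 20 = 80
    }
}
```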

Similar Messages

  • Monitor the data collection

How can we know if no data is being collected for a server on a specific monitor item/performance rule? Could an alert be generated if there is no data collection from the server?

First of all, SCOM has no built-in mechanism to keep track of the functionality of performance data collection. The SCOM agent stores performance data in a local data store, C:\Program Files\Microsoft Monitoring Agent\Agent\Health Service State\Health Service Store\HealthServiceStore.edb, and then pushes it into the SCOM DB. Once the data is transmitted to the SCOM DB, it is cleared locally. As a result, it is not feasible to recover performance data from the client side.
1) No data collected: usually, a grey agent under discovered inventory indicates that the agent has lost its connection to the management server, so performance data cannot be pushed into the SCOM DB.
2) Database corrupt: I think you should recover from your backup; the SCOM agent will store its performance data locally until the DB is recovered.
    Roger

  • Data collection was switched from an AI Config task writing to an hsdl file to synchronized DAQmx tasks logging to TDMS files. Why are different readings produced for the same test?

    A software application was developed to collect and process readings from capacitance sensors and a tachometer in a running spin rig. The sensors were connected to an Aerogate Model HP-04 H1 Band Preamp connected to an NI PXI-6115. The sensors were read using AI Config and AI Start VIs. The data was saved to a file using hsdlConfig and hsdlFileWriter VIs. In order to add the capability of collecting synchronized data from two Eddy Current Position sensors in addition to the existing sensors, which will be connected to a BNC-2144 connected to an NI PXI-4495, the AI and HSDL VIs were replaced with DAQmx VIs logging to TDMS. When running identical tests, the new file format (TDMS) produces reads that are higher and inconsistent with the readings from the older file format (HSDL).
The main VIs are SpinLab 2.4 and SpinLab 3.8 in folders "SpinLab old format" and "Spinlab 3.8" respectively. SpinLab 3.8 requires the Sound and Vibration suite to run correctly, but it is used after the part that is causing the problem. The problem is occurring during data collection in the Logger segment of code or during processing in the Reader/Converter segment of code. I could send the readings from the identical tests if they would be helpful, but the data takes up approximately 500 MB.
    Attachments:
SpinLab 3.8.zip 1509 KB
SpinLab 2.4.zip 3753 KB
SpinLab Screenshots.doc 795 KB

    First of all, how different is the data?  You say that the reads are higher and inconsistent.  How much higher?  Is every point inconsistent, or is it just parts of your file?  If it's just in parts of the file, does there seem to be a consistent pattern as to when the data is different?
    Secondly, here are a couple things to try:
Currently, you are not calling DAQmx Stop Task outside of the loop; you're just calling DAQmx Clear Task.  This means that if there were any errors that occurred in the logging thread, you might not be getting them (as DAQmx Clear Task clears outstanding errors within the task).  Add a DAQmx Stop Task before DAQmx Clear Task to make sure that you're not missing an error.
    Try "Log and Read" mode.  "Log and Read" is probably going to be fast enough for your application (as it's pretty fast), so you might just try it and see if you get any different result.  All that you would need to do is change the enum to "Log and Read", then add a DAQmx Read in the loop (you can just use Raw format since you don't care about the output).  I'd recommend that you read in even multiples of the sector size (normally 512) for optimal performance.  For example, your rate is 1MHz, perhaps read in sizes of 122880 samples per channel (something like 1/8 of the buffer size rounded down to the nearest multiple of 4096).  Note: This is a troubleshooting step to try and narrow down the problem.
    Finally, how confident are you in the results from the previous HSDL test?  Which readings make more sense?  I look forward to hearing more detail about how the data is inconsistent (all data, how different, any patterns).  As well, I'll be looking forward to hearing the result of test #2 above.
    Thanks,
    Andy McRorie
    NI R&D

  • Utility data collection job Failure on SQL server 2008

    Hi,
I am facing a data collection job failure issue (Utility Data Collection) on a SQL Server 2008 server. Below is the error message:
<service Name>. The step did not generate any output.  Process Exit Code 5.  The step failed.
The job name is collection_set_5_noncached_collect_and_upload. From searching Google, the issue seems related to a permission problem, but where exactly are the access issues coming from? This job is running under a proxy account. Thanks in advance.

    Hi Srinivas,
Based on your description, you encounter the error message after configuring data collection in SQL Server 2008. For further analysis, could you please help to collect detailed log information? You can check the job history to find the error log around the issue, as is mentioned in this article. Also, please check the Data Collector logs by right-clicking on Data Collection in the Management folder and selecting View Logs.
In addition, as you posted, exit code 5 is normally an 'Access is denied' code. Thus, please make sure that the proxy account has admin permissions on your system, and ensure that the SQL Server service account has rights to access the cache folder.
    Thanks,
    Lydia Zhang

  • Cisco Works - Campus Data Collection

    Cisco Works - Common Services 3.0.3, Campus Manager 4.0.3, RME 4.0.3
I have devices that are "discovered" via Device Discovery but do not show up in Campus Data Collection. The devices that do not show up are "reachable" in Device Discovery. I thought all devices in Discovery that are reachable were sent to Data Collection? I have no filters in Data Collection. It should allow anything.
    Any ideas why Data Collection is not importing those devices?

You could check the Discrepancy reports in Campus Manager; maybe they could help you. It could be a duplicate hostname or IP address.
It happened to me, and it was because I had a duplicate hostname.

  • Hyperion Planning form timeout via Open Form under SV Data Source Manager

A power user can't open one particular big Planning form in Excel. It takes about 30 seconds and then gets a net retry interval timeout error. The same user can ad-hoc (one menu option up in the SV Data Source Manager) the form in question and can open the other (smaller) forms successfully. Logged into the user's machine, I can open the form in question. Logged into my machine, the user can open the form in question.
I have already increased the timeouts in the user's browser to 2 minutes and reinstalled his Smart View client.
Seems like: if I can do it on his machine - it's not his machine;
AND if he can do it on my machine - it's not his rights/provisioning;
AND if he can get at the same data one menu option up, still using Smart View - again, not his rights or an error in his Smart View.
Totally stumped.

    Hmmm
    We had some timeout issues in Smart View in the initial stages of our deployment (although running 11.1.2.1), which sound similar to those you are experiencing.
Adding the following registry settings to client machines sorted the issues for us; it can't hurt to try these with your user if not applied already.
    Path:
HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings
    Three ‘DWORD Values’ to add:
    •     KeepAliveTimeout
    •     ServerInfoTimeout
    •     ReceiveTimeout
    Value for each as 900000, with a Base setting of ‘Decimal’.
    JB

  • [JAVA] How to input data collected in a table

    Hello!
I'm writing a program that monitors the signal of a sensor in a chart. Now I want to insert the data collected by the sensor into a table using a JTable. The problem is that when I run the program, the data does not appear in the table.
Where did I go wrong?
    Here's part of the source code.
    ArrayList<Data> datiArray = new ArrayList<Data>();
    DataTableModel dtm = new DataTableModel(datiArray);

    public class DataTableModel extends AbstractTableModel {
        private ArrayList<Data> datiArray;    // <---- the table model holds the collection
        private String colName[] = {"Time", "G-value"};

        public DataTableModel(ArrayList<Data> datiArray) {
            this.datiArray = datiArray;
        }

        public void addData(Data d) {
            datiArray.add(d);
            int row = datiArray.size() - 1;
            fireTableRowsInserted(row, row);   // notify the JTable that a row was added
        }

        public String getColumnName(int col) {
            return colName[col];
        }

        public int getRowCount() {
            return datiArray.size();
        }

        public int getColumnCount() {
            return 2;
        }

        public boolean isCellEditable(int row, int col) {
            return false;
        }

        public Class getColumnClass(int c) {
            return (c == 0) ? Long.class : Double.class;
        }

        public Object getValueAt(int row, int column) {
            Data d = datiArray.get(row);
            switch (column) {
                case 0: return d.getTime();    // was dati.getTime() -- 'dati' does not exist here
                case 1: return d.getGvalue();
            }
            return null;
        }
    }

    private class Data {
        public long time;
        public double gvalue;

        public Data(long time, double gvalue) {
            this.time = time;                  // was this.tempo = tempo -- wrong field name
            this.gvalue = gvalue;
        }

        public long getTime() { return time; }
        public double getGvalue() { return gvalue; }
    }

    RecordButtonAction
        private void recordButtonActionPerformed(java.awt.event.ActionEvent evt) {
            int i = 0;
            int j = graphView.getSampleTime();
            int k = graphView.getIndexMax();
            System.out.println(j);
            System.out.println(k);
            while (i < k) {
                Data dr = new Data(graphView.getTime(i), graphView.getGvalue(i));
                // datiArray.add(dr);  // do not add directly -- go through the model so the table is notified
                dtm.addData(dr);
                // these println calls check whether the values actually reached Data and DataTableModel
                System.out.print(graphView.getTime(i));
                System.out.println("\t" + graphView.getGvalue(i));   // was "/t" -- wrong escape sequence
                System.out.println(dr.getTime());
                System.out.println(dtm.getValueAt(dtm.getRowCount() - 1, 1));  // i skips by j, so index the last row
                i = i + j;
            }
            readyRecord = false;
        }
    Sorry for my bad English.
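One quick way to narrow this down is to verify, headlessly, that the model fires a TableModelEvent when a row is added: a JTable only repaints when its model notifies listeners, so a missing fireTableRowsInserted call produces exactly the symptom described. The sketch below uses invented names and is not the poster's actual project:

```java
import java.util.ArrayList;
import javax.swing.table.AbstractTableModel;

public class ModelCheck {
    static class Row { long t; double g; Row(long t, double g) { this.t = t; this.g = g; } }

    static class RowModel extends AbstractTableModel {
        private final ArrayList<Row> rows = new ArrayList<>();
        public void addRow(Row r) {
            rows.add(r);
            int i = rows.size() - 1;
            fireTableRowsInserted(i, i);   // without this call a bound JTable never updates
        }
        public int getRowCount() { return rows.size(); }
        public int getColumnCount() { return 2; }
        public Object getValueAt(int r, int c) { return c == 0 ? rows.get(r).t : rows.get(r).g; }
    }

    public static void main(String[] args) {
        RowModel m = new RowModel();
        final int[] events = {0};
        m.addTableModelListener(e -> events[0]++);   // count change notifications
        m.addRow(new Row(1L, 9.81));
        System.out.println(m.getRowCount() + " " + events[0]);   // one row, one event
    }
}
```

If the event count stays at zero in a test like this, the table model is the problem; if it fires, check that the JTable was actually constructed with the same model instance (`new JTable(dtm)`).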

    Please don't cross-post the same question in multiple forums. Especially since you've already been given directions as to how to ask the question smartly.

  • Synchronous data collection using pci-6143's

    I can set up synchronous data collection across the analog inputs of my three PCI-6143's using a separate task for each board and explicitly sharing the master timebase from board 1 (the one receiving the trigger to start data collection) to the other 2.  Then I need 3 read channel VI's etc. 
    The DAQ Assistant will configure all the AI channels to work inside one task across the three boards, which is very convenient, but I lose the synchronicity.  Specifically, the device triggering the data collection (board 1), leads the other two boards by a few microseconds.  How can I use a single task for all three boards for analog input (voltages) while retaining completely synchronous data collection?  Thanks!

    Hi Brian_g,
You should be able to synchronize your SMIO cards by including them in the same task this way. You will have to type in the names, i.e. "Dev1/ai0:7, Dev2/ai0:7, Dev3/ai0:7", and still specify the start trigger off of your master device. I would work from the "Cont Acq & Graph Int Clk.vi" example and add in the digital trigger.
    Please post back if this does not resolve your issue or I didn't answer your question.
    Cheers,
    Andrew S.
    National Instruments
    Getting Started with NI-DAQmx
    Measurement Fundamentals

  • Calculations for data collection

    Hi,
In a 'formula' type parameter for data collection, the first time the operator enters values, the formula parameter does the calculation and gives results.
However, if the same data is edited later on, the calculation does not take place and the operator has to calculate and update the field himself.
For example: P1 and P2 are 2 parameters under 1 DC group, and
P3 = P1 + P2.
When the operator enters values for P1 and P2 for the first time, P3 gets calculated. However, if the values for P1 and P2 are later edited through the 'data collection edit' activity, the P3 calculation does not change and the operator has to calculate it and re-enter it in the P3 field.
This is my understanding; please correct me if these calculations do take place automatically whenever the base values are edited.
    Thanks.
    Regards
    Mansi
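The behaviour Mansi expects (the derived parameter following every edit of its base values) amounts to recomputing the formula on each write. A minimal, hypothetical Java sketch of that idea, with invented names and no relation to how the data collection engine is actually implemented:

```java
// Sketch of a DC group where P3 = P1 + P2 is recomputed on every edit.
public class DcGroup {
    private double p1, p2, p3;

    public void setP1(double v) { p1 = v; recompute(); }   // editing a base value
    public void setP2(double v) { p2 = v; recompute(); }   // always refreshes P3
    private void recompute() { p3 = p1 + p2; }             // the formula: P3 = P1 + P2
    public double getP3() { return p3; }

    public static void main(String[] args) {
        DcGroup g = new DcGroup();
        g.setP1(2); g.setP2(3);
        System.out.println(g.getP3());   // after initial entry
        g.setP1(4);                      // later edit of a base value
        System.out.println(g.getP3());   // derived value follows automatically
    }
}
```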

    nimaq,
One of my more astute coworkers (a know-it-all, so he thinks, hehe) reminded me that we do sell a PCMCIA/CardBus PXI chassis controller and an ExpressCard PXI controller (if you have a computer with an ExpressCard slot).
    Cardbus controller link...
    NI PXI-CardBus8310
    ExpressCard controller link...
    NI PXI-ExpressCard8360
    If you buy a PXI controller for your Laptop, a PXI Chassis, and a PXI digital frame grabber, then you will have a system that allows you to grab images from your digital camera via your Laptop.  If you have not yet purchased a Laptop, I highly recommend you purchase a Laptop with an ExpressCard slot (they have much higher bandwidth) which you will want for frame grabbing.
    Below is a Link to a 4 slot PXI Chassis...
    NI PXI-1031
    Below is a Link to a Digital PXI Frame grabber...
    NI PXI-1422
    Buying these 3 products (PXI-ExpressCard8360, PXI-1031, and the PXI-1422) and the proper laptop, will give you a top of the line digital camera acquisition system.
    Lorne Hengst
    Application Engineer
    National Instruments

  • Sluggish data collection will log but not plot

Please be gentle with another newbie here. Unfortunately, I am stuck using LV6 on a Windows XP machine, so I am limited in some of the options I have to control data logging and event structures. However, I have done the best I can for the application with what I have learned of LabVIEW. I am trying to set up a multichannel, continuous (long-term) data collection from the serial port which will send the data to a chart and a log. I have tried to build my own event structure to tell it to collect faster if there is a change in the data value and to collect slower when the change in the data is minimal (based on the mean values).
    Any ideas on why this is running so sluggishly and not charting?
    Thanks for all input and help!!
    Attachments:
    4 Channel Temp Monitor_latest.vi ‏1170 KB

    Some things I see.
1.  You are setting a lot of properties for the charts on every iteration, along with property nodes for some other controls, particularly the ones involving scaling.  These cause the UI to update often.  I would recommend only writing values to property nodes when something changes.  If you can use an event structure, great.  If not, just compare the old value to the new value and only write out values if they are different.
2.  I can't tell if anything controls the speed of your main while loop.  You might want to put in a small wait statement.
3.  Don't open and close the serial port on every iteration.  You are actually doing it several times within an iteration.  Open it once, Read and Write to it in a loop, and close the port when the program ends after the loop.
4.  Some of the stacked sequence structures seem suspect.  Some are using dequeues from the same queue in every frame, only to OR all the data together at the end.  It seems like a For Loop would be a better choice.
5.  Do all your graphs need to be single representation?  Make them double.  You can also avoid the coercion dot conversion from double to single in your Scan from String functions if you wire a single representation constant into the type terminal of the Scan from String function.
    I'm sure there are more things that could be fixed, but I really suspect #1 and #2 as the main problems as to why your code seems sluggish.

  • Why does iTunes U site manager log me out when returning to collection manager?

    So here's the problem ... kind of a weird thing has been happening lately with Site Manager.  When I click on the feed of a collection to interact with the actual content (change the order, correct a typo etc.) and then click to return to collections manager, the site logs me out.  When I log back in, it won't let me go back into the collection and gives an error message saying that the collection is unavailable because someone is already logged in to it.  When I wait an hour or so, I can access the collection feed again but once I return to collections manager, I get logged out.  I have tried this on different versions of Safari and in Lion with the same results.  It is just in the last couple of weeks that this has been happening.  Any ideas?

    You are welcome, WonderProfessor.
Updating iTunes may not be the reason at all. It could very well be that the Support Team has fixed the problem, which I think is the case.
Also, I learned I should report EACH problem to the iTunes U Support Team using its Report Form:
https://ssl.apple.com/support/itunes-u/public-site-manager/contact.html
I used the Report Form a couple of months ago to report a different problem. One tech support person responded, and I have been emailing that individual about a few more issues since then. This, however, is NOT a good practice. Reporting each problem using the report form will help the iTunes U Support Team prioritize the work based on urgency.
Anyway, I am relieved it is working now. Thank you. And thanks to the iTunes U Support Team and iTunes U Editors.
    Q. Wang

  • What are the Disadvantages of Management Data Warehouse (data collection) ?

    Hi All,
We are planning to implement Management Data Warehouse on production servers.
Could you please explain the disadvantages of Management Data Warehouse (data collection)?
Thanks in advance,
Tirumala

    >We are plan to implement Management Data Warehouse in production servers
    It appears you are referring to production server performance.
BOL: "You can install the management data warehouse on the same instance of SQL Server that runs the data collector. However, if server resources or performance is an issue on the server being monitored, you can install the management data warehouse on a different computer."
    Management Data Warehouse
    Kalman Toth Database & OLAP Architect
    SQL Server 2014 Database Design
    New Book / Kindle: Beginner Database Design & SQL Programming Using Microsoft SQL Server 2014

  • Data Collection in BPC

    Hi,
How does Data Collection (UCMON tcode) work in BPC? Do we have any standard Data Manager Package to do the Data Collection? How can we achieve this?
    Would appreciate your time and response.
    Thanks.

    Hi,
First, I am not sure what the tcode that you mentioned does. There is no such tcode in BPC as far as I am aware.
Regarding data collection into a BPC application (in other words, a cube):
You could take a flat file approach - upload the flat file into the BPC file service using 'upload data file', and then, using the standard import data manager package, import the data into the BPC application. You need to define and use the BPC transformations and conversions (if any) in the import process. These are nothing but files.
Alternatively, you could also load the data directly from a BW InfoCube or even from a MultiProvider. There is a separate data manager package to do this.
You could also collect data into BPC through input schedules.
You have the advantage of triggering the default logic while collecting the data, which would apply your business logic to the data before it is saved to the database.
Hope the above gives you some insight.
    Thanks

  • Report Manager - Data collection

    Hi,
    I'm having problems with data collection on a number of servers.
Essentially, I've configured data logging on a number of servers by creating a job in the 'Manage jobs' window, using the task type 'data logging' and running it against a number of hosts.
Unfortunately, when I look at the 'data collection' window via Report Manager, a number of host entries have an exclamation mark attached, which implies that data has not been collected. If I look at the attribute editor for one of the logged entries on an affected server, the history tab clearly states that 'Save History as Disk_File' is ticked.
    Any ideas?

I have also seen this issue. Some of these issues were fixed by checking DNS. For data collection to work, both forward and reverse DNS must work for the agent. If there is a mismatch or a missing DNS entry, it doesn't work.
If there is a mismatch in DNS, sometimes a re-discovery is required.

  • Data Collection in Campus Manager hung

I am running LMS 3.2 and Campus Manager 5.2.1.  My data collection in Campus Manager for my devices does not complete and runs indefinitely.  It has been hung in the "running" state for the past week and will not complete.  The only way I found to get it out of the "running" state is to restart the ANIServer service and reboot the server.  I have had this problem since I upgraded to LMS 3.2.  Does anyone have any input on this issue or possible solutions?

The data collection hung again as expected over the weekend.  The ANI.log file grew to 1.8 GB again.  It is too large for me to open.  I could try to split it up into smaller files.  What should be my next course of action?
