Chart data set size

I only occasionally use Numbers '08. When I make a bar chart from a data set, the size of the data set seems to be limited: after a certain size, if I add one more data value to the data set, the chart displays the new item but no longer includes the first item in the set. Is there a solution to this?

DavidLewis wrote:
I only occasionally use Numbers '08. When I make a bar chart from a data set, the size of the data set seems to be limited: after a certain size, if I add one more data value to the data set, the chart displays the new item but no longer includes the first item in the set. Is there a solution to this?
I didn't notice this problem with a data set that started with 10 items and then had 15 more added to it.
Expanding that to a bar graph starting with about 40 items in the data set, then adding items to a total of 362 was a slow process, but did not exhibit the problem you describe.
The chart is a bar graph showing values of Sine(n°) for integer values of n from 0 to 361.
How many items are in your set?
How are you adding one more data value to the data set to be included in the chart?
Regards,
Barry

Similar Messages

  • How to determine binary file data set size

    Hi all
    I am writing specific sets of array data to a binary file, appending each time so the file grows by one data set for each write operation.  I use the set file position function to make sure that I am at the end of the file each time.
    When I read the file, I want to read only the last 25 (or some other number of) data sets. To do this, I figured on using the Set File Position function to place the file position 25 data sets back from the end. Easy math, right? Apparently not.
    Well, as I have been collecting file size data during the initial test run, I am finding (using the file size command, which returns the number of bytes) that the file is not growing by the same amount every time. The size and format of the data being written are identical each time: an array of four double-precision numbers.
    The increments I get are as follows: after the 1st write, 44 bytes; after the 2nd, 52 bytes; 3rd, 52 bytes; 4th, 44 bytes; 5th, 52 bytes; 6th, 52 bytes; 7th, 44 bytes; and it appears to maintain this pattern going forward.
    Why would each write operation not add an identical number of bytes? This means that my basic math for determining the correct file position to read only the last 25 data sets will not be simple, and what if, somewhere along the line after I have accumulated hundreds or thousands of data sets, the pattern changes?
    Any help on why this is occurring, or on a method of working around the problem, would be much appreciated.
    Thanks
    Doug
    "My only wish is that I am capable of learning each and every day until my last breath."

    I have stripped the DSC module functions out of the VI and attached it here. I also set default values on all the inputs so it will run with no other inputs. I have included my current data files as well (zipped, as I have four of them), though the file names are hard-coded in the VI, so they can be changed to whatever works locally; in fact, the path will probably have to be modified anyway.
    If you point to a path that has no file, it will create a new one on the first run and the file size will show zero since there is no data in it. It will start to show the changes on each subsequent run.
    As I am creating and appending four different files, each with its own set of data but always the same format (an array of four double-precision numbers), and the file size always increments the same way for all four files (as can be seen in the File Size Array), I don't think it is a function of the size of the actual numbers but rather some idiosyncrasy in how the binary file is created.
    If this proves to be a major hurdle I guess I could try a TDM file but I already got everything else working with this one and need to move on to other tasks.
    Thanks for any continued assistance
    Doug
    "My only wish is that I am capable of learning each and every day until my last breath."
    Attachments:
    !_Data_Analysis_Simple.vi (40 KB)
    SPC.zip (2 KB)
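    If every data set really is a fixed number of bytes on disk (four doubles = 32 bytes, with nothing else written per record), then reading the last 25 data sets is just a seek back from the end of the file. Here is a minimal sketch of that arithmetic in Python (not LabVIEW); the record layout and file name are assumptions for illustration, not Doug's actual VI:
        import os
        import struct

        RECORD_FMT = "<4d"                          # assumed: four little-endian doubles per data set
        RECORD_SIZE = struct.calcsize(RECORD_FMT)   # 32 bytes when no per-write header is added

        def read_last_records(path, count=25):
            """Return up to `count` most recent records from a fixed-size binary log."""
            with open(path, "rb") as f:
                total = os.path.getsize(path) // RECORD_SIZE     # whole records in the file
                f.seek(max(total - count, 0) * RECORD_SIZE)      # jump back `count` records from the end
                return [struct.unpack(RECORD_FMT, f.read(RECORD_SIZE))
                        for _ in range(min(count, total))]

        # print(read_last_records("spc_channel1.bin"))           # hypothetical file name
    The 44- and 52-byte increments reported above mean that something beyond the raw 32 bytes of data is being written on each pass (per-write framing of some kind), so either that extra framing has to be eliminated, or each record has to be written at a known, constant length, before this kind of positional math can work.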

  • Header data comes from the Data Set; issue if the Data Set is sorted

    Currently in our Data Set, the 1st line contains unique fields for the HEADER, for example "address of the user site".
    If we sort the Data Set, the value on the 1st line is empty, so the header data is empty.
    What would be the best way to solve this problem:
      - Duplicate the value on all lines? We want to avoid this solution to keep the Data Set size down.
      - Can Crystal Reports fields have a specific set-up to define that they are for the header, i.e. a unique value?
    Remark: our Data Set is a single table, i.e. there is no table link, because that is not possible with the database we are extracting values from.
    Thanks in advance

    Hi Alexandre
    The best way to display the header, i.e. the unique line, would be to write a formula. Write a formula that extracts the header line and drag this formula field onto your report.
    Hope it helps!!
    Regards
    Sourashree
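    In other words, the formula's job is to pick up the single non-blank value of the header field wherever sorting has moved that row, rather than relying on the physical first record. A rough sketch of that idea in Python (the field name and sample rows are invented for illustration; in Crystal itself this would be a formula, or a summary such as Maximum over the field, placed in the report header):
        # The flattened data set: only one row carries the header-only field.
        rows = [
            {"address_of_user_site": "", "value": 12},
            {"address_of_user_site": "", "value": 7},
            {"address_of_user_site": "12 Main St.", "value": 3},  # the original 1st line, moved by the sort
        ]

        def header_value(rows, field):
            """Return the single non-blank value of a header-only field, wherever it ended up."""
            return next((row[field] for row in rows if row[field]), "")

        print(header_value(rows, "address_of_user_site"))         # -> 12 Main St.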

  • Dynamic Calc processor cannot lock more than [100] ESM blocks during the calculation, please increase CalcLockBlock setting and then retry(a small data cache setting could also cause this problem, please check the data cache size setting).

    Hi,
    Our environment is Essbase 11.1.2.2, working with the Essbase, EAS, and Shared Services components. One of our users tried to run the calc script of one application and hit this error:
    Dynamic Calc processor cannot lock more than [100] ESM blocks during the calculation, please increase CalcLockBlock setting and then retry(a small data cache setting could also cause this problem, please check the data cache size setting).
    I did some Googling and found that we need to add something to the essbase.cfg file, like below.
    1012704 Dynamic Calc processor cannot lock more than number ESM blocks during the calculation, please increase CalcLockBlock setting and then retry (a small data cache setting could also cause this problem, please check the data cache size setting).
    Possible Problems
    Analytic Services could not lock enough blocks to perform the calculation.
    Possible Solutions
    Increase the number of blocks that Analytic Services can allocate for a calculation:
    - Set the maximum number of blocks that Analytic Services can allocate to at least 500:
      - If you do not have an $ARBORPATH/bin/essbase.cfg file on the server computer, create one using a text editor.
      - In the essbase.cfg file on the server computer, set CALCLOCKBLOCKHIGH to 500.
      - Stop and restart Analytic Server.
      - Add the SET LOCKBLOCK HIGH command to the beginning of the calculation script.
    - Set the data cache large enough to hold all the blocks specified in the CALCLOCKBLOCKHIGH setting:
      - Determine the block size.
      - Set the data cache size.
    Actually, in our server config file (essbase.cfg) we don't have the settings below added:
    CalcLockBlockHigh 2000
    CalcLockBlockDefault 200
    CalcLockBlocklow 50
    So my question is: if we edit the essbase.cfg file, add the above settings, and restart the services, will it work? And if so, why should we change the server config file if the problem is with one application's calc script? Please guide me on how to proceed.
    Regards,
    Naveen

    Your calculation needs to hold more blocks in memory than your current set up allows.
    From the docs (quoting so I don't have to write it, not to be a smarta***):
    CALCLOCKBLOCK specifies the number of blocks that can be fixed at each level of the SET LOCKBLOCK HIGH | DEFAULT | LOW calculation script command.
    When a block is calculated, Essbase fixes (gets addressability to) the block along with the blocks containing its children. Essbase calculates the block and then releases it along with the blocks containing its children. By default, Essbase allows up to 100 blocks to be fixed concurrently when calculating a block. This is sufficient for most database calculations. However, you may want to set a number higher than 100 if you are consolidating very large numbers of children in a formula calculation. This ensures that Essbase can fix all the required blocks when calculating a data block and that performance will not be impaired.
    Example
    If the essbase.cfg file contains the following settings:
    CALCLOCKBLOCKHIGH 500
    CALCLOCKBLOCKDEFAULT 200
    CALCLOCKBLOCKLOW 50
    then you can use the following SET LOCKBLOCK setting commands in a calculation script:
    SET LOCKBLOCK HIGH; 
    means that Essbase can fix up to 500 data blocks when calculating one block.
    The support doc is saying to change your config file so that those settings are available for any calc script to use.
    On a side note, if this was working previously and now isn't, it is worth investigating whether this is simply due to normal growth or to a recent change that has had an unexpectedly large impact.
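    For the "set the data cache large enough" step that the doc glosses over, here is a rough back-of-envelope calculation, assuming the usual BSO rule of thumb that an expanded block is (product of stored members across the dense dimensions) x 8 bytes; the dimension sizes below are placeholders, not Naveen's actual outline:
        # Placeholder dense-dimension sizes (stored members only) -- substitute the real outline's numbers.
        dense_stored_members = [12, 40]        # e.g. Periods x Accounts
        calclockblock_high = 500               # planned CALCLOCKBLOCKHIGH value

        block_size_bytes = 8                   # 8 bytes per cell
        for members in dense_stored_members:   # expanded block = product of dense member counts x 8 bytes
            block_size_bytes *= members

        min_data_cache = block_size_bytes * calclockblock_high
        print(f"block size ~{block_size_bytes / 1024:.1f} KB; "
              f"data cache should hold at least ~{min_data_cache / 1024 / 1024:.1f} MB")
    And remember that the cfg entries only define what HIGH, DEFAULT and LOW mean; a calc script still has to ask for the higher limit with the SET LOCKBLOCK HIGH; command quoted above.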

  • Record set size in Gantt Chart

    Hi all,
    I am developing an application that has a Gantt Chart.
    The tree drawn is a 2 level tree. I have two VOs.
    The second VO (the child) has more than 20 rows. I want to show only 10 children at once.
    The record set size attribute is the one used to restrict the number of nodes displayed, just like in HGRID.
    But when the graph is rendered, the values I give are ignored and the entire graph is rendered.
    Any pointers or suggestions on this will be really helpful.
    Regards,
    Santhosh.

    Hi Santhosh,
    As Anand suggested, add the "rownum <= 10" condition to limit the record set.
    This condition should be set in the View Link SQL (the query's WHERE clause).
    I hope this will work.
    Thanks
    Renu

  • How to get UI5 Chart selected data set index

    Hi experts,
    I am facing a problem with a UI5 bar chart: I am not able to get the data set index of the selected bar on the chart.
    I am using the method below, but it is not returning any result.
    ObjectName.getChartObject().getSelectedDataSetIndices()
    Please suggest any other methods available to get the selected index.
    Thanks & Regards
    Venkat

    Hi Venkat,
    You can get the legend index and y-value by using the below code in the event handler that you have registered for ChartSelectionEvent:
         var measureIndex = event.srcElement.__data__.ctx.path.mi + 1; // hack for getting the legend index
         alert(measureIndex);
         var selPoint = c.getChartObject().getSelectedPoint();         // c is the chart control, as ObjectName above
         alert(selPoint); // selected bar index
         var yValue = c.getChartObject().getYDataValueAt(measureIndex, selPoint);
         alert(yValue);
    Using the legend index you should be able to get the legend name.
    Please note that the part of the above code that gets the legend index is a hack (it is not in the i5Chart documentation/reference).
    Hope this helps!
    Regards,
    Ria

  • Three data sets in a single Flex chart?

    Hello everyone;
    I've successfully integrated two data sets into a single Flex chart; now I'm wondering how to bring in another one. Here's an example of what I'd want to do. Say I wanted to chart average lifespan and average household income by US state. So I'm imagining each state could be represented by a pair of bar charts - one representing lifespan and using the left y-axis, and one representing income and using the right y-axis. The third component would be the state, which the user should be able to display or hide. I'm imagining states being represented by color and a label on the x-axis, and user controls to display more or fewer years.
    Any chance I could actually pull this off?
    Thanks for any help.

    I'm not entirely certain where the conflict from adding states would appear, but I'm a little unclear on your data model. Have you considered putting all the information into a single data set? I would suggest filling an ArrayCollection with a class like this:
    public class infoHolder {
        public var lifespan:int;
        public var salary:int;
        public var stateName:String;
    }
    Then you can simply fill your chart like this:

  • Regarding the data packet size

    Hi,
    I have two questions:
    1) Where can I set the data packet size in BW and R/3? And in BI, how and where can we set it?
    2) By what method/logic do we select the key fields in a DSO?
    Example: I have 5 tables in the source, and each table will have primary keys. Now how do we know which particular primary keys should be kept in the KEY FIELDS folder in the DSO?
    Full points will be assigned.

    Hi,
    Data package settings for the data to be extracted from R/3 to BI can be made through:
    1) SBIW > General Settings > Maintain Control Parameters for Data Transfer.
        These settings are common to all the InfoPackages that extract data from R/3.
    2) If you want to make settings relevant to a specific InfoPackage:
    RSA1 > click on the specific InfoPackage > Scheduler (in the menu bar) --> DataS Default Data Transfer.
    3) And if you want to make DSO package settings:
    Go to transaction RSODSO_SETTINGS.
    Here you can make package settings for DSO activation, parameters for SID generation, etc.
    The selection of key fields depends on the requirement.
    Based on the key fields, the corresponding data fields are either overwritten or added.
    Regards,
    Chaitanya.

  • Download using open data set and close data set

    Can anybody please send some sample program using OPEN DATASET and CLOSE DATASET? The data should get downloaded to the application server.
    A very simple program is needed.

    Hi Arun,
    See the Sample code for BDC using OPEN DATASET.
    report ZSDBDCP_PRICING no standard page heading
    line-size 255.
    include zbdcrecx1.
    *--Internal Table To hold condition records data from flat file.
    Data: begin of it_pricing occurs 0,
    key(4),
    f1(4),
    f2(4),
    f3(2),
    f4(18),
    f5(16),
    end of it_pricing.
    *--Internal Table To hold condition records header .
    data : begin of it_header occurs 0,
    key(4),
    f1(4),
    f2(4),
    f3(2),
    end of it_header.
    *--Internal Table To hold condition records details .
    data : begin of it_details occurs 0,
    key(4),
    f4(18),
    f5(16),
    end of it_details.
    data : v_sno(2),
    v_rows type i,
    v_fname(40),
    v_str(255).                            "line buffer for READ DATASET
    parameters : p_fname(128) lower case.  "path of the file on the application server
    start-of-selection.
    refresh : it_pricing,it_header,it_details.
    clear : it_pricing,it_header,it_details.
    CALL FUNCTION 'UPLOAD'
    EXPORTING
    FILENAME = 'C:\WINDOWS\Desktop\pricing.txt'
    FILETYPE = 'DAT'
    TABLES
    DATA_TAB = it_pricing
    EXCEPTIONS
    CONVERSION_ERROR = 1
    INVALID_TABLE_WIDTH = 2
    INVALID_TYPE = 3
    NO_BATCH = 4
    UNKNOWN_ERROR = 5
    GUI_REFUSE_FILETRANSFER = 6
    OTHERS = 7.
    WRITE : / 'Condition Records ', P_FNAME, ' on ', SY-DATUM.
    OPEN DATASET P_FNAME FOR INPUT IN TEXT MODE.
    if sy-subrc ne 0.
    write : / 'File could not be uploaded.. Check file name.'.
    stop.
    endif.
    CLEAR : it_pricing[], it_pricing.
    DO.
    READ DATASET P_FNAME INTO V_STR.
    IF SY-SUBRC NE 0.
    EXIT.
    ENDIF.
    write v_str.
    translate v_str using '#/'.
    SPLIT V_STR AT ',' INTO it_pricing-key
    it_pricing-F1 it_pricing-F2 it_pricing-F3
    it_pricing-F4 it_pricing-F5 .
    APPEND it_pricing.
    CLEAR it_pricing.
    ENDDO.
    IF it_pricing[] IS INITIAL.
    WRITE : / 'No data found to upload'.
    STOP.
    ENDIF.
    loop at it_pricing.
    At new key.
    read table it_pricing index sy-tabix.
    move-corresponding it_pricing to it_header.
    append it_header.
    clear it_header.
    endat.
    move-corresponding it_pricing to it_details.
    append it_details.
    clear it_details.
    endloop.
    perform open_group.
    v_rows = sy-srows - 8.
    loop at it_header.
    perform bdc_dynpro using 'SAPMV13A' '0100'.
    perform bdc_field using 'BDC_CURSOR'
    'RV13A-KSCHL'.
    perform bdc_field using 'BDC_OKCODE'
    '/00'.
    perform bdc_field using 'RV13A-KSCHL'
    it_header-f1.
    perform bdc_dynpro using 'SAPMV13A' '1004'.
    perform bdc_field using 'BDC_CURSOR'
    'KONP-KBETR(01)'.
    perform bdc_field using 'BDC_OKCODE'
    '/00'.
    perform bdc_field using 'KOMG-VKORG'
    it_header-f2.
    perform bdc_field using 'KOMG-VTWEG'
    it_header-f3.
    **Table Control
    v_sno = 0.
    loop at it_details where key eq it_header-key.
    v_sno = v_sno + 1.
    clear v_fname.
    CONCATENATE 'KOMG-MATNR(' V_SNO ')' INTO V_FNAME.
    perform bdc_field using v_fname
    it_details-f4.
    clear v_fname.
    CONCATENATE 'KONP-KBETR(' V_SNO ')' INTO V_FNAME.
    perform bdc_field using v_fname
    it_details-f5.
    if v_sno eq v_rows.
    v_sno = 0.
    perform bdc_dynpro using 'SAPMV13A' '1004'.
    perform bdc_field using 'BDC_OKCODE'
    '=P+'.
    perform bdc_dynpro using 'SAPMV13A' '1004'.
    perform bdc_field using 'BDC_OKCODE'
    '/00'.
    endif.
    endloop.
    *--Save
    perform bdc_dynpro using 'SAPMV13A' '1004'.
    perform bdc_field using 'BDC_OKCODE'
    '=SICH'.
    perform bdc_transaction using 'VK11'.
    endloop.
    perform close_group.
    Hope this resolves your query.
    Reward all the helpful answers.
    Regards

  • Essbase Error:Set is too large to be processed. Set size exceeds 2^64 tuple

    Hi,
    We are using OBIEE 11.1.1.6 with Essbase 9.3.3 as a data source. When I try to run a report in OBIEE, I am getting the error below:
    State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 43113] Message returned from OBIS. [nQSError: 43119] Query Failed: [nQSError: 96002] Essbase Error: Internal error: Set is too large to be processed. Set size exceeds 2^64 tuples (HY000)
    But if I run the same query in the Excel add-in, I get just 20 records. I'm wondering why I am getting this error in OBIEE. Has anyone encountered the same issue?
    Thanks In advance,

    Well, if you want to export it, I think you have to do it manually.
    The workaround is to open your Aperture library by right-clicking it and choosing Show Contents...
    Then go into your project, right-click, Show Contents...
    In here there are subfolders named for the dates the pictures were added to those projects. If you open the subfolder and search for your picture's name, it should be in that main folder.
    You can just copy it out, as you would any normal file, to any other location.
    Voilà, you have manually exported your file.
    There is a very similar post that has been closed, but again, you can't export the original file that you are working on. FYI: http://discussions.apple.com/thread.jspa?threadID=2075419

  • Open data set

    Hi friends,
    I created a file (a .dat file) using OPEN DATASET, so the .dat file was created.
    So my question is: where is this file created?
    Will it be created on the application server?
    Because it is not created on my local drive.

    Hi,
    Yes.
    The file will be created on the application server.
    We use the DATASET concept for reading, writing, and transferring data between the local machine and application servers.
    See the sample code:
    ABAP code for uploading a TAB delimited file into an internal table. See code below for structures.
    *& Report  ZUPLOADTAB                                                  *
    *& Example of Uploading tab delimited file                             *
    REPORT  zuploadtab                    .
    PARAMETERS: p_infile  LIKE rlgrap-filename
                            OBLIGATORY DEFAULT  '/usr/sap/'.
    DATA: ld_file LIKE rlgrap-filename.
    *Internal tabe to store upload data
    TYPES: BEGIN OF t_record,
        name1 like pa0002-vorna,
        name2 like pa0002-nachn,
        age   type i,
        END OF t_record.
    DATA: it_record TYPE STANDARD TABLE OF t_record INITIAL SIZE 0,
          wa_record TYPE t_record.
    *Text version of data table
    TYPES: begin of t_uploadtxt,
      name1(10) type c,
      name2(15) type c,
      age(5)  type c,
    end of t_uploadtxt.
    DATA: wa_uploadtxt TYPE t_uploadtxt.
    *String value to data in initially.
    DATA: wa_string(255) type c.
    constants: con_tab TYPE x VALUE '09'.
    *If you have Unicode check active in program attributes then you will
    *need to declare constants as follows:
    *class cl_abap_char_utilities definition load.
    *constants:
    *   con_tab  type c value cl_abap_char_utilities=>HORIZONTAL_TAB.
    *START-OF-SELECTION
    START-OF-SELECTION.
    ld_file = p_infile.
    OPEN DATASET ld_file FOR INPUT IN TEXT MODE ENCODING DEFAULT.
    IF sy-subrc NE 0.
    ELSE.
      DO.
        CLEAR: wa_string, wa_uploadtxt.
        READ DATASET ld_file INTO wa_string.
        IF sy-subrc NE 0.
          EXIT.
        ELSE.
          SPLIT wa_string AT con_tab INTO wa_uploadtxt-name1
                                          wa_uploadtxt-name2
                                          wa_uploadtxt-age.
          MOVE-CORRESPONDING wa_uploadtxt TO wa_record.
          APPEND wa_record TO it_record.
        ENDIF.
      ENDDO.
      CLOSE DATASET ld_file.
    ENDIF.
    *END-OF-SELECTION
    END-OF-SELECTION.
    *!! Text data is now contained within the internal table IT_RECORD
    *Display report data for illustration purposes
      loop at it_record into wa_record.
        write:/     sy-vline,
               (10) wa_record-name1, sy-vline,
               (10) wa_record-name2, sy-vline,
               (10) wa_record-age, sy-vline.
      endloop.
    Reward points for useful Answers
    Regards
    Anji

  • Open Data Set Issue

    Hello,
    How can we set the length of the record in OPEN DATASET? I have an internal table with about 40 fields and I need to upload it to the application server. I could do it, but not all of the fields appear. How can I set the record size? Can anyone explain? Thanks.

    Hi Vicky,
    Not all fields show up in the file?
    Then either you are not passing all of the data, or you are not selecting all of the data into the internal table you are sending.
    Change your declaration to a string rather than a fixed-length character field.
    Follow the code below:
    DATA:  filepath TYPE string.
    OPEN DATASET filepath FOR OUTPUT IN BINARY MODE.
        LOOP AT l_xml_table INTO l_xml_rec.
         TRANSFER l_xml_rec TO filepath.   "transfer the current work area, one row per pass
        ENDLOOP.
        CLOSE DATASET filepath.

  • Open data set problem with binary mode

    Hi everyone,
    I am trying to read the content of a binary file (e.g. an Excel file) into a buffer and then write that content out to a destination folder using OPEN DATASET.
    The destination file has a different size from the original one and cannot be opened.
    Here is the code.
      DATA:  lv_file_name(100),
             lt_content LIKE sdokcntbin OCCURS 0 WITH HEADER LINE.
      MOVE '\computerTestA1.xls' TO lv_file_name.
      OPEN DATASET lv_file_name FOR INPUT IN BINARY MODE.
      IF sy-subrc NE 0.
        EXIT.
      ENDIF.
      DO.
        READ DATASET lv_file_name INTO lt_content-line.
        IF sy-subrc NE 0.
          EXIT.
        ENDIF.
        APPEND lt_content.
      ENDDO.
      CLOSE DATASET lv_file_name.
    *  DELETE DATASET lv_file_name.
      CLEAR: lv_file_name.
      MOVE '\computerTestB2.xls' TO lv_file_name.
      OPEN DATASET lv_file_name FOR OUTPUT IN BINARY MODE.
      IF sy-subrc NE 0.
        EXIT.
      ENDIF.
      LOOP AT lt_content.
        TRANSFER lt_content-line TO lv_file_name.
      ENDLOOP.
      CLOSE DATASET lv_file_name.
    THanks in advance.
    Regards.

    Hi,
    I solved the problem.
    In fact, when writing the last line of lt_content, the whole line was written even though it was not completely filled with data.
    Solution:
      DATA:  lv_file_name(100),
             lt_content LIKE sdokcntbin OCCURS 0 WITH HEADER LINE,
             alen TYPE i,
             pos TYPE i.
      MOVE '\mr0221TestA1.xls' TO lv_file_name.
      OPEN DATASET lv_file_name FOR INPUT IN BINARY MODE.
      IF sy-subrc NE 0.
        EXIT.
      ENDIF.
      SET DATASET lv_file_name POSITION END OF FILE.
      GET DATASET lv_file_name POSITION pos.
      SET DATASET lv_file_name POSITION 0.
      DO.
        CLEAR lt_content.
        READ DATASET lv_file_name INTO lt_content-line ACTUAL LENGTH alen .
        IF sy-subrc NE 0.
          APPEND lt_content.
          EXIT.
        ENDIF.
        APPEND lt_content.
      ENDDO.
      CLOSE DATASET lv_file_name.
    *  DELETE DATASET lv_file_name.
      CLEAR: lv_file_name.
      MOVE '\mr0221TestB2.xls' TO lv_file_name.
      OPEN DATASET lv_file_name FOR OUTPUT IN BINARY MODE.
      IF sy-subrc NE 0.
        EXIT.
      ENDIF.
      LOOP AT lt_content.
        AT LAST.
          TRANSFER lt_content-line TO lv_file_name LENGTH alen.
          EXIT.
        ENDAT.
        TRANSFER lt_content-line TO lv_file_name.
      ENDLOOP.
      CLOSE DATASET lv_file_name.
    Thanks.

  • How to use multiple Spry Data Sets in one page

    I'm using two Spry data sets in one page. When I add the first Spry data set to my page, everything runs OK. When I add the second Spry data set to the page, the first data set stops working. Does anyone know what the problem is?
    This is how I have my data sets listed.
    var ds1 = new Spry.Data.HTMLDataSet("/accounts/tower/list.php", "list");
    var ds2 = new Spry.Data.HTMLDataSet("/accounts/tower/numvisits.php", "chart");
    Thanks, let me know if you need more information.

    Good News!
    There is nothing wrong with what you have shown.
    Bad news!
    The problem could be in that part that you have not shown.
    Gramps

  • Large OLTP data set to get through the cache in our new ZS3-2 storage.

    We recently purchased a ZS3-2 and are currently attempting to do performance testing. We are using various tools -- Swingbench, vdbench, and dd -- to simulate load within our Oracle VM 3.3.1 cluster of five Dell M620 servers. The OVM repositories are connected via NFS. The Swingbench load-testing servers have a base OS disk mounted from the repos, plus NFS mounts via NFSv4 from within the VM (we would also like to test dNFS later in our testing).
    The problem I'm trying to get around is that the 256 GB of DRAM (a portion of which is used for the ARC) is large enough that my reads never touch the 7200 RPM disks. I'd like to create a data set large enough that the random reads cannot possibly be served from the ARC cache (note: we have no L2ARC at the moment).
    I've run something similar to this in the past, but have adjusted the "sizes=" to be larger than 50m. My thought here is that, if the ARC is up towards around 200 or so GB, and I create the following on four separate VMs and run vdbench on them at about the same time, it will be attempting to read more data than can possibly fit in the cache.
    * 100% random, 70% read file I/O test.
    hd=default
    fsd=default,files=16,depth=2,width=3,sizes=(500m,30,1g,70)
    fsd=fsd1,anchor=/vm1_nfs
    fwd=fwd1,fsd=fsd*,fileio=random,xfersizes=4k,rdpct=70,threads=8
    fwd=fwd2,fsd=fsd*,fileio=random,xfersizes=8k,rdpct=70,threads=8
    fwd=fwd3,fsd=fsd*,fileio=random,xfersizes=16k,rdpct=70,threads=8
    fwd=fwd4,fsd=fsd*,fileio=random,xfersizes=32k,rdpct=70,threads=8
    fwd=fwd5,fsd=fsd*,fileio=random,xfersizes=64k,rdpct=70,threads=8
    fwd=fwd6,fsd=fsd*,fileio=random,xfersizes=128k,rdpct=70,threads=8
    fwd=fwd7,fsd=fsd*,fileio=random,xfersizes=256k,rdpct=70,threads=8
    rd=rd1,fwd=fwd1,elapsed=900,interval=30,fwdrate=max,format=yes,pause=30,openflags=fsync
    rd=rd2,fwd=fwd2,elapsed=900,interval=30,fwdrate=max,format=yes,pause=30,openflags=fsync
    rd=rd3,fwd=fwd3,elapsed=900,interval=30,fwdrate=max,format=yes,pause=30,openflags=fsync
    rd=rd4,fwd=fwd4,elapsed=900,interval=30,fwdrate=max,format=yes,pause=30,openflags=fsync
    rd=rd5,fwd=fwd5,elapsed=900,interval=30,fwdrate=max,format=yes,pause=30,openflags=fsync
    rd=rd6,fwd=fwd6,elapsed=900,interval=30,fwdrate=max,format=yes,pause=30,openflags=fsync
    rd=rd7,fwd=fwd7,elapsed=900,interval=30,fwdrate=max,format=yes,pause=30,openflags=fsync
    However, the problem I keep running into is that vdbench's java processes will throw exceptions
    ... <cut most of these stats.  But suffice it to say that there were 4k, 8k, and 16k runs that happened before this...>
    14:11:43.125 29 4915.3 1.58 10.4 10.0 69.9 3435.9 2.24 1479.4 0.07 53.69 23.12 76.80 16384 0.0 0.00 0.0 0.00 0.0 0.00 0.1 7.36 0.1 627.2 0.0 0.00 0.0 0.00 0.0 0.00
    14:12:13.071 30 4117.8 1.88 10.0 9.66 69.8 2875.1 2.65 1242.7 0.11 44.92 19.42 64.34 16384 0.0 0.00 0.0 0.00 0.0 0.00 0.1 12.96 0.1 989.1 0.0 0.00 0.0 0.00 0.0 0.00
    14:12:13.075 avg_2-30 5197.6 1.52 9.3 9.03 70.0 3637.8 2.14 1559.8 0.07 56.84 24.37 81.21 16383 0.0 0.00 0.0 0.00 0.0 0.00 0.1 6.76 0.1 731.4 0.0 0.00 0.0 0.00 0.0 0.00
    14:12:15.388
    14:12:15.388 Miscellaneous statistics:
    14:12:15.388 (These statistics do not include activity between the last reported interval and shutdown.)
    14:12:15.388 WRITE_OPENS Files opened for write activity: 89 0/sec
    14:12:15.388 FILE_CLOSES Close requests: 81 0/sec
    14:12:15.388
    14:12:16.116 Vdbench execution completed successfully. Output directory: /oracle/zfs_tests/vdbench/output
    java.lang.RuntimeException: Requested parameter file does not exist: param_file
      at Vdb.common.failure(common.java:306)
      at Vdb.Vdb_scan.parm_error(Vdb_scan.java:50)
      at Vdb.Vdb_scan.Vdb_scan_read(Vdb_scan.java:67)
      at Vdb.Vdbmain.main(Vdbmain.java:550)
    So I know from reading other posts that vdbench will do what you tell it (Henk brought that up). But based on this, I can't tell what I should do differently in the vdbench parameter file to get around this error. Does anyone have advice for me?
    Thanks,
    Joe

    Ah... it's almost always the second set of eyes. Yes, it is run from a script. And I just looked and realized that the last line didn't have the # in it. Here's the line:
       "Proceed to the "Test Setup" section, but do something like `while true; do ./vdbench -f param_file; done` so the tests just keep repeating."
    I just added the hash to comment that line out and am rerunning my script. My guess is that it'll complete this time. Thanks, Henk.
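    As for the original sizing question, a quick back-of-envelope sketch (assuming vdbench's usual depth/width semantics -- width^depth leaf directories under each anchor, each holding `files` files, with sizes drawn from the (500m,30,1g,70) distribution) suggests the parameter file above already adds up to far more than a ~200 GB ARC when run from four VMs:
        # Values copied from the fsd= lines in the parameter file above.
        width, depth, files_per_leaf = 3, 2, 16
        sizes = [(500 * 2**20, 0.30), (1 * 2**30, 0.70)]   # sizes=(500m,30,1g,70)
        vms = 4                                            # four VMs running the same workload

        leaf_dirs = width ** depth                         # 3^2 = 9 lowest-level directories
        total_files = leaf_dirs * files_per_leaf           # 144 files per anchor
        avg_size = sum(size * weight for size, weight in sizes)

        per_vm = total_files * avg_size
        print(f"per VM : {per_vm / 2**30:.0f} GiB")        # ~122 GiB
        print(f"4 VMs  : {vms * per_vm / 2**30:.0f} GiB")  # ~488 GiB, several times the ARC
    So with these parameters the aggregate working set should already overflow the cache; if it still doesn't, increasing sizes= or files= (rather than adding more fwd= lines) is the knob that grows it further.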
