BIP cannot handle large CLOB data.

We have OBIEE 1.3.4.1 on Red Hat; the database is Oracle 11.2.0.2 on the same box.
In Publisher, I built a report on a table with a CLOB column.
BIP can handle CLOB data over the 4000-character limit of VARCHAR2, but it fails at one of the rows where the CLOB is over 25,000 characters. The error message is "The report cannot be rendered because of an error, please contact the administrator". Is this really beyond the limit of Publisher, or is there a place where I can configure BIP to handle it?
Thanks

If I do not use any template and just show the XML in the BIP View, I get a more meaningful error: "A semicolon character was expected. Error processing resource 'http://cchdb.thinkstream.com:9704/xmlpserver/servlet/xdo'." ...
10/31/2006  LA-DOC P&P W ORLEANS, LA      PROB: POSS OF COCAINE BEGIN 10/30/2006 END 10/30
----------------------^
This appears to be due to the '&'. How can I make BIP treat '&' as an ordinary character?
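One workaround (a minimal sketch, assuming the data model is a plain SQL query; the table and column names below are hypothetical) is to escape the XML special characters in the query itself, so the CLOB is serialized as well-formed XML:

    -- Hedged sketch: escape '&' (and '<', '>') before BIP serializes the CLOB.
    -- my_report_table / comments are placeholder names for the real data model query.
    -- The '&' || '...' concatenation just keeps SQL*Plus from treating '&' as a substitution variable.
    SELECT REPLACE(
             REPLACE(
               REPLACE(t.comments, '&', '&' || 'amp;'),
               '<', '&' || 'lt;'),
             '>', '&' || 'gt;') AS comments_escaped
    FROM   my_report_table t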

Similar Messages

  • Coherence cannot handle year in dates of 9999 ?

    Hi,
    I have been running into an issue with dates in coherence. We have some data which specifies a date of 01/01/9999 to indicate that the date does not expire.
    However, this results in errors when loading into Coherence, as follows:
    java.io.IOException: year is likely out of range: 9999
    I then found this related thread:
    Storing large DateTime values from .NET
    Is it really true that Coherence cannot handle a valid date? Why would you validate a year in the first place, given that 9999 is actually a valid year?
    TIA
    Martin

    Hi,
    What is the code that causes the error? What version of Coherence are you using?
    I can serialize/deserialize the date you have without seeing any problems. For example...
    Calendar c = Calendar.getInstance();
    c.set(9999, Calendar.JANUARY, 1);
    Date d = c.getTime();
    ConfigurablePofContext pofContext = new ConfigurablePofContext("coherence-pof-config.xml");
    Binary b = ExternalizableHelper.toBinary(d, pofContext);
    Date d2 = (Date) ExternalizableHelper.fromBinary(b, pofContext);
    System.out.println(d2);
    The above code works fine when I ran it with Coherence 3.7.1.6.
    If I have a class that implements PortableObject with a date field set to 1/1/9999 this also serializes without any problems.
    JK

  • Ways to handle large volume data (file size = 60MB) in PI 7.0 file to file

    Hi,
    In a file-to-file scenario (flat file to XML file), the flat file is getting picked up by FCC and then sent to XI. In XI it performs message mapping and then an XSL transformation in a sequence.
    The scenario works fine for small files (up to 5 MB), but when the input flat file is more than 60 MB, XI shows lots of problems, like (1) JCo call errors or (2) sometimes XI even stops and we have to start it manually again for it to function properly.
    Please suggest some way to handle large volumes (file size up to 60 MB) in a PI 7.0 file-to-file scenario.
    Best Regards,
    Madan Agrawal.

    Hi Madan,
    If every record of your source file is processed in a target system, maybe you could split your source file into several messages by setting this up in the Recordset Per Messages parameter.
    However, you just want to convert your .txt file into an .xml file. So, first try setting up the
    EO_MSG_SIZE_LIMIT parameter in SXMB_ADM.
    This could solve the problem in the Integration Engine, but the problem will persist in the Adapter Engine, I mean the JCo call error ...
    Take into account that the file is first processed in the Adapter Engine (File Content Conversion and so on...)
    and then it is sent to the pipeline in the Integration Engine.
    Carlos

  • Should large base64Binary data not be handled with BPEL?

    Hi,
    we need to implement a file-saving function. I have no problem implementing the web service as a Java class using MTOM streaming, but I question what the best design with BPEL is for this, or whether BPEL should not be used for this at all. Please help.
    For the requirement, the file content could be the text entered from a web page or the binary data from any resource such as an existing file or email message body etc, which is not limited. Also the web service would receive the desired file name. But the actual file name should be created by the web service based on the desired file name plus some business rule.
    I am thinking of creating a BPEL app for this. The input for the file content is designed to be of type base64Binary so that the application can handle either ASCII or binary data. This BPEL app first needs to call a web service to get the information about where to put the file (this is dynamic) and generate the actual file name, and then it calls another web service to save the file with the actual file name and content. I wonder whether, in the case of saving content of large size such as content read from a PDF file, this could cause a resource issue due to dehydration in BPEL. I am not so clear about dehydration. Does that mean that when the BPEL process invokes the 1st web service to get the information about where to put the file, the base64Binary data for the file content would first be saved into the DB (dehydrated)? Would this cause an issue? If so, does that mean that for this business need we should not use SOA, and should instead just implement it with JAX-WS?


  • My lumia 520 cannot handle a slow Data connection.

    I use a 2G data connection with an average speed of about 8 KB/sec for internet access, but IE and other browsers and apps cannot display websites or data. Is there a workaround? At first I thought that my network was slow, but that is not the case, as it works well on Android phones. Please help, my phone is just using up my bandwidth.

    IE sometimes cannot load even the Google website. In the morning, though, when speeds are better, it works better.
    I even use UC Browser in speed mode, but it is the same. Help.

  • Export cannot handle large numbers

    Exporting to TXT or CSV fails when columns contain large numbers. I presume it uses 32-bit signed integers. The result is blanks for all fields where the contents are too large. I have tested on both Win 2000 and Unix with the same results.
    Eddie

    A similar problem occurs with the visualization, as mentioned in:
    acctno is null when displaying data
    It may be that the problem with the export has the same cause as the problem with the visualization.

  • Handling large xml data source files

    Post Author: LeCoqDeMort
    CA Forum: Crystal Reports
    Hello. I have developed a Crystal report that uses an XML data file as a data source (supported by an XML schema file). I have successfully run the report for small sample XML data files, but I now have a realistic data file that is around 4 MB in size. When I run the report within the Crystal Reports designer (ver. 11.0.0.1994), I get a "failure to retrieve data from database" error. Is there some sort of configurable limit on data file/cache size that I can adjust, if indeed that is the problem? Thanks, LeCoq

  • Writing CLOB data using UTL_FILE to several files by settingmaxrows in loop

    Hey Gurus,
    I have a procedure that creates a CLOB document (in the form of a table in Oracle 9i). I now need to write this large CLOB data (some 270,000 rows) into several files, setting the maximum at 1000 rows per file. So essentially there would be some sort of loop construct and substring process that creates a file after looping through 1000 rows and then continues the count and creates another file, until all 270 XML files are created. Simple enough, right...lol? Well, I've tried doing this and haven't gotten anywhere. My PL/SQL coding skills are too elementary, I am guessing. I've only been doing this for about three months and could use the assistance of a more experienced person here.
    Here are the particulars...
    CLOB doc is a table in my Oracle 9i application named "XML_CLOB"
    Directory name to write output to "DIR_PATH"
    DIRECTORY PATH is "\app\cdg\cov"
    Regards,
    Chris

    The xmldata itself in the CLOB would look like this for one row (a rough sketch of the split loop follows after this sample):
    <macess_exp_import_export_file><file_header><file_information></file_information></file_header><items><documents><document><external_reference></external_reference><processing_instructions><update>Date of Service</update></processing_instructions><filing_instructions><folder_ids><folder id="UNKNOWN" folder_type_id="19"></folder></folder_ids></filing_instructions><document_header><document_type id="27"></document_type><document_id>CUE0000179</document_id><document_description>CUE0000179</document_description><document_date name="Date of Service">1900-01-01</document_date><document_properties></document_properties></document_header><document_data><internal_file><document_file_path>\\adam\GiftSystems\Huron\TIFS\066\CUE0000179.tif</document_file_path><document_extension>TIF</document_extension></internal_file></document_data></document></documents></items></macess_exp_import_export_file>
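    A minimal sketch of the 1000-rows-per-file loop described above, assuming the table is XML_CLOB with a CLOB column named XMLDATA (the column name is a guess) and that each row fits in a single UTL_FILE line:

    DECLARE
        l_file   UTL_FILE.FILE_TYPE;
        l_rows   PLS_INTEGER := 0;
        l_fileno PLS_INTEGER := 1;
    BEGIN
        l_file := UTL_FILE.FOPEN('DIR_PATH', 'xml_clob_1.xml', 'W', 32767);
        FOR r IN (SELECT xmldata FROM xml_clob) LOOP
            -- 32000 leaves headroom under the 32767 line-size limit;
            -- if a row can be longer, chunk it with DBMS_LOB.SUBSTR in an inner loop
            UTL_FILE.PUT_LINE(l_file, DBMS_LOB.SUBSTR(r.xmldata, 32000, 1));
            l_rows := l_rows + 1;
            IF l_rows = 1000 THEN
                -- close the current file and start a new one every 1000 rows
                UTL_FILE.FCLOSE(l_file);
                l_fileno := l_fileno + 1;
                l_file   := UTL_FILE.FOPEN('DIR_PATH', 'xml_clob_' || l_fileno || '.xml', 'W', 32767);
                l_rows   := 0;
            END IF;
        END LOOP;
        UTL_FILE.FCLOSE(l_file);
    END;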

  • SAP BO Data Services XI 3.2 - Cannot Handle Multithreaded RFC Connection?

    Hi Guys,
    Just want to ask for your input: can Data Services handle multiple RFC connection requests to a BW system?
    The scenario is:
    There is one BODI job using an RFC connection, and it triggers the 2nd job at the same time; it happens that the 2nd job fails.
    The current version of SAP BO Data Services XI 3.2 that we are using is 12.2.2.1.
    Thanks in advance,
    Randell

    Arpan,
    One way to get to the multiprovider data is to use Open Hub with a DTP that gets the data from the multiprovider and exposes it as an open hub destination to Data Services. With Data Services XI 3.2 we now fully support Open Hub where Data Services will (1) start the process chain to load the data (2) read the data when process chain ended and (3) notify Open Hub when done so that the data can be purged again.
    More info on Open Hub here : http://help.sap.com/saphelp_nw04/helpdata/en/1e/c4463c6796e61ce10000000a114084/content.htm
    But I will also look into why we show the multiproviders when browsing the metadata but get an error when trying to extract using the ABAP method (not via Open Hub). You could be right in your assumptions below, and we might just need to hide the multiproviders when browsing metadata.
    Thanks,
    Ben.
    Edited by: Ben Hofmans on Jan 5, 2010 6:06 PM - added link to Open Hub documentation which references multiproviders as possible source.

  • Can express vi handle large data

    Hello,
    I'm facing a problem handling large data using Express VIs. The input to each Express VI is a large waveform of 2M samples, and I am using 4 such Express VIs, each with 2M samples, connected in parallel. To process these data, the Express VIs take too much time compared to other general VIs or subVIs. Can anybody give the reason why the processing takes so much time? As per my understanding, displaying large data in LabVIEW is not efficient, and since the Express VIs have an internal display in the form of the configure dialog box, I feel most of the processing time is taken plotting the data on the graph of the configure dialog box. If this is correct, is there any solution to overcome this?
    waiting for reply
    Thanks in advance

    Hi sayaf,
    I don't understand your reasoning for not using the "Open Front Panel"
    option to convert the Express VI to a standard VI. When converting the
    Express VI to a VI, you can save it with a new name and still use the
    Express VI in the same VI.
    By the way, have you heard about the NI LabVIEW Express VI Development Toolkit? That is the choice if you want to be able to create your own Express VIs.
    NB: Not all Express VIs can be edited with the toolkit - you should mainly use the toolkit to develop your own Express VIs.
    Have fun!
    - Philip Courtois, Thinkbot Solutions

  • Using CLOB data type - Pros and Cons

    Dear Gurus,
    We are designing a database that will be receiving comments from external data source. These comments are stored as CLOB in the external database. We found that only 1% of incoming data will be larger than 4000 characters and are now evaluating the Pros and Cons of storing only 4000 characters of incoming comments in VARCHAR2 data type or using CLOB data type.
    Some of the concerns brought up during discussion were:
    - having to store CLOBs in separate tablespace;
    - applications such as Toad require changing default settings to display CLOBs in the grid. The default is not to display them;
    - applications that build web pages with CLOBs will struggle to fit 18 thousand characters of which 17 thousand are blank lines;
    - caching CLOBs in memory will consume a big chunk of the data buffers, which will affect performance;
    - to manipulate CLOBs you need PL/SQL anonymous block or procedure;
    - bind variables cannot be assigned CLOB value;
    - dynamic SQL cannot use CLOBs;
    - temp tables don't work very well with CLOBs;
    - fuzzy logic search on CLOBs is ineffective;
    - not all ODBC drivers support Oracle CLOBs
    - UNION, MINUS, INTERSECT don't work with CLOBs
    I have not dealt with the CLOB data type in the past, so I am hoping to hear from you about any possible issues/hassles we may encounter.

    848428 wrote:
    Dear Gurus,
    We are designing a database that will be receiving comments from external data source. These comments are stored as CLOB in the external database. We found that only 1% of incoming data will be larger than 4000 characters and are now evaluating the Pros and Cons of storing only 4000 characters of incoming comments in VARCHAR2 data type or using CLOB data type.
    Some of the concerns brought up during discussion were:
    - having to store CLOBs in separate tablespace;
      They can be stored inline too. Depends on requirements.
    - applications such as Toad require changing default settings to display CLOBs in the grid. The default is not to display them;
      Toad is a developer tool, so that shouldn't matter. What should matter is how you display the data to end users etc., but that will depend on the interface. Some can handle CLOBs and others not. Again, it depends on the requirements.
    - applications that build web pages with CLOBs will struggle to fit 18 thousand characters of which 17 thousand are blank lines;
      Why would they struggle? 18,000 characters is only around 18k in file size; that's not that big for a web page.
    - caching CLOBs in memory will consume a big chunk of the data buffers, which will affect performance;
      Who's caching them in memory? What are you planning on doing with these CLOBs? There's no real reason they should impact performance any more than anything else, but it depends on your requirements as to how you plan to use them.
    - to manipulate CLOBs you need PL/SQL anonymous block or procedure;
      You can manipulate CLOBs in SQL too, using the DBMS_LOB package (a small example follows at the end of this reply).
    - bind variables cannot be assigned CLOB value;
      Are you sure?
    - dynamic SQL cannot use CLOBs;
      Yes it can. 11g supports CLOBs for EXECUTE IMMEDIATE statements, and pre-11g you can use the DBMS_SQL package with CLOBs split into a VARCHAR2S structure.
    - temp tables don't work very well with CLOBs;
      What do you mean by "don't work well"?
    - fuzzy logic search on CLOBs is ineffective;
      Seems like you're pulling information from various sources without context. Again, it depends on your requirements as to how you are going to use the CLOBs.
    - not all ODBC drivers support Oracle CLOBs;
      Not all, but there are some. Again, it depends what you want to achieve.
    - UNION, MINUS, INTERSECT don't work with CLOBs;
      True.
    I have not dealt with the CLOB data type in the past, so I am hoping to hear from you about any possible issues/hassles we may encounter.
      You may have more hassle if you "need" to accept more than 4000 characters and you are splitting the data into separate columns or rows, when a CLOB would do it easily.
    It seems as though you are trying to find all the negative aspects of CLOBs and ignoring all the positive aspects, and also ignoring the negative aspects of not using CLOB's.
    Without context your assumptions are just that, assumptions, so nobody can tell you if it will be right or wrong to use them. CLOBs do have their uses, just as XMLTYPEs have their uses etc. If you're using them for the right reasons then great, but if you're ignoring them for the wrong reasons then you'll suffer.
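    To illustrate a couple of the points above, a minimal sketch with hypothetical table and column names (not from the original post): small CLOB values can be kept inline in the row, and DBMS_LOB can be called straight from SQL.

    -- Hypothetical table; ENABLE STORAGE IN ROW keeps small CLOBs inline with the row
    CREATE TABLE comments (
        id   NUMBER PRIMARY KEY,
        body CLOB
    ) LOB (body) STORE AS (ENABLE STORAGE IN ROW);

    -- DBMS_LOB functions are callable from plain SQL, no PL/SQL block required
    SELECT id,
           DBMS_LOB.GETLENGTH(body)      AS body_len,
           DBMS_LOB.SUBSTR(body, 200, 1) AS first_200_chars
    FROM   comments
    WHERE  DBMS_LOB.INSTR(body, 'refund') > 0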

  • Problem in using CLOB Data from a Data and exporting into File

    Hi,
    A UTL_FILE error occurred while using UTL_FILE with CLOB data:
    UTL_FILE: A write error occurred.
    The code below is for reference:
    DECLARE
        C_AMOUNT   CONSTANT BINARY_INTEGER := 32000;  -- chunk size; leaves headroom under the 32767 line limit
        L_BUFFER   VARCHAR2(32767);
        L_CLOBLEN  PLS_INTEGER;
        L_FHANDLER UTL_FILE.FILE_TYPE;
        L_POS      PLS_INTEGER;
    BEGIN
        L_FHANDLER := UTL_FILE.FOPEN('EXPORT_DIR', 'EXPORT_FILE' || '.sql', 'W', 32767);
        FOR C1_EXP IN (SELECT INSERT_STRING FROM EXPORTED_DUMP) LOOP
            L_CLOBLEN := DBMS_LOB.GETLENGTH(C1_EXP.INSERT_STRING);
            L_POS     := 1;  -- reset the offset for every row
            DBMS_OUTPUT.PUT_LINE('THE CLOB LEN ' || L_CLOBLEN);
            WHILE L_POS <= L_CLOBLEN LOOP
                -- read the next chunk of the CLOB into a VARCHAR2 buffer
                L_BUFFER := DBMS_LOB.SUBSTR(C1_EXP.INSERT_STRING, C_AMOUNT, L_POS);
                EXIT WHEN L_BUFFER IS NULL;
                -- write the chunk, not the whole CLOB, so no single write exceeds the line limit
                UTL_FILE.PUT_LINE(L_FHANDLER, L_BUFFER);
                L_POS := L_POS + LENGTH(L_BUFFER);
            END LOOP;
        END LOOP;
        UTL_FILE.FCLOSE(L_FHANDLER);
    EXCEPTION
        WHEN UTL_FILE.INTERNAL_ERROR THEN
            DBMS_OUTPUT.PUT_LINE('UTL_FILE: An internal error occurred.');
            UTL_FILE.FCLOSE_ALL;
        WHEN UTL_FILE.INVALID_FILEHANDLE THEN
            DBMS_OUTPUT.PUT_LINE('UTL_FILE: The file handle was invalid.');
            UTL_FILE.FCLOSE_ALL;
        WHEN UTL_FILE.INVALID_MODE THEN
            DBMS_OUTPUT.PUT_LINE('UTL_FILE: An invalid open mode was given.');
            UTL_FILE.FCLOSE_ALL;
        WHEN UTL_FILE.INVALID_OPERATION THEN
            DBMS_OUTPUT.PUT_LINE('UTL_FILE: An invalid operation was attempted.');
            UTL_FILE.FCLOSE_ALL;
        WHEN UTL_FILE.INVALID_PATH THEN
            DBMS_OUTPUT.PUT_LINE('UTL_FILE: An invalid path was given for the file.');
            UTL_FILE.FCLOSE_ALL;
        WHEN UTL_FILE.READ_ERROR THEN
            DBMS_OUTPUT.PUT_LINE('UTL_FILE: A read error occurred.');
            UTL_FILE.FCLOSE_ALL;
        WHEN UTL_FILE.WRITE_ERROR THEN
            DBMS_OUTPUT.PUT_LINE('UTL_FILE: A write error occurred.');
            UTL_FILE.FCLOSE_ALL;
        WHEN OTHERS THEN
            DBMS_OUTPUT.PUT_LINE('Some other error occurred.');
            UTL_FILE.FCLOSE_ALL;
    END;

    Hi user598986!
    OK, I understand that there is a problem with your code. But would you please be so kind as to tell us exactly which error happens (the error message)? Maybe after that someone will be able to help you.
    yours sincerely
    Florian W.
    P.S. If you enclose your code in code tags it will be shown formatted.

  • How can you SELECT via Database Link CLOB data using Application Express?

    Customer Issue:
    A developer is using Oracle's Application Express 3.1. The developer is trying to SELECT a CLOB datatype column from a remote (10.2.0.3) database, via a database link, in her 10.2.0.4-based client application. The developer wants to be able to select CLOB data from the remote database, with the limitation that she can't make any changes to the remote database.
    Developer's Comments:
    I do a select and get the error ORA-22992: cannot use LOB locators selected from remote tables. So she feels she can't use dbms_lob.substr in this configuration. I can do a "select into", but that is for one value. I am trying to run a select statement for a report that brings back more than one row. I do not have permission to change anything on the remote database. I want to access the remote database and multiple tables.
    This is not something I work with, so I would greatly appreciate help or ideas. Is this a limitation of 3.1, does she just not have this set up correctly, or should she be using a Collection (if yes, please share an example)?
    Thanks very much,
    Pam
    Edited by: pmoutrie on Jun 4, 2009 12:01 PM
    Hello???
    Would really appreciate an answer.
    Thanks,
    Pam

    This may not be a perfect solution for you but it worked for my situation.
    I wanted to grab some data from Grid Control's MGMT$JOB_STEP_HISTORY table, but I couldn't create an Interactive Report due to the existence of a CLOB column. I cheated by creating a view on the GC DB, grabbing the first 4000 characters and turning the column into a VARCHAR2:
    create view test_job_step_history as
    select job_Name, target_name, status, start_time, end_time, to_char(substr(output,1,4000)) output
    from MGMT$JOB_STEP_HISTORY where trunc(end_time) > trunc(sysdate)-90
    In an APEX Interactive Report:
    select * from test_job_step_history@GCDB
    Granted, the output looks awful right now, but I am only looking for a very particular output (failed, denied, ORA-, RMAN-, etc.), so the formatting isn't the most important thing to me right now (an alternative approach is sketched at the end of this reply).
    If anyone can improve -- and I'm sure you can -- on this I'd love to hear about it.
    Thanks,
    Rich
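    One alternative that is often suggested for ORA-22992 (a sketch, assuming local objects can be created and the data can be copied rather than read live): selecting a LOB locator from a remote table is not allowed, but INSERT ... SELECT or CREATE TABLE ... AS SELECT over the database link can materialize the CLOB values into a local table, and the report can then run against the local copy.

    -- Sketch: copy the remote rows, CLOB included, into a local table
    CREATE TABLE local_job_step_history AS
    SELECT job_name, target_name, status, start_time, end_time, output
    FROM   MGMT$JOB_STEP_HISTORY@GCDB
    WHERE  TRUNC(end_time) > TRUNC(SYSDATE) - 90;

    -- The APEX interactive report can then select from the local copy:
    -- SELECT * FROM local_job_step_history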

  • How do I handle large resultsets in CRXI without a performance issue?

    Hello -
    Problem Definition
    I have a performance problem displaying large/huge resultset of data on a crystal report.  The report takes about 4 minutes or more depending on the resultset size.
    How do you handle large resultsets in Crystal Reports without a performance issue?
    Environment
    Crystal Reports XI
    Apache WebSvr 2.X, Jboss 4.2.3, Struts
    Java Reporting Component (JRC),Crystal Report Viewer (CRV)
    Firefox
    DETAILS
    I use the CRXI thick client to build my report (.rpt) and then use it in my webapplication (webapp) under Jboss.
    User specifies the filter criteria to generate a report (date range etc) and submits the request to the webapp.  Webapp  queries the database, gets a "resultset".
    I initialize the JRC and CRV according to all the specifications and finally call the "processHttpRequest" method of Crystal Report Viewer to display the report on browser.
    So.....
    - Request received to generate a report with a filter criteria
    - Query DB to get resultset
    - Initialize JRC and CRV
    - finally display the report by calling
        reportViewer.processHttpRequest(request, response, request.getSession().getServletContext(), null);
    The performance problem is within the last step. I put logs everywhere and noticed that the database query doesn't take too long to return the resultset. Everything processes pretty quickly till I call the processHttpRequest of CRV. This method just hangs for a long time before displaying the report in the browser.
    CRV runs pretty fast when the resultset is smaller, but for large resultsets it takes a long, long time.
    I do have subreports and use Crystal report formulas on the reports. Some of them are used for grouping also. But I don't think subreports are the real culprit here, because I have some other reports that don't have any subreports, and they too get really slow displaying large resultsets.
    Solutions?
    So obviously I need a good solution to this generic problem of "How do you handle large resultsets in Crystal Reports?"
    I have thought of some half baked ideas.
    A) Use external pagination and fetch data only for the current page being displayed. But for this, CRXI must allow me to create my own buttons (previous, next, last), so I can control the click event and fetch data accordingly. I tried capturing events by registering the event handler "addToolbarCommandEventListener" of CRV, but my listener gets invoked "after" the processHttpRequest method completes, which doesn't help.
    Some how I need to be able to control the UI by adding my own previous page, next page, last page buttons and controlling it's click events. 
    B) Automagically have CRXI use a javascript functionality, to allow browser side page navigation.  So maybe the first time it'll take 5 mins to display the report, but once it's displayed, user can go to any page without sending the request back to server.
    C) Try using Crystal Reports 2008. I'm open to using this version, but I couldn't figure out if it has any features that can help me do external pagination or anything else that can handle large resultsets.
    D) Will using the Crystal Reports Servers like cache server/application server etc help in any way?  I read a little on the Crystal Page Viewer, Interactive Viewer, Part Viewer etc....but I'm not sure if any of these things are going to solve the issue.
    I'd appreciate it if someone can point me in the right direction.

    Essentially, the answer is to use smaller resultsets or to pull from the database directly instead of using resultsets.

  • Large OLTP data set to get through the cache in our new ZS3-2 storage.

    We recently purchased a ZS3-2 and are currently attempting to do performance testing. We are using various tools to simulate load within our Oracle VM 3.3.1 cluster of five Dell M620 servers: swingbench, vdbench, and dd. The OVM repositories are connected via NFS. The Swingbench load-testing servers have a base OS disk mounted from the repos and NFS mounts via NFS v4 from within the VM (we would also like to test dNFS later in our testing).
    The problem I'm trying to get around is that the 256G of DRAM (and the portion of that used for the ARC) is large enough that my reads are not touching the 7200 RPM disks. I'd like to create a data set large enough that the random reads cannot possibly be served from the ARC cache (NOTE: we have no L2ARC at the moment).
    I've run something similar to this in the past, but have adjusted the "sizes=" to be larger than 50m. My thought here is that, if the ARC is up towards around 200 or so MB, and I create the following on four separate VMs and run vdbench at just about the same time, it will be attempting to read more data than can possibly fit in the cache.
    * 100% random, 70% read file I/O test.
    hd=default
    fsd=default,files=16,depth=2,width=3,sizes=(500m,30,1g,70)
    fsd=fsd1,anchor=/vm1_nfs
    fwd=fwd1,fsd=fsd*,fileio=random,xfersizes=4k,rdpct=70,threads=8
    fwd=fwd2,fsd=fsd*,fileio=random,xfersizes=8k,rdpct=70,threads=8
    fwd=fwd3,fsd=fsd*,fileio=random,xfersizes=16k,rdpct=70,threads=8
    fwd=fwd4,fsd=fsd*,fileio=random,xfersizes=32k,rdpct=70,threads=8
    fwd=fwd5,fsd=fsd*,fileio=random,xfersizes=64k,rdpct=70,threads=8
    fwd=fwd6,fsd=fsd*,fileio=random,xfersizes=128k,rdpct=70,threads=8
    fwd=fwd7,fsd=fsd*,fileio=random,xfersizes=256k,rdpct=70,threads=8
    rd=rd1,fwd=fwd1,elapsed=900,interval=30,fwdrate=max,format=yes,pause=30,openflags=fsync
    rd=rd2,fwd=fwd2,elapsed=900,interval=30,fwdrate=max,format=yes,pause=30,openflags=fsync
    rd=rd3,fwd=fwd3,elapsed=900,interval=30,fwdrate=max,format=yes,pause=30,openflags=fsync
    rd=rd4,fwd=fwd4,elapsed=900,interval=30,fwdrate=max,format=yes,pause=30,openflags=fsync
    rd=rd5,fwd=fwd5,elapsed=900,interval=30,fwdrate=max,format=yes,pause=30,openflags=fsync
    rd=rd6,fwd=fwd6,elapsed=900,interval=30,fwdrate=max,format=yes,pause=30,openflags=fsync
    rd=rd7,fwd=fwd7,elapsed=900,interval=30,fwdrate=max,format=yes,pause=30,openflags=fsync
    However, the problem I keep running into is that vdbench's java processes will throw exceptions
    ... <cut most of these stats.  But suffice it to say that there were 4k, 8k, and 16k runs that happened before this...>
    14:11:43.125 29 4915.3 1.58 10.4 10.0 69.9 3435.9 2.24 1479.4 0.07 53.69 23.12 76.80 16384 0.0 0.00 0.0 0.00 0.0 0.00 0.1 7.36 0.1 627.2 0.0 0.00 0.0 0.00 0.0 0.00
    14:12:13.071 30 4117.8 1.88 10.0 9.66 69.8 2875.1 2.65 1242.7 0.11 44.92 19.42 64.34 16384 0.0 0.00 0.0 0.00 0.0 0.00 0.1 12.96 0.1 989.1 0.0 0.00 0.0 0.00 0.0 0.00
    14:12:13.075 avg_2-30 5197.6 1.52 9.3 9.03 70.0 3637.8 2.14 1559.8 0.07 56.84 24.37 81.21 16383 0.0 0.00 0.0 0.00 0.0 0.00 0.1 6.76 0.1 731.4 0.0 0.00 0.0 0.00 0.0 0.00
    14:12:15.388
    14:12:15.388 Miscellaneous statistics:
    14:12:15.388 (These statistics do not include activity between the last reported interval and shutdown.)
    14:12:15.388 WRITE_OPENS Files opened for write activity: 89 0/sec
    14:12:15.388 FILE_CLOSES Close requests: 81 0/sec
    14:12:15.388
    14:12:16.116 Vdbench execution completed successfully. Output directory: /oracle/zfs_tests/vdbench/output
    java.lang.RuntimeException: Requested parameter file does not exist: param_file
      at Vdb.common.failure(common.java:306)
      at Vdb.Vdb_scan.parm_error(Vdb_scan.java:50)
      at Vdb.Vdb_scan.Vdb_scan_read(Vdb_scan.java:67)
      at Vdb.Vdbmain.main(Vdbmain.java:550)
    So I know from reading other posts, that vdbench will do what you tell it (Henk brought that up).  But based on this, I can't tell what I should do differently to the vdbench file to get around this error.  Does anyone have advice for me?
    Thanks,
    Joe

    Ah... it's almost always the second set of eyes. Yes, it is run from a script. And I just looked and realized that the last line didn't have the # in it. Here's the line:
       "Proceed to the "Test Setup" section, but do something like `while true; do ./vdbench -f param_file; done` so the tests just keep repeating."
    I just added the hash to comment that out and am rerunning my script. My guess is that it'll complete. Thanks, Henk.
