REP-56093: Cached output for job 1058 is no longer valid.

When I run a report, this error message shows up:
REP-52251: The output of job ID 1058 requested on Wed May 18 11:18:54 KST 2011 cannot be retrieved.
REP-56093: Cached output for job 1058 is no longer valid.
I tested as follows:
When I add a rownum < 100 condition, the report runs fine.
Without the rownum condition, the report fails to run.
Can anyone tell me the key point to solving this problem?
I use WebLogic and Fusion Middleware.
Edited by: backsulee on May 17, 2011, 7:31 PM

The MetaLink note "REP-56093: Cached Output For Job <###> Is No Longer Valid Running Large Report [ID 305398.1]" may give you the answer.
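
For reference, that note points at the Reports Server cache being too small for large output. As a rough, hedged illustration (based on the shape of a default rwserver.conf / rep_<server>.conf; the 500 is only an example value, and I believe cacheSize is expressed in MB), the section to enlarge looks like this:

<cache class="oracle.reports.cache.RWCache">
   <property name="cacheSize" value="500"/>
</cache>

Restart the Reports Server afterwards. When a job's output exceeds the cache size, the output can be voided (the "Finished successfully but output is voided" status shown further down in this thread), which then surfaces to the client as REP-56093.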

Similar Messages

  • REP-56093: Cached output for job 1466 is no longer valid

    Report running on 10g R2 with a parameter form. I changed the cache in the rep_reportname_frsrvcs.conf file to 500... I can't imagine that is not enough; this is not a report with any kind of special characters or one pulling that much data, and it should go to the parameter form first, which means it has not really pulled any data yet. Any ideas? The report is being called through cgicmd.dat and the other reports seem to work fine. Thanks for any help!
    Va.

    The MetaLink note "REP-56093: Cached Output For Job <###> Is No Longer Valid Running Large Report [ID 305398.1]" may give you the answer.

  • I forgot the password to my iCloud account but the email address registered for it is no longer valid

    I forgot the password to my iCloud account, but the email address registered for it is no longer valid.

    Apple ID: If you forget your password

  • I can't figure out how to delete an email address for someone that is no longer valid.

    I can't figure out how to delete an email address that is no longer valid.

    Assuming that this is on your iOS device, you can change a contact's details via the Contacts app. If the address is not linked to one of your contacts, there is a blue 'i' to the right of the email address on the popup list; tapping on that brings up a second popup with the option at the bottom. (The original reply illustrated both steps with screenshots that are not reproduced here.)

  • REP-52251: Cannot get output of job ID 69 you requested  ....

    OS: Oracle AS 10.1.2.0.2 (Release 2 for Linux x86)
    I have moved my Forms and Reports from Windows XP (DS 9.0.4) to the Linux box.
    I have compiled and run the Forms and most of the Reports successfully.
    I compiled one report successfully.
    But when I run this report from the form, I get the message below:
    Error
    REP-52251: Cannot get output of job ID 69 you requested on Thu Sep 29 15:37:03 PDT 2005.
    REP-56093: Cached output for job 69 is no longer valid
    I have checked
    http://abc:7778/reports/rwservlet/showjobs?
    Status is as follows:
    Error
    Finished successfully but output is voided
    Regards,
    DN

    Just in case someone else finds this thread with Google: we also had REP-56093: Cached output for job 5429 is no longer valid. After checking the Reports cache directory we saw that the problematic reports had cache keys with (broken) umlauts. So we disabled the :DESNAME parameter, which got rid of the umlauts in the file names and resolved the cache error. So if you get this error, check the cache directory to see whether the problematic reports have odd cache files or filenames.

  • REP-52251: Cannot get output of job ID

    Hi,
    I am trying to call a report from an Oracle Form and I am getting this error:
    REP-52251: Cannot get output of job ID 220 you requested on Mon Oct 16 11:19:12 EDT 2006.
    REP-56033: Job 220 does not exist
    Please help; any ideas?
    Best regards,

    Have you tried to create a URL with all the parameters you're trying to run your report with?
    Try to run the report that way first, just to make sure your report can actually be run with the parameters you're using.
    If that works, then try to use WEB.SHOW_DOCUMENT from your form, passing the same URL you just built. If that works too, then you are only having problems with your RUN_REPORT_OBJECT call (I am not sure if you're using the rp2rro library for running your reports).
    Another thing that might be happening is that your output is being generated but then discarded because of the parameters; I think there is some form of timeout on the Reports Server after which the files in its cache are deleted, though I'm not sure this is the problem here.
    Running the report with a URL first should shed some light.
    If you're still having problems, post them here so we can review them.
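
    For illustration only (every value below is hypothetical: host, port, server name, report, connect string, and parameter; substitute your own), a direct rwservlet request usually looks something like this:
    http://yourhost:7778/reports/rwservlet?server=rep_yourserver&report=myreport.rdf&userid=scott/tiger@orcl&destype=cache&desformat=pdf&p_dept=10
    If that URL returns the output in the browser, pass the same string to WEB.SHOW_DOCUMENT from the form; if that also works, the remaining difference lies in how RUN_REPORT_OBJECT builds its parameter list.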

  • REP-52251 Cannot get output of job ID -2

    When trying to run a report, one user gets this error; there are around 1000+ customers for whom the application works fine and who can generate the report.
    REP-52251: Cannot get output of job ID -2 you requested on Wed Nov 29 11:49:41 EST 2006.
    REP-56033: Job -2 does not exist
    Has anybody ever faced this kind of problem? If so, please share how to deal with this issue.
    Thanks
    Anil.

    Hi,
    I am not sure about the error message. One possible cause is that the user might have changed his Oracle password to something with spaces, e.g. 'emq K96'; on the web the space is automatically turned into %20 ('emq%20K96'), so as far as the Reports Server is concerned the password is wrong when the user submits the job. This error does not occur while the user is connecting to Oracle Forms; it only appears when the job details are submitted to the Reports Server through the URL.
    One of my users had this kind of problem: he was able to connect to Forms, but when he submitted a job he got either a 'page cannot be found' error or a report submission error.
    This case applies only if you are using Oracle web Forms/Reports.
    This may be helpful to you.
    Cheers,
    Nats

  • Unable to find the published output for this request - problem

    Hi ,
    In the Payables invoice module I modified a report to open as XML instead of text as it was before. The program name of this report is Print Invoice Notice. I did the same thing we normally do to register an XML report in EBS.
    The problem is that when I run the report from the invoice form, via the button which calls this report, I get this error:
    Unable to find the published output for this request.
    No output file exists for the request.
    And if I look at the log file I see this message:
    Arguments
    P_INVOICE_ID='10243'
    APPLLCSP Environment Variable set to :
    XML_REPORTS_XENVIRONMENT is :
    /oracle/prodora/8.0.6/guicommon6/tk60/admin/Tk2Motif_UTF8.rgb
    XENVIRONMENT is set to /oracle/prodora/8.0.6/guicommon6/tk60/admin/Tk2Motif_UTF8.rgb
    Current NLS_LANG and NLS_NUMERIC_CHARACTERS Environment Variables are :
    AMERICAN_ALBANIA.UTF8
    ' '
    REP-3000: Internal error starting Oracle Toolkit.
    Report Builder: Release 6.0.8.27.0 - Production on Fri Mar 27 02:30:46 2009
    (c) Copyright 1999 Oracle Corporation. All rights reserved.
    The thing is that if I run this report from Other -> Request -> Run in the same module, it runs correctly.
    And if I change the output of my report (from the concurrent manager) to text, it opens correctly even from the place where I want to open it.
    Can anyone give me a suggestion why I get this error when I try to open this XML Publisher report from a button in a form in the application?
    PS: I am using EBS 11.5.
    Thanks in advance,
    Best regards

    Hi,
    REP-3000: Internal error starting Oracle Toolkit. This error was discussed many times before in this forum, so please search for REP-3000 and fix it first ( [REP-3000|http://forums.oracle.com/forums/search.jspa?threadID=&q=REP-3000&objID=c84&dateRange=all&userID=&numResults=15] ).
    Regards,
    Hussein

  • Sites using TALEO for job searches no longer work in 19, worked in last official 18 release

    Company websites which use the TALEO job search/apply app no longer work right in Firefox 19; they worked fine in 18.x. Specifically, the job search function does not work right: after entering the TALEO job search parameters, hitting SEARCH FOR JOB does nothing. An example site (which does not require registration) is https://nielsen.taleo.net/careersection/3/jobsearch.ftl?lang=en. If I make selections for JOB FIELD and/or LOCATION and then hit SEARCH FOR JOB, nothing happens. Sites using TALEO worked fine in the last official 18.x release. TALEO works fine in IE 7 and Chrome 25.
    Clearing the cache does not change the behavior. Finally I did a full Firefox reset; no change.
    WIN XP SP3 | Dell Dimension 4600 | 1.2 GB RAM

    I am running Firefox version 19.0.2 and I have the same problem. I am able to filter using Internet Explorer, which I really don't like. Rather than users having to figure out why this isn't working, either Taleo needs to fix their scripts or a fix needs to happen in Firefox.

  • lpq outputs print jobs + error "T: unknown: moving-to-paused"

    Dears,
    I have set up remote printing. Printing from the client shows the job in the print queue at the print server (T), however the job does not get executed, and lpq outputs the print jobs plus the error "T: unknown: moving-to-paused".
    When lpstat is run before printing:
    # lpstat -t
    scheduler is running
    system default printer: T
    device for Alis: /dev/ecpp0
    device for T: /dev/lp1
    Alis accepting requests since Wed Jan 25 14:27:39 2012
    T accepting requests since Wed Jan 25 14:39:03 2012
    printer Alis is idle. enabled since Sun Feb 19 11:23:34 2012. available.
    printer T is idle. enabled since Sun Feb 19 12:03:31 2012. available.
    When lpstat is run after printing:
    bash-3.2# lpstat -t
    scheduler is running
    system default printer: T
    device for Alis: /dev/ecpp0
    device for T: /dev/lp1
    Alis accepting requests since Wed Jan 25 14:27:39 2012
    T accepting requests since Wed Jan 25 14:39:03 2012
    printer Alis is idle. enabled since Sun Feb 19 11:23:34 2012. available.
    printer T waiting for auto-retry. available.
    Failed to open the printer port. (Device busy)
    T-80 [email protected] 201097 Feb 19 12:28 filtered
    T-81 [email protected] 600569 Feb 19 12:28 f
    lpq
    ===
    bash-3.2# lpq
    T: unknown: moving-to-paused
    Rank   Owner      Job   File(s)                            Total Size
    1st    appltest   80    /global/u02/oracle/TEST/inst/app   201097 bytes
    2nd    appltest   81    /global/u02/oracle/TEST/inst/app   600569 bytes

    Hi friend,
    I hope you are doing well.
    Q1:
    Are these printers really physically attached to your Ultra-* parallel port (/dev/ecpp0)?
    And what device is "/dev/lp1"?
    Answer:
    "/dev/lp1", which is printer "T", and "/dev/ecpp0", which is printer "Alis", are configured at OS level via printmgr on the same machine.
    This machine's hostname is "tprinter" and it is the print server. It is physically connected to a CIMA 6120 line printer.
    [What I meant is: what physical port is this printer attached to on the Solaris machine? Is this an Ultra 5/10/60 machine? I don't recall the /dev/ecpp port existing on any modern Sun hardware.]
    "Failed to open the printer port" can mean:
    Answer 2: It is a 25-pin serial cable connected to the 25-pin parallel port of an OptiPlex GX100 machine. I installed Solaris 10 x86 to make a print server, as our CIMA 6120 is not a network printer and our company wants to print reports from Oracle Applications to this printer.
    Q: Have you attached your printer to the wrong port?
    Answer: It is connected to the correct port.
    [Again, how do you know this or how did you test it?]
    Answer 2: It is connected at the rear panel of the machine on a 25-hole connector (bidirectional).
    Q: Have you attached your printer to the right port, but are using the wrong name?
    Answer: I think we could use an alias; we do not necessarily have to give the same printer name.
    [What I was getting at here is that you think the printer is attached to the parallel port, but the cable is actually attached to the wrong port. For example, the serial port on those older machines will fit, but in that case, if you configure the printer with ecpp it is not pointing to the correct port.]
    Answer 2: I configured the CIMA 6120 with the "/dev/lp1" port, which was working perfectly; I printed numerous pages with this configuration. Suddenly this "moving-to-paused" error occurred. As I said, it is connected to the parallel port at the rear panel. Initially I attempted to configure this printer with the "/dev/ecpp0" port, which was not successful.
    Q: Do you have a bad or wrong cable?
    Answer: I have checked the cable; it is good.
    [How did you do the checking?]
    Answer 2: I tested the cable by connecting it to another printer and it printed, so there is no problem with the cable.
    In the olden days, when local printers like this were the norm, we would do tests like catting a file to the port and seeing if the printer lights up or responds. Something like: cat /etc/hosts > /dev/ecpp0 (or /dev/lp1, or whatever the device is). Then observe whether the printer receives anything, blinks, or prints the hosts file. You are copying the file directly to the port, outside of lp or any other mechanism.
    Answer 2: I agree there was some conflict among the ports internally, or some process may have been running in the background; that is why, though it shows the request waiting in the queue, it was moving to paused for some reason. When I check the error with the command "lpstat -t" it says "device busy" for the /dev/lp1 device.
    Solution
    ============
    After googling some more I found a fix for this issue at http://fixunix.com/solaris/143011-unaccepted-printer.html
    1. I tried to remove the printer from printmgr; I could not remove it.
    2. I tried to kill the entries going through the /dev/lp1 port to printer "T"; I could not do it.
    3. I unchecked the default printer option via printmgr.
    4. I added a new printer named "test" with "/dev/ecpp0" and reproduced the issue (got the same error).
    5. According to the forum above it is a bug, Bug ID 6374608. Here are the details:
    Synopsis: No /devices entry for line printer
    http://bugs.opensolaris.org/bugdatab...bug_id=6374608
    The workaround is to manually install the missing driver binding entry, using the command:
    update_drv -a -i '"lp"' ecpp
    This workaround should create the missing /dev/lp* devices. After running it, you should find the following line in "prtconf -D" output:
    lp, instance #0 (driver name: ecpp)
    6. I unplugged the serial cable from the printer, turned the printer off, replugged the cable, turned the printer back on, and made sure the printer was "ONLINE".
    7. I restarted the Solaris machine and deleted the newly created "test" printer via printmgr.
    8. I ran lp /etc/passwd and it printed.
    Finally, I thank you for your time; please give me your opinion on this solution.
    Regards,
    MKY
    Edited by: user9007339 on Feb 22, 2012 12:15 AM
    Edited by: user9007339 on Feb 22, 2012 12:21 AM

  • REP-52262: Diagnostic output is disabled.

    I have installed Oracle 11g R2 on a Windows 2003 server with WebLogic + Forms and Reports. When I tried to check the Reports server it gave the following error:
    REP-52262: Diagnostic output is disabled.
    I found a solution on OTN to change rwservlet.properties
    from
    <!--webcommandaccess>L1</webcommandaccess-->
    to
    <--webcommandaccess>L2</webcommandaccess-->
    Now there is another error and I can't find a solution for it:
    Error 500--Internal Server Error
    From RFC 2068 Hypertext Transfer Protocol -- HTTP/1.1:
    10.5.1 500 Internal Server Error
    The server encountered an unexpected condition which prevented it from fulfilling the request.
    Can anyone help me resolve this issue so I can run reports on my machine?

    I would recommend posting any questions relating to Oracle Reports in the Reports forum.
    Anyway, you are on the right track with fixing the REP-52262 error. You'll need to remove the dashes in the webcommandaccess line, as they are what is throwing the Error 500. Modify rwservlet.properties so that it shows the following:
    <webcommandaccess>L2</webcommandaccess>
    For more information, you may check out this KB article which talks about this error in greater detail: http://pitss.com/us/2012/12/20/running-reports-in-formsreports-11-1-2-1-generates-rep-52262-with-default-settings/
    Thanks,
    Scott

  • T Code for doing mass output for Sales order like VL71 for Delivery

    Hi
    Can anyone let me know what the transaction code is for sales orders for which the dispatch time was set to "3 - using own transaction"?
    Muthu

    Hi,
    Though there is a program, there is no output transaction for sales orders similar to VL71 for deliveries. The sales order is the only document for which there is no transaction code for this purpose.
    You can do the following.
    Go to SE38, enter the program name SD70AV1A, and execute it.
    Now enter your selection criteria and save them as a variant.
    You can then schedule a job at regular intervals using this variant.
    If you maintain the output record with dispatch time 3 (using own transaction), the output will then be created in mass for multiple sales orders.

  • What functional things should a fresher know before applying for a job

    hi
    I completed my Oracle Applications training at a Comp-U-Learn institute in Hyderabad. My domain experience is in Purchasing and Accounts Payables, and I know the Purchasing and Payables modules well. Can anybody tell me what the necessary things are that a functional person must know? I only know the setups; what else should a functional person know before starting to work somewhere or before applying for a job?
    I request all my seniors who have been working in this field for a long time to please advise me.
    Waiting for your response.
    thanks & regards
    user642769

    My MSI MEGA 180 has a strange display on the monitor.
    I opened the box and removed all items.
    Installed the CPU (supported on the CPU compatibility list for the MSI MEGA 180), then installed the heatsink & fan.
    Installed the hard drive.
    Installed the MSI DVD+/-RW.
    Installed 1 DIMM module of DDR RAM, PC2700 333 MHz, 1 GB.
    Pushed the power button for the PC and the display is corrupted.
    I tried the 4 possible configurations for jumpers 7 and 8 to alter the FSB speed, with no luck.
    I also tried the 4 different J7 and J8 configurations without the HDD and DVD-RW drives connected; same result.
    I also tried the 4 different J7 and J8 configurations with the secondary VGA output connected to my monitor.
    I also tried the 4 different J7 and J8 configurations with the S-Video output connected to my TV, and got no signal at all.
    Both my monitor and HDD work fine with my other PC.
    I noticed the jumper pins are not numbered, which made the configuration job a bit more tedious.
    Please note the LCD screen works correctly; however, the radio tuner in HiFi mode doesn't pick up any local stations.
    As I am a software test engineer, I am 100% confident I have checked all possible configurations, including clearing the CMOS, but the system simply will not display to the VDU. The POST should at least be visible when power is turned on in PC mode.
    NOTE: I will update the RAM brand as soon as I find out from the supplier, but I believe it is OEM.
    NOTE II: this is the URL for the CPU used: http://www.excaliberpc.com/product_info.php?products_id=1798
    Please help ease my pain!

  • Crond error: unable to exec /usr/sbin/sendmail: cron output for user .

    I have cronjobs running as a normal user.
    shadyabhi@ArchLinux ~/cronjobs $ crontab -l
    #CronJobs located in $HOME/cronjobs/*
    0 * * * * /home/shadyabhi/cronjobs/snailmail.sh
    * 5 * * * /home/shadyabhi/cronjobs/backup_pkgs.sh
    * 5 * * * /home/shadyabhi/cronjobs/backup_songs.sh
    shadyabhi@ArchLinux ~/cronjobs $ sudo tail -f -n 10 /var/log/crond.log
    Oct 16 09:00:12 ArchLinux crond[13410]: unable to exec /usr/sbin/sendmail: cron output for user shadyabhi /home/shadyabhi/cronjobs/snailmail.sh to /dev/null
    Oct 16 09:40:01 ArchLinux crond[1552]: FILE /var/spool/cron/root USER root PID 13622 job sys-hourly
    Oct 16 10:00:01 ArchLinux crond[1552]: FILE /var/spool/cron/shadyabhi USER shadyabhi PID 13739 /home/shadyabhi/cronjobs/snailmail.sh
    Oct 16 10:00:13 ArchLinux crond[13806]: mailing cron output for user shadyabhi /home/shadyabhi/cronjobs/snailmail.sh
    Oct 16 10:00:13 ArchLinux crond[13806]: unable to exec /usr/sbin/sendmail: cron output for user shadyabhi /home/shadyabhi/cronjobs/snailmail.sh to /dev/null
    Oct 16 10:40:01 ArchLinux crond[1552]: FILE /var/spool/cron/root USER root PID 14418 job sys-hourly
    Oct 16 11:00:01 ArchLinux crond[1552]: FILE /var/spool/cron/shadyabhi USER shadyabhi PID 15527 /home/shadyabhi/cronjobs/snailmail.sh
    Oct 16 11:00:13 ArchLinux crond[15593]: mailing cron output for user shadyabhi /home/shadyabhi/cronjobs/snailmail.sh
    Oct 16 11:00:13 ArchLinux crond[15593]: unable to exec /usr/sbin/sendmail: cron output for user shadyabhi /home/shadyabhi/cronjobs/snailmail.sh to /dev/null
    Oct 16 11:07:01 ArchLinux crond[1552]: reading /var/spool/cron/cron.update
    ^C
    shadyabhi@ArchLinux ~/cronjobs $ pacman -Q | grep sendmail
    sendmail 8.14.4-1
    shadyabhi@ArchLinux ~/cronjobs $
    Why am I getting that error?

    loafer wrote: The AUR package doesn't install the sendmail binary for some reason. There are quite a few comments about this on the AUR page. I just built it myself. The sendmail binary is there in the src but it doesn't get installed as part of the package. Better ask the maintainer about why this is.
    But I do have the sendmail binary in /usr/sbin:
    shadyabhi@ArchLinux ~ $ whereis sendmail
    sendmail: /usr/sbin/sendmail /usr/share/man/man1/sendmail.1.gz
    shadyabhi@ArchLinux ~ $
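
    As a general cron aside (not from this thread): if the mailed job output is not actually wanted, crond never needs to exec sendmail at all, either by redirecting each job's output or, where the cron implementation supports it, by setting an empty MAILTO at the top of the crontab, for example:
    MAILTO=""
    0 * * * * /home/shadyabhi/cronjobs/snailmail.sh >/dev/null 2>&1
    That silences the "unable to exec /usr/sbin/sendmail" messages; installing a working sendmail (or another package that provides /usr/sbin/sendmail) is the fix if the mails are actually wanted.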

  • Global-Cache-Manager for Multi-Environment Applications

    Hi,
    Within our server implementation we provide a "multi-project" environment. Each project is fully isolated from the rest of the server, e.g. in terms of file-system usage, backups, and other resources. As one might expect, the way to go is using a single VM with multiple BDB environments.
    Obviously each JE environment uses its own cache. In our environment, with a dynamic number of active projects, this causes a problem because the optimal cache configuration within a given memory frame depends on the JE environments in use, BUT there is no way to define a global JE cache for ALL JE environments.
    Our "plan of attack" is to implement a Global-Cache-Manager that dynamically configures the cache sizes of all active BDB environments depending on the given global cache size.
    As Federico proposed, the starting point for determining the optimal cache setting at load time will be a modification to the DbCacheSize utility so that the return value can be picked up easily, rather than printed to stdout. After that, EnvironmentMutableConfig.setCacheSize will be used to set the cache size. If there is enough cache RAM available we could even set a larger cache, but I do not know if that really makes sense.
    If cache memory is getting tight, loading another BDB environment means decreasing the cache sizes of the already loaded environments. This is also done via EnvironmentMutableConfig.setCacheSize. Are there any timing conditions one should obey before assuming the memory is really available? To determine whether there are any BDB environments that do not use their cache, one could query each cache's utilization using EnvironmentStats.getCacheDataBytes() and getCacheTotalBytes().
    Are there any comments on this plan? Is there perhaps a better solution or even an existing implementation?
    Do you think a global cache manager is something worth back-donating?
    Related posting: Multiple envs in one process?
    Stefan Walgenbach
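
    Not part of the original exchange, just a minimal sketch of the rebalancing idea described above, assuming already-open environments and a fixed global budget. The GlobalCacheManager class and its naive equal-share policy are invented for illustration; the JE calls it uses (getMutableConfig, setCacheSize, setMutableConfig, getStats, getCacheDataBytes, getCacheTotalBytes) are the real API mentioned in the posting:

    import java.util.List;

    import com.sleepycat.je.DatabaseException;
    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentMutableConfig;
    import com.sleepycat.je.EnvironmentStats;
    import com.sleepycat.je.StatsConfig;

    // Hypothetical helper: divides one global cache budget among several open
    // JE environments. A real implementation would weight the shares, e.g. by
    // the per-environment usage reported below.
    public class GlobalCacheManager {

        private final long globalCacheBytes; // total budget for all environments

        public GlobalCacheManager(long globalCacheBytes) {
            this.globalCacheBytes = globalCacheBytes;
        }

        // Naive policy: give every open environment an equal share of the budget.
        public void rebalance(List<Environment> envs) throws DatabaseException {
            if (envs.isEmpty()) {
                return;
            }
            long share = globalCacheBytes / envs.size();
            for (Environment env : envs) {
                EnvironmentMutableConfig mutable = env.getMutableConfig();
                mutable.setCacheSize(share);   // per-environment cache budget
                env.setMutableConfig(mutable); // the evictor applies it lazily
            }
        }

        // Report how much of its cache each environment actually uses, as
        // suggested in the posting (getCacheDataBytes / getCacheTotalBytes).
        public void printUsage(List<Environment> envs) throws DatabaseException {
            StatsConfig fast = new StatsConfig();
            fast.setFast(true); // cheap stats only, no btree walk
            for (Environment env : envs) {
                EnvironmentStats stats = env.getStats(fast);
                System.out.println(env.getHome() + ": data="
                    + stats.getCacheDataBytes() + " / total="
                    + stats.getCacheTotalBytes());
            }
        }
    }

    A shrink only takes effect as the evictor releases memory, so the timing question above is a real one; polling getCacheTotalBytes() after lowering a cache size is one way to see when the memory has actually been given back.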

    Here is the updated DbCacheSize.java to allow calling it with an API.
    Charles Lamb
    /*-
    * See the file LICENSE for redistribution information.
    * Copyright (c) 2005-2006
    *      Oracle Corporation.  All rights reserved.
    * $Id: DbCacheSize.java,v 1.8 2006/09/12 19:16:59 cwl Exp $
    */
    package com.sleepycat.je.util;
    import java.io.File;
    import java.io.PrintStream;
    import java.math.BigInteger;
    import java.text.NumberFormat;
    import java.util.Random;
    import com.sleepycat.je.Database;
    import com.sleepycat.je.DatabaseConfig;
    import com.sleepycat.je.DatabaseEntry;
    import com.sleepycat.je.DatabaseException;
    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentConfig;
    import com.sleepycat.je.EnvironmentStats;
    import com.sleepycat.je.OperationStatus;
    import com.sleepycat.je.dbi.MemoryBudget;
    import com.sleepycat.je.utilint.CmdUtil;
    /**
    * Estimating JE in-memory sizes as a function of key and data size is not
    * straightforward for two reasons. There is some fixed overhead for each btree
    * internal node, so tree fanout and degree of node sparseness impacts memory
    * consumption. In addition, JE compresses some of the internal nodes where
    * possible, but compression depends on on-disk layouts.
    * DbCacheSize is an aid for estimating cache sizes. To get an estimate of the
    * in-memory footprint for a given database, specify the number of records and
    * record characteristics and DbCacheSize will return a minimum and maximum
    * estimate of the cache size required for holding the database in memory.
    * If the user specifies the record's data size, the utility will return both
    * values for holding just the internal nodes of the btree, and for holding the
    * entire database in cache.
    * Note that "cache size" is a percentage more than "btree size", to cover
    * general environment resources like log buffers. Each invocation of the
    * utility returns an estimate for a single database in an environment.  For an
    * environment with multiple databases, run the utility for each database, add
    * up the btree sizes, and then add 10 percent.
    * Note that the utility does not yet cover duplicate records and the API is
    * subject to change release to release.
    * The only required parameters are the number of records and key size.
    * Data size, non-tree cache overhead, btree fanout, and other parameters
    * can also be provided. For example:
    * $ java DbCacheSize -records 554719 -key 16 -data 100
    * Inputs: records=554719 keySize=16 dataSize=100 nodeMax=128 density=80%
    * overhead=10%
    *    Cache Size      Btree Size  Description
    *    30,547,440      27,492,696  Minimum, internal nodes only
    *    41,460,720      37,314,648  Maximum, internal nodes only
    *   114,371,644     102,934,480  Minimum, internal nodes and leaf nodes
    *   125,284,924     112,756,432  Maximum, internal nodes and leaf nodes
    * Btree levels: 3
    * This says that the minimum cache size to hold only the internal nodes of the
    * btree in cache is approximately 30MB. The maximum size to hold the entire
    * database in cache, both internal nodes and datarecords, is 125Mb.
    */
    public class DbCacheSize {
        private static final NumberFormat INT_FORMAT =
            NumberFormat.getIntegerInstance();
        private static final String HEADER =
            "    Cache Size      Btree Size  Description\n" +
            "--------------  --------------  -----------";
        //   12345678901234  12345678901234
        //                 12
        private static final int COLUMN_WIDTH = 14;
        private static final int COLUMN_SEPARATOR = 2;
        private long records;
        private int keySize;
        private int dataSize;
        private int nodeMax;
        private int density;
        private long overhead;
        private long minInBtreeSize;
        private long maxInBtreeSize;
        private long minInCacheSize;
        private long maxInCacheSize;
        private long maxInBtreeSizeWithData;
        private long maxInCacheSizeWithData;
        private long minInBtreeSizeWithData;
        private long minInCacheSizeWithData;
        private int nLevels = 1;
        public DbCacheSize (long records,
                   int keySize,
                   int dataSize,
                   int nodeMax,
                   int density,
                   long overhead) {
         this.records = records;
         this.keySize = keySize;
         this.dataSize = dataSize;
         this.nodeMax = nodeMax;
         this.density = density;
         this.overhead = overhead;
        public long getMinCacheSizeInternalNodesOnly() {
         return minInCacheSize;
        public long getMaxCacheSizeInternalNodesOnly() {
         return maxInCacheSize;
        public long getMinBtreeSizeInternalNodesOnly() {
         return minInBtreeSize;
        public long getMaxBtreeSizeInternalNodesOnly() {
         return maxInBtreeSize;
        public long getMinCacheSizeWithData() {
         return minInCacheSizeWithData;
        public long getMaxCacheSizeWithData() {
         return maxInCacheSizeWithData;
        public long getMinBtreeSizeWithData() {
         return minInBtreeSizeWithData;
        public long getMaxBtreeSizeWithData() {
         return maxInBtreeSizeWithData;
        public int getNLevels() {
         return nLevels;
        public static void main(String[] args) {
            try {
                long records = 0;
                int keySize = 0;
                int dataSize = 0;
                int nodeMax = 128;
                int density = 80;
                long overhead = 0;
                File measureDir = null;
                boolean measureRandom = false;
                for (int i = 0; i < args.length; i += 1) {
                    String name = args[i];
    String val = null;
    if (i < args.length - 1 && !args[i + 1].startsWith("-")) {
    i += 1;
    val = args[i];
    if (name.equals("-records")) {
    if (val == null) {
    usage("No value after -records");
    try {
    records = Long.parseLong(val);
    } catch (NumberFormatException e) {
    usage(val + " is not a number");
    if (records <= 0) {
    usage(val + " is not a positive integer");
    } else if (name.equals("-key")) {
    if (val == null) {
    usage("No value after -key");
    try {
    keySize = Integer.parseInt(val);
    } catch (NumberFormatException e) {
    usage(val + " is not a number");
    if (keySize <= 0) {
    usage(val + " is not a positive integer");
    } else if (name.equals("-data")) {
    if (val == null) {
    usage("No value after -data");
    try {
    dataSize = Integer.parseInt(val);
    } catch (NumberFormatException e) {
    usage(val + " is not a number");
    if (dataSize <= 0) {
    usage(val + " is not a positive integer");
    } else if (name.equals("-nodemax")) {
    if (val == null) {
    usage("No value after -nodemax");
    try {
    nodeMax = Integer.parseInt(val);
    } catch (NumberFormatException e) {
    usage(val + " is not a number");
    if (nodeMax <= 0) {
    usage(val + " is not a positive integer");
    } else if (name.equals("-density")) {
    if (val == null) {
    usage("No value after -density");
    try {
    density = Integer.parseInt(val);
    } catch (NumberFormatException e) {
    usage(val + " is not a number");
    if (density < 1 || density > 100) {
    usage(val + " is not betwen 1 and 100");
    } else if (name.equals("-overhead")) {
    if (val == null) {
    usage("No value after -overhead");
    try {
    overhead = Long.parseLong(val);
    } catch (NumberFormatException e) {
    usage(val + " is not a number");
    if (overhead < 0) {
    usage(val + " is not a non-negative integer");
    } else if (name.equals("-measure")) {
    if (val == null) {
    usage("No value after -measure");
    measureDir = new File(val);
    } else if (name.equals("-measurerandom")) {
    measureRandom = true;
    } else {
    usage("Unknown arg: " + name);
    if (records == 0) {
    usage("-records not specified");
    if (keySize == 0) {
    usage("-key not specified");
         DbCacheSize dbCacheSize = new DbCacheSize
              (records, keySize, dataSize, nodeMax, density, overhead);
         dbCacheSize.caclulateCacheSizes();
         dbCacheSize.printCacheSizes(System.out);
    if (measureDir != null) {
    measure(System.out, measureDir, records, keySize, dataSize,
    nodeMax, measureRandom);
    } catch (Throwable e) {
    e.printStackTrace(System.out);
    private static void usage(String msg) {
    if (msg != null) {
    System.out.println(msg);
    System.out.println
    ("usage:" +
    "\njava " + CmdUtil.getJavaCommand(DbCacheSize.class) +
    "\n -records <count>" +
    "\n # Total records (key/data pairs); required" +
    "\n -key <bytes> " +
    "\n # Average key bytes per record; required" +
    "\n [-data <bytes>]" +
    "\n # Average data bytes per record; if omitted no leaf" +
    "\n # node sizes are included in the output" +
    "\n [-nodemax <entries>]" +
    "\n # Number of entries per Btree node; default: 128" +
    "\n [-density <percentage>]" +
    "\n # Percentage of node entries occupied; default: 80" +
    "\n [-overhead <bytes>]" +
    "\n # Overhead of non-Btree objects (log buffers, locks," +
    "\n # etc); default: 10% of total cache size" +
    "\n [-measure <environmentHomeDirectory>]" +
    "\n # An empty directory used to write a database to find" +
    "\n # the actual cache size; default: do not measure" +
    "\n [-measurerandom" +
    "\n # With -measure insert randomly generated keys;" +
    "\n # default: insert sequential keys");
    System.exit(2);
    private void caclulateCacheSizes() {
    int nodeAvg = (nodeMax * density) / 100;
    long nBinEntries = (records * nodeMax) / nodeAvg;
    long nBinNodes = (nBinEntries + nodeMax - 1) / nodeMax;
    long nInNodes = 0;
         long lnSize = 0;
    for (long n = nBinNodes; n > 0; n /= nodeMax) {
    nInNodes += n;
    nLevels += 1;
    minInBtreeSize = nInNodes *
         calcInSize(nodeMax, nodeAvg, keySize, true);
    maxInBtreeSize = nInNodes *
         calcInSize(nodeMax, nodeAvg, keySize, false);
         minInCacheSize = calculateOverhead(minInBtreeSize, overhead);
         maxInCacheSize = calculateOverhead(maxInBtreeSize, overhead);
    if (dataSize > 0) {
    lnSize = records * calcLnSize(dataSize);
         maxInBtreeSizeWithData = maxInBtreeSize + lnSize;
         maxInCacheSizeWithData = calculateOverhead(maxInBtreeSizeWithData,
                                  overhead);
         minInBtreeSizeWithData = minInBtreeSize + lnSize;
         minInCacheSizeWithData = calculateOverhead(minInBtreeSizeWithData,
                                  overhead);
    private void printCacheSizes(PrintStream out) {
    out.println("Inputs:" +
    " records=" + records +
    " keySize=" + keySize +
    " dataSize=" + dataSize +
    " nodeMax=" + nodeMax +
    " density=" + density + '%' +
    " overhead=" + ((overhead > 0) ? overhead : 10) + "%");
    out.println();
    out.println(HEADER);
    out.println(line(minInBtreeSize, minInCacheSize,
                   "Minimum, internal nodes only"));
    out.println(line(maxInBtreeSize, maxInCacheSize,
                   "Maximum, internal nodes only"));
    if (dataSize > 0) {
    out.println(line(minInBtreeSizeWithData,
                   minInCacheSizeWithData,
                   "Minimum, internal nodes and leaf nodes"));
    out.println(line(maxInBtreeSizeWithData,
                   maxInCacheSizeWithData,
    "Maximum, internal nodes and leaf nodes"));
    } else {
    out.println("\nTo get leaf node sizing specify -data");
    out.println("\nBtree levels: " + nLevels);
    private int calcInSize(int nodeMax,
                   int nodeAvg,
                   int keySize,
                   boolean lsnCompression) {
    /* Fixed overhead */
    int size = MemoryBudget.IN_FIXED_OVERHEAD;
    /* Byte state array plus keys and nodes arrays */
    size += MemoryBudget.byteArraySize(nodeMax) +
    (nodeMax * (2 * MemoryBudget.ARRAY_ITEM_OVERHEAD));
    /* LSN array */
         if (lsnCompression) {
         size += MemoryBudget.byteArraySize(nodeMax * 2);
         } else {
         size += MemoryBudget.BYTE_ARRAY_OVERHEAD +
    (nodeMax * MemoryBudget.LONG_OVERHEAD);
    /* Keys for populated entries plus the identifier key */
    size += (nodeAvg + 1) * MemoryBudget.byteArraySize(keySize);
    return size;
    private int calcLnSize(int dataSize) {
    return MemoryBudget.LN_OVERHEAD +
    MemoryBudget.byteArraySize(dataSize);
    private long calculateOverhead(long btreeSize, long overhead) {
    long cacheSize;
    if (overhead == 0) {
    cacheSize = (100 * btreeSize) / 90;
    } else {
    cacheSize = btreeSize + overhead;
         return cacheSize;
    private String line(long btreeSize,
                   long cacheSize,
                   String comment) {
    StringBuffer buf = new StringBuffer(100);
    column(buf, INT_FORMAT.format(cacheSize));
    column(buf, INT_FORMAT.format(btreeSize));
    column(buf, comment);
    return buf.toString();
    private void column(StringBuffer buf, String str) {
    int start = buf.length();
    while (buf.length() - start + str.length() < COLUMN_WIDTH) {
    buf.append(' ');
    buf.append(str);
    for (int i = 0; i < COLUMN_SEPARATOR; i += 1) {
    buf.append(' ');
    private static void measure(PrintStream out,
    File dir,
    long records,
    int keySize,
    int dataSize,
    int nodeMax,
    boolean randomKeys)
    throws DatabaseException {
    String[] fileNames = dir.list();
    if (fileNames != null && fileNames.length > 0) {
    usage("Directory is not empty: " + dir);
    Environment env = openEnvironment(dir, true);
    Database db = openDatabase(env, nodeMax, true);
    try {
    out.println("\nMeasuring with cache size: " +
    INT_FORMAT.format(env.getConfig().getCacheSize()));
    insertRecords(out, env, db, records, keySize, dataSize, randomKeys);
    printStats(out, env,
    "Stats for internal and leaf nodes (after insert)");
    db.close();
    env.close();
    env = openEnvironment(dir, false);
    db = openDatabase(env, nodeMax, false);
    out.println("\nPreloading with cache size: " +
    INT_FORMAT.format(env.getConfig().getCacheSize()));
    preloadRecords(out, db);
    printStats(out, env,
    "Stats for internal nodes only (after preload)");
    } finally {
    try {
    db.close();
    env.close();
    } catch (Exception e) {
    out.println("During close: " + e);
    private static Environment openEnvironment(File dir, boolean allowCreate)
    throws DatabaseException {
    EnvironmentConfig envConfig = new EnvironmentConfig();
    envConfig.setAllowCreate(allowCreate);
    envConfig.setCachePercent(90);
    return new Environment(dir, envConfig);
    private static Database openDatabase(Environment env, int nodeMax,
    boolean allowCreate)
    throws DatabaseException {
    DatabaseConfig dbConfig = new DatabaseConfig();
    dbConfig.setAllowCreate(allowCreate);
    dbConfig.setNodeMaxEntries(nodeMax);
    return env.openDatabase(null, "foo", dbConfig);
    private static void insertRecords(PrintStream out,
    Environment env,
    Database db,
    long records,
    int keySize,
    int dataSize,
    boolean randomKeys)
    throws DatabaseException {
    DatabaseEntry key = new DatabaseEntry();
    DatabaseEntry data = new DatabaseEntry(new byte[dataSize]);
    BigInteger bigInt = BigInteger.ZERO;
    Random rnd = new Random(123);
    for (int i = 0; i < records; i += 1) {
    if (randomKeys) {
    byte[] a = new byte[keySize];
    rnd.nextBytes(a);
    key.setData(a);
    } else {
    bigInt = bigInt.add(BigInteger.ONE);
    byte[] a = bigInt.toByteArray();
    if (a.length < keySize) {
    byte[] a2 = new byte[keySize];
    System.arraycopy(a, 0, a2, a2.length - a.length, a.length);
    a = a2;
    } else if (a.length > keySize) {
    out.println("*** Key doesn't fit value=" + bigInt +
    " byte length=" + a.length);
    return;
    key.setData(a);
    OperationStatus status = db.putNoOverwrite(null, key, data);
    if (status == OperationStatus.KEYEXIST && randomKeys) {
    i -= 1;
    out.println("Random key already exists -- retrying");
    continue;
    if (status != OperationStatus.SUCCESS) {
    out.println("*** " + status);
    return;
    if (i % 10000 == 0) {
    EnvironmentStats stats = env.getStats(null);
    if (stats.getNNodesScanned() > 0) {
    out.println("*** Ran out of cache memory at record " + i +
    " -- try increasing the Java heap size ***");
    return;
    out.print(".");
    out.flush();
    private static void preloadRecords(final PrintStream out,
    final Database db)
    throws DatabaseException {
    Thread thread = new Thread() {
    public void run() {
    while (true) {
    try {
    out.print(".");
    out.flush();
    Thread.sleep(5 * 1000);
    } catch (InterruptedException e) {
    break;
    thread.start();
    db.preload(0);
    thread.interrupt();
    try {
    thread.join();
    } catch (InterruptedException e) {
    e.printStackTrace(out);
    private static void printStats(PrintStream out,
    Environment env,
    String msg)
    throws DatabaseException {
    out.println();
    out.println(msg + ':');
    EnvironmentStats stats = env.getStats(null);
    out.println("CacheSize=" +
    INT_FORMAT.format(stats.getCacheTotalBytes()) +
    " BtreeSize=" +
    INT_FORMAT.format(stats.getCacheDataBytes()));
    if (stats.getNNodesScanned() > 0) {
    out.println("*** All records did not fit in the cache ***");
