Density percentage

Hello,
I ran the program SAP_INFOCUBE_DESIGNS in SE38.
This program shows a list of cubes and the number of records in each cube.
Does somebody know what the density percentage means? (It is shown in white and is hardly visible.)

hi,
take a look at this thread:
"density" in SAP_INFOCUBE_DESIGNS
hope it helps.
the experts discussed the topic in great detail, don't miss it
Message was edited by: A.H.P

Similar Messages

  • Statistics adjustments

    What can I do to optimize the following:
    Block density
    Compression ratio
    Average clustering ratio
    Hit ratio on index cache
    Hit ratio on data cache
    Hit ratio on data file cache
    Edited by: Next Level on May 14, 2012 2:53 PM

    Block density (it all depends on your database):
    Check the block density after each process (i.e., calculation, data load, export) and you might see some changes to it. The block density simply reports the density of the data blocks of your database. There are no specific recommended density percentages for a database; general recommendations do not exist, because Essbase databases vary widely in size and structure. A low density combined with a high percentage of maximum blocks indicates that you may want to increase the number of dense dimensions to get an optimal block size, which may in turn help decrease calc time. When you have a low density and a high number of blocks, it means there are many unique sparse intersections that do not store data.
    Compression ratio:
    Always "No compression"; I haven't tried anything else.
    The Hit Ratio on Index Cache setting indicates the Essbase kernel's success rate in locating index information in the index cache without having to retrieve another index page from disk.
    The Hit Ratio on Data File Cache setting indicates the Essbase kernel's success rate in locating data file pages in the data file cache without having to retrieve the data file from disk.
    I don't know why I am writing all this... I guess you will get it all, in more detail, in the Admin guide :)

  • Global-Cache-Manager for Multi-Environment Applications

    Hi,
    Within our server implementation we provide a "multi-project" environment. Each project is fully isolated from the rest of the server, e.g. in terms of file-system usage, backup, and other resources. As one might expect, the way to go is a single VM with multiple BDB environments.
    Obviously each JE environment uses its own cache. Within our environment, with a dynamic number of active projects, this causes a problem: the optimal cache configuration within a given memory frame depends on the JE environments in use, BUT there is no way to define a global JE cache for ALL JE environments.
    Our "plan of attack" is to implement a Global-Cache-Manager that dynamically configures the cache sizes of all active BDB environments depending on the given global cache size.
    As Federico proposed, the starting point for determining the optimal cache setting at load time will be a modification to the DbCacheSize utility so that the return value can be picked up easily, rather than printed to stdout. After that, EnvironmentMutableConfig.setCacheSize will be used to set the cache size. If there is enough cache RAM available we could even set a larger cache, but I do not know if that really makes sense.
    If cache memory is getting tight, loading another BDB environment means decreasing the cache sizes of the already loaded environments. This is also done via EnvironmentMutableConfig.setCacheSize. Are there any timing conditions one should obey before assuming the memory is really available? To determine whether there are any BDB environments that do not use their cache, one could query each cache's utilization using EnvironmentStats.getCacheDataBytes() and getCacheTotalBytes().
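The proportional rebalancing step described above can be sketched as plain arithmetic. Everything here (the class name, the method, the demand inputs) is illustrative and not part of the JE API:

```java
import java.util.Arrays;

// Hypothetical sketch: split a global cache budget across environments in
// proportion to their demands (e.g. DbCacheSize estimates, or live
// EnvironmentStats.getCacheDataBytes() readings).
public class GlobalCacheManager {

    static long[] allocate(long globalBytes, long[] demandBytes) {
        long totalDemand = Arrays.stream(demandBytes).sum();
        if (totalDemand <= globalBytes) {
            // Everything fits: give each environment what it asks for.
            return demandBytes.clone();
        }
        long[] shares = new long[demandBytes.length];
        for (int i = 0; i < demandBytes.length; i++) {
            // Scale each demand down proportionally to fit the budget.
            shares[i] = globalBytes * demandBytes[i] / totalDemand;
        }
        return shares;
    }
}
```

Each computed share would then be applied with EnvironmentMutableConfig.setCacheSize(...) followed by Environment.setMutableConfig(...), as proposed above.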
    Are there any comments on this plan? Is there perhaps a better solution, or even an existing implementation?
    Do you think a global cache manager is something worth contributing back?
    Related Postings: Multiple envs in one process?
    Stefan Walgenbach

    Here is the updated DbCacheSize.java to allow calling it with an API.
    Charles Lamb
    /*
     * See the file LICENSE for redistribution information.
     *
     * Copyright (c) 2005-2006
     *      Oracle Corporation.  All rights reserved.
     *
     * $Id: DbCacheSize.java,v 1.8 2006/09/12 19:16:59 cwl Exp $
     */

    package com.sleepycat.je.util;

    import java.io.File;
    import java.io.PrintStream;
    import java.math.BigInteger;
    import java.text.NumberFormat;
    import java.util.Random;

    import com.sleepycat.je.Database;
    import com.sleepycat.je.DatabaseConfig;
    import com.sleepycat.je.DatabaseEntry;
    import com.sleepycat.je.DatabaseException;
    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentConfig;
    import com.sleepycat.je.EnvironmentStats;
    import com.sleepycat.je.OperationStatus;
    import com.sleepycat.je.dbi.MemoryBudget;
    import com.sleepycat.je.utilint.CmdUtil;

    /**
     * Estimating JE in-memory sizes as a function of key and data size is not
     * straightforward for two reasons. There is some fixed overhead for each
     * btree internal node, so tree fanout and degree of node sparseness
     * impacts memory consumption. In addition, JE compresses some of the
     * internal nodes where possible, but compression depends on on-disk
     * layouts.
     *
     * DbCacheSize is an aid for estimating cache sizes. To get an estimate of
     * the in-memory footprint for a given database, specify the number of
     * records and record characteristics and DbCacheSize will return a
     * minimum and maximum estimate of the cache size required for holding the
     * database in memory. If the user specifies the record's data size, the
     * utility will return both values for holding just the internal nodes of
     * the btree, and for holding the entire database in cache.
     *
     * Note that "cache size" is a percentage more than "btree size", to cover
     * general environment resources like log buffers. Each invocation of the
     * utility returns an estimate for a single database in an environment.
     * For an environment with multiple databases, run the utility for each
     * database, add up the btree sizes, and then add 10 percent.
     *
     * Note that the utility does not yet cover duplicate records and the API
     * is subject to change release to release.
     *
     * The only required parameters are the number of records and key size.
     * Data size, non-tree cache overhead, btree fanout, and other parameters
     * can also be provided. For example:
     *
     *  $ java DbCacheSize -records 554719 -key 16 -data 100
     *  Inputs: records=554719 keySize=16 dataSize=100 nodeMax=128 density=80%
     *  overhead=10%
     *
     *     Cache Size      Btree Size  Description
     *     30,547,440      27,492,696  Minimum, internal nodes only
     *     41,460,720      37,314,648  Maximum, internal nodes only
     *    114,371,644     102,934,480  Minimum, internal nodes and leaf nodes
     *    125,284,924     112,756,432  Maximum, internal nodes and leaf nodes
     *
     *  Btree levels: 3
     *
     * This says that the minimum cache size to hold only the internal nodes
     * of the btree in cache is approximately 30MB. The maximum size to hold
     * the entire database in cache, both internal nodes and data records, is
     * 125MB.
     */
    public class DbCacheSize {

        private static final NumberFormat INT_FORMAT =
            NumberFormat.getIntegerInstance();

        private static final String HEADER =
            "    Cache Size      Btree Size  Description\n" +
            "--------------  --------------  -----------";
        //   12345678901234  12345678901234
        //                 12

        private static final int COLUMN_WIDTH = 14;
        private static final int COLUMN_SEPARATOR = 2;

        private long records;
        private int keySize;
        private int dataSize;
        private int nodeMax;
        private int density;
        private long overhead;
        private long minInBtreeSize;
        private long maxInBtreeSize;
        private long minInCacheSize;
        private long maxInCacheSize;
        private long maxInBtreeSizeWithData;
        private long maxInCacheSizeWithData;
        private long minInBtreeSizeWithData;
        private long minInCacheSizeWithData;
        private int nLevels = 1;

        public DbCacheSize(long records,
                           int keySize,
                           int dataSize,
                           int nodeMax,
                           int density,
                           long overhead) {
            this.records = records;
            this.keySize = keySize;
            this.dataSize = dataSize;
            this.nodeMax = nodeMax;
            this.density = density;
            this.overhead = overhead;
        }

        public long getMinCacheSizeInternalNodesOnly() {
            return minInCacheSize;
        }

        public long getMaxCacheSizeInternalNodesOnly() {
            return maxInCacheSize;
        }

        public long getMinBtreeSizeInternalNodesOnly() {
            return minInBtreeSize;
        }

        public long getMaxBtreeSizeInternalNodesOnly() {
            return maxInBtreeSize;
        }

        public long getMinCacheSizeWithData() {
            return minInCacheSizeWithData;
        }

        public long getMaxCacheSizeWithData() {
            return maxInCacheSizeWithData;
        }

        public long getMinBtreeSizeWithData() {
            return minInBtreeSizeWithData;
        }

        public long getMaxBtreeSizeWithData() {
            return maxInBtreeSizeWithData;
        }

        public int getNLevels() {
            return nLevels;
        }

        public static void main(String[] args) {

            try {
                long records = 0;
                int keySize = 0;
                int dataSize = 0;
                int nodeMax = 128;
                int density = 80;
                long overhead = 0;
                File measureDir = null;
                boolean measureRandom = false;

                for (int i = 0; i < args.length; i += 1) {
                    String name = args[i];
                    String val = null;
                    if (i < args.length - 1 && !args[i + 1].startsWith("-")) {
                        i += 1;
                        val = args[i];
                    }
                    if (name.equals("-records")) {
                        if (val == null) {
                            usage("No value after -records");
                        }
                        try {
                            records = Long.parseLong(val);
                        } catch (NumberFormatException e) {
                            usage(val + " is not a number");
                        }
                        if (records <= 0) {
                            usage(val + " is not a positive integer");
                        }
                    } else if (name.equals("-key")) {
                        if (val == null) {
                            usage("No value after -key");
                        }
                        try {
                            keySize = Integer.parseInt(val);
                        } catch (NumberFormatException e) {
                            usage(val + " is not a number");
                        }
                        if (keySize <= 0) {
                            usage(val + " is not a positive integer");
                        }
                    } else if (name.equals("-data")) {
                        if (val == null) {
                            usage("No value after -data");
                        }
                        try {
                            dataSize = Integer.parseInt(val);
                        } catch (NumberFormatException e) {
                            usage(val + " is not a number");
                        }
                        if (dataSize <= 0) {
                            usage(val + " is not a positive integer");
                        }
                    } else if (name.equals("-nodemax")) {
                        if (val == null) {
                            usage("No value after -nodemax");
                        }
                        try {
                            nodeMax = Integer.parseInt(val);
                        } catch (NumberFormatException e) {
                            usage(val + " is not a number");
                        }
                        if (nodeMax <= 0) {
                            usage(val + " is not a positive integer");
                        }
                    } else if (name.equals("-density")) {
                        if (val == null) {
                            usage("No value after -density");
                        }
                        try {
                            density = Integer.parseInt(val);
                        } catch (NumberFormatException e) {
                            usage(val + " is not a number");
                        }
                        if (density < 1 || density > 100) {
                            usage(val + " is not between 1 and 100");
                        }
                    } else if (name.equals("-overhead")) {
                        if (val == null) {
                            usage("No value after -overhead");
                        }
                        try {
                            overhead = Long.parseLong(val);
                        } catch (NumberFormatException e) {
                            usage(val + " is not a number");
                        }
                        if (overhead < 0) {
                            usage(val + " is not a non-negative integer");
                        }
                    } else if (name.equals("-measure")) {
                        if (val == null) {
                            usage("No value after -measure");
                        }
                        measureDir = new File(val);
                    } else if (name.equals("-measurerandom")) {
                        measureRandom = true;
                    } else {
                        usage("Unknown arg: " + name);
                    }
                }

                if (records == 0) {
                    usage("-records not specified");
                }
                if (keySize == 0) {
                    usage("-key not specified");
                }

                DbCacheSize dbCacheSize = new DbCacheSize
                    (records, keySize, dataSize, nodeMax, density, overhead);
                dbCacheSize.calculateCacheSizes();
                dbCacheSize.printCacheSizes(System.out);

                if (measureDir != null) {
                    measure(System.out, measureDir, records, keySize,
                            dataSize, nodeMax, measureRandom);
                }
            } catch (Throwable e) {
                e.printStackTrace(System.out);
            }
        }

        private static void usage(String msg) {

            if (msg != null) {
                System.out.println(msg);
            }

            System.out.println
                ("usage:" +
                 "\njava " + CmdUtil.getJavaCommand(DbCacheSize.class) +
                 "\n   -records <count>" +
                 "\n      # Total records (key/data pairs); required" +
                 "\n   -key <bytes> " +
                 "\n      # Average key bytes per record; required" +
                 "\n  [-data <bytes>]" +
                 "\n      # Average data bytes per record; if omitted no leaf" +
                 "\n      # node sizes are included in the output" +
                 "\n  [-nodemax <entries>]" +
                 "\n      # Number of entries per Btree node; default: 128" +
                 "\n  [-density <percentage>]" +
                 "\n      # Percentage of node entries occupied; default: 80" +
                 "\n  [-overhead <bytes>]" +
                 "\n      # Overhead of non-Btree objects (log buffers, locks," +
                 "\n      # etc); default: 10% of total cache size" +
                 "\n  [-measure <environmentHomeDirectory>]" +
                 "\n      # An empty directory used to write a database to find" +
                 "\n      # the actual cache size; default: do not measure" +
                 "\n  [-measurerandom]" +
                 "\n      # With -measure insert randomly generated keys;" +
                 "\n      # default: insert sequential keys");

            System.exit(2);
        }

        private void calculateCacheSizes() {

            int nodeAvg = (nodeMax * density) / 100;
            long nBinEntries = (records * nodeMax) / nodeAvg;
            long nBinNodes = (nBinEntries + nodeMax - 1) / nodeMax;
            long nInNodes = 0;
            long lnSize = 0;

            for (long n = nBinNodes; n > 0; n /= nodeMax) {
                nInNodes += n;
                nLevels += 1;
            }

            minInBtreeSize = nInNodes *
                calcInSize(nodeMax, nodeAvg, keySize, true);
            maxInBtreeSize = nInNodes *
                calcInSize(nodeMax, nodeAvg, keySize, false);
            minInCacheSize = calculateOverhead(minInBtreeSize, overhead);
            maxInCacheSize = calculateOverhead(maxInBtreeSize, overhead);

            if (dataSize > 0) {
                lnSize = records * calcLnSize(dataSize);
                maxInBtreeSizeWithData = maxInBtreeSize + lnSize;
                maxInCacheSizeWithData =
                    calculateOverhead(maxInBtreeSizeWithData, overhead);
                minInBtreeSizeWithData = minInBtreeSize + lnSize;
                minInCacheSizeWithData =
                    calculateOverhead(minInBtreeSizeWithData, overhead);
            }
        }

        private void printCacheSizes(PrintStream out) {

            out.println("Inputs:" +
                        " records=" + records +
                        " keySize=" + keySize +
                        " dataSize=" + dataSize +
                        " nodeMax=" + nodeMax +
                        " density=" + density + '%' +
                        " overhead=" + ((overhead > 0) ? overhead : 10) + "%");
            out.println();
            out.println(HEADER);
            out.println(line(minInBtreeSize, minInCacheSize,
                             "Minimum, internal nodes only"));
            out.println(line(maxInBtreeSize, maxInCacheSize,
                             "Maximum, internal nodes only"));
            if (dataSize > 0) {
                out.println(line(minInBtreeSizeWithData,
                                 minInCacheSizeWithData,
                                 "Minimum, internal nodes and leaf nodes"));
                out.println(line(maxInBtreeSizeWithData,
                                 maxInCacheSizeWithData,
                                 "Maximum, internal nodes and leaf nodes"));
            } else {
                out.println("\nTo get leaf node sizing specify -data");
            }
            out.println("\nBtree levels: " + nLevels);
        }

        private int calcInSize(int nodeMax,
                               int nodeAvg,
                               int keySize,
                               boolean lsnCompression) {

            /* Fixed overhead */
            int size = MemoryBudget.IN_FIXED_OVERHEAD;

            /* Byte state array plus keys and nodes arrays */
            size += MemoryBudget.byteArraySize(nodeMax) +
                    (nodeMax * (2 * MemoryBudget.ARRAY_ITEM_OVERHEAD));

            /* LSN array */
            if (lsnCompression) {
                size += MemoryBudget.byteArraySize(nodeMax * 2);
            } else {
                size += MemoryBudget.BYTE_ARRAY_OVERHEAD +
                        (nodeMax * MemoryBudget.LONG_OVERHEAD);
            }

            /* Keys for populated entries plus the identifier key */
            size += (nodeAvg + 1) * MemoryBudget.byteArraySize(keySize);

            return size;
        }

        private int calcLnSize(int dataSize) {

            return MemoryBudget.LN_OVERHEAD +
                   MemoryBudget.byteArraySize(dataSize);
        }

        private long calculateOverhead(long btreeSize, long overhead) {
            long cacheSize;
            if (overhead == 0) {
                cacheSize = (100 * btreeSize) / 90;
            } else {
                cacheSize = btreeSize + overhead;
            }
            return cacheSize;
        }

        private String line(long btreeSize,
                            long cacheSize,
                            String comment) {

            StringBuffer buf = new StringBuffer(100);

            column(buf, INT_FORMAT.format(cacheSize));
            column(buf, INT_FORMAT.format(btreeSize));
            column(buf, comment);

            return buf.toString();
        }

        private void column(StringBuffer buf, String str) {

            int start = buf.length();

            while (buf.length() - start + str.length() < COLUMN_WIDTH) {
                buf.append(' ');
            }

            buf.append(str);

            for (int i = 0; i < COLUMN_SEPARATOR; i += 1) {
                buf.append(' ');
            }
        }

        private static void measure(PrintStream out,
                                    File dir,
                                    long records,
                                    int keySize,
                                    int dataSize,
                                    int nodeMax,
                                    boolean randomKeys)
            throws DatabaseException {

            String[] fileNames = dir.list();
            if (fileNames != null && fileNames.length > 0) {
                usage("Directory is not empty: " + dir);
            }

            Environment env = openEnvironment(dir, true);
            Database db = openDatabase(env, nodeMax, true);
            try {
                out.println("\nMeasuring with cache size: " +
                            INT_FORMAT.format(env.getConfig().getCacheSize()));
                insertRecords(out, env, db, records, keySize, dataSize,
                              randomKeys);
                printStats(out, env,
                           "Stats for internal and leaf nodes (after insert)");
                db.close();
                env.close();
                env = openEnvironment(dir, false);
                db = openDatabase(env, nodeMax, false);
                out.println("\nPreloading with cache size: " +
                            INT_FORMAT.format(env.getConfig().getCacheSize()));
                preloadRecords(out, db);
                printStats(out, env,
                           "Stats for internal nodes only (after preload)");
            } finally {
                try {
                    db.close();
                    env.close();
                } catch (Exception e) {
                    out.println("During close: " + e);
                }
            }
        }

        private static Environment openEnvironment(File dir,
                                                   boolean allowCreate)
            throws DatabaseException {

            EnvironmentConfig envConfig = new EnvironmentConfig();
            envConfig.setAllowCreate(allowCreate);
            envConfig.setCachePercent(90);
            return new Environment(dir, envConfig);
        }

        private static Database openDatabase(Environment env, int nodeMax,
                                             boolean allowCreate)
            throws DatabaseException {

            DatabaseConfig dbConfig = new DatabaseConfig();
            dbConfig.setAllowCreate(allowCreate);
            dbConfig.setNodeMaxEntries(nodeMax);
            return env.openDatabase(null, "foo", dbConfig);
        }

        private static void insertRecords(PrintStream out,
                                          Environment env,
                                          Database db,
                                          long records,
                                          int keySize,
                                          int dataSize,
                                          boolean randomKeys)
            throws DatabaseException {

            DatabaseEntry key = new DatabaseEntry();
            DatabaseEntry data = new DatabaseEntry(new byte[dataSize]);
            BigInteger bigInt = BigInteger.ZERO;
            Random rnd = new Random(123);

            for (int i = 0; i < records; i += 1) {
                if (randomKeys) {
                    byte[] a = new byte[keySize];
                    rnd.nextBytes(a);
                    key.setData(a);
                } else {
                    bigInt = bigInt.add(BigInteger.ONE);
                    byte[] a = bigInt.toByteArray();
                    if (a.length < keySize) {
                        byte[] a2 = new byte[keySize];
                        System.arraycopy(a, 0, a2, a2.length - a.length,
                                         a.length);
                        a = a2;
                    } else if (a.length > keySize) {
                        out.println("*** Key doesn't fit value=" + bigInt +
                                    " byte length=" + a.length);
                        return;
                    }
                    key.setData(a);
                }
                OperationStatus status = db.putNoOverwrite(null, key, data);
                if (status == OperationStatus.KEYEXIST && randomKeys) {
                    i -= 1;
                    out.println("Random key already exists -- retrying");
                    continue;
                }
                if (status != OperationStatus.SUCCESS) {
                    out.println("*** " + status);
                    return;
                }
                if (i % 10000 == 0) {
                    EnvironmentStats stats = env.getStats(null);
                    if (stats.getNNodesScanned() > 0) {
                        out.println("*** Ran out of cache memory at record " +
                                    i +
                                    " -- try increasing the Java heap size ***");
                        return;
                    }
                    out.print(".");
                    out.flush();
                }
            }
        }

        private static void preloadRecords(final PrintStream out,
                                           final Database db)
            throws DatabaseException {

            Thread thread = new Thread() {
                public void run() {
                    while (true) {
                        try {
                            out.print(".");
                            out.flush();
                            Thread.sleep(5 * 1000);
                        } catch (InterruptedException e) {
                            break;
                        }
                    }
                }
            };
            thread.start();
            db.preload(0);
            thread.interrupt();
            try {
                thread.join();
            } catch (InterruptedException e) {
                e.printStackTrace(out);
            }
        }

        private static void printStats(PrintStream out,
                                       Environment env,
                                       String msg)
            throws DatabaseException {

            out.println();
            out.println(msg + ':');

            EnvironmentStats stats = env.getStats(null);

            out.println("CacheSize=" +
                        INT_FORMAT.format(stats.getCacheTotalBytes()) +
                        " BtreeSize=" +
                        INT_FORMAT.format(stats.getCacheDataBytes()));

            if (stats.getNNodesScanned() > 0) {
                out.println("*** All records did not fit in the cache ***");
            }
        }
    }

  • "density" in SAP_INFOCUBE_DESIGNS

    Can someone tell me the meaning of the "density" that is displayed in white text for each cube by program SAP_INFOCUBE_DESIGNS?

    Never really noticed the Density measure; with my color palette, that line of text is barely visible.
    I think Patrick is talking about the size of a dimension relative to the size of the fact table. I don't believe that is what the InfoCube density is trying to measure.
    Here's my interpretation (could be full of you know what, but got to take a stab at it):
    As Bhanu points out, it's the number of fact table rows divided by the product of the row counts of all of the dimension tables. (Hope that creates a clear mental image... yikes!)
    This product of the dimension row counts gives the number of cells in the OLAP cube, assuming that every dimension is being considered. Now in reality, it would be really rare that every possible combination of dimension values exists in your data. If it did, you would have a density of 100%, that is, every cell in the OLAP cube would be populated with data.
    The density measure, then, is trying to tell you what percentage of the cells of your OLAP cube are populated; or put another way, how many of all the theoretically possible combinations of dimension values actually exist.
    What do you do with that info? It could perhaps be used to indicate that there are certain relationships among your characteristics that you might want to explore (which is something you would typically look at when defining dimensions during the modelling phase). I'll have to think a little more about its use.
    That's my best guess, for what it's worth. Hey, if I'm wrong, I at least hope it sounded good!
    Pizzaman
    Message was edited by: Pizzaman
    Thinking some more (always a dangerous thing): I'm not sure of SAP's cube design with respect to the amount of storage a cell in the OLAP cube consumes if it has values vs. cells without values. Maybe there is no difference, but if there is, the density measure might give you an idea of how much memory the cube would consume; e.g. a cube with 10,000 cells, 9,000 of them populated with a value, might consume more memory than if the cube only had 1,000 cells populated. But I really don't know; maybe a null cell consumes just as much storage. Sounds like a question for the cube gurus at SAP.
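To make the arithmetic concrete, here is a small, hypothetical illustration of the density formula described above (fact table rows divided by the product of the dimension table row counts); the numbers are invented:

```java
public class CubeDensity {

    // density% = 100 * fact table rows / (product of dimension row counts)
    static double densityPercent(long factRows, long[] dimRows) {
        double cells = 1.0;
        for (long rows : dimRows) {
            cells *= rows;    // total theoretically possible cube cells
        }
        return 100.0 * factRows / cells;
    }

    public static void main(String[] args) {
        // 9,000 fact rows against dimensions with 100, 30 and 10 entries:
        // 30,000 possible cells, so 30% of them are populated.
        System.out.println(densityPercent(9000L, new long[] {100L, 30L, 10L}));
    }
}
```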

  • Toolbar - rgb percentages to rgb level readouts?

    The reading in the toolbar ... as one passes the cursor over the image ... shows the corresponding pixel values in RGB percentages.
    This is meaningless to me, and worse, useless.
    What I need are the RGB level readings, as in ACR or any other program.
    How do I change to those? I can find no info on this yet in my searches.
    Neil

    Thanks, Don...
    I found that thread just after posting this one.
    I am rather surprised they decided NOT to do something they and every other raw converter designer have done: let the user choose an output space and give the levels for that output space. Otherwise, as they have chosen to do, their "output" file is useless as a final product. You have to check it in something else, and adjust further, before sending it to a lab.
    With that as their final deal, this is a clunker of a program for my usage. We do all our adjusting of color/contrast/density and send to our lab with express instructions that they NOT apply anything to the files: just print precisely as received, no color or density changes. I was expecting to be able to do this with all files, whether the raws from my D200 or my wife's S3 JPEGs. Yea, one program would handle all our "developing" needs.
    Au contraire... do 'em here, then open them in Photoshop also. How stupid is that?
    I hadn't even checked for this before buying the program, because I had never seen ANYONE, including Adobe, say something so technic-geek stupid.
    OF COURSE the levels would not mean anything unless you had a chosen output space. WHICH IS WHY THAT HAS ALWAYS BEEN THE WORKFLOW. Choosing your output space is ALWAYS one of the setup options.
    Sorry, I don't often do all caps but this has me really steamed.
    Neil

  • High or Low Density RAM???

    I'm looking at expanding the RAM in my iMac, and have been shopping around on eBay. However, I don't know the difference between high- and low-density RAM modules. Does anyone know which is most suitable, compatible, or beneficial for my iMac G5?
    Thanks

    Buying the best does not have to cost more or be an exercise in electrical engineering. Look at 1GB RAM modules from Crucial.com; it's the exact same OEM RAM that Apple ships in a large percentage of Macs (usually Micron brand, sometimes Samsung).
    On the other hand, if you're looking to buy an additional 512MB (a mistake IMHO), there's plenty of OEM RAM (Samsung, Micron, Hynix, Nanya brands) that sells very cheap on eBay, pulled from Macs prior to upgrading to 1GB sticks. 512MB sticks are cheap because very few people want them and a lot wish to sell what they've removed.

  • Error while entering  the Overhead percentage 0.3693 in Costing Sheet

    Hi,
    While defining a costing sheet in the system, I need to enter the overhead percentage as 0.3692 per the client's requirement, but the system gives the error below.
    Input should be in the form __,___,__~.___V
    Message no. 00088
    Diagnosis
    Your entry does not match the specified input format.
    System response
    The entry in this field was rejected.
    Procedure
    The entry must comply with the edit format. The following edit format characters have a special meaning:
    "_" (underscore)
    There should be an input character at this point; this should be a number for numeric fields.
    "." (decimal point) (applies to numeric fields)
    The decimal point occurs here (setting in the user master record).
    "," (thousands separator) (applies to numeric fields)
    This separator occurs (optionally) for more than three figures. Depending on the setting in the user master record, it can be a period or a comma.
    "V" (applies to numeric fields)
    The operational sign appears here. If used, it must be at the right margin of the field. The sign is either "-" or " " (space).
    "~" (tilde) (applies to numeric fields)
    As of and including this character, leading zeros must also be entered. Otherwise, this character has the same meaning as an underscore. Leading zeros need not be entered on the left of the tilde. They are not output at this position.
    All other characters have their normal meanings and must be entered in the same position as in the edit format.
    Thanks
    Sunitha

    hi Sunitha,
    this error is related to the decimal format. Please check your settings for the country:
    For 4.7C
    SPRO > IMG > General Setting > Define countries in mySAPsystem
    For ECC 6.0
    SPRO > IMG > SAP Netweaver > General Setting > Define countries in mySAPsystem
    Select your country and double-click it; the last field there is the Decimal Pt. format. Check that format and try to enter the value according to it: the error shows four decimals, while you may have maintained only two. Also check the decimal currency setting. One more possible reason: you may have maintained a comma instead of a full stop in the settings.
    thx.
    Ganu

  • Laserjet CP1025nw color, unable to change Print density

    Hi,
    I need to raise the "Print Density" to 5 (for PCB toner transfer) on my LaserJet
    CP1025nw color, but when I open the printer properties I do not have that option
    in the Device Settings tab, despite what is said on the help page for that model.
    OS: Windows XP
    The printer is on the USB port.
    The driver is: HP LaserJet Professional CP1020 Series + latest update
    How can I change that setting?
    Please help !!!
    Thank you in advance for any help you can bring !
    All the best,

    From what you describe, you want to increase the toner density on your subsequent print jobs. First of all, if you have not already, you should uninstall this printer and make sure that the drivers are the most up to date, using the website linked below:
    http://h10025.www1.hp.com/ewfrf/wc/softwareCategory?cc=us&lc=en&dlc=en&product=4052972
    Here you should also check that your firmware is up to date.
    I would also, both before and after, check under "Printing Preferences" (not Properties) to see if there is a way to change the density under the Paper/Quality tab.
    Let me know if this helps you out! 
    -Spencer 

  • Working with percentages in Adobe Acrobat 9 Pro

    So, I have a quick question. I am new to working with forms in Acrobat, and am having a hard time figuring this part out. If anyone can help me, I would greatly appreciate it!
    So I have a form that I've created in Acrobat 9 Pro. Below I have listed "Individual 1:" and then "percentage allocation". The "percentage allocations" are the different form fields for the percentages I've created. For each form field, I limited the values to not exceed 100%. I want individuals filling in this form to be able to fill in the percentages next to each of the names without the TOTAL percentage value (of all three together) going over 100%. But... the trick is, I don't have a separate "Total" form field; there's no room for one. So is there a way to make the total of the three equal 100% without having a "total" form field?
    Individual 1: percentage allocation
    Individual 2: percentage allocation1
    Individual 3: percentage allocation2
    Please let me know if this makes sense,

    Try67,
    Thank you for sending me that reference guide; it will help. This is extremely frustrating for me. Thank you for the field visible-or-hidden script. I did end up finding those yesterday; I just didn't know how to incorporate them so the field becomes visible if the total is less than 100.
    I ended up creating another text field right next to the total and called it "statement". I marked this field as "hidden" through the General properties option in the Adobe Acrobat dialog box, and I set up its default text to read, "Total value must not be less than 100%". All of this I did through the Properties dialog box.
    Individual 1: 30
    Individual 2: 20
    Individual 3: 10
    Total:  60%             "statement" text field is here.
I figured what I could do is add another "if statement" to the validation script I put in for the "Total" text field I already had (below). I don't know if that was a stupid thing to do or not, but I didn't know how else to do it. And where it says getField, I added the "statement" text field (the hidden one), thinking it would just get that field, but it's not working. What did I do wrong?
var total = Number(getField("individual1").value) + Number(getField("individual2").value) + Number(getField("individual3").value);
if (total > 100) {
    app.alert("The total of the beneficiary fields must equal 100%");
    event.rc = false;
}
if (total < 100) {
    this.getField("statement").display = display.visible;
} else {
    this.getField("statement").display = display.hidden;
}
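Separated from the Acrobat-specific API (getField, event.rc, display), the intended rule can be sketched and tested as plain JavaScript. The function name is made up; only the 100% rule and the warning behaviour come from the post:

```javascript
// Plain-JS sketch of the validation logic: reject totals over 100%,
// and show the warning "statement" field while the total is still under 100%.
function checkAllocations(a, b, c) {
  var total = Number(a) + Number(b) + Number(c);
  return {
    accept: total <= 100,        // maps to event.rc: reject entries over 100%
    showStatement: total < 100   // show "Total value must not be less than 100%"
  };
}
```

For example, 30/20/10 is accepted but flags the warning, while 50/30/20 is accepted with no warning.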

  • Percentage calculation in report painter

    Friends,
How do we calculate a percentage in Report Painter? I know we can do it using a formula, but when I enter a formula such as A/B it returns zero. Do I need to do anything else? A is a subtotal of some accounts and B is a subtotal of some accounts.
    Thanks in Advance

    Hi,
For decimal places:
select the column
go to Formatting --> Columns
in Decimal Places, put 0.00
This will give you decimals
    Points if useful
    Regards,
    Kiran

  • Trying to create a formula to calculate a discounted percentage, I need Help!?!?

I am learning as I go.  I have a form I am working on and I need to calculate a discount.  So I created a combo box that I want to add 5% and 10% into.  I want my salesmen to choose one of those percentages and have it calculate the 5% or 10% discount off the subtotal and give me a dollar amount in the discount field.
    5% x $100.00(subtotal) =  discount amount $5.00
    I am really not sure where to start with this.  Hoping for some pros out there to give me the dummies version of how to.

As the export value for each of these options enter 0.05 and 0.1, respectively, and then you'll be able to just use the drop-down field directly in a Product calculation, together with the subtotal field, to calculate the value of the discount.
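In plain terms, the Product calculation performs nothing more than subtotal × export value. A minimal sketch (the function and field names are illustrative, not part of the form):

```javascript
// Discount math behind the Product calculation: the combo box export value
// (0.05 for 5%, 0.1 for 10%) multiplied by the subtotal field's value.
function discountAmount(subtotal, rate) {
  return subtotal * rate;
}

// 5% of a $100.00 subtotal is a $5.00 discount
discountAmount(100, 0.05);
```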

  • Query help,  Percentages / ratio to reports / nests

    Hi
    I have a query that returns data like this
    D_NO POINTS COUNT_POINTS ID_COUNT
    4002 L_T_69 12 282
    4219 L_T_69 1 151
    4228 L_T_69 18 193
    4229 L_T_69 7 181
    4230 L_T_69 0 197
I also need to output a column that works out count_points as a percentage of id_count, e.g. 12/282 * 100 = 4.26.
I had a try with the RATIO_TO_REPORT function but no joy. I think I need to add in
another nested select or something, but what I was trying wasn't working.
Can anyone help?
    here is the query so far
    SELECT D_NO,
    GROUPS.POINTS,
    DECODE(GROUPS.POINTS, 'L_T_69' , L_T_69) AS COUNT_POINTS,
    ID_COUNT
    FROM
         (SELECT D_NO,
         Count (CASE WHEN VERBAL <= 69 THEN 1
              END) AS L_T_69,
         COUNT(ID_NUMBER) AS ID_COUNT
         FROM TBL_1
         WHERE VERBAL IS NOT NULL
         group by D_NO)
    TBL_1,
    ( SELECT 'L_T_69' POINTS FROM DUAL )GROUPS
    thank you

Not sure if this is what you're looking for but it may give you some clues:
select object_type
      ,has_a_c
      ,type_total
      ,round(100 * (has_a_c / type_total),2) ratio
from  (select object_type
             ,sum(case when instr(object_name,'C') <> 0 then 1
                       else 0
                  end) has_a_c
             ,count(*) type_total
       from   all_objects
       group by object_type);
    OBJECT_TYPE          HAS_A_C   TYPE_TOTAL   RATIO
    CONSUMER GROUP             1            2      50
    EVALUATION CONTEXT         1            1     100
    FUNCTION                  50          113   44.25
    INDEX                      7           20      35
    LIBRARY                    0            2       0
    OPERATOR                   1            2      50
    PACKAGE                  500         1158   43.18
    PACKAGE BODY             487         1126   43.25
    PROCEDURE                 54           86   62.79
    SEQUENCE                  62          116   53.45
    SYNONYM                 1060         2298   46.13
    TABLE                    365          721   50.62
    TABLE PARTITION           15           15     100
    TYPE                     104          272   38.24
    VIEW                     834         1896   43.99
    15 rows selected.
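Whatever shape the final SQL takes, the extra column is only this arithmetic: round(100 * count_points / id_count, 2). A sketch of the same computation outside the database, using rows shaped like the sample output (the property names are invented):

```javascript
// Add a pct column: round(100 * count_points / id_count, 2) per row.
function withPercentage(rows) {
  return rows.map(function (r) {
    return {
      dNo: r.dNo,
      countPoints: r.countPoints,
      idCount: r.idCount,
      pct: Math.round(100 * r.countPoints / r.idCount * 100) / 100
    };
  });
}

var rows = [
  { dNo: 4002, countPoints: 12, idCount: 282 },  // 12/282 -> 4.26
  { dNo: 4228, countPoints: 18, idCount: 193 }
];
```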

  • Can HPCM calculate percentages to use in allocations as well as perform the allocations

    Hi All
I have a client that is evaluating which products to purchase from the Oracle stack.  Unfortunately I don't know HPCM yet so I'm not sure of its capabilities.  I think they will definitely go with Hyperion Planning, but they do some relatively simple allocations and I cannot decide whether or not they need HPCM.  They currently have a relational / Access database that is used to calculate percentages to be used to allocate costs from back office to front office.  Whether they need to purchase HPCM seems to depend upon whether HPCM can calculate the percentages required for the allocation as well as perform the allocations.  The options as I see them are:
    1.  Replace legacy system with HPCM and use HPCM to calculate percentage and apply percentages to allocate costs
    2.  Keep legacy system for calculating percentages, upload percentages to HPCM and use HPCM to allocate costs
    3.  Keep legacy system for calculating percentages, upload percentages to Essbase and use Essbase to allocate costs
    4.  Keep legacy system for calculating percentages and allocating costs (not ideal as will require additional system interfaces of data from and to Hyperion Planning)
    I know HPCM uses Essbase, when I say Essbase above I mean Hyperion Planning.  I have advised the client that they ought to avoid option 4 but I am not clear enough on what HPCM can do to advise them on which of the other options is most suitable / cost effective.
    Any help would be appreciated.
    Regards
    Stu

    Hi Stu,
the plain answer is yes: HPCM can do such a simple task, although in my opinion it would be a bit oversized for it.
I want to add an option and put my vote on it:
0.  Pull the required data into Planning via a DB connection and use Planning to calculate the percentages and apply them to allocate costs.
    Regards,
    Philip Hulsebosch.
    www.trexco.nl
p.s. To all users: please close questions that have been answered. Then we do not have to open them to see if we can help, which saves us time to help others.

  • Calculate Revenue's Percentage per Year, Month or any filter that is applied to the Dashboard in DAX using Excel PowerView.

    Hello folks,
    I am facing an odd issue here with a Dashboard Report in PowerView.
The task is to display, in a table, a percentage of total (revenue) by any filter that the client wants. For instance, if the client selects the year 2013 (to be short), it should display for each customer the percentage of total as the Participation field.
If the client selects 2014, the data should change dynamically, and the same if he selects both years.
The issue I am facing is that it calculates correctly (showing a total of 100%) for either 2013 or 2014, but when I select both it sums up and shows a 200% total, altering all other lines in the same way.
So, basically what I need is to imitate a PARTITION BY analytic function, for instance PARTITION BY YEAR, and calculate each value over the total of the selected years only (if several years are selected, the percentage should be calculated over their combined total, still resulting in a 100% total).
    Here is the formula that I am using in the column:
    =[Valor NF]/CALCULATE(SUM([Valor NF]);ALLEXCEPT(Faturamento;Faturamento[Ano]))
Where [Valor NF] is the source column (invoice total), Faturamento is my table (Revenue) and [Ano] is the year column (I am calculating only by year to test).
    I'd appreciate any suggestion.
    Thanks in advance
    MCP
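A common DAX fix for this pattern is to take the denominator over ALLSELECTED(Faturamento) rather than ALLEXCEPT, so the total respects whatever slicer selection is active. Independent of DAX, the intended arithmetic can be sketched like this (data and property names are invented for illustration):

```javascript
// "Percentage of the selected total": each row's share is computed over the
// total of ONLY the selected years, so the shares always sum to 100%.
var invoices = [
  { year: 2013, customer: "A", value: 60 },
  { year: 2013, customer: "B", value: 40 },
  { year: 2014, customer: "A", value: 30 },
  { year: 2014, customer: "B", value: 70 }
];

function participation(rows, selectedYears) {
  var selected = rows.filter(function (r) { return selectedYears.indexOf(r.year) >= 0; });
  var total = selected.reduce(function (s, r) { return s + r.value; }, 0);
  return selected.map(function (r) {
    return { customer: r.customer, year: r.year, pct: 100 * r.value / total };
  });
}
```

With only 2013 selected the two shares are 60% and 40%; with both years selected the four shares are computed over the combined total and still sum to 100%, which is the behaviour the report needs.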

    Hi Nick,
Thanks for your suggestion, it worked for me. I added the 'System message' web item and set the visibility to 'Hidden', and now I am not getting any messages. Thank you for both.
    Thanks & Regards,
    Raja

  • Custom Calculation Script for percentages

    Hi everyone,
can someone help me? I am trying to write a script, but I am getting some unsatisfactory results.
    I upload the form so you can have a look at it:
    http://cjoint.com/?BGrwV2Sq5d2
here is the scenario:
so after selecting the different module code,
the fields Excellent, Good, Satisfactory and Unsatisfactory have different values (they have a script)
e.g. if Y1H3 is selected, Excellent will be at 33.33 % etc...
but then I would like that when the Grade is selected (Grade 1, Grade 2 ....) the fields (MarksRow1, MarksRow2 ...) take the corresponding value of the fields Excellent, Good, Satisfactory and Unsatisfactory
I've tried to write for each of the fields MarksRow1, MarksRow2 etc... the following script, but it doesn't work
(function () {
    var z = getField("Grade 1").value;
    if (z === 0) {
        event.value = "";
    } else if (z === 3) {
        event.value = getField("Excellent").value;
    } else if (z === 4) {
        event.value = getField("Good").value;
    } else if (z === 5) {
        event.value = getField("Satisfactory").value;
    } else if (z === 6) {
        event.value = getField("Unsatisfactory").value;
    }
})();
    I would like after that to work out the total in the MarksTotal field by adding the value of ( MarksRow1, MarksRow2 ...)
    can you help me please?
    thanks

    hi George,
    thank you for your interest
I've just realised that I was completely wrong in the different tests; there is no 5 and 6, only 0, 1, 2, 3 and 4,
and now it's working, thanks
yes, for the total: basically after selecting the different grades, a percentage is displayed in the marks column fields ("MarksRow1"-"MarksRow11")
I would like to add these percentages and display the total in the MarksTotal field
I would also like to leave this field (MarksTotal) empty if nothing has been selected
I've tried to do this, but there is a zero when nothing has been selected and it doesn't display the % sign next to the total
    here is the form: http://cjoint.com/?BGstXulF9PX
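One way to express the MarksTotal logic from the follow-up outside the form (in the real script the values would be read via getField from "MarksRow1" through "MarksRow11"; the function below is only a sketch of the summing, blank-when-empty, and % formatting behaviour):

```javascript
// MarksTotal sketch: sum the MarksRow values, stay empty ("") when nothing
// has been selected yet instead of showing 0, and append the % sign.
function marksTotal(values) {
  var filled = values.filter(function (v) { return v !== "" && v !== null; });
  if (filled.length === 0) return "";  // leave MarksTotal empty, not 0
  var sum = filled.reduce(function (s, v) { return s + Number(v); }, 0);
  return sum.toFixed(2) + " %";
}
```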
