Impact of data file size on DB performance

Hi,
I have a general query regarding the size of data files.
Considering DB performance, which of the two options below is better?
1. Larger data files but fewer of them (e.g. 2 files of 8G each)
2. Smaller data files but more of them (e.g. 8 files of 2G each)
I am working on a DB where I have noticed very high I/O.
I understand there might be many reasons for this.
However, I am checking whether DB performance can be improved by optimizing data file sizes (including the TEMP/UNDO tablespaces).
Kindly share your experiences with determining optimal file size.
Please let me know in case you need any DB statistics.
A few details are as follows:
OS: Solaris 10
Oracle: 10gR2
DB Size: 80G (Approx)
Data Files: UserData - 6 (15G each), UNDO - 2 (8G each), TEMP - 2 (4G each)
Thanks,
Ullhas

Ullhas wrote:
I have a general query regarding the size of data files.
Considering DB performance, which of the two options below is better?

Size or number really does not matter, assuming the other variables are held constant. More files means more open file handles, but at your database size it does not matter.

I am working on a DB where I have noticed very high I/O.
I understand there might be many reasons for this.
However, I am checking whether DB performance can be improved by optimizing data file sizes (including the TEMP/UNDO tablespaces).

Remember this when tuning I/O: the fastest I/O is the one that never takes place! High I/O may very well be a symptom of unnecessary full table scans (FTS) or poor execution plans. Validate this first before tuning I/O and you will be much better off.
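For example, a quick first pass might look something like this (a minimal sketch against V$SQL on 10gR2, assuming SELECT access to the V$ views):

-- Top 10 statements by physical reads: candidates for plan/FTS review
-- before touching the data file layout at all.
select *
  from (select sql_id,
               disk_reads,
               buffer_gets,
               executions,
               round(disk_reads / nullif(executions, 0)) reads_per_exec
          from v$sql
         order by disk_reads desc)
 where rownum <= 10;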
Regards,
Greg Rahn
http://structureddata.org

Similar Messages

  • SAP Data File size considerably reduced after Unicode Conversion

    Hello Experts
    I have just performed a CUUC (upgrade along with Unicode conversion) from R/3 4.7 to ECC 6.0 EHP5. The data size I had earlier was close to 463 GB (MSSQL MDF, LDF files); after the data export for the Unicode conversion the size was 45 GB (10% of the actual data size, which I feel is normal after a heterogeneous system copy), but after the import the DATA file size is only 247 GB. Is this normal, or have I lost some data? For example, I tried checking tables like MSEG and the number of entries has reduced from 15,678,790 to 15,290,545.
    Could you kindly let me know if there is a way to check from a Basis perspective that I have not lost any data? I have followed all the procedures as per SAP standards.
    Waiting for your quick reply.
    Best Regards
    Pritish

    Hi Nicholas
    I understand that data is compressed during the new R3load procedure, but why the number of table entries has gone down (and in some cases gone up) is still a question to me. For example:
                  Source          Target
    STPO         415,725         412,150
    STKO         126,710         126,141
    PLAF          74,671          78,336
    MDKP         193,487         192,747
    MDPB          55,329          63,557
    Any suggestions or ideas ?
    Best Regards
    Pritish

  • Sql loader maximum data file size..?

    Hi - I wrote a SQL*Loader script, run through a shell script, which imports data into a table from a CSV file. The CSV file size is around 700 MB. I am using Oracle 10g in a Sun Solaris 5 environment.
    My question is: is there any maximum data file size? The following code is from my shell script.
    SQLLDR=
    DB_USER=
    DB_PASS=
    DB_SID=
    controlFile=
    dataFile=
    logFileName=
    badFile=
    ${SQLLDR} userid=$DB_USER"/"$DB_PASS"@"$DB_SID \
              control=$controlFile \
              data=$dataFile \
              log=$logFileName \
              bad=$badFile \
              direct=true \
              silent=all \
              errors=5000
    Here is my control file code:
    LOAD DATA
    APPEND
    INTO TABLE KEY_HISTORY_TBL
    WHEN OLD_KEY <> ''
    AND NEW_KEY <> ''
    FIELDS TERMINATED BY ','
    TRAILING NULLCOLS
    (
            OLD_KEY "LTRIM(RTRIM(:OLD_KEY))",
            NEW_KEY "LTRIM(RTRIM(:NEW_KEY))",
            SYS_DATE "SYSTIMESTAMP",
            STATUS CONSTANT 'C'
    )
    Thanks,
    -Soma
    Edited by: user4587490 on Jun 15, 2011 10:17 AM
    Edited by: user4587490 on Jun 15, 2011 11:16 AM

    Hello Soma.
    How many records exist in your 700 MB CSV file? How many do you expect to process in 10 minutes? You may want to consider performing a set of simple unit tests with 1) 1 record, 2) 1,000 records, 3) 100 MB filesize, etc. to #1 validate that your shell script and control file syntax function as expected (including the writing of log files, etc.), and #2 gauge how long the processing will take for the full file.
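    For instance, after each test run a quick sanity check might look like this (a minimal sketch only; KEY_HISTORY_TBL and the STATUS constant come from the control file above, and the WHERE clause is purely illustrative):
    -- Count the rows the test load actually inserted
    select count(*) loaded_rows
      from key_history_tbl
     where status = 'C';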
    Hope this helps,
    Luke
    Please mark the answer as helpful or answered if it is so. If not, provide additional details.
    Always try to provide actual or sample statements and the full text of errors along with error code to help the forum members help you better.

  • Essbase cube data file size

    Hi,
    Why is it showing different numbers about my data file size in EAS>database>edit>properties>storage and a full database export.
    Thanks,
    KK

    Tim, in all seriousness, I am not a stalker. Honestly. You just post about things that interest me/we share some of the same skills. Alternatively, Glenn stalks me, I stalk you, it's time for you to become a sociopath too and stalk someone else.
    Okay, with that insanity out of the way, the other thing that could have a big impact on export size is if the OP did a level zero or input level export as opposed to a full export. In BSO databases in particular, that can lead to a very, very large set of .PAG (and to some extent .IND) files and a rather small export file size, as the calculated values aren't getting written out.
    If the export is done through EAS I think the default is level zero.
    Regards,
    Cameron Lackpour
    Edited by: CL on Sep 23, 2011 2:38 PM
    Bugger, the OP wrote "full database export". Okay, never mind, I am a terrible stalker. I agree with everything Tim wrote re compression.
    In an effort to add something useful: if you use the new-ish database archive feature in MaxL, you will essentially get the .PAG files + .IND files combined into a single binary file. I used to be a big fan of full restructures, clears, and reloads to do defrags, but now go with the restructure command. Despite the fact that it's single threaded, in my limited testing (only done it at one client) it was faster than the export-all, clear, reload approach. If you combine that with the archive mode you might have a better approach.
    Regards,
    Cameron Lackpour

  • Negative data file size

    RDBMS: Oracle 10g R2
    When I execute the statement to determine the size of the data files, the used space for data file DATA8B.ORA is negative. Why?
    Before this I dropped a table with 114,000,000 rows in this tablespace.
    FILE_NAME                                           FILE_SIZE      USED  PCT_USED     FREE
    G:\DIL\DATA5D.ORA                                        4096   3840.06     93.75   255.94
    Total tablespace DATA5--------------------------->      16384  14728.24      89.9  1655.76
    I:\DIL\DATA6A.ORA                                        4096   3520.06     85.94   575.94
    I:\DIL\DATA6B.ORA                                        4096   3456.06     84.38   639.94
    I:\DIL\DATA6C.ORA                                        4096   3520.06     85.94   575.94
    I:\DIL\DATA6D.ORA                                        4096   3520.06     85.94   575.94
    Total tablespace DATA6--------------------------->      16384  14016.24     85.53  2367.76
    G:\DIL\DATA7A.ORA                                        4096   3664.06     89.45   431.94
    G:\DIL\DATA7B.ORA                                        4096   3720.06     90.82   375.94
    G:\DIL\DATA7C.ORA                                        4096   3656.06     89.26   439.94
    G:\DIL\DATA7D.ORA                                        4096   3728.06     91.02   367.94
    G:\DIL\DATA7E.ORA                                        4096   3728.06     91.02   367.94
    Total tablespace DATA7--------------------------->      20480   18496.3      90.3   1983.7
    G:\DIL\DATA8A.ORA                                        3500   2880.06     82.29   619.94
    G:\DIL\DATA8B.ORA                                        4000  -2879.69    -71.99  6879.69
    Total tablespace DATA8--------------------------->       7500      0.37      5.14  7499.63

    the query is:
    select substr(decode(grouping(b.file_name),
                         1, decode(grouping(b.tablespace_name),
                                   1, rpad('TOTAL:', 48, '=') || '>>',
                                   rpad('Total tablespace ' || b.tablespace_name, 49, '-') || '>'),
                         b.file_name),
                  1, 50) file_name,
           sum(round(Kbytes_alloc / 1024, 2)) file_size,
           sum(round((kbytes_alloc - nvl(kbytes_free, 0)) / 1024, 2)) used,
           decode(grouping(b.file_name),
                  1, decode(grouping(b.tablespace_name),
                            1, sum(round(((kbytes_alloc - nvl(kbytes_free, 0)) / kbytes_alloc) * 100 / b.nbtbs, 2)),
                            sum(round(((kbytes_alloc - nvl(kbytes_free, 0)) / kbytes_alloc) * 100 / b.nbtbsfile, 2))),
                  sum(round(((kbytes_alloc - nvl(kbytes_free, 0)) / kbytes_alloc) * 100, 2))) pct_used,
           sum(round(nvl(kbytes_free, 0) / 1024, 2)) free
      from (select sum(bytes) / 1024 Kbytes_free,
                   max(bytes) / 1024 largest,
                   tablespace_name,
                   file_id
              from sys.dba_free_space
             group by tablespace_name, file_id) a,
           (select sum(bytes) / 1024 Kbytes_alloc,
                   tablespace_name,
                   file_id,
                   file_name,
                   count(*) over (partition by tablespace_name) nbtbsfile,
                   count(distinct tablespace_name) over () nbtbs
              from sys.dba_data_files
             group by tablespace_name, file_id, file_name) b
     where a.tablespace_name(+) = b.tablespace_name
       and a.file_id(+) = b.file_id
     group by rollup(b.tablespace_name, file_name);
    The same negative data file size shows up in Database Control as well...

  • S1000 Data file size limit is reached in statement

    I am new to Java and was given the task of troubleshooting a Java application that was written a few years ago and is no longer supported. The Java application creates database files in the user's directory: diwdb.properties, diwdb.data, diwdb.lproperties, diwdb.script. The purpose of the application is to open a zip file and insert the files into a table in the database.
    The values that are populated in the diwdb.properties file are as follows:
    #HSQL Database Engine
    #Wed Jan 30 08:55:05 GMT 2013
    hsqldb.script_format=0
    runtime.gc_interval=0
    sql.enforce_strict_size=false
    hsqldb.cache_size_scale=8
    readonly=false
    hsqldb.nio_data_file=true
    hsqldb.cache_scale=14
    version=1.8.0
    hsqldb.default_table_type=memory
    hsqldb.cache_file_scale=1
    hsqldb.log_size=200
    modified=yes
    hsqldb.cache_version=1.7.0
    hsqldb.original_version=1.8.0
    hsqldb.compatible_version=1.8.0
    Once the database file gets to 2 GB it brings up the error message 'S1000 Data file size limit is reached in statement (Insert into <tablename>......
    From searching on the internet it appeared that the parameter hsqldb.cache_file_scale needed to be increased, and 8 was a suggested value.
    I have the distribution files (.jar & .jnlp) that are used to run the application. And I have a source directory that was found that contains java files. But I do not see any properties files to set any parameters. I was able to load both directories into NetBeans but really don't know if the files can be rebuilt for distribution as I'm not clear on what I'm doing and NetBeans shows errors in some of the directories.
    I have also tried to add parameters to the startup url: http://uknt117.uk.infores.com/DIW/DIW.jnlp?hsqldb.large_data=true?hsqldb.cache_file_scale=8 but that does not affect the application.
    I have been struggling with this for quite some time. Would greatly appreciate any assistance to help resolve this.
    Thanks!

    Thanks! But where would I run the SQL statement? When anyone launches the application it creates the database files in their user directory. How would I connect to the database after that to execute the statement?
    I see the create table statements in the files I have pulled into NetBeans, in both the source folder and the distribution folder. Could I add the statement there, before the table is created, in the jar file in the distribution folder and then re-compile it for distribution? Or would I need to add it to the file in the source directory and recompile those to create a new distribution?
    Thanks!
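    For what it's worth, here is a hedged sketch of the kind of statement being referred to (HSQLDB 1.8 accepts SET PROPERTY for database-level settings; whether the cache_file_scale change takes effect depends on the cached data file being empty, so treat this as illustrative only):
    -- Raise the cache file scale so the .data file can grow beyond 2 GB
    SET PROPERTY "hsqldb.cache_file_scale" 8;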

  • Maximum Data file size in 10g,11g

    DB Versions:10g, 11g
    OS & versions: Aix 6.1, Sun OS 5.9, Solaris 10
    This is what Oracle 11g Documentation
    http://download.oracle.com/docs/cd/B28359_01/server.111/b28320/limits002.htm
    says about the Maximum Data file size
    Operating system dependent. Limited by maximum operating system file size; typically 2^22 or 4 M blocks
    I don't understand what this 2^22 thing is.
    In our AIX machine and ulimit command show
    $ ulimit -a
    time(seconds)        unlimited
    file(blocks)         unlimited  <-------------------------------------------
    data(kbytes)         unlimited
    stack(kbytes)        4194304
    memory(kbytes)       unlimited
    coredump(blocks)     unlimited
    nofiles(descriptors) unlimited
    threads(per process) unlimited
    processes(per user)  unlimited
    So this means that in AIX both the OS and Oracle can create a data file of any size. Right?
    What about 10g, 11g DBs running on Sun OS 5.9 and Solaris 10 ? Is there any Limit on the data file size?

    How do I determine the maximum number of blocks for an OS?
    df -g would give you the block size. The OS block size is 512 bytes on AIX.
    Let's say the db_block_size is 8K. What would the maximum file size for a data file in a smallfile tablespace and a bigfile tablespace be?
    Smallfile (traditional) tablespaces: a smallfile tablespace is a traditional Oracle tablespace, which can contain 1022 datafiles or tempfiles, each of which can contain up to approximately 4 million (2^22) blocks - 32G with an 8K block size.
    A bigfile tablespace contains only one datafile or tempfile, which can contain up to approximately 4 billion (2^32) blocks. The maximum size of the single datafile or tempfile is 128 terabytes (TB) for a tablespace with 32K blocks and 32 TB for a tablespace with 8K blocks.
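    To put numbers on that, a small sketch (it assumes access to V$PARAMETER; the block counts are the documented 2^22 - 1 and 2^32 - 1 limits):
    -- Per-datafile ceilings derived from the instance block size
    select value db_block_size,
           round(to_number(value) * 4194303 / power(1024, 3)) smallfile_max_gb,
           round(to_number(value) * 4294967295 / power(1024, 4)) bigfile_max_tb
      from v$parameter
     where name = 'db_block_size';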
    HTH
    -Anantha

  • What is impact to MDF file size if change database to simple recovery mode

    Hi,
    Currently I have a Database with 27GB MDF and 80GB LDF.
    If I change from Full recovery to Simple recovery mode, would LDF information be transferred to the MDF file and make the MDF file size exceed 100 GB?

    Hi
    May I know how to perform point in time recovery? Currently the only backup we perform every 4 hours is the server OS snapshot.
    Example :
    1. Now is 6pm and some error transaction occurred.
    2. We can perform 3pm server OS snapshot recovery on the mdf file. ( We would lost 3 hours data in this case )
    3. Could we apply the ldf transaction log after OS snapshot recovery and roll it forward till 5:50pm?
    You would be able to perform point in time recovery if you have
    1. Database configured in full recovery mode
    2. You were taking transaction log backups.(of course with full backup or may be differential)
    In your scenario, applying a snapshot won't help you. What you have to do is have a full backup in place. If you had a full backup, say a full backup at 3 PM, then you would restore it with NORECOVERY. After that, suppose you took transaction log backups
    every hour; then restore the 1 PM, 2 PM and 3 PM log backups, all with NORECOVERY.
    I should have mentioned this first, but before restoring the full backup you can also take a tail-log backup; read this article:
    http://technet.microsoft.com/en-us/library/ms179314.aspx
    So now, after the full backup and all log backups are applied with NORECOVERY, apply the tail-log backup with RECOVERY, and it's quite possible that you will have no data loss, or in some scenarios a very small data loss (not 3 hours as you would have with the snapshot).
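    A hedged T-SQL sketch of that sequence (the database name, file paths and times below are made up for illustration only):
    -- 1. Take a tail-log backup of the current database
    BACKUP LOG MyDb TO DISK = 'D:\bak\MyDb_tail.trn' WITH NORECOVERY;
    -- 2. Restore the last full backup without recovering
    RESTORE DATABASE MyDb FROM DISK = 'D:\bak\MyDb_full.bak' WITH NORECOVERY;
    -- 3. Restore each transaction log backup without recovering
    RESTORE LOG MyDb FROM DISK = 'D:\bak\MyDb_1500.trn' WITH NORECOVERY;
    -- 4. Restore the tail-log backup up to just before the bad transaction and recover
    RESTORE LOG MyDb FROM DISK = 'D:\bak\MyDb_tail.trn'
        WITH STOPAT = '2014-01-01 17:50:00', RECOVERY;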
    hope this helps
    Please mark this reply as the answer or vote as helpful, as appropriate, to make it useful for other readers

  • Reducing oracle data file sizes to a bare minimum

    Hi,
    How can I create an oracle instances with minimal file sizes ?
    Using the database configuration wizard, I've specified 10~20 MB sizes for SYSTEM01, UNDOTBS01 etc., but when the instance was created the files were still huge:
    SYSAUX01 was 83MB,
    SYSTEM01 was 256MB
    UNDOTBS01 was 158MB
    The only files that seems to have complied were the redo log files.
    It's a testing environment that's constantly restored, so I'm trying to fit the files onto a RAM disk (using ImDisk) for performance.
    The data required/created by each test is not large, and because it's only a testing DB, reliability/consistency is not an issue - rollbacks are also minimal (I've set the retention at 15 secs so far).
    So if anyone can shed some light on how to keep file sizes to a minimum, I'm all ears.
    If there's a way to run without some of these files, I'm keen to hear that too =)
    Thanks

    So if anyone can shed some light on how to keep filesizes to a minimal, I'm all ears.
    SQL below works.
    reduce SIZE value & see what works for yourself
    spool V888.log
    set term on echo on
    startup mount;
    CREATE DATABASE "V888"
        MAXLOGFILES 16
        MAXLOGMEMBERS 3
        MAXDATAFILES 100
        MAXINSTANCES 8
        MAXLOGHISTORY 292
    LOGFILE
      GROUP 1 '/u01/app/oracle/product/10.2.0/oradata/V888/redo01.log'  SIZE 50M,
      GROUP 2 '/u01/app/oracle/product/10.2.0/oradata/V888/redo02.log'  SIZE 50M,
      GROUP 3 '/u01/app/oracle/product/10.2.0/oradata/V888/redo03.log'  SIZE 50M
    EXTENT MANAGEMENT LOCAL
    SYSAUX DATAFILE '/u01/app/oracle/product/10.2.0/oradata/V888/aux88801.dbf'
         SIZE 20971520  REUSE AUTOEXTEND ON NEXT 655360  MAXSIZE 32767M
    DEFAULT TABLESPACE USERS DATAFILE
      '/u01/app/oracle/product/10.2.0/oradata/V888/system01.dbf'
         SIZE 20971520  REUSE AUTOEXTEND ON NEXT 655360  MAXSIZE 32767M
    DEFAULT TEMPORARY TABLESPACE TEMP888 TEMPFILE '/u01/app/oracle/product/10.2.0/oradata/V888/temp88801.dbf'
         SIZE 20971520  REUSE AUTOEXTEND ON NEXT 655360  MAXSIZE 32767M
    UNDO TABLESPACE UNDO888 DATAFILE
      '/u01/app/oracle/product/10.2.0/oradata/V888/undo88801.dbf'
         SIZE 20971520  REUSE AUTOEXTEND ON NEXT 655360  MAXSIZE 32767M
    CHARACTER SET AL32UTF8
    NATIONAL CHARACTER SET AL16UTF16
    spool off
    But WHY are you doing so?

  • Limitation on data file size for Oracle 8i on window 2000

    What is the size limitation for each Oracle data file ?
    Oracle 8i
    Window 2000 server (32-bit)

    Hi,
    You can get details from the Doc itself
    Refer : http://www.taom.ru/docs/oradoc.817/server.817/a76961/ch43.htm#11789 (Oracle8i Reference Release 2 (8.1.6) )
    Check 10g also : http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14237/limits002.htm (10g Release 2 (10.2) )
    - Pavan Kumar N

  • Ways to handle large volume data (file size = 60MB) in PI 7.0 file to file

    Hi,
    In a file-to-file scenario (flat file to XML file), the flat file is picked up by FCC and then sent to XI. In XI it performs message mapping and then an XSL transformation in sequence.
    The scenario works fine for small files (up to 5 MB), but when the input flat file size is more than 60 MB, XI shows lots of problems, like (1) JCo call errors or (2) sometimes XI even stops and we have to start it manually again to function properly.
    Please suggest some way to handle large volumes (file size up to 60 MB) in a PI 7.0 file-to-file scenario.
    Best Regards,
    Madan Agrawal.

    Hi Madan,
    If every record of your source file is processed in the target system, maybe you could split your source file into several messages by setting this up in the Recordsets per Message parameter.
    However, you just want to convert your .txt file into an .xml file. So, first try setting up the
    EO_MSG_SIZE_LIMIT parameter in SXMB_ADM.
    However, this could solve the problem in the Integration Engine, but the problem will persist in the Adapter Engine, I mean, the JCo call error...
    Take into account that the file is first processed in the Adapter Engine (File Content Conversion and so on)
    and then it is sent to the pipeline in the Integration Engine.
    Carlos

  • Data File Size

    I wrote a java program that copies data from a Sql Server DB to Oracle 9i.
    As the program ran, the Oracle 9i server ran out of disk space. I ran the 'clean' portion of my application that deletes all data from the tables. After I did this, there was still the same amount of free space on the hard drive.
    I am not an Oracle DBA or expert by any means, I am a programmer, but here is what I found; I'm not sure if it's relevant.
    On the Oracle server I went to the following folder: oracle\ora92\database\DB40
    I think this is the file where my data is stored. The file 'DB40DF' is 6 gigs.
    I have 2 other databases on the server that are exactly the same as my DB40 database and their 'DF' file sizes are 130 MB. These have no data in them yet. So from this I would assume that if I emptied all of my tables then DB40DF should be around 130 MB. But this is not the case; after I ran my delete code the file is still 6 gigs.
    Did I do my delete wrong? I assumed that my statement would commit, and it seems to. When I do a select * from the tables there is no data. If there is no data in my tables, why is the file 6 gigs? I have included my delete Java code in case I did something wrong on that side. If not, is there something I am supposed to do to get the DF file back to normal size? I have Toad 9.5 to administer the database.
    Here is a portion of my delete java code:
    stmt is a java.sql.Statement
    for (String table : tables) {
        sql = "DELETE FROM " + oracleSchema + table;
        stmt.addBatch(sql);              // queue one DELETE per table
    }
    int[] result = stmt.executeBatch();  // run all DELETEs in one batch
    if (stmt != null) {
        stmt.close();
        stmt = null;
    }
    When I print out the int array it shows how many rows were deleted from each table and the record count seems OK (i.e. table x cleared, 36007 rows deleted).
    Thank you.

    Oracle won't resize a datafile unless this is requested by the SQL statement ALTER DATABASE DATAFILE ... RESIZE ... . Oracle won't release extents allocated to a table unless this is requested by some ALTER TABLE statements (or a TRUNCATE). Extents will normally be reused by subsequent INSERT statements, so the behavior you note is normal.
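    For illustration, a hedged sketch of the two statements involved (the table name, file path and target size below are placeholders, not taken from your system):
    -- TRUNCATE, unlike DELETE, hands the table's extents back to the tablespace
    TRUNCATE TABLE my_table;
    -- The datafile itself only shrinks when explicitly resized, and only if the
    -- freed space sits at the end of the file
    ALTER DATABASE DATAFILE 'C:\oracle\ora92\database\DB40\users01.dbf' RESIZE 200M;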

  • Suggested data file size for Oracle 11

    Hi all,
    Creating a new system (SolMan 7.1) on AIX 6.1 running Oracle 11. 
    I have 4 logical volumes for data sized at 100 GB each.  During the installation I'm being asked to input the size for the data files. The default is "2000mb/2gb"; is this acceptable for a system sized like mine, or should I double them to 4 GB each? I know the max is 32 GB per data file, but that seems a bit large to me.  Just wanted to know if there was a standard best practice for this, or a formula to use based on system sizing.
    I was not able to find any quick suggestions in the Best Practices guide on this unfortunately...
    Any help would be greatly appreciated.
    Thanks!

    Ben Daniels wrote:
    Hi all,
    >
    > Creating a new system (SolMan 7.1) on AIX 6.1 running Oracle 11. 
    >
    > I have 4 logical volumes for data sized at 100gb each.  During the installation I'm being asked to input the size for the data files. The default is "2000mb/2gb" is this acceptable for a system sized like mine, or should I double them to 4gb each? I know the max is 32gb per data file but that seems a bit large to me.  Just wanted to know if there was a standard best practice for this, or a formula to use based on system sizing.
    >
    > I was not able to find any quick suggestions in the Best Practices guide on this unfortunately...
    >
    > Any help would be greatly appreciated.
    >
    > Thanks!
    Hi Ben,
    Check the note 129439 - Maximum file sizes with Oracle
    Best regards,
    Orkun Gedik

  • File size problem when performing XML Transformation into Excel

    Hi All,
    We are performing an XML transformation in ABAP which can be opened in Excel, and saving the output to a common share. But the file size of the Excel file is around 50 MB. After opening the file in Excel and saving it with a different name, it gets compressed to below 1 MB. So what kind of settings do we need to make in the transformation code so that it creates the Excel files with less memory/size?
    <?sap.transform simple?>
    <tt:transform xmlns:tt="http://www.sap.com/transformation-templates">
    <tt:root name="AAAAAAA"/>
    <tt:root name="BBBBBBBB"/>
    <tt:root name="CCCCCCCC"/>
    <tt:template>
    <?mso-application progid="Excel.Sheet"?>
    <Workbook xmlns="urn:schemas-microsoft-com:office:spreadsheet"
    xmlns:o="urn:schemas-microsoft-com:office:office"
    xmlns:x="urn:schemas-microsoft-com:office:excel"
    xmlns:dt="uuid:C2F41010-65B3-11d1-A29F-00AA00C14882"
    xmlns:ss="urn:schemas-microsoft-com:office:spreadsheet"
    xmlns:html="http://www.w3.org/TR/REC-html40">
       <ProtectObjects>True</ProtectObjects>
       <ProtectScenarios>True</ProtectScenarios>
      </WorksheetOptions>
    </Worksheet>
    </Workbook>
    </tt:template>
    </tt:transform>
    <DocumentProperties xmlns="urn:schemas-microsoft-com:office:office">
    </DocumentProperties>
    <CustomDocumentProperties xmlns="urn:schemas-microsoft-com:office:office">
      <ContentTypeId dt:dt="string">0x01010049B4763FE606154C9C9BC639FE7EE179</ContentTypeId>
    </CustomDocumentProperties>
    <OfficeDocumentSettings xmlns="urn:schemas-microsoft-com:office:office">
      <Colors>
       <Color>
        <Index>0</Index>
        <RGB>#FF0000</RGB>
       </Color>
       <Color>
        <Index>1</Index>
        <RGB>#FFE4B5</RGB>
       </Color>
       <Color>
        <Index>2</Index>
        <RGB>#FFF8DC</RGB>
       </Color>
       <Color>
        <Index>3</Index>
        <RGB>#000000</RGB>
       </Color>
      </Colors>
    </OfficeDocumentSettings>
    <ExcelWorkbook xmlns="urn:schemas-microsoft-com:office:excel">
      <WindowHeight>7305</WindowHeight>
      <WindowWidth>11340</WindowWidth>
      <WindowTopX>0</WindowTopX>
      <WindowTopY>0</WindowTopY>
      <TabRatio>334</TabRatio>
      <ProtectStructure>True</ProtectStructure>
      <ProtectWindows>False</ProtectWindows>
    </ExcelWorkbook>

    Hi Raghu,
    have a look at the XLSX file generated by Excel; you'll see that it's simply a zipped file. So you can't do it with a transformation, but you need the CL_ABAP_ZIP class. I advise you to look at the abap2xlsx project ABAP code (currently 3.0) to see how it works.
    Best regards,
    Sandra

  • Photoshop Meta Data File Sizes

    I am using meta data "description" menu to add meta data to web images.
    The basics:
    title
    description
    keywords
    copyright
    That's about it for my application... so far, and as I say it is just for web images at this point.
    At any rate, to my surprise I did not realize how much the metadata added to the file size.
    I had assumed it was minimal, but in fact that is not the case. Metadata ends up adding a lot.
    Question:
    So is there a way in Photoshop to see what the final file size will be with the metadata included? All I have found, so far, is the file size of the image itself when I view the
    "Save for Web & Devices" screen.
    I also tried the "Info" menu, and neither indicates the metadata additions to an image's file size.
    One of the reasons is, of course, that I would like to get as many keywords in there as possible. As an example, let's say this was the logo or site's header and they have a lot of different products.
    The more keywords, the better the SEO results will be. I am sure this statement in itself is disputable, but as of now (5/2013) it appears that way as far as SEs are concerned.
    Tested this extensively and the results were remarkable, but that is another story.
    Any help or links to other information will be
    greatly appreciated
    thank you
    Jonny

    Mylenium wrote:
    The more keywords the better the SEO results will be.
    I knew that was a big mistake, saying it that way.
    I was focused on the file size issue.
    Trying to use more efficient effective keywords for an image that represents several products.
    As a simple example let's say the site focused on pencils, pens, and paper clips.
    So just using one of those keywords or even just two of them would be misleading to the site visitor.
    So in my opinion using all three would be most effective.
    I don't intend to use red pens, black pens, purple pens, metal paper clips, plastic paper clips, colored paper clips, etc.
    But that is not my point anyway. What a pain in the *** it will be to try this, save it, try that, save it, blah, blah, blah,etc.
    Thank you very much for taking the time to respond, and KNOW that I greatly appreciate your advice.
    You confirmed my suspicions.
    I did find a PS CS5 plug-in that would supposedly make this task easier, but I am very suspicious and won't waste my time.
    Took a copy of the default "raw data" from the "File Info" menus and pasted that into BBEdit (XML or whatever), and the file size increased by about 3.5k with just the default "raw data", whoa.
    So with experimentation, as you suggest, I may get an idea of the size, more or less.
    I was hoping I was making it harder than it was but Adobe has not responded either.
