SAP Data File size considerably reduced after Unicode Conversion

Hello Experts
I have just performed a CUUC (upgrade combined with Unicode conversion) from R/3 4.7 to ECC 6.0 EHP5. The database size I had before was close to 463 GB (MSSQL MDF and LDF files). The export produced for the Unicode conversion was 45 GB (about 10% of the original size, which I believe is normal for a heterogeneous system copy), but after the import the data file size is only 247 GB. Is this normal, or have I lost data? For example, when I checked tables like MSEG, the number of entries had dropped from 15,678,790 to 15,290,545.
Could you kindly let me know whether there is a way, from the Basis perspective, to verify that no data has been lost? I have followed all the procedures as per SAP standards.
Waiting for your quick reply.
Best Regards
Pritish

Hi Nicholas
I understand that data is compressed by the new R3load procedure, but why the number of table entries has changed is still a question to me (in some cases it has even increased). For example:
             Source       Target
STPO        415,725      412,150
STKO        126,710      126,141
PLAF         74,671       78,336
MDKP        193,487      192,747
MDPB         55,329       63,557
Any suggestions or ideas?
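One check worth running - a minimal sketch, assuming direct SQL access to both the source and the target database; note that the counts shown in DB02/DBACOCKPIT can come from optimizer statistics rather than exact counts:

    -- Run the same statements on the source and the target system and compare.
    -- Exact counts can be slow on large tables such as MSEG.
    SELECT COUNT(*) FROM STPO;
    SELECT COUNT(*) FROM STKO;
    SELECT COUNT(*) FROM PLAF;
    SELECT COUNT(*) FROM MDKP;
    SELECT COUNT(*) FROM MDPB;

If the exact counts match between the two systems, the differences above are a reporting artifact rather than lost data.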
Best Regards
Pritish

Similar Messages

  • How do I completely crop a PDF so that the cropped data is removed and the file size is reduced?

    How do I completely crop a PDF so that the cropped data is removed and the total file size is reduced?
    When I use the "Crop" function, the cropped data still remains in the file and there is no reduction in file size. I need a way to truly crop a PDF using Acrobat software.

    When you export, try to get the full file path or else you will have to do a lot of manual searching.
If you downloaded the picture from Messages, the picture is stored in your User Library/Messages. To make your User Library visible, hold down the Option key while using the Finder “Go To Folder” command. Enter ~/Library/Messages/Attachments.
    If you prefer to make your user library permanently visible, use the Terminal command found below.
    http://osxdaily.com/2011/07/04/show-library-directory-in-mac-os-x-lion/
    You might want to bookmark the command. I had to use it again after I installed 10.8.4. I have also been informed that if you drag the user library to Finder it will remain visible.

  • In CS6, why does the file size remain the same after cropping? And how do I reduce the size after each cropping?

    In CS6, why does the file size remain the same after cropping? And how do I reduce the size after each cropping? Thx

    Select the Crop Tool and check the box [  ] Delete Cropped Pixels and you should see a reduction in file size.  With the box unchecked, the data is still maintained in the document, you just can't see it.
    -Noel

  • SAP data files

    Hi
    My SAP version is ECC 5.0 with Oracle 9.2.0.7 on Solaris 10.
    My current DB size is 300 GB, with 23 data files under /oracle/<SID>/sapdata1...4.
    I have implemented the MM, FICO, PP, and QM modules. My monthly DB growth rate is 30-35 GB.
    My questions:
    01. Is this growth rate acceptable? If not, how can I find the cause?
    02. PSAP<SID> is the tablespace my data is written to. I have to keep monitoring the free space in this tablespace and extend it manually once it fills up. Why do I need to do this when I have enabled AUTOEXTEND ON, and why does it not extend automatically?
    03. Table reorganization/index rebuilding is not giving me any gain. Why?
    Please give me your feedback.
    At the current growth rate I can manage for another 1.5 years, but after that what should I do?
    Roshantha

    Hi,
    This can also be done with the brtools. Apart from that:
    >> We are using Oracle 10g on RHEL 5.4 64-bit. I want to move all my SAP data files along with TEMP. Are there any specific notes that I need to follow apart from 868750? There is no mention of the TEMP file in this note.
    You can move all the datafiles, including the TEMP tablespace, with the statement below:
    ALTER DATABASE RENAME FILE '<old_full_path>' TO '<new_full_path>';
    >> Apart from this I want to know what changes need to be done in the control file for the new file system, so that the database can be started using a changed control file that holds the renamed and moved file locations.
    You will not change anything in the control files manually. They are updated automatically after you execute the statement.
    >> Are there any Oracle parameters or profile parameters that need to be changed, and how much downtime is required?
    No, you don't need to change any database profile parameter after you move the datafile(s). If you prepare a script and execute it, it is done in a few seconds, because only the configuration is updated; the datafiles are not created from scratch.
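    Putting it together, the whole move can be scripted along these lines (a sketch only; the paths are placeholders, and the datafiles themselves are copied at OS level while the database is shut down):
        -- After copying the files to their new location at OS level:
        STARTUP MOUNT;
        ALTER DATABASE RENAME FILE '/oracle/SID/sapdata1/data.data1'
                                TO '/oracle_new/SID/sapdata1/data.data1';
        -- ...one RENAME FILE per moved datafile and tempfile...
        ALTER DATABASE OPEN;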
    Best regards,
    Orkun Gedik

  • S1000 Data file size limit is reached in statement

    I am new to Java and was given the task of troubleshooting a Java application that was written a few years ago and is no longer supported. The Java application creates database files in the user's directory: diwdb.properties, diwdb.data, diwdb.lproperties, diwdb.script. The purpose of the application is to open a zip file and insert the files into a table in the database.
    The values that are populated in the diwdb.properties file are as follows:
    #HSQL Database Engine
    #Wed Jan 30 08:55:05 GMT 2013
    hsqldb.script_format=0
    runtime.gc_interval=0
    sql.enforce_strict_size=false
    hsqldb.cache_size_scale=8
    readonly=false
    hsqldb.nio_data_file=true
    hsqldb.cache_scale=14
    version=1.8.0
    hsqldb.default_table_type=memory
    hsqldb.cache_file_scale=1
    hsqldb.log_size=200
    modified=yes
    hsqldb.cache_version=1.7.0
    hsqldb.original_version=1.8.0
    hsqldb.compatible_version=1.8.0
    Once the database file gets to 2 GB it brings up the error message 'S1000 Data file size limit is reached in statement (Insert into <tablename>......'
    From searching on the internet it appeared that the parameter hsqldb.cache_file_scale needed to be increased, and 8 was a suggested value.
    I have the distribution files (.jar & .jnlp) that are used to run the application. And I have a source directory that was found that contains java files. But I do not see any properties files to set any parameters. I was able to load both directories into NetBeans but really don't know if the files can be rebuilt for distribution as I'm not clear on what I'm doing and NetBeans shows errors in some of the directories.
    I have also tried to add parameters to the startup url: http://uknt117.uk.infores.com/DIW/DIW.jnlp?hsqldb.large_data=true?hsqldb.cache_file_scale=8 but that does not affect the application.
    I have been struggling with this for quite some time. Would greatly appreciate any assistance to help resolve this.
    Thanks!

    Thanks! But where would I run the SQL statement? When anyone launches the application, it creates the database files in their user directory. How would I connect to the database after that to execute the statement?
    I see the CREATE TABLE statements in the files I have pulled into NetBeans, in both the source folder and the distribution folder. Could I add the statement there before the table is created in the jar file in the distribution folder and then re-compile it for distribution? Or would I need to add it to the file in the source directory and recompile those to create a new distribution?
    Thanks!
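    For reference, the HSQLDB 1.8 documentation describes raising the 2 GB limit on an existing database with a sequence roughly like the one below. This is a sketch, not something tested against this application; the connection URL is an assumption based on the file names above, and the statements can be issued with the DatabaseManager tool shipped in hsqldb.jar:
        -- 1. With the application stopped, connect to jdbc:hsqldb:file:<path>/diwdb
        --    and fold all CACHED table data back into the .script file:
        SHUTDOWN SCRIPT;
        -- 2. While the database is closed, edit diwdb.properties and set
        --    hsqldb.cache_file_scale=8 (the engine honors this only when the
        --    .data file has been emptied, which SHUTDOWN SCRIPT does).
        -- 3. Restart the application; the data file can now grow to 8 GB.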

  • Impact of data file size on DB performance

    Hi,
    I have a general query regarding size of data files.
    Considering DB performance, which of the below 2 options are better?
    1. Bigger data file size but less number of files (ex. 2 files with size 8G each)
    2. Smaller data file size but more number of files (ex. 8 files with size 2G each)
    I am working on a DB where I have noticed very high I/O.
    I understand there might be many reasons for this.
    However, I am checking for the possibility of improving DB performance through optimizing data file sizes (including the TEMP/UNDO tablespaces).
    Kindly share your experiences with determining optimal file size.
    Please let me know in case you need any DB statistics.
    Few details are as follows:
    OS: Solaris 10
    Oracle: 10gR2
    DB Size: 80G (Approx)
    Data Files: UserData - 6 (15G each), UNDO - 2 (8G each), TEMP - 2 (4G each)
    Thanks,
    Ullhas

    Ullhas wrote:
    >> I have a general query regarding size of data files. Considering DB performance, which of the below 2 options are better?
    Size or number really does not matter, assuming other variables are constant. More files result in more open file handles, but at your database size it matters not.
    >> I am working on a DB where I have noticed very high I/O. I understand there might be many reasons for this. However, I am checking for the possibility to improve DB performance through optimizing data file sizes (including TEMP/UNDO tablespaces).
    Remember this when tuning I/O: the fastest I/O is the one that never takes place! High I/O may very well be a symptom of unnecessary FTS or poor execution plans. Validate this first before tuning I/O and you will be much better off.
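    A starting point for that validation - a sketch, assuming SELECT access to the 10g dynamic performance views - is to list the top statements by physical reads and review their plans:
        -- Top 10 SQL statements by cumulative physical reads.
        SELECT *
          FROM (SELECT sql_id, disk_reads, executions,
                       SUBSTR(sql_text, 1, 60) AS sql_text
                  FROM v$sql
                 ORDER BY disk_reads DESC)
         WHERE ROWNUM <= 10;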
    Regards,
    Greg Rahn
    http://structureddata.org

  • Negative data file size

    RDBMS: oracle 10g R2
    When I execute a statement to determine the size of the data files, the data file DATA8B.ORA shows a negative size. Why?
    Shortly before, I had dropped a table with 114,000,000 rows in this tablespace.
    FILE_NAME                                          FILE_SIZE      USED   PCT_USED     FREE
    G:\DIL\DATA5D.ORA                                       4096   3840.06      93.75   255.94
    Total tablespace DATA5--------------------------->     16384  14728.24      89.90  1655.76
    I:\DIL\DATA6A.ORA                                       4096   3520.06      85.94   575.94
    I:\DIL\DATA6B.ORA                                       4096   3456.06      84.38   639.94
    I:\DIL\DATA6C.ORA                                       4096   3520.06      85.94   575.94
    I:\DIL\DATA6D.ORA                                       4096   3520.06      85.94   575.94
    Total tablespace DATA6--------------------------->     16384  14016.24      85.53  2367.76
    G:\DIL\DATA7A.ORA                                       4096   3664.06      89.45   431.94
    G:\DIL\DATA7B.ORA                                       4096   3720.06      90.82   375.94
    G:\DIL\DATA7C.ORA                                       4096   3656.06      89.26   439.94
    G:\DIL\DATA7D.ORA                                       4096   3728.06      91.02   367.94
    G:\DIL\DATA7E.ORA                                       4096   3728.06      91.02   367.94
    Total tablespace DATA7--------------------------->     20480  18496.30      90.30  1983.70
    G:\DIL\DATA8A.ORA                                       3500   2880.06      82.29   619.94
    G:\DIL\DATA8B.ORA                                       4000  -2879.69     -71.99  6879.69
    Total tablespace DATA8--------------------------->      7500      0.37       5.14  7499.63
    (all figures in MB, as computed by the query below)

    the query is:
    select substr(decode(grouping(b.file_name),
                         1, decode(grouping(b.tablespace_name),
                                   1, rpad('TOTAL:', 48, '=') || '>>',
                                   rpad('Total tablespace ' || b.tablespace_name, 49, '-') || '>'),
                         b.file_name),
                  1, 50) file_name,
           sum(round(kbytes_alloc / 1024, 2)) file_size,
           sum(round((kbytes_alloc - nvl(kbytes_free, 0)) / 1024, 2)) used,
           decode(grouping(b.file_name),
                  1, decode(grouping(b.tablespace_name),
                            1, sum(round(((kbytes_alloc - nvl(kbytes_free, 0)) / kbytes_alloc) * 100 / b.nbtbs, 2)),
                            sum(round(((kbytes_alloc - nvl(kbytes_free, 0)) / kbytes_alloc) * 100 / b.nbtbsfile, 2))),
                  sum(round(((kbytes_alloc - nvl(kbytes_free, 0)) / kbytes_alloc) * 100, 2))) pct_used,
           sum(round(nvl(kbytes_free, 0) / 1024, 2)) free
      from (select sum(bytes) / 1024 kbytes_free,
                   max(bytes) / 1024 largest,
                   tablespace_name,
                   file_id
              from sys.dba_free_space
             group by tablespace_name, file_id) a,
           (select sum(bytes) / 1024 kbytes_alloc,
                   tablespace_name,
                   file_id,
                   file_name,
                   count(*) over (partition by tablespace_name) nbtbsfile,
                   count(distinct tablespace_name) over () nbtbs
              from sys.dba_data_files
             group by tablespace_name, file_id, file_name) b
     where a.tablespace_name(+) = b.tablespace_name
       and a.file_id(+) = b.file_id
     group by rollup(b.tablespace_name, file_name);
    The same negative data file size shows up in Database Control.
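    One thing worth checking on 10g - an assumption based on the DROP TABLE mentioned above, not a confirmed diagnosis - is whether the dropped segment is still in the recycle bin, since recycle-bin space is reported as free by dba_free_space and can make the used figures computed against dba_data_files inconsistent:
        -- Dropped segments still held in the recycle bin for this tablespace;
        -- SPACE is in database blocks.
        SELECT owner, original_name, object_name, space
          FROM dba_recyclebin
         WHERE ts_name = 'DATA8';
        -- If the dropped table shows up, PURGE TABLESPACE DATA8; reclaims the space.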

  • Maximum Data file size in 10g,11g

    DB Versions:10g, 11g
    OS & versions: Aix 6.1, Sun OS 5.9, Solaris 10
    This is what Oracle 11g Documentation
    http://download.oracle.com/docs/cd/B28359_01/server.111/b28320/limits002.htm
    says about the maximum data file size:
    "Operating system dependent. Limited by maximum operating system file size; typically 2^22 or 4 MB blocks."
    I don't understand what this 2^22 thing is.
    In our AIX machine and ulimit command show
    $ ulimit -a
    time(seconds)        unlimited
    file(blocks)         unlimited  <-------------------------------------------
    data(kbytes)         unlimited
    stack(kbytes)        4194304
    memory(kbytes)       unlimited
    coredump(blocks)     unlimited
    nofiles(descriptors) unlimited
    threads(per process) unlimited
    processes(per user)  unlimited
    So, this means that in AIX both the OS and Oracle can create a data file of any size. Right?
    What about 10g, 11g DBs running on Sun OS 5.9 and Solaris 10 ? Is there any Limit on the data file size?

    >> How do I determine the maximum number of blocks for an OS?
    df -g would give you the block size. The OS block size is 512 bytes on AIX.
    >> Let's say the db_block_size is 8k. What would the maximum file size for a data file in a smallfile tablespace and a bigfile tablespace be?
    Smallfile (traditional) tablespaces: a smallfile tablespace is a traditional Oracle tablespace, which can contain up to 1022 datafiles or tempfiles, each of which can contain up to approximately 4 million (2^22) blocks - with 8K blocks, 32 GB per datafile.
    A bigfile tablespace contains only one datafile or tempfile, which can contain up to approximately 4 billion (2^32) blocks. The maximum size of the single datafile or tempfile is 128 terabytes (TB) for a tablespace with 32K blocks and 32 TB for a tablespace with 8K blocks.
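    Spelled out, the arithmetic behind those figures: 2^22 = 4,194,304 blocks, and 4,194,304 × 8 KB = 32 GB per smallfile datafile; 2^32 = 4,294,967,296 blocks, and 4,294,967,296 × 8 KB = 32 TB (or × 32 KB = 128 TB) per bigfile datafile.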
    HTH
    -Anantha

  • Sql loader maximum data file size..?

    Hi - I wrote a SQL*Loader script, run through a shell script, which imports data into a table from a CSV file. The CSV file size is around 700 MB. I am using Oracle 10g in a Sun Solaris 5 environment.
    My question is: is there any maximum data file size? The following code is from my shell script.
    SQLLDR=
    DB_USER=
    DB_PASS=
    DB_SID=
    controlFile=
    dataFile=
    logFileName=
    badFile=
    ${SQLLDR} userid=$DB_USER"/"$DB_PASS"@"$DB_SID \
              control=$controlFile \
              data=$dataFile \
              log=$logFileName \
              bad=$badFile \
              direct=true \
              silent=all \
              errors=5000
    Here is my control file code:
    LOAD DATA
    APPEND
    INTO TABLE KEY_HISTORY_TBL
    WHEN OLD_KEY <> ''
    AND NEW_KEY <> ''
    FIELDS TERMINATED BY ','
    TRAILING NULLCOLS
    (
            OLD_KEY "LTRIM(RTRIM(:OLD_KEY))",
            NEW_KEY "LTRIM(RTRIM(:NEW_KEY))",
            SYS_DATE "SYSTIMESTAMP",
            STATUS CONSTANT 'C'
    )
    Thanks,
    -Soma
    Edited by: user4587490 on Jun 15, 2011 10:17 AM
    Edited by: user4587490 on Jun 15, 2011 11:16 AM

    Hello Soma.
    How many records exist in your 700 MB CSV file? How many do you expect to process in 10 minutes? You may want to consider performing a set of simple unit tests with 1) 1 record, 2) 1,000 records, 3) a 100 MB file size, etc., first to validate that your shell script and control file syntax function as expected (including the writing of log files, etc.), and second to gauge how long the processing will take for the full file.
    Hope this helps,
    Luke
    Please mark the answer as helpful or answered if it is so. If not, provide additional details.
    Always try to provide actual or sample statements and the full text of errors along with error code to help the forum members help you better.

  • Can the file size be reduced?  Large presentations cannot be emailed due to size.

    Can the file size be reduced? Many PowerPoint presentations cannot be mailed as attachments.

    Hello,
    The size of created PDFs cannot be reduced in the current version. If you are referring to large presentations before converting them to PDFs, you can try accessing them on your iDevice via applications such as DropBox.
    Thanks.

  • I have Premier Elements 8.  I recorded some 45 minute lectures on a Sony Camcorder at 17 Mbps.  The resulting, edited file size is about 6.3 GB -- too big to record on a 4.7 GB DVD-R disc.  Using Adobe PE 8, can the file size be reduced in some way to end up less than 4.7 GB?

    I have Premier Elements 8.  I recorded some 45 minute lectures on a Sony Camcorder at 17 Mbps.  The resulting, edited file size is about 6.3 GB -- too big to record on a 4.7 GB DVD-R disc.  Using Adobe PE 8, can the file size be reduced in some way to end up less than 4.7 GB (I realize image quality may be less htan the original)?

    pault
    What are you going for....DVD-VIDEO Widescreen on DVD disc or AVCHD format on DVD disc?
    Are you doing your burn to disc with a check mark next to "Fit Content To Available Space" in the burn dialog?
    When you have the DVD 4.7 GB/120 min disc in the burner tray and are in the burn dialog ready to hit Burn, what does the burn dialog quality area show for Space Required and Bitrate, with and without the check mark next to "Fit Content To Available Space"?
    Just in case it needs mentioning: the DVD disc sold as 4.7 GB/120 min in reality holds 4.38 GB.
    The goal is to find out if you are overlooking the "Fit Content To Available Space" option in the burn dialog. Depending on the circumstances to be defined, use of that option does not necessarily mean a compromise in the end product quality.
    Let us start here and then move forward based on the details of your reply.
    Thank you.
    ATR

  • Essbase cube data file size

    Hi,
    Why am I seeing different numbers for my data file size in EAS > database > edit > properties > storage versus a full database export?
    Thanks,
    KK

    Tim, in all seriousness, I am not a stalker. Honestly. You just post about things that interest me/we share some of the same skills. Alternatively, Glenn stalks me, I stalk you, it's time for you to become a sociopath too and stalk someone else.
    Okay, with that insanity out of the way, the other thing that could have a big impact on export size is if the OP did a level-zero or input-level export as opposed to a full export. In BSO databases in particular, that can lead to a very, very large set of .PAG (and to some extent .IND) files and a rather small export file size, as the calculated values aren't getting written out.
    If the export is done through EAS I think the default is level zero.
    Regards,
    Cameron Lackpour
    Edited by: CL on Sep 23, 2011 2:38 PM
    Bugger, the OP wrote "full database export". Okay, never mind, I am a terrible stalker. I agree with everything Tim wrote re compression.
    In an effort to add something useful: if you use the new-ish database archive feature in MaxL, you will essentially get the .PAG and .IND files combined into a single binary file. I used to be a big fan of full restructures, clears, and reloads to do defrags, but now go with the restructure command. Despite the fact that it's single threaded, in my limited testing (only done at one client) it was faster than the export-all, clear, and reload approach. If you combine that with the archive mode you might have a better approach.
    Regards,
    Cameron Lackpour

  • After unicode conversion Variant missing

    Dear All,
    After the Unicode conversion, the variants in transactions S_ALR_87013611 and
    S_ALR_87013613 are missing.
    We checked table VARIT ("Variant Texts") and the data is missing there.
    Kindly suggest.
    Thanks and regards,
    Joseph

    Hello
    Did you discover the reason for this? We are now facing the same issue and we cannot find the reason.
    I don't think that note 987914 can be the answer.
    Could it be connected to the conversion procedure, or caused by a wrongly executed step?
    Every suggestion can help,
    Thanks
    Luca

  • Timeouts increased after we moved USR, SAP data files and TLogs to new SAN

    We are having issues with timeouts after we moved our USR, SAP SQL Datafiles and SAP Transaction Logs from our old SAN to a new SAN.
    Timeouts for SAPGUI users are set to 10 minutes.
    We are running Windows Server 2003 with SQL Server 2005.
    The SAP database has 8 datafiles with a total size of about 350GB.
    Procedure we used to move SAP to new SAN:
    1. Attached 3 new SAN Volumes
         -a. USR
         -b. Data Files
         -c. Transaction Logs
    2. Shutdown SAP and SQL services
    3. Aligned the new volumes with a 1024 KB offset and gave the data file and transaction log volumes a 64 KB allocation
        size. (The alignment and 64 KB allocation size were not set up for these volumes on the old SAN.)
    4. Copied the 3 volumes from old to new.
    5. Changed the new volumes drive letter to the drive letters of the old volumes.
         -a. I had to restart in order to change the USR volume.
         -b. Because of this I had to set up the sapmnt and saploc shares again.
    6. Started SQL services and then SAP services and everything came up just fine.
    The week before we had anywhere from 1 to 9 timeouts per day.
    This week: Monday had 20 and Tuesday had 26.
    On Monday we saw that MD07 was the only transaction that was timing out, but Tuesday had others as well.
    The number of users in the system is about the same.  The number of orders going in is about the same.  No big transports went in right before we switched.
    Performance counters that I know about for disk look a lot better on the Data Files.
    - PAGEIOLATCH_SH ms/request is about 50% better
    - Under I/O Performance in DBACOCKPIT:
      - MS/OP is now anywhere from 5 to 30 - Old SAN: 50 to 300
    - The Hit Ratio is over 99% - same as the old SAN
    Looking at Wiley Introscope graphs:
    - The "SAP Host: Average queue length" is about 30% to 40% lower then the old SAN.
    - the "SAP Host: Disk utilization in %" is about the same.
    Questions:
    1. Did we do anything wrong or miss anything with our move procedure?
         a. Do we have to do anything in SQL since we changed volumes even though we kept the drive letters the same?
    2. What other logs or performance counters should I be looking at?
    Thank you,
    Neil
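    Regarding question 1a, one way to confirm that SQL Server still has the expected file locations on record after such a move - a minimal sketch, with <SID> standing in for the actual database name:
        -- SQL Server 2005: list the physical paths and state recorded
        -- for the SAP database's data and log files.
        SELECT name, physical_name, state_desc
          FROM sys.master_files
         WHERE database_id = DB_ID('<SID>');
    Since the drive letters were kept the same, the paths returned should match the old ones; in that case nothing further needs to be changed inside SQL Server for the file locations.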

    Our new SAN Vendor is Compellent.  They have been fantastic.  I would highly recommend checking them out.
    The reasons for the timeouts had nothing to do with the SAN... well, kind of, anyway.
    I decided to check t-code SM20 to see what users were doing when these timeouts were happening.  What I found was that the program R_BAPI_NETWORK_MAINTAIN was being called thousands of times in a matter of 10 to 15 minutes, at random times throughout the day.  It would take up about 50 to 80 percent of the programs being executed during these times.
    So I sent this information to our developers, and they found that R_BAPI_NETWORK_MAINTAIN was being called from another program that was looping thousands of times. The trigger to stop the loop wasn't firing fast enough.  They made a change and we haven't seen the timeouts since.
    I think the performance increase allowed the loop to run faster, which caused the slowdowns and timeouts to happen more often.
    Thank you to everyone for their help!
    Neil

  • How can I reduce the file size rendered by After Effects?

    When I render a relatively simple 5 second project in After Effects, the file size of the resultant .avi is 64MB.  If I change the properties to reduce the file size, the degradation makes the file unusable.  What am I doing wrong?

    Is AE's encoder really that inefficient? 
    The thing is, AVI doesn't mean much.
    It's pretty much an empty container box, which doesn't imply a quality level.
    So, AME could default to something completely different as a starting point to produce an AVI file.
    AE defaults to uncompressed video when you pick AVI as a format, so obviously this produces huge file sizes. There could be similar quality thresholds with smaller sizes if you pick other AVI codecs, but that's a different subject. And in any case, when you're rendering a production-quality master, file size is usually not your main concern. You typically use this high-quality video file as a source for compressed flavors for distribution. So pristine video files with huge sizes are a good thing - people then wonder why trailers at apple.com, for instance, look so good. And the most compressed formats benefit enormously from having an uncompressed file as a source.
    Regarding encoding efficiency, yes, AE is less efficient than dedicated encoding solutions, above all because it doesn't support 2-pass encoding. Note that for some formats 2-pass makes a night-and-day difference, while for others the difference is nothing as drastic as most users seem to believe.
    All of this is a moot point for AVI, because the default AVI codecs don't offer these encoding options, which are more the realm of distribution formats like FLV, MPEG-4/H.264 or WMV.
    There are distribution codecs which use AVI as a container out there, but those are a different case.
