BigFile Issue

I need some basic help regarding this.
A bigfile tablespace is a tablespace with a single, but very large (up to 4G blocks) datafile. Traditional smallfile tablespaces, in contrast, can contain multiple datafiles, but the files cannot be as large. The benefits of bigfile tablespaces are the following:
A bigfile tablespace with 8K blocks can contain a 32 terabyte datafile. A bigfile tablespace with 32K blocks can contain a 128 terabyte datafile. The maximum number of datafiles in an Oracle Database is limited (usually to 64K files). Therefore, bigfile tablespaces can significantly enhance the storage capacity of an Oracle Database.
My question is: does 4G refer to the number of blocks, or is it the size of a data block?
Or, more specifically:
"8K blocks can contain a 32 terabyte datafile."
Here, is 8K the total number of blocks, or is the block size 8K?
Please resolve my doubt.
manish

We are talking about 4G (roughly four billion) blocks, each of size 8K.
The block size is whatever you have defined for the database (or tablespace). In your case, the bigfile tablespace can contain up to 4G blocks, each of size 8K.
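As a quick sanity check of that arithmetic, here is a minimal query (plain Oracle SQL, runnable as written) that multiplies the maximum block count by an 8K block size:
-- 2^32 (4G) blocks * 8192 bytes per block, expressed in terabytes
SELECT POWER(2, 32) * 8192 / POWER(1024, 4) AS max_datafile_tb
FROM dual;
-- returns 32, matching the documented 32 TB limit for an 8K block size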

Similar Messages

  • Bigfile 10g

    for Bigfile 10g
ASM is not only recommended, but a requirement because of inode issues: "If you create a truly large bigfile tablespace on a traditional file system, you will suffer from horrendous inode locking issues. That is why ASM is not really an optional..."
My question is: how can an inode locking issue negatively affect a bigfile tablespace?

    Does the following link answer your question?
    http://kevinclosson.wordpress.com/2006/10/30/bigfile-tablspaces-require-asm-huh/

  • Restore Database to non-ASM Storage - Issue with Bigfile Tablespace

I have been testing a restore of my prod database, which uses ASM (and Oracle-managed files) for storage, to a different server with non-ASM storage. The Oracle version is 10g EE. My database has one bigfile tablespace, and its datafile is about 250GB. The restore fails, and it has something to do with the bigfile tablespace.
    Here is the rman restore script:
run
{
    set newname for datafile 1 to '/ora01/db/ehr/system01.dbf';
    set newname for datafile 2 to '/ora01/db/ehr/undotbs01.dbf';
    set newname for datafile 3 to '/ora01/db/ehr/sysaux01.dbf';
    set newname for datafile 4 to '/ora01/db/ehr/undotbs02.dbf';
    set newname for datafile 5 to '/ora01/db/ehr/users01.dbf';
    set newname for datafile 6 to '/ora01/db/ehr/apolloaud01.dbf';
    set newname for datafile 7 to '/ora01/db/ehr/apollohist01.dbf';
    set newname for datafile 8 to '/ora01/db/ehr/apolloidx01.dbf';
    set newname for datafile 9 to '/ora01/db/ehr/apollotab01.dbf';
    set newname for datafile 10 to '/ora01/db/ehr/apollotab02.dbf';
    set newname for datafile 11 to '/ora02/db/ehr/apollolob01.dbf';
    set newname for datafile 12 to '/ora01/db/ehr/apollofdb01.dbf';
    set newname for datafile 13 to '/ora01/db/ehr/apolloidx02.dbf';
    set newname for datafile 14 to '/ora01/db/ehr/apolloidx03.dbf';
    set newname for datafile 15 to '/ora01/db/ehr/apolloaud02.dbf';
    set newname for datafile 16 to '/ora01/db/ehr/apollotab03.dbf';
    set until sequence 60298 thread 2;
    restore database;
    switch datafile all;
recover database;
}
    Datafile 11 is the datafile in the bigfile tablespace. Here are the weird things about the restore:
    1. The restore output shows this:
    creating datafile fno=11 name=/ora02/db/ehr/apollolob01.dbf
    channel ORA_DISK_1: starting datafile backupset restore
    channel ORA_DISK_1: specifying datafile(s) to restore from backup set
    restoring datafile 00001 to /ora01/db/ehr/system01.dbf
    restoring datafile 00002 to /ora01/db/ehr/undotbs01.dbf
    restoring datafile 00003 to /ora01/db/ehr/sysaux01.dbf
    restoring datafile 00004 to /ora01/db/ehr/undotbs02.dbf
    restoring datafile 00005 to /ora01/db/ehr/users01.dbf
    restoring datafile 00006 to /ora01/db/ehr/apolloaud01.dbf
    restoring datafile 00007 to /ora01/db/ehr/apollohist01.dbf
    restoring datafile 00008 to /ora01/db/ehr/apolloidx01.dbf
    restoring datafile 00009 to /ora01/db/ehr/apollotab01.dbf
    restoring datafile 00010 to /ora01/db/ehr/apollotab02.dbf
    restoring datafile 00012 to /ora01/db/ehr/apollofdb01.dbf
    restoring datafile 00013 to /ora01/db/ehr/apolloidx02.dbf
    restoring datafile 00014 to /ora01/db/ehr/apolloidx03.dbf
    restoring datafile 00015 to /ora01/db/ehr/apolloaud02.dbf
    restoring datafile 00016 to /ora01/db/ehr/apollotab03.dbf
Why is it "creating" datafile 11 at the beginning? It then doesn't even say it is "restoring" that datafile; it only restores datafiles 1 through 10 and 12 through 16.
When it creates datafile 11 it is only 26GB, which is much smaller than it should be according to the v$datafile view on the source prod database. Also, even though it says it is creating datafile 11 as /ora02/db/ehr/apollolob01.dbf, it actually creates it as an Oracle-managed file at /ora02/db/ehr/EHR/datafile/o1_mf_apollolo_6crxyqs2_.dbf
After the datafiles are restored, the "switch datafile all" command fails:
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of switch command at 10/18/2010 13:58:37
    ORA-19625: error identifying file /ora02/db/ehr/apollolob01.dbf
    ORA-27037: unable to obtain file status
    Linux-x86_64 Error: 2: No such file or directory
    Additional information: 3
So my question is: how do I get this database restored to non-ASM (and non-OMF) storage?

So I tried using a different SCN with "set until scn #####", and then the restore created 2 datafiles: the datafiles for apollolob and apollotab02.dbf. So I think I have narrowed the problem down to not using the correct SCN, which is why RMAN cannot successfully restore those datafiles and recreates them instead. How do I find the correct SCN to use for a successful restore of the entire database? I have seen different methods on the web, but can't figure it out. I've used "select archivelog_change#-1 from v$database;" and I also did "list backup of archivelog all" and used the latest sequence number. How can I find the correct SCN to use so the entire database will restore?
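A hedged sketch of one common way to pick the restore point (not a verified fix for this specific failure): in the listing below, every datafile in the full backup has checkpoint SCN 3230287009, so the UNTIL point must lie beyond that SCN and inside a range covered by the archived-log backups. RMAN can show which log backups cover that range:
-- list archived-log backups starting at the datafiles' checkpoint SCN
LIST BACKUP OF ARCHIVELOG FROM SCN 3230287009;
-- then pick an UNTIL SCN no higher than the last Next SCN shown
-- (for example, SET UNTIL SCN 3230695022; in the run block)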
    Here is the output of "list backup":
    List of Backup Sets
    ===================
    BS Key Size Device Type Elapsed Time Completion Time
    19724 41.12M DISK 00:00:10 14-OCT-10
    BP Key: 65840 Status: AVAILABLE Compressed: YES Tag: TAG20101014T210022
    Piece Name: /mnt/migrate/rman/EHR_dbid3632734257_set113195_piece1_copy1_20101014
    List of Archived Logs in backup set 19724
    Thrd Seq Low SCN Low Time Next SCN Next Time
    1 50439 3230234843 14-OCT-10 3230268282 14-OCT-10
    1 50440 3230268282 14-OCT-10 3230286806 14-OCT-10
    2 60280 3230234852 14-OCT-10 3230251419 14-OCT-10
    2 60281 3230251419 14-OCT-10 3230268263 14-OCT-10
    2 60282 3230268263 14-OCT-10 3230286809 14-OCT-10
    BS Key Type LV Size Device Type Elapsed Time Completion Time
    19725 Full 126.40G DISK 09:11:51 15-OCT-10
    List of Datafiles in backup set 19725
    File LV Type Ckp SCN Ckp Time Name
    1 Full 3230287009 14-OCT-10 +DATA/ehr/datafile/system.625.609259453
    2 Full 3230287009 14-OCT-10 +DATA/ehr/datafile/undotbs1.620.609259461
    3 Full 3230287009 14-OCT-10 +DATA/ehr/datafile/sysaux.768.609259463
    4 Full 3230287009 14-OCT-10 +DATA/ehr/datafile/undotbs2.632.609259467
    5 Full 3230287009 14-OCT-10 +DATA/ehr/datafile/users.257.609259471
    6 Full 3230287009 14-OCT-10 +DATA/ehr/datafile/apolloaud.316.619537285
    7 Full 3230287009 14-OCT-10 +DATA/ehr/datafile/apollohist.629.619538155
    8 Full 3230287009 14-OCT-10 +DATA/ehr/datafile/apolloidx.312.619538169
    9 Full 3230287009 14-OCT-10 +DATA/ehr/datafile/apollotab.276.619538487
    10 Full 3230287009 14-OCT-10 +DATA/ehr/datafile/apollotab.576.619539331
    11 Full 3230287009 14-OCT-10 +DATA/ehr/datafile/apollolob.570.619539593
    12 Full 3230287009 14-OCT-10 +DATA/ehr/datafile/apollofdb.750.645974339
    13 Full 3230287009 14-OCT-10 +DATA/ehr/datafile/apolloidx.429.651171265
    14 Full 3230287009 14-OCT-10 +DATA/ehr/datafile/apolloidx.705.688680793
    15 Full 3230287009 14-OCT-10 +DATA/ehr/datafile/apolloaud.747.699632315
    16 Full 3230287009 14-OCT-10 +DATA/ehr/datafile/apollotab.330.715622123
    Backup Set Copy #1 of backup set 19725
    Device Type Elapsed Time Completion Time Compressed Tag
    DISK 09:11:51 20-OCT-10 YES TAG20101014T210039
    List of Backup Pieces for backup set 19725 Copy #1
    BP Key Pc# Status Piece Name
    65851 1 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece1_copy1_20101014
    65862 2 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece2_copy1_20101014
    65873 3 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece3_copy1_20101014
    65884 4 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece4_copy1_20101014
    65895 5 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece5_copy1_20101014
    65901 6 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece6_copy1_20101014
    65902 7 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece7_copy1_20101014
    65903 8 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece8_copy1_20101014
    65904 9 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece9_copy1_20101014
    65841 10 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece10_copy1_20101014
    65842 11 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece11_copy1_20101014
    65843 12 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece12_copy1_20101014
    65844 13 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece13_copy1_20101014
    65845 14 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece14_copy1_20101014
    65846 15 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece15_copy1_20101014
    65847 16 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece16_copy1_20101014
    65848 17 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece17_copy1_20101014
    65849 18 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece18_copy1_20101014
    65850 19 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece19_copy1_20101014
    65852 20 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece20_copy1_20101014
    65853 21 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece21_copy1_20101014
    65854 22 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece22_copy1_20101015
    65855 23 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece23_copy1_20101015
    65856 24 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece24_copy1_20101015
    65857 25 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece25_copy1_20101015
    65858 26 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece26_copy1_20101015
    65859 27 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece27_copy1_20101015
    65860 28 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece28_copy1_20101015
    65861 29 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece29_copy1_20101015
    65863 30 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece30_copy1_20101015
    65864 31 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece31_copy1_20101015
    65865 32 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece32_copy1_20101015
    65866 33 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece33_copy1_20101015
    65867 34 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece34_copy1_20101015
    65868 35 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece35_copy1_20101015
    65869 36 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece36_copy1_20101015
    65870 37 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece37_copy1_20101015
    65871 38 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece38_copy1_20101015
    65872 39 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece39_copy1_20101015
    65874 40 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece40_copy1_20101015
    65875 41 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece41_copy1_20101015
    65876 42 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece42_copy1_20101015
    65877 43 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece43_copy1_20101015
    65878 44 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece44_copy1_20101015
    65879 45 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece45_copy1_20101015
    65880 46 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece46_copy1_20101015
    65881 47 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece47_copy1_20101015
    65882 48 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece48_copy1_20101015
    65883 49 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece49_copy1_20101015
    65885 50 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece50_copy1_20101015
    65886 51 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece51_copy1_20101015
    65887 52 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece52_copy1_20101015
    65888 53 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece53_copy1_20101015
    65889 54 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece54_copy1_20101015
    65890 55 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece55_copy1_20101015
    65891 56 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece56_copy1_20101015
    65892 57 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece57_copy1_20101015
    65893 58 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece58_copy1_20101015
    65894 59 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece59_copy1_20101015
    65896 60 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece60_copy1_20101015
    65897 61 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece61_copy1_20101015
    65898 62 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece62_copy1_20101015
    65899 63 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece63_copy1_20101015
    65900 64 AVAILABLE /mnt/migrate/rman/EHR_dbid3632734257_set113196_piece64_copy1_20101015
    BS Key Size Device Type Elapsed Time Completion Time
    19726 228.10M DISK 00:00:49 15-OCT-10
    BP Key: 65905 Status: AVAILABLE Compressed: YES Tag: TAG20101015T061242
    Piece Name: /mnt/migrate/rman/EHR_dbid3632734257_set113197_piece1_copy1_20101015
    List of Archived Logs in backup set 19726
    Thrd Seq Low SCN Low Time Next SCN Next Time
    1 50441 3230286806 14-OCT-10 3230331993 14-OCT-10
    1 50442 3230331993 14-OCT-10 3230401945 14-OCT-10
    1 50443 3230401945 14-OCT-10 3230469794 15-OCT-10
    1 50444 3230469794 15-OCT-10 3230555010 15-OCT-10
    1 50445 3230555010 15-OCT-10 3230618396 15-OCT-10
    1 50446 3230618396 15-OCT-10 3230695020 15-OCT-10
    2 60283 3230286809 14-OCT-10 3230304858 14-OCT-10
    2 60284 3230304858 14-OCT-10 3230330891 14-OCT-10
    2 60285 3230330891 14-OCT-10 3230354275 14-OCT-10
    2 60286 3230354275 14-OCT-10 3230366292 14-OCT-10
    2 60287 3230366292 14-OCT-10 3230399805 14-OCT-10
    2 60288 3230399805 14-OCT-10 3230423577 14-OCT-10
    2 60289 3230423577 14-OCT-10 3230446176 15-OCT-10
    2 60290 3230446176 15-OCT-10 3230469756 15-OCT-10
    2 60291 3230469756 15-OCT-10 3230496786 15-OCT-10
    2 60292 3230496786 15-OCT-10 3230524710 15-OCT-10
    2 60293 3230524710 15-OCT-10 3230554981 15-OCT-10
    2 60294 3230554981 15-OCT-10 3230583802 15-OCT-10
    2 60295 3230583802 15-OCT-10 3230610465 15-OCT-10
    2 60296 3230610465 15-OCT-10 3230617887 15-OCT-10
    2 60297 3230617887 15-OCT-10 3230673207 15-OCT-10
    2 60298 3230673207 15-OCT-10 3230695022 15-OCT-10

  • Big File to IDOC - performance issue

    Hi All,
I am trying to create a scenario where I have a file with approximately 10,000 rows. From each row I am creating one IDOC and want to send it to R/3. The interface looks fine - it is working - but it overloads the XI box for some time and you can't access it.
The full scenario looks like this:
File -> BPM (for 1:n) -> IDOC
I tried to reduce the workload by splitting the file into smaller files (500 rows per file), but then the file adapter picks up all the files and processes them in parallel. So this is the new scenario:
BigFile -> XI -> File -> BPM(1:n) -> IDOC
I tried to set the second file sender communication channel to EOIO, but it looks like this does not work - or the messages from the queue are processed too fast. When one message starts the BPM, another file message starts to be processed.
Do you have any ideas on how to make this more responsive and with less performance impact?
    thanks in advance.
    Dawid

    Hi ;
Since mappings are processed by the J2EE Engine, the maximum available Java heap may be a limiting factor for the maximum document size the XI mapping service is able to process. Tests have shown that processing of XSLT mappings consumes up to 20 times the source document size (using identity mapping). The maximum available Java heap for 32-bit JVMs is platform-dependent. Using 64-bit JVM platforms is an option here.
Current maximum heap sizes (32-bit):
OS        Maximum heap (GB)
Linux     2
Windows   1.2 - 1.4
    The Java heap is limited by the heap limit of the process (may be limited by address space because operating system code or libraries may also be loaded within the same address space). Also, Java internal memory areas such as the permanent space for loading Java classes must fit into the same address space.
Java VM tuning is one of the most crucial tuning steps, especially for more complex scenarios. For information about setting baseline JVM parameters, see SAP Note 723909. You must also take platform-specific parameters into account (for example, JIT compiler settings). The impact of Garbage Collection (GC) behavior especially may become a critical issue. Overall GC times for the J2EE application should be well below 5%. For more information about GC behavior and settings, see also SAP Note 552522.
Specific to XI is the fact that you sometimes need to process large documents for mapping or when using signatures. This can lead to excessive memory usage on the Java side. Therefore, you must observe Garbage Collection and the available Java heap in order to evaluate performance and prevent OutOfMemory exceptions. Since XI mapping is processed by stateless session beans that are called using a JCo interface, this may lead to a reduction of parallel JCo server threads within the JCo RFC Provider service of a J2EE server node (you can compensate for this by adding J2EE server nodes).
    Mudit

Write Performance Issues

    Hi
    Oracle : 11.2.0.2.0 EE.
    Linux : Rhel 5.6.
    Dell R720 :
    EMC VNX SAN Storage :
    Here is the scenario:
The Dell server is connected to the SAN. We have 2 instances on this same server. When we create a tablespace on one instance, it takes twice as long as creating the tablespace on the 2nd instance. The tablespaces are created on the SAN.
At this point we have eliminated any hardware/SAN issues, since both databases are on the same server and hardware, connecting to the same LUN on the SAN.
    Any help would be appreciated.
    Below is the relative information of the creation scripts and init.ora parameters.
Tablespace creation script for orcl1 (notice the 8K block size corresponding to the db_block_size on the db):
    CREATE BIGFILE TABLESPACE TEST_IO BLOCKSIZE 8k LOGGING
    DATAFILE '/uP01/oracle/oradata/orcl1/TEST_IO_01.dbf'
    SIZE 256G AUTOEXTEND OFF
    EXTENT MANAGEMENT LOCAL UNIFORM SIZE 256K
    SEGMENT SPACE MANAGEMENT AUTO
    PERMANENT ONLINE;
    14 minutes to create
Tablespace creation script for orcl2 (notice the 32K block size corresponding to the db_block_size on the db):
    CREATE BIGFILE TABLESPACE TEST_IO BLOCKSIZE 32k LOGGING
    DATAFILE '/uP01/oracle/oradata/orcl2/TEST_IO_01.dbf'
    SIZE 256G AUTOEXTEND OFF
    EXTENT MANAGEMENT LOCAL UNIFORM SIZE 256K
    SEGMENT SPACE MANAGEMENT AUTO
    PERMANENT ONLINE;
30 minutes to create
    Init.ora on orcl1 :
    orcl1.__db_cache_size=494927872
    orcl1.__java_pool_size=4194304
    orcl1.__large_pool_size=4194304
    orcl1.__oracle_base='/opt/app/oracle/ora11g'#ORACLE_BASE set from environment
    orcl1.__pga_aggregate_target=536870912
    orcl1.__sga_target=1073741824
    orcl1.__shared_io_pool_size=0
    orcl1.__shared_pool_size=515899392
    orcl1.__streams_pool_size=25165824
    *._optimizer_extend_jppd_view_types=FALSE
    *._optimizer_group_by_placement=FALSE
    *._replace_virtual_columns=FALSE
    *.audit_file_dest='/opt/app/oracle/ora11g/admin/orcl1/adump'
    *.audit_trail='none'
    *.compatible='11.1.0.0.0'
    *.db_block_size=8192
    *.db_domain=''
    *.db_name='orcl1'
    *.diagnostic_dest='/opt/app/oracle/ora11g'
    *.local_listener=''
    *.open_cursors=1000
    *.pga_aggregate_target=536870912
    *.processes=500
    *.recyclebin='OFF'
    *.remote_login_passwordfile='EXCLUSIVE'
    *.sec_case_sensitive_logon=FALSE
    *.sessions=555
    *.sga_target=1073741824
    *.undo_tablespace='UNDOTBS1'
    orcl2
    *.db_block_size=32768
    *.db_cache_size=23068672000
    *.db_domain=''
    *.db_file_multiblock_read_count=32
    *.db_files=10000
    *.db_flashback_retention_target=0
    *.db_keep_cache_size=134217728
    *.db_name='orcl2'
    *.db_unique_name='ORCL2'
    *.diagnostic_dest='/opt/app/oracle/ora11g'
    *.fast_start_mttr_target=1200
    *.filesystemio_options='SETALL'
    *.java_pool_size=268435456
    *.job_queue_processes=4
    *.large_pool_size=134217728
    *.log_buffer=134217728
    *.log_checkpoint_timeout=0
    *.log_checkpoints_to_alert=TRUE
    *.open_cursors=1000
    *.pga_aggregate_target=31457280000
    *.processes=300
    *.query_rewrite_integrity='STALE_TOLERATED'
    *.recyclebin='OFF'
    *.remote_login_passwordfile='EXCLUSIVE'
    *.resource_limit=TRUE
    *.resumable_timeout=7200
    *.sessions=335
    *.shared_pool_reserved_size=134217728
    *.shared_pool_size=2147483648
    *.star_transformation_enabled='TRUE'
    *.trace_enabled=FALSE
    *.undo_retention=36000
    *.undo_tablespace='UNDOTBS1'

    >
The Dell server is connected to the SAN. We have 2 instances on this same server. When we create a tablespace on one instance, it takes twice as long as creating the tablespace on the 2nd instance. The tablespaces are created on the SAN.
At this point we have eliminated any hardware/SAN issues, since both databases are on the same server and hardware, connecting to the same LUN on the SAN.
    >
If those are really the config parameters for your instances then you may have some serious configuration issues.
    Instance 1
1. You have shown that SGA_TARGET is set (implying Automatic Shared Memory Management), but then you show that the individual components have also been set: SHARED_POOL_SIZE, LARGE_POOL_SIZE, etc.
Which are you trying to use: manual or automatic shared memory management?
2. SGA_TARGET is set to only 1 GB. Why so small?
    Instance 2
1. You have NOT set DB_nK_CACHE_SIZE but say you want to use a 32 KB block size. This parameter MUST BE SET.
2. You have set DB_CACHE_SIZE to 23 GB. Why? The block size you want to use is 32 KB. Only the SYSTEM tablespace will use the standard 8K blocks, so why the enormous cache? You only used a cache of 0.5 GB and a total memory target of 1 GB for instance 1.
3. SGA_TARGET is NOT set. Why not? Why are you not using the same memory management as what you tried to do in instance 1?
    What is going on with these two instances that their config is so radically different?
Daniel suggested a possible issue with the 32 KB block size. My hypothesis is that your configuration for both instances is faulty, and that for instance 2 in particular your failure to provide any setting for DB_nK_CACHE_SIZE is a likely suspect for causing the issue.
    Review the DBA Guide for how to configure memory and the requirements for using non-standard block sizes.
    I would expect that once you have your instances configured properly you won't have the problem you reported.
    http://docs.oracle.com/cd/E11882_01/server.112/e25494/memory004.htm
    About Automatic Shared Memory Management
    >
    Automatic Shared Memory Management simplifies SGA memory management. You specify the total amount of SGA memory available to an instance using the SGA_TARGET initialization parameter and Oracle Database automatically distributes this memory among the various SGA components to ensure the most effective memory utilization.
    >
    See the example for automatic management
    >
    You can take advantage of automatic shared memory management by setting Total SGA Size to 992M in Oracle Enterprise Manager, or by issuing the following statements:
    ALTER SYSTEM SET SGA_TARGET = 992M;
    ALTER SYSTEM SET SHARED_POOL_SIZE = 0;
    ALTER SYSTEM SET LARGE_POOL_SIZE = 0;
    ALTER SYSTEM SET JAVA_POOL_SIZE = 0;
    ALTER SYSTEM SET DB_CACHE_SIZE = 0;
    ALTER SYSTEM SET STREAMS_POOL_SIZE = 0;
    where 992M = 1200M minus 208M.
    >
    For non-standard block sizes see the section 'Setting the Buffer Cache Initialization Parameters'
    >
    Oracle Database supports multiple block sizes in a database. If you create tablespaces with nonstandard block sizes, you must configure nonstandard block size buffers to accommodate these tablespaces. The standard block size is used for the SYSTEM tablespace. You specify the standard block size by setting the initialization parameter DB_BLOCK_SIZE. Legitimate values are from 2K to 32K.
    If you intend to use multiple block sizes in your database, you must have the DB_CACHE_SIZE and at least one DB_nK_CACHE_SIZE parameter set. Oracle Database assigns an appropriate default value to the DB_CACHE_SIZE parameter, but the DB_nK_CACHE_SIZE parameters default to 0, and no additional block size caches are configured.
    The sizes and numbers of nonstandard block size buffers are specified by the following parameters:
    DB_2K_CACHE_SIZE
    DB_4K_CACHE_SIZE
    DB_8K_CACHE_SIZE
    DB_16K_CACHE_SIZE
    DB_32K_CACHE_SIZE
    Each parameter specifies the size of the cache for the corresponding block size.
    >
    Don't forget - when using non-standard block sizes you MUST set both the standard cache parameter (8k) and the non-standard parameter (32k for your use case).
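A minimal sketch of that pattern, assuming an 8K standard block size (the cache sizes here are illustrative assumptions, not recommendations):
-- standard (8K) buffer cache
ALTER SYSTEM SET DB_CACHE_SIZE = 1G SCOPE=SPFILE;
-- separate buffer cache, required before any 32K-block tablespace can be used
ALTER SYSTEM SET DB_32K_CACHE_SIZE = 512M SCOPE=SPFILE;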

  • Bigfile Temporary Tablespace?

    Hi,
    --------------------------Oracle 11G Release 11.1.0.6.0, Windows XP 32------------------------------------
I've tried to find out, but without success: can I create a BIGFILE TEMPORARY TABLESPACE the same way I've created a normal BIGFILE TABLESPACE?
Secondly, if it is not possible, will creating multiple temp files affect performance when doing the sorting?
    ORA-01652: unable to extend temp segment by 128 in tablespace TEMP
    select inst_id, tablespace_name, total_blocks, used_blocks, free_blocks
          from gv$sort_segment;
       INST_ID TABLESPACE_NAME                 TOTAL_BLOCKS USED_BLOCKS FREE_BLOCKS
             1 TEMP                                12582144           0    12582144
I've created 3 temp files of 30G each and get the above error, i.e. unable to extend. Adding a temp file would solve this issue, but I was thinking of creating one bigfile temp file instead of adding more files, since multiple files might hurt performance.
    Thanks
    Regards

Bigfile tablespaces are supported only for locally managed tablespaces with automatic segment space management, with three exceptions: locally managed undo tablespaces, temporary tablespaces, and the SYSTEM tablespace.
    There is no performance overhead if you have multiple datafiles in a tablespace. That is structure of Oracle: Tablespace may consist of multiple datafiles.
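So a bigfile temporary tablespace is allowed. A minimal sketch (the tempfile path and size are illustrative assumptions):
CREATE BIGFILE TEMPORARY TABLESPACE temp_big
  TEMPFILE '/u01/oradata/orcl/temp_big01.dbf' SIZE 90G;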

  • Bigfile or smallfile tablespace?

    Hi community,
I'd like to set up an Oracle database (version 10.2.0.3 64-bit) on Windows Server 2003 (64-bit).
    Server Hardware:
    - Dual Core AMD Opteron
- Processor 8224 SE, 3.21 GHz
    - 16 GB RAM
    Storage:
    - HP StorageWorks MSA 1000
    - 3 x 250 GB (RAID 1)
    - At the moment the 250 GB drives are combined to one large 750 GB Logical Drive (RAID 0)
The data content for the new database comes from a DB2 system (it will be copied to the Oracle DB), and its size is about 350 GB at the moment, with an upward trend.
    Data:
150 tables with fewer than 100,000 rows.
40 tables with more than 100,000 rows but fewer than 1,000,000 rows.
15 tables with more than 1,000,000 rows (the biggest table has 125,000,000 rows).
So my questions regarding how to size a tablespace for the new Oracle instance:
    1. Should I create a bigfile tablespace with one large datafile (350 - 400 GB) or should I create a smallfile tablespace with several smaller datafiles?
2. At the moment the db_block_size is 8 KB. This means I can create a datafile for a smallfile tablespace with a maximum size of 32 GB. So for 350 GB I need at least 11 datafiles. I think this is not a good solution. So maybe I should change the db_block_size to 16 KB so I can create datafiles of up to 64 GB? What effects would this change of block size bring? (See the arithmetic sketch after this list.)
3. Is it better, for performance, to use the 750 GB RAID10 array, or should I use three 250 GB RAID1 arrays and allocate different datafiles on them?
4. To begin with, should I size the tablespace as big as the data really is (350 GB), or should I size it bigger (400 GB)?
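Regarding question 2, a quick check of the smallfile limit (a smallfile datafile can hold at most 2^22, i.e. 4M, blocks; plain Oracle SQL, runnable as written):
-- maximum smallfile datafile size, in GB, for 8K and 16K block sizes
SELECT 8192  * POWER(2, 22) / POWER(1024, 3) AS max_gb_8k,
       16384 * POWER(2, 22) / POWER(1024, 3) AS max_gb_16k
FROM dual;
-- returns 32 and 64, matching the figures quoted above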
I would really appreciate it if somebody could help me with this issue.
    Thanks in advance,
    Tobias Schmidt

    Use standard tablespaces. You do not need the larger files with their limitations. Read more at http://tahiti.oracle.com. If performance is an issue then you should get management to invest in decent storage. Three internal drives with mirroring is totally inadequate for any serious work. Look at Pillar Data Systems or NetApp for NAS and/or SAN. Here's why:
    You have a 350GB database with RAID 0 so you just used 100% of your available disk. You've no room left. No room for the operating system and no room for the Oracle binaries if one assumes a swap space. Then where are you going to put SYSTEM, SYSAUX, TEMP, and UNDO tablespaces? And they too need to be mirrored. How about your archived redo logs? backups? flashback logs?
I wouldn't consider putting a 350GB database on anything less than 6X that much space; more if, as you indicate, it is growing. And tell your storage admin that one LUN is not the right answer to just about any question.
    Also I am puzzled why, with what appears to be a serious database, you are using Windows rather than Linux.

  • New DVR Issues (First Run, Channel Switching, etc.)

    I've spent the last 30 minutes trying to find answers through the search with no luck, so sorry if I missed something.
I recently switched to Fios from RCN cable in New York.  I've gone through trying to set up my DVR, am running into issues, and was hoping for some answers.
1.  I set up two programs to record at 8PM; I was watching another channel at the time and only half paying attention.  Around 8:02 I noticed a message had popped up asking if I would like to switch channels to start recording.  I was expecting it to force the switch like my old DVR, but in this case it didn't switch and I missed the first two minutes of one of the shows.  I typically leave my DVR on all day and just turn off the TV; this dual-show handling will cause issues with that if I forget to turn off the DVR.  Is there a setting I can change that will force the DVR to choose one of the recording channels?
2.  I set up all my recordings for "First Run" because I only want to see the new episodes.  One show I set up was The Daily Show on Comedy Central, which is shown weeknights at 11pm and repeated 3-4 times throughout the day.  My scheduled recordings list is showing all of these as planned recordings even though only the 11pm showing is really "new".  Most of the shows I've set up are once a week so they aren't a problem, but this seems like it will quickly fill my DVR.  Any fixes?
    Thanks for the help.
    Solved!
    Go to Solution.

    I came from RCN about a year ago.  Fios is different in several ways, not all of them desirable.  Here are several ways to get--and fix--unwanted recordings from a series recording setup.
    Some general principles. 
    Saving changes.  When you originally create a series with options, or if you go back to edit the options for an existing series, You MUST save the Series Options changes.  Pretty much everywhere else in the user interface, when you change an option, the change takes effect immediately--but not in Series Options.  Look at the Series Options window.  Look at the far right side.  There is a vertical "Save" bar, which you must navigate to and click OK on to actually save your changes.  Exiting the Series Options window without having first saved your changes loses all your attempted changes--immediately.
    Default Series Options.  This is accessed  from [Menu]--DVR--Settings--Default Series Options.  This will bring up the series options that will automatically be applied to the creation of a NEW series. The options for every previously created series will not be affected by a subsequent modification of the Default Series Options.  You should set these options to the way you would like them to be for the majority of series recordings that you are likely to create.  Be sure to SAVE your changes.  This is what you will get when you select "Create Series Recording" from the Guide.  When creating a new series recording where you think that you may want options different from the default, select "Create Series with Options" instead.  Series Options can always be changed for any individual series set up later--but not for all series at once.
    Non-series recordings.  With Fios you have no directly available options for these.  With RCN and most other DVRs, you can change the start and end times for individual episodes, including individual episodes that are also in a series.  With Fios, your workarounds are to create a series with options for a single program, then delete the series later;  change the series options if the program is already in a series, then undo the changes you made to the series options later; or schedule recordings of the preceding and/or following shows as needed.
    And now, to the unwanted repeats. 
First, make sure your series options for the specific series in question--and not just the series default options--include "First Run Only".  If not, fix that and SAVE.  Then check your results by viewing the current options using the Series Manager app under the DVR menu.
Second, and most annoying, the Guide can have repeat programs on your channel tagged as "New".  It happens.  Set the series option "Air Time" to "Selected Time".  To make this work correctly, you must have set up the original series recording after selecting the program in the Guide at the exact time of a first-run showing (11pm, in your case), and not on a repeat entry in the Guide.  Then, even if The Daily Show is tagged as New for repeat showings, these will be ignored.
Third, another channel may air reruns of the program in your series recording, and the first showing of a rerun episode on the other channel may be tagged as "New".  These can be ignored in your series if you set the series option "Channel" to "Selected Channel".  Related to this, if there are both an SD and an HD channel broadcasting your series program, you will record them both if the series option "Duplicates" is set to "Yes".  However, when the Channel option is set to "Selected Channel", the Duplicates option is always effectively "No", regardless of what shows up on the options screen.
As for your missing two minutes, I have several instances in which two programs start recording at the same time.  To the best of my recollection, whenever the warning message has appeared, ignoring it has not caused a loss of recording time.  You might have an older software version.  The newest is v.1.8; look at Menu--Settings--System Info.  Or I might not have noticed the loss of minutes.  I regularly see up to a minute of previous programming at the start of a recording, or a few missing seconds at the beginning or end of a recording.  There are a lot of possibilities for that, but the DVR clock being incorrect is not one of them.  With RCN, the DVR clocks occasionally drifted off by as much as a minute and a half.

  • Pension issue Mid Month Leaving

    Dear All,
As per the rule, for mid-month joining/leaving/absence or transfer scenarios, the system should correspondingly prorate the Pension/PF Basis. But our system is not doing this. In the RT table I have found: 3FC Pension Basis for Er c 01/2010 0.00 6,500.00.
The employee's leaving date is 14.04.2010, and the system is picking the pension amount as 541. Last year it was coming out right.
    Please suggest.
    Ashwani

    Dear Jayanti,
We require prorated pension for employees who have left, and the system is not doing this. That is the issue. As per our PF experts, the pension amount should be prorated for employees who leave mid-month. The system prorated correctly last year, but this year it is deducting 541. I am giving two RT cases from different years.
    RT table for year 2010. DOL 26.04.2010
    /111 EPF Basis              01/2010                    0.00           8,750.00 
    /139 VPF Basis              01/2010                    0.00           8,750.00 
    /3F1 Ee PF contribution     01/2010                    0.00           1,050.00 
    /3F3 Er PF contribution     01/2010                    0.00             509.00 
    /3F5 Ee Mon PF contribution 01/2010                    0.00           1,050.00 
    /3F6 Ee Ann PF contribution 01/2010                    0.00          12,600.00 
    /3F9 PF adm chrgs * 1,00,00 01/2010                    0.00              96.25 
    /3FA PF basis for Ee contri 01/2010                    0.00           8,750.00 
    /3FB PF Basis for Er Contri 01/2010                    0.00           8,750.00 
    /3FJ VPF basis for Ee contr 01/2010                    0.00           8,750.00 
    /3FL PF Basis for Er Contri 01/2010                    0.00           6,500.00 
    /3F4 Er Pension contributio 01/2010                    0.00             541.00
    /3FC Pension Basis for Er c 01/2010                    0.00           6,500.00
    /3FB PF Basis for Er Contri 01/2010                    0.00           8,750.00
    /3FC Pension Basis for Er c 01/2010                    0.00           6,500.00
    /3FJ VPF basis for Ee contr 01/2010                    0.00           8,750.00
    /3FL PF Basis for Er Contri 01/2010                    0.00           6,500.00
    /3R3 Metro HRA Basis Amount 01/2010                    0.00           8,750.00
    1BAS Basic Salary           01/2010                    0.00           8,750.00
    RT table for year 2009. DOL 27.10.2009
/111 EPF Basis              07/2009                    0.00           9,016.13
    /139 VPF Basis              07/2009                    0.00           9,016.13
    /3F1 Ee PF contribution     07/2009                    0.00           1,082.00
    /3F3 Er PF contribution     07/2009                    0.00             628.00
    /3F5 Ee Mon PF contribution 07/2009                    0.00           1,082.00
    /3F6 Ee Ann PF contribution 07/2009                    0.00           8,822.00
    /3F9 PF adm chrgs * 1,00,00 07/2009                    0.00              99.18
    /3FA PF basis for Ee contri 07/2009                    0.00           9,016.00
    /3FB PF Basis for Er Contri 07/2009                    0.00           9,016.00
    /3FJ VPF basis for Ee contr 07/2009                    0.00           9,016.00
    /3FL PF Basis for Er Contri 07/2009                    0.00           5,452.00
    /3FB PF Basis for Er Contri 07/2009                    0.00           9,016.00 
    /3FC Pension Basis for Er c 07/2009                    0.00           5,452.00 
    /3FJ VPF basis for Ee contr 07/2009                    0.00           9,016.00 
    /3FL PF Basis for Er Contri 07/2009                    0.00           5,452.00 
    /3R4 Non-metro HRA Basis Am 07/2009                    0.00           9,016.13 
    1BAS Basic Salary           07/2009                    0.00           9,016.13 
Now please suggest what to do. Where is the problem? I have also checked EXIT_HINCALC0_002, but nothing is written in it.
    With Regards
    Ashwani

  • Open PO Analysis - BW report issue

    Hello Friends
I constructed a query in BW in order to show open purchase orders. We have a custom DSO populated from the standard datasource 2LIS_02_ITM (Purchase Order Item). In this DSO we mapped the field ELIKZ to the infoobject 0COMP_DEL (Delivery completed).
We loaded the data from the ECC system for all POs and found the following issue for Stock Transport purchase orders (DocType = UB).
We have a PO with 4 line items. For line items 10 and 20, goods were issued and received, and both the "Delivery complete" and "Final delivery" flags are checked. For line items 30 and 40, only a delivery indicator note was issued, for zero quantity, and the "Delivery complete" flag is checked ("Final delivery" is not checked) in the ECC system. For this PO, the delivery completion indicator is not properly updated in the DSO for line items 30 and 40. The data looks like the following:
    DOC_NUM     DOC_ITEM       DOCTYPE     COMP_DEL
    650000001       10     UB        X
    650000001       20     UB        X
    650000001       30     UB
    650000001       40     UB      
When we run the Open PO analysis report on the BW side, this PO appears in the report even though it is closed in the ECC system.
    Any help is appreciated in this regard.
    Thanks and Regards
    sampath

    Hi Priya and Reddy
       Thanks for your response.
Yes, the indicator is checked in the EKPO table for items 30 and 40, the delta has been running regularly for more than 1 year, and there are no issues with other POs. This is happening only for a few POs of type Stock Transport (UB).
I already checked the changes in ME23N; the Delivery completed indicator was changed, and that is reflected in the EKPO table. Further, I checked the PSA records for this PO and I am getting the records with the Delivery completed flag, but when I update from PSA to DSO the delivery completed indicator is not updated properly.
In the PSA, for item 30, I have the following entries. Record number 42 captures the value X for ELIKZ, but after that I get two more records, 43 and 44, with process key 10 and without X for ELIKZ. I think this is causing the problem.
    Record No.    Doc.No.                    Item              Processkey         Rocancel     Elikz
        41               6500000001            30                    11                            X           ---    
        42               6500000001            30                    11                            ---           X
        43               6500000001            30                    10                            X           ---
        44               6500000001            30                    10                            ---         ---
    (Here --- means blank)        
    Thanks and Regards
    sampath

  • HP LaserJet Enterprise 600 M602 driver issue

    Hello,
I've got an issue with 600-series printers. We use the latest UPD driver, ver. 61.175.1.18849, and print from XenApp 6.5. The error occurs every time users try to print jpg files from a XenApp session. It only happens with 600-series printers and the UPD.
I've also tried assigning the native 600-series driver, ver. 6.3.9600.16384, and it works well. But with that driver the system reports it as a color printer, which breaks our printing reports. These reports are very important to us. So we can't use the printer with that driver either.
    Printer installed on Windows Server 2012 R2. All clients are Windows 7 x64. XenApp Servers are Server 2008R2.
    Is it possible to get fixed UPD driver or correct native driver for Server 2012 R2?
    Regards,
    Anatoly

    I am sorry, but to get your issue more exposure I would suggest posting it in the commercial forums since this is a commercial printer. You can do this at Printers - LaserJet.
    Click on New Post.
    I hope this helps.
    Gemini02
    I work on behalf of HP

  • Windows 7 displays error message when exiting +cursor issue

Two issues here. CS5 Photoshop on Windows 7 64-bit.
    Physical processor count: 8
    Processor speed: 3073 MHz
    Built-in memory: 12279 MB
    Free memory: 9577 MB
    Memory available to Photoshop: 10934 MB
    Memory used by Photoshop: 80 %
    Image tile size: 128K
The first issue: since the latest automatic Adobe update (why fix what isn't broken?), every time I exit Photoshop I get the message "Adobe QT Server has stopped working", and occasionally it happens when I exit Bridge. InDesign is also behaving badly: I can no longer open a previous document from the file manager without ID crashing out.
The other issue is that the cursors in Clone and Erase lose their edge (become invisible) for no reason - well, not quite. Noise Ninja crashed Photoshop when I tried to use it; I reinstalled it and all is well. The cursor issue seems to be intermittent but came back (for no reason) after I reinstalled NN. I can't seem to change the cursor, no matter what I do. The problem is now seriously affecting how I work - almost enough to go back to Win XP, which ran CS5 Photoshop flawlessly.
    Any help will be gratefully accepted.
    Doug

    doug87510 wrote:
    The recent problem is the entire outline of the cursor (including the crosshair in the middle) was missing at any size of cursor. All I had was exactly what I'd get if I used a real spraygun.
    Well, that issue is simply a matter of hitting the Caps Lock key.  When Caps Lock is on, you'll see the cursor outline, and when it is off you'll see a crosshair.  That's a feature, not a bug.
    Glad to hear the 11.1 drivers are out.  I will download them and try them now myself.
    Regarding "Adobe QT" crashing...  QT brings to mind QuickTime, though that is Apple, not Adobe.  Do you have Apple QuickTime installed?
    Regarding memory usage, with 12 GB of installed RAM, you should be able to set Photoshop to use 90% or more in Edit - Preferences - Performance.
    -Noel

  • Issue in Creation of Periodicals for Contracts in CRM7.0

    Hello,
I have a requirement to create Contracts in a CRM 7.0 system.
And I am doing this using the BAPI BAPI_BUSPROCESSND_CREATEMULTI.
The good part is that the Contract Order gets created, but only with header details.
The issues I am facing:
1. I need to know what kind/type of data must be passed to the interface parameters; the F1 help/documentation is vague.
2. I am passing data in the INPUT_FIELDS structure with the Object ID, Handle Number, Reference GUID and Fieldname.
    Here, what does the 'Logical Key' field indicate? What should be passed here?
    What does the 'Reference Kind' field indicate? I have been passing 'A' for everything (to be frank, I don't know what its significance is).
3. With all that, my order gets created but with less than half the details, i.e. the objects not getting created are Partner, Product, terms/appointments, Status, LongTexts......
    Any help/inputs would be appreciated.
    Hope my problem is stated clearly ...
    --Regards
    Dedeepya

    Hi Anu,
I found my solution by debugging with existing data while creating it in CRMD_ORDER.
Ensure that you are passing a correct entry in the INPUT_FIELDS structure.
As I haven't worked on rebates I wouldn't be able to help you there; I suggest you debug to arrive at a solution.
You can preset your breakpoints at:
1. FM CRM_ORDER_MAINTAIN
2. CRM_ORDER_MAINTAIN_MULTI_OW -- debug through the complete FM.
3. CRM_ORDER_PREPARE_MULTI_OW -- the data is set in this function module.
    Regards
    Dedeepya C

  • Issue in creation of plant related data at receiving server using BD10

    Hi all,
This is regarding material master creation using BD10. I am using the MATMAS05 message type for sending data from one system to another. Data is sent and received successfully. When I go into MM03 I can see all the views created successfully except the views related to PLANT. Please guide me to resolve the issue.
When I looked into the log, I found:
    1)"The field MBEW-BKLAS is defined as a required field; it does not contain an entry".
    2)"No material master data exists for material AB_08.04.09(30) in plant 4001".
My segment structure is as follows:
    ZMATMAS05                      matmas05
           E1MARAM                        Master material general data (MARA)
        Z1KLART                        KLART (my extension)
               E1MARA1                        Additional Fields for E1MARAM
               E1MAKTM                        Master material short texts (MAKT)
               E1MARCM                        Master material C segment (MARC)
        Z1AUSPM                        E1AUSPM Distribution of Classification (my extension)
                   E1MARC1                        Additional Fields for E1MARCM
                   E1MARDM                        Master material warehouse/batch segment (MARD)
                   E1MFHMM                        Master material production resource/tool (MFHM)
                   E1MPGDM                        Master material product group
                   E1MPOPM                        Master material forecast parameter
                   E1MPRWM                        Master material forecast value
                   E1MVEGM                        Master material total consumption
                   E1MVEUM                        Master material unplanned consumption
                   E1MKALM                        Master material production version
               E1MARMM                        Master material units of measure (MARM)
               E1MBEWM                        Master material material valuation (MBEW)
               E1MLGNM                        Master material material data per warehouse number (MLGN)
               E1MVKEM                        Master material sales data (MVKE)
               E1MLANM                        Master material tax classification (MLAN)
               E1MTXHM                        Master material long text header
               E1CUCFG                        CU: Configuration data
           E1UPSLINK                      Reference from Object to Superior UPS
    Thanks.

CREATE CONTROLFILE SET DATABASE "NEWDB" NORESETLOGS ARCHIVELOG
Also, when you are setting up a new database, the option should be RESETLOGS and not NORESETLOGS.
'D:\APP\ADMINISTRATOR\ORADATA\NEWDB\ONLINELOG\O1_MF_2_7FK0XKB8_.LOG
D:\APP\ADMINISTRATOR\ORADATA\NEWDB\DATAFILE\O1_MF_SYSTEM_7FK0SKN0_.DBF
Why the underscore (_) at the end of the datafile name? Any specific reason?

  • Issue in Creation of new Value Field in CO-PA

    Hi,
    I have a query in CO-PA Value Field Linking.
    In my Development Client,
    1. Created a New Value Field (No Transport Request Generated)
2. Linked the above to a new Condition type created in SD (a transport request was generated), i.e. in Flow of Actual Values -> Transfer of Billing Documents -> Assign Value Fields.
However, when I try creating a new Value Field in my Production client, it throws the message "You have no authorization to change fields".
Is this an issue with authorization, or do I need to transport the Value Field from the Development client to the Production client?
    Please Advise.
    Thanks in Advance,
    Safi

    Thanks Phaneendra for the response.
The creation of the Value Field did not create any transport request. Will this too be transported if I transport the Operating Concern?
Please advise.
    Thanks,
    Safi
