Proper use of one table in different business areas

Hi
We will be using certain DB tables in several business areas, and more than one person will be defining them.
If a table is loaded from the database separately for each business area, it gets numbered 'table 1, 2...', which doesn't seem very desirable. Also, I'd rather define hierarchies once and for all.
If a table is loaded only once and used for all the business areas, all the joins needed by the different business areas are shown together, which is confusing. If one of us considers a certain join unnecessary, he might accidentally delete one that belongs to a different business area.
Regularly exporting the business areas, including the joins, is probably not much help either if new joins for other business areas are added afterwards.(?)
Am I misunderstanding the concept of Discoverer?
How do you handle such a situation?
Advice appreciated
Franziska

Hi Michael, et al,
I'm just planning my EUL and your comments in the last post seem relevant.
My general plan is as follows:
(1) Create a BaseMaster BA, which is used to bring the tables/views from the database into Disco. The folders in here will be created with "New Folder from Database" and will be fairly straightforward, with only very simple calculations and no aggregation in calculations. The joins will echo the joins in the underlying database. This BA is not shared with users.
(2) Create a CustomMaster BA, which contains Custom Folders consisting of various SQL views of the database. The SQL in these may have some more complex calculations, and these calculations may include aggregation. This BA is not shared with users either.
(3) Create a CentralMaster BA, which contains Complex Folders assembled from items in the BaseMaster BA and the CustomMaster BA. These folders may include more complex calculations, and the calculations may include aggregated items.
(4) Create a number of User BAs. Using Manage Folders, share the relevant folders from the CentralMaster.
I've got a few questions relating to this.
(a) Custom Folders based on Folders
It would seem nice to me if it were possible to create Custom Folders based on other folders, rather than on database views. Then, even if you need both a view and a complex transformation of that view in your business area, you still have only one place where the EUL brings that view into Disco. My understanding is that this is not possible. Am I right? I suppose there is an argument that complex transformations should be pushed back to the DBA, but it seems reasonable to me that this sort of thing could sometimes be within the remit of the Disco admin.
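If folder-on-folder really isn't supported, I assume the workaround is to let the transformation live once in the database as a view, which the BaseMaster then picks up like any other object. A minimal sketch of what I mean, with hypothetical table and view names:

CREATE VIEW order_totals_v AS
SELECT customer_id,
       COUNT(*)    AS order_count,
       SUM(amount) AS total_amount
FROM   orders
GROUP  BY customer_id;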
(b) SQL Efficiency
Am I right in thinking that a complex folder is generally not much less efficient than a base folder, because the database's optimizer ends up optimising the generated SQL anyway?
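To make sure I'm picturing this correctly: I assume a complex folder generates something like an inline view around the base folder's SQL (hypothetical names below), which Oracle can then merge, so the plan is much the same as querying the table directly.

SELECT base.customer_id, SUM(base.amount) AS total_amount
FROM   (SELECT customer_id, amount FROM orders) base   -- the base folder's SQL
GROUP  BY base.customer_id;
-- after view merging, planned much like:
-- SELECT customer_id, SUM(amount) FROM orders GROUP BY customer_id;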
And a last question which has turned out rather complex - by all means ignore it if you like - it's me getting to grips with the process.
(c) Nested Complex Folders
I presume that if a transformation is needed which, in a normal database environment, would need a main query and a subquery, then (adhering to the above methodology) this could be done in a number of ways; a concrete sketch follows the list:
(i) Get the DBA to write a view which does both the main query and the subquery, bring it into the BaseMaster, and then straight into a CentralMaster folder.
(ii) Get the DBA to write the subquery, bring this into the BaseMaster, and then implement the main query in the CentralMaster.
(iii) Create a folder in the CustomMaster which implements both the main query and the subquery, and then bring this into a CentralMaster folder.
(iv) Create a folder in the CustomMaster which implements just the subquery, and then implement the main query in the CentralMaster.
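By "main query and subquery" I mean the sort of thing below (hypothetical names): the inner aggregate is the subquery that could live in the BaseMaster or CustomMaster, and the outer join is the main query that could be built in the CentralMaster.

SELECT o.customer_id, o.order_date, o.amount
FROM   orders o
JOIN  (SELECT customer_id, MAX(order_date) AS last_date
       FROM   orders
       GROUP  BY customer_id) latest
  ON   o.customer_id = latest.customer_id
 AND   o.order_date  = latest.last_date;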
If you've got this far - thank you for bearing with me.
Perhaps there's something written about this sort of thing or maybe it's just a matter of practice!
Any thoughts on all this would be much appreciated.
Kind regards
Suhada

Similar Messages

  • Is it possible to display one table on different sheets?

    I'd like to use one table on different sheets.
    On these sheets are also other tables, but one should be the same on each sheet.
    Is this possible?

    You can make one table refer to (take data from) another.
    Here is a simple example where I made a table named "Source", duplicated it, and named the copy "Destination-1".
    The table "Destination-1" refers to the table "Source" like this:
    A1=Source :: A1
    Copy cell A1 in the table "Destination-1", then select ALL cells using the menu item "Edit > Select All", then paste.
    You may now duplicate this table to make additional "copies" as needed. To duplicate a table, select the table, then select the menu item "Edit > Duplicate".
    NOTE: you should be aware that data flows FROM the table "Source" TO the destination tables, not the other way.

  • Different Business Area in Customer line item while Billing

    Dear Friends,
    We are on ECC 6.0 and we have a requirement for a different business area in the customer line while billing. We have defined business areas location-wise, and the requirement is that Sundry Debtors should always be booked to the location maintained in the delivery plant field of the sales area data in the customer master.
    Requirement: our material is assigned to plant 115. In the customer master, sales area data -> shipping tab -> delivery plant (KNVV-VWERKS), we maintain "112". Now, while billing and generating the accounting document, the accounting entry is
    Customer 1000.00 (Business area=115)
    Sales    1000.00 (Business area=115)
    But our requirement for the accounting document after billing is
    Customer 1000.00 (Business area=112); the system should check and derive this from table KNVV-VWERKS (delivery plant)
    Sales    1000.00 (Business area=115).
    Please revert.
    Regards,
    Sandeep

    Dear Friends,
    This can be done using a user exit.
    Regards,
    Sandeep

  • Same folders in different business areas

    Hi,
    Is it possible to access folders across business areas?
    I was not able to do this, so I created (with "New Folder from Database") the same folder in two different business areas. But I cannot give them the same names. Why not? I see some folders created previously with the same names in both business areas. Is there some other way?
    Thanks in advance.

    Hello everyone
    Business area names, folder names, and join names are unique across the system. It is possible to have the same folder in multiple business areas and this is not uncommon. It is also possible to reference the same object using multiple folders, making sure the identifier and folder names are unique.
    If you have a folder that you want to duplicate, rather than share between business areas, you can highlight the folder in one area, copy it using CTRL-C, and then paste it into the other area using CTRL-V. Doing this gives you a copy of the folder pointing at the same underlying object. This is useful if you want to reference the same object several times, say for aliasing purposes or for ease of querying. I do this all the time when working with fiscal time folders and it works like a dream.
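    For example (hypothetical table and column names), two folders over the same FISCAL_TIME table act like two aliases of it in the generated SQL, so you can join to it twice in one query:
    SELECT o.order_id,
           t_ord.period_name AS ordered_period,
           t_shp.period_name AS shipped_period
    FROM   orders o
    JOIN   fiscal_time t_ord ON o.order_date = t_ord.calendar_date
    JOIN   fiscal_time t_shp ON o.ship_date  = t_shp.calendar_date;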
    Hope this helps
    Regards
    Michael

  • Document Splitting with Different Business Area

    Hi
    I have a situation where I need to create a new document type, configured in such a way that vendor invoices and credit notes can be booked with correct document splitting while using different business areas.
    I'd like to know if this is possible, and if yes, how.
    Earlier the client used to post such transactions with different business areas, as we only got a warning message to reset the business area of the vendor.
    Now that we have activated New G/L accounting with document splitting, we still get the warning message,
    but then the profit center will not balance, so we have an error.
    The client requires posting to a different business area, with document splitting taking place.
    Any suggestions on this will be great.

    Hi Srikanth,
    In my config settings under Document Splitting I have Zero Balance ticked.
    My scenario, as an example: I am debiting a G/L account for 1000 with business area X, using a cost center assigned to business area X,
    and crediting a vendor for 1000 with business area Y.
    Will this scenario work with document splitting enabled? I am doubtful, but my client needs this.
    Can you tell me if this is a possible scenario?
    If yes, then how?

  • Validation required for different business areas in MIRO

    Dear friends
    The system is allowing posting of different business areas in MIRO (line item & header item). To disallow this I have looked at validations, but I think they are not usable here. Please suggest another solution.
    regds
    sachin

    Hello,
    You can use the MRM_HEADER_CHECK BAdI for this validation,
    or
    you can use the enhancement (transaction CMOD) LMR1M001 - EXIT_SAPLMRMP_010 for this validation.
    Regards,
    Burak

  • GR/IR Automatic Clearing for Different Business Area Items

    Hello All,
    We have different business areas, and sometimes we create a PO for all business areas and invoice it in a specific business area.
    In F.13 automatic clearing:
    for the GR/IR account,
    documents with different business areas cannot be cleared.
    Any idea to solve this issue is appreciated.

    Thanks for your reply,
    I've tried such a solution with more parameters (EBELN, XREF3 and EBELP),
    but in F.13:
    - the document cannot be cleared when selecting "GR/IR account special process";
    - if we treat the GR/IR account as a normal G/L account, i.e. without selecting "GR/IR account special process", the document can be cleared.
    I want to be able to clear the document with the GR/IR field selected (is this possible?),
    and if it is not allowed, will we face a problem by clearing the GR/IR account without selecting the GR/IR parameter?
    Many Thanks

  • I have 2 iPhones - one for personal use and one for work. They are currently connected with the same Apple ID. I would like to separate the two accounts. Does anybody know how to do this?

    Just create a new Apple ID for your work.
    As Allan suggested, items purchased on one iTunes account cannot be moved to the other account.
    However, you can put items purchased on one account onto the other iPhone.

  • Number range based on different business area for same document type

    Hi All,
    The scenario is: my client wants to give different number ranges to a particular document type based on different business areas.
    Is there any user exit by which I can achieve this?
    Regards,
    Meenakshi

    Hello,
    Document number ranges are created per company code and fiscal year. You cannot create or restrict document number ranges per business area.
    Business areas are independent of all other organizational units; they cut across all company codes.
    Regards,
    Ravi

  • Can 2 iPods be used on one computer? (different users - 2 accounts on one computer)

    We have one computer, 2 iPods (one classic, one nano), and 2 iTunes accounts. Is it possible for 2 users to use the same computer with different iPods and different iTunes accounts? We are having problems accomplishing this.

    kickinit1999 wrote:
    We have one computer, 2 iPods (one classic, one nano), and 2 iTunes accounts. Is it possible for 2 users to use the same computer with different iPods and different iTunes accounts?
    Yes.
    We are having problems accomplishing this.
    What problems?

  • How to improve speed of queries that use ORM one table per concrete class

    Hi,
    Many tools that do ORM (Object-Relational Mapping), like Castor, Hibernate, TopLink, JPOX, etc., have a one-table-per-concrete-class feature that maps objects to the following structure:
    CREATE TABLE ABSTRACTPRODUCT (
        ID VARCHAR(8) NOT NULL,
        DESCRIPTION VARCHAR(60) NOT NULL,
        PRIMARY KEY(ID)
    );
    CREATE TABLE PRODUCT (
        ID VARCHAR(8) NOT NULL REFERENCES ABSTRACTPRODUCT(ID),
        CODE VARCHAR(10) NOT NULL,
        PRICE DECIMAL(12,2),
        PRIMARY KEY(ID)
    );
    CREATE UNIQUE INDEX iProduct ON PRODUCT(CODE);
    CREATE TABLE BOOK (
        ID VARCHAR(8) NOT NULL REFERENCES PRODUCT(ID),
        AUTHOR VARCHAR(60) NOT NULL,
        PRIMARY KEY (ID)
    );
    CREATE TABLE COMPACTDISK (
        ID VARCHAR(8) NOT NULL REFERENCES PRODUCT(ID),
        ARTIST VARCHAR(60) NOT NULL,
        PRIMARY KEY(ID)
    );
    Is there a way to improve queries like
    SELECT
        pd.code CODE,   
        abpd.description DESCRIPTION,
        DECODE(bk.id,NULL,cd.artist,bk.author) PERSON
    FROM
        ABSTRACTPRODUCT abpd,
        PRODUCT pd,
        BOOK bk,
        COMPACTDISK cd
    WHERE
        pd.id = abpd.id AND
        bk.id(+) = abpd.id AND
        cd.id(+) = abpd.id AND
        pd.code like '101%'
    or like this:
    SELECT
        pd.code CODE,   
        abpd.description DESCRIPTION,
        DECODE(bk.id,NULL,cd.artist,bk.author) PERSON
    FROM
        ABSTRACTPRODUCT abpd,
        PRODUCT pd,
        BOOK bk,
        COMPACTDISK cd
    WHERE
        pd.id = abpd.id AND
        bk.id(+) = abpd.id AND
        cd.id(+) = abpd.id AND
        abpd.description like '%STARS%' AND
        pd.price BETWEEN 1 AND 10
    Think of tables with many rows: is there something inside MaxDB to improve this type of query? Some annotation in SQL, perhaps, or declaring tables that extend another via the PK? On other databases I managed this using materialized views, but I think it could be faster just using the PK; am I wrong? Or is it better to consolidate all the tables into one table, and what would be the impact on database size of such a consolidation?
    Note: with consolidation I lose the NOT NULL constraints on the database side.
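    A minimal sketch of what I mean by consolidation (with a hypothetical PTYPE discriminator column), where CHECK constraints could stand in for the lost NOT NULLs, if the database supports them:
    CREATE TABLE PRODUCT_ALL (
        ID          VARCHAR(8)   NOT NULL,
        PTYPE       VARCHAR(4)   NOT NULL,  -- discriminator: 'BOOK' or 'CD'
        DESCRIPTION VARCHAR(60)  NOT NULL,
        CODE        VARCHAR(10)  NOT NULL,
        PRICE       DECIMAL(12,2),
        AUTHOR      VARCHAR(60),            -- filled only for books
        ARTIST      VARCHAR(60),            -- filled only for compact disks
        PRIMARY KEY (ID),
        CHECK (PTYPE <> 'BOOK' OR AUTHOR IS NOT NULL),
        CHECK (PTYPE <> 'CD'   OR ARTIST IS NOT NULL)
    );
    CREATE UNIQUE INDEX iProductAll ON PRODUCT_ALL(CODE);
    The two example queries above would then become single-table scans with no outer joins.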
    Thanks for any insight.
    Clóvis

    Hi Lars,
    I don't understand why the optimizer picks that index for TM in the execution plan rather than using the join via the KEY column. Note the WHERE clause is "TM.OID = MF.MY_TIPO_MOVIMENTO", i.e. on the key column, yet the optimizer uses an index whose indexed column is ID_SYS, which is not and cannot be a primary key because it is not UNIQUE. These are the index columns:
    indexes of TipoMovimento
    INDEXNAME     COLUMNNAME          SORT     COLUMNNO     DATATYPE     LEN     INDEX_USED     FILESTATE     DISABLED
    ITIPOMOVIMENTO     TIPO               ASC     1          VARCHAR          2     220546          OK          NO
    ITIPOMOVIMENTO     ID_SYS               ASC     2          CHAR          6     220546          OK          NO
    ITIPOMOVIMENTO     MY_CONTA_DEBITO          ASC     3          CHAR          8     220546          OK          NO
    ITIPOMOVIMENTO     MY_CONTA_CREDITO     ASC     4          CHAR          8     220546          OK          NO
    ITIPOMOVIMENTO1     ID_SYS               ASC     1          CHAR          6     567358          OK          NO
    ITIPOMOVIMENTO2     DESCRICAO          ASC     1          VARCHAR          60     94692          OK          NO
    After I created the index iTituloCobrancaX7 on TituloCobranca(OID, DATA_VENCIMENTO) in a backup instance, I was surprised by the following explain plan:
    OWNER     TABLENAME     COLUMN_OR_INDEX          STRATEGY                    PAGECOUNT     
         TC          ITITULOCOBRANCA1     RANGE CONDITION FOR INDEX          5368     
                   DATA_VENCIMENTO               (USED INDEX COLUMN)          
         MF          OID               JOIN VIA KEY COLUMN               9427     
         TM          OID               JOIN VIA KEY COLUMN               22     
                                  TABLE HASHED          
         PS          OID               JOIN VIA KEY COLUMN               1350     
         BOL          OID               JOIN VIA KEY COLUMN               497     
                                       NO TEMPORARY RESULTS CREATED          
         JDBC_CURSOR_19                    RESULT IS COPIED   , COSTVALUE IS     988
    Note that now the optimizer picks the index ITITULOCOBRANCA1 as I expected, and if I drop the new index iTituloCobrancaX7 the optimizer still produces this execution plan; with it the query executes in 110 ms. With that great news I did the same thing in the production system, but there the execution plan does not change, and I still get a long execution time, this time 413516 ms. Maybe the problem is how the optimizer measures my tables.
    I checked in DBAnalyser that the problems are the catalog cache hit rate (we discussed this in the thread "catalog cache hit rate, how to increase?") and the low selectivity of this SQL command. To achieve better selectivity I would need an index on MF.MY_SACADO, MF.TIPO and TC.DATA_VENCIMENTO together, as explained in previous posts; since this type of cross-table index isn't possible inside MaxDB, I have no way to speed up this type of query without changing the table structure.
    Could the MaxDB developers implement this type of index, or is there no plan for such a feature?
    If not, I must create another schema and consolidate tables to speed up the queries on my system, but with that consolidation I will get more overhead. I must solve the low selectivity, because I think that as the data in the tables grows the query will become impossible. I see that CREATE INDEX supports FUNCTION; maybe a function that joins data from two tables could solve this?
    The instance configuration is:
    Machine:
    Version:       '64BIT Kernel'
    Version:       'X64/LIX86 7.6.03   Build 007-123-157-515'
    Version:       'FAST'
    Machine:       'x86_64'
    Processors:    2 ( logical: 8, cores: 8 )
    data volumes:
    ID     MODE     CONFIGUREDSIZE     USABLESIZE     USEDSIZE     USEDSIZEPERCENTAGE     DROPVOLUME     TOTALCLUSTERAREASIZE     RESERVEDCLUSTERAREASIZE     USEDCLUSTERAREASIZE     PATH     
    1     NORMAL     4194304          4194288          379464          9               NO          0               0               0               /db/SPDT/data/data01.dat     
    2     NORMAL     4194304          4194288          380432          9               NO          0               0               0               /db/SPDT/data/data02.dat     
    3     NORMAL     4194304          4194288          379184          9               NO          0               0               0               /db/SPDT/data/data03.dat     
    4     NORMAL     4194304          4194288          379624          9               NO          0               0               0               /db/SPDT/data/data04.dat     
    5     NORMAL     4194304          4194288          380024          9               NO          0               0               0               /db/SPDT/data/data05.dat
    log volumes:
    ID     CONFIGUREDSIZE     USABLESIZE     PATH               MIRRORPATH
    1     51200          51176          /db/SPDT/log/log01.dat     ?
    parameters:
    KERNELVERSION                         KERNEL    7.6.03   BUILD 007-123-157-515
    INSTANCE_TYPE                         OLTP
    MCOD                                  NO
    _SERVERDB_FOR_SAP                     YES
    _UNICODE                              NO
    DEFAULT_CODE                          ASCII
    DATE_TIME_FORMAT                      ISO
    CONTROLUSERID                         DBM
    CONTROLPASSWORD                       
    MAXLOGVOLUMES                         2
    MAXDATAVOLUMES                        11
    LOG_VOLUME_NAME_001                   /db/SPDT/log/log01.dat
    LOG_VOLUME_TYPE_001                   F
    LOG_VOLUME_SIZE_001                   6400
    DATA_VOLUME_NAME_0005                 /db/SPDT/data/data05.dat
    DATA_VOLUME_NAME_0004                 /db/SPDT/data/data04.dat
    DATA_VOLUME_NAME_0003                 /db/SPDT/data/data03.dat
    DATA_VOLUME_NAME_0002                 /db/SPDT/data/data02.dat
    DATA_VOLUME_NAME_0001                 /db/SPDT/data/data01.dat
    DATA_VOLUME_TYPE_0005                 F
    DATA_VOLUME_TYPE_0004                 F
    DATA_VOLUME_TYPE_0003                 F
    DATA_VOLUME_TYPE_0002                 F
    DATA_VOLUME_TYPE_0001                 F
    DATA_VOLUME_SIZE_0005                 524288
    DATA_VOLUME_SIZE_0004                 524288
    DATA_VOLUME_SIZE_0003                 524288
    DATA_VOLUME_SIZE_0002                 524288
    DATA_VOLUME_SIZE_0001                 524288
    DATA_VOLUME_MODE_0005                 NORMAL
    DATA_VOLUME_MODE_0004                 NORMAL
    DATA_VOLUME_MODE_0003                 NORMAL
    DATA_VOLUME_MODE_0002                 NORMAL
    DATA_VOLUME_MODE_0001                 NORMAL
    DATA_VOLUME_GROUPS                    1
    LOG_BACKUP_TO_PIPE                    NO
    MAXBACKUPDEVS                         2
    LOG_MIRRORED                          NO
    MAXVOLUMES                            14
    LOG_IO_BLOCK_COUNT                    8
    DATA_IO_BLOCK_COUNT                   64
    BACKUP_BLOCK_CNT                      64
    _DELAY_LOGWRITER                      0
    LOG_IO_QUEUE                          50
    _RESTART_TIME                         600
    MAXCPU                                8
    MAX_LOG_QUEUE_COUNT                   0
    USED_MAX_LOG_QUEUE_COUNT              8
    LOG_QUEUE_COUNT                       1
    MAXUSERTASKS                          500
    _TRANS_RGNS                           8
    _TAB_RGNS                             8
    _OMS_REGIONS                          0
    _OMS_RGNS                             7
    OMS_HEAP_LIMIT                        0
    OMS_HEAP_COUNT                        8
    OMS_HEAP_BLOCKSIZE                    10000
    OMS_HEAP_THRESHOLD                    100
    OMS_VERS_THRESHOLD                    2097152
    HEAP_CHECK_LEVEL                      0
    _ROW_RGNS                             8
    RESERVEDSERVERTASKS                   16
    MINSERVERTASKS                        28
    MAXSERVERTASKS                        28
    _MAXGARBAGE_COLL                      1
    _MAXTRANS                             4008
    MAXLOCKS                              120080
    _LOCK_SUPPLY_BLOCK                    100
    DEADLOCK_DETECTION                    4
    SESSION_TIMEOUT                       180
    OMS_STREAM_TIMEOUT                    30
    REQUEST_TIMEOUT                       5000
    _IOPROCS_PER_DEV                      2
    _IOPROCS_FOR_PRIO                     0
    _IOPROCS_FOR_READER                   0
    _USE_IOPROCS_ONLY                     NO
    _IOPROCS_SWITCH                       2
    LRU_FOR_SCAN                          NO
    _PAGE_SIZE                            8192
    _PACKET_SIZE                          131072
    _MINREPLY_SIZE                        4096
    _MBLOCK_DATA_SIZE                     32768
    _MBLOCK_QUAL_SIZE                     32768
    _MBLOCK_STACK_SIZE                    32768
    _MBLOCK_STRAT_SIZE                    16384
    _WORKSTACK_SIZE                       8192
    _WORKDATA_SIZE                        8192
    _CAT_CACHE_MINSIZE                    262144
    CAT_CACHE_SUPPLY                      131072
    INIT_ALLOCATORSIZE                    262144
    ALLOW_MULTIPLE_SERVERTASK_UKTS        NO
    _TASKCLUSTER_01                       tw;al;ut;2000*sv,100*bup;10*ev,10*gc;
    _TASKCLUSTER_02                       ti,100*dw;63*us;
    _TASKCLUSTER_03                       equalize
    _DYN_TASK_STACK                       NO
    _MP_RGN_QUEUE                         YES
    _MP_RGN_DIRTY_READ                    DEFAULT
    _MP_RGN_BUSY_WAIT                     DEFAULT
    _MP_DISP_LOOPS                        2
    _MP_DISP_PRIO                         DEFAULT
    MP_RGN_LOOP                           -1
    _MP_RGN_PRIO                          DEFAULT
    MAXRGN_REQUEST                        -1
    _PRIO_BASE_U2U                        100
    _PRIO_BASE_IOC                        80
    _PRIO_BASE_RAV                        80
    _PRIO_BASE_REX                        40
    _PRIO_BASE_COM                        10
    _PRIO_FACTOR                          80
    _DELAY_COMMIT                         NO
    _MAXTASK_STACK                        512
    MAX_SERVERTASK_STACK                  500
    MAX_SPECIALTASK_STACK                 500
    _DW_IO_AREA_SIZE                      50
    _DW_IO_AREA_FLUSH                     50
    FBM_VOLUME_COMPRESSION                50
    FBM_VOLUME_BALANCE                    10
    _FBM_LOW_IO_RATE                      10
    CACHE_SIZE                            262144
    _DW_LRU_TAIL_FLUSH                    25
    XP_DATA_CACHE_RGNS                    0
    _DATA_CACHE_RGNS                      64
    XP_CONVERTER_REGIONS                  0
    CONVERTER_REGIONS                     8
    XP_MAXPAGER                           0
    MAXPAGER                              64
    SEQUENCE_CACHE                        1
    _IDXFILE_LIST_SIZE                    2048
    VOLUMENO_BIT_COUNT                    8
    OPTIM_MAX_MERGE                       500
    OPTIM_INV_ONLY                        YES
    OPTIM_CACHE                           NO
    OPTIM_JOIN_FETCH                      0
    JOIN_SEARCH_LEVEL                     0
    JOIN_MAXTAB_LEVEL4                    16
    JOIN_MAXTAB_LEVEL9                    5
    _READAHEAD_BLOBS                      32
    CLUSTER_WRITE_THRESHOLD               80
    CLUSTERED_LOBS                        NO
    RUNDIRECTORY                          /var/opt/sdb/data/wrk/SPDT
    OPMSG1                                /dev/console
    OPMSG2                                /dev/null
    _KERNELDIAGFILE                       knldiag
    KERNELDIAGSIZE                        800
    _EVENTFILE                            knldiag.evt
    _EVENTSIZE                            0
    _MAXEVENTTASKS                        2
    _MAXEVENTS                            100
    _KERNELTRACEFILE                      knltrace
    TRACE_PAGES_TI                        2
    TRACE_PAGES_GC                        20
    TRACE_PAGES_LW                        5
    TRACE_PAGES_PG                        3
    TRACE_PAGES_US                        10
    TRACE_PAGES_UT                        5
    TRACE_PAGES_SV                        5
    TRACE_PAGES_EV                        2
    TRACE_PAGES_BUP                       0
    KERNELTRACESIZE                       5369
    EXTERNAL_DUMP_REQUEST                 NO
    _AK_DUMP_ALLOWED                      YES
    _KERNELDUMPFILE                       knldump
    _RTEDUMPFILE                          rtedump
    _UTILITY_PROTFILE                     dbm.utl
    UTILITY_PROTSIZE                      100
    _BACKUP_HISTFILE                      dbm.knl
    _BACKUP_MED_DEF                       dbm.mdf
    _MAX_MESSAGE_FILES                    0
    _SHMKERNEL                            44601
    __PARAM_CHANGED___                    0
    __PARAM_VERIFIED__                    2008-05-03 23:12:55
    DIAG_HISTORY_NUM                      2
    DIAG_HISTORY_PATH                     /var/opt/sdb/data/wrk/SPDT/DIAGHISTORY
    _DIAG_SEM                             1
    SHOW_MAX_STACK_USE                    NO
    SHOW_MAX_KB_STACK_USE                 NO
    LOG_SEGMENT_SIZE                      2133
    _COMMENT                              
    SUPPRESS_CORE                         YES
    FORMATTING_MODE                       PARALLEL
    FORMAT_DATAVOLUME                     YES
    OFFICIAL_NODE                         
    UKT_CPU_RELATIONSHIP                  NONE
    HIRES_TIMER_TYPE                      CPU
    LOAD_BALANCING_CHK                    30
    LOAD_BALANCING_DIF                    10
    LOAD_BALANCING_EQ                     5
    HS_STORAGE_DLL                        libhsscopy
    HS_SYNC_INTERVAL                      50
    USE_OPEN_DIRECT                       YES
    USE_OPEN_DIRECT_FOR_BACKUP            NO
    SYMBOL_DEMANGLING                     NO
    EXPAND_COM_TRACE                      NO
    JOIN_TABLEBUFFER                      128
    SET_VOLUME_LOCK                       YES
    SHAREDSQL                             YES
    SHAREDSQL_CLEANUPTHRESHOLD            25
    SHAREDSQL_COMMANDCACHESIZE            262144
    MEMORY_ALLOCATION_LIMIT               0
    USE_SYSTEM_PAGE_CACHE                 YES
    USE_COROUTINES                        YES
    FORBID_LOAD_BALANCING                 YES
    MIN_RETENTION_TIME                    60
    MAX_RETENTION_TIME                    480
    MAX_SINGLE_HASHTABLE_SIZE             512
    MAX_HASHTABLE_MEMORY                  5120
    ENABLE_CHECK_INSTANCE                 YES
    RTE_TEST_REGIONS                      0
    HASHED_RESULTSET                      YES
    HASHED_RESULTSET_CACHESIZE            262144
    CHECK_HASHED_RESULTSET                0
    AUTO_RECREATE_BAD_INDEXES             NO
    AUTHENTICATION_ALLOW                  
    AUTHENTICATION_DENY                   
    TRACE_AK                              NO
    TRACE_DEFAULT                         NO
    TRACE_DELETE                          NO
    TRACE_INDEX                           NO
    TRACE_INSERT                          NO
    TRACE_LOCK                            NO
    TRACE_LONG                            NO
    TRACE_OBJECT                          NO
    TRACE_OBJECT_ADD                      NO
    TRACE_OBJECT_ALTER                    NO
    TRACE_OBJECT_FREE                     NO
    TRACE_OBJECT_GET                      NO
    TRACE_OPTIMIZE                        NO
    TRACE_ORDER                           NO
    TRACE_ORDER_STANDARD                  NO
    TRACE_PAGES                           NO
    TRACE_PRIMARY_TREE                    NO
    TRACE_SELECT                          NO
    TRACE_TIME                            NO
    TRACE_UPDATE                          NO
    TRACE_STOP_ERRORCODE                  0
    TRACE_ALLOCATOR                       0
    TRACE_CATALOG                         0
    TRACE_CLIENTKERNELCOM                 0
    TRACE_COMMON                          0
    TRACE_COMMUNICATION                   0
    TRACE_CONVERTER                       0
    TRACE_DATACHAIN                       0
    TRACE_DATACACHE                       0
    TRACE_DATAPAM                         0
    TRACE_DATATREE                        0
    TRACE_DATAINDEX                       0
    TRACE_DBPROC                          0
    TRACE_FBM                             0
    TRACE_FILEDIR                         0
    TRACE_FRAMECTRL                       0
    TRACE_IOMAN                           0
    TRACE_IPC                             0
    TRACE_JOIN                            0
    TRACE_KSQL                            0
    TRACE_LOGACTION                       0
    TRACE_LOGHISTORY                      0
    TRACE_LOGPAGE                         0
    TRACE_LOGTRANS                        0
    TRACE_LOGVOLUME                       0
    TRACE_MEMORY                          0
    TRACE_MESSAGES                        0
    TRACE_OBJECTCONTAINER                 0
    TRACE_OMS_CONTAINERDIR                0
    TRACE_OMS_CONTEXT                     0
    TRACE_OMS_ERROR                       0
    TRACE_OMS_FLUSHCACHE                  0
    TRACE_OMS_INTERFACE                   0
    TRACE_OMS_KEY                         0
    TRACE_OMS_KEYRANGE                    0
    TRACE_OMS_LOCK                        0
    TRACE_OMS_MEMORY                      0
    TRACE_OMS_NEWOBJ                      0
    TRACE_OMS_SESSION                     0
    TRACE_OMS_STREAM                      0
    TRACE_OMS_VAROBJECT                   0
    TRACE_OMS_VERSION                     0
    TRACE_PAGER                           0
    TRACE_RUNTIME                         0
    TRACE_SHAREDSQL                       0
    TRACE_SQLMANAGER                      0
    TRACE_SRVTASKS                        0
    TRACE_SYNCHRONISATION                 0
    TRACE_SYSVIEW                         0
    TRACE_TABLE                           0
    TRACE_VOLUME                          0
    CHECK_BACKUP                          NO
    CHECK_DATACACHE                       NO
    CHECK_KB_REGIONS                      NO
    CHECK_LOCK                            NO
    CHECK_LOCK_SUPPLY                     NO
    CHECK_REGIONS                         NO
    CHECK_TASK_SPECIFIC_CATALOGCACHE      NO
    CHECK_TRANSLIST                       NO
    CHECK_TREE                            NO
    CHECK_TREE_LOCKS                      NO
    CHECK_COMMON                          0
    CHECK_CONVERTER                       0
    CHECK_DATAPAGELOG                     0
    CHECK_DATAINDEX                       0
    CHECK_FBM                             0
    CHECK_IOMAN                           0
    CHECK_LOGHISTORY                      0
    CHECK_LOGPAGE                         0
    CHECK_LOGTRANS                        0
    CHECK_LOGVOLUME                       0
    CHECK_SRVTASKS                        0
    OPTIMIZE_AGGREGATION                  YES
    OPTIMIZE_FETCH_REVERSE                YES
    OPTIMIZE_STAR_JOIN                    YES
    OPTIMIZE_JOIN_ONEPHASE                YES
    OPTIMIZE_JOIN_OUTER                   YES
    OPTIMIZE_MIN_MAX                      YES
    OPTIMIZE_FIRST_ROWS                   YES
    OPTIMIZE_OPERATOR_JOIN                YES
    OPTIMIZE_JOIN_HASHTABLE               YES
    OPTIMIZE_JOIN_HASH_MINIMAL_RATIO      1
    OPTIMIZE_OPERATOR_JOIN_COSTFUNC       YES
    OPTIMIZE_JOIN_PARALLEL_MINSIZE        1000000
    OPTIMIZE_JOIN_PARALLEL_SERVERS        0
    OPTIMIZE_JOIN_OPERATOR_SORT           YES
    OPTIMIZE_QUAL_ON_INDEX                YES
    DDLTRIGGER                            YES
    SUBTREE_LOCKS                         NO
    MONITOR_READ                          2147483647
    MONITOR_TIME                          2147483647
    MONITOR_SELECTIVITY                   0
    MONITOR_ROWNO                         0
    CALLSTACKLEVEL                        0
    OMS_RUN_IN_UDE_SERVER                 NO
    OPTIMIZE_QUERYREWRITE                 OPERATOR
    TRACE_QUERYREWRITE                    0
    CHECK_QUERYREWRITE                    0
    PROTECT_DATACACHE_MEMORY              NO
    LOCAL_REDO_LOG_BUFFER_SIZE            0
    FILEDIR_SPINLOCKPOOL_SIZE             10
    TRANS_HISTORY_SIZE                    0
    TRANS_THRESHOLD_VALUE                 60
    ENABLE_SYSTEM_TRIGGERS                YES
    DBFILLINGABOVELIMIT                   70L80M85M90H95H96H97H98H99H
    DBFILLINGBELOWLIMIT                   70L80L85L90L95L
    LOGABOVELIMIT                         50L75L90M95M96H97H98H99H
    AUTOSAVE                              1
    BACKUPRESULT                          1
    CHECKDATA                             1
    EVENT                                 1
    ADMIN                                 1
    ONLINE                                1
    UPDSTATWANTED                         1
    OUTOFSESSIONS                         3
    ERROR                                 3
    SYSTEMERROR                           3
    DATABASEFULL                          1
    LOGFULL                               1
    LOGSEGMENTFULL                        1
    STANDBY                               1
    USESELECTFETCH                        YES
    USEVARIABLEINPUT                      NO
    UPDATESTAT_PARALLEL_SERVERS           0
    UPDATESTAT_SAMPLE_ALGO                1
    SIMULATE_VECTORIO                     IF_OPEN_DIRECT_OR_RAW_DEVICE
    COLUMNCOMPRESSION                     YES
    TIME_MEASUREMENT                      NO
    CHECK_TABLE_WIDTH                     NO
    MAX_MESSAGE_LIST_LENGTH               100
    SYMBOL_RESOLUTION                     YES
    PREALLOCATE_IOWORKER                  NO
    CACHE_IN_SHARED_MEMORY                NO
    INDEX_LEAF_CACHING                    2
    NO_SYNC_TO_DISK_WANTED                NO
    SPINLOCK_LOOP_COUNT                   30000
    SPINLOCK_BACKOFF_BASE                 1
    SPINLOCK_BACKOFF_FACTOR               2
    SPINLOCK_BACKOFF_MAXIMUM              64
    ROW_LOCKS_PER_TRANSACTION             50
    USEUNICODECOLUMNCOMPRESSION           NO
    About sending you the data from the tables: I don't have permission to do that, since all the data is in a production system and the customer does not give me the rights to send any information. Sorry about that.
    Best regards
    Clóvis

  • Create Foreign Key to one table in a different database

    Hi, I need to create a reference to a table in another database. I understand that a database link can be used to connect to the other database, but when I am creating the foreign key on my table, I can see only the tables in the schemas of my own database, not the other schemas. I created a connection with a database link, but I do not see a way to reference this connection from the create table wizard. I am using Oracle 9i and the client is 9.01.01.
    Thanks in advance,
    Mónica Alarcón

    We cannot use tables in another database to enforce data integrity. This makes a lot of sense if you think about what would happen if the remote database goes down.
    If you want to do this, then you need to use replication (a materialized view) to bring the parent table's data into the local database.
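    A rough sketch of that approach (hypothetical names, assuming a DEPT parent table behind a database link called REMOTE_DB):
    CREATE MATERIALIZED VIEW dept_mv
      REFRESH COMPLETE ON DEMAND
      AS SELECT deptno, dname FROM dept@remote_db;
    -- the MV's container table can carry a primary key for the FK to reference
    ALTER TABLE dept_mv ADD CONSTRAINT dept_mv_pk PRIMARY KEY (deptno);
    CREATE TABLE emp_local (
      empno  NUMBER PRIMARY KEY,
      deptno NUMBER CONSTRAINT emp_dept_fk REFERENCES dept_mv (deptno)
    );
    Bear in mind the child data is then only as consistent as the last refresh, and a complete refresh deletes and re-inserts the parent rows, which can collide with the FK while child rows exist.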
    Cheers, APC

  • How many primary keys can be used in one table

    Hi,
    Please help me: what is the maximum number of primary keys that can be used within one table?
    Regards,
    Sunil Kumar.T

    Hi,
    To my knowledge, it depends on the database and the version you are working with.
    Here is a sample specification I have seen for one database:
    Technical Specification of SAP DB Version 7.4
    Description                                 Maximum Value
    Database size                               32 TB (with 8 KB page size)
    Number of files/volumes per database        64...4096, specified by a configuration parameter
    File/volume size (data)                     518...8 GB (dependent on operating system limitations)
    File/volume size (log)                      16 TB (dependent on operating system limitations)
    SQL statement length                        >= 16 KB (default value 64 KB, specified by a system variable)
    SQL character string length                 Specified by a system variable
    Identifier length                           32 characters
    Numeric precision                           38 places
    Number of tables                            unlimited
    Number of columns per table (with KEY)      1024
    Number of columns per table (without KEY)   1023
    Number of primary key columns per table     512
    Number of columns in an index               16
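    Note that a table can have only one primary key, though that key may span several columns (up to 512 per the spec above); any further candidate keys have to be modelled as UNIQUE constraints instead. A small illustration with hypothetical names:
    CREATE TABLE ORDER_ITEM (
        ORDER_NO INTEGER NOT NULL,
        ITEM_NO  INTEGER NOT NULL,
        SKU      VARCHAR(10) NOT NULL,
        PRIMARY KEY (ORDER_NO, ITEM_NO),  -- the single, composite primary key
        UNIQUE (ORDER_NO, SKU)            -- an additional candidate key
    );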
    Reward Points if useful

  • Intra company / different business area service jobs flow

    Dear all
    The business process is below
    The business process is as follows:
    A machinery plant provides repair services to its parent plant. All incoming materials from the parent plant are received non-valuated for repair through a notification. After receipt, the machinery plant creates a service order and all costs are booked to it. The serviced material is dispatched back to the parent plant without an inventory posting; all materials and activities used during the service are booked on the machinery plant's account.
    The service order, which carries the expense costs, should settle to the parent company's cost center or order, but that belongs to a separate business area.
    For the above, shall we follow this process: maintenance notification (by parent plant) -> non-valuated material receipt (at service plant) -> material or equipment master creation -> service notification creation -> service order creation & processing -> settlement to parent cost center -> dispatch of serviced product -> receipt without purchase order?
    Here I do not understand how the parent plant will get the maintenance cost of the equipment, since the material is non-valuated.
    Can anyone explain the above process?

    Hi,
    The TCodes for the Transfers are as follows:
    ABUMN: for intra-company (inter-plant) transfers
    ABT1N: for inter-company transfers
    If you have only one company code, then there is no need to use TCode ABT1N; you can use ABUMN.
    Regarding the excise invoices, you can post them once billing is done using TCode J1IIN, but of course it will depend on the point at which your client creates the excise invoice.
    Hope this is useful.

  • Settlement to different business area cost objects

    Dear all
    In my requirement, we carry out work at one business area on behalf of another business area. The maintenance order collects some expense amounts; after completion of the job I need to settle them to the other business area's cost objects. But SAP gives an error. Is it possible?

    The error below is raised while settling orders to another business area's cost objects.
    There are no accrued amounts; settlement is not possible.
    Message no. KD256
    Diagnosis
    The sender has a results analysis key, but no accrued values.
    Either the accrual was not yet started for the sender, or the values to be settled from the accrual equal zero.
    Note
    Actual costs are only settled once the sender is technically completed or finally delivered.
    This applies to
    o  Projects that cannot carry revenues
    •  In Service Management for sales order production, to the indicator (Calculate WIP for orders in sales order production) set in the accrual version, for
       o  Production orders
       o  Internal orders with no revenues
       o  Orders with no revenues
    •  In Service Management for engineer-to-order production, to the indicator (Calculate WIP for orders in engineer-to-order production) set in the accrual version, for
       o  Internal orders with no revenues
       o  Orders with no revenues
    •  In Service Management, to the indicator (Calculate WIP for internal and service orders with no revenues) set in the accrual version, for
       o  Internal orders with no revenues
       o  Orders with no revenues
    •  To the indicator (Calculate WIP for production orders without settlement to material) set in the accrual version, for
       o  Production orders without settlement to material
    Procedure
    1. If necessary, start the accrual calculation for the sender
    2. Restart settlement

Maybe you are looking for

  • Importing .ai files to AE

  • InDesign CS2 server scriptable plugin: how to import an image file into a frame?

  • Collecting "similar" images based on appearance?

  • Linux version of iPlanet Directory Server 5.0 planned?

  • Cannot install Captivate 8 on Mac - installer failed to initialise