Issue with index on table

Hi,
We have created an index (let's call it Z2) on table CATSDB with two fields. There is another index (Z1) with the same fields in the same order. When a report accesses the table it takes more time to run while index Z2 exists on the table, but after Z2 was deleted the report ran quickly. Is this caused by the duplicate index?
Please let me know
Regards
Shiva

Hi
I am giving the complete index and buffering concept details below; by going through this you can understand how we can achieve performance through them.
<b>Reward if useful</b>
<b>Performance during table access</b>
<b>Indexes</b>
Primary and secondary indexes
Structure of an index
Accessing tables using indexes
<b>Table buffering</b>
Advantages of buffering
Concept of buffering
Buffering types
Buffer synchronization
<b>Primary and secondary indexes</b>
Index: Technical key of a database table.
Primary index: The primary index contains the key fields of the table and a pointer to the non-key fields of the table. The primary index is created automatically when the table is created in the database.
Secondary index: Additional indexes can be created for the fields of the table that are accessed most frequently.
<b>Structure of an Index</b>
An index can be used to speed up the selection of data records from a table.
An index can be considered to be a copy of a database table reduced to certain fields. The data is stored in sorted form in this copy. This sorting permits fast access to the records of the table (for example using a binary search). Not all of the fields of the table are contained in the index. The index also contains a pointer from the index entry to the corresponding table entry to permit all the field contents to be read.
When creating indexes, please note that:
An index can only be used up to the last specified field in the selection. The fields which are specified in the WHERE clause for a large number of selections should be in the first positions of the index (see the example after these notes).
Only those fields whose values significantly restrict the amount of data are meaningful in an index.
When you change a data record of a table, you must adjust the index sorting. Tables whose contents are frequently changed therefore should not have too many indexes.
Make sure that the indexes on a table are as disjunctive as possible.
(That is they should contain as few fields in common as possible. If two indexes on a table have a large number of common fields, this could make it more difficult for the optimizer to choose the most selective index.)
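For example, take a hypothetical custom table ZORDERS with a secondary index Z01 on the fields VKORG and KUNNR, in that order (all names here are invented for illustration, they are not from this thread):
DATA lt_orders TYPE TABLE OF zorders.
" Both index fields are restricted, starting with the first one (VKORG):
" the database can use index Z01 for this selection.
SELECT * FROM zorders INTO TABLE lt_orders
  WHERE vkorg = '1000'
    AND kunnr = '0000100001'.
" The first index field (VKORG) is not restricted at all:
" index Z01 cannot be used and the selection is much less efficient.
SELECT * FROM zorders INTO TABLE lt_orders
  WHERE kunnr = '0000100001'.
This is also why an exact duplicate of an existing index, as in the original question, brings no benefit: the optimizer can only ever use one of the two identical indexes, while both still have to be maintained on every change to the table.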
<b>Accessing tables using Indexes</b>
The database optimizer decides which index on the table should be used by the database to access data records.
You must distinguish between the primary index and secondary indexes of a table. The primary index contains the key fields of the table. The primary index is automatically created in the database when the table is activated. If a large table is frequently accessed such that it is not possible to apply primary index sorting, you should create secondary indexes for the table.
The indexes on a table have a three-character index ID. '0' is reserved for the primary index. Customers can create their own indexes on SAP tables; their IDs must begin with Y or Z.
If the index fields have key function, i.e. they already uniquely identify each record of the table, the index is called a unique index. This ensures that there are no records with duplicate values in the index fields in the database.
When you define a secondary index in the ABAP Dictionary, you can specify whether it should be created on the database when it is activated. Some indexes only result in a gain in performance for certain database systems. You can therefore specify a list of database systems when you define an index. The index is then only created on the specified database systems when it is activated.
<b>Database access using Buffer concept</b>
Buffering allows you to access data more quickly by letting you read it from the buffer of the application server instead of from the database.
<b>Advantages of buffering</b>
Table buffering increases the performance when the records of the table are read.
As records of a buffered table are read directly from the local buffer of the application server on which the accessing transaction is running, time required to access data is greatly reduced. The access improves by a factor of 10 to 100 depending on the structure of the table and on the exact system configuration.
If the storage requirements in the buffer increase due to further data, the data that has not been accessed for the longest time is displaced. This displacement takes place asynchronously at certain times which are defined dynamically based on the buffer accesses. Data is only displaced if the free space in the buffer is less than a predefined value or the quality of the access is not satisfactory at this time.
Entering $TAB in the command field resets the table buffers on the corresponding application server. Only use this command if there are inconsistencies in the buffer. In large systems, it can take several hours to fill the buffers. The performance is considerably reduced during this time.
<b>Concept of buffering</b>
The R/3 System manages and synchronizes the buffers on the individual application servers. If an application program accesses data of a table, the database interface determines whether this data lies in the buffer of the application server. If this is the case, the data is read directly from the buffer. If the data is not in the buffer of the application server, it is read from the database and loaded into the buffer. The buffer can therefore satisfy the next access to this data.
The buffering type determines which records of the table are loaded into the buffer of the application server when a record of the table is accessed. There are three different buffering types.
With full buffering, all the table records are loaded into the buffer when one record of the table is accessed.
With generic buffering, all the records whose left-justified part of the key is the same are loaded into the buffer when a table record is accessed.
With single-record buffering, only the record that was accessed is loaded into the buffer.
<b>Buffering types</b>
With full buffering, the table is either completely or not at all in the buffer. When a record of the table is accessed, all the records of the table are loaded into the buffer.
When you decide whether a table should be fully buffered, you must take the table size, the number of read accesses and the number of write accesses into consideration. The smaller the table is, the more frequently it is read and the less frequently it is written, the better it is to fully buffer the table.
Full buffering is also advisable for tables having frequent accesses to records that do not exist. Since all the records of the table reside in the buffer, it is already clear in the buffer whether or not a record exists.
The data records are stored in the buffer sorted by table key. When you access the data with SELECT, only fields up to the last specified key field can be used for the access. The left-justified part of the key should therefore be as large as possible for such accesses. For example, if the first key field is not defined, the entire table is scanned in the buffer. Under these circumstances, a direct access to the database could be more efficient if there is a suitable secondary index there.
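For example (a sketch only; ZCOUNTRY_TEXT is an assumed fully buffered table with the key fields MANDT, LAND1 and SPRAS, not a real table from this thread):
DATA: ls_text TYPE zcountry_text,
      lt_text TYPE TABLE OF zcountry_text.
" The key is specified from the left (the client is handled automatically):
" the sorted buffer can be searched directly, e.g. with a binary search.
SELECT SINGLE * FROM zcountry_text INTO ls_text
  WHERE land1 = 'DE'
    AND spras = 'E'.
" The first key field after the client (LAND1) is missing:
" the whole table is scanned in the buffer. A direct database access via a
" suitable secondary index could be faster here.
SELECT * FROM zcountry_text INTO TABLE lt_text
  WHERE spras = 'E'.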
With generic buffering, all the records whose generic key fields agree with this record are loaded into the buffer when one record of the table is accessed. The generic key is a left-justified part of the primary key of the table that must be defined when the buffering type is selected. The generic key should be selected so that the generic areas are not too small, which would result in too many generic areas. If there are only a few records for each generic area, full buffering is usually preferable for the table. If you choose too large a generic key, too much data will be invalidated if there are changes to table entries, which would have a negative effect on the performance.
A table should be generically buffered if only certain generic areas of the table are usually needed for processing.
Client-dependent, fully buffered tables are automatically generically buffered. The client field is the generic key. It is assumed that not all of the clients are being processed at the same time on one application server. Language-dependent tables are a further example of generic buffering. The generic key includes all the key fields up to and including the language field.
The generic areas are managed in the buffer as independent objects. The generic areas are managed analogously to fully buffered tables. You should therefore also read the information about full buffering.
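For instance (sketch; ZTEXTS is an assumed language-dependent table that is generically buffered with the generic key MANDT and SPRAS):
DATA lt_texts TYPE TABLE OF ztexts.
" Only the generic area for the logon language is loaded into the buffer;
" records in other languages are not buffered by this access.
SELECT * FROM ztexts INTO TABLE lt_texts
  WHERE spras = sy-langu.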
Single-record buffering is recommended particularly for large tables in which only a few records are accessed repeatedly with SELECT SINGLE. All the accesses to the table that do not use SELECT SINGLE bypass the buffer and directly access the database.
If you access a record that was not yet buffered using SELECT SINGLE, there is a database access to load the record. If the table does not contain a record with the specified key, this record is recorded in the buffer as non-existent. This prevents a further database access if you make another access with the same key.
You only need one database access to load a table with full buffering, but you need several database accesses with single-record buffering. Full buffering is therefore generally preferable for small tables that are frequently accessed.
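To make the difference concrete, here is a sketch with an assumed single-record buffered table ZPARAM (key fields MANDT and PARAM_ID; all names invented for illustration):
DATA: ls_param  TYPE zparam,
      lt_params TYPE TABLE OF zparam.
" SELECT SINGLE with the full key: served from the single-record buffer
" once the record (or its non-existence) has been loaded.
SELECT SINGLE * FROM zparam INTO ls_param
  WHERE param_id = 'MAX_ITEMS'.
" Not a SELECT SINGLE: bypasses the single-record buffer and always goes
" to the database.
SELECT * FROM zparam INTO TABLE lt_params
  WHERE param_id LIKE 'MAX%'.
" Explicitly forcing a database read, e.g. directly after the record was
" changed by another process.
SELECT SINGLE * FROM zparam BYPASSING BUFFER INTO ls_param
  WHERE param_id = 'MAX_ITEMS'.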
<b>Synchronizing local buffers</b>
The table buffers reside locally on each application server in the system. However, this makes it necessary for the buffer administration to transfer all changes made to buffered objects to all the application servers of the system.
If a buffered table is modified, it is updated synchronously in the buffer of the application server from which the change was made. The buffers of the whole network, that is, the buffers of all the other application servers, are synchronized with an asynchronous procedure.
Entries are written in a central database table (DDLOG) after each table modification that could be buffered. Each application server reads these entries at fixed time intervals.
If entries are found that show a change to the data buffered by this server, this data is invalidated. If this data is accessed again, it is read directly from the database. In such an access, the table can then be loaded to the buffer again.

Similar Messages

  • Issue with Update of Table VARINUM

    Hi,
    I am getting wait issues with updates of table VARINUM. Has anybody faced such an issue?
    I have a lot of jobs which are running in the background; I am submitting them through a report. What can be the issue?
    Regards,
    Abhishek jolly

    This is quite old, but not answered properly yet, so here you go:
    SAP generates a new job and temporary variant on report RSDBSPJS for each HTTP call, which creates database locks on table VARINUM.
    This causes any heavyweight BSP application to hang and give timeout errors.
    The problem is fixed by applying OSS note 1791958, which is not included in any service pack.

  • Issue with data dictionary -Table maintanance generator

    Hi all,
    I have an issue with the Data Dictionary table maintenance generator. I have entered some records in a custom table (ZBCSECROLETOGRP) and changed the delivery class from C to A. When I create the table maintenance generator, I encounter the following errors:
    1)Field ZBCSECROLETOGRP-PORTALGROUP shortened (new visible length: 000032)
    2)0012 could not be generated
    3)In TCTRL_ZBCSECROLETOGRP field LENGTH has the invalid value 01
    My main goal is to create the table maintenance generator and transport it to the subsequent systems.
    Please help.
    ThnX in advance,
    Vishal..

    Hi,
    Regenerate the table maintenance by selecting the checkbox "Modified field structure" => new entry, and then save.
    Also ensure that the new changes are not affecting old data because of the data type changes. If that is the case, delete the old records, regenerate the table maintenance, and re-enter the records which you had deleted.
    Thanks,
    Best regards,
    Prashant

  • CVC creation - Strange issue with Master data table of 9AMATNR

    Hi Experts,
    We have encountered a strange issue with Master data table (/BI0/9APMATNR) of info object 9AMATNR.
    We have a BADI implemented for checking the valid characteristic before creation of the CVC using transaction /SAPAPO/MC62. This BADI runs a SELECT on the master data table of the material, /BI0/9APMATNR, and returns no value, but the material actually exists in the table (checked through SE16).
    Now we go into the InfoObject 9AMATNR and open the Master data tab. There we go into the master table
    /BI0/9APMATNR and activate it. After activating the table, it is read by the SELECT statement inside the BADI (strange) and the CVC is allowed to be created.
    Ideally it should not allow us to activate the SAP standard table /BI0/9APMATNR. I observed that in the technical settings of this table, single-record buffering is switched on. (But to my knowledge the buffer gets refreshed every 2 to 4 minutes, not in 2 days or something.)
    Your expert comment is valuable to us. Thanks.
    Best Regards,
    Chandan Dubey

    Hi Chandan,
    Try using a WAIT statement of 5 seconds before your SELECT statement.
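    A rough sketch of what that could look like inside the BADI (the variable names and the key field of /BI0/9APMATNR below are assumptions, not taken from your implementation):
    DATA ls_pmatnr TYPE /bi0/9apmatnr.
    WAIT UP TO 5 SECONDS.                " let any pending update settle first, as suggested above
    SELECT SINGLE * FROM /bi0/9apmatnr INTO ls_pmatnr
      WHERE /bi0/9amatnr = iv_matnr.     " iv_matnr: assumed importing parameter with the characteristic value
    IF sy-subrc <> 0.
      " characteristic value still not found in master data
    ENDIF.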
    I'm not sure whether this will work. Anyway check it and let me know the result.
    Regards,
    Siva.

  • Issue with index creation of an infocube.

    Hi,
    I have an issue with the creation of indexes for an InfoCube in SAP BI. When I create the indexes using
    a process chain for the cube, the step fails in the process chain. When I try to check the indexes for this
    cube manually, the following message is shown:
    *The recalculation can take some time (depending on data volume)*
    *The traffic light status is not up to date during this time*
    I even tried to repair the indexes using the standard program "SAP_INFOCUBE_INDEXES_REPAIR" in SE38,
    but the repair leads to a dump in this case.
    Dear experts, please advise on the above issue.
    Regards,
    Prasad.

    Hi,
    Please check the Performance tab in the cube's Manage screen and try doing a repair index from there.
    This generates a job, so check the job in SM37 and see if it finishes. If it fails, check the job log, which will give you the exact error.
    These indexes are F fact table indexes, so if nothing works, try activating the cube with the program 'RSDG_CUBE_ACTIVATE' and see if that resolves the issue.
    Let us know the results.

  • Issue with updating partitioned table

    Hi,
    Has anyone seen this bug with updating partitioned tables?
    It's very esoteric: it occurs when we update a partitioned table using a join to a temp table (not a non-temp table), the join has multiple join conditions, you're updating the partition column that isn't the first column in the primary key, and the table contains a bit field. We've tried changing just one of these features and the bug disappears.
    We've tested this on 15.5 and 15.7 SP122 and the error occurs in both of them.
    Here's the test case - it does the same operation on a partitioned table and a non-partitioned table, but the partitioned table shows an error of "Attempt to insert duplicate key row in object 'partitioned' with unique index 'pk'".
    I'd be interested if anyone has seen this and has a version of Sybase without the issue.
    Unfortunately, when it happens on a replicated table it takes down Rep Server.
    CREATE TABLE #table1
        (   PK          char(8) null,
            FileDate        date,
        changed         bit
    )
    CREATE TABLE partitioned  (
      PK         char(8) NOT NULL,
      ValidFrom     date DEFAULT current_date() NOT NULL,
      ValidTo       date DEFAULT '31-Dec-9999' NOT NULL
      )
    LOCK DATAROWS
      PARTITION BY RANGE (ValidTo)
      ( p2014 VALUES <= ('20141231') ON [default],
      p2015 VALUES <= ('20151231') ON [default],
      pMAX VALUES <= (MAX) ON [default]
      )
    CREATE UNIQUE CLUSTERED INDEX pk
      ON partitioned(PK, ValidFrom, ValidTo)
      LOCAL INDEX
    CREATE TABLE unpartitioned  (
      PK         char(8) NOT NULL,
      ValidFrom     date DEFAULT current_date() NOT NULL,
      ValidTo       date DEFAULT '31-Dec-9999' NOT NULL,
      )
    LOCK DATAROWS
    CREATE UNIQUE CLUSTERED INDEX pk
      ON unpartitioned(PK, ValidFrom, ValidTo)
    insert partitioned
    select "ET00jPzh", "Jan  7 2015", "Dec 31 9999"
    insert unpartitioned
    select "ET00jPzh", "Jan  7 2015", "Dec 31 9999"
    insert #table1
    select "ET00jPzh", "Jan 15 2015", 1
    union all
    select "ET00jPzh", "Jan 15 2015", 1
    go
    update partitioned
    set    ValidTo = dateadd(dd,-1,FileDate)
    from   #table1 t
    inner  join partitioned p on (p.PK = t.PK)
    where  p.ValidTo = '99991231'
    and    t.changed = 1
    go
    update unpartitioned
    set    ValidTo = dateadd(dd,-1,FileDate)
    from   #table1 t
    inner  join unpartitioned u on (u.PK = t.PK)
    where  u.ValidTo = '99991231'
    and    t.changed = 1
    go
    drop table #table1
    go
    drop table partitioned
    drop table unpartitioned
    go

    With respect to replication - it is a bit unclear, as not enough information has been given to point out what happened. I am also not sure that your DBAs are accurately telling you what happened - and they may have made the problem worse by not knowing themselves what to do - e.g. 'losing' the log points to the fact that someone doesn't know what they should. You can *always* disable the replication secondary truncation point and resync a standby system, so claims about 'losing' the log are a bit strange to be making.
    With respect to ASE versions, I suspect that if there are any differences, they may have to do with endianness and not the version of ASE itself. There may be other factors... but I would suggest the best thing would be to open a separate message/case on it.
    Adaptive Server Enterprise/15.7/EBF 23010 SMP SP130 /P/X64/Windows Server/ase157sp13x/3819/64-bit/OPT/Fri Aug 22 22:28:21 2014:
    -- testing with tinyint
    1> use demo_db
    1>
    2> CREATE TABLE #table1
    3>     (   PK          char(8) null,
    4>         FileDate        date,
    5> --        changed         bit
    6>  changed tinyint
    7>     )
    8>
    9> CREATE TABLE partitioned  (
    10>   PK         char(8) NOT NULL,
    11>   ValidFrom     date DEFAULT current_date() NOT NULL,
    12>   ValidTo       date DEFAULT '31-Dec-9999' NOT NULL
    13>   )
    14>
    15> LOCK DATAROWS
    16>   PARTITION BY RANGE (ValidTo)
    17>   ( p2014 VALUES <= ('20141231') ON [default],
    18>   p2015 VALUES <= ('20151231') ON [default],
    19>   pMAX VALUES <= (MAX) ON [default]
    20>         )
    21>
    22> CREATE UNIQUE CLUSTERED INDEX pk
    23>   ON partitioned(PK, ValidFrom, ValidTo)
    24>   LOCAL INDEX
    25>
    26> CREATE TABLE unpartitioned  (
    27>   PK         char(8) NOT NULL,
    28>   ValidFrom     date DEFAULT current_date() NOT NULL,
    29>   ValidTo       date DEFAULT '31-Dec-9999' NOT NULL,
    30>   )
    31> LOCK DATAROWS
    32>
    33> CREATE UNIQUE CLUSTERED INDEX pk
    34>   ON unpartitioned(PK, ValidFrom, ValidTo)
    35>
    36> insert partitioned
    37> select "ET00jPzh", "Jan  7 2015", "Dec 31 9999"
    38>
    39> insert unpartitioned
    40> select "ET00jPzh", "Jan  7 2015", "Dec 31 9999"
    41>
    42> insert #table1
    43> select "ET00jPzh", "Jan 15 2015", 1
    44> union all
    45> select "ET00jPzh", "Jan 15 2015", 1
    (1 row affected)
    (1 row affected)
    (2 rows affected)
    1>
    2> update partitioned
    3> set    ValidTo = dateadd(dd,-1,FileDate)
    4> from   #table1 t
    5> inner  join partitioned p on (p.PK = t.PK)
    6> where  p.ValidTo = '99991231'
    7> and    t.changed = 1
    Msg 2601, Level 14, State 6:
    Server 'PHILLY_ASE', Line 2:
    Attempt to insert duplicate key row in object 'partitioned' with unique index 'pk'
    Command has been aborted.
    (0 rows affected)
    1>
    2> update unpartitioned
    3> set    ValidTo = dateadd(dd,-1,FileDate)
    4> from   #table1 t
    5> inner  join unpartitioned u on (u.PK = t.PK)
    6> where  u.ValidTo = '99991231'
    7> and    t.changed = 1
    (1 row affected)
    1>
    2> drop table #table1
    1>
    2> drop table partitioned
    3> drop table unpartitioned
    -- duplicating with 'int'
    1> use demo_db
    1>
    2> CREATE TABLE #table1
    3>     (   PK          char(8) null,
    4>         FileDate        date,
    5> --        changed         bit
    6>  changed int
    7>     )
    8>
    9> CREATE TABLE partitioned  (
    10>   PK         char(8) NOT NULL,
    11>   ValidFrom     date DEFAULT current_date() NOT NULL,
    12>   ValidTo       date DEFAULT '31-Dec-9999' NOT NULL
    13>   )
    14>
    15> LOCK DATAROWS
    16>   PARTITION BY RANGE (ValidTo)
    17>   ( p2014 VALUES <= ('20141231') ON [default],
    18>   p2015 VALUES <= ('20151231') ON [default],
    19>   pMAX VALUES <= (MAX) ON [default]
    20>         )
    21>
    22> CREATE UNIQUE CLUSTERED INDEX pk
    23>   ON partitioned(PK, ValidFrom, ValidTo)
    24>   LOCAL INDEX
    25>
    26> CREATE TABLE unpartitioned  (
    27>   PK         char(8) NOT NULL,
    28>   ValidFrom     date DEFAULT current_date() NOT NULL,
    29>   ValidTo       date DEFAULT '31-Dec-9999' NOT NULL,
    30>   )
    31> LOCK DATAROWS
    32>
    33> CREATE UNIQUE CLUSTERED INDEX pk
    34>   ON unpartitioned(PK, ValidFrom, ValidTo)
    35>
    36> insert partitioned
    37> select "ET00jPzh", "Jan  7 2015", "Dec 31 9999"
    38>
    39> insert unpartitioned
    40> select "ET00jPzh", "Jan  7 2015", "Dec 31 9999"
    41>
    42> insert #table1
    43> select "ET00jPzh", "Jan 15 2015", 1
    44> union all
    45> select "ET00jPzh", "Jan 15 2015", 1
    (1 row affected)
    (1 row affected)
    (2 rows affected)
    1>
    2> update partitioned
    3> set    ValidTo = dateadd(dd,-1,FileDate)
    4> from   #table1 t
    5> inner  join partitioned p on (p.PK = t.PK)
    6> where  p.ValidTo = '99991231'
    7> and    t.changed = 1
    Msg 2601, Level 14, State 6:
    Server 'PHILLY_ASE', Line 2:
    Attempt to insert duplicate key row in object 'partitioned' with unique index 'pk'
    Command has been aborted.
    (0 rows affected)
    1>
    2> update unpartitioned
    3> set    ValidTo = dateadd(dd,-1,FileDate)
    4> from   #table1 t
    5> inner  join unpartitioned u on (u.PK = t.PK)
    6> where  u.ValidTo = '99991231'
    7> and    t.changed = 1
    (1 row affected)
    1>
    2> drop table #table1
    1>
    2> drop table partitioned
    3> drop table unpartitioned

  • Scalability issue with global temporary table.

    Hi All,
    Does CREATE GLOBAL TEMPORARY TABLE lock the data dictionary like CREATE TABLE does? If yes, wouldn't that be a scalability issue in a multi-user environment?
    Thanks and Regards,
    Rudra

    Billy  Verreynne  wrote:
    acadet wrote:
    am I correct in interpreting your response that we should be using GTT's in favour of bulk operations and collections and in memory operations?
    No. I said collections cannot scale. This means that, because collections reside in expensive PGA memory, you cannot stuff large data volumes into them. Thus they do not make an ideal storage bin for temporary data (e.g. data loaded from a file or a web service). GTTs otoh do not suffer from the same restrictions, can be indexed and offer vastly better scalability and so on.
    Multiple passes are often needed using such a data structure. Or filtering to find specific data. As a GTT is a SQL native, it offers a lot more flexibility and performance in this regard.
    And this makes sense - as where do we put out persistent data? Also in tables, but ones of a persistent and not temporary kind like a GTT.
    Collections are pretty useful - but limited in size and capability.
    Rudra states:
    I want to pull out few metrices from differnt tables and processing it
    If this can't be achieved in a SQL statement, unless Rudra is a master of understatement then I would see GTT's as a waste of IO and programming effort.
    I agree.
    My comments however were about choices for a temporary data storage bin in PL/SQL.
    I agree with your general comments regarding temporary storage bins in Oracle, but to say that collections don't scale is putting too narrow a definition on scaling. True, collections can be resource intensive in terms of memory and CPU requirements, but their persistence will generally be much shorter than other types of temporary storage. Given the right characteristics, collections will scale, and given the wrong characteristics GTTs won't scale.
    As you say it is all about choice. Getting back to the theme of this thread though, the original poster should be made aware that well designed and well coded applications are most likely to scale. Creating tables on the fly is generally considered bad practice and letting the database do what it does best, join tables in queries at the SQL level is considered good practice. The rest lies somewhere in between and knowing when to do which is why we get paid the big bucks (not). ;-)
    Regards
    Andre

  • Oracle 10g - issue with "DELETE from TABLE WHERE ID in (1,2,3)" (cfqueryparam used)

    Hello, everyone.
    I am having issues with running a DELETE statement on an Oracle 10g database.
    DELETE
    FROM tableA
    WHERE ID in (1,2,3)
    If there is only one ID for the IN clause, it works.  But if more than one ID is supplied, I get an "SQL command not properly ended" error message.  Here is the query as CF:
    DELETE
    FROM TRAINING
    WHERE userID = <cfqueryparam cfsqltype="CF_SQL_VARCHAR" value="#trim(form.userID)#">
         AND TRAINING_ID in <cfqueryparam value="#form.trainingIDs#" cfsqltype="CF_SQL_INTEGER" list="yes">
    Anyone work with Oracle that can help me with this?  I'm an experienced MS-SQL developer; Oracle is new to me.
    Thanks,
    ^_^

    Never mind... a co-worker just told me that I still have to use parentheses around the values for the IN clause.

  • Adobe Bridge issue with index.html files

    Hi, I have a perplexing problem.... Three weeks ago, I created a web photo gallery in Bridge. I transferred it to my website via FTP and it worked like a charm. Three days later I created another web gallery transferred it to my website using my ftp and the address of what I uploaded takes me to a blank page. I contacted my web hosting support and was told it looks like an issue with my index.html file. Here is a link to the gallery that is working: www.janieblanchard/com/galleries/prettylights/index.html
    Here is the link to the site that is not working:
    www.janieblanchard.com/galleries/macrogallery/index.html
    Any advice would be so helpful; I've spent too many hours trying different galleries and uploading multiple times.
    Thanks!

    What exact camera make and model?
    What specific, exact version of Adobe Camera Raw (ACR) plug-in?
    What specific, exact versions of Bridge and of Yosemite?
    BOILERPLATE TEXT:
    Note that this is boilerplate text.
    If you give complete and detailed information about your setup and the issue at hand,
    such as your platform (Mac or Win),
    exact versions of your OS, of Photoshop (not just "CS6", but something like CS6v.13.0.6) and of Bridge,
    your settings in Photoshop > Preference > Performance
    the type of file you were working on,
    machine specs, such as total installed RAM, scratch file HDs, total available HD space, video card specs, including total VRAM installed,
    what troubleshooting steps you have taken so far,
    what error message(s) you receive,
    if having issues opening raw files also the exact camera make and model that generated them,
    if you're having printing issues, indicate the exact make and model of your printer, paper size, image dimensions in pixels (so many pixels wide by so many pixels high). if going through a RIP, specify that too.
    etc.,
    someone may be able to help you (not necessarily this poster).
    a screen shot of your settings or of the image could be very helpful too.
    Please read this FAQ for advice on how to ask your questions correctly for quicker and better answers:
    http://forums.adobe.com/thread/419981?tstart=0
    Thanks!

  • Issue with Period Control Table after copying Essbase adapter

    Hi Experts,
    I'm working on version 11.1.1.3 and have copied the adapters (Essbase, Pull + EPRi) in the work bench so I can add an additional target for the FDM application. However, I have an issue with the import process; it returns an error with the Time & Periods (I guess it's something to do with the Periods category).
    I have reimported the Period Control Table and updated the new application's Target Period & Year (whilst changing the system code in the Application settings to the new adapter) and still receive the same error message.
    Any direction or thoughts would be welcome.
    Thanks
    Mark

    The time periods do not copy. You need to maintain them through the UI or upload them from excel. There is a KM article on this if you need additional detail.

  • Issue with Data Load Table

    Hi All,
    I am facing an issue with APEX 4.2.4, using the Data Load Table concept. In the lookup I used the
    Where Clause option, but the where clause does not seem to be working. Please help me with this.

    Hi all,
    It looks like this where clause does not filter the 'N' data. Please help me with how to solve this.

  • Export Issues with Compressed Partition Tables?

    We recently partitioned and compressed some large tables. It appears, though I'm not sure yet, that this is causing the export to run extremely slowly. The database is at 10.2.0.2 and we are using the exp utility, not Data Pump. Does anyone know of any known issues with using exp to export compressed, partitioned tables?

    Can you give more details of the table structure (with dbms_metadata if possible) and explain how you are taking the export, please?
    Did you try taking a SQL trace of the export process to see what is going on behind the scenes? This is an introduction if you need it:
    http://tonguc.wordpress.com/2006/12/30/introduction-to-oracle-trace-utulity-and-understanding-the-fundamental-performance-equation/

Issue with the shadow table

    I am in the process of understanding the shadow table and the error log table.
    I have a table created with a shadow table in place.
    1. Example: table emp (empno, ename), emp_err (...., empno, ename)
    It contains the values (1, 'A').
    Now I place a data rule on empno, unique not null ... and configure the operator
    as MOVE TO ERROR.
    When I try to insert a row with (1, 'A'), it not only moves the new row to be inserted but also the existing row in the table emp, i.e. two rows get populated in the error table emp_err.
    2. I have a scenario where I want to update a row in the fact table from an incoming row.
    If there is no match of the incoming row to one in the fact table, how do I put it into the
    error table?
    Any ideas or tricks appreciated.
    Thanks
    Narayana.

    Hi,
      Remove the internal table's memory by using the FREE statement after processing the internal table.
      Also, you can ask your Basis person to increase the page area.
    Reward if helpful.
    Regards,
    Umasankar.

  • Row locking issue with version enabled tables

    I've been testing the effect of locking in version-enabled tables in order to assess Workspace Manager restrictions when updating records in different workspaces. I have encountered a locking problem where I can't seem to update different records of the same table in different sessions if those same records have previously been updated and committed in another workspace.
    I'm running the tests on 11.2.0.3.  I have ROW_LEVEL_LOCKING set to ON.
    Here's a simple test case (I have many other test cases which fail as well but understanding why this one causes a locking problem will help me understand the results from my other test cases):
    --Change tablespace names as required
    create table t1 (id varchar2(36) not null, name varchar2(50) not null) tablespace XXX;
    alter table t1 add constraint t1_pk primary key (id) using index tablespace XXX;
    exec dbms_wm.gotoworkspace('LIVE');
    insert into t1 values ('1', 'name1');
    insert into t1 values ('2', 'name2');
    insert into t1 values ('3', 'name3');
    commit;
    exec dbms_wm.enableversioning('t1');
    exec dbms_wm.gotoworkspace('LIVE');
    exec dbms_wm.createworkspace('TESTWSM1');
    exec dbms_wm.gotoworkspace('TESTWSM1');
    --update 2 records in a non-LIVE workspace in preparation for updating in different workspaces later
    update t1 set name = name||'changed' where id in ('1', '2');
    commit;
    quit;
    --Now in a separate session (called session 1 for this example) run the following without committing the changes:
    exec dbms_wm.gotoworkspace('LIVE');
    update t1 set name = 'changed' where id = '1';
    --Now in another session (session 2) update a different record from the same table.  The below update will hang waiting on the transaction in session 1 to complete (via commit/rollback):
    exec dbms_wm.gotoworkspace('LIVE');
    update t1 set name = 'changed' where id = '2';
    I'm surprised that records with different ids can't be updated in different sessions, i.e. why does session 1 lock the update of record 2, which is not being updated anywhere else? I've tried this using different non-LIVE workspaces with similar results. I've tried changing table properties, e.g. initrans, and still get a lock. The changes to table properties are successfully propagated to the _LT tables but not to all the related Workspace Manager tables created for table T1 above. I'm not sure if this is the issue.
    Note an example of the background workspace manager query that may create the lock is something like:
    UPDATE TESTWSM.T1_LT SET LTLOCK = WMSYS.LT_CTX_PKG.CHECKNGETLOCK(:B6 , LTLOCK, NEXTVER, :B3 , 0,'UPDATE', VERSION, DELSTATUS, :B5 ), NEXTVER = WMSYS.LT_CTX_PKG.GETNEXTVER(NEXTVER,:B4 ,VERSION,:B3 ,:B2 ,683) WHERE ROWID = :B1
    Any help with this will be appreciated.  Thanks in advance.

    Hi Ben,
    Thanks for your quick response.
    I've tested your suggestion and it does work with 2 workspaces, but the same problem is encountered when additional workspaces are created.
    It seems that if multiple workspaces are used in a multi-user environment, locks will be inevitable, which will degrade performance, especially if a long transaction is used.
    Deadlocks can also be encountered where eventually one of the sessions is rolled back by the database. 
    Is there a way of avoiding this e.g. by controlling the creation of workspaces and table updates?
    I've updated my test case below to demonstrate the extra workspace locking issue.
    --change tablespace name as required
    create table t1 (id varchar2(36) not null, name varchar2(50) not null) tablespace XXX;
    alter table t1 add constraint t1_pk primary key (id) using index tablespace XXX;
    exec dbms_wm.gotoworkspace('LIVE');
    insert into t1 values ('1', 'name1');
    insert into t1 values ('2', 'name2');
    insert into t1 values ('3', 'name3');
    commit;
    exec dbms_wm.enableversioning('t1');
    exec dbms_wm.gotoworkspace('LIVE');
    exec dbms_wm.createworkspace('TESTWSM1');
    exec dbms_wm.gotoworkspace('TESTWSM1');
    update t1 set name = name||'changed' where id in ('1', '2');
    commit;
    Session 1:
    exec dbms_wm.gotoworkspace('LIVE');
    update t1 set name = 'changed' where id = '1';
    session 2:
    exec dbms_wm.gotoworkspace('LIVE');
    update t1 set name = 'changed' where id = '2';
    --end of original test case, start of additional workspace locking issue:
    Session 1:
    rollback;
    Session 2:
    rollback;
    --update record in both workspaces
    exec dbms_wm.gotoworkspace('LIVE');
    update t1 set name = 'changed' where id = '3';
    commit;
    exec dbms_wm.gotoworkspace('TESTWSM1');
    update t1 set name = 'changed' where id = '3';
    commit;
    Session 1:
    exec dbms_wm.gotoworkspace('LIVE');
    update t1 set name = 'changed' where id = '1';
    session 2:
    exec dbms_wm.gotoworkspace('LIVE');
    update t1 set name = 'changed' where id = '2';
    Session 1:
    rollback;
    Session 2:
    rollback;
    exec dbms_wm.gotoworkspace('LIVE');
    exec dbms_wm.createworkspace('TESTWSM2');
    exec dbms_wm.gotoworkspace('TESTWSM2');
    update t1 set name = name||'changed2' where id in ('1', '2');
    commit;
    Session 1:
    exec dbms_wm.gotoworkspace('LIVE');
    update t1 set name = 'changed' where id = '1';
    --this now gets locked out by session 1
    session 2:
    exec dbms_wm.gotoworkspace('LIVE');
    update t1 set name = 'changed' where id = '2';
    Session 1:
    rollback;
    Session 2:
    rollback;
    --update record 3 in TESTWSM2
    exec dbms_wm.gotoworkspace('TESTWSM2');
    update t1 set name = 'changed' where id = '3';
    commit;
    Session 1:
    exec dbms_wm.gotoworkspace('LIVE');
    update t1 set name = 'changed' where id = '1';
    --this is still locked out by session 1
    session 2:
    exec dbms_wm.gotoworkspace('LIVE');
    update t1 set name = 'changed' where id = '2';
    Session 1:
    rollback;
    Session 2:
    rollback;
    --try updating LIVE
    exec dbms_wm.gotoworkspace('LIVE');
    update t1 set name = 'changed' where id = '3';
    commit;
    Session 1:
    exec dbms_wm.gotoworkspace('LIVE');
    update t1 set name = 'changed' where id = '1';
    --this is still locked out by session 1
    session 2:
    exec dbms_wm.gotoworkspace('LIVE');
    update t1 set name = 'changed' where id = '2';
    Session 1:
    rollback;
    Session 2:
    rollback;
    --try updating TESTWSM1 workspace too - so all have been updated since TESTWSM2 was created
    exec dbms_wm.gotoworkspace('TESTWSM1');
    update t1 set name = 'changed' where id = '3';
    commit;
    Session 1:
    exec dbms_wm.gotoworkspace('LIVE');
    update t1 set name = 'changed' where id = '1';
    --this is still locked out by session 1
    session 2:
    exec dbms_wm.gotoworkspace('LIVE');
    update t1 set name = 'changed' where id = '2';
    Session 1:
    rollback;
    Session 2:
    rollback;
    --try updating every workspace afresh
    exec dbms_wm.gotoworkspace('LIVE');
    update t1 set name = 'changedA' where id = '3';
    commit;
    exec dbms_wm.gotoworkspace('TESTWSM1');
    update t1 set name = 'changedB' where id = '3';
    commit;
    exec dbms_wm.gotoworkspace('TESTWSM2');
    update t1 set name = 'changedC' where id = '3';
    commit;
    Session 1:
    exec dbms_wm.gotoworkspace('LIVE');
    update t1 set name = 'changed' where id = '1';
    --this is still locked out by session 1
    session 2:
    exec dbms_wm.gotoworkspace('LIVE');
    update t1 set name = 'changed' where id = '2';
    Session 1:
    rollback;
    Session 2:
    rollback;

Regarding the issue with master-detail table rows

    Hi,
    I am using Jdeveloper 11.1.2.1.0
    I have read-only tables based on a master-detail relationship on a page.
    When I execute the master view using the bind variables, I also use a stored procedure to generate additional values for the detail view rows using the master row values.
    The selection in the master table is at index zero, for which the detail table row does not appear. Only once the user selects the next row (for which the detail row appears) and then clicks the first master row again does the first row's respective detail row appear.
    I have tried using triggers and also tried to queue the selection event using row key values after master view execution and procedure call.
    Give an idea to solve this issue.
    - Ramey

    Hi,
            Then what can the solution be if it is the refresh timing?
    I tried to execute the detail view with the same bind variable as in the view link, along with the view link execution,
    assuming that master's first row is not set as current row to get the detail row based on the default first row selection in master view can trigger detail view .
    Kindly suggest a possible solution.
    -Ramey
