Inconsistencies in pool tables when importing patches

Hello Experts.
I installed the enhancement packages for EA-PS and SAP_APPL (version 603) on my 64-bit Unicode SAP ECC 6.0 system. This process completed successfully.
Then, when I import another patch for SAP_BASIS or another component with SPAM, the system returns the following error in the CHECK_REQUIREMENTS phase:
" Some open conversion requests still exist in the ABAP Data Dictionary
for the following ABAP Dictionary objects. To avoid inconsistencies and
loss of data, you must process these conversions first.
Proceed as follows:
- Open a new session.
- Start the Database Utility (transaction SE14).
- Correct the inconsistencies for the specified objects.
- Repeat the import phase. If no more inconsistencies are found, the
  import continues.
Phase CHECK_REQUIREMENTS: Open ABAP Dictionary Conversions
Object Type       Object Name
Pools/Clusters    GLSP
                         GLTP"
In SE14, for both objects, there are no inconsistencies in the database objects, but there are in the runtime objects: the length of the VARDATA field is half of the value of this field in the runtime object.
To correct these inconsistencies, I executed the reports RADPOCNV (ABAP Dictionary: Table Pool Conversion) and RATPONTC (Adaption of VARDATA field for table pools).
Now these tables are completely consistent, but when I import the SAP_BASIS patch (or any other patch) again, the inconsistency error still appears.
How can I resolve these inconsistencies?
Thanks in advance.

Similar Messages

  • Is it possible to use a pooled table when creating a view?

    Hi,
    I am trying to create a view based on table A005, but this table is a pooled table and the system won't allow me to create a view on it.
    Is there any way to do this?
    Thanks,

    Hi,
    Join statements cannot be executed on cluster tables and pooled tables, so the system cannot create a (join-based) database view on A005. You can still read the pooled table with Open SQL and combine the data in your program, as in the sketch below.
    regards,
    ajit.
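
    A minimal sketch of the usual workaround (the report name and the A005-KNUMH -> KONP-KNUMH link are assumptions based on the standard condition tables): read the pooled table with plain Open SQL, then emulate the join in ABAP with FOR ALL ENTRIES:

    REPORT zpool_no_view.
    DATA: lt_a005 TYPE TABLE OF a005,
          lt_konp TYPE TABLE OF konp.
    " step 1: plain Open SQL on the pooled table is allowed
    SELECT * FROM a005 INTO TABLE lt_a005 UP TO 100 ROWS.
    " step 2: no join on a pooled table, so read the second table separately
    IF lt_a005 IS NOT INITIAL.
      SELECT * FROM konp INTO TABLE lt_konp
        FOR ALL ENTRIES IN lt_a005
        WHERE knumh = lt_a005-knumh.
    ENDIF.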

  • Selecting tables when importing with the Data Pump API

    Hi,
    Sorry for the trivial question. I exported the data using the Data Pump API in "TABLE" mode,
    so all tables were exported into one .dmp file.
    My question is: how do I then import only a few tables using the Data Pump API? How do I define the "TABLES" property as in the command-line interface?
    Should I use the DATA_FILTER procedures? If yes, how?
    Really thanks in advance
    Regards,
    Kahlil

    Hi,
    You should use the METADATA_FILTER procedure for this, e.g.:
    dbms_datapump.metadata_filter
                (handle1
                 ,'NAME_EXPR'
                 ,'IN (''TABLE1'', ''TABLE2'')'
                );
    Regards
    Anurag

  • When we are using event polling tables, do we have to insert data explicitly?

    Hi all,
    I have read about event polling tables.
    If we implement event polling tables, do we have to insert the table information (like PRODUCT_DIM) into the event polling table explicitly?
    Suppose I have updated more than 5 tables in my database; do I then have to insert 5 rows of table information into the event polling table (S_NQ_EPT) in the back end?
    If so, then what is the use of the event polling table?
    This is the event polling table I created in the back end:
    CREATE TABLE S_NQ_EPT (
      UPDATE_TYPE    DECIMAL(10,0) DEFAULT 1 NOT NULL,
      UPDATE_TS      DATE DEFAULT SYSDATE NOT NULL,
      DATABASE_NAME  VARCHAR2(120) NULL,
      CATALOG_NAME   VARCHAR2(120) NULL,
      SCHEMA_NAME    VARCHAR2(120) NULL,
      TABLE_NAME     VARCHAR2(120) NOT NULL,
      OTHER_RESERVED VARCHAR2(120) DEFAULT NULL NULL
    );
    Thanks

    Hi,
    If you are using event polling, you should make use of triggers in the database. Create an AFTER INSERT trigger on some major table that you expect to be updated by every ETL load. As soon as data is inserted into that table, the trigger automatically inserts a record into the event polling table and the cache is purged. I am not sure about the exact trigger syntax; please look it up.
    If you are trying to purge the cache through an ODBC procedure, you need to create a shell script and execute it once the ETL load has completed.
    regards,
    Sandeep

  • How to deal with delta queue when importing Support Package/Kernel Patch

    Hi,
    From my experience, when importing a Support Package, the system will issue an error message and get stuck if the Support Package is about to alter structures used in delta loads.
    But I would like to double-check with you whether it is possible that a Support Package alters a structure without an error message. If so, the delta data would be lost.
    Do we need to clear down the delta queue every time we import a Support Package?
    Anyway, is there anyone have any suggestions or steps regarding this question?
    Many Thanks
    Jonathan

    Hi,
    Delta queues during support package upgrade
    It's always better to drain the delta queues before an upgrade.
    As a standard practice we drain the delta queues by running the InfoPackage/chain multiple times.
    As a prerequisite we cancel/reschedule the V3 jobs to a future date during this activity.
    The V3 extraction delta queues must be emptied prior to the upgrade to avoid any possible data loss.
    V3 collector jobs should be suspended for the duration of the upgrade.
    They can be rescheduled after re-activation of the source systems upon completion of the upgrade.
    See SAP Notes 506694 and 658992 for more details.
    Load and Empty all Data mart Delta Queues in SAP BW. (e.g. for all export DataSources)
    The SAP BW Service SAPI, which is used for internal and ‘BW to BW’ data mart extraction, is
    upgraded during the SAP BW upgrade. Therefore, the delta queues must be emptied prior to the
    upgrade to avoid any possibility of data loss.
    upgrade preparation and postupgrade checklist
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/472443f2-0c01-0010-20ab-fbd380d45881
    /message/3221895#3221895
    OSS Notes 328181 and 762951 are prerequisites.
    Failure to follow the instructions in those notes may result in data loss.
    https://websmp207.sap-ag.de/~form/sapnet?_FRAME=CONTAINER&_OBJECT=011000358700002662832005E
    /thread/804820
    Effect on BW of R/3 Upgrade   
    How To Tackle Upgrades to SAP ERP 6.0
    /people/community.user/blog/2008/03/20/how-to-tackle-upgrades-to-sap-erp-60
    Start with the Why — Not the How — When You Upgrade to SAP ERP 6.0
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/008dddd1-8775-2a10-ce97-f90b2ded0280
    Rapid SAP NetWeaver 7.0 BI Technical Upgrade
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/e0c9c8be-346f-2a10-2081-cd99177c1fb9
     https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/c2b3a272-0b01-0010-b484-8fc7c068975e
    Hope this helps.
    Thanks,
    JituK

  • No tables found when importing from SAP

    Hi,
    I'm trying to use OWB (10.2.0.1) SAP Connector to import data from SAP.
    I created a location and tested it successfully, but when I try to import some metadata, nothing shows up. The RFC user has all rights, but no error is displayed and no tables appear.
    Does anyone have a clue?
    Thanks,
    RF

    Hi Ricardo,
    on my laptop I installed
    •     Oracle DB 10gR2 and OWB 10gR2 (+OWB patch 10.2.0.3)
    •     Oracle DB 11g (including OWB11g)
    In both cases, I copied the three required SAP library files to the paths mentioned in the documentation (I also copied librfc32.dll to <windows>\system32),
    included those paths in the PATH environment variable,
    and tried to connect to an SAP instance we have "in house" for testing purposes (not a customer's system).
    In both cases, SAP Location definition failed:
    •     10gR2: I get the following error message
    Class com/sap/mw/jco/JCO$Client is missing.
    This is strange: before upgrading to 10.2.0.3 I was able to connect to SAP;
    the problem was that I did not get any list of tables during Import Metadata from SAP.
    •     11g: I get the following error message
    Some Location Details are missing.
    Please verify the location information is completely specified.
         even though I filled all required location information (user, password, application server, system number, client number, language)
    In both cases, I checked that the JAR file is OK: I also invoked the command "java -jar <path+filename>" and it correctly showed the "about" window.
    Any clue ?
    Thanks,
    Silvio

  • Pool table & Cluster table

    Dear all,
    could you please help me out from the below.
    How do I create pooled tables and cluster tables?
    When I try to create a table, the table category defaults to Transparent table.
    Regards
    Venkat

    hi,
    A pool table has a many-to-one relation with the table in the database: for one table in the database there are many tables in the ABAP Dictionary. The table in the database has a different name, a different number of fields, and different field names than the tables in the Dictionary. A pooled table is stored in a table pool at the database level. A table pool is a database table with a special structure that enables the data of many R/3 tables to be stored in it. It can hold only pooled tables.
    Cluster tables are logical tables that must be assigned to a table cluster when they are defined.
    Cluster tables can be used to store control data; they can also be used to store temporary data or text, such as documentation.
    Pool table
    A database table defined in the ABAP Dictionary whose database instance is assigned to more than one table defined in the ABAP Dictionary. Multiple pool tables are assigned to a table pool in the database. The key fields of a pool table have to be character-type fields. The table pool's primary key consists of two fields: TABNAME for the name of a pool table, and VARKEY for the interdependent contents of the key fields in the corresponding pool table. The non-key fields of the pool table are stored in compressed format in their own column, called VARDATA, of the table pool. The only way to access pool tables is by using Open SQL. Joins are not allowed.
    Table Pool
    Database table in the database that contains the data of several pool tables.
    Cluster Table
    A database table defined in the ABAP Dictionary whose database instance is not assigned to just one table defined in the ABAP Dictionary. Several cluster tables are assigned to a table cluster in the database. The intersection of the key fields of the cluster tables forms the primary key of the table cluster. The other columns of the cluster tables are stored in compressed form in a single column, VARDATA, of the table cluster. You can access cluster tables only via Open SQL, and only without joins.
    Table Cluster
    Database table in the database that contains the data of several cluster tables.
    Note: Do not confuse these with database tables that have the structure needed for storing data clusters in database tables and in the shared memory. Those are called INDX-type tables, with reference to the database table INDX supplied by SAP. Data clusters are groupings of data objects for transient and persistent storage in a selectable storage medium. A data cluster can be processed using the statements IMPORT, EXPORT, and DELETE FROM, as in the sketch below.
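    A minimal sketch of those data-cluster statements against the standard INDX table (the area 'ZK' and key 'ZK01' are made-up values for illustration):
    DATA lt_t001 TYPE TABLE OF t001.
    SELECT * FROM t001 INTO TABLE lt_t001 UP TO 10 ROWS.
    " store the internal table as a data cluster in INDX
    EXPORT tab = lt_t001 TO DATABASE indx(zk) ID 'ZK01'.
    " read it back into the (cleared) internal table
    CLEAR lt_t001.
    IMPORT tab = lt_t001 FROM DATABASE indx(zk) ID 'ZK01'.
    " and delete the cluster again
    DELETE FROM DATABASE indx(zk) ID 'ZK01'.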
    Some pooled tables:
    T000 Clients
    T000C Table for Installing FI-SL Customizing
    T000CM Client-specific FI-AR-CR settings
    T000F Cross-Client FI Settings
    T000G Cross-Client FI-SL Postings
    T000GL Flexible general ledger: Customizing check
    T000K Group
    T000MD MRP at MRP Area Level
    T001 Company Codes
    T001_ARCH Archive contents short description
    T001_CONV Company codes affected by currency convers
    T001A Additional Local Currencies Control for Co
    T001B Permitted Posting Periods
    T001C Valid Posting Periods for Global Companies
    T001CM Permitted Credit Control Areas per Company
    T001D Validation of Accounting Documents
    T001E Company Code-Dependent Address Data
    T001F Company code-dependent form selection
    T001G Company Code-Dependent Standard Texts
    T001I Company Code - Parameter Types
    T001J Company Code - Parameter Type Names
    T001K Valuation area
    T001L Storage Locations
    T001M Data on Z5A Foreign Trade Regulations Repo
    T001N Company Code - EC Tax Numbers / Notificati
    A physical table definition is created in the database for the table definition stored
    in the ABAP Dictionary for transparent tables when the table is activated.
    The table definition is translated from the ABAP Dictionary to a definition of the particular database.
    On the other hand, pooled tables and cluster tables are not created in the database. The data of these tables is stored in the corresponding table pool or table cluster.
    It is not necessary to create indexes and technical settings for pooled and cluster tables.
    regards,
    pritha

  • Performance issue with FDM when importing data

    In the FDM web console, a performance issue has been detected when importing data (.txt).
    In less than 10 seconds the .txt and .log files are created in the INBOX folder (the .txt file) and in OUTBOX\Logs (the .log file).
    At that moment, the system shows the message "Processing, please wait" for 10 minutes. Eventually the information is displayed; however, if we want to see the second page, we have to wait more than 20 seconds.
    It seems to be a performance issue when the system tries to show the imported data in the web page.
    It has also been noted that when a user tries to import a .txt file directly by clicking on the tab "Select File From Inbox", the user also has to wait another 10 minutes before the information is displayed on the web page.
    Thx in advance!
    Cheers
    Matteo

    Hi Matteo
    How much data is being imported / displayed when users are interacting with the system?
    There is a report that may help you to analyse this but unfortunately I cannot remember what it is called and don't have access to a system to check. I do remember that it breaks down the import process into stages showing how long it takes to process each mapping step and the overall time.
    I suspect that what you are seeing is normal behaviour but that isn't to say that performance improvements are not possible.
    The copying of files is the first part of the import process before FDM then starts the import so that will be quick. The processing is then the time taken to import the records, process the mapping and write to the tables. If users are clicking 'Select file from Inbox' then they are re-importing so it will take just as long as it would for you to import it, they are not just asking to retrieve previously imported data.
    Hope this helps
    Stuart

  • Use of "Pool Table(s)" in Module Pool Program

    Hi,
    I often see/hear that Pool tables play an important role in Module Pool Programs.
    anybody please explain me how Pool tables are used in Module Pools?? => Did you look for any documentation?
    if possible with code snippets. =>NO.
    Thanks,
    Kranthi.
    Edited by: kishan P on Nov 14, 2010 7:23 PM

  • View, Cluster & Pooled Table

    Hi,
    I have few questions on the above topics.
    How can we retrieve data from views, cluster tables, and pooled tables using SELECT statements?
    If possible, can you explain with an example?
    What kind of data can we retrieve from them? For example, how can we use BSEC?
    Thank you in advance.
    Ry

    hi,
    You can access them using SELECT statements, but you CANNOT use joins.
    For example, BSEG is a cluster table:
    REPORT znave_0003.
    " BSEG is a cluster table: a plain SELECT works, a join does not
    DATA: ibseg TYPE TABLE OF bseg WITH HEADER LINE.
    PARAMETERS: p_bukrs TYPE bseg-bukrs.
    SELECT * INTO TABLE ibseg FROM bseg
           UP TO 100 ROWS
           WHERE bukrs = p_bukrs.
    LOOP AT ibseg.
      WRITE: / ibseg-bukrs, ibseg-belnr.
    ENDLOOP.
    To retrieve the data from a view, declare the internal table with the type of the view:
    DATA: itab TYPE TABLE OF zview001.
    and write the SELECT query as:
    SELECT * FROM zview001 INTO CORRESPONDING FIELDS OF TABLE itab.
    You can also uncheck the "Unicode checks active" checkbox in the program attributes.
    (When the Unicode checkbox is checked, you should use an internal table without a header line and declare a separate work area instead.)
    Hope it helps...
    ~~Guduri

  • How to extract data from a pool table?

    Hello,
    I want to create a generic extractor for table T030, but this is a pool/cluster table and it is not possible to create a view on such a table.
    Does somebody know how to solve this problem? Do function modules exist to read the data from a pool table?
    Thanks
    Theodor

    A function module would be your best bet, considering that is the best approach as far as performance is concerned; see the sketch below.
    But there is a workaround too:
    if the other table you use in the join is not a cluster table, you can create a view on that table, add an append structure to it with the fields from the cluster table, and use an exit to populate them.
    Try it, it will work. I used this technique with KONV, and it does not decrease performance considerably,
    at least in my case, as there were only a few fields I needed from the pool table.
    Anand Raj
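
    A minimal sketch of the function-module route (the chart of accounts 'INT' and the restriction to KTOPL are made-up illustrations): pooled tables such as T030 can be read directly with Open SQL, just not joined or put into a database view, so the extractor can simply SELECT from the table:

    DATA lt_t030 TYPE STANDARD TABLE OF t030.
    " plain Open SQL on the pooled table, no view required
    SELECT * FROM t030 INTO TABLE lt_t030
      WHERE ktopl = 'INT'.

    Wrapped in a function module, this can serve as the extractor of a generic DataSource.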

  • How can I declare select-options in a module pool table control?

    Hi everybody!!
    Can anyone tell me how I can declare SELECT-OPTIONS on a module pool table control screen? I have declared them on a screen with a table control, but a dump is triggered due to an error when generating the selection screen.
    Regards...

    My suggestion would be to try the function module FREE_SELECTIONS_DIALOG, as sketched below.
    Please search this forum; you can find a lot of threads related to this.
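
    A minimal sketch of the usual pattern (parameter names quoted from memory; please verify the exact interfaces of FREE_SELECTIONS_INIT and FREE_SELECTIONS_DIALOG in SE37, and the table name SPFLI is just an example):

    DATA: lv_selid  TYPE rsdynsel-selid,
          lt_tables TYPE TABLE OF rsdstabs,
          ls_table  TYPE rsdstabs,
          lt_where  TYPE rsds_twhere.
    ls_table-prim_tab = 'SPFLI'.
    APPEND ls_table TO lt_tables.
    " generate a free selection for the table
    CALL FUNCTION 'FREE_SELECTIONS_INIT'
      EXPORTING
        kind         = 'T'
      IMPORTING
        selection_id = lv_selid
      TABLES
        tables_tab   = lt_tables
      EXCEPTIONS
        OTHERS       = 1.
    " pop up the selection dialog and get back WHERE clauses
    CALL FUNCTION 'FREE_SELECTIONS_DIALOG'
      EXPORTING
        selection_id  = lv_selid
        title         = 'Selections'
      IMPORTING
        where_clauses = lt_where
      EXCEPTIONS
        OTHERS        = 1.

    The returned WHERE clauses can then be used in a dynamic SELECT that fills the table control.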

  • I am getting an error ORA-39143 when importing a dump file

    I get "ORA-39143: dump file "c:\oraload\expdat.dmp" may be an original export dump file"
    when importing a dump file created by 9.2 exp into a 10.2 database using impdp.
    I need the impdp utility's TABLE_EXISTS_ACTION option, which doesn't appear to exist in imp.
    What is the proper mix of versions/flags/options to export from a 9.2 database and import into an existing 10.2 database while replacing existing data?
    thanks

    Well, it's in the link I provided:
    "When tables are manually created before data is imported, the CREATE TABLE statement in the export dump file will fail because the table already exists. To avoid this failure and continue loading data into the table, set the import parameter IGNORE=y. Otherwise, no data will be loaded into the table because of the table creation error."
    The original import tries to append (insert); there is no parameter there like TABLE_EXISTS_ACTION=TRUNCATE/REPLACE. So either:
    1. drop all the tables (dynamic SQL), so there will be no errors because of already existing tables,
    2. truncate all the tables (dynamic SQL) and use IGNORE=y, or
    3. drop the whole schema in the target database, recreate it, then do the import.

  • Internal table with Import and Export

    Hi all,
    Please let me know the use of internal tables with IMPORT and EXPORT parameters and SET/GET parameters. In what types of cases can we use these? Please give me the syntax with some examples.
    Please give me a detailed analysis of the above.
    Regards,
    Prabhu

    Hi Prabhakar,
    There are three types of memory:
    1. ABAP memory
    2. SAP memory
    3. external memory
    1. We use EXPORT/IMPORT TO/FROM MEMORY ID to transfer data within ABAP memory.
    2. We use SET PARAMETER ID / GET PARAMETER ID to transfer data within SAP memory.
    3. We use EXPORT/IMPORT TO/FROM SHARED BUFFER to transfer data within external memory.
    ABAP memory: two reports in the same session share ABAP memory.
    SAP memory: two different sessions share SAP memory.
    For example, if we call two different transactions, SE38 and SE11, they are both in SAP memory.
    External memory: two different logons share external memory.
    <b>Syntax</b>
    To fill the input fields of a called transaction with data from the calling program, you can use the SPA/GPA technique. SPA/GPA parameters are values that the system stores in the global, user-specific SAP memory. SAP memory allows you to pass values between programs. A user can access the values stored in the SAP memory during one terminal session for all parallel sessions. Each SPA/GPA parameter is identified by a 20-character code. You can maintain them in the Repository Browser in the ABAP Workbench. The values in SPA/GPA parameters are user-specific.
    ABAP programs can access the parameters using the SET PARAMETER and GET PARAMETER statements.
    To fill one, use:
    SET PARAMETER ID <pid> FIELD <f>.
    This statement saves the contents of field <f> under the ID <pid> in the SAP memory. The code <pid> can be up to 20 characters long. If there was already a value stored under <pid>, this statement overwrites it. If the ID <pid> does not exist, double-click <pid> in the ABAP Editor to create a new parameter object.
    To read an SPA/GPA parameter, use:
    GET PARAMETER ID <pid> FIELD <f>.
    This statement fills the value stored under the ID <pid> into the variable <f>. If the system does not find a value for <pid> in the SAP memory, it sets SY-SUBRC to 4, otherwise to 0.
    To fill the initial screen of a program using SPA/GPA parameters, you normally only need the SET PARAMETER statement.
    The relevant fields must each be linked to an SPA/GPA parameter.
    On a selection screen, you link fields to parameters using the MEMORY ID addition in the PARAMETERS or SELECT-OPTIONS statement. If you specify an SPA/GPA parameter ID when you declare a parameter or selection option, the corresponding input field is linked to that input field.
    On a screen, you link fields to parameters in the Screen Painter. When you define the field attributes of an input field, you can enter the name of an SPA/GPA parameter in the Parameter ID field in the screen attributes. The SET parameter and GET parameter checkboxes allow you to specify whether the field should be filled from the corresponding SPA/GPA parameter in the PBO event, and whether the SPA/GPA parameter should be filled with the value from the screen in the PAI event.
    When an input field is linked to an SPA/GPA parameter, it is initialized with the current value of the parameter each time the screen is displayed. This is the reason why fields on screens in the R/3 System often already contain values when you call them more than once.
    When you call programs, you can use SPA/GPA parameters with no additional programming overhead if, for example, you need to fill obligatory fields on the initial screen of the called program. The system simply transfers the values from the parameters into the input fields of the called program.
    However, you can control the contents of the parameters from your program by using the SET PARAMETER statement before the actual program call. This technique is particularly useful if you want to skip the initial screen of the called program and that screen contains obligatory fields.
    Reading Data Objects from Memory
    To read data objects from ABAP memory into an ABAP program, use the following statement:
    Syntax
    IMPORT <f1> [TO <g 1>] <f 2> [TO <g 2>] ... FROM MEMORY ID <key>.
    This statement reads the data objects specified in the list from a cluster in memory. If you do not use the TO <g i > option, the data object <f i > in memory is assigned to the data object in the program with the same name. If you do use the option, the data object <f i > is read from memory into the field <g i >. The name <key> identifies the cluster in memory. It may be up to 32 characters long.
    You do not have to read all of the objects stored under a particular name <key>. You can restrict the number of objects by specifying their names. If the memory does not contain any objects under the name <key>, SY-SUBRC is set to 4. If, on the other hand, there is a data cluster in memory with the name <key>, SY-SUBRC is always 0, regardless of whether it contained the data object <f i >. If the cluster does not contain the data object <f i >, the target field remains unchanged.
    Saving Data Objects in Memory
    To read data objects from an ABAP program into ABAP memory, use the following statement:
    Syntax
    EXPORT <f1> [FROM <g 1>] <f 2> [FROM <g 2>] ... TO MEMORY ID <key>.
    This statement stores the data objects specified in the list as a cluster in memory. If you do not use the option FROM <g i >, the data object <f i > is saved under its own name. If you use the FROM <g i > option, the data object <g i > is saved under the name <f i >. The name <key> identifies the cluster in memory. It may be up to 32 characters long. A combined example follows below.
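    A minimal combined sketch of both techniques (the memory ID 'ZFLIGHTS' is made up; 'CAR' is the standard SPA/GPA parameter for the airline carrier in the flight model):

    " program 1: put values into SAP memory and ABAP memory
    DATA: lt_flights TYPE TABLE OF sflight,
          lv_carrid  TYPE sflight-carrid VALUE 'LH'.
    SET PARAMETER ID 'CAR' FIELD lv_carrid.                     " SAP memory
    EXPORT lt_flights FROM lt_flights TO MEMORY ID 'ZFLIGHTS'.  " ABAP memory

    " program 2 (same session): read both back
    DATA: lt_copy    TYPE TABLE OF sflight,
          lv_carrid2 TYPE sflight-carrid.
    GET PARAMETER ID 'CAR' FIELD lv_carrid2.
    IMPORT lt_flights TO lt_copy FROM MEMORY ID 'ZFLIGHTS'.
    IF sy-subrc <> 0.
      " nothing was stored under this ID
    ENDIF.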
    Check this link.
    http://www.sap-img.com/abap/difference-between-sap-and-abap-memory.htm
    Thanks,
    Susmitha.
    Reward points for helpful answers.

  • Event polling table doesn't work

    I have created an event polling table and registered it. When I insert a correct name into the TABLE_NAME field, nothing happens: in BI Administrator - Manage - Cache, the table still has an entry in the cache (and the record is deleted from the EPT table). When I insert an incorrect name into the TABLE_NAME field, I can see a message in the error log (The physical table misstat1_db::MIP:MV_E_HIMA in a cache polled row does not exist.) and I suppose that polling is working. But the cache is not purged.
    Can someone tell me why?
    Thx.

    Hi,
    Stale cache entries are purged automatically at the specified polling intervals. Since, as you say, the record has been deleted, polling has occurred.
    Are you sure that the cache entry you are looking at is for the same table that has been modified?
    If possible (not as a solution, just for debugging), try to create a sample request in Answers and check whether it changes when the table gets modified.
    -Vency
