imp is not importing zero-row tables

Hi,
I have successfully exported the schema, but while importing it into another DB, the zero-row tables are not imported.
Why is this happening? Is there any specific reason for it?
Please help, it's urgent.
Thanks in advance.

Hi,
The export is fine; the log file shows all the zero-row tables being exported, but the import log is missing the 5 zero-row tables.
Anyway, these are the commands:
exp system/password file=/home/oracle/export_backups_manual/schema_backup_09_05_2011.dmp owner=schema_name log=/home/oracle/export_backups_manual/schema_backup_09_05_2011.log
imp system/password file=/home/oracle/schema_backup_09_05_2011.dmp fromuser=schema_from touser=schema_to log=/home/oracle/schema_backup_09_05_2011.imp.log
Thanks
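For what it's worth, a common cause of this symptom (assuming the source database is 11gR2 or later, which the thread does not state) is deferred segment creation: empty tables have no segment yet, and the original exp utility silently skips segment-less tables. A hedged sketch of how to check and work around it; the schema and table names are placeholders:

```sql
-- Is deferred segment creation enabled? (11.2+ only; SQL*Plus syntax)
SHOW PARAMETER deferred_segment_creation

-- Which tables in the schema have no segment yet? (placeholder schema name)
SELECT table_name
FROM   dba_tables
WHERE  owner = 'SCHEMA_NAME'
AND    segment_created = 'NO';

-- Force a segment so exp will pick the table up (placeholder table name)
ALTER TABLE schema_name.some_empty_table ALLOCATE EXTENT;
```

An alternative, if both sides are 10g or later, is to use expdp/impdp instead of the original exp/imp, which handle segment-less tables correctly.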

Similar Messages

  • IMP-00028: partial import of previous table rolled back

    Dear all,
    I have Oracle 8i installed on Windows Server 2000. I have a valid export file (full database). When I try to import, I get these errors:
    IMP-00009: abnormal end of export file
    IMP-00028: partial import of previous table rolled back: 9464 rows rolled back
    Import terminated successfully with warnings.
    But the export file is a valid one, as the same script is used to take the export daily. The export log file shows the export completed successfully without any warnings.
    Has anyone faced this problem before? I tried importing into a 9i DB and got the same error.
    Please guide,
    Kai

    Thanks Anantha,
    It's a very normal script:
    exp userid=system/password file=dmpfile.dmp log=dmplog.log full=Y
    tried schema level export as well as full database export
    imp userid=system/password file=dmpfile.dmp log=schemaimp.log fromuser=username touser=username
    imp userid=system/password file=dmpfile.dmp log=schemaimp.log full=Y
    Thanks
    Kai

  • Imp Command to import a single table

    Dear All,
    I want to import a single table from my backup and preserve all the existing indexes, using the IMP command.
    UserID : NEW_USER
    PWD: qwert1

    imp system/password@tstdhk fromuser=syslog touser=newuser tables=logs200712 rows=y file=/u06/import_exp/syslog_logs200712_080217.dmp commit=Y grants=N statistics=NONE ignore=Y log=/u06/import_exp/logs_part_jan.log
    Regards
    Asif Kabir
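    If the table already exists in the target schema with its indexes in place, one hedged variant of the command above is to append the rows and suppress index creation from the dump, so the existing indexes are simply maintained by the inserts (adding indexes=n is a suggestion, not confirmed against this system; shown with the SQL*Plus HOST convention used elsewhere in this document):

    ```sql
    -- ignore=y appends into the existing table; indexes=n skips creating
    -- indexes from the dump, so the indexes already in place are preserved
    SQL> HOST imp system/password@tstdhk fromuser=syslog touser=newuser tables=logs200712 rows=y indexes=n ignore=y commit=y grants=n statistics=none file=/u06/import_exp/syslog_logs200712_080217.dmp log=/u06/import_exp/logs_part_jan.log
    ```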

  • DB Adapter not importing SQL Svr tables

    Hi,
    Oracle Software:
    OS: Windows
    AS10g (10.1.2)
    BPEL GA Release
    10g DB (used for dehydration)
    SQL Server:
    Microsoft SQL Server 2000 - 8.00.760 (Intel X86) Dec 17 2002 14:22:05 Copyright (c) 1988-2003 Microsoft Corporation Enterprise Edition on Windows NT 5.0 (Build 2195: Service Pack 4)
    Issue:
    BPEL DB adapter is not generating a list of tables to import from a SQL Server connection.
    What I have done so far:
    In JDev, I created a new DB connection to the SQL Server and it works fine; I am able to see all tables and views by expanding the connection.
    In my BPEL partner link, I use the adapter wizard to select the DB Adapter and then select the DB connection (the SQL Server connection) I created. All works well until I get to the import-table screen, where it does not generate a list of available tables to choose from.
    Any help would be greatly appreciated.
    Thanks!

    Hmmm ... Just to understand your problem correctly:
    - can you click on the "Import Tables" button successfully ? Meaning does this bring up another dialog titled "Import Tables" ?
    - If yes, are you able to select the tables to import ?
    - Subsequently, is the wizard not doing anything ? Is this right?
    Regardless, it would also help to run JDev via the following command: $ORACLE_HOME\integration\jdev\jdev\bin\jdev.exe [instead of the Windows shortcut - JDeveloper BPEL Designer - which uses jdevw.exe]. This launches JDev along with a DOS window where you can see any stack trace dumped by the UI code. When doing the "Import", you will most likely see a stack trace that you can post back to this thread, which would help us debug the issue further.
    Thanks.
    Shashi

  • EXP/IMP does not preserve MONITORING on tables

    Consider the following (on 8.1.7):
    1. First, create a new user named TEST.
    SQL> CONNECT SYSTEM/MANAGER
    Connected.
    SQL> CREATE USER TEST IDENTIFIED BY TEST;
    User created.
    SQL> GRANT CONNECT, RESOURCE TO TEST;
    Grant succeeded.
    2. Connect as that user, create a table named T and enable monitoring on the table.
    SQL> CONNECT TEST/TEST
    Connected.
    SQL> CREATE TABLE T(X INT);
    Table created.
    SQL> ALTER TABLE T MONITORING;
    Table altered.
    SQL> SELECT TABLE_NAME, MONITORING FROM USER_TABLES WHERE TABLE_NAME = 'T';
    TABLE_NAME                     MON
    T                              YES
    3. Export the schema using EXP.
    SQL> HOST EXP OWNER=TEST FILE=TEST.DMP CONSISTENT=Y COMPRESS=N
    4. Drop and recreate the user.
    SQL> CONNECT SYSTEM/MANAGER
    Connected.
    SQL> DROP USER TEST CASCADE;
    User dropped.
    SQL> CREATE USER TEST IDENTIFIED BY TEST;
    User created.
    SQL> GRANT CONNECT, RESOURCE TO TEST;
    Grant succeeded.
    5. Finally, connect as the user, and import the schema.
    SQL> CONNECT TEST/TEST
    Connected.
    SQL> HOST IMP FROMUSER=TEST TOUSER=TEST FILE=TEST.DMP
    Now monitoring is no longer enabled:
    SQL> SELECT TABLE_NAME, MONITORING FROM USER_TABLES WHERE TABLE_NAME = 'T';
    TABLE_NAME                     MON
    T                              NO
    Is this behaviour documented anywhere?
    Are there any IMP/EXP options that will preserve MONITORING?

    Apparently it's a non-public bug (#809007) in 8.1.7 which should be fixed in 9i.
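    Since there seems to be no EXP/IMP option for this, one workaround after the import is to regenerate the attribute with a small SQL*Plus script; a sketch (assumes you are connected as the imported user; the spool file name is arbitrary):

    ```sql
    -- Build and run ALTER TABLE ... MONITORING for every table that lost it
    SET PAGESIZE 0 FEEDBACK OFF VERIFY OFF
    SPOOL remonitor.sql
    SELECT 'ALTER TABLE "' || table_name || '" MONITORING;'
    FROM   user_tables
    WHERE  monitoring = 'NO';
    SPOOL OFF
    @remonitor.sql
    ```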

  • IMPDP FULL=Y did not import the rows just the structure

    Hi,
    I would like to ask for some information.
    I have a 10g database on a Windows server and it will be migrated to 11g, still on a Windows server. On the 10g server I used EXPDP with FULL=Y. I pre-created the tablespaces on the new 11g server and then used IMPDP (also with FULL=Y). I noticed that it ran very fast, although the dumpfile is 3TB in size. When I checked dba_segments, only 10GB was allocated and no rows were imported, just the structure :( Am I missing something?
    Also, on my 10g server all my datafiles are under /ora/oradata/PROD/datafiles/DATAFILENAMES.dbf, while on the 11g server they are in a different location, /u02/ora/datafiles/PROD/DATAFILENAMES.dbf. Does that affect anything? All the tablespaces from 10g are present on the 11g server, since I pre-created them all.
    Any thoughts?
    Any thoughts?

    Precreating your tablespaces does not have anything to do with this issue, and changing the directory structure doesn't affect it either. Both are supported for multiple reasons; the easiest example is going from a Windows system to a Linux system, which forces you to have a different directory structure.
    Having said that, I have no idea why this would happen. Can you list both your export and import commands? Also, can you check in your export log file whether the data was actually exported? The lines would look something like:
    . . exported "HR"."COUNTRIES" 6.359 KB 25 rows
    . . exported "HR"."DEPARTMENTS" 7 KB 27 rows
    . . exported "HR"."EMPLOYEES" 16.79 KB 107 rows
    Thanks
    Dean
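    To complement the log check above, a hedged way to cross-check on the target side (owner filter is illustrative; making the Data Pump CONTENT parameter explicit only rules out an accidental override, since ALL is its default):

    ```sql
    -- How much data actually arrived per schema?
    SELECT owner, ROUND(SUM(bytes)/1024/1024) AS mb
    FROM   dba_segments
    WHERE  owner NOT IN ('SYS', 'SYSTEM')
    GROUP  BY owner
    ORDER  BY mb DESC;

    -- If only structure came across, make CONTENT explicit on the re-run:
    -- impdp system/password FULL=Y CONTENT=ALL DUMPFILE=full.dmp LOGFILE=imp.log
    ```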

  • Table has 85 GB data space, zero rows

    This table has only one column. I ran a transaction that inserted more than a billion rows into this table but then rolled it back before completion.
    This table currently has zero rows but a select statement takes about two minutes to complete, and waits on I/O.
    The interesting thing here is that a previous explanation for this was ghost records left over by deletes, but there are none: m_ghostRecCnt is zero for all data pages.
    This is obviously not a situation in which the pages were placed in a deferred-drop queue either, or else the page count would be decreasing over time, and it is not.
    This is the output of DBCC PAGE for one of the pages:
    PAGE: (3:88910)
    BUFFER:
    BUF @0x0000000A713AD740
    bpage = 0x0000000601542000          bhash = 0x0000000000000000          bpageno = (3:88910)
    bdbid = 35                          breferences = 0                     bcputicks = 0
    bsampleCount = 0                    bUse1 = 61857                       bstat = 0x9
    blog = 0x15ab215a                   bnext = 0x0000000000000000
    PAGE HEADER:
    Page @0x0000000601542000
    m_pageId = (3:88910)                m_headerVersion = 1                 m_type = 1
    m_typeFlagBits = 0x0                m_level = 0                         m_flagBits = 0x8208
    m_objId (AllocUnitId.idObj) = 99    m_indexId (AllocUnitId.idInd) = 256
    Metadata: AllocUnitId = 72057594044416000
    Metadata: PartitionId = 72057594039697408                               Metadata: IndexId = 0
    Metadata: ObjectId = 645577338      m_prevPage = (0:0)                  m_nextPage = (0:0)
    pminlen = 4                         m_slotCnt = 0                       m_freeCnt = 8096
    m_freeData = 7981                   m_reservedCnt = 0                   m_lsn = (1010:2418271:29)
    m_xactReserved = 0                  m_xdesId = (0:0)                    m_ghostRecCnt = 0
    m_tornBits = -249660773             DB Frag ID = 1
    Allocation Status
    GAM (3:2) = ALLOCATED               SGAM (3:3) = NOT ALLOCATED          
    PFS (3:80880) = 0x40 ALLOCATED   0_PCT_FULL                              DIFF (3:6) = CHANGED
    ML (3:7) = NOT MIN_LOGGED           
    DBCC execution completed. If DBCC printed error messages, contact your system administrator.
    Querying the allocation units system catalog shows that all pages are counted as "used".
    I saw some articles, such as the ones listed below, which address similar situations where pages aren't deleted in a heap after a delete operation. It turns out pages are only deleted in a heap when a table-level lock is issued.
    http://blog.idera.com/sql-server/howbigisanemptytableinsqlserver/
    http://www.sqlservercentral.com/Forums/Topic1182140-392-1.aspx
    https://support.microsoft.com/kb/913399/en-us
    To rule this out, I inserted another 100k rows which caused no change on page counts, and then deleted all entries with a TABLOCK query hint. Only one page was deleted.
    So, it appears we have a problem with pages that were created during a transaction that was rolled back. I guess rolling back a transaction doesn't take certain physical factors into consideration.
    I've looked everywhere but couldn't find a satisfactory answer to this. Does anybody have any ideas?
    Just because there are clouds in the sky it doesn't mean it isn't blue. Some people would disagree.

    And this is the reason why you should not have heaps (unless your name is Thomas Kejser :-).
    Try TRUNCATE TABLE. Or ALTER TABLE tbl REBUILD.
    Erland Sommarskog, SQL Server MVP, [email protected]
    I rebuilt the heap a while ago, and then all the pages were gone. I don't know if TRUNCATE would have the same result; I would have to repeat the test to find out. There are many ways to fix the problem itself, including creating a clustered index as Satish suggested.
    I'd like to focus on the interesting fact I wanted to bring to the table for discussion: you open a transaction, insert a huge load of records and then roll back. Why would the engine leave the pages created during the transaction behind? More specifically, why would they not be marked as "free pages" if they are all empty? Why are they not marked as free so scans would skip them and not generate a lot of I/O throughput and long response times just to query a zero-row table? Isn't this like a design flaw or a bug?
    Just because there are clouds in the sky it doesn't mean it isn't blue. But someone will come and argue that in addition to clouds, birds, airplanes, pollution, sunsets, daltonism and nuclear bombs, all adding different colours to the sky, this
    is an undocumented behavior and should not be relied upon.
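    Erland's two fixes can be verified directly with the partition-stats DMV; a hedged T-SQL sketch with a hypothetical table name:

    ```sql
    -- Pages still allocated to the empty heap (index_id = 0 means the heap itself)
    SELECT used_page_count, row_count
    FROM   sys.dm_db_partition_stats
    WHERE  object_id = OBJECT_ID('dbo.BigEmptyHeap')
    AND    index_id = 0;

    -- Either statement deallocates the leftover pages
    TRUNCATE TABLE dbo.BigEmptyHeap;
    -- or
    ALTER TABLE dbo.BigEmptyHeap REBUILD;
    ```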

  • Row tables Or Column Tables Or both?

    I would like to know the basic approach for building a new model in HANA (row tables, column tables, or both).
    Suppose I have a source system, for example Excel or .csv files, and the different files consist of different types of data, including both dimensions and measures; each source file has around 500 fields. The source feed comes every day and I need to load each and every field into the database.
    Out of these 500 fields, assume 300 are dimensions and 200 are facts; for reporting purposes I just need 150 dimensions, 80 measures, and some calculated measures. (In future, I may need to consider some more dimensions and facts.)
    Now my question is:
    As the source feed comes every day and I need to load all the fields,
    do I need to create row tables first, as row tables are preferred for insert operations, or can I go ahead with column tables directly?
    I just want to know the guidelines to follow when we need to load some thousands of fields and a huge number of rows, while at the same time the model should be good for reporting as well.
    What I am not able to decide is this: SAP HANA recommends not combining row tables and column tables in operations, so should I first load all the data into row tables and then create column tables specific to reporting purposes from the row tables' data? (I could create some stored procedures to load data from row tables to column tables and run them after the data load completes.)
    Please let me know your inputs.
    Thanks,
    Sree

    Hello Sree,
    For better performance it is always advisable to store a table as a column store.
    This can be changed to row store at any time.
    HANA does have a row store, and you can create row-oriented tables for some very specific scenarios:
    - Transient data like queues, where you insert and delete a lot and the data never persists
    - Configuration tables which are never joined, where you select individual entire rows
    - When you are advised to by SAP support personnel
    Also, if you are going to create views, they only support column store tables.
    Usually performance is better with the column store.
    For more details you can refer:
    http://scn.sap.com/thread/2025441
    Regards,
    Saurabh
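    As a sketch of the two options described above (table and column definitions are purely illustrative), HANA lets you pick the store at creation time and convert later, so loading into one store and reporting from another is rarely necessary:

    ```sql
    -- Column store: the default recommendation for reporting/analytics
    CREATE COLUMN TABLE sales_fact (id INTEGER PRIMARY KEY, amount DECIMAL(15,2));

    -- Row store: only for the narrow scenarios listed above
    CREATE ROW TABLE config_queue (id INTEGER PRIMARY KEY, payload NVARCHAR(500));

    -- "This can be changed anytime": convert the row table to column store
    ALTER TABLE config_queue COLUMN;
    ```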

  • DAC not populating FACT table for the GL module - W_GL_OTHER_F (zero rows)

    All - We did our FULL loads from Oracle Financials 11i into OBAW and got data populated in most of the dimension tables. HOWEVER, I am not seeing anything populated into the fact table W_GL_OTHER_F (zero rows). Following is a list of all the dims/facts I am focusing on for the GL module.
    W_BUSN_LOCATION_D     (8)
    W_COST_CENTER_D     (124)
    W_CUSTOMER_FIN_PROFL_D     (6,046)
    W_CUSTOMER_LOC_D     (4,611)
    W_CUSTOMER_LOC_D     (4,611)
    W_CUSTOMER_LOC_D     (4,611)
    W_DAY_D     (11,627)
    W_DAY_D     (11,627)
    W_DAY_D     (11,627)
    W_GL_ACCOUNT_D     (28,721)
    W_INT_ORG_D     (171)
    W_INVENTORY_PRODUCT_D     (395)
    W_LEDGER_D     (8)
    W_ORG_D     (3,364)
    W_PRODUCT_D     (21)
    W_PROFIT_CENTER_D     (23)
    W_SALES_PRODUCT_D     (395)
    W_STATUS_D     (7)
    W_SUPPLIER_D     (6,204)
    W_SUPPLIER_PRODUCT_D     (0)
    W_TREASURY_SYMBOL_D     (0)
    W_GL_OTHER_F <------------------------------------- NO FACT DATA AT ALL
    Question for the group: Are there any specific settings which might be preventing us from getting data loaded into our FACT tables? I was doing research and found the following on the internet:
    "Map Oracle General Ledger account numbers to Group Account Numbers using the file file_group_acct_names_ora.csv." Is this something that is necessary?
    Any help / guidance / pointers are greatly appreciated.
    Regards,

    There are many things to configure before your first full load.
    For the configuration steps, see Oracle Business Intelligence Applications Configuration Guide for Informatica PowerCenter Users:
    - http://download.oracle.com/docs/cd/E13697_01/doc/bia.795/e13766.pdf (7951)
    - http://download.oracle.com/docs/cd/E14223_01/bia.796/e14216.pdf (796)
    3 Configuring Common Areas and Dimensions
    3.1 Source-Independent Configuration Steps
    Section 3.1.1, "How to Configure Initial Extract Date"
    Section 3.1.2, "How to Configure Global Currencies"
    Section 3.1.3, "How to Configure Exchange Rate Types"
    Section 3.1.4, "How to Configure Fiscal Calendars"
    3.2 Oracle EBS-Specific Configuration Steps
    Section 3.2.1, "Configuration Required Before a Full Load for Oracle EBS"
    Section 3.2.1.1, "Configuration of Product Hierarchy (Except for GL, HR Modules)"
    Section 3.2.1.2, "How to Assign UNSPSC Codes to Products"
    Section 3.2.1.3, "How to Configure the Master Inventory Organization in Product Dimension Extract for Oracle 11i Adapter (Except for GL & HR Modules)"
    Section 3.2.1.4, "How to Map Oracle GL Natural Accounts to Group Account Numbers"
    Section 3.2.1.5, "How to make corrections to the Group Account Number Configuration"
    Section 3.2.1.6, "About Configuring GL Account Hierarchies"
    Section 3.2.1.7, "How to set up the Geography Dimension for Oracle EBS"
    Section 3.2.2, "Configuration Steps for Controlling Your Data Set for Oracle EBS"
    Section 3.2.2.1, "How to Configure the Country Region and State Region Name"
    Section 3.2.2.2, "How to Configure the State Name"
    Section 3.2.2.3, "How to Configure the Country Name"
    Section 3.2.2.4, "How to Configure the Make-Buy Indicator"
    Section 3.2.2.5, "How to Configure Country Codes"
    5.2 Configuration Required Before a Full Load for Financial Analytics
    Section 5.2.1, "Configuration Steps for Financial Analytics for All Source Systems"
    Section 5.2.2, "Configuration Steps for Financial Analytics for Oracle EBS"
    Section 5.2.2.1, "About Configuring Domain Values and CSV Worksheet Files for Oracle Financial Analytics"
    Section 5.2.2.2, "How to Configure Transaction Types for Oracle General Ledger and Profitability Analytics (for Oracle EBS R12)"
    Section 5.2.2.3, "How to Configure Transaction Types for Oracle General Ledger and Profitability Analytics (for Oracle EBS R11i)"
    Section 5.2.2.4, "How to Specify the Ledger or Set of Books for which GL Data is Extracted"
    5.3 Configuration Steps for Controlling Your Data Set
    Section 5.3.1, "Configuration Steps for Financial Analytics for All Source Systems"
    Section 5.3.2, "Configuration Steps for Financial Analytics for Oracle EBS"
    Section 5.3.2.1, "How GL Balances Are Populated in Oracle EBS"
    Section 5.3.2.2, "How to Configure Oracle Profitability Analytics Transaction Extracts"
    Section 5.3.2.3, "How to Configure Cost Of Goods Extract (for Oracle EBS 11i)"
    Section 5.3.2.4, "How to Configure AP Balance ID for Oracle Payables Analytics"
    Section 5.3.2.5, "How to Configure AR Balance ID for Oracle Receivables Analytics and Oracle General Ledger and Profitability Analytics"
    Section 5.3.2.6, "How to Configure the AR Adjustments Extract for Oracle Receivables Analytics"
    Section 5.3.2.7, "How to Configure the AR Schedules Extract"
    Section 5.3.2.8, "How to Configure the AR Cash Receipt Application Extract for Oracle Receivables Analytics"
    Section 5.3.2.9, "How to Configure the AR Credit-Memo Application Extract for Oracle Receivables Analytics"
    Section 5.3.2.10, "How to Enable Project Analytics Integration with Financial Subject Areas"
    Also, another issue I had: if you choose not to filter by set of books/type, then you must still include the default values (set of books id 1 and type NONE) in the list in the DAC parameters (this is a consequence of strange logic in the decode statements of the extract SQL).
    I would recommend you run the extract fact SQL prior to running your execution plan, to sanity-check how many rows you expect to get. For example, to see how many journal lines, use Informatica Designer to view mapplet SDE_ORA11510_Adaptor.mplt_BC_ORA_GLXactsJournalsExtract.2.SQ_GL_JE_LINES.3 in mapping SDE_ORA11510_Adaptor.SDE_ORA_GLJournals, then cut and paste the SQL into your favourite tool such as SQL Developer. You will have to fill in the $$ variables yourself.

  • MARA table field STAWN (Comm./imp. code EU ) is not updating in MARC table

    Dear Experts,
    The field STAWN (Comm./imp. code EU) in table MARA is not updating the corresponding field STAWN (Comm./imp. code EU) in table MARC via transaction MM02.
    Is there any SAP standard functionality to overcome the above issue, or is ABAP coding needed for this?
    Regards,
    Hanumant

    Dear All,
    Thanks for your replies.
    I tried to update the field after adding the plant and other details; actually that field is in gray (display-only) mode and cannot be updated.
    My question: I want to update the MARC-STAWN field (Foreign Trade: Import) for all my existing materials. For those materials MARA-STAWN (Basic Data 3) is also blank; the system allows me to update MARA-STAWN (Basic Data 3), but it will not update the MARC-STAWN (Foreign Trade: Import) field.
    What is the best method to copy all MARA STAWN values to the MARC STAWN field?
    Regards,
    Hanumant

  • Performance problem on a table with zero rows

    I queried the v$sqlarea table to find which SQL statement was doing the most disk reads and it turned out to be a query that read 278 Gigabytes from a table which usually has zero rows. This query runs every minute and reads any rows that are in the table. It does not do any joins. The process then processes the rows and deletes them from the table. Over the course of a day, a few thousand rows may be processed this way with a row length of about 80 bytes. This amounts to a few kilobytes, not 278 Gig over a couple of days. Buffer gets were even higher at 295 Gig. Note that only the query that reads the table is doing these disk reads, not the rest of the process.
    There are no indexes on the table, so a full table scan is done. The query reads all the rows, but usually there are zero. I checked the size of the table in dba_segments, and it was 80 Meg. At one point months ago, during a load, the table had 80 Meg of data in it, but this was deleted after being processed. The size of the table was never reduced.
    I can only assume that Oracle is doing a full table scan on all 80 Meg of this table every minute. Even when there are zero rows. Dividing the buffer gets in bytes by the number of executions yields 72 Meg which is close to the table size. Oracle is reading the entire table size from disk even when the table has zero rows.
    The solution was to truncate the table. This helped immediately and reduced the disk reads to zero most of the time. The buffer gets were also reduced to 3 per execution when the table was empty. The automatic segment manager reduced the size of the table to 64k overnight.
    Our buffer cache hit ratio was a dismal 72%. It should go up now that this problem has been resolved.
    Table statistics are gathered every week. We are running Oracle 9.2 on Solaris.
    Note that this problem is already resolved. I post it because it is an interesting case.
    Kevin Tyson, OCP
    DaimlerChrysler Tech Center, Auburn Hills, MI

    Kevin,
    "The solution was to truncate the table": this is not a scoop, is it?
    "Table statistics are gathered every week": is there any reason for that?
    If stats are gathered when there are no rows, performance can be very bad after loading data; and if stats are gathered when there are thousands of rows, performance can be very bad after deleting. Perhaps you can find a more restrictive statistics schedule?
    Nicolas.
    Message was edited by:
    N. Gasparotto
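    Kevin's diagnosis (a full scan reads every block below the high-water mark, even when the table holds zero rows) can be checked with a couple of queries; a hedged sketch with an illustrative owner and table name:

    ```sql
    -- Allocated space vs. actual rows: a large gap on an empty table means a high HWM
    SELECT bytes/1024/1024 AS allocated_mb
    FROM   dba_segments
    WHERE  owner = 'APP' AND segment_name = 'QUEUE_TABLE';

    SELECT COUNT(*) FROM app.queue_table;

    -- DELETE does not lower the HWM; TRUNCATE resets it (and the scan cost with it)
    TRUNCATE TABLE app.queue_table;
    ```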

  • Import new rows to the existing table (urgent)

    Hello everyone,
    I have two databases in different locations, but in the initial stage both are the same.
    If I add or update rows in database 1, I want to export only the newly added or modified rows from database 1 and import them into database 2, without any other modification to database 2 (I mean only the modified rows and new rows are to be imported into database 2).
    for example;
    in initial stage
    database 1-
    table 1(5 rows)
    table 2( 6 rows)
    table 3 (1 row)
    database 2-
    table 1(5 rows)
    table 2( 6 rows)
    table 3 (1 row)
    later stage
    database 1-
    table 1(7 rows)
    table 2( 9 rows)
    table 3 (1 row)
    I need to update database 2 below with the 2 new rows in table 1 and the 3 new rows in table 2, and it should not affect table 3.
    database 2-
    table 1(5 rows)
    table 2( 6 rows)
    table 3 (1 row)
    I need a solution for this. If anyone knows the solution, please post a reply.

    Also, expdp and impdp commands work on 10g only, I think, but we are using 9i.

    Since you're using 9i, use the Query option of export (exp). The notes below will help:
    [http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:478275606639]
    [http://www.dbatools.net/experience/export_query_option.html]
    exp using query parameter
    -Anantha
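    The exp QUERY option referred to above looks roughly like this in 8i/9i (user, table, and predicate are illustrative; the WHERE clause usually needs escaping or a parameter file to survive the OS shell). Shown with the SQL*Plus HOST convention used elsewhere in this document:

    ```sql
    -- Exports only the rows matching the predicate, e.g. rows changed since yesterday
    SQL> HOST exp scott/tiger tables=emp query=\"where hiredate > sysdate - 1\" file=emp_delta.dmp log=emp_delta.log
    ```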

  • I'm trying to open a 900kb Word doc (240pages) in Pages but get this error message:  Import Warning - A table was too big. Only the first 40 columns and 1,000 rows were imported.

    I'm trying to open a 900kb Word doc (240pages) in Pages but get this error message:  Import Warning - A table was too big. Only the first 40 columns and 1,000 rows were imported.

    Julian,
    Pages simply won't support a table with that many rows. If you need that many, you must use another application.
    Perhaps the originator could use a tabbed list rather than a table. That's the only way you will be able to contain a list that long in Pages. You can do the conversion yourself if you open the Word document in LibreOffice, copy the table, paste it into Numbers, export the Numbers doc to CSV, and import that into Pages. In Pages, Find and Replace the commas with tabs.
    There are probably other ways, but that's what comes to mind here.
    Jerry

  • How do i set the background of the table( not of cell / row / column).

    How do I set the background of the table (not of a cell/row/column)?
    What happens is that when I load the applet the table is blank and displays a gray background, which we want to be white.
    We tried using setBackground but it is not working; maybe we are not using it properly. Any help would be great.
    Thanks in advance.

    I don't understand very well, but I guess that the background is gray when the table content is empty, isn't it?
    When the table model is empty, the JTable doesn't paint, so its container displays its background (often gray).
    In this case, what you must do is force the table to paint even if the model is empty. So you have to create your own table and override three methods:
    public class MyTable extends JTable {
        // specify the preferred and minimum size when empty
        private int myPreferredWidth = 200;
        private int myPreferredHeight = 200;
        private int myMinimumWidth = ...;
        private int myMinimumHeight = ...;

        public Dimension getPreferredSize() {
            if (getModel().getRowCount() < 1)
                return new Dimension(myPreferredWidth, myPreferredHeight);
            else
                return super.getPreferredSize();
        }

        public Dimension getMinimumSize() {
            if (getModel().getRowCount() < 1)
                return new Dimension(myMinimumWidth, myMinimumHeight);
            else
                return super.getMinimumSize();
        }

        // paint the white background ourselves when the model is empty
        protected void paintComponent(Graphics g) {
            if (getModel().getRowCount() < 1 && isOpaque()) {
                g.setColor(Color.white);
                g.fillRect(0, 0, getWidth(), getHeight());
            } else {
                super.paintComponent(g);
            }
        }
    }

  • Database adapter not importing table having data type as WF_EVENT_T

    Hi All,
    I have a requirement to import a table in the Database adapter. That table is having a column of data type “WF_EVENT_T”.
    When I tried to import the table in database adapter I got the error as "The following tables are not supported in the Database Adapter and were not imported".
    Then I modified the table by adding one more column with some other data type and tried to import that table.
    It got imported successfully but I was not able to see the column with data type WF_EVENT_T in the table.
    Any pointers to this would be of great help.
    Thanks.

    Hi Harish
    Thanks for your response.
    I can create the table with the data type 'String'. However, the problem is when I try to update the table from a program in SE38 or a function module in SE37.
    When I try to activate the program or function, I get the message that I mentioned earlier.
    Here is the simple program that I have created that I am not able to activate:
    ==========================================
    REPORT  ZTEST_STRING1.
    tables: ztest.
    ztest-zid = 2.
    ztest-zstring1 = 'ABC'.
    insert ztest.
    ===========================================
    ztest has two fields
    zid which is NUMC type
    and zstring1 which is STRING type.
    When I try to activate I get an error message as follows:
    'ztest' must be a flat structure. You cannot use internal tables,
    strings, references, or structures as components.
    Edited by: Ram Prasad on Mar 20, 2008 6:08 PM
