Historical usage of database

Hi,
Can anybody help me with how to find info about database usage for the last year?
I just need to know whether it is possible to find the historical usage of the database during the last year.
For example:
Jan 100G
Feb 150G
Dec 450G
Thanks

Hi,
Which database version are you using?
If you are using Oracle 11g Release 2, then:
Querying historical data:
Flashback Data Archive provides seamless access to historical data using the 'AS OF' or
'VERSIONS BETWEEN' SQL constructs. You can query the state of any row in a tracked
table as far back as your specified retention period.
The following is an example of querying the salary details for the employee with id=193 as of June 1, 2007:
SELECT last_name, first_name, salary
FROM employees
AS OF TIMESTAMP TO_TIMESTAMP('2007-06-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
WHERE employee_id = 193;
• The FLASHBACK ARCHIVE ADMINISTER system privilege is required to create a new flashback data archive (a setup sketch follows this list)
• The following static data dictionary views are available:
• DBA/USER_FLASHBACK_ARCHIVE – displays information about flashback data archives
• DBA/USER_FLASHBACK_ARCHIVE_TS – displays tablespaces and their mapping to flashback data archives
• The FLASHBACK ARCHIVE object privilege is required to enable Flashback Data Archive on a table
• The following static data dictionary view is also available:
• DBA/USER_FLASHBACK_ARCHIVE_TABLES – displays information about tables that are enabled for Flashback Data Archive
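To make the privilege bullets concrete, here is a minimal setup sketch (the archive name fda1, the tablespace fda_ts, and the EMPLOYEES table are illustrative, not from the thread):

CREATE FLASHBACK ARCHIVE fda1 TABLESPACE fda_ts RETENTION 1 YEAR;  -- needs FLASHBACK ARCHIVE ADMINISTER
ALTER TABLE employees FLASHBACK ARCHIVE fda1;                      -- needs FLASHBACK ARCHIVE on fda1

Note that Flashback Data Archive tracks row history, not database size. For the original question (size per month over the last year), a commonly used alternative is the AWR view DBA_HIST_TBSPC_SPACE_USAGE. A sketch, assuming AWR retention has been extended to cover the year (the default is much shorter), a Diagnostics Pack licence, and an 8K block size (TABLESPACE_USEDSIZE is in database blocks):

-- Peak used space across all tablespaces, month by month
SELECT month, MAX(used_gb) AS used_gb
FROM  (SELECT TO_CHAR(TO_DATE(rtime, 'MM/DD/YYYY HH24:MI:SS'), 'YYYY-MM') AS month,
              snap_id,
              ROUND(SUM(tablespace_usedsize) * 8192 / 1024 / 1024 / 1024) AS used_gb
       FROM   dba_hist_tbspc_space_usage
       GROUP  BY TO_CHAR(TO_DATE(rtime, 'MM/DD/YYYY HH24:MI:SS'), 'YYYY-MM'), snap_id)
GROUP  BY month
ORDER  BY month;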
Best regards,
Rafi.
http://rafioracledba.blogspot.com/

Similar Messages

  • Historical Usage Image - * as tooltip problem

    Hello Everyone,
We have upgraded our system from 4.4 E to 5.1 SP08 Patch Level 1 and observed that the Historical Usage image, which displays on the main screen after logging in to the system, is not visible; there is an X mark in place of the image. When I move the mouse pointer over it, it displays the tooltip as *.
There was a problem with the company logo image, which I have fixed, but I don't know how to fix this OOTB image. Please advise on the steps to fix it.
    Thanks !
    ******************RESOLVED*******************
    The problem is resolved now.
It was a problem because jcschart.jar was not specified in the application server classpath.
Once I set the classpath in the app server and restarted it, the bar charts started working.
    ******************RESOLVED*******************
    Edited by: ESO123 on Dec 27, 2010 6:49 PM

    Resolved

  • No historical data in database. UCCE

Hi all. We are running UCCE 9.
Today we faced a problem: neither the AWDB nor the HDS database tables (_half_hour or _interval) show any data.
Real-time tables work fine. CUIC historical reports also show no data.
What I've checked:
The HDS/AW databases have enough space for data.
Reporting in Configuration Manager > PG Explorer > Agent Distribution is enabled for HDS and real-time (the site name is correct).
Test calls and some other activities were performed.
What have I forgotten to configure here?
Any hints appreciated. TY

Check that the updateaw process and all other ICM processes are working fine. Please share the updateaw logs during any changes (like agent creation or making test calls) that should show up in reporting. In short, recreate the issue and send the updateaw, rtc, and rpl logs... or you can troubleshoot with the AW logs.
Hope this will help you..........:)
    Thanks & Regards,
    Hardik B Kansara

  • MDW Disk Usage for Database Report Error - A data source has not been supplied for the data source DS_TraceEvents

    Hello,
    On the MDW Disk Usage Collection Set report, I get the following error when I click on a database hyperlink.
    A data source has not been supplied for the data source DS_TraceEvents
SQL Profiler shows the following SQL statements are executed (I've replaced the database name with databaseX):
    1. exec sp_executesql N'SELECT
    dtb.name AS [Name]
    FROM
    master.sys.databases AS dtb
    WHERE
    (dtb.name=@_msparam_0)',N'@_msparam_0 nvarchar(4000)',@_msparam_0=N'databaseX'
This returns zero rows, as databaseX does not exist on my MDW central server, but is a database on a target server (i.e. one that is being monitored and uploaded into the MDW central server).
2. USE [databaseX]
    this produces the following error:
    Msg 911, Level 16, State 1, Line 1
    Database 'databaseX' does not exist. Make sure that the name is entered correctly.
    why is the report looking for the database on my server?
    thanks
    Jag
    Environment: MDW (Management Data Warehouse) on SQL 2008 R2

    Hi Jag,
Based on my test, this issue occurs while the database is offline. This is because when we click a particular database in the "Disk Usage Collection Set" report, the report queries some information from that database. If the database is offline, we cannot access it to acquire the related information, and this error is generated.
Therefore I recommend that you check the status of this database by using the system view sys.databases. If it is not online, please execute the following statements in a new window to bring the database online:
    USE master
    GO
    ALTER DATABASE <database name> SET ONLINE
    GO
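For reference, a minimal sketch of the status check mentioned above (databaseX again stands in for the real database name):

-- Shows ONLINE / OFFLINE / RESTORING etc. for the database in question
SELECT name, state_desc
FROM   sys.databases
WHERE  name = N'databaseX';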
    If anything is unclear, please let me know.
    Regards,
    Tom Li

  • Can UPK Usage Tracking Database be on Linux

    Gurus,
Can we have the UPK tracking database created on Linux, or is only Windows supported?
    Thanks

    Hello,
Currently the Usage Tracking server application requires IIS. However, the database associated with the Usage Tracking application server can be installed on a Linux machine, assuming the database software itself supports Linux. Please refer to the [UPK Technical Specifications|http://www.oracle.com/applications/tutor/upk-technical-specification-data-sheet.pdf] for a list of supported database software.
    Best regards,
    Marc

  • How to fix a corrupted data usage SQLite database?

    I have an iPhone 5 running the latest iOS 8.0.2 and this phone is never jailbroken.
A few months ago I noticed my data usage was no longer updating. I tried many things to restore it. Today I hooked up my iPhone to Xcode, and the device logs revealed the problem:
    Sep 28 13:38:01 Marcels-iPhone Preferences[331] <Warning>: CoreData: error: (11) Fatal error.  The database at /var/wireless/Library/Databases/DataUsage.sqlite is corrupted.  SQLite error code:11, 'database disk image is malformed'
    So apparently the data usage SQLite file on my iPhone got corrupted. How on earth can I restore this database?

If you are able, try modifying or deleting the account using "/Applications" > "Utilities" > "NetInfo Manager.app" (be careful that only the user, and not the entire "users" directory, is selected when you delete).
Otherwise, the output from these commands might provide more info on the state of the user records, potentially providing a clue to how to fix it (if the second field in any entry contains anything other than asterisks, don't post them):
nidump passwd /
Also, if the user shows up using this command, that might provide another method for referring to the user using "NetInfo"-based commands:
nicl . -list /users
If there aren't a lot of other users on the computer, you could always reset the whole "NetInfo" database as described in the document below. User accounts will have to be recreated (adjusting to match the original 'uid' values), and custom groups, share points, mounts, etc. (if any) will have to be reconfigured as well.
    http://docs.info.apple.com/article.html?artnum=107210
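For reference: if a copy of a corrupted SQLite file can be extracted (for example from a device backup), the desktop sqlite3 shell can at least diagnose the damage. A minimal sketch, run inside the sqlite3 shell against a copy of DataUsage.sqlite:

PRAGMA integrity_check;  -- prints "ok" for a healthy file, otherwise lists the corruption findings
-- a common salvage path is dumping whatever is still readable to SQL text
-- (.output dump.sql then .dump) and replaying that into a fresh database file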

  • Checking Module to Prevent SQL Plus usage on Database

I have a question regarding logon triggers checking against SYS_CONTEXT('USERENV', 'MODULE'). I created a logon trigger that looks for SYS_CONTEXT('USERENV', 'MODULE') = 'SQLPLUS.EXE', and if it matches, only specified users are allowed to log in to the database. This code works, but I am confused as to why, when I check SYS_CONTEXT('USERENV', 'MODULE') after I log in, it shows SQL*Plus, which clearly does not match the IF statement in my logon trigger.
Second issue: if I rename sqlplus.exe to jeff.exe and run it, I am able to log in to the database as a non-DBA user. But the module still shows as SQL*Plus. Why is this?
    Database Version: 11.2.0.2 64bit
    OS: Windows Server 2003 R2
    Client: 11.2.0.1
    /*********************Create Trigger******************************/
CREATE OR REPLACE TRIGGER application_check_al
  AFTER LOGON ON DATABASE
DECLARE
  l_username VARCHAR2(30);
  l_module   VARCHAR2(64);  -- MODULE values can exceed 20 characters
BEGIN
  l_username := SYS_CONTEXT('USERENV', 'SESSION_USER');
  l_module   := UPPER(SYS_CONTEXT('USERENV', 'MODULE'));
  -- LIKE without wildcards behaves as a plain equality test here
  IF l_module LIKE 'SQLPLUS.EXE' AND
     l_username NOT IN ('SYS', 'SYSTEM', 'DVOWNER', 'DVMGR') THEN
    raise_application_error(-20001, 'SQLPLUS ACCESS RESTRICTED FOR NON DBA USERS');
  END IF;
END application_check_al;
/
    /*********************Run SQLPLUS******************************/
    SQL*Plus: Release 11.2.0.1.0 Production on Wed Mar 7 12:22:23 2012
    Copyright (c) 1982, 2010, Oracle.  All rights reserved.
    Enter user-name: jeffc@dev
    Enter password:
    ERROR:
    ORA-00604: error occurred at recursive SQL level 1
    ORA-20001: SQLPLUS ACCESS RESTRICTED FOR NON DBA USERS
    ORA-06512: at line 10
    Enter user-name: system@dev
    Enter password:
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining, Oracle Database Vault
    and Real Application Testing options
    system@dev> select sys_context('USERENV','MODULE') from dual;
    SYS_CONTEXT('USERENV','MODULE')
    SQL*Plus
    SQL>

jeff81 wrote:
That doesn't make sense. Why am I able to log in when I renamed the exe? And why does the module still show as SQL*Plus?
You are right, it does not make sense. The idea that Oracle might set the module to SQLPLUS.EXE on executable start, and then reset it from SQLPLUS.EXE to SQL*Plus after connect (or in glogin.sql) to keep it consistent across all operating systems, never crossed my mind.
You might want to refer to Support Note "SQL*Plus Session/Module is Not Showing in V$SESSION" [ID 1312340.1] to see whether anything in there helps. I'm pretty sure http://docs.oracle.com/cd/E11882_01/server.112/e16604/ch_twelve040.htm#i2698573 doesn't help much, though.
I'd certainly be raising it with Support as a potential security challenge, to get that potential hole closed.
Edited by: Hans Forbrich on Mar 7, 2012 2:23 PM
I wonder whether Oracle put that capability in there deliberately: an untainted SQLPLUS.EXE tells you that it is SQLPLUS.EXE, but a renamed one tells you 'SQL*Plus'? Speculation, but it is one thing I might do to subtly raise the flag. Best bet: ask Support.
Edited by: Hans Forbrich on Mar 7, 2012 2:29 PM
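A detail worth adding to the security point above: MODULE is entirely self-reported by the client, and any session can overwrite it via DBMS_APPLICATION_INFO, which is why a logon-trigger check on MODULE can be sidestepped by any client that reports a different name. A minimal demonstration sketch:

-- Any connected session can rewrite its own MODULE value:
BEGIN
  DBMS_APPLICATION_INFO.set_module(module_name => 'NOT_SQLPLUS',
                                   action_name => NULL);
END;
/
SELECT SYS_CONTEXT('USERENV', 'MODULE') FROM dual;  -- now returns NOT_SQLPLUS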

  • Cisco ISE Licence Historical Usage

I am a bit frustrated that I am unable to find any report/chart showing license utilization in ISE from 1.0 to 1.1.1.
The only info I found is that ISE will send an alarm when the license pool is nearly fully utilized.
However, how can I check historical utilization data for capacity planning, as proof that licenses were not bought in excess?
Anyone have an idea on this?
    Thank you!

    Ning,
I checked my ISE instance and there isn't a report that exists; however, you can run a report of the active RADIUS sessions around their peak time, and that should give some visibility as to how many endpoints are connected to the network.
You can also take a screenshot of the active endpoints dashlet on the home screen, since that graph spans either the last 24 hours or the last 60 minutes.
    Thanks,
    Tarik Admani
    *Please rate helpful posts*

  • How do connect to Historical Databases?

    Hi,
Please explain to me: what are historical databases?
Can you tell me how to connect to historical databases? Can we download these database servers from the net?
    - senthil

    Hi Senthil,
You can try RTXHDB (RTX Historical Database).
RTX provides secure relational databases for LAN and WAN networks.
Relational database, or transaction-oriented, technology is often used for off-process historical LAN/WAN database applications and for doing unconventional operations on data.
As an example, it is in this area of off-process historical data warehousing and retrieval that the RTXHDB (RTX Historical Database) system excels. RTXHDB processes and stores a massive flow of data for LAN/WAN database applications, which can then be retrieved with a great deal of speed and quickly turned into useful information for the PC user.
    You can check following links for more information.
    www.expertune.com/articles/isa2004perfmonhda.pdf
    http://www.rtx.com/hdbcncpt.htm
    Best Regards
    Ramshanker

  • Concerns over switching between new Azure SQL Database Service Tiers

    Windows Azure's new SQL Database
    service tier pricing model will be put into effect in less than 12 months. We currently have SQL Databases on the Business and Web Edition pricing models.
    We recently asked Azure Support a number of questions around the scalability and ability to switch between these tiers. The responses so far have been far from encouraging:
    Q: If we exceed the criteria for a given tier (see http://msdn.microsoft.com/library/azure/dn741336.aspx), how will Azure respond? For example, if we are on the S1 service tier, and we exceed the maximum number of sessions (200), will any new sessions be
    blocked until we manually increase the service tier? Or will you automatically move (and bill) us to/for the next tier level?
A: If you exceed the criteria of the existing tier, you will be notified of performance issues like throttling. Users may experience slowness and blocking. There will not be any automatic upgrade.
    Q: So to confirm, if we suddenly experience increased, unanticipated client activity overnight due to our web site becoming more popular, you will be blocking any user sessions over and above our performance level limit, until we manually make the transition
    to the next level? Is there no “overdraft” facility/scalability in this respect? If this scenario took place within a short space of time, how would we have time to react? Simply blocking new sessions and preventing customers from using our site is not acceptable.
A: I'm sure you understand that SQL Azure database is a shared resource, and hence we cap the resources for individual subscriptions so that a fair service is provided to all concurrent users. If you feel that you cannot compromise on the user experience, then you should consider an edition that best suits you. Please refer to the following document to understand throttling and its impact. http://social.technet.microsoft.com/wiki/contents/articles/1541.windows-azure-sql-database-connection-management.aspx#Throttling_Limits
    Q: Since the changing of the performance level could take several minutes/hours depending on the size of the database, would we experience any downtime/degradation of performance of the database during this period?
    A: You might
    Q: If downtime/degradation of performance are a possibility whilst we switch service tiers, what do Microsoft recommend we do to safeguard against this? Should we create a second database on a higher tier level and then export the data from the lower tier and
    import to the higher, before switching over? How do Microsoft recommend we switch tiers in a production environment with minimal disruption?
    A: If you want to upgrade the tiers due to degradation of performance, you will have to create another database on a higher tier and import the data.
One of our key reasons for moving to Azure hosting was the seamless scalability it appeared to offer. As you can imagine, the responses above are a major concern for our production environment. Does anyone else have any thoughts or concerns in this respect?

    James,
    I reviewed the support incident you referred to and believe that the questions you asked may have been answered later in the interaction you had with the engineer.  For the benefit of others viewing this forum I want to reiterate the answers
    to your key questions here as well.
    Q. What is the behavior when you reach the limits for the service tier?
    A. Each service tier currently has limits on 4 different dimensions (CPU, physical reads, log writes and memory) of resource consumption.  When you reach one of the limits, the behavior depends on which resource limit you are hitting, but
    generally speaking is consistent with the behavior you would see with a similar hardware limit in the SQL Server box product.  For example, when you reach the CPU limit your queries will start showing more SOS_SCHEDULER_YIELD waits,
    the memory limit will cause a higher percentage of pages to be read from disk instead of the buffer cache (PAGEIOLATCH_xx waits), etc.  This set of limits does not directly abort any of your queries--they just run longer as they
    vie for the fixed set of resources made available to your database.  In a system where the load greatly exceeds the resources, queries may start to time out.
The new service tiers continue the Web/Business edition behavior of limiting the number of sessions and concurrent requests (worker threads) you can have. When you exceed these limits you'll get error 10928. Note that each tier in Basic/Standard/Premium has different values for these limits (http://msdn.microsoft.com/en-us/library/azure/dn369873.aspx) than Web/Business, and thus you may encounter the errors at different usage levels. The key is choosing the appropriate service level for the application, and to facilitate that, the sys.resource_stats view shows historical usage information so you know where you stand with respect to any of the limits. This resource consumption data is also available in the portal.
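For reference, a minimal sketch of querying that view (run it against the master database of the logical server; 'mydb' is a placeholder, and the columns are as named in the current sys.resource_stats documentation):

-- Resource consumption per 5-minute window, most recent first
SELECT start_time, end_time,
       avg_cpu_percent, avg_data_io_percent, avg_log_write_percent
FROM   sys.resource_stats
WHERE  database_name = 'mydb'
ORDER  BY start_time DESC;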
    Q. Is it possible to switch service tiers, and is there any disruption when you do so?
A. Yes, you can change between service tiers as described in this MSDN documentation (http://msdn.microsoft.com/en-us/library/azure/dn369872.aspx). This can be done through the portal, PowerShell, or REST APIs. [The preview currently has a restriction where legacy servers don't support switching to Basic/Standard, which is expected to be removed in the near future.] The link above outlines the limits on the number of tier changes you can do in a 24-hour period, the expected time to perform the change, and the client disconnect that occurs.
    If you have further questions, feel free to re-engage on the support incident or reply to this forum thread.

  • How can I connect to MySQL external database

    Hello.
I have an SAP system running on SQL Server 2003.
I need to connect to an external MySQL DB to operate with this information in an ABAP program.
I have done the necessary steps... I mean, I went to the DBCO transaction and configured the connection like this:
DB CONNECTION --> AFIS
DBMS --> MSS
user name --> xxxxxx
DB pass --> xxxxxx / xxxxxx
Conn Info --> MSSQL_SERVER=192.168.1.233 MSSQL_DBNAME=alliance OBJECT_SOURCE=alliance
I wrote a test program; when I execute the CONNECT TO statement, sy-subrc is not 0 and the connection is DEFAULT... I mean, in this form I cannot connect to the MySQL database...
Can you help me do this? I think the problem is the connection string in DBCO... but I'm not sure.
Would it be possible to connect via MySQL ODBC? I mean, installing the ODBC driver on my SAP server and using it in the ABAP program?
Thanks.
DATA: BEGIN OF wa,
        cod_modelo(20),
      END OF wa.
DATA: dbs TYPE dbcon-con_name.
DATA: con(20) TYPE c.
DATA: ls_wa LIKE wa.

con = 'AFIS'.        "DB connection maintained in DBCO above

EXEC SQL.
  CONNECT TO :con
ENDEXEC.
WRITE sy-subrc.      "--> the result is 4

EXEC SQL.
  GET CONNECTION :con
ENDEXEC.
WRITE: con.          "--> the result is DEFAULT

EXEC SQL.
  SET CONNECTION DEFAULT
ENDEXEC.
WRITE: con.          "--> the result is DEFAULT

> Is that why, when I go to DBCO... in DBMS... I can select Oracle, MSSQLServer, DB2... for these databases, the library (lib_dbsl) exists???
Yes, for all those databases the database interface library exists.
> In summary:
> IT'S NOT POSSIBLE TO CONNECT TO MYSQL!!! ... I cannot believe it!!!...
Well, as far as I remember, some time ago there were efforts to port SAP applications to MySQL. That would explain why a file "DDLMYS.TPL" is also created if you execute R3ldctl during a system copy, amongst the DDL files for all other databases. I believe this was at the time MySQL was promoting SAPDB/MaxDB.
MySQL is historically not a database engine for software that requires transactional integrity; there were extensions to support that (InnoDB or others), and there was no customer demand for getting MySQL as an engine for SAP applications. And developing an interface only to be able to connect to an external MySQL engine is not worth the effort.
However, there is hope: some BusinessObjects applications also run with and against MySQL engines. Depending on the strategy for integrating those into the SAP software stack, there may (or may not) be an interface for that database in the future.
Markus

  • Sales usage outlier report .

Hi,
I would like to create a report based on sales data. Here is the requirement:
Sales Usage Outlier Report - unusual high or low sales for stock materials in a plant (BI). The report should allow the user to select criteria such as: ABC code, standard deviation from forecast, material.
This report will be used to adjust historical usage for sales outliers that occur during the normal business process. These outliers can cause larger-than-normal swings in inventory purchases if not captured during the forecasting process.
Could you please let me know whether we have any standard functionality to show outliers in BW?
    Regards

Hi,
For this requirement, here are the options.
Sales Usage Outlier Report - unusual high or low sales for stock materials in a plant (BI):
- USE EXCEPTIONS TO HIGHLIGHT THE CHANGES THROUGH COLOR HUES.
The report should allow the user to select criteria such as: ABC code, standard deviation from forecast, material:
- CREATE A USER-ENTRY VARIABLE ON THE "ABC CODE" FIELD AND ATTACH IT IN THE QUERY DEFINITION.
This report will be used to adjust historical usage for sales outliers that occur during the normal business process. These outliers can cause larger-than-normal swings in inventory purchases if not captured during the forecasting process:
- THE REQUIREMENT IS NOT CLEAR, BUT YOU MUST BE ABLE TO ANALYSE THE DATA, CHECK THE USAGE OF SALES OUTLIERS, AND DECIDE ON THE FORECAST VALUES. YOU CAN CREATE CALCULATED KFs TO ANALYSE THE HISTORICAL DATA.
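To make the outlier logic concrete outside BW, here is a plain-SQL sketch of the same idea (the sales_history table and the two-standard-deviation threshold are illustrative, not standard functionality):

-- Flag material/month combinations whose sales deviate more than two
-- standard deviations from that material's historical mean
SELECT material, calmonth, sales,
       CASE
         WHEN ABS(sales - AVG(sales) OVER (PARTITION BY material))
              > 2 * STDDEV(sales) OVER (PARTITION BY material)
         THEN 'OUTLIER'
       END AS outlier_flag
FROM   sales_history;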
    Thanks.

  • Using Oracle Database Express Edition in development environment

    Hi All,
I have a doubt regarding the usage of Oracle Database Express Edition in a development environment. I am not sure whether I can ask a non-technical question here or not; please forgive me if I have done anything wrong.
I am working in an IT company where we take up projects outsourced by our clients. As part of our current project we are making some modifications to a web application used by an institution. Our client is using Oracle Database Standard Edition. Due to budgetary constraints, our company cannot set up an Oracle Standard Edition database in our development environment.
So would it be illegal if we use Oracle Database Express Edition in our development environment? We can guarantee that only our internal development team, which comprises a maximum of 10 people, will have access to this development database, and this development database will never be opened to our client for their business purposes (they have their own Oracle Standard Edition in their environment). As part of the project we deliver only table DDL scripts and stored procedures to our client, and they put them in their environment. The sole purpose of an internal Express database would be development only.
Could someone please tell me if it would be a violation of the license agreement if we install Oracle Database Express Edition in our development environment.

    Hi Paul,
Actually, I have already gone through the Oracle Technology Network Developer License Terms for Oracle Database 11g Express Edition, but I was not quite clear about the content. The license terms say: "We grant you a nonexclusive, nontransferable limited license to use the programs for: (a) purposes of developing, prototyping and running your applications for your own internal data processing operations". Does the term "your applications" include an application we are developing for another company? Since the Express database is installed in the development environment only and is not opened to anyone else, not even our client, it definitely falls under the term "internal data processing operations", right?

  • Usage Tracking - Access problem when Authentication Mode = Windows

    Hi Everyone,
I'm working on the UPK Usage Tracking configuration, in order to provide the finished training material.
1) On Server01 (Windows Server 2003) UPK Usage Tracking is installed
2) On Server02 (also Windows Server 2003) the Usage Tracking database is installed
3) By accessing the configuration file (http://Server01/ODSTrack/configuration/setup.aspx) on Server01, I set up Authentication Mode = Forms
Note: The rest of the configuration was done.
4) Once the configuration from step 3 is done, I execute the training material (on Server01) from another node of the Windows network,
and as a result I'm able to perform it.
5) I access the statistics data on Server01 via the file (http://Server01/ODSTrack/admin/default.aspx), and I'm also able to see the results.
6) When I repeat step 3 but with Authentication Mode = Windows, including the GROUP name (a Windows group specially created for this goal, in which my user is included):
- I still have access to the training material (step 4)
- I no longer have access to the statistics data (step 5), and the following message is displayed:
"You do not have permission to access this page. Please contact your Usage Tracking Server Administrator to update your permissions."
I don't know what else I can do, and I wonder if some other configuration needs to be done at the Windows network and/or browser level, or anywhere else.
Any help would be appreciated.
Best Regards//
Rubén Zamudio

    Hi All,
This problem was solved by reconfiguring the authentication method in Usage Tracking (it was anonymous, and the solution was Windows Integrated).
It is important to count on people from your organization working on networks with some knowledge of IIS.
    Best Regards
    Ruben

  • Oracle SQL Developer 3.1 Migration Third Party Databases Issues

    Hi,
I had the following issues with migrating from DB2 v8 to Oracle 11.2.
Online:
Due to missing privileges and roles for the migration repository database user, some steps failed (CREATE USER -> ORA-01031 ...).
After correcting this as described in "Creating a Database User for the Migration Repository" in the sqldev online help, this worked.
The problems are:
a) on the overview page at the end of the migration assistant, all steps (CAPTURE, CONVERT, GENERATE, DATAMOVE) are shown as complete, even if nothing was done
b) on page 6/9 of the migration assistant, all changes for data type conversion are ignored, for example CHAR to VARCHAR2
c) generated files are not visible, even if you click refresh in the file view
d) after restarting sqldev, generated files are visible in the file view, but when you add generated files to svn, the error message "svn: File: xxx has inconsistent newlines" is shown
e) after a successful migration, on the opened migration project's "data quality" pane, sourcenumrows are NULL, even though they are always NOT NULL and count(*) on any table on both sides is equal
Offline:
The generated scripts contain errors:
./startDump.sh: line 157: syntax error near unexpected token `done'
'/startDump.sh: line 157: `done < "schemas.dat"
Can anybody help?
Thanks in advance
André

    Hi kgronau,
    thanks for your fast answer.
Today I have found 2 new issues.
When you open a migration project from the repository, on the "data quality" pane sourcenumrows are always null,
and
sourcename and targetname always show database object names from the first migration project in the repository, independently of the selection in the model and source drop-down boxes.
kgronau wrote:
André,
I used SQL Dev 3.1 and I captured a DB2 database. Then I've changed the rule to map char to varchar2 and started the migration.
When I now check out my custom tables, all of them that had a char column in the source model are now using varchar2.
I had tried to change the data type for the target database in place via the drop-down box, not via the edit rule button. It's a little bit confusing to have this option when it doesn't work.
After using the edit rule button, all works fine. Just the summary page 9/9 doesn't report the changed data type assignment.
> Could you please explain what you mean with your options c and d?
c)
Yes, I mean View -> Files. Sorry, but on German Windows I have only German menu items. That is sometimes tricky to retranslate for support questions, and also not helpful when using the online help, where all menu items are referenced in English :-(
(Do you have an idea how I can configure sqldev with English menus on German Windows?)
I think this problem is specific to output folders under subversion control.
d)
Generated files at the end of the migration are only visible in output folders under subversion control after restarting sqldev.
> Edited by: kgronau on Mar 7, 2012 12:30 PM
> Are you talking about opening a File viewer window (View -> Files)? In my case I have chosen d:\temp\DB2 as output and monitored it during the migration. It isn't refreshed until I manually click on the refresh button - but once the migration has finished and written the output and I then click on the refresh button, I'll see all the directories and the files included.
> Edited by: kgronau on Mar 7, 2012 12:39 PM
> When a migration has finished, SQL Developer 3.1 now creates in the top directory an unload_script.sh file which calls the other unload scripts.
That's right, all scripts are generated.
> Also the data unload scripts were created - I need to find a DB2 on Unix to check the script - a quick check of the Windows scripts worked correctly.
> Edited by: kgronau on Mar 7, 2012 1:22 PM
> These unload shell scripts to unload the data out of a DB2 database are also working.
> Unfortunately I'm not able to test the shell script used for a source model unload, as my UDB is running on Windows.
> Didn't the online source model collection work? For me it looks like it did, as you mentioned you changed the char data types to varchar2, and this already requires a connection to the source database - except if you used the scripts that were generated using startDump.sh, which has failed.
Yes, the online source model collection did work. Just the unix shell script produces an error on the source unix system with db2. Please see below the generated script.
> So please provide here some more details.
./startDump.sh was started for testing purposes without any arguments:
./startDump.sh: line 157: syntax error near unexpected token `done'
'/startDump.sh: line 157: `done < "
    if [[ $# != 3 ]]; then
    echo "Usage: startDump <database> <user> <password>";
    exit 1;
    fi
    ROWTAG="'<row>'";
    ENDROWTAG="'</row>'";
    COLTAG="'<col><![CDATA['";
    ENDCOLTAG="']]></col>'";
    # Clear any other dat files
    echo "Clearing older data files"
    rm -f *.dat
    echo "Connnecting to $1 as $2";
    db2 -r connect.dat "connect to $1 user $2 using $3";
    if [[ $? != 0 ]]; then
    echo "Connection failed.";
    exit 20;
    fi
    # GET SCHEMA QUERY.
    echo "Get all schemas";
    db2 +o -x -r schemas.dat "select SCHEMANAME SCHEMA_NAME from SYSCAT.SCHEMATA WHERE DEFINER <> 'SYSIBM' AND
    SCHEMANAME <> 'NULLID' AND SCHEMANAME <> 'SQLJ'
    AND SCHEMANAME <> 'SYSTOOLS'";
    if [[ $? != 0 ]]; then
    echo "Get schemas failed.";
    exit 30;
    fi
    # Loop through file containing schema names and extract db objects for each of them
    while read SCHEMA_NAME
    do
    # Create schema directory
    rm -rf "${SCHEMA_NAME}";
    mkdir "${SCHEMA_NAME}";
    if [[ $? != 0 ]]; then
    echo "Could not create schema directory ${SCHEMA_NAME}.";
    exit 40;
    fi
    echo "Get all tables for schema $SCHEMA_NAME";
    tablesFile="${SCHEMA_NAME}/""tables.dat";
    # GET TABLES QUERY. */
    db2 -x +o -r $tablesFile "select "$ROWTAG", "$COLTAG"||COLUMNS.TABSCHEMA||"$ENDCOLTAG", "$COLTAG"||COLUMNS.TABNAME||"$ENDCOLTAG",
    "$COLTAG"||COLUMNS.COLNAME||"$ENDCOLTAG", "$COLTAG"||(CASE WHEN (COLUMNS.CODEPAGE = 0 and (COLUMNS.TYPENAME = 'VARCHAR' OR COLUMNS.TYPENAME = 'CHAR'
    OR COLUMNS.TYPENAME = 'LONG VARCHAR' OR COLUMNS.TYPENAME = 'CHARACTER')) THEN COLUMNS.TYPENAME || ' FOR BIT DATA'
    ELSE COLUMNS.TYPENAME END)||"$ENDCOLTAG", "$COLTAG"||CHAR(COLUMNS.LENGTH)||"$ENDCOLTAG",
    "$COLTAG"||CHAR(COLUMNS.SCALE)||"$ENDCOLTAG", "$COLTAG"||COLUMNS.NULLS||"$ENDCOLTAG",
    "$COLTAG"||COALESCE(COLUMNS.DEFAULT, '')||"$ENDCOLTAG", "$ENDROWTAG" from
    SYSCAT.COLUMNS COLUMNS, SYSCAT.TABLES TABLES WHERE
    COLUMNS.TABSCHEMA = '${SCHEMA_NAME}' AND
    COLUMNS.TABNAME = TABLES.TABNAME AND
    COLUMNS.TABSCHEMA = TABLES.TABSCHEMA AND
    TABLES.TYPE = 'T'
    ORDER BY COLUMNS.TABNAME, COLUMNS.COLNO";
    if [[ $? != 0 ]]; then
    echo "No tables found.";
    fi
    # GET SYNONYMS QUERY. */
    echo "Get all synonyms for schema $SCHEMA_NAME";
    synonymsFile="${SCHEMA_NAME}/""synonyms.dat";
    db2 -x +o -r $synonymsFile "select "$ROWTAG", "$COLTAG"||TABNAME||"$ENDCOLTAG", "$COLTAG"||BASE_TABSCHEMA||"$ENDCOLTAG",
    "$COLTAG"||BASE_TABNAME||"$ENDCOLTAG", "$ENDROWTAG" from syscat.tables
    where tabschema = '${SCHEMA_NAME}' and type = 'A'";
    if [[ $? != 0 ]]; then
    echo "No synonyms found.";
    fi
    # GET VIEW QUERY. */
    echo "Get all views for schema $SCHEMA_NAME";
    viewsFile="${SCHEMA_NAME}/""views.dat";
    db2 -x +o -r $viewsFile "select "$ROWTAG", "$COLTAG"||VIEWSCHEMA||"$ENDCOLTAG", "$COLTAG"||VIEWNAME||"$ENDCOLTAG",
    "$COLTAG"||COALESCE(TEXT, '')||"$ENDCOLTAG",
    "$COLTAG"||DEFINER||"$ENDCOLTAG", "$COLTAG"||READONLY||"$ENDCOLTAG", "$COLTAG"||VALID||"$ENDCOLTAG", "$ENDROWTAG"
    from syscat.views
    WHERE VIEWSCHEMA = '${SCHEMA_NAME}'
    ORDER BY VIEWNAME";
    if [[ $? != 0 ]]; then
    echo "No views found.";
    fi
    # GET INDEXES QUERY. */
    echo "Get all indexes for schema $SCHEMA_NAME";
    indexesFile="${SCHEMA_NAME}/""indexes.dat";
    db2 -x +o -r $indexesFile "select "$ROWTAG", "$COLTAG"||INDSCHEMA||"$ENDCOLTAG", "$COLTAG"||INDNAME||"$ENDCOLTAG",
    "$COLTAG"||TABSCHEMA||"$ENDCOLTAG", "$COLTAG"||TABNAME||"$ENDCOLTAG", "$COLTAG"||INDEXTYPE||"$ENDCOLTAG",
    "$COLTAG"||UNIQUERULE||"$ENDCOLTAG", "$ENDROWTAG" from SYSCAT.INDEXES
    WHERE INDSCHEMA = '${SCHEMA_NAME}' AND UNIQUERULE <> 'P'
    ORDER BY TABNAME, INDNAME";
    if [[ $? != 0 ]]; then
    echo "No indexes found.";
    fi
    # GET INDEX DETAILS QUERY. */
    echo "Get all index details for schema $SCHEMA_NAME";
    indexeDetailsFile="${SCHEMA_NAME}/""indexDetails.dat";
    db2 -x +o -r $indexeDetailsFile "select "$ROWTAG", "$COLTAG"||INDSCHEMA||"$ENDCOLTAG", "$COLTAG"||INDNAME||"$ENDCOLTAG",
    "$COLTAG"||COLNAME||"$ENDCOLTAG", "$COLTAG"||CHAR(COLSEQ)||"$ENDCOLTAG", "$ENDROWTAG" from SYSCAT.INDEXCOLUSE
    WHERE INDSCHEMA = '${SCHEMA_NAME}'";
    if [[ $? != 0 ]]; then
    echo "No index details found.";
    fi
    # GET TRIGGERS QUERY. */
    echo "Get all triggers for schema $SCHEMA_NAME";
    triggersFile="${SCHEMA_NAME}/""triggers.dat";
    db2 -x +o -r $triggersFile "select "$ROWTAG", "$COLTAG"||TRIGSCHEMA||"$ENDCOLTAG",
    "$COLTAG"||TRIGNAME||"$ENDCOLTAG", "$COLTAG"||DEFINER||"$ENDCOLTAG", "$COLTAG"||TABSCHEMA||"$ENDCOLTAG",
    "$COLTAG"||TABNAME||"$ENDCOLTAG", "$COLTAG"||TRIGEVENT||"$ENDCOLTAG", "$COLTAG"||VALID||"$ENDCOLTAG",
    "$COLTAG"||COALESCE(TEXT, '')||"$ENDCOLTAG",
    "$COLTAG"||COALESCE(REMARKS, '')||"$ENDCOLTAG", "$ENDROWTAG"
    from SYSCAT.TRIGGERS
    WHERE TRIGSCHEMA = '${SCHEMA_NAME}'";
    if [[ $? != 0 ]]; then
    echo "No triggers found.";
    fi
    # The for GET Promary Key CONSTRAINT QUERY. */
    echo "Get all primary keys for schema $SCHEMA_NAME";
    primarykeysFile="${SCHEMA_NAME}/""primarykeys.dat";
    db2 -x +o -r $primarykeysFile "select "$ROWTAG", "$COLTAG"||X.CONSTNAME||"$ENDCOLTAG", "$COLTAG"||X.TYPE||"$ENDCOLTAG",
    "$COLTAG"||X.TABSCHEMA||"$ENDCOLTAG", "$COLTAG"||X.TABNAME||"$ENDCOLTAG", "$COLTAG"||Z.COLNAME||"$ENDCOLTAG",
    "$COLTAG"||CHAR(Z.COLSEQ)||"$ENDCOLTAG", "$COLTAG"||COALESCE(X.REMARKS, '')||"$ENDCOLTAG", "$ENDROWTAG" from
    (select CONSTNAME, TYPE, TABSCHEMA, TABNAME, REMARKS from SYSCAT.TABCONST where (type = 'P' OR type = 'U')) X
    FULL OUTER JOIN
    (select COLNAME, COLSEQ, CONSTNAME, TABSCHEMA, TABNAME from SYSCAT.KEYCOLUSE) Z
    on
    (X.CONSTNAME = Z.CONSTNAME and X.TABSCHEMA = Z.TABSCHEMA and X.TABNAME = Z.TABNAME)
    WHERE X.TABSCHEMA='${SCHEMA_NAME}'
    ORDER BY X.CONSTNAME";
    if [[ $? != 0 ]]; then
    echo "No primary keys found.";
    fi
    # The for GET Check constraints QUERY. */
    echo "Get all Check constraints for schema $SCHEMA_NAME";
    constraintsFile="${SCHEMA_NAME}/""checkConstraints.dat";
    db2 -x +o -r $constraintsFile "SELECT "$ROWTAG", "$COLTAG"||A.CONSTNAME||"$ENDCOLTAG", "$COLTAG"|| COALESCE(TEXT, '') ||"$ENDCOLTAG", "$COLTAG"|| A.TABSCHEMA||"$ENDCOLTAG", "$COLTAG"|| A.TABNAME ||"$ENDCOLTAG", "$COLTAG"|| COLNAME ||"$ENDCOLTAG", "$ENDROWTAG" FROM SYSCAT.CHECKS A , SYSCAT.COLCHECKS B
    WHERE A.CONSTNAME = B.CONSTNAME AND A.TABSCHEMA = B.TABSCHEMA AND A.TABNAME=B.TABNAME AND A.TABSCHEMA = '${SCHEMA_NAME}'";
    if [[ $? != 0 ]]; then
    echo "No check constraints found.";
    fi
    done < "schemas.dat"
    # GET PROCEDURES QUERY. */
    . getProcedures.sh schemas.dat
    # The for GET Foreign Key CONSTRAINT QUERY. */
    . getForeignKeys.sh schemas.dat
