Table Management in Oracle 11g

I am using an 11g database and I need to release some space at the tablespace level.
Here is the situation.
One of our big tables (CAMPAIGN_REPORT_RAW) appears to be heavily fragmented.
It is created under the INCIH_DATA tablespace (which has 10 datafiles). The INCIH_DATA tablespace occupies 33 GB, of which 25 GB is used and 8 GB is free. We are using filesystem storage management.
On that same filesystem we need to find 10 GB of space to create a new tablespace. Unfortunately we don't have that much free space there, so my plan is to shrink the CAMPAIGN_REPORT_RAW table and then resize the datafile:
ALTER TABLE CAMPAIGN_REPORT_RAW SHRINK SPACE;
ALTER TABLE CAMPAIGN_REPORT_RAW SHRINK SPACE CASCADE;
alter database datafile '<full_file_name>' resize <size>M;
For that, I need your help with the commands to:
1) find the size of the table
2) find the used size of the table
3) find the High Water Mark level of the table
4) find which datafiles this table occupies
Thanks

For guidance, see http://download.oracle.com/docs/cd/B28359_01/server.111/b28310/schema003.htm#ADMIN10161
The size of a table can be seen with:
select * from dba_segments where segment_name='<your table name>';
The above query gives the allocated ('used') size of the table. The minimum unit of space allocation in Oracle is an extent, so even if 90% of an extent is empty you cannot reclaim those blocks individually; there is little point in counting empty versus used blocks.
You can find out which tablespace the table belongs to. A table may span several of that tablespace's datafiles, and you normally do not need to know which ones. Why do you want to know that?
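For reference, here is a hedged sketch of queries commonly used for points 1) to 4), plus the prerequisite for the shrink itself. MYSCHEMA is a placeholder for the table owner, and SHRINK SPACE assumes the table lives in an ASSM (automatic segment space management) tablespace with row movement enabled:
-- 1) Allocated size of the table (what DBA_SEGMENTS reports)
select bytes/1024/1024 as mb, blocks, extents
from   dba_segments
where  owner = 'MYSCHEMA' and segment_name = 'CAMPAIGN_REPORT_RAW';

-- 2) and 3) Used space and high water mark: blocks below the HWM = total_blocks - unused_blocks
set serveroutput on
declare
  l_total_blocks  number; l_total_bytes  number;
  l_unused_blocks number; l_unused_bytes number;
  l_file_id number; l_block_id number; l_last_block number;
begin
  dbms_space.unused_space(
    segment_owner             => 'MYSCHEMA',
    segment_name              => 'CAMPAIGN_REPORT_RAW',
    segment_type              => 'TABLE',
    total_blocks              => l_total_blocks,
    total_bytes               => l_total_bytes,
    unused_blocks             => l_unused_blocks,
    unused_bytes              => l_unused_bytes,
    last_used_extent_file_id  => l_file_id,
    last_used_extent_block_id => l_block_id,
    last_used_block           => l_last_block);
  dbms_output.put_line('Blocks below HWM : ' || (l_total_blocks - l_unused_blocks));
  dbms_output.put_line('Unused blocks    : ' || l_unused_blocks);
end;
/

-- 4) Which datafiles hold the table's extents
select distinct f.file_id, f.file_name
from   dba_extents e
join   dba_data_files f on f.file_id = e.file_id
where  e.owner = 'MYSCHEMA' and e.segment_name = 'CAMPAIGN_REPORT_RAW';

-- Prerequisite before the SHRINK SPACE commands shown above
alter table campaign_report_raw enable row movement;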

Similar Messages

  • Automatic table partitioning in Oracle 11g

    Hi All,
    I need to implement automatic table partitioning in Oracle 11g, but the partitioning interval should be on a daily basis (for every day).
    I was able to do this for monthly and yearly intervals, but not daily.
    create table part
    (a date)PARTITION BY RANGE (a)
    INTERVAL (NUMTOYMINTERVAL(1,'MONTH'))
    (partition p1 values less than (TO_DATE('01-NOV-2007','DD-MON-YYYY'))
    Table created
    create table part
    (a date)PARTITION BY RANGE (a)
    INTERVAL (NUMTOYMINTERVAL(1,'YEAR'))
    (partition p1 values less than (TO_DATE('01-NOV-2007','DD-MON-YYYY'))
    Table created
    But if I use DD or DAY instead of YEAR or MONTH it fails. Please suggest how to perform this on a daily basis.
    SQL>
      1  create table part
      2  (a date)PARTITION BY RANGE (a)
      3  INTERVAL (NUMTOYMINTERVAL(1,'DAY'))
      4  (partition p1 values less than (TO_DATE('01-NOV-2007','DD-MON-YYYY'))
      5* )
    SQL> /
    INTERVAL (NUMTOYMINTERVAL(1,'DAY'))
    ERROR at line 3:
    ORA-14752: Interval expression is not a constant of the correct type
    SQL> create table part
    (a date)PARTITION BY RANGE (a)
    INTERVAL (NUMTOYMINTERVAL(1,'DD'))
    (partition p1 values less than (TO_DATE('01-NOV-2007','DD-MON-YYYY'))
    );  2    3    4    5
    INTERVAL (NUMTOYMINTERVAL(1,'DD'))
    ERROR at line 3:
    ORA-14752: Interval expression is not a constant of the correct type
    Please suggest how to resolve this ORA-14752 error when using DAY, DD or HH24.
    -Yasser

    Yes, for different partitions for different months.
    interval (numtoyminterval(1,'MONTH'))
    store in (TS1,TS2,TS3)
    This code will store data in partitions in tablespaces TS1, TS2, and TS3 in a round robin manner.
    For day-wise partitioning, yes, you can use:
    INTERVAL (NUMTODSINTERVAL(1,'day')) or
    INTERVAL (NUMTODSINTERVAL(2,'day')) or
    INTERVAL (NUMTODSINTERVAL(3,'day')) or
    INTERVAL (NUMTODSINTERVAL(4,'day')) or
    INTERVAL (NUMTODSINTERVAL(5,'day')) or
    INTERVAL (NUMTODSINTERVAL(n,'day'))
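    Putting it together, a minimal sketch of a daily interval-partitioned table (the table name and boundary date are just illustrative); the key point is that a one-day interval needs NUMTODSINTERVAL, because NUMTOYMINTERVAL only accepts 'YEAR' and 'MONTH':
    create table part_daily
    (a date)
    partition by range (a)
    interval (numtodsinterval(1,'DAY'))
    (partition p1 values less than (to_date('01-NOV-2007','DD-MON-YYYY'))
    );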

  • Existing table compression in oracle 11g

    Hi,
    We have a schema of 45 GB in Oracle 11g and need to compress the tables, as they are rarely used.
    Please can you tell me what options are available to compress the tables?
    Thanks

    Thanks for the update.
    I was able to compress the other tables, which are much smaller, and I found one LOB segment which is 37 GB,
    so how can I use compression on the LOBSEGMENT? I could not find any documentation for this.
    I found one document in Metalink that says:
    "To achieve LOB compression, you need to specify LOB column storage as SECUREFILE.  With BASICFILE option, you can not use COMPRESSION for LOB column"
    SQL> select owner, segment_name,segment_type , bytes/1024/1024 MB from dba_segments where owner='WEBSPR' order by 4 desc ;
    OWNER                     SEGMENT_NAME              SEGMENT_TYPE                      MB
    WEBSPR                    SYS_LOB0000012869C00002$$ LOBSEGMENT                     37760
    and my existing LOB segment metadata is as below:
    LOB ("DATA") STORE AS (
      TABLESPACE "WEBSPR_DATA" ENABLE STORAGE IN ROW CHUNK 8192 PCTVERSION 10
      NOCACHE LOGGING
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT))
    and my LOB column is not stored as SECUREFILE, so is there any other method to compress the existing LOBSEGMENT?
    Any inputs are appreciated. Thanks
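    One possible approach is to rebuild the LOB as a compressed SECUREFILE. This is a hedged sketch only: SecureFile LOB compression needs the Advanced Compression option, an ASSM tablespace and enough space for the copy, and the table name below is a placeholder because only the LOB segment name appears above (DBMS_REDEFINITION is the lower-downtime alternative):
    alter table my_lob_table            -- placeholder: the table that owns the "DATA" LOB column
      move lob ("DATA")
      store as securefile (
        tablespace webspr_data
        compress medium
      );
    -- The move leaves the table's indexes UNUSABLE, so rebuild them afterwards:
    -- alter index <index_name> rebuild;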

  • Locating user tables in an Oracle 11g database

    Excuse my ignorance on this subject
    But our company has an Oracle 11g database that drives one of our business applications. I am not an Oracle admin and there is very little documentation on the application itself; however, the application seems to have its own set of explicit login credentials (username and password), so I am guessing they are hashed somewhere in the database tables.
    My question would be: are there any default Oracle tables where user credentials would typically be? Or tips on tracking down where the password hashes may be? Or can this differ from application to application? Any tips welcome. Apologies for the naivety of the question. My goal is to identify which database accounts can query the table the hashes are in, as we have some users who can access the database for data analysis purposes, but I don't want them to have access to that table.

    user599292 wrote:
    EdStevens wrote:
    user599292 wrote:
    [The original question is quoted above.]
    The information relative to the user accounts is revealed in the view DBA_USERS, which normal users should not have a need to see. However, the passwords are stored as a true hash: the hash cannot be used directly and cannot be reversed, so being able to see the hashed password does not in itself constitute a security risk. When a user is being authenticated, Oracle does NOT 'decrypt' the stored password to see whether it matches the password presented by the user; rather, the password presented by the user is hashed, and that hash value is compared against the stored value.
    My concern was that if they could extract those password hash values, there are many free password crackers; if they ran dictionary values against those hashes and any matched, they would then have some passwords to gain perhaps elevated access in the application.
    Such a method would have to assume a password, know how Oracle 'salts' the password, hash the result, then compare it to the hashed values from the table. If you employ even a modicum of password complexity enforcement, I doubt that your developers are going to have access to the kind of computing capacity that would be required to get a positive result within your lifetime.
    You need to do three things.
    First and foremost, adhere to the principle of 'least privilege'. Do not grant a user account any privileges that are not required for that account to complete its business task. That includes access to any tables or views. Be wary of any "--ANY---" privileges.
    Second, use the password complexity function to enforce a reasonable level of password complexity.
    Third, set the user's profile to expire the password after 'x' number of days and prevent the reuse of a password until after 'y' iterations.
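    A hedged sketch of the second and third points (the profile name and target user are illustrative; the verify function name is the one created by utlpwdmg.sql in 11g, so adjust it to whatever your script installs):
    -- Run utlpwdmg.sql (as SYS) first to create the password verify function, then:
    create profile app_users limit
      failed_login_attempts    5
      password_life_time       90    -- expire the password after 90 days
      password_reuse_max       10    -- no reuse until 10 other passwords have been used
      password_verify_function verify_function_11g;

    alter user report_user profile app_users;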

  • Use or not to use table compression in Oracle 11g (11.2)?

    Hi All,
    I was trying to explore the difference between COMPRESS FOR ALL OPERATIONS, COMPRESS FOR DIRECT_LOAD OPERATIONS and NOCOMPRESS, for a table in Oracle 11.2.
    I know, we can go thru documentation and make a decision.
    Still I have run some very simple tests here.
    Case 1. Create the table with COMPRESS FOR DIRECT_LOAD OPERATIONS and then update a few records
    Case 2. Create the table with COMPRESS FOR ALL OPERATIONS and then update a few records
    Case 3. Create the table with NOCOMPRESS and update a few rows
    I know Case 1 is a real dummy, but I still did it to see the difference between Case 1 and Case 2.
    --  ---------- CASE 1 --------
    SQL> create table aaa
      2  nologging
      3  compress for direct_load operations
      4  as
      5  select * from all_objects ;
    Table created.
    Elapsed: 00:00:02.00
    SQL> select count(*) from aaa ;
      COUNT(*)
         50317
    Elapsed: 00:00:00.11
    SQL> update aaa set created=sysdate where owner='SYS' and object_type='VIEW';
    3485 rows updated.
    Elapsed: 00:00:05.43
    SQL> commit;
    Commit complete.
    Elapsed: 00:00:00.04
    SQL>
    --  ---------- CASE 2 --------
    SQL>
    SQL> create table bbb
      2  nologging
      3  compress for all operations
      4  as
      5  select * from all_objects ;
    Table created.
    Elapsed: 00:00:02.01
    SQL> select count(*) from bbb ;
      COUNT(*)
         50318
    Elapsed: 00:00:00.20
    SQL> update bbb set created=sysdate  where owner='SYS' and object_type='VIEW';
    3485 rows updated.
    Elapsed: 00:00:05.31
    SQL> commit;
    Commit complete.
    Elapsed: 00:00:00.04
    SQL>
    SQL>
    --  ---------- CASE 3 --------
    SQL> create table ccc
      2  nologging
      3  nocompress
      4  as
      5  select * from all_objects ;
    Table created.
    Elapsed: 00:00:01.84
    SQL> select count(*) from ccc ;
      COUNT(*)
         50319
    Elapsed: 00:00:00.15
    SQL> update ccc set created=sysdate  where owner='SYS' and object_type='VIEW';
    3485 rows updated.
    Elapsed: 00:00:00.06
    Case 1 and Case 2 took 5.43 and 5.31 seconds respectively; Case 3 took 0.06 seconds.
    The difference is drastic.
    Am I doing the wrong kind of test (let's be honest)?
    Should we not use compression for OLTP systems (or any systems with reasonable updates)?
    Apart from allowing a column to be dropped, what is the difference between COMPRESS FOR ALL OPERATIONS and COMPRESS FOR DIRECT_LOAD OPERATIONS? Where/how can I see that difference?
    Thoughts please.
    Thanks in advance.

    Hi,
    I have realised that I was using syntax which is deprecated in 11.2.
    So I am doing the same test with
    COMPRESS BASIC
    COMPRESS FOR OLTP
    instead of
    COMPRESS FOR DIRECT_LOAD OPERATIONS (deprecated)
    COMPRESS FOR ALL OPERATIONS (deprecated)
    But the results are the same. Even with COMPRESS FOR OLTP, my update takes about 5.4 seconds, which is not very different from COMPRESS BASIC.
    -- --------- CASE 1 ---------------
    SQL> create table aaa
      2  nologging
      3  compress basic
      4  as
      5  select * from all_objects ;
    Table created.
    Elapsed: 00:00:02.46
    SQL>
    SQL> select count(*) from aaa ;
      COUNT(*)
         50318
    Elapsed: 00:00:00.11
    SQL>
    SQL> update aaa set created=sysdate where owner='SYS' and object_type='VIEW';
    3485 rows updated.
    Elapsed: 00:00:05.48
    -- ---------- CASE 2 ---------------
    SQL> create table bbb
      2  nologging
      3  compress for oltp
      4  as
      5  select * from all_objects ;
    Table created.
    Elapsed: 00:00:02.01
    SQL>
    SQL> select count(*) from bbb ;
      COUNT(*)
         50319
    Elapsed: 00:00:00.12
    SQL>
    SQL> update bbb set created=sysdate  where owner='SYS' and object_type='VIEW';
    3485 rows updated.
    Elapsed: 00:00:05.25
    -- ---------- CASE 3 ---------------
    SQL> create table ccc
      2  nologging
      3  nocompress
      4  as
      5  select * from all_objects ;
    Table created.
    Elapsed: 00:00:01.81
    SQL>
    SQL> select count(*) from ccc ;
      COUNT(*)
         50320
    Elapsed: 00:00:00.10
    SQL>
    SQL> update ccc set created=sysdate  where owner='SYS' and object_type='VIEW';
    3485 rows updated.
    Elapsed: 00:00:00.04
    Any thoughts?
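    As a quick sanity check on the test setup, the compression attribute each table actually ended up with can be read from the dictionary (a hedged sketch; in 11.2 the relevant columns are COMPRESSION and COMPRESS_FOR):
    select table_name, compression, compress_for
    from   user_tables
    where  table_name in ('AAA','BBB','CCC');
    -- Expected: AAA = BASIC, BBB = OLTP, CCC = DISABLED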

  • Oracle Profile Vs Oracle Resource Manager in Oracle 11g

    Hi Friends,
    What are the advantages / disadvantages of using Oracle Profiles versus Oracle Resource Manager to limit Oracle resources for Oracle users in Oracle 11.2.0.1?
    Regards,
    DB

    If you have a single database servicing multiple Services / Applications / Schemas (e.g. a typical database consolidation), you could use Profiles and Resource Manager to place limits on resource usage by Service / Application / Schema. That way, for example, a 32 CPU database server running a single database can still limit the number of cores for each Service / Application / Schema using the Resource Manager.
    Hemant K Chitale
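    To make the contrast concrete, here is a hedged sketch of both mechanisms (all plan, group, profile and user names are illustrative). A profile caps per-session resources and requires RESOURCE_LIMIT to be TRUE; the Resource Manager shares CPU between consumer groups via a plan:
    -- Profile: hard per-session limits (requires resource_limit = true)
    alter system set resource_limit = true;
    create profile report_profile limit
      sessions_per_user 5
      cpu_per_call      30000;   -- in hundredths of a second
    alter user report_user profile report_profile;

    -- Resource Manager: relative CPU shares between consumer groups
    begin
      dbms_resource_manager.create_pending_area;
      dbms_resource_manager.create_consumer_group('REPORTING_GRP', 'ad hoc reporting');
      dbms_resource_manager.create_plan('DAYTIME_PLAN', 'weight CPU between groups');
      dbms_resource_manager.create_plan_directive(
        plan => 'DAYTIME_PLAN', group_or_subplan => 'REPORTING_GRP',
        comment => 'reports get 20% of CPU', mgmt_p1 => 20);
      dbms_resource_manager.create_plan_directive(
        plan => 'DAYTIME_PLAN', group_or_subplan => 'OTHER_GROUPS',
        comment => 'everything else', mgmt_p1 => 80);
      dbms_resource_manager.set_consumer_group_mapping(
        dbms_resource_manager.oracle_user, 'REPORT_USER', 'REPORTING_GRP');
      dbms_resource_manager.validate_pending_area;
      dbms_resource_manager.submit_pending_area;
    end;
    /
    exec dbms_resource_manager_privs.grant_switch_consumer_group('REPORT_USER', 'REPORTING_GRP', false)
    alter system set resource_manager_plan = 'DAYTIME_PLAN';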

  • Create Table Trigger to replicate data from MSSQL2K5 to Oracle 11G on Linux

    I am trying to create a trigger on my MSSQL 2k5 server so that when a record is inserted, a replicated record is created in a table on an Oracle 11g database on a Linux server (Oracle Linux 6).
    Creating the trigger is easy, but when I test it I am getting an error stating the following:
    .NetSqlClient Data Provider The operation could not be performed because OLE DB Provider 'OraOLEDB.Oracle' for linked server "<myserver>" was unable to begin the distributed transaction.
    OLEDB Provider "OraOLEDB.Oracle" for linked server "<myserver>" returned: "New transaction cannot enlist in the specified transaction coordinator"
    Here is the trigger (MSSQL):
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    CREATE PROCEDURE insert_aban8_state
        @an8 int,
        @st nvarchar(3)
    AS
    BEGIN
        SET NOCOUNT ON;
        declare @c numeric
        select @c = count(*) from [e9db]..[CRPDTA].[ABAN8_STATE$] where alan8=@an8 and aladds=@st
        if(@c =0)
         begin
            insert into [e9db]..[CRPDTA].[ABAN8_STATE$]
            values(@an8, @st)
         end
        END
    GO
    After reviewing the MS Transaction Coordinator, I am now totally confused. I checked the services and have the MS DTC enabled and running, but am not sure what to do on the Linux side.
    Does the Oracle Services for Microsoft Transaction Server (OraMTS) work on Linux? I could only find references for this for Oracle 11g on Windows.
    What do I need to do to enable this replication via mssql table trigger to Oracle11g on Linux?

    nsidev wrote:
    While I would agree in part, it appears from the message that the trigger is requiring the Transaction Service to be enabled on both the host and target. The point of this post is to determine what, if anything, I need to do on my Oracle DB to allow the trigger to complete successfully.
    There are many posts found with Google concerning the OraMTS service on the Oracle system, but they all appear to be for Windows based systems. My question is, is this service part of the Linux based Oracle DB and if so, how do I initialize it?
    If I am mistaken and this is truly an issue with the MSSQL server, I will replicate the post in those forums. I am just looking for direction and help.
    1) I have NEVER heard that Oracle has, knows about, or supports any "Transaction Service".
    2) Consider what I previously posted regarding the flavor of client source.
    If your assertion about this mythical service were correct, then the Oracle DB would have to be able to "know" that this client connection was originated by SQL Server.
    I don't understand how or why Oracle should behave differently depending upon whether INSERT is done inside or outside a MS SQL Server trigger.
    Please explain & elaborate why Oracle should behave differently depending upon the source of any INSERT statement.
    3) From Oracle DB standpoint an INSERT is an INSERT; regardless of the client.

  • The danger of memory target in Oracle 11g - request for discussion.

    Hello, everyone.
    This is not a question, but kind of request for discussion.
    I believe that many of you have heard something about automatic memory management in Oracle 11g.
    The concept is that Oracle manages the target sizes of the SGA and PGA. Yes, believe it or not, all we have to do is tell Oracle how much memory it can use.
    But I have a big concern about this. The optimizer takes the PGA size into consideration when calculating the cost of sort-related operations.
    So what would happen when Oracle dynamically changes the target size of PGA? Following is a simple demonstration of my concern.
    UKJA@ukja116> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
    PL/SQL Release 11.1.0.6.0 - Production
    CORE    11.1.0.6.0      Production
    TNS for 32-bit Windows: Version 11.1.0.6.0 - Production
    NLSRTL Version 11.1.0.6.0 - Production
    -- Configuration
    *.memory_target=350m
    *.memory_max_target=350m
    create table t1(c1 int, c2 char(100));
    create table t2(c1 int, c2 char(100));
    insert into t1 select level, level from dual connect by level <= 10000;
    insert into t2 select level, level from dual connect by level <= 10000;
    -- First 10053 trace
    alter session set events '10053 trace name context forever, level 1';
    select /*+ use_hash(t1 t2) */ count(*)
    from t1, t2
    where t1.c1 = t2.c1 and t1.c2 = t2.c2
    alter session set events '10053 trace name context off';
    -- Do aggressive hard parse to make Oracle dynamically change the size of memory segments.
    declare
      pat1     varchar2(1000);
      pat2     varchar2(1000);
      va       number;
      vc       sys_refcursor;
      vs        varchar2(1000);
    begin
      select ksppstvl into pat1
        from sys.xm$ksppi i, sys.xm$ksppcv v   -- views for x$ table
        where i.indx = v.indx
        and i.ksppinm = '__pga_aggregate_target';
      for idx in 1 .. 10000000 loop
        execute immediate 'select count(*) from t1 where rownum = ' || (idx+1)
              into va;
        if mod(idx, 1000) = 0 then
          sys.dbms_system.ksdwrt(2, idx || 'th execution');
          select ksppstvl into pat2
          from sys.xm$ksppi i, sys.xm$ksppcv v   -- views for x$ table
          where i.indx = v.indx
          and i.ksppinm = '__pga_aggregate_target';
          if pat1 <> pat2 then
            sys.dbms_system.ksdwrt(2, 'yep, I got it!');
            exit;
          end if;
        end if;
      end loop;
    end;
    -- As to alert log file,
    25000th execution
    26000th execution
    27000th execution
    28000th execution
    29000th execution
    30000th execution
    yep, I got it! <-- the pga target changed with 30000th hard parse
    -- Second 10053 trace for same query
    alter session set events '10053 trace name context forever, level 1';
    select /*+ use_hash(t1 t2) */ count(*)
    from t1, t2
    where t1.c1 = t2.c1 and t1.c2 = t2.c2
    alter session set events '10053 trace name context off';
    With the above test case, I found that:
    1. Oracle invalidates the query when internal pga aggregate size changes, which is quite natural.
    2. With changed pga aggregate size, Oracle recalculates the cost. These are excerpts from the both of the 10053 trace files.
    -- First 10053 trace file
    PARAMETERS USED BY THE OPTIMIZER
      PARAMETERS WITH ALTERED VALUES
    Compilation Environment Dump
    _smm_max_size                       = 11468 KB
    _smm_px_max_size                    = 28672 KB
    optimizer_use_sql_plan_baselines    = false
    optimizer_use_invisible_indexes     = true
    -- Second 10053 trace file
    PARAMETERS USED BY THE OPTIMIZER
      PARAMETERS WITH ALTERED VALUES
    Compilation Environment Dump
    _smm_max_size                       = 13107 KB
    _smm_px_max_size                    = 32768 KB
    optimizer_use_sql_plan_baselines    = false
    optimizer_use_invisible_indexes     = true
    Bug Fix Control Environment
    The 10053 trace file clearly shows that Oracle recalculates the cost of the query when the internal PGA aggregate target size changes. So there is a great danger of an unexpected plan change while Oracle dynamically controls the memory segments.
    I believe that this is designed behavior, but the negative side effect is not negligible.
    I would just like to hear your opinions on this behavior.
    Do you think this is acceptable? Or is this another great feature that nobody wants to use, like the automatic tuning advisor?
    ================================
    Dion Cho - Oracle Performance Storyteller
    http://dioncho.wordpress.com (english)
    http://ukja.tistory.com (korean)
    ================================

    I made a slight modification to my test case to produce a mixed workload of hard parses and logical reads.
    *.memory_target=200m
    *.memory_max_target=200m
    create table t3(c1 int, c2 char(1000));
    insert into t3 select level, level from dual connect by level <= 50000;
    declare
      pat1     varchar2(1000);
      pat2     varchar2(1000);
      va       number;
    begin
      select ksppstvl into pat1
        from sys.xm$ksppi i, sys.xm$ksppcv v
        where i.indx = v.indx
        and i.ksppinm = '__pga_aggregate_target';
      for idx in 1 .. 1000000 loop
        -- try many patterns here!
        execute immediate 'select count(*) from t3 where 10 = mod('||idx||',10)+1' into va;
        if mod(idx, 100) = 0 then
          sys.dbms_system.ksdwrt(2, idx || 'th execution');
          for p in (select ksppinm, ksppstvl
              from sys.xm$ksppi i, sys.xm$ksppcv v
              where i.indx = v.indx
              and i.ksppinm in ('__shared_pool_size', '__db_cache_size', '__pga_aggregate_target')) loop
              sys.dbms_system.ksdwrt(2, p.ksppinm || ' = ' || p.ksppstvl);
          end loop;
          select ksppstvl into pat2
          from sys.xm$ksppi i, sys.xm$ksppcv v
          where i.indx = v.indx
          and i.ksppinm = '__pga_aggregate_target';
          if pat1 <> pat2 then
            sys.dbms_system.ksdwrt(2, 'yep, I got it! pat1=' || pat1 ||', pat2='||pat2);
            exit;
          end if;
        end if;
      end loop;
    end;
    /
    This test case showed an expected and reasonable result, like the following:
    100th execution
    __shared_pool_size = 92274688
    __db_cache_size = 16777216
    __pga_aggregate_target = 83886080
    200th execution
    __shared_pool_size = 92274688
    __db_cache_size = 16777216
    __pga_aggregate_target = 83886080
    300th execution
    __shared_pool_size = 88080384
    __db_cache_size = 20971520
    __pga_aggregate_target = 83886080
    400th execution
    __shared_pool_size = 92274688
    __db_cache_size = 16777216
    __pga_aggregate_target = 83886080
    500th execution
    __shared_pool_size = 88080384
    __db_cache_size = 20971520
    __pga_aggregate_target = 83886080
    1100th execution
    __shared_pool_size = 92274688
    __db_cache_size = 20971520
    __pga_aggregate_target = 83886080
    1200th execution
    __shared_pool_size = 92274688
    __db_cache_size = 37748736
    __pga_aggregate_target = 58720256
    yep, I got it! pat1=83886080, pat2=58720256
    Oracle kept bouncing memory between the shared pool and the buffer cache, and at about the 1200th execution it suddenly stole some memory from the PGA target area to increase the db cache size.
    (I'm still in the dark about this automatic memory target management in 11g. More research is needed!)
    I think that this is very clear and natural behavior. I just want to point out that it could result in an unwanted catastrophe in special cases, especially combined with logic holes and bugs.
    ================================
    Dion Cho - Oracle Performance Storyteller
    http://dioncho.wordpress.com (english)
    http://ukja.tistory.com (korean)
    ================================
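    For anyone reproducing this, the resizing can also be watched through documented views instead of the X$ fixed tables (a hedged sketch; V$MEMORY_DYNAMIC_COMPONENTS and V$MEMORY_RESIZE_OPS are available in 11g when MEMORY_TARGET is in use):
    select component,
           round(current_size/1024/1024) as current_mb,
           round(min_size/1024/1024)     as min_mb,
           round(max_size/1024/1024)     as max_mb,
           last_oper_type
    from   v$memory_dynamic_components
    where  component in ('SGA Target','PGA Target','shared pool','DEFAULT buffer cache')
    order  by component;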

  • How to connect from Oracle 11g to SQL Server 2008 R2

    Hi,
    Is it possible to connect from Oracle 11g on AIX to SQL Server 2008 R2? If so, what is the preferred method?
    SQL Server has the original table. From Oracle 11g, we want to access the data that is in SQL Server in real time.
    Thank You
    Sarayu

    Hi,
    Have a look at these Oracle notes for the full information on the gateways -
    Master Note for Oracle Gateway Products (Doc ID 1083703.1)
    Functional Differences Between DG4ODBC and Specific Database Gateways (Doc ID 252364.1)
    Gateway and Generic Connectivity Licensing Considerations (Doc ID 232482.1)
    How to Setup DG4MSQL (Oracle Database Gateway for MS SQL Server) 64bit Unix OS (Linux, Solaris, AIX,HP-UX) (Doc ID 562509.1)
    How to Configure DG4ODBC on 64bit Unix OS (Linux, Solaris, AIX, HP-UX Itanium) to Connect to Non-Oracle Databases Post Install (Doc ID 561033.1)
    The Database Gateway for SQL*Server (DG4MSQL) needs a separate license but the Database Gateway for ODBC (DG4ODBC) is included in your RDBMS license. You only need to provide the third party ODBC driver needed by DG4ODBC.
    Regards,
    Mike
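    Once one of those gateways is configured and registered with a listener, the Oracle side is just a database link. A hedged sketch with placeholder names (the TNS alias must point to the gateway entry with (HS=OK), and the credentials and table name are illustrative):
    create database link mssql_link
      connect to "sql_user" identified by "sql_password"
      using 'dg4msql_alias';

    -- SQL Server identifiers usually arrive case-sensitive through the gateway, hence the quotes
    select * from "SomeTable"@mssql_link;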

  • Simple way to connect Oracle 11g XE with MS SQL Server 2000

    Is there a simple way to access a SQL Server database/tables from Oracle 11g XE (Windows 32-bit) on the same machine? I am a novice, so kindly keep it simple. Thanks

    To connect to a SQL Server you need to use an Oracle product called Database gateway for ODBC which uses a 3rd party ODBC driver to connect to the SQL Server.
    The easiest set up is to install DG4ODBC release 11.2 on the SQL Server. How to configure the Database Gateway for ODBC is described in note:
    How to Configure DG4ODBC (Oracle Database Gateway for ODBC) on Windows 32bit to Connect to Non-Oracle Databases Post Install          [Document 466225.1]     
    when you install DG4ODBC on a 32-bit Windows operating system; the instructions for a 64-bit Windows operating system can be found in this note:
    How to Configure DG4ODBC (Oracle Database Gateway for ODBC) on 64bit Windows Operating Systems to Connect to Non-Oracle Databases Post Install          [Document 1266572.1]     
    The database gateway for ODBC is available for free from here:
    http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html
    Please make sure you select the 32-bit or 64-bit Windows version depending on the platform where you've installed the SQL Server (and on which you now install the gateway), and download the <win32/64>_11gR2_gateways.zip CD.
    Once downloaded, unzip it and install it using the Oracle Universal installer. Make sure you select the product Database Gateway for ODBC (there's also a dedicated SQL server gateway called Database Gateway for MS SQL Server - this gateway is NOT for free and it requires a separate license).

  • Table space not reduce after delete in oracle 11G

    Hi Team,
    I have a DB 11.1.0.7 on Unix.
    I have executed deletes against tables in a tablespace, but the tablespace does not shrink.
    Thanks

    935299 wrote:
    What segment space management type is defined for the tablespace in question?
    MANUAL
    Then you should check out the documentation some more.
    But even if you shrink the table segment, what is that going to do for the datafile size?
    I don't understand you.
    Thanks
    Your thread is titled "Table space not reduce after delete in oracle 11G", which implies to me that you are interested in reducing the size of a tablespace (which really means reducing the size of the underlying datafile(s)).
    So, if you shrink the size of the sys.aud$ table, will that cause the datafile(s) to become smaller? Will it accomplish your goal? What else, if anything, needs to happen?
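    As a follow-up, a hedged sketch of how to see how far each datafile could actually be resized down (the tablespace name is a placeholder): a datafile can only shrink back to just beyond its highest allocated extent, so deleted rows inside a segment do not help until the segment itself is shrunk or moved:
    select f.file_id, f.file_name,
           ceil(f.bytes/1024/1024) as current_mb,
           ceil(nvl(max(e.block_id + e.blocks),1) * t.block_size / 1024 / 1024) as smallest_possible_mb
    from   dba_data_files f
    join   dba_tablespaces t on t.tablespace_name = f.tablespace_name
    left join dba_extents e on e.file_id = f.file_id
    where  f.tablespace_name = 'MY_TS'
    group  by f.file_id, f.file_name, f.bytes, t.block_size
    order  by f.file_id;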

  • Can.t find employee and other sample tables in oracle 11g express edition

    Hi, I am new to Oracle 11g Express Edition. I want to run my queries against the employee table, but I can't find it. When I use
    Select * from tab;
    it shows no rows selected.
    Is it present in 11g or not?

    You have to unlock the 'HR' DB user. These are the tables you will find in the HR schema:
    TNAME
    REGIONS
    COUNTRIES
    LOCATIONS
    DEPARTMENTS
    JOBS
    EMPLOYEES
    JOB_HISTORY
    EMP_DETAILS_VIEW
    To unlock HR, connect to the Oracle web page as SYSTEM (or SYS), go to Administration --> Manage Database Users --> click HR and unlock it.
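    Alternatively, a hedged one-liner from SQL*Plus (run as SYSTEM or SYS; choose your own password):
    alter user hr identified by hr_password account unlock;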

  • Enterprise manager version with Oracle 11g

    Which version of Enterprise Manager do I need to install with 64-bit Oracle 11g? What is the latest version? Where can I get this information? The latest version that I could find for download on the Oracle website is 10.2.0.4, but the one that comes with 11g Enterprise Edition says 11.1 (the one without Grid Control).

    The latest Grid Control is 10.2.0.4 - note that you have to turn off the instance status metric or you get too many open files - a bug that they are doing a patch for.
    You are correct though - 11g DB Control is bundled with the DB, but that is DB Control, not Grid Control - a similar but distinct product.

  • How to find the size of a Oracle 11g table

    I want to find the size of a table in Oracle 11g.
    I hope my question is clear.
    Please revert with a reply to my query.
    Regards

    --- List all tables sorted by size
    set lines 100 pages 999
    col segment_name format a40
    col mb format 999,999,999
    select segment_name
    ,      ceil(sum(bytes) / 1024 / 1024) "MB"
    from   dba_segments
    where  segment_type = 'TABLE'
    group  by segment_name
    order  by ceil(sum(bytes) / 1024 / 1024) desc
    /
    --- List all tables owned by a user sorted by size
    set lines 100 pages 999
    col segment_name format a40
    col mb format 999,999,999
    select segment_name
    ,      ceil(sum(bytes) / 1024 / 1024) "MB"
    from   dba_segments
    where  owner like '&user'
    and    segment_type = 'TABLE'
    group  by segment_name
    order  by ceil(sum(bytes) / 1024 / 1024) desc
    /

  • External tables in Oracle 11g database is not loading null value records

    We have upgraded our DB from Oracle 9i to 11g...
    It was noticed that, in 9i, data loads through external tables rejected records with null columns. However, after upgrading to 11g, records with null values are allowed.
    Is there a way to restrict loading records that have certain columns that are null?
    Can you please share whether this is the expected behaviour in Oracle 11g?
    Thanks.

    Data isn't really loaded to an External Table. Rather, the external table lets you query an external data source as if it were a regular database table. To not see the rows with the NULL value, simply filter those rows out with your SQL statement:
    SELECT * FROM my_external_table WHERE colX IS NOT NULL;
    HTH,
    Brian
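    If the rows should instead be rejected when the file is read rather than filtered in every query, the ORACLE_LOADER access parameters also take a LOAD WHEN clause. Below is a hedged sketch with illustrative directory, file and column names:
    create table emp_ext (
      empno number,
      ename varchar2(30)
    )
    organization external (
      type oracle_loader
      default directory data_dir
      access parameters (
        records delimited by newline
        load when (empno != blanks)        -- skip records whose empno field is empty
        fields terminated by ','
        missing field values are null
      )
      location ('emp.csv')
    )
    reject limit unlimited;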
