List All Tables in Schema with LONG?

Is there a way to identify any table in my schema that has a LONG data type? Looking for a query.
Thank you!

SELECT table_name, column_name FROM user_tab_columns WHERE data_type = 'LONG'
or
SELECT table_name, column_name FROM cols WHERE data_type = 'LONG'
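A variation, in case you also want to catch LONG RAW columns or look beyond your own schema (using ALL_TAB_COLUMNS instead of USER_TAB_COLUMNS is an assumption about the privileges you have):

SELECT owner, table_name, column_name, data_type
FROM   all_tab_columns
WHERE  data_type IN ('LONG', 'LONG RAW')
ORDER  BY owner, table_name, column_name;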

Similar Messages

  • Need a query to list all table names

    I have more than 500 tables in my database.
    'insert_date' & 'update_date' columns are found in more than 100 tables with data type as date.
    I need a query to list all table names and 'insert_date' , 'update_date' column's content.
    Please Help
    Lee1212

    What do you mean by "column's content"? A table can have many rows. Do you want to display all the distinct values for these columns?
    Below is a query to get the tables which have the columns INSERT_DATE and UPDATE_DATE:
    select table_name
    from user_tab_columns
    where column_name = 'INSERT_DATE'
       or column_name = 'UPDATE_DATE'
    /
    You can write a PL/SQL block to retrieve the distinct values of INSERT_DATE for these tables:
    declare
      TYPE ref_cur IS REF CURSOR;
      insert_date_cur ref_cur;
      lv_insert_date DATE;
      cursor tables_list IS
        select table_name
        from user_tab_columns
        where column_name = 'INSERT_DATE';
    begin
      for cur_tables in tables_list loop
        OPEN insert_date_cur for 'SELECT DISTINCT insert_date from '||cur_tables.table_name;
        DBMS_OUTPUT.PUT_LINE(cur_tables.table_name);
        DBMS_OUTPUT.PUT_LINE('--------------------------------');
        LOOP
          FETCH insert_date_cur into lv_insert_date;
          EXIT WHEN insert_date_cur%NOTFOUND;
          DBMS_OUTPUT.PUT_LINE(lv_insert_date);
        END LOOP;
        CLOSE insert_date_cur;
      end loop;
    end;
    /
    I haven't tested this code. There might be some errors. Just posted something to start with for you.

  • Analyzing all tables in schema

    Hello everyone,
    I used the command below to analyze all tables in the schema:
    EXEC DBMS_STATS.gather_schema_stats (ownname => 'CONTRACT', cascade => true, estimate_percent => dbms_stats.auto_sample_size);
    When I look at the tables in dba_tables, LAST_ANALYZED has not changed to today for any of them. But when I ran
    EXECUTE DBMS_STATS.GATHER_TABLE_STATS(ownname => 'CONTRACT', tabname => 'CONT_NAME', method_opt => 'FOR ALL COLUMNS', granularity => 'ALL', cascade => TRUE, degree => DBMS_STATS.DEFAULT_DEGREE);
    I see LAST_ANALYZED changed to today in dba_tables.
    If I need LAST_ANALYZED to change for all tables, do I need to run the above command for every table? There are more than 700 tables for this application.
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
    PL/SQL Release 11.2.0.2.0 - Production
    CORE    11.2.0.2.0      Production
    TNS for IBM/AIX RISC System/6000: Version 11.2.0.2.0 - Production
    NLSRTL Version 11.2.0.2.0 - Production

    user3636719 wrote:
    EXEC DBMS_STATS.gather_schema_stats (ownname => 'CONTRACT', cascade =>true,estimate_percent => dbms_stats.auto_sample_size);
    and
    EXECUTE DBMS_STATS.GATHER_TABLE_STATS(ownname => 'CONTRACT', tabname => 'CONT_NAME', method_opt => 'FOR ALL COLUMNS', granularity => 'ALL', cascade => TRUE, degree => DBMS_STATS.DEFAULT_DEGREE);
    are fundamentally different; you cannot compare them. In gather_schema_stats, Oracle used mostly defaults, decided that no tables needed new stats collected, and so it didn't do anything. In the second, you changed method_opt, granularity, degree etc. from their default values (as set in your db, perhaps), so the db went ahead and collected stats.
    You need to look up the manual and try to understand the default and non-default behaviour of the parameters, and then make an educated decision. Changing stats randomly is generally not a great idea.
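    If the goal really is to refresh statistics for every table in the schema regardless of staleness, gather_schema_stats can be told so explicitly. A sketch only (whether you want force => TRUE, which also regathers objects with locked statistics, is a judgement call):
    BEGIN
      DBMS_STATS.GATHER_SCHEMA_STATS(
        ownname          => 'CONTRACT',
        options          => 'GATHER',          -- gather for all objects, not just stale or empty ones
        estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
        cascade          => TRUE,
        force            => TRUE);             -- also regather objects whose statistics are locked
    END;
    /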

  • Find all tables in db with column name of a particular string?

    I'm looking for all tables in a db that have a certain column name. How can I find this?

    John,
    thanks for your answer.
    John Mcginnis wrote:
    The quick search field in the schema browser does client-side filtering of the list of objects. This means we can only filter on things that we have already fetched from the database, like the object name. We have no current plans to extend the types of information that you can filter on, although it is possible we might add the ability to filter on a few other types of information that we generally fetch with the object name. However, filtering by column name would require pre-fetching the lists of columns for all tables and as such is not likely to be added to the schema browser's search field.
    I'm sure you guys did a Toad review to understand how things are done there... so
    My 2 cents: instead of pre-fetching the columns of all tables, why don't you add a button below the table drop-down list that fetches the tables based on the column filter condition - exactly like Toad?
    Thanks,
    Dani
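    Independently of the schema browser, the underlying question can be answered straight from the data dictionary; a sketch (use DBA_TAB_COLUMNS instead of ALL_TAB_COLUMNS if you have the privilege, and note that 'CUSTOMER_ID' is just a placeholder column name):
    SELECT owner, table_name
    FROM   all_tab_columns
    WHERE  column_name = 'CUSTOMER_ID'
    ORDER  BY owner, table_name;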

  • How to delete all TABLEs in Schema SYS which are created since 09:15?

    Unfortunately a script created lots of tables in the wrong tablespace (SYSTEM) and schema (SYS).
    How can I delete (in one DDL command) all tables which were created in tablespace SYSTEM and schema SYS during the last 3 hours, i.e. since 09:15 on 25 Sep 2011?
    Alternatively: how can I move these tables to another schema (e.g. ATEST) and tablespace (USERS)?
    Is this possible with Oracle XE or only with Oracle Enterprise?
    Peter

    You can query dba_objects and join it with dba_tables where tablespace_name = 'SYSTEM', then drop the tables returned by the query; the idea is to use the following query:
    SQL> select OWNER, OBJECT_NAME from dba_objects where OBJECT_TYPE='TABLE' and OWNER = 'SYS' and CREATED >= sysdate - 3 / 24;
    Please consider marking your questions as answered, when that is the case.
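    A sketch of how the cleanup could be scripted, assuming the tables really are throw-away objects that a script accidentally created in SYS; generate the statements first and review them, since dropping the wrong object in SYS can damage the data dictionary:
    -- Generate DROP statements for review; do not run them blindly against SYS
    SELECT 'DROP TABLE "' || owner || '"."' || object_name || '" PURGE;' AS ddl
    FROM   dba_objects
    WHERE  object_type = 'TABLE'
    AND    owner = 'SYS'
    AND    created >= TO_DATE('2011-09-25 09:15', 'YYYY-MM-DD HH24:MI');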

  • How to list all tables that belong to the current user?

    hi all
    "select tblowner,tblname from tables where tblowner in (select user from dual);"
    I want a list of all tables that belong to the current user. I use the SQL above, but it returns "no rows selected"; but if I replace the subquery with a literal, like
    "select tblowner,tblname from tables where tblowner = 'JFMDB';"
    I get the list. Why?
    Thank you very much!

    This looks like a bug that was fixed in 7.0.5.13 and onwards. I can reproduce what you are seeing in 7.0.5.10.0 (Linux x86-64 / Access control enabled instance from root install) but not 7.0.5.13.0 (Linux x86-64 / Access control enabled instance from root install).
    Unfortunately I don't have a bug number to pass on. I can't see anything relevant listed in the Release Notes and I haven't found a likely candidate in our internal listings. This may well have been one that was fixed "in passing" when RnD were working on something similar.

  • Fastest way to update column in all tables of schema

    In our schema we have two columns ColA and ColB common in all tables in our schema.
    Suppose these columns have values as below in all tables
    ColA     ColB     
    A1     B11     
    A12     B22     
    ABC     DEF     
    Now we have to update ColA and ColB wherever they contain these alphanumeric values, in all tables; some tables have a few hundred records and some have millions of records.
    Could you gurus suggest the fastest way to achieve this?
    What we are thinking is to write a procedure to which we can pass multiple tables to be updated simultaneously, and build a collection within the procedure with the following values:
    ColA     ColA_R     ColB     ColB_R
    A1       aa         B11      bb
    A12      aaa        B22      bbb
    ABC      No Update  DEF      No Update
    So whenever we find a value matching A1 we update it with aa, whenever we find a value matching B11 we update it with bb, and so on.
    Your inputs are welcome so that we can achieve this in the fastest manner.
    Thanks,
    Tony

    I would be tempted to do it something like this:
    Create an index-organized table for the ColA updates (old_val, new_val) with a PK on old_val, and another one for the ColB values. This could possibly be a single table, depending on how many distinct values there are for ColA and ColB, and on whether you are sure that a given old value (such as A1) always maps to the same new value (aa) whichever column it appears in.
    Then do the updates as an updateable join view something like:
    UPDATE (SELECT t1.cola, iot.new_val
            from tab1 t1, new_values_iot iot
            where t1.cola = iot.old_val)
    SET cola = new_val
    This would require two rounds of updates, one for ColA and one for ColB, but they could be parallelized somewhat by distributing the tables to be updated across several PL/SQL blocks, each updating a different set of tables.
    You may be able to do it in a single query like:
    UPDATE (SELECT t1.cola, t1.colb, iota.new_val new_vala, iotb.new_val new_valb
            from tab1 t1, new_values_iot iota, new_values_iot iotb
            where t1.cola = iota.old_val and
                  t1.colb = iotb.old_val)
    SET cola = new_vala,
        colb = new_valb
    However, given that you said there are some values in both ColA and ColB that do not require updating, that may not work: the join will fail on one of ColA or ColB if that value is not in the IOT, so you will not get all of the rows updated. If, and it is a big if, either both of ColA and ColB or neither of them need to be updated in a single row, it might work. So, looking at your original examples (ABC and DEF do not require updates but A1 does), if there could be a case where ColA = 'A1' and ColB = 'DEF', then you will have to do it in two updates per table.
    John
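    A minimal sketch of the mapping table John describes, together with the first of the two updates; NEW_VALUES_IOT, TAB1 and the column sizes are illustrative:
    -- Index-organized lookup of old -> new values; the PK on old_val keeps TAB1 key-preserved
    CREATE TABLE new_values_iot (
      old_val VARCHAR2(30) PRIMARY KEY,
      new_val VARCHAR2(30)
    ) ORGANIZATION INDEX;
    INSERT INTO new_values_iot VALUES ('A1',  'aa');
    INSERT INTO new_values_iot VALUES ('A12', 'aaa');
    INSERT INTO new_values_iot VALUES ('B11', 'bb');
    INSERT INTO new_values_iot VALUES ('B22', 'bbb');
    COMMIT;
    -- Updatable join view: only rows whose ColA appears in the lookup get touched
    UPDATE (SELECT t1.cola, iot.new_val
            FROM   tab1 t1, new_values_iot iot
            WHERE  t1.cola = iot.old_val)
    SET    cola = new_val;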

  • Datapump API: Import all tables in schema

    Hi,
    how can I import all tables using a wildcard in the datapump-api?
    Thanks in advance,
    tensai

    _tensai_ wrote:
    Thanks for the links, but I already know them...
    My problem is that I couldn't find an example which shows how to perform an import via the API which imports all tables, but nothing else.
    Can someone please help me with a code example?
    I'm not sure what you mean by "imports all tables, but nothing else". It could mean that you only want to import the tables, but not the data, and/or not the statistics, etc.
    Using the samples provided in the manuals:
    DECLARE
      ind NUMBER;              -- Loop index
      h1 NUMBER;               -- Data Pump job handle
      percent_done NUMBER;     -- Percentage of job complete
      job_state VARCHAR2(30);  -- To keep track of job state
      le ku$_LogEntry;         -- For WIP and error messages
      js ku$_JobStatus;        -- The job status from get_status
      jd ku$_JobDesc;          -- The job description from get_status
      sts ku$_Status;          -- The status object returned by get_status
      spos NUMBER;             -- String starting position
      slen NUMBER;             -- String length for output
    BEGIN
    -- Create a (user-named) Data Pump job to do a "schema" import
      h1 := DBMS_DATAPUMP.OPEN('IMPORT','SCHEMA',NULL,'EXAMPLE8');
    -- Specify the single dump file for the job (using the handle just returned)
    -- and directory object, which must already be defined and accessible
    -- to the user running this procedure. This is the dump file created by
    -- the export operation in the first example.
      DBMS_DATAPUMP.ADD_FILE(h1,'example1.dmp','DATA_PUMP_DIR');
    -- A metadata remap will map all schema objects from one schema to another.
      DBMS_DATAPUMP.METADATA_REMAP(h1,'REMAP_SCHEMA','RANDOLF','RANDOLF2');
    -- Include and exclude
      dbms_datapump.metadata_filter(h1,'INCLUDE_PATH_LIST','''TABLE''');
      dbms_datapump.metadata_filter(h1,'EXCLUDE_PATH_EXPR','LIKE ''TABLE/C%''');
      dbms_datapump.metadata_filter(h1,'EXCLUDE_PATH_EXPR','LIKE ''TABLE/F%''');
      dbms_datapump.metadata_filter(h1,'EXCLUDE_PATH_EXPR','LIKE ''TABLE/G%''');
      dbms_datapump.metadata_filter(h1,'EXCLUDE_PATH_EXPR','LIKE ''TABLE/I%''');
      dbms_datapump.metadata_filter(h1,'EXCLUDE_PATH_EXPR','LIKE ''TABLE/M%''');
      dbms_datapump.metadata_filter(h1,'EXCLUDE_PATH_EXPR','LIKE ''TABLE/P%''');
      dbms_datapump.metadata_filter(h1,'EXCLUDE_PATH_EXPR','LIKE ''TABLE/R%''');
      dbms_datapump.metadata_filter(h1,'EXCLUDE_PATH_EXPR','LIKE ''TABLE/TR%''');
      dbms_datapump.metadata_filter(h1,'EXCLUDE_PATH_EXPR','LIKE ''TABLE/STAT%''');
    -- no data please
      DBMS_DATAPUMP.DATA_FILTER(h1, 'INCLUDE_ROWS', 0);
    -- If a table already exists in the destination schema, skip it (leave
    -- the preexisting table alone). This is the default, but it does not hurt
    -- to specify it explicitly.
      DBMS_DATAPUMP.SET_PARAMETER(h1,'TABLE_EXISTS_ACTION','SKIP');
    -- Start the job. An exception is returned if something is not set up properly.
      DBMS_DATAPUMP.START_JOB(h1);
    -- The import job should now be running. In the following loop, the job is
    -- monitored until it completes. In the meantime, progress information is
    -- displayed. Note: this is identical to the export example.
    percent_done := 0;
      job_state := 'UNDEFINED';
      while (job_state != 'COMPLETED') and (job_state != 'STOPPED') loop
        dbms_datapump.get_status(h1,
               dbms_datapump.ku$_status_job_error +
               dbms_datapump.ku$_status_job_status +
               dbms_datapump.ku$_status_wip,-1,job_state,sts);
        js := sts.job_status;
    -- If the percentage done changed, display the new value.
         if js.percent_done != percent_done
        then
          dbms_output.put_line('*** Job percent done = ' ||
                               to_char(js.percent_done));
          percent_done := js.percent_done;
        end if;
    -- If any work-in-progress (WIP) or Error messages were received for the job,
    -- display them.
           if (bitand(sts.mask,dbms_datapump.ku$_status_wip) != 0)
        then
          le := sts.wip;
        else
          if (bitand(sts.mask,dbms_datapump.ku$_status_job_error) != 0)
          then
            le := sts.error;
          else
            le := null;
          end if;
        end if;
        if le is not null
        then
          ind := le.FIRST;
          while ind is not null loop
            dbms_output.put_line(le(ind).LogText);
            ind := le.NEXT(ind);
          end loop;
        end if;
      end loop;
    -- Indicate that the job finished and gracefully detach from it.
      dbms_output.put_line('Job has completed');
      dbms_output.put_line('Final job state = ' || job_state);
      dbms_datapump.detach(h1);
    exception
      when others then
        dbms_output.put_line('Exception in Data Pump job');
        dbms_datapump.get_status(h1,dbms_datapump.ku$_status_job_error,0,
                                  job_state,sts);
        if (bitand(sts.mask,dbms_datapump.ku$_status_job_error) != 0)
        then
          le := sts.error;
          if le is not null
          then
            ind := le.FIRST;
            while ind is not null loop
              spos := 1;
              slen := length(le(ind).LogText);
              if slen > 255
              then
                slen := 255;
              end if;
              while slen > 0 loop
                dbms_output.put_line(substr(le(ind).LogText,spos,slen));
                spos := spos + 255;
                slen := length(le(ind).LogText) + 1 - spos;
              end loop;
              ind := le.NEXT(ind);
            end loop;
          end if;
        end if;
        -- dbms_datapump.stop_job(h1);
        dbms_datapump.detach(h1);
    END;
    /
    This should import nothing but the tables (excluding the data and the table statistics) from a schema export (including a remapping, shown here); you can play around with the EXCLUDE_PATH_EXPR expressions. Check the serveroutput generated for possible values to use in EXCLUDE_PATH_EXPR.
    Use the DBMS_DATAPUMP.DATA_FILTER procedure if you want to exclude the data.
    For more samples, refer to the documentation:
    http://download.oracle.com/docs/cd/B28359_01/server.111/b28319/dp_api.htm#i1006925
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • Avoid trigger on all tables in schema

    I need to set a CREATE timestamp column to the database server timestamp (not the session timestamp) in all tables in a schema, for both create and update. Other than creating a trigger on all the tables in the schema, is there a less tedious way to do that?
    Similarly, I need to set columns such as CREATE_USER and LAST_UPDATE_USER.
    Thanks in anticipation.

    You can easily generate the DDL to add the new columns.
    As far as populating the columns goes, your choices are either to use before insert and update triggers on each table to populate the columns, or to have the application provide the necessary information.
    The basic trigger logic would be pretty well the same for all the tables so writing a little SQL or PL/SQL to generate the trigger code should be pretty straight forward.
    Depending on your application, such as web based with only one Oracle user, you may need to obtain the real user via dbms_application_info set by the application server based logic.
    HTH -- Mark D Powell --
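    A sketch of the kind of generator Mark describes, assuming every table already has CREATE_DATE, UPDATE_DATE, CREATE_USER and LAST_UPDATE_USER columns; the trigger name prefix is illustrative and untested (watch the 30-character identifier limit on long table names):
    -- Generate one before insert/update trigger per table that has the audit columns
    BEGIN
      FOR t IN (SELECT table_name FROM user_tab_columns WHERE column_name = 'CREATE_USER') LOOP
        EXECUTE IMMEDIATE
          'CREATE OR REPLACE TRIGGER trg_aud_' || t.table_name || CHR(10) ||
          'BEFORE INSERT OR UPDATE ON ' || t.table_name || CHR(10) ||
          'FOR EACH ROW' || CHR(10) ||
          'BEGIN' || CHR(10) ||
          '  IF INSERTING THEN' || CHR(10) ||
          '    :new.create_date := SYSTIMESTAMP;' || CHR(10) ||
          '    :new.create_user := USER;' || CHR(10) ||
          '  END IF;' || CHR(10) ||
          '  :new.update_date      := SYSTIMESTAMP;' || CHR(10) ||
          '  :new.last_update_user := USER;' || CHR(10) ||
          'END;';
      END LOOP;
    END;
    /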

  • Truncate all tables in schema

    Hi,
    I want to truncate all tables (300 tables) in a schema. There are some enabled constraints in this schema. I want to keep all the objects attached to the tables (like indexes, constraints, etc.). I’d like to create a procedure to do this job. What's the safe and right way to do this?
    I am thinking as follows:
    1- Retrieve all user constraints name using user_constrants view and then disable them.
    2- Retrieve all user tables name using user_tables view and then Truncate them.
    3- Enable all constraints that were disabled in step 1.
    Is this a safe and right way?
    Let me know your thought please. Thanks.

    The table was probably created using double quotes.
    declare
       v_new_tab_name varchar2(35);
    begin
       for c1 in (select table_name from user_tables) loop
          begin
             execute immediate ('truncate table '||c1.table_name);
          exception
             when others then
                -- retry with the name in double quotes (mixed-case / quoted identifiers)
                v_new_tab_name := '"' || c1.table_name || '"';
                execute immediate ('truncate table '||v_new_tab_name);
          end;
       end loop;
    end;
    /
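    A sketch of the disable / truncate / re-enable sequence from the question itself (untested; it is the referential constraints that make TRUNCATE fail, so only constraint_type = 'R' is touched):
    BEGIN
      -- 1. Disable foreign key constraints first
      FOR c IN (SELECT table_name, constraint_name FROM user_constraints WHERE constraint_type = 'R') LOOP
        EXECUTE IMMEDIATE 'ALTER TABLE ' || c.table_name || ' DISABLE CONSTRAINT ' || c.constraint_name;
      END LOOP;
      -- 2. Truncate every table in the schema
      FOR t IN (SELECT table_name FROM user_tables) LOOP
        EXECUTE IMMEDIATE 'TRUNCATE TABLE ' || t.table_name;
      END LOOP;
      -- 3. Re-enable the constraints disabled in step 1
      FOR c IN (SELECT table_name, constraint_name FROM user_constraints WHERE constraint_type = 'R') LOOP
        EXECUTE IMMEDIATE 'ALTER TABLE ' || c.table_name || ' ENABLE CONSTRAINT ' || c.constraint_name;
      END LOOP;
    END;
    /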

  • Tools to list all table-field used by a program

    Hello everyone
    Is there any available tool providing the list of fields used by a program?
    There are several tools to list the tables used by a program but I need to have a list of the fields of those tables.
    Here is a scenario.
    DATA: lv_matnr TYPE matnr.
    SELECT SINGLE matnr INTO lv_matnr FROM mara WHERE mtart = 'FERT'.
    WRITE:/ lv_matnr.
    The list should contain the following information:
    MARA-MATNR
    MARA-MTART
    Because both fields are used by the program. None of the other fields of the table mara should be listed.
    Regards
    dstj


  • List All Tables in a Microsoft Access Database

    Hello,
    I am writing Java code. This code analyzes a Microsoft Access database file and then prints a report about it.
    The connectivity between Java and the database is fine, but my problem is that I want to display ALL table names in the database:
    For example: if the database is saved in a file Data.mdb and contains the tables Customer, Employee and Sales, I want a way to get those names! I tried to execute the query "Desc Data" but it gave me an SQL error saying that only the Select, Insert, Delete and Update keywords are allowed.
    Any HELP?

    You can do it the following way, using your Connection object (say, connection):
    DatabaseMetaData dm = connection.getMetaData();
    ResultSet rs = dm.getCatalogs();
    String cat = null;
    if (rs.next())
        cat = rs.getString("TABLE_CAT");
    String[] types = {"TABLE"};   // restrict to base tables
    rs = dm.getTables(cat, null, null, types);
    while (rs.next())
        System.out.println("Table: " + rs.getString("TABLE_NAME"));
    kind regards
    Oliver

  • Export all tables in schema using exp utility

    I need to export all the tables in a schema based on a where clause, how can I do this without having to identify all the tables in the tables= parameter?

    You can get all the tables by doing a user-level export, i.e.
    exp scott/tiger@TNS_name owner=scott
    will export all Scott's tables. If you need to export only some of the tables owned by a particular user, you're stuck giving an explicit list until 10g.
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC
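    One low-tech workaround for building that explicit list, given the limitation Justin mentions, is to let SQL*Plus generate the TABLES= entries for a parameter file instead of typing them by hand (a sketch only):
    -- Spool this output into the exp parameter file and trim the final comma
    SET PAGESIZE 0 FEEDBACK OFF
    SELECT table_name || ','
    FROM   user_tables
    ORDER  BY table_name;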

  • Select from all tables in schema

    hi, I am trying to select x, y from tables in a schema, but I am getting a "missing expression" error; working with Oracle 11g.
    declare
    v_sql varchar2(4000);
    v_x number;
    v_y number;
    v_n number;
    begin
    for rec in (select table_name as table_name from all_tables where table_name like '%AM_%' ORDER BY 1) loop
    v_sql := 'select a.idnumber, t.x, t.y, table(sdo_util.getvertices(a.geometry)) t FROM '||rec.table_name ||' a';
    EXECUTE IMMEDIATE v_sql INTO v_n, v_x, v_y;
    dbms_output.put_line(v_n||v_x||v_y);
    end loop;
    end;

    hi two rows, with idnum and geometry columns...
    IDNUM
    GEOMETRY(SDO_GTYPE, SDO_SRID, SDO_POINT(X, Y, Z), SDO_ELEM_INFO, SDO_ORDINATES)
    GD8
    SDO_GEOMETRY(2003, 8307, NULL, SDO_ELEM_INFO_ARRAY(1, 1003, 1), SDO_ORDINATE_ARR
    AY(-.48230432, 51.4645609, -.47600566, 51.464582, -.47206108, 51.4645953, -.4654
    6537, 51.4646174, -.46423724, 51.4646216, -.45892656, 51.4646394, -.45671873, 51
    .4646468, -.45007509, 51.4646691, -.4487052, 51.4646737, -.44809122, 51.4646758,
    -.44748667, 51.4646778, -.44118568, 51.4646989, -.44038184, 51.4647016, -.43534
    624, 51.4647186, -.43415307, 51.4647226, -.43413338, 51.4647226, -.43410223, 51.
    4647227, -.43408667, 51.4647228, -.43408688, 51.4648537, -.43408691, 51.4648694,
    -.43408703, 51.4649432, -.43408705, 51.4649579, -.43408717, 51.4650319, -.4340872,
    51.4650481, -.43408741, 51.465177, -.4341061, 51.4651769, -.43413411, 51.465
    1768, -.43415863, 51.4651767, -.43493934, 51.4651741, -.43724392, 51.4651664, -.
    4381469, 51.4651633, -.43876878, 51.4651612, -.44038073, 51.4651558, -.4409811,
    51.4651538, -.44732658, 51.4651325, -.44759329, 51.4651316, -.44870078, 51.46512
    79, -.45213755, 51.4651164, -.45482423, 51.4651073, -.45795448, 51.4650968, -.46
    041684, 51.4650886, -.46194096, 51.4650834, -.46348669, 51.4650783, -.46492913,
    51.4650734, -.46744722, 51.465065, -.47410127, 51.4650426, -.47616935, 51.465035
    7, -.48152654, 51.4650177, -.48230505, 51.4650151, -.48235217, 51.4650149, -.482
    35215, 51.4650018, -.48235145, 51.4645722, -.48235143, 51.4645607, -.48230432, 51.4645609))
    IDNUM
    GEOMETRY(SDO_GTYPE, SDO_SRID, SDO_POINT(X, Y, Z), SDO_ELEM_INFO, SDO_ORDINATES)
    GD4
    SDO_GEOMETRY(2003, 8307, NULL, SDO_ELEM_INFO_ARRAY(1, 1003, 1), SDO_ORDINATE_ARR
    AY(-.48497261, 51.4772685, -.48209892, 51.4772786, -.48097228, 51.4772825, -.474
    22571, 51.477306, -.47275437, 51.4773112, -.46898013, 51.4773243, -.46659847, 51
    .4773326, -.46535204, 51.477337, -.4624971, 51.4773469, -.46135428, 51.4773509,
    -.45879869, 51.4773598, -.45693251, 51.4773663, -.45570651, 51.4773706, -.453196
    56, 51.4773794, -.45006644, 51.4773903, -.44747333, 51.4773993, -.44686181, 51.4774014, -.44408948,
    51.4774111, -.4389499, 51.477429, -.43830451, 51.4774313, -.
    4346623, 51.477444, -.43398218, 51.4774464, -.43331459, 51.4774487, -.43326608,
    51.4774489, -.43326612, 51.4775874, -.43326612, 51.4776043, -.43326616, 51.47774
    86, -.43326616, 51.4777641, -.43326619, 51.4779031, -.4333147, 51.4779029, -.472
    655, 51.4777657, -.47413516, 51.4777605, -.48497272, 51.4777227, -.48502579, 51.
    4777226, -.48502579, 51.4777106, -.48502568, 51.4772809, -.48502568, 51.4772684,
    -.48497261, 51.4772685))
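    For the "missing expression" error itself: TABLE(SDO_UTIL.GETVERTICES(...)) has to appear in the FROM clause, not the select list, and EXECUTE IMMEDIATE ... INTO only handles a single row, so a ref cursor loop fits better. A sketch, assuming each table has IDNUM (a string, as in the sample rows above) and GEOMETRY columns, and using user_tables so the names resolve without an owner prefix:
    declare
      type rc is ref cursor;
      c        rc;
      v_sql    varchar2(4000);
      v_idnum  varchar2(100);
      v_x      number;
      v_y      number;
    begin
      for rec in (select table_name from user_tables where table_name like '%AM_%' order by 1) loop
        v_sql := 'select a.idnum, t.x, t.y from '||rec.table_name||
                 ' a, table(sdo_util.getvertices(a.geometry)) t';
        open c for v_sql;
        loop
          fetch c into v_idnum, v_x, v_y;
          exit when c%notfound;
          dbms_output.put_line(rec.table_name||': '||v_idnum||' '||v_x||' '||v_y);
        end loop;
        close c;
      end loop;
    end;
    /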

  • How to list all tables/stored procedures used by the report

    All the reports I create get their data from stored procedure(s). Is there a way to obtain a listing of all the stored procedures without having to open report by report and check under Database > Set Datasource Location > Properties > Table Name?
    Finding this info would be extremely valuable, as it would help me judge the impact of any changes I might be considering to one or more of the stored procedures.
    So far I have maintained a manual listing, but it is not up to date or reliable. I would rather get an updated listing every time I want to change/drop a stored procedure.
    Thanks so much for your help.
    Rick

    Dell, can you be a little bit more specific about the SDK solution? I could ask one of the developers to help me, but I need to gather more details.
    I took a look at .rpt Inspector Pro but it does not do what I need. All I need is the listing of all the database tables (in my case stored procs) used in my reports. No need to replace or change anything. I need to scan the directory where I have all the reports for the different applications and get the report names and the tables/stored procs used. I can export the text file to Excel and that's all.
