Best Practices - Multiple Tables

Post Author: lpangelrob
CA Forum: Data Connectivity and SQL
Hmm, I like this forum much better. Anyway... we're a futures company, and SQL Server 2k5 is set up so that there are a couple of ways to get the previous trade date and put it into the report. I want the previous trade date in the report, but I generally can never link the date table to the rest of the report's data, so I get the warning "Your current link configuration contains multiple starting points. Please be advised that this is generally not supported."
Six reports in, it seems as though this creates some problems: when I try to plug in the date from this second starting point, it continuously comes up as blank.
Are there any other good ways to plug in arbitrary distinct data (e.g. "select max(process_date) from date_table where process_flag = 'Y'") besides adding commands through the Database Expert?

Post Author: PeterLiebich
CA Forum: Data Connectivity and SQL
Create a subreport with the second table - there is no need to link it to the main table.
If you only want to get a single piece of information (e.g. max(process_date)), then stick the subreport in a section of the report header and suppress its printing. In the subreport you should set a shared variable.
In the main report, create a formula which references the shared variable. Note: you will need to set the evaluation time of this formula to "WhilePrintingRecords" to ensure that it evaluates after the subreport.
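For reference, the subreport's data source can simply be the scalar query quoted in the question, added as a Command; a minimal sketch reusing the table and column names from the post (adjust to your schema):

-- single-row result, so the subreport needs no links back to the main report
select max(process_date) as prev_trade_date
from   date_table
where  process_flag = 'Y'

The subreport then assigns that value to the shared variable, and the main-report formula (with its evaluation time forced to WhilePrintingRecords) just reads it back.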

Similar Messages

  • Best Practices - Removing table fragmentations.

    What is the best practice for removing fragmentation in tables? Should this process be run every week, every month, or every day?

    Hi Chunm,
    > What is the best practice for removing the fragmentation in the tables?
    I monitor chained rows (table fetch continued rows), reorg with dbms_redefinition (or move to a larger blocksize), and ALWAYS adjust PCTFREE to prevent future fragmentation. Read these:
    http://www.dba-oracle.com/t_identify_chained_rows.htm
    http://www.dba-oracle.com/oracle_tips_fetch_cont_rws.htm
    Hope this helps. . .
    Don Burleson
    Oracle Press author
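    As a rough sketch of the chained-row check mentioned above (standard Oracle syntax; the schema and table names are placeholders):
        -- create the CHAINED_ROWS target table once, using the script shipped with Oracle
        @?/rdbms/admin/utlchain.sql
        -- list the chained/migrated rows of a suspect table
        ANALYZE TABLE my_schema.my_table LIST CHAINED ROWS INTO chained_rows;
        -- see how bad it is
        SELECT count(*) FROM chained_rows WHERE table_name = 'MY_TABLE';
    Tables that show a significant count are the candidates for a reorg and a larger PCTFREE.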

  • Best Practice: Aggregate Tables for Aggregate Awareness

    Hello,
    the question I'm facing now is how to improve performance through aggregate awareness. How do I plan the aggregate tables in order to get the best coverage for the reporting requirements? How do I get maximum use of the aggregate tables and minimum use of the detail tables?
    Is there a method / technique to do that the best way?
    Until now I did it this way:
    1. I generated a kind of select * from DetailTable (e.g. 5 million rows).
    2. Progressively I removed one or more dimensions from the statement and got, for example, only 3 million rows. The removed dimension is something like the postal code of customers, so that I only get information about city and country; or removing day from the date, so that I only have info about the month, etc.
    3. I removed further dimensions (fewer rows, e.g. 1 million).
    4. At the end I have 5 or 6 aggregate tables as candidates for use in aggregate awareness.
    Is that OK like this? Could you please share your experiences?
    Edited by: MM on Sep 9, 2010 3:57 PM

    What was already suggested is, of course, valid. You need to find out what questions your business users are asking else you run the risk of providing an answer to an unasked question. However, there are a few other tips I would like to share.
    First, if you can't get an order of magnitude reduction, it's not worth the effort. Going from 5 million to 3 million rows is not good enough to consider. Going from 5 million to 500,000 rows is, as the resulting aggregate is smaller by a factor of 10. And then going from 500,000 to 50,000 could be even better.
    Second, you might consider what options your database has instead of using Aggregate Aware. For example, Teradata offers the AJI or Aggregate Join Index feature which - to be honest - is far more flexible than using Aggregate Aware in the universe. Using your database means your aggregate tables are usable by any tool that hits the database, even from tools that do not hit the universe such as Crystal.
    Third, and this goes back to the first point in a way, you have to be very careful in defining your aggregate path. The Aggregate Aware feature of the semantic layer is powerful, but it does have limitations. For example, you have to build a linear reduction in your path. That probably doesn't make sense (and I have a draft blog post that I have to get finished one day), but here it is in a nutshell. Suppose you have five dimensions on your fact table; call them A, B, C, D, and E. You want to build an aggregate table at the A, B level. That's fine. You can also build an aggregate at the A, C level. But you can't use both of them in the same @Aggregate_Aware() object definition. You have to build a pyramid, like this:
    A
    A, B
    A, B, C
    A, B, C, D
    A, B, C, D, E
    The following, by contrast, does not work, because items disappear and then reappear further down the path:
    A
    A, B
    B, C
    A, C
    A, D, E
    A, B, C, D, E
    With Teradata AJI definitions you can do this second case. It does not work in the universe, because you need one clear "best" answer at all times. In the second case, where is the single "best place" to retrieve attribute B? There isn't one, and the aggregate aware process fails.
    Hope these ideas help.
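    To make the pyramid concrete, each level is just a further GROUP BY reduction of the detail fact; a hedged sketch (table and column names are illustrative only, not from the original post):
        -- first aggregate level of the pyramid: keep A and B, drop C, D and E
        CREATE TABLE fact_agg_a_b AS
        SELECT a_key,
               b_key,
               SUM(sales_amount) AS sales_amount,
               SUM(quantity)     AS quantity
        FROM   fact_detail
        GROUP  BY a_key, b_key;
    An A, C table would be built the same way, but, as explained above, it could not be combined with the A, B table in the same aggregate path.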

  • Best Practices for Table creation

    Is it a good practice to have a primary key and/or unique key identifier for every table created for an application, even if for some reason the table is only being used as a temporary or interface table? Thanks.

    Hi,
    > only being used as temporary
    For temporary tables, look up "CREATE GLOBAL TEMPORARY TABLE" in TFM.
    As far as I'm concerned, a table without a PK suffers from an error in its design. PKs are one of Merise's foundations.
    Sure there are exceptions, but if more than x% of your tables have no PK, there's a problem.
    Regards,
    Yoann.
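    A minimal sketch of the CREATE GLOBAL TEMPORARY TABLE suggestion above (illustrative names; Oracle syntax):
        -- session-private scratch table: the definition is permanent, the rows are not
        CREATE GLOBAL TEMPORARY TABLE interface_stage (
            id       NUMBER,
            payload  VARCHAR2(4000)
        ) ON COMMIT DELETE ROWS;   -- or ON COMMIT PRESERVE ROWS to keep rows until session end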

  • Best Practice - Analyze table with very large partitions

    We have a table that contains 100 partitions with about 20m rows in each. Right now the analyze is taking about 1 hour per partition. The table is used for reporting and will have a nightly load of the previous day's data.
    What would be the best way to analyze this table, besides using a low value for ESTIMATE and a low GRANULARITY?
    Thank You.

    > Are you suggesting that the table is so big, it's not feasible to analyze anymore?
    I'm suggesting it's not necessary. I think it's highly unlikely that a nightly load is going to change the stats in any meaningful way, unless you are loading millions of rows. The law of diminishing returns has kicked in.
    Remember, the standard advice from Oracle is to gather statistics once and then only bother refreshing those stats when we need to. From Metalink note #44961.1:
    "Given the 'best' plan is unlikely to change, frequent gathering statistics has no benefit. It does incur costs though."
    What you might find useful is to export the stats from your last run before you do the new run (you should do this anyway). Then after the next stats refresh import both sets of stats into dummy schemas and compare them. If the difference is significant then you ought to keep analysing (especially if yours is a DSS or warehousing database). But if they are broadly the same then maybe it's time to stop.
    Cheers, APC
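    A hedged sketch of the export-and-compare idea with DBMS_STATS (the owner, table and stat-table names are placeholders):
        -- keep a copy of the current statistics before the next gather
        BEGIN
          DBMS_STATS.CREATE_STAT_TABLE(ownname => 'REPORTS', stattab => 'STATS_BEFORE');
          DBMS_STATS.EXPORT_TABLE_STATS(ownname => 'REPORTS',
                                        tabname => 'BIG_PART_TABLE',
                                        stattab => 'STATS_BEFORE');
        END;
        /
        -- after the next gather, export again into a second stat table and compare;
        -- if the two sets are broadly the same, stop re-analysing every night.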

  • Best Practice Table Creation for Multiple Customers, Weekly/Monthly Sales Data in Multiple Fields

    We have a homegrown Access database, originally designed in 2000, that now has a SQL back-end. The database has not yet been converted to a higher format such as Access 2007, since at least 2 users are still on Access 2003. It is fine if suggestions will only work with Access 2007 or higher.
    I'm trying to determine if our database is the best place to do this or if we should look at another solution. We have thousands of products, each with a single identifier. There are customers who provide us regular sales reporting for what was sold in a given time period -- weekly, monthly, quarterly and yearly time periods being most important. This reporting may or may not include all of our product identifiers. The reporting is typically based on calendar-defined timing, although we have some customers who have their own calendars which may not align to a calendar month or calendar year, so recording the time period can be helpful.
    Each customer's sales report can contain anything from 1,000-20,000 rows of products for each report. Each customer report is different and they typically have between 4-30 columns of data for each product; headers are consistently named. The product identifiers included may vary by customer and even within each report for a customer; the data in the product identifier row changes each week. Headers include a wide variety of data such as overall on hand, overall on order, unsellable on hand, returns, on hand information for each location or customer grouping, sell-through units information for each location or customer grouping for that given time period, sell-through dollars information for each location or customer grouping for that given time period, sell-through units information for each location or customer grouping for a cumulative time period (same thing for dollars), warehouse on hands, warehouse on orders, the customer's unique categorization of our product in their system, the customer's current status code for that product, and so on.
    Currently all of this data is stored in a multitude of Excel spreadsheets (by customer, division and time period). Due to the overall volume of information and the number of Excel sheets, cross-referencing can take considerable time. Is it possible to set up tables for our largest customers so I can create queries and pivot tables to more quickly look at sales-related information by category, by specific product(s), by partner, by specific products or categories across partners, by specific products or categories across specific weeks/months/years, etc.? We do have a separate product table, so only the product identifier or a junction table may be needed to pull in additional information from the product table with queries. We do need to maintain the sales reporting information indefinitely.
    I welcome any suggestions, best practices or resources (books, web, etc.).
    Many thanks!

    > Currently all of this data is stored in a multitude of Excel spreadsheets (by customer, division and time period). Due to overall volume of information and number of Excel sheets, cross-referencing can take considerable time. Is it possible to set up tables .....
    I assume you want to migrate to SQL Server.
    Your best course of action is to hire a professional database designer for a short period like a month.
    Once you have the database, you need to hire a professional DBA to move your current data from Access & Excel into the new SQL Server database.
    Finally you have to hire an SSRS professional to design reports for your company.
    It is also beneficial if the above professionals train your staff while building the new RDBMS.
    Certain senior SQL Server professionals may be able to do all 3 functions in one person: db design, database administration/ETL & business intelligence development (reports).
    Kalman Toth Database & OLAP Architect
    SELECT Video Tutorials 4 Hours
    New Book / Kindle: Exam 70-461 Bootcamp: Querying Microsoft SQL Server 2012
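    Whoever ends up doing the design, the question above ("is it possible to set up tables...") usually leads to a single narrow sales-fact table keyed by product, customer and period rather than one sheet per customer and time period. Purely as a hypothetical illustration (every name below is invented, not from the thread):
        CREATE TABLE SalesFact (
            ProductID     int           NOT NULL,  -- joins to the existing product table
            CustomerID    int           NOT NULL,
            PeriodStart   datetime      NOT NULL,  -- supports non-calendar customer periods
            PeriodEnd     datetime      NOT NULL,
            MeasureName   varchar(50)   NOT NULL,  -- e.g. 'OnHand', 'SellThroughUnits'
            MeasureValue  decimal(18,2) NULL,
            CONSTRAINT PK_SalesFact PRIMARY KEY
                (ProductID, CustomerID, PeriodStart, MeasureName)
        );
    Queries and pivot tables by product, customer, category or week/month/year then come from this one table joined to the product table.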

  • Multi layer table view/navigation controller hierarchy best practice

    Hi,
    I am new to iPad/iPhone development and wondering what the best practice for multiple layers of table views is? I understand the principle of a navigation controller providing the framework for moving up and down a list but have not yet quite got my head around if you should have one navigation controller for the whole tree or several navigation controllers.
    In my app I need to have the following:
    Main view -> window view showing some interactive elements (picker, buttons etc.)
    Setup view -> Hierarchy managed by nav controller/table views
    The setup view needs to manage the following hierarchy...
    - Level A:
      - Global app variables (one table view)
      - Level B Items (table view showing the list of items belonging to Level B)
        - Level B Item 1 (table view showing the list of items at Level C belonging to Level B Item 1)
          - Level C Item 1 (table view showing the list of items at Level D belonging to Level C Item 1)
            - Level D Item 1 (table view showing the list of items at Level E belonging to Level D Item 1)
              - Level E item (table view for the properties of an item at Level E)
            - Level D Item n
          - Level C Item n
        - Level B Item n
    Each level in this has some properties and then a list of child items.
    What would be the best way of structuring this? I would assume that creating a class that extends a view controller for each level is a given, but what about the control of the navigation? Should this be handled by one navigation controller or one per level? I think I know the right answer but have not seen a neat way of implementing it.
    I think I am also best off having each level in its own xib but, once again, am not 100% sure that this is the best design pattern.
    Many thanks in advance for any help/pointers!
    Cheers
    jez

    Hi Julian,
    I have struggled with the same questions you are addressing. On a previous project we tried to model based on packages, but during the course of the project we encountered some problems that grew over time. The main problems were:
    1. It is hard to enforce rules on package assignments
    2. With multiple developers on the project and limited time we didn't have time to review package assignment
    3. Developers would click away warnings that an object was already part of another project and just continue
    4. After go-live the maintenance partner didn't care.
    So, my experience is that it is a nice feature, but only from a high-level design point of view. In real life it will get messy and, above all, it doesn't add much value to the development. On my new assignment we are just working with packages based on functional area, and that works just fine.
    Roy

  • Multiple IPs and Outbound IP on 2008, best practice suggestion...

    Hello,
    I need a suggestion on an issue;
    I have a Windows 2008 R2 SP1 Std. Ed. I have 3 IPs for that server, each of them uses the same gateway. By design the IP which is closest to the gateway is the default outbound IP on W2K8_R2_SP1_SE.
    I want to choose any other IP out of other 2 assigned IPs as default outbound one.
    example:
    GATEWAY: 10.0.0.1
    IP1: 10.0.0.2 (default outbound by design)
    IP2: 10.0.0.3 (the one I want it to be default outbound)
    IP3: 10.0.0.4 (not important)
    There are basically 2 choices doable for me right now. Can you please take a moment and suggest one of the solutions below, or state if you know the best practice for such a case? Thank you very much in advance =)
    First Solution:
    apply this command: Netsh int ipv4 add address 12 10.0.0.1 255.x.x.x skipassource=true
    then apply these 3 hotfixes:
    IP addresses are still registered on the DNS servers even if the IP addresses are not used for outgoing traffic on a computer that is running Windows 7 or Windows Server 2008 R2
    http://support.microsoft.com/kb/2386184
    The "skipassource" flag of IP addresses is cleared after you use the GUI to change IP settings of a network adapter in Windows 7 or in Windows Server 2008 R2
    http://support.microsoft.com/kb/2554859
    FIX: IIS Manager does not display IP addresses that are assigned to the network adapter together with the skipassource flag
    http://support.microsoft.com/kb/2551090
    Second Solution:
    Simply create 2 interfaces. Use the first one with the IP that I want as the outbound default, and dump all other IPs onto the second interface. Both interfaces will have the same gateway, but Windows will use the first one as the outbound default.

    I believe you want to set the metric on the interfaces.
    You can do this by altering your routing table with route.exe, or alternatively you can change the interface metric in the TCP/IP advanced properties for your network adapter (via Control Panel). By default it uses an automatic metric (i.e. Windows chooses which interface to use).
    For your reference (and the reference of anyone else facing a similar challenge), the metric is a weighted value Windows will use to determine which interface to use for a particular endpoint. Here is the definition from the route.exe documentation:
    > metric <Metric> : Specifies an integer cost metric (ranging from 1 to 9999) for the route, which is used when choosing among multiple routes in the routing table that most closely match the destination address of a packet being forwarded. The route with the lowest metric is chosen. The metric can reflect the number of hops, the speed of the path, path reliability, path throughput, or administrative properties.
    Jason Warren
    @jaspnwarren
    jasonwarren.ca
    habaneroconsulting.com/Insights

  • Logical level in Fact tables - best practice

    Hi all,
    I am currently working on a complex OBIEE project/solution where I am going straight to the production tables, so the fact (and dimension) tables are pretty complex, since I am using multiple sources in the logical tables to increase performance. Anyway, what I often struggle with is the Logical Levels (in the Content tab), where the level of each dimension is to be set. In a star schema (one-to-many) this is pretty straightforward and easy to set up, but when the Business Model (and physical model) gets more complex I sometimes struggle with the aggregates - getting them to work/appear with different dimensions. (Using the menu "More" - "Get levels" does not always give the best solution... far from it.) I have some combinations of left and right outer joins as well, making it even more complicated for the BI server.
    For instance - I have about 10-12 different dimensions - should all of them always be connected to each fact table? Either on Detail or Total level. I can see the use of the logical levels when using aggregate fact tables (on quarter, month etc.), but is it better just to skip the logical level setup when no aggregate tables are used? Sometimes it seems like that is the easiest approach...
    Does anyone have a best practice concerning this issue? I have googled for this but I haven't found anything good yet. Any ideas/articles are highly appreciated.

    Hi User,
    > For instance - I have about 10-12 different dimensions - should all of them always be connected to each fact table? Either on Detail or Total level.
    It is not necessary to connect all dimensions; it depends on the report you are creating. But as a best practice, we should maintain them all at the Detail level when you define the join conditions in the physical layer.
    For example, for the sales table, if you want to report at the ProductDimension.Productname level then you should use the Detail level, otherwise the Total level (at the Product, Employee level).
    Get Levels. (Available only for fact tables) Changes aggregation content. If joins do not exist between fact table sources and dimension table sources (for example, if the same physical table is in both sources), the aggregation content determined by the administration tool will not include the aggregation content of this dimension.
    Source: Admin Guide (Get Levels definition)
    thanks,
    Saichand.v

  • Temp Tables - Best Practice

    Hello,
    I have a customer who uses temp tables all over their application.
    This customer is a novice and the app has its roots in VB6. We are converting it to .net
    I would really like to know the best practice for using temp tables.
    I have seen code like this in the app.
    CR2.Database.Tables.Item(1).Location = "tempdb.dbo.[##Scott_xwPaySheetDtlForN]"
    That seems to work, though I do not know why the full tempdb.dbo.[## prefix is required.
    However, when I use this in the new report I am doing, I get runtime errors.
    I also tried this:
    CR2.Database.Tables.Item(1).Location = "##Scott_xwPaySheetDtlForN"
    I did not get errors, but I was returned data I did not expect.
    Before I delve into different ways to do this, I could use some help with a good pattern to use.
    thanks

    Hi Scott,
    Are you using the RDC still? It's not clear but looks like it.
    We had an API that could piggyback the HDBC handle in the RDC (craxdrt.dll), but that API is no longer available in .NET. Also, the RDC is not supported in .NET, since .NET uses the framework and the RDC is COM.
    The workaround is to copy the temp data into a DataSet and then set the location to the DataSet. There is no way that I know of to get to the tempdb from .NET. The reason is that there is no CR API to set the owner of the table to the user; MS SQL Server locks the temp table so that that user has exclusive rights on it.
    Thank you
    Don
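    On the naming question from the original post: a ## table is a global temporary table, and global temp tables physically live in tempdb, which is why the old code spells out the full path. A quick hedged sketch (hypothetical table name):
        -- a global temp table (##) is visible to every session and is created in tempdb
        CREATE TABLE ##PaySheetDtl (EmpID int, Amount money);
        -- both of these resolve to the same object
        SELECT * FROM ##PaySheetDtl;
        SELECT * FROM tempdb.dbo.##PaySheetDtl;
    That said, as noted above, the supported route from .NET is to load the data into a DataSet and point the report at that.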

  • Best Practice to fetch SQL Server data and Insert into Oracle Tables

    Hello,
    I want to read SQL Server data every half an hour and write it into Oracle tables (in two different databases). What is the best practice for doing this?
    We do not have any database dblinks from oracle to sqlserver and vice versa.
    Any help is highly appreciated.
    Thanks

    Well, that's easy:
    use a TimerTask to do the following every half an hour:
    - open a connection to sql server
    - open two connections to the oracle databases
    - for each row you read from the sql server, do the inserts into the oracle databases
    - commit
    - close all connections

  • Best practice for a same query against 2 different tables

    Hello all,
    I want to extract info about tablespace storage, both permanent and temporary. For that I use 2 different cursors that do exactly the same query but against different tables (dba_data_files and dba_temp_files).
    CURSOR permanentTBSStorageInfo (tablespaceName VARCHAR2) IS
    SELECT file_name, bytes, autoextensible, maxbytes, increment_by
    FROM dba_data_files
    WHERE tablespace_name = tablespaceName;
    CURSOR temporaryTBSStorageInfo (tablespaceName VARCHAR2) IS
    SELECT file_name, bytes, autoextensible, maxbytes, increment_by
    FROM dba_temp_files
    WHERE tablespace_name = tablespaceName;
    First, I'm bothered that I have to use 2 cursors to execute the same query against 2 different tables. Is there no other way around this?
    Then I fetch the results of these cursors in 2 different loops because I didn't find a way to call the cursors dynamically. I am looking for best practice here, knowing that I will do the same parsing against the results of the 2 cursors.
    Thank you,

    Hi
    Check whether the below query is helpful or not
    select fs.tablespace_name "Tablespace",
           fs.tempspace       "Temp MB",
           df.totalspace      "Total MB"
    from   (select tablespace_name,
                   round(sum(bytes) / 1048576) totalspace
            from   dba_data_files
            group  by tablespace_name) df,
           (select tablespace_name,
                   round(sum(bytes) / 1048576) tempspace
            from   dba_temp_files
            group  by tablespace_name) fs
    where  df.tablespace_name = fs.tablespace_name;
    Thanks
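    If the file-level columns from the original cursors are still needed, another option is a single UNION ALL cursor over both views, so only one loop and one piece of parsing code is required (a sketch using only the columns from the question):
        CURSOR tbsStorageInfo (tablespaceName VARCHAR2) IS
            SELECT 'PERMANENT' AS file_kind,
                   file_name, bytes, autoextensible, maxbytes, increment_by
            FROM   dba_data_files
            WHERE  tablespace_name = tablespaceName
            UNION ALL
            SELECT 'TEMPORARY' AS file_kind,
                   file_name, bytes, autoextensible, maxbytes, increment_by
            FROM   dba_temp_files
            WHERE  tablespace_name = tablespaceName;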

  • Best practice for migrating data tables- please comment.

    I have 5 new tables seeded with data that need to be promoted from a development to a production environment.
    Instead of the DBAs just using a tool to migrate the data they are insistent that I save and provide scripts for every single commit, in proper order, necessary to both build the table and insert the data from ground zero.
    I am very unaccustomed to this kind of environment and it seems much riskier for me to try and rebuild the objects from scratch when I already have a perfect, tested, ready model.
    They also require extensive documentation where every step is recorded in a document and use that for the deployment.
    I believe their rationale is they don't want to rely on backups but instead want to rely on a document that specifies each step to recreate.
    Please comment on your view of this practice. Thanks!

    >
    Please comment on your view of this practice. Thanks!
    >
    Sounds like the DBAs are using best practices to get the job done. Congratulations to them!
    >
    I have 5 new tables seeded with data that need to be promoted from a development to a production environment.
    Instead of the DBAs just using a tool to migrate the data they are insistent that I save and provide scripts for every single commit, in proper order, necessary to both build the table and insert the data from ground zero.
    >
    The process you describe is what I would expect, and require, in any well-run environment.
    >
    I am very unaccustomed to this kind of environment and it seems much riskier for me to try and rebuild the objects from scratch when I already have a perfect, tested, ready model.
    >
    Nobody cares if it is riskier for you. The production environment is sacred. Any and all risk to it must be reduced to a minimum at all costs. In my opinion a DBA should NEVER move ANYTHING from a development environment directly to a production environment. NEVER.
    Development environments are sandboxes. They are often not backed up. You or anyone else could easily modify tables or data with no controls in place. Anything done in a DEV environment is assumed to be incomplete, insecure, disposable and unvetted.
    If you are doing development and don't have scripts to rebuild your objects from scratch then you are doing it wrong. You should ALWAYS have your own backup copies of DDL in case anything happens (and it does) to the development environment. By 'have your own' I mean there should be copies in a version control system or central repository where your teammates can get their hands on them if you are not available.
    As for data - I agree with what others have said. Further - ALL data in a dev environment is assumed to be dev data and not production data. In all environments I have worked in ALL production data must be validated and approved by the business. That means every piece of data in lookup tables, fact tables, dimension tables, etc. Only computed data, such as might be in a data warehouse system generated by an ETL process might be exempt; but the process that creates that data is not exempt - that process and ultimately the data - must be signed off on by the business.
    And the business generally has no access to, or control of, a development environment. That means using a TEST or QA environment for the business users to test and validate.
    >
    They also require extensive documentation where every step is recorded in a document and use that for the deployment.
    I believe their rationale is they don't want to rely on backups but instead want to rely on a document that specifies each step to recreate.
    >
    Absolutely! That's how professional deployments are performed. Deployment documents are prepared and submitted for sign-off by each of the affected groups. Those groups can include security, DBA, business users, IT and even legal. The deployment documents always include recovery steps, so that if something goes wrong or the deployment can't proceed there is a documented procedure for how to restore the system to a valid working state.
    The deployments themselves that I participate in have representatives from each of those groups in the room or on a conference call as each step of the deployment is performed. Your 5 tables may be used by stored procedures, views or other code that has to be deployed as part of the same process. Each step of the deployment has to be performed in the correct order. If something goes wrong, the responsible party assists in the retry or recovery of their component.
    It is absolutely vital to have a known, secure, repeatable process for deployments. There are no shortcuts. I agree, for a simple 5 new table and small amount of data scenario it may seem like overkill.
    But, despite what you say it simply cannot be that easy for one simple reason. Adding 5 tables with data to a production system has no business impact or utility at all unless there is some code, process or application somewhere that accesses those tables and data. Your post didn't mention the part about what changes are being made to actually USE what you are adding.
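    For what it's worth, the "scripts for every single step" the DBAs are asking for are usually just plain, re-runnable SQL files kept in version control; a minimal hedged sketch of what one of the five might look like (hypothetical table and file names):
        -- 01_create_lookup_status.sql
        CREATE TABLE lookup_status (
            status_code  VARCHAR2(10)  NOT NULL,
            description  VARCHAR2(100),
            CONSTRAINT pk_lookup_status PRIMARY KEY (status_code)
        );
        INSERT INTO lookup_status (status_code, description) VALUES ('NEW',    'Newly created');
        INSERT INTO lookup_status (status_code, description) VALUES ('CLOSED', 'Closed');
        COMMIT;
    One such file per table, numbered in dependency order, is exactly what the deployment document then walks through.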

  • Best Practice for Designing Database Tables?

    Hi,
    I work at a company for tracking devices (GPS devices). Our SQL Server database is designed to have a table for each device we sell; currently there are 2,500 tables in our database, and they all have the same columns, differing only in table name. Each device sends about 4K records per day.
    Currently each table holds from 10K to 300K records.
    What is the best practice to design a database in this situation? 
    When accessing database from a C# application, which is better to use, direct SQL commands or views? 
    A detailed description of what is best to do in such a scenario would be great.
    Thanks in advance.
    Edit:
    Tables columns are:
    [MessageID]
          ,[MessageUnit]
          ,[MessageLong]
          ,[MessageLat]
          ,[MessageSpeed]
          ,[MessageTime]
          ,[MessageDate]
          ,[MessageHeading]
          ,[MessageSatNumber]
          ,[MessageInput]
          ,[MessageCreationDate]
          ,[MessageInput2]
          ,[MessageInput3]
          ,[MessageIO]

    Hello Louis, thank you so much for your informative post. I'll describe in detail the situations I have come across in my 9 months of work at the company (working as a software engineer, but I am planning to take over database maintenance, since no one is maintaining it right now and I cannot do anything else in the code to make it faster).
    At the end of every month our clients generate a report for the previous month for all their cars; some clients have 100+ cars, and some have few. This is when the real issues start: they are pulling their data from our server over the internet while 2,000 units are sending data to our server, and they keep getting read timeouts since SQL Server gives priority to the inserts and holds all select commands. I solved it temporarily in the code by using "Read Uncommitted" when I initialize a connection through C#.
    The other issue is that generating reports for a month or two takes a lot of time when selecting 100+ units. That's what I want to solve; the problem is the one who wrote the C# app used hard-coded SQL statements
    AND
    the company is refusing to upgrade from SQL Server 2003 and Windows Server 2003.
    Now, talking about reports: there are summary reports, stop reports, zone reports, etc. Most of them usually depend on at least MessageTime, MessageDate, MessageSpeed, MessageIO and MessageSatNumber.
    So from your post I conclude that for now I need to set up snapshots so that select statements don't get kicked out in favor of insert commands, but does SQL Server automatically select from the snapshots or do I have to tell it to do so?
    Other than proper indexing, what else do I need? Tom Phillips suggested table partitioning, but I don't think it is needed in my case since our database size is 78GB.
    When I run code analysis on the app, Visual Studio tells me I'd be better off using stored procedures and views rather than hard-coded select statements; what difference will this make in terms of performance?
    Thanks in advance. 
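    On the snapshot question specifically, and only as a hedged sketch (this assumes SQL Server 2005 or later, which may rule it out if the server really cannot be upgraded; the database name is a placeholder):
        -- readers stop blocking behind writers; existing SELECTs pick this up automatically
        ALTER DATABASE TrackingDb SET READ_COMMITTED_SNAPSHOT ON;
        -- alternative: full snapshot isolation, which each reporting session must opt into
        ALTER DATABASE TrackingDb SET ALLOW_SNAPSHOT_ISOLATION ON;
        -- then, per reporting connection:
        -- SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
    With READ_COMMITTED_SNAPSHOT the C# code does not need the "Read Uncommitted" workaround at all; with ALLOW_SNAPSHOT_ISOLATION it has to request the SNAPSHOT level explicitly.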

  • Best Practice : how 2 fetch tables, views, ... names and schema

    hi,
    I am looking for the best practice about getting the catalog of a database.
    I have seen that I can make some selects on system tables (or views) such as DBA_TABLES and DBA_VIEWS, or DBA_CATALOG, but is that the best way to grab this information?
    (I ask this question because it seems strange to me to get the table names using a simple select, but to get column info using a specialized function, OCIDescribeAny(); this does not look like a coherent API...)
    thanks for your advice
    cd

    Along the same lines, why use OCIDescribeAny() instead of doing an appropriate select on DBA_TAB_COLUMNS?
    cd
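    For what it's worth, both the table names and the column definitions can be pulled with plain selects from the dictionary views mentioned above, e.g.:
        -- tables and views owned by a given schema
        SELECT owner, table_name FROM dba_tables WHERE owner = :schema_name;
        SELECT owner, view_name  FROM dba_views  WHERE owner = :schema_name;
        -- column metadata for one table: the select-based alternative to OCIDescribeAny()
        SELECT column_name, data_type, data_length, data_precision, data_scale, nullable
        FROM   dba_tab_columns
        WHERE  owner = :schema_name
        AND    table_name = :table_name
        ORDER  BY column_id;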
